We propose mimium, a programming language for music that combines temporal-discrete control and signal processing in a single language. mimium has an intuitive imperative syntax and allows stateful functions to be defined and applied as unit generators in the same way as ordinary functions. Furthermore, its runtime performance is made equivalent to that of lower-level languages by compiling the code through the LLVM compiler infrastructure. By adding a minimal set of sound-oriented features to the design and implementation of a general-purpose functional language, mimium is expected to lower the learning cost for users, simplify the compiler implementation, and increase the self-extensibility of the language. In this paper, we present the basic language specification, the semantics for simple task scheduling and for stateful functions, and the compilation process.
mimium thus offers a combination of capabilities not achieved by existing languages. Suggested future work includes extending the compiler to combine task scheduling with the functional paradigm and introducing multi-stage computation for the parametric replication of stateful functions.
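To make the idea of a stateful function concrete, here is a minimal Python analogy (illustrative only; this is not mimium syntax, and the one-pole filter and its names are assumptions for the example): a unit generator is modeled as a function whose internal state persists across calls.

```python
# Hypothetical Python analogy of a stateful unit generator (not mimium
# syntax): a one-pole low-pass filter closed over its previous output.
def make_onepole(coeff):
    state = {"prev": 0.0}

    def onepole(x):
        # y[n] = (1 - coeff) * x[n] + coeff * y[n-1]
        y = (1.0 - coeff) * x + coeff * state["prev"]
        state["prev"] = y
        return y

    return onepole

lpf = make_onepole(0.9)
# Feeding a constant input shows the state evolving call by call.
samples = [lpf(1.0) for _ in range(4)]
```

In mimium itself, such state handling is built into the language, so a filter like this can be written and applied like any ordinary function.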
This article discusses the internal architecture of the MidifilePerformer application. This software allows users to follow a score described in the MIDI format at their own pace and with their own accentuation. MidifilePerformer thus allows a wide variety of styles and interpretations to be applied to the vast number of MIDI files found on the Internet.
We present here the algorithms that associate the commands issued by the performer, via a MIDI or alphanumeric keyboard, with the notes appearing in the score. We show that these algorithms define a notion of expressiveness that extends the possibilities of interpretation while maintaining the simplicity of the gesture.
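As a rough illustration of such an association (a greedy sketch assumed for exposition, not the actual MidifilePerformer algorithm), each performer key press can trigger the next group of simultaneous notes in the score, so that timing and accentuation come from the gesture while the pitches come from the MIDI file:

```python
# Hypothetical sketch: map each key press to the next chord in the score.
from itertools import groupby

def group_by_onset(score):
    """Group (onset_tick, pitch) events sharing an onset into chords."""
    ordered = sorted(score)
    return [[p for _, p in grp]
            for _, grp in groupby(ordered, key=lambda e: e[0])]

def perform(score, press_velocities):
    chords = group_by_onset(score)
    out = []
    for velocity, chord in zip(press_velocities, chords):
        # The performer's velocity is applied to every note of the chord.
        out.append([(pitch, velocity) for pitch in chord])
    return out

# A C major dyad at tick 0, then a single note at tick 480,
# played with two presses of different strengths.
score = [(0, 60), (0, 64), (480, 67)]
performance = perform(score, [100, 40])
```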
We present temporal-scope grammars for automatic composition of polyphonic music. In the context of this work, polyphony can refer to any arrangement of musical entities (notes, chords, measures, etc.) that is not purely sequential in the time dimension. Given that the natural output of a grammar is a sequence, the generation of sequential structures, such as melodies, harmonic progressions, and rhythmic patterns, follows intuitively. By contrast, we associate each musical entity with an independent temporal scope, allowing the representation of arbitrary note arrangements on every level of the grammar. With overlapping entities we can model chords, drum patterns, and parallel voices -- polyphony on small and large scales. We further propose the propagation of sub-grammar results through the derivation tree for synchronizing independently generated voices. For example, we can synchronize the notes of a melody and bass line by reading from a shared harmonic progression.
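One way to picture a temporal scope, using assumed illustrative data types rather than the paper's actual representation: every entity carries its own (onset, duration) interval, so sibling entities in a derivation may overlap instead of being forced into a sequence.

```python
# Illustrative sketch: entities with independent temporal scopes.
from dataclasses import dataclass

@dataclass
class Note:
    pitch: int
    onset: float     # temporal scope start, in beats
    duration: float  # temporal scope length, in beats

def overlaps(a, b):
    """Two scopes overlap when neither ends before the other starts."""
    return (a.onset < b.onset + b.duration
            and b.onset < a.onset + a.duration)

# A chord: three notes sharing one temporal scope.
chord = [Note(60, 0.0, 1.0), Note(64, 0.0, 1.0), Note(67, 0.0, 1.0)]
# A melody note starting after the chord ends: purely sequential.
melody = Note(72, 1.0, 0.5)
```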
We introduce the W-calculus, an extension of the call-by-value λ-calculus with synchronous semantics, designed to be flexible enough to capture different implementation forms of Digital Signal Processing algorithms, while permitting a direct embedding into the Coq proof assistant for mechanized formal verification.
In particular, we are interested in the different implementations of classical DSP algorithms such as audio filters and resonators, and in their associated high-level properties such as linear time-invariance.
We describe the syntax and denotational semantics of the W-calculus, providing a Coq implementation. As a first application of the mechanized semantics, we prove that every program expressed in a restricted syntactic subset of W is linear time-invariant, by means of a characterization of the property using logical relations.
This first semantics, while convenient for mechanized reasoning, is impractical as written because it recomputes previous steps. To improve on that, we develop an imperative version of the semantics that avoids recomputation of prior stream states.
We empirically evaluate the performance of the imperative semantics using a staged interpreter written in OCaml that specializes an input W program into an OCaml program, which is then fed to the optimizing OCaml compiler.
The approach provides a convenient path from the high-level semantic description to efficient low-level code.
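The recomputation issue can be sketched in a Python stand-in (illustrative only; the actual development is in Coq and OCaml, and the filter coefficients are assumptions): querying each output sample through the recursive definition recomputes the whole stream prefix, while the imperative version carries state forward in a single pass.

```python
def filter_naive(x, n, b0, b1, a1):
    """Recursive reading: computing y[n] re-derives y[n-1], y[n-2], ...
    so evaluating every sample of a stream independently costs O(n^2)."""
    if n < 0:
        return 0.0
    xn = x[n] if n < len(x) else 0.0
    xp = x[n - 1] if 0 <= n - 1 < len(x) else 0.0
    return b0 * xn + b1 * xp - a1 * filter_naive(x, n - 1, b0, b1, a1)

def filter_imperative(x, b0, b1, a1):
    """Single pass: the previous input and output are state variables
    updated in place, so the whole stream costs O(n)."""
    y, xp, yp = [], 0.0, 0.0
    for xn in x:
        yn = b0 * xn + b1 * xp - a1 * yp
        y.append(yn)
        xp, yp = xn, yn
    return y
```

Both functions compute the same stream; the staged-interpreter approach additionally specializes the second form per program before handing it to the compiler.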
Live Coding is a creative coding practice in which the act of programming itself constitutes a performance. The code written during a Live Coding performance often generates media, for example a continuous stream of music or video. One of the challenges of Live Coding lies in finding a balance in the language design, such that the language is both expressive enough for the artist and simple enough to be programmed in real time. To reduce the overhead of manually coding every part of a Live Coding performance, we propose a tool for Live Coding that leverages program synthesis to simplify the process. Program synthesis retains the "show your code" ethos of Live Coding performances while lowering the barrier to entry to the performance practice.