to get an insight into the extent of this well documented branch of mathematics. In normal computing we want an algorithm to terminate as quickly as possible, to go through the fewest steps and return a value. In iterative algorithmic sound we usually want the opposite effect: to keep the algorithm running through its steps for as long as possible.
Depending on the language used and the program interpreter, a few lines of code, or even just a few characters, can define many hours of evolving sound. Sequences that can be defined by induction and computed with recursive algorithms may need only a pair of numbers to specify their entire range. Like a synthesiser, an algorithmic sequencer uses equations that produce functions of time, but unlike waveforms these are seldom periodic. Instead they are designed to follow patterns that make musically pleasing melodies or harmonies. Algorithms with musical uses leverage complexity (chaos), self-similarity and quasi-periodicity to produce patterns that have a degree of order. Fractal equations, such as those generating the Mandelbrot or Julia sets, or recursive procedures which generate number sequences like the Fibonacci sequence, are common devices.
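As a sketch of the idea, the following Python fragment (not from the text; the scale mapping and note numbers are illustrative assumptions) shows how a pair of seed numbers defines an entire recursive sequence, and how taking each term modulo the length of a scale turns it into a quasi-periodic melody rather than a periodic waveform:

```python
# Illustrative sketch: a recursively defined sequence mapped onto a
# musical scale. Two seed values (0, 1) specify the whole Fibonacci
# sequence by induction.

def fibonacci(n):
    """Return the first n Fibonacci numbers from the seed pair (0, 1)."""
    a, b = 0, 1
    out = []
    for _ in range(n):
        out.append(a)
        a, b = b, a + b
    return out

# Hypothetical mapping: C major scale degrees as MIDI note numbers.
C_MAJOR = [60, 62, 64, 65, 67, 69, 71]

def to_melody(seq, scale=C_MAJOR):
    """Coerce arbitrary integers into the scale by wrapping modulo its length."""
    return [scale[x % len(scale)] for x in seq]

melody = to_melody(fibonacci(12))
```

Because consecutive Fibonacci numbers grow without bound while the modulo wraps them into range, the resulting pitch pattern has order without strict repetition.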
An important distinction between synthesis and algorithmic sound is that synthesis is usually about sounds produced at the sample and waveform level under careful control, while algorithmic sound tends to refer to the data, like that from a sequencer, which is used to control these waveforms. It refers to music composition where we are interested in the abstract model, the emotional forms given by the rules of harmony, melody and rhythm, rather than in the final nuance of how the sound is rendered to the listener. The same algorithmic source might be hooked up to a string section or a brass band with similar effect. See Jacob96 for an overview. For extensive resources see AlgoNet.
If coerced to a range and scale, this sequence produces a pair of interesting melodies that converge. Shown in 1 is a Puredata implementation that generates the next Collatz number in a sequence; the sequence stops at a lower bound of 1.
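For readers without Puredata to hand, the rule the patch computes can be sketched in a few lines of Python (a minimal sketch, assuming the standard Collatz rule: halve even numbers, map odd numbers to 3n + 1):

```python
def collatz_next(n):
    """Next Collatz number: halve if even, otherwise 3n + 1."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

def collatz_sequence(n):
    """Iterate the rule, stopping at the lower bound of 1."""
    seq = [n]
    while n != 1:
        n = collatz_next(n)
        seq.append(n)
    return seq
```

Starting from 6, for example, the rule visits 6, 3, 10, 5, 16, 8, 4, 2, 1; coercing such a sequence into a pitch range and scale, as above, yields the converging melodies described.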
For example, a Markov machine can quickly be trained to understand 12 bar blues and notice that more often than not an F7 follows a C, and that where a chord has a major 3rd the melody will use a flattened one. This is interesting when data from more than one composer is interpolated, to see what Handel, Jean Michel Jarre and Jimi Hendrix might have written together. When encoding more complex musical data such as phrasing, swing, harmony, accidentals and so on, a high-dimensional probability matrix grows very quickly and contains a lot of redundancy. Markov machines are therefore best deployed in a specific role, as just one possible tool in conjunction with other methods. The Puredata implementation shown as 3 is the three state example given above.
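The mechanics of such a machine are easy to sketch in Python. In this fragment the transition table and its probabilities are illustrative assumptions, not statistics from any real blues corpus; a trained machine would fill the same structure with counted frequencies:

```python
import random

# Hypothetical first-order Markov model of blues chord movement.
# Each entry maps the current chord to the probabilities of what
# follows it; the numbers here are made up for illustration.
TRANSITIONS = {
    "C7": {"F7": 0.6, "C7": 0.3, "G7": 0.1},
    "F7": {"C7": 0.7, "F7": 0.3},
    "G7": {"F7": 0.5, "C7": 0.5},
}

def next_chord(current, rng):
    """Draw the next chord from the current chord's transition row."""
    chords = list(TRANSITIONS[current])
    weights = [TRANSITIONS[current][c] for c in chords]
    return rng.choices(chords, weights=weights, k=1)[0]

def generate(start, length, seed=None):
    """Walk the chain for `length` chords from a starting chord."""
    rng = random.Random(seed)
    seq = [start]
    for _ in range(length - 1):
        seq.append(next_chord(seq[-1], rng))
    return seq

progression = generate("C7", 12, seed=42)
```

Each extra musical dimension (melody degree, swing, accidentals) multiplies the size of this table, which is the redundancy problem noted above; a three state table like this one stays trivially small.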