A class of algorithmic methods, more complex than the mathematical sequences discussed so far, is called AI, or artificial intelligence (something of a misnomer). All algorithms have some kind of memory to store intermediate variables, such as the last two or three values computed, but these are usually discarded immediately to save space. An AI algorithm is much more persistent: it maintains state and input data throughout its execution while constantly evaluating new input. When a generative sequencer is given additional data that corresponds to knowledge, and the ability to make selections based on new input, we call this AI composition. The input data is processed by filters and pattern matchers that invoke actions within the system, resulting in sounds appropriate for the input states. Having extra knowledge data might seem to break the definition of procedural sound as purely a program. The question is, to what extent is the data part of the program structure, and to what extent a separate store of information? Some kinds of AI clearly separate knowledge from process; in others the process is adapted and itself becomes the encoded knowledge.
AI can be broken down into several categories. Adaptive and emergent systems start with a simple initial state or no data at all. Expert systems and Markov chains rely on large amounts of initial knowledge, though Markov chains are not classed as AI because they do not act on input. Other programs fall into a grey area between knowledge-based AI, cybernetic, and stochastic systems, such as variable state machines, cellular automata, symbolic grammars, and genetic algorithms.
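As a point of comparison, a Markov chain of this kind can be sketched in a few lines of Python. The transition table below is purely illustrative, not from any real composition system: each note maps to weighted successor notes, and the chain consults only its own current state, never external input, which is why the text excludes it from AI proper.

```python
import random

# Illustrative first-order transition table: each note maps to
# (successor, weight) pairs. The chain never looks at outside input,
# only at the last note it produced.
TRANSITIONS = {
    "C": [("E", 0.5), ("G", 0.3), ("C", 0.2)],
    "E": [("G", 0.6), ("C", 0.4)],
    "G": [("C", 0.7), ("E", 0.3)],
}

def next_note(current, rng=random):
    """Pick a successor of the current note by weighted choice."""
    notes, weights = zip(*TRANSITIONS[current])
    return rng.choices(notes, weights=weights)[0]

def melody(start, length, rng=random):
    """Walk the chain to generate a note sequence of the given length."""
    out = [start]
    for _ in range(length - 1):
        out.append(next_note(out[-1], rng))
    return out
```

Because the state is just the previous note, the output wanders plausibly but has no memory of phrase structure; that limitation is what the knowledge-rich systems below try to overcome.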
Expert systems have a book of rules they can consult to make decisions. But this knowledge is not fixed; what defines the system as AI is that it may revise its knowledge. Musical knowledge about harmony, modulation, cadence, melody, counterpoint, and so forth is combined with input data and state information in an adaptive AI composition tool. This works best for the kind of data that can be expressed as "facts" or "rules": when A happens then B should happen, and so on. What the expert system is doing is solving a problem, or answering a question: we are asking it to output music that best fits the given input requirements. The knowledge may be very general, or it can conform to specific genres such as jazz or rock music. Processing may be freeform, so that the AI program develops its own styles and tricks, or very tightly constrained, so that it only produces canons in the style of Bach. What makes an expert system interesting is the explicit way it models human problem solving. It is goal driven and has a solid understanding of where it is trying to get to, including sub-problems and sub-sub-problems. The core is called an inference engine: it takes a scenario and looks for schemas that fit known solutions, perhaps trying a few of the ones it thinks look best first. Sometimes these lead to other problems that must be solved. In the course of traversing this knowledge tree it sometimes finds contradictory or incorrect schemas, which it can update. The resultant behaviour is potentially very complex. Proper AI music systems are generally built on existing AI frameworks. Expert system shells are available off the shelf, ready to be filled with musical knowledge or any other kind. They are general purpose software components with well understood behaviours, equally adaptable to solving engineering and biology problems as to music composition. For detailed discussion of musical applications of expert systems see Cope92, Cope87.
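The rule-lookup idea at the heart of an inference engine can be sketched as a toy forward-chaining loop. The facts and rules below are invented for illustration, not taken from any real expert system shell: each rule fires when all of its conditions are present in the fact base and adds its conclusion, which may in turn trigger further rules.

```python
# Toy forward-chaining inference engine. A rule is a pair of
# (set of condition facts, conclusion fact). The musical "knowledge"
# here is a made-up example: a tense mood demands a cadence, a
# cadence in C major demands a dominant seventh, which resolves home.
RULES = [
    ({"mood:tense"}, "cadence:needed"),
    ({"key:C_major", "cadence:needed"}, "chord:G7"),   # V7 of C
    ({"chord:G7"}, "chord:Cmaj"),                      # resolve V7 -> I
]

def infer(facts, rules=RULES):
    """Fire rules repeatedly until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```

A real shell adds conflict resolution, backward chaining towards goals, and the ability to revise contradictory schemas, but the traversal of the knowledge tree works on this same fire-and-repeat principle.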
Other kinds of AI systems, much more like the human brain, are neural networks. In a neural network knowledge is stored within the program structure as weights, numbers that determine how likely a given output is for a given input. Initially the network is blank, like a baby with no understanding of the world. Over time it is trained by providing it with examples. An example consists of an input pattern and an expected output. Feedback systems within the network "reward" or "punish" it according to how well it gets the answer right. Eventually the network converges on an optimal set of weights so that it gives a correct answer each time. Examples of use might be training it to produce counterpoint for a melody, or to complete a chord cadence. Neural networks are best at solving very narrow problems in a fuzzy way. A neural network, once trained in picking harmonies, will be quite useless at another task. They are single minded and do not multitask well. Their fuzziness is their great strength, from which "true AI" seems to emerge. Those who play with neural networks will tell you that their output can seem quite spooky or uncanny at times. In contrast to expert systems, which are quite brittle and tend to freak out when given unusual or unexpected input, neural networks are able to interpolate and extrapolate from examples. For instance, trained on some simple examples of melody and then given input it has never seen before, a neural system can produce what it thinks is the best fit, often a musically correct or quite creative production. They are able to find hidden patterns implicit in a set of examples. Work on neural models of sound is extensive; see the Griffiths bibliography.
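The train-by-example feedback loop can be sketched with a single artificial neuron, a perceptron. The task below (learning logical AND) is a stand-in, not a musical one: the weights start blank and are nudged towards the expected output for each example, a crude version of the "reward and punish" cycle.

```python
# Minimal perceptron. Knowledge lives entirely in the weights w and
# bias b; training repeatedly nudges them by the error on each example.
def train(examples, epochs=50, lr=0.1):
    n = len(examples[0][0])
    w = [0.0] * n          # the network starts "blank"
    b = 0.0
    for _ in range(epochs):
        for x, target in examples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out   # punish wrong answers, reward right ones
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    """Threshold the weighted sum to produce a binary answer."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

After training, the learned weights generalise the examples: the same thresholding that answers the training inputs also gives a best-fit answer for inputs the network has never seen, which is the fuzzy interpolation described above.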
For a comprehensive FAQ sheet on NN technology see Sarle et al.
Automata are a collection of very simple self-contained programs that live in a bounded environment and interact. Each is a state machine that depends on a matrix of rules, similar to the Markov state machine, but also on input from the environment. The environment can consist of input data from other nearby automata. The emergent intelligence comes from how many of them work together as their respective outputs affect the inputs of others. The original "game of life" by J. H. Conway is rooted in finite set algebra. Say that we assign a measure of happiness or sadness to each individual in a group of cells that can reproduce, depending on how close they are to other cells. Then we specify that too many cells close together will annoy each other and commit suicide, while completely isolated cells will die of loneliness. Now something interesting happens: the cells start to move about in patterns, trying to build an optimal social arrangement. Hooking these cells up to musical notes can produce interesting compositions.
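One generation of Conway's game can be written compactly by representing the world as a set of live cell coordinates. The crowding and loneliness story above corresponds to Conway's actual rules: a live cell survives with two or three neighbours, and a dead cell with exactly three neighbours comes to life.

```python
# One step of Conway's Game of Life on an unbounded grid.
# The world is simply the set of (x, y) coordinates of live cells.
def neighbours(cell):
    """The eight cells surrounding a given cell."""
    x, y = cell
    return {(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

def step(live):
    """Apply the survival and birth rules to every relevant cell."""
    counts = {}
    for cell in live:
        for n in neighbours(cell):
            counts[n] = counts.get(n, 0) + 1
    # Born with exactly 3 neighbours; survive with 2 or 3.
    return {c for c, k in counts.items()
            if k == 3 or (k == 2 and c in live)}
```

To sonify this, one might map each live cell's coordinates to pitch and time, so the oscillating and gliding patterns the rules produce become repeating and evolving phrases; that mapping is a design choice, not part of the automaton itself.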
Genetic algorithms are a way, often a very slow way, of generating and selecting things. In this case we are selecting other algorithms. The idea is similar to cellular automata, except the cells are far more complex and share much in common with artificial lifeforms. Like neural networks they start with little or no knowledge other than an initial state and some target goals, then they adapt and evolve towards those goals by improving themselves to best cope with an environment which represents the input state. Usually the implicit goal is simply to survive, because we will make our selection based on the ones remaining after a period of time. These algorithms, patterns, mini-programs, or whatever we wish to call them, have certain characteristics of lifeforms. They reproduce and pass on genetic characteristics to their offspring, which can occasionally mutate at random. They die before a maximum lifespan expires. They are affected by their environment. If they were plants with leaves and roots in an environment that was randomly sunny, rainy, and windy, we might see that when it is too sunny the ones with big leaves prosper for a while, because they can photosynthesise, but then die quicker because their leaves have a big surface area. When it is windy the ones with big roots survive longer, and so on. Eventually plants emerge that are best adapted to the environmental input conditions. If the plants represent musical phrases and the environment represents states in the game that correspond to emotive themes, then we would hope to get phrases best adapted to the input. The trick is to choose appropriate initial rules of life and the right environmental conditions, and to leave the system running for a long time. Then we "harvest" the good ones to use generatively in later simulations.
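A bare-bones version of this evolve-and-harvest cycle can be sketched as follows. The note alphabet, target phrase, and fitness function are all invented for illustration: fitness counts matches against a hypothetical target phrase, the fittest half of the population breed by one-point crossover, and children occasionally mutate at random.

```python
import random

NOTES = "CDEFGAB"
TARGET = "CEGECEGC"   # hypothetical "well adapted" phrase for the demo

def fitness(phrase):
    """How many note positions match the target environment."""
    return sum(a == b for a, b in zip(phrase, TARGET))

def evolve(pop_size=40, generations=200, mutation=0.05, seed=7):
    rng = random.Random(seed)
    # Start with a random population: no knowledge, just initial state.
    pop = ["".join(rng.choice(NOTES) for _ in TARGET)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]       # survival of the fittest
        children = [parents[0]]              # elitism: best survives intact
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(TARGET))
            child = a[:cut] + b[cut:]        # one-point crossover
            child = "".join(rng.choice(NOTES) if rng.random() < mutation
                            else c for c in child)
            children.append(child)
        pop = children
    return max(pop, key=fitness)             # "harvest" the best survivor
```

In a real system the fixed target string would be replaced by an environment derived from game state, so the harvested phrases adapt to emotive themes rather than to a single fixed answer.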