It’s easy to feel flummoxed wading through the quagmire of jargon when trying to get to grips with AI – let alone with what it means for us as creators of music. More often than not, conversations about AI and music become circular – focussed on our fear of it as a threat to our artistry. However, if we are unable to break that circle, we don’t get to talk about what we want AI to be. What role do we want it to play in our musical lives?
If we get the right protections in place for us as creators, could it even help us be more human?
For this to happen, we need more conversations between music creators and AI coders, computer scientists and policy makers. As a first step to making this possible, here is a brief guide to some of the key terms we often hear.
Let’s start at the beginning… What actually is AI?
AI (Artificial Intelligence) is a broad term that means any type of simulation of human intelligence by computer systems or machines. Contrary to popular belief, if AI were human it would actually be a baby boomer. The first examples of working AI programmes were checkers- and chess-playing programmes developed in 1951.
AI music is also something of a boomer. The first known AI-composed piece was Lejaren Hiller’s incredibly sophisticated Illiac Suite, composed in 1956 on the ILLIAC I computer (from which it takes its name).
What is an algorithm? An algorithm is simply a sequence of rules we can give to a computer – usually to perform a task or solve a problem. For example, Bowie co-created a lyric-shuffling algorithm, the ‘Verbasizer’, back in 1995.
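To make this concrete, here is a minimal sketch of a lyric-shuffling algorithm in Python – an invented illustration in the spirit of Bowie’s cut-up approach, not his actual program (the lyrics are placeholders):

```python
import random

def shuffle_lyrics(lines, seed=None):
    """Cut each line into words, then recombine them in a random order."""
    rng = random.Random(seed)
    words = [word for line in lines for word in line.split()]
    rng.shuffle(words)
    # Reassemble into lines of roughly the original length
    avg = max(1, len(words) // max(1, len(lines)))
    return [" ".join(words[i:i + avg]) for i in range(0, len(words), avg)]

lyrics = ["the stars look very different", "planet earth is blue"]
print(shuffle_lyrics(lyrics, seed=42))
```

The rules are fixed and explicit – that is what makes it an algorithm rather than a system that learns.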
One way we can differentiate between different types of AI is how they learn to perform the tasks they do.
Machine learning is a type of AI that uses a mixture of mathematics, coding and computer science. It tends to focus on creating algorithms and models that learn from data and make predictions, rather than being explicitly programmed for each task.
Training is the bit where an AI system is taught to spot patterns in, learn from and interpret data. Once it has been trained, it can then make decisions based on new information it’s given. The end result of this process is often some kind of computer program, which we call a machine learning model. In other words, an AI model is a program that can analyse the data you put into it to find patterns and make predictions or decisions.
You may sometimes come across the term data ingestion when learning about machine learning. This is a technique used to extract raw data from one or more sources and then transform it to make it suitable for training machine learning models.
Deep learning is a type of machine learning that tries to imitate the way the human brain makes decisions. Instead of simply following an algorithm or ‘rule’ that can only perform a specific task, deep learning systems can learn from unstructured data without supervision.
A neural network is designed to, in some sense, mimic the structure of the human brain. These networks are commonly used in applications such as speech and image recognition.
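A neural network is built from many simple units (‘neurons’). A single artificial neuron, sketched below with arbitrary illustrative weights, just multiplies its inputs by learned weights, sums them, and squashes the result through an activation function; a network stacks thousands or millions of these.

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, squashed by a sigmoid activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # output is always between 0 and 1

# Two inputs and arbitrary example weights, purely for illustration
print(round(neuron([1.0, 0.5], [0.4, -0.2], 0.1), 3))  # ≈ 0.599
```

During training it is these weights that get adjusted, little by little, until the network’s outputs match the patterns in the data.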
Reinforcement learning is a type of machine learning perhaps most similar to how we think of human learning. As the name suggests, reinforcement learning is rooted in a system being rewarded or penalized for its actions when it interacts with its environment.
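The reward loop can be sketched as a toy example – the ‘environment’ here is invented and simply pays out more reward for one note than another; the agent discovers this by trial and error.

```python
import random

def train_agent(episodes=500, seed=0):
    """Learn, by trial and error, which action earns the higher reward."""
    rng = random.Random(seed)
    value = {"C": 0.0, "G": 0.0}   # the agent's estimated reward per action
    reward = {"C": 0.2, "G": 1.0}  # hidden payoffs of the toy environment
    for _ in range(episodes):
        # Occasionally explore at random, otherwise exploit the best-known action
        if rng.random() < 0.1:
            action = rng.choice(list(value))
        else:
            action = max(value, key=value.get)
        # Nudge the estimate a small step toward the reward actually received
        value[action] += 0.1 * (reward[action] - value[action])
    return max(value, key=value.get)

print(train_agent())  # the agent settles on "G", the better-rewarded note
```

No one labels the data here: the reward signal alone steers the learning, which is what distinguishes reinforcement learning from the supervised approach below.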
In supervised learning, classified and labelled data (in our case, music) is used to train the system to produce the ‘correct’ or desired outputs. When it comes to AI music, it is much more common than unsupervised learning. Usually this involves some kind of feedback system – one example might be telling the AI whether or not we like the composition it has made, thereby teaching it over time what is ‘good’ or ‘desirable’ to us. In unsupervised learning, unclassified and unlabelled data (in our case, music) is used to train an algorithm so that it can find patterns without that supervision.
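That feedback idea can be sketched as a toy supervised example: we label a few invented ‘compositions’ as liked or not, and the system learns a simple rule from those labels – here, just a threshold on average pitch, a deliberately crude stand-in for real musical judgement.

```python
def fit_threshold(examples):
    """Learn a pitch threshold separating liked from disliked compositions.

    `examples` is a list of (average_pitch, liked) pairs; the liked/disliked
    labels are the 'supervision'. Returns the midpoint between the groups."""
    liked = [pitch for pitch, ok in examples if ok]
    disliked = [pitch for pitch, ok in examples if not ok]
    return (min(liked) + max(disliked)) / 2

# Invented labelled training data: (average MIDI pitch, did we like it?)
data = [(72, True), (70, True), (50, False), (55, False)]
threshold = fit_threshold(data)
print(threshold)       # 62.5
print(64 > threshold)  # predict: a new piece averaging pitch 64 would be liked
```

Remove the True/False labels and the system has nothing to aim for – it could only group similar pieces together, which is the unsupervised setting.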