Computers don’t care what pop songs humans like. However, they can, given enough data, learn why humans like them. They, or, more precisely, their algorithms, can analyze these popular tunes and find commonalities among them that seem to appeal to members of the human species.
This is exactly what one researcher, Dr Mick Grierson, was asked to do by the car manufacturer Fiat. The company wanted him to use his computing expertise to find the most iconic song of all time so that it could be used to promote a new model. Grierson used machine learning algorithms to identify the characteristics that differentiate iconic songs from ordinary pop songs. He presented the algorithm with songs from all-time-best-songs lists and let it determine what it was that appealed most to humans. Grierson found that the top songs used “sound in a very varied, dynamic way when compared to other records.” He suggested that “this makes the sound of the record exciting, holding the listeners’ attention”. The algorithm also analyzed features such as beats per minute, key, chord variety, and lyrical content. In the end, it identified a list of 10 songs as having the features that most appeal to humans.
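The core of this kind of analysis can be sketched in a few lines. The snippet below is a toy illustration, not Grierson's actual method or data: the per-section loudness numbers are invented, and the "dynamic variety" score is simply the spread of loudness across a song's sections, a crude proxy for the varied, dynamic use of sound he reported.

```python
from statistics import pstdev, mean

# Each "song" is summarised by invented per-section loudness values
# (0.0 = quiet, 1.0 = loud). These are illustrative numbers only.

def dynamic_variety(loudness_per_section):
    """Spread of loudness across a song's sections (higher = more dynamic)."""
    return pstdev(loudness_per_section)

iconic = [[0.2, 0.9, 0.4, 1.0, 0.3], [0.1, 0.8, 0.5, 0.95, 0.25]]
typical = [[0.6, 0.65, 0.6, 0.7, 0.62], [0.55, 0.6, 0.58, 0.63, 0.6]]

avg_iconic = mean(dynamic_variety(s) for s in iconic)
avg_typical = mean(dynamic_variety(s) for s in typical)
print(avg_iconic > avg_typical)  # the "iconic" set shows more variety
```

A real system would extract many more features (tempo, key, chord variety, lyrics) and feed them to a trained classifier, but the principle is the same: find measurable properties that separate the two groups.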
Grierson has subsequently downplayed these results, saying, “my work has also been the subject of quite a few technology news articles over the years. In many cases, the things I have achieved have been misreported, exaggerated and sensationalised. There’s little I can do about this and I try to smile whenever I see YouTube playlists featuring my top 50 best songs of all time, some of which I have never even heard.”
Grierson is now involved in a project that will allow composers to use algorithms to produce music in whatever genre they choose. This seems to be something that many major tech firms are interested in. Google has Magenta Studio, IBM has Watson Beat, and Facebook has its Universal Music Translation Network. Although they are all billed as tools to support artists’ creativity, they can also compose music on their own. Magenta’s MusicVAE, for example, blends two simple melodies into a more complex musical composition without any human intervention.
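The idea behind MusicVAE's blending can be illustrated without the real library. In a variational autoencoder, each melody is encoded as a vector in a latent space; a point partway between two such vectors can be decoded into a new melody that shares properties of both. The sketch below uses invented three-number "latent codes" and plain linear interpolation; it is not Magenta's API, just the geometric idea.

```python
# Toy illustration of latent-space blending: the "latent codes" below
# are made-up stand-ins for what a trained encoder would produce.

def lerp(a, b, t):
    """Linearly interpolate between two latent vectors (t in [0, 1])."""
    return [x + t * (y - x) for x, y in zip(a, b)]

melody_a = [0.0, 1.0, 0.5]   # pretend latent code for melody A
melody_b = [1.0, 0.0, 0.5]   # pretend latent code for melody B

blend = lerp(melody_a, melody_b, 0.5)
print(blend)  # [0.5, 0.5, 0.5] -- a point "between" the two melodies
```

In the real system, a trained decoder turns that midpoint back into notes; the interesting part is that the decoded result is a coherent melody rather than a naive mashup of the two inputs.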
Watson Beat takes this a bit further. All you have to do is give the algorithm a simple melody and it will do the rest. As IBM Watson’s research team member Janani Mukundan puts it, you just “tell the system what kind of mood you want the output to sound like. The neural network understands music theory and how emotions are connected to different musical elements, and then it takes your basic ideas and creates something completely new.”
There is a strong, unspoken suggestion here that it is just a matter of time before a computer is able to create a number one song. Actually, the first step may have already been taken. Singer Taryn Southern wrote only the lyrics to her song “Break Free”. She told the computer what mood, tempo, and other elements she wanted, and the computer did the rest. It is a perfectly acceptable song and has had over 2 million YouTube views. There is every indication that, with a little marketing, it could become a number one song. This is a great improvement over Sony’s attempt to get a computer to write a new Beatles song. After being fed all of the Beatles’ songs, the algorithm composed the confusing and annoying “Daddy’s Car”.
The Sony experiment produced a song which seemed to be written by a computer, and I don’t mean the wacky lyrics, which were, surprisingly, written by a human. This could indicate that there might be a Turing Test for such music. If you, as a human, cannot tell whether a tune was written by a human or a computer, the computer should be accorded the same level of musical respect as a human composer. In my opinion, this has already been achieved, at least at the level of individual songs. Lyrics are a linguistic problem that will take computers much longer to work out, as lyrics would have to pass the original Turing Test (i.e. does Google Translate perform as well as a human?).
The problem with these algorithms is that the quality of their compositions is inconsistent. It is mostly a hit-and-miss affair. They are the ‘One Hit Wonders’ of the cyber world. Then again, even the Beatles recorded some poor tunes. So perhaps we should judge computer composers by a consistency factor that roughly matches that of their respected human counterparts.
Experts in the field have mixed opinions on how well computers will fare as composers. Some, like Scott Cohen of Sony, believe that “there will be a number one song that’s 100% AI-written.” Others aren’t so sure. They base their argument on the fact that computers may mimic human creativity and emotions but lack a deeper understanding of them. Although most believe a collaborative effort will eventually result in a number one song, few think a song totally written by a computer will attain this goal. My feeling is that the novelty factor alone may contribute to a computer-generated tune making it to the top of the charts, but whether a catchy tune should be considered a genuine composition is another matter.
There is another angle to this story. A growing number of composers are now beginning to wonder if these computers will eventually put them out of work. These algorithms can churn out thousands of tunes in a day. You’d have to expect that at least one of them might be a winner. In fact, why not create an algorithm that chooses the best of what is produced? Actually, Spotify is already doing this by using algorithms to determine which songs uploaded to its site are best.
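The "churn out thousands and keep the winner" strategy described above is easy to sketch. Everything in this snippet is an invented placeholder: the random-pitch "tunes" and the toy catchiness score stand in for a real generator and a learned model of listener appeal, such as the one Spotify is said to apply to uploads.

```python
import random

def random_tune(length=8):
    """A 'tune' as a list of MIDI-style pitches (middle C to the C above)."""
    return [random.randint(60, 72) for _ in range(length)]

def catchiness(tune):
    """Toy score: reward repeated adjacent notes (a crude proxy for a hook)."""
    return sum(1 for a, b in zip(tune, tune[1:]) if a == b)

random.seed(0)                                   # reproducible demo
candidates = [random_tune() for _ in range(1000)]  # "thousands of tunes a day"
best = max(candidates, key=catchiness)             # let a second algorithm pick
print(best)
```

Generation is cheap; the hard part is the scoring function, which is exactly where the question of whether algorithms really understand human taste comes back in.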
Apparently seeing the writing on the wall, Elton John is investing in an algorithm that will continue to write songs in his style long after he is gone. Couple that songwriting with a virtual AI Elton John, and he, or his avatar, could keep touring and making money for years after he is no longer around to do so himself. It is, however, uncertain whether people would pay for this experience.
The bottom line is this. A computer cannot sing of its loneliness, heartbreak, or love. It cannot refer to life experiences it has had. If it tried to fake emotions, humans would, or at least should, recognize the phoniness behind the words. In fact, if a battle for top-of-the-chart dominance ever erupts, humans will be forced to become more ‘human’ to differentiate themselves from their digital competitors. This will lead to a new wave of emotion-based songs rooted in real life events. It would, in fact, usher in something akin to a neo-Romantic era.
If, however, algorithms continue to advance, they may learn how to use music to manipulate human emotions even though they do not feel those emotions themselves. They would, in that case, become nothing short of digital psychopaths. If they made up life events to appeal to human listeners, they would become scam artists. We’ll just have to see how this develops. But clouds are undeniably on the horizon.