As the project progressed, the human side and the machine side of the collaboration evolved. Werzowa, Gotham, Levin, and Röder deciphered and transcribed the sketches from the 10th Symphony, trying to understand Beethoven’s intentions. Using his completed symphonies as a template, they attempted to piece together the puzzle of where the fragments of sketches should go – which movement, which part of the movement.
They had to make judgment calls, such as determining whether a sketch marked the beginning of a scherzo, a lively section typically placed in the third movement. Or they might decide that a line of music was the likely basis of a fugue, a form built by interweaving parts that all echo a central theme.
The AI side of the project – my side – found itself grappling with a range of challenging tasks.
Ahmed Elgammal
A fascinating accomplishment, and one that simultaneously highlights the current limitations of AI and its dangers. (The Scientific American podcast episode where I heard about this makes clear how much human input was required at each step, with composer Walter Werzowa listening to hundreds of software-generated variations daily and selecting those closest to Beethoven’s style.) The specific danger is that people would use similar tools to revive famous past musicians, flooding the market with familiar sounds and reducing public demand for young musicians and fresh creative work. Current stars could also use the same tools to ‘compose’ music and release hits at a faster pace, keeping themselves in the public’s attention and, again, marginalizing young talent with fewer resources. Either way, the end result would be a less diverse musical landscape and a growing concentration of income in the hands of those who can afford to make use of these complex algorithms.