Hit the reset button: Rethinking how we teach music technology
Kaley Lane Eaton
The couch had to go. After many smoky, poorly ventilated, dark afternoons detangling dusty cables, I stood in the dank recording studio in the basement of Cornish’s Kerry Hall. I focused my ire on this couch: black leather, ripped, dirty. Not big enough to be useful for an educational setting like this, but just big enough for two young people to sit next to each other uncomfortably.
Two months into my new position as Music Technology Director at Seattle’s Cornish College of the Arts, I found myself—a 30-year-old blonde with skinny jeans, a statement lip and a doctorate—confronting a century of abusive practices in this field. When I was hired at Cornish, the hope was not only to fill a staff position that could manage the recording studio and support classroom audio technology, but also to reconceptualize the role of technology in our music curriculum. Our department was in the midst of a massive shift that included a comprehensive curricular overhaul, and the thinking was that a fresh perspective on music technology as a creative practice could help to craft innovative and meaningful experiences.
I attempted to embody this shift with every step I took in the building. Imposter syndrome raged, as my path to developing a strange expertise in music technology practices was backwards: two opera degrees, a teaching degree, and then a doctorate in composition where I—weirdly—fell deeply into algorithmic composition and creative computer programming. As someone who had unlocked mystical secrets of psychoacoustics and electronic sound in the solitude of my tiny apartment on an old laptop using open-source software, a $90 interface the size of my wallet and a single decent mic I got for Christmas when I was 15, I found recording studios like the one I stood in now to be Very Extra, expressions of our obsession with patriarchy and capitalism, and ultimately, a shrine to these most destructive power dynamics in human history. Though there is obvious utility in a traditional studio setup, in this moment I could see through the supremacy built into this particular kind of signal flow, the redundancy of many tired rack mounts, the exhausting and intimidating nature of Pro Tools, expensive plugins, and implicit vertical integration schemes. I asked myself: are spaces and concepts like this the best or only way to serve students interested in working with recording and electronic sound? A resounding no bounced off the walls. This space was gatekeeping, manifested.
It is no secret that, historically, many people—notably women—have faced challenges being accepted as experts in the field of music technology, which is unbelievably paradoxical given that so many of the field’s influential pioneers were women, artists of color, and makers from humble backgrounds. Suzanne Ciani created sound design as we know it; Laurie Anderson invented ways of interacting with technological sound, in real time, with our voices and bodies; Pauline Oliveros taught us that space could be synthesized, expanding compositional parameters and ways of hearing; Pamela Z shows us the thin line between digital technology and human gesture; these are just four of many.
Similarly, exceptionally influential artists of color are often left out of academic conversations surrounding music technology, even though the vast majority of innovations in music production technology since the 1970s have come from Black artists: in your BM (or MM, or DMA) in music composition, did you learn about Pierre Schaeffer or Grandmaster Flash? Steve Reich or Jimi Hendrix? Who were the “electronic composers” you studied for your general exams? Bob Ostertag extols the virtues of Hendrix in a 2001 article questioning the state of academic electronic music performance, which endears critical minds like mine to his point, but a deeper reading of his piece reveals an exceptionally problematic erasure of the work of artists like Ciani, Anderson, Oliveros, Pamela Z, and even Flash (whom he does mention, but briefly). He claims that the body is not present in the performance of electronic music today, and that even in popular genres such as dance music, the presence of the body is mapped from the DJ onto the audience, ultimately defining this music as not “virtuosic.” To him, not sensing the body of the performer detracts from the performance: “Music that uses electronically generated sound from synthesizers or computers suffers from the problem that one cannot actually get one's fingers into the generation of the sound.”
But in 2001 all of the aforementioned artists, as well as pioneering rock bands, hip-hop artists, and producers, had quite literally gotten their “fingers into the generation of the sound,” and millions of people collectively loved this music, much of which remains enormously more popular than academic computer music. Do they not count, because of their relationship to commercial music? Does popularity negate artistry? Do the thousands of young artists who have followed their lead not count? Their absence from Ostertag’s article is confusing given his lauding of Hendrix, and ultimately undermines its premise. Leaving out these artists, and the fact that the state of electronic music in 2001 was generally one of great popularity, excitement, and innovation, suggests that Ostertag is so blinded by academic definitions of artistic success that his point can only be proven in that vacuum.
The erasure from the academic canon of people of color, women, and artists of all stripes who embraced popular music, as well as the problematic power dynamics at play in the larger music industry, speaks to the vortex of power in this field that has favored white men, economic privilege, and academic prestige. Much of the audio technology that we currently use has its origins in wartime communication technology, and many of the music industry’s norms as they relate to recording and production have roots in the racist and sexist power structures of production studios in the early 20th century. Whoever owned the gear owned your music.
The culture around music technology still expresses these problematic origins: we see guitar pedals with blatantly sexist names and marketing strategies; we see audio routing jargon continuing to use such phrases as “master” and “slave”; we continue to see access to audio technology hindered by exorbitant prices and gatekeeping patterns that are designed to exclude and intimidate people. The result of this segregation is a view that music technology is somehow a subset of, or tangent to, the field of music itself. “Othering” technology in this way allows musicians of all walks of life either to disengage from it or to convince themselves of their own perceived ineptitude, even while learning curves are smoother than ever and most people know more than they think.
Importantly, the story of human music-making is fundamentally a story about technology: from the first tools appearing alongside the first flutes, to the explosion of hip-hop culture, to the fact that binary code itself has its origins in the functionality of musical instruments. To hold true to this story, 21st-century musicians must co-opt and improve technology in all of its forms—as an instrument, as an aid to collaboration, as a tool of communication, and as a force for social good. In an age when literally anyone with access to a computer or iPhone can produce their own music for no cost at all, it is baffling to see the ghosts of this culture still working their magic on the psychology of young people. Capitalism and academia perpetuate this psychology as a form of self-preservation.
The pathway to undoing patterns of harm and bias, empowering my students to use and create technology in service of their art, became clearer in that moment, as I stood in our broken studio: change the space, change the tools, change the conversation, and change the canon.
DATA: HOW DO PEOPLE ACTUALLY FEEL?
In October 2020, I took the pulse of the Cornish music department’s relationship with technology through an informal Google survey. The survey asked faculty and students to rate their interest, comfort level, and expertise with music technology, provide anecdotes about that relationship, and share some basic (anonymized) demographic information. An equal number of men and women, with a small representation of folks not identifying with the gender binary, took the survey. Overall, everyone rated their interest level much higher than their comfort or expertise. While this is a small sample at an unusually experimental urban art college, the data revealed that, interest level aside, women and folks not identifying with the gender binary rated their comfort and expertise lower than men did. Race did not appear to correlate with responses to the rating questions, although this might be due to the small sample size; several folks did describe anecdotal experiences of racism in their written responses.
One white female student who identified as a producer and who had been “[using] DAWs since the age of 10 and ha[d] been super interested in making music with technology for as long as I can remember” rated herself a 2 in expertise. This comment alone illustrates the vast problems with this field: this student has MORE experience than me—her main professor of music technology—with producing music in DAWs, and yet she does not consider herself an expert.
A white female faculty member responded “I'm a classical [musician] and have not had opportunities to learn or feel comfortable with music technology. When people start talking about set-ups, microphones, and different pieces of equipment, I am lost and feel very stupid asking questions. I want to get better at this—I just don't know a way to do it without it costing an arm and a leg, feeling out of my depth, etc.” This respondent echoes the idea that much information about music technology is kept from students studying classical music.
As someone who was also never taught anything about music technology in my conservatory education (prior to my much more stylistically progressive doctorate), I would posit that this practice, rampant throughout a conservatory culture that favors intense specialization in one’s instrument, is rooted in classist ideas of labor distribution: the person rigging up the sound is a different person than you (read: is in a different caste than you), and you needn’t know those skills. Someone will always be around to do that for you. This faculty member’s angst is further compounded by the toxically masculine culture of knowledge gatekeeping that surrounds learning these simple skills on one’s own, and, finally, by the extreme economic barrier that comes with the expense of owning audio gear. This last point was reinforced by several other respondents, notably a Black female second-year student: “I use to make beats … and I find it really hard to make it past the learning curve of most DAW's nowadays.” This student also noted accessibility concerns: “music technology still seems pretty inaccessible to me when cost, and equipment needed, and learning curves are all added together (plus any additional things like time needed).”
Many people rated their interest level as high, but their comfort level as mediocre and their expertise as low. One respondent (white female, 4th year) who rated her interest a 5 and her expertise a 1 wrote: “I don’t really know anything about music technology except for the small knowledge I’ve gained from practicing with Ableton by myself this year.” How is this “not really knowing anything”? Can we, as educators, better facilitate students embracing that point at which they know that they “know something”?
CHANGE THE SPACE
Ultimately, that fateful day of untangling in the basement studio at Cornish resulted in a 4-hour coffee conversation with my chair in which I asked if we could rebuild the space into a more versatile room that could accommodate traditional recording projects, rehearsals and practice sessions with electronic media, composition projects involving spatialized sound, work with video, larger class sizes, students plugging their own laptops into the studio system, and anything and everything in between.
To support my visions of dismantling inequities, it was important to me that we revamp not only the functionality of the space but the aesthetic as well: move mics from a locked, inaccessible closet to a safe cabinet in the space; TAKE OUT THE COUCH and replace it with multiple stackable chairs that could be moved around the space; and decentralize the “control console” by distributing instruments and other tools equally around the room. And finally, paint it dark purple. Something about this color was warm and invited focus; the parts of me that are interested in sensory healing modalities understood this as a color that can connect us with something higher than ourselves, beyond ego.
It’s still a studio, and embraces all of the functionality that such a space needs, but there is an ineffable feeling in the room, one that is more inviting, inspiring, versatile, and enriching. A space that can support a wildly diverse array of artistic projects is also a space that can support a wildly diverse array of humans. Unsurprisingly, after the renovation was completed, studio use skyrocketed—the space was no longer used only by the 2-3 students specifically interested in music production, but also by performers who wanted to experiment with live processing, singer-songwriters who wanted to start learning how to record and arrange their own music with technology, electronic composers working on installations, and—of course, because it’s Cornish—mavericks who went into the space with no agenda and came out with a new skill.
CHANGE THE TOOLS
If you were to purchase the minimum of what many consider “industry standard” software and hardware—Pro Tools ($600), a pack of orchestral plug-ins ($500 or more), two AKG 414 condenser mics ($2,349), high-quality monitors and headphones (totaling upwards of $800), a full keyboard MIDI controller ($700), and a high-quality audio interface (around $2,000)—you would be out roughly $7,000. If you were to build a robust modular synthesis set-up—the expectation for many film composers and music producers—you could be out as much as $20,000.
Logic Pro X is much more affordable at $200, but it is exclusive to the generally expensive Mac computers required to run it. Ableton Live provides a range of options, from $99 for the basic version (which, quite honestly, has an enormous amount of functionality) to $749 for the Suite, which is advertised as suited for “professionals” (but how can you afford $749 before becoming a professional?). Reaper, increasingly common among DIY musicians and Gen Z, is an affordable $60 for a basic license and $225 for a commercial license, but is seldom taught and occasionally dismissed amongst academics and industry professionals. Hardware—interfaces, microphones, monitors, headphones, and cables—ranges widely in price, the main caveat being the stigma surrounding the use of “affordable” audio gear. If your gear is cheap, you must not be serious or knowledgeable about technology. In 2020, as technology increases in quality and prices continue to drop, this stigma is almost baseless; I’ve done performances with exceptionally complex live electronic processing components using a $300 condenser mic, open-source software, and a 10-year-old PreSonus AudioBox, and no one would be able to tell the difference in sound if I had used $5,000 worth of gear and software.
Even the less expensive options are not realistic for many people. And yet, in higher education, the norm is to teach “industry standard” software: invest in expensive on-campus studios by getting “deals” (sort of—I’ve done a lot of campus studio designing, and it is far from a cheap venture) from Avid or Apple, teach the students what they will need to know to have a career as an audio engineer, producer, or film composer—and then leave them with $100k in debt and no access to these tools when they graduate. This is not unlike the cost barriers facing classical musicians, who must contend with instruments costing tens of thousands of dollars, competitions costing hundreds, summer programs in the thousands, private lessons in the thousands per year, on top of their conservatory debt. Is this a pyramid scheme?
Luckily, there are still ways of engaging with music technology and building a robust and well-rounded expertise without joining the scheme. Music schools can teach important principles of acoustics, electricity, and voltage, which are as much fundamental properties of music in the 21st century as the concert hall and the score. Classes in building analog instruments, soldering cables, and harnessing raw signal sources encourage a culture of making and building that can cut through expenses and empower students.
When Billie Eilish and Finneas won the Grammy for Record of the Year in 2020, one of the major glass ceilings shattered: the need for a record label and corresponding studio access became obsolete. They produced this (masterful) record from their house, with basic recording gear. “Bedroom pop” is a lens into the future: you can make a record on your bed with just a few pieces of inexpensive gear, and it can sound great. Indistinguishable, in fact, from records produced in giant studios with big names plastered all over.
As educators, we can harness this reality in our programs, shaping our assignments, curricula, and expectations around what many students will already have in their possession. We must be sensitive and nuanced about this, however, as access to a laptop and software is by no means a universal given. Schools should provide laptops, ideally, but can also incorporate technology in their studios and computer labs that resembles the technology students may actually be able to purchase someday soon (i.e., not a $7,000 studio setup), so that the learning curve is flexible and transferable. Simply put, resist the idea that a fully stacked recording studio is the only way to create a Grammy-winning (or any other) recording: encourage students to work with what they have, facilitate collaborations among students, and let your school’s available technology support them in discovering less expensive and more accessible avenues that they can harness when they leave.
I had a student a few years ago who was (and continues to be) one of the wilder artists I have encountered in my lifetime. Their compositional interests ranged widely and their instrumental virtuosity was at an expert level. They felt a strong pull to incorporate these many skills into a career as a film composer, and applied to multiple graduate school programs in this field. I was shocked to hear that part of the portfolio review for admission to one of these highly competitive programs included a “studio review” in which applicants were interviewed about their home studio set-up and the gear that they owned.
Let us ponder how egregiously problematic this kind of economic gatekeeping is: to be admitted to a graduate program, one must have already invested tens of thousands of dollars into a professional-level home studio. Graduates of this program often go on to successful careers in film composition, but it is no secret that this is likely not because of the rigor of the program, but because of the existing economic and social privilege of the students who are accepted and enroll. Again, we feel whispers of parallels to the field of classical music, where admission to top programs is often dependent on the quality of your instrument and the engineering sophistication of the recording you used for the pre-screening application. Is this a pyramid scheme? Emphasizing these expensive tools as the only ways to achieve a meaningful career in film composition and music production is a problem not only with these music schools, but with the film industry itself. We can facilitate progress as educators by embracing technology that provides greater access, and ultimately, greater creativity, to students. In this way, we can steer the industry.
The composer, performer, and live-coding genius Alexandra Cárdenas was a guest in my interactive electronic music class this fall, and shared with us the ethos of the live-coding movement: that virtuosity, in electronic music, should be a virtuosity of empathy and expression, and that open-source technologies are the future of a representative and empowering culture of music technology. When she was a composition student at a conservatory, the institution committed to using all open-source software, which is where her interest and expertise in SuperCollider began.
My story is similar. The choice these institutions made to use free software has sent a diverse cohort of artists out into the world, literally making waves and developing creative practices that avoid the capitalist pyramid. While many musicians are intimidated by the apparent knowledge barrier of learning a programming language to make music, the wealth of free, public resources, examples, templates, and experts who will support your learning at no cost makes the access curve so flat that the learning curve flattens a bit as well. Indeed, both Cárdenas and I learned SuperCollider and other programming languages with zero computer programming experience and not a dime spent.
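To give a concrete sense of how low that barrier to entry actually is, here is a minimal sketch in SuperCollider (free and open source at supercollider.github.io); the specific tones and numbers below are only an illustration of mine, not anyone’s piece or curriculum:

    // boot the audio server, then evaluate the lines below one at a time
    s.boot;

    // one line of code makes sound: a quiet 440 Hz sine tone
    { SinOsc.ar(440, 0, 0.1) }.play;

    // a few more lines make music: two slightly detuned sines, slowly swelling
    (
    {
        var tone = SinOsc.ar([440, 443], 0, 0.1);  // stereo pair, detuned by 3 Hz
        var swell = SinOsc.kr(0.2).range(0, 1);    // slow, silent-to-full envelope
        tone * swell;
    }.play;
    )

Everything above runs on an aging laptop, costs nothing, and is documented in the language’s built-in help files.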
If all music schools committed to teaching open-source software, the landscape of our industry would look vastly different and more equitable, and music being made in the world would no longer reflect the subconscious of the companies that make the software. Further, learning to program is a skill that, if acquired by a wider population of people, cuts through the sinister forces of surveillance capitalism. Our society is run by code—and the people who write that code are, overwhelmingly, white men with computer science degrees. What would happen to our culture if the majority of artists harnessed coding as one of their many tools? Think about code’s obvious historical analog, the written word: at first, it was the purview of monks and priests and used as a weapon of power and oppression. But then came the poets. Language is as much a force for good as it is for evil, now. We need the Maya Angelous of code to emerge. Incorporating it as a musical practice is one step towards a better society.
CHANGE THE CONVERSATION
Music technology is an empowering tool that we are all simultaneously inventing and learning: this has been the relationship humans have had with musical instruments throughout human history. Instruments are technologies, and technology is an instrument. Let us not “other” this field in the ways we do, especially by socially coding people who engage with technology as either hyper-powerful gatekeepers or mere service providers. By changing the tools that we use to teach music technology, we can dismantle these preconceptions.
One way to do this would be to incorporate music and sound technology throughout the curricula that lead to a music degree, for example as part of the musicianship or theory sequence. Essentially, trick students into becoming experts: when teaching harmony or voice leading, have them practice in a DAW, linking what they see on the staff with the keyboard and the MIDI piano roll. For sight-singing assignments, or any activity that requires a student to sing or play their instrument, incorporate recording with microphones into a DAW. In performance settings, have students design their own stage plots and work out the signal flow their performance will require. Teach acoustics and psychoacoustics with open-source, code-based programming languages, having students explore spectrograms, waveforms, synthesis, and digital processing as a part of musicianship.
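As one illustration of what that last suggestion could look like, here is a brief SuperCollider sketch; it is a hypothetical exercise of my own devising, not a prescribed assignment, and assumes only the free SuperCollider environment on a laptop:

    // boot the audio server first
    s.boot;

    // waveform: plot a hundredth of a second of a 440 Hz sine wave
    { SinOsc.ar(440) }.plot(0.01);

    // spectrum: open a real-time analyzer, then listen to a tone rich in partials
    FreqScope.new;
    { Saw.ar(220, 0.1) }.play;

    // psychoacoustics: two sines 4 Hz apart produce audible beating
    { SinOsc.ar([440, 444], 0, 0.1).sum }.play;

Because everything here is free and runs on modest hardware, the same exercise transfers directly to whatever equipment a student already owns.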
To mitigate lingering effects of the cultural conceptions that students bring with them, an effective strategy I have implemented for students with high expertise and low confidence is to place them in situations where they are mentoring other students. That student who has been making beats and producing their own songs for ten years can now be a theory tutor, because technology becomes integrated into musicianship. By incorporating these kinds of immersive tasks throughout the teaching and learning experience (as we have begun to do at Cornish), this student will develop a rich, hands-on relationship with technological tools without ever having taken a class specifically in music technology. This will serve them in the professional world and open the doors to creative expressions they may have never considered. Technology will cease to be “othered”: it will be a part of music itself.
CHANGE THE CANON
We must abolish the idea that there is some kind of academic electronic music that is in any way different from popular music. There is Stockhausen, but there is also Can; there is La Monte Young, the Velvet Underground, Laurie Anderson, Grandmaster Flash, Quincy Jones, Jimi Hendrix, Pauline Oliveros, Prince, Jonathan Harvey, Imogen Heap, Beyoncé, Suzanne Ciani, James Blake, Laurie Spiegel, Tristan Murail, Wendy Carlos, Kanye West, Aphex Twin, Radiohead, Questlove, Timbaland. The list goes on, and on, and on. When we simply make the list of folks who have invented, engaged, and advanced uses of music technology in the 20th and 21st centuries, genre disappears. It takes an explicit effort on our part to segregate these musics.
To cultivate young musicians who are emboldened to self-express with meaningful tools, we have no choice but to press the reset button on the field of music technology. We have begun this work at Cornish; I firmly believe that the data I shared above would have been profoundly different—in a bad way—four years ago. It is not insignificant that my own mentorship has played a role as well, which speaks to the importance of equitable hiring practices. Resetting the field in these ways becomes exceptionally important in an age where technology, spurred by Moore’s Law, is quickly surpassing our ability to understand it, leading to the destruction of our rights, our democracies, and our private lives. Artists have a responsibility to speak truth to this power.
Kaley Lane Eaton (www.kaleylaneeaton.com) is a composer and pianist and Interim Associate Professor and Director of Music Technology at Cornish College of the Arts.