Music in the Digital Age

Brief History of Electronic Music

The digital age has transformed many industries, and the music industry is a striking example. These days, electronic music, especially Electronic Dance Music (EDM), is arguably the most popular genre, both among listeners and in the eyes of music’s most powerful conglomerates.

But what defines electronic music? The genre emerged around the time computers were invented, in rudimentary form. It was later expressed through turntables and tape, and took shape in previously unheard-of genres such as house and techno. German musicians who were also proficient with computer technology experimented with strange sounds that were a far cry from the danceable beats of today’s top 40. Synthesizers were introduced in the 60s and 70s, and the 80s saw the popularization of accessible electronic music production technologies. But only in the past decade or so did electronic music explode into such a dominant force in the music industry. Today, some of the most widely distributed songs are composed and produced entirely on computers, with or without a vocalist or a traditional instrument.

Programming Sound Production Software in Computer Science Curricula

So what measures are computer science schools taking to incorporate the programming of sound production software into their curricula?

At the University of Oregon, the Department of Computer and Information Science (CIS) teaches a variety of computational solutions to many of today’s challenging problems. Its students can learn an array of computer-based skills, from theory to programming. While a general knowledge of programming can be applied to many industries, learning to program sound production software as part of the CIS curriculum remains only an idea being passed around. That’s not to say, however, that there isn’t an interest in the subject among CIS students.

Programmer, Producer, Composer?

Computer Science major Mitch Meabe is a fledgling producer and aspiring programmer.

One day, Meabe hopes to program his own sound production software. In the meantime, he is familiarizing himself not only with the multitude of software out there, but also with the basics of music theory and composition. Apparently, this is the right way to go about it. “If the programmer is going to do a good job,” says Joseph Stolet, a professor of music technology at the U of O, “they have to have a good musical background. If they’re a computer scientist and they hope to make musical decisions they have to know about music.”

Programming one’s own software gives a producer complete control over the sound the platform emits, as well as the ability to determine exactly how that sound can be shaped. Meabe also wants to express his own unique voice by producing music with the sounds he creates. This combination, as opposed to relying on common plug-ins, is employed by only the most veteran producers, such as Grammy Award winner Skrillex. “I just want a really unique sound but at the same time is appealing to a lot of people. Even if you program your own sound it’s really hard to get away from the synth that everybody else uses.” Interestingly, Meabe says programmers add elements of randomness to their sound so that those using their software feel that what they make is their own creation. “A synth is not just as computerized as it seems,” says Meabe. “[Programmers] make it random on purpose.”

The interrelatedness of the computer programming and music industries may seem an odd marriage, with computer scientists being notoriously “left-brained” and logical while musicians are “right-brained” and artistic. However, some of the biggest stars in music today are like Meabe, and have a mastery of both the technical and artistic aspects of creating music.

“There would be no industry if it weren’t for technology,” says Professor Stolet. Stolet is the director of Future Music Oregon and of Intermedia Music Technology at the University of Oregon School of Music, a program “dedicated to the exploration of sound and its creation, to new forms of musical and new media performance, and the innovative use of computers and other recent technologies to create expressive music and new media compositions.” Stolet’s works in electroacoustic music have been featured at prestigious venues ranging from the Museum of Modern Art in New York to the International Academy of Media Arts and Sciences in Gifu, Japan. He understands the level of intimacy a modern musician must have with technology in order to create and distribute their works. All this despite the fact that Stolet came from a classical conservatory background. Still, he refers to the music industry as “technology dominated and technology dependent.”

Electronic Timbre

The programming of sound production software is so nuanced that the tonal quality (timbre) of a synthetic instrument originates from a dull, basic sound wave. In creating sound production software, the programmer is also responsible for implementing the process of subtractive synthesis (step-by-step guide adapted from Wikipedia).

1. First, two oscillators produce relatively complex, harmonic-rich waveforms.

2. In this case we will use pulse-width modulation for a dynamically changing tone.

3. The two sounds are mixed. In this case they are combined at equal volume, but any ratio could be used.

4. The combined wave is passed through a voltage-controlled amplifier connected to an ADSR envelope. In plain language, its volume is changed according to a pre-set pattern. In this case we attempt to emulate the envelope of a plucked string.

5. We then pass the sound through a shallow low-pass filter.

6. In this case, to better emulate the sound of a plucked string, we want the filter cutoff frequency to start in the mid-range and move low. The effect is similar to an electric guitar’s wah pedal.
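The six steps above can be sketched in code. What follows is a rough Python/NumPy sketch of subtractive synthesis, not the actual software a producer like Meabe would write; the sample rate, envelope times, and filter sweep range are all illustrative assumptions.

```python
import numpy as np

SR = 44100  # sample rate in Hz (an assumed, common value)

def sawtooth(freq, dur):
    """Step 1: a harmonic-rich sawtooth oscillator."""
    t = np.arange(int(SR * dur)) / SR
    return 2.0 * (t * freq % 1.0) - 1.0

def pwm_pulse(freq, dur, lfo_rate=0.5):
    """Step 2: a pulse oscillator whose width is slowly modulated (PWM)."""
    t = np.arange(int(SR * dur)) / SR
    width = 0.5 + 0.4 * np.sin(2 * np.pi * lfo_rate * t)  # duty cycle sweeps 0.1..0.9
    return np.where((t * freq % 1.0) < width, 1.0, -1.0)

def adsr(n, attack=0.005, decay=0.3, sustain=0.2, release=0.5):
    """Step 4: a plucked-string-style ADSR envelope (fast attack, long decay)."""
    a, d, r = int(SR * attack), int(SR * decay), int(SR * release)
    s = max(n - a - d - r, 0)  # sustain segment fills the remainder
    env = np.concatenate([
        np.linspace(0, 1, a),        # attack
        np.linspace(1, sustain, d),  # decay
        np.full(s, sustain),         # sustain
        np.linspace(sustain, 0, r),  # release
    ])
    return env[:n]

def lowpass_sweep(x, f_start=2000.0, f_end=200.0):
    """Steps 5-6: a one-pole low-pass filter whose cutoff sweeps downward."""
    w = 2 * np.pi * np.linspace(f_start, f_end, len(x)) / SR
    alpha = w / (w + 1)  # per-sample smoothing coefficient
    y = np.zeros_like(x)
    acc = 0.0
    for i in range(len(x)):
        acc += alpha[i] * (x[i] - acc)
        y[i] = acc
    return y

def pluck(freq=220.0, dur=1.0):
    mixed = 0.5 * sawtooth(freq, dur) + 0.5 * pwm_pulse(freq, dur)  # step 3: equal mix
    shaped = mixed * adsr(len(mixed))                               # step 4: envelope
    return lowpass_sweep(shaped)                                    # steps 5-6: filter

note = pluck()  # one second of a synthesized "plucked string"
```

The resulting array can be written to a WAV file or fed to an audio library for playback; every parameter here is a knob that a real synthesizer would expose to the user.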

There are countless directions to take a simple sound wave, and thus the market for sound production software is expansive. Many plug-ins for different sounds are available for free, and companies struggle to stay relevant in a confusing and evolving market.

Profile With Singer/Songwriter Jaqui Grae

Jaqui Grae is one of many female singer/songwriters who aspire to become recording artists. She studies voice at Berklee College of Music and released her first EP, “I’m Not Cold,” last October. Berklee is one of the most contemporary schools of music, in contrast to more traditional conservatories, and it prides itself on being on the cutting edge, incorporating new industry trends into its classes. Some would say its students represent the next generation of contemporary musicians.

While fluent in sound production software and at home in the recording studio, Grae composes her songs in the most basic setting: in front of a piano. However, acoustic music struggles to find relevance in the contemporary music industry, where more and more technology is arising to alter, enhance, or even create the sounds in a song. Jaqui Grae shares her experience as an acoustic composer in a technology-dependent music industry.

So you write your songs acoustically; how is the transition to digital production?

JG: I would describe it as long and tedious. Especially when you’re trying to produce the music yourself, along with other people. Trying to evoke the right emotions and send the same message that the songs do acoustically, and have the same level of intimacy, is really, really hard to do. Many people would agree that technology tends to barricade the true emotion of songs. So it’s really hard to keep the same feeling that the acoustic versions of the songs have. But once you get there, it’s that incredible “Aha!” moment. It takes a really long time though.

Do live instruments make it through production anymore? I.e., how edited do they become, and is it really the same sound? Following from that, maybe you could explain the recording process for instruments.

JG: Well, it really depends what genre you’re doing. For my style of music there’s a lot of production involved. It’s this big, alternative rock genre. There’s a lot of electronic elements and a lot of technology creeps in. Sometimes the real instruments get taken out. It is harder to bring in players to play bass, and play guitar and do all this stuff. And me being a piano player and composer, I can just create most of the parts myself. As a rule of thumb though, I think a lot of artists would agree that the more real instruments you have in your music, the more real your music will be, and the more people it will be able to connect with. For this EP, due to budget and time and all that stuff, we didn’t have that many actual instruments, but to say that they’re being taken out of all music and being substituted is totally not true.

Would you have preferred to be a musician before the digital age?

JG: Hmm… no. A lot of singer/songwriters wish they could have lived in the 60s, when you could just get your guitar and play and sing, and be like Joan Baez or Bob Dylan or whatever. But I am so grateful to be born in this generation, where you can do so much with music. The sounds that we create and the types of music we create are incredible: it’s limitless. And anybody can be their own producer. Not to say that you can create an entire album on your own, because you need to get out of your own head. But anybody can create a skeleton of the final product. They can create the demo and produce most of it themselves. And artists are able to have so much more creative control over their music these days, because it’s not just you and your guitar and then the one recording studio that’s operated by the one team of people. Everybody’s their own producer these days, and I think it’s incredible.

Here is Jaqui’s title track off the EP, “I’m Not Cold”.

The entire EP can be found here:

http://jaquigrae.bandcamp.com/album/im-not-cold-ep

A separate subject, but probably one of the most heated debates in the music industry, is the issue of file-sharing and piracy that has driven music sales to record lows in recent years. This is just one more way the digital age has transformed the music industry: into one that some musicians find incredible to be a part of, while others remain skeptical, longing for a time when the perfect song could be performed with nothing more than a guitar and a pair of vocal cords.

-Lucas Stewart

