The Struggle

The next few posts will actually be in personal “blog” format, chronicling the events of the next few months. I may slip some musicology posts in between these more sentimental ones.

Ahem…

My feet squeaked as I entered the halls of Tech, one of the largest buildings on the Northwestern campus. I was nervous, more so than I had been for the performance of Mahler’s Rückert-Lieder I had played last night for my studio class. I had printed the music on cardstock but failed to bind the pages with anything stronger than scotch tape. The result was an embarrassing page-turning catastrophe: the music actually fell to the floor near the end of the performance as I scrambled to pick it up. It was almost comical. Well…at least that will never happen again, or so I hope.

Today was the day I was pitching the idea of “Acoustical Engineering” to McCormick, a program that would essentially meld the arts and sciences into a single entity.

…at least that’s what I planned.

As I raced up the stairs of Tech, I stopped momentarily to look pensively at the snow falling outside one of the windows. My mind was racing, or rather wandering, back over the past six months of thoughts, ever since I had changed majors.

For three years I was a dual-degree B.S./B.M. student in Manufacturing and Design Engineering and Tuba Performance, taking seven to ten classes a quarter. It was grueling, to say the least, but I decided to change my engineering area of expertise into something more musical. This paradigm shift (man, I love that phrase) resulted in a startling realization: Northwestern didn’t have an acoustics program, let alone an architecture program. The two areas the school was acclaimed for, music and engineering, held nothing between them. There was music and there was engineering: either/or, not both.

Who’s to say I couldn’t change that?

I walked into the undergraduate office about two minutes late. I cringed at the thought: acceptable for a meeting, terrible for a musician. I would have been fired on the spot had I done that in any professional musical setting. “Just another thing to work on,” I made a mental note.

Over the past four months, while building up the courage to approach the school about this particular area, I had emailed a number of professors across a wide array of fields who, I figured, might be interested in the subject matter. The only thing I received back, if anything at all, was disinterest.

It made sense: why would the school offer a subject that professors felt no urge to teach? I had reached a dead end and figured, “hell, I have nothing to lose, I’m just going to go for it.” Today was the day I would toss the idea out to an administrative head.

The man I was meeting that day was the associate dean of undergraduate engineering, Mr. Steve Carr. I was incredibly intimidated to talk to him, though I guess it’s in my nature to be intimidated by powerful people. The irony that I regard myself as an affable person was not lost on me as I straightened my full-Windsor’d purple tie and walked in.

We shook hands cordially. He seemed friendly, and as I began to introduce myself, the familiar waver in my voice signaled that I was nervous as hell. Despite my quavering, he was polite and listened to my pitch in full detail. I specifically stated that I understood the gravity and practicality of a program like this, and only wanted to throw the idea out there, noting that it would be an accomplishment even to get a class in acoustics.

He heard me out in full, and between some personal anecdotes about the importance of noise control, I began to sense that he was open to the idea. I remember asking if it were at all possible for him to send this to the dean of the school himself, and he specifically stated, “I’ll make sure he sees it; I’m sure he would be interested.”

I don’t think he understood how my heart almost leapt out of my chest at that point.

He gave me the names of a number of faculty who might be interested in the subject, and I wrote them down with gusto. We shook hands and parted, and I eagerly began to craft a slew of polite and delicate emails asking some of those faculty about the possibility.

What I received threw me back down into the dark pit. “I don’t know why you are bringing this to me.” “I doubt that we could make a major in this topic.” “You seem to be pursuing the ad hoc major already, and that is good, and it may be all you can do.” I was scolded, treated like I didn’t know any better, like a ten-year-old child. I felt like these professors, people who had devoted their lives to teaching people like me, harbored no interest in my motives because they held none themselves for the topic…the topic of education and the pursuit of knowledge. The thought was alien to me on a pragmatic level, but it was not the first time I had been treated this way.

A lot of professors seemed to approach me as a student before they approached me as a human being.

It was disheartening, that lack of empathy. It stung in a way unfamiliar to me; it was different from the normal stabs at my pride and at my work. I normally shrugged them off, but something stuck this time. I am more determined than ever to see this thing through, to prove them all wrong. I was the underdog on multiple fronts, and have been for quite some time.

Though there is one thing I have learned through undergrad: never underestimate the runt of the pack.

And now, with the cards down on the table, I wait. It may be a long wait.

~MJ

Musicology - The Authenticity of Guitar Hero

I’m really only scratching the surface here, but I thought I would post a presentation I gave for my musicology class the other day about the game Guitar Hero.


Video game music is evolving to the point where even non-musicians can get immersed in the music. The idea that music is getting into the hands of more and more people through games is incredible. Pairing the physical gestures of a live performance with previously recorded sound is often dubbed a “Schizophonic Performance,” combining “schizo” (split, the same root as in “schizophrenia”) and “phonic” (sound): the performer’s gestures are split from the source of the sound itself.

A level of coordination is needed to play a game like this. The player feels the need to keep succeeding so that the song continues. The game has the capacity to inspire the feeling of making music. If one messes up, the miss is marked by some sort of cacophonous sound, so the incentive to do well is there. The song doesn’t end if a note or even two is missed, but if one keeps consistently missing notes, the song ends. This is an incredibly important psychological effect of the game. It is actually a good lesson: it rewards good play and makes people want to keep doing better than before. That can really be applied to anything we do!
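To make that feedback loop concrete, here is a tiny hypothetical sketch in Python (the numbers are invented for illustration; this is in no way how Harmonix actually implements it):

```python
# A hypothetical sketch of the feedback loop described above (made-up numbers,
# not Harmonix's actual implementation): hits refill a "rock meter," misses
# drain it faster, and the song only ends early if the meter runs dry.
def play_song(notes_hit):
    meter = 0.5                              # start the meter half full
    for hit in notes_hit:
        meter += 0.02 if hit else -0.08      # a miss costs more than a hit rewards
        if meter <= 0.0:
            return "booed off stage"         # consistent misses end the song early
        meter = min(meter, 1.0)              # the meter can't overfill
    return "song completed"

print(play_song([True] * 100))                                  # clean run
print(play_song([True, False, True, True, False, True] * 20))   # sloppy run
```

The exact numbers don’t matter; the point is that the penalty structure nudges you toward consistent accuracy rather than perfection.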

This is a performance. However, there is the issue of the performance’s authenticity; the usual challenge is to “perform the feat on a real instrument.” Players are required to “decode” what they see on the screen and translate it into physical coordination with the instrument. The buttons must be pressed at the right time, and one must have the cognitive capacity to do so.

In fact, one often starts to appreciate the music more: the individual parts are highlighted in the game, which lets a listener pick out the nuances and appreciate the work put into the song. We could dive into the idea of cognitive listening versus passive and active listening here as well. This stands as an intriguing tension between critics and designers. The mission statement often repeated in media interviews by designers at Harmonix Music Systems, the company that developed Guitar Hero and Rock Band, is “to give that awesome feeling [of performing music] to people who aren’t musicians, who would never get to have it.”

There is a certain atmosphere when one is playing the game. If someone is playing a particularly hard song, they garner attention from the entire room. In that sense, that person really is the “rock star” of the performance.

So I ask you this: is there an air of creativity when one plays Guitar Hero?

…well, yes and no, to an extent. No, because you are reenacting someone else’s work; yes, because you are creating the music: you are guiding the song. Without you there is no music and no play.

There is a reason the game is so popular, and that is ease of use. It takes a lot of time and hard work to learn to play a real guitar, as opposed to the ease of just jumping into a song with no knowledge of it and pressing buttons that represent the melody. There is a certain instant satisfaction in doing something like that, whereas learning a real guitar does not grant that initially, but does later down the road if one continues to pursue it. “What 9-year-old kid would want to pick up a real guitar when picking up a plastic one is so easy?”

However, the “pros” of the Guitar Hero world put in as much time, if not more, than an actual musician learning guitar or playing in a band. If the game had existed in the ’80s, would we have the bands we know and love today? …Or would Freddie Mercury and Hendrix and Bellamy all be playing plastic guitars? Comparing a game to the real art, despite the numerous similarities, is seen as “absurd” by the more high-end players. All in all, at the end of the day, the simple fact is that this is just a video game where you learn to push buttons rhythmically, nothing more than that.

“Why play a racing game when you can go out and drive your car? Why play a musical game when you can just learn the instrument yourself? Why play a role playing game when you can just stab squirrels in your backyard?”

In the future, we will all be using some sort of alternative technology for creating music, different from the fundamentals we encounter today. But technology is still evolving, to the point where games and music are starting to become intertwined. Here is an article about the direction the “controller guitar” is headed.

Audio in games is also evolving to the point where the game is controlled by the audio: Beat Hazard and Audiosurf are both games where the user gets to choose the song and the in-game effects are driven directly by that song.

…or vice versa: The Future of DJing?

Additionally, new games like Rocksmith are in development that actually feature a real guitar in a gaming setting.

Reflecting back here:
In terms of the physics of sound-creation, there’s a necessary relation on the “real” instrument between what the fingers do and the sound produced, which obviously isn’t the case on the plastic controller. But in terms of what the performer does as far as physical exertion and pattern memorization, the labor done by the fingers in executing a musical “script” really is quite similar. I fear that we often romanticize music-making and the creativity involved when there’s actually a huge component of “real” musical proficiency that adds up to nothing more than note learning, conformity to scripts, and muscle memory.

As a closing statement: Guitar Hero is a video game that inspires anyone to play music, whether they are actually a musician or not. It evokes joy in people because, for a brief three minutes, they can actually feel like rock stars. The game was never meant to be a simulation of an actual instrument; it was supposed to elicit that feeling of “epicness” one gets when putting their hands over the five colored buttons of a plastic guitar.

10-4,
~MJ

Musicology - Fidelity in Music

The Quality of Sound:
First off, before I discuss anything else, let’s talk about bit rate and sample quality: something the majority of audiophiles like myself deeply care about. For starters, more expensive equipment does not necessarily equate to higher-fidelity listening. In fact, most of the time it doesn’t. I cannot stress this enough. This is such a common fallacy among DJs, recording artists, and audio hobbyists; it is the same psychological fallacy of “if an item is more expensive, it is obviously going to be better.” No, no, no, no, this is not true at all. The majority of audio products that companies market are incredibly expensive compared to the quality of sound the consumer actually receives from them.

I remember trying on headphones at an actual studio, having brought my $120 headphones along with me. The sound producer told me that the headphones I was about to try were $2000, sleek, and of the highest quality. When I did try them out, I will admit that they were very, very good. I also plugged in my own headphones to see whether I could hear a huge difference in sound quality. I predicted that I would, simply due to the nature and quality of better and more expensive equipment. What happened surprised the hell out of me: I could not hear any distinct difference between my own headphones and the ones of “the highest quality.” My brain had simply assumed that the “better” headphones would in fact sound better because they were more expensive and exclusive.


Here’s the kicker, people: music and equipment claiming to play back at high-fidelity sampling rates and bit depths, say 96 kHz and 24-bit (even 32-bit on rare occasions), are going to sound almost identical to a 16-bit, 44.1 kHz system. The upper range of human hearing does not exceed roughly 20 to 22 kHz; in fact, most adults, simply due to prolonged sound exposure, hear somewhere between about 30 Hz and 18 kHz. Because the sampling rate must be at least double the highest frequency you want to capture (the Nyquist criterion), covering the full range of human hearing requires a rate of about twice 22 kHz: 44 kHz. This is why so much audio is sampled at 44.1 kHz (among other reasons, like compatibility).
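If you want to sanity-check that arithmetic, here’s a minimal sketch in Python (the ~20 kHz hearing ceiling is just the rough figure quoted above, not a measurement):

```python
# Rough sanity check of the sampling-rate arithmetic above.
HEARING_LIMIT_HZ = 20_000  # approximate ceiling of adult human hearing

def nyquist_limit(sample_rate_hz):
    """Highest frequency a given sample rate can represent without aliasing."""
    return sample_rate_hz / 2.0

for rate in (44_100, 48_000, 96_000):
    headroom = nyquist_limit(rate) - HEARING_LIMIT_HZ
    print(f"{rate / 1000:.1f} kHz sampling captures up to {nyquist_limit(rate) / 1000:.1f} kHz "
          f"({headroom / 1000:+.1f} kHz beyond the ~20 kHz hearing limit)")
```

Every one of those rates already clears the audible band; the extra bandwidth at 96 kHz sits entirely above what our ears can use.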

Stereo vs Mono:
One of the fundamental ideas behind high-fidelity listening is stereo sound as opposed to mono sound. The notion that stereo is necessarily better than mono is itself a fallacy. There are plenty of cases where mono actually serves a musical idea better than stereo could. When more and more instruments or voices are layered in, one could argue that mono is going to be better, because stereo mainly highlights small musical ideas, such as a guitar line or a vocal, by placing them in different ears. The choice of stereo vs. mono is actually a musical decision in itself, but unlike most musical decisions, it is often determined by the taste of the listener rather than that of the musician. One of the main selling points of stereo was that it added an extreme sense of realism. The idea of “surround sound” was slowly becoming more of a reality for people, and stereo development really pushed that idea.

Stereo dramatically influenced the sound production of emerging rock. It worked so harmoniously with rock and other lightly layered music because it created that sense of realism, as well as the sense that you were actually standing in front of the band playing live.

Stereo is generally preferred because of the way our ears hear music. Yet one could argue that the stereo version of Hey Jude is better suited to mono, because the stereo placement is not realistic in that “hey, this sounds live!” sense: in a real acoustical space, two guitar players are not each playing into one of your ears.

Another example of a style of music that favors mono sound:

Because this is a louder and heavier piece, mono reigns supreme; the ear cannot comprehend all of this “sound.” I personally think that a mono mix would create a better musical picture in the mind of the audience. This holds true for many classical pieces, and quite honestly I am not sure I have heard a classical recording in true “stereo,” simply because it would sound quite strange from that perspective. If we were to try to emulate an actual concert hall, a live performance is actually going to sound very similar to a mono recording to an audience member. This is simply due to the acoustical space of a concert hall: sounds from the high strings and brass on stage right and the low strings and brass on stage left are going to meld together before they reach the ears of the audience, regardless of the unique acoustical space the sound is housed in.
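If you want to try this comparison yourself, here’s a minimal sketch, assuming Python with the numpy and soundfile libraries and a stereo WAV file of your own (the file name below is just a placeholder): it collapses the two channels into one so you can listen to both versions back to back.

```python
# Collapse a stereo recording to mono by averaging the two channels,
# then write it out so you can A/B the mono and stereo versions by ear.
import soundfile as sf  # assumed available: pip install soundfile

data, rate = sf.read("some_recording.wav")      # placeholder file name
if data.ndim == 2:                              # shape is (frames, channels)
    mono = data.mean(axis=1)                    # simple equal-weight downmix
else:
    mono = data                                 # already mono, nothing to do
sf.write("some_recording_mono.wav", mono, rate)
```

A plain average is the crudest possible downmix, but it is enough to hear how much of the stereo image is actually doing musical work.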

 

The invention and proliferation of headphones also changed the way people listened to music at the time. Stereo sound within such a “confined” listening space allowed listeners to indulge their individual taste in music without other judgmental ears listening in. The new development also paved the way for more intimate music. One of the interesting decisions people later had to grapple with was speakers versus headphones. Both had their advantages and disadvantages, and stereo really helped push headphones from an emerging device to, eventually, a primary listening device for the majority of the population in America.

More on acoustical recording spaces and digital music to come…

Cheers,

~MJ