Monday, April 13, 2015

Calibrating Your Musical World

Since I haven't blogged in a long time, I will write a post for each major topic I have been exploring over the past few weeks, not necessarily in the order I learned them.

First topic:

Unsurprisingly, both the speakers one uses and the room in which one listens back to recorded audio will have an effect on the sound of the end-product.

But even music studios thought to be perfectly calibrated cannot always play music back the way it was performed and recorded; sometimes the speakers compound the problem by brightening the sound or exaggerating the low end.

Problems with playback can be illuminated with this trick: play the Steely Dan album Aja in the room where you listen to your recordings, then play back your recordings and compare the two.

Aja, like other albums considered "audiophile" records (albums that excel in dynamics and recording clarity because of superior recording and engineering), is said to be an excellent calibrator for judging how your room responds to a variety of sounds. So if you want to know whether your room and speakers are fit for reproducing a certain sound, play back a record you consider excellently produced and compare it with what you are producing.

Don't stop there, though; the trick can be taken a step further. If you want to know whether your project is capturing the right sound, take the file of a song that captures the sound you want, drop it into your virtual workspace, play it, and then play your own song. Repeat until you have achieved the sound you are looking for.
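The comparison above is done by ear, but the idea can also be sketched in code. Here is a toy Python sketch (my own illustration, not anything from the studio; the band edges and the stand-in signals are assumptions) that compares the rough tonal balance of a reference track against a mix:

```python
import numpy as np

def band_balance(signal, sr):
    """Fraction of spectral energy in low / mid / high bands (rough tonal balance)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    bands = {"low": (20, 250), "mid": (250, 4000), "high": (4000, 20000)}
    total = spectrum.sum()
    return {name: spectrum[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in bands.items()}

# Stand-ins for a reference track and a mix: a 100 Hz tone vs. an 8 kHz tone.
sr = 44100
t = np.arange(sr) / sr
reference = np.sin(2 * np.pi * 100 * t)   # bass-heavy "reference"
my_mix = np.sin(2 * np.pi * 8000 * t)     # overly bright "mix"

ref_bal = band_balance(reference, sr)
mix_bal = band_balance(my_mix, sr)
for band in ref_bal:
    print(f"{band}: reference {ref_bal[band]:.2f} vs mix {mix_bal[band]:.2f}")
```

A large gap between the two balances is the numeric version of what your ears notice when the reference record and your recording are played back-to-back in the same room.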

Calibrating your musical world is important because you may think something sounds awesome in your studio, but if the studio and your project file cannot faithfully play back the records you draw influence from, your finished product could end up too bass-heavy or too bright.

Monday, March 9, 2015

Week 3:

The most valuable thing I learned during my time at George's studio last week was how important it is for an artist to come to a music studio with enough confidence to voice opinions to the producer about where the music should go, but without the overconfidence that can make for an unproductive and difficult session.

I sat in on a session where the artist constantly deferred to George to direct the song's artistry (not strictly his job). It seemed to me that the artist was not confident in his abilities, forcing George to work overly hard to make a low-quality recording sound decent. In essence, George had to overcompensate on the technological side to make up for the artist's lack of direction and confidence.

I think the session (and song) would have been better if the artist had come to the studio confident in his singing abilities, so that George did not have to spend time (and the artist's money) fixing mistakes that could have been prevented had the artist practiced his part.

The situation, though, seemed not to allow that, because the artist had to take over lead vocal duties at the last minute. So it was a problem with his band as well.

In sum: Auto-Tune doesn't compensate well for a lack of artistic direction, because its job is to add spice to a direction that already exists, not necessarily to create that direction itself.

Monday, March 2, 2015

Week 2:

This was a pretty long week, but what was weird was that I felt exhausted after just two days of work.

The first day, George was recording a hard-rock artist. She and George had been working for a few weeks, I think, because when I arrived there, they were working on an already-completed song which had guitar, vocals, bass guitar and drums.

George spent most of the session recording a guitar solo for the song, which took several hours. I sat there listening to the process as he played the track over and over, his ideas for the solo slowly evolving into a cohesive whole. Every time he replayed the track and tried to record the solo, he would keep the parts he liked and discard the rest, sometimes even the whole take.

Eventually, he had everything he liked except for one piece of the solo that required playing an arpeggio at high speed. He played and re-recorded it several dozen times, but in the end the solo came out sounding exceptional.

That day I learned that it is okay to spend several hours on one piece of music, as long as progress is being made.

The next day was long: eleven hours (with breaks, of course). I was there that long of my own volition.

The day started with a solo piano artist, but his songs had more than just piano sounds in them. I arrived at a point in the song's evolution where it already had programmed drums and guitar, plus vocals and tons of keyboard-created sounds, all played by one person.

The part that most interested me about this song was its unconventional structure. Most radio songs follow a verse-chorus-verse-chorus-bridge-chorus structure, whereas this song wove in and out of tempo changes and through more than three distinct sections that were never returned to, yet it sounded cohesive and musically pleasing.

At the end of the session, the artist explained that the song had been through about twenty years of evolution, during which he meticulously picked and crafted the sections that produce its unconventional structure.

Later that day, I sat in on a rap artist's session, where I learned about ad-lib rapping: the artist adds "comments" to the main vocal track that accentuate it. The ad-lib track might include vocal samples like the word "what" after the main track says something absurd or noteworthy.

The last session of the day was with a hip-hop duo. One member made the music while the other rapped. It was interesting to see that the member who made the music was not using the most up-to-date equipment; in fact, he was using a drum machine and a keyboard, both from the '80s.

Week 1:

The first thing I learned is that the Pros (users of Pro Tools, that is) don't use Auto-Tune anymore; instead they use something called Melodyne.

This application is similar in function to Auto-Tune, but expanded in capability and sensitivity. It lets recorded sound be translated into a visible, editable musical score that the producer can change. Keeping musical key and timing in mind, one can use Melodyne to change both the pitch and the timing of a sung note once it has been translated to the score.

That's not all, either. When I was at George's studio two weeks ago, I watched him use the application to change even how much vibrato (that shaky sound a singer can put into a sung note) a note had. When he was done working with the artist, the piece sounded quite different: George had moved the singer's recorded notes around and changed the amount of vibrato on each to make the performance smoother and more in key.

I was even able to help with this process: the song he was recording was in the key of A major, and I noticed a note he had missed that probably should have been changed to fit the key, so I told him, which fixed a problem he had been hearing but was, at the moment, unable to locate. I felt like I was catching on.
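To picture what that key fix does, here is a small Python sketch (my own illustration of the idea, not how Melodyne actually works): it snaps a detected frequency to the nearest pitch in the A major scale. When two scale tones are equally close, this sketch happens to prefer the lower one.

```python
import math

A4 = 440.0  # Hz, concert pitch
A_MAJOR = {9, 11, 1, 2, 4, 6, 8}  # pitch classes (C=0): A B C# D E F# G#

def snap_to_a_major(freq):
    """Snap a detected frequency (Hz) to the nearest pitch in A major."""
    # Nearest equal-tempered note, as a MIDI note number (A4 = 69)
    midi = round(69 + 12 * math.log2(freq / A4))
    # Walk outward until we land on a scale tone
    for offset in (0, -1, 1, -2, 2):
        if (midi + offset) % 12 in A_MAJOR:
            midi += offset
            break
    return A4 * 2 ** ((midi - 69) / 12)

# A C-natural (261.6 Hz) is not in A major; it gets pulled to B3 (~246.9 Hz)
print(round(snap_to_a_major(261.6), 1))  # → 246.9
```

A note already in the key, such as A4 itself, passes through unchanged, which matches what George did: he only moved the one note that fell outside the key.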

I am glad to be working at Luna. I haven't made any music for about four or five months because I have been out of ideas, but being at Luna with George has given me plenty of new ideas with which to work once I start on my own music again. Not to mention, it reminds me that I still have much to learn.

For now, though, I will continue to shadow him as much as possible and continue to observe the processes he is using to make exceptional-sounding work in as much detail as possible!


Wednesday, February 11, 2015

Introduction:

It is a common misconception that virtual workspaces like Pro Tools make it easier to create effective music (music which influences culture). Because everybody has access to music production software, everybody now has the ability to make the processed, manipulated, perfect sounds that were once the product of strenuous studio hours without highly modular equipment or Auto-Tune. So the question remains: how does the homogeneous availability of software change the playing field, and what are people doing to maintain artistry?

Since middle school, I have been interested in how a large number of discernible auditory components come together to make one cohesive song. When I was a sophomore, I downloaded Audacity, software with basic recording capabilities: multi-track recording/playback and some basic effects like delay and reverb. I then recorded demo songs for an album I had written.

Last year, I started using FL Studio to make electronic music, using mostly virtually created instruments, melody lines, and looped drum samples.

The process of making a song that effectively conveys meaning became more complicated when I took an Introduction to Critical Theory class at BASIS, where I studied the hegemonic power structures that surround culture. I learned that they often ultimately determine what is “good” and what is “bad.”

In that class, I deconstructed Rolling Stone’s 500 Greatest Albums of All Time to show how the magazine faultily assumed absolute power in determining “good” music.

I hope to use the internship I have acquired at Luna Studios with producer and engineer George Nardo (who has been recording music for over twenty years) to learn the process of making a successful song in a music studio. I wish to learn about both the recording and the mixing process so that I can apply what I learn to my own music.