Working Through the Suck, Musically Speaking
I’ve mentioned before that when it comes to making music and using the computer tools for doing so, I am still at a stage I would consider “enthusiastic amateur.” Which is to say, I like playing with all the toys I’ve accrued, but my knowledge of how they all work is still somewhat surface-level. In order to get better, I’ve recently taken to making little projects for myself, each with a single task at hand: learning how to use a specific audio plug-in, for example, or how to do a particular thing in the digital audio workstation.
The current “you have one job” task: learning how to fiddle with vocals to make them better, using various tools that either come with Logic Pro X (the DAW I use the most) or that I’ve bought as plug-ins and add-ons. The idea here is to see how much fiddling with the raw vocals can iron out a performance without making it sound like, you know, a robot.
To engage in this attempt, I picked a song that is, uhhh, slightly out of my usual range: “The Scientist,” originally by Coldplay, though the arrangement I’m doing is based on Aimee Mann’s cover of it (here that is; if you’ve never heard it, it is lovely). As I’m working off that arrangement, I’m singing higher than I usually do, which means I’m pitchy as hell — not great if you have to listen to it live, but perfect for this particular task. Having recorded the song in a single take, I set myself the task of seeing what I could do with it in the software.
Here’s the result:
1. In fact, the software can do a lot, and can do it so the result sounds reasonably natural. Logic Pro X has a built-in “Flex” tool that lets you specify a key and then conforms a vocal performance to that key (you can adjust how strictly, to humanize it). It also lets you control things like vibrato (i.e., make yourself less pitchy) and more esoteric aspects like formant (the computer’s attempt to model your voice box). This got me 70%–80% of where I wanted to go, and the rest I was able to cover with other plug-ins (including, yes, Auto-Tune, which can be set not to sound like an android). The result of the fiddling sounds (mostly) pretty good and reasonably natural.
To be clear, I was actually singing in (well, around) the key of G# minor, so the issue was not wholesale yanking of my voice from the wrong key into the right one, but more a matter of sticking my voice to a note it was otherwise hovering near. Photoshopping my voice, as it were.
2. There is only so much the software can do. There’s a bit where I attempted falsetto, and the result was, shall we say, not great. I fiddled with it as much as possible, but there was only so much lipstick that pig could take. I then took the whole falsetto phrase and dropped it an octave, which made it sound better, but also clearly not natural. I decided to lean into that and make that bit sound even more robotic. You’ll know it when you get to it.
Likewise, while I think the vocal performance in the first half of the song is decent, the second half has me straining at notes and timing, and again, there’s only so much software can do with that (or that I can do, with my knowledge of the software). For better or worse, the human singing is going to come through, no matter how much one fiddles with computers. Which is actually good to know! Software can improve a decent-to-good vocal performance, but it’s not going to save a genuinely poor performance.
3. Aside from software and plug-ins directly meant for vocals, other software that addresses things like compression and reverb can do a fair amount to tweak a performance. But again, there’s only so much it can do, and too much of it begins to make one’s track sound muddy. One of the things I know about myself is that I like a good reverb as much as kids like cake, so as a result the tracks I’m putting out sound overly busy. Part of my learning curve is learning what to leave out.
4. This track was also educational as to why most recorded songs are not a single vocal performance but are assembled out of a bunch of takes. As noted, in the second half of my single long take I was all over the place, and some subsequent takes were poor enough that I left them out of the final mix entirely (the final vocal is the first vocal take, triple-tracked, with each track having somewhat different effects on it). Which means one of my next projects will be to figure out “comping” (assembling a single vocal out of pieces of multiple takes) in Logic Pro X. It’s apparently one of the easiest DAWs to do comping in, but that doesn’t mean it’s all that easy. But then, this is why I’m doing all this recording: to figure out how to actually work the program.
5. This recording isn’t good (well, it’s about 35% good and 65% “he tried”), but I like it, because I learned a whole lot of stuff about how to operate Logic Pro X, and because I feel reasonably confident that the next time I record myself slightly out of my vocal comfort zone, I’m going to be able to get a better version of it out of the software. This, I think, will eventually be useful in helping me get a good performance outside of the software as well, since now I will have a reasonable simulacrum of my voice hitting the notes, which I can use as a guide vocal. Practice makes perfect, on the computer and off of it.
Also, of course, it’s just fun to learn things. This is why I don’t mind working through the suck here; yes, I may suck, but I’m figuring things out, and next time I will suck slightly less. Sucking slightly less each time is how you get better, and then, one day, maybe, actually good. What a day that will be!