Who doesn’t like new sounds to get inspired by? This track started out with a video from Venus Theory demonstrating a new sound pack just released for Lunacy Audio’s Cube synth. Cube is a great-sounding synth with a dynamic way of morphing sounds between samples, meaning it generates hugely lush synth sounds with very little effort. So I picked up the new Rumble expansion pack, bashed out a few chords and immediately got inspired.
For reasons lost to time I thought I’d try writing a track in the Phrygian mode. Maybe its darker sound felt well suited to the distorted, grungy sounds coming from the Rumble expansion? Maybe I just fancied doing something different from a major key? All valid reasons! I went for C Phrygian, settling on a Cmin - Ebmaj - Bbmin - Dbmaj progression. This has a nice resolution, and with C minor as the root it does indeed make for an interestingly dark sound. The Phrygian mode is close to the natural minor but with a flattened 2nd; in my progression only the Bbmin and Dbmaj chords actually contain that note, so I probably didn’t do a great job of being very 'Phrygian' in the end. But I think it does sound suitably 'dark'.
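For the curious, here’s a little Python sketch (purely illustrative, not something used in making the track) that spells out C Phrygian from its interval pattern and checks which chords of the progression contain the flattened 2nd:

```python
# Illustrative sketch only: derive C Phrygian and see which chords use the b2.

NOTES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

# Phrygian = natural minor with a flattened 2nd: semitone steps 1-2-2-2-1-2-2
PHRYGIAN_STEPS = [1, 2, 2, 2, 1, 2, 2]

def mode_notes(root, steps):
    """Walk the semitone steps from the root to spell out the scale."""
    idx = NOTES.index(root)
    scale = [root]
    for step in steps[:-1]:          # the last step just returns to the octave
        idx = (idx + step) % 12
        scale.append(NOTES[idx])
    return scale

progression = {
    "Cmin":  ["C", "Eb", "G"],
    "Ebmaj": ["Eb", "G", "Bb"],
    "Bbmin": ["Bb", "Db", "F"],
    "Dbmaj": ["Db", "F", "Ab"],
}

scale = mode_notes("C", PHRYGIAN_STEPS)
flat_second = scale[1]
print("C Phrygian:", scale)          # C Db Eb F G Ab Bb
print(f"Chords containing the b2 ({flat_second}):",
      [name for name, tones in progression.items() if flat_second in tones])
```

Running it confirms that only Bbmin and Dbmaj touch the Db, which is why the progression only hints at the mode rather than leaning into it.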
I hammered out a bit of a melody, trying to remember the usual melody-writing tips like repetition, small variations, and putting in a big jump for a hook. My melodic brain has a hard time straying from the 1-2-3, 1-2-3, 1-2 sort of rhythm, which the internet suggests is common in pop music (that would explain it) and can be described as a '4/4 rhythm with syncopated accents'. So, stray I did not. The chorus has two dotted quarter-notes followed by a quarter-note, with the emphasis given to falling notes to add to the darkness of the melody. It’s hardly original or award-winning but it’s got suitably stuck in my head as I’ve been working on the track, so I consider this a success.
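If the note-value talk is hard to follow, here’s a tiny sketch (again just illustrative) showing how two dotted quarter-notes and a quarter-note fill exactly one bar of 4/4, and why the accents end up syncopated:

```python
# Illustrative sketch of the 3-3-2 rhythm: dotted quarter, dotted quarter, quarter.

durations = [1.5, 1.5, 1.0]      # note lengths in beats
assert sum(durations) == 4.0     # exactly one bar of 4/4

# Where does each accented note start within the bar (counting beat 1 as 1)?
starts, position = [], 1.0
for d in durations:
    starts.append(position)
    position += d

print(starts)                    # [1.0, 2.5, 4.0]
# The middle accent lands on the 'and' of beat 2 rather than on a downbeat,
# which is what makes the pattern feel syncopated.
```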
Originally this was going to be an instrumental. Spitfire Audio’s BBC Symphony Orchestra Piano is absolutely gorgeous with a lovely full sound, I love just noodling around with it. In this track it takes a well-earned centre stage with the melody and accompaniment, helped by a generous dollop of H-Delay. Am I the first person to whack a delay onto a piano? No. Does it sound absolutely beautiful? Heck yes.
And then... I got a bit carried away adding layers of interest. The lower grunge sounds come from a few Cube Rumble patches. Interest higher up the registers comes from UVI’s Synth Anthology 4, specifically the 'Windbounce' and 'Tinkle Bell' patches. While I was writing the track, Motorlab was released by Venus Theory & Dave Hilowitz; it fitted the tone of the piece well enough, so that went in as well.
There’s a little countermelody from Westwood’s Electric Home Piano, which I’m not entirely sure should be there, but I tried taking it out and missed it. So it stayed.
Inspired by what Venus Theory did in his video, I layered one of his drum loops (from VT Ocalos) with some epic drums from Damage. This layering trick is a revelation and really adds a new dimension to an already decent drum loop. I then added risers and further percussion hits from Action Strikes and VT Phaeros in an attempt to make it more interesting.
Then I thought the track needed a ‘B-section’, so to avoid needing to write any more melody I found some vocal chops from Slate Digital’s Omnivox pack and stuck them in.
And finally, despite all the synths, I thought it could do with a little more padding in places, so I threw in some subtle chords from VSL’s free Big Bang Orchestra. They’re just there to add some body, so could easily have been another UVI synth patch, but I rather wanted to play with VSL’s Synchron player because it is very cool.
I now had a reasonably ‘finished’ track, but it didn’t feel ‘done’. I certainly couldn’t put any more instruments into it; in fact I even took out a couple of parts I thought were “just a bit too much”. But it didn’t stand out as a completed project yet. As with so many tracks that I start but never finish, I proceeded to leave it alone for a while. It niggled me, though, because I’d put quite a lot of time into it and it felt developed enough as an idea to be worth actually finishing. I just needed to get it over the line.
Something I’ve never done is write lyrics. I’ve had half a bash at penning the odd song - I have noticed that lyrics feature in quite a lot of music, so it feels like a good thing to get into - but I found quality lyrics certainly don’t liberally spout forth from my pen, and in fact even finding a muse is surprisingly challenging. I’ve got a bunch of half-ideas, phrases I like, nothing really song-worthy, but I started to realise that turning this track into a song could be just what it needed.
Looking around for some genuine inspiration I noticed how my kids find endless music online that’s little more than a single phrase sung over an EDM track. Simple, but it works annoyingly well. I thought, am I overthinking my first ever attempt at lyrics? Probably yes. So I went back to basics and aimed to start with a single phrase about something, yet in a dark and grungy sort of way to fit the track. I don’t recall how I came upon this idea, but I settled on writing a song about that ultra-famous Minecraft mob: the Enderman. For the uneducated, Endermen are monsters that can 'get you'. There’s probably more to them. But the word “Enderman” has three syllables, and that rather nicely fits the metre of my track. And it is a monster, which fits the dark grungy feel. Close enough. Suddenly the pieces started to fit together. I did a bit of research (asked my kids) about what the monster does, got ChatGPT in on the act to suggest some lyrics and between all of these inputs managed to craft a simple little song to the melody I’d already composed.
But there was a major problem: I cannot sing in tune. At all. Seriously, hearing me try is not dissimilar to experiencing a set of broken bagpipes being dragged through a mangle backwards. It is not pretty. Most problems have solutions though, and the solution to my inability to carry a tune came in the form of a VOCALOID.
The world of VOCALOIDs is, well, it’s quite something. Very crudely, imagine a computer text-to-speech engine that can sing. This “singing synthesis” engine started out in the early 2000s as a research project and was shortly thereafter turned by Yamaha into a product called Vocaloid. The genius of the system is that the raw samples are sung by genuine humans. The samples are stitched together by the synthesis engine, which gives the voices their own charm, even to the extent that fully 3D-modelled virtual performers were created for the voicebanks, and some of these continue to perform concerts to this day. By using different humans for the source samples, different adjustments to the synthesis engine, and indeed different engines - more than just Yamaha are in the game now - there is a wide choice of Vocaloid voices. Some, in fact arguably the most famous ones, maintain the robotic sound of the original Vocaloid. Others are virtually indistinguishable from real singers. The whole Vocaloid phenomenon is more popular in the Far East than in the West, but there are plenty of decent English voices available too.
To this end I dusted off my copy of Synthesizer V from Dreamtonics, which I’ve been meaning to take for a spin for a while. A couple of their voices, in particular Solaria from Eclipsed Sounds, sound so mind-bogglingly lifelike to me when tuned well that it’s uncanny. (Tuning is the Vocaloid term for fine adjustment of the voice’s parameters, often on a per-syllable basis.) Solaria’s vocals come from a Broadway-trained soprano, and the magic Dreamtonics have developed in SynthV is nothing short of amazing. With very little effort on my part I had my melody programmed in, my lyrics added, and my computer was actually singing at me. It’s not perfect, but it’s infinitely better than if I had sung it myself! There are some places where Solaria sounds suspect, but with some adjustments I reckon I’ve got the vocals sounding fairly realistic. I even added a harmony and a backing vocal, adjusting some of the parameters to make her sound a little different.
Next came mixing and mastering. Both are huge topics in themselves and, just like the rest of this, topics I am honestly just muddling my way through. After gain staging everything, I EQ if necessary (often it isn’t) and gently compress tracks individually, then route instruments through groups and apply further gentle compression to those. I balance the levels, using volume automation in places to make things like the vocal poke out during the choruses. I use EQ to stop the bass getting muddy, to add a bit of air at the high end, or sometimes to take a nasal tone out of an instrument. I pan things around a bit. I add some effects - reverb, delay, stereo widening on the backing vocal - and use a sidechain to duck the vocal reverb out of the way of the clean vocal to make it clearer. Then I apply some more gentle compression at the master bus and finally run it through iZotope’s Ozone for some mastering magic. Most of the time I try to keep the gain reduction to only a few dB at each round of compression, which isn’t much, but doing this at several points in the mix makes the overall effect much less harsh than one heavy round of compression. The resulting waveform looks nicely uniform.
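To put some rough numbers on the "several gentle rounds" idea - these figures are made up for illustration, not measured from the actual mix - gain reductions applied in series simply add up in dB:

```python
# Illustrative only: how a few gentle compression stages stack up.
# The dB figures below are assumptions, not values from the real project.

stage_gr_db = {
    "channel compressor": 2.0,   # a couple of dB on the individual track
    "group compressor":   2.0,   # a little more on the instrument group
    "master bus":         1.5,   # gentle glue on the mix bus
    "mastering limiter":  1.5,   # final peak control in Ozone
}

total_db = sum(stage_gr_db.values())
linear = 10 ** (-total_db / 20)      # convert total reduction to a linear factor

print(f"Total gain reduction on loud peaks: {total_db:.1f} dB "
      f"(peaks end up at roughly {linear:.2f}x their original level)")
```

Around 7 dB of overall control, but no single compressor is working hard enough to sound obviously squashed.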
I tend to favour plugins modelled after old analogue gear. While on a channel-by-channel basis they don’t (to my ears anyway) sound very different to, say, a completely ‘clean’ digital version, I reckon there’s an aggregate effect on the track as a whole, so all the analogue warmth does add up.
This part of the project never really has an end - one can fiddle and adjust the seventy-zillion knobs, sliders and buttons forever. I’ll have a session of fiddling, then leave the track for a day to let my ears ‘forget’ it. When I come back there’s usually something that makes me think “oh, that needs fixing”. Next I try playing it back on different speakers, on my phone, through my bluetooth earbuds, anywhere I can, to check it still sounds reasonable. After a few rounds of all this I stop noticing things I want to change, and at that point I class the track as “that’ll do, I guess”.
The project wasn’t over just yet, although I was getting to the end.
I tend to stick tracks on YouTube so they need a video of some sort. For Endermannnn I felt inspired to do a full lyric video, what with it being the first time I’ve actually had lyrics to show. As luck would have it I happened to be on a walk through some local woods on the night of a full moon, so I whipped my phone out and shot a bunch of videos in the dark. They actually turned out quite well once I tweaked the levels in Premiere. The whole track is a bit dark and grungy of course, so they fitted nicely. To set the tone I found a bunch of free glitch effects from Mixkit and FILM CRUX and sprinkled these throughout the video.
And there we have it. Without doubt my most ambitious track to date. From an unusual scale to layered synths, with lyrics, Vocaloids, an attempt at proper audio production and a video to go along with it.
I’m really pleased with the result, and above all, it was a LOT of fun getting here.
Full list of VSTs:
Pianos
Spitfire Audio’s BBC Symphony Orchestra Piano
Westwood Instruments’ Electric Home Piano
Synths
Lunacy Audio’s Cube with Rumble expansion
UVI’s Synth Anthology 4, 'Windbounce' & 'Tinkle Bell' patches
Motorlab by VT & Dave Hilowitz
VSL Big Bang Orchestra
Percussion
Drum loop: VT Ocalos
Risers/impacts: VT Phaeros
Percussive elements: Native Instruments Action Strikes
Big drums: Heavyocity Damage
Vocal samples: Slate Digital Omnivox
EQ
Pulsar W495
Steinberg StudioEQ
Steinberg Cubase channel EQ
Lindell Audio 902 De-Esser
Dynamics
Steinberg Compressor
Waves R-Compressor
Lindell Audio 245E
Lindell Audio MBC
Shadow Hills Mastering Compressor
iZotope Ozone 10
Effects
Reverb: Baby Audio’s Crystalline
Delays: Waves’ H-Delay & Sixth Sample’s Deelay
Stereo imager: DJ Swivel’s Spread
Panning: Soundtoys’ PanMan