I started the audiobooks off with The Bullet. Why? It’s short. It’s slow. It’s silent.
I needed, more than anything else, to get an audiobook off the ground and see how it flies. Unfortunately for The Bullet, it’s my woobie: the book I abuse when I want to test the waters and see how things work. Don’t get me wrong, I enjoyed writing it, and I enjoyed reading it out loud, but since it was the first of my audiobooks, I made the most mistakes with it.
Because it is short, it lends itself to being the one to be thrown in the test-tube to see how it reacts. Because it is short, it gets pushed around, it gets forgotten. Because it is short, I could finish the audio and see how difficult the process would be.
I stammer. It’s a thing I’ve got going with my mouth. The jaw moves, the lips move, but, quite often, the words don’t form properly, and I find myself yammering out the same syllable again and again. It’s very difficult to control, and I often don’t say what I truly want to say, because I know that if I try, I’ll mangle the words up. Sometimes I’ll sit and practice saying a sentence just to build up enough confidence to get it out. Too often the topic has gone stale and I’ve missed the opportunity.
The Bullet, being a slow, rhythmic piece, forced me to pace over the words, bring my normally rambling and mumbling mouth to account and put effort into forming words properly and slowly. I don’t remember how many takes I did of the first few paragraphs, or even the chapter. Each time I’d listen and realise that I was stammering and rushing through my words.
Voices. I’m not bad at voices. I’m not great, but I’m not bad. Or so Joey tells me. He likes my various accents. I know that a true Scot would laugh at my attempts, and a Londoner would scoff, but that’s not what I’m aiming for. All I really need is a way to associate a voice with a character.
Still, adding the necessity of dialogue on top of the rigours of the audiobook proper was way too much to handle. As such, The Bullet, having no conversation, is a prime choice for a book upon which to cut my teeth. I could speak freely, then, with no need to put on a voice or persona or accent. I could just be me and concentrate on speaking slowly, properly, carefully.
I am pleased with the end result. The Bullet is still my little friend, that book that I kick around and abuse when I’m unsure about things.
The time came to just get the rotten thing up, up onto the grand international platform of electrons called the ‘internet’. I had tried ACX, but that was ruled out by geography. I mean, in this age of inter-everything, I didn’t think it would be an issue. Go figure.
So I turned to Smashwords again for help. They’ve teamed up with Findaway. I had just finished Tedrick Gritswell Makes Waves and, at the end of the publishing cycle, the Smashwords website suggested using Findaway to get vocals for my book.
Well, that certainly sounded like a nice idea, but I wanted to try it myself first, before going the whole hog. The good news is that they carry many of the same requirements as ACX, but with no geographical restrictions. What’s more, like Smashwords, they do this aggregation thing called ‘Voices Plus’ where you publish through them to lots of other channels, not just Audible. This I can get on board with.
I’ve never liked the whole ‘exclusivity’ thing. You know, “Only deal with us or else!” I think if something is available here, it should be available there, and there and there, otherwise you’ve got bullying and monopolies and all of that.
So the uploading process is alright. You need a cover: square, 3000 x 3000 pixels. Big. As you can imagine, those dimensions don’t quite work with the standard rectangular book shape. No worries, though, you can always insert the book into a square and add some text to the left or right of it:
So with The Bullet, I used the original cover and ‘squarified’ it. With the Paranormology series, starting with Grosvenor Lane, I’ve pushed the book to the right and put the necessary meta on the left. Too easy. Thankfully I already had my books in a large-enough format, so getting them to 3,000 pixels squared was less of a challenge than it might have been.
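If you’re curious, the arithmetic behind ‘squarifying’ is simple enough to sketch in a few lines. This is just an illustration of the layout maths, not any tool I actually used, and the cover dimensions below are made up for the example:

```python
def squarify_offsets(cover_w, cover_h, square=3000, align="right"):
    """Work out where to paste a rectangular cover inside a square canvas.

    Returns ((x, y), (new_w, new_h)): the cover's top-left corner and its
    scaled size. 'align' mimics the two layouts described above: the book
    pushed to the right with meta text on the left, or simply centred.
    """
    # Scale the cover so its height fills the square exactly.
    scale = square / cover_h
    new_w, new_h = round(cover_w * scale), square
    if align == "right":
        x = square - new_w          # flush against the right edge
    else:
        x = (square - new_w) // 2   # centred horizontally
    return (x, 0), (new_w, new_h)

# A hypothetical 1600 x 2560 ebook cover inside a 3000-pixel square:
offsets, size = squarify_offsets(1600, 2560)
```

Whatever graphics program you use, the same numbers fall out: scale to full height, then decide how to spend the leftover width.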
Voices Plus is opt-in, which is cool (I like that you aren’t forced to use their tools). By doing so, you are enrolled in all of the channels you can think of, and then some, and then some more. Pretty neat:
And while writing this, I think some more were added. Audiobooks, man, it’s like the next thing or something.
Uploading your audio is pretty good, too:
You fill in the bits: title, subtitle, author(s), narrator(s), dates, etc. Then you upload each chapter, plus the title, front matter, afterwords, etc. You can review them as you go, rearrange them, download them to check, etc.
My only problem was that my internet connection was crappy. Uploads crawled along at only a few kB/s sometimes, and then would cut out halfway through. I would literally start an upload, go have a shower or mow the lawn or read Joey a story, then come back and see if it had passed or failed. On many occasions, it failed. Boo.
With better internet, I’m sure you’d have a better time. I can’t fault their servers for my lazy electrons. After this comes the grand part – publishing!
Once I’d finished with the front cover of Adaptation, I had a look at some of my other titles. Yep, you guessed it, I wasn’t happy with them. I mean, the Paranormology series ain’t so bad (except, let’s be honest, Beaumauris Road Ghost) and Atlas, Broken is almost where I want it to be, but The Bullet stood out as the poor, underloved book that just wanted to have its day.
The Bullet was one of the stories, back in 2014, that I pushed out without too much thought. It was the first to be put into hard-copy, because it was small and easy to manipulate, I guess; good for a trial run. But it’s still a book and it still wants love.
So there’s the old cover. Come to think of it, that’s the one for the print version, since the text is slightly to the left and squished in a bit, but never mind that. The whole point is that while the bullet is front and centre, sure, and the story is about the bullet, the cover doesn’t really let your eye do anything more than read the text and see the bullet. The factory in the background isn’t prominent. In fact, I was showing Joey just the other day and he said, “Yeah, I like it, but what are all those lines at the back?”
Good lesson there, too. Ask a kid. They’ll be honest.
So I got to thinking about covers and what makes this one yawn-worthy. Firstly, it doesn’t convey anything about the book apart from the obvious – The Bullet, with a bullet on it. OK, great, what else? The factory is stunted, there’s nothing steampunk about it, and it doesn’t challenge me in any way. It’s also very symmetric (aside from the squishing to fit it to a print book) so, really, there’s nowhere for the eye to go but top to bottom.
I trudged back over my source material again and looked at a bunch of other book covers and realised, yup, it needs a make-over.
So here we have a completely different design. Firstly, it’s darker. There’s no factory to fuddle things up, but the implication is there what with all the smoke billowing about. You’ll also note, there isn’t one bullet, but many, highlighting the major theme of the story, of this bullet and its peers. It’s challenging in that it asks what’s so special about this bullet that looks exactly the same as the ones next to it. The font is an older newspaper-style, formed but haggard, rough and rusted. Lastly, the symmetry is removed, with the words somewhat right aligned, but not perfectly.
The eye is free to bounce about a bit, first gathering the bullet, then the words, then picking at the bullets in the rows to see if there is any difference between them, anything further to see through the haze of steam and smoke. The rows of perfect rounds suggest a factory, a process, so there’s no need to harp on about it.
With such a large print and uncluttered image, it looks waaaaaay better on the small scale which, as I’ve come to realise, is very important, considering most book sites display their wares in small icons and thumbnails.
I asked Joey what he thought about this one. He said, “I dunno. I liked the first one.”
Kids, eh? What do they know?
This cover change was also necessary because, well, I’ll let you know in a bit.
In year nine I wrote a poem. It was chock full of symbolism and meaning. I thought it was pretty straight-forward; the rest of the class, teacher included, stood dumbfounded.
“What was that about? Was that a collection of random words?”
“Well, er, don’t you see? Marching across the clock-face and the bit about the hooked cross and, um, the star twice-threed and… um…”
He crossed his arms and shook his head, “I don’t get it.”
And there they are, those four (and a half) damned little words that strike fear into the heart of the author.
Your artistic integrity demands that you tell a story (or write a poem) the way it is supposed to be written, warts and all. The population in general demands that you keep things palatable and digestible.
I’ve tried to take the high road when writing my books. Rather than beating the audience over the head with the meaning behind the book, I’ve opted to respect that their mind is more than capable of making its own decision. Noble attitude, right?
Well it still sucks when you get told, point blank, “I don’t get it.”
“Who was the Target? Why didn’t you just tell us who the Target was?”
“Is there a sequel?”
“Rifles don’t get mounted on the shoulder, you know…”
And it goes on. Each time I bite my lip and do my best to explain that the story isn’t a war-story, nor is it an historical account, nor do I want to labour the meaning behind it. It’s metaphysical. It’s abstract. The story is what it is, and it becomes what you, the audience, makes of it.
Then there’s the flip-side of the coin. After the looseness of The Bullet, I tightened up the underlying metaphor. Alas, with Atlas, Broken, everyone seems to get it wrong:
“It’s a zombie book, right?”
“It’s a modern-day twist on ‘Atlas Shrugged’?”
“Are you Henry?”
“Is this just a self-indulgent platform to complain?”
And I bite my lip and try to explain that it’s a book about depression, and that I could have entitled it “Henry, Depressed” but that would be akin to taking out the Mighty Metaphor Mallet and smacking the reader over the head.
What can I do?
I’m still grappling with that question, and something comes to mind every time I try and figure it out. Writing is art, like sculpture or painting or music or dancing. And you know the thing about art? You don’t have to like it. That’s so important that I’ll say it again. You don’t have to like an artwork.
You don’t have to appreciate it. You don’t have to get it. You don’t have to like it.
BUT, and it’s a big but, there will be some art that you do like, that you do appreciate, that you do get, that just resonates with you.
And if that’s true for you as a member of the audience, it’s true for all members of your audience. Not only that, if your reader sees something else in your book that you didn’t intend, great, that works too.
The thing about The Bullet is that it’s so open ended, that the audience is bound to make its own interpretation, and I have to be able to accept that. And Atlas, Broken is really only going to resonate with those who have experienced depression. To everyone else, it’s a gross-out zombie book.
So what can one do? Sure enough, there isn’t a silver bullet, although I’m tempted to say the following: “Write for your audience.”
If your target audience doesn’t get it, then you’ve failed. If they do, you’ve succeeded! If the wider population doesn’t get it, too bad. You didn’t write it for them, after all. That’s not to say that you can write any old tosh and claim that “It just hasn’t found the right reader yet.”
That’s just being lazy.
What it does mean is that, when you put your final story out there, and strange questions and ideas come flooding back, it’s not the end of the world. Feedback is feedback and, heck, at least people are reading your stuff and, what’s more, they’re thinking it over. That can only be a good thing.
As for the poem, no, I don’t have it handy. It’s landfill.
Let me go back to where these updates began: as an independent author, it is up to me to organise any form of marketing or promotion for my books. To this end, I embarked on an adventure – yes, I’ll go as far as to call it an adventure – to create an animation about The Bullet. Let’s see how this came together.
The Pieces of the Puzzle
Hindsight is a wonderful thing. Here is a rough chronological list of my tasks:
I considered what I was after. I made a plan, sketched out my ideas into scenes, and refined these down to what I considered doable, selecting five main sections.
I researched software that was available for sketching, vector drawing and animations and downloaded Inkscape for creating the vector graphics, Synfig for animation and Gimp for image manipulation.
I sketched out my characters faces and brought these into a digital format, converting them to vector graphics.
Using Synfig, I created my scenes, one by one, according to my original design.
I recorded a bunch of sounds on my phone, uploaded these to the machine and edited the soundwaves with Audacity, and hunted down a gunshot for the climax.
With the aid of Anvil, I wrote the musical track.
I used VirtualMIDISynth and the “Fluid GM” MIDI soundfont to get a richer sound.
I exported the music from Anvil and blended this as a separate track together with the sound effects in Audacity.
I rendered the animation from Synfig to a movie file.
Lastly, using Microsoft’s Movie Maker, I added the audio to the video, exported the whole shebang to a YouTube-ready file and uploaded it.
The end result is a one minute and twenty second clip that I’m pretty chuffed with:
Sure, it’s not refined, it’s not going to win any medals. If I get to do it again, if I ever have time, there will be several things I’d concentrate on.
In the programming world, we use retrospectives or post-mortems to see what went wrong, what went right and what can be done better. Forgive me if I cannot resist giving the animation the same treatment.
The first issue that jumps at me is the lack of sophisticated motion. It was suitable for what it needed to be, and that’s fine, but as I think about how I might create other animations, I figure there will be more ‘going on’. Background motion, moving lips with synchronised speech, blinking eyes, torsos turning, limbs flailing. While too much can be distracting, too little can be boring.
The music I enjoyed. A lot. Creating it piece by piece, getting the soundfonts, discovering reverb and chorus (albeit too late to apply it) and adding tracks as layers was just fun. Pure and simple. I reckon I could lose hours just knocking out tunes and mucking about with rhythms.
Then comes the sound. That was a headache. It was the opposite of fun. No matter how I look at it, it just didn’t sound ‘right’. I guess I just don’t have the skillset or the proper equipment for sound engineering, so I’d probably ask for help, or try and find someone to hire.
Likewise with voice-overs. I think a voice-over would have been great. Again, lousy recording equipment and an even lousier voice let me down, to the point where I omitted the voice-over altogether. For this I’d definitely hire someone with a voice appropriate for the context.
Lastly, I think sketching and vectorising the characters worked out just fine, only I’d spend more time on details and layers so as to add more dimension to them. And I’d really like to try out the ‘bones’ feature in Synfig and get some complex motion happening. Oh, for another lifetime!
In any case, I’ll call it a wrap. I’ve got to get back to writing, so I bid a fond farewell to the Land of Animation – for now. I’ve got my little bag of tricks for next time, and I hope to share with you my next foray when I get a bit of breathing space between titles.
Making the music for The Bullet animation was definitely one of the more fun aspects. I had a general tune going, I’d made a rhythm track and mucked about with the instruments.
Playing it back, it didn’t sound right. Sure, the tune was fine and the timing was correct, but there was something definitely NQR. It wasn’t until I played it back on my phone that it twigged: The instruments sounded tinny.
No, not what one yells when a tree is cut down. I’m talking about the quality of sound, the richness. If the sound coming out were a colour, it would be a pastel, muted shade, not a rich, vibrant one. The instruments sounded very much like those I was playing with back on the ol’ 386, probably because (and please correct me if I’m wrong) they were the same ones.
The ‘instruments’ used to play the MIDI file were the issue. Windows comes with a set of sounds for playing MIDI files which is, well, average. So the piano, the harpsichord, the bass guitar, all sound like they’re supposed to. Kind of. Ish. If you squint.
“OK,” I reason, “It’s just a matter of getting a better quality set of instruments.”
In a way, yes. Only the correct term is Soundfonts. You can think of it like text-fonts. You’ve got your standard set of Arial, Times New Roman, Courier, Helvetica. Throw Comic Sans into that mix. They serve a purpose, they’re a good, vanilla set, and you can make them bold, italic, underlined, yeah, but they aren’t particularly interesting. Now you can get a whole bunch of fonts, of all different shapes and themes to suit a bunch of purposes. Different fonts make things interesting.
The default Windows soundfont is decidedly average. In Anvil, it was the default midi synthesizer. I looked through the help and it seemed easy enough to add other synthesizers as well, and with the free version of Anvil, I can have up to two. Hey, I’ll settle for one good one.
I went online and downloaded VirtualMIDISynth (http://coolsoft.altervista.org/en/virtualmidisynth), which acts as a virtual midi endpoint, something that can render the midi files. By itself, it’s just an empty sound studio – I needed to fill it with instruments (I needed to download a soundfont).
Back to the web I went, seeking out this new ‘soundfont’ beast. Turns out they come in all shapes and sizes (just like normal fonts) and range from a few megabytes to a few hundred. What’s the difference? Well, I started with the ‘few megabytes’ option and ended up with a single instrument, a piano. It sounded nice, a lot better than what I had, but I was after a lot of instruments, not just one.
Have a look on the VirtualMIDISynth webpage for links to soundfonts. I eventually went with the Fluid Soundfont (http://www.synthfont.com/soundfonts.html), after trying a bunch of others, and I got a tiny glimpse into the world of sampled sounds. If time permits (Ha!) I’d love to revisit this and play around with some of the really cool soundfont sets I downloaded.
To use it, I opened up VirtualMIDISynth and chose the Fluid GM Midi soundfont set to use. Then, inside Anvil, I went into the synthesizers tab, chose to import a new synthesizer and picked my VirtualMIDISynth. After that, it was only a matter of selecting the instrument for the track and listening to how much better it sounded.
If only I could have done the same kind of thing for my sound effects.
It wasn’t really a complication in a technical sense. When I finished composing the main tune, I played it against the animation and, who would have guessed, it was too long. I had the choice of either upping the tempo, which made it sound ridiculous, or removing a slab of twiddly bits from the middle.
It wasn’t a complicated task, removing some notes and pushing the ones over that side to over this side, except that the twiddly interlude that got ripped out was a sort of bridge between two different keys. Playing the resulting set revealed a dissonance that highlighted the rift between the two parts. In the end, I highlighted the offending section and, using the power of Anvil, shuffled it down a tone or three. Job done.
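In MIDI terms, that shuffle is nothing mysterious: every note is just a number, a semitone is one step, and a whole tone is two. A toy sketch of the idea (the note values here are invented, not from my actual score):

```python
def transpose(notes, tones):
    """Shift a list of MIDI note numbers by whole tones (2 semitones each)."""
    return [n + 2 * tones for n in notes]

# A hypothetical 'offending section', dropped down a tone
# to smooth out the rift between the two keys.
bridge = [67, 69, 71, 72]          # G4, A4, B4, C5
smoothed = transpose(bridge, -1)
```

Anvil does exactly this for you when you highlight a section and transpose it; the editor is just saving you the arithmetic.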
I wanted the musical piece to run from a simple tune and get incrementally built up to a crescendo. The short animation time meant I was left without a lot of run-up space, so I broke the music into parts and added the rhythm track and the accompaniments in varying stages, putting it all together at the end.
The accompaniment tracks started off sounding very boring and flat: Just a single note played for each beat. To spice it up, and to allow a bit of dissonance as it progressed, I changed them to alternate between Oom-pa-pa and (rest) Ba-da-ba.
So they go Oom-pa-pa, (rest) Ba-da-ba, Oom-pa-pa, (rest) Ba-da-ba. Then, as I like, I can adjust one pa to bring the piece up to another key, or replace an Oom-pa-pa with a Ba-da-ba to add a bit of urgency. I’m sure there’s a musical term for this, can’t tell you what it is so don’t ask.
Lastly, by the time the piece reached the last phrase, there wasn’t enough behind the crescendo. Sure, it hit the high notes (better than I could!) but because everything had gone up by an octave or so, there was no bass left. To round it out, I inserted some simple bass notes to keep the whole thing grounded.
Whew! How about that! Music is done. Now what? Well, in my next post, I’ll show you how I brought it all together.
Having moving images and sound effects for the animation wasn’t enough. After toying with layering sounds upon sounds to build to a crescendo, I figured out that what was needed was not more crappy sounds, but music. Actual music. It sets the scene, it binds the flow together, it lends to the atmosphere of it all.
First, I came up with a tune. It’s one that’s been stuck in my head for ages, I don’t know if it’s an actual song or not, but it’s what I chose to run with. So I sang it. Ha! Bad move. Firstly, singing in the shower is one thing, singing into a microphone is something else. In fact, I did try recording it in the shower. It didn’t turn out much better.
There were a few problems. Firstly, I had no musical backing, no metronome, no drums or pianos or violins or guitars. OK, I thought, I’ll just hum it out as a chorus and layer my voice over itself in Audacity. Yeah. Nah. Not good. After a few solid attempts tucked away in the garage, I recorded myself a few times in different keys, mimicked a ‘pom pom-pom’ for the beat and opened the recordings in Audacity.
While it wasn’t terrible, it wasn’t great. It wasn’t even good. Passable might be a stretch. I adjusted the pitch and tempo to get two tracks into line, which helped a bit, but the overall result was underwhelming and unsuitable. Why? Because of a second, larger problem.
While Audacity allows one to increase or decrease the apparent tempo, there’s only so much it can stretch before it starts to sound distorted. So unless I fluked it and got my recorded tune to be pretty close to the timing of the animation, I would have to record it all again. And I was still without instruments.
Back in the day, when we first got a Sound Blaster, I was introduced to the world of MIDI. This topic is pretty huge, but the concept is straightforward. In a similar fashion to vector versus raster graphics, MIDI frees one from actually having to play or, in my case, sing a song. Rather, one provides instructions for playing the song. Consider a record player versus a sheet of music. A record player plays the record placed upon it. It cannot play an abstract piece of music unless that music is encoded onto a record.
A sheet of music, on the other hand, also encodes a tune, yet it cannot produce the song by itself. Instead it requires a musician, acting as an interpreter, and an instrument upon which to play the tune. Give the musician a different instrument, and you get a different sound. Up the tempo, change the key, and it’s just a matter of the musician playing the same tune differently.
Not only that, you can give different sheets of music to different musicians and, hey presto, you’ve got yourself a band. OK, not exactly the same thing, but you get the idea. MIDI allows musical plebs, like yours truly, to slowly create a piece of music, assign instruments, even put in a rhythm track, and make music. You can also use your MIDI files to ‘talk music’ to devices like electronic keyboards and sample pads.
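The sheet-music analogy can be sketched in a few lines: the ‘score’ is just data, and the ‘musician’ renders it at whatever tempo you like. The tune and tempos below are invented purely for illustration:

```python
# A tune as instructions, not audio: (MIDI note number, length in beats).
tune = [(60, 1.0), (64, 1.0), (67, 2.0)]   # C4, E4, G4

def render_timings(score, bpm):
    """Turn beat lengths into seconds at a given tempo, like a musician
    interpreting the same sheet of music faster or slower."""
    beat = 60.0 / bpm                       # seconds per beat
    return [(note, beats * beat) for note, beats in score]

slow = render_timings(tune, 60)    # at 60 bpm, each beat lasts a second
fast = render_timings(tune, 120)   # same tune, played in half the time
```

This is why I could retime the music to fit the animation without re-recording anything: only the instructions change, never a waveform.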
Nuts. I don’t have my Sound Blaster anymore. And the MIDI Mapper, the tool in the Windows 3.1 control panel that let you define the output device for playing MIDI files, just isn’t there in Windows 8. A bit of poking about on the web, reading up on a few sites, and yep, it’s gone. No! Surely not!
Fret not! For midi, as I came to find out, is alive and well and not going anywhere soon. As with everything else about this whole project, it took a bit of reading of forums, blogs and how-tos to get my head around it all, but I’m glad I did.
I downloaded a few MIDI composers that were not to my liking: they were too clumsy, or they wouldn’t even install properly. Finally, I settled upon a great piece of software called Anvil Studio (www.anvilstudio.com) that enabled me to knock up a tune from scratch, add a rhythm track, add a couple more tracks for harmony and, tada! Music!
Alright, maybe it wasn’t that easy. First I had to fish out my old music books and remember things like ‘Middle C’, 4/4, 3/4 and 2/4 time, rests, quavers, semibreves, sharps, flats, chords, staccato and keys. After struggling for a solid hour, I discovered that Anvil doesn’t force you to do things solely with sheet music. For example, there is a ‘piano roll editor’ view, shown on the right here, that lets you mark out your tune in a graphical format. Purists, look away!
Not only that, if you’re a guitar buff, you can plot your music on a tablature view.
With each track, I can pick an instrument I want to use to play that tune. It’s kind of cool, really, to see how a song sounds when played with a piano, or a guitar, or a glockenspiel. Best of all, no need to re-record.
What about percussion? I added a rhythm track. First, I played with adding some bass and a crash cymbal, just to see how a backing rhythm would sound, then proceeded to fill that in all the way across the tune. Whoa, there’s a better idea. Loops.
Anvil allows me to make a loop of the various percussive sounds and then instruct it to play that over the next portion of the tune. Now that’s handy. No copy-and-paste errors, and no tedious filling out of a rhythm.
So that’s great news! I had a veritable orchestra at my disposal, right? Right. Almost. It certainly solved most of the problems outlined above. I can adjust the tempo of the song to fit the timing of my animation. I have a musical score that I can tweak. I can apply musical instruments to different tracks.
Above all, I don’t have to sing. You can thank me later.
So why ‘almost’? That comes back to how MIDI files are rendered. I’ll get onto it in the next post.
Up to this point I had been toying with graphics and animating things and sketches and learning about vector graphics – and I’d completely neglected the audio! Well, not completely. Mostly.
The issue, as I saw it, is that I had to finish the animation before I could add the sound. I could hardly hope to figure out the timing without something against which to time. Anyway, by the time I got to the first major iteration, I thought I’d better spend some time on sound.
Music, Sound, Voice
I broke up the tasks of sound into three main categories: The background music, ambient sounds and voice-over. I chose to go without a voice-over for reasons previously mentioned, but I think I’d like to give it a try in the next animation. I can imagine it would present its own challenges and I’d like to explore them one day.
For now, this post will concentrate on the ambient sound, the next will be on music.
My first task was to think about what scenes needed what sounds. Going back through my animation files, I watched the silent progress and imagined what might lend itself to the matter. I made a wee list:
The hissing of the furnace
The rattling of the conveyor belt
The kak-klunking of the machinery
Heavy breathing of the Assassin
The bang of the Bullet
Armed with my dodgy microphone, I tried my hand at making noises with my mouth. I discovered a couple of things. Firstly, my microphone ain’t no good. I thought it was broken at first. No, not broken, just really crappy. The resulting sound was barely above a whisper. Upping the gain only upped the noise and clipped the sound. I couldn’t make too much noise: I’ve got a young ‘un who is usually asleep by the time I’m doing anything. On top of that, everything came through with a hum that I later tracked down to being the fan of the computer.
Secondly, while my vocal impersonations of the garbage truck on a Friday are enough to impress small children, Michael Winslow I ain’t. Even when I did manage to get a sample of something loud enough to be workable, it sounded pretty lame. The rattling conveyor sounded like an old man about to lose his lunch, the kak-klunking sounded like nutshells being rubbed together.
Microphone = Inadequate. Location = Terrible. Source = Abysmal. To address these issues, I looked at the palm of my hand. My phone! Not only can it take telephone calls, it has a recorder built into it. On the weekend, I buzzed about outside, in the garden, in the garage, trying to find sources for sounds. The roller door. A hammer. The hose. The air conditioning unit. The lawn mower. The can opener. There were clunks and rattles and hisses and sighs all over the place.
By the time I came back inside and thawed my nose (it’s Winter time), I had a phone full of sounds, ready for use. Only, they weren’t. First, I needed to download and convert them into something usable.
A long, long time ago, Dad splashed out on a Sound Blaster Pro. Tucked into the whopping ISA expansion slot on the motherboard, it allowed, for the first time, not only playback of awesome sounds and music, but also recording of them. As a family we huddled around the box to record funny messages for Windows startup, add reverb and warp the pitch until we sounded like chipmunks.
When I tried the recorder the other day, I was sorely disappointed. Yes, I could record, but that was about it. Where had all the fun gone? Why couldn’t I fade in or out? What about the echo and hiss-reduction and all of that? We had it back in the ’90s, right?
Well, all of that is still there. A quick search on the net brought me to Audacity (http://audacityteam.org/). Simply download and enjoy. The interface is a bit daunting to look at, granted, but stick with it. Go ahead, import a sound file, boom, there’s the waveform, ready to be fiddled with. First port of call for me was to trim out the bits of the samples that I didn’t want. Highlight the section and delete it, simple as that. And if you need to insert a block of silence, sure, go to the Generate menu and make as much sweet silence as you need. I had to do that a fair bit: I had a two-year-old shadow following me around, nattering all the while.
You can cut and copy and paste, or select a region and make it repeat any number of times. Importantly, you can fade in or out, cross-fade between left and right channels, or apply some really cool filters to knock out high hisses or low hums. I must admit, I lost myself for quite some time mucking about with different filters, seeing what each one did.
One of the really cool features of Audacity is the ability to have multiple, parallel tracks. They end up working a lot like layers in the graphics programs, so you can tweak one track independently of another, speeding it up, slowing it down, adjusting the volume, whatever you like. And you can play it back, just like that, to hear how it goes.
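Under the hood, layering parallel tracks is just summing waveforms sample by sample, with a per-track volume. A toy sketch of the idea (the sample values and gains below are made up, and real audio would use thousands of samples per second):

```python
def mix(tracks, gains):
    """Mix equal-length lists of samples into one track, scaling each
    track by its gain before summing, like Audacity's track volume sliders."""
    return [
        sum(g * t[i] for t, g in zip(tracks, gains))
        for i in range(len(tracks[0]))
    ]

# A loud clunk with a quieter hiss layered underneath, at half volume.
clunk = [0, 8, -6, 2]
hiss  = [2, 2, 2, 2]
mixed = mix([clunk, hiss], gains=[1.0, 0.5])
```

That’s the whole trick: each track stays independent right up until playback or export, which is why you can tweak one without touching the others.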
In the end, banging a lump of wood on the roller door provided a decent ka-thunk, ka-thunk, and the air-conditioner gave up a bunch of interesting sounds, whirrings and groans and squeaks and hisses. One thing I couldn’t find in my garage wonderland of noises was the distinct sound of a rifle shot.
The shot was essential to the animation, and I simply could not recreate a convincing bang that was distinctly a gunshot. Short of rocking around to a rifle range, I poked about online for free sound samples. I listened to the report of a few different models and settled on the Springfield M1A rifle: it has that heavy crack I was after, along with a lasting, gaseous hiss.
Pushing this all together, I must say that I’m not entirely happy with what I ended up with. If there’s anything I’d go back and do again, it’d be the sound, simply because it’s just not punchy and distinct enough. In fact, I’d probably seek help from a sound engineer in this department. Anyway, enough prattling about that; next time I’ll prattle on about the music.
If you’ve been following along, you’ll know that I started off this whole animation project by defining a bunch of scenes I wanted to render, converting some sketches into vectors and figuring out how to make stuff move.
By putting in a background and having layers for your characters, you could very easily knock up a South Park-looking animation, or even a smoothly transitioning storyboard, depending on what you’re after. If you’re after motion of parts of your characters (eyes, for example, or mouths), you’ll need to get into some of the finer points.
The scenes for The Bullet did not call for a lot of motion, contrary to what the subject matter might suggest. As I was getting through it, though, I figured I wanted a bit more realism with my characters, the Worker and the Assassin especially. The eyes of the Worker were quite important since, if the Bullet went under his gaze and his eyes did not move, it would destroy the notion that the Bullet was being scrutinised.
Having already labeled the layers that held the eyes, it was easy enough to identify them. Had I known about canvases (http://wiki.synfig.org/wiki/Canvas) before I started this, I would have used them to put the eyes group on an independent time frame. Not to worry, I got there in the end, and the concept is still the same.
I started the eyes pointing off to the right (viewing the previous bullet), swiveled them back sharply and had them smoothly roll in time with the viewpoint of the Bullet before snapping back again, ready to inspect the next bullet. I toyed with the TCB and Constant waypoints, but neither gave the impression of real eye motion, while Linear seemed far too unnatural. Clamp turned out to be the best fit for the task, although I think the flyback should have been a little faster. If I were to do it again, I’d consider some jerkiness and random motions of the eye. When a person is looking at something closely, the eye makes many microscopic adjustments as it scans the intricate details of the subject. Lesson learnt.
The worker was going to be smoking a cigarette originally, but, as one might imagine, cigarettes and gunpowder don’t mix. In the end, I pulled the stick out of his mouth. It didn’t belong and it detracted from the enormous, distorted eyeballs.
Speaking of eyeballs, the Assassin, coming in at the end of the rifle run, needed to have a bit more life to him. I wound up giving him a goatee beard, shaggier hair and sinister eyes.
His mouth starts off flat, almost grumpy, but it turns to a smile as he approaches. How? Select the mouth layer and simply move the mouth to where you want it to be at a certain time, and let the animation engine do the rest. Curling up the edges of the mouth, I found, was not a very effective way to bring life to a character. It was just too subtle and was lost in the motion of the whole head as it zoomed and rotated.
Upping the extent of the smile didn’t cut it. Exaggerating the mouth motion looked too, well, exaggerated. And, besides, the smile was for the Assassin, no one else, and needed to be almost imperceptible. Instead, I got him to blink.
Blinking involves the covering of an eyeball with the eyelid. Again, since I had labeled my layer previously, it was only a matter of finding it on the right hand layer panel, clicking the little red man to begin animating, grabbing the waypoints and closing them together, then opening them up again.
Now, a normal, natural blink is very fast indeed. A Step / Constant waypoint certainly looked like a blink, but, at only a single frame, it flashed past too quickly. Instead, I used a Clamp to animate in and out, but over only a few frames. The result is that the eyelid closes rapidly, but not so rapidly that it’s lost on the viewer.
If anything, it’s slow enough to give the Assassin the air of being cool, calm and calculating, which is exactly what I was after.
Without the use of thick lines to define my characters’ features, or any form of cross-hatching or shading, I had to rely on the slabs of colour of the regions. Not terrible. Not great. You don’t get a lot of depth out of it. Or mood. Or ambiance. This is where gradients can help.
Taking the worker scene, it looked far too bright and airy, not at all like the confused, claustrophobic world into which the Bullet was born. To bring the focus back onto the worker, and provide a narrowness of view, I used a radial gradient over the top of the worker group, running from transparent in the centre to a dark red at the edges.
The radius of this layer, like pretty much everything else, can be animated. This way, the field of view grows and shrinks as the Bullet travels along, obscuring the image. I did apply a fish-eye, or sphere distortion, which added to the confusion, but I pulled it: the scene was just becoming too confusing.
To aid the idea that the Worker was near a furnace or a boiler, I applied a linear gradient, which I labeled ‘Heat Flare’ across his face to give it a rosy hue. I did something similar in the next scene, the Metamorphosis, to have the Bullet move from a hot red area to a cooler grey one, animating the endpoints and colours of the gradient as the scene progressed.
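The vignette itself boils down to simple per-pixel maths. Here’s a toy sketch of the idea (my own illustration, not anything out of Synfig), where the radius is exactly the value you’d animate to grow and shrink the field of view:

```python
import math

def vignette_alpha(x, y, cx, cy, radius):
    """How much of the dark red overlay covers pixel (x, y):
    0.0 (fully transparent) at the centre (cx, cy), ramping up
    linearly to 1.0 (fully dark) at `radius` and beyond."""
    d = math.hypot(x - cx, y - cy)
    return min(1.0, d / radius)

# Halfway out, the overlay is at half strength; at the edge, full.
centre = vignette_alpha(100, 100, 100, 100, 50)   # 0.0, clear view
halfway = vignette_alpha(125, 100, 100, 100, 50)  # 0.5
edge = vignette_alpha(150, 100, 100, 100, 50)     # 1.0, dark red
```

Animating the radius down squeezes that clear centre region, which is what gives the narrowing, claustrophobic view.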
Lastly, I had to decide between voice-overs and text. I have a microphone on my webcam, and another that I can plug into the back of the box. Neither, I discovered, was suitable for recording clean, crisp voice. In fact, I think I’ll have to get onto the whole sound portion of this clip in another post. In any case, I decided upon text to display contextual snippets.
To do this, simply add in a Text Layer. Type in the text as the ‘value’ and, Presto! You have words. I imagine one might want to animate words in or out, or type one letter at a time, but I went for a simple fade in / fade out option.
Changing the font is a tricky matter, though. You need to know the name of the font that you’re after. I opened up OpenOffice and scrolled through its font list to find the one I wanted, but the Windows Font Viewer will do the job too. Put the name, verbatim, into the font family field and that’ll do the trick.
Because fonts behave like vectors, they’ll scale and rotate very nicely without all the pixelation.
Can you add a gradient to your text? Of course! Can you use your text to define the alpha channel of an underlying layer? Definitely (and how cool would that look?). The only real issue I found with text is that the rendering gets a bit jumpy if you try to animate the size. Maybe non-integer values aren’t suitable for the rendering engine, but I’d only be guessing. Everything else is fair game.
But an animation isn’t all just visual. In my next updates I’ll go over the music and sound.
The last post yammered on about how cool Synfig is. For a dude like me who hasn’t the training, the cash or the patience for the professional stuff, it does the job admirably and there’s a whack of stuff in there that I haven’t even had the chance to look at yet.
Alright, alright, I’m getting to it. With my vector images ready to go and a little practice under my belt, I was ready to try to get a scene in motion. I started by importing the SVG into Synfig. Go to File > Import and select your file (you can use this to bring in JPG and PNG images as well). This will make a group layer that contains a bunch of other layers, one for each layer in the original file, and inside each of those is a layer for each path.
If you’ve labeled your layers in Inkscape, they won’t have these labels when imported into Synfig. No worries, just spend a minute to select and re-label those layers – a stitch in time and all of that.
Now, one issue I did come across was that regions with subtracted sections (I’m talking annuli, doughnuts, holes, cutouts) still had their paths present after the import, but the subtraction itself was lost, as you can see in the image on the left.
The Worker’s glass and head brace naturally needed to let him see the Bullet as it goes past. The only thing is, the import left the cutout with the same colour and composition style as the rest of the headgear, so I ended up with a blast shield over the lenses.
One workaround for this, I found, was to select the layer that was to be the cutout, and set its composition to subtract the alpha from the underlying region. This will mimic the hole punch effect of one region upon another.
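The hole-punch idea can be sketched in a few lines. This is just an illustration of the concept, not Synfig’s actual blend code; pixels here are simple (r, g, b, a) tuples with channels from 0.0 to 1.0:

```python
def alpha_subtract(under, cutout_alpha):
    """Subtract the cutout layer's alpha from the alpha of the
    region underneath: wherever the cutout is opaque, the underlying
    region becomes transparent, mimicking a punched hole."""
    r, g, b, a = under
    return (r, g, b, max(0.0, a - cutout_alpha))

visor = (0.2, 0.3, 0.35, 1.0)            # an opaque headgear pixel
punched = alpha_subtract(visor, 1.0)     # fully opaque cutout on top
partial = alpha_subtract(visor, 0.25)    # soft-edged cutout pixel
```

With the cutout at full alpha, the headgear pixel ends up fully transparent, which is exactly the see-through lens effect I was after.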
To add a little bit of depth, I also took the opportunity to add in a ‘lens’ layer, merely a circle that colourised everything beneath it, so as to give the skin and eyes an unnatural, soft, blue hue.
So there’s a little bit of tidying up that needs to be done when bringing in your SVGs, but it’s not a killer. It also gives you a chance to get everything prepped and, more importantly perhaps, the incentive to fiddle around with the settings to see what flies.
Animate it!
Synfig lets you animate the properties of your layers by adding waypoints for those properties. The style of each waypoint affects how the engine interpolates the values in between, and each waypoint can have a different style for its in and its out. For example, you can use a ‘Linear’ waypoint (the yellow one) to transition linearly from one state to another. Alternatively, you can choose a more flowing ‘Clamped’ waypoint to give a smoother lead in or out. There’s also TCB, which is pretty cool but hard to control.
There’s also the Step, or Constant, waypoint, which you can use to transition a state instantly; it comes in handy when you want to make something, say, disappear or reappear, or change from one colour to another in the blink of an eye. The best way to figure these out is to muck around with them and get a feel for how they behave.
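To get a feel for what the different styles actually do between two waypoints, here’s a toy Python sketch. The ‘clamped’ curve is approximated with a smoothstep ease, which isn’t Synfig’s exact formula but has the same soft lead in and out:

```python
def interpolate(v0, v1, t, style="linear"):
    """Value between two waypoints; t runs from 0.0 (at the first
    waypoint) to 1.0 (at the second)."""
    if style == "constant":
        # Hold the first value, then jump to the second at the end.
        return v0 if t < 1.0 else v1
    if style == "clamped":
        # Smoothstep ease: slow start, slow finish.
        s = t * t * (3.0 - 2.0 * t)
        return v0 + (v1 - v0) * s
    # Default: plain linear transition.
    return v0 + (v1 - v0) * t

# Compare how each style crosses from 0.0 to 1.0.
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    linear = interpolate(0.0, 1.0, t, "linear")
    eased = interpolate(0.0, 1.0, t, "clamped")
```

The eased value lags the linear one early on and overtakes it later, which is exactly the gentler lead in and out you see on screen.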
One of the scenes I decided upon had my characters zooming in from the sides, scaling from little to really big, and fading in from invisible to visible and back out again. I’ll pick on the Foreman to give you an example. I’ve taken a shot just a frame or two after my first waypoint so you can get a feel for where he starts off: top left, barely visible and tiny.
With the little man set to green, I prime my layer to be where I want it to be at the start.
Then, I click on the green man to turn him red. This means I’m in ‘animating’ mode, and I can create waypoints. Moving the timeline forward to an applicable spot, I can drag the balls on the graphical layer to position it where I want it to be at this time. This will create a waypoint for the position.
You can see these waypoints in the bottom panel. Each will have a graphic for the ‘in’ portion and one for the ‘out’. So you can, for example, linearly transition something in, then apply a constant on the out.
To change a waypoint, you can drag it left or right to set the frame, or right-click to alter the type of the in, the out or both, or to duplicate it, remove it, and so on. Notice, too, that the waypoints sit in line with the properties they are animating. In the example there, the Amount (think opacity) is animated with clamped points from 0.0 to 1.0 and back down to 0.0 again to fade in and fade out. The Transformation (position, scale, rotation) can have a completely separate set of waypoints to follow.
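That fade-in / hold / fade-out shape of the Amount track can be sketched like so. This is a toy, linearly interpolated stand-in for the real clamped waypoints, and the frame numbers and values are my own illustration:

```python
def amount_at(frame, waypoints):
    """Evaluate an animated value at a given frame.
    waypoints: a sorted list of (frame, value) pairs, linearly
    interpolated between neighbours and held flat beyond the ends."""
    if frame <= waypoints[0][0]:
        return waypoints[0][1]
    for (f0, v0), (f1, v1) in zip(waypoints, waypoints[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + (v1 - v0) * t
    return waypoints[-1][1]

# Fade in over the first second, hold, then fade out (at 24 fps).
fade_track = [(0, 0.0), (24, 1.0), (96, 1.0), (120, 0.0)]
mid_fade_in = amount_at(12, fade_track)  # halfway through the fade-in
```

Each property in the panel is essentially one of these tracks, evaluated independently, which is why the Amount and the Transformation can carry completely separate sets of waypoints.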
To resize or rotate or move the layer, you can just grab the little balls on the screen, or if you want precise control, you can set the numeric value directly in the property – value window on the bottom left. This is handy if you want to perform precise motion, like the rotation of gears or the path of a conveyor belt.
This is how I got the bulk of my animation covered, but there are still some facets that I’d like to cover, namely animating paths, or objects within other objects, as independent entities.
Say what? Just bear with me, it’ll make sense in the next post.