Live a Little before you Die a Little

Being an independent author is a tough gig. You’re looking at months, years even, of scratching out enough time from your hectic life to whip words into line just to adequately convey a story to someone you’ll never meet. It’s daunting to publish your first story, make no mistake, and there is an underlying, all-consuming, inescapable fear that what you’ve written is just no good.

Not no good in the sense that it’s not going to be a best seller, no good in that your book stinks.

Fear

It’s a natural fear, and a healthy one in many respects. Really. Firstly, it encourages you, by default, to check and recheck your work to make sure it’s in a suitable condition for that person on the other side of the world. You are actually compelled, for the umpteenth time, to check over the editing, punctuation, sentence and paragraph structure, beats, tics, clichés, metaphors and vocabulary.

Secondly, and possibly more importantly, it draws you out from your authoring role and puts you into the reader’s role. “Will they get it?” you wonder, “Does it make sense to someone who isn’t me? Will they actually get it? Was that metaphor too subtle? Is the premise lost in the drama?”

Thirdly, you’re more likely to ‘look over the fence’ at other books, see how other authors deal with killing off their characters, see what language they get up to, what works, what annoys you (as a reader).

Of course these fears, left to fester, can prevent you from ever getting that manuscript out. Kind of a paranoia-induced paralysis. If you’re anything like me, you’d make up excuses. You’d say that you simply don’t have enough time to do it properly. You don’t have the money to pay an editor to read over your stuff. You haven’t the ability to make a front cover or your laptop isn’t fast enough or, well, you’ll find anything.

That was me for years until I reached the ‘stuff it’ moment, the point where I had to grow a pair and actually do something.

The time issues I worked through. The technological and artistic issues, I nutted out. But the fear remained. The fear sat on me like a weight, holding me back.

Motivation

I got to thinking, in my ‘stuff it’ moment, about what it was that was driving me forward. Why was I even bothering to write? What did I hope to achieve? I was not scrounging spare minutes to sit in front of a screen for my health, nor my sanity, nor because I enjoyed the feel of keys beneath my fingers.

A few beers and some heavy introspection later, I woke up with a clearer understanding: Originally, I wrote as a way to exercise my mind, to express some little ditty I had in there, to unwind from the stress of work. Then, as I got stuck into Darkness From Below, I discovered that I could write a book and all that it needed was a solid bit of grunt. It was a proof of concept (Really, Jez? A proof of concept? Hey, I’m a software engineer. Sue me) that showed me how a little bit of writing each day can culminate, eventually, in a book.

When it came to getting that book published, however, I discovered that I could not (for legal reasons) and, I confess, I let that be my excuse not to publish. “Ah, well. Such is life.” I was relieved, since I didn’t have to face up to the stomach twisting fear that someone, somewhere out there in the big, wide world, didn’t like what I wrote. Fear had won out again.

It was a failure on my part, of course, and I had no one to blame but myself. I could have re-written it. I could have kept the premise and the story and changed the characters and scenery, and it would have been my own once more, but I was deflated, depressed again. So what, then, was the final motivator that got me going?

Assertiveness

I wanted to have something in my hand to point to and say, “See that? I did that.” I wanted to look back on my life and know that I didn’t just spend my time in front of the television, that I could produce something that someone, somewhere out in the world, would read and like. I wanted my children to know that hard work and dedication, persistence and determination, do pay dividends.

At first I thought I was being ‘selfish’. After all, I was doing something for myself, not for others. Aren’t I, as an author, supposed to put my audience first? Yes, I say, absolutely, but that’s not the right context.

When I write, I keep the Reader in my mind at all times, but why I write, well, that’s not up to the Reader, that’s up to me, the Author.

And this is what it all came down to: I write because I want to. Not because I’m forced to. Not because I’m paid to. Not for any other external motivator. I do my best to proof-read and edit. I treat the Reader with respect. I learn from other books, I listen to criticism, I do better each time. Still, I write because of the satisfaction it gives me, the enjoyment I get out of creating something from nothing, the sense of accomplishment when someone ‘gets it’.

My biggest lamentation is that I sat on my thumbs for so long, letting fear eat at me, ignoring what it was that I wanted, dismissing my desires as selfish, delusional, unachievable. It would erode me, and feed that black beast. So many New Year’s Eves of my life I’d look back and think, “Yup. Didn’t get anything I wanted to get done in that year. Maybe, somehow, next year will be different.”

Of course, it wasn’t going to be different until I decided, actively, that it would be.

If you are a prospective author, artist, musician, craftsman, whatever, if you’ve got that little spark in you crying to be stoked, all I can say is this: You can either look back and wonder if you ever could have done it, or you could just go ahead and do it, and phooey to those faceless fears. Figure out what it is you want, honestly, and then do something to make it happen.

Live a little, before you die a little.

The Bullet Animation – Music Issues

Making the music for The Bullet Animation was definitely one of the more fun aspects. I had a general tune going, I’d made a rhythm track and mucked about with the instruments.

Playing it back, it didn’t sound right. Sure, the tune was fine and the timing was correct, but there was something definitely NQR. It wasn’t until I played it back on my phone that it twigged: The instruments sounded tinny.

Timbre

No, not what one calls when a tree is cut down. I’m talking the quality of sound, the richness. If the sound coming out was colour, it would be a pastel, muted shade, not a rich, vibrant one. The instruments used sounded very much like those I was playing with back on the ol’ 386, probably because (and please correct me if I’m wrong) they were the same ones.

The ‘instruments’ used to play the midi file were the issue. Windows comes with a set of sounds that can be used to play midi files which is, well, average. So the piano, the harpsichord, the bass guitar, all sound like they’re supposed to. Kind of. Ish. If you squint.

“OK,” I reason, “It’s just a matter of getting a better quality set of instruments.”

In a way, yes. Only the correct term is Soundfonts. You can think of it like text-fonts. You’ve got your standard set of Arial, Times New Roman, Courier, Helvetica. Throw Comic Sans into that mix. They serve a purpose, they’re a good, vanilla set, and you can make them bold, italic, underlined, yeah, but they aren’t particularly interesting. Now you can get a whole bunch of fonts, of all different shapes and themes to suit a bunch of purposes. Different fonts make things interesting.

SoundFonts

The default Windows soundfont is decidedly average. In Anvil, it was the default midi synthesizer. I looked through the help and it seemed easy enough to add other synthesizers as well, and with the free version of Anvil, I can have up to two. Hey, I’ll settle for one good one.

I went online and downloaded VirtualMIDISynth (http://coolsoft.altervista.org/en/virtualmidisynth), which acts as a virtual midi endpoint, something that can render the midi files. By itself, it’s just an empty sound studio – I needed to fill it with instruments (I needed to download a soundfont).

Back to the web I went, seeking out this new ‘soundfont’ beast. Turns out they come in all shapes and sizes (just like normal fonts) and range from a few megabytes to a few hundred. What’s the difference? Well, I started with the ‘few megabytes’ option and ended up with a single instrument, a piano. It sounded nice, a lot better than what I had, but I was after a lot of instruments, not just one.

Have a look on the VirtualMIDISynth webpage for links to soundfonts. I eventually went with the Fluid Soundfont (http://www.synthfont.com/soundfonts.html), after trying a bunch of others, and I got a tiny glimpse into the world of sampled sounds. If time permits (Ha!) I’d love to revisit this and play around with some of the really cool soundfont sets I downloaded.

To use it, I opened up VirtualMIDISynth and chose the Fluid GM Midi soundfont set to use. Then, inside Anvil, I went into the synthesizers tab, chose to import a new synthesizer and picked my VirtualMIDISynth. After that, it was only a matter of selecting the instrument for the track and listening to how much better it sounded.

If only I could have done the same kind of thing for my sound effects.

Other Complications

Not really a complication in a technical sense, more that when I finished composing the main tune, I played it against the animation. Who would have guessed? It was too long. I had the choice of either upping the tempo, which made it sound ridiculous, or removing a slab of twiddly bits from the middle.

It wasn’t a complicated task, removing some notes and pushing the ones over that side to over this side, except that the twiddly interlude bit that got ripped out was a sort of bridge between two different keys. Playing the resulting set revealed a dissonance that highlighted the rift between the two parts. In the end, I highlighted the offending section and, using the power of Anvil, shuffled them down a tone or three. Job done.
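
That shuffle down a tone or three is, at heart, just an offset applied to the note numbers in the offending region. Here’s a minimal Python sketch of the idea (the function and the phrase are my own, purely for illustration; the numbers are MIDI note numbers, where 60 is Middle C):

```python
# Sketch of transposing a section of a tune by some number of semitones,
# similar in spirit to shifting a selection in a midi editor.
# Notes are MIDI note numbers (60 = Middle C); the phrase is made up.

def transpose(notes, start, end, semitones):
    """Shift the notes in positions [start, end) by the given semitones."""
    return [
        note + semitones if start <= i < end else note
        for i, note in enumerate(notes)
    ]

# A little phrase with a middle section sitting a tone too high:
phrase = [60, 62, 64, 67, 69, 71, 67, 64, 62, 60]

# Drop the middle four notes down a whole tone (2 semitones):
fixed = transpose(phrase, 3, 7, -2)
```

Shifting ‘down a tone or three’ is then just a matter of the semitone count: -2 for a whole tone, -4 for two, and so on.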

I wanted the musical piece to start from a simple tune and build incrementally to a crescendo. The short animation time meant I was left without a lot of run-up space, so I broke the music into parts and added the rhythm track and the accompaniments in varying stages, putting it all together at the end.

The accompaniment tracks started off sounding very boring and flat: Just a single note played for each beat. To spice it up, and to allow a bit of dissonance as it progressed, I changed them to alternate between Oom-pa-pa and (rest) Ba-da-ba.

So they go Oom-pa-pa, (rest) Ba-da-ba, Oom-pa-pa, (rest) Ba-da-ba. Then, as I like, I can adjust one pa to bring the piece up to another key, or replace an Oom-pa-pa with a Ba-da-ba to add a bit of urgency. I’m sure there’s a musical term for this, can’t tell you what it is so don’t ask.

Lastly, by the time the piece got to the last phrase, there wasn’t enough behind the crescendo. Sure, it hit the high notes (better than I could!) but because everything had gone up by an octave or so, there was no bass left. To round it out, I inserted some simple bass notes to keep the whole thing grounded.

Whew! How about that! Music is done. Now what? Well, in my next post, I’ll show you how I brought it all together.

The Bullet Animation – Music

Having moving images and sound effects for the animation wasn’t enough. After toying with layering sounds upon sounds to build to a crescendo, I figured out that what was needed was not more crappy sounds, but music. Actual music. It sets the scene, it binds the flow together, it lends to the atmosphere of it all.

Recording

First, I came up with a tune. It’s one that’s been stuck in my head for ages, I don’t know if it’s an actual song or not, but it’s what I chose to run with. So I sang it. Ha! Bad move. Firstly, singing in the shower is one thing, singing into a microphone is something else. In fact, I did try recording it in the shower. It didn’t turn out much better.

There were a few problems. Firstly, I had no musical backing, no metronome, no drums or pianos or violins or guitars. OK, I thought, I’ll just hum it out as a chorus and layer my voice over itself in Audacity. Yeah. Nah. Not good. After a few solid attempts tucked away in the garage, I recorded myself a few times in different keys, mimicked a ‘pom pom-pom’ for the beat and opened the recordings in Audacity.

While it wasn’t terrible, it wasn’t great. It wasn’t even good. Passable might be a stretch. I adjusted the pitch and tempo to get two tracks into line, which helped a bit, but the overall result was underwhelming and unsuitable. Why? Because of a second, larger problem.

While Audacity allows one to increase or decrease the apparent tempo, there’s only so much it can stretch before it starts to sound distorted. So unless I fluked it and got my recorded tune to be pretty close to the timing of the animation, I would have to record it all again. And I was still without instruments.

Phooey.

Midi

Back in the day, when we first got a Sound Blaster, I was introduced to the world of Midi. The topic is huge, but the concept is pretty straightforward. In a similar fashion to Vector versus Raster graphics, using Midis frees one from actually having to play or, in my case, sing a song. Rather, one provides instructions for playing the song. Consider a record player versus a sheet of music. A record player plays the record placed upon it. It cannot play an abstract piece of music unless that music is encoded onto a record.

A sheet of music, on the other hand, is similar in that a tune may be derived from it, yet by itself it cannot produce that tune. Instead it needs a musician, acting as an interpreter, and an instrument upon which to play the tune. Give the musician a different instrument, and you get a different sound. Up the tempo, change the key, and it’s just a matter of the musician playing the same tune differently.

Not only that, you can give different sheets of music to different musicians and, hey, presto, you’ve got yourself a band. OK, not exactly the same thing, but you get the idea. It allows musical plebs, like yours truly, to slowly create a piece of music, assign instruments, even put in a rhythm track, and make music. You can use your midi to ‘talk music’ to devices like electronic keyboards and sample pads.
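
The sheet-music analogy boils down to a few lines of Python. This is not how midi files are actually encoded (real midi deals in note-on/note-off events, channels and ticks); it’s just the core idea that the tune is a set of instructions and the rendering decides the rest:

```python
# A toy illustration of the "sheet music versus recording" idea behind midi:
# the tune is just instructions (note, length in beats); rendering decides
# the tempo. All names here are made up for illustration.

TUNE = [("C4", 1), ("E4", 1), ("G4", 2)]  # (note name, length in beats)

def render(tune, bpm):
    """Turn abstract beat lengths into concrete durations at a given tempo."""
    seconds_per_beat = 60.0 / bpm
    return [(note, beats * seconds_per_beat) for note, beats in tune]

# The same "sheet music" played at two tempos:
slow = render(TUNE, 60)   # each beat lasts one second
fast = render(TUNE, 120)  # each beat lasts half a second
```

The recording, by contrast, would be the rendered output baked in at one tempo, with no way back to the instructions.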

Nuts. I don’t have my Sound Blaster anymore. And MIDI Mapper, a tool one could use to define the output device for playing midi files, which used to be in the Windows 3.1 control panel, just isn’t there in Windows 8. A bit of poking about on the web, reading up on a few sites, and yep, it’s gone. No! Surely not!

Fret not! For midi, as I came to find out, is alive and well and not going anywhere soon. As with everything else about this whole project, it took a bit of reading of forums, blogs and how-tos to get my head around it all, but I’m glad I did.

I downloaded a few nasty midi composers that were not to my liking: they were too clumsy, or they wouldn’t even install properly. Finally, I settled upon a great piece of software called Anvil Studio (www.anvilstudio.com) that enabled me to, from scratch, knock up a tune, add a rhythm track, add a couple more tracks for harmony and, tada! Music!

Alright, maybe it wasn’t that easy. First I had to fish out my old music books and remember things like ‘Middle C’, 4/4, 3/4, 2/4 time, rests, quavers, semi-breves, sharps, flats, chords, staccato, keys. After struggling for a solid hour, I discovered that Anvil doesn’t force you to do things solely with sheet music. For example, I found that there is a ‘piano roll editor’ view that lets you mark out your tune in a graphical format. Purists, look away!

Not only that, if you’re a guitar buff, you can plot your music on a tablature view.

With each track, I can pick an instrument I want to use to play that tune. It’s kind of cool, really, to see how a song sounds when played with a piano, or a guitar, or a glockenspiel. Best of all, no need to re-record.

What about percussion? I added a rhythm track. First, I played with adding some bass and a crash cymbal, just to see how a backing rhythm would sound, then proceeded to fill that in all the way across the tune. Whoa, there’s a better idea. Loops.

Anvil allows me to make a loop of the various percussive sounds and then instruct it to play that over the next portion of a tune. Now that’s handy. No copy and paste errors, and no tedious filling out of a rhythm.
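
Conceptually, the loop feature is just ‘define the bar once, repeat it across the tune’. In sketch form (the pattern and names are made up for illustration):

```python
# Sketch of what a percussion loop buys you: define one bar of rhythm
# and repeat it, instead of copy-pasting it out by hand.

PATTERN = ["bass", "rest", "cymbal", "rest"]  # one bar of backing rhythm

def loop(pattern, bars):
    """Repeat a one-bar pattern for the given number of bars."""
    return pattern * bars

rhythm = loop(PATTERN, 4)   # sixteen beats of backing rhythm, no typos
```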

So that’s great news! I had a veritable orchestra at my disposal, right? Right. Almost. It certainly solved most of the problems outlined above. I can adjust the tempo of the song to fit into the timing of my animation. I have a musical score that I can tweak. I can apply musical instruments to different tracks.

Above all, I don’t have to sing. You can thank me later.

So why ‘almost’? That comes back to how the midi files are rendered. I’ll get onto it in the next post.

The Bullet Animation – Sound

Up to this point I had been toying with graphics and animating things and sketches and learning about vector graphics – and I’d completely neglected the audio! Well, not completely. Mostly.

The issue, as I saw it, is that I had to finish the animation before I could add the sound. I could hardly hope to figure out the timing without something against which to time. Anyway, by the time I got to the first major iteration, I thought I’d better spend some time on sound.

Music, Sound, Voice

I broke up the tasks of sound into three main categories: The background music, ambient sounds and voice-over. I chose to go without a voice-over for reasons previously mentioned, but I think I’d like to give it a try in the next animation. I can imagine it would present its own challenges and I’d like to explore them one day.

For now, this post will concentrate on the ambient sound, the next will be on music.

My first task was to think about what scenes needed what sounds. Going back through my animation files, I watched the silent progress and imagined what might lend itself to the matter. I made a wee list:

  1. The hissing of the furnace
  2. The rattling of the conveyor belt
  3. The kak-klunking of the machinery
  4. Heavy breathing of the Assassin
  5. The bang of the Bullet

Armed with my dodgy microphone, I tried my hand at making noises with my mouth. I discovered a couple of things. Firstly, my microphone ain’t no good. I thought it was broken at first. No, not broken, just really crappy. The resulting sound was barely above a whisper. Upping the gain only upped the noise and clipped the sound. I couldn’t make too much noise: I’ve got a young ‘un who is usually asleep by the time I’m doing anything. On top of that, everything came through with a hum that I later tracked down to being the fan of the computer.

Secondly, while my vocal impersonations of the garbage truck on a Friday are enough to impress small children, Michael Winslow I ain’t. Even when I did manage to get a sample of something loud enough to be workable, it sounded pretty lame. The rattling conveyor sounded like an old man about to lose his lunch, the kak-klunking sounded like nutshells being rubbed together.

Take two

Microphone = Inadequate. Location = Terrible. Source = Abysmal. To address these issues, I looked at the palm of my hand. My phone! Not only can it take telephone calls, it has a recorder built into it. On the weekend, I buzzed about outside, in the garden, in the garage, trying to find sources for sounds. The roller door. A hammer. The hose. The air conditioning unit. The lawn mower. The can opener. There were clunks and rattles and hisses and sighs all over the place.

By the time I came back inside and thawed my nose (it’s Winter time), I had a phone full of sounds, ready for use. Only, they weren’t. First, I needed to download and convert them into something usable.

Audacity

A long, long time ago, Dad splashed out on a Sound Blaster Pro. Tucked into the whopping ISA expansion slot on the motherboard, it allowed, for the first time, not only playback of awesome sounds and music, but also recording of awesome sounds. As a family we huddled around the box to record funny messages for Windows startup, add reverb and warp the pitch until we sounded like chipmunks.

When I tried the recorder the other day, I was sorely disappointed. Yes, I could record, but that was about it. Where had all the fun gone? Why couldn’t I fade in or out? What about the echo and hiss-reduction and all of that? We had it back in the ’90s, right?

Well, all of that is still there. A quick search on the net brought me to Audacity (http://audacityteam.org/). Simply download and enjoy. The interface was a bit daunting to look at, granted, but stick with it. Go ahead, import a sound file, boom, there’s the waveform, ready to be fiddled with. First port of call for me was to trim out the bits of the samples that I didn’t want. Highlight the section and delete it, simple as that. And if you need to insert a block of silence, sure, select menu > generate and make as much sweet silence as you need. I had to do that a fair bit: I had a two-year-old shadow following me around, nattering all the while.

You can cut and copy and paste, or select a region and make it repeat x number of times. Importantly, you can fade in or out, cross-fade between left and right channels, or apply some really cool filters to knock out high hisses or low hums. I must admit, I lost myself for quite some time as I mucked about with different filters, seeing what each one did.

One of the really cool features of Audacity is the ability to have multiple, parallel tracks. They end up working a lot like layers in the graphics programs, so you can tweak one track independently of another, speeding it up, slowing it down, adjusting the volume, whatever you like. And you can play it back, just like that, to hear how it goes.
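
Under the hood, mixing those parallel tracks boils down to summing the samples, scaled by each track’s volume, and clamping the result so it doesn’t clip. A toy sketch of the idea, not Audacity’s actual implementation (the track names and values are made up):

```python
# Toy sketch of mixing parallel tracks: each track is a sequence of
# samples in -1.0..1.0; mixing sums them (scaled by per-track gain)
# and clamps the result so it can't clip.

def mix(tracks, gains, limit=1.0):
    """Mix equal-length sample lists, each scaled by its gain, with clipping."""
    mixed = []
    for samples in zip(*tracks):
        total = sum(s * g for s, g in zip(samples, gains))
        mixed.append(max(-limit, min(limit, total)))
    return mixed

voice = [0.5, 0.8, 0.2]
drums = [0.4, 0.6, 0.9]

# Drums at half volume underneath the voice:
out = mix([voice, drums], gains=[1.0, 0.5])
```

Tweaking one track independently, speeding it up or adjusting its volume, just changes that track’s samples or gain before the sum, which is why layers in a sound editor feel so much like layers in a graphics program.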

In the end, banging a lump of wood on the roller door provided a decent ka-thunk, ka-thunk, and the air-conditioner gave up a bunch of interesting sounds, whirrings and groans and squeaks and hisses. One thing I couldn’t find in my garage wonderland of noises was the distinct sound of a rifle shot.

The shot was essential to the animation, and I simply could not recreate a convincing bang that was distinctly a gunshot. Short of rocking around to a rifle range, I poked about online to find free sound effects. I listened to the report of a few different models and settled on the Springfield M1A rifle: it has that heavy crack that I was after, along with a lasting, gaseous hiss.

Pushing this all together, I must say that I’m not entirely happy with what I ended up with. If there’s anything I’d go back and do again, it’d be the sound, simply because it’s just not punchy and distinct enough. In fact, I’d probably seek help from a sound engineer in this department. Anyway, enough prattling about that, next time I’ll prattle on about the music.

The Bullet Animation – Paths, Text and Gradients

If you’ve been following along, you’ll know that I started off this whole animation project by defining a bunch of scenes I wanted to render, converting some sketches into vectors and figuring out how to make stuff move.

By putting in a background and having layers for your characters, you could very easily knock up a South Park-looking animation, or even a smooth transitioning storyboard, depending on what you’re after. If you’re after motion of parts of your characters, eyes, for example, or mouths, you’ll need to get into some of the finer points.

Animating Paths

The scenes for The Bullet did not call for a lot of motion, contrary to what the subject matter might suggest. As I was getting through it, though, I figured I wanted a bit more realism with my characters, the Worker and the Assassin especially. The eyes of the Worker were quite important since, if the Bullet went under his gaze and his eyes did not move, it would destroy the notion that the Bullet was being scrutinised.

Having already labeled the layers that held the eyes, it was easy enough to identify them. Had I known about canvases (http://wiki.synfig.org/wiki/Canvas) before I started this, I would have used one to put the eyes group on an independent time frame. Not to worry, I got there in the end and the concept is still the same.

I started the eyes pointing off to the right (viewing the previous bullet), swiveled them back sharply and had them smoothly roll in time with the viewpoint of the Bullet before snapping back again, ready to inspect the next bullet. I toyed with TCB and the Constant waypoints, but neither gave the impression of what might constitute real eye motion, while Linear seemed far too unnatural. Clamp turned out to be the best fit for the task, although I think the flyback should have been a little faster. If I were to do it again, I’d consider some jerkiness and random motions of the eye. When a person is looking at something closely, the eye makes many microscopic adjustments as it scans the intricate details of the subject. Lesson learnt.

The worker was going to be smoking a cigarette originally, but, as one might imagine, cigarettes and gunpowder don’t mix. In the end, I pulled the stick out of his mouth. It didn’t belong and it detracted from the enormous, distorted eyeballs.

Speaking of eyeballs, the Assassin, coming in at the end of the rifle run, needed to have a bit more life to him. I wound up giving him a goatee beard, shaggier hair and sinister eyes.

His mouth starts off flat, almost grumpy, but it turns to a smile as he approaches. How? Select the mouth layer and simply move the mouth to where you want it to be at a certain time, and let the animation engine do the rest. Curling up the edges of the mouth, I found, was not a very effective way to bring life to a character. It was just too subtle and was lost in the motion of the whole head as it zoomed and rotated.

Upping the extent of the smile didn’t cut it. Exaggerating the mouth motion looked too, well, exaggerated. And, besides, the smile was for the Assassin, no one else, and needed to be almost imperceptible. Instead, I got him to blink.

Blinking involves the covering of an eyeball with the eyelid. Again, since I had labeled my layer previously, it was only a matter of finding it on the right hand layer panel, clicking the little red man to begin animating, grabbing the waypoints and closing them together, then opening them up again.

Now, a normal, natural blink is very fast indeed. A Step / Constant waypoint certainly looked like a blink, but, at only a single frame, the animation was just too flashy. Instead, I used a Clamp to animate in and out, but over only a few frames. The result is that the eyelid closes rapidly, but not so rapidly that it’s lost on the viewer.

If anything, it’s slow enough to give the Assassin the air of being cool, calm and calculating, which is exactly what I was after.

Gradients

Without the use of thick lines to define my characters’ features, or any form of cross-hatching or shading, I had to rely on the slabs of colour of the regions. Not terrible. Not great. You don’t get a lot of depth out of it. Or mood. Or ambiance. This is where gradients can help.

Taking the worker scene, it looked far too bright and airy, not at all like the confused, claustrophobic world into which the Bullet was born. To bring the focus back onto the worker, and provide a narrowness of view, I used a radial gradient over the top of the worker group, running from transparent to a dark red on the edges.

The radius of this layer, like pretty much everything else, can be animated. This way, the field of view grows and shrinks as the Bullet travels along, obscuring the image. I did apply a fish-eye, or sphere distortion, which added to the confusion, but I pulled it: it was just becoming too confusing.

To aid the idea that the Worker was near a furnace or a boiler, I applied a linear gradient, which I labeled ‘Heat Flare’ across his face to give it a rosy hue. I did something similar in the next scene, the Metamorphosis, to have the Bullet move from a hot red area to a cooler grey one, animating the endpoints and colours of the gradient as the scene progressed.

Text

Lastly, I had to decide between voice-overs or text. I have a microphone on my webcam, and another that I can plug in the back of the box. Neither, I discovered, was suitable for recording clean, crisp voice. In fact, I think I’ll have to get onto the whole sound portion of this clip in another post. In any case, I decided upon text to display contextual snippets.

To do this, simply add in a Text Layer. Type in the text as the ‘value’ and, Presto! You have words. I imagine one might want to animate words in or out, or type one letter at a time, but I went for a simple fade in / fade out option.

Changing the font is a tricky matter, though. You need to know the name of the font that you’re after. I opened up Open Office and scrolled through the fonts to find the one I was after, but the Windows Font Viewer will do the job. Put the name, verbatim, into the font family field and that’ll do the trick.

Because fonts behave like vectors, they’ll scale and rotate very nicely without all the pixelation.

Can you add a gradient to your text? Of course! Can you use your text to define the alpha channel of an underlying layer? Definitely (and how cool would that look?). The only real issue I found with text is that the rendering gets a bit jumpy if you try to animate the size. Maybe non-integer values aren’t suitable for the rendering engine, but I’d only be guessing. Everything else is fair game.

But an animation isn’t all just visual. In my next updates I’ll go over the music and sound.

The Bullet Animation – Animating

The last post yammered on about how cool Synfig is. For a dude like me who hasn’t the training, the cash or the patience for the professional stuff, it does the job admirably and there’s a whack of stuff in there that I haven’t even had the chance to look at yet.

Enough already!

Alright, alright, I’m getting to it. With my vector images ready to go and a little practice under my belt, I was ready to try and get a scene in motion. I started by importing the svg into Synfig. Go to File > Import and select your file (you can use this to bring in jpg and png images as well). This will make a group layer that contains a bunch of other layers, one for each layer in the original file, and inside each of those is a layer for each path.

If you’ve labeled your layers in Inkscape, they won’t have these labels when imported into Synfig. No worries, just spend a minute to select and re-label those layers – a stitch in time and all of that.

Now, one issue I did come across was that regions that had bits that had been subtracted (I’m talking annuli, doughnuts, holes, cutouts) still had the paths present, but they hadn’t been subtracted, like you can see in the image on the left.

The cutout in the Worker’s glass and head brace was naturally there to let him see the Bullet as it goes past. Only thing is, the import left the cutout as the same colour and composition style as the rest of the headgear, so I ended up with a blast shield over the lenses.

One workaround for this, I found, was to select the layer that was to be the cutout, and set its composition to subtract the alpha from the underlying region. This will mimic the hole punch effect of one region upon another.
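
The hole-punch effect is easy to picture one pixel at a time: the cutout layer’s alpha is subtracted from the layer beneath, clamped at zero. A little sketch with made-up values (alpha runs 0.0 to 1.0; this is not Synfig’s actual code):

```python
# Sketch of the "subtract alpha" workaround: the cutout's alpha is
# subtracted per pixel from the layer beneath, clamped at zero, which
# punches a transparent hole through the region.

def subtract_alpha(base, cutout):
    """Subtract the cutout's alpha from the base alpha, clamping at zero."""
    return [max(0.0, b - c) for b, c in zip(base, cutout)]

# A row of pixels across the headgear (fully opaque), with the lens
# cutout opaque in the middle:
visor = [1.0, 1.0, 1.0, 1.0, 1.0]
lens  = [0.0, 1.0, 1.0, 1.0, 0.0]

row = subtract_alpha(visor, lens)   # hole punched through the middle
```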

To add a little bit of depth, I also took the opportunity to add in a ‘lens’ layer, merely a circle that colourised everything beneath it, so as to give the skin and eyes an unnatural, soft, blue hue.

So there’s a little bit of tidying up to be done when bringing in your SVGs, but it’s not a killer. It gives you a chance to get everything prepped and, more importantly perhaps, the incentive to fiddle around with the settings to see what flies.

Animate it!

Synfig lets you animate the properties of your layers by adding waypoints for those properties. The style of each waypoint affects how the engine calculates the in-between values, and each waypoint can have a different style for its in and its out. For example, you can use a ‘Linear’ waypoint (the yellow one) to transition evenly from one state to another, or opt for the more flowing ‘Clamped’ to give a smoother lead in or out. There’s also TCB, which is pretty cool but hard to control.

[Image: Synfig5]

There’s also the Step (constant) waypoint, which you can use to transition a state instantly. This comes in handy when you want to make something, say, disappear or reappear, or change from one colour to another in the blink of an eye. The best way to figure these out is to muck around with them and get a feel for how they behave.
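If you like seeing the maths, here’s a rough sketch of what the engine does between two waypoints. This is not Synfig’s real implementation, just the idea behind the Linear and Constant styles:

```python
# Rough sketch of the Linear and Constant (step) waypoint styles.
# t runs from 0 (at the first waypoint) to 1 (at the second).

def linear(t, a, b):
    """Slide evenly from a to b as t goes 0 -> 1."""
    return a + (b - a) * t

def constant(t, a, b):
    """Hold a, then snap to b at the very end (step interpolation)."""
    return b if t >= 1.0 else a

# A property going from 0.0 to 10.0 over the span between two waypoints:
print(linear(0.5, 0.0, 10.0))    # 5.0 -- halfway there
print(constant(0.5, 0.0, 10.0))  # 0.0 -- still holding the first value
```

Clamped and TCB use smoother curves instead of a straight line, but the principle is the same: the engine fills in every frame between your waypoints for you.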

For one of my scenes, I decided to have my characters zooming in from the sides, scaling from little to really big, and fading in from invisible to visible and back to invisible again. I’ll pick on the Foreman to give you an example. I’ve taken a shot just a frame or two after my first waypoint so you can get a feel for where he starts off: top left, barely visible and tiny.

With the little man set to green, I prime my layer to be where I want it to be at the start.

[Image: Synfig6]

Then, I click on the green man to turn him red. This means I’m in ‘animating’ mode, and I can create waypoints. Moving the timeline forward to an applicable spot, I can drag the balls on the graphical layer to position it where I want it to be at this time. This will create a waypoint for the position.

You can see these waypoints in the bottom panel. Each will have a graphic for the ‘in’ portion and one for the ‘out’. So you can, for example, linearly transition something in, then apply a constant on the out.

To change a waypoint, you can drag it left or right to set the frame, or right-click to alter the type of its in/out/both, duplicate it, remove it, and so on. Notice, too, that the waypoints sit in line with the properties they are animating. In the example there, the Amount (think opacity) is animated with clamped points from 0.0 to 1.0 and back down to 0.0 again, to fade in and fade out. The Transformation (position, scale, rotation) can have a completely separate set of waypoints to follow.
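That fade-in/fade-out on the Amount property boils down to evaluating a little list of waypoints at every frame. Here’s a sketch of the idea, assuming plain linear interpolation (the clamped style is smoother, but the principle is the same); the frame numbers are made up for the example:

```python
# Fade-in / fade-out driven by waypoints: (frame, amount) pairs.
# Amount 0.0 is invisible, 1.0 is fully visible.

waypoints = [(0, 0.0), (12, 1.0), (48, 1.0), (60, 0.0)]

def amount_at(frame, points):
    """Evaluate the animated Amount property at a given frame."""
    if frame <= points[0][0]:
        return points[0][1]
    for (f0, v0), (f1, v1) in zip(points, points[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + (v1 - v0) * t
    return points[-1][1]          # past the last waypoint: hold its value

print(amount_at(6, waypoints))   # 0.5 -- halfway through the fade-in
print(amount_at(30, waypoints))  # 1.0 -- fully visible
print(amount_at(54, waypoints))  # 0.5 -- halfway through the fade-out
```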

[Image: Synfig7]

To resize or rotate or move the layer, you can just grab the little balls on the screen or, if you want precise control, set the numeric value directly in the property-value window on the bottom left. This is handy if you want to perform precise motion, like the rotation of gears or the path of a conveyor belt.

This is how I got the bulk of my animation covered, but there are still some facets that I’d like to cover, namely animating paths, or objects within other objects, as independent entities.

Say what? Just bear with me, it’ll make sense in the next post.

The Bullet Animation – Synfig

In my previous post, I showed you how I turned my sketches into vector graphics, ripe for insertion into Synfig.

When I first opened Synfig Studio, I didn’t know where to start. There were panels and boxes and buttons and, yeah, no idea. I went online and looked up a tutorial on getting started. It covers the basics and a few gotchas, so it’s worthwhile sitting back and absorbing the info, even if it’s just to get the general idea of it all. Don’t skip any bits, even if they seem obvious, because you might miss out on an important detail.

Like what?

[Image: Synfig1]

For example, you’ll notice that there is a little green man in the bottom right of the main panel which, when clicked, turns red. It also highlights the main panel in a thick, red border to reinforce the message. This defines the static versus dynamic state, and is fundamental when animating waypoints and keyframes.

It’s mentioned in the tutorial, so I won’t go over it here, but know that if something ain’t working for you, check which mode you’re in. Once I figured out what that was all about, one of many Aha! moments, things became a whole lot easier.

The next thing to know is that Synfig works a lot with layers. You can group layers together with a Group Layer (fancy that), and you can control where one layer sits with respect to another. For example, if you wanted to give your character a speech bubble, you could add a layer to hold the character and all of its paths, a layer above it for the speech bubble, and a layer above that for the text.

But Synfig has more cool features than just that. It can also provide blur layers (radial, motion and Gaussian) to give a sense of movement or depth of field. Just add the layer above the layers you wish to blur, adjust the amount, and you’re ready to rock.

Animating Properties, not Pixels

A groovy thing that took me a good while to get my head around was the ability to animate the various properties of a layer over time. My original concept was that I would need to move things about manually, doing it all frame by frame like claymation. Not so. In fact, a lot of the hard work is removed once you figure out how to use waypoints to define the path of a property.

For example, you could change the starting angle of a conical gradient layer in a smooth sweep by setting waypoints and letting the Synfig engine do the maths to map smoothly between the points on a frame-by-frame basis, a process called tweening. In fact, almost any numeric property of a layer can be gracefully animated, including the starting and ending colour of a gradient, the rotation or scale of a set of points, even the opacity!

Not only this, you can also set the Blend Method for your layers, allowing you to apply the layer as a screen to the underlying layers, or add/subtract/multiply/divide, or burn or dodge or colourise.

One of the features you might be interested in before you get started is that many of the value Types can be converted. For example, you can change from a real to a random or an integer, or change a vector to a radial composite. Oh yay, maths again. It’s kind of inescapable, but it does allow you to do some very interesting things, especially when you tie one variable to another variable. If you are interested, go and have a look at some of the examples posted by Synfig gurus.

[Image: Synfig2]

So why did I go to all the trouble of converting my sketches to vector graphics? Because Synfig plays very nicely with vectors and paths, allowing you to animate any of the path features easily. I’ll get into how I imported my characters in the next post.

There are a few features that I wish I’d learnt about before I finished up, as one might expect. These include the use of bones to aid complex motion, onion skins and Time Loop layers. There is even the option to have multiple canvases with independent animation within a project, which, I reckon, would be way cool for things like lip-synching, twitches, eye movements, gestures, etc. If I’m ever given enough time to make another animation again, I’m going to be looking very carefully at these features.

Lastly, you don’t have to work only with vector graphics. You can import PNGs and JPGs to help give you backgrounds and all of that, and they play nicely with the whole layering concept. There’s even the ability to add a sound layer so you can insert noises or music over the top of your animation! What more do you need? (Professional animators, please don’t answer that.)

What can you take away from all of this? With Synfig Studio under your control, the world is your oyster.

In my next posts, I’ll show you how I composed a scene, including where I think I went wrong, where I know I went wrong, and where I could do it better.

The Bullet Animation – Vector Graphics

In my previous post, I showed you how I sketched up my characters to bring them into a digital format.

The problem is that they were still unsuited for animation in Synfig. I had to convert them from raster images into vectors. But how? Enter Inkscape stage left (inkscape.org). I had downloaded this on a previous occasion, toyed around with it and put it away because I had more pressing, important issues to attend to. Work is like that. Anyway, I’m happy I revisited it because it’s the bee’s knees. It can convert bitmap images into regions for you, it can apply paths in layers, it can fill with gradients and stroke with different styles, and you can even muck about with opacity and geometric shapes and…

Awesome, so…

So, I inserted the image into Inkscape as a layer, with the result as shown. Yeah, it looks scrappy, but it does get better. I set about working the lines into paths, tracing over the top of them. Then I noticed that Inkscape has a layering facility. A bright idea struck me (I still have the mark) and I worked at breaking the image up into logical parts.

[Image: AssassinVector1]

After a couple of faces, I got into a pattern of figuring out all the different layers and regions of the eventual picture and added each of these as a separate layer. So, in the example of the Assassin here, there’s the hat, the glasses and his coat, along with the scarf. Furthermore, he has skin and hair and, on the skin, he will have skin in shadow and skin in light (think old-school, silver salt photography).

[Image: AssassinVector2]

The skin will appear underneath the hair and the hair underneath the hat. Get the idea?

With each of these as layers, it was then only a matter of sketching out the relevant outlines over the top of the picture to create paths. These can then be drawn and/or filled however you wish. To make life easier, I set the opacity of each layer to 50% so I could continue to use the underlying sketch as a guide.

I recently discovered that I could achieve the same thing if I locked the sketch layer, placed it at the top, and set the opacity of that layer to 30%. Eh, Pot-ay-to, Pot-ah-to. Got there in the end.

I chose not to have an outline on the regions, preferring instead to let the colours do the work of definition. I like line art, but after a bit of experimentation with lines on and off and at different thicknesses, I opted to go for fill only.

Darn it, now that I look at that partially coloured sketch, I kind of like the scruffy black-lines and washed up colours. Kind of like water colours. Storing that one in the back of my head for next time.

[Image: AssassinVector3]

Continuing on, filling in layer by layer, I ended up with the images that I could then use within Synfig. I reckon I could’ve spent longer, but I had time limits imposed on my venture. After all, I’m supposed to be writing Hampton Court Ghost.

These images can be exported as .svg files and used as scalable vectors in a bunch of programs. Inkscape has a bundle of really cool features that I haven’t had a chance to play with yet. I was going to get all fancy with some of the plugins I did get to play with (there are some really cool plugins!), only when I tried to import the resulting SVG file into Synfig, there were some issues. For example, regions that had bits knocked out of them, like doughnuts, came in filled.

I’m not entirely sure, but I think there’s a slight disparity in the current iteration of Synfig when importing SVG files. It’s not a show stopper, though. I’ll get onto the topic of importing into Synfig in a later post.

Neat, huh?

[Image: ForemanVector1]

There you have it. I repeated the process for my various characters, even for the Tester (who didn’t make it into the reel), and ended up with the pieces that I needed, ready to go. This really was a neat way to get my faces ready for animation:

Oh, whoops! I made the Foreman’s hair too dark. That’s OK, just select the hair and change the fill, adjust the hue, the lightness, the saturation.

Drat! I made him too small! Not a problem, being a vector, it’ll scale up or down without any loss in quality.

Blast! I needed a larger chin, the hat’s too short, the eyes are all wrong, I want scruffier hair. All good, just grab the path tool and add, remove or tweak the points until you’ve got it the way you want it.

That’s one of the beautiful things about this: You can add more layers to add more detail, add more points to define a better edge or, conversely, remove points and layers to simplify and posterise. I got into a rhythm of defining small paths, using the union option to join the regions together, then simplifying the path to get to a more cartoon-like style.
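Inkscape’s Simplify does something much fancier under the hood, but the spirit of path simplification can be sketched with the classic Ramer-Douglas-Peucker algorithm: throw away points that sit close enough to the straight line between their neighbours. The wobbly points below are invented for illustration:

```python
# Ramer-Douglas-Peucker path simplification: keep only points that
# deviate from the straight line between the endpoints by more than
# the given tolerance.

def _point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    # Twice the triangle's area divided by the base length gives the height.
    return abs(dy * px - dx * py + bx * ay - by * ax) / (dx * dx + dy * dy) ** 0.5

def simplify(points, tolerance):
    """Recursively drop points that don't add meaningful detail."""
    if len(points) < 3:
        return points
    # Find the interior point furthest from the endpoint-to-endpoint line.
    dists = [_point_line_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= tolerance:
        return [points[0], points[-1]]   # everything in between is noise
    left = simplify(points[:i + 1], tolerance)
    right = simplify(points[i:], tolerance)
    return left[:-1] + right             # stitch halves, drop shared point

wobbly = [(0, 0), (1, 0.05), (2, -0.04), (3, 0.02), (4, 0)]
print(simplify(wobbly, 0.1))  # [(0, 0), (4, 0)] -- the wobble is gone
```

A bigger tolerance means a flatter, more cartoon-like line; a smaller one keeps the scruffy detail.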

Creating the slug and the shell of the Bullet was the easiest of the lot. Being geometric in nature, Inkscape’s path tools made short work of it. I added another layer for the casing, and created a couple of ‘shines’ but left the gradients to be added after I imported it into Synfig Studio.

In my next few posts I’ll go through how I approached animating the scenes, you know, the ones I did way back then, and some of the problems I came across.

The Bullet Animation – Artwork

In my previous post I spoke about how I was making an animation as a promotional video for The Bullet and I got as far as laying out the scenes and getting the timing right.

For each scene I needed to get something to animate. Pictures, right? Right. Back in the old days (did I just say that?) I used to use some software that came with the Genius Mouse that allowed one to draw, fill, cut, etc. With this I could sketch an outline on the screen using a bunch of connected lines, then apply a fill and, presto! Art! I could save them in PCX format and, well, that was about it.

Enough Reminiscing!

My first thought, when approaching the task of drawing, was to open up Paint and do pretty much the same thing. Paint has come a long way from its 3.1 days (unlike Notepad, but then Notepad++ fills that glaring void), so I wasn’t too worried about being able to get something knocked up. After a couple of strokes, though, I realised that it wasn’t quite suitable for my purposes.

Why not? Because drawing a picture as a bunch of pixels doesn’t lend itself to scaling or rotating or shearing without a lot of pixelation or tearing. Not only that, I freestyle draw a whole lot better with a pen or a pencil than I do with a mouse. So I made a plan: I would draw my characters freehand, take a picture of them with my phone, then convert them into some appropriate format. Which format?

Well, it turns out that the format I chose influenced the style of drawing. After reading up on Synfig’s tutorials, vector graphics (as opposed to raster) are ideal for 2D animation, since the images are a bunch of instructions rather than a bunch of pixels. Without getting all techo, the image can be rotated or scaled or pinched or whatever and it won’t suffer the same fate as a bitmap image. The other really cool thing about vector graphics is that they behave a lot like the old painting program I used to use: the image is built up from a set of outlines or shapes (paths, I think the lingo is), each given a stroke and a fill, and away you go.

So I put the mouse down and picked up my pencil, sat down at the kitchen table and drew the characters I was after.

Sketching

I had to search through a few books and online to find the right kind of face for the job. Then it was a matter of sketching it onto some paper, rubbing and scrawling and positioning the eyes until I got what I was after.

[Image: ForemanRawSmall]

I started with the Foreman, the dude with the cap and moustache, then went onto the Tester, the Courier (neither of which ended up in the final feature) and the Boss.

Before getting too far into it, I took a copy with my phone’s camera, transferred it over to the computer and opened it up in GIMP (www.gimp.org) to make it suitable. I desaturated it, increased the contrast and fiddled with the levels to get it into the form you see here.
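Those GIMP steps can be mimicked in a few lines of code, if scripting is more your thing. Here’s a plain-Python sketch (no GIMP, no image library) on a tiny made-up ‘image’ of RGB tuples: desaturate each pixel to a grey value, then stretch the contrast so the pencil lines go dark and the paper goes white.

```python
# Desaturate-and-stretch, the poor man's GIMP, on a toy 3-pixel image.

def desaturate(pixel):
    """Average the channels -- GIMP offers fancier modes, this is the simplest."""
    r, g, b = pixel
    return (r + g + b) // 3

def stretch_contrast(greys):
    """Remap greys so the darkest becomes 0 and the lightest 255."""
    lo, hi = min(greys), max(greys)
    if lo == hi:
        return greys
    return [round((g - lo) * 255 / (hi - lo)) for g in greys]

photo = [(200, 190, 180), (90, 80, 85), (210, 205, 200)]  # paper, line, paper
greys = [desaturate(p) for p in photo]
print(stretch_contrast(greys))  # [223, 0, 255] -- the line goes black
```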

So that meant I had a rough, digital sketch on my box. Yippee. Doesn’t look much, does it? It needs colour, of course, and refinement, and a solid tidying up. The chin isn’t as strong as I would like, and the cap bulge isn’t in the right spot.

[Image: MerchantRawSmall]

This is where the whole business of turning the image into a vector affects the drawing style. Why? Because I’m not sketching to perfection, I’m sketching to get an outline. I didn’t need to colour in the picture. I could have left the moustache unshaded, even though it helped visually, since, when I make the moustache a region, I can colour it any way I want. As you can see from the sketch, the hat has some rough shading, the chin is darkened, the hair is filled, all unnecessary.

[Image: AssassinRawSmall]

So when I got back to making the others, I concentrated more upon the outlines of the elements within the image, and the regions of shade.

The Merchant, the bald guy with the awesome chops, has his features marked out like the Foreman does, but there’s a line running from his chin, weaving up past his nose and around the left side of his head, marking a region of shadow or darker skin. I shaded his chops to help out with the visuals for later, but, again, this was unnecessary.

[Image: ClientRawSmall]

This is even more pronounced in the Assassin, with the glasses and stovepipe hat. You can see his hat just has a rectangle marked out for the ‘shine’, and his scarf and collar are outlines only.

By the time we get to the Client, it’s all outlines and regions. No facial hair for him. Just a warm cloak and a decent hat. That’s the kind of guy he is.

So, to wrap up, I sketched out the characters that I wanted on paper, photographed them, downloaded to my machine, stripped the colour and increased the contrast to get a set of outlines that I could use for the next step.

Stick with me. In my next post I’ll go over how I converted these images into vector graphics that I could then use in the animator.

The Bullet Animation – Conception

I realized only a few months ago that, whether I like it or not, promotion is a part of being an independent author. Like the saying goes, if you don’t blow your own trumpet, no one else will do it for you. So this next series of posts could also be titled, “How I learned to blow a trumpet”.

A big shout-out and thank you to Erman for giving me the inspiration to make a video for The Bullet. His suggestion to make an animation sparked in me a memory of a former interest. I had, back in the days of bulletin board systems (BBS’s, remember those?), 2400 bps modems and 5 1/4″ floppy drives, dabbled in animation and music, but my experience was frustrated by the poor interfaces (ASCII based), slow 286 CPUs, no sound card and a small hard drive.

It’s 2015, and we’ve come a long way, so, I considered, maybe I’ll give it another crack. I put my digital pen down and started poking around on the net for ideas and software. Good thing I had a decent supply of coffee! From low level to high, I considered my options. Not wishing to delve into 3-D modelling, nor draw every frame / cell by hand like I did back in the good ol’ days, I skirted past those options and settled on Synfig (www.synfig.org). A couple of demo videos on YouTube later and I was convinced.

The next problem was figuring out which book I was going to pick on. Almost immediately I decided upon The Bullet, since it lent itself nicely to 2-D animation, what with the old-skool Steampunk thing going on, and a couple of scenes jumped straight into my head.

Inspired, I got cracking.

Of course, there’s more to it than that. My next few posts will relate the process I took to get from an idea to the screen.

After playing around with Synfig for a bit, getting a feel for how it operates, I turned my machine off and picked up a piece of paper and a real pen. I’m a big fan of pen and paper for ideas. I’ve tried using tablets and styluses and finger scrawls but, in the end, I just end up frustrated. There’s just something about the freedom that nice paper and a good pen affords.

Anyway, although it sounds obvious, I had to take a step or three back and decide what it was that I actually wanted from the video, what the message was, how it was to appear, how long it would be, how the viewer was to see it. Important stuff. Boring stuff. Stuff that was getting in the way of actually making something. Yet I knew that it would be fruitless if I didn’t plan properly.

I had these grand ideas whirling around in my head, some stupendous, others just stupid. A full-blown twenty-something minute video just wasn’t feasible. What was this video for, anyway? Telling the entire story? No. Reading out a slab or two? No.

It was to be a short promotional video, enough to give a feel for the book without giving too much away.

To this end, I opted to keep it simple.

I’m not a fan of videos that go on and on, or have a massive lead-in time, so, running a virtual demo inside my head, I whittled the scenes down to four or five to fit within a self-imposed time limit of roughly a minute. To visualise the scenes and test the timing, I scrawled out a timeline on some paper, complete with markers to indicate where things happened, and worked at it (scrapping some unnecessary scenes) until I got it down to a concise flow.
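The paper timeline was really just budgeting arithmetic. Here’s the kind of sum I was doing, sketched in Python; the scene names and lengths below are made-up stand-ins, not the actual timings from my notes:

```python
# Does everything fit in roughly a minute, and where does each
# scene start at 24 frames per second?

FPS = 24
scenes = [("Genesis", 12), ("Refinement", 14), ("Testing", 10),
          ("Journey", 20), ("Destiny", 8)]

start = 0
for name, seconds in scenes:
    print(f"{name:12s} starts at frame {start * FPS}")
    start += seconds

total = sum(seconds for _, seconds in scenes)
print(f"Total: {total} seconds")  # 64 -- close enough to a minute
```

Crude, but it tells you straight away when a scene has to be squeezed or scrapped.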

The end result is fairly consistent with that plan, and I think this planning stage was crucial to getting the thing off the ground. Of course, a video isn’t really a video without sound. I marked out a few key sounds that needed to be included, noted roughly where they needed to go on the timeline, and put them on the back burner. But I’ll get onto the sound and the music in a later post.

There was a lot of squeezing here and poking there, to make sure that each scene was given a fair go. In the end, I used the conceptual stages of the book, rather than the chapters as I had originally planned, to create the scenes. A great deal of emphasis is placed upon just the creation of the Bullet, so this would naturally require a lot of detail, thus the first three scenes are devoted to the genesis and refinement of the Bullet.

The confusion and chaos on its journey was going to be almost a minute long but, in the end, I got it down to twenty-odd seconds. Why? Because this wasn’t a movie; each detail of the Bullet’s journey didn’t need to be exactly plotted. Instead, the feel of the story was what was needed. So players like the Boss, the Courier and the Tester aren’t shown, but this isn’t really an issue; in fact, I think it was necessary. Too much detail can be as bad as, if not worse than, not enough.

Finally, the realisation of the Bullet’s destiny, and its relationship to the Assassin and the Target, was to be the climax. Of course, there is no mention of who exactly the Target is, since that would skew the reader’s opinion, so I had to be careful not to put too much emphasis on the character’s visuals beyond what might be gleaned from the book.

In the end, I wound up with a bundle of pages, the first and neatest of which is shown below. The others look a lot like this, only there’s a lot more furious scribbling, crossing out, annotations, arrows (lots and lots of arrows) and times.

[Image: BulletScenes]

In my next post, I’ll show you how I got my characters onto the screen with the aid of pencil and paper.