One of the coolest things about producing content for the web is the ability to cultivate a global audience. As our lives continue to orbit ever closer to the internet, we have begun to develop and maintain some of our strongest relationships online – and this has implications beyond just who is watching your videos. It should come as no surprise that some of us have begun talking about “remote collaboration,” and in our case, we put it into practice.
When I started assembling the cast and crew for the webseries I wrote and directed, “Steve’s Quest: The Musical,” I had a pretty clear idea of which people I wanted to work with. The trouble was that some of them live on the West Coast of the USA, and some are on the East Coast.
Because my show is animated, it isn’t absolutely necessary for the cast and crew to be in the same location (and I’m sure there are also situations where that’s not necessary for live-action series). So, I decided, with some trepidation, to come up with a way for my far-flung cast and crew to work together.
In the interest of sharing knowledge with anyone else who wants to make a webseries with a team scattered across the country, or maybe even the globe, here’s how we’ve been able to make it work from a technical standpoint.
The Music and Dialogue
Because our series is a musical, and songs generally need to be recorded and mixed separately from dialogue, this has been a pretty involved part of the process.
To record the instruments in the songs we’ve completed, I started by using a music notation program called Finale NotePad, which lets you write out the notes each instrument is supposed to play on a staff. I converted my Finale file into MIDI format, and imported the MIDI file into recording software called Pro Tools. The result was a file with a bunch of Casio keyboard / chiptune-sounding instrument tracks that we could add vocals and real instruments to later on.
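The reason this handoff works is that MIDI is a simple, well-specified binary format that almost every notation and recording program can read. As a rough illustration (this script is my own sketch, not part of our actual workflow), here's a stdlib-only Python function that reads the header chunk of a Standard MIDI File, which is enough to confirm what a notation program exported before you import it elsewhere:

```python
import struct

def read_midi_header(path):
    """Read the 14-byte header chunk of a Standard MIDI File.

    Returns (format_type, num_tracks, division):
      - format_type: 0, 1, or 2 (type 1 = multiple simultaneous tracks,
        which is what a multi-instrument notation export typically produces)
      - num_tracks: how many track chunks follow
      - division: timing resolution (usually ticks per quarter note)
    """
    with open(path, "rb") as f:
        chunk_id, chunk_len = struct.unpack(">4sI", f.read(8))
        if chunk_id != b"MThd" or chunk_len != 6:
            raise ValueError("not a Standard MIDI File")
        fmt, ntracks, division = struct.unpack(">HHH", f.read(6))
    return fmt, ntracks, division
```

A quick check like this can tell you, for example, that an export came out as a single merged track (type 0) rather than one track per instrument (type 1) before you bother importing it into your recording software.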
I live in Northern California, and the guitarist I recruited to play on the tunes (my brother Tim) lives in New York. So, we used Dropbox to pass the Pro Tools sessions to Tim, who then recorded the guitar tracks in his home studio, converted the rest of the instrument tracks (drums, piano, etc.) to some higher-quality MIDI sounds, and sent the session back to me.
Next, it was time to record the singing. Some of the cast members who live far from me don’t have Pro Tools (Pro Tools 11 can run you about $699 if you’re not upgrading from a prior version), so they couldn’t just add their vocals into the Pro Tools session Tim and I created.
To deal with this, I recorded an MP3 file of each Pro Tools session and sent it to the remote cast members, and had them import it into Audacity (free audio recording software you should definitely get if you haven’t yet). Then, as long as they were using a sufficiently high-quality microphone, they could lay down their vocals in the Audacity project while listening to the instrumental track I provided.
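One snag with collecting takes from several home setups is that exports can come back at mismatched sample rates or bit depths, which then need converting before they'll sit cleanly in the session. As a sketch of the kind of sanity check that helps (the function and thresholds here are my own invention, not a script we actually ran), here's a stdlib-only Python check for a WAV export:

```python
import wave

def check_vocal_take(path, expected_rate=44100, min_sampwidth=2):
    """Sanity-check a cast member's WAV export before merging it in.

    Confirms the sample rate matches the session and the bit depth is
    at least 16-bit (a sample width of 2 bytes), so the take won't need
    resampling or re-recording. Returns (ok, rate, sample_width, seconds).
    """
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        width = w.getsampwidth()
        seconds = w.getnframes() / float(rate)
    ok = (rate == expected_rate) and (width >= min_sampwidth)
    return ok, rate, width, seconds
```

Running something like this on each incoming file is faster than discovering mid-mix that one singer's track is at 22 kHz.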
For dialogue, the process has been similar: the cast members who lived close enough to me came into a studio and recorded their own parts (as well as reading the parts of the remote cast members) in a Pro Tools session. Then, we sent an MP3 of the session to the remote cast, and they sent back their dialogue parts in an Audacity project (which I edited somewhat to fit into the dialogue we already had).
Some film animators record their sound and dialogue first and then build the animation around the audio, while others take the opposite approach. For our purposes, it has made the most sense to start by recording the audio. That way, all our remote cast members need to do is send over an Audacity file with their vocals and dialogue, rather than having to go into a (potentially expensive) studio and record their parts while watching the animated visuals.
I then send MP3s containing the sound for each episode to our two animators, who are based in Atlanta, Georgia. They draft storyboards based around the audio, and once I approve them, they begin working on the animation.
The animators work with software called ToonBoom, which is, to some degree, capable of automatically syncing the mouth movements of an animated character to a dialogue track. To make this feature of ToonBoom work in the context of a character singing a song, the animators need me to send them two separate MP3s for each song: one containing the instrumental tracks, and the other with only vocals. That way, ToonBoom can sync the characters’ mouth movements to the vocals without getting confused by the instrument tracks.
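For the lip-sync to hold up, the vocals-only stem and the instrumental stem need to cover exactly the same span of time, so that positions in the vocal track correspond to the same positions in the full mix. As a hedged illustration of that idea (again, my own sketch rather than anything in our pipeline), here's a stdlib-only Python check that two WAV stems line up before they go out to the animators:

```python
import wave

def stems_aligned(vocals_path, instrumental_path):
    """Check that a vocals-only stem and its instrumental stem share the
    same sample rate and the same length in frames, so they stay in sync
    when played together and lip-sync timing matches the full mix.
    """
    def info(path):
        with wave.open(path, "rb") as w:
            return w.getframerate(), w.getnframes()

    v_rate, v_frames = info(vocals_path)
    i_rate, i_frames = info(instrumental_path)
    return v_rate == i_rate and v_frames == i_frames
```

If the stems drift even slightly (say, one was bounced without the count-in), the character's mouth will move ahead of or behind the music, so a check like this is cheap insurance.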
The animators then create a QuickTime file of the episode using ToonBoom, and send that to me via Dropbox. After a few rounds of comments, they send me the final QuickTime file, which I post to YouTube.
Because of all the e-mailing and file-sharing we need to do, the logistics of all this haven’t been easy to work out, and I’d like the next project I work on to be a bit less “location-independent.” But the upside of this approach has been that I get to work with the people I want, regardless of where they are. And these days, I suspect, no matter how close the cast and crew are to you geographically, there will probably be times when having someone work on your project remotely is necessary or more efficient, and it’s helpful to have a sense of how to handle that situation.