Producing our First Screencast
on Oct 19, 2017
It’s done! We produced our first screencast, in a very distributed fashion. Here is the video showing off our vision:
The rest of the blog post shows you the tools we used to take it from idea to finished video file.
From Idea To Script
Our current website is pretty static. At the same time there have been moments when we were not available to demo our product in person. Of course, a screencast can never replace a live demo, but it can be a good stand-in.
Before we got started we set ourselves some constraints: a hard limit of two minutes, while still giving a bit of background and showing all three parts of the product. Limiting ourselves to two minutes was harder than we thought. This became clear when we wrote a rough script of what to cover:
- Mini presentation of the company and the problem space
- Present empty UI
- Start writing queries
- Show label completion
- …
You get the idea. The problem: during a first dry run, the first bullet alone took 60 seconds. We needed more control over the timing, so we put everything into a table with rough timings:
| Time | Video | Voice |
|---|---|---|
| 0-30s | Mini presentation of the company with slides, logo slide, ... | At Kausal, we’re transforming observability... |
| 30-45s | Present empty UI, list of available metrics, ... | First off, let me show you how Kausal simplifies the way you write Prometheus queries, ... |
| ... | ... | ... |
This timed script was extremely helpful for coordinating during the next steps.
Recording Video and Voiceover
In true distributed fashion we tried to balance our workload: I recorded the video while Tom recorded the voiceover. For recording the screen I used macOS’ built-in screen recording, without audio. First I recorded the presentation part, giving each slide the appropriate time. As the final slide I included a demo link so that I could simply click to switch over to the browser. Once the browser showed up, I kept it on the screen for another five seconds, then stopped the recording.
Next I started a fresh screen recording of the application walkthrough in the browser. I worked naturally through the application while keeping an eye on the time. I did three runs in total and then sent the best one to Tom.
Tom then recorded the voiceover on his phone while watching the screen recording. He used his phone because it actually has a pretty good mic, which he could hold close to his mouth to cut out any background sounds. He did three takes as well and then sent his favourite back with some notes.
Putting It All Together
I dusted off the old iMovie and quickly put the parts together: The presentation video that ends on the click on the link, the walkthrough video aligned just after the click, and Tom’s audio track.
During the first run it became obvious that some timings were off. Tom’s notes confirmed this: he had spent more time on some parts and less on others, deviating slightly from the script timings.
To synchronise the two, I had to slow down some video segments and speed up the parts that Tom spent less time explaining. This was easy to do in iMovie: select a segment and apply a custom speed percentage. The final five seconds show a still of our logo.
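If you don’t have iMovie handy, the same speed adjustment can be sketched on the command line with ffmpeg. This is not what we used, just a rough equivalent; the filenames and durations below are hypothetical.

```shell
# Hypothetical example: a 20 s video segment has to fit 15 s of narration,
# so the video must play 20/15 ≈ 1.333x faster.
SEG_LEN=20    # seconds of video in the segment (hypothetical)
VOICE_LEN=15  # seconds of narration covering it (hypothetical)
SPEED=$(awk "BEGIN { printf \"%.3f\", $SEG_LEN / $VOICE_LEN }")

# setpts divides each frame's timestamp, so the segment plays faster;
# -an drops the (silent) screen-recording audio track.
# Hypothetical filenames; only runs if ffmpeg and the input file exist.
if command -v ffmpeg >/dev/null && [ -f walkthrough.mov ]; then
  ffmpeg -i walkthrough.mov -filter:v "setpts=PTS/${SPEED}" -an walkthrough_fast.mov
fi
```

The sped-up segment can then be laid over the voiceover track in any editor, just as we did with iMovie.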
That’s a wrap!