The Art of Closed Captioning

by Justin Sevakis

For years, dubbed anime programming has been shown on video streaming services in the US – but without closed captioning for the hearing impaired. While both anime publishers and streaming providers knew that this wasn't ideal for hearing-impaired fans, the general thought was that those fans could simply watch the subtitled version.

But as anyone who has tried to watch a subtitled anime on mute can tell you, those subtitles are often not enough to follow what's going on. Anime has a lot of inner monologues, off-screen dialogue and narration, and it can often be impossible to tell who's saying what. Hearing-impaired fans have been complaining about this for years, but now, with many streaming services enforcing new accessibility rules, companies are finally starting to do something about it.

Does dubbed anime HAVE to be closed captioned? In the US, that's a little murky. While FCC rules mandate that full-length episodes streamed online must be closed captioned, that rule only applies to shows that were broadcast on US television with captions – not series from other countries. Other laws, like the Americans with Disabilities Act, might conceivably apply, but would have to be tested in court. But regardless of whether it's legally required or not, the major streaming sites now require it, so it's finally getting done.

There's a reason anime publishers have been dragging their feet on this issue for so long: closed captioning is a huge amount of work and a lot of hoops to jump through.

My company, MediaOCD, has been getting our feet wet in the closed captioning business. We've had to learn a lot about the format and its history, so an article seemed like a good opportunity to share some of that info.


At its core, closed captions aren't all that different from subtitles: they're basically a script of text lines, with timecodes for when they're supposed to appear. But that's pretty much where the similarities between the two end.

Closed captions are REALLY old technology. The process was first developed by PBS stations and the FCC back in the mid-70s, and hasn't really changed much since. Basically, a single line at the top of a video signal (one that's normally cropped off by TVs) has a small amount of encoded digital data that comprises text. That data is slowly gathered by the decoder (usually part of a TV or cable box) until it's ready to display. As one caption is displayed, data for the next caption is loaded. Captions don't have a set duration or “out timecode” at which they disappear; if you want to clear the screen, you have to have another caption appear (which can be blank). 
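That "no out timecode" behavior is easy to model in a few lines. The following is a hypothetical sketch (not real decoder code, and all names are mine): each cue carries only a display time, and the only way to clear the screen is to push another cue whose text happens to be blank.

```python
from dataclasses import dataclass

@dataclass
class Cue:
    """A caption cue: a display time in seconds, plus the text to show.
    An empty text string acts as a 'clear the screen' cue."""
    at: float
    text: str

def screen_at(cues, t):
    """Return whatever caption is on screen at time t.

    Cues have no end time: the most recent cue at or before t stays
    up until the next cue replaces it (possibly with blank text).
    """
    current = ""
    for cue in sorted(cues, key=lambda c: c.at):
        if cue.at <= t:
            current = cue.text
        else:
            break
    return current

cues = [
    Cue(1.0, "Where... am I?"),
    Cue(3.5, "(NARRATOR)\nIt had been three days."),
    Cue(6.0, ""),  # a blank cue is what clears the screen
]
```

Here `screen_at(cues, 2.0)` returns the first line, and `screen_at(cues, 7.0)` returns an empty screen – only because of that deliberately blank third cue.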

The format is so ancient that it's bound by a number of seemingly arbitrary rules, all of which date from the analog days of television. There are only 14 lines for text on screen, and each line of text can only be 32 characters long. There's only one font available, and it appears over a black block background. When the format was updated for digital TV broadcasts, some newer features were added, such as multiple fonts, scalable sizes, colored and transparent backgrounds, and more accented and symbol characters. However, in order to maintain backwards compatibility, almost no captions actually use these features.
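To get a feel for that 32-character grid, here's a minimal sketch of wrapping caption text to fit it, using Python's standard textwrap module. The column count comes from the article; the function name is my own, and real caption software also has to worry about row placement and control codes.

```python
import textwrap

MAX_COLS = 32   # characters per caption row, per the old analog-era spec

def wrap_caption(text):
    """Break caption text into rows no wider than the 32-column grid."""
    return textwrap.wrap(text, width=MAX_COLS)

rows = wrap_caption(
    "Captions don't have a set duration or out timecode at which they disappear."
)
```

Every row that comes back fits the grid; a long sentence like the one above needs three rows.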

Most TV networks and movie studios maintain closed caption scripts for their entire library of content, and it's these caption scripts that are used for everything from modern TV broadcasts to streaming. And it's these ancient specifications that services like Netflix and Hulu have built their own caption/subtitle capabilities around. Most of them have the same technical limitations as captions did in 1976.


One of the more annoying aspects of all of this is that all dubbed anime was transcribed and timecoded at one point. As part of the dubbing process, every line was once neatly typed out and timecoded – often done painstakingly by hand – as part of pre-production of a dub script. But those weren't saved, so now everything has to be redone from scratch.

The first step to captioning is transcribing the whole show. This is exactly what it sounds like – you watch through the entire episode and write down every single line of dialogue. How difficult a job that is depends entirely on the show itself: slow, moody, atmospheric shows might only have 150 lines of dialogue and go pretty quickly. A hyperactive high-speed comedy can destroy you.

In addition to noting every line of dialogue, you also have to notate who's speaking when it's not clear visually – i.e., the character's back is turned, or they're having an internal monologue (and in anime, this can be a huge percentage of the dialogue). Sound effects and music cues also need to be notated when they're important to the show.
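A transcription record along these lines might carry a speaker name, a flag for whether the speaker is visible, and a flag for sound effects, with the rendered caption only naming the speaker when the viewer couldn't otherwise tell. This is purely illustrative – the field names, the parentheses/brackets conventions, everything here is an assumption, not anyone's actual house style.

```python
from dataclasses import dataclass

@dataclass
class Line:
    speaker: str
    text: str
    on_screen: bool = True   # is the speaker's mouth visible?
    sfx: bool = False        # sound effect / music cue, not dialogue

def render(line):
    """Render one transcribed line as caption text, adding the
    speaker's name only when it isn't clear visually."""
    if line.sfx:
        return f"[{line.text}]"
    if not line.on_screen:
        return f"({line.speaker.upper()}) {line.text}"
    return line.text

render(Line("Narrator", "Three days had passed.", on_screen=False))
# → "(NARRATOR) Three days had passed."
```

An on-screen speaker's line passes through untouched, while a sound effect like `Line("", "dramatic music swells", sfx=True)` comes out bracketed.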

After the entire show is transcribed, it must then be timed out. This part works just like subtitling – we copy the text script into a subtitling program and find each line's beginning and end point. We have to be very careful at this step, because the lines need to be on the screen long enough to be legible, and also have to change slowly enough so as not to overwhelm the ancient caption spec. We have a list of rules that each line has to follow: no more than 3 lines on screen at once, each line has to be up for a minimum of one second, and so on.
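Checks like these are straightforward to automate. Here's a minimal sketch covering just the two rules mentioned above – at most three rows on screen, at least one second per cue. The rule values come straight from the article; the function itself is a hypothetical, not the actual checker used in production.

```python
def check_cue(start, end, text, min_duration=1.0, max_rows=3):
    """Return a list of rule violations for one timed caption cue.

    start/end are in seconds; text uses '\n' to separate rows.
    """
    problems = []
    if end - start < min_duration:
        problems.append("on screen for under one second")
    if len(text.split("\n")) > max_rows:
        problems.append("more than three rows of text")
    return problems

check_cue(10.0, 10.4, "Wait!")
# → ["on screen for under one second"]
```

Running a pass like this over every cue in an episode catches the mechanical errors, leaving the editor free to focus on style and accuracy.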

This process is slow and laborious, and rife with opportunity for human error. For our first major caption job, I identified a handful of friends that seemed up to the challenge, and hired them to do the hard work of captioning. They did a bang-up job for the most part, but with so much going on (I had my usual DVD/Blu-ray work to do), I tried to make do with just spot-checking the episodes as they came in.

That was a mistake. There are so many little things that need checking with captions that I quickly discovered that they require an editor to go over the entire episode with a fine-toothed comb. I didn't have time for that at all. I ended up hiring Anthon Meyer, fellow nerd, to start unifying the various styles of the captioners and oversee their work more closely.

His biggest challenge has been trying to get everyone on the team working the same way stylistically. “Everyone has their own style. I'm Canadian, so I also had my own grammatical and formatting differences – spelling and punctuation.” Eventually, after researching published standards from the bigger captioning organizations, he was able to come up with our own style guide. He also has a multi-step process to go through each script, checking for things like timing, line length, and punctuation.

“One of the biggest challenges for anime captioning is that there's a huge amount of dialogue that's spoken while nobody's mouth is moving,” Anthon notes. “Their back is to the camera, or they're wearing a helmet, or it's just internal monologue or something. When you're watching and transcribing, it's hard to remember that unless you can see their mouth, it's not obvious who's talking without hearing the sound. You have to notate the character speaking, or it won't make sense.”

Once the scripts are done, they're converted to the final delivery format and sent to the client. 


Once a show has been closed captioned once, the scripts for those captions can easily be converted into any number of different formats, including the various formats used for online streaming. They can also be rendered out for use as subtitle tracks on DVD and Blu-ray.
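Conversion between delivery formats is mostly a matter of re-serializing the same (start, end, text) triples. As a sketch, here's one cue list written out as both SubRip (SRT) and WebVTT – two common streaming formats whose main visible difference is the timestamp separator (comma vs. period) and the WEBVTT header. The timestamp math is simplified; real converters also carry styling and positioning.

```python
def fmt(seconds, sep):
    """Format seconds as HH:MM:SS,mmm (SRT) or HH:MM:SS.mmm (WebVTT)."""
    ms = round(seconds * 1000)
    h, rest = divmod(ms, 3_600_000)
    m, rest = divmod(rest, 60_000)
    s, ms = divmod(rest, 1000)
    return f"{h:02d}:{m:02d}:{s:02d}{sep}{ms:03d}"

def to_srt(cues):
    """Serialize (start, end, text) cues as numbered SRT blocks."""
    blocks = []
    for i, (start, end, text) in enumerate(cues, 1):
        blocks.append(f"{i}\n{fmt(start, ',')} --> {fmt(end, ',')}\n{text}")
    return "\n\n".join(blocks) + "\n"

def to_vtt(cues):
    """Serialize the same cues as a WebVTT file."""
    blocks = ["WEBVTT"]
    for start, end, text in cues:
        blocks.append(f"{fmt(start, '.')} --> {fmt(end, '.')}\n{text}")
    return "\n\n".join(blocks) + "\n"

cues = [(1.0, 3.5, "Where... am I?")]
```

The same cue list feeds both writers, which is exactly why a show only ever needs to be captioned once.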

The shows we've been working on have been classic titles – ones that have been available on DVD for years. However, now that closed captions are becoming a normal part of English anime production, we're hoping that anime publishers will start including them on physical media releases as well. It may take some (okay, a lot of) additional work, but being able to better accommodate hearing-disabled fans is something that has been back-burnered for way too long.
