Closed captioning is text that represents the speech and sound in a video, usually displayed at the bottom of the screen as white text on a black background. Closed captions are in the same language as the speech in the video. In addition to the speech, closed captioners also record any sounds that are important to the plot so that viewers can follow along. Closed captioning technology began in broadcast television in the UK and US in the 1970s. Closed captions are required by law for some media to ensure that individuals who are deaf, hard of hearing, or neurodiverse have access to publicly available information. By creating a text representation of the audio, video creators can make their content more accessible and enjoyable for wider audiences.
How are closed captions made?
In general, captioning starts with transcribing the speech in the same language as the original video, along with any sounds that are important to the plot. Imagine watching a movie or TV show with the sound off: the climax of a film could be ruined if an important sound gets missed! That is why closed captioning begins with capturing the speech in a video and determining which sounds are important enough to include in the captions.
How does a captioner know what to include? A sound should be included if it moves the plot along, clarifies information, or reveals something important about a character. A knock at the door, the start of a car’s engine, or a ringing phone can all be important sounds that move the plot along. Tagging pieces of dialogue with the character’s name can clarify who is saying what in a scene. And if a character suddenly changes accents during a big twist in the plot, making sure the audience knows about the change is part of making the video accessible.
After all of the information is written down, captions are synchronized to the timing of the video, and then they are ready to be seen by the world!
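As a sketch of what a finished, synchronized caption file can look like, here is a short example in WebVTT, one common caption file format. The timestamps, speaker name, and sound cues are invented for illustration:

```text
WEBVTT

1
00:00:01.000 --> 00:00:04.000
[knock at the door]

2
00:00:04.500 --> 00:00:07.000
DETECTIVE: Were you expecting someone?

3
00:00:07.200 --> 00:00:09.000
[phone ringing]
```

Each cue pairs a start and end timestamp with the text to display, which is how the captions stay in sync with the video. Notice how the file mixes spoken dialogue (with a speaker tag) and bracketed sound cues, exactly the kinds of information described above.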
Where do closed captions come from?
In the early days of CC, closed captions were only available through decoder boxes sold separately and made to plug into televisions in people’s homes. Captioning information was broadcast alongside the video signal, decoded, and displayed over the picture on the television screen. By the early 1990s, decoder circuitry became a built-in feature of standard televisions.
What are Subtitles for the Deaf and Hard of Hearing?
Subtitles for the Deaf and Hard of Hearing, or SDH, are very similar to closed captions. Both share the same intention: to record the speech and important sounds in a video. Both SDH and CC play along with the video so the text matches the sounds. But while they have similar intentions and contents, they come from different media technologies. While closed captioning comes from broadcast television, SDH originated in the DVD industry. They are encoded differently and can come in different styles and subtitle formats.
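As a sketch of what one of those other subtitle formats looks like, here is a cue in SubRip (SRT), a format widely used for DVD-era and downloadable video. The timestamps and text are invented for illustration; note the numbered cues and the comma before the milliseconds:

```text
1
00:00:01,000 --> 00:00:04,000
[knock at the door]

2
00:00:04,500 --> 00:00:07,000
DETECTIVE: Were you expecting someone?
```

The underlying idea is the same as in broadcast captioning, but small differences in encoding and syntax like these are why subtitle files from one ecosystem often need conversion before they work in another.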
Why are they called “closed” captions?
Closed captions are captions that can be turned on or off on the video. This was originally because the captioning information was sent to a separate decoder box which then displayed the captions over the video program. If captions can’t be turned off, they are called “open captions.” Read on through our article library to learn more about the differences and usefulness of both open and closed captions.
What about web captions?
While we have moved beyond TV and DVD technology in our media consumption, we still see their influence in our online spaces. In many video players, there is a little “CC” button in the toolbar at the bottom of the video. This button signals to viewers that captions are available (unless the button is disappointingly grayed out). The menu for that button can offer captions as well as translations into different languages. Closed captioning is the standard for video accessibility, and it’s exciting to see media go above and beyond to make videos accessible in other languages too! For non-native speakers, people working past language barriers, and people curious about global media, it can make all the difference. It is an exciting time for our mission of supporting accessible media ecosystems for all.
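On the web, captions are typically attached to a video as separate text tracks. A minimal sketch in HTML, using invented file names for illustration, shows how a player can offer both captions and translated subtitles in that “CC” menu:

```html
<video controls src="lecture.mp4">
  <!-- kind="captions" marks a track that includes sound cues,
       not just dialogue; "default" turns it on automatically -->
  <track kind="captions" src="lecture.en.vtt" srclang="en" label="English" default>
  <!-- additional tracks can carry subtitles translated into other languages -->
  <track kind="subtitles" src="lecture.es.vtt" srclang="es" label="Español">
</video>
```

The player builds its caption menu from these tracks, which is why a single video can offer same-language captions alongside subtitles in several languages.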
The Benefit of Using Amara Captioning and Subtitling Services
For those seeking high-quality, accessible subtitles, Amara provides collaborative and AI-enhanced tools that streamline the captioning and subtitling process, ensuring accurate translations, synchronization, and accessibility compliance. With its user-friendly subtitling platform and award-winning editor, content creators can efficiently produce multilingual subtitles that maintain the original meaning and tone of the dialogue, making global content more inclusive and engaging. To learn more, check out our Amara Enterprise Platform solutions.
If you’d prefer professional assistance with your captioning and subtitling, check out our Amara On Demand Professional services. Email us at client-services@amara.org and one of our Amara On Demand Project Managers will be happy to assist you in finding the perfect solution to take your audio and video to a global audience.
This article was updated on April 10, 2025.
