Whether you need to understand a quiet speaker, catch a verbal “Easter egg” in a cult film, or discern speech with an unfamiliar accent, closed captioning and subtitles are your friends while watching videos. We talked recently about the rich and storied history of subtitles: from operatic tradition to Weird Al music videos, subtitles and captions silently strengthen the connection between content and its audience.
But there is some debate online about what to call those little words at the bottom of your video. Some people call them subtitles, and some people call them captions. If you want to know why, then you’re reading the right article.
- Who calls it a “subtitle”?
- Who calls it a “caption”?
- Captions: the Instagram takeover
- What do subtitles and captions have in common?
- Origin of captioning
- Open captioning
- Closed captioning
- Captioning legislation
- Custom captions
- What’s in a name?
“Subtitle” is the most common term worldwide for text that accompanies video content. Subtitles can be in the same language as the video or translated into another language. That much holds true no matter which part of the world you are in (keep reading to find out more). The vocabulary debate arises with same-language subtitles, which some people refer to as “captions,” as you’ll see next.
Fun fact: The origin of the word “subtitle” comes in two forms: a secondary title to a written work and the textual version of dialog in a dramatic production.
“Caption” is the term used primarily in North America. Captions refer only to subtitles that are in the same language as the spoken video. Translated text, on the other hand, is called “subtitles” in North America too, just as it is worldwide.
Fun fact: The origin of the word “caption” is to take or seize. The term developed both a metaphorical meaning (capturing the meaning of a photograph) and a literal meaning (the seizure of tangible property).
The history of photo captions is longer than video captioning, and it seems like the term is making a return to its roots through the popular photo-and-video sharing app Instagram. The evidence is clear in Google Trends: “Instagram caption” is searched for 10 times more than “closed captioning.” And apparently, the right caption can make or break the popularity of an Instagram post.
As for the preference between the terms subtitle and caption, there has been a clear shift over the years in favor of “subtitling.” For deaf and hard of hearing viewers who need additional descriptions of audio information, the term Subtitles for the Deaf and Hard of Hearing (SDH) has become 4 times more popular than “closed captioning.” Even Netflix has moved to using “subtitles” to refer categorically to all on-screen text options. For movies in English, where available, Netflix lists two English options under its subtitle menu: one labeled “English” and one labeled “English (CC).”
At the beginning of this article, we described how the term “subtitling” is the most popular way to describe accompanying text information in the global video sharing environment. It refers to both same-language subtitles and translated subtitles. “Subtitling” is used in most countries around the world, with some exceptions. Countries that continue to use “captioning” include the United States, Canada, New Zealand, Australia, and the Philippines.
But terms change over time and it can be difficult to phase out deprecated terms when the laws, organizations, and user interfaces that govern word usage still contain those terms. Who knows how long we will continue to see that “CC” at the bottom of our video players?
Subtitles and captions are both on-screen text that provides additional or interpretive information for viewers who are deaf, Deaf, deafened, hard of hearing, or who simply need more than audio. Most of the time, the displayed text is a transcription or translation of the spoken language in the video.
Other uses for subtitles and captions have developed around the needs of different audiences. For example, Subtitles for the Deaf and Hard of Hearing (SDH) include descriptions of other auditory information that viewers with hearing impairment might miss: sounds that are off-screen, indications that the speaker is off-screen, music descriptions and more.
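To make those SDH conventions concrete, here is a minimal sketch in Python that assembles one caption cue in the WebVTT subtitle format. This is an illustration, not Amara’s implementation; the timestamps, speaker label, and sound description are invented.

```python
# A minimal sketch of assembling one SDH-style caption cue as a WebVTT
# fragment. The timestamps, speaker label, and sound description below
# are invented for illustration.

def webvtt_cue(start, end, lines):
    """Format one WebVTT cue: a timing line plus the cue's text lines."""
    return f"{start} --> {end}\n" + "\n".join(lines)

# SDH conventions: bracketed sound descriptions and speaker labels carry
# the audio information that deaf and hard of hearing viewers would miss.
cue = webvtt_cue(
    "00:00:01.000",
    "00:00:04.000",
    ["[door slams offscreen]", "HOST: Welcome back to the show!"],
)

print("WEBVTT\n\n" + cue)  # a complete one-cue WebVTT file
```

Bracketed descriptions like “[door slams offscreen]” are exactly the kind of auditory detail that distinguishes SDH from plain same-language subtitles.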
Some subtitles and captions describe visual information instead of auditory information. These activity-based subtitles can be used as a script to create supplementary audio description that helps visually impaired users engage with content more easily.
If you work with subtitles and could use faster workflows or assistance in creating subtitles, captions, or translations, check out our subtitling platform Amara Enterprise and our Amara On Demand services to purchase subtitles for your media and expand your global audience. For simple subtitling needs, the Amara Public subtitle editor is always free.
Let’s talk about the history of captioning in the United States and the motivations of the people and organizations that fought to bring captioning services to the public airwaves for the first time.
Captioning was developed to help deaf and hard of hearing individuals access television programming. Public Broadcasting Service stations pioneered the captioning movement.
Open captioning is displayed directly in the video and cannot be turned off. At first, open captioning was the only option available for television viewers, and only for a small number of programs.
The United States’ first captioning agency, the Caption Center, was founded in 1972 at the Boston public television station WGBH in an effort to make television more accessible to the millions of Americans who are deaf or hard of hearing. Captioned broadcasts began with re-runs of PBS’s The French Chef in 1972. Audiences tuned in to see their favorite television chef of the time, Julia Child, with captioning at the bottom of the screen. The captions were burned-in, or “open,” as opposed to closed captioning, which could be turned on or off by viewers and was developed later. After the successful open caption broadcast of The French Chef, WGBH expanded open captioning to other programs like Zoom, ABC World News Tonight, and Once Upon a Classic. The legacy of WGBH’s captioning efforts lives on today through the Media Access Group. And the words of its famous chef also live on through the culinary world and beyond.
Closed captioning gives viewers the choice to turn captions on or off. Television viewers had competing needs: some needed the captions to access the content, others were annoyed at the new addition to their viewing experience.
The technological development of a closed captioning system started with an unexpected intention: telling time. In 1970, the National Bureau of Standards started to investigate the possibility of sending precise time information on a nationwide basis using a portion of network television signals. This investigation was unsuccessful, but it introduced the idea of sending non-video information over television, which led the way for encoding captioning information.
The closed captioning system was successfully encoded and broadcast in 1973 with the cooperation of PBS station WETA, giving viewers the option of using closed captioning where it was available. The catch: decoding required a separate box, manufactured by Sanyo Electric and marketed by the National Captioning Institute (NCI). You had to purchase the box separately and install it on top of your television set in order to decode the closed captioning information coming in from television broadcasts. Learn more about the history and mission of the National Captioning Institute in this Wall Street Journal article about American linguist and Klingon language creator Marc Okrand.
So closed captioning was accessible only to the people who had the resources to research, purchase, and install the closed captioning system. Of course, a big issue was that the decoder box could cost as much as a television itself!
On January 23, 1991, the Television Decoder Circuitry Act was passed by the United States Congress. The law required that all analog televisions 13 inches or larger manufactured for sale in the U.S. contain caption decoders.
The legislation above made the necessary captioning hardware available in all television sets sold in the United States. But that was only part of the battle for accessibility. The next step was to ensure that television programming actually included captioning information so that the in-set decoder could do its job and provide captioning for viewers with hearing impairment.
The FCC ruled that television programs must include captioning, with some exceptions. The exceptions include advertisements that run less than five minutes and programs that air between 2 a.m. and 6 a.m. And in 1990, the Americans with Disabilities Act (ADA) was passed, which requires that public facilities provide access to verbal information on televisions.
As technology developed, the regulations around captioning had to be updated to accommodate new ways of sending information. The Telecommunications Act of 1996 expanded on the Decoder Circuitry Act to place the same requirements on digital television receivers that had been placed on analog receivers. And the 21st Century Communications and Video Accessibility Act (CVAA) expanded the scope of devices that must display captions to include all video devices that receive or display video programming transmitted simultaneously with sound, including those that can receive or display programming carried over the Internet.
Most consumers are big fans of personalization. When the FCC set aside line 21 in 1976 for the transmission of closed captions (CEA-608) over television broadcasts, there were no personalization options for closed captions. While CEA-608 captions can be played over both analog and digital television, the style has since been outdated by the FCC’s rules and standards for communication and video accessibility in the 21st century.
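As a side note on how line 21 actually carries those captions: CEA-608 transmits each character as a 7-bit code with an odd-parity eighth bit, which decoders use to spot corrupted bytes in the broadcast signal. A minimal sketch of that parity scheme in Python (the helper name is our own):

```python
# A minimal sketch of the odd-parity encoding used for line 21 (CEA-608)
# caption bytes: 7 data bits per character plus a parity bit chosen so
# the byte always contains an odd number of 1 bits. A decoder that sees
# an even-parity byte knows the byte was corrupted in transmission.

def with_odd_parity(ch):
    """Return the 8-bit line-21 value for a character: 7 data bits + odd parity."""
    code = ord(ch) & 0x7F                   # keep the 7 data bits
    ones = bin(code).count("1")             # 1 bits already present
    parity = 0x80 if ones % 2 == 0 else 0   # set bit 7 only if the count is even
    return code | parity

encoded = [with_odd_parity(c) for c in "CC"]
# Every encoded byte now has an odd number of 1 bits.
assert all(bin(b).count("1") % 2 == 1 for b in encoded)
```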
An updated set of standards and improvements was developed by the Electronic Industries Alliance, an organization famous for its industry standards. These new captions, called “CEA-708,” were created exclusively for digital television and included many improvements to the closed captioning system. Several new options for the appearance of the text help people who have both hearing and visual impairments: adjustable text size, text color, and background options. The expanded character set supports multiple language families (including Cyrillic and logographic characters), whereas CEA-608 captions supported only characters from the Latin alphabet. These improvements allow users to choose the options that are best for them, which is an integral part of accessibility for large and diverse populations.
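To illustrate that character-set difference, here is a rough Python sketch. The Latin-1 check below is a deliberate simplification of CEA-608’s actual character repertoire, used only to make the contrast between the two standards concrete.

```python
# A rough sketch of why CEA-708's wider character support matters. The
# Latin-1 test below approximates CEA-608's Latin-based repertoire; any
# text outside it (Cyrillic, logographic scripts) needs CEA-708.

def needs_extended_charset(text):
    """True if any character falls outside a Latin-1 approximation of CEA-608."""
    return any(ord(ch) > 0xFF for ch in text)

assert needs_extended_charset("Привет") is True    # Cyrillic: needs CEA-708
assert needs_extended_charset("Bonjour") is False  # Latin: fits either standard
```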
Whatever terms we use, the goals are the same: accessibility and choice. Recording information in more accessible ways and creating customizable formats for that information gives agency to viewers so that they can make informed choices about their entertainment, educational materials, and their lives. It took the active efforts of many people and organizations to get the subtitle and caption resources that we have today. And knowing the history, etymology, and connotations of the terminology that we use today can help us appreciate past efforts and perhaps inspire us to dream about where we can go tomorrow.