Whether you need to understand a quiet speaker, to catch a verbal “Easter egg” in a cult film, or to discern speech with an unfamiliar accent, closed captions and subtitles are your friends while watching videos. We talked recently about the rich and storied history of subtitles: from operatic tradition to Weird Al music videos, subtitles and captions silently strengthen the connection between content and its audience.
But there is some debate online about what to call those little words at the bottom of your video. Some people call them subtitles, and some people call them captions. If you want to know why, then you’re reading the right article.
- Who calls it a “subtitle”?
- Who calls it a “caption”?
- Captions: the Instagram takeover
- What do subtitles and captions have in common?
- Origin of captioning
- Open captioning
- Closed captioning
- Captioning legislation
- Custom captions
- What’s in a name?
Who calls it a “subtitle”?
“Subtitle” is the most common term worldwide for text that accompanies video content. Subtitles can be in the same language as the video or translated into another language, and that usage holds no matter which part of the world you are in (keep reading to find out more). The vocabulary debate arises over same-language subtitles, which some people call “captions,” as you’ll see next.
Fun fact: The word “subtitle” carries two meanings: a secondary title to a written work and the textual version of dialogue in a dramatic production.
Who calls it a “caption”?
“Caption” is the term used primarily in North America. Captions refer only to text in the same language as the video’s spoken audio. Translated text is called “subtitles,” the same term commonly used worldwide.
Fun fact: The word “caption” originally meant a taking or seizing. The term developed both a metaphorical meaning (capturing the meaning of a photograph) and a literal meaning (the seizure of tangible property).
Captions: the Instagram takeover
The history of photo captions is longer than video captioning, and it seems like the term is making a return to its roots through the popular photo-and-video sharing app Instagram. The evidence is clear in Google Trends: “Instagram caption” is searched for 10 times more than “closed captioning.” And apparently, the right caption can make or break the popularity of an Instagram post.
As for the preference between the terms subtitle and caption, there has been a clear shift over the years in favor of “subtitling.” For deaf and hard of hearing viewers who need additional descriptions of audio information, the term Subtitles for the Deaf and Hard of Hearing (SDH) has become four times more popular than “closed captioning.” Even Netflix has moved to using “subtitles” to refer categorically to all timed text options for the screen. For movies in English, where available, two English options appear in the subtitle list: “English” and “English (CC).”
At the beginning of this article, we described how “subtitling” is the most popular term for accompanying text information in the global video sharing environment. It refers to both same-language subtitles and translated subtitles. “Subtitling” is used in most countries around the world, with some exceptions: countries that continue to use “captioning” include the United States, Canada, New Zealand, Australia, and the Philippines.
But terms change over time and it can be difficult to phase out deprecated terms when the laws, organizations, and user interfaces that govern word usage still contain those terms. Who knows how long we will continue to see that “CC” at the bottom of our video players?
What do subtitles and captions have in common?
Subtitles and captions are both text displayed on a video that provides additional or interpretive information for viewers who are deaf, Deaf, deafened, hard of hearing, or who simply need more than the audio alone. Most of the time, the displayed text is a transcription or translation of the spoken language in the video.
Other uses for subtitles and captions have developed around the needs of different audiences. For example, Subtitles for the Deaf and Hard of Hearing (SDH) include descriptions of other auditory information that viewers with hearing impairment might otherwise miss: off-screen sounds, indications that the speaker is off-screen, descriptions of music, and more.

Some subtitles and captions describe visual information instead of auditory information. These activity-based subtitles can be used as a script to create supplementary audio description that helps visually impaired users engage with content more easily.
If you work with subtitles and could use faster workflows or assistance in creating subtitles, captions, or translations, check out our subtitling platform Amara Enterprise and our Amara On Demand services to purchase subtitles for your media and expand your global audience. For simple subtitling needs, the Amara Public subtitle editor is always free.
Origin of captioning
Let’s talk about the history of captioning in the United States and the motivations of the people and organizations that fought to bring captioning services to the public airwaves for the first time.
Captioning was developed to help deaf and hard of hearing individuals access television programming. Public Broadcasting Service (PBS) stations pioneered the captioning movement.
Open captioning
Open captioning is displayed directly in the video and cannot be turned off. At first, open captioning was the only option available to television viewers, and only for a small number of programs.

The United States’ first captioning agency, the Caption Center, was founded in 1972 at the Boston public television station WGBH in an effort to make television more accessible to the millions of Americans who are deaf or hard of hearing. Captioned broadcasts began with reruns of PBS’s The French Chef in 1972. Audiences tuned in to see their favorite television chef of the time, Julia Child, with captioning at the bottom of the screen. The captions were burned in, or “open,” as opposed to closed captions, which could be turned on or off by viewers and were developed later. After the successful open-caption broadcast of The French Chef, WGBH expanded open captioning to other programs like Zoom, ABC World News Tonight, and Once Upon a Classic. The legacy of WGBH’s captioning efforts lives on today through the Media Access Group. And the words of its famous chef also live on through the culinary world and beyond.
Closed captioning
Closed captioning gives viewers the choice to turn captions on or off. Television viewers had competing needs: some needed the captions to access the content, while others were annoyed by the new addition to their viewing experience.
The technological development of a closed captioning system started with an unexpected goal: telling time. In 1970, the National Bureau of Standards began investigating the possibility of sending precise time information nationwide using a portion of the network television signal. The investigation was unsuccessful, but it introduced the idea of sending non-video information over television, which paved the way for encoding captioning information.

Closed captioning was first successfully encoded and broadcast in 1973 with the cooperation of PBS station WETA, giving viewers the option of using closed captioning where it was available, provided they had a decoder. The decoder was a separate box, manufactured by Sanyo Electric and marketed by the National Captioning Institute (NCI), that viewers had to purchase and install on top of their television sets to decode the closed captioning information coming in from television broadcasts. Learn more about the history and mission of the National Captioning Institute in this Wall Street Journal article about American linguist and Klingon language creator Marc Okrand.
So closed captioning was accessible only to people who had the resources to research, purchase, and install the decoding system. A big issue, of course, was that the decoder box could cost as much as a television itself!
Captioning legislation
In 1990, the United States Congress passed the Television Decoder Circuitry Act. The law required that all analog televisions with screens 13 inches or larger manufactured for sale in the U.S. contain caption decoders.
That legislation made the necessary captioning hardware available in new television sets sold in the United States. But that was only part of the battle for accessibility. The next step was to ensure that television programming actually included captioning information so that the in-set decoder could do its job and provide captioning for viewers with hearing impairment.
The FCC ruled that television programs must include captioning, with some exceptions, including advertisements that run less than five minutes and programming aired between 2 a.m. and 6 a.m. And in 1990, the Americans with Disabilities Act (ADA) was passed, requiring that public facilities provide access to verbal information on televisions.
As technology developed, the regulations around captioning had to be updated to accommodate new ways of sending information. The Telecommunications Act of 1996 expanded on the Decoder Circuitry Act to place the same requirements on digital television receivers that had been placed on analog receivers. And the 21st Century Communications and Video Accessibility Act (CVAA) expanded the scope of devices that must display captions to include all video devices that receive or display video programming transmitted simultaneously with sound, including those that can receive or display programming carried over the Internet.
Custom captions
Most consumers are big fans of personalization. When the FCC set aside line 21 in 1976 for the transmission of closed captions (CEA-608) over television broadcasts, there were no personalization options for closed captions. While CEA-608 captions can be played over both analog and digital television, the format has since been superseded under the FCC’s rules and standards for communication and video accessibility in the 21st century.
An updated set of standards and improvements was developed by the Electronic Industries Alliance, an organization known for its industry standards. The new captions, called CEA-708, were created exclusively for digital television and brought many improvements to the closed captioning system. New options for the appearance of the text (text size, text color, and background options) help people who have both hearing and visual impairments, and expanded character lists support multiple language families (including Cyrillic and logographic characters), where CEA-608 captions supported only characters from the Latin alphabet. These improvements allow users to choose the options that are best for them, which is an integral part of accessibility for large and diverse populations.
Frequently Asked Questions
Are captions and subtitles the same?
Captions and subtitles both capture speech in a video, and both are used to make content more accessible by creating a text copy of what is being said.
They have similarities but are not exactly the same.
What is the difference between subtitles and captions?
Captions are exclusively in the same language as the video, while subtitles can be either same-language or translated.
What are the synonyms of caption?
Same-language subtitles, closed captions, open captions, CC, and SDH could all be used as synonyms in certain contexts.
Each of those synonyms has its own meaning and context, so be careful to understand the difference! SDH, or Subtitles for the Deaf and Hard of Hearing, are created specifically for people with hearing disabilities. They are subtitles in the same language as the video that also capture other plot-relevant sounds like a doorbell, a phone ringtone, or offscreen footsteps.
On some video players, the user interface has a CC button that opens the subtitles menu. On YouTube, for example, you can click the CC button and see all of the available subtitle options. So the CC button can lead you to closed captions or to closed subtitles in translated languages; all of that would simply have been too much to fit on a little button!
What is cc for subtitles?
CC means “closed captioning,” a subtitle file that can be turned on or off by the video audience.
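As a concrete illustration, here is a minimal sketch of how a web player’s CC button could toggle captions on and off using the standard HTML5 TextTrack API. The element IDs (#player, #cc-button) are assumptions made for this example, not taken from any particular player.

```typescript
// Hypothetical player markup: a <video id="player"> and a <button id="cc-button">.
const video = document.querySelector<HTMLVideoElement>("#player");
const ccButton = document.querySelector<HTMLButtonElement>("#cc-button");

ccButton?.addEventListener("click", () => {
  if (!video) return;
  for (const track of Array.from(video.textTracks)) {
    if (track.kind === "captions" || track.kind === "subtitles") {
      // "showing" renders the cues over the video; "hidden" keeps the track
      // loaded but invisible. Being switchable is what makes captions "closed."
      track.mode = track.mode === "showing" ? "hidden" : "showing";
    }
  }
});
```

Burned-in (“open”) captions, by contrast, are part of the picture itself and cannot be toggled this way.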
What is the difference between English and English cc?
“English” subtitles describe any subtitle file in English, whether it is open or closed and whether or not it matches the video’s language. “English CC” is more specific and is used only for same-language captions that can be turned on or off by the audience.
What is closed caption?
A closed caption file is a separate file containing the transcript and timing information for a video, which the viewer can turn on or off in the video player.
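To make “a separate file with transcript and timing” concrete, here is a minimal sketch using WebVTT, the caption format used on the web with the HTML5 track element. The cue text, file contents, and element ID are invented for illustration.

```typescript
// A tiny closed caption file in WebVTT format: each cue pairs timing with text.
// The dialogue and the SDH-style sound label below are made up for this example.
const webvtt = `WEBVTT

00:00:01.000 --> 00:00:04.000
[doorbell rings]

00:00:04.500 --> 00:00:07.000
Coming! Just a minute.
`;

// Attach it to a hypothetical <video id="player"> as a toggleable track.
// Because it is a separate track rather than burned-in pixels, the viewer
// can switch it on or off -- the defining trait of *closed* captions.
const videoEl = document.querySelector<HTMLVideoElement>("#player");
const trackEl = document.createElement("track");
trackEl.kind = "captions";
trackEl.label = "English (CC)";
trackEl.srclang = "en";
trackEl.src = URL.createObjectURL(new Blob([webvtt], { type: "text/vtt" }));
videoEl?.appendChild(trackEl);
```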
What are subtitles and captions used for respectively?
Subtitles are usually made for a hearing audience unless otherwise specified, while captions are usually created with people with hearing impairment in mind.
“Subtitle” is the more widely used term around the world for timed text in a video, and it can include either same-language subtitles or translated subtitles. Captions exclusively capture speech in the same language as the video, with no translation.
But both can make content more accessible for non-native speakers of a language and for people with disabilities.
What are the differences between cc and sdh?
Subtitles for the Deaf and Hard of Hearing (SDH) are meant to convey the same information as closed captions, serving deaf and hard of hearing viewers whose video does not support CC.
What’s in a name?
Whatever terms we use, the goals are the same: accessibility and choice. Recording information in more accessible ways and creating customizable formats for that information gives agency to viewers so that they can make informed choices about their entertainment, educational materials, and their lives. It took the active efforts of many people and organizations to get the subtitle and caption resources that we have today. And knowing the history, etymology, and connotations of the terminology that we use today can help us appreciate past efforts and perhaps inspire us to dream about where we can go tomorrow.
Very helpful :). Thanks a lot!
Hi Allison,
I am seeking help to find the BEST tool to generate auto-scrolling transcription aligned with video – similar to the way TED talks display clickable scrolling text. We are developing a student-produced oral history of the Civil Rights Movement public website – similar to http://www.tellingstories.org – and want the public viewer to be able to read along as well as text search to jump to segments. The plan is to use Otter.ai to generate the actual transcript. Is Amara the right tool? Other suggestions?
Hi Howard, great question. Amara currently does not have interactive transcript functionality. I suggest checking out otranscribe.com. It’s a free, open source tool that has interactive timestamps that allow viewers to jump to segments.
Seriously, I never thought I would come across ‘vimeo’ on this page; but oh well, here we go.
Thank you for your writings on subtitles, which are our interest.
Regards, Carlos Manuel