In the scope of human history, subtitling is a relatively new form of writing. From chiseled tablets to movable type, the intention of writing has always been to capture and communicate language. And the way we choose to communicate speaks to who we are as a civilization: translated subtitles are a mark of an increasingly globalized world, while same-language subtitles are a mark of an increasingly accessible one. In an online world saturated with video content for a wide range of audiences, subtitles can play a pivotal role in content creation.
From positioning and syncing to the delicate art of localization, a lot of effort goes into creating subtitles that are easy to read without being distracting to the content’s audience. The styles, guidelines, and standards of this form of writing are still being forged today. As culture changes, so must subtitles. We share information faster than the generations before us, and that accelerated exchange is a catalyst for language development. People are constantly coming up with new ways to greet each other, to share humor, and to connect. For that reason, professional subtitlers have to stay at the cutting edge of their language to enhance the viewers’ enjoyment.


There are often exceptions to the rules of subtitling as online video creators innovate in parallel. When subtitling the fast-paced cadence unique to YouTubers, or speech with slang, dialects, and accents, professional subtitlers have to make discerning decisions to keep reading rates steady and to prepare content to reach local audiences. Behind every writing system, language, or dialect, there are real people with real needs that must be met if they are going to understand what content creators are trying to communicate. For some videos, global audiences need localization, not just translation; a common scenario is when subtitlers replace original products or concepts with local ones. For example, a local audience might not know a brand name or a specific kind of food, and that kind of confusion might prevent them from fully connecting with and understanding the context and meaning of the video they are viewing.
Whether or not you agree that “everyone uses subtitles,” you might have seen a meme or two about them:

Image of a pie chart of reasons to watch with subtitles: “because I can’t understand the language,” “because there are too many accents and slang,” and, in the biggest section, “because I’m gonna eat chips”
Image used for educational purposes (from me.me)

Subtitles are increasingly present in our online experience, from subtitled auto-play ads on social media to the familiar CC icon at the bottom of some streaming video players. Let’s take a look at the evolution of subtitles from a few different perspectives: before film, during the silent movie era, the advent of accessibility rights, legislation, and the latest innovations.

Subtitles before film

The practice of displaying text alongside performances predates film and television. In the case of opera, text often accompanied live performances to help local audiences engage with foreign language productions. Born out of the Italian madrigal tradition, many operatic composers continued to favor Italian as the language for the libretto of their operas. Local audiences in Germany, or in Mozart’s own Austria for example, had difficulty understanding the works performed in front of them.
To help close this gap in understanding, text was displayed on a separate screen above the stage for the audience to see a translation of the performance into their native language. Because of its position above the stage, this text was called a “surtitle” or “supertitle.” This tradition is carried on today in a handful of modern theatre companies as well as at international film festivals.

Intertitles and transition from silent film

When the art of film moved beyond the experimental efforts of the Lumière brothers and Thomas Edison, narration became essential in order to connect audiences with the more nuanced stories of the day. Intertitles were text-only clips spliced into films to explain important plot points and fill in the narrative space of silent cinema. The relationship between the scenes and intertitles became its own form of artistic expression, inviting an extra layer of humor or emotion on top of the actors’ performances. Check out this assortment of humorous intertitles from silent films. The development of sound film changed the film industry in many ways: actors had to limit their movements to not interfere with microphones, famous US singers and musicians were able to make their debut as Hollywood stars, and filmmakers had to find a way to make their cameras operate more quietly. And, of course, intertitles were used less frequently.

Image of intertitle from the CBS television show Frasier
Image used for educational purposes © CBS

We can still see hints of the intertitle tradition carried on after the silent film era in movies and TV shows like “The Shining,” “Frasier,” and even “Spongebob Squarepants.” The purpose of intertitles in modern works is usually to indicate the passage of time or change in location instead of to provide narration and exposition. While narrative text in film has mostly faded from practice, it has left a lasting impression. The use of text in the introduction to the original “Star Wars” trilogy was a daring and iconic introduction to a new franchise and this remains a pertinent example of the complementary relationship between text and film as the franchise continues into this century.

Image of original text crawl from Star Wars Episode IV: A New Hope
Image used for educational purposes © Disney

Subtitles for accessibility and literacy

For people who are deaf or hard of hearing and for people who have difficulty processing audio, subtitles make media more accessible. For video content creators, subtitles also expand the potential audience while improving search engine optimization.
For creators with multilingual audiences, hiring someone to translate subtitles is often more popular than dubbing or lectoring. We will explain each practice for comparison. Dubbing is the process of replacing the original speakers with voice actors in the target language. You might recognize the classic side effect of dubbing: mouths moving out of sync with the words. See a list of the 6 most famous English dubbing flubs for a closer look at the challenges of dubbing video content.
Different from dubbing, lectoring is supplementary narration that is played alongside the original audio. For videos that use lectored translation, the volume of the original program’s voice track is lowered and the translated audio voice track is played at higher volume. Lectoring is especially popular in Eastern Europe and is preferred over dubbing in most cases outside of children’s entertainment.
In both dubbing and lectoring, voice actors and audio engineers have to perform and coordinate to achieve the final result. For audiences, there is a concern that information might be lost or missed when audio is removed or added: a gunshot in the distance or the exasperated sigh of a side character might be integral to the plot but inaccessible due to the audio interference.
“Why don’t we just use the script instead of making subtitles?” is a question worth exploring, if only to gain a new appreciation for subtitling professionals. Screenplays and scripts do not contain the timecodes needed to display text alongside the dialog. They also do not capture ad-libbed lines or plot-important sounds for deaf and hard-of-hearing viewers.
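To illustrate the difference, here is a minimal sketch in Python of the timecoded cue structure used by the common SubRip (.srt) subtitle format; the dialog line is hypothetical, but the `HH:MM:SS,mmm` timing notation is the real SRT convention, and it is exactly the kind of information a screenplay does not contain:

```python
# A minimal sketch of one SubRip (.srt) subtitle cue: a sequence number,
# a "start --> end" timecode line, then the text shown on screen.
# The dialog line below is a made-up example.

def format_srt_cue(index, start_ms, end_ms, text):
    """Render a single .srt cue from millisecond timestamps."""
    def timecode(ms):
        hours, rem = divmod(ms, 3_600_000)
        minutes, rem = divmod(rem, 60_000)
        seconds, millis = divmod(rem, 1_000)
        return f"{hours:02}:{minutes:02}:{seconds:02},{millis:03}"
    return f"{index}\n{timecode(start_ms)} --> {timecode(end_ms)}\n{text}\n"

print(format_srt_cue(1, 2_500, 5_000, "[door slams] Who's there?"))
# Prints:
# 1
# 00:00:02,500 --> 00:00:05,000
# [door slams] Who's there?
```

A subtitler produces thousands of these cues per film, each synced to the moment of speech, which is precisely the labor a raw script cannot replace.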
Same-language subtitles have also been shown to improve literacy rates. The Same Language Subtitling program sought to help people in India improve their reading comprehension skills using mainstream TV subtitles. These subtitles use more precise timing than usual, highlighting individual words on the screen exactly when they are said (or sung) in the video. This feature helps many people improve their literacy in their native language while watching TV in their own communities.

Subtitle Legislation Today

Many countries today have legislation requiring subtitles for both online and broadcast content to address national accessibility concerns. For example, the Twenty-First Century Communications and Video Accessibility Act of 2010 and additional rules adopted by the FCC in 2012 set the stage for the future of subtitles in the United States. And due to Title III of the ADA, subtitles could even be coming soon to a theater near you.
In the European Union, national governments are holding public institutions to a higher accessibility standard, and subtitles are part of this initiative. In September 2018, the United Kingdom enacted new regulations to ensure that public-sector websites and applications reach disabled people in the short term. In the long term, these institutions can benefit from making themselves more accessible by increasing their audience and improving relations with their existing users. After all, designing a site with accessibility in mind gives users more choices in how they interact with it and makes it more likely that they can successfully reach their goals.
In the Philippines, “An Act Requiring All Franchise Holders or Operators of Television Stations and Producers of Television Programs to Broadcast or Present Their Programs With Closed Caption Options” went into law in 2016. The law requires that broadcast television have closed captions on all programming, with some exceptions. If broadcasters fail to provide closed captioning, they are subject to fines, jail time, or having their broadcasting license revoked.
While most subtitling and captioning legislation focuses on public institutions and does not impose regulations on private content creators, it does provide accessibility standards that might encourage private companies to follow suit.

Modern subtitle innovation

Sometimes audience members have competing needs, and innovators have been working on ways to meet them. For example, open subtitles displayed directly on the screen can be distracting for some theater-goers. To put the choice in the viewer’s hands, personal titling systems like the Marconi multimedia patented interactive electronic libretto were developed. Patrons of some theaters are provided personal subtitle display screens to clip to their cupholders or wear like 3D glasses. Other theaters project mirrored subtitles on the back wall, and individual audience members are given a mirror to place in front of themselves so they can read the subtitles.
Since the 1920s, people have been interested in the problem of automatic speech recognition. Companies that funded research and development of automated speech recognition technologies learned what professional linguists have known all along: language comprehension is not a set of straightforward rules of vocabulary and grammar. Programming a machine to detect speech turned out to be a complex struggle with many stages and innovations. Again and again, researchers have had to return to the drawing board, rediscover what language means to the mind, and learn a new respect for hidden context and unintended linguistic connections: that an airplane does not flap its wings, for example. And although great strides have been made in automatic speech recognition in the last few years, it is still no match for the power of the human mind to parse, comprehend, and relay language. Some content creators have even taken humorous advantage of the flaws in automatic speech recognition technology that continue to confound audiences today.

Creative Expression in Subtitles Lives On

Now that we have a solid history of subtitles and a foundation for understanding what subtitles do and how they can be implemented, it’s important to remember that subtitles have been and likely always will be an outlet for creative expression.
Like intertitles from the silent film era, subtitles in modern times can be their own form of entertainment. One popular technique is breaking the “fourth wall,” which is when characters acknowledge that they are acting in a production. Most of the time, this happens when actors look directly at the camera or reference the audience in some way. To break the fourth wall with subtitles, characters on screen might point to the subtitles at the bottom of the screen, shove the subtitles completely off the screen, or even pick the subtitles up and hit another character with them.
Subtitles also provide another meta-level of humor in some cases. In the “Weird Al” Yankovic music video “Smells Like Nirvana,” the burned-in subtitles contain several lines that playfully mock the band’s singing style, including the line “It’s hard to bargle nawdle zouss / With all these marbles in my mouth.”
