The year is 1927. Warner Bros. has just released “The Jazz Singer,” the first feature-length film with synchronized spoken dialogue, and audiences are completely and utterly stunned. It’s the start of a new era in which audio is taken just as seriously as visuals.

Before the advent of spoken dialogue in film, going to the cinema was even more of an event. Audiences sat to the tune of live music, ranging from a full orchestra for larger Hollywood productions to a theater owner with a guitar for smaller venues, and title cards synched with the visual cues of actors to create not-so-closed captions. For particularly dynamic plots, a roving narrator would also join the stage—making the in-theater production nearly as complex as the on-set production.

Sound was the new frontier that changed all of this, and more. While filming, the loud whirring of the camera threatened to bleed onto the soundtrack, so cameras were sealed inside soundproof booths and microphones had to be kept close to the actors yet out of the shot, a constraint that led to the development of the boom microphone. Then there was the problem of getting synchronized sound into theaters, solved for “The Jazz Singer” by the Vitaphone sound-on-disc system, which paired the projector with a phonograph turntable.

Yet some things stayed the same. Because early talkies used sound only intermittently, leaving long stretches of silence, many actors and actresses from the purely visual era were able to keep getting by on their dashing good looks alone, dodging the need to deliver believable dialogue.

However, the largest change brought about by the introduction of talkies isn’t one most would associate with sound: the standardization of frame rates.

The History and Standardization of Frame Rates

Disclaimer: here is where we get nerdy

Frame rate refers to how many frames are shown per second of playback, which sets the apparent speed of a film and can drastically affect audience perception. Modern cinema has adopted a standard frame rate of 24 frames per second (or 23.976, usually rounded to 23.98, when matched to NTSC video), but the period leading up to this was a painful and inexact nightmare.
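
For the curious, that slightly-off number isn’t arbitrary: when film is matched to NTSC color video timing, it runs a hair slower, by a factor of 1000/1001. A quick back-of-the-envelope check in Python, purely for illustration:

```python
from fractions import Fraction

# Film slowed to stay in step with NTSC color video timing:
# 24 fps * 1000/1001 = 24000/1001 fps
ntsc_film_rate = Fraction(24, 1) * Fraction(1000, 1001)

print(ntsc_film_rate)         # 24000/1001
print(float(ntsc_film_rate))  # 23.976023976..., rounded to "23.98"
```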

Just outlining the history of frame rates is enough to give film historians and restorers headaches. To simplify: the frame rate of film was standardized in 1929 after years of fluctuation. Much earlier, filmmakers had deduced that 12 frames per second was the minimum rate the eye needed in order to “believe” it was watching motion and not just a series of still images.

Prior to the addition of audio tracks, there was no standard speed for film, which meant there was no standard look either. Some theaters, for example, would “over-crank” a film and run it faster in order to squeeze in more showings, while others would simply guess and experiment, meaning it was entirely possible for the same film to have three different run times in as many theaters in the early 1920s (and three different looks).
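
To put numbers on that, here is a tiny Python sketch; the frame count and projection speeds below are hypothetical, chosen only to show how much run time swings with projector speed:

```python
# Hypothetical feature film: the frame count is made up for illustration.
TOTAL_FRAMES = 90_000

# Silent-era projection speeds varied widely; these are representative
# values, not documented speeds for any particular theater.
for fps in (16, 20, 24):
    minutes = TOTAL_FRAMES / fps / 60
    print(f"{fps:>2} fps -> {minutes:.1f} minutes")
```

Same reel, three different run times: exactly the early-1920s situation described above.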

However, because sound was recorded on a separate track that needed to stay synched with the visuals, directors and editors needed to work together (for once) to agree on a standardized frame rate, lest audiences be forced to sit through artificially high-pitched voices and out-of-synch movement.

Together, directors and editors decided that the frame rate needed to be a number evenly divisible by many other numbers, which would allow them to more easily determine the length of a cut. For example, because 24 divides evenly by 2, 3, 4, 6, 8, and 12, editors could immediately see that a half-second sound clip was equal to 12 frames, a third-of-a-second clip was equal to 8 frames, and so on.
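
That mental math is simple enough to sketch in a few lines of Python. This is not any real editing tool, just an illustration of why a highly divisible frame rate keeps cut lengths at whole frame counts:

```python
from fractions import Fraction

FPS = 24  # the standard the industry settled on

# Common subdivisions of a second all land on whole frame counts at
# 24 fps -- no rounding, no drift between sound and picture.
for duration in (Fraction(1, 2), Fraction(1, 3), Fraction(1, 4),
                 Fraction(1, 6), Fraction(1, 8), Fraction(1, 12)):
    frames = duration * FPS
    print(f"{duration} s -> {frames} frames")
```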

At the end of the day, the film industry is a business, and every frame of film stock costs money. Anything beyond 24 frames per second was considered redundant and not worth the investment, so 24 FPS became the industry standard, with credit given to sound, the great equalizer.