Dialogue Editing – The Trouble with Digital Picture

   By Sarah C   Categories: General, Mastering Audio

In theory, a picture file in a computer will flawlessly sync with a DAW session playing from the same computer. Sometimes it does. Other times, picture or sound will be systematically delayed. Or sync may be spot on one time you hit play, and soft the next. Or worst of all, sound and image may weave aimlessly in and out of sync. What’s behind this foolishness? Here’s an overview of some of the causes.

• It takes longer to process a video signal than audio, so it’s not unusual to hear it and then see it. This is a problem. Given that light travels significantly faster than sound, we’re accustomed to sound arriving a bit late. It goes against nature to hear something before we see it.

• Good old-fashioned CRT monitors were analogue and showed virtually no delay between incoming signal and the picture shown on the screen. Today’s video displays use digital video processors to drive the flat panels, and that processing imposes a delay.

• Unless your editing system and everything attached to it are locked to an external clock source, you are much more likely to encounter unpredictable sync drift.

• Each codec compresses and decompresses data in its own manner. This results in different CPU and bandwidth demands from one codec to the next, which in turn affect sync.

• Some codecs prefer one wrapper over another. A mismatch here can result in inconsistent lock-ups.

• There are many hardware considerations (video cards, drives, CPU, memory) that will greatly affect sync.

• You could, of course, have a sample rate mismatch (e.g., 48 kHz vs. 47.952 kHz), but this problem is common to all playback systems, so it’s not worth getting into here; the quick calculation below shows the scale of the drift, though.
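For a sense of scale, here is a back-of-the-envelope calculation (sketched in Python purely for illustration) of how fast that 0.1 percent mismatch drifts:

    # How fast does 48 kHz material drift when played at 47.952 kHz?
    NOMINAL_RATE = 48_000    # Hz, the rate the audio was recorded at
    PLAYBACK_RATE = 47_952   # Hz, the rate it actually plays at (0.1% pull-down)

    # Every second of program takes slightly longer to play out.
    stretch = NOMINAL_RATE / PLAYBACK_RATE            # ~1.001
    drift_ms_per_min = (stretch - 1) * 60 * 1000      # ms of drift per minute

    frames_per_min = drift_ms_per_min / (1000 / 24)   # frames at 24 fps
    print(f"{drift_ms_per_min:.0f} ms per minute "
          f"(~{frames_per_min:.1f} frames at 24 fps)")
    # 60 ms per minute (~1.4 frames at 24 fps)

In other words, a mismatch this small throws you more than a frame out of sync every minute, which is why it is hard to miss.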

Figuring out how to get your client’s video in sync with your workstation isn’t really rocket science. But—like everything else in dialogue editing—it requires communication. From the photography crew that creates the initial image files to the picture editing facility that delivers them to you, there are plenty of variables, and plenty of opportunities to get things wrong. But a little planning can put the odds in your favor. In the last chapter we looked at the specification sheet that you provide to the production company well before your job begins. This lists the things you must have in order to get to work. Being sound people, most of our demands have to do with the audio side of things. However, you must also specify the video format you need in order to make for happy audio postproduction. Of course, when it comes to picture, things often work backward: The client tells you what to expect and you must adapt. Either way, you’re OK, as long as you give it enough lead time.

Get Your System Ready for Digital Video

The most common obstacle between you and happy sync is hardware—your hardware. Inadequate RAM, sluggish processor, slow or fragmented drives, too many drives on a bus . . . the list goes on. Heavy editing sessions and streaming video each crave resources. So, what can you do to get things in sync and keep them there?

If your picture is out of sync but not drifting, you can usually perform a simple offset to align sight and sound. Pro Tools, for example, has a tool called “Video Sync Offset,” shown below, that does just that. Many other DAWs have a similar feature. By changing the sync offset in quarter-frame bumps, you will eventually find the sync you want. This process works, but it can make you crazy. First, you’re forced to stare intently at the screen—first at the sync pop and then at people speaking—and from that divine the sync. Do this for more than a few seconds and you’ll swear that anything is in sync, or out, just to end the torture. Second, it’s maddening to know that throughout the filmmaking process sync was honored religiously, but now, when it really matters, it’s totally subjective. How can you decide which shot, which ADR line, is truly lined up if you can’t trust the overall sync of the film?


Pro Tools’ “Set Video Sync Offset” dialogue window.
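If you are curious what each of those quarter-frame bumps amounts to in time, the arithmetic is simple. Here is a minimal sketch (the function name is my own; the only assumption is that one bump equals a quarter of a frame period):

    # Milliseconds moved by each quarter-frame offset bump at common frame rates.
    def quarter_frame_ms(fps, bumps=1):
        """Duration of `bumps` quarter-frame bumps, in milliseconds."""
        return bumps * (1000.0 / fps) / 4.0

    for fps in (23.976, 24.0, 25.0, 29.97):
        print(f"{fps:>6} fps: 1 bump = {quarter_frame_ms(fps):.2f} ms")
    # e.g., at 24 fps each bump moves sync by about 10.4 ms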

Since you’re not the first editor to run into this frustrating problem, there are tools to help you line things up. Syncheck is a device that uses a sequence of flashes and beeps to measure the offset between picture and sound. Beep sequences are provided for different codecs, containers, frame rates, and scanning types, since each variable may impose its own latency. Once you measure the offset, you can apply this value to your workstation, at least for the duration of the project.


Syncheck is used to establish sync between an audio workstation and file-based video playback. Image courtesy of Pharoah Editorial, Inc.

If sync is not stable, but instead is dancing back and forth around an offset, there are several things worth looking at. While it’s an unbending fact that you can never have too fast a CPU or too much RAM, there are a few common bottlenecks worth investigating before you replace your computer.

• Don’t put your DAW session and audio files on the same drive as the video. Putting the drives on separate I/O paths helps even more. Get faster drives. This is not expensive and it really makes a difference.

• If you normally use an Apple Cinema Display, you can put a small QuickTime window over your DAW session display. This will be in sync. But then who wants to work with one monitor where session and video vie for space? Despite really great sync, you can’t take this option seriously.

• A common low-budget option is to use a DV converter that streams picture data through your FireWire port and converts it to composite video, which can be displayed on most TVs or monitors. Unfortunately, this setup is prone to long latency and, worse yet, sync tends to drift unpredictably. As your session gets heavier, picture sync is likely to become less stable. In all fairness, these analogue/digital video converters are not intended to do what we ask of them. So don’t insult them; just don’t use them.

• Invest in a good video card that is locked to the same reference as your workstation. Processing the video stream on a dedicated card may solve many of your picture sync problems. Then again, it may not. Do your homework before investing.

• For even more zip, take the picture load off your audio computer altogether. Video Satellite, Virtual VTR, and similar products allow you to stream video from a second computer. Now the CPU is free to take whatever abuse you want to give it without thinking about that hungry video stream. Normally, this second computer needn’t be rocket powered, so it’s a great way to use the one you threw out to make way for the new, more glamorous workhorse you’re now using. The video slave should not, however, be so ancient that it can’t handle new video codecs. These solutions are not overwhelmingly expensive, and as a bonus there are often ADR management and recording tools built into the system.

• Finally, if you regularly must deal with several different codecs, stream off of a network, or handle uncompressed HD 4:2:2 files, there are top-of-the-line options such as Pyxis and VCube that are commonly used in large facilities. It’s unlikely that a budding editor working in mom’s basement will opt for such exquisite solutions, but it’s good to know they’re out there.

Containers and Codecs

Tweaking your computer, drives, and video card may not be enough to distance you from the gates of sync hell. You must also accept the fact that incoming material will arrive in all sorts of flavors, any of which may cause you trouble. It’s worth knowing a bit about how a video file is constructed. A video stream is conceptually layered in a way that enables different types of materials to piggyback on the same package. This allows for much greater flexibility. Not all of these hierarchical layers are physical; some are about norms and conventions:

• A framework is a standard. For instance, Digital Cinema Initiatives and SMPTE have defined a framework for digital cinema distribution, security, and playback. This framework is called DCP, Digital Cinema Package. (DCP in film workflows is discussed in Chapter 3.) OMF is another example of a framework.

• A container or wrapper is a file that provides a unified way to structure content and to access it. In other words, it describes how information is stored on a disk. MXF, AVI, and QuickTime are examples of containers.

• Essence is the raw video, audio, and data streams held in a container. In other words, content.

• Metadata is the information that describes the essence.

A codec is a program that encodes a data stream for storage or transmission and then decodes it for editing or playback. Codecs are optimized for different tasks, so it’s not surprising that each has its quirks. Some use very high compression to enable mass storage or quick transfer across the Internet. Others provide quality images and frame-accurate playout but little compression. There are codecs that handle only video and those that deal only with sound. Some handle both. To find out which codec was used to encode the video you’re holding, open the file in your media viewer and then select “Get Info.”
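If you prefer the command line to a media viewer, FFmpeg’s ffprobe utility (assuming you have FFmpeg installed and on your PATH) reports the container and codecs directly. A minimal sketch in Python:

    # List a file's container and stream codecs using ffprobe (part of FFmpeg).
    import json
    import subprocess
    import sys

    def probe(path):
        result = subprocess.run(
            ["ffprobe", "-v", "error", "-print_format", "json",
             "-show_format", "-show_streams", path],
            capture_output=True, text=True, check=True,
        )
        info = json.loads(result.stdout)
        print("Container:", info["format"]["format_name"])
        for stream in info["streams"]:
            print(" ", stream["codec_type"] + ":",
                  stream.get("codec_name", "unknown"))

    if __name__ == "__main__":
        probe(sys.argv[1])   # e.g., python probe.py reel1.mov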

Video files are very big, so it’s almost always necessary to compress them—sometimes aggressively. You can compress each frame of a video sequence, but only to a point. There’s only so much data you can drop without severely affecting quality.
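Just how big? A quick calculation of the raw data rate of uncompressed 8-bit 4:2:2 HD (the same kind of stream mentioned above) makes the point; the frame size and rate here are just illustrative:

    # Raw data rate of uncompressed 1080p 4:2:2 video at 8 bits.
    WIDTH, HEIGHT = 1920, 1080
    FPS = 25
    BYTES_PER_PIXEL = 2      # 8-bit 4:2:2 averages 16 bits per pixel

    bytes_per_second = WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS
    print(f"{bytes_per_second / 1e6:.1f} MB/s, "
          f"{bytes_per_second * 3600 / 1e9:.0f} GB per hour")
    # 103.7 MB/s -- roughly 373 GB per hour

That is why nearly every delivery you receive has been compressed, and why the codec that did the compressing matters so much.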

Excerpt from Dialogue Editing for Motion Pictures: A Guide to the Invisible Art, 2nd Edition by John Purcell © 2014 Taylor and Francis Group. All Rights Reserved.

About the Author

Emmy recipient John Purcell has over 30 years of varied studio experience in picture and sound. He has edited projects ranging from documentaries and concerts, to presidential campaigns and television series, to feature films and classical albums. On top of editing dialogue for movies, he writes training programs for audio engineers, and teaches at film schools in the Middle East and in Latin America.
