Multiple-microphone techniques: A-B placement & X-Y placement
Although the great majority of modern recordings make use of complex multitrack systems and instrument overdubbing, there remains a school of thought that simple stereo recording is preferable, especially for live recording. There are many ways to accomplish this type of recording, several of which make use of just two microphones. As you will soon discover, however, it is not always as simple as putting two microphones in front of a musical group: the instruments’ distances from the microphones and their individual output levels may vary too much to achieve a proper balance.
Also, we are used to hearing stereo in a room with our two ears. Our hearing is designed to make use of two main cues about sound source placement: relative loudness and time of arrival. Sounds originating closer to one ear will be louder in that ear and will arrive sooner than at the other ear. In fact, other more subtle information also contributes to our perception of the sound field: phase relationships within the complex sound pressure signal that we hear can convey information about the height and front-to-rear placement of a sound source, interacting with the pinna of the ear in a way unique to each individual. Each auditory system has adapted to its own particular input filtering to produce the sensation of hearing. This adaptation is described by the head-related transfer function (HRTF), a mathematical model of the filtering produced by sound waves interacting with the ear and head. Unfortunately, microphones do not “hear” the same way we do; no adaptation takes place except through the engineer’s perception and experience. Consequently, recorded signals, when played back, may not convey the original sounds the way we would have heard them live. The great challenge of stereo microphone technique is to bring to the listener a convincing image of the actual sonic event.
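As a rough illustration of the time-of-arrival cue, the sketch below estimates the difference in arrival time between the two ears using a simple path-length model. The function name is hypothetical, the 17 cm ear spacing and 343 m/s speed of sound are assumed typical values, and real heads add diffraction effects this model ignores:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly room temperature (assumed)
EAR_SPACING = 0.17      # m, a typical distance between the ears (assumed)

def interaural_time_difference(azimuth_deg: float) -> float:
    """Approximate time-of-arrival difference (seconds) between the ears
    for a distant source at the given azimuth (0 degrees = straight ahead).
    Uses the simple extra-path-length model d * sin(theta) / c."""
    theta = math.radians(azimuth_deg)
    return EAR_SPACING * math.sin(theta) / SPEED_OF_SOUND

# A source directly to one side (90 degrees) arrives roughly half a
# millisecond earlier at the near ear; a centered source shows no difference.
print(interaural_time_difference(90.0))  # ~0.0005 s
print(interaural_time_difference(0.0))   # 0.0
```

Even these sub-millisecond differences are enough for the auditory system to localize a source, which is why microphone spacings much larger than the head can produce an exaggerated stereo image.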
The key to capturing a convincing stereo sound field is similar to what allows us depth perception in vision: the overlap of sensory input from two separate sensors. Based on the cues we use to determine spatial placement of sounds, the way we select and position the microphones will determine how realistically we are able to recreate the spatial origins of the sounds we are capturing. Of course, in the real world there are confounding factors complicating the decision, like background noise, physical limits on where microphones may be placed, and limited time to try the alternatives. Considering the systems commonly used for stereo microphone placement will provide alternatives that can be used for a range of different recording challenges.
The figure to the right shows many of the common orientations using two microphones. Probably the most straightforward stereo placement is the spaced pair, referred to as A-B placement: two identical microphones are placed some distance apart in front of the source. Omnidirectional microphones are frequently employed for this setup, although directional microphones can be used. This system captures both time-of-arrival and relative-amplitude information, but if the microphones are spaced farther apart than our ears are, as is usually the case, the reproduced stereo field can sound unnatural. This approach might sound acceptable when listened to on speakers that are also placed far apart. The stereo separation reproduced depends on the distance between the microphones. A “hole” can be created in the middle of the stereo field if the microphones are too far apart. A third microphone is sometimes used in the middle of the spaced pair to prevent this effect, but undesirable cancellations may occur if this is not done carefully. Individual microphones (accent microphones) can also be placed near a soloist and combined with the spaced pair if necessary. When using spaced-pair recording techniques, it is helpful to consider the 3-to-1 rule to minimize undesirable phase cancellations: the microphones should be at least three times as far from each other as each is from its sound source. (Obviously, this placement is not possible for a single point source.) Observing the 3-to-1 rule helps keep phase cancellations at an acceptable level because the inverse square law, which governs the dissipation of sound intensity with increasing distance, ensures that the leakage each microphone picks up from the other’s source arrives well below the direct sound. The interaction of the microphones is important not only in stereo recording but any time more than one microphone is used to pick up the same sounds, as in multitrack studio recordings.
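The arithmetic behind the 3-to-1 rule can be sketched as follows. The helper names are hypothetical, and free-field inverse-square spreading is assumed (room reflections are ignored):

```python
import math

def level_difference_db(near_m: float, far_m: float) -> float:
    """Attenuation (dB) of a source heard at far_m relative to near_m,
    assuming inverse-square spreading: 6 dB per doubling of distance."""
    return 20.0 * math.log10(far_m / near_m)

def worst_case_comb_dip_db(leak_db: float) -> float:
    """Deepest possible cancellation (dB) when a full-level signal is
    summed with a leaked copy arriving leak_db quieter, fully out of phase."""
    leak_amplitude = 10.0 ** (-leak_db / 20.0)
    return -20.0 * math.log10(1.0 - leak_amplitude)

# Mic 1 m from its own source, 3 m from the neighboring source (3-to-1):
attenuation = level_difference_db(1.0, 3.0)   # ~9.5 dB down
dip = worst_case_comb_dip_db(attenuation)     # ~3.5 dB worst-case notch
print(attenuation, dip)
```

With the leakage about 9.5 dB down, even a fully out-of-phase arrival can only notch the combined signal by roughly 3.5 dB, which is generally considered tolerable; closer spacings make the possible cancellations much deeper.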
X-Y placement refers to the use of two closely placed microphones whose outputs are recorded directly, rather than matrixed (as with a mid-side pair), to produce the stereo signal. The angle between the microphone axes determines the apparent width of the stereo field. Angles between 90° and 135° are often used. Narrow angles emphasize the center of the sound field; wider angles create a wider image but may leave ambiguity in the center of the image. Many variations of this simple system of stereo microphone placement are possible using directional or omnidirectional microphones and either coincident or near-coincident placement.
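To see how the included angle maps a source’s position to an inter-channel level difference, here is a minimal sketch for a coincident cardioid pair. The function names are hypothetical; ideal cardioid patterns are assumed, with positive angles measured toward the left:

```python
import math

def cardioid_gain(angle_deg: float) -> float:
    """Ideal cardioid pickup pattern: 1.0 on axis, 0.0 at the rear."""
    return 0.5 * (1.0 + math.cos(math.radians(angle_deg)))

def xy_level_difference_db(source_deg: float, included_angle_deg: float) -> float:
    """Left-minus-right level difference (dB) for a coincident cardioid
    pair whose axes are splayed by included_angle_deg, for a source at
    source_deg from center (positive = toward the left mic)."""
    half = included_angle_deg / 2.0
    left = cardioid_gain(source_deg - half)   # left mic axis aimed at +half
    right = cardioid_gain(source_deg + half)  # right mic axis aimed at -half
    return 20.0 * math.log10(left / right)

# A centered source is identical in both channels; a source 45 degrees
# left of a 90-degree pair sits on the left mic's axis and 90 degrees
# off the right mic's axis, giving about a 6 dB left bias.
print(xy_level_difference_db(0.0, 90.0))
print(xy_level_difference_db(45.0, 90.0))
```

Widening the included angle increases the level difference for a source at a given offset, which is how the angle controls the apparent width of the image.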
When two directional microphones are placed together and aimed at an angle with respect to each other, a stereo recording can be created solely from the amplitude differences, because the time of arrival (and thus the phase relationship) will be the same for both mics. (This placement does not work with perfectly omnidirectional microphones; however, most real omnis are not perfectly omnidirectional at all frequencies, as can be seen from their off-axis response at high frequencies.) This technique is also known as single-point, or coincident, stereo microphone placement. Coincident placement can be used to record a single instrument in stereo or for ensembles, but it may not be optimal for large groups because the far ends of the sound source may not be adequately picked up. When two bidirectional (figure-eight) microphones are used at a 90° angle, the system is known as a Blumlein pair. The Blumlein pair produces a very natural sound, but because the rear lobes pick up sound as well as the front, the placement is sensitive to sounds coming from behind the array. This placement tends to work best close to the sound source in a good-sounding room without a restless audience. Also, sounds coming from the sides of the array will appear out of phase in the left and right outputs, leading to potential problems if the stereo signal is mixed to mono.
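The out-of-phase side pickup of a Blumlein pair can be sketched numerically. The helper names are hypothetical; ideal figure-eight patterns are assumed, with positive angles toward the left:

```python
import math

def figure_eight_gain(angle_deg: float) -> float:
    """Ideal bidirectional pattern: +1 on axis, -1 at the rear.
    The sign flip is the polarity inversion of the rear lobe."""
    return math.cos(math.radians(angle_deg))

def blumlein_gains(source_deg: float) -> tuple:
    """Left and right channel gains for a Blumlein pair: two
    figure-eights at 90 degrees, axes at +/-45 degrees from center."""
    left = figure_eight_gain(source_deg - 45.0)
    right = figure_eight_gain(source_deg + 45.0)
    return left, right

# A centered source appears equally and in phase in both channels,
# but a source 90 degrees to the side lands in the front lobe of one
# mic and the rear (inverted) lobe of the other, so a mono sum cancels.
print(blumlein_gains(0.0))
print(blumlein_gains(90.0))
```

This is exactly the mono-compatibility hazard the text describes: for side sources the two channels carry equal and opposite signals.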
A slight separation of a coincident pair can also yield a pleasing stereo image, provided the sound field is not too wide. The ORTF (Office de Radiodiffusion-Télévision Française) devised a method of spacing two cardioid microphones 17 cm (6.7″) apart at an angle of 110°. This system yields good localization and depth of field because the capsules are effectively coincident at low frequencies yet far enough apart at higher frequencies to provide some time-of-arrival information in addition to the relative level difference. Sounds arriving from far left and right may cause mono-compatibility problems because of interference caused by the time-of-arrival difference, so the placement should be checked by listening in mono as well as stereo.
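A quick sketch of the ORTF time-of-arrival difference and the resulting comb filtering in a mono sum follows. The function names are hypothetical; a simple path-length model is assumed, and the level differences from the cardioid patterns are ignored:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s (assumed)
ORTF_SPACING = 0.17     # m between the two capsules

def ortf_delay(source_deg: float) -> float:
    """Time-of-arrival difference (seconds) between the two capsules for
    a distant source at source_deg from center (path-length model)."""
    return ORTF_SPACING * math.sin(math.radians(source_deg)) / SPEED_OF_SOUND

def first_mono_null_hz(source_deg: float) -> float:
    """Frequency of the first comb-filter cancellation when both channels
    are summed to mono: the null falls where the delay is half a period."""
    return 1.0 / (2.0 * ortf_delay(source_deg))

# A source 90 degrees to the side arrives about 0.5 ms apart at the two
# capsules, putting the first mono cancellation near 1 kHz.
print(ortf_delay(90.0))
print(first_mono_null_hz(90.0))
```

Sources near the center produce almost no delay (the null moves far above the audio band), which is why the mono problems show up mainly at the extremes of the sound field.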
Jay Kadis is the author of the just-published book The Science of Sound Recording. Jay has played guitar since his school days, written and recorded original music, built studios, and done research in psychoacoustics and music technology. He teaches sound recording at Stanford University’s Center for Computer Research in Music and Acoustics.