A Q&A Session with Dave Swallow
Living and Working with Live Audio
Dave Swallow is a mixing engineer, live and studio audio engineer, tour manager, and tour consultant who has toured extensively in Europe, North America, South America, Australia, and Japan. He has mixed and supervised countless sessions, including iTunes, AOL, Yahoo, BBC, and B-side cuts. His live TV appearances include Jay Leno, Saturday Night Live, David Letterman, Austin City Limits, Conan O’Brien, Regis & Kelly, VH1, Later with Jools Holland, the Brit Awards, Live at Abbey Road, BBC One Sessions, Parkinson, The Friday Night Project, the Album Chart Show, E4, Taratata, New Pop, Jonathan Ross, Alan Carr, Top of the Pops, CD:UK, T4, Davina, and the MOBO Awards. Dave is also the author of the book Live Audio, published by Focal Press.
What is the best show you’ve ever done?
This was actually only just last November at Brixton Academy in London. It was a band called “The Cat Empire” from Australia. I was covering their UK tour for a friend of mine who would normally take charge of FOH, but he had other commitments at the time. I got to use a Funktion One PA system, a Midas XL4, and minimal outboard units, which made the show enjoyable to mix. Not only that, I had a great PA supplier, Audio Plus, who I’ve used before on some La Roux tours. We spent a long time looking at the very basics, and rather than just putting the PA up like everyone else does, we made a conscious decision to only point the sound where there were ears to hear it. All of those factors, coupled with a fantastic band, led to an amazing show.
What is the most difficult show you’ve ever done?
There have been a fair few, but one technically difficult show I can remember was a La Roux show in Ottawa, Canada. I ran some music through the PA system, and it just sounded really odd, like the sound was very confused. There was no definition between the different frequencies, as if someone had put a duvet over the whole PA. When you get presented with situations like these, you have no idea where to start. The room was very narrow but quite long, with a balcony running around the entire edge of the venue. The subs were in a split configuration, pointing straight underneath the balcony, so the first thing I did was move the subs into the center of the room, and this cleared up the definition in the low end. We then had a look at the crossover points and found that the high end was delayed by 35ms compared to the subs. This translates to the high end being placed at least 35 feet behind the subs. Once we had put everything back to zero, it was much easier to hear what was happening, and then match up the components as required. I suppose the moral of this story is to start at the beginning, rather than just trying to EQ your way out of trouble. If you move the PA or fix the alignment so it clears up the sound, then you have a much better canvas to paint your sonic picture.
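The 35ms-to-feet conversion above follows directly from the speed of sound. As a rough sketch (the function name and constants are illustrative, not anything Dave mentions; the speed of sound assumed here is for air at about 20 °C and varies with temperature):

```python
# Convert a time offset between PA components into the equivalent
# physical distance, as in the 35ms crossover delay described above.

SPEED_OF_SOUND_M_S = 343.0   # metres per second in air at ~20 °C
METRES_PER_FOOT = 0.3048

def delay_to_distance_feet(delay_ms: float) -> float:
    """Distance in feet that sound travels in delay_ms milliseconds."""
    metres = SPEED_OF_SOUND_M_S * (delay_ms / 1000.0)
    return metres / METRES_PER_FOOT

# A 35ms delay on the high end relative to the subs:
print(round(delay_to_distance_feet(35.0), 1))  # ~39.4 feet
```

So a 35ms offset places the high end nearly 40 feet behind the subs, consistent with the “at least 35 feet” figure in the story.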
Why did you get into writing?
I was in a very difficult place in my personal life a few years ago, and writing started out as therapy. It soon became obvious that writing was more than just therapy; it was rather enjoyable. In a way it was self-perpetuating; the more I wrote, the more I enjoyed it, and as a result I would write more! Actually, my mum does a lot of writing, and it’s something that I’ve always enjoyed doing but never had the platform to do it.
How did you know you wanted to be a sound engineer?
I always loved live music, but hated being in the middle of the crowd. I was at a Pearl Jam show at London’s Wembley Arena in the mid 90s, and I noticed that in the middle of the audience was a section cordoned off for the sound department! This was how I was going to enjoy live music without being pushed around by an overly excited audience.
What excites you about the future of live sound?
There are really so many things on the horizon that are exciting. Digital technology has really opened up the potential of what is possible. That’s not to say I prefer digital consoles; in fact, I think it’s an artistic choice. But the one thing that really gets me excited is the potential of 3D audio. It’s a little like when we first had 3D film, where things were jumping out at you all the time; as the gimmicky side dies away, it leaves a very credible visual behind. You have gained a new, subtle depth of perception without intrinsically changing the feel and the idea of the picture. We’ve had surround sound for quite a while now, but this isn’t what I’m talking about. What I’m talking about is creating an audioscape where you can audibly place different instruments in different places. You should be able to close your eyes and feel like you can walk around between the band members. It’s very exciting, but that’s not to say the public or most engineers would like it. It’s just a step, and I hope it’s a good step that will open up our minds and ears to a different perspective.
Have you any concerns about the future of live sound?
What concerns me is that the fundamentals of sound are being lost in the technology. It’s a little too easy to come up with solutions to problems rather than working out where the problem came from. So, as much as I think digital tech is a great thing and has opened up the world of audio, there is a fine line between an exciting new future and an overprocessed future.
We are hearing more and more about phase these days. How important do you think phase is?
Phase is the glue that sticks all our mixes together. In days gone by, phase was reserved for the studio and a small button at the top of our consoles. These days, because of digital technology, we have delays on each channel, giving us the ability to achieve much more phase coherence. I was introduced to the idea of “transient smear” by my friend Tony Andrews a few years ago, and as I started to take this on board and change the way I set up my PA systems, the whole idea just fell into place. If you want a well-defined sonic picture, you need a very fine brush, and the only way to achieve such a fine brush is through phase coherence.
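The per-channel delays mentioned above are typically used to time-align signals that capture the same source from different distances, so their transients arrive together rather than smearing. A minimal sketch of the arithmetic (the scenario and names are mine for illustration, not from the interview; speed of sound assumed for air at ~20 °C):

```python
# Sketch: compute the channel delay that time-aligns a close mic with a
# more distant mic on the same source (e.g. a close mic vs. overheads).
# Delaying the closer signal by the extra travel time lines up the transients.

SPEED_OF_SOUND_M_S = 343.0  # metres per second in air at ~20 °C

def alignment_delay_ms(extra_distance_m: float) -> float:
    """Delay in milliseconds for sound to cover extra_distance_m metres."""
    return extra_distance_m / SPEED_OF_SOUND_M_S * 1000.0

# A close mic 1.2 m nearer the source than the overheads
# would need roughly this much delay to stay coherent with them:
print(round(alignment_delay_ms(1.2), 2))  # ~3.5 ms
```

A handy rule of thumb falls out of this: roughly 3 ms of delay per metre of path difference (about 1 ms per foot).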