Phoenix, Arizona (March 25, 2021)—Emmy Award-winning sound designer and FX editor Angelo Palazzo has worked on blockbusters such as Disney’s Frozen, Stranger Things and Bridgerton, but he was thrown the curveball of a lifetime last year when COVID-19 brought the world to a standstill. Palazzo was working with filmmaker Robert Rodriguez on another project when the pandemic halted production one Friday afternoon in early 2020; by the following Monday, he was on board with Our Thing with Sammy the Bull, a Mafia podcast that puts his cinematic skills to use in a new format.
“I’m steeped in feature films and the TV world,” says Palazzo, “[and] it’s a real fine line when you’re putting sound to narration. The music is what is emotionally leading you through the story, but the sound design and sound effects root you in the reality of it. I didn’t wanna go too deep, because if you go too deep, then it can get corny.”
Instead of relying on gimmicky, on-the-nose audio cues that closely follow the action of a story—for example, the sound of a door creaking on its hinges when the protagonist walks into a dark room—Palazzo strives to put listeners in a scene without them even noticing.
“If it’s too literal, it can backfire, so when there was a major plot point, I wanted to kinda ease you into it and set you up for the big moment,” he says. “Then, slowly fade that reality out and bring you back in with just the narration with the music. If you’re nuanced about it, before they know it, you’re out of it and there was no distraction.”
The protagonist of Our Thing with Sammy the Bull is Salvatore Gravano, the notorious mobster whose hit list runs 19 murders deep and who served as underboss of the Gambino crime family under John Gotti.
Palazzo works with Richard Miller, the general manager of the Sammy the Bull organization, to produce each episode of Our Thing with Sammy the Bull. Miller, whose background is in seminar production, records the narration with Gravano on a Shure SM93 lavalier microphone (Gravano’s preference over typical podcasting models) into a Zoom H6 recorder. Miller says they’ve since moved on to the Shure MX150 lavalier, which doesn’t pick up as much ambient sound.
After a few rounds of editing in Adobe Audition, the narration tracks and archival sound clips go to Palazzo for placement and mixing with the score, which he composes and records himself using Native Instruments and vintage synths.
“Most everything starts with a piano idea, and as I get a certain progression or vibe, piano and strings are where I usually start,” says Palazzo. “There’s these moments where there’s a lot of flutes riffing in the background that has a real ’70s vibe to it that I liked. Also, in the beginning, I went with this beat bassline thing with a Fender Rhodes, just to set the city vibe.”
Elements of Palazzo’s original score pop up throughout the podcast, including a piece he wrote for the finale that now serves as the signature opening and closing music for each episode.
“They wanted a big orchestral thing—a big, sort of swelling finale,” he says. “If someone gives me a reference, I’ll check out the reference and I’ll listen, and as soon as I get into the vibe of it, I’m almost immediately off onto my own tangent. And then it becomes its own thing, which is what happened with that.”
Building a sound library was key to creating a cohesive sonic identity for the Mental Floss and iHeartMedia podcast The Quest for the North Pole, says producer and editor Dylan Fagan. In the early stages of production, his job was to figure out how to interpret host Kat Long’s vision and what she was hearing in her mind. That mix of sounds and music would become key to the podcast’s ability to recount both how and why explorers like Sir John Franklin, Fridtjof Nansen, Robert Peary and Matthew Henson made the first, sometimes fatal, expeditions to the Arctic.
“[Kat] gave me some notes and their scripts, and I just ran with that,” he says, taking her suggestions for “chilly” sounds and audio to represent sled dogs and other sounds that could be considered common for the Arctic.
While Fagan occasionally taps into licensed music to find the 40 to 50 tracks he’ll use in a typical season, most of the show’s audio comes from iHeartMedia’s deep in-house library. “I went through there and tried to find some ambient tracks that I thought fit the mood, and built a library out of that on my computer … to represent different moods and different shifts in storytelling.”
But instead of curating clips for the podcast’s theme, he created original music for the intro and outro clips. “I thought that was a nice way to differentiate it from using stock music for a theme song,” he says. “There’s a lot of great stock music out there, but I’ve run across podcasts that I think unknowingly end up using the same theme song [as another podcast] because they both licensed the same theme. I try to not do that on shows that I work on.”
Each episode begins with the narration, which Long records in a closet with the HVAC turned off, speaking into a Shure SM7B microphone that is tethered to a Zoom H6 recorder. For interviews, Long connects her smartphone to the Zoom and uses the standard phone call feature. Although not an ideal audio situation, Fagan puts in the editing work to make it gel.
“I usually just try to take out any unnecessary frequencies—since they’re phone calls, [that means] any of the low[s] or the highs, just to see if I can get rid of any hiss,” he says. “I might run a de-clicker on it and some noise reduction [from iZotope RX], but for the most part, I just make sure that it’s matching the levels of the voiceover so that nothing sounds too jarring or too much of a dip.”
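As a generic illustration of the band-limiting Fagan describes (a sketch of the idea, not his actual iZotope chain), a pair of simple one-pole filters can roll off the rumble and hiss outside the telephone band. The 44.1 kHz sample rate and 100 Hz/3,400 Hz cutoffs below are assumptions:

```python
import math

def one_pole_coeff(cutoff_hz, sample_rate):
    # Smoothing coefficient for a first-order (one-pole) IIR filter.
    return math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)

def band_limit(samples, sample_rate=44100, low_hz=100.0, high_hz=3400.0):
    """Crudely band-limit phone-call audio: a low-pass removes hiss
    above high_hz, then subtracting a low-passed copy of the result
    removes rumble below low_hz. Illustrative one-pole filters only."""
    a_hi = one_pole_coeff(high_hz, sample_rate)
    a_lo = one_pole_coeff(low_hz, sample_rate)
    lp = rumble = 0.0
    out = []
    for x in samples:
        lp = (1 - a_hi) * x + a_hi * lp            # keep content below high_hz
        rumble = (1 - a_lo) * lp + a_lo * rumble   # isolate content below low_hz
        out.append(lp - rumble)                    # band-passed sample
    return out
```

Real tools like RX use far more sophisticated spectral processing; the point here is only the low-cut/high-cut shape Fagan describes.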
From there, Fagan creates a fully soundscaped rough cut for Long to review to make sure the editing is moving in the right direction. A given episode usually goes through a few edits before it’s finalized and prepped for publishing to various podcasting platforms.
“I always send out everything to where I think it’s as close as it can be to what I think it would sound like in the end,” he says, “and then the rest of [the edits come from] input. By the time I send out the third version, it’s good to go—and I’d say that if I start an edit on Monday, send it out on Tuesday, we usually have it wrapped up by Friday.”
Working with eight different phone calls of guest audio for the inaugural season, each one lasting from an hour to an hour and a half, Fagan says keeping everything organized can be a challenge. To break down silos and keep everyone working from the same script, so to speak, they use the cloud.
“We have a master Dropbox, an enterprise account, that has folders that sync up our shows, and so when Kat records her voiceover, she can upload it directly to that folder,” he says. “I have all the recordings in that folder, as well as the transcripts and my folder for music and sound effects, so everything’s there. I always know where everything is, and Kat knows where to go for anything.”
Brooklyn, NY (January 28, 2021)—It’s telling that Longform editor Jenelle Pifer spends more time perfecting the flow of the conversations on the podcast than obsessing over the audio quirks of an episode—and that’s not a knock on the latter. Longform, the long-running podcast that features authors and journalists talking about their craft, is simply all about the art of the interview and how to present it.
“My approach to editing is to make it as clean as I possibly can, and condensed as I possibly can, without ever letting people hear an edit,” says Pifer. “I do relatively little reordering of the conversation—sometimes it’s necessary, [but] a lot of times, I find that you can tell when the conversation is reordered. It’s more chipping away at the raw file to kind of make the arc of what seems to be the most meaningful themes pop up.”
Co-founder and co-host Max Linsky, who also owns the podcast production company Pineapple Street Studios, hit up his friends who worked in audio for interview tips when Longform first launched in 2012. “They would always say, ‘You want it to feel like a casual, informal conversation’—but if you actually listen to a casual, informal conversation, it’s incredibly boring. And that’s part of what the editing process does to it.”
Pifer’s editing job doesn’t begin until Linsky and co-hosts Aaron Lammer and Evan Ratliff wrap their work. Each host books and interviews their own guests over Zoom, recording themselves through Shure SM7B microphones while guests like ESPN writer Wright Thompson and New York magazine’s Olivia Nuzzi record locally on a smartphone, which Pifer later syncs. A typical interview conversation runs 90 minutes, while the final edit clocks in around one hour.
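Syncing a guest’s locally recorded track to the host’s file is commonly done by finding the lag at which the two waveforms correlate most strongly. The brute-force sketch below illustrates that general technique only (the article doesn’t describe Pifer’s actual tools) and is practical just for short excerpts:

```python
def best_lag(reference, recording, max_lag):
    """Return the lag (in samples) at which `recording` lines up best
    with `reference`, via brute-force cross-correlation. A positive
    result means the shared content starts `lag` samples later in
    `recording` than in `reference`."""
    best, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = 0.0
        for i, r in enumerate(reference):
            j = i + lag
            if 0 <= j < len(recording):
                score += r * recording[j]  # correlate overlapping samples
        if score > best_score:
            best, best_score = lag, score
    return best
```

In practice an editor would run this on a short, distinctive region (a clap or a shared word), then shift the whole guest track by the resulting offset.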
“Whoever was the host that week will send me the raw tape along with some general notes about how they think the conversation went, any concerns they have, anything that I should particularly look out for while I’m editing,” says Pifer. “I’ve been doing this for about five years now, so the notes have gotten lighter as they started to trust me and know we were on the same page about how we wanted the show to sound.”
After editing the raw audio in Adobe Audition for content and pacing, as well as eliminating distracting stutters and filler words like um and uh, Pifer applies noise reduction and compression from processing built into the program.
Although Linsky says he’s proud of the work the Longform team has published since the pandemic began, there are some drawbacks to videoconferencing. “From a technical aspect, it’s hard to have it really be a back-and-forth conversation,” he says. “You do lose a lot in terms of body language, and part of that is just the rhythms of how people talk. It’s hard to know when to jump in, almost.”
One of the secrets of the podcast is the guests themselves. “Do you know who’s incredible at telling stories? Journalists. They’re great, natural talkers and storytellers, for the most part,” he says. “And one of the things that I’ve learned doing the show is that most journalists, even investigative war reporters, most people who do this work are on some level writing about themselves. The most memorable moments for me in the show are moments in which we’re able to see something, some kind of pattern or trend in someone’s work, that they haven’t totally recognized or seen themselves.”
Los Angeles, CA (January 21, 2021) — Rarified Heir, a new podcast that takes listeners into the surreal lives of children of celebrities, recorded its entire seven-episode debut season before COVID-19 social distancing protocols and shutdowns went into effect in spring of 2020. While many podcasters have already tweaked their recording and production workflows during the last year, Rarified Heir’s production team is now catching up to distanced recording.
“It worked out great to be face to face, but now obviously since the pandemic has set in, we’re reassessing how that goes,” says podcast producer and engineer Erik Paparozzi. “I’ve been really trying to make sure that we don’t lose the integrity of the sound that we’ve worked hard to achieve through the channels that are available to us now working remotely.”
On Rarified Heir, host Joshua Mills, son of actress and comedian Edie Adams, interviews other children of celebrities who grew up just out of the spotlight. Season one guests include Carnie Wilson, daughter of The Beach Boys’ Brian Wilson and a multi-Platinum artist in her own right with the ’90s pop trio Wilson Phillips, and film producer Antonia Bogdanovich, daughter of director Peter Bogdanovich.
Mills and Paparozzi, along with co-host Jason Klamm, recorded the entire first season at Paparozzi’s garage studio in Los Angeles. Guests sat with them in a circle in front of Shure SM7B microphones—chosen because the famed SM7 was used on Michael Jackson’s Thriller, the best-selling album of all time—while Mills led the conversations. After wrapping recording sessions in March, they got their first taste of working while distanced when it came time to edit the episodes.
Beginning later that month, they met once a week on a video conference while Paparozzi edited in Pro Tools. “Josh and I would hop on a conference call and literally go over word for word and figure out what was essential and what could be trimmed down for time purposes, or for potential future Patreon episodes that we are considering,” explains Paparozzi. Then, he would send the entire episode to Mills for another review and get back time codes for further edits. “I can just go in and chop that stuff out, and that’s been a pretty effective way of working.”
The team is working through potential setups for recording season two now. “Josh has been experimenting with what works in his home office, as far as doing Zoom calls,” notes Paparozzi. “The technology is pretty plug and play these days, but Josh, who’s not a musician or a sound dude, he’s still learning [that a] room has a certain tone to it and the microphone should maybe move a little bit, or [his] face should be closer to get the best tone.”
Gear-wise, Mills is currently planning to use the Focusrite Scarlett Solo Studio kit, which includes a USB interface with a Scarlett preamp, a condenser microphone, headphones and cables. “When it was becoming apparent that being in a 15-by-15 studio was not realistic during this time, I did a cursory search on Amazon and sent Josh a few ideas [of gear to purchase],” he says. “Just to sort of get him started, we had him open up a GarageBand session. We did all this over FaceTime and he was getting a signal.”
They’re also considering audio quality on the opposite end of the recording from future guests in season two. “I think [we’re] going to focus on people that we know can record themselves well and see how it goes,” says Mills.
As a music producer, Rick Rubin is known for stripping away the clutter and guiding artists to focus on what they do best, whether it’s Johnny Cash’s deep baritone voice, the primal energy of Danzig’s guitar riffs or Run-DMC’s iconic breakbeats. Broken Record, a podcast that fosters conversations between musicians and their audiences in the way album liner notes once did, follows the same premise by keeping the setup simple.
“The main focus of Broken Record is the conversation,” says Leah Rose, producer of the Pushkin Industries podcast. “Because the conversations go so deep, when you do hear the music, you hear it in an entirely new context. You might hear things that you didn’t hear before, and learning about the artist’s motivation or the backstory really adds a lot to their music.”
Producing Broken Record, which bills itself as “liner notes for the digital age,” is a bicoastal endeavor led by Rubin, co-interviewer Malcolm Gladwell and host Justin Richmond, from Shangri-La Studios in Malibu, California, and Pushkin Industries’ studio in Hudson, New York. The podcast’s guest list has included industry veterans like Bruce Springsteen and Don Was, as well as newer artists like FKA Twigs, and conversations are free-format affairs that can include playbacks of recorded music and even live, off-the-cuff performances.
In a recent episode, Rubin and artist James Blake dissected Blake’s recording and creative process, and how he often records a single vocal phrase, then stacks it and manipulates the pitch while playing along on the piano. “He lays out that entire process while he’s tinkering around on a piano during the interview, which is just really special and incredible when you hear it,” she says. “It’s like all of a sudden you have this new information to hear the song with, and it makes for an incredible experience.”
Face-to-face interviews like the one used for the Blake episode, which was recorded at Shangri-La on Neumann U87s using Neve 1073 mic preamps into an API console, are typically the most productive. [Rose says Rubin has a doctor onsite who does rapid COVID testing.] The raw audio from the Blake session clocked in at two and a half hours, giving Rose plenty of material to use when building toward the final edit.
“With Rick, nothing is linear,” she says. “As an editor, my job is to look at the entire thing as a puzzle and figure out how the pieces fit together, [to] take something that could be completely non-linear and make it linear.”
As the main facilitator and producer, Rose is on standby via Zoom during recording sessions to cue up recordings for the host and guest. Many of the episodes released in the last year were recorded with the guest at home, with mixed results. Sometimes they get lucky and the artist has a world-class studio at their disposal—as was the case with Springsteen—but often Rose works directly with the guests to ensure their recording setup will be up to standards. She’s even shipped gear to some guests.
After the interview is done, Rose compiles the audio files into an edit that gets reviewed by Richmond and Mia Lobel, executive producer at Pushkin Industries. Once the edit is locked in, she sends it to engineers Jason Gambrell and Martin Gonzalez for mastering.
Producing audio on behalf of one of the most successful and enigmatic producers of his generation might intimidate some, but Rose says Rubin is hands-off for most of the process. “He trusts us,” she explains. “We take the finished product, the conversation, once it’s done and then it’s really up to us to figure out the best way to present it.”
As if facilitating pristine indoor recordings isn’t hard enough, some podcasters seek out harsh audio environments in order to bring adventurous stories to life. We’ve brought together some of the best field recording pros in the business here to share insights they’ve learned on location. Read on to see how they get the job done in the face of wind, water and reverberant warehouses.
Outside Podcast and Outside/In
More than 40 volcanoes in Alaska’s Aleutian Islands form the northern curve of the infamous Ring of Fire that encircles the Pacific Ocean with hundreds of active peaks. But for audio producers charged with field recording there, that’s not even the most terrifying fact about this vast expanse of fire and ice.
“I don’t know if you are familiar with the Aleutian Islands,” says audio storyteller and podcast producer Stephanie Joyce with a laugh, “but their nickname is, ‘the birthplace of the winds.’”
Smuggled dinosaur bones? Scuba diving under a pyramid? Binaural audio recording onsite? It’s all part of the Overheard at National Geographic podcast’s third season. For the show’s production team, gathering field recordings from exotic locations and subjects is just another day at the office.
“I went to a warehouse in Queens [New York] where a paleontologist had dinosaur fossils given to her by Homeland Security because they had been illegally shipped to the United States,” says producer Brian Gutierrez. “Just following her with the recorder and letting her tell her story, I think brings you into the moment more than just being in the studio.”
The environmental touches that connect listeners to place and setting in Missing in Alaska are the real deal. When producer Seth Nicholas Johnson needed sounds to represent the idea of lowering a search boat into the water, he simply referenced the show’s own collection of curated audio, captured while field recording on location.
“It’s like, ‘Okay, we’re building Alaska, we’re painting a picture of this three-day trip and this search, there’s no need to pretend that just a random soundscape of the ocean that I found online was the Pacific Ocean,’” says Johnson.
Capturing the vibe of a big-budget spy thriller was crucial for Wind of Change, a podcast that asks an intriguing but potentially dangerous question: What if the U.S. Central Intelligence Agency wrote “Wind of Change,” the enormously successful 1991 power ballad by hard rockers Scorpions, in a bid to bring the Cold War to an end?
While chasing leads and operatives from New York to Russia and Germany, producer Henry Molofsky was tasked with capturing audio in a multitude of environments—a Scorpions stadium concert held in Russia, a boat on the Moskva River in Moscow on a windy night, telephone calls with secret agents, and even random hotel rooms with former CIA spies.
2020 will be remembered as the year we’d like to forget, but when 2021 is recalled one day as the year everything bounced back, much of that will be due to groundwork laid down in the preceding 12 months. That includes the pro-audio industry—next year, when live events and concerts return, new hits rule the airwaves and the latest must-hear podcasts land in your listening queue, many of them will be created using pro-audio equipment that was introduced over the last 12 months. With that in mind, here’s the Gear of the Year for 2020.
So what was the Gear of the Year? That’s not an easy thing to determine, so rather than weigh a hot new plug-in against an arena-filling P.A. or an audio console years in development, we decided to let our readers show the way.
Product announcements have always been among the most popular stories on prosoundnetwork.com, so we dug through our Google Analytics (readership statistics), sifting through all the “new product” stories we ran in 2020 (well into the triple digits!) to determine which ones were the most popular with PSN readers. The result is the Gear of the Year that YOU unknowingly picked—a true Top-20 for 2020.
1. YAMAHA RIVAGE PM3 AND PM5 DIGITAL MIXING SYSTEMS
This dual product launch in May was far and away the most popular product announcement of 2020 with our readers. Yamaha introduced two consoles—the PM5 and PM3—as well as a pair of DSP engines—the DSP-RX and DSP-RX-EX—and version 4 firmware that provides features to new and legacy Rivage systems.
Both of the new consoles feature large capacitive touchscreens that allow users to use multi-finger gestures, with the PM5 sporting three screens and the PM3 getting one. As with their predecessors, the PM5 and PM3 sport 38 faders—three bays of 12, with two masters—but each of the new control surfaces is laid out with an eye toward increased efficiency.
2. SOLID STATE LOGIC 2 AND 2+ USB AUDIO INTERFACES
Solid State Logic unveiled its first personal studio-market products—the USB-powered SSL 2 (2-in/2-out) and SSL 2+ (2-in/4-out) audio interfaces—at the Winter NAMM Show. The 2+ in particular caught our readers’ eyes, with a 4K analog enhancement mode “inspired by classic SSL consoles,” monitoring and an SSL Production Pack software bundle. Offering expanded I/O for collaborating musicians, it includes two analog mic preamps, 24-bit/192 kHz AKM AD/DA converters, multiple headphone outputs with independent monitor mixes, MIDI I/O, and additional unbalanced outputs for DJ mixers.
3. JBL 4349 STUDIO MONITOR
The JBL 4349 studio monitor is a compact, high-performance monitor loudspeaker built around the JBL D2415K dual 1.5-inch compression driver mated to a large format, High-Definition Imaging (HDI) horn, paired with a 12-inch cast-frame and pure-pulp cone woofer. The JBL D2415K compression driver features a pair of lightweight polymer annular diaphragms with reduced diaphragm mass, while the V-shaped geometry of the annular diaphragm reduces breakup modes, eliminates time smear and reduces distortion, according to JBL.
4. APPLE LOGIC PRO X 10.5
Apple updated Logic Pro X with a “professional” version of Live Loops, new sampling features and new and revamped beatmaking tools. Live Loops lets users arrange loops, samples and recordings on a grid to build musical ideas, which can then be further developed on Logic’s timeline. Remix FX brings effects to Live Loops that can be used in real time, while the updated Sampler augments the EXS24 plug-in with new sound-shaping controls. Other new tools include Quick Sampler, Step Sequencer, Drum Synth and Drum Machine Designer.
5. AMS NEVE 8424 CONSOLE
The AMS Neve 8424 is a small-format desk based on the 80-series console range. Intended for hybrid studios, the desk provides a center point between analog outboard gear, synths and the like, and the digital world of DAW workflows, software plug-ins and session recall. As an analog mixing platform, the 8424 offers 24 DAW returns across 24 channel faders or, for larger DAW sessions, a 48-Mix mode that allows a total of 48 mono inputs with individual level and pan controls to be mixed through the stereo mix bus.
6. MILLENNIA MEDIA HV-316 MIC PREAMP
Millennia Media bowed its fully remote-controllable microphone preamplifier, the HV-316. The unit packs 16 channels of Millennia HV-3 microphone preamplification into a 10-pound, 1U aluminum chassis, offering simultaneous analog and Dante 32-bit/192 kHz Ethernet outputs; other digital output options, including USB and MADI, are planned. Designed for high-temperature continuous operation (up to 150° F), the HV-316 runs on 12V DC (including battery power) or worldwide 80–264V AC, and features “pi filter” shielding on audio and digital feeds to prevent interference.
7. SHURE SLX-D DIGITAL WIRELESS SYSTEM
The Shure SLX-D, offered in single- and dual-channel models, provides operation of up to 32 channels per frequency band. Transmitters run on standard AA batteries or an optional lithium-ion rechargeable battery solution with a dual-docking charging station. For less technically inclined users, it offers Guided Frequency Setup and a Group Scan feature that sets up multiple channels by assigning frequencies to all receivers automatically via Ethernet connections, allowing a 30-plus-channel system to be set up within a few seconds.
8. MEYER SOUND SPACEMAP GO
The Meyer Sound Spacemap Go is a free Apple iPad app for spatial sound design and mixing. Working with the company’s Galaxy Network Platform, Spacemap Go can control Galaxy processors using a single or multiple iPads as long as the units have current firmware and Compass control software. Spacemap Go is compatible with various sound design/show control programs such as QLab, so designs assembled using them can be implemented into a multichannel spatial mix using Spacemap Go’s templates for common multichannel configurations.
9. D&B AUDIOTECHNIK 44S LOUDSPEAKER
Housed in a flush-mountable cabinet, the d&b audiotechnik 44S is a two-way passive, point source installation loudspeaker with 2 x 4.5-inch neodymium LF drivers and 2 x 1.25-inch HF dome tweeters, delivering a frequency response of 90 Hz–17 kHz. The 44S features a waveguide and baffle design intended to provide horizontal dispersion down to the lower frequencies while being focused vertically, providing a 90° x 30° dispersion pattern to direct sound to specific spaces.
10. BEYERDYNAMIC TG D70 AND TG 151 MICS
Beyerdynamic made two additions to its Touring Gear (TG) series. The second-generation TG D70 dynamic kick-drum mic is meant for capturing the impact of bass drums and similar low-frequency-intensive instruments, while the TG 151 instrument mic is a lean microphone with a short shaft that can be used on everything from snares and toms to brass instruments and guitar amplifiers.
11. QSC Q-SYS CORE PROCESSORS
QSC’s Q-SYS Core 8 Flex and Nano audio, video and control processors provide scalable DSP processing, video routing and bridging for web conferencing, as well as third-party endpoint integration without the need for separate dedicated control processors. The 8 Flex includes onboard analog audio I/O and GPIO plus network I/O, while the Nano offers network-only audio I/O, processing and control.
12. TELEFUNKEN TF11 MICROPHONE
Telefunken’s TF11 is the company’s first phantom-powered large-diaphragm condenser mic. The CK12-style edge-terminated capsule is a single-membrane version of the capsule featured in the TF51, and the amplifier is a proprietary take on the FET mic amplifier similar to the M60, coupled with a custom large-format nickel-iron core transformer.
13. L-ACOUSTICS K3 LOUDSPEAKER
K3 is a compact loudspeaker from L-Acoustics that is intended as a main system to cover up to 10,000 people, or for use as outfills or delays for K1 or K2 systems. Designed as a full-range line source, K3 integrates 12-inch transducers for large-format system performance in the form factor of a 10-inch design.
14. CLEAR-COM HEADSET SANITIZATION KITS
Clear-Com has sanitization kits for its CC-300, CC-400, CC-110, CC-220 and CC-26K headsets. They include replacement ear pads, pop filters, sanitizing wipes, ear sock covers and temple pads in a cloth bag. Items for each kit vary depending on the headset, and can also be purchased separately.
15. ZOOM PODTRAK P8 PODCAST STUDIO
The Zoom PodTrak P8 provides recording, editing and mixing capabilities all in one unit. Six mics, a smartphone and PC can be recorded simultaneously, each with its own fader and preamp with 70 dB of gain. A touchscreen controls monitoring, adjusting, onboard editing and more.
16. WAVES KALEIDOSCOPES PLUG-IN
Waves’ Kaleidoscopes plug-in creates classic analog studio effects such as 1960s phasing and tape flanging, 1970s stadium tremolo-guitar vibes and 1980s chorus sounds.
17. OUTLINE STADIA 28 LINE ARRAY SYSTEM
The Outline Stadia 28 is a medium-throw system intended for use in permanent outdoor installations. A single enclosure weighs 46.2 pounds and can reportedly reach 139 dB SPL.
18. LAB.GRUPPEN FA SERIES AMPLIFIERS
Lab.gruppen’s FA Series Energy Star-certified amplifiers are intended for commercial and industrial applications, and are offered in 2 x 60W, 2 x 120W and 2 x 240W models.
19. D.W. FEARN VT-2 PREAMPLIFIER
The updated D.W. Fearn VT-2 Dual-Channel Vacuum Tube Microphone Preamplifier now features an integrated, switchable 43 dB pad, aiding patching into a master bus.
20. KEF LS50 META SPEAKER
Our Gear of the Year list concludes with the LS50 Meta, featuring KEF’s Metamaterial Absorption Technology driver array, a cone neck decoupler, offset flexible bass port, low-diffraction curved baffle and more.
Stockholm, Sweden (November 12, 2020)—Practical real-time remote music creation and collaboration took a step closer to reality recently when Stockholm-based developer Elk began posting videos made by beta-testers of its Aloha service. Conceived for use over high-speed internet and, ultimately, over 5G networks, Aloha combines ultra-low latency audio with a video chat user experience and is scheduled for a 2021 release.
The effect of latency—the time it takes for a signal to pass through a digital audio system and back to the originator’s ears—varies from one individual to the next and according to musical content. One rule of thumb is that latency becomes noticeable at 15 to 30 milliseconds, but performers are often more sensitive, and some find that more than 7 ms is too much to handle and remain in sync (for example, U2’s Bono favors an analog in-ear monitoring path for that reason).
“We knew there would always be limitations; the speed of light is still there,” says Elk CEO and co-founder Michele Benincaso over Zoom from Sweden, “but there are things to make it better and there are different kinds of experiences. If we want to have an in-the-same-room experience for a pro musician, with good fiber, you’re looking at around 1,000 km,” or about 620 miles, maximum, between participants.
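Benincaso’s distance limit follows from simple propagation arithmetic. Assuming signals in fiber travel at roughly two-thirds the speed of light (about 200,000 km/s), plus a small assumed processing budget at each end, the numbers line up with the thresholds cited above:

```python
# Rough feasibility check, assuming ~200,000 km/s signal speed in fiber
# (about two-thirds the speed of light in a vacuum).
FIBER_KM_PER_S = 200_000

def one_way_latency_ms(distance_km, processing_ms=2.0):
    """One-way mouth-to-ear latency: fiber propagation time plus an
    assumed fixed budget for conversion and packetization."""
    return distance_km / FIBER_KM_PER_S * 1000 + processing_ms

# At the ~1,000 km limit Benincaso cites, propagation alone costs 5 ms;
# with a couple of milliseconds of processing, each musician hears the
# other at about 7 ms, near the sensitive-performer threshold above.
```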
Benincaso’s path to developing Aloha was somewhat circuitous. Born in Italy, he has a master’s degree in violin making. Yet despite such a decidedly analog beginning, “I’ve always been fascinated by technology, and I had a dream to put technology and musical instruments together,” he says.
His original goal was to create the guitar of the future. “We had a vision to bring the guitar to a level where you can give the musician access to more sounds and a new way of expression.”
About six years ago, Benincaso met a professor from Stockholm’s Royal College of Technology, one of Elk’s co-founders, who introduced him to the potential of modern wireless communications and the Internet of Things. In pursuit of his original vision, that led ultimately to the development of the Linux-based Elk Audio OS.
“The magic is that we can run it on a general-purpose CPU; we use Raspberry Pi, but we can do analog-to-analog in one millisecond. There’s [typically] no way of getting that without a very expensive computer and a very expensive audio interface.”
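To put that one-millisecond claim in context, analog-to-analog latency is dominated by audio buffer size. A rough illustrative model (an assumption for the sake of the arithmetic, not Elk’s published figures) counts one buffer on input and one on output and ignores converter delay:

```python
# Smaller buffers mean lower latency but tighter real-time deadlines
# for the CPU -- which is why sub-millisecond figures on commodity
# hardware like a Raspberry Pi are notable.
def round_trip_latency_ms(buffer_frames: int, sample_rate_hz: int = 48_000) -> float:
    """Approximate analog-to-analog latency: one input buffer plus one output buffer."""
    per_buffer_ms = buffer_frames / sample_rate_hz * 1000
    return per_buffer_ms * 2

print(round_trip_latency_ms(16))   # 16-frame buffers at 48 kHz: ~0.67 ms
print(round_trip_latency_ms(256))  # a common desktop-OS setting: ~10.7 ms
```

Under this model, hitting the 1 ms mark requires buffers of a few dozen frames or fewer, which general-purpose operating systems rarely sustain reliably.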
Through the company’s relationship with Steinberg, Elk has already ported over 500 VST plug-ins to its OS. The Elk Audio OS is at the core of the company’s Sensus smart guitar and other products, and it also enabled the building of a custom solution—in combination with Fishman and Arturia—for Muse frontman Matt Bellamy to generate Prophet V synth sounds from his wireless guitar.
It is the Elk Audio OS within Aloha, which will ship with a small stereo audio device and an Ethernet cable, that makes real-time remote collaboration feasible. “There are technologies that work on a LAN, like Dante, but the internet is another story,” Benincaso says. “What Aloha technology in the device does is optimize the audio to be sent to the internet. Aloha considers packet drop, distance and jitter.”
Jitter can be worse than latency, he says. “In an orchestra, you can have 30, 40 milliseconds between one side of the orchestra and the other, but it’s consistent, so musicians adjust. What is harder is unpredictable latency—jitter. Aloha, based on the network conditions, adapts to constantly deliver the most consistent latency. And we use other technologies to maximize the audio experience for the user.”
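Aloha’s adaptive scheme is proprietary, but the general trade-off Benincaso describes—accepting a little fixed delay in exchange for consistency—can be sketched with a simple adaptive jitter buffer. The class name and the mean-plus-spread heuristic below are illustrative assumptions, not Elk’s implementation:

```python
import statistics
from collections import deque

class AdaptiveJitterBuffer:
    """Illustrative jitter buffer: hold packets just long enough to smooth
    out arrival-time variation, sizing the hold time from recent jitter."""

    def __init__(self, window: int = 50, safety_factor: float = 2.0):
        self.deviations = deque(maxlen=window)  # |arrival - expected|, in ms
        self.safety_factor = safety_factor

    def observe(self, expected_ms: float, arrival_ms: float) -> None:
        """Record how far off schedule the latest packet arrived."""
        self.deviations.append(abs(arrival_ms - expected_ms))

    def target_delay_ms(self) -> float:
        """Playout delay = mean deviation plus a safety margin on its spread."""
        if len(self.deviations) < 2:
            return 5.0  # conservative default until statistics accumulate
        mean = statistics.mean(self.deviations)
        spread = statistics.stdev(self.deviations)
        return mean + self.safety_factor * spread

buf = AdaptiveJitterBuffer()
for expected, arrival in [(0, 1), (20, 22), (40, 41), (60, 65), (80, 81)]:
    buf.observe(expected, arrival)
print(round(buf.target_delay_ms(), 2))  # -> 5.46
```

The point of such a buffer is exactly what Benincaso describes: players can adapt to a steady 30–40 ms, but not to a delay that wanders, so the buffer converts unpredictable latency into a slightly larger, predictable one.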
No less important is the user experience, he says. “It’s a video chat app that you can access on a computer, phone or tablet. It’s important to have the visual feedback from the other musicians and also for recording and livestreaming. Once the device is connected, you will see a list of people connected. Pick who you want to play with, call him or her and you’re ready.”
An early version of Aloha was previewed by Swedish telecom company Ericsson at the 2019 Mobile World Congress in Barcelona, where a band performed from two locations in the first live demo of the service over 5G. With 5G, processing power in the cloud will eliminate the need for a local CPU, RAM or dedicated audio interface. For Aloha, he says, “Edge computing is where you get ultra-low latency, so there will be a major impact when 5G comes.”
Elk has been flooded over the past several months with suggestions for where Aloha might also be employed, such as in music education, says Benincaso, so the company has now adjusted its focus and put more resources behind launching the service. However people use it, he says, “We let musicians connect in the shortest time possible with a seamless experience.”
United Kingdom (November 12, 2020)—Where does a rock legend record his podcast? Anywhere he wants to. That’s certainly been the case with Digging Deep with Robert Plant, where the famed Led Zeppelin frontman and solo artist discusses his work across his long and storied career. Every podcast recording session is held in a different location with distinctive acoustics, such as Plant’s favorite pub, one of his homes or in front of an audience of 200 people at a London record store.
Faced with recording in such diverse environments, Matt Everitt, the producer and co-host of Digging Deep, sticks to hard-and-fast rules for microphone placement when tracking the music legend’s stories about songs he recorded with Led Zeppelin and his many post-Zep projects.
“When it comes to singing, obviously he’s got incredible microphone technique, but [for the podcast] we spend quite a bit of time beforehand making sure that wherever we’re going to be sitting, there’s a good kind of catchment area,” says Everitt. “You’ve got to keep an eye on the mic positioning—never handheld, always boom, always between the nose and the chin point.”
While the recording sites might occasionally pose a challenge, the reward, says Everitt, is that they foster engaging discussion. “We’re going to make sure the production standards are good, but it’s also about creating a space where Robert can really relax,” he says. “Part of the production is making it feel natural—not feel like you’re sitting in a chair under a spotlight being interrogated, because he’s not interested in that and neither are we. [We try] to make it a place where you feel like you are eavesdropping.”
Achieving uniformity in such a range of spaces can be difficult, so Everitt records Plant with a Beyerdynamic M201 microphone that has a hypercardioid pattern. “They’re pretty directional, which means that sometimes people are a bit scared of using them because the catchment is quite narrow, but they sound so warm.”
Another mainstay of Everitt’s on-location setup is to use extra-thick cables: “The thicker the cable, the more reliable it is, the better it sounds—simple as that,” he says. He tracks to a Zoom H6 portable recorder for its ability to maintain separation between channels.
During post production, Everitt and the audio team work up a fairly completed product for Plant to review, even if it’s only a first cut. Everitt compiles the audio so the mastering and EQ pros can clean it up and take out any clicks and hisses, and then he assembles a “version one” edit, occasionally moving pieces around to maintain story pacing. Plant then listens and gives his input on what does and doesn’t work.
“He’s more knowledgeable than anyone about how he wants the show to sound,” Everitt says. “A lot of that’s worked out pre-interview. We don’t talk too much about what’s going to be in it because it takes away the spontaneity, but we’ll know why this song is really interesting.
“I think one of the reasons it works is that there’s a real honesty,” he adds. “He takes his music very seriously, but I don’t think he always takes the world around showbiz particularly seriously, so he’s happy to puncture some of the myths around the kind of ‘rock god’ world.”
While many podcasts are leaning into the limitations of COVID culture and adapting to audio recorded over a videoconferencing platform or iPhone, Everitt is playing a longer game with Digging Deep and creating a podcast that isn’t tied to a particular moment in time.
“It’s great doing podcasts over Zoom, it’s fantastic, but we’ve spent a lot of time and effort investing in microphones and audio equipment to get people sounding great because the ears deserve a really well-produced show,” he says.
“They’re all good, all those approaches. Sometimes you need to listen to Fugazi, sometimes you need to listen to Steely Dan. Whether it’s a garage band or a beautifully produced L.A. session thing, both are good depending on what you want. That’s the power of the format, isn’t it? The power of podcasting.”
New York, NY (October 22, 2020)—Sound design choices in podcast production often follow the theme of the work. But when a podcast carries the weight of journalistic credibility, as Radiolab producer Dylan Keefe notes, there is a big difference between using sound in service of telling a story and using sound to help tell a true story.
“You have to have a journalistic mind or understanding in order to do something like Radiolab, because sound is an editorial statement,” says Keefe. “You can easily mislead people with sound. It’s like being a photographer: Where you point the lens or where you put it in a story are all editorial choices. If you’re not aware of that stuff, [and] you’re just trying to make cool sounds or make things dramatic, you’re not going to be trusted, and that’s a big deal.”
The two-time Peabody award-winning radio program produced by WNYC was already tailor-made for podcast audiences when it hit the airwaves in 2002, remaking long-form journalism with an ear toward sound design. To this day, the show takes listeners on a deep dive into a new topic every week. Creator and host Jad Abumrad established its light-touch audio style from the beginning, a tradition Keefe carries forward today.
“I put a priority on trying to [achieve] Jad’s vision,” he says. “I wasn’t really chomping at the bit to put my own stamp on it. My real love is for narrative non-fiction, and being able to exist in a musical and sound environment within narrative non-fiction is the coolest ever.”
To maintain the show’s sonic stamp while recording remotely, Abumrad is using an AKG C414 large-diaphragm condenser mic into a Universal Audio interface. The show’s production team and reporters at home are using a variety of other mics, including direct USB podcasting mics. The ongoing situation has also forced the producers to abandon their “no phoners” stance, at least as a last resort. That can make for some nightmare mixing scenarios, Keefe says, which can mean more time spent in post production.
“Our hosts, reporters and producers record directly into Pro Tools and we sync them in post,” he says. “Obviously, right now, everything is remote, so we use a combination of tape syncs and interviews over ipDTL, Zoom, phone, ISDN—anything we can do.”
The variables change each week, as well. The recent episode “Dispatches from 1918” contains five stories within a single podcast episode, with separate sound motifs for each segment. Another, “Baby Blue Blood Drive,” mixes field recordings of horseshoe crabs with studio-recorded narration. An episode on the bizarre behaviors a female octopus displays while watching over her eggs, including self-starvation and eventual death, took a different creative turn.
“In the ‘Octomom’ piece,” he says, “we even went as far as determining that there was going to be a symphony-type sound that was going to be describing a particular way in which the brain of the octopus shuts down over time. The scientists we were talking to described it as like a symphony that was slowly shutting down, like voice by voice.”
In order to keep up with Radiolab’s busy schedule, producing a new episode every week, Keefe gets involved early in the editorial process. “Each story is very different, although it’s all told through the Radiolab voice,” he says. “I’m at all the pitch meetings where we’re deciding what stories get green-lighted and [I] have a voice, so I can see it from beginning to end. Because in Radiolab, the sonic identity of it is…I wouldn’t say just as important, but it’s up there as important as just the straight-up journalism behind it.”