The way we listen to music today is not going to last. A bevy of new technologies is set to radically change our relationship to auditory media. Novel speaker materials, remarkable advances in recording equipment, and pioneering mind-machine interfaces have perched our culture on the verge of a world we might scarcely recognize: where music can be played back on any surface; where headphones have been replaced by custom, isolated, open-air audioscapes; and where we don’t even need mouths to sing or hands to play our instruments. For your consideration, I present the following major innovations – each of which, sooner or later, will force us to reconsider how we think about communication.
Like many wonderful discoveries, the first example here arose from failure. While searching for a suitable sound-damping material for its helicopters, the UK Ministry of Defence instead stumbled upon a unique honeycomb structure that conducts sound with surprising efficiency. The technology has since been sold to NXT Sound; marketed as SurfaceSound, the innovative design is being crafted into folding flat-panel speakers (14 mm thick) and “speakerless” automobile interiors and mobile phones.
New Ultra-Thin Speakers by NXT
It has also been fashioned into transparent overlays for computer screens, which can be segregated into as many as six isolated sound panes. It’s only a matter of time (less than a year, by NXT’s projections) before we see integrated speakers in our greeting cards and digital photo displays, or ultra-thin clip-on speakers for retrofitting older, non-musical surfaces. One of the most exciting prospects for SurfaceSound is as a responsive natural interface for audio engineering. According to a Discovery News article, it “can be made to vibrate when touched, with individual frequencies tailored to each finger” (a benefit of its capacity to be partitioned). With the ability to place sound-conducting surfaces almost anywhere imaginable, the next challenge for NXT seems simple enough: to make “silent loudspeakers,” which can only be heard when the listener is in direct contact with the speaker surface.
This is an end that may already have been achieved (albeit through different means) by Holosonic Research Labs. Their incredibly cool Audio Spotlight technology fires a narrow beam of ultrasound that the air itself distorts in a predictable pattern, demodulating it into audible sound as it travels. The result is the sonic equivalent of a laser – an invisible ray of sound that can only be heard by someone standing directly in its path. (A technical breakdown of Audio Spotlight is available here.)
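To give a rough sense of the physics involved (a simplified far-field model in the spirit of Berktay’s classic parametric-array analysis, not Holosonic’s published design): if the ultrasonic carrier is amplitude-modulated with an audio envelope E(t), the air’s own nonlinearity demodulates the beam, and the audible pressure along it is approximately proportional to the second time derivative of the squared envelope:

\[ p_{\text{audible}}(t) \;\propto\; \frac{d^2}{dt^2}\, E(t)^2 \]

That is why the sound materializes only within the narrow ultrasonic column, and why the source signal has to be pre-processed to compensate for the distortion the demodulation introduces.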
In case you missed the impact of this technology, I’ll say it again: Audio Spotlight turns the air into a loudspeaker that can only be heard by standing inside it. Sound can be projected like a beam of light, bounced off surfaces, and manipulated in all kinds of other novel ways. The New York Times called Audio Spotlight “the most radical technological development in acoustics since the coil loudspeaker was invented in 1925” – and with good reason.
Headphone-based museum tours will soon be a thing of the past, replaced by isolated audio programs for each display. You will be able to listen to music over open air in a public library without concern. The insane cacophony of public advertisements will be forgotten in favor of more discrete “hotspots” (which savvy pedestrians will learn to systematically avoid). You’ll never register a noise complaint against your neighbor’s bass-heavy stereo system again. Performing musicians will be able to broadcast multiple sub-mixes to their audiences to compensate for micro-variations in venue acoustics – or even play several concerts at once, through which listeners can move as they dance from one end of the room to the other.
This revolutionary technology is already being adopted by an impressive array of clients, including Eastman Kodak, Hewlett-Packard, GM, Motorola, and the amusement ride developers at Walt Disney Imagineering. (A full list of current applications can be found here.)
With these innovations soon to hit the market, it won’t be long before teenagers are digging iPod earbuds out of the attic and querying their Internet implants as to what the hell those curious things are. And when they do, they’ll probably be using technology similar to Emotiv Systems’ Epoc, a new videogaming interface that replaces handheld controllers with a mind-reading headset.
Epoc “Mind-Reading” Headset
Combining century-old EEG technology with new software algorithms that analyze human brainwave patterns, the Epoc is a glorified biofeedback device, enabling its users to navigate computer interfaces with nothing more than intent. Beyond its immediate gaming applications (headsets will be on the market for $300 this Christmas), Emotiv is exploring numerous applications in robotics, education, and medicine – making it possible, for example, for quadriplegics to operate household devices on their own. I’m estimating about a year before progressive musical acts are using these or similar headsets to control electronic music production arrays – heralding the advent of a long-imagined age when artists are able to directly convey their thoughts to an audience.
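To make the “glorified biofeedback” point concrete, here is a minimal sketch of the kind of band-power thresholding such a headset might perform. The sample rate, frequency bands, threshold, and the detect_intent helper are all my own illustrative assumptions, not Emotiv’s actual algorithms.

```python
# A toy biofeedback loop: hypothetical single-channel EEG data, band-power
# features, and a simple threshold standing in for a proprietary classifier.
import numpy as np
from scipy.signal import welch

FS = 128  # assumed sample rate in Hz (consumer EEG headsets are typically 128-256 Hz)

def band_power(eeg_window, lo, hi, fs=FS):
    """Average spectral power of one EEG channel between lo and hi Hz."""
    freqs, psd = welch(eeg_window, fs=fs, nperseg=fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def detect_intent(eeg_window, threshold=2.0):
    """Crude 'intent' detector: compare beta-band power (13-30 Hz), which tends
    to rise with focused concentration, against relaxed alpha-band power (8-12 Hz)."""
    beta = band_power(eeg_window, 13, 30)
    alpha = band_power(eeg_window, 8, 12)
    return "push" if beta / alpha > threshold else "rest"

# Example: one second of simulated EEG (noise plus a strong 20 Hz component),
# which the detector reads as a deliberate "push."
t = np.arange(FS) / FS
window = 0.5 * np.sin(2 * np.pi * 20 * t) + 0.1 * np.random.randn(FS)
print(detect_intent(window))
```

Even a crude loop like this conveys why practice matters: the user has to learn to produce brainwave patterns clean and consistent enough for the software to separate from background noise.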
(A speculative recipe: combining the Epoc with the Audio Spotlight yields the potential for multi-scaped audio arrays that are activated and operated without so much as lifting a finger.)
And if that weren’t enough, it is easy to imagine how such a device – apparently already well on the road to ubiquity – might catalyze a radical development of mental acuity in our culture. Regular use of these technologies could cultivate what is currently an uncommon finesse for concentration and intent, sharpening our skills in mental focus and self-control. I can already hear future generations marveling with pity and disbelief at our current society’s limited attention spans and cognitive agency.
(For more on this, check out this profile from the Discovery Channel.)
The Epoc’s clever interpretation of brainwave semantics has its limitations, however. One significant “drawback” (if that term can even be applied to such a stunning advancement) is that it cannot read your mind with enough precision to decode speech. You’ll still have to move your mouth to talk.
Unless, that is, you’re using Ambient Corporation’s Audeo, a neckband-mounted microchip that intercepts nerve impulses on their way to the vocal cords and relays them to a computer, where they are translated into an audible computerized voice.
Ambient Corporation’s Audeo, doing its thing.
Although the device can currently recognize fewer than 200 words, Ambient is working to release an improved model by the end of the year that recognizes individual phonemes and has a functionally limitless vocabulary. Michael Callahan, Ambient’s twenty-four-year-old co-founder, recently placed the first public “voiceless phone call” at a technology conference. (You can find the video embedded in this New Scientist article on the Audeo.) In support of my hypothesis that we are fostering a generation of “techno-yogis,” Callahan explains that in order to send out clean electrical signals that the Audeo can understand, one must practice the specific, deliberate imagining of voicing each word – a technique he describes as “a level above thinking.”
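To see why phoneme-level recognition removes the vocabulary ceiling, consider this purely hypothetical sketch: once a device can emit a stream of phonemes, any word in a pronunciation lexicon can be assembled from a few dozen building blocks. The lexicon entries and the greedy decode function below are illustrative inventions, not Ambient’s software.

```python
# Toy illustration of phoneme-to-word decoding with a pronunciation lexicon.
PHONEME_LEXICON = {
    ("HH", "EH", "L", "OW"): "hello",
    ("W", "ER", "L", "D"): "world",
    # a full lexicon (e.g. CMUdict-style) covers well over 100,000 words
}

def decode(phoneme_stream):
    """Greedily match recognized phonemes against the lexicon to produce words."""
    words, buffer = [], []
    for ph in phoneme_stream:
        buffer.append(ph)
        word = PHONEME_LEXICON.get(tuple(buffer))
        if word:
            words.append(word)
            buffer = []
    return " ".join(words)

print(decode(["HH", "EH", "L", "OW", "W", "ER", "L", "D"]))  # -> "hello world"
```

A word-level recognizer, by contrast, can never say anything outside the 200-odd templates it was trained on.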
It’s an innovation whose significance extends beyond the obvious benefit of giving voice to the speech-impaired. Private telephone calls will be made in public – by people who look like they’re simply listening to you. Ventriloquism through invisible wall-speakers and audio beams will further challenge our confidence in human perception. Maybe our hyper-attentive descendants will even be able to deliver two different speeches at once. (Most of us already know how to talk without thinking; all it would require is to also talk while thinking.)
But in my opinion, fettered as I am by my monolingual peasantry, one possible application takes the cake. Linguistic software could be packed into the neck-mounted auxiliary computer, finally realizing the long-fantasized Universal Translator.
It’s not technomusical telepathy yet, but we’re getting close. Indeed, the future is singing quite a tune.
“Tamara’s Heart” by Michael Garfield, republished from Zaadz Visionary Music.