Tamra Lucid: What is Drone Cinema and what inspired you to create the Drone Cinema Film Festivals?
Kim Cascone: Drone Cinema consists of mesmerizing, slow, hypnotic cinematic tapestries woven from drones of light and sound. The drone has been a part of musical history for thousands of years—the word “drone” deserves to be reclaimed. The inspiration for the festival came from my love of 60’s and 70’s experimental cinema. There was a lot of innovation and exploration of the relationship between film and sound during that era. Experimentation was in the air, artistic boundaries were breached and expanded cinema was birthed. I felt that this sort of cinema needed to be rekindled and thought it would be interesting to see what drone-oriented sound artists would create when asked to portray the sound of a drone in cinematic form.
You’ve commented on the way that technological and media arts favor materialism. You’ve called for an approach where “the material is the carrier for the spiritual.” How do you accomplish this in your own work, and have you seen any progress in that direction?
It’s an unfortunate development in the arts that more and more creative work is offloaded onto technology. In the academic sound art community there is a lot of attention on, and ink spilled about, the “materiality of sound.” Sound art has become primarily focused on the sensory/perceptual experience of sound and the technology used to produce it. So from the get-go it’s been heavily rooted in the material plane.
I prefer to work more like a writer or painter. I’ll begin work on a piece by doing months of research into a particular subject before I begin to compose. In pre-composition mode I spend time meditating and writing in my journal. After months of this I build the basic structure of a piece, then sketch in details on music paper. I test different ideas in the studio, and short studies fall out of that process that are sometimes worthy of release. These studies are like landing lights, or lights along a dark path, for me.
To me, pre-composition is a form of world-building—it nourishes my imagination and allows me to channel musical ideas before picking up an instrument or going to a laptop. I picked up these practices in the 70’s from friends who were painters, writers and filmmakers. My music school never taught us anything about pre-composition—they just taught the mechanics and left the abstract creative stuff up to us.
How can music be a portal, as opposed to mere product?
A few years ago I picked up a wonderful book at the Rudolf Steiner Bookstore in Manhattan titled “Sound Between Matter and Spirit” by Frits Julius in which he talks about concentrating on a blue flower until it becomes “the narrow mouth of a harbor on which we sail out on a sea of color.”
So the formation of a portal or conduit must start with the artist. If they have developed what Goethe calls “organs of perception,” they are sensitized to things in their environment that others miss. And for me this state of mind is a very important part of pre-composition, i.e. using new perceptual apparatus in service of world-building.
The act of infusing an artifact with energies gathered from this other plane is the difficult part of the work. It is like creating a sigil or performing divination: the artifact stores those energies, which the percipient uses to form a conduit, and it acts much like the blue flower I mentioned. John Cage did this by throwing coins and using the I Ching to create many of his musical works, and John Coltrane channeled this other plane in his music; these two men in particular created many musical portals.
Jaron Lanier has said that the internet economy has replaced money with attention. All creatives find themselves competing with everyone else for space in the feed. What are your observations about how this has influenced music and your own creativity?
The act of directing someone’s attention is not unlike what a stage magician does, but in online media it works on more levels, e.g. from where the viewer’s eyes travel on a screen, colors and shapes used in graphics, sound effects, selecting the right symbols, words and phrases to trigger certain responses, etc.
Digital marketing took what it learned from television advertising, pumped it full of neuro-marketing science and leveraged the power of analytics and metadata to control what people click on, and consume. We only need look at Facebook to see how social media is used to elicit certain emotional responses by controlling what appears in one’s feed. The newly minted job of “influencer” is an indication of just how normalized this has become.
Culture has become a schizophrenic tapestry woven from trends and hashtags found in social media, and most music has become a form of selfie posted to social media. And because the Internet moves at the speed of light, culture is driven by the public’s constant anticipation of the new. Technology easily fulfills the public’s voracious appetite for constant newness. There is no shortage of bedroom artists ready and willing to feed this beast for their fifteen tweets of fame. I maintain a certain mindset when using social media: I filter out the pollution and try to keep a certain detachment. If you don’t protect yourself, you leave your imagination open to atrophy.
Your performances have included acoustic interference played in total darkness and low frequencies whose pulsations arise not in the acoustics of the space but in the basilar membrane itself. Can you share more information about these experiments and your motivation?
While developing my Subtle Listening workshops I researched using sound to elicit altered states of consciousness. I began experimenting with binaural beats and testing them on myself.
I found some obscure techniques for creating beat frequencies and I programmed some of them in an environment called Pure Data (a dataflow programming language) then tested them on myself during meditation sessions.
I researched other ideas about sound and consciousness and folded those into my mix of techniques. These experiments yielded some interesting experiences during meditation, but it wasn’t until I performed “Dark Stations” live that I discovered psycho-acoustic properties like the modulation of the inner ear using low-frequency beating. I created “beat stacks” that introduced more than one beat frequency at a time, then mixed that with a sheet of sinewave frequencies that I put in a rear speaker. This set up patterns of acoustic interference in the room, so that a listener who moved to another spot heard a completely different mix of beat frequencies. Some of the audience reported out-of-body experiences; one reported seeing a deceased relative, while another saw slowly evolving fractal-like mandalas with their eyes closed.
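The beating effect behind these “beat stacks” is easy to illustrate outside of Pure Data. Below is a minimal Python sketch—not Cascone’s actual patch; the function names, base frequency and offsets are invented for illustration—that sums a base sine tone with slightly detuned copies, producing audible amplitude beats at each difference frequency:

```python
import math

SR = 44100  # sample rate in Hz

def sine(freq, n, sr=SR):
    """Generate n samples of a sine wave at freq Hz."""
    return [math.sin(2 * math.pi * freq * i / sr) for i in range(n)]

def beat_stack(base, offsets, n, sr=SR):
    """Average a base tone with detuned copies; each offset (in Hz)
    yields an audible amplitude beat at that difference frequency."""
    layers = [sine(base, n, sr)] + [sine(base + d, n, sr) for d in offsets]
    return [sum(s) / len(layers) for s in zip(*layers)]

# One second of a 100 Hz tone plus copies detuned by 4 Hz and 7 Hz:
# the listener hears slow loudness throbs at the difference frequencies.
samples = beat_stack(100.0, [4.0, 7.0], SR)
```

Stacking more offsets, or routing extra sine layers to a rear speaker as described above, multiplies the interference patterns that make each listening position in the room sound different.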
You’ve differentiated between divination and intention as sources of inspiration. How does this relate to the inspiration you draw from the work of Rudolf Steiner and the occult/metaphysical in general?
It’s difficult to put into words, but the mechanical process of artistic work is fueled with intention, i.e. a conscious waking-world decision to create something. As I immerse myself in reading and meditation, an inner world starts to form. Once this inner world starts to form it guides your waking-world decisions, but you need to develop the correct organs of perception to sense this. Then you need to go back and forth between the divination of creating the inner world and the intention of manifesting in the outer one.
In my Subtle Listening workshops, I conduct a guided meditation that brings the meditator to an inner world then have them bring back whatever sounds they heard there and use them in a piece of sound art or music. This was adapted from Jung’s work in active imagination which I’m surprised is not used more in the establishment of an artist’s creative process.
You learned to meditate in a Buddhist Center in the 70s. What were the 70s like for you, especially in relation to music?
The early 70’s were the reverb tail of the 60’s, so there was a continued interest in Eastern spiritual practices like Buddhism, macrobiotics and meditation. There was an earnest integration of spiritual experiences in music, as evidenced by the music of John & Alice Coltrane, Pharoah Sanders, Sun Ra and Miles Davis, as well as John Cage. The 70’s quickly devolved into the gaudy, plastic, hedonistic culture of cocaine and disco music that was countered with the rise of punk rock. So when disco and punk became the dominant strains in culture, I gravitated towards listening to Can, Ash Ra Tempel, Popol Vuh and experimental composers like Morton Feldman, Earle Brown, Iannis Xenakis and Pauline Oliveros.
I was deeply involved in electronic music and this became the focus of my studies after I left Berklee College of Music in 1976. I studied synthesis privately with a composer at the New School in Manhattan, but most of the compositional techniques I learned on my own. The 70’s was a period of intense study and listening to as much experimental music as I could get my hands on.
What was it like working on Twin Peaks and Wild at Heart as an assistant music editor?
It was very educational. As I worked more and more with cinematic concepts in my music, I became interested in the possibility of working in film sound in the late 80’s. Through contacts at the studio I recorded at, I landed a job as a post-production intern at the Saul Zaentz Film Center in Berkeley, where I eventually got to work on the pilot for “Twin Peaks.”
This was in 1989, just before sound editing went digital. Editing and compositing sound was still done using reels of 35mm magnetic stock and mixed on huge machines called dubbers—very medieval technology compared to Pro Tools, which didn’t really exist at that time. There was software, I think it was called “Cue Sheet,” which would sync to video via SMPTE and trigger sounds on an E-mu sampler.
I learned a lot about sound design by working in film sound and watching sound editors work. I was inspired by how they selected or recorded sounds, and how they layered sounds to create certain effects—it was like going to sound design grad school.
After a while I worked as an assistant on the foley stage at Skywalker Ranch in Marin, then was hired on another Lynch film called “Wild At Heart.” I got to watch Lynch do sound design in the mix theater, and watching him work was an education in itself. He worked from his unconscious: he could hear in his imagination how certain sounds worked together, and he knew how pitch-shifting a sound and mixing it with something else would create a certain sort of feeling. It was like sitting at the feet of a guru in many ways.
Why did you choose to release four albums in the mid 90s under the pseudonym Heavenly Music Corporation instead of your own name?
I had been releasing dark ambient-industrial work as PGR before that, but I wanted to differentiate that project from the newer chill-room ambient material I had been creating.
The aesthetic of Silent, as well as my own, was heavily influenced by the San Francisco rave/chill-room scene. Our employees discovered new music at raves, and DJ’s would come by the office to buy imports—so there was a lot of cross-pollination happening. Many ambient-industrial artists created beat-oriented ambient tracks under pseudonyms—there was something in the air, a kind of permission to explore a less dystopian, more lysergic aesthetic. I borrowed the name of a Fripp & Eno piece (“The Heavenly Music Corporation,” from their “No Pussyfooting” album) that conveyed the mood of the San Francisco chill-room scene at the time. It wasn’t until 1998 or so that I started using my own name on the more experimental computer music pieces I was working on.
Why did you sell Silent Records and Pulsoniq Distribution in 1996 to take a job as a sound designer and composer for Thomas Dolby’s company Headspace?
It’s a long story, but essentially the indie music industry was changing very rapidly, digital was just on the horizon, the web was brand new, downloadable music was still in its embryonic form, and we suffered a distribution snafu here in the US that pretty much left us broke. Our choice was either to face bankruptcy or to sell the company. An employee of ours expressed an interest in taking it over so we sold him the company and Kathleen and I reentered civilian life.
I found work as a sound editor for Thomas Dolby’s company Headspace that was doing the sound for a video game called “Obsidian.” After the game was completed Thomas started his web audio company Beatnik and kept me on as a sound editor, where I worked on a sound bank used to sonify web pages.
I was there for two years and saw the tidal wave called mp3 coming, so I wasn’t all that confident about the future of sonified web pages, especially given the fact that most people still worked on desktop computers in office environments where sonified web pages would be intrusive and distracting.
After Headspace, you became Director of Content for Staccato Systems, a spin-off from CCRMA, Stanford University. You co-invented an algorithm for realistic audio atmospheres and backgrounds for video games called Event Modeling. What was it like in those early days of the new order of video games?
While at Beatnik, I learned a music programming language called Csound and created my album “blueCube( )” with it. I became very interested in computer music and explored other software tools and digital synthesis techniques that were emerging at that time. After Csound I learned Max/MSP and started building my own sound tools.
While I was at Beatnik, I created these little cinematic sound dioramas using the Beatnik sound engine. I can’t remember all the ones I created, but one of them was a winter scene with the crunching sounds of walking in snow, sleigh bells and horse whinny sounds.
I took the idea of crafting cinematic sound scenes with me to Staccato Systems. But rather than work with a pre-determined sequence of sounds that played back the same way each time, Staccato’s software tools allowed for different flavors of randomness to be used in how and which sounds were triggered each time a scene was visited. This was useful in creating more realistic game sound.
As an example, a car crash is made up of dozens of short sounds: different pieces of metal crunching, glass shattering at different rates, metal and glass falling and bouncing on asphalt, tires skidding, etc. The selection of sounds and their qualities would randomly change with each car crash, making it more realistic since no two impacts would ever sound exactly the same. But I was never into video games, so I had little interest in the game-play aspect of the technology.
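The randomized-layering idea can be sketched in a few lines of Python. This is a hypothetical illustration, not Staccato’s actual Event Modeling code—the layer names, probabilities and parameter ranges are all invented—showing how each triggered event draws a random variation of each component sound and randomizes its pitch, onset and gain:

```python
import random

# Hypothetical layer names standing in for actual sample files;
# the value is the number of recorded variations per layer.
CRASH_LAYERS = {
    "metal_crunch": 3,
    "glass_shatter": 4,
    "debris_bounce": 5,
    "tire_skid": 2,
}

def trigger_crash(rng=random):
    """Assemble one crash event: pick a variation of each layer and
    randomize its playback parameters, so no two crashes match."""
    event = []
    for layer, variations in CRASH_LAYERS.items():
        if rng.random() < 0.8:  # a layer may occasionally drop out entirely
            event.append({
                "sample": f"{layer}_{rng.randrange(variations)}",
                "pitch": rng.uniform(0.9, 1.1),    # slight detune per impact
                "delay_ms": rng.uniform(0, 120),   # stagger the onsets
                "gain": rng.uniform(0.6, 1.0),
            })
    return event
```

Because every call re-rolls the variation choices and playback parameters, no two triggered events produce the same mix, which is exactly the property that made this approach sound more realistic than fixed-sequence playback.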
What inspired you to return to music (and being an indie music label) in 1999?
Although less visible, I was releasing music before 1999, mostly on European labels. At that time there was a surge of innovation in digital music, due to laptops becoming fast enough to compute audio in real time and online access to free academic computer music software. There was a beta-testing atmosphere when using academic software, since much of it was a work in progress and rife with bugs; working with this software often uncovered bugs that yielded happy sonic accidents.
I found “software bug-hunting as an artistic tool” very inspiring, and having had experience in testing software, I began to experiment with this in my own work and wrote a paper for Computer Music Journal titled “The Aesthetics of Failure” that outlined how bugs, now called glitches, were being used to create experimental music. Many of my computer music pieces were released on my vanity label anechoic.
In 2016 you reinvented your label Silent Records. What’s different about running a label now versus when you first started Silent Records?
The mechanics of getting music from creator to listener have changed completely since I started working in the music industry in the 80’s. Atoms have become bits, and the Internet has become a universal container into which to pour digital information.
It was telling that when I rebooted Silent, people asked me to explain what the role of a record label was in 2016, when music just automagically appears on YouTube for free. Other than the obvious modes of creation, distribution, and consumption of music, the biggest difference between the 80’s and now is the role of social media. Accessing your fanbase is much easier today than it was then. Analytics is a big help and gives us information that we didn’t have then, unless you had tons of cash to pay a marketing research company. But another difference is the extreme difficulty of making a living from one’s creative work. Gone are the days of advances on royalties and of a record label developing an artist’s work. The paltry sums paid via streaming are barely enough to run a business on, so artists and labels both suffer.
Any advice for young music and film makers?
Well, a couple of things come to mind. If you plan to pursue a career as an artist, avoid academia unless you want to teach. Too many young people are led to believe that the only way to become a “serious artist” is to get a degree in your medium. The arts have become too institutionalized, and academia does little to nourish the artistic imagination. I suggest that young artists study on their own or with a teacher one-on-one. I’ve found that the musicians and composers whose work I respect the most studied privately and were life-long autodidacts. This is not to suggest one should forgo a higher education, just that the academic environment is not the best place to nourish one’s creative imagination these days. Frank Zappa once said, “If you want a real education, go to the library and educate yourself.”
Which segues nicely to my second suggestion: read everything you can get your hands on. Research is the soil your work grows in. To quote another great artist, Werner Herzog: “Read, read, read, read, read, read, read, read, read, read, read, read, read…if you don’t read, you will never be a filmmaker.” I’d add “any type of artist” to that claim.