John M. Chowning was born in Salem, New Jersey. Following military service and studies at Wittenberg University, he studied composition in Paris with Nadia Boulanger. In 1964, with the help of Max Mathews of Bell Telephone Laboratories and David Poole of Stanford University, he set up a computer music program using the computer system of Stanford’s Artificial Intelligence Laboratory. That same year he began the research that led to the first generalized sound-localization algorithm, implemented in a quadraphonic format in 1966. He received his doctorate in composition from Stanford University in 1966, where he studied with Leland Smith. Chowning discovered the frequency modulation (FM) synthesis algorithm in 1967. This breakthrough in the synthesis of timbres provided a very simple yet elegant way of creating and controlling time-varying spectra. Inspired by the acoustic and perceptual research of Jean-Claude Risset, over the next six years he worked to turn this discovery into a system of musical importance, using it extensively in his compositions. In 1973 Stanford University licensed the FM synthesis patent to Yamaha in Japan, leading to the most successful synthesis engine in the history of electronic musical instruments.
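The time-varying spectra mentioned above come from modulating a carrier sine wave's phase with a second oscillator, then varying the modulation index over time. The sketch below is only illustrative: the sample rate, carrier/modulator ratio, and exponential index decay are assumptions chosen for clarity, not values taken from Chowning's work.

```python
import math

def fm_sample(t, fc, fm, index, amp=1.0):
    """One sample of simple FM: amp * sin(2*pi*fc*t + index*sin(2*pi*fm*t))."""
    return amp * math.sin(2 * math.pi * fc * t + index * math.sin(2 * math.pi * fm * t))

def fm_tone(duration=1.0, sr=8000, fc=440.0, ratio=1.0, index_peak=5.0):
    """Render an FM tone whose modulation index decays over time, so the
    spectrum is bright at the attack and simpler toward the tail.
    A simple integer fc:fm ratio yields a harmonic spectrum."""
    fm = fc * ratio
    n = int(duration * sr)
    samples = []
    for i in range(n):
        t = i / sr
        # Time-varying index -> time-varying spectrum (decay rate is arbitrary here).
        index = index_peak * math.exp(-3.0 * t)
        samples.append(fm_sample(t, fc, fm, index))
    return samples

tone = fm_tone()
```

A single pair of oscillators and one envelope on the index thus control an entire evolving spectrum, which is what made the technique so economical on the machines of the time.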
A lecture demonstration showing how the capacity of computer systems 50 years ago limited composers and researchers to only one of the sound-generating and sound-processing tools available today: synthesis. But we learned much about the perception of sound as we wrapped our aural skills around the technology and discovered how to create music from fundamental units. I will demonstrate how my earliest work in spatialization led to the discovery of FM synthesis in 1967, and how I then used FM to synthesize the singing voice and in the music that I composed, based on what we learned about perception, from the inside out.