“Machine Listening” in music: A beginner’s guide

Amy Beeston

Dr Amy Beeston invites you to learn about machine listening at either the University of Leeds or the University of Huddersfield.

This workshop introduces the idea of machine listening in music. It demonstrates how computers hear and process sound, in comparison with your own ears, to help you make the best use of this information in your own music/sound work.

Duration: 2 hours

Number of participants: 8-10
Age of participants: 16+
Level: Beginner – you don’t need to be an experienced producer, but some experience of making music on a computer is assumed.

Registration

University of Leeds: Monday April 4th, 6-8pm – please register here

University of Huddersfield: Tuesday April 5th, 2-4pm – please register here

[Image: Amplitude following (the excerpt is from a prepared piano piece)]

Workshop details

By understanding how your own ears and auditory system work, and how a microphone picks up sound and lets the machine ‘hear’ aspects of its acoustic surroundings, you can begin to get the best out of your technology.

We often use acoustic instruments alongside digital technology in our music making, for example by playing an instrument or singing through a microphone into a computer running digital audio software such as Cubase or Logic Pro. However, the techniques these musical applications employ to ‘listen’ to sound are far less sophisticated than human listening skills.
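
To make that gap concrete, here is a minimal illustration in Python with NumPy (used purely for this post – the workshop itself uses the tools listed below) of what the machine actually receives from the microphone:

```python
import numpy as np

# A computer 'hears' sound not as notes or words but as a long sequence
# of numbers: the air pressure at the microphone, measured (sampled)
# many thousands of times per second.
sample_rate = 44100                                   # CD quality: 44,100 samples/s
t = np.arange(int(0.01 * sample_rate)) / sample_rate  # 10 ms of time points

# A 440 Hz sine wave standing in for a sung or played A4.
signal = 0.5 * np.sin(2 * np.pi * 440.0 * t)

print(signal[:8])   # the first eight samples: raw numbers, no 'music' yet
```

Everything a machine listening system knows about melody, loudness or timbre has to be computed from streams of numbers like this.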


Hands on

You will attend several short presentations and coding/software demonstrations developed to guide you through the process of capturing the musically relevant information in recorded sound.

This will include a consideration of:

  1. What goes on in the human auditory system so that we unconsciously maximise our chances of hearing well, even in difficult environments, and
  2. What attempts have been made to give some of this amazing functionality to machine listening systems (one toy illustration follows this list).
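
As one deliberately crude illustration of point 2 – a toy sketch for this post, not a method taught in the workshop – the auditory system continually adapts to the ongoing level of its surroundings, so a steady background recedes while changes stand out. The code below mimics this by judging each frame of audio against a slowly updated estimate of the background:

```python
import numpy as np

def adaptive_level(frames, alpha=0.99):
    """Toy sketch of auditory-style adaptation (illustrative only):
    each frame's energy is measured relative to a slowly updated
    running estimate of the background, so a constant hum fades
    away while sudden changes stand out."""
    background = 1e-6                # running estimate of ambient energy
    relative = []
    for frame in frames:
        energy = np.mean(frame ** 2)
        background = alpha * background + (1 - alpha) * energy
        relative.append(energy / background)
    return np.array(relative)

# Example: fifty quiet frames, then a sudden loud one.
frames = [0.01 * np.random.randn(1024) for _ in range(50)]
frames.append(np.random.randn(1024))   # loud onset
print(adaptive_level(frames)[-3:])     # final value jumps far above the rest
```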

Using software such as Sonic Visualiser, Audacity, Pure Data (all available free) and/or Max, you will explore sound analysis techniques for extracting information that is useful when making music.

In particular, this workshop will explore three related techniques: amplitude following (keeping track of changes in loudness), pitch tracking (recognising the notes in a melody), and timbral description (capturing features of a sound such as its ‘brightness’ or ‘noisiness’). Time permitting, we will also consider how these techniques can be used as sonic controllers within the music/sound systems that you already use or are familiar with.
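
For a preview of what these three techniques compute, here is a minimal sketch in Python with NumPy (the workshop itself works in the tools above; the function names, frame sizes and frequency ranges here are illustrative assumptions, not a fixed recipe):

```python
import numpy as np

def rms_envelope(x, frame=1024, hop=512):
    """Amplitude following: one loudness (RMS) value per frame of audio."""
    return np.array([np.sqrt(np.mean(x[i:i + frame] ** 2))
                     for i in range(0, len(x) - frame, hop)])

def pitch_autocorr(x, sr, fmin=60.0, fmax=1000.0):
    """Pitch tracking: estimate a frame's fundamental frequency from the
    strongest autocorrelation peak within a plausible pitch range."""
    ac = np.correlate(x, x, mode='full')[len(x) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    return sr / (lo + np.argmax(ac[lo:hi]))

def spectral_centroid(x, sr):
    """Timbre ('brightness'): the centre of mass of the magnitude
    spectrum in Hz; brighter sounds give higher values."""
    mags = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    return np.sum(freqs * mags) / np.sum(mags)

# Quick check on a synthetic, decaying A4 (440 Hz) note:
sr = 44100
t = np.arange(sr) / sr
note = np.sin(2 * np.pi * 440.0 * t) * np.linspace(1.0, 0.0, sr)
print(rms_envelope(note)[:3])              # envelope decays over time
print(pitch_autocorr(note[:2048], sr))     # roughly 440 Hz
print(spectral_centroid(note[:2048], sr))  # near 440 Hz for a pure tone
```

Mapped onto synthesis or effect parameters – an envelope driving a filter cutoff, say – numbers like these become exactly the kind of sonic controllers mentioned above.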

[Photo by Flickr user Chad Kainz, licensed under Creative Commons CC BY 2.0]

Workshop leader: Dr Amy Beeston 

http://dcs.shef.ac.uk/~amyb

Born into a musical family in Edinburgh, I spent much of my childhood listening to and playing various musical instruments before studying Music Technology at the University of Edinburgh (2001). I began building interactive sound installations at around this time, and subsequently focused on sonic control for interactive audio installations during my masters degree in Sonology (Royal Conservatory, The Hague, 2005). More recently, my PhD work at the University of Sheffield (2015) helped me understand some of the challenges I had faced when moving sound installations between practice studios and performance spaces. It allowed me first to examine how human listeners compensate for reflected sound in everyday listening environments and, second, to develop machine listeners that exploit principles of the human auditory system in order to deal with the reverberation present in real room recordings.

I am now a researcher in the Speech and Hearing Research Group, Department of Computer Science, University of Sheffield. My research is typically interdisciplinary and collaborative, and primarily involves developing bio-inspired digital sound processing methods to derive control data from specific parts of audio signals. I have worked on two projects developing computer-assisted language learning software: first a pronunciation training tool for Dutch school children learning English, and then software promoting strategies for cochlear-implanted listeners to handle overlapping talk in conversation. In my current role I am developing software for acoustic detection and assessment of snore sounds recorded overnight in the home via users’ smartphones. And when time and opportunity coincide, I enjoy applying these human- and machine-listening skills in musical applications too!

[Image: Pitch tracking – a synthetic chirp signal analysed in Sonic Visualiser]
