Sound and Music Computing Network

The soundscapes of Barcelona

Title: Sound and Music Computing Summer School
Dates: July 17-20, 2010.
Place: Universitat Pompeu Fabra, Barcelona, Spain
Alternate web / Blog:
Contact: Please fill in this form.

The SMC Conference is the forum for international exchanges around the core interdisciplinary topics of Sound and Music Computing. Prior to the Conference there will be the SMC Summer School.

The goal of this Summer School is to give young researchers interested in the field an opportunity to learn about some of its core interdisciplinary topics and to share their own experiences with other young researchers, through the study of the soundscapes of Barcelona. For that, we will use Freesound, a huge collaborative database of sounds released under the Creative Commons Sampling Plus license.

The creation and study of the soundscape of Barcelona will be divided into different subtopics, covering all the aesthetic and technical aspects required for soundscape analysis and creation. Accordingly, the program of the summer school is divided into lectures and hands-on practical sessions.

The School caters to different student backgrounds and interests. Whether your background is in Electrical Engineering, Computer Science, Music, Art, Sound Recording, Design, or another field, there is something new for you to learn at this Summer School.


Academic program


The four-day program includes lectures and hands-on practical sessions. The three main lectures cover the following topics:
  • Soundscape composition: Documentation, listening, and creation using computers: Acoustic ecology and Soundscape composition, by Barry Truax.
  • Sound and music content processing: Theory and applications of sound and music description, by Fabien Gouyon.
  • Introduction to recording techniques using handheld recorders: Technical concepts for recording and mixing audio in optimal conditions, by Enric Guaus.

The hands-on practical sessions cover the following topics:

  • Composition of Realistic and Interactive Soundscape: Analyse, record and annotate a target soundscape of Barcelona, in order to re-compose it, by Mattia Schirosa.
  • Augmented Soundscapes: Creation of augmented soundscapes, starting from recordings of real soundscapes, using realtime machine listening and signal processing techniques, by Stefan Kersten.
  • SMC Tales: Develop a mobile phone application for collaborative storytelling, by Vincent Akkermans.
  • Tangible interface for graph-based music representation: Building a tangible interface for the music representation used in radio freesound, by Gerard Roma.


| Time          | Sat, 17th                | Sun, 18th                                   | Mon, 19th                                   | Tue, 20th                                   | Wed, 21st             |
|---------------|--------------------------|---------------------------------------------|---------------------------------------------|---------------------------------------------|-----------------------|
| 10:00 - 11:30 |                          | Lecture: Soundscape composition             | Lecture: Soundscape composition             | Lecture: Soundscape composition             |                       |
| 11:30 - 12:00 |                          | Coffee break                                | Coffee break                                | Coffee break                                |                       |
| 12:00 - 13:30 |                          | Lecture: Sound and music content processing | Lecture: Sound and music content processing | Lecture: Sound and music content processing |                       |
| 13:30 - 15:00 |                          | Lunch                                       | Lunch                                       | Lunch                                       |                       |
| 15:00 - 16:30 | Course presentation      | Sound Walk                                  | Projects                                    | Projects                                    |                       |
| 16:30 - 18:00 | Lecture: Sound Recording | Sound Walk                                  | Projects                                    | Projects                                    |                       |
| 18:00 - 19:30 | Lecture: Sound Recording | Projects                                    | Projects                                    | Projects                                    | Summer School Concert |

More detailed information can be found in this Google Calendar.

Course material

  • Lectures:
    • Soundscape Composition: Documentation, listening, and creation using computers: The World Soundscape Project (WSP) was established as an educational and research group at Simon Fraser University during the early 1970s, and documented soundscapes as part of an over-arching concern to draw attention to the importance of the sonic environment. R. Murray Schafer's definitive soundscape text, The Tuning of the World, and Barry Truax's reference work Handbook for Acoustic Ecology were outcomes of these early years, followed by Truax's publication Acoustic Communication that deals with all aspects of sound and the impact of technology. In the field of sonic design, the computer increasingly provides tools for dealing with both the basic material of sound composition, such as granular synthesis, convolution and digital signal processing, and the creation of multi-channel soundscape compositions using recorded materials. Prof. Truax's presentations will outline the background of acoustic ecology as an area of both scientific research and artistic endeavour, leading into a discussion of soundscape composition as an art form per se, ranging from the pioneering work of the WSP to contemporary compositional approaches to creative presentations of soundscape materials.
    • Sound and music content processing: The amount and availability of different media (professionally produced or user-generated content) is continuously increasing. This is profoundly changing the ways we interact with sound and music today and how we expect to do it tomorrow. Music Information Retrieval (MIR) is a fast-paced multidisciplinary field of research where different long-tradition disciplines (such as Signal Processing, Information Science, Computer Science, and Musicology) meet to empower these changes. This lecture will provide an overview of MIR research, looking into ways to extract semantic information from a number of media (from specific audio signals to contextual annotations). A special focus will be put on surveying state-of-the-art algorithms and available systems for the automatic description of music audio signals from a musical perspective (focusing on musically meaningful dimensions such as rhythm, harmony, and timbre). We will explore diverse uses of this information extraction step, such as music search and recommendation, music signal transformations, and real-time music performance.
    • Sound recording: Not all handheld recorders perform the same, and not all audio formats sound equal. This lecture will provide the basic concepts of audio recording and microphone techniques needed to make high-quality recordings, taking into account aspects such as directivity, stereo techniques and portability.
  • Projects:
    • Composition of Realistic and Interactive Soundscape: The project focuses on recording and annotation techniques for composing realistic virtual soundscapes using an MTG system for interactive soundscape composition. There will be four phases: soundscape analysis and recording, annotation and database creation, virtual soundscape composition, and soundscape performance. The students will analyze a target soundscape on site, define the relevant sound objects, and set up and perform the recordings using shotgun microphones. In a second phase, they will annotate interesting segments with Sonic Visualiser, creating and preparing the soundscape database. Finally, they will create the virtual SoundConcepts using the MTG system for soundscape generation and composition, written in the SuperCollider language. In the final performance, the students will play the soundscape by moving a listener around the virtual space and controlling the soundscape parameters defined during annotation. The environment recordings will be made in acoustically interesting environments, such as the Cathedral, the Boqueria or St Antoni marketplaces, the Barceloneta beach, the kitchen of the Hostelería Hofmann school, or a factory. Note that when we speak of an acoustically interesting environment we mean both the sonic properties of the acoustic space (the Cathedral) and the scenographic properties of its actors: the presence of interesting sources producing notable activity of aesthetically interesting sound events (the marketplace, the factory, a huge kitchen).
    • Augmented Soundscapes: The capabilities of portable devices such as current-generation mobile phones and media players are still severely limited in terms of computing power and storage space, such that only a subset of today's analysis and synthesis algorithms can be run in realtime. The goal of this workshop is to develop techniques and applications around the concept of augmented soundscapes, according to the following assumption: what if we had the computing power of current desktop devices available now in a mobile package? Without the limitations of today's mobile technology, how would we design augmented reality applications? In this workshop we will create augmented soundscapes, starting from recordings of real soundscapes made by the participants during the first day of the workshop. Using realtime machine listening techniques such as onset detection, event extraction and segmentation, and tempo tracking, we will extract prominent features from real soundscapes. These features will then be used to process and recompose the audio material with signal processing techniques such as granular resynthesis, concatenative synthesis, audio mosaicing and spectral modification. The applications developed by the participants can include, but are not limited to, electroacoustic compositions, musically augmented soundscapes, auditory displays and data sonification applications. Each small group will develop one particular augmented soundscape, with the aim of presenting the results in a final concert or interactive performance.
    • SMC Tales: The goal of the 'SMC Tales' workshop is to develop a mobile phone application for collaborative storytelling. The phone will be used during the conference to collect a story that the visitors create together. The story is formed by adding one recorded sentence at a time. Before the story can be extended, the storyteller is presented with the last five recordings in the story, and can then choose to advance the story in whatever way they want. After recording a sentence, the storyteller can transform their voice into that of a robot, a kid, an old woman, or other characters. When they are satisfied with the result, the recording, together with a picture of the storyteller, GPS data, and possibly subjective annotations, is uploaded to a server. The resulting story can then be listened to on the 'SMC Tales' website.
    • Tangible interface for graph-based music representation: The purpose of this workshop is to build a tangible interface for the music representation used in radio freesound. In radio freesound, a music composition is represented by a graph where nodes represent sounds and edges represent transitions between sounds. This representation is aimed at collaborative composition based on a shared repository of sounds (freesound). Collaborative work, in turn, is one of the main applications of Tangible User Interfaces (TUIs). By representing sounds of the database as tangible objects, users will be able to discuss, manipulate and exchange compositions without the restrictions of a single-user computer interface. An example of this approach is the tangible sequencer. Before the workshop, participants will have recorded, edited, labelled and uploaded their sounds to freesound. These sounds will be used to develop and test the interface.
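As a taste of the machine listening techniques mentioned for the Augmented Soundscapes workshop, the sketch below shows a deliberately simplified, offline, energy-based onset detector in Python with NumPy. It is not the workshop's actual realtime implementation; the frame size, hop size, and threshold are arbitrary choices for illustration, and the input is a synthetic signal standing in for a field recording.

```python
import numpy as np

def frame_energy(signal, frame_size=512, hop=256):
    """RMS energy of each analysis frame."""
    n_frames = 1 + (len(signal) - frame_size) // hop
    return np.array([
        np.sqrt(np.mean(signal[i * hop : i * hop + frame_size] ** 2))
        for i in range(n_frames)
    ])

def detect_onsets(signal, threshold=0.1, frame_size=512, hop=256):
    """Mark frames whose energy rises above the previous frame's by `threshold`."""
    energy = frame_energy(signal, frame_size, hop)
    rise = np.diff(energy)
    return np.flatnonzero(rise > threshold) + 1  # indices of rising frames

# Synthetic test signal: silence, then a burst of noise standing in for a sound event.
sr = 8000
sig = np.zeros(sr)
sig[4000:6000] = np.random.default_rng(0).normal(0.0, 0.5, 2000)

onsets = detect_onsets(sig)
print(onsets)  # frame indices where an energy rise was detected
```

A realtime version would apply the same rise-above-threshold idea frame by frame as audio arrives, and production systems typically use spectral flux rather than plain energy; this offline form just makes the idea easy to inspect.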
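To make the graph representation behind the tangible-interface workshop concrete, here is a minimal Python sketch of a composition graph in which nodes are sounds and directed edges are allowed transitions. The class name, the file names, and the deterministic transition choice are illustrative assumptions, not radio freesound's actual implementation.

```python
from collections import defaultdict

class CompositionGraph:
    """Sketch of a graph-based composition: nodes are sound identifiers,
    directed edges are allowed transitions between sounds."""

    def __init__(self):
        self.transitions = defaultdict(set)

    def add_transition(self, from_sound, to_sound):
        """Add a directed edge: `to_sound` may follow `from_sound`."""
        self.transitions[from_sound].add(to_sound)

    def walk(self, start, max_steps):
        """Follow transitions from `start`, yielding one possible playback order."""
        path = [start]
        current = start
        for _ in range(max_steps):
            options = sorted(self.transitions[current])
            if not options:
                break  # no outgoing transitions: the composition ends here
            current = options[0]  # deterministic pick; a performer could choose freely
            path.append(current)
        return path

g = CompositionGraph()
g.add_transition("rain.wav", "thunder.wav")   # hypothetical freesound samples
g.add_transition("thunder.wav", "wind.wav")
print(g.walk("rain.wav", 5))  # ['rain.wav', 'thunder.wav', 'wind.wav']
```

In a tangible version of this idea, each node would be a physical object on the table and placing two objects near each other would add or remove a transition edge; the playback walk is what the collaborative composition ultimately sounds like.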




The faculty for the lecture sessions is:
  • Barry Truax is a Professor in both the School of Communication and the School for the Contemporary Arts at Simon Fraser University, Canada, where he has taught courses in acoustic communication and electroacoustic music. As a composer, he is best known for his work with the PODX computer music system, which he has used for tape solo works and for pieces that combine tape with live performers or computer graphics. His music has been released on 8 solo CDs, including the double CD of his opera Powers of Two and, most recently, Spirit Journies, a set of virtual soundscapes. In 1991 his work Riverrun was awarded the Magisterium at the International Competition of Electroacoustic Music in Bourges, France, a category open only to electroacoustic composers with 20 or more years of experience.
  • Fabien Gouyon is Invited Assistant Professor in the Faculty of Engineering of the University of Porto, Portugal, and a senior research scientist at the Telecommunications and Multimedia Unit of INESC Porto, where he co-leads the Sound and Music Computing research group. His main research and teaching activities are in Music Information Retrieval and Music Pattern Recognition. He has published over 50 papers in peer-reviewed international conferences and journals, published a book on computational rhythm description, gave the first tutorial on the topic at the International Conference on Music Information Retrieval in 2006, and participated in the writing of the European Roadmap for Sound and Music Computing, published in 2007.
  • Enric Guaus is a researcher in sound and music computing at the Music Technology Group (MTG), Universitat Pompeu Fabra (UPF), and at the Artificial Intelligence Research Institute (IIIA), Spanish National Research Council (CSIC). He obtained a PhD in Computer Science and Digital Communications in 2009 with a dissertation on automatic music genre classification. His research interests cover music information retrieval and human interfaces for musical instruments. He is an assistant professor in audio and music processing at the Universitat Pompeu Fabra (UPF) and a lecturer in acoustics, electronics and computer science at the Escola Superior de Música de Catalunya (ESMUC).

Tutors for the hands-on practical sessions

The tutors for the practical sessions are:
  • Vincent Akkermans is a system architect at the Music Technology Group (MTG), Universitat Pompeu Fabra (UPF). He obtained his master's degree in Sound and Music Technology at the Utrecht School of the Arts in 2008, and has held internships at the Patchingzone, STEIM, the Netherlands Architecture Institute, and the MTG. Currently, he is working at the MTG on integrating recent Web 2.0 technologies, advanced on-line tools for music creation, and large sound and music repositories.
  • Stefan Kersten is a researcher and PhD candidate at the Music Technology Group (MTG), Universitat Pompeu Fabra (UPF). His research focuses on sound texture modeling for analysis and synthesis, and on automated soundscape generation techniques. In his work with the DissoNoiSex collective, he explores the fringes of human interaction in interactive sound and video installations. He is an expert SuperCollider user and has taught signal processing techniques in various workshops around the world.
  • Gerard Roma got involved in programming computers while searching for new sounds and musical tools. He is currently a researcher and PhD candidate at the MTG-UPF. His work focuses on computational models and techniques for collaborative music creation.
  • Mattia Schirosa is an interaction sound design researcher at the Music Technology Group (MTG), Universitat Pompeu Fabra (UPF). He obtained a master's degree in Cinema and Multimedia Engineering in Turin in 2009. He has worked on several augmented reality productions for theatre at the Music Technology research Lab, and on various media productions. His research interests focus on soundscape exploration and composition, and on Acoustic Ecology research. He has also worked on the development of a software application written in SuperCollider.


Important dates

  • Deadline for applications: Friday 30 April 2010
  • Notification of acceptance: Monday 17 May 2010
  • Deadline for Student's project submission: Friday 2 July 2010

Registration forms

Applications must include the following documents in PDF format:
  • Curriculum vitae (max. 1 page)
  • Proof of university enrollment.
  • Short description of the student research interest and motivation to participate (max. 2 pages)
The three PDF files must be included in a single file. The application must be sent using this form.


There is a student registration fee of 150 euros for the Summer School. The fee covers coffee breaks and course material, and also grants access to the SMC Tutorials for those students who also attend the SMC Conference. It does NOT cover the cost of meals.

Travelling and Accommodation


The SMC Conference and the Summer School take place at the Communication Campus of the Universitat Pompeu Fabra in Barcelona.


More information at the SMC Conference - Location web page.

How to arrive

To reach the Communication-Poblenou campus, we recommend public transport:

  • Subway: L1 - Glòries
  • Bus: 7, 92, 192, N7
  • Tram: T4 - Ca l'Aranyó

More information at the SMC Conference - How to arrive web page.


There is a large choice of hotels, residences and apartments near the campus. Please see the SMC Conference - Accommodation web page.