Sound and music computing

"Rhapsodie démente" by François Verret

IRCAM - Mon, 03/09/2015 - 00:00

First opus of "Chantier 2014-2018". On tour in France from March 10 to 27.

"Aliados" (Allies) by Sebastian Rivas

IRCAM - Mon, 03/09/2015 - 00:00

The Chilean dictatorship, Thatcherism, and the Falklands War form the backdrop of this real-time opera. March 13-18, Nancy

Max - Getting Started with Musical Programming

IRCAM - Mon, 03/09/2015 - 00:00

From simple patches to a patch intended for the concert stage. March 23-28

PHENICX team at Singularity University Summit

MTG (Universitat Pompeu Fabra) - Sun, 03/08/2015 - 11:46

Singularity University, the most innovative and forward-looking institution, has chosen to host their yearly Summit.



Seminar by Eugenio Tacchini on Music Recommender Systems

MTG (Universitat Pompeu Fabra) - Sat, 03/07/2015 - 09:17
12 Mar 2015

Eugenio Tacchini, from Università Cattolica di Piacenza (Italy), gives a research seminar entitled "State of the art of Music Recommender Systems and open challenges" on Thursday, March 12.


Joseph Anderson: The Epiphanie Sequence OR A Few Thoughts on the Reflexive Moment in Acousmatic Music

CCRMA-Stanford University - Sat, 03/07/2015 - 00:40
Date: Thu, 03/12/2015 - 5:15pm - 6:45pm
Location: CCRMA Listening Room
Event Type: Guest Colloquium

In the age of high-fidelity audio transmission, storage, and reproduction, the Western art music tradition has tended to regard the apparatus as a silent or invisible component of the art, particularly in the performance context of what was once called "tape music". One notion is that the ideal art music is pure: without media, without embodiment, and without the intervention or corruption of performers or performance. In publicly staged "tape music" events, audiences usually see loudspeakers dressed in black, intended to fade into darkness.

FREE Open to the Public


Creating a Computational Architecture of Group dynamics for Socially Aware Systems

KTH Royal Institute of Technology - Fri, 03/06/2015 - 17:18

Time: Fri 2015-03-06 15.00 - 17.00

Location: TMH, Room Fantum, 5th floor, Lindstedtsvägen 24

Type of event: Seminars

Michael Mandel on Auditory bubbles: Estimating time frequency importance functions

CCRMA-Stanford University - Thu, 03/05/2015 - 23:28
Date: Fri, 03/20/2015 - 11:00am - 12:30pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar

Listeners can reliably identify speech in noisy conditions, but it is not well understood which specific features of the speech they use to do this. This talk presents a data-driven framework for identifying these features. By analyzing listening-test results involving the same speech utterance mixed with many different "bubble" noise instances, the framework is able to compute the importance of each time-frequency point in the utterance to its intelligibility, which we call the time-frequency importance function. These results can be seen as a quantification of a listener's strategy for understanding a

FREE Open to the Public
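The computation described in the abstract can be illustrated with a small, entirely hypothetical simulation (the mask probability, number of trials, and "critical points" below are invented for this sketch): random bubble masks reveal time-frequency points, intelligibility is assumed to depend on two hidden critical points, and a per-point correlation between audibility and correctness recovers them as the time-frequency importance function.

```python
import numpy as np

rng = np.random.default_rng(0)
n_mix, freq_bins, time_frames = 500, 8, 10

# Each "bubble noise" instance reveals a random subset of TF points
# (mask=1 means the speech is audible at that point, mask=0 means masked).
masks = rng.random((n_mix, freq_bins, time_frames)) < 0.5

# Toy ground truth: the word is identified correctly only when both
# critical TF points happen to be audible in that mixture.
critical = [(2, 3), (5, 7)]
correct = np.all([masks[:, f, t] for f, t in critical], axis=0)

# Importance of each TF point: correlation between its audibility across
# mixtures and whether the (simulated) listener answered correctly.
m = masks.reshape(n_mix, -1).astype(float)
c = correct.astype(float)
corr = ((m - m.mean(0)) * (c - c.mean())[:, None]).mean(0) / (m.std(0) * c.std() + 1e-12)
importance = corr.reshape(freq_bins, time_frames)
```

In the real framework the correctness labels come from human listening tests and the masks from actual bubble-noise mixtures; the correlation map here only shows the shape of the computation, with the two critical points standing out as the highest-importance cells.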


Guests during Out of Hours in Queen's

Advice for students bringing guests into Queen's Creative Recording Studios (CTS) during Out-of-Hours

To meet Health and Safety requirements for the CTS in Queen's, all students
who are planning to bring in guests (e.g. members of a band or their own band,
guests of DemonFM, etc.) to record in the CTS during 'Out-of-Hours' must advise Security
24 hours in advance that guests will be working in the pre-booked studio(s).

This can be done by sending an email to Security. The student who has made the booking on the booking system must keep a record of the names of each guest including phone numbers and email addresses.

Before the start of the session, all guests must be registered at Security (Estates Building) by handing in a paper listing the required details, including the room number of the studio(s) they are working in and an expected time of departure.

At the end of a session, Security must be informed that all guests have vacated the building. This can be done using the phone at the rear entrance to Queen's.

If you have any questions, please get in touch with one of the technical demonstrators in either Queen's (Q1.38) or Clephan (CL00.06), or via email (audiotech).

Sound Classification by Prof. Dan Ellis (Columbia)

CCRMA-Stanford University - Wed, 03/04/2015 - 21:15
Date: Fri, 03/13/2015 - 11:00am - 12:30pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar

I’m happy to welcome Prof. Dan Ellis (from Columbia, and on sabbatical at Google) to Stanford CCRMA to talk about recognizing environmental sounds. Recognizing speech and music are relatively common applications of machine learning. But what about the rest of the world? Speech and music are only a small fraction of the sounds that we hear throughout our day.

Dan Ellis has been at the intersection of speech recognition, audio analysis and music processing research throughout his career. He brings an unusual range of interests and skills to all three problems, and I highly recommend his work.

FREE Open to the Public


SMC Researchers participate in ESART Forum in Castelo Branco

SMC Group, INESC Porto - Wed, 03/04/2015 - 18:46
Matthew Davies, Gilberto Bernardes, Georges Sioros and Diogo Cocharro participated in the “Composing by Listening, Learning & Remixing” session at the ESART Forum at IPCB, 25th February 2015.

New Door Lock for Sound Booths

The door to the sound booths in Queen's (Q1.06) has a new Salto lock.
Please update your access card at an online card reader!
These can be found, among other places, at the rear entrance to Queen's, at the main access door to the CTS (between 1.26 and 1.27), at the main entrance to PACE, at the entrance to the MTI basement, and in Clephan 00.16. Please also be aware that you will need a valid booking for one of the booths.
Should your card not work after you have updated it at an online reader AND you have a valid booking, please see one of the technicians in either Queen's (Q1.38) or Clephan (00.06), or get in touch by email or phone (see The Technicians on the MTI intranet).

Subscribe to our newsletter

IRCAM - Wed, 03/04/2015 - 13:02

It's free, it comes out every month, and it's the best way to keep up with news from Ircam.

CNMAT Users Group Presents: Dana Jessen, David Wegehaupt, and Jeff Anderle

CNMAT, UC at Berkeley - Wed, 03/04/2015 - 01:17
Start: 2015-03-08 20:00
End: 2015-03-08 22:00

Thesis Defense of Jules Françoise

IRCAM - Wed, 03/04/2015 - 00:00

"Motion-Sound Mapping by Demonstration". Wednesday, March 18 at 2:30pm

Jules Françoise will defend his doctoral thesis, carried out at Ircam within the Sound Music Movement Interaction team, in English.


Jury:

  • Frédéric Bevilacqua, Thesis advisor, Ircam
  • Thierry Artières, Thesis advisor, Professor at UPMC and Aix-Marseille University
  • Thierry Dutoit, Reviewer, Professor at the University of Mons
  • Marcelo Wanderley, Reviewer, Professor at McGill University, Montreal
  • Catherine Achard, Examiner, Maître de Conférences at UPMC (ISIR)
  • Olivier Chapuis, Examiner, Chargé de recherche at Université Paris-Sud, Orsay
  • Rebecca Fiebrink, Examiner, Lecturer at Goldsmiths, University of London
  • Sergi Jordà, Examiner, Professor at Universitat Pompeu Fabra, Barcelona

Abstract: "Learning the Relationships between Movement and Sound by Demonstration"

The design of the mapping between motion and sound is essential to the creation of interactive sound and music systems. This thesis proposes an approach called mapping by demonstration that allows users to create motion-sound interactions from examples of gestures performed while listening. The approach draws on existing studies in sound perception and cognition, and aims to integrate the action-perception loop more coherently into interaction design. Mapping by demonstration is a conceptual and technical framework for creating sound interactions from demonstrations of associations between motion and sound. The approach uses interactive machine learning to build the mapping from user demonstrations.

Drawing on recent work in animation, speech processing, and robotics, we propose to exploit the generative nature of probabilistic models, from continuous gesture recognition to the generation of sound parameters. We studied several probabilistic models, both instantaneous (Gaussian Mixture Models) and temporal (Hidden Markov Models), for recognition, regression, and the generation of sound parameters. We adopted an interactive machine learning perspective, with a particular focus on learning from a small number of examples and on real-time inference. The models represent either motion alone, or a joint representation of the gestural and sound processes, which then makes it possible to generate trajectories of sound parameters continuously from motion.

We explored a set of applications in movement practice and dance, in sound interaction design, and in music. We propose two approaches to motion analysis, based respectively on hidden Markov models and on hidden Markov regression. We show, through a case study in Tai Chi, that the models can characterize movement sequences across several performances and different participants. We developed two generic systems for movement sonification. The first system, based on Gaussian mixture regression, allows novice users to personalize strategies for the gestural control of sound textures. The second system associates vocalizations with continuous movements. Both systems have been presented as public installations, and we have begun to study their application to movement sonification in support of motor learning.
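As a purely illustrative aside, the Gaussian mixture regression underlying the texture-control system can be sketched as follows. This is not code from the thesis: the two-component joint (motion, sound) mixture below is hand-set, whereas in the mapping-by-demonstration approach the parameters would be learned from the user's demonstrations (e.g. with EM).

```python
import numpy as np

def gaussian_pdf(x, mean, cov):
    """Multivariate normal density (numerically naive, for illustration)."""
    d = len(mean)
    diff = x - mean
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    return np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff) / norm

def gmr_predict(x, weights, means, covs, dx):
    """Conditional expectation E[sound | motion] under a joint GMM.

    weights: (K,) mixture weights
    means:   (K, dx+dy) joint means over (motion, sound)
    covs:    (K, dx+dy, dx+dy) joint covariances
    dx:      dimensionality of the motion (input) part
    """
    K = len(weights)
    resp = np.zeros(K)
    cond_means = []
    for k in range(K):
        mu_x, mu_y = means[k][:dx], means[k][dx:]
        s_xx = covs[k][:dx, :dx]
        s_yx = covs[k][dx:, :dx]
        # Responsibility of component k for this motion input.
        resp[k] = weights[k] * gaussian_pdf(x, mu_x, s_xx)
        # Conditional mean of the sound part given the motion input.
        cond_means.append(mu_y + s_yx @ np.linalg.inv(s_xx) @ (x - mu_x))
    resp /= resp.sum()
    return sum(r * m for r, m in zip(resp, cond_means))

# Two hand-set components of a joint (motion, sound) mixture: one maps
# motion values near 0 to sound values near 0, the other maps 1 to 1.
weights = np.array([0.5, 0.5])
means = np.array([[0.0, 0.0], [1.0, 1.0]])
covs = np.array([[[0.05, 0.02], [0.02, 0.05]],
                 [[0.05, 0.02], [0.02, 0.05]]])

y = gmr_predict(np.array([0.0]), weights, means, covs, dx=1)
```

Each component contributes its conditional expectation of the sound parameter given the motion input, weighted by its responsibility for that input, so the output varies smoothly and continuously as the motion moves between regions covered by different demonstrations.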

  • Date: Wednesday, March 18, 2:30pm
  • Location: Ircam, salle Stravinsky
  • Free admission, subject to available seating

LLEAPP Festival, March 4-6, 2015

PACE, Phoenix Café Bar & Leicester Hackspace
LLEAPP (Laboratory for Laptop and Electronic Audio Performance Practice) is a collective of musician-researchers based in Edinburgh. It is run as a 3-day practice-led symposium, discussing tactics and strategies for collaborative play, with a series of open rehearsals, finishing with a performance each day.

LLEAPP started in 2009 at the University of Edinburgh, has since been held at different universities across the UK, and is being hosted this year by De Montfort University, Leicester.

Among the featured guests will be Hong Kong-based Takuro Lippit (aka DJ Sniff), a turntablist working in the field of improvised and experimental music; cellist and string arranger Audrey Riley, whose work ranges from The Smiths to the Merce Cunningham Dance Company; Swedish noise artist Max Wainwright; mobile artist Steranko; and John Richards with members of the Dirty Electronics Ensemble.

All LLEAPP participants: Owen Green, Taku Lippit, Max Wainwright, Steve Jones, J Richards, Amit Patel, Jim Frize, Sam Topley, Audrey Riley

Schedule

Wednesday 4th March 2015
13:00-14:00 – Seminar with Takuro Lippit + Owen Green (University of Edinburgh), MTIRL, Clephan Building
14:00-18:00 – Open rehearsals in the PACE Studio 1
19:00 – Concert 1, PACE Studio 1 (all LLEAPP)

Thursday 5th March 2015
9:00-12:00 – Breadboard Workshop with Jim Frize (Sonodrome) + DMU students at Leicester Hackspace (meet PACE @ 9:00am)
9:00-15:00 – Open rehearsals, PACE Studio 1, DMU (LLEAPP)
21:00-21:30 – Q&A with Jenny Walklate + LLEAPP musicians + The Real Junk Food Project, Leicester, in the Phoenix Café Bar
21:30-late – Concert 2, feat. DJ Sniff + Breadboard Workshop + LLEAPP musicians in the Phoenix Café Bar

Friday 6th March 2015
9:00 – LLEAPP get-in/rehearsal, PACE Studio 1
10:30-12:00 – Open rehearsals/workshop (MUST1008), mobile music feat. Steranko, Dushume, DMU students, PACE Studio 1
12:00-13:00 – Pay-As-You-Feel food provided by The Real Junk Food Project, Leicester, on the PACE Mezzanine
13:00-14:30 – Lunchtime Concert 3, PACE Studio 1 + mobile music performances
14:30-15:00 – Breakdown
17:00 – Social

Leicester Hackspace is a venue for makers of digital, electronic, mechanical and creative projects. The Real Junk Food Project is a pioneering UK movement that re-purposes food thrown away by supermarkets. For further information and details of featured artists, please contact Steve Jones at Jack Richardson or John Richards

ASEF Fellowship Program for research visits to U.S. universities

FRI - News and Events - Mon, 03/02/2015 - 00:00

The American Slovenian Education Foundation (ASEF) fellowship program offers funding for a 10-week summer research visit to a U.S. university or research institution. Financial support allows talented students to focus on their research and realize their full potential.

Xavier Serra - Music Information Retrieval from a Multicultural Perspective

CCRMA-Stanford University - Sun, 03/01/2015 - 22:59
Date: Mon, 04/06/2015 - 5:15pm - 7:00pm
Location: CCRMA Classroom
Event Type: Guest Colloquium

Music is a universal phenomenon that manifests itself in every cultural context with a particular personality, and the technologies supporting music have to take into account the specificities that every musical culture might have. This is particularly evident in the field of Music Information Retrieval, in which we aim to develop technologies to analyse, describe, and explore any type of music. From this perspective we started the project CompMusic, in which we focus on a number of MIR problems through the study of five music cultures: Hindustani (North India), Carnatic (South India), Turkish-makam (Turkey), Arab-Andalusian (Maghreb), and Beijing Opera (China).

FREE Open to the Public


2015 Electronic Music Midwest Call for Submissions

Bergen Center for Electronic Arts - Sun, 03/01/2015 - 21:48

Lewis University and Kansas City Kansas Community College are pleased to announce an international call for submissions for the Electronic Music Midwest Festival, to be held November 19-21, 2015 at Kansas City Kansas Community College. Each concert will feature an 8.1 speaker diffusion system. Acclaimed trumpeter Keith Benjamin will be the featured guest performer, and composers are encouraged to submit works for his consideration. Any composer, regardless of region, age, or nationality, may submit one work.

Deadline: May 20, 2015
Entry Fee: none

For complete guidelines, visit


CCRMA-Stanford University - Fri, 02/27/2015 - 20:25
Date: Fri, 03/06/2015 - 7:30pm - 9:00pm
Location: Dinkelspiel Auditorium
Event Type: Concert

VELA 6911
