Sound and music computing

MUSA: Accessible music to break invisible barriers

Phonos project - UPF - Fri, 02/02/2018 - 16:43

MUSA is the new Phonos project, in collaboration with the Music Technology Group (MTG) and the Escola Superior de Música de Catalunya (ESMUC), that has received support from the Obra Social la Caixa within its Art for Change program. The project, which will last a year, aims to promote interest in music among people whose motor disabilities prevent them from playing traditional musical instruments. Through adapted digital instruments such as the EyeHarp, participants will learn to play musical pieces and take part in music groups. The project will end with a concert, open to the public, in which the adapted digital instruments and traditional ones will coexist to play a shared repertoire.

The EyeHarp is a digital instrument developed by the MTG that allows people with severe motor disabilities to learn and play music using only their eyes. We would like to offer users of the EyeHarp and similar instruments the possibility of playing in a music group, together with other musicians, with the aim of improving their autonomy through art and of sharing interests with other people, with music as the common ground.

Phases of the project:

- Training for users of the EyeHarp and other accessible interfaces
A training workshop where participants will learn about the instruments and how to play them.

- Creation of musical pieces adapted to the new instruments
With the collaboration of ESMUC, a set of arrangements will be created to adapt the musical pieces for performance with the different accessible interfaces.

- Creation of musical groups with traditional and accessible instruments
These musical groups, formed by music students, will play different repertoires and combine the use of traditional instruments with accessible interfaces.

- Final concert
A concert presenting pieces produced during the project. It will take place in a venue to be determined and will be open to the public.

This project is coordinated by the Phonos Foundation, with the collaboration of the Music Technology Group (Universitat Pompeu Fabra) and the Escola Superior de Música de Catalunya (ESMUC), and it has the support of the Obra Social La Caixa within the Art for Change program.

Fourth artist-in-residence program || MUTEK & MTG & PHONOS

Phonos project - UPF - Tue, 01/30/2018 - 15:26

Aiming to foster closer ties between creators and research in sound and music technology, the MUTEK Festival [ES] is organizing, for the fourth consecutive year, an artistic and research residency aimed at creators interested in working with interactive technologies and artificial intelligence.

The selected artist will work with experts for three weeks to create a musical project that will be presented at the Mazda Space during the festival, using technologies developed in the context of the TELMI project, which proposes new intelligent interactive systems to facilitate the learning of music performance. The project involves a variety of sensors (microphones, 3D cameras, position sensors, EMG, EDA and EEG) to capture audio, movement and physiological information. With these resources and technologies the artist will be able to create a musical and audiovisual performance.

More information and conditions of the call: https://www.patcomunicaciones.com/cuarta-residencia-para-creadores-mutek...

Expressive MIDI Control in VR and Foot Control

CCRMA-Stanford University - Wed, 01/24/2018 - 09:02
Date: Tue, 01/30/2018 - 5:00pm - 5:50pm
Location: CCRMA Classroom [Knoll 217]
Event Type: Guest Colloquium
Speaker: Loki Davison from musicroomvr.com

Abstract: After commercially releasing a number of VR MIDI controllers and a physical multidimensional foot controller, I'll share my development process, findings from observing hundreds of musicians using them, and feedback from customers. I'll highlight common pitfalls and discuss how multidimensional control can help make new expressive instruments, and also combine with traditional instruments to enable deep live effect control.


FREE Open to the Public

Yaqing Su on Neural Coding of Pitch Cues in the Auditory Midbrain

CCRMA-Stanford University - Tue, 01/23/2018 - 23:28
Date: Fri, 02/16/2018 - 10:30am - 12:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar

Just how do we perceive pitch? After many decades it's still a source of many disagreements, and much research.

Pitch is critical to auditory perception. It’s an important cue for separating sounds in a cocktail party. It binds a speech signal together so you hear speech instead of a bunch of chirps. Yet, the exact mechanism that our brain uses to hear the pitch of a signal is not known. Yaqing Su will be talking about evidence she has found that suggests that pitch is perceived with a combination of measures, effectively sidestepping the argument about place vs. time. This will be fun.
FREE Open to the Public


Decoding EEG Signals by Malcolm Slaney

CCRMA-Stanford University - Mon, 01/22/2018 - 18:36
Date: Fri, 02/02/2018 - 10:30am - 12:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar

There are many ways to interpret EEG signals, but the newest approaches eschew averaging and perform single-shot decoding. This allows for real-time information about how a subject is perceiving a sound. How does one connect arbitrary audio to the resulting EEG signals? Or how does one predict the audio that resulted in a particular EEG signal? Can I tell you the sound you are attending to? I'll be talking about recent work using linear and correlation methods to build better models that connect audition and EEG signals. This is a form of system identification, and has been shown to work for decoding which of two audio signals one is attending to. Cool stuff.

Who: Malcolm Slaney (Google and CCRMA)
What: EEG Decoding Algorithms and Results FREE Open to the Public
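The linear/correlation approach described above can be sketched in a few lines. The following is a toy illustration on synthetic data (the envelope shapes, dimensions and noise model are all invented for the example), not the actual decoding pipeline from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration only (not real EEG): two independent, slowly varying
# "audio envelopes", and EEG channels that are a noisy linear mixture of the
# attended one (here, envelope A).
n, n_ch = 2000, 8
smooth = lambda x: np.convolve(x, np.ones(50) / 50, mode="same")
env_a = smooth(rng.standard_normal(n))
env_b = smooth(rng.standard_normal(n))
env_a /= env_a.std()
env_b /= env_b.std()
mixing = rng.standard_normal(n_ch)
eeg = np.outer(env_a, mixing) + rng.standard_normal((n, n_ch))

# Linear "backward model": least-squares weights that reconstruct an envelope
# from the EEG channels, fit on the first half of the data.
train, test = slice(0, n // 2), slice(n // 2, n)
w, *_ = np.linalg.lstsq(eeg[train], env_a[train], rcond=None)

# Single-shot decoding on held-out data: reconstruct the envelope, correlate
# it with each candidate, and report whichever correlates more strongly.
recon = eeg[test] @ w
corr = lambda x, y: np.corrcoef(x, y)[0, 1]
attended = "A" if corr(recon, env_a[test]) > corr(recon, env_b[test]) else "B"
print(attended)
```

Real systems add time-lagged features and regularization, but the core idea is the same: a linear map from EEG to audio, scored by correlation.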


CCRMA At Bing - Greatest Bongs

CCRMA-Stanford University - Wed, 01/17/2018 - 20:20
Date: Fri, 01/26/2018 - 7:30pm - 9:00pm
Date: Sat, 01/27/2018 - 7:30pm - 9:00pm
Location: BING CONCERT HALL - MAIN HALL
Event Type: Concert

CCRMA presents two concerts of the best of CCRMA's recent electroacoustic multichannel music, deploying our GRAIL (Giant Radial Array for Immersive Listening) speaker array on the main stage of Bing:

Friday January 26th:

Mark Applebaum
Jonathan Berger
Alex Chechile
Chris Chafe
Elliot Canfield-Dafilou
Fernando Lopez-Lezcano
Anders Tveit

Saturday January 27th:

Constantin Basica
Chris Chafe
Christopher Jette
Jarek Kapuscinski
Fernando Lopez-Lezcano
Anders Tveit
Nick Virzi

FREE Open to the Public

the first time you closed your eyes

CCRMA-Stanford University - Fri, 01/12/2018 - 07:35
Date: Mon, 01/22/2018 - 7:30pm - 8:40pm
Location: CCRMA STAGE
Event Type: Concert

Jane Rigler, Christopher Jette and Matt Wright present an evening of improvised electro-acoustic music. "the first time you closed your eyes" provides a contemplative point of departure for the sonic explorations. The two ~22-minute sets take advantage of the 64-channel array of speakers on the CCRMA stage. The performance will combine flute, a monochord and a cybernetically controlled feedback network.

FREE Open to the Public

Romain Michon's Dissertation Defense: The Hybrid Mobile Instrument — Recoupling the Haptic, the Physical, and the Virtual

CCRMA-Stanford University - Wed, 01/10/2018 - 00:30
Date: Wed, 02/07/2018 - 5:30pm - 7:00pm
Location: CCRMA Stage
Event Type: Other

Romain Michon will present a defense of his doctoral dissertation, "The Hybrid Mobile Instrument — Recoupling the Haptic, the Physical, and the Virtual."

For those who are not able to attend in person, a live stream link will be available soon. 

Abstract:

The decoupling of the "controller" from the "synthesizer" is one of the defining characteristics of digital musical instruments (DMIs). While this allows for much flexibility, this "demutualization" (as Perry Cook termed it) sometimes results in a loss of intimacy between the performer and the instrument.

FREE Open to the Public


AES E-News: January 8, 2018

AES E-News - Mon, 01/08/2018 - 22:16
1. Register Now for AES@NAMM!
2. 2018 Milan Convention Paper Deadline Approaches
3. New AES Live Videos Available
4. Upcoming Conference News
5. Standards News
6. Job Board Update
7. AES December Issue Now Available

Audio Style Transformations using Deep Neural Networks by Prateek Verma

CCRMA-Stanford University - Mon, 01/08/2018 - 06:40
Date: Fri, 01/12/2018 - 10:30am - 12:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar

Deep Neural Networks (DNNs) have been wildly successful for many tasks, but none of the tasks are as wondrous as the success that DNNs have had on transferring the style of one painting to another painter's art. This magical trick is accomplished by mixing and matching the low-level feature analysis layers between different styles of painting, so a painting in one style is rendered in another with different kinds of brushwork.

But can we do this for audio? Prateek Verma has been experimenting with this, and will talk about his work and results. What does audio style mean, and how does one capture it?
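For readers unfamiliar with the image-domain trick being borrowed: the "style" statistic in that line of work is typically a Gram matrix of a layer's feature activations. Here is a toy numpy sketch on random stand-in features (illustrative only, not the work being presented):

```python
import numpy as np

def gram(features):
    # Channel-by-channel correlations of a (channels, positions) feature map:
    # a summary of which features co-occur, discarding where they occur.
    f = features - features.mean(axis=1, keepdims=True)
    return (f @ f.T) / f.shape[1]

def style_loss(fa, fb):
    # Mean squared difference between the two Gram matrices.
    return float(np.mean((gram(fa) - gram(fb)) ** 2))

# Toy stand-ins for one layer's activations over a spectrogram's time axis;
# in the real systems these come from a trained network, not random numbers.
rng = np.random.default_rng(1)
content_feats = rng.standard_normal((16, 128))
style_feats = rng.standard_normal((16, 128))

# Style transfer iteratively adjusts the input so this loss shrinks, while a
# plain feature-distance "content loss" keeps the broad structure in place.
print(f"style loss: {style_loss(content_feats, style_feats):.3f}")
```

For audio, the open question the talk addresses is what plays the role of these feature maps and whether such position-free statistics capture anything we would call an audio "style".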

Who: Prateek Verma (Stanford CCRMA)
What: Audio Style Transformations using Deep Neural Networks

FREE Open to the Public


Specifying, implementing and deploying low-level avatar animation systems: lessons I’ve learned after ten years in the field.

KTH Royal Institute of Technology - Fri, 12/15/2017 - 21:12

Time: Fri 2017-12-15 10.00 - 10.45

Location: F0, Lindstedsvägen 24, 5th Floor

Type of event: Seminars

Social Human-Robot Interaction

KTH Royal Institute of Technology - Fri, 12/15/2017 - 21:12

Time: Fri 2017-12-15 10.45 - 11.30

Location: F0, Lindstedsvägen 24, 5th floor

Type of event: Seminars

Performance, Processing and Perception of Communicative Motion for Avatars and Agents

KTH Royal Institute of Technology - Fri, 12/15/2017 - 21:12

Time: Fri 2017-12-15 14.00

Location: F3, Lindstedtsvägen 26, KTH Campus

Type of event: Dissertations

Quarante Composants - CDMC/Ina GRM video collection

INA-GRM - Sun, 12/03/2017 - 13:54
10 December 2017 - 31 January 2018

Quarante Composants

The project for this video collection was conceived to mark the Cdmc's fortieth anniversary.

The GRM agenda

A Non-Sequential Approach for Browsing Large Sets of Found Audio Data

KTH Royal Institute of Technology - Tue, 11/28/2017 - 23:42

Time: Tue 2017-11-28 15.00

Location: Fantum, Lindstedsvägen 24, 5th floor

Type of event: Seminars
