News aggregator

Multimodal Language Grounding for Improved Human-Robot Collaboration - Exploring Spatial Semantic Representations in the Shared Space of Attention

KTH Royal Institute of Technology - Fri, 11/10/2017 - 21:24

Time: Fri 2017-11-10 15.00 - 17.00

Location: Fantum, Lindstedsvägen 24, 5th floor

Type of event: Seminars

James A. (Andy) Moorer: The Future of Technology - Looking Forward by Looking Back

CCRMA-Stanford University - Wed, 11/08/2017 - 00:42
Date: Mon, 11/27/2017 - 5:30pm - 7:00pm
Location: CCRMA Classroom [Knoll 217]
Event Type: Guest Colloquium

In 2000, the author published an article entitled "Audio in the New Millennium", using the author's experience to project 20 years into the future. Today we are 17 years into that projection. Comparisons of the state of the art with the projections lead to some startling observations with profound implications for the future of the relation between humans and technology. For instance, it is not hard to see that audio is the most powerful and critical medium today - more ubiquitous than even television. It was thought originally that television would entirely replace radio, but now we rely on radio to talk to our friends and stream our music, relegating television to the home or perhaps the sports bar.

FREE Open to the Public

Modeling fine time structure in the brain

CCRMA-Stanford University - Tue, 11/07/2017 - 17:54
Date: Fri, 01/19/2018 - 10:30am - 12:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar

Assessing the role of monaural and binaural temporal fine structure for robust speech perception: Insights from psychophysics and physiology-based modeling

Jayaganesh Swaminathan
Starkey Hearing Research Center, Berkeley, CA, USA

Open to the Public

Weighing acoustic factors in music and language during development

CCRMA-Stanford University - Sat, 11/04/2017 - 00:20
Date: Fri, 12/01/2017 - 10:30am - 12:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar

Christina M. Vanden Bosch der Nederlanden will be talking about the differences between speech and music as we develop our auditory brains. They seem pretty different to us now, but how do our young brains parse the cacophony of sounds and decide that some sounds are meant to be recognized as words, while other sounds are just to make us feel good? Or are they both?

Who: Christina M. Vanden Bosch der Nederlanden
What: Weighing acoustic factors in music and language during development
When: 10:30AM on Friday December 1, 2017
Where: CCRMA Seminar Room, top floor of the Knoll at Stanford
Why: How do we learn the meaning (or not) of sounds??
Open to the Public

AES E-News: November 2, 2017

AES E-News - Thu, 11/02/2017 - 23:19
1. 143rd Convention Wrap-Up
2. 2018 Milan Convention Call for Contributions
3. New AES Live Videos Available
5. Upcoming Conference News
6. Loudness Guidelines
7. Job Board Update
8. AES October Issue Now Available

Karya, a new sequencer, notation, and language

CCRMA-Stanford University - Mon, 10/30/2017 - 01:02
Date: Fri, 11/17/2017 - 5:00pm - 6:20pm
Location: CCRMA Classroom [Knoll 217]
Event Type: Guest Colloquium

Abstract: Karya is a music sequencer. Its main goal is to let you write a high level score which is then realized to expressive and idiomatic instrumental parts, primarily for computer realization. It uses its own score format. One way to look at it is a 2D language for expressing music along with an editor for that language. The score language has a built-in library of notation and has basic means for defining new notation, but more complicated notation is defined in Haskell. The idea is to have a standard library, but also be able to define notation specific to your score. The editor is graphical but also uses a Haskell REPL for configuration, automation, and extension.

FREE Open to the Public

Demixing and Remixing Music with Deep Learning

CCRMA-Stanford University - Sun, 10/29/2017 - 00:31
Date: Fri, 11/10/2017 - 5:00pm - 6:20pm
Event Type: Guest Lecture

Abstract: In 2015 Alejandro Koretzky created tuneSplit with the goal of democratizing music creation and remixing while introducing the concept of “Semantic Equalization” in music. By implementing an end-to-end pipeline that performs audio source separation in real time, commercial stereo music can be deconstructed into different instruments and vocals, allowing users to personalize the listening experience and unlocking the possibilities for remixing using parts of existing stereo mixes. Initial versions of the underlying algorithms were based on a proprietary adaptive version of Non-negative Matrix Factorization.

FREE Open to the Public
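The abstract credits early versions of the separation algorithms to a proprietary adaptive variant of Non-negative Matrix Factorization. As background only, here is a minimal sketch of plain NMF with the standard Lee-Seung multiplicative updates on a toy magnitude "spectrogram"; the `nmf` helper and the toy matrix are illustrative, not tuneSplit's actual method:

```python
import numpy as np

def nmf(V, rank, iters=1000, eps=1e-9):
    """Factor a non-negative matrix V (freq x time) into W (freq x rank)
    spectral templates and H (rank x time) activations, using the
    Lee-Seung multiplicative updates for the Frobenius-norm objective."""
    rng = np.random.default_rng(0)
    F, T = V.shape
    W = rng.random((F, rank)) + eps
    H = rng.random((rank, T)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update templates
    return W, H

# Toy spectrogram: two spectral templates active in disjoint time spans,
# standing in for two "sources" to be separated.
V = np.array([[1.0, 1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
W, H = nmf(V, rank=2)
print(np.linalg.norm(V - W @ H))  # reconstruction error should be small
```

In a source-separation pipeline, each rank-1 term `W[:, k:k+1] @ H[k:k+1, :]` is then used to mask the mixture spectrogram and resynthesize one source.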

Attentive speaking – From listener feedback to interactive adaptation

KTH Royal Institute of Technology - Fri, 10/27/2017 - 20:01

Time: Fri 2017-10-27 15.00 - 17.00

Location: Fantum, Lindstedsvägen 24, 5th floor

Type of event: Seminars

Dante Grela

Phonos project - UPF - Fri, 10/27/2017 - 14:53
Date: Wednesday, November 15, 2017 - 19:30
Location: Sala Polivalent, UPF Poblenou, C/ Roc Boronat 138, Barcelona

Free admission



“Glaciación” (1979), quadraphonic version. “Síncresis” (2006), quadraphonic version. “Imaginarios” (2010), acousmatic, quadraphonic version. “Encuentros mágicos” (2017), octophonic.  

Dante Grela is a composer, university professor, and researcher, born in Rosario (Santa Fe), Argentina, in 1941. He is the author of several works on the pedagogy of composition, analysis, and orchestration, as well as on contemporary musical creation in Latin America. In the field of music research, he has directed projects dedicated in particular to the study of Latin American production of the 20th and 21st centuries, based at the Instituto Superior de Música de la Universidad Nacional del Litoral.

He has been a guest composer and lecturer at contemporary music festivals in Argentina, Brazil, Venezuela, Uruguay, Chile, El Salvador, Canada, and the United States, and has given a large number of courses and lectures on composition, analysis, and the techniques and aesthetics of contemporary music and Latin American musical creation. He has regularly directed ensembles dedicated to the interpretation and diffusion of 20th-century music.

As a composer, his works have received awards on several occasions and have had numerous premieres in Argentina, Brazil, Chile, the U.S.A., Venezuela, Spain, Canada, El Salvador, Germany, and Uruguay. His output includes works for solo instruments, chamber and symphonic music, electroacoustic music, and mixed compositions (for instrumental sound sources and electronic sounds).



with the support of:

Generalitat de Catalunya: Departament de Cultura
Ajuntament de Barcelona: Barcelona Cultura
Universitat Pompeu Fabra

Data-efficient Machine Learning with People and Robots

KTH Royal Institute of Technology - Fri, 10/20/2017 - 18:06

Time: Fri 2017-10-20 15.00 - 17.00

Location: Fantum, Lindstedsvägen 24, 5th floor

Type of event: Seminars

Jan Skoglund on Objective Quality Assessment for Immersive Audio

CCRMA-Stanford University - Thu, 10/19/2017 - 18:22
Date: Fri, 11/10/2017 - 10:30am - 12:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar

With the surging interest in augmented and virtual reality, there is more and more interest in high-quality 3D sound rendering. We heard many talks about this last year, with very smart people rendering sounds in complicated physical environments. My favorite was one group who said they could model the sounds from a hallway the user couldn't see. But does this matter? I'm not sure I could tell the difference. But I'm mindful of the fact that at one point people thought LP records were the ultimate in audio fidelity.
FREE Open to the Public

Aren Jansen on AudioSet: Real world audio event classification

CCRMA-Stanford University - Thu, 10/19/2017 - 18:19
Date: Fri, 11/03/2017 - 10:30am - 12:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar

AudioSet is an attempt to do for audio processing what big image databases like ImageNet have done for computer vision. Arguably, big image datasets like ImageNet, by Prof. Li Fei-Fei at Stanford, and the competitions they have spawned have advanced image recognition more than any other research result.

Aren Jansen, from Google, will be talking about AudioSet. He will talk about their data collection effort, how the data is organized and the first results on sound object recognition from this large dataset.  By large, they mean 2.1M human-annotated videos, 5.8M hours of audio, and 527 classes of sounds.
FREE Open to the Public

Malcolm Slaney on Cool Audio Projects from the Telluride Neuromorphic Workshop

CCRMA-Stanford University - Thu, 10/19/2017 - 18:10
Date: Fri, 10/27/2017 - 10:30am - 12:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar

For many years a collection of world-renowned faculty and amazing students have gathered in the mountains of Telluride to propose and pilot interesting auditory experiments. This past summer was no exception, and Prof. Fujioka and I would like to review the 11 auditory projects that were successful this year. These range from music perception, to decoding EEG responses, to matching deep neural networks and brains. We were studying musical scales, tension, and rhythm. What could be more fun?!

The projects include:
    Tension decoding ♪
    Musical scales ♪
    Salience detection
    Hierarchical features for decoding

FREE Open to the Public

Invisible Choirs, a solo exhibition by Nolan Lem

CCRMA-Stanford University - Wed, 10/18/2017 - 20:57
Date: Fri, 11/03/2017 - 6:00pm - Fri, 12/01/2017 - 6:00pm
Location: Pro Arts Gallery, 150 Frank H Ogawa Plaza, Oakland, CA 94612
Event Type: Other

CCRMA PhD student Nolan Lem will premiere his first solo exhibition, comprised of new mixed-media and sound-based works, at Pro Arts Gallery in Oakland, CA. Invisible Choirs examines the automation of artificial intelligence by exploring the pathological ramifications of an increasingly technocentric society. Focusing on the emergence of artificially intelligent machines, Lem questions the relationship between technological modes of production and physical labor, visibility and identity, and autonomy and monotony. Conceived as a set of mixed-media, kinetic, and sound-based works, the installation's environment is constructed as an interactive neural network, one that renders visible the physical and algorithmic automata that seek to govern our daily lives.
FREE Open to the Public

Generative Models for Music and Art

CCRMA-Stanford University - Mon, 10/16/2017 - 21:32
Date: Thu, 10/26/2017 - 6:00pm - 7:20pm
Location: CCRMA Classroom [Knoll 217]
Event Type: Guest Lecture

Abstract: Doug Eck will discuss Magenta, a Google Brain project investigating music and art generation using deep learning and reinforcement learning. He will describe the goals of Magenta and how it fits into the general trend of AI moving into our daily lives. One crucial question is: where do AI and machine learning fit in the creative process? The speaker argues that generative models are the core tools to import from machine learning, and introduces concepts such as autoencoders, recurrent neural networks, variational methods, generative adversarial networks (GANs), and different sampling methods.
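The autoencoder, one of the building blocks the abstract lists, can be illustrated with a minimal linear example (plain numpy; this toy is for orientation only and is unrelated to Magenta's actual models): an encoder compresses 2-D points to a 1-D latent code, a decoder reconstructs them, and both are trained by gradient descent on reconstruction error.

```python
import numpy as np

# Toy data: 2-D points lying near a 1-D line, plus a little noise,
# so a single latent dimension can explain almost all the variance.
rng = np.random.default_rng(1)
t = rng.standard_normal((100, 1))
X = np.hstack([t, 2 * t]) + 0.01 * rng.standard_normal((100, 2))

# Linear autoencoder: encoder W_e (2 -> 1), decoder W_d (1 -> 2).
W_e = rng.standard_normal((2, 1)) * 0.1
W_d = rng.standard_normal((1, 2)) * 0.1
lr = 0.01
for _ in range(2000):
    Z = X @ W_e                    # encode to 1-D latent code
    X_hat = Z @ W_d                # decode back to 2-D
    err = X_hat - X
    grad_Wd = Z.T @ err / len(X)   # gradient of mean squared error
    grad_We = X.T @ (err @ W_d.T) / len(X)
    W_d -= lr * grad_Wd
    W_e -= lr * grad_We

print(np.mean((X - (X @ W_e) @ W_d) ** 2))  # reconstruction MSE, near noise floor
```

Deep, nonlinear versions of the same encode-decode idea, with a prior imposed on the latent code, give the variational autoencoders the talk mentions.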

FREE For CCRMA Users Only
