ABSTRACT OF THE TALKS
JOSEP LLUIS ARCOS - Artificial Intelligence Research Institute (IIIA, CSIC): SMC: Challenges and Opportunities. A Personal View.
Understanding the gap between a musical score and a real performance of that score is still a challenging problem. The research interest comes from different motivations: to understand or model music expressivity; to identify the expressive resources that characterize an instrument, musical genre, or performer; or to build synthesis systems able to play expressively. Our research is currently focused on the study of classical guitar and aims at designing a system able to model the use of the expressive resources of that instrument. From a Machine Learning perspective, we propose: (1) a multimodal approach that combines sources such as audio analysis or gesture analysis; (2) to deal with multi-scale events; and (3) to combine Knowledge Intensive and Data Intensive methods. A detailed description of our research is available at http://www.iiia.csic.es/guitarLab
JOSÉ MANUEL INIESTA - University of Alicante: The Computer Music Laboratory at the University of Alicante
The Computer Music Laboratory of the University of Alicante is part of a wider research group named PRAIg (Pattern Recognition and Artificial Intelligence group). The group currently comprises four doctors, one doctoral lecturer, two technicians, and a variable number of graduate, postgraduate, and doctoral students. A number of projects on music style recognition, melody track identification, melodic analysis and similarity, automatic transcription of digital audio, algorithmic composition, and digital sound synthesis are currently being developed. The group collaborates in a number of national and international projects and maintains close relations with both Spanish and European groups.
DARRELL CONKLIN - Departamento de Ciencias de la Computación e Inteligencia Artificial, Universidad del País Vasco:
Pattern discovery is a central part of symbolic music processing, with a wide range of applications, including motivic analysis, music generation, music classification, music information retrieval, and performance analysis. Pattern discovery is an important descriptive method for obtaining an overall view of a set of pieces and for indicating comprehensible subgroups, distributions, and clusters in a database of pieces. Maximally general distinctive patterns in music are patterns that are distinctive or significantly over-represented in a positive set (called the corpus) with respect to a background set (called the anticorpus), and furthermore the most general among all such distinctive patterns.
Furthermore, in music there may exist a background ontology, that is, a hierarchy of classes overlaying the pieces. For example, pieces might be assigned to geographical regions of origin, which may in turn be grouped into wider enclosing regions. There may also exist a hierarchy of song types or genres.
This talk will report on data mining of folk songs for distinctive patterns. A set of folk songs was collected, organized using an ontology, and mined using distinctive pattern discovery methods. Several highly distinctive and interesting melodic patterns emerge, indicating the ability of patterns to describe interesting subgroups of folk songs.
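As a rough illustration of the over-representation idea above (a minimal sketch, not the discovery algorithm used in the talk), the following Python snippet scores a melodic pattern by the ratio of its relative support in a corpus to its relative support in an anticorpus. The interval encoding, the toy data, and the plain ratio score are all illustrative assumptions.

```python
def support(pattern, pieces):
    """Count the pieces that contain the pattern as a contiguous subsequence."""
    n = len(pattern)
    return sum(
        any(piece[i:i + n] == pattern for i in range(len(piece) - n + 1))
        for piece in pieces
    )

def distinctiveness(pattern, corpus, anticorpus):
    """Over-representation score: relative support in the corpus divided
    by relative support in the anticorpus (higher = more distinctive)."""
    p = support(pattern, corpus) / len(corpus)
    q = support(pattern, anticorpus) / len(anticorpus)
    return p / q if q > 0 else float("inf")

# Toy data: melodies encoded as pitch-interval sequences (in semitones).
corpus = [[2, 2, -1, 2], [2, 2, -1, 0], [0, 2, 2, -1]]
anticorpus = [[2, 2, -1, 1], [-2, -2, 1], [5, -5, 5]]
# The pattern occurs in all three corpus pieces but only one anticorpus piece.
score = distinctiveness([2, 2, -1], corpus, anticorpus)
```

In a real setting one would additionally test such scores for statistical significance and keep only the maximally general patterns among those found distinctive.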
RAFAEL RAMÍREZ - Music Technology Group, Universitat Pompeu Fabra : Music Expression and Cognition
Professional musicians manipulate sound properties such as pitch, timing, amplitude and timbre in order to express their view of the content of musical pieces. However, there is little quantitative information about how and in which contexts this manipulation occurs. We present our work on quantitatively modelling, analysing, and identifying interpreters in expressive performances. We apply sound analysis techniques based on spectral models to real audio performances in order to extract both inter-note and intra-note expressive features. Based on these features, we then apply different machine learning techniques to train computational models characterising different aspects of expressive performance. In this talk we identify topics for possible collaboration with other SITEMU members.
LORENZO TARDÓN - ETSI Telecomunicación. Universidad de Málaga: The ATIC Group: an Account of its Musical Research
The research group on Applications of Information Technologies and Communications (Grupo de Investigación de Aplicación de las Tecnologías de la Información y Comunicaciones, ATIC) carries out its activities at the Telecommunication Engineering School of the University of Málaga. The group has extensive experience in audio and music signal processing and is equipped with the latest generation of equipment and tools for multimedia signal acquisition and processing. The quality of the group is endorsed by a large number of funded research projects and many international publications. The main research lines are:
- Processing of musical information. This research line includes automatic piano music transcription, expressive transcription of violin and viola music, automatic guitar transcription, drum transcription, etc.
- Processing of audiovisual signals. In this line, the joint analysis of audio and video signals is noteworthy.
- Optical music recognition. This line focuses on ancient scores, which use specific music notation and are in varying states of conservation.
- Music Information Retrieval (MIR) techniques. Based on the processing of digital audiovisual signals, we have created several interesting applications, such as systems for discriminating between music and voice, creating playlists by similarity, new song-search systems, etc.
- Interactive audiovisual applications. New methods to aid music teaching and the development of 3D interfaces for the exploration of musical contents.
PACO GÓMEZ: The COFLA Group
In this talk we will present the research activity of the COFLA Group. This group was created to fill certain lacunae in flamenco research, such as: more systematic analysis; computational models; modern analysis theories and methodologies; and interdisciplinary analysis. We are currently tackling the following problems:
- Melodic representation of flamenco music.
- Music transcription.
- Rhythmic similarity.
- Melodic similarity.
- Computational musical models for flamenco music.
- Musical characterization of flamenco styles.
- Style classification and style evolution in flamenco music.
We will mention some of the results we have obtained in the last two years.
NORBERTO DEGARA - Universidad de Vigo: Reliability-Informed Beat Tracking of Musical Audio
In this talk, a new probabilistic framework for beat tracking of musical audio is presented. The method estimates the time between consecutive beat events and exploits both beat and non-beat information by explicitly modeling non-beat states. In addition to the beat times, a measure of the expected accuracy of the estimated beats is provided. The quality of the observations used for beat tracking is measured, and the reliability of the beats is automatically calculated using a k-nearest neighbor regression algorithm. The performance of the beat tracking system is statistically evaluated and compared with existing algorithms. Finally, we show how reliability information can be used to increase performance, and we compare automatic beat tracking to human tapping.
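The k-nearest neighbor regression step mentioned above can be sketched as follows: a minimal illustration assuming Euclidean distance over some observation-quality features. The toy feature vectors and accuracy values are invented for the example and are not those of the actual system.

```python
import math

def knn_regress(query, examples, k=3):
    """Predict a continuous target (here: expected beat-tracking accuracy)
    as the mean target of the k training examples whose feature vectors
    are closest to the query in Euclidean distance."""
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = sorted(examples, key=lambda ex: dist(query, ex[0]))[:k]
    return sum(target for _, target in nearest) / k

# Toy training set: (observation-quality features, measured beat accuracy).
train = [
    ([0.90, 0.80], 0.95), ([0.85, 0.75], 0.90), ([0.50, 0.50], 0.70),
    ([0.20, 0.30], 0.40), ([0.10, 0.20], 0.35),
]
# A query with high-quality observations averages the two closest examples.
predicted = knn_regress([0.88, 0.79], train, k=2)
```

The predicted value can then serve as the expected accuracy of the beats estimated for an unseen track.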
FABIEN GOUYON - INESC Porto: Recent Research and Education Activities in Sound and Music Computing at INESC Porto/FEUP
LUIS G. MARTINS: A Computational Framework for Sound Segregation in Music Signals
Music is built from sound, ultimately resulting from an elaborate interaction between the sound-generating properties of physical objects (i.e. music instruments) and the sound perception abilities of the human auditory system. Humans, even without any kind of formal music training, are typically able to extract, almost unconsciously, a great amount of relevant information from a musical signal (e.g. the beat and main melody of a musical piece, or the sound sources playing in a complex musical arrangement). In order to do so, the human auditory system uses a variety of cues for perceptual grouping such as similarity, proximity, harmonicity, common fate, among others. The work presented in this talk proposes a flexible and extensible Computational Auditory Scene Analysis (CASA) framework for modeling perceptual grouping in music listening. The goal of the proposed framework is to partition a monaural acoustical mixture into a perceptually motivated topological description of the sound scene (similar to the way a naive listener would perceive it) instead of attempting to accurately separate the mixture into its original and physical sources.
RUI PEDRO PAIVA - University of Coimbra, Portugal: MOODetector: A System for Mood-based Classification and Retrieval of Audio Music
Digital music repositories grow in size and complexity each day and need more advanced, flexible and user-friendly search mechanisms, adapted to the requirements of individual users. In fact, “music’s preeminent functions are social and psychological”, and so “the most useful retrieval indexes are those that facilitate searching in conformity with such social and psychological functions. Typically, such indexes will focus on stylistic, mood, and similarity information” [Huron, 2000]. This is supported by studies on music information behaviour that have identified music mood as an important criterion for music retrieval and organization [Juslin & Laukka, 2004]. Mood analysis in audio signals has received growing interest recently. As a very recent research topic, it still has many limitations, and several problems remain open. The effectiveness of such systems demands research on, e.g., feature extraction, selection and evaluation; extraction of knowledge from computational models; and the tracking of mood variations throughout a song. In this talk, we will present the MOODetector project: A System for Mood-based Classification and Retrieval of Audio Music. This is a project funded by the Portuguese “Fundação para a Ciência e Tecnologia” that started in May 2010, aiming to tackle some of the research possibilities in mood analysis.
FILIPE LOPES - ESMAE & Fundação Casa da Música: Academic and non Academic Music Technology Intersections
Digital music making and hardware development are evolving and growing exponentially with today's availability of free software, musical content and proper expertise on the internet. I will report on and compare two music technology contexts in both of which I am significantly involved: the academic curriculum of the Escola Superior de Música e das Artes do Espectáculo do Porto (ESMAE) and a non-academic project, Digitópia, hosted at Casa da Música, Porto's main concert venue. Both address music technology issues and aim at developing musical skills based on digital music practices. What can academic contexts, both higher and non-higher education, gain from the exponential growth of non-academic digital music practice? How can that affect the curriculum of music technology courses, or of courses that are not exclusively related to music technology, such as composition degrees? How should they influence each other?
PEDRO VERA CANDEAS - University of Jaen: Separation of Singing Voice from Music Accompaniment
Separation of singing voice from music accompaniment is a topic of great utility in many applications of Music Information Retrieval (MIR), such as singer identification, melody transcription or lyrics recognition. In the context of stereophonic music mixtures, many algorithms approach this problem by modeling the mixture as an instantaneous sum of sound sources, and by using the Interchannel Intensity Difference (IID) to localize and isolate the singing voice. Although this approach can obtain acceptable results, the separated signal usually contains a high level of interferences and artifacts, which are a consequence of the blind nature of the separation. In the work described here, we propose a method for improving the isolation of the singing voice in stereo recordings based on incorporating fundamental frequency (F0) information into the separation process. The proposed method involves three main steps. First, an initial estimate of the singing-voice signal is constructed blindly by analyzing the IID of the mixture, in the same manner as typical stereo separation algorithms. Then, the F0 of this signal estimate is obtained using a pitch estimator that incorporates voiced/unvoiced decisions; here, a robust estimator based on the computation of the difference function and Hidden Markov Models (HMMs) is proposed, which makes it possible to obtain a smooth pitch contour over time. Finally, given the F0, a binary harmonic mask is constructed to select the harmonics of the singing voice in the original spectrum, and distorted harmonics are corrected to preserve the same IID. The method has been tested on commercial music recordings, obtaining good separation results. Information about the authors: P. Cabañas-Molero, P. Vera-Candeas, F. Rodriguez-Serrano, J. Carabias-Orti, F. Canadas-Quesada and N. Ruiz-Reyes. Telecommunication Engineering Department, Polytechnic School of Linares, University of Jaen, C/ Alfonso X el Sabio, Linares, Jaen, Spain.
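The binary harmonic mask in the final step can be illustrated with a small sketch (not the authors' implementation): it keeps only the spectrogram bins lying near harmonics of a given F0. The function name, the 40 Hz tolerance, and the analysis parameters are illustrative assumptions.

```python
import numpy as np

def harmonic_mask(n_bins, sr, n_fft, f0, width_hz=40.0):
    """Binary mask keeping the spectrogram bins that lie within
    width_hz/2 of any harmonic k*f0 of the estimated pitch."""
    freqs = np.arange(n_bins) * sr / n_fft   # bin centre frequencies (Hz)
    mask = np.zeros(n_bins, dtype=bool)
    k = 1
    while k * f0 <= freqs[-1]:
        mask |= np.abs(freqs - k * f0) < width_hz / 2
        k += 1
    return mask

# Example: 1025 bins (n_fft = 2048) at 44.1 kHz, singing-voice F0 = 220 Hz.
m = harmonic_mask(1025, 44100, 2048, 220.0)
```

Applying such a mask per frame (with a frame-varying F0) to the mixture spectrum retains mainly the voiced partials; the actual method additionally corrects distorted harmonics to preserve the inter-channel intensity difference.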
ENRIC GUAUS - Escola Superior de Música de Catalunya: Research at ESMUC
The Escola Superior de Música de Catalunya (ESMUC) is a relatively young conservatory, located in Barcelona, that aspires to be a reference point on the international music scene. To that end, one of its key goals is research, as the culmination of the rollout of its undergraduate and graduate study programmes. This presentation describes the academic environment in which this research is carried out, its main lines, and two examples of its most recent results.
1.- Welcome to the SITEMU Meeting
With great pleasure, Manuel Rosa (UAH) and I, Paco Gomez (UPM), are organizing the SITEMU Meeting in Madrid. The purpose of this meeting is to bring together researchers, professionals and students of Music Technology from Spain and Portugal to share knowledge about the field, learn about other research groups and their accomplishments, and search for potential collaborations. We all strive to build a tight, competitive, friendly community of Music Technology practitioners, and holding this meeting will be an excellent means of achieving that goal.
The meeting has no registration fee. Lunches and coffees are kindly paid by the Escuela Universitaria de Informática. However, for organization purposes, we ask you to register. Please, fill out the registration form.
3.- Important Dates
Important dates are the following.
- Registration deadline: November 20th 2010.
- Deadline for abstract submission (group/project presentations): December 1st, 2010.
- SITEMU Meeting: December 16th-17th, 2010.
4.- Information for Authors
- Where to Submit: Please, submit your abstracts through the submission form.
- Papers: Since the goal of this meeting is to get to know other groups' research, please send a short abstract, of no more than 200 words, describing the work you will present.
- Language: Abstracts are accepted in Spanish, English and Portuguese. If an abstract is sent in either Spanish or Portuguese, then an English version should also be sent.
5.- Information for Participants
- In order to participate, all you have to do is register (registration form here).
- If you intend to present an abstract, please, follow the instructions given in the Information for Authors section above.
- The meeting will start on the 16th in the morning, and it will finish on the 17th in the afternoon.
- Sessions: there will be two morning sessions plus an afternoon session. There will be a plenary talk by Josep Lluis Arcos, from IIIA-CSIC, on Thursday at 11:30. The rest of the time will be devoted to presentation sessions of 20 minutes each.
- The schedule (UPDATED: DEC-15th, 11:05 a.m.) will be as follows.
THURSDAY 16th
- 11:00-11:30: Welcome speech by the Dean
- Josep Lluis Arcos (IIIA-CSIC)
- José Manuel Iniesta (U. of Alicante)
- Darrell Conklin (U. País Vasco)
- Lorenzo J. Tardón (U. of Málaga)
- Norberto Degara (U. of Vigo)
- 16:15-17:30: Round Table
FRIDAY 17th
- 9:00-9:20: Luís G. Martins
- Rui Pedro Paiva (U. of Coimbra)
- Pedro Vera Candeas (U. of Jaén)
- 11:00-11:30: Coffee Break
- 11:30-13:30: SITEMU Meeting
- 13:30-15:00: Lunch and farewell
- Presentations: one plenary talk, several presentation sessions, and a round-table session. In addition, a working meeting of the SITEMU network itself will be held.
- The topic for the round table will be Graduate and Post-graduate Studies in Music Technology in Spain and Portugal.
7.- Where? At the ESCUELA UNIVERSITARIA DE INFORMÁTICA
- This meeting will take place at the Escuela Universitaria de Informática (EUI). To learn more about this school, visit its web page here.
- The meeting itself will be held in the Sala de Grados, located on the second floor of Block III. There will be signs showing you the way.
How to get to EUI?
- EUI is located on Carretera de Valencia, km 7. Zip code: 28031, Madrid. GPS: 40° 23' 22.92" N, 3° 37' 40.42" W.
- EUI belongs to Campus Sur, a large UPM campus where you can also find the Surveying School (Escuela de Topografía), the School of Fashion (Escuela de Moda) and the Telecommunication Engineering School (Escuela de Ingenieros de Telecomunicación).
- Here you have a map showing the main access routes.
- Getting there by road: EUI is located South-East of Madrid. There are several ways to get there.
- If you come through the beltway M-40, take exit Avenida del Mediterráneo and follow the instructions on the signs (they read Campus Sur - UPM).
- If you come through the beltway M-30, take exit to Conde de Casal. From that point take highway N-III (Avenida del Mediterráneo). At km. 7 take the turning to the EUI as indicated by the signs.
- Getting there by bus: There are several buses serving EUI. The following are the most convenient.
- Bus E: It is a shuttle service from Plaza Conde de Casal straight to Campus Sur.
- Bus 63: It goes from Felipe II square to Santa Eugenia. It stops at Plaza Conde de Casal.
- Bus 145: It goes from Plaza Conde de Casal to Santa Eugenia. It does not enter Campus Sur, but it stops near a gas station; from there, take a short trail down to Campus Sur.
- By metro: The corresponding stop is Sierra de Guadalupe, line 1 (the blue line). Then, walk 10 minutes on Calle de la Arboleda and you will reach EUI. To check out the metro map, click here.
- By train: The two main train stations are Atocha Station and Chamartin Station. From any of them you can catch local trains to EUI. The main local trains are C-7a, C-2 or C-1. Get off at Vallecas stop. From the station there is a 10-minute walk to EUI through Calle de la Arboleda. The train station is next to the metro station.
8.- Recommended Hotels
Unfortunately, EUI is not very well located in terms of hotels. Our recommendation is to get accommodation at the Claridge Hotel, located at Plaza de Conde de Casal. From there you can catch bus E, the shuttle bus, and get to EUI in 10-15 minutes, which is quite convenient. Also, there is a metro station, Pacífico, very nearby, which takes you directly downtown (line 1, the blue line; in 5 stops, with no transfers, you are at Puerta del Sol). Below we list other options that may also be convenient.
- Claridge Hotel Booking Service.
- Hotel Agumar, on Paseo Reina Cristina. It is an option for those who want to be near downtown and near the line taking you to EUI.
- For more accommodation options, please check the web page Tourist Guide.
Should you have any doubts about a hotel, please, let us know (write an e-mail to Paco: fmartin at eui dot upm dot es).
9.- Meeting Dinner
For the meeting dinner, we will wait until the number of participants is determined. Options are countless.
For those who haven't been to Madrid yet, we leave here a few links worth following.
- EsMadrid.com: a portal in English with a great deal of information about the endless cultural and night life of Madrid.
- Prado Museum.
- Museo Thyssen-Bornemisza.
- Queen Sofia Museum of Contemporary Art.
- If you are interested in walking tours in Madrid, click here. We recommend that you take the Madrid of the Bourbons tour.
- Check out Tourist Guide; it is full of useful information.
- Madrid Metro web page.
- Madrid Bus Service web page (only in Spanish).
This meeting is financially supported by the Dean's Office of the EUI and the Applied Mathematics Department of the EUI.