Keynotes


Amílcar Cardoso (University of Coimbra, Portugal) 
Monday 25th September, 9:20 – 10:20
Salão Nobre

Autonomous composition as search in a conceptual space: a Computational Creativity view.
Computational Creativity (CC) is an emerging field that brings together academics and practitioners from diverse disciplines, genres and modalities, to study and exploit the potential of computers to act as autonomous creators and co-creators in their own right. As a scientific endeavor, CC proposes that computational modeling can yield important insights into the fundamental creative capabilities of both humans and machines. As an engineering endeavor, CC claims that it is possible to construct autonomous systems that, by taking on particular responsibilities, produce novel and useful outputs that deserve the label “creative”.
This talk will introduce some basic concepts and terminology of the area, as well as abstract models for characterising different modes of creativity. We will illustrate how these concepts have recently been applied in the development of creative systems, particularly in the music field. Special attention will be paid to computational models and techniques for Conceptual Blending, which have been a focus of our research in recent years. With this talk, we hope to facilitate communication between the CMMR and CC communities and to foster synergies between them.

Biography
F. Amílcar Cardoso is a Lecturer at the Department of Informatics Engineering of the University of Coimbra, where he teaches Artificial Intelligence, Computational Creativity, Programming for Design, Sound Design and other topics. He is a member of the Cognitive and Media Systems Group of CISUC, a team that performs research on artificial intelligence, computational design, data visualization and analysis, and other topics. He is also Vice-President of Instituto Pedro Nunes, the technology transfer unit of the University of Coimbra.
He developed pioneering work on Computational Creativity in the 90s and has since played an active role in the area. In recent years his research has focused mostly on computational models of Conceptual Blending. His current research interests also include bio-inspired approaches to visual and auditory expression, data sonification, and interactive environments for sound and image.
In recent years he has been involved in two EU projects on Computational Creativity, PROSECCO and ConCreTe. He was the General Chair of the International Conference on Computational Creativity held in Paris, France, in June 2016. He is co-editor of the forthcoming book “Computational Creativity – The Philosophy and Engineering of Autonomously Creative Systems”, to be published by Springer in 2017.



Margaret Schedel (Stony Brook University, USA) 
Tuesday 26th September, 9:20 – 10:20
Salão Nobre

Inscribing Bodies
Programming computers to recognize human gesture, designing prosthetics to augment human potential, building automatons to simulate human behavior, creating tools of transcription to record human kinetics, generating graphical methods to analyze human behavior, and theorizing the politics of expressive human motion are just some of the ways in which we can notate the body. As the disciplines of computer science, media studies, and the fine arts become more open to the study of works, motions, and problems whose conceptual and material conditions challenge categorization, new questions arise that complicate traditional modes of historiography and analysis. Technology is a broad term encompassing tools, machines, techniques, crafts, and organizational systems, while notation is a distillation and selection of information. Available tools and technologies affect how we can reduce the data we gather and record what is important. In this keynote, I will focus on methods of transcribing the functions and activities of gesture, with a specific focus on embodiment, or how the interrelated roles of environment and the body shape mental process and experience. Using my own custom open-source 3D-printed sensors, I will demonstrate the complexities of tracking even a single point over time, and distinguish between casual gesture and choreographed motion.
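To make concrete why tracking even a single point over time is difficult, here is a toy sketch in Python. It is entirely illustrative and not drawn from Schedel's sensors or talk: the one-dimensional sensor model, noise level, and smoothing constant are all invented. The point it demonstrates is general: raw sensor readings jitter, so the apparent path length of the raw stream greatly exceeds that of a smoothed estimate, and any gesture analysis depends on such filtering choices.

```python
# Illustrative only: why tracking a single 1-D point over time is
# non-trivial. Raw readings are noisy, so some smoothing (here, an
# exponential moving average) is needed before any gesture analysis.
# The simulated sensor and all constants are assumptions.

import random

def smooth(readings, alpha=0.2):
    """Exponential moving average over a stream of 1-D positions."""
    estimate = readings[0]
    out = []
    for r in readings:
        estimate = alpha * r + (1 - alpha) * estimate
        out.append(estimate)
    return out

# Simulated sensor: a slow, deliberate motion buried in jitter.
true_path = [t / 50.0 for t in range(50)]
raw = [p + random.gauss(0.0, 0.15) for p in true_path]
smoothed = smooth(raw)

# Naive per-sample motion computed from raw data wildly overestimates
# how far the point actually travelled.
raw_length = sum(abs(b - a) for a, b in zip(raw, raw[1:]))
smooth_length = sum(abs(b - a) for a, b in zip(smoothed, smoothed[1:]))
print(f"apparent path length, raw: {raw_length:.2f}, smoothed: {smooth_length:.2f}")
```

Even in this minimal setting, the choice of smoothing constant trades responsiveness against noise rejection, which is one reason distinguishing casual gesture from choreographed motion is a modelling problem rather than a measurement detail.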

Biography
Margaret Schedel is a composer and cellist specializing in the creation and performance of ferociously interactive media whose works have been performed throughout the United States and abroad. While working towards a DMA in music composition at the University of Cincinnati College-Conservatory of Music, her interactive multimedia opera, A King Listens, premiered at the Cincinnati Contemporary Arts Center and was profiled by apple.com. She holds a certificate in Deep Listening with Pauline Oliveros and has studied composition with Mara Helmuth, Cort Lippe and McGregor Boyle.

She is a joint author of Electronic Music and recently edited an issue of Organised Sound on the aesthetics of sonification. Her work has been supported by the Presser Foundation, Centro Mexicano para la Música y las Artes Sonoras, and Meet the Composer. She has been commissioned by the Princeton Laptop Orchestra and the percussion ensemble Ictus. In 2009 she won the first Ruth Anderson Prize for her interactive installation Twenty Love Songs and a Song of Despair. Her research focuses on gesture in music, the sustainability of technology in art, and the sonification of data.

She sits on the boards of 60×60 and the International Computer Music Association, and is a regional editor for Organised Sound. From 2009 to 2014 she helped run Devotion, a Williamsburg gallery focused on the intersection of art, science, new media, and design. In 2010 she co-chaired the International Computer Music Conference, and in 2011 she co-chaired the Electro-Acoustic Music Studies Network Conference. She ran SUNY’s first Coursera Massive Open Online Course (MOOC) in 2013. As an Associate Professor of Music at Stony Brook University, she serves as Co-Director of Computer Music and is the Director of cDACT, the consortium for digital art, culture and technology.

A piece by Margaret Schedel will be performed at CMMR 2017!



Peter Vuust (Aarhus University, Denmark)
Wednesday 27th September, 9:20 – 10:20
Salão Nobre

Groove on the brain: Rhythmic complexity and predictive coding
Musical rhythm has a remarkable capacity to move our minds and bodies. In this talk I will describe how the theory of predictive coding can be used as a framework for understanding how rhythm and rhythmic complexity are processed in the brain. This theory posits a hierarchical organisation of brain responses reflecting fundamental, survival-related mechanisms associated with predicting future events. Overall, I will show how musical rhythm exploits the brain’s general principles of prediction, and argue that the pleasure and the desire for sensorimotor synchronisation evoked by musical rhythm could be a result of such mechanisms.

Biography
Peter Vuust is a unique combination of a jazz musician and a world-class scientist. As a researcher, he is Denmark’s leading expert in the field of music and the brain – a research field he has single-handedly built up as leader of the group Music In the Brain. He is internationally recognised and widely quoted, and in October 2014 he received the Danish National Research Foundation’s centre grant of DKK 52 million to found the Center for Music In the Brain. As a composer and bass player he has collaborated with a variety of artists, from Danish pop stars to some of the world’s major international jazz artists, and in November 2014 he was nominated for a Danish Music Award for “Best Danish vocal jazz album” with his own quartet and Veronica Mortensen.

Since 2007, Peter Vuust has led the multidisciplinary research group Music In the Brain, which aims to understand the neural processing of music using a combination of advanced music theory, behavioral experiments, and state-of-the-art brain scanning methods. This research has the potential to significantly influence the way we play, teach, and use music clinically, and to advance our understanding of human brain function in general. Owing to the Danish National Research Foundation’s centre grant, the group has now grown into the Center for Music In the Brain, consisting of PhD students, post-docs and a wide international network of collaborators who engage through weekly meetings, workshops and international symposia.



Carlos Guedes (New York University Abu Dhabi, United Arab Emirates)
Thursday 28th September, 9:20 – 10:20
Salão Nobre

Composing and improvising. In real time
Computers have undeniably and dramatically changed the way we make music. One of the most impactful aspects of this change is the ability to modify the behavior of musical algorithms while they execute in real time. Interactive music systems changed the field of computer music as well as the overall relation between humans and computers in music making. Starting out as a machine capable of creating all imaginable sounds, recording and sequencing audio, or even notating music, the computer has progressively become more of a musical partner with the appearance of applications that behave like artificial musicians. The fact that one can connect sensors to control these applications and create devices that resemble musical instruments calls into question what one is doing while operating these devices: is one performing a musical instrument, is one improvising with a partner, or is one composing in real time? This question is certainly not new.

I define real-time composition (RTC) as a compositional practice using interactive music systems in which generative algorithms with non-deterministic behavior are manipulated by a user during performance. Nowadays, there are myriad situations in which one can engage in RTC as defined above: (1) through software applications for smartphones or portable game consoles employing generative music algorithms whose behavior is controllable by users; (2) through sequencing software that allows non-linear sequencing and its control in real time (e.g. Ableton Live); and (3) through generative music modules in commercial sequencing software that allow the control of music by specifying certain high-level parameters (e.g. Logic’s Drummer). But the question remains: are we improvising or composing when operating/performing these systems?
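As a rough illustration of this definition, and not a description of any system discussed in the talk, the core of an RTC system can be reduced to a non-deterministic generative loop whose high-level parameters a performer changes while it runs. In the Python sketch below, all names, the pitch material, and the two performer-facing controls (density and register) are invented for the example:

```python
# Hypothetical sketch of the core loop of a real-time composition (RTC)
# system as defined above: a generative algorithm with non-deterministic
# behavior whose high-level parameters a user manipulates during
# performance. All names and musical choices here are illustrative.

import random
import threading
import time

class GenerativeVoice:
    """Emits note events stochastically; 'density' and 'register'
    are the performer-facing controls."""

    def __init__(self, scale=(0, 2, 4, 7, 9)):  # pentatonic pitch classes
        self.scale = scale
        self.density = 0.5   # probability of sounding a note on each tick
        self.register = 60   # MIDI base note (middle C)
        self._running = False

    def run(self, tick=0.25, ticks=16):
        self._running = True
        for _ in range(ticks):
            if not self._running:
                break
            if random.random() < self.density:  # non-deterministic choice
                pitch = self.register + random.choice(self.scale)
                print(f"note_on pitch={pitch} vel={random.randint(60, 100)}")
            time.sleep(tick)

    def stop(self):
        self._running = False

voice = GenerativeVoice()
player = threading.Thread(target=voice.run)
player.start()

time.sleep(1.0)
voice.density = 0.9   # the "performance": steering the algorithm live
voice.register = 72
player.join()
```

The design choice that matters here is the division of responsibility: the algorithm decides which notes occur, while the performer only steers probabilities and ranges, which is precisely what makes the improvising-versus-composing question posed above difficult to settle.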

In this talk, I will show how some of my work relates to this increasingly common practice. I will present the anatomy of a real-time composition system and discuss how these types of systems can offer interesting approaches to composition with electronic music and to music education and enculturation. By dissecting further what constitutes improvisation and composition in real time, I will establish a relationship between these two important aspects of contemporary music performance and consider how (or whether) this paradigm can be transported to live performance situations without computers.

Biography
Carlos Guedes has a multifaceted career in composition and sound design, with numerous commissioned projects for dance, theatrical performance, film and interactive installations, in addition to conventional concert music. His creative work has been presented around the world in several shapes and forms. Carlos Guedes holds a PhD (2005) and an MA (1996) in composition from NYU and a BM (1993) from ESMAE. He lived for three years in the Netherlands, where he attended the Institute of Sonology in The Hague between 2001 and 2002.

Since 2007, he has pursued research on generative music systems through different projects at the Sound and Music Computing Group, a research group he co-founded at INESC Technology and Science (formerly INESC Porto). Since joining New York University Abu Dhabi, Carlos Guedes has been working on the projects “Cross-disciplinary and multicultural perspectives on musical rhythm,” “Creation and Analysis of a Digital Repository of Middle Eastern Music,” and “Sounds from Sir Bani Yas Island.” Carlos Guedes is currently Associate Arts Professor of Music at New York University Abu Dhabi.

Carlos Guedes will premiere a new piece at CMMR 2017!