The main objective of this research stage, hosted at NYU Steinhardt (New York University), is to investigate new models of interaction between acoustic instruments and live-electronics manipulation. Accordingly, we will examine the changes taking shape in the relationships among composer, performer, performance and audience. To this end, the following questions will be considered: How is technology transforming the aesthetic designs of contemporary music creation? How is technology transforming the modalities of performance? Which new tools are digital technologies providing for composers and performers? Which new paradigms have arisen from the latest technological developments? The expected results consist of compositions, performances, academic writings and an original Max/MSP-based software system for live-electronics composition and performance. The software system will be built on the research carried out at NYU Steinhardt, and its main theoretical foundations will come from Robert Rowe, Norbert Schnell and Marc Battier. For Schnell and Battier, the development of digital systems for composing and performing must take the following statements into account: (1) the computer system must be considered a composed instrument, with functions ranging from enabling the composer/performer to explore personal and original paths through to controlling complex algorithmic computation; (2) the computer system must be seen as a set of processing modules passing streams of information among themselves, and the processes performed by these modules can be divided into four categories: transformation, analysis, synthesis and memorization; (3) the range of interaction between performer and system extends from pre-composed music triggered by the performer to music created and controlled by the performer.
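Schnell and Battier's second statement can be pictured as a chain of processing modules passing a stream of information. The following minimal sketch is purely illustrative (it is not the project's software, and every class and function name in it is hypothetical); it shows one toy module per category: transformation, analysis, synthesis and memorization.

```python
# Illustrative sketch of Schnell and Battier's processing-module view:
# modules consume a stream and yield a stream; a chain connects them.
# All names here are hypothetical, chosen only for this example.
import math
from collections import deque

class Module:
    """A processing module: consumes a stream, yields a stream."""
    def process(self, stream):
        raise NotImplementedError

class Analysis(Module):
    """Analysis: extract a feature (here, instantaneous amplitude)."""
    def process(self, stream):
        return [abs(x) for x in stream]

class Transformation(Module):
    """Transformation: alter the stream (here, simple gain scaling)."""
    def __init__(self, gain):
        self.gain = gain
    def process(self, stream):
        return [x * self.gain for x in stream]

class Synthesis(Module):
    """Synthesis: map control values to waveform samples (toy mapping)."""
    def process(self, stream):
        return [math.sin(x) for x in stream]

class Memorization(Module):
    """Memorization: record the stream for later recall, pass it through."""
    def __init__(self, size=1024):
        self.buffer = deque(maxlen=size)
    def process(self, stream):
        self.buffer.extend(stream)
        return list(stream)

def chain(modules, stream):
    """Pass a stream of information through a sequence of modules."""
    for module in modules:
        stream = module.process(stream)
    return stream
```

For instance, `chain([Analysis(), Transformation(0.5), Memorization()], [1, -2, 3])` yields `[0.5, 1.0, 1.5]` while leaving a copy of the result in the memorization buffer.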
The researcher Robert Rowe has similarly proposed a combination of three dimensions for classifying interactive systems. The first dimension distinguishes performance-driven programs from score-driven programs. The second concerns the methods used to generate new sounds, dividing systems into those using transformative, generative or sequenced methods. The third embraces the concepts of instrument-paradigm systems and player-paradigm systems: the former are concerned with constructing an extended instrument, the latter with constructing an artificial player. From Rowe we will also take into account his tripartite model of interactive systems, according to which three stages are involved in any interactive system chain: a sensing stage, a processing stage and a response stage. To the extent that our investigation encompasses research areas such as Music Information Retrieval (MIR), machine listening, human-machine interaction, sound processing and diffusion, and music cognition, the research will count on exchanges with the following MARL research groups: (a) the Computer Music and Interactive Performance Systems Research Group, in particular its investigations into ecosystemic interactive systems using the concepts of Finite State Machine (FSM), Finite State Automaton (FSA) and Finite State Transducer (FST); (b) Immersive Audio, in particular its investigations of a multi-touch controller for real-time acoustic gestures and of center-surround listening using multichannel systems; (c) Music and Informatics, in particular its investigations of signal processing. The second, complementary objective consists of developing compositions and performances making use of the Max/MSP system. The compositions will be developed in collaboration with the NYU New Music Ensemble, coordinated by Prof. Dr. Esther Lamneck. The research will be carried out under the supervision of Prof. Robert Rowe, Ph.D., Associate Dean of Research at NYU Steinhardt.
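Rowe's tripartite model can be sketched as three chained stages. The toy example below is only an illustration under assumed behavior (it is not Rowe's software, and all function names are hypothetical): sensing captures and cleans performer input, processing applies a transformative method, and response renders output.

```python
# Illustrative sketch of Rowe's tripartite interactive-system chain:
# sensing stage -> processing stage -> response stage.
# All names are hypothetical, chosen only for this example.

def sensing(raw_events):
    """Sensing stage: capture performer input, discarding empty events."""
    return [e for e in raw_events if e is not None]

def processing(pitches):
    """Processing stage: a transformative method (transpose up a fifth)."""
    return [p + 7 for p in pitches]

def response(pitches):
    """Response stage: render the processed material as output messages."""
    return [f"play {p}" for p in pitches]

def interactive_system(raw_events):
    """Chain the three stages, as in Rowe's tripartite model."""
    return response(processing(sensing(raw_events)))
```

For example, feeding the MIDI-style pitches `[60, None, 64]` through the chain produces the messages `["play 67", "play 71"]`.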