I recently gave two talks, one for the PhDs based in the Electronic Music Studios, and another for the PhDs in Arts and Computational Technology. I received some very valuable feedback, and having to present what I've been working on in a reasonably coherent form was itself a useful exercise. The talk abstract (which is very abstract) is posted below, with a few references listed. Please feel free to comment and open a discussion, or post any references that may be of interest.
An augmented sonic reality aims to register digital sound content with an existing physical space. Perceptual mappings between an agent in such an environment and the augmented content should be both continuous and effective; that is, any affective augmentation should take the agent's intentions into account. How can an embedded intelligence such as an iPhone, drawing on detailed sensor information from its microphone, accelerometer, gyroscope, and GPS, infer the behaviors of its user in order to create affective, realistic, and perceivable augmented sonic realities tied to that user's situated experiences? Further, what can this augmented domain reveal about our own ongoing sensory experience of our sonic environment?
Keywords: augmented, reality, sonic, enactive, perception, memory, behavior, sensors, gesture, embodied, situated, acoustic, ecology, liminality
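To make the sensor-to-augmentation idea above a little more concrete, here is a minimal sketch (not the actual system from the talk, and every name and threshold in it is an illustrative assumption): it infers a coarse user state from the variance of the accelerometer magnitude, then maps that state to a gain for a hypothetical virtual sound layer, ducking the augmentation while the user is walking so the real soundscape stays audible.

```python
# Hedged sketch: coarse behavior inference from 3-axis accelerometer
# samples, driving one parameter of a hypothetical sonic augmentation.
# The variance threshold is an illustrative guess, not a calibrated value.

import math

def accel_magnitude(sample):
    """Euclidean norm of one (x, y, z) accelerometer reading, in g."""
    return math.sqrt(sum(a * a for a in sample))

def infer_state(samples, threshold=0.5):
    """Classify activity from the variance of acceleration magnitude.

    Near-zero variance suggests the device is at rest; larger variance
    suggests the user is walking.
    """
    mags = [accel_magnitude(s) for s in samples]
    mean = sum(mags) / len(mags)
    variance = sum((m - mean) ** 2 for m in mags) / len(mags)
    return "walking" if variance > threshold else "stationary"

def augmentation_gain(state):
    """Map the inferred state to a gain for the virtual sound layer:
    quieter while walking, full level while stationary."""
    return 0.4 if state == "walking" else 1.0

# Synthetic readings: at rest, gravity dominates one axis (~1 g);
# walking adds an alternating component on top of it.
at_rest = [(0.0, 0.0, 1.0)] * 20
walking = [(0.0, 0.0, 1.0 + (0.8 if i % 2 else -0.8)) for i in range(20)]

print(infer_state(at_rest), augmentation_gain(infer_state(at_rest)))
print(infer_state(walking), augmentation_gain(infer_state(walking)))
```

A real deployment would of course fuse several sensors (GPS for place, gyroscope for orientation, microphone for the acoustic scene) and use a learned model rather than a single threshold, but the shape of the mapping — sensor stream, inferred behavior, augmentation parameter — is the same.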
References:

J.-F. Augoyard and H. Torgue, "Sonic Experience," McGill-Queen's University Press, 2005.
E. Corteel, "Synthesis of directional sources using Wave Field Synthesis, possibilities and limitations," EURASIP Journal on Advances in Signal Processing, special issue on Spatial Sound and Virtual Acoustics, January 2007.

G. Lemaitre, O. Houix, Y. Visell, K. Franinovic, N. Misdariis, P. Susini, "Toward the Design and Evaluation of Continuous Sound in Tangible Interfaces: The Spinotron," International Journal of Human-Computer Studies, vol. 67, 2009.

K. Nguyen, C. Suied, I. Viaud-Delmon, O. Warusfel, "Spatial audition in a static virtual environment: the role of auditory-visual interaction," Journal of Virtual Reality and Broadcasting, 2009.

M. Noisternig, B. Katz, S. Siltanen, L. Savioja, "Framework for Real-Time Auralization in Architectural Acoustics," Acta Acustica united with Acustica, vol. 94, no. 6, November 2008.

R. Murray Schafer, "The Soundscape," Destiny Books, 1977.

J. Tardieu, P. Susini, F. Poisson, P. Lazareff, S. McAdams, "Perceptual study of soundscapes in train stations," Applied Acoustics, vol. 69, no. 12, December 2008.

D. Arfib, J.-M. Couturier, L. Kessous, V. Verfaille, "Strategies of mapping between gesture data and synthesis model parameters using perceptual spaces," Organised Sound, vol. 7, no. 2, pp. 127-144, 2002.

D. Arfib, J.-M. Couturier, L. Kessous, "Expressiveness and digital musical instrument design," Journal of New Music Research, vol. 34, no. 1, pp. 125-136, 2005.

N. d'Alessandro, O. Babacan, B. Bozkurt, T. Dubuisson, A. Holzapfel, L. Kessous, A. Moinet, M. V. Lieghe, "RAMCESS 2.X framework: expressive voice analysis for realtime and accurate synthesis of singing," Journal on Multimodal User Interfaces, vol. 2, no. 2, pp. 133-144, September 2008.

L. Kessous, G. Castellano, G. Caridakis, "Multimodal emotion recognition in speech-based interaction using facial expression, body gesture and acoustic analysis," Journal on Multimodal User Interfaces, vol. 3, no. 1, pp. 33-48, 2009.

G. Caridakis, K. Karpouzis, M. Wallace, L. Kessous, N. Amir, "Multimodal user's affective state analysis in naturalistic interaction," Journal on Multimodal User Interfaces, vol. 3, no. 1, pp. 49-66, 2009.

A. Batliner, S. Steidl, B. Schuller, D. Seppi, T. Vogt, J. Wagner, L. Devillers, L. Vidrascu, N. Amir, L. Kessous, V. Aharonson, "Whodunnit: Searching for the Most Important Feature Types Signalling Emotion-Related User States in Speech," Computer Speech and Language, vol. 25, no. 1, pp. 4-28, 2011.