Parag Kumar Mital
Parag K. MITAL (US) is an artist and interdisciplinary researcher obsessed with the nature of information, representation, and attention. Using film, eye-tracking, EEG, and fMRI recordings, he has worked on computational models of audiovisual perception from the perspective of both robots and humans, often revealing the disjunction between the two through generative film experiences, augmented reality hallucinations, and expressive control of large audiovisual corpora. Through this process, he balances his scientific and arts practices, with each reflecting on the other: the science driving the theories, and the artwork re-defining the questions asked within the research. His work has been exhibited internationally, including at the Prix Ars Electronica, ACM Multimedia, the Victoria & Albert Museum, London’s Science Museum, the Oberhausen Short Film Festival, and the British Film Institute, and featured in Fast Company, the BBC, The New York Times, CreativeApplications.Net, and CreateDigitalMotion.
Feel free to contact me.
Director of Machine Intelligence (2016-current) Kadenze, Valencia, CA, U.S.A.
Applied machine learning and deep learning.
Python; Theano; TensorFlow; Lasagne; Keras; Deep Learning; Computer Vision; Signal Processing; OpenCV; Amazon EC2, S3.
Artist Fellow (2016-current) CalArts, Valencia, CA, U.S.A.
Mentorship and development of arts practice with BFA, MFA students in the Music Technology Program.
Previous Work Experience
Senior Research Scientist (2015) Firef.ly Experience Ltd., London, U.K.
Machine learning and signal processing of user behavior and activity patterns from GPS and smartphone motion data.
MongoDB cluster computing; Mapbox; Python; Objective-C; Swift; Machine learning; Mobile signal processing.
Visiting Researcher (2015) Mixed Reality Lab, University of Southern California, Los Angeles, CA, U.S.A.
Augmented reality; Unity 5; Procedural audiovisual synthesis.
Post-Doctoral Research Associate (2014-2015) at Dartmouth College, Hanover, NH
Explored feature learning in audiovisual data, fMRI coding of musical and audiovisual stimuli during experienced and imagined settings, and sound and image synthesis techniques. Designed experiments for fMRI and behavioral data, collected data using 3T fMRI and PsychoPy, wrote custom pre-processing pipelines with AFNI/SUMA/Freesurfer on the Dartmouth Discovery supercomputing cluster, and developed univariate and multivariate analyses, including hyperalignment measures, using PyMVPA. Principal Investigator: Michael Casey
Research Assistant (2011) London Knowledge Lab, Institute of Education, London, U.K.
ECHOES is a technology-enhanced learning environment where 5-to-7-year-old children on the Autism Spectrum and their typically developing peers can explore and improve social and communicative skills through interacting and collaborating with virtual characters (agents) and digital objects. ECHOES provides developmentally appropriate goals and methods of intervention that are meaningful to the individual child, and prioritises communicative skills such as joint attention. Wrote custom computer vision code for calibrating behavioral measures of attention within a large format touchscreen television. Funded by the EPSRC. Principal Investigators: Oliver Lemon and Kaska Porayska-Pomsta
Research Assistant (2008-2010) John M. Henderson’s Visual Cognition Lab, University of Edinburgh
Investigating dynamic scene perception through computational models of eye-movements, low-level static and temporal visual features, film composition, and object and scene semantics. Wrote custom code for processing large corpus of audiovisual data, correlating the data with behavioral measures from a large collection of human subject eye-movements, and applied pattern recognition and signal processing techniques to infer the contribution of auditory and visual features and their interaction within different tasks and film editing styles. The DIEM Project. Funded by the Leverhulme Trust and ESRC. Principal Investigator: John M. Henderson
Ph.D. (2014) Arts and Computational Technologies, Goldsmiths, University of London.
Thesis: Computational Audiovisual Scene Synthesis
This thesis attempts to open a dialogue around fundamental questions of perception such as: how do we represent our ongoing auditory or visual perception of the world using our brain; what could these representations explain and not explain; and how can these representations eventually be modeled by computers?
M.Sc. (2008) Artificial Intelligence: Intelligent Robotics, University of Edinburgh
B.Sc. (2007) Computer and Information Sciences, University of Delaware
Christian Frisson, Nicolas Riche, Antoine Coutrot, Charles-Alexandre Delestage, Stéphane Dupont, Onur Ferhat, Nathalie Guyader, Sidi Ahmed Mahmoudi, Matei Mancas, Parag K Mital, Alicia Prieto Echániz, François Rocca, Alexis Rochette, Willy Yvart. Auracle: how are salient cues situated in audiovisual content? eNTERFACE 2014, Bilbao, Spain, June 9 – July 4, 2014.
Parag K. Mital, Jessica Thompson, Michael Casey. How Humans Hear and Imagine Musical Scales: Decoding Absolute and Relative Pitch with fMRI. CCN 2014, Dartmouth College, Hanover, NH, USA, August 25-26, 2014.
Tim J. Smith, Sam Wass, Tessa Dekker, Parag K. Mital, Irati Rodriguez, Annette Karmiloff-Smith. Optimising signal-to-noise ratios in Tots TV can create adult-like viewing behaviour in infants. 2014 International Conference on Infant Studies, Berlin, Germany, July 3-5 2014.
Tim J. Smith, Parag K. Mital. Attentional synchrony and the influence of viewing task on gaze behaviour in static and dynamic scenes. Journal of Vision, vol. 13 no. 8 article 16, July 17, 2013.
Tim J. Smith, Parag K. Mital. “Watching the world go by: Attentional prioritization of social motion during dynamic scene viewing”. Journal of Vision, vol. 11 no. 11 article 478, September 23, 2011.
Melissa L. Vo, Tim J. Smith, Parag K. Mital, John M. Henderson. “Do the Eyes Really Have it? Dynamic Allocation of Attention when Viewing Moving Faces”. Journal of Vision, vol. 12 no. 13 article 3, December 3, 2012.
C.A.R.P.E. | Jan 2015
C.A.R.P.E. is the computational and algorithmic representation and processing of eye-movements. The original C.A.R.P.E. was built in 2008 with The DIEM Project. This software is a rewrite from the ground up, capable of visualizing a large range of eye-movements together with clustering, heatmaps across multiple conditions and subject pools, and video and audio analysis.
YouTube Smash Up
YouTube Smash Up attempts to generatively produce viral content using video material from the Top 10 most-viewed videos on YouTube. Each week, the #1 video of the week is resynthesized using a computational algorithm matching its sonic and visual contents to material coming only from the remaining Top 10 videos. This other material is then re-assembled to look and sound like the #1 video. The process does not copy the file, but synthesizes it as a collage of fragments segmented from entirely different material.
The Simpsons vs. Family Guy | Sep 2011
I have developed a method for resynthesizing existing videos using material from any other video(s). This process starts by learning a database of objects that appear in the set of videos to synthesize from. The target video to resynthesize is then broken into objects in the same manner, and each object is matched to its closest counterpart in the database.
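The matching step can be sketched as nearest-neighbour lookup over a database of image fragments; this patch-based simplification stands in for the object-level segmentation and matching described above, and all names here are illustrative:

```python
import numpy as np

def extract_patches(frame, size=8):
    """Split a grayscale frame into non-overlapping size x size patches."""
    h, w = frame.shape
    patches, coords = [], []
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            patches.append(frame[y:y+size, x:x+size].ravel())
            coords.append((y, x))
    return np.array(patches), coords

def resynthesize(target, database_frames, size=8):
    """Rebuild `target` using only the closest patches drawn from other frames."""
    db_patches = np.vstack([extract_patches(f, size)[0] for f in database_frames])
    tgt_patches, coords = extract_patches(target, size)
    out = np.zeros_like(target)
    for patch, (y, x) in zip(tgt_patches, coords):
        # Euclidean nearest neighbour in the patch database
        best = db_patches[np.argmin(((db_patches - patch) ** 2).sum(axis=1))]
        out[y:y+size, x:x+size] = best.reshape(size, size)
    return out
```

The output looks like the target while containing only pixels from the database, which is the essence of the collage-style resynthesis.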
Harry Smith vs. Pink Elephants | Dec 2011
A perceptual model based on proto-objects is presented as a visual reconstruction of Harry Smith’s Early Abstractions. I train the model on a scene from Dumbo, Pink Elephants, asking it to interpret Harry Smith while having only knowledge of Dumbo. The reconstruction is surprisingly able to capture a wide variety of the abstract images and movements in Harry Smith, as well as after-images. This model is an early prototype of my PhD work on visual resynthesis.
Infected Puppets | Nov 2011
Part of the SURFACES exhibition at BAR1 in Bangalore, India, this piece organizes the thoughts and speech of numerous Indian politicians using a microphone input. Participants are invited to use a microphone in front of a 3-channel audiovisual installation where the patterns of sound coming into the microphone are matched to a large database of different speeches by Indian politicians. The resulting cut-up fragmented narration by the different politicians feels like an infected synthesis of promises, lies, and puppetry. Made in collaboration with Prayas Abhinav and 9 students from the CEMA course at the Srishti School of Art and Design, Bangalore, India.
Future Echoes | Nov 2011
Part of the SURFACES exhibition at BAR1 in Bangalore, India, participants enter an ambisonic audio environment where they each become a character in a post-apocalyptic cyber-punk tale of a one-dimensional fate. As they enter the 3x3x3m space, audio cues spatialized from their perspective are triggered based on a randomly chosen character. Cut-up fragments of the voice of the character are synthesized where the participant stands. As they move within the ambisonic space, their character’s narrative is unfolded further, revealing more of their story. Other participants can enter the space and hear each of the participant’s tale synchronized to each of the participant’s location through the use of ambisonics. Made in collaboration with Prayas Abhinav and 9 students from the CEMA course at the Srishti School of Art and Design, Bangalore, India.
Michael Jackson vs. Chris Watson | Oct 2011
An auditory reconstruction of Michael Jackson’s “Beat It” using “Memory Mosaicing”. Every sound being played comes from a sample of Chris Watson’s nature recordings.
Real-time Auditory Memory Mosaicing | Jun 2011
Memory Mosaicing is a new type of Augmented Sonic Reality that resynthesizes your sonic world using recorded segments of sound from the microphone. You can also add a song from your iTunes Library to the app’s memory, creating a mashup of sounds in your sonic environment based on your favorite music, techno-fying or hiphop-i-fying your world.
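The core idea of this kind of concatenative resynthesis can be sketched as matching each incoming audio frame to the spectrally closest frame in a stored corpus, then overlap-adding the matched frames. This is a minimal sketch under assumed parameters (FFT size, hop, Euclidean spectral distance), not the app's actual implementation:

```python
import numpy as np

def spectral_frames(signal, n_fft=512, hop=256):
    """Magnitude spectra of overlapping windowed frames (a simple timbre feature)."""
    win = np.hanning(n_fft)
    starts = range(0, len(signal) - n_fft + 1, hop)
    frames = np.array([signal[i:i+n_fft] * win for i in starts])
    return np.abs(np.fft.rfft(frames, axis=1)), frames

def mosaic(live, corpus, n_fft=512, hop=256):
    """Replace each live frame with the spectrally closest corpus frame."""
    corpus_spec, corpus_frames = spectral_frames(corpus, n_fft, hop)
    live_spec, _ = spectral_frames(live, n_fft, hop)
    out = np.zeros(len(live))
    for i, spec in enumerate(live_spec):
        # nearest neighbour by spectral distance
        j = np.argmin(((corpus_spec - spec) ** 2).sum(axis=1))
        out[i*hop:i*hop + n_fft] += corpus_frames[j]  # overlap-add
    return out
```

In a real-time setting the corpus grows continuously from the microphone, and the lookup would be accelerated (e.g. with a tree or hashing) rather than a linear scan.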
Oramics | Jun 2011
This project focused on an iPhone and desktop emulator (for the Science Museum in London) that tries to bring the sound of Daphne Oram’s “Oramics Machine” to life through the Oramics drawn-sound technique. The interactive desktop app went live in the Science Museum of London on July 29th, with the iPhone app released soon after.
ECHOES | Jun 2011
ECHOES is a technology-enhanced learning environment where 5-to-7-year-old children on the Autism Spectrum and their typically developing peers can explore and improve social and communicative skills through interacting and collaborating with virtual characters (agents) and digital objects. ECHOES provides developmentally appropriate goals and methods of intervention that are meaningful to the individual child, and prioritises communicative skills such as joint attention.
Real-time Source Separation | Mar 2011
Separating foreground from background can also elicit a model of auditory saliency, or a model of what is likely to be important in the auditory stream of information. First, a chunk of audio is learned as the background in real-time. Next, the audio is discretized into matrix factors through a number of maximum-likelihood (Expectation-Maximization) iterations into three variables representing basis components in the 2D spectrum, their weights, and the impulses marking where they occur. The foreground is a reprojection of this data onto additional components. This project runs in real-time on an iPhone 4.
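A minimal stand-in for that factorization step is multiplicative-update NMF on a magnitude spectrogram; the multiplicative KL-divergence updates are known to coincide with EM for the equivalent probabilistic (PLCA-style) model. The function name and parameters below are illustrative, not the app's actual code:

```python
import numpy as np

def nmf(V, n_components=3, n_iter=100, seed=0):
    """Factor a non-negative spectrogram V (freq x time) into spectral
    bases W (freq x k) and activations H (k x time) by minimizing
    KL divergence with multiplicative updates."""
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, n_components)) + 1e-3
    H = rng.random((n_components, T)) + 1e-3
    for _ in range(n_iter):
        WH = W @ H + 1e-9
        W *= ((V / WH) @ H.T) / H.sum(axis=1)          # update bases
        WH = W @ H + 1e-9
        H *= (W.T @ (V / WH)) / W.sum(axis=0)[:, None]  # update activations
    return W, H
```

In the foreground/background setting, the background bases would be learned first; the foreground is then whatever the background components fail to explain, reprojected onto additional components.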
Real-time Binauralization | Feb 2011
This work extends the IRCAM Listen database for real-time cluster-based binauralization on the iPhone, allowing up to 30 sound sources to be spatialized in 3D in real-time on an iPhone 4 using GPS, compass, and altitude information.
Responsive Ecologies | Dec 2010 – Jan 2011
Exhibited at the Watermans between 6 December 2010 and 21 January 2011, the installation took the form of a 360-degree multi-screen projection, or CAVE (Cave Automatic Virtual Environment). The presence of people within the space was tracked and used to deconstruct and interlace the video in response to their movement. The video documentation below was taken from the installation (throughout the video the camera pans around the space in order to record all sides of the CAVE).
Sonic Graffiti | Nov 2010
Sounds are placed in the city as graffiti using the iPhone 4’s GPS and microphone. The result is a sonification of the graffiti around you as a spatialized orchestra in 3D sound.
Sound-seeer | Nov 2010
In collaboration with R. Beau Lotto of Lottolab Studios and Mick Grierson of the Department of Computing at Goldsmiths, University of London, this project sought to allow children to design visual search experiments investigating the relationship of sound and vision. Set up during the November Science Museum of London LATES exhibition and the i,Scientist 2011 program, participants were blindfolded and navigated a maze using the sound from this iPod app, which converted the camera image into spatialized sounds.
The Trial | Jun 2010
A collaboration between Christos Michalakos, Lin Zhang, and myself, ‘The Trial’ was presented as a live laptop set for the Dialogues Festival in Edinburgh’s Voodoo Rooms, as a support act for Rune Grammofon artist Humcrush.
Calibration | Apr 2010
A collaboration between Christos Michalakos and myself, ‘Calibration’ was the continuation of an audiovisual synaesthetic duo exploring raw symmetry with digitally-controlled analog aesthetics between sound and visuals.
Memory | 2009-2010
‘Memory’ is an augmented installation of a neural network employing hand-blown glass, galvanized metal chain, projection, and cameras; 1.5m x 2.5m x 3m. As viewers look at the neurons, they notice the faces as their own, trapped as disparate memories of a neural network. Filmed and installed for the Athens Video Art Festival in May 2010 in Technopolis, Athens, Greece, a disused gas factory converted into an art space. Also seen at Kinetica Art Fair, Ambika P3, London, UK, 2010; Passing Through Exhibition, James Taylor Gallery, London, UK, 2009; Interact, Lauriston Castle, Edinburgh, UK, 2009.
Colony | Summer 2010
COLONY is a multi-faceted, networked, and interdisciplinary platform for exploring creative ideas. In this incarnation, a microscope video feed is processed for numerous tracking parameters that influence the resulting re-projected visuals. Live audio is also processed based on these same parameters. Additional performers can “plug in” by receiving the tracking information, or simply by viewing the other performers or visuals. This unadulterated (and very rough) clip was initiated at the Edinburgh HackLab on 17 Sept 2010 with Shiori Usui on live instruments, Sarah Roberts on microscope, and Parag K. Mital on audio/visual processing (other clips feature Owen Green also on audio processing).
X-RAY | Jun 2010
X-RAY invites participants to interact with a seemingly broken television installed as part of a 1970’s living room. Television signals are affected by the surrounding audio textures in the room as well as a novel measure of attention invoked while a user views the television.
First installed as part of Neverzone on 10 June 2010.
Polychora | Feb 2010
As an exploration of synaesthesia, the visuals are created as an audio-reactive algorithm based on brightness, panning, texture, noisiness, pitch, and their combinations. By combining the amorphous space of possible impulses and the range of sound textures, the polychoron takes a visual shape altered by the different dimensions of texture. Presented at the Soundings Festival on February 6th and 7th, 2010 (curated by Andrew Connor).
Attention | Spring 2008
Participants in a quadrophonic, multiscreen immersive installation are eye-tracked while watching a continuous 2×2 film narrative. The resulting installation creates an entirely algorithmically edited film based on the participant’s eye-movements, producing real-time edits of sound and video: atmospheric sounds, off-screen voice-overs, and video edits between close-ups, mid-shots, and wide-shots, all depending on the viewer’s original attention to the film.
Ask Me About Iran | Sep 2009
Set in the city center of Edinburgh during the end of the largest arts festival in the world, a small group of individuals were curious to document the current public opinion about Iran. Spontaneously finding a piece of cardboard, a marker, and a stick, we drew a sign which said, “Ask me about Iran” and waited for anyone willing to start a conversation.
Geodesic Dome Projection Mapping | May 2010
Custom built software for interactive projection mapping of a geodesic dome. Dome design by Tom Clowney for the artist Cardboard.
Dynamic Images and Eye-Movements | 2008-2010
The DIEM Project (Prof John Henderson, Dr Robin Hill, Dr Tim Smith, Parag Mital) developed new visualisation tools for eye movements in dynamic images, as well as new data analysis tools and techniques based on dynamic regions of interest (DROIs) for use in film and video. We applied these new methods to investigate how people see and understand the visual world as depicted in film and video, in developing a stronger theory of active visual cognition.
Attention: The Experimental Film | 2008
A collaboration between Stefanie Tan, Dave Stewart, and myself, “Attention” explored how eye-tracking could be used to algorithmically edit new films using sound and video databases (supervised by Tim J. Smith). A POV 2×2 film shot in the style of Michael Figgis’s Timecode was created alongside additional wide-, mid-, and close-shot videos. Sound bites and narratives were also collected. A final installation of dual projection and quadrophonic audio was algorithmically edited in real-time based on viewers’ eye-tracking information.
Interactive Light Field Renderer | 2006
As part of a post-doctoral seminar I attended at the University of Delaware, I implemented bespoke software for a Light Field Renderer with support for aperture size, synthetic focal length, and translational motion of the virtual viewing camera. This project was under the direction of Dr. Jingyi Yu.
Gradient Domain Context Enhancement Using Poisson Integration | 2006
As part of a post-doctoral seminar I attended at the University of Delaware, I built bespoke software to correct either a highly saturated daytime video using video material from the night, or, vice versa, an overly dark night-time video using material from the daytime. This project was under the direction of Dr. Jingyi Yu.
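The gradient-domain idea can be sketched as solving a Poisson equation: the output keeps the interior gradients of the well-exposed frame while taking its boundary values from the frame being corrected. This is a minimal single-frame Jacobi solver under those assumptions; names and iteration counts are illustrative:

```python
import numpy as np

def gradient_domain_fuse(dark, bright, n_iter=200):
    """Reconstruct an image whose interior Laplacian matches `bright`
    while boundary pixels come from `dark`, via Jacobi iteration on
    the discrete Poisson equation."""
    out = dark.astype(float).copy()
    # divergence of the guidance gradient field = Laplacian of `bright`
    lap = np.zeros_like(out)
    lap[1:-1, 1:-1] = (bright[:-2, 1:-1] + bright[2:, 1:-1] +
                       bright[1:-1, :-2] + bright[1:-1, 2:] -
                       4 * bright[1:-1, 1:-1])
    for _ in range(n_iter):
        # Jacobi update: average of neighbours minus the target Laplacian
        out[1:-1, 1:-1] = 0.25 * (out[:-2, 1:-1] + out[2:, 1:-1] +
                                  out[1:-1, :-2] + out[1:-1, 2:] -
                                  lap[1:-1, 1:-1])
    return out
```

For video, the same solve would run per frame (or with a multigrid/FFT solver for speed), carrying the gradients of one exposure and the intensity range of the other.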
Kadenze Academy, Fall 2016
This is a 5-session course in state-of-the-art deep learning algorithms, taught through creative applications of the algorithms. This unique course gets you up to speed with TensorFlow and interactive computing with Python.
Lecturer, Department of Computing @ Goldsmiths, University of London. London, U.K. – Spring 2013
This is a 10-week Master’s course covering Mobile and Computer Vision development using the openFrameworks creative coding toolkit at Goldsmiths, Department of Computing. Taught to MSc Computer Science, MSc Cognitive Computing, MSc Games and Entertainment, MA Computational Arts, and MFA Computational Studio Arts students.
Lecturer, Department of Computing @ Goldsmiths, University of London. London, U.K. – Fall 2012
This is a 4-week course covering the basics of openFrameworks taught to MA Computational Arts and MFA Computational Studio Arts students.
Lecturer, Department of Computing @ Goldsmiths, University of London. London, U.K. – Spring 2012
This is a 5-week course covering Gesture and Interaction design as well as Computer Vision basics using the openFrameworks creative coding toolkit. Taught to MSc Computer Science, MSc Cognitive Computing, MSc Games and Entertainment, MA Computational Arts, and MFA Computational Studio Arts students.
Lecturer, Digital Studio, Sackler Centre @ Victoria & Albert Museum. London, U.K. – Spring 2012
A 10-week course open to anyone, covering the basics of iOS development.
Lecturer, Center for Experimental Media Arts @ Srishti School of Art, Design, and Technology. Bangalore, India – Fall 2011
Taught during the interim semester, the course, entitled “Stories are Flowing Trees”, introduced a group of 9 students to the creative coding platform openFrameworks through practical sessions, critical discourse, and the development of 3 installation artworks that were exhibited in central Bangalore. During the first week, students were taught basic creative coding routines including blob tracking, projection mapping, and building interaction with generative sonic systems. Following the first week, students worked together to develop, fabricate, install, publicize, and exhibit 3 pieces of artwork in central Bangalore at the BAR1 artist-residency space in an exhibition entitled SURFACES: textures in interactive new media.
Supervisor, School of Arts, Culture, and Environment, University of Edinburgh
(2010) Supervisor for 3 MSc Students on Augmented Sculpture
(2009) Supervised 6 MSc Students on Incorporating Computer Vision in Interactive Installation
Engineering and Sciences Research Mentor. Seminar. McNair Scholars, University of Delaware, 2007
Instructor. Web Design. McNair Scholars, University of Delaware, 2007
Teaching Assistant. Introduction to Computer Science. University of Delaware, 2006
Parag K. Mital, “Computational Audiovisual Synthesis and Smashups”. International Festival of Digital Art, Waterman’s Art Centre, 25 August 2012.
Parag K. Mital and Tim J. Smith, “Investigating Auditory Influences on Eye-movements during Figgis’s Timecode”. 2012 Society for the Cognitive Studies of the Moving Image (SCSMI), New York, NY. 13-16 June 2012.
Parag K. Mital and Tim J. Smith, “Computational Auditory Scene Analysis of Dynamic Audiovisual Scenes”. Invited Talk, Birkbeck University of London, Department of Film. London, UK. 25 January 2012.
Parag K. Mital, “Resynthesizing Perception”. Invited Talk, Queen Mary University of London, London, UK. 11 January 2012.
Parag K. Mital, “Resynthesizing Perception”. Invited Talk, Dartmouth, Department of Music. Hanover, NH, USA. 7 January 2012.
Parag K. Mital, “Resynthesizing Perception”. 2011 Bitfilm Festival, Goethe Institut, Bengaluru (Bangalore), India. 3 December 2011.
Parag K. Mital, “Resynthesizing Perception”. Thursday Club, Goldsmiths, University of London. 13 October 2011.
Parag K. Mital, “Resynthesizing audiovisual perception with augmented reality”. Invited Talk for Newcastle CULTURE Lab, Lunch Bites. 30 June 2011
Hill, R.L., Henderson, J. M., Mital, P. K. & Smith, T. J. (2010) “Dynamic Images and Eye Movements”. Poster at ASCUS Art Science Collaborative, Edinburgh College of Art, 29 March 2010.
Robin Hill, John M. Henderson, Parag K. Mital, Tim J. Smith. “Through the eyes of the viewer: Capturing viewer experience of dynamic media.” Invited Poster for SICSA DEMOFest. Edinburgh, U.K. 24 November 2009
Parag K Mital, Tim J. Smith, Robin Hill, and John M. Henderson. “Dynamic Images and Eye-Movements.” Invited Talk for Centre for Film, Performance and Media Arts, Close-Up 2. Edinburgh, U.K. 2009
Parag K. Mital, Stephan Bohacek, Maria Palacas. “Realistic Mobility Models for Urban Evacuations.” 2007 National Ronald E. McNair Conference. 2007
Parag K. Mital, Stephan Bohacek, Maria Palacas. “Developing Realistic Models for Urban Evacuations.” 2006 National Ronald E. McNair Conference. 2006
(2016) Espacio Byte, Argentina
(2015) Re-Culture 4, International Visual Arts Festival, Patras, Greece
(2015) Cologne Short Film Festival, New Aesthetic, Köln (Cologne), Germany
(2015) Blackout Basel, Basel, Switzerland
(2015) Prix Ars Electronica, Linz, Austria
(2015) Oberhausen Short Film Festival, Oberhausen, Germany
(2013) Media Art Histories/ART+COMMUNICATION 2013 (SAVE AS), RIXC, Riga, Latvia
(2013) Breaking Convention, University of Greenwich, London, U.K.
(2012) Digital Design Weekend, Victoria and Albert Museum, London, U.K.
(2012) SHO-ZYG, Goldsmiths, University of London, U.K.
(2011) SURFACES, Bengaluru Artist Residency 1 (BAR1), Bengaluru (Bangalore), India (Co-Curator and Artist)
(2011) Bitfilm Festival, Goethe Institut, Bengaluru (Bangalore), India
(2011) Oramics to Electronica, Science Museum. London, U.K.
(2011) Edinburgh International Film Festival. Edinburgh, U.K.
(2011) Kinetica Art Fair 2011, Ambika P3. London, U.K.
(2010-2011) Solo Exhibition, Waterman’s Art Centre, London, UK.
(2010) onedotzero Adventures in Motion Festival, British Film Institute (BFI) Southbank, London, UK.
(2010) LATES, Science Museum, London, UK.
(2010) Athens Video Art Festival, Technopolis. Athens, Greece
(2010) Is this a test?, Roxy Arthouse, Edinburgh, UK.
(2010) Neverzone, Roxy Arthouse, Edinburgh, UK.
(2010) Dialogues Festival, Voodoo Rooms, Edinburgh, U.K.
(2010) Kinetica Art Fair 2010, Ambika P3. London, U.K.
(2010) Soundings Festival, Reid Concert Hall, Edinburgh, U.K.
(2010) Media Art: A 3-Dimensional Perspective, Online Exhibition (Add-Art)
(2009) Passing Through, James Taylor Gallery. London, U.K.
(2009) Interact, Lauriston Castle Glasshouse. Edinburgh, U.K.
(2008) Leith Short Film Festival, Edinburgh, U.K. June
(2008) Solo exhibition, Teviot, Edinburgh, U.K. April
Parag K. Mital, Tim J. Smith, John M. Henderson. A Framework for Interactive Labeling of Regions of Interest in Dynamic Scenes. MSc Dissertation. Aug 2008
Parag K. Mital. Interactive Video Segmentation for Dynamic Eye-Tracking Analysis. 2008
Parag K. Mital. Augmented Reality and Interactive Environments. 2007
Stephan Bohacek, Parag K. Mital. Mobility Models for Urban Evacuations. 2007
Parag K. Mital, Jingyi Yu. Light Field Interpolation via Max-Contrast Graph Cuts. 2006
Parag K. Mital, Jingyi Yu. Gradient Based Domain Video Enhancement of Night Time Video. 2006
Parag K. Mital, Jingyi Yu. Interactive Light Field Viewer. 2006
Stephan Bohacek, Parag K. Mital. OpenGL Modeling of Urban Cities and GIS Data Integration. 2005
Bregman Media Labs, Dartmouth College
EAVI: Embodied Audio-Visual Interaction group initiated by Mick Grierson and Marco Gilles at Goldsmiths, University of London
The DIEM Project: Dynamic Images and Eye-Movements, initiated by John M. Henderson at the University of Edinburgh
CIRCLE: Creative Interdisciplinary Research in CoLlaborative Environments, initiated between the Edinburgh College of Art, the University of Edinburgh, and elsewhere.
Summer Schools/Workshops Attended
Michael Zbyszynski, Max/MSP Day School. UC Berkeley CNMAT 2007
Ali Momeni, Max/MSP Night School. UC Berkeley CNMAT 2007
Adrian Freed, Sensor Workshop for Performers and Artists. UC Berkeley CNMAT 2007
Andrew Benson, Jitter Night School. UC Berkeley CNMAT 2007
Perry R. Cook and Xavier Serra, Digital Signal Processing: Spectral and Physical Models. Stanford CCRMA 2007
Ivan Laptev, Cordelia Schmid, Josef Sivic, Francis Bach, Alexei Efros, David Forsyth, Zaid Harchaoui, Martial Hebert, Christoph Lampert, Aude Oliva, Jean Ponce, Deva Ramanan, Antonio Torralba, Andrew Zisserman, INRIA Computer Vision and Machine Learning. INRIA Grenoble 2012
Bob Cox and the NIH AFNI team, AFNI Bootcamp. Haskins Lab, Yale University. May 27-30, 2014.
In the News
The Space (BBC/Arts Council England)
Fast Company: Co.Design
The Creators Project (Vice/Intel)