Archived entries for audio-visual

YouTube’s “Copyright School” Smash Up

Ever wonder what happens when you’ve been accused of violating copyright multiple times on YouTube? First, you get redirected to YouTube’s “Copyright School” whenever you visit the site, forcing you to watch a Happy Tree Friends cartoon in which the main character is dressed as an actual pirate:

Second, I’m guessing, your account will be banned. Third, you cry and wonder why you ever violated copyright in the first place.

In my case, I’ve disputed every one of the 4 copyright violation notices that I’ve received on grounds of Fair Use and Fair Dealing. Here’s what happens when you file a dispute using YouTube’s online form (click for high-res):

3 of the 4 notices have been dropped since I filed disputes, though I’m still waiting on the response to the one above. Read the dispute letter to Sony ATV and UPMG Publishers in full here.

The picture above shows a few stills of my Smash Ups. The process described in greater detail on is part of my ongoing research into how existing content can be transformed into artistic styles reminiscent of analytic cubist, figurative, and futurist paintings. The process to create the videos … Continue reading...

3D Musical Browser

I’ve been interested in exploring ways of navigating media archives. Typically, you might use iTunes and go from artist to artist, or have tediously classified your collection into genres. Some may even still browse their music through a file browser, perhaps making sure the folders and filenames of their collection describe the artist, album, year, etc. But what about how the content actually sounds?

Wouldn’t it be nice to hear all music that shares similar sounds, or similar phrases of sounds? Research over the last 10-15 years has developed methods precisely to solve this problem, falling under the umbrella term content-based information retrieval (CBIR): uncovering the relationships within an archive through the information in the content itself. For images, Google’s Search by Image, which only recently became public, is a great example. For audio, audioDB and Shazam are good examples of discovering music through the way it sounds, i.e. the content-based relationships of the audio itself. However, each of these interfaces presents a list of matches to an image or audio query, making it difficult to explore the content-based relationships of a specific set of material.

The video above demonstrates interaction with a novel 3D browser … Continue reading...

Intention in Copyright

The following article is written for the LUCID Studio for Speculative Art based in India.


My work in audiovisual resynthesis aims to create models of how humans represent and attend to audiovisual scenes. Using pattern recognition on both audio and visual material, these models match large corpora of learned audiovisual material to ongoing streams of incoming audio or video. The way audio and visual material is stored and segmented within the model is based heavily on neurobiological and behavioral evidence (the details are saved for another post). I have called the underlying model Audiovisual Content-based Information Description/Distortion (or ACID for short).

As an example, a live stream of audio may be matched to a database of learned sounds from recordings of nature, creating a re-synthesis of the audio environment at present using only pre-recorded material from nature itself. These learned sounds may be fragments of a bird chirping, or the sound of footsteps. Incoming sounds of someone talking may then be synthesized using the closest sounding material to that person talking, perhaps a bird chirp or a footstep. Instead of a live stream, one can also re-synthesize a pre-recorded stream. Consider using a database … Continue reading...
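The matching step described above boils down to a nearest-neighbor search: each incoming frame of audio is replaced by the closest-sounding frame from the learned corpus. The sketch below is only a minimal illustration of that idea, not the ACID model itself; the function names are my own, and a plain Euclidean distance over raw samples stands in for whatever feature representation the real system uses.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

// A frame of audio samples (all frames assumed equal length).
using Frame = std::vector<float>;

// Euclidean distance between two equal-length frames.
float frameDistance(const Frame& a, const Frame& b) {
    float d = 0.0f;
    for (std::size_t i = 0; i < a.size(); ++i) {
        float diff = a[i] - b[i];
        d += diff * diff;
    }
    return std::sqrt(d);
}

// Index of the corpus frame closest to the target frame.
std::size_t nearestFrame(const Frame& target, const std::vector<Frame>& corpus) {
    std::size_t best = 0;
    float bestDist = std::numeric_limits<float>::max();
    for (std::size_t i = 0; i < corpus.size(); ++i) {
        float d = frameDistance(target, corpus[i]);
        if (d < bestDist) { bestDist = d; best = i; }
    }
    return best;
}

// Resynthesize a target stream by concatenating nearest corpus frames.
std::vector<Frame> resynthesize(const std::vector<Frame>& target,
                                const std::vector<Frame>& corpus) {
    std::vector<Frame> out;
    for (const Frame& f : target)
        out.push_back(corpus[nearestFrame(f, corpus)]);
    return out;
}
```

Swap a live input buffer in for `target` and you get the live-stream case described above; swap in a decoded file and you get the pre-recorded case.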

Course @ CEMA Srishti School of Design, Bangalore, IN

From November 21st to the 2nd of December, I’ll have the pleasure of leading a course and workshop with Prayas Abhinav at the Center for Experimental Media Arts in the Srishti School of Design in Bangalore, IN.  Many thanks to Meena Vari for all her help in organizing the project.

Stories are flowing trees

Key words:  3D, interactive projects, data, histories, urban, creative coding, technology, sculpture, projection mapping

Project Brief:

Urban realities are more like fictions, constructed through folklore, media and policy. Compressing these constructions across time would offer some possibilities for the emergence of complexity and new discourse. Using video projections adapted for 3D surfaces, urban histories will become data and information – supple, malleable, and material.

The project will begin with a one-week workshop by Parag Mital on “Creative Coding” using the openFrameworks platform for C/C++.

About the Artists:

Prayas Abhinav

Presently he teaches at the Srishti School of Art, Design and Technology and is a researcher at the Center for Experimental Media Arts (CEMA). He has taught in the past at Dutch Art Institute (DAI) and Center for Environmental Planning and Technology (CEPT).
He has been supported by fellowships by Openspace India (2009), TED (2009), … Continue reading...

Concatenative Video Synthesis (or Video Mosaicing)


Working closely with my adviser Mick Grierson, I have developed a way to resynthesize existing videos using material from another set of videos. The process starts by learning a database of objects that appear in the set of videos to synthesize from. The target video to resynthesize is then broken into objects in a similar manner, and each object is matched to an object in the database. What you get is a resynthesis of the video that appears as beautiful disorder. Here are two examples: the first uses Family Guy to resynthesize The Simpsons, and the second uses Jan Svankmajer’s Food to resynthesize his Dimensions of Dialogue.
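To make the pipeline above concrete, here is a minimal sketch of the decompose-and-match idea, assuming grayscale frames stored as flat float arrays. The real system segments actual objects and compares far richer features; fixed square patches compared by mean intensity are my own drastic simplification, and all names are hypothetical.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

// A patch summarized by a single feature (mean intensity, for brevity).
struct Patch { float meanIntensity; };

// Split a w x h grayscale frame into non-overlapping p x p patches,
// summarizing each by its mean intensity.
std::vector<Patch> extractPatches(const std::vector<float>& frame,
                                  std::size_t w, std::size_t h, std::size_t p) {
    std::vector<Patch> patches;
    for (std::size_t y = 0; y + p <= h; y += p) {
        for (std::size_t x = 0; x + p <= w; x += p) {
            float sum = 0.0f;
            for (std::size_t dy = 0; dy < p; ++dy)
                for (std::size_t dx = 0; dx < p; ++dx)
                    sum += frame[(y + dy) * w + (x + dx)];
            patches.push_back({sum / float(p * p)});
        }
    }
    return patches;
}

// For each target patch, find the index of the closest source patch;
// rendering those source patches in place yields the mosaic.
std::vector<std::size_t> matchPatches(const std::vector<Patch>& target,
                                      const std::vector<Patch>& corpus) {
    std::vector<std::size_t> matches;
    for (const Patch& t : target) {
        std::size_t best = 0;
        float bestDist = std::numeric_limits<float>::max();
        for (std::size_t i = 0; i < corpus.size(); ++i) {
            float d = std::fabs(t.meanIntensity - corpus[i].meanIntensity);
            if (d < bestDist) { bestDist = d; best = i; }
        }
        matches.push_back(best);
    }
    return matches;
}
```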

Continue reading...

Google Earth + Atlantis Space Shuttle

I managed to catch the live feed of the Atlantis Space Shuttle launch yesterday. What I found really interesting, though, was a real-time virtual reality of the launch from inside Google Earth. Screen-capture with obligatory 12x speedup to retain attention span below:

Continue reading...

Lunch Bites @ CULTURE Lab, Newcastle University

I was recently invited to the CULTURE Lab at Newcastle University by its director, Atau Tanaka. I would say it has the resources and creative power of 5 departments all housed in one spacious building. In the twelve-some studios spread over 3 floors, over the course of 2 short days, I found people building multitouch tables, controlling synthesizers with the touch of fabric, and researching augmented spatial sonic realities. There is a full suite of workshop tools including a laser cutter, multiple multi-channel sound studios, a full stage/theater with stage lighting and multiple projectors, a radio lab, and tons of light and interesting places to sit and do whatever you feel like doing. The other thing I found really interesting is that there are no “offices”. Instead, the staff are dispersed amongst the students in the studios, perhaps picking a new desk whenever they need a change of scenery. If you are ever in the area, it is certainly worth a visit, and I’m sure the people there will be very open to telling you what they are up to.

I also had the pleasure to give a talk on my PhD research in Resynthesizing Audiovisual Perception with Augmented Reality at the Lunch Continue reading...

Short Time Fourier Transform using the Accelerate framework

Using the libraries pkmFFT and pkm::Mat, you can very easily perform a highly optimized short-time Fourier transform (STFT) with direct access to a floating-point based object.
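Independently of the Accelerate-based implementation, the framing stage of an STFT can be sketched in plain C++: cut the signal into overlapping frames and apply a Hann window to each before handing it to the FFT. This is only an illustrative sketch; the function and variable names are mine, and the actual pkmSTFT interface differs.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Segment a signal into overlapping, Hann-windowed frames.
// frameSize is the window length; hopSize is the step between frames
// (hopSize < frameSize gives the overlapping case).
std::vector<std::vector<float>> stftFrames(const std::vector<float>& signal,
                                           std::size_t frameSize,
                                           std::size_t hopSize) {
    const float kPi = 3.14159265358979f;

    // Precompute the Hann window for this frame size.
    std::vector<float> hann(frameSize);
    for (std::size_t n = 0; n < frameSize; ++n)
        hann[n] = 0.5f * (1.0f - std::cos(2.0f * kPi * n / (frameSize - 1)));

    std::vector<std::vector<float>> frames;
    for (std::size_t start = 0; start + frameSize <= signal.size(); start += hopSize) {
        std::vector<float> frame(frameSize);
        for (std::size_t n = 0; n < frameSize; ++n)
            frame[n] = signal[start + n] * hann[n];
        frames.push_back(frame);  // each frame would now go through the FFT
    }
    return frames;
}
```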

Get the code on my github:
Depends also on:

 *  pkmSTFT.h
 *  STFT implementation making use of Apple's Accelerate Framework (pkmFFT)
 *  Created by Parag K. Mital - 
 *  Contact:
 *  Copyright 2011 Parag K. Mital. All rights reserved.
 *	Permission is hereby granted, free of charge, to any person
 *	obtaining a copy of this software and associated documentation
 *	files (the "Software"), to deal in the Software without
 *	restriction, including without limitation the rights to use,
 *	copy, modify, merge, publish, distribute, sublicense, and/or sell
 *	copies of the Software, and to permit persons to whom the
 *	Software is furnished to do so, subject to the following
 *	conditions:
 *	The above copyright notice and this permission notice shall be
 *	included in all copies or substantial portions of the Software.
Continue reading...

Real FFT/IFFT with the Accelerate Framework

Apple’s Accelerate Framework can really speed up your code without much effort, and it will also run on an iPhone. Even so, I banged my head a few times trying to get a straightforward real FFT and IFFT working, even after consulting the Accelerate documentation (reference and source code), stackoverflow (here and here), and an existing implementation (thanks to Chris Kiefer and Mick Grierson). The previously mentioned examples weren’t very clear: they did not handle the case of overlapping FFTs, which I needed for an STFT, or they did not recover the power spectrum, or they just didn’t work for me (lots of blaring noise).
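One way to sanity-check an optimized real FFT/IFFT pair is against a naive O(N²) reference DFT: if the round trip through the reference recovers the input, the fast version can then be compared bin-for-bin. Below is a hedged sketch of such a reference for even N, using the conjugate symmetry of a real signal’s spectrum; the helper names are my own and are not part of pkmFFT.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

const float kPi = 3.14159265358979f;

// Naive forward real DFT: store only bins k = 0..N/2 (the rest follow
// from conjugate symmetry for a real input).
void realDFT(const std::vector<float>& x,
             std::vector<float>& re, std::vector<float>& im) {
    std::size_t N = x.size();
    re.assign(N / 2 + 1, 0.0f);
    im.assign(N / 2 + 1, 0.0f);
    for (std::size_t k = 0; k <= N / 2; ++k)
        for (std::size_t n = 0; n < N; ++n) {
            float w = 2.0f * kPi * k * n / N;
            re[k] += x[n] * std::cos(w);
            im[k] -= x[n] * std::sin(w);
        }
}

// Naive inverse: rebuild the length-N signal from the half-spectrum.
// Interior bins are doubled to account for their conjugate mirrors.
std::vector<float> realIDFT(const std::vector<float>& re,
                            const std::vector<float>& im, std::size_t N) {
    std::vector<float> x(N, 0.0f);
    for (std::size_t n = 0; n < N; ++n)
        for (std::size_t k = 0; k <= N / 2; ++k) {
            float w = 2.0f * kPi * k * n / N;
            float scale = (k == 0 || k == N / 2) ? 1.0f : 2.0f;
            x[n] += scale * (re[k] * std::cos(w) - im[k] * std::sin(w));
        }
    for (float& v : x) v /= float(N);
    return x;
}
```

The power spectrum then falls out as `re[k]*re[k] + im[k]*im[k]` per bin, which is exactly the quantity the examples I found failed to recover.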

Get the code on my github:

 *  pkmFFT.h
 *  Real FFT wrapper for Apple's Accelerate Framework
 *  Created by Parag K. Mital - 
 *  Contact:
 *  Copyright 2011 Parag K. Mital. All rights reserved.
 *	Permission is hereby granted, free of charge, to any person
 *	obtaining a copy of this software and associated documentation
 *	files (the "Software"), to deal in the Software without
 *	restriction, including without limitation the rights to use,
Continue reading...

Tim J Smith guest blogs for David Bordwell

Tim J Smith, an expert in scene perception and film cognition and a member of The DIEM Project [1], recently starred as a guest blogger for David Bordwell, a leading film theorist with an impressive list of books and publications widely used in film cognition and film studies [2]. In his article featured on David’s site, Tim expands on his research on film cognition, including continuity editing [3], attentional synchrony [4], and the project we worked on in 2008-2010 as part of The DIEM Project. Since Tim’s feature on David Bordwell’s blog, The DIEM Project has seen a surge of publicity, with our vimeo video loads climbing higher than 200,000 in a single day and features on dvice, slashfilm, gizmodo, Roger Ebert’s facebook/twitter, and the front page of

Not to mention, our tools and visualizations are finally reaching an audience with interests in film, photography, and cognition. If you haven’t yet seen some of our videos, please head on over to our vimeo page, where you can see a range of videos embedded with eye-tracking of participants and many different visualizations of models of eye-movements using machine learning, or start by reading Tim’s post on Continue reading...

Responsive Ecologies Exhibition

Come check out the Watermans Art Centre from the 6th of December until the 21st of January for an immersive and interactive visual experience entitled “Responsive Ecologies”, developed in collaboration with artists captincaptin. We will also be giving a talk on the 10th of December from 7 p.m. – 9 p.m. during CINE: 3D Imaging in Art at the Watermans Centre.

Responsive Ecologies is part of a wider ongoing collaboration between artists captincaptin, the ZSL London Zoo and Musion Academy. Collectively they have been exploring innovative means of public engagement to generate an awareness and understanding of nature and the effects of climate change. All of the contained footage comes from filming sessions within the Zoological Society; this has coincidentally raised some interesting questions on the spectacle of captivity, an issue which we have tried to reflect upon in the construction and presentation of this installation. The nature of interaction within Responsive Ecologies means that a visitor to the space cannot simply view the installation but must become a part of its environment. When attempting to perceive the content within the space, the visitor reshapes the installation. Everybody has a degree of impact, whether directed or incidental, and … Continue reading...

“Memory” Video @ AVAF 2010

Please rate, share, and comment!

Memory @ AVAF 2010 from pkmital on Vimeo.

‘Memory’ is an augmented installation of a neural network by Parag K Mital & Agelos Papadakis.
hand blown glass, galvanized metal chain, projection, cameras; 1.5m x 2.5m x 3m

Ghostly images of faces appear as recorded movie clips within neuron-shaped hand-blown glass pieces. As visitors begin to look at the neurons, they notice the faces are their own, trapped as disparate memories of a neural network.

Filmed and installed for the Athens Video Art Festival in May 2010 in Technopolis, Athens, Greece. The venue is a disused gas factory converted into an art space.

Also seen at Kinetica Art Fair, Ambika P3, London, UK, 2010; Passing Through Exhibition, James Taylor Gallery, London, UK, 2009; Interact, Lauriston Castle, Edinburgh, UK, 2009.

Continue reading...

X-Ray @ the Roxy Arthouse

Come to the Roxy Arthouse on June 10 for Neverzone and June 26 for Is This a Test? where I’ll be presenting my latest installation X-Ray, as well as to catch other brilliant Scotland-based artists.  More info on the flyers below.

Neverzone / Thursday June 10th 19:00-23:00

Is This a Test? / Saturday June 26th 19:00-23:00

[update] Plug for Is This a Test? on

[update 2] video now online:

X-RAY from pkmital on Vimeo.

Continue reading...

Memory and ChaoDependant at the Athens Video Art Festival 2010

May 7-9 saw the 2010 Athens Video Art Festival, where, with collaborator Agelos Papadakis, Memory saw its latest installation. The venue, a 2,500-square-meter disused gas factory called Technopolis, or more commonly referred to as Gazi (Gas), was a brilliant display of warehouse spaces littered with gas pipes and oil still dripping from the cracks.


Over 2,700 submissions were received, with 450 presenting artists and over 13,000 visitors during the weekend. Among the hundreds of video-art pieces, animations, and installations were a number of performances, including dance and music.



Both myself and collaborator Agelos Papadakis were interviewed by ERT, or loosely translated as Hellenic Radio and Television (something like the BBC). It is all in Greek, except for my interview.


Video link to the interview on ERT.

Feel free to check out pictures from the festival and my travels on my flickr page.… Continue reading...

Neverzone, 10th June

I’m currently working on a video installation for an exciting gig on the 10th of June dubbed Neverzone. Check out the insanity on the flyer and information on the just-as-insane artists at Black Lantern Music.

Continue reading...

Testing Audiovisuals

Have a look at a teaser of our upcoming show with Christos Michalakos and Lin Zhang on the 12th of April as support for Humcrush and Leverton Fox.

More information: … Continue reading...

TV Interference

Some images from an idea I am currently playing with using TV interference:

vlcsnap-2010-03-05-16h57m06s228 vlcsnap-2010-03-05-16h57m11s23 vlcsnap-2010-03-05-16h57m34s245 vlcsnap-2010-03-05-16h57m39s41 vlcsnap-2010-03-05-16h58m31s53 vlcsnap-2010-03-05-16h58m49s227 vlcsnap-2010-03-05-16h59m08s162 vlcsnap-2010-03-05-16h59m16s241 vlcsnap-2010-03-05-16h59m58s155 vlcsnap-2010-03-05-17h00m20s120 vlcsnap-2010-03-05-17h00m27s182
Continue reading...

Polychora (AV Synaesthesia)

Polychora is the plural of polychoron, a four-dimensional polytope: a connected, closed figure of lower-dimensional polytopal elements. These elements are thought of as the impulses of the piece, and their combinations are intricately explored and decomposed in order to create a soundscape of particles. As an exploration of synaesthesia, the visuals are created by an audio-reactive algorithm based on brightness, panning, texture, noisiness, pitch, and their combinations. By combining the amorphous space of possible impulses and the range of sound textures, the polychoron takes a visual shape altered by the different dimensions of texture.

This piece was presented at the Soundings Festival on February 6th and 7th, 2010 (curated by Andrew Connor).

Audio by Christos Michalakos:

Visuals by Parag K Mital:… Continue reading...

Soundings Festival

As part of an audio-visual collaboration, Christos Michalakos and I have experimented with a possible rendering, Polychora, for submission to the Soundings Festival. I’m happy to announce that our piece was selected and will be presented on February 6th as part of the evening festival, 6 p.m. at the Reid Concert Hall, Bristo Square, Edinburgh, Scotland. Uploaded here are a few stills from the video:

As well, a short description of the piece:
Christos Michalakos & Parag K Mital – Polychora:

Polychora is the plural of polychoron, a four-dimensional polytope: a connected, closed figure of lower-dimensional polytopal elements.  These elements are thought of as the impulses of the piece, and their combinations are intricately explored and decomposed in order to create a soundscape of particles.  As an exploration of synaesthesia, the visuals are created by an audio-reactive algorithm based on brightness, panning, texture, noisiness, pitch, and their combinations.  By combining the amorphous space of possible impulses and the range of sound textures, the polychoron takes a visual shape altered by the different dimensions of texture.

Continue reading...

Copyright © 2010 Parag K Mital. All rights reserved. Made with Wordpress. RSS