Over the past two years, I have been working under the direction of Prof. John M. Henderson, together with Dr. Tim J. Smith and Dr. Robin Hill, on the DIEM project (Dynamic Images and Eye-Movements). Our project has focused on investigating active visual cognition by eye-tracking numerous participants as they watch a wide variety of short videos.
We are in the process of making all of our data freely available for research use. We have also worked on tools for analyzing eye movements during such dynamic scenes.
CARPE, more bombastically known as Computational Algorithmic Representation and Processing of Eye-movements, lets you visualize eye-movement data together with the video it was tracked on in a number of ways. It currently supports low-level feature visualizations, clustering of eye-movements, model selection, heat-map visualizations, blending, contour visualizations, peek-through visualizations, movie output, binocular data input, and more. The videos shown above on our Vimeo page were all created using this tool. Head over to Google Code to check out the source code or download the binary. We are still streamlining the release by creating manuals for new users and uploading more of the eye-tracking and video data, so keep checking back if you are interested.
Smita Kheria (Law) — “Copyright law and new media art”
2pm – joint presentations
Richard Coyne, Penny Travlou, Mark Wright (ACE, ECA, and Informatics) — “Emerging forms of digital media and the democratization of urban discourse”
Jolyon Mitchell, Alina Birzache, Milja Radovic, Yasmin Fedda (Divinity) — “Seeing Through Film, Religion and Ethics”
3.30pm DISCUSSION FORUM 1
Cross-Subject/School/Institution teaching and research supervision: current/future plans
Chair: Kriss Ravetto
With Martine Beugnet, Sarah Colvin, John Lee, Fiona Littleton, Martine Pierquin
4pm INFO and UPDATES
Knowledge Transfer and Exchange: Anne-Sofie Laegran
CFPMA Web Presence: Annette Davison
‘Film in the Public Space’ and the Roberts Fund: Martine Beugnet
4.20pm DISCUSSION FORUM 2
The future of the CFPMA.
Chair: Annette and Martine
What could the Centre usefully do/encourage to support its members and foster research collaboration?
5pm BREAK – and move to David Hume Tower conference room
5.15pm Professor Tim Lenoir, CONTEMPLATING SINGULARITY
Cinet (Cinema Network) talk, followed by wine reception and launch of CFPMA
The talk explores how the postbiological and posthuman future has haunted cultural studies of technoscience for two decades. Concern (and in some quarters enthusiasm) that contemporary technoscience is on a path leading beyond simple human biological improvements and prosthetic enhancements to a complete human makeover has been sustained by the exponential growth in power and capability of computer technology since the early 1990s. The deeper fear is that somehow digital code and computer-mediated communications are getting under our skin, and in the process we are being transformed.
Tim Lenoir is the Kimberly Jenkins Chair for New Technologies and Society at Duke University. He has published several books and articles on the history of biomedical science from the nineteenth century to the present. His more recent work has focused on the introduction of computers into biomedical research from the early 1960s to the present, particularly the development of computer graphics, medical visualization technology, and virtual reality and its applications in surgery and other fields. Lenoir has also been engaged in constructing online digital libraries for a number of projects, including an archive on the history of Silicon Valley. Two recent projects include a web documentary on the history of bioinformatics, funded by the Bern Dibner and Alfred P. Sloan Foundations, and How They Got Game, a history of interactive simulation and video games. With economists Nathan Rosenberg, Henry Rowen, and Brent Goldfarb he has just completed a collaborative study for Stanford University on Stanford’s historical relationship to Silicon Valley, entitled Inventing the Entrepreneurial Region: Stanford and the Co-Evolution of Silicon Valley. In support of these projects, Lenoir has developed software tools for interactive web-based collaboration. In this connection he is currently engaged with colleagues at UC Santa Barbara in developing the NSF-supported Center for Nanotechnology in Society, where he contributes to the effort to document the history and the societal and ethical implications of bionanotechnology.
A few people were interested in the slides from my presentation, so I thought I would include them online:
The videos shown in the slides are available online on our Vimeo and YouTube accounts:
My students in the Digital Media Studio Project here at the University of Edinburgh have asked me to present a small workshop on using some aspects of the Processing.org environment. I’ve worked something up and thought I would share it online as well. I’ve set up a Google Code repository with the necessary files. The code simply highlights what you could find throughout the Processing.org discourse and the OpenCV example files, though it is more thoroughly commented and organized. A few notes: I really dislike the Processing IDE. Maybe that’s just because I’ve used IDEs like Visual Studio, NetBeans, Eclipse, Xcode, etc., and I haven’t played with Processing enough to have a well-founded sense of the functions available. I believe going through a few extra steps to set up an IDE like Eclipse makes programming much easier, though at the cost of a bulky editor that may not be so easy to set up at first…
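To give a flavour of the environment, here is a minimal sketch in the Processing IDE style — a made-up illustration of my own, not taken from the workshop repository — that draws a circle following the mouse:

```java
// Minimal Processing sketch (paste into the Processing IDE and press Run):
// draws an orange circle that follows the mouse.

void setup() {
  size(320, 240);   // open a 320x240 pixel window
  noStroke();       // no outline on shapes
}

void draw() {
  background(0);                     // clear to black each frame
  fill(255, 128, 0);                 // orange fill
  ellipse(mouseX, mouseY, 40, 40);   // 40px circle at the cursor
}
```

Note that this is the Processing dialect: the IDE wraps it into a Java class for you, which is exactly the step you end up doing by hand in Eclipse.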
Eclipse is an IDE (Integrated Development Environment) for many coding languages, one of which is Java. Some advantages:
code completion – automatically see possible choices for all members belonging to a class definition, such as functions and their arguments.
javadocs – javadoc is a simple format for writing code comments. By following it, the javadoc tool can produce a nice HTML document outlining all of a class’s functions, members, arguments, what to expect, etc. While coding, having the javadocs on hand is invaluable, as memorizing all of the members of a class is rarely practical.
browsing libraries – along the same lines, being able to see the definition of a class is much easier than having to memorize all the functions belonging to something like processing.core.PImage – and within the Eclipse environment, you can view the javadocs alongside the libraries.
debug – step through your program and view the stack trace, threads, and all the messy hex numbers.
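To make the javadoc point above concrete, here is a minimal sketch of the comment format (the class and its clamp method are hypothetical examples of mine, not part of the workshop code):

```java
/** A made-up utility class to illustrate the javadoc comment format. */
public class Demo {

    /**
     * Clamps a value to the inclusive range [lo, hi].
     *
     * @param v  the value to clamp
     * @param lo the lower bound
     * @param hi the upper bound
     * @return v limited to the range [lo, hi]
     */
    public static int clamp(int v, int lo, int hi) {
        return Math.max(lo, Math.min(hi, v));
    }

    public static void main(String[] args) {
        System.out.println(clamp(300, 0, 255)); // prints 255
    }
}
```

Running the javadoc tool over a file commented this way produces the browsable HTML pages that Eclipse can then show you inline as you type.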
The biggest disadvantages are that it takes time to set up a project, include libraries, and write the class definitions, none of which you have to do in the Processing IDE. Luckily, there is a nice tutorial for setting up Eclipse to use the Processing libraries: http://processing.org/learning/tutorials/eclipse/ – I recommend going through it thoroughly.
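For reference, a sketch written the Eclipse way is just an ordinary Java class extending PApplet — roughly what the linked tutorial walks you through. The class and file names below are my own, and it assumes Processing’s core.jar is on your build path:

```java
import processing.core.PApplet;

// A Processing-IDE-style sketch rewritten as an explicit Java class for Eclipse.
public class MySketch extends PApplet {

    public void setup() {
        size(320, 240);        // window size, as in the IDE version
    }

    public void draw() {
        background(0);                    // clear each frame
        fill(255, 128, 0);
        ellipse(mouseX, mouseY, 40, 40);  // circle at the cursor
    }

    // Launch the sketch when run as a plain Java application.
    public static void main(String[] args) {
        PApplet.main(new String[] { "MySketch" });
    }
}
```

Everything the Processing IDE hides — the class wrapper, the imports, the main method — is now explicit, which is exactly the extra setup cost I mentioned, but it is also what lets Eclipse give you code completion and javadocs for the Processing API.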