CARPE supports visualizing eye-movement data in a number of ways. It currently offers low-level feature visualizations, clustering of eye movements, model selection, heat-map visualizations, blending, contour visualizations, peek-through visualizations, movie output, binocular data input, and more.
Performs interactive audio synthesis using a previously trained autoencoder. Visualizes the hidden layer and allows for interaction with hidden units. With Andy Sarroff. See Andy's GitHub (woodshop) for more projects using deep learning and audio synthesis.
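The idea behind the interaction can be sketched in a few lines: encode an audio frame, tweak a hidden-unit activation by hand, then decode to hear the change. This is a minimal NumPy sketch with hypothetical random weights, not the repo's trained model or API.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# toy "pre-trained" weights: 16 audio samples <-> 4 hidden units (hypothetical)
W_enc = rng.normal(scale=0.1, size=(16, 4))
W_dec = rng.normal(scale=0.1, size=(4, 16))

def decode(hidden):
    """Synthesize an audio frame from a hidden-unit activation vector."""
    return hidden @ W_dec

frame = rng.normal(size=16)
hidden = sigmoid(frame @ W_enc)   # encode the frame
hidden[0] = 1.0                   # "interact": force one hidden unit on
resynth = decode(hidden)          # the decoded frame reflects the tweak
```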
CARPE stands for "Computational Algorithmic Representation and Processing of Eye-movements". It supports visualizations of film/eye-movements in columnar format, peek-through visualizations, heatmaps, optical flow analysis, gaze clustering, and many other visualization options. See http://thediemproject.wordpress.com for more details. OS X version.
CARPE stands for "Computational Algorithmic Representation and Processing of Eye-movements". It supports visualizations of film/eye-movements in columnar format, peek-through visualizations, heatmaps, optical flow analysis, gaze clustering, and many other visualization options. See http://thediemproject.wordpress.com for more details. Initial release of CARPE, hosted for historical purposes. Includes many experimental addons, including dROIs, GMM, GPU-Flow, and others. See also CARPE and NSCARPE for more recent, stable versions without the experimental addons.
openFrameworks projects from my course at Srishti School of Art, Design and Technology's Center for Experimental Media: http://pkmital.com/home/teaching/cema-workshop/
(unfinished) Storing data in LevelDB.
3D visualization browser and audio synthesis engine for the Daphne Oram Archive. The archive is held by Goldsmiths, University of London; only the visualization engine is hosted here.
Matlab scripts specific to the DIEM database
Testing EEG encoding/decoding. Produces Gabor wavelets and co-registers the visualization with the EEG recording.
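A Gabor wavelet is a sinusoidal carrier under a Gaussian envelope, the classic stimulus for this kind of visual encoding experiment. This NumPy sketch generates one such patch; the parameter names and defaults are illustrative, not taken from the repo.

```python
import numpy as np

def gabor_patch(size=64, wavelength=8.0, theta=0.0, sigma=8.0, phase=0.0):
    """2D Gabor wavelet: a cosine carrier modulated by a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    # rotate the carrier's axis by theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength + phase)
    return envelope * carrier

patch = gabor_patch()  # 64x64 patch, peak amplitude 1 at the center
```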
For streaming/logging Emotiv EEG headset data.
Testing some encoding/decoding using EEG. Produces various animations that are co-registered with the EEG recording.
Uses cURL to download from Freesound.org
Matlab scripts for handling the IRCAM LISTEN database.
Finger tracking, contour analysis, and several types of shape description for the hand contour, using a 3D ROI around the hand tracked by NITE middleware, OpenCV 2.2, an OpenGL scene, and OpenNI.
LuminOrder: an OS X app that reorders a video's frames by their brightness levels.
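The core of such a reordering is simple: score each frame by mean pixel value, then sort. This NumPy sketch shows that step on synthetic frames; it is an illustration of the idea, not LuminOrder's actual implementation.

```python
import numpy as np

def reorder_by_brightness(frames):
    """Return frame indices sorted from darkest to brightest.

    `frames` has shape (n_frames, height, width[, channels]);
    brightness is taken as the mean pixel value of each frame.
    """
    frames = np.asarray(frames, dtype=float)
    brightness = frames.reshape(len(frames), -1).mean(axis=1)
    return np.argsort(brightness, kind="stable")

# three synthetic 2x2 grayscale frames with mean brightness 200, 10, 100
frames = np.array([np.full((2, 2), v) for v in (200, 10, 100)])
order = reorder_by_brightness(frames)  # darkest first: [1, 2, 0]
```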
Source code for the iOS application Memory Mosaic, available on iTunes: https://itunes.apple.com/us/app/memory-mosaic/id475759669?mt=8 - Performs real-time segmentation, analysis, and retrieval of multiple audio sources such as iTunes, the microphone, and Audiobus connections. Requires pkmAudio, pkmMatrix, and possibly others.
Multiscale Visualization ToolKit
openFrameworks addon for visualizing and interfacing with pre-trained models in Caffe: Convolutional Architectures for Fast Feature Embedding. Requires Caffe, openFrameworks 64-bit, glog, hdf5, OpenCV, CUDA, pkmMatrix, and pkmHeatmap. Pre-trained models not included but can be found linked in Caffe's "Model Zoo" and placed in the bin/data directory of the example project.
Simple way to stream 32-bit float data from iTunes in real time.
ofxOpenCV linking against OpenCV, including prebuilt libraries for OS X.
Ph.D. thesis from Goldsmiths, University of London, entitled "Audiovisual Scene Synthesis". Hosts all images and LaTeX files.
Sound synthesis library. Implements a number of feature databases, including frame-based, segment-based, and sequence-based audio analysis/storage/retrieval. Also includes libraries for GPS-based synthesis, including binauralization retrieval. Complements many other libraries, including pkmBinaural and Memory Mosaic. Heavy dependence on vectorized ops via pkmMatrix and OS X's Accelerate framework.
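The retrieval step in a frame-based synthesis database boils down to finding the stored feature frame nearest to a query frame. This NumPy sketch shows that step under a Euclidean metric; it is a generic illustration, not the library's API (which is C++ over pkmMatrix/Accelerate).

```python
import numpy as np

def nearest_frame(database, query):
    """Return the index of the stored feature frame closest to `query`
    (Euclidean distance): the core step of frame-based retrieval."""
    db = np.asarray(database, dtype=float)
    distances = np.linalg.norm(db - query, axis=1)
    return int(np.argmin(distances))

# tiny hypothetical feature database: three 2D feature frames
db = [[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]]
idx = nearest_frame(db, np.array([0.9, 0.1]))  # closest is frame 1
```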
For performing GPS-based concatenative sound synthesis, ANN retrieval based on GPS locations, and HRTF-based binauralization (mono-to-stereo using FFT-based overlap-add convolution with the IRCAM HRTF database).
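FFT-based overlap-add convolution is the standard way to apply a short impulse response (such as one ear's HRIR) to a long signal: convolve fixed-size blocks in the frequency domain and add the overlapping tails. A minimal NumPy sketch of the technique, not pkmBinaural's actual code; running it once per ear with left/right HRIRs yields the mono-to-stereo case.

```python
import numpy as np

def fft_overlap_add(signal, ir, block=256):
    """Convolve a long mono signal with a short impulse response
    (e.g. one ear of an HRTF) using FFT-based overlap-add."""
    n = len(ir)
    fft_len = 1
    while fft_len < block + n - 1:  # next power of two >= linear conv length
        fft_len *= 2
    IR = np.fft.rfft(ir, fft_len)   # transform the IR once
    out = np.zeros(len(signal) + n - 1)
    for start in range(0, len(signal), block):
        chunk = signal[start:start + block]
        spec = np.fft.rfft(chunk, fft_len) * IR
        seg = np.fft.irfft(spec, fft_len)[:len(chunk) + n - 1]
        out[start:start + len(seg)] += seg  # overlap-add the tail
    return out
```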
Background modeling for foreground subtraction; tracks multiple blobs (people) and their orientations (using the leading motion vector), with a nice visual display for seeing the results. Video demonstration here: http://vimeo.com/22054133 - more info here: http://pkmital.com
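A common way to model a background for foreground subtraction is an exponential running average: pixels that deviate from the model beyond a threshold are flagged as foreground, and the model slowly adapts toward each new frame. This NumPy sketch shows that scheme in general terms; the repo's own model and parameters may differ.

```python
import numpy as np

class RunningBackground:
    """Exponential running-average background model; pixels far from
    the model are flagged as foreground."""

    def __init__(self, alpha=0.05, threshold=25.0):
        self.alpha = alpha          # adaptation rate
        self.threshold = threshold  # foreground decision threshold
        self.model = None

    def apply(self, frame):
        frame = np.asarray(frame, dtype=float)
        if self.model is None:
            self.model = frame.copy()
        mask = np.abs(frame - self.model) > self.threshold
        # adapt the background model toward the current frame
        self.model = (1 - self.alpha) * self.model + self.alpha * frame
        return mask
```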
Tracks overhead using color and maps tracked points to a new geometry using a homography transformation and calibration routine. Some example test videos of an overhead capture are provided in the bin/data directory. The tracking transformation is useful when you need a defined metric space for your tracking parameters, or need to account for different user heights when tracking their paths through a space.
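Once the calibration routine has produced a 3x3 homography H, mapping a tracked image point into the target geometry is a matrix multiply in homogeneous coordinates followed by a perspective divide. A NumPy sketch of that mapping step (the repo computes H itself via its calibration routine; the translation-only H below is just a sanity check):

```python
import numpy as np

def apply_homography(H, points):
    """Map 2D points through a 3x3 homography H (projective transform)."""
    pts = np.asarray(points, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # perspective divide

# pure translation by (5, -2): the simplest possible homography
H = np.array([[1.0, 0.0,  5.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0,  1.0]])
```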
3D object tracking and pose estimation for the iPhone.
Interfacing libcluster for doing Variational Dirichlet Process Gaussian Mixture Models. Depends on Eigen3 and pkmMatrix. Libcluster included.
pkmEXTAudioFileReader and pkmEXTAudioFileWriter provide simple interfaces for reading and writing audio files.
Facial shape modeling, appearance modeling, and head-pose recognition. Uses Jason Mora Saragih's FaceTracker code to track facial landmarks, and GreatYao's aam-library for building/reprojecting the model (which may in fact be an uncited port of Jason's DeMoLib).
pkmFFT provides a simple interface to the Accelerate framework for performing vectorized FFTs. pkmSTFT builds on pkmFFT to perform the Short-Time Fourier Transform efficiently using vectorized ops, and also handles windowing options. pkmDCT provides a simple discrete cosine transform using Accelerate and pkmMatrix.
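The STFT structure that pkmSTFT layers over pkmFFT is: slice the signal into hop-spaced frames, apply a window, and FFT each frame. This NumPy sketch illustrates that structure only; it is not the pkmSTFT API, and the Hann window and sizes are illustrative defaults.

```python
import numpy as np

def stft(x, fft_size=512, hop=256):
    """Short-Time Fourier Transform with a Hann window.

    Returns an (n_frames, fft_size // 2 + 1) array of complex spectra:
    one windowed, real-input FFT per hop-spaced frame.
    """
    window = np.hanning(fft_size)
    n_frames = 1 + (len(x) - fft_size) // hop
    frames = np.stack([x[i * hop:i * hop + fft_size] * window
                       for i in range(n_frames)])
    return np.fft.rfft(frames, axis=1)

# a pure tone at exactly bin 32 of a 512-point FFT
t = np.arange(1024)
spectra = stft(np.sin(2 * np.pi * 32 * t / 512))
```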
More projects are listed on my GitHub.