CARPE supports a number of ways to visualize eye-movement data: low-level feature visualizations, clustering of eye movements, model selection, heat-map visualizations, blending, contour visualizations, peek-through visualizations, movie output, binocular data input, and more.
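As an illustration of the heat-map idea (not CARPE's actual implementation, which is GPU-accelerated C++), a fixation heat map can be built by accumulating a Gaussian kernel at each gaze point; the function name and parameters below are hypothetical:

```python
import math

def heatmap(fixations, width, height, sigma=2.0):
    """Accumulate a Gaussian kernel at each (x, y) fixation point.

    Returns a height x width grid of floats; overlapping fixations
    sum, so dense gaze regions become 'hot'.
    """
    grid = [[0.0] * width for _ in range(height)]
    for fx, fy in fixations:
        for y in range(height):
            for x in range(width):
                d2 = (x - fx) ** 2 + (y - fy) ** 2
                grid[y][x] += math.exp(-d2 / (2.0 * sigma ** 2))
    return grid

# two nearby fixations: their kernels overlap and reinforce
hm = heatmap([(4, 4), (5, 4)], 10, 8)
peak = max(max(row) for row in hm)
```

A real visualizer would then normalize the grid and map it through a color ramp before blending it over the stimulus video.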
Perform interactive audio synthesis using an autoencoder
Computational and Algorithmic Representation and Processing of Eye-movements - OSX Version
(unfinished) Storing data in LevelDB
Matlab scripts specific to the DIEM database
Testing EEG encoding/decoding. Produces Gabor wavelets and co-registers the visualization with the EEG recording.
For streaming/logging Emotiv EEG headset data
Testing encoding/decoding using EEG. Produces various animations that are co-registered with the EEG recording.
Uses cURL to download from Freesound.org
Matlab scripts for handling IRCAM LISTEN database
Does finger tracking, contour analysis, and different types of shape description for the hand contour using a 3D ROI around the hand-tracked by NITE middleware, using OpenCV 2.2, an OpenGL scene, and OpenNI.
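One of the simplest shape descriptors for a closed contour like the hand outline is its area via the shoelace formula; this sketch is only illustrative of that class of descriptor, not the repository's actual OpenCV-based analysis:

```python
def polygon_area(contour):
    """Shoelace formula: signed area of a closed polygon, made absolute.

    contour: list of (x, y) vertices in order around the shape.
    """
    area = 0.0
    n = len(contour)
    for i in range(n):
        x0, y0 = contour[i]
        x1, y1 = contour[(i + 1) % n]  # wrap back to the first vertex
        area += x0 * y1 - x1 * y0
    return abs(area) / 2.0

# a 2x1 rectangle has area 2
area = polygon_area([(0, 0), (2, 0), (2, 1), (0, 1)])
```

Descriptors like area, perimeter, and convexity defects (which OpenCV computes from a contour and its convex hull) are what distinguish, e.g., an open hand from a fist.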
Memory Mosaic iOS App
Multiscale Visualization ToolKit
CARPE using NSWindow, Multiple Heatmaps, XML file settings, Difference of Heatmaps, GPU optimizations
Simple way to stream 32-bit float data from iTunes in real time
ofxOpenCV linking against OpenCV 184.108.40.206, including libraries for OSX
Hopefully this will be a completed PhD Thesis...
Various classes for performing concatenative sound synthesis. Probably not useful to anyone else without a serious amount of time to understand the code.
For performing GPS-based concatenative sound synthesis: ANN retrieval based on GPS location, and HRTF-based binauralization (mono->stereo using HRTF FFT-based overlap-add convolution) with the IRCAM HRTF Database
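The overlap-add structure mentioned above can be sketched in a few lines: partition the input into blocks, convolve each block with the impulse response, and add the overlapping tails back together. This toy version uses direct convolution per block purely for clarity; the real benefit comes from doing each block's convolution via FFT. All names here are hypothetical:

```python
def convolve(x, h):
    """Direct (brute-force) linear convolution."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def overlap_add(x, h, block=4):
    """Convolve x with h block by block, summing the overlapping tails.

    Each block's partial result is len(block) + len(h) - 1 samples long,
    so consecutive results overlap by len(h) - 1 samples.
    """
    y = [0.0] * (len(x) + len(h) - 1)
    for start in range(0, len(x), block):
        seg = convolve(x[start:start + block], h)
        for k, v in enumerate(seg):
            y[start + k] += v
    return y

ir = [0.5, 0.25, 0.125]            # toy impulse response (stand-in for an HRTF)
sig = [1.0, 0.0, 0.0, 0.0, 1.0, 0.0]
out = overlap_add(sig, ir)
```

Block-wise processing is what makes real-time convolution with long HRTFs feasible: only one block of input needs to be buffered before output can begin.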
Background modeling for foreground segmentation; tracks multiple blobs (people) and their orientations (using the leading motion vector), with a nice visual display of the results. Video demonstration here: http://vimeo.com/22054133 - more info here: http://pkmital.com
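The core of such background modeling can be sketched as an exponential running average: the background estimate drifts toward each new frame, and pixels that differ from it by more than a threshold are marked as foreground. This is a minimal one-dimensional sketch with hypothetical names, not the repository's actual model:

```python
def update_background(bg, frame, alpha=0.05):
    """Exponential running average: bg drifts slowly toward the current frame."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=0.5):
    """A pixel is foreground if it deviates from the background model."""
    return [abs(f - b) > thresh for b, f in zip(bg, frame)]

# let the model adapt to a static scene of brightness 0.2 ...
bg = [0.0, 0.0, 0.0, 0.0]
for _ in range(20):
    bg = update_background(bg, [0.2, 0.2, 0.2, 0.2])

# ... then a bright object appears at pixel 2
mask = foreground_mask(bg, [0.2, 0.2, 5.0, 0.2])
```

The learning rate `alpha` trades off how quickly lighting changes are absorbed into the background against how long a stationary person remains detected.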
Track overhead video using color, and map tracked points to a new geometry using a homography transformation and a calibration routine. Some example test videos of an overhead capture are provided in the bin/data directory. The tracking transformation is useful when you need a defined metric space for your tracking parameters, or need to account for different user heights when tracking paths through a space.
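Once calibration has produced a 3x3 homography matrix H (e.g. via OpenCV's `findHomography`), mapping a tracked image point into the new geometry is a matrix-vector product followed by a projective divide. A minimal sketch, with a hypothetical function name:

```python
def apply_homography(H, x, y):
    """Map image point (x, y) through a 3x3 homography H.

    The point is lifted to homogeneous coordinates (x, y, 1),
    multiplied by H, and divided by the resulting w component.
    """
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w

# toy homography: uniform scale by 2 (a real calibrated H would also
# encode rotation, translation, and perspective terms)
H = [[2.0, 0.0, 0.0],
     [0.0, 2.0, 0.0],
     [0.0, 0.0, 1.0]]
pt = apply_homography(H, 3.0, 4.0)
```

The projective divide by `w` is what lets the same formula handle true perspective mappings, not just affine ones.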
3d Object Tracking and Pose Estimation for the iPhone
Interfacing libcluster for variational Dirichlet process Gaussian mixture models
pkmEXTAudioFileReader and pkmEXTAudioFileWriter provide simple interfaces for reading and writing audio files.
Facial shape modeling, appearance modeling, and head pose recognition. Uses Jason Mora Saragih's FaceTracker code to track facial landmarks, and GreatYao's aam-library for building/reprojecting the model (which may in fact be an uncited port of Jason's DeMoLib).
pkmFFT and pkmSTFT provide simple interfaces to the Accelerate.framework for performing vectorized FFT/STFT
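The framing step an STFT wrapper performs can be sketched as follows: slide a window along the signal at a fixed hop, multiply each segment by a taper (here a Hann window), and hand each windowed frame to the FFT (the step pkmFFT delegates to Accelerate's vDSP). This is an illustrative sketch with hypothetical names, not pkmSTFT's API:

```python
import math

def stft_frames(signal, win_size=8, hop=4):
    """Slice a signal into Hann-windowed, hop-separated frames.

    Each returned frame would then go through an FFT to give one
    column of the spectrogram.
    """
    window = [0.5 - 0.5 * math.cos(2 * math.pi * n / win_size)
              for n in range(win_size)]
    frames = []
    for start in range(0, len(signal) - win_size + 1, hop):
        seg = signal[start:start + win_size]
        frames.append([s * w for s, w in zip(seg, window)])
    return frames

# a constant signal of 16 samples yields 3 overlapping frames
frames = stft_frames([1.0] * 16)
```

With a 50% hop and a Hann window, overlapping frames sum back to a constant, which is what makes clean resynthesis after spectral processing possible.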
pkmMatrix provides a lightweight Matrix class using the Accelerate.framework for vectorized operations
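The shape of such a lightweight matrix class, flat row-major storage behind a small multiply/index interface, can be sketched as below. This is a conceptual sketch only; pkmMatrix itself is C++ and dispatches these loops to Accelerate's vectorized routines:

```python
class Mat:
    """Minimal row-major matrix: a flat list plus (rows, cols)."""

    def __init__(self, rows, cols, data=None):
        self.rows, self.cols = rows, cols
        self.data = list(data) if data else [0.0] * (rows * cols)

    def __getitem__(self, rc):
        r, c = rc
        return self.data[r * self.cols + c]

    def multiply(self, other):
        """Naive triple-loop product; the real class calls BLAS-style
        vectorized kernels instead."""
        assert self.cols == other.rows
        out = Mat(self.rows, other.cols)
        for r in range(self.rows):
            for c in range(other.cols):
                out.data[r * out.cols + c] = sum(
                    self[r, k] * other[k, c] for k in range(self.cols))
        return out

a = Mat(2, 3, [1, 2, 3, 4, 5, 6])
b = Mat(3, 2, [7, 8, 9, 10, 11, 12])
c = a.multiply(b)
```

Flat row-major storage is the key design choice: it hands contiguous memory straight to vectorized backends with no per-row indirection.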
Interface for OpenCV's TV-L1 duality-based optical flow. Adds some tricks to speed up processing, allows analysis of the spatio-temporal changes in flow, and visualizes the results.
Phase vocoder made simple
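The central piece of arithmetic in any phase vocoder is comparing each FFT bin's measured phase increment per hop against the increment expected at the bin's center frequency; the wrapped deviation refines the bin's frequency estimate. A minimal sketch of that step, with hypothetical function names:

```python
import math

def phase_advance(bin_index, fft_size, hop):
    """Expected phase increment of FFT bin k over one hop of `hop` samples."""
    return 2.0 * math.pi * bin_index * hop / fft_size

def true_freq(bin_index, fft_size, hop, sr, measured_dphi):
    """Refine a bin's frequency from its measured phase increment.

    The deviation from the expected increment is wrapped to (-pi, pi]
    and converted back into a fractional-bin frequency offset.
    """
    dev = measured_dphi - phase_advance(bin_index, fft_size, hop)
    dev -= 2.0 * math.pi * round(dev / (2.0 * math.pi))  # principal value
    frac_bin = bin_index + dev * fft_size / (2.0 * math.pi * hop)
    return frac_bin * sr / fft_size
```

For time-stretching, the vocoder then re-accumulates these refined frequencies at a different hop size on synthesis, keeping partials phase-coherent while the duration changes.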
Calibrate head pose with respect to a screen (television/monitor) for an attention-based measure. Uses Jason Mora Saragih's FaceTracker, please contact him for the code.
More projects are listed on my GitHub.