Goldsmiths, University of London | Workshops in Creative Coding: Computer Vision

Introduction

This is the homepage for the Master’s course Workshops in Creative Coding: Computer Vision at Goldsmiths, University of London’s Department of Computing. We will cover the basics of computer vision within the openFrameworks environment, using the ofxOpenCv addon as well as OpenCV directly. The course is taught over three weeks and ends with a final project due at the end of Reading Week.

Labs

On this page you will find your lab sheets; example solutions to the labs will be posted by the end of each week.

Please make sure you already have a coding environment set up for openFrameworks v007: openFrameworks. openFrameworks v007 includes an addon wrapping OpenCV version 2.2.0, which we will explore extensively; we will also use some features of the OpenCV library directly. Keep the links for the OpenCV documentation and Wiki pages in mind: this page is a good compendium of different resources, and also includes a link to a PDF manual of OpenCV 2.3.1. More information is in each of the lab sheets below.

Final Project

Information regarding your final project can also be found here for [Computational Arts] and [Games] students. You are expected to have formed groups of no more than 6, ready with an idea for discussion, by the start of the second week’s lab (30th/31st of January).

You will be asked to show your finished work to us in the week after reading week (details to follow).

You should also submit a short (500-word) written discussion of your work by midnight on the 19th of February. This should include a brief description of the work, your role in it, and the contribution of each team member. For this last part, state whether you feel the contributions were roughly equal, in which case marks will be allocated equally, or whether any team members contributed significantly more or less.

Other Computer Vision courses

Bob Fisher’s IVR Introduction to Vision and Robotics & AV Advanced Vision classes at University of Edinburgh
James Hays’s CS 143 Introduction to Computer Vision class at Brown
Trevor Darrell’s CS 280 Computer Vision class at University of California, Berkeley
Antonio Torralba’s 6.869 Advances in Computer Vision class at MIT
Kristen Grauman’s CS 378 Computer Vision class at University of Texas, Austin

Recommended Books

Forsyth & Ponce – Computer Vision: A Modern Approach
Russell & Norvig – Artificial Intelligence: A Modern Approach
Bishop – Pattern Recognition and Machine Learning
Ballard & Brown – Computer Vision [free eBook]

Week 1

Introduction

This lab will introduce you to some features of openFrameworks’ ofxOpenCv addon, which wraps OpenCV, and will also get you started with some of OpenCV’s most basic functions used directly.

Lab Sheet

Interactive RGB Colorspace

testApp.h

#pragma once
 
#include "ofMain.h"
#include "ofxOpenCv.h"
 
class testApp : public ofBaseApp{
 
public:
         
    // redeclaration of functions (declared in base class)
    void setup();
    void update();
    void draw();
 
    void keyPressed(int key);
 
    ofVideoGrabber camera;
     
    ofxCvColorImage im_color;
    ofxCvGrayscaleImage im_red, im_green, im_blue;
    ofxCvGrayscaleImage im_gray;
    ofxCvGrayscaleImage im_value;
     
 
    int imgWidth, imgHeight;
     
    bool bShowBlended;
};

testApp.cpp

#include "testApp.h"
 
// here we "define" the methods we "declared" in the "testApp.h" file
 
// i get called once
void testApp::setup(){
     
    // do some initialization
    imgWidth = 320;
    imgHeight = 240;
     
    bShowBlended = false;
     
    // set the size of the window
    ofSetWindowShape(imgWidth * 6, imgHeight);
     
    // the rate at which the program runs (FPS)
    ofSetFrameRate(30);
     
    // setup the camera
    camera.initGrabber(imgWidth, imgHeight);
    im_color.allocate(imgWidth, imgHeight);
    im_red.allocate(imgWidth, imgHeight);
    im_green.allocate(imgWidth, imgHeight);
    im_blue.allocate(imgWidth, imgHeight);
    im_gray.allocate(imgWidth, imgHeight);
    im_value.allocate(imgWidth, imgHeight);
}
 
// i get called in a loop that runs until the program ends
void testApp::update(){
    camera.update();
     
    if(camera.isFrameNew())
    {
        // copy the pixels from the camera object into an ofxCvColorImage object
        im_color.setFromPixels(camera.getPixels(), imgWidth, imgHeight);
         
        im_gray = im_color;
         
        // get each color channel
        im_color.convertToGrayscalePlanarImages(im_red, im_green, im_blue);
         
        im_color.convertRgbToHsv();
        im_color.convertToGrayscalePlanarImage(im_value, 2);
    }
}
 
// i also get called in a loop that runs until the program ends
void testApp::draw(){
    // background values go to 0
    ofBackground(0);
     
    // draw the camera
    ofSetColor(255, 255, 255);
    camera.draw(imgWidth * 0,0);
     
    if(bShowBlended)
    {
        // blending mode for adding pictures together
        ofEnableAlphaBlending();
        ofEnableBlendMode(OF_BLENDMODE_ADD);
         
        // full red energy
        ofSetColor(255, 0, 0);
        im_red.draw(imgWidth * 1,0);
         
        // full green energy
        ofSetColor(0, 255, 0);
         
        // draw using an offset from the center of the window determined by the mouse position
        // (mouseX is in window coordinates, so we center on ofGetWidth() rather than the screen width)
        im_green.draw(imgWidth * 1 + (mouseX - ofGetWidth()/2) / 10.0,0);
         
        // full blue energy
        ofSetColor(0, 0, 255);
         
        // offset just like above, but 2x as much
        im_blue.draw(imgWidth * 1 + (mouseX - ofGetWidth()/2) / 5.0,0);
         
        ofDisableAlphaBlending();
    }
    else
    {
        ofSetColor(255, 0, 0);
        im_red.draw(imgWidth * 1,0);
         
        ofSetColor(0, 255, 0);
        im_green.draw(imgWidth * 2,0);
         
        ofSetColor(0, 0, 255);
        im_blue.draw(imgWidth * 3,0);
    }
     
    ofSetColor(255, 255, 255);
    im_gray.draw(imgWidth * 4, 0);
    im_value.draw(imgWidth * 5, 0);
}
 
void testApp::keyPressed(int key)
{
    switch (key) {
        case 's':
            camera.videoSettings();
            break;
             
        // press space to switch between modes    
        case ' ':
            bShowBlended = !bShowBlended;
            break;
        default:
            break;
    }
}
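As an aside, the “value” plane extracted above (channel 2 of the HSV image) is simply the maximum of the three RGB channels at each pixel. A minimal sketch of that per-pixel conversion in plain C++, independent of openFrameworks (the function name is our own):

```cpp
#include <algorithm>
#include <vector>

// Extract the HSV "value" plane from interleaved RGB pixels:
// for each pixel, V = max(R, G, B).
std::vector<unsigned char> rgbToValue(const std::vector<unsigned char>& rgb,
                                      int width, int height) {
    std::vector<unsigned char> value(width * height);
    for (int i = 0; i < width * height; ++i) {
        unsigned char r = rgb[i * 3 + 0];
        unsigned char g = rgb[i * 3 + 1];
        unsigned char b = rgb[i * 3 + 2];
        value[i] = std::max(r, std::max(g, b));
    }
    return value;
}
```

This is why the value image looks similar to, but slightly brighter than, the grayscale (luminance) image drawn next to it.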

Motion Tracking + Interactive Video Player

testApp.h

#pragma once
 
#include "ofMain.h"
#include "ofxOpenCv.h"
 
class testApp : public ofBaseApp{
 
public:
    void setup();
    void update();
    void draw();    
      
    float                   alpha;
    float                   sum;
    ofVideoGrabber          camera;
 
    ofxCvColorImage         color_img;
 
    ofxCvGrayscaleImage     gray_img;
    ofxCvGrayscaleImage     gray_previous_img;
    ofxCvGrayscaleImage     gray_diff;
 
    ofVideoPlayer           video;
 
    vector<ofxCvGrayscaleImage> previous_imgs;
 
    int                     img_width, img_height;
};

testApp.cpp

#include "testApp.h"
 
using namespace cv;
 
//--------------------------------------------------------------
void testApp::setup(){
 
    // keep variables for our image size
    img_width = 320;
    img_height = 240;
     
    // value for our first order linear filter
    // this controls how much we mix in previous results
    // the larger the value, the larger the weight of the 
    // previous result.  this is a very essential and basic
    // technique in digital signal processing also known as a 
    // low pass filter.
    alpha = 0.5;
     
    // change the window to hold enough space for 2 movies (1 row x 2 columns of movies)
    ofSetWindowShape(img_width * 2, img_height);
     
    ofSetFrameRate(30);
     
    // initialize our camera with a resolution of 320x240
    camera.initGrabber(img_width, img_height);
     
    // load a movie in and set it to loop, and then start it (play())
    video.loadMovie("sunra_pink.mov");
    video.setLoopState(OF_LOOP_NORMAL);
    video.play();
     
    sum = 0;
     
    // these are (wrappers for) opencv image containers 
    // we'll use for image processing
    // we are going to find the difference between successive frames
    color_img.allocate(img_width, img_height);
    gray_img.allocate(img_width, img_height);
    gray_previous_img.allocate(img_width, img_height);
    gray_diff.allocate(img_width, img_height);
    // seed the history buffer so previous_imgs[0] is valid on the first frames
    previous_imgs.push_back(gray_previous_img);
    previous_imgs.push_back(gray_previous_img);
    previous_imgs.push_back(gray_previous_img);
     
}
 
//--------------------------------------------------------------
void testApp::update(){
    // background to black
    ofBackground(0);
     
    // update the camera
    camera.update();
     
    if (camera.isFrameNew()) {
        // set the color image (opencv container) to the camera image
        color_img.setFromPixels(camera.getPixels(), img_width, img_height);
        // convert to grayscale
        gray_img = color_img;
        // calculate the difference image
        gray_diff = gray_img;
        // compute the absolute difference with an earlier frame's grayscale
        // image (the oldest one in our short history buffer)
        gray_diff.absDiff(previous_imgs[0]);
         
        Mat diff_mat(gray_diff.getCvImage());
        Scalar s1 = mean(diff_mat);
         
         
        // store the current grayscale image for the next iteration of update()
        previous_imgs.push_back(gray_img);
        if (previous_imgs.size() > 10) {
            previous_imgs.erase(previous_imgs.begin());
        }
         
        // we could also threshold the difference image, so that all values
        // below 10 become 0 and all values above 10 become 255:
        //gray_diff.threshold(10);
         
        // here we take the mean of all the pixels in the difference image
        // as a simple measure of "motion", and smooth it with a low-pass
        // filter: a first-order filter which combines the current value
        // with the previous one, using a linear weighting
        sum = alpha * sum + (1 - alpha) * s1[0] / 10.0f;
         
         
        // let's change the speed of our movie based on the motion value we calculated
        video.setSpeed(sum);
        video.update();
    }
     
     
}
 
//--------------------------------------------------------------
void testApp::draw(){
    ofEnableAlphaBlending();
    ofEnableBlendMode(OF_BLENDMODE_ADD);
     
    color_img.draw(0, 0, img_width, img_height);
    gray_diff.draw(0, 0, img_width, img_height);
 
    ofDisableAlphaBlending();
     
    video.draw(img_width, 0);
     
    // draw the smoothed motion value (the filtered mean of the difference image)
    char buf[256];
    sprintf(buf, "%f", sum);
    ofDrawBitmapString(buf, 20, 20);
}
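The first-order low-pass filter used above for the motion value can be isolated into a few lines. A minimal sketch in plain C++, with no openFrameworks dependency (the struct name is our own): the larger alpha is, the more weight the previous output carries, so sudden spikes in the input are damped.

```cpp
// First-order low-pass filter (exponential smoothing):
// output = alpha * previous_output + (1 - alpha) * input.
// Larger alpha -> more weight on the past -> smoother, slower response.
struct LowPass {
    float alpha;
    float value;
    explicit LowPass(float a) : alpha(a), value(0.0f) {}
    float update(float input) {
        value = alpha * value + (1.0f - alpha) * input;
        return value;
    }
};
```

With alpha = 0.5 and a constant input, the output converges geometrically toward the input, which is why the on-screen motion number eases toward its target rather than jumping.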

Week 2

Introduction

Last week we looked at colorspaces, how to represent our input image as luminance, and how to reduce frame differences to a single value representing motion for simple motion tracking. This week, we will detect and display image features, and then use those features for detecting any planar object. Get the additional files required for the second part of the lab here: [pkmImageFeatureDetector.zip].

Lab Sheet

Detecting and Drawing Image Features

testApp.h

#pragma once
 
#include "ofMain.h"
#include "ofxOpenCv.h"
 
using namespace cv;
 
class testApp : public ofBaseApp{
 
public:
         
    // redeclaration of functions (declared in base class)
    void setup();
    void setupFeatures();
     
    void update();
    void draw();
 
    void keyPressed(int key);
     
    int width, height;
    float scalar;
     
    ofVideoGrabber camera;
 
    ofxCvColorImage cv_color_img;
    ofxCvGrayscaleImage cv_luminance_img;
     
    Mat mat_image;
     
    unsigned int current_detector; 
    vector<string> feature_detectors;
     
    cv::Ptr<FeatureDetector> feature_detector;
    vector<KeyPoint> keypoints;
};

testApp.cpp

#include "testApp.h"
 
// here we "define" the methods we "declared" in the "testApp.h" file
 
// i get called once
void testApp::setup(){
     
    scalar = 1.0f;
    width = 640;
    height = 480;
     
    camera.initGrabber(width, height);
     
    cv_color_img.allocate(width, height);
    cv_luminance_img.allocate(width, height);
     
    feature_detectors.push_back("SURF");
    feature_detectors.push_back("DynamicSURF");
    feature_detectors.push_back("PyramidSURF");
    feature_detectors.push_back("GridSURF");
    feature_detectors.push_back("SIFT");
    feature_detectors.push_back("STAR");
    feature_detectors.push_back("DynamicSTAR");
    feature_detectors.push_back("PyramidSTAR");
    feature_detectors.push_back("GridSTAR");
    feature_detectors.push_back("FAST");
    feature_detectors.push_back("DynamicFAST");
    feature_detectors.push_back("PyramidFAST");
    feature_detectors.push_back("GridFAST");
    feature_detectors.push_back("GFTT");
    feature_detectors.push_back("PyramidGFTT");
    feature_detectors.push_back("MSER");
    feature_detectors.push_back("PyramidMSER");
    feature_detectors.push_back("HARRIS");
    feature_detectors.push_back("PyramidHARRIS");
     
    current_detector = 0;
    feature_detector = FeatureDetector::create(feature_detectors[current_detector]);
     
    ofSetFrameRate(60.0f);
    ofSetWindowShape(width * scalar, height * scalar);
}
 
// i get called in a loop that runs until the program ends
void testApp::update(){
    camera.update();
    if (camera.isFrameNew()) {
        cv_color_img.setFromPixels(camera.getPixelsRef());
        cv_color_img.convertRgbToHsv();
        cv_color_img.convertToGrayscalePlanarImage(cv_luminance_img, 2);
         
        mat_image = Mat(cv_luminance_img.getCvImage());
         
        keypoints.clear();
        feature_detector->detect(mat_image, keypoints);
    }
}
 
// i also get called in a loop that runs until the program ends
void testApp::draw(){
    ofBackground(0);
     
    ofPushMatrix();
    ofScale(scalar, scalar);
     
    ofSetColor(255, 255, 255);
    camera.draw(0, 0);
     
    ofNoFill();
    ofSetColor(200, 100, 100);
    vector<KeyPoint>::iterator it = keypoints.begin();
    while(it != keypoints.end())
    {
        ofPushMatrix();
        float radius = it->size/2;
        ofTranslate(it->pt.x - radius, it->pt.y - radius, 0);
        ofRotate(it->angle, 0, 0, 1);
        ofRect(0, 0, radius, radius);
        ofPopMatrix();
        it++;
    }
    ofPopMatrix();
     
    ofSetColor(255, 255, 255);
    string draw_string = ofToString(current_detector+1) + string("/") + 
                        ofToString(feature_detectors.size()) + string(": ") +
                        feature_detectors[current_detector];
    ofDrawBitmapString(draw_string, 20, 20);
    draw_string = string("# of features: ") + ofToString(keypoints.size());
    ofDrawBitmapString(draw_string, 20, 35);
    draw_string = string("fps: ") + ofToString(ofGetFrameRate());
    ofDrawBitmapString(draw_string, 20, 50);
}
 
void testApp::keyPressed(int key){
    if(key == 'n')
    {
        current_detector = (current_detector + 1) % feature_detectors.size();
        feature_detector = FeatureDetector::create(feature_detectors[current_detector]);
    }
}
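Several of the detectors listed above ("HARRIS", and "GFTT" by default) score pixels with the Harris corner response: sum the gradient structure tensor over a small window and reward windows where the gradient varies strongly in two directions. A minimal sketch on a raw grayscale array in plain C++, independent of OpenCV (the function name and window size are our own simplifications):

```cpp
#include <vector>

// Harris corner response R = det(M) - k * trace(M)^2, where M is the
// 3x3-window sum of the gradient structure tensor [IxIx IxIy; IxIy IyIy].
// R is large and positive at corners, negative along edges, ~0 in flat areas.
float harrisResponse(const std::vector<float>& img, int w, int h,
                     int cx, int cy, float k = 0.04f) {
    float sxx = 0, syy = 0, sxy = 0;
    for (int y = cy - 1; y <= cy + 1; ++y) {
        for (int x = cx - 1; x <= cx + 1; ++x) {
            if (x < 1 || x >= w - 1 || y < 1 || y >= h - 1) continue;
            // central-difference gradients
            float ix = (img[y * w + (x + 1)] - img[y * w + (x - 1)]) * 0.5f;
            float iy = (img[(y + 1) * w + x] - img[(y - 1) * w + x]) * 0.5f;
            sxx += ix * ix;
            syy += iy * iy;
            sxy += ix * iy;
        }
    }
    float det = sxx * syy - sxy * sxy;
    float trace = sxx + syy;
    return det - k * trace * trace;
}
```

A detector would evaluate this response at every pixel and keep local maxima above a threshold as keypoints; OpenCV's implementations add Gaussian weighting and non-maximum suppression on top of this core idea.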

Detecting Planar Objects

testApp.h

/*
*  Created by Parag K. Mital - http://pkmital.com 
*  Contact: parag@pkmital.com
*
*  Copyright 2011 Parag K. Mital. All rights reserved.
* 
*   Permission is hereby granted, free of charge, to any person
*   obtaining a copy of this software and associated documentation
*   files (the "Software"), to deal in the Software without
*   restriction, including without limitation the rights to use,
*   copy, modify, merge, publish, distribute, sublicense, and/or sell
*   copies of the Software, and to permit persons to whom the
*   Software is furnished to do so, subject to the following
*   conditions:
*   
*   The above copyright notice and this permission notice shall be
*   included in all copies or substantial portions of the Software.
*
*   THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 
*   EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
*   OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
*   NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
*   HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
*   WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
*   FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
*   OTHER DEALINGS IN THE SOFTWARE.
*/
#ifndef _TEST_APP
#define _TEST_APP
 
#include "ofMain.h"
 
#include "pkmImageFeatureDetector.h"
 
#include "ofxOpenCv.h"
 
const int CAM_WIDTH = 320;
const int CAM_HEIGHT = 240;
const int SCREEN_WIDTH = CAM_WIDTH*2;
const int SCREEN_HEIGHT = CAM_HEIGHT + 75;
 
class testApp : public ofBaseApp {
 
    public:
 
    ~testApp();
    void setup();
      
    void update();
    void draw();
    void drawKeypoints(vector<KeyPoint> keypts);
 
    void keyPressed  (int key);
    void mouseMoved(int x, int y );
    void mouseDragged(int x, int y, int button);
    void mousePressed(int x, int y, int button);
    void mouseReleased(int x, int y, int button);
    void windowResized(int w, int h);
     
    ofVideoGrabber          camera;
     
    ofxCvColorImage         color_img, color_roi_img;
    ofxCvGrayscaleImage     gray_search_img, 
                            gray_template_img;
     
    float                   x_start, 
                            x_end, 
                            y_start, 
                            y_end;
     
    cv::Point2f             low_pass_bounding_box[4],
                            prev_pass_bounding_box[4];
     
    float                   alpha;
     
    bool                    choosing_img, 
                            chosen_img;
     
    pkmImageFeatureDetector detector;
     
    vector<cv::KeyPoint>    img_template_keypoints,
                            img_search_keypoints;
     
};
#endif

testApp.cpp

/*
 *  Created by Parag K. Mital - http://pkmital.com 
 *  Contact: parag@pkmital.com
 *
 *  Copyright 2011 Parag K. Mital. All rights reserved.
 * 
 *  Permission is hereby granted, free of charge, to any person
 *  obtaining a copy of this software and associated documentation
 *  files (the "Software"), to deal in the Software without
 *  restriction, including without limitation the rights to use,
 *  copy, modify, merge, publish, distribute, sublicense, and/or sell
 *  copies of the Software, and to permit persons to whom the
 *  Software is furnished to do so, subject to the following
 *  conditions:
 *  
 *  The above copyright notice and this permission notice shall be
 *  included in all copies or substantial portions of the Software.
 *
 *  THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 
 *  EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
 *  OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
 *  NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
 *  HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 *  WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 *  FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
 *  OTHER DEALINGS IN THE SOFTWARE.
 */
 
#include "testApp.h"
//--------------------------------------------------------------
testApp::~testApp(){
}
void testApp::setup(){
     
    // init video input
    camera.initGrabber(CAM_WIDTH,CAM_HEIGHT);
    camera.setUseTexture(true);
     
    // window setup
    ofSetWindowShape(SCREEN_WIDTH, SCREEN_HEIGHT);
    ofSetVerticalSync(true);
    ofSetFrameRate(60);
    ofBackground(0,0,0);
     
    // allocate stuff
    color_img.allocate(CAM_WIDTH, CAM_HEIGHT);
    gray_search_img.allocate(CAM_WIDTH, CAM_HEIGHT);
     
    alpha = 0.6;
     
    choosing_img = false;
    chosen_img = false;
     
}
 
//--------------------------------------------------------------
void testApp::update(){
     
    camera.update();
    if(camera.isFrameNew())
    {
        // get camera img into iplimage
        color_img.setFromPixels(camera.getPixels(), CAM_WIDTH, CAM_HEIGHT);
        color_img.convertRgbToHsv();
        if (chosen_img) {
            color_img.convertToGrayscalePlanarImage(gray_search_img, 2);
            detector.setImageSearch(gray_search_img.getCvImage());
            detector.update();
             
            img_search_keypoints = detector.getImageSearchKeypoints();
             
            // note: drawing calls such as ofCircle() belong in draw(), not here in update()
             
            low_pass_bounding_box[0] = detector.dst_corners[0] * (1-alpha) + prev_pass_bounding_box[0] * alpha;
            low_pass_bounding_box[1] = detector.dst_corners[1] * (1-alpha) + prev_pass_bounding_box[1] * alpha;
            low_pass_bounding_box[2] = detector.dst_corners[2] * (1-alpha) + prev_pass_bounding_box[2] * alpha;
            low_pass_bounding_box[3] = detector.dst_corners[3] * (1-alpha) + prev_pass_bounding_box[3] * alpha;
             
            prev_pass_bounding_box[0] = low_pass_bounding_box[0];
            prev_pass_bounding_box[1] = low_pass_bounding_box[1];
            prev_pass_bounding_box[2] = low_pass_bounding_box[2];
            prev_pass_bounding_box[3] = low_pass_bounding_box[3];
        }
    } 
}
 
//--------------------------------------------------------------
void testApp::draw(){
    ofBackground(0,0,0);
     
    ofSetColor(255, 255, 255);
    // camera image
    camera.draw(0, 0);
     
    if (chosen_img) {
        ofSetColor(200, 100, 100);
        drawKeypoints(img_search_keypoints);
         
         
        ofPushMatrix();
        ofTranslate(CAM_WIDTH, 0, 0);
        ofSetColor(255, 255, 255);
        gray_template_img.draw(0, 0);
        ofSetColor(200, 100, 100);
        drawKeypoints(img_template_keypoints);
        ofPopMatrix();
         
        ofSetColor(200, 200, 200);
         
        ofLine(low_pass_bounding_box[0].x, low_pass_bounding_box[0].y,
               low_pass_bounding_box[1].x, low_pass_bounding_box[1].y);
         
        ofLine(low_pass_bounding_box[2].x, low_pass_bounding_box[2].y,
               low_pass_bounding_box[1].x, low_pass_bounding_box[1].y);
         
        ofLine(low_pass_bounding_box[2].x, low_pass_bounding_box[2].y,
               low_pass_bounding_box[3].x, low_pass_bounding_box[3].y);
         
        ofLine(low_pass_bounding_box[0].x, low_pass_bounding_box[0].y,
               low_pass_bounding_box[3].x, low_pass_bounding_box[3].y);
         
    }
     
     
    // draw a rectangle around the current selection
    if (choosing_img) {
        int x = mouseX;
        int y = mouseY;
         
        ofNoFill();
        ofRect(x_start < x ? x_start : x, 
               y_start < y ? y_start : y, 
               abs(x_start - x), 
               abs(y_start - y));
         
    }
     
     
}
 
void testApp::drawKeypoints(vector<KeyPoint> keypts)
{
    vector<KeyPoint>::iterator it = keypts.begin();
    while(it != keypts.end())
    {
        ofPushMatrix();
        float radius = it->size/2;
        ofTranslate(it->pt.x - radius, it->pt.y - radius, 0);
        ofRotate(it->angle, 0, 0, 1);
        ofRect(0, 0, radius, radius);
        ofPopMatrix();
        it++;
    }
 
}
 
 
//--------------------------------------------------------------
void testApp::keyPressed  (int key){
     
    switch (key){           
        case 's':
            camera.videoSettings();
            break;
        case 'n':
        {
            detector.changeDetector();
            if(chosen_img)
               img_template_keypoints = detector.getImageTemplateKeypoints();
            break;
        }
        case '1':
            break;
        case '2':
            break;
             
        case 'b':
            break;
             
    }
}
 
//--------------------------------------------------------------
void testApp::mouseMoved(int x, int y ){
}
 
//--------------------------------------------------------------
void testApp::mouseDragged(int x, int y, int button){
}
 
//--------------------------------------------------------------
void testApp::mousePressed(int x, int y, int button){
     
    // start a rectangle selection
    if(!choosing_img)
    {
        choosing_img = true;
        x_start = x;
        y_start = y;
    }
}
 
//--------------------------------------------------------------
void testApp::mouseReleased(int x, int y, int button){
     
    // end the rectangle selection
    if (choosing_img) {
        choosing_img = false;
        x_end = x;
        y_end = y;
         
        if(x_start > x_end)
            std::swap(x_start, x_end);
        if(y_start > y_end)
            std::swap(y_start, y_end);
         
        int w = x_end - x_start;
        int h = y_end - y_start;
         
         
        cvSetImageROI(color_img.getCvImage(), 
                      cvRect(x_start, 
                             y_start, 
                             w, h));
         
        if (color_roi_img.bAllocated) {
            gray_template_img.clear();
            color_roi_img.clear();
        }
        gray_template_img.allocate(w, h);
        color_roi_img.allocate(w, h);
        color_roi_img = color_img;
        color_roi_img.convertToGrayscalePlanarImage(gray_template_img, 2);
        cvResetImageROI(color_img.getCvImage());
 
        detector.setImageTemplate(gray_template_img.getCvImage());
         
        img_template_keypoints = detector.getImageTemplateKeypoints();
         
        chosen_img = true;
    }
     
}
 
//--------------------------------------------------------------
void testApp::windowResized(int w, int h){
     
}
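The dst_corners used above presumably come from projecting the template's corners through the homography the detector estimates between template and search keypoints. Applying a 3x3 homography H to a 2D point is just a matrix multiply in homogeneous coordinates followed by a perspective divide. A minimal sketch in plain C++ (names are our own; this is not the pkmImageFeatureDetector API):

```cpp
struct Pt { float x, y; };

// Apply a 3x3 homography (row-major h[9]) to a 2D point:
// [x', y', w']^T = H * [x, y, 1]^T, then divide by w'.
Pt applyHomography(const float h[9], Pt p) {
    float xp = h[0] * p.x + h[1] * p.y + h[2];
    float yp = h[3] * p.x + h[4] * p.y + h[5];
    float wp = h[6] * p.x + h[7] * p.y + h[8];
    return { xp / wp, yp / wp };
}
```

Mapping the four template corners this way gives the quadrilateral that the low-pass filter above then smooths over time.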

Week 3

Introduction

This week we will develop a system for blob detection and tracking.
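Blob detection typically starts from a thresholded binary image (like the thresholded difference image from Week 1) and groups connected foreground pixels into labeled regions. A minimal connected-component sketch in plain C++, independent of OpenCV's contour finder (the function name is our own):

```cpp
#include <queue>
#include <vector>

// Label 4-connected foreground regions ("blobs") in a binary image.
// Returns the pixel count of each blob found, in scan order.
std::vector<int> findBlobs(const std::vector<int>& binary, int w, int h) {
    std::vector<int> sizes;
    std::vector<bool> visited(w * h, false);
    for (int start = 0; start < w * h; ++start) {
        if (binary[start] == 0 || visited[start]) continue;
        // flood-fill from this seed pixel with a breadth-first search
        int count = 0;
        std::queue<int> frontier;
        frontier.push(start);
        visited[start] = true;
        while (!frontier.empty()) {
            int p = frontier.front();
            frontier.pop();
            ++count;
            int x = p % w, y = p / w;
            const int nx[4] = {x - 1, x + 1, x, x};
            const int ny[4] = {y, y, y - 1, y + 1};
            for (int i = 0; i < 4; ++i) {
                if (nx[i] < 0 || nx[i] >= w || ny[i] < 0 || ny[i] >= h) continue;
                int q = ny[i] * w + nx[i];
                if (binary[q] != 0 && !visited[q]) {
                    visited[q] = true;
                    frontier.push(q);
                }
            }
        }
        sizes.push_back(count);
    }
    return sizes;
}
```

In practice you would also record each blob's bounding box and centroid, and track blobs over time by matching centroids between frames; the pkmBlobTracker files for this lab wrap that kind of machinery.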

Lecture Slides

Lab Sheet

Make sure you download the additional files required for Part 2 of the lab here: [pkmBlobTracker.zip].