
Optical Flow with OpenCV

So I wanted to play with video in OpenCV and also to get started with motion tracking.

Capturing video in OpenCV is straightforward. Below is a snippet of code to capture and play video in C++.

#include <opencv2/opencv.hpp>
#include <cstdio>

using namespace cv;

int main(int argc, char** argv) {
    VideoCapture capture;
    // Open a video file if a path was given, otherwise fall back to camera 0
    if (argc > 1) capture.open(argv[1]);
    else capture.open(0);

    if (!capture.isOpened()) {
        printf("Video not opened\n");
        return -1;
    }

    Mat frame;
    namedWindow("video", 1);
    while (true) {
        capture >> frame;          // grab the next frame
        if (frame.empty()) break;  // end of file or camera error
        imshow("video", frame);    // (frame-rate limiting with waitKey() is added below)
    }
    return 0;
}

The code tries to open a video file if one is passed as an argument, otherwise it defaults to video device 0. A simple check then makes sure the file or camera stream opened successfully. Once opened, a Mat is created to hold each frame, a window is created, and a while loop repeatedly grabs a frame from the capture and shows it in the window.

As it currently stands, the program grabs frames as fast as it can, much faster than the ~30fps most cameras supply (and without a waitKey() call the window never gets a chance to update). To limit the loop to 30fps we add waitKey(33); after the imshow() call: 1/30 s ≈ 33 ms, hence 33.
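For reference, the display loop with the delay added would look something like this (the Esc-to-exit check is my own addition, not part of the original snippet):

while (true) {
    capture >> frame;
    if (frame.empty()) break;
    imshow("video", frame);
    // The ~33 ms pause caps the loop at roughly 30fps and lets HighGUI
    // process window events; pressing Esc (key code 27) exits.
    if (waitKey(33) == 27) break;
}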

So now we have a simple program playing a video from a file or a camera.

Next is object tracking. To be honest this is a big jump in complexity, but with the various articles and tutorials available online you can get started relatively easily. There are several methods of motion detection and tracking; one of them is Lucas-Kanade optical flow. It is a basic method of visualising motion from two successive frames. The objective is to find distinctive features in the first frame that are suitable for tracking, then try to find those same features in the next frame. Once both positions are known, the difference between them is used to visualise the flow of movement within the frame.
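For the curious, the core of the maths (see the Wikipedia article [2]) is the brightness constancy assumption: a pixel keeps its intensity as it moves between the two frames, which for a flow vector (u, v) gives

I_x * u + I_y * v + I_t = 0

where I_x and I_y are the spatial image gradients and I_t is the temporal difference between the frames. That is one equation with two unknowns, so Lucas-Kanade additionally assumes the flow is constant over a small window around each feature and solves the resulting over-determined system by least squares.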

There are some inherent assumptions: the camera's field of view is constant and only foreground objects are in motion. If the camera were panning, the whole frame would shift and optical flow would be nearly useless for detecting movement. Having said that, there are more advanced methods which take the camera's motion into account and extract only the relative movements within the frame.

During my research I came across a lecture from Stanford University on implementing optical flow in OpenCV [1]. The lecture provides a nice explanation of the method (including some of the maths) and a step-by-step guide to implementing the optical flow algorithm in OpenCV.

OpenCV implements the Lucas-Kanade optical flow method and provides wrapper functions to find the features and run the algorithm, so you can use a fairly complex algorithm without having to study the maths! The key pieces are goodFeaturesToTrack(), calcOpticalFlowPyrLK() and the TermCriteria class.
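Below is a minimal sketch of how these pieces might fit together. It is not the Stanford tutorial's exact code; the feature count, window size and termination criteria values are assumptions of mine and would need tuning (re-detecting features every frame also keeps the sketch simple at the cost of speed).

#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;

int main(int argc, char** argv) {
    VideoCapture capture;
    if (argc > 1) capture.open(argv[1]);
    else capture.open(0);
    if (!capture.isOpened()) return -1;

    Mat frame, gray, prevGray;
    std::vector<Point2f> prevPts, nextPts;

    while (true) {
        capture >> frame;
        if (frame.empty()) break;
        cvtColor(frame, gray, COLOR_BGR2GRAY);

        if (!prevGray.empty()) {
            // Find up to 100 strong corners in the previous frame
            goodFeaturesToTrack(prevGray, prevPts, 100, 0.01, 10);

            if (!prevPts.empty()) {
                std::vector<uchar> status;
                std::vector<float> err;
                // Track those corners into the current frame with pyramidal Lucas-Kanade
                calcOpticalFlowPyrLK(prevGray, gray, prevPts, nextPts, status, err,
                                     Size(21, 21), 3,
                                     TermCriteria(TermCriteria::COUNT | TermCriteria::EPS, 30, 0.01));

                // Draw a line from each feature's old position to its new one
                for (size_t i = 0; i < prevPts.size(); i++) {
                    if (status[i])
                        line(frame, Point(prevPts[i]), Point(nextPts[i]), Scalar(0, 255, 0), 2);
                }
            }
        }

        imshow("optical flow", frame);
        if (waitKey(33) == 27) break;  // Esc to quit
        prevGray = gray.clone();
    }
    return 0;
}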

Naturally, I was running this on my Raspberry Pi. I have to say the performance wasn't great, but that could be down to my cheap USB webcam. With about 100 features to track per frame, my program was achieving 2fps (yes, TWO) at 640x320px resolution. I am hoping that with the RPi's camera module this can be bumped up to something reasonable.

With this I am one step closer to my security project 🙂

[1]: Stanford lecture: http://ai.stanford.edu/~dstavens/cs223b/stavens_opencv_optical_flow_2007.pdf

[2]: Wikipedia article: http://en.wikipedia.org/wiki/Lucas%E2%80%93Kanade_method
