Object Detection with Cozmo - This video shows object detection through color tracking by Cozmo. Using its color camera, the robot can filter a captured image by a specific color range and look for a blob in the resulting mask in order to detect an object. In this video, Cozmo is looking for blue cubes in the scene. The robot counts the cubes it sees and then reports the number back to the user. The code is a simple Python script that uses the OpenCV library to process the video stream from the robot's camera.

Color Detection by Cozmo

Edge detection by combining a Gaussian filter and a Sobel operator - For more reliable edge detection, first apply a Gaussian filter to the intensity image to smooth it and remove noise that could otherwise be detected as edges. Then apply a Sobel operator in both the vertical and horizontal directions. The example below is related to the book "Handling Uncertainty and Networked Structures in Robot Control" (Springer), whose Chapter 4 is titled "Visuospatial Skill Learning".
In MATLAB this can be achieved quite easily:

% Load the image, convert to grayscale, and scale to [0,1]
Ig = im2double(rgb2gray(imread('test.jpg')));
% 10x10 Gaussian kernel with standard deviation 8
G = fspecial('gaussian',[10 10],8);
IG = imfilter(Ig,G,'same');
% Sobel edge detection with a fixed sensitivity threshold
Iedge = edge(IG,'sobel',0.008);

Cross-Entropy Method for Automatic Thresholding - Otsu's method is commonly used to compute a global image threshold. Here I use the Cross-Entropy Method to perform the same optimization and find the global threshold. In this case, the objective function could be a simple dissimilarity measure that sums the absolute differences between the pixels of the gray image and the corresponding black-and-white image. To stay consistent, however, I used the same objective function Otsu optimizes: the inter-class variance.
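As a rough illustration, the sketch below runs the Cross-Entropy Method over candidate thresholds, scoring each by the inter-class variance; the sample counts, elite size, iteration budget, and toy bimodal image are arbitrary choices for the example, not the actual experiment:

```python
import numpy as np

def between_class_variance(gray, t):
    """Otsu's objective: inter-class variance for threshold t."""
    fg, bg = gray[gray > t], gray[gray <= t]
    if fg.size == 0 or bg.size == 0:
        return 0.0
    w1, w0 = fg.size / gray.size, bg.size / gray.size
    return w0 * w1 * (fg.mean() - bg.mean()) ** 2

def cem_threshold(gray, n_samples=50, n_elite=10, n_iter=30, seed=0):
    """Cross-Entropy Method search over thresholds (a sketch)."""
    rng = np.random.default_rng(seed)
    mu, sigma = 128.0, 64.0
    for _ in range(n_iter):
        # Sample candidate thresholds, clipped to the valid gray range.
        t = np.clip(rng.normal(mu, sigma, n_samples), 0, 255)
        scores = np.array([between_class_variance(gray, ti) for ti in t])
        elite = t[np.argsort(scores)[-n_elite:]]
        # Refit the sampling distribution to the elite candidates.
        mu, sigma = elite.mean(), elite.std() + 1e-6
        if sigma < 0.5:
            break
    return mu

# Bimodal toy "image": dark mode near 40, bright mode near 200.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(40, 10, 5000),
                      rng.normal(200, 10, 5000)]).clip(0, 255)
t = cem_threshold(img)
print(round(t))   # threshold lands between the two modes
```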


Color Alignment/Calibration for Stereo Cameras - To perform color alignment, one option is to use a reference image and align both cameras with respect to it. The other option is to treat one of the cameras as the reference and align the colors of the second camera to the first. Below, the first row shows the left and right images of the stereo camera, the second row the sampled data from both images, and finally the right camera aligned w.r.t. the left camera. The method uses a least-squares fitting approach.
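The least-squares fitting step can be sketched as follows, assuming a simple per-channel gain/offset model between corresponding color samples (the actual project may use a richer model):

```python
import numpy as np

def fit_color_alignment(src_samples, ref_samples):
    """Least-squares fit of a per-channel gain/offset map src -> ref.
    Both inputs are (N, 3) arrays of corresponding color samples."""
    gains, offsets = [], []
    for c in range(3):
        # Solve min ||a*src + b - ref||^2 for channel c.
        A = np.stack([src_samples[:, c], np.ones(len(src_samples))], axis=1)
        a, b = np.linalg.lstsq(A, ref_samples[:, c], rcond=None)[0]
        gains.append(a)
        offsets.append(b)
    return np.array(gains), np.array(offsets)

def apply_alignment(img, gains, offsets):
    return np.clip(img * gains + offsets, 0, 255)

# Synthetic example: the "right" camera is darker and biased.
rng = np.random.default_rng(0)
left = rng.uniform(0, 255, (100, 3))
right = 0.8 * left + 10                  # simulated color mismatch
g, o = fit_color_alignment(right, left)
aligned = apply_alignment(right, g, o)
print(np.abs(aligned - left).max() < 1e-6)   # True: left colors recovered
```

In practice the sample pairs would come from matched regions of the rectified left and right images rather than a synthetic transform.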


A Simple Object Detection - After applying a median filter to a grayscale image, an adaptive Gaussian thresholding technique is used to detect the contours of the objects. From the extracted contours, a rough boundary for each object is identified. The process is done with the OpenCV library in Python as follows:

import cv2
# Read as grayscale and suppress noise with a 5x5 median filter
img_filt = cv2.medianBlur(cv2.imread('f.jpg', 0), 5)
# Adaptive Gaussian thresholding over an 11x11 neighborhood
img_th = cv2.adaptiveThreshold(img_filt, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 2)
# Extract the contours of the thresholded regions (OpenCV 4.x)
contours, hierarchy = cv2.findContours(img_th, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

Foam Detection in a Robotic Chain Cleaning Task - The vision module works on the point-cloud data from the sensor. The goal is to detect the pose of the foam on top of a couple of mock-up chain links. After the position and boundary of the foam spots are detected, the robot cleans the chain. An HSI conditional removal is used to filter the foam by color, and a Euclidean clustering method is applied to extract each spot as one cluster. Finally, a rejection rule is employed to distinguish light reflections on the chain surface from real foam.
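A simplified stand-in for this pipeline is sketched below in plain NumPy: an HSI-style filter keeps low-saturation, high-intensity (white) points, and a naive Euclidean clustering groups them into spots. The thresholds, the toy point cloud, and the clustering routine are illustrative assumptions, and the reflection-rejection rule is not reproduced here:

```python
import numpy as np

def hsi_foam_filter(colors, s_max=0.2, i_min=0.7):
    """Keep points whose color looks like white foam: low HSI
    saturation, high intensity. Thresholds are illustrative."""
    i = colors.mean(axis=1)
    s = np.where(i > 0, 1.0 - colors.min(axis=1) / np.maximum(i, 1e-9), 0.0)
    return (s <= s_max) & (i >= i_min)

def euclidean_cluster(points, tol, min_size):
    """Naive stand-in for Euclidean cluster extraction:
    grow each cluster by absorbing points within `tol`."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        queue = [unvisited.pop()]
        cluster = list(queue)
        while queue:
            i = queue.pop()
            near = [j for j in unvisited
                    if np.linalg.norm(points[j] - points[i]) <= tol]
            unvisited.difference_update(near)
            queue += near
            cluster += near
        if len(cluster) >= min_size:
            clusters.append(cluster)
    return clusters

# Toy cloud: two white foam spots plus gray chain points.
rng = np.random.default_rng(0)
spot1 = rng.normal([0.0, 0.0, 0.0], 0.005, (30, 3))
spot2 = rng.normal([0.3, 0.0, 0.0], 0.005, (30, 3))
chain = rng.uniform(-0.2, 0.5, (40, 3))
pts = np.vstack([spot1, spot2, chain])
cols = np.vstack([np.full((60, 3), 0.9),              # white foam
                  np.full((40, 3), [0.4, 0.4, 0.45])])  # gray metal
keep = hsi_foam_filter(cols)
clusters = euclidean_cluster(pts[keep], tol=0.05, min_size=5)
print(len(clusters))   # one cluster per foam spot
```

A real implementation would typically use a point-cloud library's conditional removal and cluster extraction with a spatial index instead of this O(n^2) search.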