TUTORIAL 5: Detecting color location using the camera

Blockly is a ROS package that provides web-based visualization and block programming tools for robots and drones.

Video images: These are the images captured by the 'Take a picture' block. In the first two the detected center is to the right; in the last two it is to the left.

The first step is to select the color you want to find. The amount of light perceived must be taken into account, so you can either make a guess or use the 'Take a picture' block to view the image captured by the camera.

Then connect the 'Get color location' block to a variable to read the output. To reuse that variable as many times as needed, right-click the variable block and select the 'Duplicate' option.

This workspace creates a simple behaviour where the Erle-Spider turns for 1 second toward the detected side. Watch the video above.

Block output:

-10 to +10: Range of values. [-10 left border, 0 center, 10 right border]

null: Not enough sample color was detected.

First of all, the lower and upper color limits are calculated so that a mask can be applied; this filters the image down to the colors we want to find.

Once 'raspicam_node' is running, the service in charge of capturing data from the camera is called, and the image is built from the data read from it.

After applying the mask, the center of the color is calculated. This gives a coordinate position, which is what we are looking for, expressed as a value from -10 to 10 [-10 left border, 0 center, 10 right border].
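The mapping described above can be sketched as a small Python helper (the function name is our own, not part of the Blockly block):

```python
# Hypothetical helper (our own naming, not part of the Blockly block)
# showing how a pixel column maps onto the -10..10 output range.
def color_location(x_center, width):
    half = width / 2.0
    # 0 at the image center, -10 at the left border, +10 at the right border
    return (x_center - half) * 10.0 / half

print(color_location(0, 640))    # left border  -> -10.0
print(color_location(320, 640))  # center       ->   0.0
print(color_location(640, 640))  # right border ->  10.0
```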

Finally, camera capture is stopped.

# Python libs
import sys, time
import subprocess

# numpy and scipy
import numpy as np
from scipy.ndimage import filters

# OpenCV
import cv2

# Ros libraries
import roslib
import rospy
import rosnode

# Ros Messages
from sensor_msgs.msg import CompressedImage

# colorBGR is the block's input color, a comma-separated 'B,G,R' string
B = colorBGR.split(',')[0]
G = colorBGR.split(',')[1]
R = colorBGR.split(',')[2]

boundary = 70 #calibrate depending on the amount of light
B_low = int(B)-boundary
G_low = int(G)-boundary
R_low = int(R)-boundary
B_up = int(B)+boundary
G_up = int(G)+boundary
R_up = int(R)+boundary

# Clamp the boundaries to the valid 0-255 range
B_low = max(B_low, 0)
G_low = max(G_low, 0)
R_low = max(R_low, 0)
B_up = min(B_up, 255)
G_up = min(G_up, 255)
R_up = min(R_up, 255)

ros_nodes = rosnode.get_node_names()
if '/raspicam_node' in ros_nodes:
    command = 'rosservice call /camera/start_capture'
    process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE)

ros_data = rospy.wait_for_message('/camera/image/compressed', CompressedImage, timeout=5)

#### direct conversion to CV2 ####
np_arr = np.fromstring(ros_data.data, np.uint8)
image = cv2.imdecode(np_arr, 1) #cv2.CV_LOAD_IMAGE_COLOR

# define the list of boundaries in BGR
boundaries = [([B_low,G_low,R_low],[B_up,G_up,R_up])]

# loop over the boundaries
for (lower, upper) in boundaries:
    # create NumPy arrays from the boundaries
    lower = np.array(lower, dtype = "uint8")
    upper = np.array(upper, dtype = "uint8")

    # find the colors within the specified boundaries and apply
    # the mask
    mask = cv2.inRange(image, lower, upper)
    output = cv2.bitwise_and(image, image, mask = mask)

cvImg = cv2.cvtColor(output, cv2.COLOR_BGR2GRAY)
npImg = np.asarray( cvImg )

coordList = np.argwhere( npImg >0 )
numWhitePoints = len( coordList )

if numWhitePoints > 2000: #lower limit
    # Sum the coordinates of every masked pixel to average them below
    X = 0
    Y = 0
    for (x,y) in coordList:
        X += x
        Y += y

    height = np.size(cvImg, 0)
    width = np.size(cvImg, 1)

    X_C = int(X/numWhitePoints)
    Y_C = int(Y/numWhitePoints)

    X_center=Y_C;Y_center=X_C # np.argwhere returns (row, col), so swap axes

    #DEBUG# Write the image with a circle in the center of the color.
    #DEBUG# print("Center point: "+str(X_center)+","+str(Y_center))
    #DEBUG# cv2.circle(image,(X_center,Y_center), 20, (0,255,0), -1)
    #DEBUG# cv2.imwrite("image_center.jpg", image);

    ##### PRINT LOCATION #####
    #print("Image height="+str(height)+", Image width="+str(width))

    if X_center >= width/2: # RIGHT from 0 to +10
        color_location = ((X_center - (width/2))*10)/(width/2)
    else: #LEFT from 0 to -10
        Xnew_center = (width/2) - X_center
        color_location = (-1)*((Xnew_center)*10)/(width/2)

else:
    print("Not enough sample color")
    color_location = None
    #DEBUG# Write the image
    #DEBUG# cv2.imwrite("image_NO_center.jpg", image);

command="rosservice call /camera/stop_capture"
process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE)

DEBUG: If the lines starting with '#DEBUG#' are uncommented, a new image is saved with a circle drawn at the detected color center.

The following workspace shows how values from the block can be interpreted and used in further calculations. This example is hard-coded, but it gives an idea of how the values can be used.

After saving the location value (a float from -10 to 10), the sign of 'location' is checked. The Spider then turns location*9 degrees to the corresponding side, i.e. proportionally from 0 to 90 degrees depending on where the center of the color was read.

Note: "+str(turn)+" is used because the 'Turn degrees' block reads its input as a string.
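The turn arithmetic above can be sketched in plain Python (the function name is our own, for illustration only):

```python
# Sketch (our own naming) of the workspace's arithmetic: the sign of
# 'location' picks the side, and abs(location) * 9 scales the -10..10
# range onto 0..90 degrees. str() mimics the 'Turn degrees' block input.
def turn_command(location):
    if location is None:
        return None  # block returned null: not enough sample color
    side = "right" if location >= 0 else "left"
    turn = abs(location) * 9  # 10 * 9 = 90 degrees at the image border
    return side, str(turn)

print(turn_command(10))   # ('right', '90')
print(turn_command(-5))   # ('left', '45')
```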

Blockly comes preinstalled in Erle-Brain 2.0. Acquire this artificial brain for making robots and drones here.