Corner Detection with OpenCV

Tram Ho

1. Preamble

Corner detection is used in many computer vision problems, such as image matching and object detection.
What are its applications in image processing? In an image matching problem, suppose the two images below are photos of the same scene taken from different viewpoints. When we look for matching feature points between the two images in order to stitch them together, we can find countless candidate points. Which points actually matter? Much like the human eye, when we judge the similarity between two images we compare corner positions, while frequently repeated edge features cannot serve as distinctive features. Or, in license plate recognition, simply using an algorithm that finds the four corners of the plate lets us extract the plate region much faster than methods based on Deep Learning.

Corner detection appears all over image processing, so in this article I will review two popular algorithms: Harris Corner Detection and Shi-Tomasi Corner Detection.

2. What characterizes a corner

To compare the three kinds of regions, flat, edge and corner, we slide a small window of fixed size over the image.

  1. Flat region : no matter which direction you move the window, there is no change in intensity
  2. Edge region : if you move the window along the edge, there is no change in intensity; only moving across the edge changes it
  3. Corner region : moving the window in any direction produces a change in intensity

3. Corner detection algorithms

Because moving the window in any direction over a corner region produces a change in intensity, we detect corners using the following formula:
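E(u, v) = Σ_(x, y) w(x, y) · [ I(x + u, y + v) − I(x, y) ]²

(the sum runs over all pixels (x, y) covered by the window)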

where:

  • w(x, y): the sliding window at coordinates (x, y)
  • I(x + u, y + v): the intensity at the coordinates shifted by a distance (u, v)
  • I(x, y): the intensity at the current coordinates
  • E(u, v): the change in intensity between the window at (x, y) and the window shifted by (u, v)

After a few more steps of Taylor expansion (the derivation is rather long and heavy on math, so I will not go into depth here), we obtain an expression equivalent to the formula above:
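E(u, v) ≈ [u  v] · M · [u  v]ᵀ

with

M = Σ_(x, y) w(x, y) · [ Ix²    Ix·Iy ]
                       [ Ix·Iy  Iy²   ]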

where Ix and Iy are the derivatives of the image in the x and y directions, as in the Sobel algorithm I explained in this article.

3.1. Harris Corner Detection

After applying the intensity-change formula above, we can select regions that are likely to contain a corner. How do we then assess whether a region actually contains one? The Harris Corner Detection algorithm uses a confidence score computed from lambda1 and lambda2, the eigenvalues of the matrix M:
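R = det(M) − k · (trace(M))² = lambda1 · lambda2 − k · (lambda1 + lambda2)²

Here k is a small empirically chosen constant (typically around 0.04 to 0.06).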

We can have the following cases:

  • The confidence score is small when both lambda1 and lambda2 are small ==> flat region
  • If one lambda is much larger than the other ==> edge region, because the intensity changes in only one direction
  • If both lambda1 and lambda2 are large, the confidence score is large ==> corner, because the intensity changes no matter which direction we move

Example : we read the image and convert it to a grayscale image for processing. Because cv2.cornerHarris() expects a single-channel float32 input (CV_32FC1), we cast the grayscale image to float32 before passing it to the cv2.cornerHarris() function of OpenCV. Every pixel whose response is greater than the maximum response multiplied by a threshold factor is marked in red. Finally, we obtain the resulting image shown below. Note: I am using a Jupyter notebook to demo the example.
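A minimal sketch of these steps, assuming an input file named chessboard.png and typical parameter values (both the file name and the values are my own illustrative choices, not taken from the original demo):

```python
import cv2
import numpy as np

# Read the image and convert it to grayscale ("chessboard.png" is a placeholder name)
img = cv2.imread("chessboard.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# cv2.cornerHarris() expects a single-channel float32 image, so cast first
gray = np.float32(gray)

# blockSize=2 (neighborhood size), ksize=3 (Sobel aperture), k=0.04 (Harris constant)
dst = cv2.cornerHarris(gray, 2, 3, 0.04)

# Mark in red every pixel whose response exceeds 1% of the maximum response
img[dst > 0.01 * dst.max()] = [0, 0, 255]  # red in BGR

cv2.imwrite("harris_result.png", img)
```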

3.2. Shi-Tomasi Corner Detection

Shi-Tomasi Corner Detection differs from Harris Corner Detection only in the confidence score function. However, this difference is exactly what allows it to accept a broader range of regions as likely corners than Harris does.

The confidence score function (lambda1 and lambda2 are, as before, the eigenvalues of the matrix M):
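R = min(lambda1, lambda2)

That is, the response is simply the smaller of the two eigenvalues.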

As long as lambda1 and lambda2 are both greater than the threshold lambda_min, the region is considered to contain a corner. Visually, the blue area (the corner region) in the Shi-Tomasi criterion is much larger than in Harris, which shows that the set of regions accepted as corners has been expanded.

In OpenCV, the function cv2.goodFeaturesToTrack() makes this algorithm easy to apply. Details of this function can be found here. In this article I only explain the most important parameters; a short usage sketch follows the list:

  • image : should be converted to a grayscale image for processing
  • maxCorners : the number of corners you want to find. If more corners than this are found, only the maxCorners corners with the highest confidence are returned
  • qualityLevel : this parameter is multiplied by the confidence score of the best corner to produce a threshold; corners whose confidence score falls below that threshold are discarded. If fewer corners than maxCorners are found, only the corners actually found are returned
  • minDistance : the smallest allowed Euclidean distance between two returned corners
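A minimal sketch of how these parameters fit together, again assuming an input file named chessboard.png; the values maxCorners=25, qualityLevel=0.01 and minDistance=10 are illustrative choices, not from the original post:

```python
import cv2
import numpy as np

# "chessboard.png" is a placeholder file name for illustration
img = cv2.imread("chessboard.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Keep at most 25 corners, reject corners scoring below 1% of the best corner,
# and enforce a minimum distance of 10 pixels between returned corners
corners = cv2.goodFeaturesToTrack(gray, maxCorners=25, qualityLevel=0.01, minDistance=10)

# goodFeaturesToTrack returns an (N, 1, 2) float32 array of (x, y) coordinates
for corner in corners:
    x, y = corner.ravel()
    cv2.circle(img, (int(x), int(y)), 4, (0, 0, 255), -1)  # draw a red dot at each corner

cv2.imwrite("shi_tomasi_result.png", img)
```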

You can adjust the parameters to suit each problem and clearly see the difference between the two algorithms.

Thank you for taking the time to read this article.



Source: Viblo