## 1. Preamble

Corner detection is widely used in computer vision tasks such as image matching and object detection.

**What are its applications in image processing?** In image matching, suppose the two images below are photos of the same scene taken from different angles. When we look for similar points (feature points) between the two images in order to stitch them together, we find countless candidate matches. So which points actually matter? Much like the human eye, when judging the similarity of two images we compare corner positions, while frequently repeated edge features are not distinctive enough to serve as features. Or in license plate recognition: simply by using an algorithm to find the four corners of the plate, we can locate the plate region much faster than methods based on Deep Learning.

Corner Detection is used a lot in image processing, so today I will review the two popular algorithms: **Harris Corner Detection** and **Shi-Tomasi Corner Detection**.

## 2. Characteristics of a corner

To compare the three types of regions (flat region, edge, and corner), we slide a small window of fixed size over the image.

- **Flat region**: no matter which direction you move the window, there is no change in intensity.
- **Edge region**: if you move the window along the edge, there is no change in intensity.
- **Corner region**: moving the window in any direction produces a change in intensity.

## 3. Corner detection algorithms

Because a corner region shows a change in intensity no matter which direction the window moves, we detect corners using the formula:

E(u, v) = Σ w(x, y) · [I(x + u, y + v) - I(x, y)]²

where:

- w(x, y): the sliding window at coordinates (x, y)
- I(x + u, y + v): the intensity after shifting by a distance (u, v)
- I(x, y): the intensity at the current coordinates
- E(u, v): the change in intensity between the window at (x, y) and the shifted window at (x + u, y + v)
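To make the formula concrete, here is a minimal numerical sketch (the `intensity_change` helper and the synthetic patches are my own illustration, not part of OpenCV) that evaluates E(u, v) with uniform weights w(x, y) = 1 on three hand-made patches, one flat, one edge, and one corner:

```python
import numpy as np

def intensity_change(patch, u, v, window=3):
    """E(u, v) for the centre window of a patch, with uniform weights:
    E(u, v) = sum over window of [I(x+u, y+v) - I(x, y)]^2."""
    h, w = patch.shape
    cy, cx = h // 2, w // 2
    r = window // 2
    e = 0.0
    for y in range(cy - r, cy + r + 1):
        for x in range(cx - r, cx + r + 1):
            e += (float(patch[y + v, x + u]) - float(patch[y, x])) ** 2
    return e

# Three synthetic 7x7 patches
flat = np.full((7, 7), 100)
edge = np.full((7, 7), 100)
edge[:, 4:] = 200            # vertical edge: bright right half
corner = np.full((7, 7), 100)
corner[4:, 4:] = 200         # corner: bright bottom-right quadrant

print(intensity_change(flat, 1, 0), intensity_change(flat, 0, 1))      # 0.0 0.0
print(intensity_change(edge, 1, 0), intensity_change(edge, 0, 1))      # 30000.0 0.0
print(intensity_change(corner, 1, 0), intensity_change(corner, 0, 1))  # 10000.0 10000.0
```

The flat patch gives E = 0 in every direction, the edge gives zero change along the edge direction, and only the corner gives a nonzero E(u, v) in both directions, which is exactly the behavior described above.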

After a few more steps of Taylor expansion (the derivation is fairly long and involves a lot of math, so I will not go into depth here), we arrive at an expression equivalent to the formula above:

E(u, v) ≈ [u, v] · M · [u, v]ᵀ, where M = Σ w(x, y) · [[Ix², Ix·Iy], [Ix·Iy, Iy²]]

where **Ix** and **Iy** are the derivatives of the image in the x and y directions, as in the Sobel algorithm I explained in this article.

## 3.1. Harris Corner Detection

After applying the intensity change formula above, we can select regions that *are likely to* be corners. But how do we accurately assess whether a region actually contains a corner? The **Harris Corner Detection** algorithm uses a confidence score:

R = det(M) - k · (trace(M))² = lambda1 · lambda2 - k · (lambda1 + lambda2)²

where lambda1 and lambda2 are the eigenvalues of the matrix M.

We can have the following cases:

- The confidence score is small when both lambda1 and lambda2 are small ==> flat region.
- If one lambda is much bigger than the other ==> edge region, because the intensity changes in only one direction.
- If both lambda are large, the confidence score is large ==> corner, because the intensity changes no matter which direction you move.
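The three cases above can be sketched with a tiny classifier (the `classify` helper and its threshold value are my own illustration; the score itself is the Harris formula R = lambda1·lambda2 - k·(lambda1 + lambda2)²):

```python
def classify(lam1, lam2, k=0.04, threshold=1.0):
    """Classify a region from the eigenvalues of M using the Harris score.
    threshold is an illustrative cutoff for 'R near zero'."""
    R = lam1 * lam2 - k * (lam1 + lam2) ** 2
    if abs(R) < threshold:
        return "flat"                      # both eigenvalues small -> R near zero
    return "corner" if R > 0 else "edge"   # R < 0 when one eigenvalue dominates

print(classify(0.01, 0.01))  # flat   (both small)
print(classify(100, 0.01))   # edge   (one dominates, R goes negative)
print(classify(100, 100))    # corner (both large, R large and positive)
```

Note that R becomes negative for an edge: with lambda1 = 100 and lambda2 = 0.01, the product term is about 1 while the penalty term k·(lambda1 + lambda2)² is about 400.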

**Example**: We read the image and convert it to a GRAY image for processing. Because cv2.cornerHarris() expects input of type cv2.CV_32FC1, we cast gray to float32 first and then pass it to OpenCV's cv2.cornerHarris() function. Pixels whose response is greater than the maximum response multiplied by the threshold factor are marked in red. Finally, we get the resulting image as shown below. **Note: I am using a Jupyter notebook to demo the example.**

```python
import cv2
import numpy as np
from IPython.display import Image

def detect_corner(image_path, blockSize=2, ksize=3, k=0.04, threshold=0.01):
    """
    image_path: path to the image
    blockSize: the size of the neighbourhood considered for corner detection
    ksize: aperture parameter of the Sobel derivative
    k: Harris detector free parameter in the equation
    threshold: factor of the maximum response used to mark corners
    """
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = np.float32(gray)
    dst = cv2.cornerHarris(gray, blockSize, ksize, k)

    # result is dilated for marking the corners, not important
    dst = cv2.dilate(dst, None)

    # Threshold for an optimal value, it may vary depending on the image.
    img[dst > threshold * dst.max()] = [0, 0, 255]
    cv2.imwrite('example.png', img)
    return 'example.png'

out_path = detect_corner('sudoku.png', blockSize=2, ksize=5, k=0.04, threshold=0.005)
Image(out_path)
```

## 3.2. Shi-Tomasi Corner Detection

**Shi-Tomasi Corner Detection** differs from Harris Corner Detection only in the confidence score function. However, this is precisely what lets it detect more corner-like regions than Harris.

**Confidence score function** (lambda1 and lambda2 are again the eigenvalues of the matrix M):

R = min(lambda1, lambda2)

As long as lambda1 and lambda2 are both greater than a threshold lambda_min, the point is already considered a corner. Visually, you can see that the blue region (the corner region) of Shi-Tomasi is much larger than Harris's, which shows that the criterion for a region to count as a corner has been relaxed.
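To illustrate the expanded corner region, here is a small comparison (the helper names and the example eigenvalues are my own illustration): a point whose smaller eigenvalue clears a modest lambda_min threshold can still receive a negative Harris score, so Shi-Tomasi accepts corners that Harris rejects:

```python
def harris_response(lam1, lam2, k=0.04):
    """Harris score: R = lam1*lam2 - k*(lam1 + lam2)^2."""
    return lam1 * lam2 - k * (lam1 + lam2) ** 2

def shi_tomasi_response(lam1, lam2):
    """Shi-Tomasi score: R = min(lam1, lam2)."""
    return min(lam1, lam2)

# One large and one modest eigenvalue: Harris says edge, Shi-Tomasi says corner
lam1, lam2 = 50, 2
lambda_min = 1  # illustrative Shi-Tomasi threshold

print(harris_response(lam1, lam2))                      # negative -> Harris rejects
print(shi_tomasi_response(lam1, lam2) > lambda_min)     # True -> Shi-Tomasi accepts
```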

In OpenCV, there is a function **cv2.goodFeaturesToTrack()** that makes this algorithm easy to use. Details of this function can be found here. In this article, I will just explain some important parameters:

- **image**: should be converted to a gray image before processing.
- **maxCorners**: the number of corners you want to find. If more corners are found than requested, the function returns the requested number with the highest confidence scores; if fewer are found, it returns only those found.
- **qualityLevel**: this parameter is multiplied by the maximum confidence score of the detected corners to produce a threshold; corners with confidence scores below this threshold are discarded.
- **minDistance**: the smallest Euclidean distance allowed between two returned corners.

You can adjust the parameters to suit each problem and clearly see the difference between the algorithms.

```python
import cv2
import numpy as np
from IPython.display import Image

def shi_tomasi_detect_corner(img_path, maxCornerNB, qualityLevel, minDistance=0.6):
    img = cv2.imread(img_path)
    # convert to gray image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Shi-Tomasi corner detection
    corners = cv2.goodFeaturesToTrack(gray, maxCornerNB, qualityLevel, minDistance)
    corners = corners.astype(int)
    for i in corners:
        # take (x, y) of each corner
        x, y = i.ravel()
        # draw a filled circle at the corner
        cv2.circle(img, (x, y), 3, (0, 0, 255), -1)
    cv2.imwrite('example_shitomasi.png', img)
    return 'example_shitomasi.png'

img_path = shi_tomasi_detect_corner('house.jpg', 600, 0.05)
Image(img_path)
```

Thank you everyone for taking the time to read this article.