Corner detection, or more generally interest point detection, is an approach used within computer vision systems to extract certain kinds of features from an image. Corner detection is frequently used in motion detection, image matching, tracking, image mosaicing, panorama stitching, 3D modelling and object recognition.
- Download sample application (with source code)
- Browse the sample application source code
- Browse the Harris Corners Detector classes
This code has also been incorporated into the Accord.NET Framework, which includes the latest version of this code plus many other statistics and machine learning tools. To install the Accord.NET Framework in your projects, use NuGet. Type in the Package Manager Console: Install-Package Accord.Imaging
Introduction
One of the first operators for interest point detection was developed by Hans P. Moravec in 1977 for his research involving the automatic navigation of a robot through a cluttered environment. It was also Moravec who defined the concept of “points of interest” in an image and concluded that these interest points could be used to find matching regions in different images.
The Moravec operator is considered to be a corner detector because it defines interest points as points where there are large intensity variations in all directions. This often is the case at corners. It is interesting to note, however, that Moravec was not specifically interested in finding corners, just distinct regions in an image that could be used to register consecutive image frames.
The Harris Operator
This operator was developed by Chris Harris and Mike Stephens in 1988 as a processing step to build interpretations of a robot’s environment based on image sequences. Like Moravec, they needed a method to match corresponding points in consecutive image frames, but were interested in tracking both corners and edges between frames.
Harris and Stephens improved upon Moravec’s corner detector by considering the differential of the corner score with respect to direction directly. The Harris corner detector computes a locally averaged moment matrix from the image gradients, and then combines the eigenvalues of this matrix into a corner measure whose local maxima indicate corner positions.
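In practice the eigenvalues never have to be computed explicitly: writing A, B and C for the smoothed entries of the moment matrix (the local averages of Ix², Iy² and Ix·Iy), the determinant and trace give the corner response directly. The following minimal sketch merely restates the measure used in the code below; the helper name is illustrative, and k is the empirical constant (typically chosen around 0.04):

// Harris corner response from the smoothed moment matrix entries
// A = <Ix*Ix>, B = <Iy*Iy>, C = <Ix*Iy>, averaged over a local window
float HarrisResponse(float A, float B, float C, float k)
{
    float det = A * B - C * C;    // product of the two eigenvalues
    float trace = A + B;          // sum of the two eigenvalues
    return det - k * (trace * trace);
}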
Source Code
Below is the source code for the Harris Corners Detector algorithm. This is mostly the same code implemented in the Accord.NET Framework.
/// <summary>
///   Process image looking for corners.
/// </summary>
///
/// <param name="image">Source image data to process.</param>
///
/// <returns>Returns list of found corners (X-Y coordinates).</returns>
///
/// <exception cref="UnsupportedImageFormatException">
///   The source image has incorrect pixel format.
/// </exception>
///
public unsafe List<IntPoint> ProcessImage(UnmanagedImage image)
{
    // check image format
    if (
        (image.PixelFormat != PixelFormat.Format8bppIndexed) &&
        (image.PixelFormat != PixelFormat.Format24bppRgb) &&
        (image.PixelFormat != PixelFormat.Format32bppRgb) &&
        (image.PixelFormat != PixelFormat.Format32bppArgb)
        )
    {
        throw new UnsupportedImageFormatException("Unsupported pixel format of the source image.");
    }

    // make sure we have grayscale image
    UnmanagedImage grayImage = null;

    if (image.PixelFormat == PixelFormat.Format8bppIndexed)
    {
        grayImage = image;
    }
    else
    {
        // create temporary grayscale image
        grayImage = Grayscale.CommonAlgorithms.BT709.Apply(image);
    }

    // get source image size
    int width = grayImage.Width;
    int height = grayImage.Height;
    int srcStride = grayImage.Stride;
    int srcOffset = srcStride - width;


    // 1. Calculate partial differences
    float[,] diffx = new float[height, width];
    float[,] diffy = new float[height, width];
    float[,] diffxy = new float[height, width];

    fixed (float* pdx = diffx, pdy = diffy, pdxy = diffxy)
    {
        byte* src = (byte*)grayImage.ImageData.ToPointer() + srcStride + 1;

        // Skip first row and first column
        float* dx = pdx + width + 1;
        float* dy = pdy + width + 1;
        float* dxy = pdxy + width + 1;

        // for each line
        for (int y = 1; y < height - 1; y++)
        {
            // for each pixel
            for (int x = 1; x < width - 1; x++, src++, dx++, dy++, dxy++)
            {
                // Convolution with horizontal differentiation kernel mask
                float h = ((src[-srcStride + 1] + src[+1] + src[srcStride + 1]) -
                           (src[-srcStride - 1] + src[-1] + src[srcStride - 1])) * 0.166666667f;

                // Convolution with vertical differentiation kernel mask
                float v = ((src[+srcStride - 1] + src[+srcStride] + src[+srcStride + 1]) -
                           (src[-srcStride - 1] + src[-srcStride] + src[-srcStride + 1])) * 0.166666667f;

                // Store squared differences directly
                *dx = h * h;
                *dy = v * v;
                *dxy = h * v;
            }

            // Skip last column
            dx++; dy++; dxy++;
            src += srcOffset + 1;
        }

        // Free some resources which won't be needed anymore
        if (image.PixelFormat != PixelFormat.Format8bppIndexed)
            grayImage.Dispose();
    }


    // 2. Smooth the diff images
    if (sigma > 0.0)
    {
        float[,] temp = new float[height, width];

        // Convolve with Gaussian kernel
        convolve(diffx, temp, kernel);
        convolve(diffy, temp, kernel);
        convolve(diffxy, temp, kernel);
    }


    // 3. Compute Harris Corner Response Map
    float[,] map = new float[height, width];

    fixed (float* pdx = diffx, pdy = diffy, pdxy = diffxy, pmap = map)
    {
        float* dx = pdx;
        float* dy = pdy;
        float* dxy = pdxy;
        float* H = pmap;
        float M, A, B, C;

        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++, dx++, dy++, dxy++, H++)
            {
                A = *dx;
                B = *dy;
                C = *dxy;

                if (measure == HarrisCornerMeasure.Harris)
                {
                    // Original Harris corner measure
                    M = (A * B - C * C) - (k * ((A + B) * (A + B)));
                }
                else
                {
                    // Harris-Noble corner measure
                    M = (A * B - C * C) / (A + B + Accord.Math.Special.SingleEpsilon);
                }

                if (M > threshold)
                {
                    *H = M; // insert value in the map
                }
            }
        }
    }


    // 4. Suppress non-maximum points
    List<IntPoint> cornersList = new List<IntPoint>();

    // for each row
    for (int y = r, maxY = height - r; y < maxY; y++)
    {
        // for each pixel
        for (int x = r, maxX = width - r; x < maxX; x++)
        {
            float currentValue = map[y, x];

            // for each window's row
            for (int i = -r; (currentValue != 0) && (i <= r); i++)
            {
                // for each window's pixel
                for (int j = -r; j <= r; j++)
                {
                    if (map[y + i, x + j] > currentValue)
                    {
                        currentValue = 0;
                        break;
                    }
                }
            }

            // check if this point is really interesting
            if (currentValue != 0)
            {
                cornersList.Add(new IntPoint(x, y));
            }
        }
    }

    return cornersList;
}
Using the code
Code usage is very simple. The CornersMarker filter from the AForge.NET Framework can be used to draw the detected interest points directly onto the original image, as is usual within the framework.
// Open an image
Bitmap image = ...

// Create a new Harris Corners Detector using the given parameters
HarrisCornersDetector harris = new HarrisCornersDetector(k)
{
    Threshold = threshold,
    Sigma = sigma
};

// Create a new AForge's Corner Marker Filter
CornersMarker corners = new CornersMarker(harris, Color.White);

// Apply the filter and display it on a picturebox
pictureBox1.Image = corners.Apply(image);
Sample application
The accompanying sample application is pretty much self-explanatory. It performs corner detection on the famous Lena Söderberg picture, but can be adapted to work with other pictures as well. Just add another image to the project settings’ resources and change the line of code which loads the bitmap.
It is also very straightforward to adapt the application to load arbitrary images from the hard disk. I have opted to leave it this way for simplicity.
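For reference, the change amounts to a single line. A rough sketch (the resource name lena and the file path are only illustrative):

// Load the picture from the project resources (resource name is illustrative)
Bitmap image = Properties.Resources.lena;

// ... or load an arbitrary image from disk instead
Bitmap imageFromDisk = new Bitmap(@"C:\images\my-picture.png");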
References
- P. D. Kovesi. MATLAB and Octave Functions for Computer Vision and Image Processing. School of Computer Science & Software Engineering, The University of Western Australia.
- D. Parks and J. P. Gravel. Corner Detectors: The Harris/Plessey Operator. Web. 11 May 2010.
- J. Hutton and B. Dowling. Computer Vision Demonstration Website. Electronics and Computer Science, University of Southampton.
- H. P. Moravec. Towards Automatic Visual Obstacle Avoidance. Proc. 5th International Joint Conference on Artificial Intelligence, p. 584, 1977.
- H. P. Moravec. Visual Mapping by a Robot Rover. International Joint Conference on Artificial Intelligence, pp. 598-600, 1979.
- C. Harris and M. Stephens. A Combined Corner and Edge Detector. Proc. Alvey Vision Conf., Univ. Manchester, pp. 147-151, 1988.
- Wikipedia contributors. “Corner detection.” Wikipedia, The Free Encyclopedia. 5 May 2010. Web. 11 May 2010.
Disclaimer
Before using any information, applications or source code available in this article, please be sure to have read the site usage disclaimer.
Hi.
Accord.Math is a cool thing.
If it were searchable (hosted as a project somewhere), I would not have experimented with matrixextensions.codeplex.com.
Hi,
Accord.NET is an LGPL-licensed project which is available here. The SVN is not open yet, but will be available soon.
Regards,
César
I have tried your code with other images and it doesn’t find corners. Is it necessary to use a specific file format?
Hi,
I have just updated the downloadable sources in this article with a more recent version. If you are still having trouble, send me an email with the images and I’ll take a look.
By the way, the latest version of the Harris Corner Detector will always be first available inside Accord.NET Framework.
Regards,
César
hi,
thanks for sharing this tutorial…
is it possible to get the 3D coordinates of each point?
I mean, I want to obtain the point cloud of the captured object…
Please help me: how can I do it? I really need it for my final project…
many thanks in advance…
I had written a Java program for Harris corner detection, but my output is not so accurate. I have tried the above program and I’m getting the correct output. However, what does this statement mean?
if (measure==HarrisCornerMeasure.Harris)
Hi Pushpender,
This statement only checks whether the chosen corner measure is indeed the original Harris measure. There are other ways to compute the corner measure which do not use the arbitrary factor k. HarrisCornerMeasure is just an enumeration.
I hope I have answered your question.
Best regards,
César
Hello Cesar,
Thanks.
Do you mean that if we set k = 0 then I can use the Harris-Noble corner measure method, i.e.
else
{
M = (A*B-C*C) / (A+B+eps);
}
Hi Pushpender,
Well, not exactly. If you set the Measure property of an instance of the HarrisCornersDetector class to HarrisCornerMeasure.Noble, then the algorithm will ignore the value of k and use Noble’s measure instead.
i.e.:
// create corners detector’s instance
HarrisCornersDetector hcd = new HarrisCornersDetector();
hcd.Measure = HarrisCornerMeasure.Noble;
// process image searching for corners
List<IntPoint> corners = hcd.ProcessImage( image );
Regards,
César
Hello Cesar
Thanks again. One more question: what is the difference between the original Harris and the Harris-Noble methods? Is it only the k value? I’m using Java to implement the corner detection, so please tell me how I choose which method to use and when.
The only difference is that it does not require the tuning of the parameter k. It makes the usage of the Harris operator simpler, since you won’t have to care about choosing a suitable value for k in your application. For more details, please see Alison Noble, “Descriptions of Image Surfaces”, PhD thesis, Department of Engineering Science, Oxford University, 1989, p. 45.
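Just to make the difference concrete, these are the two measures exactly as they appear in the detector’s code above, where A, B and C are the smoothed moment matrix entries and eps is a small constant that avoids division by zero:

// Original Harris measure (requires tuning k)
M = (A * B - C * C) - k * ((A + B) * (A + B));

// Harris-Noble measure (no k parameter)
M = (A * B - C * C) / (A + B + eps);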
By the way, if you are basing your implementation on my code, I would kindly ask you to add a reference in your sources about this page and about the Accord.NET Framework (http://accord-net.origo.ethz.ch/).
Best regards,
César
Thanks a lot. I’ll add the respective references.
I want to put those inputs (k, threshold and sigma) into your Automatic Image Stitching, but when I press the Detect button with sigma set to its maximum value (5.000) or to its minimum value (0.000), points are detected where there should be none. I find it hard to refactor this part of the Harris Corners Detector (not AIS):
float[,] diffx = new float[height, width];
float[,] diffy = new float[height, width];
float[,] diffxy = new float[height, width];
fixed (float* pdx = diffx, pdy = diffy, pdxy = diffxy)
{
byte* src = (byte*)grayImage.ImageData.ToPointer() + srcStride + 1;
// Skip first row and first column
float* dx = pdx + width + 1;
float* dy = pdy + width + 1;
float* dxy = pdxy + width + 1;
// for each line
for (int y = 1; y < height - 1; y++)
{
// for each pixel
for (int x = 1; x < width - 1; x++, src++, dx++, dy++, dxy++)
{
// Convolution with horizontal differentiation kernel mask
float h = ((src[-srcStride + 1] + src[+1] + src[srcStride + 1]) -
(src[-srcStride - 1] + src[-1] + src[srcStride - 1])) * 0.166666667f;
// Convolution vertical differentiation kernel mask
float v = ((src[+srcStride - 1] + src[+srcStride] + src[+srcStride + 1]) -
(src[-srcStride - 1] + src[-srcStride] + src[-srcStride + 1])) * 0.166666667f;
// Store squared differences directly
*dx = h * h;
*dy = v * v;
*dxy = h * v;
}
// Skip last column
dx++; dy++; dxy++;
src += srcOffset + 1;
}
// Free some resources which wont be needed anymore
if (image.PixelFormat != PixelFormat.Format8bppIndexed)
grayImage.Dispose();
}
// 2. Smooth the diff images
if (sigma > 0.0)
{
float[,] temp = new float[height, width];
// Convolve with Gaussian kernel
convolve(diffx, temp, kernel);
convolve(diffy, temp, kernel);
convolve(diffxy, temp, kernel);
}
Where exactly is the part I should alter so that sigma works the way I want? Should I also include the convolve method?
Hi César,
Sorry for the typos (*there are points). I want to ask again: do I have to include the convolve method from HarrisCornersDetector in Automatic Image Stitching, convert the float[,] types to UnmanagedImage, and change those fixed blocks to unsafe? I really only have a problem with sigma; the k and threshold work well.
hello;
I use the Java version of this implementation and I want to store the detected points in a new vector to use them later. Does anyone have an idea how to recover the point values into a new vector?
hi Cesar
I downloaded the Harris source code in C#, and when I execute it I get some errors in the program that I cannot solve. Can you help me, please, to solve these errors? If you can, I will send you the errors.
thanx
hi..
how can I solve this error?
{accord-imaging-harris-src\bin\x86\debug\Harris.exe is missing. Please build the project and retry, or set the OutputPath and AssemblyName properties appropriately to point at the correct location for the target assembly.} Please help me… thanks
Hi,
I am not sure about this particular error, but I would advise you to download the full Accord.NET Framework. Perhaps the installer may resolve any issues you are having. Please see the starting guide to get started with the framework if you wish!
Best regards,
Cesar
ok thank u very much i will try
hi cesar
how can I change the color of the points in the Harris corner detector to another color, like red or yellow, in Visual Studio (C#, Accord.NET)?
Hi,
You can just change the CornersMarker’s color property, or pass a different constructor argument. But please note this will only work if the image the filter is applied to actually supports color… If the image is grayscale, it won’t work. You will have to either create a new color image or convert it to a color image. I forgot how to do this in AForge, but you can do it with plain .NET, as depicted here.
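Something along these lines should work; it is only an untested sketch and assumes AForge’s GrayscaleToRGB filter for the conversion:

// create the detector and a marker that draws in red instead of white
HarrisCornersDetector harris = new HarrisCornersDetector();
CornersMarker marker = new CornersMarker(harris, Color.Red);

// if the source bitmap is grayscale, convert it to a 24bpp color image first,
// otherwise the colored markers cannot be drawn on it
if (image.PixelFormat == PixelFormat.Format8bppIndexed)
    image = new GrayscaleToRGB().Apply(image);

// apply the marker filter
Bitmap result = marker.Apply(image);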
Regards,
Cesar
Hi
The “Download sample application” is missing.(404 Not Found)
Can you upload again? plz.
thx
Thanks, should be fixed now.
hello
I can use this program in my C# project; it’s very useful for me.
But I want to know the selected corner’s image index.
How can I get that?
Are there some commands or a sample to get the corners’ information?
thanks.
Hi
I want to apply your code to an image containing buildings so I can extract them. How can I draw a rectangle or borders around every building? Do you have an idea how to do that?
Thanks 🙂
I am not sure, but:
If you compute A = dx = h*h, B = dy = v*v and C = dxy = h*v, then
A*B - C*C is always 0… What you compute as “Harris” is then only the trace of the matrix. You do not get the Hessian part like this in the sum.
I suppose this is why the Gaussian averaging part is needed. This way, the A*B - C*C in the last part becomes different from 0, as it no longer represents the original A, B and C values that were computed as h*h, v*v and h*v. But I am not completely sure either, as it has been a while since the code was written. Do you think it makes sense?
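A tiny numeric example may help: take a window containing two pixels, one with gradients (h, v) = (1, 0) and the other with (0, 1). Per pixel, A*B - C*C = (h*h)*(v*v) - (h*v)*(h*v) = 0, as you say. After averaging over the window, however, A = 0.5, B = 0.5 and C = 0, so A*B - C*C = 0.25 > 0. The determinant only becomes non-zero once the gradient direction varies inside the window, which is exactly what happens at a corner.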
Best regards,
Cesar
Hi, Cesar. Truly thanks for the awesome function.
I want to apply the function to stitch images of documents to get a higher-resolution picture, so I can extract the characters later on. For the image stitching part, do you think it can be applied to document processing?
Hi Cesar,
Nice post. Can you please let me know if I can change this code to get only the topmost edge (point) of the image?
Hi
The “Download sample application” is missing.(404 Not Found)
Can you upload again?
thx
// 1. Calculate partial differences
float[,] diffx = new float[height, width];
float[,] diffy = new float[height, width];
float[,] diffxy = new float[height, width];
fixed (float* pdx = diffx, pdy = diffy, pdxy = diffxy)
{
byte* src = (byte*)grayImage.ImageData.ToPointer() + srcStride + 1;
// Skip first row and first column
float* dx = pdx + width + 1;
float* dy = pdy + width + 1;
float* dxy = pdxy + width + 1;
// for each line
for (int y = 1; y < height - 1; y++)
{
// for each pixel
for (int x = 1; x < width - 1; x++, src++, dx++, dy++, dxy++)
{
// Convolution with horizontal differentiation kernel mask
float h = ((src[-srcStride + 1] + src[+1] + src[srcStride + 1]) -
(src[-srcStride - 1] + src[-1] + src[srcStride - 1])) * 0.166666667f;
// Convolution vertical differentiation kernel mask
float v = ((src[+srcStride - 1] + src[+srcStride] + src[+srcStride + 1]) -
(src[-srcStride - 1] + src[-srcStride] + src[-srcStride + 1])) * 0.166666667f;
I’m trying your code, sir, but I’m wondering about this kernel mask. What kind of kernel is this? I need this explanation to complete my draft literature review.
Thank you