There are many different algorithms used to detect features, and they fall into three categories: corners, blobs, and regions. In this video, you'll learn how to apply three popular feature detection algorithms, Harris-Stephens, SURF, and MSER, through three different examples.

We'll begin with this image of an office building. The building's windows have obvious corners, which you can detect using the Harris-Stephens algorithm. Let's see how this is done in MATLAB. Start by reading in the image and converting it to grayscale; color is usually not needed for detecting structural features like corners and blobs. Then use the detectHarrisFeatures function to find the corner features. The output is a cornerPoints object. This object has three properties, Location, Count, and Metric, each of which you can access with dot notation. Location contains the x,y coordinates of every detected feature. Metric contains the strength of each feature; in many algorithms, this value represents a measure of contrast, and the larger the value, the stronger the detected feature. Finally, Count is the number of detected features. It looks like the function found a lot of features.

To visualize the detected features, show the image, use the hold on command so you can display the features on top of it, and plot the features themselves using the cornerPoints object. Remember to use hold off after you are finished plotting onto the image. In some applications, it's a good idea to focus on the strongest features. To do this, use the selectStrongest function and specify how many features to return. Now only the 100 strongest features are shown. Later in this course, you'll use corner features to stitch together multiple images into a single panoramic image.

However, not all useful features are corners. For example, let's use a blob detector to locate the white dots on these dominoes. A common blob detection algorithm is SURF, which stands for Speeded Up Robust Features.
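The corner-detection steps described above can be sketched as follows; the filename is a placeholder for your own image:

```matlab
% Read the image and convert it to grayscale; color is not needed
% for detecting structural features like corners.
I = rgb2gray(imread('office_building.jpg'));   % placeholder filename

% Detect corner features; the result is a cornerPoints object.
corners = detectHarrisFeatures(I);

% Inspect the object's properties with dot notation.
corners.Location    % x,y coordinates of every detected feature
corners.Metric      % strength of each feature
corners.Count       % number of detected features

% Visualize the 100 strongest features on top of the image.
imshow(I)
hold on
plot(corners.selectStrongest(100))
hold off
```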
The workflow here is the same as with corners: convert the imported image to grayscale and use the detectSURFFeatures function. Let's limit our results to the 50 strongest features and visualize them. Oh no, these blobs are a lot smaller than the ones we want. Unlike corner features, blob features have sizes associated with them. To find larger blobs in the image, adjust one of the options for detectSURFFeatures called NumOctaves, which determines the sizes of the image subsections the algorithm uses for detection. For example, with the default value of three, the algorithm focuses on subsections like this to try to detect blobs. A higher value, like five, focuses on larger subsections of the image. Because the white dots are larger than the detected features, you need to increase the NumOctaves value as shown here. This looks a lot better.

As a final example, consider images with uniform-intensity regions, like this photo of a stop sign. For these images, the MSER, or Maximally Stable Extremal Regions, algorithm is often helpful for feature detection. Just like in the previous two examples, use the corresponding function, detectMSERFeatures, to detect regions in this stop sign image. Again, visualize the results. To make the regions appear more clearly, adjust the showPixelList and showEllipses options to display colored regions without the ellipses. All the letters have been detected as features. This would be a solid first step if you were trying to recognize text in signs.

In this video, you learned how to detect features in different images using the Harris-Stephens, SURF, and MSER algorithms. But these are only a few of the many available feature detection algorithms. It's difficult to know which of these you'll want to use for a specific application, so you'll often need to try a variety of approaches with your own images.
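The SURF and MSER workflows above can be sketched as follows; both filenames are placeholders for your own images:

```matlab
% --- SURF blob detection on the dominoes image ---
I = rgb2gray(imread('dominoes.jpg'));          % placeholder filename

% Raise NumOctaves from its default of 3 so the detector examines
% larger image subsections and finds bigger blobs.
blobs = detectSURFFeatures(I, 'NumOctaves', 5);

imshow(I)
hold on
plot(blobs.selectStrongest(50))                % 50 strongest blobs
hold off

% --- MSER region detection on the stop sign image ---
J = rgb2gray(imread('stop_sign.jpg'));         % placeholder filename
regions = detectMSERFeatures(J);

figure, imshow(J)
hold on
% Show the detected pixels as colored regions, without the ellipses.
plot(regions, 'showPixelList', true, 'showEllipses', false)
hold off
```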