By dom

2011-12-29 12:20:57

I successfully implemented the OpenCV square-detection example in my test application, but now I need to filter the output because it's quite messy. Or is my code wrong?

I'm interested in the four corner points of the paper for skew reduction (like that) and further processing …

Input & output:

Original image:



double angle( cv::Point pt1, cv::Point pt2, cv::Point pt0 ) {
    double dx1 = pt1.x - pt0.x;
    double dy1 = pt1.y - pt0.y;
    double dx2 = pt2.x - pt0.x;
    double dy2 = pt2.y - pt0.y;
    return (dx1*dx2 + dy1*dy2)/sqrt((dx1*dx1 + dy1*dy1)*(dx2*dx2 + dy2*dy2) + 1e-10);
}

- (std::vector<std::vector<cv::Point> >)findSquaresInImage:(cv::Mat)_image
{
    std::vector<std::vector<cv::Point> > squares;
    cv::Mat pyr, timg, gray0(_image.size(), CV_8U), gray;
    int thresh = 50, N = 11;

    // down- and upscale the image to filter out small-scale noise
    cv::pyrDown(_image, pyr, cv::Size(_image.cols/2, _image.rows/2));
    cv::pyrUp(pyr, timg, _image.size());

    std::vector<std::vector<cv::Point> > contours;
    for( int c = 0; c < 3; c++ ) {
        int ch[] = {c, 0};
        mixChannels(&timg, 1, &gray0, 1, ch, 1);
        for( int l = 0; l < N; l++ ) {
            if( l == 0 ) {
                // Canny catches squares with gradient shading;
                // dilation closes gaps between edge segments
                cv::Canny(gray0, gray, 0, thresh, 5);
                cv::dilate(gray, gray, cv::Mat(), cv::Point(-1,-1));
            }
            else {
                gray = gray0 >= (l+1)*255/N;
            }
            cv::findContours(gray, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
            std::vector<cv::Point> approx;
            for( size_t i = 0; i < contours.size(); i++ ) {
                cv::approxPolyDP(cv::Mat(contours[i]), approx, arcLength(cv::Mat(contours[i]), true)*0.02, true);
                if( approx.size() == 4 && fabs(contourArea(cv::Mat(approx))) > 1000 && cv::isContourConvex(cv::Mat(approx)) ) {
                    double maxCosine = 0;
                    for( int j = 2; j < 5; j++ ) {
                        double cosine = fabs(angle(approx[j%4], approx[j-2], approx[j-1]));
                        maxCosine = MAX(maxCosine, cosine);
                    }
                    // keep only quadrilaterals whose angles are all close to 90 degrees
                    if( maxCosine < 0.3 ) {
                        squares.push_back(approx);
                    }
                }
            }
        }
    }
    return squares;
}

EDIT 17/08/2012:

To draw the detected squares on the image use this code:

cv::Mat debugSquares( std::vector<std::vector<cv::Point> > squares, cv::Mat image )
{
    for ( int i = 0; i < (int)squares.size(); i++ ) {
        // draw contour (blue in BGR)
        cv::drawContours(image, squares, i, cv::Scalar(255,0,0), 1, 8, std::vector<cv::Vec4i>(), 0, cv::Point());

        // draw bounding rect (green)
        cv::Rect rect = cv::boundingRect(cv::Mat(squares[i]));
        cv::rectangle(image, rect.tl(), rect.br(), cv::Scalar(0,255,0), 2, 8, 0);

        // draw rotated rect (red)
        cv::RotatedRect minRect = cv::minAreaRect(cv::Mat(squares[i]));
        cv::Point2f rect_points[4];
        minRect.points( rect_points );
        for ( int j = 0; j < 4; j++ ) {
            cv::line( image, rect_points[j], rect_points[(j+1)%4], cv::Scalar(0,0,255), 1, 8 );
        }
    }
    return image;
}


@nathancy 2020-02-03 23:46:03

Once you have detected the bounding box of the document, you can perform a four-point perspective transform to obtain a top-down, bird's-eye view of the image. This will fix the skew and isolate only the desired object.

Input image:

Detected text object

Top-down view of text document


from imutils.perspective import four_point_transform
import cv2

# Load image, grayscale, Gaussian blur, Otsu's threshold
image = cv2.imread("1.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (7,7), 0)
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]

# Find contours and sort by area, largest first
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
displayCnt = None

for c in cnts:
    # Approximate the contour; stop at the first quadrilateral
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.02 * peri, True)
    if len(approx) == 4:
        displayCnt = approx
        break

# Obtain a bird's-eye view of the image
warped = four_point_transform(image, displayCnt.reshape(4, 2))

cv2.imshow("thresh", thresh)
cv2.imshow("warped", warped)
cv2.imshow("image", image)
cv2.waitKey()

@Kinght 金 2017-12-20 03:34:25

Well, I'm late.

In your image the paper is white while the background is colored, so it's better to detect the paper in the Saturation (S) channel of HSV color space. See the Wikipedia article on HSL_and_HSV first. Then I'll reuse most of the idea from my answer to this Detect Colored Segment in an image.

Main steps:

  1. Read the image into BGR
  2. Convert the image from BGR to HSV color space
  3. Threshold the S channel
  4. Then find the max external contour (or do Canny or HoughLines as you like; I chose findContours), and approximate it to get the corners.

This is my result:


The Python code(Python 3.5 + OpenCV 3.3):

# 2017.12.20 10:47:28 CST
# 2017.12.20 11:29:30 CST

import cv2
import numpy as np

##(1) read into  bgr-space
img = cv2.imread("test2.jpg")

##(2) convert to hsv-space, then split the channels
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h,s,v = cv2.split(hsv)

##(3) threshold the S channel (a fixed threshold here; cv2.THRESH_OTSU would also work)
th, threshed = cv2.threshold(s, 50, 255, cv2.THRESH_BINARY_INV)

##(4) find all the external contours on the threshed S
#_, cnts, _ = cv2.findContours(threshed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cv2.findContours(threshed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

canvas  = img.copy()
#cv2.drawContours(canvas, cnts, -1, (0,255,0), 1)

## sort and choose the largest contour
cnts = sorted(cnts, key = cv2.contourArea)
cnt = cnts[-1]

## approximate the contour to get the corner points
arclen = cv2.arcLength(cnt, True)
approx = cv2.approxPolyDP(cnt, 0.02* arclen, True)
cv2.drawContours(canvas, [cnt], -1, (255,0,0), 1, cv2.LINE_AA)
cv2.drawContours(canvas, [approx], -1, (0, 0, 255), 1, cv2.LINE_AA)

## save the result shown above
cv2.imwrite("detected.png", canvas)

Related answers:

  1. How to detect colored patches in an image using OpenCV?
  2. Edge detection on colored background using OpenCV
  3. OpenCV C++/Obj-C: Detecting a sheet of paper / Square Detection
  4. How to use `cv2.findContours` in different OpenCV versions?

@hchouhan02 2018-06-06 04:41:43

I tried using the S channel but still could not succeed. See this:…

@Anubhav Rohatgi 2016-04-13 11:15:00

Detecting a sheet of paper is kind of old school. If you want to tackle skew detection, it is better to aim straight for text-line detection. This gives you the left, right, top and bottom extrema. Discard any graphics in the image if you don't want them, then do some statistics on the text-line segments to find the most frequent angle range, or rather the dominant angle. This is how you narrow down to a good skew angle. Then feed the skew angle and the extrema into a deskew step and crop the image to what is required.

As for the current image requirement, it is better if you try CV_RETR_EXTERNAL instead of CV_RETR_LIST.

Another method of detecting edges is to train a random forests classifier on the paper edges and then use the classifier to get the edge Map. This is by far a robust method but requires training and time.

Random forests will work with low contrast difference scenarios for example white paper on roughly white background.

@Tim 2013-08-04 11:10:32

What you need is a quadrangle instead of a rotated rectangle. RotatedRect will give you incorrect results. You will also need a perspective projection.

Basically, what must be done is:

  • Loop through all polygon segments and connect those which are almost equal.
  • Sort them so you have the 4 largest line segments.
  • Intersect those lines and you have the 4 most likely corner points.
  • Transform the image over the perspective gathered from the corner points and the aspect ratio of the known object.
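The intersection step above can be sketched with homogeneous coordinates: the cross product of two points gives the line through them, and the cross product of two lines gives their intersection. This helper is illustrative only, not part of the Quadrangle class:

```python
import numpy as np

def intersect(l1, l2):
    """Intersect two infinite lines, each given as (x1, y1, x2, y2)."""
    p1 = np.array([l1[0], l1[1], 1.0])
    p2 = np.array([l1[2], l1[3], 1.0])
    p3 = np.array([l2[0], l2[1], 1.0])
    p4 = np.array([l2[2], l2[3], 1.0])
    line_a = np.cross(p1, p2)      # homogeneous line through p1, p2
    line_b = np.cross(p3, p4)
    x = np.cross(line_a, line_b)   # homogeneous intersection point
    if abs(x[2]) < 1e-9:           # lines are (nearly) parallel
        return None
    return (x[0] / x[2], x[1] / x[2])
```

The four corners found this way, together with the known object's aspect ratio, then go into cv2.getPerspectiveTransform / cv2.warpPerspective for the final deskew.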

I implemented a class Quadrangle which takes care of contour to quadrangle conversion and will also transform it over the right perspective.

See a working implementation here: Java OpenCV deskewing a contour

@mmgp 2013-01-16 21:56:49

Unless there is some other requirement not specified, I would simply convert your color image to grayscale and work with that only (no need to work on the 3 channels, the contrast present is too high already). Also, unless there is some specific problem regarding resizing, I would work with a downscaled version of your images, since they are relatively large and the size adds nothing to the problem being solved. Then, finally, your problem is solved with a median filter, some basic morphological tools, and statistics (mostly for the Otsu thresholding, which is already done for you).

Here is what I obtain with your sample image and some other image with a sheet of paper I found around:


The median filter is used to remove minor details from the, now grayscale, image. It will possibly remove thin lines inside the whitish paper, which is good because then you will end with tiny connected components which are easy to discard. After the median, apply a morphological gradient (simply dilation - erosion) and binarize the result by Otsu. The morphological gradient is a good method to keep strong edges, it should be used more. Then, since this gradient will increase the contour width, apply a morphological thinning. Now you can discard small components.

At this point, here is what we have with the right image above (before drawing the blue polygon), the left one is not shown because the only remaining component is the one describing the paper:


Given the examples, the only issue left is distinguishing components that look like rectangles from ones that do not. This is a matter of the ratio between the area of the convex hull containing the shape and the area of its bounding box; a ratio of 0.7 works fine for these examples. You might also need to discard components that lie inside the paper, though not in these examples (in any case, that step should be very easy and can be done through OpenCV directly).

For reference, here is a sample code in Mathematica:

f = Import[""]
f = ImageResize[f, ImageDimensions[f][[1]]/4]
g = MedianFilter[ColorConvert[f, "Grayscale"], 2]
h = DeleteSmallComponents[Thinning[
     Binarize[ImageSubtract[Dilation[g, 1], Erosion[g, 1]]]]]
convexvert = ComponentMeasurements[SelectComponents[
     h, {"ConvexArea", "BoundingBoxArea"}, #1 / #2 > 0.7 &], 
     "ConvexVertices"][[All, 2]]
(* To visualize the blue polygons above: *)
Show[f, Graphics[{EdgeForm[{Blue, Thick}], RGBColor[0, 0, 1, 0.5], 
     Polygon @@ convexvert}]]

If there are more varied situations where the paper's rectangle is not so well defined, or the approach confuses it with other shapes -- these situations could happen due to various reasons, but a common cause is bad image acquisition -- then try combining the pre-processing steps with the work described in the paper "Rectangle Detection based on a Windowed Hough Transform".

@Abid Rahman K 2013-02-15 17:31:14

is there any major difference in implementation of yours and the one above(ie @karlphilip 's answer) ? I am sorry I couldn't find any in a fast look (except 3 channel-1 channel and Mathematica-OpenCV).

@mmgp 2013-02-15 17:52:26

@AbidRahmanK yes, there are. I use neither Canny nor "several thresholds" to start with. There are other differences, but by the tone of your comment it seems pointless to put any effort into my own comment.

@Abid Rahman K 2013-02-15 18:16:22

I see both of you first find the edges, and determine which edge is square. For finding edges, you people use different methods. He uses canny, you use some dilation-erosion. And "several thresholds", may be he got from OpenCV samples, used to find squares. Main thing is, I felt overall concept is same. "Find edges and detect square". And I asked it sincerely, I don't know what "tone" you got from my comment, or what you (understood/misunderstood). So if you feel this question is sincere, I would like to know other differences. Otherwise discard my comments.

@mmgp 2013-02-15 19:02:02

@AbidRahmanK of course the concept is the same, the task is the same. Median filtering is being used, thinning is being used, I don't care from where he took several thresholds idea -- it is just not used here (thus how can it not be a difference ?), the image is resized here, the component measurements are different. "Some dilation-erosion" doesn't give binary edges, otsu is used for that. It is pointless to mention this, the code is there.

@Abid Rahman K 2013-02-16 02:11:19

K. Thank you. Got the answer. Concept is the same. (I never used Mathematica, so I can't understand the code.) And the differences you mentioned are differences, but not a different approach or major ones. For example, check this:

@Abid Rahman K 2013-02-16 02:21:21

Its answers have plenty of varied approaches. Not sure how many of them will work. Some of them may be similar, but overall there are a lot of ideas (even my answer is just a different concept; I didn't ensure it will work, but it's still another idea). That is actually what I was looking for (not simply Otsu or thresholding like that). Anyway, thanks for your answer and +1 for helping out.

@mmgp 2013-02-16 02:21:30

@AbidRahmanK I'm not understanding what you are trying to point out. There are differences that I consider "major enough", otherwise I wouldn't have bothered posting an answer. There are other couple of differences that weren't pointed out, but the code is there (again). For more one example, I don't rely on checks like approx.size() == 4, which I see as something just waiting to fail for tiny variations in noise or other details during image acquisition. Every step is done in different manners, and I expect this shorter self-contained code shown here to be more robust (but it can fail).

@Abid Rahman K 2013-02-16 02:22:27

@mmgp 2013-02-16 02:26:03

@AbidRahmanK I'm not going to discuss this further in chat, don't have the time. For the other question you linked in, let us start by pointing out that is a completely different matter. One reason there are varied answers there is because people are just guessing on what to do. That task there is one that requires machine learning, with training, testing, and so on. It cannot be properly solved otherwise. It is also a bad question, despite lots of up votes.

@Abid Rahman K 2013-02-16 02:30:28

OK brother, then we can stop this discussion here, me too don't have much time(exam season). Differences you mentioned are differences, I agree, but not "major to me", may be for you. So I think it is just matter of difference in perspective. Thank you for the discussion.

@hchouhan02 2018-06-05 06:14:56

Hi, can you explain the above code in C++ or Java for OpenCV? Thanks

@karlphillip 2012-01-14 15:10:09

This is a recurring subject in Stackoverflow and since I was unable to find a relevant implementation I decided to accept the challenge.

I made some modifications to the squares demo present in OpenCV and the resulting C++ code below is able to detect a sheet of paper in the image:

void find_squares(Mat& image, vector<vector<Point> >& squares)
{
    // blur will enhance edge detection
    Mat blurred;
    medianBlur(image, blurred, 9);

    Mat gray0(blurred.size(), CV_8U), gray;
    vector<vector<Point> > contours;

    // find squares in every color plane of the image
    for (int c = 0; c < 3; c++)
    {
        int ch[] = {c, 0};
        mixChannels(&blurred, 1, &gray0, 1, ch, 1);

        // try several threshold levels
        const int threshold_level = 2;
        for (int l = 0; l < threshold_level; l++)
        {
            // Use Canny instead of zero threshold level!
            // Canny helps to catch squares with gradient shading
            if (l == 0)
            {
                Canny(gray0, gray, 10, 20, 3);

                // Dilate helps to remove potential holes between edge segments
                dilate(gray, gray, Mat(), Point(-1,-1));
            }
            else
            {
                gray = gray0 >= (l+1) * 255 / threshold_level;
            }

            // Find contours and store them in a list
            findContours(gray, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);

            // Test contours
            vector<Point> approx;
            for (size_t i = 0; i < contours.size(); i++)
            {
                // approximate contour with accuracy proportional
                // to the contour perimeter
                approxPolyDP(Mat(contours[i]), approx, arcLength(Mat(contours[i]), true)*0.02, true);

                // Note: absolute value of an area is used because
                // area may be positive or negative - in accordance with the
                // contour orientation
                if (approx.size() == 4 &&
                        fabs(contourArea(Mat(approx))) > 1000 &&
                        isContourConvex(Mat(approx)))
                {
                    double maxCosine = 0;

                    for (int j = 2; j < 5; j++)
                    {
                        double cosine = fabs(angle(approx[j%4], approx[j-2], approx[j-1]));
                        maxCosine = MAX(maxCosine, cosine);
                    }

                    if (maxCosine < 0.3)
                        squares.push_back(approx);
                }
            }
        }
    }
}

After this procedure is executed, the sheet of paper will be the largest square in vector<vector<Point> >:

opencv paper sheet detection

I'm letting you write the function to find the largest square. ;)

@dom 2012-01-18 10:06:33

For some unknown reason I'm not able to get it working anymore. It always throws an exception when mixChannels is called, which is strange because it worked a few days ago: OpenCV Error: Assertion failed (j < nsrcs && src[j].depth() == depth) in mixChannels. Do you know this kind of error? The depth of the channels matches, so it doesn't make sense. I'm working with OSX 10.7.2 and OpenCV 2.3.1

@karlphillip 2012-01-18 10:39:37

That's why I use source control. The smallest accidental modification to the code can be easily discovered. If you didn't change anything, try testing with other images and finally recompile/reinstall OpenCV.

@dom 2012-01-18 10:54:28

Ah, got it working – seems like some times photoshop messes the image up … And I'm now using SVN to manage versions. Thanks!

@dom 2012-01-20 12:16:09

In some cases the biggest square is not only containing the paper, but other stuff and the smaller squares are more accurate. Do you have any ideas how to prevent that? Input: Output:

@karlphillip 2012-01-20 12:27:15

Are you always working with the same paper size? What you can do is discover the size of the paper's rectangle in a successful detection, then on the other tests, try to find the square that has the closest width/height to that size: this should be an indicator of the paper's rectangle. Or, play along with the parameters of cvCanny, medianBlur until you find a more generic solution to your problem. For instance, try the value 7 when calling medianBlur.

@karlphillip 2012-01-20 12:27:52

The fact is that there is no universal way to detect the paper. What you can do is establish rules for taking the picture, so that it favours the algorithm's detection.

@karlphillip 2012-01-20 12:34:03

I also noticed that the picture in your question has the paper horizontally, while these links you just shared show the paper in a vertical position. I suggest you rotate the image and try again to see if this influences the detection.

@dom 2012-01-20 13:13:17

Rotating the image helps in some cases. I also figured out that down- and upscaling to eliminate details in the pictures helps a lot. Now my result looks like this:

@karlphillip 2012-01-20 13:20:23

Awesome. We shall close this thread and not talk about it anymore! =D But seriously, this is becoming more like a chat (and we shouldn't). We are also using the comments to discuss the problems you are facing beyond what was stated in the original question. I suggest you consider asking new questions in stackoverflow if you have them. You'll also get way more attention that way. Good luck.

@Ajay Sharma 2012-01-25 12:35:05

@moosgummi How can I use this code snippet in my iPhone application, looking for a square/rectangle in an image? I didn't find any help for implementing the code with OpenCV and the iPhone SDK. Has this code worked on iPhone?

@karlphillip 2012-01-25 12:40:19

OpenCV is pretty much the same for all platforms (Win/Linux/Mac/iPhone/...). The difference is that some don't support the GPU module of OpenCV. Have you built OpenCV for iOS already? Were you able to test it? I think these are the questions you need to answer before trying anything more advanced. Baby steps!

@QueueOverFlow 2012-10-22 09:13:04

@karlphillip I have error Assertion failed (j < nsrcs && src[j].depth() , how to remove it?

@QueueOverFlow 2012-10-27 07:32:35

@karlphillip I follow your code and draw red rectangle, now how to crop the area inside the rectangle ?

@QueueOverFlow 2012-11-21 12:23:12

@karlphillip please look on that question…

@alandalusi 2012-11-26 20:52:02

@moosgummi I faced a similar error "Some OpenCV Error: Assertion failed (j < nsrcs && src[j].depth() == depth) in mixChannels". How did you solve it?

@alandalusi 2012-11-27 17:34:28

@karlphillip I tested this code and I was able to detect the paper clearly, but it takes so much time. Is the code really that heavy? There is an app called SayText where this detection happens in real time from a video stream. This code would be impractical for real time, am I right?

@karlphillip 2012-11-27 17:45:01

Probably. This is an academic answer, not very practical for industry. There are all sorts of optimizations you can try, beginning with the counter defined at for (int c = 0; c < 3; c++), which iterates over every channel of the image. For instance, you can set it to iterate over only one channel :) Don't forget to upvote.

@Dory 2013-07-01 10:31:55

Hey @karlphillip, I'm trying to detect a square using OpenCV and used the squares.cpp sample, but it always throws an exception when mixChannels is called: OpenCV Error: Assertion failed (j < nsrcs && src[j].depth() == depth) in mixChannels. Could you please tell me how you resolved this error?

@Tim 2013-08-04 11:04:27

A working Java implementation of this code snippet can be found here: It also contains some critical and more efficient changes.

@Rocket 2013-09-03 22:17:29

@karlphillip +1. I tried the same code, but it gives me an answer of 0 squares: same code, same images. I used the angle function and debugSquares from the user and find_squares from your answer, and main from this answer of you…tangular-bright-area-in-a-image-using-opencv, but it gives the answer as 0

@Rocket 2013-09-04 09:24:42

Thanks @karlphillip, I found the error, but it draws so many rectangles, like 33. Should I start a new thread?

@karlphillip 2013-09-04 14:28:10

@Ahmad That's it, it's working. That's the number of rectangles found in your image and that's exactly what this code is supposed to do. In the samples image used here, the paper in the image is the LARGEST rectangle, so all I needed to do is write the code to figure out the largest rectangle in the image.

@user1140237 2013-09-17 07:01:13

@karlphillip thanks for the help, I'll solve that :) Can you please help me with one doubt: finding white paper if the background is also white (I mean white paper on a white desk, not brown or some other contrasting color)...

@Sohaib 2013-10-02 15:10:46

What exactly is angle? I understand the part up until you use approxPolyDP; after that I don't understand what you are doing. Please have a look at this question. Is it the same problem? …age/…

@karlphillip 2013-10-03 00:11:37

@SilentPro angle() is a helper function. As stated in the answer, this code is based on samples/cpp/squares.cpp present in OpenCV.
