MATLAB: Distance measurement using edge detection

Tags: canny, edge detection, Image Processing Toolbox, mri, phantom, vector

Hi all, I would like an expert opinion on this problem. We use a simple rectangular phantom to perform quality assessment of an MRI system, taking structural images to measure the dimensions of the phantom (part of a sample image is attached below, upper part). I wrote a code to perform this automatically; the procedure can be summarized as follows:
  1. edge detection using the Canny method
  2. removal of unwanted edges around the image using a rectangular mask and reconstruction
  3. morphological cleaning of smaller spurious edges
  4. creation of a row and a column vector based on the binary image of edges (the red line in the lower image visualizes which part of the image is copied into the row vector)
  5. calculation of each dimension by subtracting the position of the first white pixel from the position of the last one, then multiplying by the pixel size (I don't use the bounding-box method because the phantom can sometimes be slightly rotated)
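The steps above can be sketched in MATLAB roughly as follows. This is a minimal reconstruction, not the original code: `img`, `pixelSize`, `rowIdx`/`colIdx` (the row and column sampled through the phantom), the 20-pixel border margin, and the 30-pixel speckle threshold are all illustrative assumptions.

```matlab
% Sketch of the measurement pipeline described above (assumed names/values).
BW = edge(img, 'canny');                 % 1. Canny edge detection

mask = false(size(BW));                  % 2. rectangular mask to drop
mask(20:end-20, 20:end-20) = true;       %    unwanted edges near the border
BW = BW & mask;                          %    (20 px margin is an example)

BW = bwareaopen(BW, 30);                 % 3. remove spurious edges < 30 px

rowVec = BW(rowIdx, :);                  % 4. one row and one column of the
colVec = BW(:, colIdx);                  %    binary edge image

firstCol = find(rowVec, 1, 'first');     % 5. first/last white pixel, then
lastCol  = find(rowVec, 1, 'last');      %    multiply by the pixel size
width_mm  = (lastCol - firstCol) * pixelSize;

firstRow = find(colVec, 1, 'first');
lastRow  = find(colVec, 1, 'last');
height_mm = (lastRow - firstRow) * pixelSize;
```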
Now, the code works perfectly. However, when I ran it over a large number of images and compared the values with those measured manually in a simple graphical tool such as ImageJ, I found that the manual measurements are on average roughly half a pixel larger than those obtained with the code. Inter-rater variation is very small, and the code produces appropriate edges for all images.

My question is: why is that? Which is more trustworthy in this case, the MATLAB measurements or the human ones? Human measurements are usually treated as the ground truth in similar applications, but I feel that is not the case here. Should I use a different method? Would adding half a pixel to each measurement make any sense? I've tried different edge detection methods and get the same result. It's becoming really annoying!
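One possible contributor to a sub-pixel bias (an assumption, not something established by the source): subtracting pixel indices measures the distance between pixel *centres*, while the physical boundary of the object lies at the pixel *borders*, half a pixel further out on each side. A toy 1-D example, with purely illustrative numbers:

```matlab
% A bright bar covers columns 10:40 of a 50-pixel row, so its
% border-to-border extent is 31 pixels.
profile = zeros(1, 50);
profile(10:40) = 1;

firstPx = find(profile, 1, 'first');   % 10
lastPx  = find(profile, 1, 'last');    % 40

codeExtent = lastPx - firstPx;         % 30 px: centre-to-centre distance
trueExtent = lastPx - firstPx + 1;     % 31 px: outer border to outer border

% The index subtraction misses half a pixel on each side; a human placing
% callipers on the perceived outer borders may recover part or all of it.
```

How this maps onto Canny output depends on exactly where the detector places its edge pixels relative to the intensity step, so treat this only as a plausible direction for the bias, not a proof that adding half a pixel is the right correction.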
Thank you in advance.

Best Answer

  • Have you tried sub-pixel edge position estimation? If not, it would be interesting to see if it improves agreement. There's a subpixel version of Canny here.
    It also occurs to me that gamma correction applied on image display may affect where your observers choose to put the edge. Say three adjacent pixels have values 0, 180, 255. The jump from 0 to 180 might be perceived by a person as smaller than the jump from 180 to 255, depending on your monitor.
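If you want to try sub-pixel refinement without switching detectors, one common approach is to fit a parabola to the gradient magnitude around its discrete peak and take the vertex as the edge position. A self-contained sketch (the synthetic profile and variable names are illustrative, not from the original code):

```matlab
% Sub-pixel edge localisation by parabolic interpolation of the
% gradient-magnitude peak (illustrative 1-D example).
profile = [0 0 0.1 0.6 0.95 1 1 1];    % synthetic edge profile
g = abs(diff(profile));                % discrete gradient magnitude
[~, k] = max(g);                       % integer peak location

if k > 1 && k < numel(g)
    y1 = g(k-1); y2 = g(k); y3 = g(k+1);
    denom = y1 - 2*y2 + y3;
    if denom ~= 0
        % vertex of the parabola through the three samples, in [-0.5, 0.5]
        offset = 0.5 * (y1 - y3) / denom;
    else
        offset = 0;
    end
    edgePos = k + offset;              % sub-pixel edge position
else
    edgePos = k;                       % peak at the boundary: no refinement
end
```

Applying this at both ends of your row/column vectors, then differencing the two sub-pixel positions, should tell you quickly whether the half-pixel discrepancy against the manual measurements shrinks.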