Test Image 1: 128x128 image of a white circle, centered and with the background black
However, the FFT algorithm swaps the quadrants of the FT plane, resulting in the image above. Applying the fftshift() command on it yields the correct FT of a circle, whose analytical solution is the Airy pattern, as shown in the figure below.
One characteristic of the FFT is that applying it twice returns the original image flipped about its center; for a symmetric image such as this centered circle, the result is indistinguishable from the original, as seen in the resulting image below.
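The steps above can be sketched numerically. The activity was done in Scilab; the following is an equivalent Python/NumPy sketch (the circle radius of 20 pixels is an assumption) showing the quadrant-centering role of fftshift() and the flip produced by applying the forward FFT twice:

```python
import numpy as np

# 128x128 white circle, centered, on a black background (as in Test Image 1);
# the radius of 20 pixels is an illustrative assumption
N = 128
y, x = np.mgrid[0:N, 0:N]
circle = ((x - N / 2) ** 2 + (y - N / 2) ** 2 <= 20 ** 2).astype(float)

# fft2 puts the zero-frequency term at the corners; fftshift re-centers it,
# so |F_shifted| displays the expected Airy pattern
F = np.fft.fft2(circle)
F_shifted = np.fft.fftshift(F)

# Applying the forward FFT twice returns the image flipped about its center
# (up to a scale factor of N*N); for this symmetric circle the flipped
# image equals the original
twice = np.real(np.fft.fft2(np.fft.fft2(circle))) / N ** 2
flipped = np.roll(circle[::-1, ::-1], 1, axis=(0, 1))  # flip with wrap-around
print(np.allclose(twice, flipped))  # True
```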
Test Image 2: 128x128 image of a letter “A”
Convolution
Convolution is a linear operation, which means that if f and g are recast by
linear transformations, such as the Laplace or Fourier transform, they obey the
convolution theorem:

H = FG

where H, F, and G are the transforms of h, f, and g, respectively, and h = f * g
is the convolution of f and g. The convolution is a "smearing" of one function
against another such that the resulting function h looks a little like both f
and g. Convolution is used to model the linear regime of instruments or
detection devices such as those used in imaging. For example, in imaging, f can
be the object and g the transfer function of the imaging system. Their
convolution, h, is then the image produced by the detection system, which in
general is not 100% identical to the original object.*

*AP186 Notes, Maricor Soriano, 2008
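The convolution theorem can be verified numerically. The sketch below (Python/NumPy rather than Scilab; the 8x8 random arrays are illustrative) computes h two ways, via the product H = FG of the transforms and via direct circular convolution, and checks that they agree:

```python
import numpy as np

# Two small arrays standing in for f (object) and g (transfer function)
rng = np.random.default_rng(0)
f = rng.random((8, 8))
g = rng.random((8, 8))

# Convolution theorem: the DFT of the circular convolution h = f * g
# equals the elementwise product H = F G of the transforms
h = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(g)))

# Direct circular convolution for comparison
M = 8
h_direct = np.zeros_like(f)
for m in range(M):
    for n in range(M):
        for i in range(M):
            for j in range(M):
                h_direct[m, n] += f[i, j] * g[(m - i) % M, (n - j) % M]

print(np.allclose(h, h_direct))  # True
```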
For this part of the activity, we simulate an imaging device, such as a lens, using images of circles with different radii. Each circle represents the aperture of a circular lens. We then convolve another image with these apertures and observe the resulting image.
Original Image:
c) circular lens of large aperture with resulting image after convolution
d) circular lens of very large aperture with resulting image after convolution
As observed, lenses with larger apertures produce better images than those with smaller apertures. This is because the finite aperture acts as a low-pass filter in the Fourier plane: a larger aperture admits more spatial frequencies, so the resulting image retains finer detail of the original object.
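The aperture effect can be sketched as follows (Python/NumPy rather than Scilab; the square test object and the radii of 5 and 40 pixels are assumptions). The object's FT is multiplied by a centered circular aperture, which is equivalent to convolving the object with the lens's transfer function, and the reconstruction error is compared for a small and a large aperture:

```python
import numpy as np

# Object: a bright square on a 128x128 black background (illustrative stand-in
# for the test image; the radii below are likewise assumptions)
N = 128
obj = np.zeros((N, N))
obj[48:80, 48:80] = 1.0

yy, xx = np.mgrid[0:N, 0:N]
r2 = (xx - N // 2) ** 2 + (yy - N // 2) ** 2

def image_through_aperture(radius):
    """Keep only the spatial frequencies passed by a centered circular aperture."""
    aperture = (r2 <= radius ** 2).astype(float)
    F = np.fft.fftshift(np.fft.fft2(obj))        # center the FT
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * aperture)))

small = image_through_aperture(5)    # small lens aperture
large = image_through_aperture(40)   # large lens aperture

# RMS error against the original: the larger aperture admits more
# frequencies, so its image is closer to the object
err_small = np.sqrt(np.mean((small - obj) ** 2))
err_large = np.sqrt(np.mean((large - obj) ** 2))
print(err_large < err_small)  # True
```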
Correlation via Template Matching
Correlation is a measure of the degree of similarity between two functions f and g. The more identical they are at a certain position, the higher their correlation value. Therefore, the correlation function is used mostly in template matching or pattern recognition.
Template Matching is a pattern recognition technique suitable for finding exactly identical patterns in a scene such as in the case of finding a certain word in a document.
The image on the left is correlated with the template on the right. This is done to find the locations of the letter "A" in the original image.
The resulting image highlights the points where "A" is located: these are the points where bright white spots occur.
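Correlation-based template matching can be sketched in the Fourier domain (Python/NumPy rather than Scilab; the toy 64x64 scene, the crude 5x5 "A" pattern, and the stamp locations are all assumptions). The correlation is computed as the inverse FFT of conj(F_template) * F_scene, and its brightest spots fall exactly where the template occurs:

```python
import numpy as np

# Toy 64x64 "document": a crude 5x5 letter "A" stamped at two known locations
N = 64
template_patch = np.array([[0, 0, 1, 0, 0],
                           [0, 1, 0, 1, 0],
                           [1, 1, 1, 1, 1],
                           [1, 0, 0, 0, 1],
                           [1, 0, 0, 0, 1]], dtype=float)

scene = np.zeros((N, N))
for (r, c) in [(10, 20), (40, 45)]:
    scene[r:r + 5, c:c + 5] = template_patch

# Zero-pad the template to the scene size, then correlate via the FFT:
# correlation = F^-1{ conj(F{template}) * F{scene} }
template = np.zeros((N, N))
template[:5, :5] = template_patch
corr = np.real(np.fft.ifft2(np.conj(np.fft.fft2(template)) * np.fft.fft2(scene)))

# The brightest spots mark where the template matches the scene
peak = np.unravel_index(np.argmax(corr), corr.shape)
print(peak)  # one of the stamped locations
```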
Edge detection using the convolution integral
Edge detection can be seen as template matching of an edge pattern with an image. In this activity, we created a 3x3 matrix pattern of an edge such that the total sum is zero.
We used three different patterns corresponding to three different directions.
Patterns (left to right): horizontal, vertical, spot
Using the imcorrcoef() function in Scilab, we then correlate these patterns with the original image to detect edges. Observe from the results below that horizontal edges are highlighted in the first image and vertical edges in the second, while the spot pattern defines the edges best since it responds to horizontal, vertical, and even smooth (curved) edges.
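The edge-detection idea can be sketched as follows (Python/NumPy rather than Scilab's imcorrcoef(); the white-square test image and the specific kernel weights are illustrative assumptions). Each 3x3 pattern sums to zero, so flat regions give zero response and only intensity changes, i.e. edges, survive:

```python
import numpy as np

# Three zero-sum 3x3 edge patterns (horizontal, vertical, spot);
# the specific weights are illustrative assumptions
horizontal = np.array([[-1., -1., -1.],
                       [ 2.,  2.,  2.],
                       [-1., -1., -1.]])
vertical = horizontal.T
spot = np.array([[-1., -1., -1.],
                 [-1.,  8., -1.],
                 [-1., -1., -1.]])

def conv2(img, k):
    """3x3 convolution with zero padding (kernel flipped, as in true convolution)."""
    kf = k[::-1, ::-1]
    padded = np.pad(img, 1)
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kf)
    return out

# Test image: a white square on black; edges lie on the square's boundary
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0

spot_edges = np.abs(conv2(img, spot))
# Flat regions (inside the square and the background) give zero because the
# pattern sums to zero; only the boundary pixels respond
print(spot_edges[16, 16], spot_edges[0, 0])  # 0.0 0.0 (flat regions)
```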
This technique can sharpen edges, but it works best on monochrome input images: using a 256-color bitmap as the input, the edge detection produces a noisy result in which one cannot see clearly defined edges.
-o0o-
Collaborators: Jeric and Benj
-o0o-
Rating: 10/10
Expected results were observed for each part of the activity.