Project: Accelerating Software Computation with Custom Hardware
Introduction
Image processing is a specific field of study in computer vision. Image processing itself is any form of signal processing for which the input is an image, such as a photograph or video frame; the output of image processing may be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it.
In the project for this class, we are going to perform edge detection first in software, then using a combination of software and a coprocessor.
Edge detection refers to the process of identifying and locating sharp
discontinuities in an image. The discontinuities are abrupt changes
in pixel intensity, which characterize boundaries of objects in a scene.
Classical methods of edge detection involve convolution on the image
with an operator (a 2-D filter), which is constructed to be sensitive
to large gradients in the image while returning values of zero in uniform
regions. There are an extremely large number of edge detection operators
available, each designed to be sensitive to certain types of edges.
Variables involved in the selection of an edge detection operator include:
- Edge orientation: The geometry of the operator determines
a characteristic direction in which it is most sensitive to edges.
Operators can be optimized to look for horizontal, vertical, or
diagonal edges.
- Noise environment: Edge detection is difficult in noisy
images, since both the noise and the edges contain high-frequency
content. Attempts to reduce the noise result in blurred and distorted
edges. Operators used on noisy images are typically larger in scope,
so they can average enough data to discount localized noisy pixels.
This results in less accurate localization of the detected edges.
- Edge structure: Not all edges involve a step change in
intensity. Effects such as refraction or poor focus can result in
objects with boundaries defined by a gradual change in intensity.
The operator needs to be chosen to be responsive to such a gradual
change in those cases. Newer wavelet-based techniques actually characterize
the nature of the transition for each edge in order to distinguish,
for example, edges associated with hair from edges associated with
a face.
There are many ways to perform edge detection; however, most methods may be grouped into two categories, Gradient and Laplacian. In this project we will work only with the gradient category. The gradient method detects edges by looking for the maximum and minimum in the first derivative of the image.
The Sobel operator is used in image processing, particularly
within edge detection algorithms. Technically, it is a discrete differentiation
operator, computing an approximation of the gradient of the image intensity
function. At each point in the image, the result of the Sobel operator
is either the corresponding gradient vector or the norm of this vector.
The Sobel operator is based on convolving the image with a small, separable, integer-valued filter in the horizontal and vertical directions, and is therefore relatively inexpensive in terms of computation. On the other hand, the gradient approximation it produces is relatively crude, in particular for high-frequency variations in the image.
The operator consists of a pair of 3x3 convolution kernels; a kernel is a constant matrix. One kernel is simply the other rotated by 90 degrees. For the calculation we use the kernels Kx = { {-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1} } and Ky = { {1, 2, 1}, {0, 0, 0}, {-1, -2, -1} }. The calculation convolves the kernels with the picture P, where Gx = Kx * P and Gy = Ky * P.
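For example, if the 3x3 neighborhood around a pixel is P = { {10, 10, 50}, {10, 10, 50}, {10, 10, 50} } (a bright region to the right of a dark one), then Gx = (-1)(10) + (1)(50) + (-2)(10) + (2)(50) + (-1)(10) + (1)(50) = 160 and Gy = 0, indicating a strong vertical edge.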
These kernels are designed to respond maximally to edges running vertically and horizontally relative to the pixel grid, one kernel for each of the two perpendicular orientations. The kernels can be applied separately to the input image to produce separate measurements of the gradient component in each orientation (call these Gx and Gy). These can then be combined to find the absolute magnitude of the gradient at each point and the orientation of that gradient. The gradient magnitude is given by |G| = sqrt(Gx^2 + Gy^2). Typically, an approximate magnitude is computed using |G| = |Gx| + |Gy|, which is much faster to compute. The angle of orientation of the edge (relative to the pixel grid) giving rise to the spatial gradient is given by θ = arctan(Gy/Gx).
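As a small illustration (not from the original text), the snippet below combines given Gx and Gy values into the exact magnitude, the faster approximation, and the orientation; the values reuse the worked example above.

#include <cmath>
#include <cstdio>
#include <cstdlib>

int main()
{
    int Gx = 160, Gy = 0;                                         // gradient components for one pixel
    double exact  = std::sqrt((double)Gx * Gx + (double)Gy * Gy); // |G| = sqrt(Gx^2 + Gy^2)
    int    approx = std::abs(Gx) + std::abs(Gy);                  // faster approximation |Gx| + |Gy|
    double theta  = std::atan2((double)Gy, (double)Gx);           // edge orientation in radians
    std::printf("|G| = %.1f (approx %d), theta = %.2f rad\n", exact, approx, theta);
    return 0;
}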
The result of the Sobel operator is a two-dimensional map of the gradient at each point. It can be processed and viewed as though it is itself an image, with the areas of high gradient (the likely edges) visible as white lines. The following images illustrate this by showing the computation of the Sobel operator on a simple image.
Sample code for Sobel operation:
/* Function performs the partial Sobel convolution on a single pixel value.
   Parameters: x, y correspond to the pixel location on the image.
   The return value is the convolved pixel at (x, y). */
#include <cmath>

// Added for completeness: a 2-D array parameter needs a known width.
// PIC_WIDTH must match the second dimension of the picture array
// (for example, 100 for a 100x100 image).
const int PIC_WIDTH = 100;

unsigned int SobelOperator(unsigned int x, unsigned int y,
                           const unsigned int picture[][PIC_WIDTH])
{
    // constants
    const int MATRIX_MAX = 3;
    const unsigned int PIXEL_MAX = 255;

    // values for the new cell calculation (signed: the sums can be negative)
    int GXpixel = 0, GYpixel = 0;
    unsigned int Gpixel = 0;

    // matrix for the Kx convolution
    const int SobelKxFilterMatrix[3][3] = { {-1, 0, 1},
                                            {-2, 0, 2},
                                            {-1, 0, 1} };

    // matrix for the Ky convolution
    const int SobelKyFilterMatrix[3][3] = { {-1, -2, -1},
                                            { 0,  0,  0},
                                            { 1,  2,  1} };

    // for addressing into the filter array and into the picture itself
    int iFilter, jFilter;
    int iPic, jPic;

    // Loop to iterate over the picture and filter window to perform the Sobel operation.
    // The loop is not optimized and does not check for boundary exceptions.
    for (iFilter = 0, iPic = -1; iFilter < MATRIX_MAX && iPic <= 1; iFilter++, iPic++)
    {
        for (jFilter = 0, jPic = -1; jFilter < MATRIX_MAX && jPic <= 1; jFilter++, jPic++)
        {
            GXpixel += (int)picture[x + iPic][y + jPic] * SobelKxFilterMatrix[iFilter][jFilter];
            GYpixel += (int)picture[x + iPic][y + jPic] * SobelKyFilterMatrix[iFilter][jFilter];
        }
    }

    // check for pixel saturation of each gradient component
    if (GXpixel < 0) { GXpixel = -GXpixel; }
    if (GYpixel < 0) { GYpixel = -GYpixel; }
    if (GXpixel > (int)PIXEL_MAX) { GXpixel = PIXEL_MAX; }
    if (GYpixel > (int)PIXEL_MAX) { GYpixel = PIXEL_MAX; }

    // normalize pixel
    Gpixel = (unsigned int)std::sqrt(std::pow((double)GXpixel, 2) + std::pow((double)GYpixel, 2));

    // check for pixel saturation
    if (Gpixel > PIXEL_MAX) { Gpixel = PIXEL_MAX; }

    return Gpixel;
}
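As a usage illustration (not part of the original sample), the function above might be applied to every interior pixel of an image as sketched below; the array names and the height parameter are assumptions, and the outermost rows and columns are skipped because the 3x3 window would otherwise fall outside the picture.

void SobelImage(const unsigned int picture[][PIC_WIDTH],
                unsigned int filtered[][PIC_WIDTH],
                unsigned int height)
{
    // iterate over the interior pixels only; border pixels are left untouched
    for (unsigned int x = 1; x + 1 < height; ++x)
    {
        for (unsigned int y = 1; y + 1 < PIC_WIDTH; ++y)
        {
            filtered[x][y] = SobelOperator(x, y, picture);
        }
    }
}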
The Prewitt operator is similar to the Sobel operator and is used for detecting vertical and horizontal edges in images. However, the kernels differ as follows: Kx = { {1, 1, 1}, {0, 0, 0}, {-1, -1, -1} }, Ky = { {-1, 0, 1}, {-1, 0, 1}, {-1, 0, 1} }.
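Because only the kernels change, the Prewitt filter can reuse the Sobel code above with different matrices; the sketch below shows the only lines that would differ (the matrix names are illustrative).

// Prewitt kernels as given above; a PrewittOperator function can be
// identical to SobelOperator except for these two constant matrices.
const int PrewittKxFilterMatrix[3][3] = { { 1,  1,  1},
                                          { 0,  0,  0},
                                          {-1, -1, -1} };

const int PrewittKyFilterMatrix[3][3] = { {-1, 0, 1},
                                          {-1, 0, 1},
                                          {-1, 0, 1} };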
Software
Part 1: Software Implementation
Perform the Prewitt and Sobel filters on the given image in software using C/C++ on the lab machines, then verify the operation visually. You may use your own images; just remember that your space is limited on the FPGA. Otherwise, pictures.zip has several images included.
Recommended Steps:
1. EasyBMP to Data Structure
Read the image's color channels using the EasyBMP library and put the values into a data structure. The library, documentation, and an example for EasyBMP are provided below; a combined sketch of steps 1-4 appears after this list.
EasyBMP_1.06.zip
EasyBMP_Documentation_1.06.00.zip
BMP_Image_Parce_to_RGB.cpp
2. Prewitt and Sobel Operator Functions
Develop functions for the Prewitt and Sobel operator.
3. Perform the Edge Detection
Create a program to use the operator functions and output to a new
BMP file.
4. Verify Visually
Visually compare the original image to the new filtered images.
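The following is a rough sketch of steps 1-4 under a few assumptions: the image is 100x100, the EasyBMP calls follow the documentation bundled above (BMP, ReadFromFile, TellWidth/TellHeight, SetSize, SetBitDepth, WriteToFile, and pixel access via Image(column, row)), and the file names and the intensity-averaging step are illustrative choices, not requirements.

#include "EasyBMP.h"

// SobelOperator and PIC_WIDTH (= 100) are taken from the sample code above.
const int PIC_HEIGHT = 100;   // assumed image height; adjust for your own image

int main()
{
    BMP Input;
    Input.ReadFromFile("input.bmp");                   // hypothetical file name

    unsigned int picture[PIC_HEIGHT][PIC_WIDTH];
    unsigned int filtered[PIC_HEIGHT][PIC_WIDTH] = { {0} };

    // Step 1: read the color channels into a data structure
    // (here collapsed to intensity by averaging R, G, and B).
    for (int row = 0; row < PIC_HEIGHT; ++row)
        for (int col = 0; col < PIC_WIDTH; ++col)
            picture[row][col] = (Input(col, row)->Red +
                                 Input(col, row)->Green +
                                 Input(col, row)->Blue) / 3;

    // Steps 2 and 3: apply the operator function to the interior pixels.
    for (int row = 1; row < PIC_HEIGHT - 1; ++row)
        for (int col = 1; col < PIC_WIDTH - 1; ++col)
            filtered[row][col] = SobelOperator(row, col, picture);

    // Step 3 (continued): write the result to a new BMP for step 4's visual check.
    BMP Output;
    Output.SetSize(PIC_WIDTH, PIC_HEIGHT);
    Output.SetBitDepth(24);
    for (int row = 0; row < PIC_HEIGHT; ++row)
        for (int col = 0; col < PIC_WIDTH; ++col)
        {
            ebmpBYTE g = (ebmpBYTE)filtered[row][col];
            Output(col, row)->Red   = g;
            Output(col, row)->Green = g;
            Output(col, row)->Blue  = g;
        }
    Output.WriteToFile("output_sobel.bmp");            // hypothetical file name
    return 0;
}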
MicroBlaze
Part 2: MicroBlaze Software
Create a self-contained MicroBlaze software implementation of the Prewitt and Sobel filters. Hard-code the image into the software, perform the filters, output the data in a standard format to the UART, and convert the data to a BMP image.
Recommended Steps:
1. BMP to RGB
Use EasyBMP to convert a 100x100 image to an RGB map or a B/W (black/white) map.
2. (RGB or B/W) to C Matrix
Modify your program to take the (RGB or B/W) Map data and generate
a file with a C matrix.
3. Sobel Filter Software
From the source code above, develop a Sobel filter function.
4. Create a MicroBlaze program
From the previous parts, create a program that reads a C matrix, performs the Sobel operation (step 3), writes to another C matrix, and outputs the filtered image to the UART; a rough sketch of such a program appears after this list. Note: the processed data can be read from the Cutecom log file.
5. UART Data to BMP
Make a C/C++ program to convert the captured UART data into a BMP image.
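A minimal sketch of the MicroBlaze-side program (step 4) follows. It assumes that step 2 produced a header file such as image_data.h defining IMG_H, IMG_W, and a hard-coded matrix image[IMG_H][IMG_W]; the header name and the plain-decimal UART format are illustrative assumptions, and SobelOperator is the function from the sample code with its width constant set to IMG_W.

#include "xil_printf.h"     // Xilinx standalone UART output
#include "image_data.h"     // hypothetical: defines IMG_H, IMG_W, and image[IMG_H][IMG_W]

static unsigned int filtered[IMG_H][IMG_W];

int main(void)
{
    int x, y;

    // filter the interior pixels with the software Sobel (or Prewitt) function
    for (x = 1; x < IMG_H - 1; x++)
        for (y = 1; y < IMG_W - 1; y++)
            filtered[x][y] = SobelOperator(x, y, image);

    // dump the result over the UART as plain decimal text, one row per line;
    // the Cutecom log of this output is converted back to a BMP on the PC (step 5)
    for (x = 0; x < IMG_H; x++) {
        for (y = 0; y < IMG_W; y++)
            xil_printf("%d ", filtered[x][y]);
        xil_printf("\r\n");
    }
    return 0;
}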
Profile MicroBlaze Software
Part 3: Coarse-Grained Evaluation of C Instructions on the MicroBlaze
Using the cycle counter from the previous lab, profile the number of clock cycles needed to perform the Prewitt and Sobel functions on the MicroBlaze. Run the profile several times and calculate the average from a set of at least ten runs.
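A hedged sketch of the measurement loop is shown below; READ_CYCLE_COUNTER() is a placeholder for however the cycle counter from the previous lab is read (for example, a memory-mapped register), and NUM_RUNS matches the minimum of ten runs requested above.

#include "xil_printf.h"

#define NUM_RUNS 10

/* placeholder: replace with the actual cycle-counter access from the previous lab */
extern unsigned int READ_CYCLE_COUNTER(void);

void profile_sobel(void)
{
    unsigned int start, stop, total = 0;
    int run;

    for (run = 0; run < NUM_RUNS; run++) {
        start = READ_CYCLE_COUNTER();
        /* perform the full Prewitt or Sobel filter over the image here */
        stop = READ_CYCLE_COUNTER();
        total += (stop - start);
    }
    xil_printf("average cycles: %d\r\n", total / NUM_RUNS);
}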
Coprocessor Hardware
Part 4: Sobel Hardware Module
Taking what you have learned from the Prewitt and Sobel operators, make hardware modules to calculate a single pixel. The hardware module inputs will be the 9 pixel values, and the output will be the calculated pixel value. Replace the software implementation with the hardware module, output the filtered image to the UART, and recombine the data into a BMP image for visual confirmation.
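The following is not the hardware itself but a small C reference model of the intended module behavior (nine pixel inputs in, one filtered pixel out), which can be used to check the coprocessor's results against software; the 3x3 window layout, the function name, and the use of the |Gx| + |Gy| approximation are assumptions.

// behavioral reference for the 9-input Sobel module; p[r][c] is the 3x3 window
unsigned int SobelWindow(const unsigned int p[3][3])
{
    // Kx = {{-1,0,1},{-2,0,2},{-1,0,1}}, Ky = {{-1,-2,-1},{0,0,0},{1,2,1}}
    int gx = -(int)p[0][0] + (int)p[0][2]
             - 2*(int)p[1][0] + 2*(int)p[1][2]
             - (int)p[2][0] + (int)p[2][2];
    int gy = -(int)p[0][0] - 2*(int)p[0][1] - (int)p[0][2]
             + (int)p[2][0] + 2*(int)p[2][1] + (int)p[2][2];

    if (gx < 0) gx = -gx;
    if (gy < 0) gy = -gy;

    unsigned int g = (unsigned int)(gx + gy);   // |Gx| + |Gy| approximation (no multipliers or sqrt)
    return (g > 255) ? 255 : g;                 // saturate to the 8-bit pixel range
}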
Profile Coprocessor Hardware
Part 5: Coarse-Grained Evaluation of C Instructions
Using the cycle counter, profile the number of clock cycles needed to perform the Prewitt and Sobel functions on the coprocessor. Run the profile several times and calculate the average from a set of at least ten runs.
Direct Display
Part 6: Direct Display via BRAM and VGA
Using what you have already created, create a new peripheral that uses a BRAM in combination with your VGA driver to display the image on the monitor screen.
Application Specific Hardware
Part 7: Application Specific Processor
Create an RTL design of the Prewitt and Sobel filters and display the result on the VGA monitor. The design will read your image from a BRAM, run the selected Prewitt or Sobel filter on the data from the BRAM, write the processed image to a BRAM, and display it on the VGA monitor using your previously made VGA driver. Use a switch to reset the computation and to change between Prewitt and Sobel.
Recommended Steps:
1. Draw HLFSM
Create a high-level finite state machine of the overall operation and verify its correctness (a possible set of states is sketched after this list).
2. Create Coarse-Grained Datapath Design
Evaluate your HLFSM and draw the datapath with black-box operators.
3. Create Fine-Grained Component Design
From your previous source code, develop BRAM initialization, Prewitt
operator, Sobel operator, and other necessary operators.
4. Connect Components on the Datapath
Combining the designs from the previous steps, incrementally connect the components and test the data flow. This step should involve several testbenches to verify the integrity of the RTL components and the HLFSM functionality, e.g., input image BRAM to operators, operators to output BRAM, and output BRAM to VGA.
5. Display to VGA
Visually verify the result is comparable to the software implementation.
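As one possible starting point for step 1, the state labels below sketch a high-level FSM for the overall operation; they are written as a C enumeration only to make the flow concrete, and the state names and ordering are illustrative assumptions, not a required design.

// illustrative HLFSM states for the read -> filter -> write -> display flow
enum hlfsm_state {
    S_INIT,         // reset: clear addresses and latch the Prewitt/Sobel select switch
    S_READ_WINDOW,  // fetch the 3x3 neighborhood of the current pixel from the input BRAM
    S_FILTER,       // apply the selected kernel pair and saturate the result to 255
    S_WRITE,        // write the filtered pixel to the output BRAM and advance to the next pixel
    S_DONE          // image complete: the VGA driver streams the output BRAM to the monitor
};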
Profile Application Specific Hardware
Part 8: Coarse-Grained Evaluation
Using the cycle counter, profile the number of clock cycles needed to perform the Prewitt and Sobel functions on the application-specific hardware. Run the profile several times and calculate the average from a set of at least ten runs. The cycle counter should start when the calculation begins and stop when the calculation ends. The cycle count can be output on the LCD, over the UART, or on the VGA display over the image.
Project Report
Each group must submit one report. The report can significantly affect your grade, depending on how many parts of the project your group completed. Each page must be single-spaced, in at least 10-point Arial, Garamond, or Times New Roman, with 1-inch margins on all sides. The report will include a Title Page, Description of Project, User Guide, Technical Design Specifications, Results and Performance Evaluation, and Conclusion.
The Title Page will contain the project title, course name, university, instructor and teaching assistant names, names of the group members, the members' emails, and a statement of original work. The statement will be as follows: "We, [member 1] and [member 2], attest that this document and the work described herein is entirely our work."
The Description of Project will describe the constructed project. It should include a brief description of the project in the members' own words. The reader of this section should be able to understand the purpose of the project, the theory related to the project content, and the high-level implementation.
The User Guide section will describe how to employ your project. It should not reference unnecessary technical details and should be clear enough that any engineer can reproduce the results. You can assume the user will have all source files in your hierarchy.
The Technical Design Specifications section will describe the fine details of your project. It should not reference unnecessary technical details and should be clear enough that any other student in the class can reproduce the results. You are encouraged to include block diagrams, datapaths, and other related figures.
The Results and Performance Evaluation section will analytically compare the performance of the MicroBlaze software, coprocessor, and application-specific processor implementations. Show these results with a brief description, before-and-after images, the equations used in the calculations, and a graph comparison.
The Conclusion section will state a summary, problems encountered and solved, suggestions for better approaches, an itemized division of labor, contact methods, and knowledge learned.
Here are templates: projectreport.docx
projectreport.pdf
Note: The project turnin will follow the specification from
the turnin page, except include all Xilinx files in the extra
folder and the report in DOCX and PDF format.