# Feature Extraction, Vision and Image Processing: Write some Software

I have an assignment related to image processing. It can be done in Python, C, or C++.

Here is the list of features I want implemented. Find the Dropbox link for the files at the end of this post.

## 1 Detecting interest points (features)

We start by detecting interest points in images (also sometimes referred to as features). As we will see throughout this course, the ability to solve this problem well is necessary for the success of many computer vision applications. Find a couple of images to apply your solution to and illustrate that it works.

Apply either a blob detector, such as the Difference of Gaussians (DoG) or the Laplacian, or the Harris corner detector to your images. In this part you are allowed to use library functions in your solution, but we recommend that you attempt to implement the detection algorithm from scratch. Draw the detected points on top of the image at the detected locations and include these images.
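A from-scratch Harris corner detector along these lines might look as follows. This is a minimal sketch using NumPy and SciPy; the smoothing scales, the Harris constant `k`, and the threshold are illustrative choices, not values prescribed by the assignment:

```python
import numpy as np
from scipy import ndimage

def harris_response(img, sigma=1.0, k=0.04):
    """Harris response R = det(M) - k * trace(M)^2 for a grayscale float image."""
    # Image gradients via Gaussian derivative filters (order selects the axis).
    Ix = ndimage.gaussian_filter(img, sigma, order=(0, 1))
    Iy = ndimage.gaussian_filter(img, sigma, order=(1, 0))
    # Structure-tensor entries, smoothed over a local integration window.
    Sxx = ndimage.gaussian_filter(Ix * Ix, 2 * sigma)
    Syy = ndimage.gaussian_filter(Iy * Iy, 2 * sigma)
    Sxy = ndimage.gaussian_filter(Ix * Iy, 2 * sigma)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2

def detect_corners(img, threshold=0.01, min_distance=5):
    """Return (row, col) coordinates of thresholded local maxima of R."""
    R = harris_response(img)
    # Non-maximum suppression: keep pixels that are the maximum in a window.
    local_max = ndimage.maximum_filter(R, size=2 * min_distance + 1)
    mask = (R == local_max) & (R > threshold * R.max())
    return np.argwhere(mask)
```

For drawing, the returned (row, col) coordinates can be plotted directly over the image, e.g. with matplotlib's `plt.scatter(cols, rows)`.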

Extra: Consider extending your solution to multi-scale detectors such as the multi-scale Laplacian, multi-scale Harris corner, or Harris-Laplacian detectors.
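For the multi-scale direction, a Difference-of-Gaussians blob detector can be sketched by stacking blurred copies of the image and searching for extrema jointly over space and scale. The sigma ladder and threshold below are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def dog_keypoints(img, sigmas=(1.0, 1.6, 2.56, 4.1), threshold=0.02):
    """Return (scale_index, row, col) of DoG extrema in scale space."""
    # Gaussian scale space, then differences of adjacent levels.
    blurred = [ndimage.gaussian_filter(img, s) for s in sigmas]
    dogs = np.stack([b1 - b0 for b0, b1 in zip(blurred, blurred[1:])])
    # A keypoint is a 3x3x3 local extremum in the (scale, row, col) volume
    # whose magnitude exceeds the contrast threshold.
    maxima = (dogs == ndimage.maximum_filter(dogs, size=3)) & (dogs > threshold)
    minima = (dogs == ndimage.minimum_filter(dogs, size=3)) & (dogs < -threshold)
    return np.argwhere(maxima | minima)
```

The scale index can be mapped back to a sigma to draw each detection as a circle of radius proportional to its scale.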

## 2 Simple matching of features

Next let us try to perform a simple feature matching based on your detected points. As part of the assignment in Absalon you find a pair of images (img001_diffuse and img002_diffuse) showing the same object from slightly different views (different view angle). These images are taken from the DTU robot data set [url removed, login to view]. We can compare interest points from image A (img001_diffuse) with those in image B (img002_diffuse) and find points that match. We recommend you use the small grayscale versions of these images.

Apply your interest point detectors to these images and collect all your detected points. For each point, extract a small square patch of pixels centered on the detected point. Let us use this patch as our feature describing the local image structure around the interest point. As a dissimilarity measure we will use the normalized cross correlation between the two patches that we are comparing.
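The patch extraction and comparison can be sketched as follows. One assumption here: since normalized cross correlation (NCC) is a similarity in [-1, 1], the dissimilarity is taken as 1 - NCC, so identical patches score 0:

```python
import numpy as np

def extract_patch(img, r, c, size=9):
    """Square patch of odd side `size` centered at (r, c).

    Points closer than size//2 to the image border should be discarded
    before calling this, or the patch will be truncated.
    """
    h = size // 2
    return img[r - h:r + h + 1, c - h:c + h + 1]

def ncc_dissimilarity(p, q):
    """1 - NCC of two patches (assumption: lower means more similar)."""
    # Normalize each patch to zero mean and unit norm; the small epsilon
    # guards against division by zero on constant patches.
    p = (p - p.mean()) / (np.linalg.norm(p - p.mean()) + 1e-9)
    q = (q - q.mean()) / (np.linalg.norm(q - q.mean()) + 1e-9)
    return 1.0 - float(np.sum(p * q))
```

The mean subtraction and norm division also make the measure invariant to affine brightness changes of a patch, which is one answer to the normalization question below.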

To select a match between a point in image A and points from image B, you can either pick the point pairs with the smallest dissimilarity measure, or look at the ratio of the dissimilarity measure for the closest match to that of the second closest match and keep all points with a ratio below some threshold value (you need to find a good threshold). You may risk having several points from image B matching with the same point in image A and need to be able to handle this situation. How could this be done?
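One way to combine the ratio test with duplicate handling is sketched below. The mutual-nearest-neighbour check at the end is one possible answer to the question above, not the only one; descriptors are assumed to be zero-mean, unit-norm flattened patches so that 1 minus their dot product is the NCC dissimilarity:

```python
import numpy as np

def match_ratio_test(dA, dB, ratio=0.8):
    """Match descriptor lists dA -> dB; returns (index_in_A, index_in_B) pairs.

    Assumes len(dB) >= 2 (the ratio test needs a second-closest match).
    """
    # All pairwise dissimilarities, assuming normalized descriptors.
    D = np.array([[1.0 - float(a @ b) for b in dB] for a in dA])
    matches = []
    for i, row in enumerate(D):
        order = np.argsort(row)
        best, second = order[0], order[1]
        # Lowe-style ratio test: keep the match only when the best
        # dissimilarity is clearly smaller than the runner-up's.
        if row[best] < ratio * row[second]:
            matches.append((i, int(best)))
    # Resolve several A points claiming the same B point by keeping
    # only mutual nearest pairs.
    back = D.argmin(axis=0)  # best A index for each B point
    return [(i, j) for i, j in matches if back[j] == i]
```

An alternative to the mutual check is a greedy assignment: sort all candidate pairs by dissimilarity and accept each pair only if neither endpoint is already taken.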

A popular approach to illustrating results on matching interest points between two images is to put the two images beside each other and then draw lines between the matching interest points. Illustrate the performance of your implementation using this approach and comment on your results. What patch size should you use to get good results? Can you think of some normalization steps you could apply to the patch in order to obtain a more robust feature?
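The side-by-side illustration can be built by placing both images on one canvas and shifting the x-coordinates of image B's points by the width of image A; the resulting segments can then be drawn with, e.g., matplotlib's `plt.plot`. A sketch, assuming points are given as (row, col):

```python
import numpy as np

def side_by_side(imgA, imgB, ptsA, ptsB, matches):
    """Return a combined canvas and the ((x1, y1), (x2, y2)) line per match."""
    h = max(imgA.shape[0], imgB.shape[0])
    canvas = np.zeros((h, imgA.shape[1] + imgB.shape[1]), dtype=imgA.dtype)
    canvas[:imgA.shape[0], :imgA.shape[1]] = imgA
    canvas[:imgB.shape[0], imgA.shape[1]:] = imgB
    # Points in image B are shifted right so lines can be drawn directly.
    segments = [((ptsA[i][1], ptsA[i][0]),
                 (ptsB[j][1] + imgA.shape[1], ptsB[j][0]))
                for i, j in matches]
    return canvas, segments
```

Typical usage: `plt.imshow(canvas, cmap="gray")`, then `plt.plot([x1, x2], [y1, y2])` for each segment.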

Try to repeat the experiment by matching img001_diffuse with img009_diffuse. The difference in view angle with respect to img001_diffuse is larger for img009_diffuse than for img002_diffuse. Explain why your performance on this pair of images changes.

Skills: C Programming, C++ Programming, Python

(13 reviews) Kathmandu, Nepal

Project ID: #6790779

## Awarded to:

rdk2405

We are a team that has dedicated a combined 10 years to Image Processing and Feature Extraction. We would like to discuss this potential collaboration with you. Regards, HD Services

\$25 USD in 30 days
(0 reviews)
0.0
