
Feature request: contour filter heuristics pattern to be refactored #6

Open
infinnovation-dev opened this issue Sep 18, 2016 · 1 comment

@infinnovation-dev (Owner)

The SquareFinderV3 class (for example) is a rough prototype that chains together some OpenCV routines to perform edge detection and then find linked edges that form quadrilaterals (the term "square" is not strictly accurate). The .squares member of the object is a list of such contours.

Since we have additional real-world knowledge about an image and the monitors represented within it, we can give hints to improve rejection of false positives. Not all hints will work perfectly, and in many cases the heuristics implemented will need some parameterisation. For example, we could filter based on the expected distribution of monitor sizes by area; the expected area range then becomes an input parameter to a FilterByArea function (a sketch follows below).
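
As a rough illustration of what such a parameterised heuristic could look like (filter_by_area and its thresholds are hypothetical and do not exist in piwall.py today):

import cv2

def filter_by_area(contours, min_area, max_area):
    # Keep only contours whose area falls inside the expected monitor-size range.
    # min_area/max_area are tuning parameters derived from real-world knowledge
    # of the wall being photographed (illustrative values only).
    return [c for c in contours if min_area <= cv2.contourArea(c) <= max_area]

# e.g. squares = filter_by_area(squares, min_area=500, max_area=50000)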

To build knowledge of which heuristics work best, and to tune and optimise their parameters, we need a code pattern that makes it easy to rerun the detection algorithm with different combinations of filters, each of which may in turn be rerun with different parameters. If the expected outcome is well known, we can build a scripting/GUI environment to explore this solution space. Potentially, we could even sweep across ranges of parameters to tune automatically. For example: given testImageA.png, we might define success as identifying 9 monitors (3x3) in the picture, where the centre of each detected monitor lies at an equal distance from its expected neighbours. This specification, plus a range of filter functions and parameters, could drive some kind of automated tuning, as sketched below.
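
A minimal sketch of that kind of automated sweep, assuming a hypothetical score_detection function that compares detected monitor centres against the expected 3x3 layout (neither sweep nor score_detection exists in piwall.py; find_squares is the routine already used there):

import itertools

def sweep(img, filter_fn, param_grid, score_detection, expected_centres):
    # Try every combination of filter parameters, score each result against
    # the expected layout, and report the best-scoring parameter set.
    best_params, best_score = None, float('-inf')
    names = sorted(param_grid)
    for values in itertools.product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        squares = filter_fn(find_squares(img, 0.2), **params)
        score = score_detection(squares, expected_centres)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score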

A prerequisite is a way to express this range of signal-processing options and to build up alternative processing chains; one possible shape for such a chain is sketched after this paragraph.
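
Purely as a sketch of a composable chain (the Chain class and the step keyword arguments are hypothetical, not existing piwall.py API):

class Chain:
    # Compose detection/filter steps so that alternative chains can be built,
    # rerun and compared with different parameters.
    def __init__(self):
        self.steps = []                      # list of (callable, params) pairs

    def add(self, step, **params):
        self.steps.append((step, params))
        return self                          # allow fluent chaining

    def run(self, data):
        for step, params in self.steps:
            data = step(data, **params)
        return data

# e.g. (hypothetical step functions and parameters)
# squares = Chain().add(find_squares, thrs=0.2).add(filter_by_area, min_area=500, max_area=50000).run(img)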

The extended vision is best explained by a YouTube video (https://www.youtube.com/watch?v=2JBkg06qLNg) showing the TouchDesigner VideoDJ composition application, which has a very nice UI for doing this same kind of thing: building a signal-processing chain, allowing variation of effect parameters at each step, and showing a live update of the outcome of the changes.

@banzsolt banzsolt added this to the piwall-cvtool 0.1 milestone Sep 18, 2016
@banzsolt banzsolt self-assigned this Sep 18, 2016
@banzsolt (Collaborator) commented Sep 19, 2016

Can't start piwall.py

/usr/bin/python piwall.py -v rotating

Traceback (most recent call last):
  File "piwall.py", line 873, in <module>
    main()
  File "piwall.py", line 842, in main
    vssProto()
  File "piwall.py", line 712, in vssProto
    vss = VideoSquareSearch('./data/rotating_wall.mp4', False, 'piwall-search-mono.avi')
  File "piwall.py", line 617, in __init__
    squares = find_squares(img, 0.2)
  File "piwall.py", line 110, in find_squares
    bin, contours, hierarchy = cv2.findContours(bin, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
ValueError: need more than 2 values to unpack

Solved: make sure you are using OpenCV 3.0+ (in OpenCV 2.x, cv2.findContours returns only two values, so unpacking three fails with this error).

Try:
python piwall.py -v rotating
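
If upgrading is not convenient, a version-agnostic wrapper along these lines could also be used (a sketch, not part of piwall.py; the return signature of cv2.findContours differs between OpenCV 2.x/4.x and 3.x):

import cv2

def find_contours_compat(binary):
    # OpenCV 2.x and 4.x return (contours, hierarchy); 3.x returns
    # (image, contours, hierarchy). Normalise to (contours, hierarchy).
    result = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    return (result[1], result[2]) if len(result) == 3 else (result[0], result[1])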
