The SquareFinderV3 class, for example, is a rough prototype that chains some OpenCV routines to perform edge detection and then find linked edges that form quadrilaterals (the term "square" is not strictly accurate). The object's .squares member is a list of such contours.
Since we have additional real-world knowledge about an image and the monitors represented within it, we can supply hints to improve rejection of false positives. Not every hint will work perfectly, and in many cases the heuristics implemented will need some parameterisation. For example, we could filter on the expected distribution of monitor sizes by area, with the area range passed as an input parameter to a FilterByArea function.
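A hint function of this kind might look like the following sketch. FilterByArea is only named, not defined, in the issue, so the names and signature here are illustrative; the area computation uses the shoelace formula, which stands in for cv2.contourArea so the example runs without OpenCV installed.

```python
def contour_area(pts):
    """Area of a closed polygon given as [(x, y), ...] points
    (shoelace formula; cv2.contourArea computes the same thing)."""
    n = len(pts)
    s = 0.0
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

def filter_by_area(squares, min_area, max_area):
    """Hypothetical hint: keep only contours whose area falls inside
    the expected range of monitor sizes."""
    return [sq for sq in squares
            if min_area <= contour_area(sq) <= max_area]
```

The expected area range is deliberately a parameter rather than a constant, so the same filter can be rerun with different bounds while tuning.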
Building knowledge of which heuristics work best, and tuning and optimising their parameters, needs a code pattern that easily permits rerunning the detection algorithm with different combinations of filters, each of which may be rerun with different parameters. If the expected outcome is well known, we can build a scripting/GUI environment to explore this solution space. Potentially, we could even sweep across ranges of parameters to tune automatically. For example, given testImageA.png we might define success as identifying 9 monitors (3x3) in the picture, where the centre of each detected monitor is at the expected distance from its partner. This spec, plus a range of filter functions and parameters, could be used to drive some kind of automatic tuning.
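The automatic-tuning idea can be sketched as a brute-force sweep: run the detector over every parameter combination and score each run against the success spec (e.g. "9 monitors found"). All names here are illustrative, not piwall.py API.

```python
import itertools

def tune(detect, param_grid, score):
    """Run `detect` over every combination in `param_grid` (a dict of
    parameter name -> list of candidate values) and return the
    best-scoring parameter set."""
    names = list(param_grid)
    best = None
    for values in itertools.product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        s = score(detect(**params))
        if best is None or s > best[0]:
            best = (s, params)
    return best[1]

# A score for the 3x3 wall spec could reward getting close to 9 detections:
score_3x3 = lambda squares: -abs(9 - len(squares))
```

Grid search scales badly with the number of parameters, but for a handful of filter thresholds it is the simplest way to explore the solution space described above.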
A prerequisite is a way to express this range of signal-processing options and to build up alternative processing chains.
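One minimal way to express such chains is a small builder object where each step is a (function, parameters) pair; variant chains can then share a common prefix and be rerun with different parameters. This is a sketch of the pattern, not existing piwall.py code.

```python
class Chain:
    """A composable processing chain: each step is a (function, params)
    pair, applied in order to the data passed to run()."""

    def __init__(self, steps=None):
        self.steps = list(steps or [])

    def then(self, func, **params):
        # Return a NEW chain, so alternative chains can branch off a
        # shared prefix without mutating it.
        return Chain(self.steps + [(func, params)])

    def run(self, data):
        for func, params in self.steps:
            data = func(data, **params)
        return data

# Usage sketch with stand-in steps (real steps would be OpenCV calls):
base = Chain().then(lambda x, k: x + k, k=1)
variant = base.then(lambda x, f: x * f, f=2)
```

Because then() is non-destructive, exploring "same chain, different parameters" is just a matter of building several variants from one base.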
The extended vision is best explained by a YouTube video (https://www.youtube.com/watch?v=2JBkg06qLNg) showing the TouchDesigner VideoDJ composition application, which has a very nice UI for exactly this kind of thing: build a signal-processing chain, vary the effect parameters at each step, and show a live update of the outcome of the changes.
Traceback (most recent call last):
  File "piwall.py", line 873, in <module>
    main()
  File "piwall.py", line 842, in main
    vssProto()
  File "piwall.py", line 712, in vssProto
    vss = VideoSquareSearch('./data/rotating_wall.mp4', False, 'piwall-search-mono.avi')
  File "piwall.py", line 617, in __init__
    squares = find_squares(img, 0.2)
  File "piwall.py", line 110, in find_squares
    bin, contours, hierarchy = cv2.findContours(bin, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
ValueError: need more than 2 values to unpack
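This ValueError is an OpenCV version mismatch: cv2.findContours returns (contours, hierarchy) in OpenCV 2.x and 4.x, but (image, contours, hierarchy) in 3.x, so the three-way unpack at piwall.py line 110 fails when only two values come back. A version-agnostic sketch of a fix relies on the wanted values always being the last two elements:

```python
def normalise_contours(result):
    """Accept the tuple returned by cv2.findContours and return
    (contours, hierarchy) regardless of OpenCV version: the two
    values we want are always the last two elements."""
    return result[-2], result[-1]

# The failing call site would then become, e.g.:
# contours, hierarchy = normalise_contours(
#     cv2.findContours(bin, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE))
```

The commented usage is a suggestion for the line-110 call site, not a quote from piwall.py.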