Change the way you interact with your browser.
Import the core library:

```html
<script src="tracking.js"></script>
```

Import the color module:

```html
<script src="tracker/color.js"></script>
```
Get the user's camera:

```js
var videoCamera = new tracking.VideoCamera().render();
```

Track the color magenta and log the X, Y, and Z positions of the detected area to the console:

```js
videoCamera.track({
  type: 'color',
  color: 'magenta',
  onFound: function(track) {
    console.log(track.x, track.y, track.z);
  },
  onNotFound: function() {}
});
```
Check the full code of this Hello World example.
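Putting the snippets above together, a minimal standalone page might look like this sketch (the script paths assume `tracking.js` and `tracker/color.js` sit next to the HTML file):

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Core library and color tracking module -->
  <script src="tracking.js"></script>
  <script src="tracker/color.js"></script>
</head>
<body>
  <script>
    // Render the user's camera and track magenta areas,
    // logging the detected positions to the console.
    var videoCamera = new tracking.VideoCamera().render();

    videoCamera.track({
      type: 'color',
      color: 'magenta',
      onFound: function(track) {
        console.log(track.x, track.y, track.z);
      },
      onNotFound: function() {}
    });
  </script>
</body>
</html>
```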
Note: if you want to run the tracking.js examples locally, you'll need a local server, since `file:///` doesn't work with `getUserMedia()` in some browsers.
- Install Grunt and its dependencies: `npm install .`
- Run a local server: `grunt server`
- Go to `http://localhost:9001` and have fun :)
- `tracking.js`: the library's core;
- `color.js`: module for color tracking;
- `human.js`: module for human tracking.
There are some handy classes and chainable methods that you can use to achieve your goal, for example (see the combined sketch after this list):
- `new tracking.VideoCamera()`: requests the user's camera using WebRTC's `getUserMedia()`.
- `new tracking.VideoCamera().render()`: renders the user's camera using a `<video>` element in the DOM.
- `new tracking.VideoCamera().hide()`: hides the `<video>` element rendered in the DOM by `tracking.VideoCamera()`. In order to add information to the scene, a `<canvas>` element can be displayed instead of the `<video>`.
- `new tracking.VideoCamera().renderVideoCanvas()`: renders the user's camera using a `<canvas>` element.
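For instance, a minimal sketch combining these chainable calls (this assumes the methods chain as described above; the variable names are illustrative):

```js
// Render the camera as a <video> element, then hide it;
// a <canvas> could be displayed instead to annotate the scene.
var hiddenCamera = new tracking.VideoCamera().render().hide();

// Or render the camera directly into a <canvas> element.
var canvasCamera = new tracking.VideoCamera().renderVideoCanvas();
```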
When initializing tracking with `tracking.VideoCamera().track()`, you can optionally specify some parameters, for instance (a combined example follows this list):
- `type` {string}: can be `color` or `human`.

  ```js
  new tracking.VideoCamera().track({
    type: 'color'
  });
  ```
- `color` {string}: can be `cyan`, `magenta`, or `yellow` (default is `magenta`).

  ```js
  new tracking.VideoCamera().track({
    type: 'color',
    color: 'yellow'
  });
  ```
- `data` {string}: can be `eye`, `frontal_face`, `mouth`, or `upper_body` (default is `frontal_face`).

  ```js
  new tracking.VideoCamera().track({
    type: 'human',
    data: 'eye'
  });
  ```
- `onFound`: fired each time the tracker finds something.

  ```js
  new tracking.VideoCamera().track({
    onFound: function(track) {
      // do something
    }
  });
  ```
- `onNotFound`: fired each time the tracker doesn't find anything.

  ```js
  new tracking.VideoCamera().track({
    onNotFound: function(track) {
      // do something
    }
  });
  ```
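Combining these parameters, a human-tracking call might look like the following sketch (the exact shape of the `track` payload for human tracking is not documented above, so it is logged whole rather than assuming its fields):

```js
// Track a frontal face in the camera stream and report each frame.
new tracking.VideoCamera().render().track({
  type: 'human',
  data: 'frontal_face',
  onFound: function(track) {
    // Log the whole payload rather than assuming its fields.
    console.log('found:', track);
  },
  onNotFound: function() {
    console.log('nothing found');
  }
});
```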
It brings to the web techniques for tracking elements of a real scene captured by the camera, enabling natural interactions through object tracking, color markers, and more, and allowing interfaces and games to be developed with a simple and intuitive API.
The concept of Natural Interaction proposes interfaces that understand the intentions of the user, so that users can convey those intentions intuitively, interacting with computer systems the same way they interact day-to-day with people and objects. It is in this direction that the fields of Human-Computer Interaction (HCI) and Virtual and Augmented Reality (VR/AR) are moving, both progressing rapidly toward native environments.
Created by Eduardo Lundgren, one of the creators of AlloyUI, who is involved in many open-source projects and is a former contributor to jQuery and jQuery UI. He graduated in Telecommunications Engineering from UPE and is pursuing a Master's in CS at UFPE. He is currently a development lead at Liferay.
- Eduardo Lundgren
- Thiago Pereira
- Zeno Rocha