Sound source localization in reconfigurable wireless acoustic sensor networks
Imagine a wireless sensor network of microphones dispersed throughout an environment. Using this wireless acoustic sensor network (WASN), we would like to determine the positions of sounds within that environment. More specifically, given an input sound, the system gathers auditory information from the WASN, combines it with the known positions of the network's nodes, and produces an (x, y) position for the sound along with a corresponding confidence metric. Example applications:
- Tracking a bird in a jungle based on the bird's unique call
- Determining the position of an enemy tank using the unique sound made by the engine
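Locaudio's actual localization algorithm is not described in this document; as a rough, hypothetical sketch of the idea, per-node sound pressure level readings can be fit against an inverse-square propagation model (6 dB loss per doubling of distance) with a simple grid search over candidate source positions:

```python
import math

# Illustrative only, not Locaudio's actual algorithm. Each node reports
# its position and a measured sound pressure level; louder readings
# imply a closer source.

def predicted_spl(src, node, ref_spl=80.0, ref_dist=1.0):
    # SPL predicted at `node` for a source at `src`, assuming the
    # source emits `ref_spl` dB at `ref_dist` meters (assumed values).
    d = max(math.hypot(src[0] - node[0], src[1] - node[1]), 1e-3)
    return ref_spl - 20.0 * math.log10(d / ref_dist)

def localize(readings, grid=50, size=10.0):
    # readings: list of ((x, y), spl) pairs, one per node.
    # Grid-search the (x, y) whose predicted SPLs best match them.
    best, best_err = (0.0, 0.0), float("inf")
    for i in range(grid + 1):
        for j in range(grid + 1):
            p = (size * i / grid, size * j / grid)
            err = sum((predicted_spl(p, n) - spl) ** 2
                      for n, spl in readings)
            if err < best_err:
                best, best_err = p, err
    return best

# Simulate a source at (3, 7) heard by four corner nodes.
true_src = (3.0, 7.0)
nodes = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
readings = [(n, predicted_spl(true_src, n)) for n in nodes]
print(localize(readings))  # → (3.0, 7.0)
```

The residual error at the best grid point could also serve as the basis for a confidence metric like the one the API returns.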
To make use of this API, the Locaudio server and the RethinkDB database must be running.
POST /notify

Request body:

{
    x: <Float: X position>,
    y: <Float: Y position>,
    spl: <Float: Sound pressure level>,
    timestamp: <Float: Unix time in seconds>,
    fingerprint: [<Int: Audio fingerprint>]
}

Response:

{
    error: <Integer: Error code>,
    message: <String: Error message>,
    name: <String: Sound name>
}
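A detection event matching the request schema above might be built like this (field values are made-up examples; the POST itself assumes the server is on localhost:8000, as in the run instructions below):

```python
import json

# Hypothetical detection event from one sensor node; the field names
# follow the /notify request schema, the values are examples only.
payload = {
    "x": 12.5,
    "y": 3.0,
    "spl": 74.2,                 # sound pressure level in dB
    "timestamp": 1400000000.0,   # Unix time in seconds
    "fingerprint": [483, 1291, 2210, 977],  # integer audio fingerprint
}

body = json.dumps(payload)
print(body)
```

The serialized body would then be sent to the server, e.g. with `requests.post("http://localhost:8000/notify", json=payload)`.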
GET /locations/:sound_name

sound_name: The name of the sound

Response:

[
    {
        position: {
            x: <Float: X position of sound>,
            y: <Float: Y position of sound>
        },
        confidence: <Float (0 <= F <= 1)>
    }
]
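Since the endpoint can return several candidate locations, a client will typically want the most confident one. A small hypothetical helper for the response shape above:

```python
# Hypothetical client-side helper: given the JSON array returned by
# GET /locations/:sound_name, pick the highest-confidence estimate.
def best_location(locations):
    if not locations:
        return None
    return max(locations, key=lambda loc: loc["confidence"])

# Example response data (made-up values).
sample = [
    {"position": {"x": 1.0, "y": 2.0}, "confidence": 0.4},
    {"position": {"x": 3.5, "y": 0.5}, "confidence": 0.9},
]
print(best_location(sample))  # → the 0.9-confidence entry
```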
GET /viewer/:sound_name

sound_name: The name of the sound
GET /names

Response:

[
    <String: Name of sound>
]
[
    {
        name: <String: Sound name>,
        distance: <Float: Reference distance>,
        spl: <Float: Reference sound pressure level>,
        fingerprint: [<Int: Fingerprint of the sound>]
    }
]
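The reference entries above pair each sound name with a stored fingerprint. How Locaudio actually matches an incoming fingerprint against these references is not specified here; as a toy illustration of the idea, one could score similarity by counting positions where two integer fingerprints agree:

```python
# Illustrative only: Locaudio's real matching method is not described
# in this document. This toy score counts positions where two integer
# fingerprints agree, normalized to [0, 1].
def fingerprint_similarity(a, b):
    n = min(len(a), len(b))
    if n == 0:
        return 0.0
    matches = sum(1 for i in range(n) if a[i] == b[i])
    return matches / n

print(fingerprint_similarity([1, 2, 3, 4], [1, 2, 9, 4]))  # → 0.75
```

A detected sound would then be labeled with the name of the reference entry scoring highest against its fingerprint.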
Install dependencies:

make depend

Build the documentation:

make documentation
NOTE: All commands should be spawned from different terminal sessions
- Run RethinkDB

        cd database
        rethinkdb

- Run the server

        python run.py localhost 8000

- (Optional) View the main page: open a browser and go to localhost:8000
NOTE: All commands should be spawned from different terminal sessions
- Start the Locaudio server
- Run the tests

        make run_tests