LoRaWAN computer vision with Edge Impulse & Portenta H7

This is an example application that runs a computer vision model on the Portenta H7 and streams the results over LoRaWAN. The application uses the camera on the Portenta Vision Shield, in combination with a machine learning model trained in Edge Impulse, to determine when an interesting event happens, then sends the result back to the network over the LoRa radio on the Vision Shield. This demo was built for The Things Conference 2021.

(Example images: elephant vs. not elephant)

Note: This example was built using a pre-release version of the Portenta H7 libraries and a preview version of Edge Impulse for the Portenta H7. There are known issues with camera exposure, and you may run into other bugs.

Requirements

You'll need the following hardware:

  • Arduino Portenta H7
  • Arduino Portenta Vision Shield (LoRa variant)

How to build

  1. Set your application EUI and application key in src/ei_main.cpp (see the sketch after this list).

  2. If you want to use a different channel plan (the default is EU868), set it in src/ei_main.cpp as well.

  3. Install the Arduino CLI.

  4. Build this application via:

    $ sh arduino-build.sh --build
    
  5. Flash this application via:

    $ sh arduino-build.sh --flash
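
For reference, here is a minimal sketch of what the application credentials and channel plan could look like in src/ei_main.cpp. The identifier names and values below are placeholders rather than the project's actual definitions, so match them against what the file already contains:

    // Placeholder OTAA credentials: copy the real values from your LoRaWAN
    // network console (e.g. The Things Stack). Identifier names are illustrative.
    static uint8_t APP_EUI[8]  = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };
    static uint8_t APP_KEY[16] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
                                   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };

    // Channel plan: EU868 by default; switch to e.g. US915 for North America.
    #define LORAWAN_CHANNEL_PLAN    EU868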
    

Training a new model

The elephant model used in the demo is here: Elephant tracker.

  1. Load the Edge Impulse firmware for the Portenta H7: instructions.

  2. Build a new model from scratch in Edge Impulse with the following settings:

    • Image width / height: 64x64 pixels.
    • Image color depth: Grayscale.
    • Transfer learning base model: MobileNetV2 0.1.
  3. To avoid sending a message every time the classification of a single frame changes, the output of the algorithm is smoothed. These parameters can be found in ei_run_impulse.cpp (search for the ei_classifier_smooth_init function); see the sketch after this list.

  4. Then, remove the src/edge-impulse-sdk, src/model-parameters and src/tflite-model folders.

  5. In your Edge Impulse project, go to the Deployment page and export as a C++ Library.

  6. Add the files in the export to the src directory and recompile the application.
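
For orientation, this is roughly how that smoothing is wired up. The parameter values below are illustrative and the exact call signature may differ between Edge Impulse SDK versions, so treat the code in ei_run_impulse.cpp as the source of truth:

    // Illustrative smoothing configuration; the real values live in ei_run_impulse.cpp.
    // Keep the last 10 classifications and only report a label once at least 7 of
    // them agree with a confidence of 0.8 or higher.
    static ei_classifier_smooth_t smooth;
    ei_classifier_smooth_init(&smooth, 10 /* readings */, 7 /* min. same */,
                              0.8f /* min. confidence */, 0.3f /* anomaly threshold */);

    // After each classification, feed the result into the smoother and use the
    // returned label instead of the raw per-frame prediction.
    const char *label = ei_classifier_smooth_update(&smooth, &result);

    // Release the smoother's state when inferencing stops.
    ei_classifier_smooth_free(&smooth);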

Debugging camera output

Issues with the camera? Install the Edge Impulse CLI and run edge-impulse-daemon. This connects the development board to Edge Impulse, where you can see a live feed from the camera.

Alternatively, connect a serial monitor to the development board, press b to stop inferencing, then run AT+RUNIMPULSEDEBUG. This will print out the framebuffer after capturing and resizing the image. Write the framebuffer to framebuffer.txt, and then run:

$ edge-impulse-framebuffer2jpg -f framebuffer.txt -w 64 -h 64 -o framebuffer.jpg
