Merge branch 'Seeed-Studio:docusaurus-version' into docusaurus-version
LJ-Hao authored May 16, 2024
2 parents f64aea2 + 3526319 commit f549a78
Showing 10 changed files with 253 additions and 37 deletions.
@@ -381,6 +381,51 @@ You will see the following output if the flashing process is successful
<div align="center"><img width ="700" src="https://files.seeedstudio.com/wiki/reComputer-J4012/4.png"/></div>
</TabItem>
<TabItem value="JP6.0" label="JP6.0">
Here we will use NVIDIA L4T 36.3 to install JetPack 6.0 on the reComputer.
**Step 1:** [Download](https://developer.nvidia.com/embedded/jetson-linux-r363) the NVIDIA drivers on the host PC. The required drivers are shown below:
<div align="center"><img width ="700" src="https://files.seeedstudio.com/wiki/Jetson-AGX-Orin-32GB-H01-Kit/P1.png"/></div>
**Step 2:** Navigate to the folder containing **Jetson_Linux_R36.3.0_aarch64.tbz2** and **Tegra_Linux_Sample-Root-Filesystem_R36.3.0_aarch64.tbz2**, then extract the archives, apply the binaries, and install the necessary flashing prerequisites
```sh
tar xf Jetson_Linux_R36.3.0_aarch64.tbz2
sudo tar xpf Tegra_Linux_Sample-Root-Filesystem_R36.3.0_aarch64.tbz2 -C Linux_for_Tegra/rootfs
cd Linux_for_Tegra/
sudo ./apply_binaries.sh
sudo ./tools/l4t_flash_prerequisites.sh
```
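Before extracting, you can optionally sanity-check that both archives are present in the current working directory. This is only a sketch: the `check_file` helper name is our own, and the filenames are the ones from this step, so adjust them for other L4T releases.

```sh
# Optional pre-flight check: report whether each L4T archive is present.
# Filenames are taken from the step above; adjust for other releases.
check_file() {
  if [ -f "$1" ]; then
    echo "found: $1"
  else
    echo "missing: $1"
  fi
}

check_file Jetson_Linux_R36.3.0_aarch64.tbz2
check_file Tegra_Linux_Sample-Root-Filesystem_R36.3.0_aarch64.tbz2
```

If either archive is reported missing, re-download it before running the extraction commands above.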
**Step 3:** Navigate to the **"Linux_for_Tegra"** directory and run the command below to configure your username, password, and hostname, so that you do not need to go through the Ubuntu setup wizard after the device finishes booting
```sh
cd Linux_for_Tegra
sudo tools/l4t_create_default_user.sh -u {USERNAME} -p {PASSWORD} -a -n {HOSTNAME} --accept-license
```
For example (username: "nvidia", password: "nvidia", hostname: "nvidia-desktop"):
```sh
sudo tools/l4t_create_default_user.sh -u nvidia -p nvidia -a -n nvidia-desktop --accept-license
```
**Step 4:** Flash the system to the NVMe SSD
```sh
sudo ./tools/kernel_flash/l4t_initrd_flash.sh --external-device nvme0n1p1 \
-c tools/kernel_flash/flash_l4t_t234_nvme.xml -p "-c bootloader/generic/cfg/flash_t234_qspi.xml" \
--showlogs --network usb0 jetson-orin-nano-devkit internal
```
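If the flashing script fails to detect the board, first confirm that the host PC actually sees the module in force recovery mode. The sketch below is an optional check of `lsusb` output; 0955 is NVIDIA's USB vendor ID, and the `in_recovery` helper name is our own.

```sh
# Optional check: does lsusb output contain an NVIDIA recovery-mode device?
# Vendor ID 0955 is NVIDIA; the product ID varies by Jetson module.
in_recovery() {
  echo "$1" | grep -q "ID 0955:" && echo "recovery device detected" \
                                 || echo "no recovery device found"
}

in_recovery "$(lsusb 2>/dev/null || true)"
```

If no recovery device is found, re-check the recovery-mode jumper/button procedure and the USB cable, then retry the flash.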
You will see the following output if the flashing process is successful
<div align="center"><img width ="700" src="https://files.seeedstudio.com/wiki/reComputer-J4012/4.png"/></div>
</TabItem>
</Tabs>
@@ -149,25 +149,55 @@ last_update:
<div class="table-center">
<table class="table-nobg">
<tr class="table-trnobg">
<th class="table-trnobg"><font size={"4"}>Speech Subtitle Generation on Nvidia Jetson</font></th>
<th class="table-trnobg"><font size={"4"}>Deploy Whisper on NVIDIA Jetson Orin for Real time Speech to Text</font></th>
<th class="table-trnobg"><font size={"4"}>How to Run a Local LLM Text-to-Image on reComputer</font></th>
</tr>
<tr class="table-trnobg"></tr>
<tr class="table-trnobg">
<td class="table-trnobg"><div style={{textAlign:'center'}}><img src="https://files.seeedstudio.com/wiki/reComputer-Jetson/A608/recoder.gif" style={{width:300, height:'auto'}}/></div></td>
<td class="table-trnobg"><div style={{textAlign:'center'}}><img src="https://files.seeedstudio.com/wiki/reComputer-Jetson/A608/Real-Time-Whisper.gif" style={{width:300, height:'auto'}}/></div></td>
<td class="table-trnobg"><div style={{textAlign:'center'}}><img src="https://files.seeedstudio.com/wiki/wiki-ranger/Contributions/Nvidia_Jetson_recomputer_LLM_texto-to-image/23_creating_image1.gif" style={{width:300, height:'auto'}}/></div></td>
</tr>
<tr class="table-trnobg"></tr>
<tr class="table-trnobg">
<td className="table-trnobg" style={{ textAlign: 'justify' }}><font size={"2"}>In this wiki, we introduce Speech Subtitle Generation on Jetson, which offers real-time speech-to-subtitle services while avoiding information leakage over the internet.</font></td>
<td className="table-trnobg" style={{ textAlign: 'justify' }}><font size={"2"}>In this wiki, we introduce Real-Time Whisper on Jetson. This integration enables speech processing directly on the device, eliminating the need for constant network connectivity and enhancing privacy and security.</font></td>
<td className="table-trnobg" style={{ textAlign: 'justify' }}><font size={"2"}>This wiki covers setting up and deploying local LLM-based text-to-image generation models on the Nvidia Jetson Orin NX 16GB.</font></td>
</tr>
<tr class="table-trnobg"></tr>
<tr class="table-trnobg">
<td class="table-trnobg"><div class="get_one_now_container" style={{textAlign: 'center'}}><a class="get_one_now_item" href="https://wiki.seeedstudio.com/Real%20Time%20Subtitle%20Recoder%20on%20Nvidia%20Jetson/"><strong><span><font color={'FFFFFF'} size={"4"}>📚 Learn More</font></span></strong></a></div></td>
<td class="table-trnobg"><div class="get_one_now_container" style={{textAlign: 'center'}}><a class="get_one_now_item" href="https://wiki.seeedstudio.com/Edge/NVIDIA_Jetson/Application/Generative_AI/Whisper_on_Jetson_for_Real_Time_Speech_to_Text/"><strong><span><font color={'FFFFFF'} size={"4"}>📚 Learn More</font></span></strong></a></div></td>
<td class="table-trnobg"><div class="get_one_now_container" style={{textAlign: 'center'}}><a class="get_one_now_item" href="https://wiki.seeedstudio.com/How_to_run_local_llm_text_to_image_on_reComputer/"><strong><span><font color={'FFFFFF'} size={"4"}>📚 Learn More</font></span></strong></a></div></td>
</tr>
</table>
</div>

<br />

<div class="table-center">
<table class="table-nobg">
<tr class="table-trnobg">
<th class="table-trnobg"><font size={"4"}>Quantized Llama2-7B with MLC LLM on Jetson</font></th>
<th class="table-trnobg"><font size={"4"}>Knife Detection: An Object Detection Model Deployed on Triton Inference Sever Based on reComputer</font></th>
<th class="table-trnobg"><font size={"4"}>Deploy Detection Model on Jetson by No Code Edge AI Tool</font></th>
</tr>
<tr class="table-trnobg"></tr>
<tr class="table-trnobg">
<td class="table-trnobg"><div style={{textAlign:'center'}}><img src="https://files.seeedstudio.com/wiki/reComputer-Jetson/A608/MLC_LLM.gif" style={{width:300, height:'auto'}}/></div></td>
<td class="table-trnobg"><div style={{textAlign:'center'}}><img src="https://files.seeedstudio.com/wiki/SecurityCheck/Security_Scan22.jpg" style={{width:300, height:'auto'}}/></div></td>
<td class="table-trnobg"><div style={{textAlign:'center'}}><img src="https://files.seeedstudio.com/wiki/node-red/36.png" style={{width:300, height:'auto'}}/></div></td>
</tr>
<tr class="table-trnobg"></tr>
<tr class="table-trnobg">
<td className="table-trnobg" style={{ textAlign: 'justify' }}><font size={"2"}>In this project, we introduce a quantized version of Llama2-7B, a large language model trained on 1.5TB of data, and deploy it on the Jetson Orin.</font></td>
<td className="table-trnobg" style={{ textAlign: 'justify' }}><font size={"2"}>In this fundamental project, we deploy a deep learning model on the reComputer J1010 to detect prohibited items.</font></td>
<td className="table-trnobg" style={{ textAlign: 'justify' }}><font size={"2"}> In this wiki, we'll go over how to download and install what we need under a fresh NVIDIA Jetson system, then open the Edge AI Tool and perform object detection with a live camera.</font></td>
</tr>
<tr class="table-trnobg"></tr>
<tr class="table-trnobg">
<td class="table-trnobg"><div class="get_one_now_container" style={{textAlign: 'center'}}><a class="get_one_now_item" href="https://wiki.seeedstudio.com/Quantized_Llama2_7B_with_MLC_LLM_on_Jetson/"><strong><span><font color={'FFFFFF'} size={"4"}>📚 Learn More</font></span></strong></a></div></td>
<td class="table-trnobg"><div class="get_one_now_container" style={{textAlign: 'center'}}><a class="get_one_now_item" href="https://wiki.seeedstudio.com/Security_Scan/"><strong><span><font color={'FFFFFF'} size={"4"}>📚 Learn More</font></span></strong></a></div></td>
<td class="table-trnobg"><div class="get_one_now_container" style={{textAlign: 'center'}}><a class="get_one_now_item" href="https://wiki.seeedstudio.com/No-code-Edge-AI-Tool/"><strong><span><font color={'FFFFFF'} size={"4"}>📚 Learn More</font></span></strong></a></div></td>
</tr>
@@ -211,25 +241,30 @@ last_update:
<tr class="table-trnobg">
<th class="table-trnobg"><font size={"4"}>Update Jetson Linux Over-the-Air Using Allxon</font></th>
<th class="table-trnobg"><font size={"4"}>How to Train and Deploy YOLOv8 on reComputer</font></th>
<th class="table-trnobg"><font size={"4"}>Maskcam - Crowd Face Mask Usage Monitoring based on Jetson Nano</font></th>
</tr>
<tr class="table-trnobg"></tr>
<tr class="table-trnobg">
<td class="table-trnobg"><div style={{textAlign:'center'}}><img src="https://files.seeedstudio.com/wiki/Allxon/JetPack-OTA/thumb.png" style={{width:300, height:'auto'}}/></div></td>
<td class="table-trnobg"><div style={{textAlign:'center'}}><img src="https://files.seeedstudio.com/wiki/reComputer/Application/reComputer_is_all_you_need/inference_engine.png" style={{width:300, height:'auto'}}/></div></td>
<td class="table-trnobg"><div style={{textAlign:'center'}}><img src="https://files.seeedstudio.com/wiki/Jetson-Nano-MaskCam/tu3.png" style={{width:300, height:'auto'}}/></div></td>
</tr>
<tr class="table-trnobg"></tr>
<tr class="table-trnobg">
<td className="table-trnobg" style={{ textAlign: 'justify' }}><font size={"2"}>Allxon can help you to upload the OTA Payload Package and make sure it can work. You will come across this step later in this wiki.</font></td>
<td className="table-trnobg" style={{ textAlign: 'justify' }}><font size={"2"}>In this wiki, we train and deploy an object detection model for traffic scenes on the reComputer J4012.</font></td>
<td className="table-trnobg" style={{ textAlign: 'justify' }}><font size={"2"}> In this wiki, we have implemented a mask detection feature using Jetson.</font></td>
</tr>
<tr class="table-trnobg"></tr>
<tr class="table-trnobg">
<td class="table-trnobg"><div class="get_one_now_container" style={{textAlign: 'center'}}><a class="get_one_now_item" href="https://wiki.seeedstudio.com/Update-Jetson-Linux-OTA-Using-Allxon/"><strong><span><font color={'FFFFFF'} size={"4"}>📚 Learn More</font></span></strong></a></div></td>
<td class="table-trnobg"><div class="get_one_now_container" style={{textAlign: 'center'}}><a class="get_one_now_item" href="https://wiki.seeedstudio.com/How_to_Train_and_Deploy_YOLOv8_on_reComputer/"><strong><span><font color={'FFFFFF'} size={"4"}>📚 Learn More</font></span></strong></a></div></td>
<td class="table-trnobg"><div class="get_one_now_container" style={{textAlign: 'center'}}><a class="get_one_now_item" href="https://wiki.seeedstudio.com/Jetson-Nano-MaskCam/"><strong><span><font color={'FFFFFF'} size={"4"}>📚 Learn More</font></span></strong></a></div></td>
</tr>
</table>
</div>


## FAQ
- [Troubleshooting Installation](https://wiki.seeedstudio.com/Troubleshooting_Installation/)
- [The remaining space in the eMMC in the received reComputer is only about 2GB, how to solve the problem of insufficient space?](https://wiki.seeedstudio.com/solution_of_insufficient_space/)
@@ -11,6 +11,18 @@ last_update:
author: Jessie
---


## PoE Connection

SenseCAP M2 supports PoE (Power over Ethernet) and is compatible with the IEEE 802.3af standard.

:::tip
You will need to have an extra PoE switch that provides 40V-57V DC power as a PSE (Power Sourcing Equipment) if your modem/router does not support PoE.
:::


<p style={{textAlign: 'center'}}><img src="https://www.sensecapmx.com/wp-content/uploads/2022/07/m2-poe.png" alt="pir" width={800} height="auto" /></p>

## Gateway Network Configuration

Connect the antenna and power adaptor to the gateway.
@@ -21,13 +33,13 @@ The power LED will show in red, and in about 15s, the indicator on the top will
There are two ways to connect to the Internet. Choose the one that works for you.


### Ethernet Connection

Connect the Ethernet cable to the Ethernet port, and the indicator on the top will show solid green if the gateway is successfully connected to the internet.



### WIFI Connection

There are two ways for users to log in to the Luci configuration page.

@@ -120,7 +132,7 @@ Then click Save and Apply to apply your settings.

The indicator on the top will show solid green if the gateway is successfully connected to the WIFI.

### Cellular Connection (for 4G version)

* Step 1: Plug your SIM card into the Nano-SIM card slot

@@ -134,17 +146,20 @@ The indicator on the top will show solid green if the gateway is successfully co

<p style={{textAlign: 'center'}}><img src="https://files.seeedstudio.com/wiki/SenseCAP/M2_Multi-Platform/4g3.png" alt="pir" width={800} height="auto" /></p>

### Channel Plan Settings

Navigate to `LoRa` → `Channel Plan`

<p style={{textAlign: 'center'}}><img src="https://files.seeedstudio.com/wiki/SenseCAP/M2_Multi-Platform/M2-MP3.png" alt="pir" width={800} height="auto" /></p>

Select the Region and Frequency plan.

<p style={{textAlign: 'center'}}><img src="https://files.seeedstudio.com/wiki/SenseCAP/M2_Multi-Platform/M2-MP4.png" alt="pir" width={800} height="auto" /></p>


After setting, click `Save&Apply`.

### Checking the Gateway Connection Status

@@ -57,44 +57,46 @@ In this section, we will use "Model Assistant" here to enable the module. Combin

<!-- <div style={{textAlign:'center'}}><img src="https://files.seeedstudio.com/wiki/grove-vision-ai-v2/1.png" style={{width:1000, height:'auto'}}/></div>
-->
Now we will quickly get started with the module using SenseCraft AI, which requires only the module itself.

#### Step 1. Choose model

First, we need to open the main SenseCraft AI Model Assistant page.

<div class="get_one_now_container" style={{textAlign: 'center'}}>
<a class="get_one_now_item" href="https://sensecraft.seeed.cc/ai/#/home"><strong><span><font color={'FFFFFF'} size={"4"}>Go to SenseCraft AI</font></span></strong></a>
</div>
<br />

Choose the model you want to deploy and click on it.

<div style={{textAlign:'center'}}><img src="https://files.seeedstudio.com/wiki/grove-vision-ai-v2/a1.png" style={{width:1000, height:'auto'}}/></div>

You can see the description of this model here; if it suits your needs, click the **Deploy Model** button on the right side.

<div style={{textAlign:'center'}}><img src="https://files.seeedstudio.com/wiki/grove-vision-ai-v2/a2.png" style={{width:1000, height:'auto'}}/></div>

#### Step 2. Connect the module and upload a suitable model

Please use a Type-C cable to connect the Grove Vision AI V2 to your computer, then click the **Connect** button.

<div style={{textAlign:'center'}}><img src="https://files.seeedstudio.com/wiki/grove-vision-ai-v2/a3.png" style={{width:600, height:'auto'}}/></div>

Click the **Confirm** button. In the upper left corner of this page, select **USB Single Serial**, then click the **Connect** button.

<div style={{textAlign:'center'}}><img src="https://files.seeedstudio.com/wiki/grove-vision-ai-v2/a4.png" style={{width:600, height:'auto'}}/></div>

Please remain on this page for 1-2 minutes until the model is successfully uploaded. Kindly note that switching to another page tab during this process may result in an unsuccessful upload (our team is actively working on resolving this issue, and it will be fixed soon).

#### Step 3. Observations

Once the model is uploaded successfully, you will be able to see the live feed from the Grove Vision AI V2 camera in the Preview on the left.

<div style={{textAlign:'center'}}><img src="https://files.seeedstudio.com/wiki/grove-vision-ai-v2/a5.png" style={{width:1000, height:'auto'}}/></div>

<br />

We can see that in the Preview Settings on the left hand side, there are two setting options that can be changed to optimise the recognition accuracy of the model.

- **Confidence:** Confidence refers to the level of certainty or probability assigned by a model to its predictions.
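To make the effect of the Confidence threshold concrete, here is a hypothetical sketch: the labels, scores, and `filter_by_confidence` helper are invented for illustration, and the real filtering happens inside the SenseCraft preview rather than in a shell. Detections scoring below the threshold are simply dropped.

```sh
# Illustrative only: keep detections whose score (0-100) meets the
# threshold, mimicking what the Confidence setting does in the preview.
# Input items look like "label:score".
filter_by_confidence() {
  threshold=$1
  shift
  for det in "$@"; do
    score=${det##*:}          # take the text after the last ":"
    if [ "$score" -ge "$threshold" ]; then
      echo "$det"
    fi
  done
}

# With the threshold at 60, the low-scoring detection is dropped.
filter_by_confidence 60 "hand:82" "face:41" "ok_sign:75"
```

Raising the threshold reduces false positives but may also discard genuine detections, which is why it is worth tuning per model.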
