RuntimeError: No device connected using Docker and Raspberry Pi Zero 2 #12811
Hi @MatthewRajan13 Does it make a difference if you launch Docker with sudo admin permissions, using the command `sudo docker run`?
Unfortunately there is no difference when running with sudo.
Are you using the `--privileged` Docker flag?
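For reference, a minimal `docker run` invocation that exposes the host's device nodes to a container might look like the sketch below. The image name and script are placeholders, not the exact command used in this thread:

```shell
# Grant the container elevated access and map the host's /dev tree so
# librealsense can see the camera's /dev/video* and USB nodes.
# "my-realsense-image" and the script name are illustrative placeholders.
docker run --privileged \
  -v /dev:/dev \
  my-realsense-image python3 RealsenseServer.py
```

Mapping all of `/dev` is a blunt instrument; individual `--device` arguments are a tighter alternative once you know which nodes the camera enumerates as.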
Yes, our Docker Compose file is shown below (it begins with `services:`).
I think this conversation probably links back to this: @MatthewRajan13 and I are working on getting this running together. We're using a USB hat like this with the Pi Zero 2: https://www.amazon.com/UART-Onboard-Raspberry-Pi-XYGStudy/dp/B06Y5HYN5F

Most of the documentation I've found suggests that USB ports like this only supply 500 mA, whereas the RealSense needs 700 mA. We are powering the hub/Pi Zero through the GPIO pins, so I had hoped power wouldn't be an issue, as I'd assume they can pull from the same VIN.

We haven't yet tried compiling with the `-DFORCE_RSUSB_BACKEND=true` CMake flag, but we can try that next. Here's the weird thing: we ran
My understanding is that 500 mA is the standard limit that a USB 2.0 port supplies, and that the power draw of a RealSense camera increases when streams are enabled. The more streams that are enabled, the higher the power draw.

If you have access to the RealSense Viewer tool, one way to test for insufficient power is to set the Laser Power option under 'Stereo Module > Controls' to zero and see if the depth stream can be successfully enabled. If it can, increase the Laser Power in small increments. If the camera disconnects above a certain level, that indicates the USB port cannot meet the camera's power draw requirements.

A Pi 4 user at #8274 (comment) was able to use a PoE hat successfully with RealSense, but only with the official 1 meter USB cable supplied with the camera; the USB-C cables that he chose himself had problems. Are you using the official USB cable, please?
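The laser-power test described above can also be scripted without the Viewer, which suits a headless Pi. Below is a minimal sketch, assuming pyrealsense2 is installed and a camera is attached; the step size is arbitrary and this has not been run against this thread's hardware:

```python
import pyrealsense2 as rs

# Find the first connected RealSense device and its depth (stereo) sensor.
ctx = rs.context()
device = ctx.query_devices()[0]
depth_sensor = device.first_depth_sensor()

# Start the emitter at zero power, then raise it in small steps.
# If the camera disconnects above some level, the USB port likely
# cannot supply enough current for the camera's draw.
if depth_sensor.supports(rs.option.laser_power):
    power_range = depth_sensor.get_option_range(rs.option.laser_power)
    for power in range(0, int(power_range.max) + 1, 30):
        depth_sensor.set_option(rs.option.laser_power, power)
        print("Laser power set to", power)
```

Run this while streaming depth; the power level at which the device drops off the bus is a rough measure of the port's current headroom.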
We aren't using the stock USB cable, as we are tight on space on the drone. Instead, we're using a right-angle one. As for the RealSense Viewer, we could give that a shot. Currently we're running the Pi in headless mode. Does the Viewer tool come with pyrealsense2, or is this a separate install? I really hope we don't have to build the Viewer for an ARM device and run it on the Pi Zero :) Would knocking down the frame rate also decrease power draw/get us connected?
The pyrealsense2 wrapper has to be installed separately from the Viewer tool, but the Viewer does not require pyrealsense2 in order to operate. I appreciate, though, why the Viewer would not be practical on a headless system. If you instead run the text-based rs-hello-realsense example program (assuming you built the librealsense SDK with the examples included), that should act as a test of how capable your Pi Zero is of streaming depth without graphics.

I believe that reducing FPS would lower the processing demands placed on the Pi's CPU but not reduce the power draw. Whilst RealSense cameras can run on low-end computing devices because some of the processing is done on hardware inside the camera, a Pi Zero 2 is likely close to the minimum specification that the RealSense SDK will run on, especially in regard to memory capacity. You could try defining a larger swapfile for your Pi to create 'virtual memory' from its storage space, which may help to compensate for the 512 MB of real memory by giving the Pi a fallback when the real memory is used up. https://pimylifeup.com/raspberry-pi-swap-file/

I note that the Pi Zero 2 uses a Micro USB port instead of a full-size USB port. Small low-power computing devices with these ports can be susceptible to providing insufficient power for RealSense cameras. A solution is to use a mains-powered USB hub, but that would not be an option on a drone.
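The swapfile enlargement suggested above can be done in a few commands. This is a sketch assuming the `dphys-swapfile` service that ships with Raspberry Pi OS; the 2048 MB size is an arbitrary choice, not a value from this thread:

```shell
# Enlarge the swapfile to 2 GB to supplement the Pi Zero 2's limited RAM.
sudo dphys-swapfile swapoff
sudo sed -i 's/^CONF_SWAPSIZE=.*/CONF_SWAPSIZE=2048/' /etc/dphys-swapfile
sudo dphys-swapfile setup
sudo dphys-swapfile swapon
free -h   # verify the new swap size is reported
```

Note that heavy swapping to an SD card is slow and wears the card; it is a fallback for occasional memory pressure, not a substitute for RAM.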
Gotcha. We are going to try this out with the Raspberry Pi 4. So far, here are the results with that model: Build pyrealsense2 with the following Dockerfile:
I noticed that the export I used above was mentioned here: That worked for getting python3 to find the build, but I thought that was supposed to happen automatically with the Anyway, we were able to view the depth image in the terminal using the following: All of the above was tested on an AMD CPU. Going to use Docker buildx to cross-compile for ARM and see what happens on the Pi 4.
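The export referred to above appears to be the usual step of pointing Python at the freshly built bindings. A hedged sketch follows; the path is illustrative and depends on your CMake install prefix, not copied from this thread:

```shell
# Let python3 find the pyrealsense2 module built from source.
# /usr/local/lib is the default install location for the .so files;
# adjust if your build used a different prefix.
export PYTHONPATH=$PYTHONPATH:/usr/local/lib
python3 -c "import pyrealsense2 as rs; print(rs.__file__)"
```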
Thanks so much for the detailed feedback. I look forward to your next report after further testing. Good luck! |
No problem! I figure someone can use this down the line. So I was able to deploy the docker container to the Raspberry Pi 4 with no problem, and librealsense is somewhat happy. However, I'm stuck with this:
Here's my compose file:
I'm using
These /dev/video errors have been reported a few times in the past, and on all occasions it was on a Raspberry Pi. See #11843, #12552 (which is also a Docker case) and IntelRealSense/realsense-ros#2991. A RealSense user in the Docker case at #12552 (comment) suggested adding `--device` arguments to deal with it.
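In Compose syntax, the equivalent of those `--device` arguments is a `devices:` list mapping each host video node into the container. A sketch, with the service name and image as placeholders (the actual node numbers depend on how the camera enumerates):

```yaml
services:
  realsense:
    image: my-realsense-image   # placeholder
    privileged: true
    devices:
      - /dev/video0:/dev/video0
      - /dev/video1:/dev/video1
      - /dev/video2:/dev/video2
```

Listing `ls /dev/video*` on the host with the camera plugged in shows which nodes actually need mapping; a D435i typically enumerates several.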
I think I'm running Debian on the Pi, but Jammy Ubuntu in the container. I wonder if this might cause a problem with conflicting kernels. I might try Ubuntu as the host machine OS and see if that resolves these issues. What is this about
Hello hello hello! It works now on my end. I installed Ubuntu 22.04 Server (Jammy) as the host machine OS, then ran the same container above. Works like a charm! The docker build is for `--platform linux/arm/v7`, though, but whatever works. I have some fun proof here too: @MartyG-RealSense you're the man, I appreciate your help! Give me a few minutes to drop some other helpful docs here for future folks, then we can close this issue out.
You are very welcome, @Forsyth-Creations - I'm pleased that I was able to help. Thanks very much for the update about your success! |
Okay, bit of a mind dump, but here it goes. Just a brief timeline of things:
Still digging into this fix now, will report back with final findings! |
Thanks so much for the detailed feedback. I look forward to your next report. Good luck! |
Hi @MatthewRajan13 Do you have an update about this case that you can provide, please? Thanks! |
Hi @MatthewRajan13 Do you require further assistance with this case, please? Thanks! |
Case closed due to no further comments received. |
Sorry about the delay! What we eventually did was install Ubuntu Jammy on the Raspberry Pi 4, then build a docker container using `docker buildx` for an ARM version of the RealSense library. Things worked flawlessly after that! Thanks for all the help. If you want me to provide more context, I'd be happy to.
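The cross-compilation step described above can be sketched as follows. The image tag and registry are placeholders, and `linux/arm/v7` is taken from the platform mentioned earlier in the thread:

```shell
# One-time setup: register QEMU emulators and create a buildx builder
docker run --privileged --rm tonistiigi/binfmt --install all
docker buildx create --use

# Cross-compile the image for the Pi and push it to a registry
# ("myrepo/realsense:pi" is a placeholder tag)
docker buildx build --platform linux/arm/v7 -t myrepo/realsense:pi --push .
```

Building on an x86 workstation this way avoids the very long compile times of building librealsense natively on the Pi itself.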
It's great to hear that you were successful. Thanks so much for the update and the sharing of your solution! If you are willing to provide further details then I'm sure that it would be helpful to other Pi users with a docker container. Thanks again! |
Sure thing! Here's the Dockerfile that produced our base image:
@Forsyth-Creations Thanks so much! |
Issue Description
Hello, we are currently trying to connect and run our Intel RealSense D435i on a Raspberry Pi Zero 2 using a docker container. Our container base is https://hub.docker.com/r/nixone/pyrealsense2/tags so that we can use pyrealsense2 for ARM. When running Python scripts we get the following error:
```
root@echo:/firmware/app/Drone# python3 RealsenseServer.py
Server started at http://localhost:9000
192.168.137.1 - - [29/Mar/2024 18:13:42] "GET / HTTP/1.1" 200 -
Exception occurred during processing of request from ('192.168.137.1', 54686)
Traceback (most recent call last):
  File "/usr/lib/python3.10/socketserver.py", line 316, in _handle_request_noblock
    self.process_request(request, client_address)
  File "/usr/lib/python3.10/socketserver.py", line 347, in process_request
    self.finish_request(request, client_address)
  File "/usr/lib/python3.10/socketserver.py", line 360, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "/usr/lib/python3.10/socketserver.py", line 747, in __init__
    self.handle()
  File "/usr/lib/python3.10/http/server.py", line 432, in handle
    self.handle_one_request()
  File "/usr/lib/python3.10/http/server.py", line 420, in handle_one_request
    method()
  File "/firmware/app/Drone/RealsenseServer.py", line 78, in do_GET
    for frame_bytes in capture_frames():
  File "/firmware/app/Drone/RealsenseServer.py", line 30, in capture_frames
    pipeline.start(config)
RuntimeError: No device connected
```
The code we are running is below:
```python
import cv2
import numpy as np
import threading
import socketserver
from http.server import BaseHTTPRequestHandler, HTTPServer
import pyrealsense2 as rs

exit_event = threading.Event()

def combine_images(img1, img2):
    # Resize both images to the same height, then stack them side by side
    height = min(img1.shape[0], img2.shape[0])
    img1 = cv2.resize(img1, (int(img1.shape[1] * height / img1.shape[0]), height))
    img2 = cv2.resize(img2, (int(img2.shape[1] * height / img2.shape[0]), height))
    return np.hstack((img1, img2))

def capture_frames():
    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    # config.enable_record_to_file("test.bag")
    pipeline.start(config)  # raises "RuntimeError: No device connected"
    try:
        while not exit_event.is_set():
            frames = pipeline.wait_for_frames()
            color = np.asanyarray(frames.get_color_frame().get_data())
            depth = np.asanyarray(frames.get_depth_frame().get_data())
            # Colorize the 16-bit depth so it can be JPEG-encoded
            depth_vis = cv2.applyColorMap(
                cv2.convertScaleAbs(depth, alpha=0.03), cv2.COLORMAP_JET)
            ok, jpeg = cv2.imencode('.jpg', combine_images(color, depth_vis))
            if ok:
                yield jpeg.tobytes()
    finally:
        pipeline.stop()

class StreamingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/':
            self.send_response(200)
            self.send_header('Content-type', 'multipart/x-mixed-replace; boundary=--frame')
            self.end_headers()
            for frame_bytes in capture_frames():
                self.wfile.write(b'--frame\r\n')
                self.send_header('Content-type', 'image/jpeg')
                self.send_header('Content-length', len(frame_bytes))
                self.end_headers()
                self.wfile.write(frame_bytes)
                self.wfile.write(b'\r\n')
        else:
            self.send_error(404)

if __name__ == '__main__':
    server = HTTPServer(('0.0.0.0', 9000), StreamingHandler)
    print('Server started at http://localhost:9000')
    server.serve_forever()
```