
AI Tools sometimes takes a full minute to trigger Blue Iris camera(s) #58

Open
Caddyman68 opened this issue Apr 6, 2021 · 3 comments

Caddyman68 commented Apr 6, 2021

I have Blue Iris running on a Windows machine; I just upgraded to an i5-9600K with 16 GB of memory mainly to use this awesome (potentially?) program. My original setup of 8 cameras is working the same as always. I followed "The Hookup" tutorial and set up 4 of these cameras with RTSP sub and main streams (8 profiles). I have version 1.67 preview 7 of AI Tools. According to the log, it processes the image in less than a second (wonderful), but somehow the trigger to the high-res camera (main RTSP stream) takes about a minute to fire. It would stand to reason that something is off on the Blue Iris end, but I don't know where to ask if that's the case.
More info: DeepStack is running in a Docker container via the Portainer add-on in Home Assistant, which is running on VirtualBox on the same computer as Blue Iris. The IP addresses are on the same subnet (192.168.X.X) for BI, Home Assistant, AI Tools, DeepStack, and VirtualBox (so also Portainer), but the DeepStack "network bridge" is on 172.17.x.x. Is this a normal setup? I assume it is, since I followed Rob's tutorial. I noticed that versions above 1.65 have a different installation method, as outlined by Gentle Pumpkin. I assume the old method should still work too? I tried one camera with the new method and it didn't work either.
Entering the 1st URL in a web browser returns this result immediately:
signal=green
profile=1
lock=0
camera=Garage HR
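Since the URL responds instantly in a browser, one way to narrow things down is to build and time the trigger request from a script rather than the browser. A minimal sketch (the host, user, and password below are placeholders, not the actual credentials; this is not part of AI Tools itself):

```python
import time
import urllib.parse
import urllib.request

def build_trigger_url(host, camera, user, pw, memo=None):
    """Build a Blue Iris admin trigger URL like the ones in the AI Tools log."""
    url = f"http://{host}/admin?trigger&camera={camera}&user={user}&pw={pw}"
    if memo is not None:
        # BI expects the memo percent-encoded, e.g. "person (78.72%)"
        url += "&flagalert=1&memo=" + urllib.parse.quote(memo)
    return url

def time_trigger(url):
    """Fire the trigger URL and return the round-trip time in seconds."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return time.monotonic() - start
```

If `time_trigger` comes back in milliseconds but the camera still takes a minute to record, the delay is on the BI side after the trigger is received, not in the HTTP call.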

Here is the log entry for that camera:
[06.04.2021, 12:51:55.292]:
[06.04.2021, 12:51:55.292]: Starting analysis of D:\AIinput\GarageLR.20210406_125155475.jpg
[06.04.2021, 12:51:55.292]: (1/6) Uploading image to DeepQuestAI Server
[06.04.2021, 12:51:56.621]: (2/6) Waiting for results
[06.04.2021, 12:51:56.622]: (3/6) Processing results:
[06.04.2021, 12:51:56.622]: Detected objects:person (78.72%),
[06.04.2021, 12:51:56.622]: (4/6) Checking if detected object is relevant and within confidence limits:
[06.04.2021, 12:51:56.623]: person (78.72%):
[06.04.2021, 12:51:56.629]: Checking if object is outside privacy mask of GarageLR:
[06.04.2021, 12:51:56.630]: Loading mask file...
[06.04.2021, 12:51:56.630]: ->Camera has no mask, the object is OUTSIDE of the masked area.
[06.04.2021, 12:51:56.630]: person (78.72%) confirmed.
[06.04.2021, 12:51:56.630]: The summary:person (78.72%)
[06.04.2021, 12:51:56.631]: (5/6) Performing alert actions:
[06.04.2021, 12:51:56.631]: trigger url: http://192.168.X.X:80/admin?trigger&camera=Garage_HR&user=Dave&pw=69BI69
[06.04.2021, 12:51:56.637]: trigger url: http://192.168.X.X:80/admin?camera=Garage_HR&flagalert=1&trigger&memo=person%20(78.72%25)&user=Dave&pw=69BI69
[06.04.2021, 12:51:56.639]: -> 2 trigger URLs called.
[06.04.2021, 12:51:56.642]: (6/6) SUCCESS.
[06.04.2021, 12:51:56.642]: Adding detection to history list.
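The log above shows the whole AI Tools pipeline finishing in about 1.3 seconds, so the missing minute must be after the trigger URLs are called. To measure each stage precisely, the bracketed timestamps can be diffed with a few lines of Python (a rough sketch, assuming the exact log format shown above):

```python
from datetime import datetime

# Timestamp format used by the AI Tools log lines, e.g.
# "[06.04.2021, 12:51:55.292]: Starting analysis of ..."
LOG_FORMAT = "%d.%m.%Y, %H:%M:%S.%f"

def parse_ts(line):
    """Extract the datetime from one AI Tools log line."""
    stamp = line[line.index("[") + 1:line.index("]")]
    return datetime.strptime(stamp, LOG_FORMAT)

def stage_delays(lines):
    """Return the seconds elapsed between consecutive log lines."""
    stamps = [parse_ts(line) for line in lines]
    return [(b - a).total_seconds() for a, b in zip(stamps, stamps[1:])]
```

Running this over the excerpt above would show the only real gap (~1.3 s) is the DeepStack upload/wait step, confirming the lag sits between BI receiving the trigger and recording.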
On this trigger, the video lagged 18 seconds. I set a 20-second pre-buffer, but it still didn't catch any of the action.
Please advise if this is something you can help with, or if you know another source that could.


VorlonCD commented Apr 7, 2021

Try the fork of this project I've been working on for a while. At the very least, it will give you better logging on the AITOOL side of things:
https://github.com/VorlonCD/bi-aidetection/tree/master/src/UI/Installer

Perhaps try the latest version of BI? Running BI in a virtual machine could slow down video processing, so try it outside the VM, or throw more RAM/cores at the VM.


Caddyman68 commented Apr 7, 2021 via email

VorlonCD commented Apr 8, 2021

Basically the same setup concept, but you have more control over triggering objects and events, plus more AI server options. And if you run the Windows version of DeepStack, you can let AITOOL automatically start/stop it.

If you are using cloned cameras in BI, I don't think you have to any longer. It may reduce the load on BI if you set it up with just one per camera. I believe the instructions for doing it that way are on the first page of the IPCamTalk forum thread, and one of the last posts mentions a new feature in BI that may help further, but I haven't tried setting it up that way yet.

Keep an eye on the "ImgQueued" stat in the AITOOL status bar. If that number constantly stays high, you may need to set up more than one DeepStack server to handle the image requests, or find settings in BI to reduce how many images get processed. Also, for my 4K cameras I had to reduce the frames-per-second setting in the camera's own web interface (not in BI) so it wasn't overwhelming my system; 10 fps seems to be fine for me.
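To illustrate the multiple-DeepStack-server idea: the simplest way to spread images across instances is plain round-robin. This is only a sketch of the concept, not AITOOL's actual implementation, and the endpoint URLs are placeholders:

```python
import itertools

# Hypothetical DeepStack endpoints -- one per running instance.
SERVERS = [
    "http://192.168.1.10:8383/v1/vision/detection",
    "http://192.168.1.10:8384/v1/vision/detection",
]

# Cycle through the servers so each new image goes to the next instance.
_next_server = itertools.cycle(SERVERS)

def pick_server():
    """Return the next DeepStack endpoint in round-robin order."""
    return next(_next_server)
```

With two instances, each one only sees half the images, which roughly halves the queue depth as long as both can keep up with their share.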
