Timeout not wired in client unary call #193

Open
carlschwabni opened this issue Nov 14, 2022 · 5 comments
Labels
type: bug Something isn't working

Comments

@carlschwabni

It appears that the timeout input isn't wired through to the optional timeout inside "Client Unary Call.vim":

[screenshot attached]

Version 0.5.0.1
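
For anyone reading along in text form: LabVIEW is graphical, so here is a rough Python analogy of the pattern as I understand it (all names are illustrative, not the actual grpc-labview API; a LabVIEW occurrence behaves roughly like a `threading.Event`):

```python
import threading

def client_unary_call(send_request, timeout_ms=5000):
    done = threading.Event()             # stands in for the occurrence created by the VI
    send_request(on_complete=done.set)   # the DLL call fires the occurrence when the reply lands

    # The bug analogy: timeout_ms is accepted on the connector pane but never passed
    # to the wait, so a missed notification blocks this call forever.
    done.wait()                          # should be: done.wait(timeout_ms / 1000.0)
```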

@eyesonvis added the type: bug Something isn't working label Nov 21, 2022
@AndrewHeim
Contributor

Ha! This affects #272, #320, and possibly #77.

I didn't even see that there was an input for the timeout when I investigated #320. I will submit this as a pull request.

When I tested it, it solved that issue.

@AndrewHeim
Contributor

I don't have an LV 2019 environment to do this in! I will try to do it soon.

Until then, I've patched mine locally, which makes it passable for now.

While functional, it's still not ideal - see #207 .

@kt-jplotzke
Contributor

@AndrewHeim Is your patch simply wiring the timeout control to the "Wait on Occurrence" subVI? Or did you make other modifications?

I am also currently troubleshooting a similar issue where I get an occasional hang while waiting for the occurrence to fire. In my case, I'm running a sequence from TestStand - when I run using the development environment adapter, everything works fine, but when I run from the run-time engine, most (but not all) of the time I get a hang waiting for an occurrence (even though I can tell the gRPC message went out to the server). I'm also using the localhost loopback adapter, so I'm suspicious that this is related to your findings.

When I simply connect the timeout on the "Wait on Occurrence" VI, an error (-1004) is generated from that VI when the timeout occurs, and then the call fails. Obviously not the best workaround for me. 😀

Can you explain your patch and/or post some screenshots if you can't make a pull request?

Thanks!

@AndrewHeim
Contributor

Your situation makes sense - when I moved to higher performance situations, I got the hang more often. Moving from the dev environment to the runtime engine is consistent with that.

For my patch, I simply wired the timeout to that input terminal, so the Timeout control now feeds both the DLL call it was already wired to and the Wait on Occurrence as well.

You're right that ideally it really shouldn't get stuck at all. When it does, this fix allows it to escape (eventually, depending on the timeout). That still generates a spurious error. In the LabVIEW world that's easy enough to filter out, but I can see how it might be more problematic depending on how it's being called from TestStand.
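
In text form, the patch amounts to something like this (again a hypothetical Python analogy using `threading.Event` in place of an occurrence, not the real VI code):

```python
import threading

def client_unary_call(send_request, timeout_ms=5000):
    done = threading.Event()            # the occurrence
    send_request(on_complete=done.set)  # DLL call; fires the occurrence on completion

    # Patched behaviour: the same Timeout value that feeds the DLL call also bounds
    # the wait, so a missed occurrence can no longer hang the caller indefinitely.
    if not done.wait(timeout_ms / 1000.0):
        # This is the spurious escape path (the -1004 mentioned above). In LabVIEW it
        # comes out as an error cluster that can be filtered downstream if the call
        # actually completed.
        raise TimeoutError("timed out waiting for the gRPC occurrence")
```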

It's also worth referencing #210 - apparently gRPC references shouldn't be reused. In my testing this also appeared to get rid of the problem, at least over a few minutes of runs. It does slow things down a little bit... but it shouldn't be slowing anything down near where the race would be, so it's possible my race-condition theory is wrong and something else is going on. And it's possible that getting a new gRPC session each time doesn't actually get rid of this error. But it might be a clue.
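
As a rough sketch (same hypothetical Python analogy as above, not the actual API), that mitigation looks like:

```python
# Hypothetical analogy of the #210 mitigation: open a fresh client reference per call
# and close it afterwards, instead of reusing one reference (and its occurrence) across calls.
def call_with_fresh_client(create_client, destroy_client, do_call):
    client = create_client()      # stands in for creating a new gRPC client reference
    try:
        return do_call(client)
    finally:
        destroy_client(client)    # the reference is never reused, so no stale occurrence state
```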

@kt-jplotzke
Contributor

Thanks. After a bit more testing, I'm fairly confident that this race condition is not my issue. I created a test gRPC method with a 1000 ms delay before it returns, and I still hang at "Wait on Occurrence", which rules out a performance/speed-related issue. I'm going to open a new issue for my problem.
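
For reference, the shape of that test in the same hypothetical Python analogy as above (my real test is a LabVIEW/TestStand sequence, so this is purely illustrative):

```python
import threading

def slow_send_request(on_complete):
    # Test double: the "server" waits 1000 ms before firing completion, so the reply
    # arrives well after the wait has started and an early-firing occurrence can't
    # explain a hang here.
    threading.Timer(1.0, on_complete).start()

# With the patched call from the sketch above, this completes after about a second:
# client_unary_call(slow_send_request, timeout_ms=5000)
```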
