behaviour of ADR #4610
Comments
I'll try to reproduce this and get back.
I managed to get a working version by experimenting with the CLI over the weekend. If I set the following values for the patched MCCI LoRaWAN stack, which rejects ADR frames:
then the nodes seem to behave as in TTN v2. I waited overnight and no additional packets were sent down. Nodes with the unpatched version and the same CLI settings do seem to get additional packets. I can see the downlink counter go up, but I do not yet know what kind of packet it is. I assume these are ADR packets, since the ADR algorithm did not go into an error state, but this is pure speculation.
Small update.
Keep in mind that the end device rejects individual data rate indices / transmission powers / numbers of retransmissions. Within a single session, the Network Server will not attempt to send the same tuple again if the end device rejected those values, but it may attempt to send different tuples that the end device has not rejected. The end device cannot stop the algorithm unilaterally. Also, I'm interested in the details of the uplinks themselves, i.e. why the ADR algorithm itself is not 'acting right'. If you could provide a series of uplinks (so basically …
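To make the per-session rejection behaviour described above concrete, here is a minimal sketch of how a server could track rejected (data rate, TX power, NbTrans) tuples. The class and method names are hypothetical illustrations, not the actual Network Server implementation:

```python
# Hypothetical sketch: per-session tracking of ADR parameter tuples
# the end device has rejected. Illustrative only, not the real
# Network Server code.

class AdrTupleTracker:
    """Remembers (data_rate, tx_power, nb_trans) tuples the device rejected.

    Within one session the server must not retry a rejected tuple,
    but it may still propose tuples the device has not rejected yet.
    The device cannot switch the algorithm off unilaterally.
    """

    def __init__(self):
        self._rejected = set()

    def record_rejection(self, data_rate, tx_power, nb_trans):
        # Called when a LinkADRAns comes back with NACK bits set.
        self._rejected.add((data_rate, tx_power, nb_trans))

    def may_request(self, data_rate, tx_power, nb_trans):
        # The server only skips tuples it has already seen rejected.
        return (data_rate, tx_power, nb_trans) not in self._rejected

    def reset_session(self):
        # A new join starts a new session and clears the history.
        self._rejected.clear()
```

This is why a patched node that keeps rejecting requests does not silence the algorithm: each rejection only removes one tuple from the server's candidate set for the current session.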
As usual, not much is as it seems at first sight. I could not reproduce the SF12 problem, but that was only one of the problems. I have now discovered that the cause might be the ADR setting in the stack. I originally wanted my old MAC 1.0.3 nodes to behave in v3 the way they did in v2. The funny thing is: if I set ADR to false I get a lot of ADR requests, and if I set it to true I get none. That seems paradoxical to me.
Using the current CE stack version I can get more information for debugging, and I discovered a paradoxical behaviour. If ADR is disabled (under General Settings → Network layer → Advanced MAC settings), the stack does send Link ADR messages starting right after the initial join. The node sits very near the gateway (SNR 10, RSSI -48). The ADR message only contains the channel list, no power or data rate setting, and it is resent again and again, as the node only acknowledges the channel list. If, on the other hand, ADR is enabled for the same node, the stack does not send any ADR message after the join. If ADR is disabled and I then switch it to enabled while the node stays within the same session, the ADR messages stop, and they stay stopped even when I switch ADR back to disabled, as long as the node remains in the same session. I am puzzled whether I have even understood the logic of that ADR switch correctly.
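For reference when inspecting such "channel-list-only" requests on the wire, the 4-byte LinkADRReq MAC command payload (LoRaWAN 1.0.x) can be decoded as below. The field layout follows the LoRaWAN specification; the function name is just illustrative:

```python
def decode_link_adr_req(payload: bytes) -> dict:
    """Decode the 4-byte LinkADRReq payload (LoRaWAN 1.0.x, CID 0x03).

    Layout: DataRate_TXPower (1 byte: data rate in bits 7:4, TX power
    in bits 3:0), ChMask (2 bytes, little-endian), Redundancy (1 byte:
    ChMaskCntl in bits 6:4, NbTrans in bits 3:0).
    """
    if len(payload) != 4:
        raise ValueError("LinkADRReq payload must be 4 bytes")
    dr_txpow, ch_lo, ch_hi, redundancy = payload
    return {
        "data_rate": dr_txpow >> 4,
        "tx_power": dr_txpow & 0x0F,
        "ch_mask": ch_lo | (ch_hi << 8),
        "ch_mask_cntl": (redundancy >> 4) & 0x07,
        "nb_trans": redundancy & 0x0F,
    }
```

Dumping these fields for the repeated downlinks would show whether the stack really requests only a channel mask or also carries a data rate / power change the node is NACKing.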
fixed by #5353 |
Summary
There might be multiple issues here.
I deactivated ADR in MCCI LMIC 4.0, yet the current public community stack still sends an ADR frame down.
Additionally, that ADR frame sets SF12, even for good RSSI/SNR values. LMIC is not yet protected against unexpected ADR frames from a LoRaWAN server, so it simply accepts them.
I implemented a patch so that LMIC does not accept ADR frames it does not expect. I can then see an ADR reject message in the console (mcci-catena/arduino-lmic#786). The patch works and the node stays on SF7.
After this, however, it is not possible to send any downlinks. Whenever I schedule one, it is substituted by another frame with the RX settings. I tried disabling ADR via the CLI tool, but this does not change the described behaviour.
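The node-side guard idea can be sketched as follows. This mirrors the intent of the patch (reject LinkADRReq when the device never asked for ADR), not the actual LMIC code; the LinkADRAns status bits are as defined in LoRaWAN 1.0.x:

```python
# Sketch of the node-side guard: if the device did not set the ADR bit
# in its uplinks, answer LinkADRReq with all three ACK bits cleared
# instead of applying the new settings. Hypothetical names; not the
# actual arduino-lmic patch.

POWER_ACK = 0x04         # LinkADRAns status bit 2
DATA_RATE_ACK = 0x02     # LinkADRAns status bit 1
CHANNEL_MASK_ACK = 0x01  # LinkADRAns status bit 0

def link_adr_ans(adr_enabled: bool, settings_acceptable: bool) -> int:
    """Build the LinkADRAns status byte for a received LinkADRReq."""
    if not adr_enabled:
        # Device never requested ADR: reject the whole request.
        return 0x00
    if settings_acceptable:
        return POWER_ACK | DATA_RATE_ACK | CHANNEL_MASK_ACK
    # ADR is on but the proposed settings are unsupported.
    return 0x00
```

With this guard the node keeps its own data rate (SF7 here), which matches the "ADR reject" messages visible in the console after the patch.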
I am out of options. I noticed this because I am migrating nodes over to v3.
Steps to Reproduce
What do you want to see instead?
ADR frames should not be sent down to nodes that do not request them.
SF12 should not be chosen for good RSSI/SNR values.
How do you propose to test this?
I can offer a test device with LMIC which I could setup for a test network server.