To really make sure that what the hardware is telling us matches what a human actually sees, we added a USB-attached camera and code to interpret video captures.
So that we can take this:

(camera capture of the drive LEDs)

and end up with:
```yaml
results:
- state: 4
  wwn: 50014ee26049cc9d
- state: 2
  wwn: 50014ee20aeb1530
- state: 4
  wwn: 50014ee20ae9bafa
- state: 2
  wwn: 50014ee2b59fb711
statekey: '{OFF: 1, NORM: 2, LOCATE: 3, FAULT: 4, LOCATE_FAULT: 5, UNKNOWN: 6}'
```
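Output like this is straightforward to consume programmatically. A minimal sketch, assuming the YAML above was saved as `results.yaml` (the filename is an assumption) and PyYAML is installed:

```python
#!/usr/bin/env python3
# Minimal sketch: consume the results YAML shown above.
# Assumes it was saved to results.yaml (hypothetical filename).
import yaml

# State codes copied from the statekey line in the output.
STATE_NAMES = {1: "OFF", 2: "NORM", 3: "LOCATE", 4: "FAULT",
               5: "LOCATE_FAULT", 6: "UNKNOWN"}

with open("results.yaml") as f:
    doc = yaml.safe_load(f)

for entry in doc["results"]:
    print(f'{entry["wwn"]}: {STATE_NAMES.get(entry["state"], "?")}')
```

Run against the sample output above, this would report two drives as FAULT and two as NORM.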
It's a bit more complicated than that: some of the LED states blink, so we need to take a number of samples over time to determine what is going on. See the source code for all the details.
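The multi-sample idea, roughly: grab a series of frames, measure the brightness of each LED's pixel region in every frame, and decide steady-on / steady-off / blinking from how that reading varies over time. A minimal sketch, assuming OpenCV; the region coordinates, threshold, and camera index here are made-up placeholders, not values from our rig:

```python
#!/usr/bin/env python3
# Sketch of multi-sample LED reading: a blinking LED only reveals
# itself across several frames, never in a single capture.
# Region coordinates, threshold, and device index are hypothetical.
import time
import cv2

REGION = (100, 120, 40, 60)   # x, y, w, h of one LED in the frame
SAMPLES = 20                  # frames to examine
INTERVAL = 0.1                # seconds between samples
ON_THRESHOLD = 128            # mean brightness that counts as "lit"

def sample_led(cap, region):
    """Return one True/False per frame: was the LED lit?"""
    x, y, w, h = region
    lit = []
    for _ in range(SAMPLES):
        ok, frame = cap.read()
        if not ok:
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        lit.append(gray[y:y + h, x:x + w].mean() > ON_THRESHOLD)
        time.sleep(INTERVAL)
    return lit

def classify(lit):
    """Steady on, steady off, or blinking, from the sample series."""
    if lit and all(lit):
        return "on"
    if not any(lit):
        return "off"
    return "blinking"

cap = cv2.VideoCapture(0)     # the USB attached camera
print(classify(sample_led(cap, REGION)))
cap.release()
```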
Caveats: This code is very specific to one of our test systems and would need changes for others to use it. To adapt it:
- Collect an updated image, figure out which LEDs you have access to, and update the `REG_0` - `REG_N` variables.
- Update `data.learn` (see the classification sketch after this list), which is done by the following:
  - Manually set all the LEDs of interest to normal, then run `./lec_determine.py collect G G G G > data.learn`
  - Manually set all the LEDs of interest to failure, then run `./lec_determine.py collect R R R R >> data.learn`
  - Manually set all the LEDs of interest back to normal.
- Test it.
- Update the `config.yaml` to ensure that each LED has the correct WWN (a hypothetical example follows this list).
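To make the learn step concrete: each `collect` run records what every LED region looks like while you hold the LEDs in a known state (G for green/normal, R for red/failure), and later classification compares fresh readings against those references. A rough sketch of that idea, not the actual `lec_determine.py` internals; the `data.learn` line format and the nearest-color matching here are assumptions:

```python
#!/usr/bin/env python3
# Rough sketch of learn-then-classify: match a fresh color sample
# against the references recorded in data.learn. The line format
# assumed here ("LABEL r g b" per sample) is hypothetical and not
# the real lec_determine.py format.
import math

def load_references(path="data.learn"):
    """Parse learned samples into {label: [(r, g, b), ...]}."""
    refs = {}
    with open(path) as f:
        for line in f:
            label, r, g, b = line.split()
            refs.setdefault(label, []).append((float(r), float(g), float(b)))
    return refs

def classify(color, refs):
    """Return the label of the nearest recorded sample in RGB space."""
    return min(((math.dist(color, sample), label)
                for label, samples in refs.items()
                for sample in samples))[1]

refs = load_references()
print(classify((180.0, 40.0, 35.0), refs))  # a reddish reading -> likely "R"
```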
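For the last step, the `config.yaml` mapping might look something like this. The layout and key names are guesses; the WWNs are the ones from the sample output above:

```yaml
# Hypothetical config.yaml layout: which camera region watches which
# drive, identified by WWN. Key names are assumptions.
leds:
  - region: REG_0
    wwn: 50014ee26049cc9d
  - region: REG_1
    wwn: 50014ee20aeb1530
  - region: REG_2
    wwn: 50014ee20ae9bafa
  - region: REG_3
    wwn: 50014ee2b59fb711
```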