
Add error checking to building a test #13

Open
abacef opened this issue Aug 9, 2019 · 0 comments
Labels
enhancement New feature or request

Comments

Collaborator

abacef commented Aug 9, 2019

Is your feature request related to a problem? Please describe.
When I or a user of the GUI test builder builds a test that does not work correctly on the instrument, there is no feedback explaining why.

Describe the solution you'd like
It would be useful to have some sort of validation, using numeric ranges or string regular expressions, of the data being sent to instruments. On most of the instruments I tested, if a command is invalid, the instrument simply does nothing; some instruments set an error code when an invalid command is entered. Right now, on the PTCS side, there is no error detection or acknowledgement. Here are some options for improving this:

  • Make the driver follow every command it sends with a query asking the instrument whether the command was valid, and raise an exception in the driver if it was not. I would not advocate this, because some commands are mission critical and must run at their maximum speed, and this extra communication overhead can be very slow.
  • Give each command a validation routine for its parameters, and when the command is called in the driver, raise an exception to the user before the command is attempted. This can be time-consuming for someone writing a driver, who has to find and correctly implement a robust validator for each method.
  • When a test is built, before the file is saved, call the validation methods on each command input and report any errors to the user so the test can be saved successfully. Catching errors before run-time, where possible, is always good.
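The second and third options both come down to attaching a validator to each driver command. As a minimal sketch of the idea (all names here are hypothetical, not part of the PTCS codebase), a decorator could check each parameter against a numeric range or a regular expression and raise an exception before anything is written to the instrument:

```python
import re


def validate_params(*validators):
    """Hypothetical decorator: run one validator per positional argument
    of a driver command before the command is sent to the instrument."""
    def decorator(func):
        def wrapper(self, *args):
            for validator, arg in zip(validators, args):
                if not validator(arg):
                    raise ValueError(
                        f"{func.__name__}: invalid argument {arg!r}")
            return func(self, *args)
        return wrapper
    return decorator


def in_range(lo, hi):
    """Validator factory: accept numbers within [lo, hi]."""
    return lambda v: isinstance(v, (int, float)) and lo <= v <= hi


def matches(pattern):
    """Validator factory: accept strings fully matching a regex."""
    regex = re.compile(pattern)
    return lambda v: isinstance(v, str) and regex.fullmatch(v) is not None


class FakeDriver:
    """Stand-in for an instrument driver; records what would be sent."""
    def __init__(self):
        self.sent = []

    @validate_params(in_range(0.0, 10.0))
    def set_voltage(self, volts):
        # In a real driver this would write to the instrument.
        self.sent.append(f"VOLT {volts}")


drv = FakeDriver()
drv.set_voltage(5.0)       # accepted, command is recorded
try:
    drv.set_voltage(99.0)  # rejected before anything is sent
except ValueError as e:
    print("caught:", e)
```

The same validators could then be reused by the test builder (the third option) to check every command input before the test file is saved, so errors surface at build time rather than at run time.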

Describe alternatives you've considered
Debugging the issue by hand

Additional context
None

@abacef abacef added the enhancement New feature or request label Aug 9, 2019