This repository was archived by the owner on Jan 3, 2023. It is now read-only.
* Modify the model test script
- Change input numbers between 0-255
- Load both pb and pbtxt type graph
- Set back_end as parameter
- Save test result and tensor array to .npy file
# Compare model output between two different backends
### This model_test tool runs the model inference separately on the two backends specified in the json file (e.g. TensorFlow and nGraph); given the same inputs, the outputs from each backend should match. It can be used as a debugging tool for layer-by-layer comparison, and as verification that nGraph produces the same output as TensorFlow.
# Required files to use the tool:
* A json file: Provide model specific parameters. Look at the example ```mnist_cnn.json```. You can start with the ```template.json``` and modify it to match your model
* A TensorFlow model file: either a .pb or a .pbtxt file
## To prepare the required json file:
* Specify the ```reference_backend``` and ```testing_backend```. For TensorFlow on CPU, use 'CPU'; for nGraph, use 'NGRAPH_[desired backend name]' (e.g. use 'NGRAPH_CPU' for nGraph on CPU)
* You will need the names of the input/output tensors of the model. Currently we support multiple input tensors and output tensors. Put the input tensor names as a list in the ```input_tensor_name``` field of the json file, and the output tensor names as a list in the ```output_tensor_name``` field. If no outputs are specified in ```output_tensor_name```, all output tensors will be compared
* You will need the input dimensions for all the input tensors provided. Put the dimension information as a list in the ```input_dimension``` field of the json file; the order of the ```input_tensor_name``` list should match the ```input_dimension``` list. Therefore, the length of the ```input_tensor_name``` list should match the length of the ```input_dimension``` list
* Specify the location of the graph file in the ```graph_location``` field of the json file
* Specify the ```batch_size``` field in the json file to the desired batch size for inference
* Specify the tolerances between the reference and testing backend outputs at ```l1_norm_threshold```, ```l2_norm_threshold``` and ```inf_norm_threshold``` in the json file
* Specify the ```random_val_range``` used to generate the inputs; input values are drawn from the range 0 to random_val_range
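Putting the fields above together, a json config might look like the following. This is a minimal sketch: the field names come from this document, but the tensor names, dimensions, file path, and threshold values are illustrative assumptions, not taken from ```mnist_cnn.json```.

```python
import json

# Hypothetical config assembled from the fields described above;
# tensor names, dimensions, graph path, and thresholds are example values only.
config = {
    "reference_backend": "CPU",            # TensorFlow on CPU
    "testing_backend": "NGRAPH_CPU",       # nGraph on CPU
    "input_tensor_name": ["input:0"],      # list, one entry per input tensor
    "input_dimension": [[1, 28, 28, 1]],   # same order and length as input_tensor_name
    "output_tensor_name": ["output:0"],    # empty list -> compare all output tensors
    "graph_location": "mnist_cnn.pb",      # .pb or .pbtxt file
    "batch_size": 1,
    "l1_norm_threshold": 0.01,
    "l2_norm_threshold": 0.01,
    "inf_norm_threshold": 0.01,
    "random_val_range": 255,               # inputs drawn from 0 to 255
}

# Sanity check: each input tensor needs a matching dimension entry.
assert len(config["input_tensor_name"]) == len(config["input_dimension"])

print(json.dumps(config, indent=2))
```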
### The model_test tool will run the model inference and compare the outputs in terms of the L1, L2 and Inf norms. If the corresponding norm is smaller than the matching tolerance specified in the json file, the test passes; otherwise, the test fails. Each output tensor will be saved as a .npy file in a folder named '[reference_backend_name]-[testing_backend_name]' (e.g. CPU-NGRAPH_CPU). If a test fails, feel free to report the problem in the ngraph-tf github issue section.
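The norm comparison described above can be sketched as follows. This is an assumed illustration of the pass/fail logic, not the tool's actual implementation; the function name and signature are hypothetical.

```python
import numpy as np

def outputs_match(ref, test, l1_tol, l2_tol, inf_tol):
    """Hypothetical sketch of the comparison described above: the
    difference between the reference and testing backend outputs must
    stay under every configured tolerance for the test to pass."""
    diff = np.asarray(ref, dtype=np.float64) - np.asarray(test, dtype=np.float64)
    l1 = np.sum(np.abs(diff))                          # L1 norm of the difference
    l2 = np.sqrt(np.sum(diff ** 2))                    # L2 norm
    inf = np.max(np.abs(diff)) if diff.size else 0.0   # Inf norm
    return l1 < l1_tol and l2 < l2_tol and inf < inf_tol

# Nearly identical outputs pass under these tolerances.
ref = np.ones((2, 3))
print(outputs_match(ref, ref + 1e-9, 0.01, 0.01, 0.01))  # → True
```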