
Quantization-aware training of SSD MobileNet V2 is too slow  #6909

Open

Description

@roadcode

I use ssdlite_mobilenet_v2_coco.config and modified it by adding a graph_rewriter block:

graph_rewriter {
  quantization {
    delay: 0
    weight_bits: 8
    activation_bits: 8
  }
}

With this change, each training step is roughly 10x slower than with the config that has no graph_rewriter.
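For context, here is roughly where that block sits in the pipeline config. This is a minimal sketch, assuming the stock ssdlite_mobilenet_v2_coco.config layout; the model and train_config bodies are abbreviated placeholders, not the actual contents.

# pipeline.config (excerpt)
# graph_rewriter is a top-level section, alongside model / train_config / eval_config.
model {
  ssd {
    # ... ssdlite_mobilenet_v2 feature extractor, anchors, losses from the stock config ...
  }
}
train_config {
  # ... batch size, optimizer, fine_tune_checkpoint, data augmentation ...
}
graph_rewriter {
  quantization {
    delay: 0            # steps to wait before inserting fake-quant ops; 0 = quantize from the start
    weight_bits: 8      # quantize weights to 8 bits
    activation_bits: 8  # quantize activations to 8 bits
  }
}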

The log without the graph_rewriter:

INFO:tensorflow:global step 199709: loss = 1.4051 (0.745 sec/step)
INFO:tensorflow:global step 199710: loss = 1.5033 (0.564 sec/step)
INFO:tensorflow:global step 199710: loss = 1.5033 (0.564 sec/step)
INFO:tensorflow:global step 199711: loss = 1.7374 (1.093 sec/step)
INFO:tensorflow:global step 199711: loss = 1.7374 (1.093 sec/step)
INFO:tensorflow:global step 199712: loss = 1.6265 (0.812 sec/step)

The log with the graph_rewriter:

INFO:tensorflow:global step 4554: loss = 9.3010 (4.084 sec/step)
INFO:tensorflow:global step 4554: loss = 9.3010 (4.084 sec/step)
INFO:tensorflow:global step 4555: loss = 8.2835 (4.055 sec/step)
INFO:tensorflow:global step 4555: loss = 8.2835 (4.055 sec/step)
INFO:tensorflow:global step 4556: loss = 8.0293 (4.060 sec/step)
INFO:tensorflow:global step 4556: loss = 8.0293 (4.060 sec/step)

The tensorflow-gpu version is 1.12.
Is this slowdown normal for quantization-aware training, and does anyone have an idea what causes it?
