Results are bad when training cityscapes on my own #100

Open
harora opened this issue Jan 19, 2019 · 3 comments

Comments
@harora
harora commented Jan 19, 2019

Hi

1.) I'm using the model for Cityscapes. When I use the pre-trained model I get the expected mIoU, but when I train on my own custom list (75% of the data) I only get an mIoU of 2-3%. The loss decreases to 0.6-0.7, yet the mIoU is still very bad. Can somebody help with this?

Train command I used:

python train.py --update-mean-var --train-beta-gamma --random-scale --random-mirror --dataset cityscapes --filter-scale 1

All other parameters are unchanged: LR 5e-4, batch size 8, etc.

2.) Using the released trained weights I get the reported mIoU (67), but when I use them to fine-tune my model the initial loss is ~10-11. Shouldn't it be around 0.5-1, considering the model was already trained on the same data?
I also notice that as the loss goes down, the test accuracy decreases.
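
A minimal sanity check for the high initial fine-tuning loss is to confirm the checkpoint is actually being restored into the graph rather than silently skipped. The sketch below is TF1-style and makes assumptions: the graph is already built, variables are initialized in a running session, and "model.ckpt" is a placeholder path, not the repo's actual checkpoint name.

import numpy as np
import tensorflow as tf

def restore_and_verify(sess, ckpt_path="model.ckpt"):
    # Snapshot a few variables before restoring so we can tell
    # whether the restore changed anything at all.
    vars_to_check = tf.global_variables()[:5]
    before = sess.run(vars_to_check)

    # You may need to filter optimizer slot variables (e.g. Momentum)
    # out of var_list if they are not stored in the checkpoint.
    saver = tf.train.Saver(var_list=tf.global_variables())
    saver.restore(sess, ckpt_path)

    after = sess.run(vars_to_check)
    for v, b, a in zip(vars_to_check, before, after):
        status = "restored" if not np.allclose(b, a) else "UNCHANGED (still initial value)"
        print(f"{v.name}: {status}")

If the printed variables are unchanged after the restore, a loss of ~10-11 is simply the loss of a randomly initialized network, which would explain the gap.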

@pkuqgg

pkuqgg commented Mar 10, 2019

Hi, I have met the same problem. Have you solved it? Waiting for your reply, thank you.

@harora
Author

harora commented Mar 11, 2019

Hi. I couldn't solve the problem. I moved on to this implementation - https://github.com/oandrienko/fast-semantic-segmentation . It works well.

@LinRui9531


Would you tell me your Cityscapes dataset path, or the types of self.image_list and self.label_list?
When I run python train.py, I find that in util/image_reader.py the value returned by from_tensor_slices is empty.

dataset = tf.data.Dataset.from_tensor_slices((self.image_list, self.label_list))
dataset = dataset.map(lambda x, y: _parse_function(x, y, cfg.IMG_MEAN), num_parallel_calls=cfg.N_WORKERS)
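
In case it helps to narrow this down, one quick check is to confirm the list file parses into non-empty image/label lists before tf.data ever sees them. This is only a sketch under assumptions: it assumes a Cityscapes-style list file with "image_path label_path" pairs per line, and the path below is a placeholder, not the repo's actual default.

import os

list_path = "./data/list/cityscapes_train_list.txt"  # placeholder path, adjust to your setup
assert os.path.exists(list_path), f"list file not found: {list_path}"

image_list, label_list = [], []
with open(list_path) as f:
    for line in f:
        parts = line.strip().split(' ')
        if len(parts) != 2:
            continue  # skip malformed lines
        image_list.append(parts[0])
        label_list.append(parts[1])

print(f"{len(image_list)} image/label pairs")
assert image_list, "image_list is empty - check the list file and dataset path"
# Spot-check that the first referenced files actually exist on disk.
for p in (image_list[0], label_list[0]):
    print(p, "exists" if os.path.exists(p) else "MISSING")

If the lists come out empty or the spot-checked files are missing, from_tensor_slices will naturally produce an empty dataset, so the dataset path or list file is the thing to fix first.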
