This repository has been archived by the owner on Feb 20, 2020. It is now read-only.

Can you try another model? #17

Open
machoji opened this issue May 3, 2017 · 13 comments

Comments

@machoji

machoji commented May 3, 2017

I tried GoogLeNet, but it crashed. Could you tell me how to enable the log output? Thanks for your help.

@ThomasGueldner

I tried all of the officially ported Caffe 1.0 to Caffe2 models from here, and they all crashed on my Sony Xperia Z2 (Android 6.0). SqueezeNet at least output some predictions for the objects it detected for a few seconds (before crashing), while all the other models just showed a "Loading ..." text for a few seconds before crashing.

@FullZing

@ThomasGueldner I used the model translator tool to translate the Caffe model into Caffe2 format. It works.
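For reference, a minimal sketch of that translation step, assuming the tool meant here is Caffe2's caffe_translator module and the caffe_pb2 protos it ships; the file names are placeholders, and the helper names (TranslateModel, ConvertTensorProtosToInitNet) may differ slightly between Caffe2 versions:

from google.protobuf import text_format
from caffe2.proto import caffe_pb2
from caffe2.python import caffe_translator

# Load the original Caffe definition and weights (file names are placeholders).
caffe_net = caffe_pb2.NetParameter()
with open("deploy.prototxt") as f:
    text_format.Merge(f.read(), caffe_net)
pretrained = caffe_pb2.NetParameter()
with open("bvlc_googlenet.caffemodel", "rb") as f:
    pretrained.ParseFromString(f.read())

# Translate to Caffe2: a predict net plus the pretrained parameters,
# which are then wrapped as an init net. "data" is the input blob name.
predict_net, params = caffe_translator.TranslateModel(caffe_net, pretrained, is_test=True)
init_net = caffe_translator.ConvertTensorProtosToInitNet(params, "data")

# These two files are what the Android demo loads from its assets folder.
with open("predict_net.pb", "wb") as f:
    f.write(predict_net.SerializeToString())
with open("init_net.pb", "wb") as f:
    f.write(init_net.SerializeToString())

The module should also be runnable from the command line as python -m caffe2.python.caffe_translator deploy.prototxt model.caffemodel.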

@ThomasGueldner

@FullZing

Thanks for the hint!
I am going to try it out tomorrow and report back whether it really worked or not.

@jazzseow

I tried the translator tool on AlexNet and GoogLeNet, but it still doesn't work. I need some guidance.

@ThomasGueldner Does it work for you?

@northeastsquare

northeastsquare commented Jul 25, 2017

@ThomasGueldner @jazzseow @machoji I tried the GoogLeNet from https://github.com/caffe2/models; it just displays "Loading" forever.

@daimagou

daimagou commented Aug 16, 2017

Hi @machoji,
Has your problem been solved? Can you share how you solved it? Thank you in advance.
I tried AlexNet converted from Caffe to Caffe2 with the translator tool:

03-15 08:50:04.035 16251 16264 E F8DEMO : Attempting to load protobuf netdefs...
03-15 08:50:07.673 16251 16264 E F8DEMO : Couldn't parse net from data.
03-15 08:50:07.707 16251 16264 E F8DEMO : done.
03-15 08:50:07.707 16251 16264 E F8DEMO : Instantiating predictor...

03-15 10:28:04.956 17081 17094 E F8DEMO : Attempting to load protobuf netdefs...
03-15 10:28:08.830 17081 17094 E F8DEMO : Couldn't parse net from data.
03-15 10:28:08.867 17081 17094 E F8DEMO : done.
03-15 10:28:08.867 17081 17094 E F8DEMO : Instantiating predictor...
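One way to narrow this down is to check, on a desktop machine, whether the exact init_net.pb / predict_net.pb files bundled into the app's assets parse as Caffe2 NetDefs at all. The sketch below (file names are placeholders) mirrors the native ParseFromArray call that produces the "Couldn't parse net from data." message:

from caffe2.proto import caffe2_pb2
from google.protobuf.message import DecodeError

for filename in ("init_net.pb", "predict_net.pb"):
    with open(filename, "rb") as f:
        data = f.read()
    net = caffe2_pb2.NetDef()
    try:
        net.ParseFromString(data)
        print("%s: %d bytes, %d ops, parsed OK" % (filename, len(data), len(net.op)))
    except DecodeError as e:
        print("%s: %d bytes, failed to parse: %s" % (filename, len(data), e))

If both files parse here but not on the device, the files packaged into the APK's assets folder are probably not the same ones that were checked.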

@daimagou

I used the GoogLeNet from https://github.com/caffe2/models/tree/master/bvlc_googlenet:

03-15 05:32:58.714 8443 8456 E F8DEMO : Attempting to load protobuf netdefs...
03-15 05:32:58.808 8443 8456 E F8DEMO : done.
03-15 05:32:58.808 8443 8456 E F8DEMO : Instantiating predictor...
03-15 05:32:58.884 8443 8456 E F8DEMO : done.
We can see that this network initializes normally.

Actually, my AlexNet model can be parsed and initialized by Caffe2 on Ubuntu.
But the AlexNet model is about 300 MB, while GoogLeNet is about 30 MB.

So I think the model being too big is what leads to the crash.
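For anyone who wants to reproduce that desktop check, here is a minimal sketch (assuming Caffe2's Python bindings and numpy are installed; file names are placeholders) that mirrors the app's "Instantiating predictor..." step:

import numpy as np
from caffe2.python import workspace

with open("init_net.pb", "rb") as f:
    init_net = f.read()
with open("predict_net.pb", "rb") as f:
    predict_net = f.read()

# Instantiate the predictor, as the Android demo does after parsing the nets.
p = workspace.Predictor(init_net, predict_net)

# Dummy NCHW input: batch 1, 3 channels, 224x224 (use 227x227 for models such
# as AlexNet or SqueezeNet that expect 227x227 crops).
img = np.random.rand(1, 3, 224, 224).astype(np.float32)
results = p.run([img])
print("output shape:", np.asarray(results[0]).shape)

If this runs on Ubuntu but the same .pb files crash the app, the problem is more likely on the Android side (memory, input shape, or packaging) than in the translation itself.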

@weiyichang

weiyichang commented Sep 18, 2017

Hi, I encounter the same problem when running GoogLeNet (downloaded from https://github.com/caffe2/models/tree/master/bvlc_googlenet) on an Android phone.
Has anyone solved this problem? Any suggestions?
Thanks!!!

@weiyichang

weiyichang commented Sep 19, 2017

Update:
Hmm, I can run GoogLeNet on Android now, though I am not sure why this works:

Transform the model from Caffe to Caffe2 again, but modify the deploy.prototxt of GoogLeNet before performing the transformation:
input_param { shape: { dim: 1 dim: 3 dim: 224 dim: 224 } }

The original batch size is 10, so I guess the published Caffe2 model might have been converted with that setting.
Hope this will help :)

@luyuhua

luyuhua commented Mar 1, 2018

@northeastsquare @jazzseow
Hi, I tried a lot of the models in https://github.com/caffe2/models, but all of them just display "Loading". I found that SqueezeNet's input size is 1 x 3 x 227 x 227 while the other models' is 1 x 3 x 224 x 224, so I changed the code in ClassifyCamera.java and native-lib.cpp to make img_h and img_w 224. But it still displays "Loading". Has anyone solved this problem?
Thanks!!!

@xshipeng
Contributor

xshipeng commented Mar 13, 2018

To use other models, I suggest following the instructions for using onnx_caffe2 to transform an ONNX model to Caffe2, rather than the other approaches. Using a model exported from mobile_exporter in the PyTorch example will cause this Android app to crash.
Also, I think the image preprocessing should be consistent with how the model was trained. I have mentioned this in #21.
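A rough sketch of that ONNX route, assuming the onnx package and Caffe2's ONNX backend (the old onnx_caffe2, now caffe2.python.onnx) are installed; "model.onnx" is a placeholder and the exact converter entry point may differ between versions:

import onnx
from caffe2.python.onnx.backend import Caffe2Backend

onnx_model = onnx.load("model.onnx")

# Split the ONNX model into a Caffe2 init net (weights) and predict net (ops).
# Older onnx_caffe2 releases expect onnx_model.graph here instead of the model.
init_net, predict_net = Caffe2Backend.onnx_graph_to_caffe2_net(onnx_model)

with open("init_net.pb", "wb") as f:
    f.write(init_net.SerializeToString())
with open("predict_net.pb", "wb") as f:
    f.write(predict_net.SerializeToString())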

@why702

why702 commented May 11, 2018

I changed the model to resnet50, downloaded with "python -m caffe2.python.models.download resnet50".
But the process fails with "Couldn't parse net from data."
Does anyone have an idea how to fix this issue?

I get len = 6181001 from SqueezeNet, but len = 128070759 from resnet50.
Is this caused by resnet50 being too large?

Thx

// Loads a serialized Caffe2 NetDef from the APK's assets folder into `net`.
void loadToNetDef(AAssetManager* mgr, caffe2::NetDef* net, const char *filename) {
    AAsset* asset = AAssetManager_open(mgr, filename, AASSET_MODE_BUFFER);
    assert(asset != nullptr);
    const void *data = AAsset_getBuffer(asset);
    assert(data != nullptr);
    off_t len = AAsset_getLength(asset);
    assert(len != 0);
    // ParseFromArray returns false when the buffer is not a valid NetDef;
    // this is where "Couldn't parse net from data." is logged.
    if (!net->ParseFromArray(data, len)) {
        alog("Couldn't parse net from data.\n");
    }
    AAsset_close(asset);
}
