Loading a PyTorch model fails with DeserializeError("Candle pickle error: specified file not found in archive")
#1376
-
I believe there may be a significant issue with my model definition. However, based on this error message, I can't determine exactly what adjustments are needed.
use crate::lore_detector::LoreDetectModel;
use crate::lore_processor::LoreProcessModel;
use burn::module::Module;
use burn::record::{FullPrecisionSettings, Recorder};
use burn::tensor::backend::Backend;
use burn_import::pytorch::PyTorchFileRecorder;

#[derive(Module, Debug)]
pub struct LoreModel<B: Backend> {
    model: LoreDetectModel<B>,
    processor: LoreProcessModel<B>,
}

impl<B: Backend> LoreModel<B> {
    pub fn new(model_path: &str, device: &B::Device) -> Self {
        let record: LoreModelRecord<B> = PyTorchFileRecorder::<FullPrecisionSettings>::default()
            .load(model_path.into(), device)
            .unwrap();
        let model = LoreDetectModel::new_with(record.model);
        let processor = LoreProcessModel::new_with(record.processor, device);
        Self { model, processor }
    }
}

The model's state dict weight keys:
"model.conv1.weight",
"model.bn1.weight",
"model.bn1.bias",
"model.bn1.running_mean",
"model.bn1.running_var",
"model.bn1.num_batches_tracked",
"model.layer1.0.conv1.weight",
"model.layer1.0.conv1.bias",
"model.layer1.0.bn1.weight",
"model.layer1.0.bn1.bias",
...
"processor.stacker.logi_encoder.0.weight",
"processor.stacker.logi_encoder.0.bias",
"processor.stacker.logi_encoder.2.weight",
"processor.stacker.logi_encoder.2.bias",
"processor.stacker.tsfm.linear.weight",
"processor.stacker.tsfm.linear.bias",
"processor.stacker.tsfm.encoder.pe.pe",
"processor.stacker.tsfm.encoder.layers.0.norm_1.alpha",
"processor.stacker.tsfm.encoder.layers.0.norm_1.bias",
"processor.stacker.tsfm.encoder.layers.0.norm_2.alpha",
...
-
I followed the official documentation's example for loading a PyTorch model and got the same error. My OS is Windows 11.

use burn::{
    module::Module,
    nn::conv::{Conv2d, Conv2dConfig},
    tensor::{backend::Backend, Tensor},
};
use burn::record::{FullPrecisionSettings, Recorder};
use burn_import::pytorch::PyTorchFileRecorder;
#[derive(Module, Debug)]
pub struct Net<B: Backend> {
    conv1: Conv2d<B>,
    conv2: Conv2d<B>,
}

impl<B: Backend> Net<B> {
    /// Create a new model from the given record.
    pub fn new_with(record: NetRecord<B>) -> Self {
        let conv1 = Conv2dConfig::new([2, 2], [2, 2]).init_with(record.conv1);
        let conv2 = Conv2dConfig::new([2, 2], [2, 2])
            .with_bias(false)
            .init_with(record.conv2);
        Self { conv1, conv2 }
    }

    /// Forward pass of the model.
    pub fn forward(&self, x: Tensor<B, 4>) -> Tensor<B, 4> {
        let x = self.conv1.forward(x);
        self.conv2.forward(x)
    }
}

type Backend1 = burn_ndarray::NdArray<f32>;

fn main() {
    let device = Default::default();
    let record = PyTorchFileRecorder::<FullPrecisionSettings>::default()
        .load("J:/RustWorkspace/table-structure-recognition/files/conv2d.pt".into(), &device)
        .expect("Should decode state successfully");
    Net::<Backend1>::new_with(record);
}
-
This works on Linux but reproduces on Windows. It is a known issue, #1178 (also listed in the book). It appears to have been fixed in the latest candle version (0.4.0), which I tested to confirm.
-
The PR has been merged! It should work on Windows if you check out the main branch.
PR #1382 has been opened, which should fix this Windows issue by bumping to candle-core 0.4.1.
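Until a crates.io release containing the candle-core bump is published, one way to pick up the fix is to depend on burn's main branch directly via git dependencies. A minimal Cargo.toml sketch, assuming the crate names used in the snippets above and the current burn repository location (verify both against your project):

```toml
# Hypothetical override: point the burn crates at the git main branch
# so the transitive candle-core dependency includes the Windows fix.
[dependencies]
burn = { git = "https://github.com/tracel-ai/burn", branch = "main" }
burn-import = { git = "https://github.com/tracel-ai/burn", branch = "main" }
burn-ndarray = { git = "https://github.com/tracel-ai/burn", branch = "main" }
```

All three burn crates should come from the same source so their versions stay in lockstep; mixing a git `burn` with a crates.io `burn-import` can produce type mismatches. Once a release with the fix lands, switch back to plain version requirements.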