The decoder eats duplicated fields.
This was an issue before; I simply excluded the failing file name from the byte size equality tests. In #9, a new duplicated field turned up, and excluding that entire file name would regress test coverage. Files now have a method that can be used to check whether a discrepancy between the original file size and the re-encoded file size is expected. This is still not ideal.
Here's where the "Fun:" part comes in: One could find a reasonable upper bound for allowed differences, as in "we know field X is duplicated with name of length Y and data size of Z -- the file is allowed to have as many as 12+16+Y+Z+3 more bytes" (12+16 for header, 3 for alignment).
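For illustration, a minimal sketch of that bound as a helper; the class and method names here are made up for this issue and are not part of DsonFile:

```java
// Hypothetical helper, not part of the actual code base.
// Upper bound on how many extra bytes the original file may contain compared
// to the re-encode when a single duplicated field is dropped: 12 + 16 for the
// field's header entries, its name (length Y) and data (size Z), plus at most
// 3 bytes of alignment padding.
final class SizeSlack {
    static int maxExtraBytesForDuplicatedField(int nameLength, int dataSize) {
        return 12 + 16 + nameLength + dataSize + 3;
    }
}
```

The byte size equality test could then allow the original file to be up to that many bytes larger than the re-encode instead of skipping the file entirely.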
One could take this even further and derive an allowed number of differing bytes for files without duplicated fields: assuming that one or two bytes per Meta2 block have garbage bits and that the header contains a bunch of garbage, we could have a more fine-grained test.
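A sketch of that fine-grained variant under the same assumptions; the header garbage constant is a placeholder, not a measured value:

```java
// Hypothetical tolerance for files without duplicated fields: how many bytes
// are allowed to differ between the original and the re-encode, assuming at
// most two garbage bytes per Meta2 block plus some fixed header garbage.
final class DiffTolerance {
    // Placeholder; the real amount of header garbage would need to be measured.
    static final int HEADER_GARBAGE_BYTES = 8;

    static int maxDifferingBytes(int meta2BlockCount) {
        return HEADER_GARBAGE_BYTES + 2 * meta2BlockCount;
    }
}
```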
Hi there. Any update on this? I am also trying to write my own version of Darkest Dungeon Save Editor, and encountered this issue with persist.progression.json, where slay_a_squiffy_with_jester gets mentioned twice as a field. Is it safe to ignore the duplicated data?
I can't give you an authoritative answer on how to deal with these. I'm not aware of any issues caused by dropping these duplicates, but the scope of this project was the binary encoding of the save data, not the actual semantics of the data, so there might be lots of issues I don't know about.
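For what it's worth, "dropping the duplicates" can be as simple as keeping the first occurrence of each field name. A minimal sketch, with a made-up field type since I don't know your decoder's data structures:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Hypothetical de-duplication step, not code from this repository: keep only
// the first occurrence of each field name, preserving the original order.
final class DeduplicateFields {
    static <T> Map<String, T> keepFirstOccurrence(List<T> fields, Function<T, String> nameOf) {
        Map<String, T> byName = new LinkedHashMap<>();
        for (T field : fields) {
            // putIfAbsent ignores later entries with the same name,
            // so duplicated fields are silently dropped.
            byName.putIfAbsent(nameOf.apply(field), field);
        }
        return byName;
    }
}
```

Whether the first or the later occurrence is the "right" one is exactly the semantic question I can't answer.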
I'm glad you're finding this project useful though!
DarkestDungeonSaveEditor/src/main/java/de/robojumper/ddsavereader/file/DsonFile.java
Lines 359 to 390 in 50c28e9
DarkestDungeonSaveEditor/src/test/java/de/robojumper/ddsavereader/file/ConverterTests.java
Lines 82 to 87 in 50c28e9