Suggestion: Compress embedded data to reduce size of executable #5

Open
Boscop opened this issue May 2, 2021 · 1 comment

Boscop commented May 2, 2021

Hi,
I also implemented wordninja in Rust (~3.5 years ago) but never published it.
In my implementation I gzip-compress the embedded word list to reduce the size of the executable
and decompress it at runtime with flate2's GzDecoder.
I think compressing the data would also make sense for your implementation :)
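
For illustration, the .gz asset could be produced at build time. This is only a rough, hypothetical build.rs sketch (it assumes a flate2 build-dependency and the plain wordninja_words.txt next to Cargo.toml), not part of my crate:

// build.rs — rough sketch; compresses the plain wordlist into the .gz file
// that the library embeds via include_bytes! in the code below.
use std::{fs::File, io};
use flate2::{write::GzEncoder, Compression};

fn main() -> io::Result<()> {
	let mut input = File::open("wordninja_words.txt")?;
	let output = File::create("wordninja_words.txt.gz")?;
	let mut encoder = GzEncoder::new(output, Compression::best());
	io::copy(&mut input, &mut encoder)?;
	encoder.finish()?;
	println!("cargo:rerun-if-changed=wordninja_words.txt");
	Ok(())
}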

Btw, this is my implementation (it was a straight port of the Python script):

//! Uses dynamic programming to infer the location of spaces in a string without spaces.

#[macro_use]
extern crate cute;
#[macro_use]
extern crate lazy_static;

use std::{collections::HashMap, io};
use flate2::read::GzDecoder;

pub fn read_string<T: io::Read>(r: &mut T) -> Result<String, io::Error> {
	let mut s = String::new();
	r.read_to_string(&mut s).map(|_| s)
}

pub fn split(s: &str) -> Vec<&str> {
	lazy_static! {
		static ref DATA: String = read_string(&mut GzDecoder::new(&include_bytes!("../wordninja_words.txt.gz")[..]).unwrap()).unwrap();
		// Build a cost dictionary, assuming Zipf's law and cost = -log(probability).
		static ref WORDS: Vec<&'static str> = DATA.lines().collect::<Vec<_>>();
		static ref WORDCOST: HashMap<&'static &'static str, f32> = c!{k => ((i + 1) as f32 * (WORDS.len() as f32).ln()).ln(), for (i, k) in WORDS.iter().enumerate()};
		static ref MAXWORD: usize = WORDS.iter().map(|w| w.len()).max().unwrap();
	}

	// Find the best match for the i first characters, assuming cost has
	// been built for the i-1 first characters.
	// Returns a pair (match_cost, match_length).
	let best_match = |cost: &[f32], i| {
		use noisy_float::prelude::*;
		use std::f32;

		let candidates = cost[if i >= *MAXWORD { i - *MAXWORD } else { 0 }..i].iter().rev().enumerate();
		candidates
			.map(|(k, c)| (c + *WORDCOST.get(&&s[i - k - 1..i]).unwrap_or(&f32::INFINITY), k + 1))
			.min_by_key(|&(c, _)| n32(c))
			.unwrap()
	};

	// Build the cost array.
	let mut cost = vec![0.];
	for i in 0..s.len() {
		let (c, _) = best_match(&cost, i + 1);
		cost.push(c);
	}

	// Backtrack to recover the minimal-cost string.
	let mut out = vec![];
	let mut i = s.len();
	while i > 0 {
		let (c, k) = best_match(&cost, i);
		assert_eq!(c, cost[i]);
		out.insert(0, &s[i - k..i]);
		i -= k;
	}
	out
}

#[test]
fn tests() {
	fn check(s: &str, r: &[&str]) {
		assert_eq!(split(s), r);
	}
	check("sensesworkingovertime", &["senses", "working", "overtime"]);
	check("thumbgreenappleactiveassignmentweeklymetaphor", &[
		"thumb",
		"green",
		"apple",
		"active",
		"assignment",
		"weekly",
		"metaphor",
	]);
	check(
		"itwasadarkandstormynighttherainfellintorrentsexceptatoccasionalintervalswhenitwascheckedbyaviolentgustofwindwhichsweptupthestreetsforitisinlondonthatoursceneliesrattlingalongthehousetopsandfiercelyagitatingthescantyflameofthelampsthatstruggledagainstthedarkness",
		&[
			"it",
			"was",
			"a",
			"dark",
			"and",
			"stormy",
			"night",
			"the",
			"rain",
			"fell",
			"in",
			"torrents",
			"except",
			"at",
			"occasional",
			"intervals",
			"when",
			"it",
			"was",
			"checked",
			"by",
			"a",
			"violent",
			"gust",
			"of",
			"wind",
			"which",
			"swept",
			"up",
			"the",
			"streets",
			"for",
			"it",
			"is",
			"in",
			"london",
			"that",
			"our",
			"scene",
			"lies",
			"rattling",
			"along",
			"the",
			"housetops",
			"and",
			"fiercely",
			"agitating",
			"the",
			"scanty",
			"flame",
			"of",
			"the",
			"lamps",
			"that",
			"struggled",
			"against",
			"the",
			"darkness",
		],
	);
	check(
		"thereismassesoftextinformationofpeoplescommentswhichisparsedfromhtmlbuttherearenodelimitedcharactersinthemforexamplethumbgreenappleactiveassignmentweeklymetaphorapparentlytherearethumbgreenappleetcinthestringialsohavealargedictionarytoquerywhetherthewordisreasonablesowhatsthefastestwayofextractionthxalot",
		&[
			"there",
			"is",
			"masses",
			"of",
			"text",
			"information",
			"of",
			"peoples",
			"comments",
			"which",
			"is",
			"parsed",
			"from",
			"html",
			"but",
			"there",
			"are",
			"no",
			"delimited",
			"characters",
			"in",
			"them",
			"for",
			"example",
			"thumb",
			"green",
			"apple",
			"active",
			"assignment",
			"weekly",
			"metaphor",
			"apparently",
			"there",
			"are",
			"thumb",
			"green",
			"apple",
			"etc",
			"in",
			"the",
			"string",
			"i",
			"also",
			"have",
			"a",
			"large",
			"dictionary",
			"to",
			"query",
			"whether",
			"the",
			"word",
			"is",
			"reasonable",
			"so",
			"what",
			"s",
			"the",
			"fastest",
			"way",
			"of",
			"extraction",
			"thx",
			"a",
			"lot",
		],
	);
}

It would also make sense to add a function that decompresses the data ahead of time, so the first segmentation request doesn't have to wait for it.
E.g., this function could be called in main() and would simply access the DATA lazy static to force its evaluation.
(Or do the same with once_cell.)
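
A rough sketch of what that could look like with once_cell (hypothetical: it assumes the lazy DATA is hoisted to module scope, plus a flate2 version where GzDecoder::new is infallible):

use once_cell::sync::Lazy;

static DATA: Lazy<String> = Lazy::new(|| {
	// Same decompression as in split() above, done once on first access.
	read_string(&mut GzDecoder::new(&include_bytes!("../wordninja_words.txt.gz")[..])).unwrap()
});

/// Call this once at startup (e.g. at the top of main) so the first
/// segmentation request doesn't pay the decompression cost.
pub fn preload() {
	Lazy::force(&DATA);
}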

@kmod-midori (Owner)

I tried zstd, which cut the dictionary size in half, but the code added to decompress it largely cancels out the gain (the executable only went from 2.8 MB to 2.7 MB). I don't think it's worth the effort.
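
For reference, a decompression step with the zstd crate could look roughly like this (a sketch, not the code that was actually tried; the .zst asset name is assumed):

fn load_words() -> String {
	// Embedded zstd-compressed wordlist (hypothetical asset name).
	let compressed: &[u8] = include_bytes!("../wordninja_words.txt.zst");
	let bytes = zstd::decode_all(compressed).expect("embedded wordlist is valid zstd");
	String::from_utf8(bytes).expect("wordlist is valid UTF-8")
}

Whether this pays off depends on how the size of the pulled-in zstd decoder compares to the bytes saved on the dictionary, which is the trade-off described above.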
