
Improves distribution of dummy data #2326

Open: wants to merge 10 commits into base: main
Conversation

DRMacIver (Contributor) commented:
This improves the distribution of dummy data through the strategy of oversampling: we produce a larger population of patients than requested, and then sample it down to a subset according to some weighted distribution of values.

This weighting scheme is calculated in two ways:

  1. We try to force the distribution to be "more uniform" than whatever the basic dummy data generation produces.
  2. We allow the user to provide an arbitrary weighting function to deviate from that.

It's also a good place to insert heuristics we might want to add (e.g. we could add default age and sex distributions, or choose what proportion of nullable columns should be null). This PR doesn't do any of that yet.

It does come with the cost of making dummy data generation slower. I think this is an acceptable tradeoff, and it will be improved by anything that speeds up dummy data generation (in particular, future work on satisfying constraints more often).
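The oversample-then-downsample scheme can be sketched roughly like this. This is a minimal illustration, not the actual ehrQL implementation; the function and parameter names (`generate_patient`, `size`, `oversample`, `weight`) are hypothetical:

```python
import random

def oversampled_population(generate_patient, size, oversample=4, weight=None):
    """Sketch (not the real ehrQL code): generate an oversampled pool of
    dummy patients, then keep a weighted sample of the requested size."""
    # Generate a larger candidate pool than requested.
    pool = [generate_patient() for _ in range(size * oversample)]
    # Default to uniform weights when no weighting function is supplied.
    weights = [1.0] * len(pool) if weight is None else [float(weight(p)) for p in pool]
    # Weighted sampling without replacement (Efraimidis-Spirakis): key each
    # candidate by random() ** (1 / weight) and keep the largest keys, so a
    # candidate's chance of being kept grows with its weight.
    keys = [random.random() ** (1.0 / max(w, 1e-9)) for w in weights]
    ranked = sorted(range(len(pool)), key=lambda i: keys[i], reverse=True)
    return [pool[i] for i in ranked[:size]]
```

With `oversample=1` the pool equals the requested size and the weighting has no effect; larger values give the weighting more candidates to choose from, at the cost of generating more data.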


cloudflare-workers-and-pages bot commented Dec 19, 2024

Deploying databuilder-docs with Cloudflare Pages

Latest commit: dc68b87
Status: ✅  Deploy successful!
Preview URL: https://a36ced6f.databuilder.pages.dev
Branch Preview URL: https://drmaciver-field-weightings.databuilder.pages.dev


Dummy data generation will generate a larger population and then sample from it
to improve the distribution of patients. This parameter controls how much larger.
Lower values will be faster to generate, while larger values will get closer to
the target distribution.
Contributor:
I think it'd be helpful to users to give the default and an idea of what small and large values mean, e.g. 1 means no oversampling, 2 means oversampling by up to 2x the specified population size.

Defines a "weight" expression that lets you control the distribution of patients.
Ideally a patient row will be generated with probability proportionate to its weight,
although this ideal will be imperfectly realised in practice. The higher the value of
``oversample``, the closer this ideal will come to being realised.
Contributor:
An example as we have for additional_population_constraints would be helpful. Will this always be a case()?
