
IOFS requires blackheaps models for realistic overhead; blackheap needs realistic overhead #6

Open
lquenti opened this issue Oct 17, 2022 · 1 comment


lquenti commented Oct 17, 2022

This is a pretty complex problem to solve elegantly.

The Problem

iofs is planned to (also) be used with blackheap. Blackheap uses a black-box methodology to build prediction models that classify I/O requests solely based on their access times. iofs supports loading the models created by blackheap via the --classificationfile parameter.

Of course, this assumes that the latencies blackheap measured initially are the same as those observed when running under iofs. This is not the case, since classifying each I/O request takes additional time. Thus, on the one hand, we need iofs to already have classifications loaded in order to measure the realistic overhead. On the other hand, we need to create those classifications on a mounted iofs. Thus, we have a circular dependency.

Why the Trivial Solution won't work

The most obvious solution would be to create a constant array of dummy classifications that gets evaluated against regardless of whether actual classifications are provided. Unfortunately, this is not possible, since we accept an arbitrary number of models. For example, the constantlinear model provided by blackheap creates twice as many models as the simpler linear model. See the blackheap docs for more.

Ideas

I have two ideas, both of which are suboptimal at best.

Idea 1: Provide multiple dummy models, let the user choose

Create dummy CSV files with the number of models that each model type used with blackheap would produce. The actual parameters of those models are obviously irrelevant; only the number of models (i.e. the number of iterations needed) has to be correct.

This should be fine, although not very user friendly.
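Idea 1 could be implemented with a small generator script. This is only a sketch: the CSV column layout and the per-type model counts below are assumptions for illustration, not blackheap's actual format (the issue only states that constantlinear yields twice as many models as linear; see the blackheap docs for the real layout).

```python
import csv
import io

# Hypothetical model counts per blackheap model type. Only the 2:1 ratio
# between constantlinear and linear is taken from the issue; the absolute
# numbers are made up.
MODEL_COUNTS = {"linear": 4, "constantlinear": 8}

def dummy_classifications(model_type: str) -> str:
    """Return a CSV of placeholder models.

    The parameter values are irrelevant; only the number of rows (models)
    has to match what the given model type would produce.
    """
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["model_id", "slope", "intercept"])  # assumed columns
    for i in range(MODEL_COUNTS[model_type]):
        writer.writerow([i, 1.0, 0.0])  # dummy parameters
    return buf.getvalue()
```

A user would then write the output of `dummy_classifications(...)` to a file and pass it to iofs via --classificationfile for the bootstrap run.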

Idea 2: 2 runs, 3 mounts

Just run it twice:

  1. Mount iofs with no model
  2. Create a wrong model with blackheap
  3. Remount iofs with the wrong model
  4. Create a correct model with blackheap
  5. Remount iofs again with the correct model
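The five steps above can be sketched as a shell script. Apart from --classificationfile (taken from the issue), the mountpoint, file names, and command invocations are assumptions; the `run` wrapper only prints each step instead of executing it, so the sketch stays runnable without iofs or blackheap installed.

```shell
#!/bin/sh
# Dry-run sketch of the "2 runs, 3 mounts" workflow.
# All paths and flags except --classificationfile are hypothetical.
run() { echo "+ $*"; }

MNT=/mnt/iofs

run iofs "$MNT"                                  # 1. mount iofs with no model
run blackheap --output first.csv "$MNT"          # 2. create a "wrong" model
run fusermount -u "$MNT"
run iofs --classificationfile first.csv "$MNT"   # 3. remount with the wrong model
run blackheap --output final.csv "$MNT"          # 4. create the correct model
run fusermount -u "$MNT"
run iofs --classificationfile final.csv "$MNT"   # 5. remount with the correct model
```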

This is the most reliable and accurate approach, since the first model will be closer to reality than random data. But whether it is actually superior to the first idea is unknown. I don't think so, but I haven't tested it yet.

The obvious disadvantage is how long it takes, as we have to create two models.

The current state

  • The blackheap and iofs documentation note that this is an open issue
  • The advice is to use the dummy models contained in the repository
  • It is also mentioned that one can run it twice, pointing to this issue for further explanation

lquenti commented Mar 4, 2024

We do have example dummy models in the docs, and from my recent testing the overhead is more negligible than expected back then.

I will leave the issue open for now, but only for informational purposes.
