Initial TODO #1

Open
4 of 13 tasks
dormando opened this issue Feb 15, 2023 · 7 comments

Comments


dormando commented Feb 15, 2023

To give followers an idea of what's planned-ish, in rough priority order.

  • request and response tokenizers, splitters, convenience functions
  • more command generators (with/without numeric, vary key length, missing commands, etc)
  • mcs.out(message) for printing data instead of lua's print
  • add a listener to filter the output from mcs.out()
  • automatically add meta Opaque tags on request generation, used with match below
  • local status, elapsed = mcs.match(request, response): does the response match the request and how long did it take (see the sketch after this list)
  • item value validation (using a hash of the key for the value data)
  • rate pace randomization
  • reconnect pace randomization
  • tick the pacer on every write call instead of every return (i.e. writing two requests in one loop counts twice toward the limit)
  • PRNG randomizer
  • random length key string (similar to mctester)
  • ramp up new connections over time
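
For a rough idea of how the match/opaque/validation items above could look from the lua side, here's a hypothetical sketch. Only mcs.match() is named in the list; the request/response helpers and their signatures are made-up placeholders, not the real API.

```lua
-- hypothetical workload function: mcs.mg()/mcs.write()/mcs.read() are
-- placeholder names; only mcs.match() comes from the TODO list above.
function metaget_check(key)
    -- planned: an Opaque tag would be attached automatically at generation time.
    local req = mcs.mg(key, "v")
    mcs.write(req)
    local res = mcs.read()
    -- planned: did the response match the request, and how long did it take?
    local status, elapsed = mcs.match(req, res)
    if not status then
        print("mismatched response for key: " .. key)
    end
end
```
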
dormando commented

planning on doing maybe half of these this week. they're mostly simple.

dormando commented Feb 15, 2023

My first hack at this list will be to replace the three supplementary mc-crusher utilities:

  • bench-warmer
  • bench-sample
  • latency-sampler

... via builtin code and lua.

after that, the rest is mostly adding features to support more workloads.

bench-warmer is now replaced by letting the benchmark run multiple times internally: you can create pre-warming functions, progressive benchmarks, etc. (rough sketch below).
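
As a sketch of what that can look like (the entry points and option names here are illustrative stand-ins; only the "run the benchmark multiple times internally" structure is the point):

```lua
-- hypothetical config(): a short warming pass followed by the timed benchmark.
-- mcs.pool()/mcs.add()/mcs.shredder() and their options are stand-in names.
function config()
    local pool = mcs.pool({ host = "127.0.0.1", port = 11211 })

    -- pass 1: pre-warm the cache with sets only, bounded by a request limit.
    mcs.add(pool, { func = "warm", clients = 8, limit = 100000 })
    mcs.shredder(10) -- run this pass for up to 10 seconds

    -- pass 2: the actual benchmark against the warmed cache.
    mcs.add(pool, { func = "bench", clients = 64, rate_limit = 50000 })
    mcs.shredder(60)
end
```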

bench-sample is now more or less replaced. see conf/example.lua

dormando commented

kinda want the out + filter routines sooner rather than later. they would allow self-feedback for when to stop, and allow loops that self-adjust settings to ramp up toward failure.
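
Purely illustrative: with mcs.out() plus a filter on its output, a config loop could look something like this. None of these names exist yet, and failure_detected() is a placeholder for whatever the filter ends up exposing.

```lua
-- illustrative ramp loop: double the request rate each pass until a filter
-- watching mcs.out() output reports the server has fallen over, then stop.
function config()
    local pool = mcs.pool({ host = "127.0.0.1", port = 11211 })
    local rate = 10000
    while true do
        mcs.add(pool, { func = "bench", clients = 32, rate_limit = rate })
        mcs.shredder(30)
        -- hypothetical: a filter registered on mcs.out() flips this flag when
        -- error/timeout counts cross a threshold.
        if failure_detected() then
            print("failure point reached near rate: " .. rate)
            break
        end
        rate = rate * 2
    end
end
```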

dormando commented May 8, 2023

Added a bunch of convenience functions that weren't part of the original list:

  • --help
  • --arg for passing arguments to lua via command line
  • allow passing arguments to the functions (rough sketch after this list)
  • clean up the abort situation a bit
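
Rough sketch of the argument plumbing (the table layout and the way --arg values show up in config() are guesses, not the documented interface):

```lua
-- guessed shape: config() receives a table built from --arg values on the
-- command line, and forwards per-workload arguments to the workload function.
function config(a)
    a = a or {}
    local clients = tonumber(a.clients) or 32
    local prefix = a.prefix or "perf"
    local pool = mcs.pool({ host = "127.0.0.1", port = 11211 })
    -- hypothetical third argument: a table handed to the workload function.
    mcs.add(pool, { func = "bench", clients = clients }, { prefix = prefix })
end

function bench(args)
    -- args.prefix arrives from the table passed via mcs.add() above.
    local key = args.prefix .. math.random(100000)
    mcs.set(key, 100) -- placeholder set helper
end
```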

next highest priorities are functions for manipulating responses and more builtin command generator functions.

dormando commented

added a few response handlers. taking a break to think through the meta side.

looks like I never used the meta code in mcmc as r->rlen isn't right. it's a little weird finding the token offset for the flag functions... typing it out, I think probably just "if type is META and there is a vlen, flags start at 2 instead of 1".
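
In pseudocode, that rule is roughly the following (res.type/res.vlen are illustrative field names, not the real parser's):

```lua
-- pseudocode for the flag-offset rule: meta responses that carry a value
-- ("VA <size> <flags>...") push the flag tokens over by one vs "HD <flags>...".
local function meta_flag_start(res)
    if res.type == "META" and res.vlen ~= nil then
        return 2 -- token 0 is the response code, token 1 is the value length
    end
    return 1 -- token 0 is the response code, flags follow immediately
end
```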

dormando commented

think we're able to get away with much simpler response parsing code than the proxy has.

believe the tasks up through value checking are still blocking... but I'll see how much I can push through in the next day or two.

dormando commented

Think this is enough for me to switch back over to the test writing phase again for a while. Should find more changes to make once I run into issues writing real workloads.

I will probably add mcs.out() this week. The point is to centralize output streams so a function can check for pass/fail status and stop the test with information.
