Speed up transforms by caching the transform class instances. #570

Merged (3 commits into master, May 9, 2016)

Conversation

manthey (Contributor) commented May 5, 2016

For simple transforms, calculate them with fewer object creations.
codecov-io commented May 5, 2016
Current coverage is 72.40%

Merging #570 into master will increase coverage by +1.68%

@@             master       #570   diff @@
==========================================
  Files            81         81          
  Lines          7057       7069    +12   
  Methods           0          0          
  Messages          0          0          
  Branches          0          0          
==========================================
+ Hits           4991       5118   +127   
+ Misses         2066       1951   -115   
  Partials          0          0          
  1. 2 files (not in diff) in src/d3 were modified: Hits +30, Misses -30
  2. 5 files (not in diff) in src were modified: Hits +16, Misses -16
  3. File src/transform.js was modified: Hits +45, Misses -45, Partials 0

Powered by Codecov. Last updated by 445558b...2612ff7

Additional commits:

Removed code that is not reachable.

Changed errors to throw new Error() rather than just strings.
Excerpt from the diff (src/transform.js):

* be rapidly switching between a large number of transforms, this is adequate
* simple behavior. */
var maxTransformCacheSize = 10;

var transform = function (options) {
A reviewer (Member) commented on this excerpt:

@manthey does it make sense to have a generic cache object?

manthey (Contributor, Author) replied:

Do you mean a generic cache for things besides transform? Or something else?

In general, we use only two or four transforms, so caching those in a module-level object is fine. Invalidating the cache when it reaches a certain size (rather than doing it via LRU) is quick and simple, and, since we won't generally have lots of transforms active at one time, it should be performant, too.
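A minimal sketch of that strategy, assuming a helper named getCachedTransform and a tab-separated key built from the two projections (both are illustrative assumptions, not the code merged here):

/* Hypothetical module-level cache with size-based invalidation. */
var maxTransformCacheSize = 10;
var transformCache = {};
var transformCacheSize = 0;

function getCachedTransform(sourceProjection, targetProjection, createTransform) {
  var key = sourceProjection + '\t' + targetProjection;
  if (transformCache[key] === undefined) {
    if (transformCacheSize >= maxTransformCacheSize) {
      /* Drop the whole cache rather than tracking LRU order; with only a
       * few transforms active at once this rarely triggers. */
      transformCache = {};
      transformCacheSize = 0;
    }
    transformCache[key] = createTransform(sourceProjection, targetProjection);
    transformCacheSize += 1;
  }
  return transformCache[key];
}

Repeated lookups with the same source and target projections then reuse one transform instance instead of constructing a new one on every call.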

The reviewer (Member) replied, quoting "Do you mean a generic cache for things besides transform":

Yes, a generic cache object, since we have been caching multiple things now and I feel like we could use something like that (not for this PR, though).
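A generic cache of that sort might look something like the sketch below (purely illustrative of the suggestion; makeCache and createTransform are hypothetical names, and nothing like this was added in this pull request):

/* Hypothetical reusable size-capped cache; not part of this change. */
function makeCache(maxSize) {
  var entries = {};
  var size = 0;
  return {
    get: function (key, create) {
      if (entries[key] === undefined) {
        if (size >= maxSize) {
          /* Invalidate everything once the limit is reached, mirroring the
           * transform cache's size-based reset. */
          entries = {};
          size = 0;
        }
        entries[key] = create();
        size += 1;
      }
      return entries[key];
    }
  };
}

/* Usage sketch:
 *   var transformCache = makeCache(10);
 *   var t = transformCache.get(key, function () { return createTransform(key); });
 */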

jbeezley (Contributor) commented May 9, 2016

LGTM

manthey merged commit 6dcb286 into master on May 9, 2016.
manthey deleted the faster-transforms branch on May 9, 2016 at 18:51.