
Update redis.js so that revisions don't have to begin at 0 #133

Open
TyGuy wants to merge 2 commits into master

Conversation

@TyGuy (Contributor) commented Jun 24, 2018

Explanation here: #132

@adrai (Contributor) commented Jun 24, 2018

Haven’t looked deeply into the changes yet...
Is this backwards compatible? That is, if someone started using redis before updating to a future version of eventstore, will this still work?
Can you add tests?
Is this really only relevant to redis?

@TyGuy (Contributor, Author) commented Jun 24, 2018

  • EDIT: This is NOT backwards compatible
  • Yes, I will add tests (it will probably take a couple of days though)
  • Yes, this is really only relevant to redis
    • (EDIT: To be more specific: this change is only really relevant to redis. I'm not sure what the general assumption behind eventstore is with respect to revisions beginning at 0 (if there is any at all), or how other store implementations currently handle this situation.)

@TyGuy (Contributor, Author) commented Jun 25, 2018

I started adding tests last night, but I also realized that this is NOT backwards compatible (at least not without some kind of migration script). Apologies for the misleading comment above (edited now); I made the original changes a couple of months ago, so I wasn't very "immersed" in the code when I wrote that comment.

Basically, the previous version (meaning the current version in master) creates keys like this, and infers the streamRevision from the sorted index:

// In this version, 121 is the id of the first event. It is assumed to be revision 0 because of its index.
const prevKeys = [
  "eventstore:events:1529910910749:0:_general:myAgg:idWithAgg:121",
  "eventstore:events:1529910910749:0:_general:myAgg:idWithAgg:122",
  "eventstore:events:1529910910749:0:_general:myAgg:idWithAgg:123"
]

store.getEventsByRevision({ aggregate: 'myAgg', aggregateId: 'idWithAgg' }, 1, 3, (err, events) => {
  if (err) { /* blah */ }
  
  // events here will be the events 122 and 123, because it returns events
  // at the indices (and thus assumed revisions) 1 and 2.
})

When the list of events for aggregate idWithAgg gets very long, we might want to delete the oldest entry (or entries), assuming we've stored them somewhere else. At least, this is the use case we came across. If we delete the oldest event, we end up with this:

// So now, 122 is the id of the first event. It is assumed to be revision 0 because of its index,
// but its actual revision is 1.
const prevKeys = [
  "eventstore:events:1529910910749:0:_general:myAgg:idWithAgg:122",
  "eventstore:events:1529910910749:0:_general:myAgg:idWithAgg:123"
]

store.getEventsByRevision({ aggregate: 'myAgg', aggregateId: 'idWithAgg' }, 1, 3, (err, events) => {
  if (err) { /* blah */ }
  
  // events here will ONLY be event 123, because it returns events at the indices (and thus
  // assumed revisions) 1 and 2 [in this example index 2 doesn't exist, but if it did it would be offset].
  // Importantly, the first event returned here actually has revision 2, not the requested 1.
})

If we could explicitly add the revision to the redis key, then we could avoid this situation, which is what I did. The keys now look like this:

// note the revision number at the end of these keys
const newKeys = [
  "eventstore:events:1529910910749:0:_general:myAgg:idWithAgg:121:0",
  "eventstore:events:1529910910749:0:_general:myAgg:idWithAgg:122:1",
  "eventstore:events:1529910910749:0:_general:myAgg:idWithAgg:123:2"
]

So now, any request for getEventsByRevision can actually filter on the revision.
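
To illustrate what I mean, here is a minimal sketch of that filtering, not the exact code in the PR; the helper parseRevisionFromKey and the matcher string are made up for this example:

// Hypothetical sketch: keys now end in ":<id>:<revision>", so the revision can be
// parsed straight out of the key instead of being inferred from the sorted index.
function parseRevisionFromKey (key) {
  var parts = key.split(':');
  return parseInt(parts[parts.length - 1], 10);
}

store.client.keys('eventstore:events:*:*:_general:myAgg:idWithAgg:*', function (err, keys) {
  if (err) { /* handle error */ }

  var revMin = 1;
  var revMax = 3;

  // keep only keys whose explicit revision falls in [revMin, revMax)
  var matching = keys.filter(function (key) {
    var rev = parseRevisionFromKey(key);
    return rev >= revMin && rev < revMax;
  });

  // the events for these keys can then be fetched and returned in revision order
});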

But, as mentioned above, this is not backwards compatible.

So, I guess there are a couple of possible solutions.

One is to (1) write a migration script to add the revision to the existing keys, and (2) use branching logic that checks the format of the key and calls either the old or the new version of the getEventsByRevision function (a rough sketch of such a script is below).

Another is to bump a major version (and a hybrid solution is to do the first, and then remove the script and the old function call in the next major version).
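
For what it's worth, here is a minimal sketch of what such a migration script could look like. It is purely illustrative and not part of this PR: the key pattern, the per-aggregate grouping on parts of the key, and the assumption that the existing sorted order reflects the revision (i.e. nothing has been deleted yet) are all mine.

// Hypothetical migration sketch (not part of this PR): append the index-derived
// revision to every existing event key, per aggregate, assuming no events have
// been deleted yet (so sorted index == revision).
var redis = require('redis');
var client = redis.createClient();

client.keys('eventstore:events:*', function (err, keys) {
  if (err) { return console.error(err); }

  // group keys by context:aggregate:aggregateId (parts 4-6 of the key)
  var groups = {};
  keys.forEach(function (key) {
    var groupId = key.split(':').slice(4, 7).join(':');
    (groups[groupId] = groups[groupId] || []).push(key);
  });

  Object.keys(groups).forEach(function (groupId) {
    groups[groupId].sort(); // the same ordering the old implementation relied on

    groups[groupId].forEach(function (key, index) {
      // the sorted index is the revision the old code would have assumed
      client.rename(key, key + ':' + index, function (renameErr) {
        if (renameErr) { console.error('could not migrate', key, renameErr); }
      });
    });
  });
});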

I am willing to implement these changes, but @adrai, I would like to know:

  • whether you think it's a change you want to incorporate, and
  • what your preferred strategy would be for moving forward.

@adrai (Contributor) commented Jun 25, 2018

major version + (if possible) reference a migration script that could be used when switching to the new major version

@nanov (Contributor) commented Jun 26, 2018

Your approach and solution sound interesting. Maybe write a new db driver that either combines the whole solution (i.e. redis backed by mongo) or alternatively covers just the redis part. That way there won't be backwards-compatibility problems, since the driver is new, and the whole solution could be packaged in a clean and transparent way into the library.

A Contributor left an inline review comment on this part of the diff:

var revMin = 1
var revMax = 2

store.client.keys(firstEventMatcher, (err, keys) => {

In order to support legacy node versions and avoid the use of transpilers, arrow functions should be avoided.
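
For reference, the same line written without the arrow function would look roughly like this (just a sketch of the style change; the callback body stays whatever it is in the PR):

store.client.keys(firstEventMatcher, function (err, keys) {
  // ...same callback body as in the PR, just as a function expression
  // so it runs on legacy node versions without a transpiler
});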
