[V8˖] Aggregated cache cluster
As of the v8 there's a new feature called the "aggregated cache cluster". It allows you to aggregate multiple cache backends behind a single cache instance. To do so, Phpfastcache provides 4 strategies, all defined in \Phpfastcache\Cluster\AggregatorInterface:
- STRATEGY_FULL_REPLICATION
- STRATEGY_SEMI_REPLICATION
- STRATEGY_MASTER_SLAVE
- STRATEGY_RANDOM_REPLICATION
The 4 different strategies are explained below:
STRATEGY_FULL_REPLICATION
This is the default behaviour if no strategy is specified. The full replication mechanism replicates all CRUD operations on every provided backend. This is the mechanism you would use for a strict & reliable, but not really scalable, cache. This strategy is the only one that can perform WRITE operations on a READ request, as explained below:
- Read from the first backend returning a result (and synchronize the other backends if needed; the first value found is decisive, no failure allowed)
- Write on all (no failure allowed)
- Delete on all (no failure allowed)
- Conflict on multiple reads: Keep first found item (but sync the others)
- Cluster size: 2 minimum, unlimited
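A minimal sketch of building a full-replication cluster with the v8 cluster API (the driver names `Files` and `Redis` and the cluster name are illustrative, not requirements):

```php
<?php

use Phpfastcache\CacheManager;
use Phpfastcache\Cluster\AggregatorInterface;
use Phpfastcache\Cluster\ClusterAggregator;

// Aggregate two backends behind a single cluster instance
$clusterAggregator = new ClusterAggregator('my_cluster');
$clusterAggregator->aggregateDriver(CacheManager::getInstance('Files'));
$clusterAggregator->aggregateDriver(CacheManager::getInstance('Redis'));

// STRATEGY_FULL_REPLICATION is the default if no strategy is given
$cluster = $clusterAggregator->getCluster(AggregatorInterface::STRATEGY_FULL_REPLICATION);

// The cluster is then used like any other cache pool:
// reads hit the first backend and re-synchronize the others if needed,
// writes and deletes are replicated to every backend.
$item = $cluster->getItem('welcome');
$item->set('Hello world')->expiresAfter(300);
$cluster->save($item);
```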
STRATEGY_SEMI_REPLICATION
The semi replication mechanism replicates all WRITE operations on every provided backend. The mechanism is fault tolerant, allowing all but one backend to fail on CRUD operations. This is the mechanism you would use for a highly reliable & scalable cache that tolerates backend failures without crashing your app.
- Read from the first working backend (but do not synchronize; partial failure allowed)
- Write on all (with partial failure allowed)
- Delete on all (with partial failure allowed)
- Conflict on multiple reads: Keep first found item (without syncing the others)
- Cluster size: 2 minimum, unlimited
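A semi-replication cluster is selected the same way, by passing the strategy constant; a sketch assuming two backends (driver names illustrative):

```php
<?php

use Phpfastcache\CacheManager;
use Phpfastcache\Cluster\AggregatorInterface;
use Phpfastcache\Cluster\ClusterAggregator;

$clusterAggregator = new ClusterAggregator('semi_cluster');
$clusterAggregator->aggregateDriver(CacheManager::getInstance('Redis'));
$clusterAggregator->aggregateDriver(CacheManager::getInstance('Memcached'));

// Writes and deletes go to every backend, but the cluster tolerates
// failures as long as at least one backend still succeeds.
$cluster = $clusterAggregator->getCluster(AggregatorInterface::STRATEGY_SEMI_REPLICATION);

$item = $cluster->getItem('stats')->set([1, 2, 3])->expiresAfter(60);
$cluster->save($item);
```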
STRATEGY_MASTER_SLAVE
The master/slave replication mechanism uses the MASTER backend for every CRUD operation and falls back to the SLAVE only if an operation fails on the master. However, WRITE operations are made on both backends. If an operation also fails on the SLAVE backend, an exception is thrown.
- Read from master (but do not synchronize, with MASTER failure only allowed)
- Write on all (with MASTER failure only allowed)
- Delete on all (with MASTER failure only allowed)
- Conflict on multiple reads: No, master is exclusive source except if it fails
- Cluster size: 2 exactly: Master & Slave (an exception is thrown if more or fewer)
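A master/slave cluster requires exactly two aggregated backends; a sketch of the setup (the assumption here, to be verified against your version, is that the first aggregated backend acts as the master):

```php
<?php

use Phpfastcache\CacheManager;
use Phpfastcache\Cluster\AggregatorInterface;
use Phpfastcache\Cluster\ClusterAggregator;

$clusterAggregator = new ClusterAggregator('ms_cluster');
$clusterAggregator->aggregateDriver(CacheManager::getInstance('Redis')); // assumed master
$clusterAggregator->aggregateDriver(CacheManager::getInstance('Files')); // assumed slave

// Exactly two backends are required; an exception is thrown otherwise.
$cluster = $clusterAggregator->getCluster(AggregatorInterface::STRATEGY_MASTER_SLAVE);

// Reads come from the master; the slave is only used if the master fails.
$item = $cluster->getItem('session');
```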
STRATEGY_RANDOM_REPLICATION
Mostly used for development testing. Once built, the cluster picks up a random cache backend (among the provided backends) for the rest of the cluster's life. Please note that the chosen backend won't change between cluster operations. This means you have 1 chance out of n (the number of provided pools) to find an existing cache item, but also to write/delete a non-existing item.
- Read from chosen backend
- Write on chosen backend
- Delete on chosen backend
- Conflict on multiple reads: No
- Cluster size: 2 minimum, unlimited
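A random-replication cluster is built the same way; which backend serves the requests is decided once, when the cluster is built (driver names illustrative):

```php
<?php

use Phpfastcache\CacheManager;
use Phpfastcache\Cluster\AggregatorInterface;
use Phpfastcache\Cluster\ClusterAggregator;

$clusterAggregator = new ClusterAggregator('random_cluster');
$clusterAggregator->aggregateDriver(CacheManager::getInstance('Apcu'));
$clusterAggregator->aggregateDriver(CacheManager::getInstance('Files'));

// One backend is picked at random when the cluster is built and is then
// used for every subsequent operation during the cluster's life.
$cluster = $clusterAggregator->getCluster(AggregatorInterface::STRATEGY_RANDOM_REPLICATION);
```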
❓ Finally, if you need help, always check out the inevitable README.md