Production Test & Sandbox Data/Requests #23
-
Historically at SPS Commerce, customer configuration and connection testing for onboarding and setting up new customers was done in a separate environment. However, a separate environment and deployment are more of a stopgap that provides a quick win without modifying the platform's capabilities. In reality, over time this extra environment breaks down as a strategy:
To sum it up: as a point of clarification, the API Standards already indicate that "environment" names should not appear within the URL path (https://spscommerce.github.io/sps-api-standards/standards/url-structure.html#host). An argument could be made that this is no longer an environment, and that it would be appropriate to represent the URLs as such:
Thoughts?
I believe both the Rules Management API (@eggilbert) and the Inventory API (@shifr) have taken approaches to this scenario. If you are able to share your approaches and thoughts, that would be much appreciated.
Replies: 5 comments 16 replies
-
Our service was brand new and didn't have dependencies on any existing services in "preprod", so we were largely able to ignore most of the interoperability considerations mentioned above.
This is probably the closest thing to what we did end up doing. One of our design goals was to avoid the historical pattern of having multiple environmental databases in production. We didn't want to burden users with creating Rule artifacts, testing them, changing statuses, etc., and then doing it all over again in the "real" production environment. Instead, we built our system around having a single set of artifacts and managing the state of which things are currently active/live. We were effectively moving pointers around and recording a log of changes, instead of duplicating everything and leaving it to users to keep equivalent things in sync.

That said, our solution isn't a sandbox-vs-production segmentation of data. It's more a "currently active" vs. "not active/everything else" distinction, if that makes sense. There's still just one set of data, but with some lifecycle statuses. Users are still able to test changes to Rules because of other product features that allow that to happen in a safe way via optional request body parameters. That isn't switching between data Segment A and data Segment B (i.e. sandbox vs. production); instead it enables a sort of testing mode where we return non-active results as instructed, to simulate what would happen should they be made active, before any changes are actually performed or persisted.

I'm not sure this is necessarily a general pattern that would work for everything, but it made sense for our service and our other design goals of having (mostly) immutable artifacts.
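A minimal sketch of that "one data set, lifecycle statuses, opt-in simulation" idea. All names here (`Rule`, `status` values, the `include_non_active` parameter) are hypothetical illustrations, not the actual Rules Management API:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Rule:
    """One rule artifact; there is only one copy, with a lifecycle status."""
    id: str
    status: str  # e.g. "draft", "active", "retired" -- hypothetical values
    body: dict = field(default_factory=dict)


def evaluate_rules(rules: List[Rule], include_non_active: bool = False) -> List[Rule]:
    """Return the rules that would apply to a request.

    By default only "active" rules are used. Setting include_non_active
    (akin to an optional request body parameter) simulates what would
    happen if draft rules were promoted, without persisting any change.
    """
    if include_non_active:
        return [r for r in rules if r.status in ("active", "draft")]
    return [r for r in rules if r.status == "active"]
```

The point of the sketch is that "testing" is a read-time flag over one shared data set, not a second copy of the data that users must keep in sync.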
-
Nice @travisgosselin, I think this sums things up really well. A couple of other thoughts here to build on this conversation.

Regarding @eggilbert's response: the Rules Mgmt service he's discussing is really built to help other services handle this problem. Thinking of the data (rules) as active or not, that distinction exists to support other services that want to know what their configuration should be depending on whether the request is production (active) or "preprod" (not active).

Taking a bit of a turn: if we think about AWS's services, those are all production (from AWS's perspective), even if a customer is using an instance of some service for testing/integration/staging/whatever. But API Gateway has an interesting feature where a "stage" can be set. It's somewhat open-ended how a stage would be used, but stages let you use the production gateway instance to serve things through a development lifecycle, and it's flexible in the sense that you can decide what your stage is named. Tying that back to the Rules Mgmt use case, it lets you create a temporary, named "override" of rules. So a service using Rules Mgmt to get its config can also give it a named override in order to act as if the config is in production. You could also think of these as test scenarios.

So, when we talk about "preprod" as an environment, there's also a limitation that there is only one state/override/test-scenario. That's probably okay sometimes, but there will be times when you need more than one active test scenario at a time. Not to mention that once something is in production, there's a chance that preprod could become stale. Thinking in terms of "preprod" as a use case, maybe it's really that preprod is a way of naming multiple, temporary configs for the test scenarios we want to run.

So, bringing it back to the API standards: I wonder if, rather than saying…
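The named-override idea above can be sketched as a config lookup with optional, coexisting override names (akin to API Gateway stages). This is an illustration under assumed names (`ConfigStore`, `set_override`, etc.), not the real Rules Mgmt interface:

```python
class ConfigStore:
    """Sketch: one active config per org, plus named temporary overrides.

    Unlike a single shared "preprod" state, any number of named
    overrides (test scenarios) can coexist at once.
    """

    def __init__(self):
        self._active = {}      # org_id -> config
        self._overrides = {}   # (org_id, override_name) -> config

    def set_active(self, org_id, config):
        self._active[org_id] = config

    def set_override(self, org_id, name, config):
        self._overrides[(org_id, name)] = config

    def get(self, org_id, override=None):
        # A named override wins when present; otherwise fall back to the
        # active ("production") config, so callers behave as if the
        # override were live without anything being promoted.
        if override is not None and (org_id, override) in self._overrides:
            return self._overrides[(org_id, override)]
        return self._active.get(org_id)
```

A caller that passes no override always gets production behavior, so stale or abandoned scenarios cannot leak into normal traffic.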
-
Another thing regarding how we do this.
-
An important feature of a sandbox is the box: isolation is critical. It's true of any app, but especially of a microservice, workflow-based application like Fulfillment. If application code has conditionals for test data and is responsible for propagating that flag through the entire workflow, there are countless points of failure that are easy to miss or invert, with network-wide consequences that could be especially challenging to remediate. Additionally, code that has conditionals for testing is not actually exercising the code paths that will be used with production data, which reduces the value of the test, sometimes to zero.

While I prefer a separate environment, because then it is simply not possible to cross the boundary between the sandbox and production, if we are not willing to go that far then the next level of isolation would be the Organization. Rather than putting the burden on applications to have any awareness of whether data or a transaction is "test data", we can instead issue customers "Sandbox" organizations and provide tooling to easily create or "refresh" the sandbox org from another existing org. While a sandbox org could have some indicator that it is a sandbox (for billing or metrics, for example), services involved in the data flow would have no reason to be aware of it; the data would be routed and processed like any other organization's.

If we go a step further and create "well-known" conventions for test profiles (presumably requiring them for v4 retailers), we could have organization creation/reset wire up trading partner relationships automatically. And if we auto-provision SFTP, AS2, or ICA accounts to mirror the origin organization, we could potentially reach a point where customers can self-service their sandboxes.
-
Header based solution following discussions on SPS Execution Context: #39
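As a rough illustration of the header-based direction, a service could resolve the execution context once at the edge and default to production for anything unrecognized. The header name `X-Execution-Context` and the value set are hypothetical here; the real contract is whatever the SPS Execution Context discussion (#39) settles on:

```python
def resolve_context(headers: dict) -> str:
    """Sketch: derive the execution context from a request header.

    Unknown or missing values deliberately fall back to "production",
    so a dropped header can never silently flip real traffic into a
    test mode. Header name and values are assumed, not the standard.
    """
    value = headers.get("X-Execution-Context", "").strip().lower()
    return value if value in ("production", "test") else "production"
```

Failing closed toward production keeps the isolation burden at the edge rather than scattering test-data conditionals through downstream services.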