
Dmp 3357 performance improvements #458

Merged
merged 48 commits into from
Jul 16, 2024
Conversation

mestebanez
Contributor

@mestebanez mestebanez commented Jul 4, 2024

JIRA link (if applicable)

https://tools.hmcts.net/jira/browse/DMP-3357

Change description

A PR to establish a baseline set of performance tests around the context registry. This prevents regressions
and allows us to optimise performance further in future.

This PR is quite big:

  • Establishes a common test library (in line with the darts api)
  • Introduces a functional test framework to send SOAP requests against the darts gateway
  • Adds functional tests for the basic operations of the context registry
  • Adds performance-based functional tests for the context registry. These use the JMeter API to run JMeter dynamically with a configurable set of performance test data.

Does this PR introduce a breaking change? (check one with "x")

[ ] Yes
[x] No
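The PR's actual tests drive JMeter in-process; purely as an illustration of the baseline-assertion idea described above, here is a minimal stdlib-only sketch. The 100-thread count echoes the figure mentioned in review, and the 2-second p95 budget and the `BaselineLoadSketch` name are assumptions, not taken from the PR.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/** Sketch: fire N concurrent requests, record each latency, and assert
 *  the 95th percentile stays under a baseline budget. */
public class BaselineLoadSketch {

    /** Nearest-rank p-th percentile of the given latencies (ms). */
    static long percentile(long[] latenciesMs, int p) {
        long[] sorted = latenciesMs.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.length);
        return sorted[Math.max(rank - 1, 0)];
    }

    /** Runs `threads` copies of `request` concurrently; returns each call's wall-clock time in ms. */
    static long[] measure(int threads, Callable<Void> request) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<Long>> futures = new ArrayList<>();
        for (int i = 0; i < threads; i++) {
            futures.add(pool.submit(() -> {
                long start = System.nanoTime();
                request.call();
                return (System.nanoTime() - start) / 1_000_000;
            }));
        }
        long[] out = new long[threads];
        for (int i = 0; i < threads; i++) out[i] = futures.get(i).get();
        pool.shutdown();
        return out;
    }

    public static void main(String[] args) throws Exception {
        // Stand-in workload; the real tests issue a SOAP request to $TEST_URL instead.
        long[] latencies = measure(100, () -> { Thread.sleep(10); return null; });
        long p95 = percentile(latencies, 95);
        System.out.println("p95 = " + p95 + "ms");
        if (p95 > 2000) throw new AssertionError("p95 regression: " + p95 + "ms"); // assumed budget
    }
}
```

The real tests delegate the thread-group, ramp-up, and sampler configuration to JMeter rather than hand-rolling an executor, but the shape of the assertion is the same: a configurable load profile in, a latency distribution out, and a hard failure when the baseline is breached.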

Comment on lines +16 to +27
* Performance of the context registry is vital as it is used across all of our integration endpoints in order to
* authenticate requests as well as talk to the downstream darts api.
*
 * <p>This class utilises in memory jmeter to perform a baseline set of performance tests against the context registry
* of a single gateway deployment running on my local machine.
* The asserted timings can be tailored according to the expected functional test infrastructure we are running against.
*
* <p>These tests act as a good indicator as to whether the current context registry performance is in line with our baseline metrics.
* They also act as a nice early warning system as to any regressions in performance.
*
* <p>NOTE: This test does NOT act as a substitute for running performance tests within an official performance test environment.
*/


This is an interesting initiative and I like the idea of having automated tests to capture perf regressions. This kind of testing is fraught with challenges, so just some food for thought here.

Do these functional tests work similarly to darts-api, in the sense that the test code (that generates the requests and asserts the responses) is run as a standalone process on the Jenkins build node, and the application under test (that handles the requests) runs as a pod in the dev cluster?

Some of these tests are spinning up 100 threads, which could be a non-trivial load for the build server to generate. Normally these kinds of tests would be run from a dedicated platform, so that tests are consistent from run to run, whereas our build platform is presumably shared. So you might find the load profile you're putting against the application under test is quite variable, depending on how busy the build server is with other jobs. In turn you may get variable results, which would compromise the value of this as a regression test.

Might also be worth running past platops, as this may be quite resource hungry.

Contributor Author


Thanks for the input Chris. I will run it past Ben.

Yes, the JMeter functional test client will run in the same manner as the darts api functional test client. The JMeter test will point to a URL $TEST_URL that represents a pod in the dev cluster.

I take your point regarding threads. The assumption is that Jenkins will exclusively lock an agent to a particular test run, so it will be able to consistently generate the necessary load without problems. I will check with Ben though. These rudimentary tests do run on my machine so "should" ;) run on a Jenkins agent without problems.

These tests are not a substitute for a dedicated perf test environment; they hopefully act as an early warning. We will have a complementary set of Gatling tests to catch more detailed perf problems.

Contributor Author


Ben has confirmed that agents are mutually exclusive to a pipeline job, so there should be no inconsistency due to agent resourcing.

5 participants