Deliverable for May Timeframe #35
Priorities for Code:
Choose 3 sample measures and show a thread all the way through, to demonstrate what the end product will look like.
The following captures an email from Alain to the working group on this topic:

"After some discussion during our last meeting, followed by internal discussions within CSA, I'd like to confirm that our next goal is to produce new material in time for RSA 2022. During that event, CSA can advertise our work to a large audience. This release should include a machine-readable version of the metrics catalog in YAML format, available on a GitHub repository.

The second objective of creating YAML metrics is to facilitate the development of tools that automate security compliance and testing. A YAML file can easily be referenced, versioned, remixed, or imported into a security application. In this context, CSA has started migrating some of its standards to YAML, notably the CCM v4. At this stage, the metrics catalog will likely be the first "public" release of a YAML artifact by CSA. The way the community uses and interacts with the YAML metrics will inform CSA's approach to future releases of machine-readable standards.

Recently, some working group members have been building a tool demonstrating the creation of measurements derived from the metrics catalog. This tool will use the metrics YAML file as an input, illustrating the value of a machine-readable document.

The whitepaper that accompanies the GitHub release should notably provide the following content: an explanation of the YAML format.

Alain"
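As a rough illustration of what a machine-readable catalog entry could look like, here is a hypothetical sketch. The field names (`id`, `title`, `related_controls`, `expression`, `sla`) and the control reference are illustrative assumptions, not the catalog's actual schema:

```yaml
# Hypothetical metric entry -- field names are illustrative, not CSA's schema.
- id: SAMPLE-01
  title: Percentage of systems patched within SLA
  related_controls:
    - TVM-03            # e.g. a CCM v4 control ID (illustrative)
  expression: patched_in_sla / total_systems
  sla: ">= 0.95"
```

An entry shaped like this could be referenced by ID, versioned in the GitHub repository, and loaded directly by a measurement tool.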
Answer:
We are trying to empower our users to take the next step forward by allowing machines to review metrics, freeing humans to mature their management and engineering. With this basis, here are some logical next steps.
We have a dependency on the data flow and system graphs before we can aggregate metrics. The real value is probably that we can always drill down into the details of the aggregation.
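The aggregation-with-drill-down idea above could be sketched like this. This is a minimal illustration, assuming measurement records of the form (metric ID, system, value); the metric IDs and system names are made up, not taken from the actual catalog:

```python
from collections import defaultdict

# Hypothetical measurement records: (metric_id, system, value).
# IDs and system names are illustrative only.
measurements = [
    ("patching_cadence", "web-frontend", 12),
    ("patching_cadence", "billing-api", 30),
    ("mfa_coverage", "web-frontend", 0.95),
    ("mfa_coverage", "billing-api", 0.80),
]

def aggregate(records):
    """Roll measurements up per metric while keeping the raw
    per-system details, so a reader can always drill down."""
    rollup = defaultdict(lambda: {"values": [], "details": []})
    for metric_id, system, value in records:
        rollup[metric_id]["values"].append(value)
        rollup[metric_id]["details"].append({"system": system, "value": value})
    return {
        m: {"mean": sum(d["values"]) / len(d["values"]), "details": d["details"]}
        for m, d in rollup.items()
    }

summary = aggregate(measurements)
print(summary["patching_cadence"]["mean"])    # 21.0 -- the aggregate
print(summary["mfa_coverage"]["details"])     # the drill-down detail
```

The point is only that the aggregate never discards its inputs: every rolled-up number carries the per-system records it was computed from.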
Max to do the initial outline as markdown, and then we assign work from there (as distinct issues). Just start within Google Docs so that we can collaborate faster. Move to a notebook or whatever once we have a solid basis (don't want to slow things down with PRs, new-to-us systems, etc.).
If we get everything into the Google Doc, I can copy-and-paste it over to the markdown/ipynb.
A rough outline in Google Docs is started. This link should give access (permissions are limited to active members): https://docs.google.com/document/d/1GUQzvNlp9rDRMqLnm4qitlxziNplQ1i7h4kVOpFMY2s/edit?usp=sharing
3/2 meeting note: