
Implement ML model to identify performance issues in running Indy services, based on aggregated logs + metrics in ElasticSearch #1657

Open
geored opened this issue Jul 1, 2020 · 0 comments

Implement an ML model to identify performance issues in running Indy services, based on the aggregated logs and metrics in ElasticSearch. Use the model to address any or all of the following (a rough sketch of one possible starting point follows the list):

  • Predict performance during performance-testing executions
  • Identify performance bottlenecks and recommend areas to optimize
  • Trigger automated investigations when Indy SLOs are breached
  • Provide a Jupyter notebook containing the initial investigation results, linked to the datasets as appropriate
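
As a non-authoritative sketch of where the notebook work could start, the snippet below pulls recent per-request durations out of ElasticSearch and flags latency outliers with an unsupervised model. The index pattern (`indy-metrics-*`) and field names (`duration_ms`, `@timestamp`, `span.operation`) are assumptions for illustration only; the real schema will come from our aggregated metric events, and the query uses the 7.x-style Python client call.

```python
# Hypothetical sketch: flag anomalous request latencies with an unsupervised model.
# Index pattern and field names are placeholders, not the real Indy metric schema.
import pandas as pd
from elasticsearch import Elasticsearch
from sklearn.ensemble import IsolationForest

es = Elasticsearch("http://localhost:9200")

# Pull a recent window of per-request span durations from the aggregated metric index.
resp = es.search(
    index="indy-metrics-*",
    body={
        "size": 10_000,
        "_source": ["span.operation", "duration_ms", "@timestamp"],
        "query": {"range": {"@timestamp": {"gte": "now-1h"}}},
    },
)
rows = [hit["_source"] for hit in resp["hits"]["hits"]]
df = pd.json_normalize(rows)

# Unsupervised anomaly detection: requests whose latency deviates from the recent
# baseline become candidates for automated investigation / SLO-breach follow-up.
model = IsolationForest(contamination=0.01, random_state=0)
df["anomaly"] = model.fit_predict(df[["duration_ms"]])
suspects = df[df["anomaly"] == -1]
print(suspects.sort_values("duration_ms", ascending=False).head(20))
```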

We will provide aggregated log events and/or metric events. Metric events will contain opentracing.io-compatible spans, with IDs that tie the events together in context. Aggregated log events also carry some contextual information, but there is substantial overlap with the metric data. Span data in our metric events will contain measurements from subsystems (and threaded-off sub-processes) for each request. We may also be able to provide aggregated metrics (think Prometheus, not OpenTracing) for system-level metrics like memory usage.
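
To illustrate how the shared span IDs let the two event streams be stitched together, here is a minimal sketch that joins metric spans to log events on a trace ID and rolls subsystem timings up into one row per request, a natural feature matrix for a model. All field names (`trace_id`, `span_id`, `subsystem`, `duration_ms`) and the sample values are hypothetical.

```python
# Hypothetical sketch: correlate log events and metric spans by trace ID so that
# per-subsystem measurements can be attributed to a single request.
import pandas as pd

spans = pd.DataFrame([
    {"trace_id": "t1", "span_id": "s1", "subsystem": "content-index",  "duration_ms": 12.4},
    {"trace_id": "t1", "span_id": "s2", "subsystem": "maven-metadata", "duration_ms": 87.9},
    {"trace_id": "t2", "span_id": "s3", "subsystem": "content-index",  "duration_ms": 640.2},
])

logs = pd.DataFrame([
    {"trace_id": "t1", "level": "INFO",  "message": "retrieved metadata"},
    {"trace_id": "t2", "level": "ERROR", "message": "timeout talking to storage"},
])

# Join on the shared trace context, then roll up per-request subsystem timings.
joined = spans.merge(logs, on="trace_id", how="left")
per_request = (
    joined.groupby(["trace_id", "subsystem"])["duration_ms"]
    .sum()
    .unstack(fill_value=0.0)
)
print(per_request)  # one row per request, one column per subsystem
```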
