Replies: 2 comments 5 replies
-
Hi! That's great 👍 LogScale was on my (quite long) todo list, so I'm happy that it's done now 😉

My suggestion would be to merge the pipeline with the already existing CrowdStrike processing pipeline project; the pipeline you've created is much more complete than the existing one. As you already suggested hosting it in SigmaHQ, I propose to fork your project into the organization. I can then also set up the PyPI release actions in it.

What do you think about renaming it to pySigma-backend-crowdstrike? I think this name is not only shorter but also more appropriate, e.g. because the CrowdStrike data model is not specific to LogScale: customers can also "buy" the logs via Falcon Data Replicator and ingest them into arbitrary platforms like Splunk and others. I also have the idea of implementing a backend for creating custom IOA rules from specific rule types like process creations, which would also be independent of LogScale.
-
Hi @thomaspatzke, all good suggestions! Hopefully this is what you had in mind: https://github.com/moullos/pySigma-backend-crowdstrike

It probably makes sense to replicate some of the functionality of the "falcon" pipeline in the "FDR" pipeline, but I'm not sure when I will get around to it. Let me know if this makes sense and I can open a pull request.
-
Hi everyone,
As CrowdStrike is gradually moving away from Splunk and now uses LogScale, I have created a pySigma backend and pipeline for LogScale and the CrowdStrike Falcon agent. The backend converts Sigma rules into the LogScale Query Language for telemetry collected through the CrowdStrike Falcon agent:
https://github.com/moullos/pySigma-backend-crowdstrikelogscale
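To give a rough idea of what such a backend/pipeline does, here is a conceptual sketch in plain Python. The field mapping and query rendering below are simplified assumptions for illustration only; they are not the actual pySigma-backend-crowdstrikelogscale implementation, and the real backend works on full Sigma rule objects via the pySigma API rather than flat dicts.

```python
# Conceptual sketch: a pipeline renames generic Sigma field names to the
# platform's telemetry field names, and a backend renders the detection
# as a query string. Field names and syntax here are illustrative
# assumptions, not the real backend's mapping.

# Hypothetical mapping from Sigma field names to CrowdStrike Falcon
# event field names.
FIELD_MAP = {
    "Image": "ImageFileName",
    "ParentImage": "ParentBaseFileName",
    "CommandLine": "CommandLine",
}

def to_logscale_query(detection: dict) -> str:
    """Render a flat field->value detection as a LogScale-style filter
    expression (whitespace-separated filters are implicitly ANDed)."""
    terms = []
    for field, value in detection.items():
        cs_field = FIELD_MAP.get(field, field)  # map, or pass through
        terms.append(f'{cs_field}="{value}"')
    return " ".join(terms)

print(to_logscale_query({"Image": "C:\\Windows\\System32\\cmd.exe",
                         "CommandLine": "whoami"}))
```

In the real project this translation is expressed as pySigma processing-pipeline transformations plus a backend, so it composes with other pipelines and handles modifiers, lists, and conditions rather than simple equality only.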
I haven't published to PyPI yet and ideally I would like this to be part of the SigmaHQ repo.
Any feedback is welcome!