Starting from OpenCTI 6.2, when the want_assertions_signed and want_authn_response_signed SAML parameters are not present in the OpenCTI configuration, the default is set to true by the underlying library (passport-saml), whereas it was previously false by default. If you have issues after the upgrade, you can try setting both of them to false.
Here is an example of SAML configuration using environment variables:
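The sketch below assumes a docker-compose deployment; the variable names follow the platform's nested configuration convention, and the issuer, entry point, callback URL and certificate are placeholders to adapt to your identity provider:

```yaml
environment:
  - PROVIDERS__SAML__STRATEGY=SamlStrategy
  - "PROVIDERS__SAML__CONFIG__LABEL=Login with SAML"
  - PROVIDERS__SAML__CONFIG__ISSUER=opencti                                # placeholder issuer
  - PROVIDERS__SAML__CONFIG__ENTRY_POINT=https://idp.example.com/sso/saml  # placeholder IdP endpoint
  - PROVIDERS__SAML__CONFIG__SAML_CALLBACK_URL=https://opencti.example.com/auth/saml/callback
  - PROVIDERS__SAML__CONFIG__CERT=MIIBxxxx...                              # IdP certificate, single line, no headers
  # Explicitly restore the pre-6.2 behavior if signature checks cause login failures
  - PROVIDERS__SAML__CONFIG__WANT_ASSERTIONS_SIGNED=false
  - PROVIDERS__SAML__CONFIG__WANT_AUTHN_RESPONSE_SIGNED=false
```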
This section lists breaking changes introduced in OpenCTI, per version starting with the latest.
Please follow the migration guides if you need to upgrade your platform.
OpenCTI 6.2
Change to the observable "promote"
The API calls that promote an Observable to an Indicator now return the created Indicator instead of the original Observable.
GraphQL API
Mutation StixCyberObservableEditMutations.promote is now deprecated
New Mutation StixCyberObservableEditMutations.promoteToIndicator introduced
Client-Python API
Client-python method client.stix_cyber_observable.promote_to_indicator is now deprecated
New Client-python method client.stix_cyber_observable.promote_to_indicator_v2 introduced
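As an illustration of the GraphQL change above, the new mutation can be called as follows (a sketch: the id value is a placeholder and the selected fields are assumptions):

```graphql
mutation {
  stixCyberObservableEdit(id: "00000000-0000-0000-0000-000000000000") {
    promoteToIndicator {
      id          # id of the newly created Indicator
      standard_id
    }
  }
}
```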
Discontinued Support
Please note that the deprecated methods will be permanently removed in OpenCTI 6.5.
How to migrate
If you are using custom scripts that make use of the deprecated API methods, please update these scripts.
The changes are straightforward: if you are using the return value of the method, you should now expect the new Indicator instead of the Observable being promoted; adapt your code accordingly.
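For instance, a script based on the pycti client could be updated as follows (a minimal sketch; the URL, token and observable id are placeholders):

```python
from pycti import OpenCTIApiClient

# Placeholders: point to your own platform and token
client = OpenCTIApiClient("https://opencti.example.com", "YOUR_API_TOKEN")

observable_id = "00000000-0000-0000-0000-000000000000"  # placeholder internal id

# Deprecated (removed in OpenCTI 6.5): returned the promoted Observable
# promoted = client.stix_cyber_observable.promote_to_indicator(id=observable_id)

# Replacement: returns the newly created Indicator
indicator = client.stix_cyber_observable.promote_to_indicator_v2(id=observable_id)
print(indicator["id"])  # the Indicator's id, not the Observable's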
Change to SAML authentication
When the want_assertions_signed and want_authn_response_signed SAML parameters are not present in the OpenCTI configuration, the default is now set to true by the underlying library (passport-saml), whereas it was previously false by default.
How to migrate
If you have issues after upgrade, you can try with both parameters set to false.
OpenCTI 5.12
Major changes to the filtering API
OpenCTI 5.12 introduces a major rework of the filter engine with breaking changes to the model.
OpenCTI is an open source platform allowing organizations to manage their cyber threat intelligence knowledge and observables. It has been created in order to structure, store, organize and visualize technical and non-technical information about cyber threats.
Learn how to deploy and configure the platform as well as launch connectors to get the first data in OpenCTI.
Deploy now
User Guide
Understand how to use the platform, explore the knowledge, import and export information, create dashboards, etc.
Explore
Administration
Know how to administrate OpenCTI, create users and groups using RBAC / segregation, and set up retention policies and custom taxonomies.
Customize
Need more help?
We are doing our best to keep this documentation complete, accurate and up to date.
If you still have questions or you find something which is not sufficiently explained, join the Filigran Community on Slack.
"},{"location":"#latest-blog-posts","title":"Latest blog posts","text":"
All tutorials are published directly on the Medium blog, this section provides a comprehensive list of the most important ones.
Introducing decay rules implementation for Indicators in OpenCTI Mar 25, 2024
Cyber Threat Intelligence is made to be used. To be useful, it must be relevant and on time. It is why managing the lifecycle of Indicators of Compromise...
Read
Introducing advanced filtering possibilities in OpenCTI Feb 5, 2024
CTI databases are usually vast and made of complex, inter-dependent objects ingested from various sources. In this challenging context, cyber analysts need...
Read
Breaking change: evolution of the way Connector, Streams and Feeds import data in OpenCTI Jan 29, 2024
How Connectors, Feeds and Streams use Confidence level currently...
In OpenCTI, CSV Mappers allow you to parse CSV files into STIX 2.1 Objects. The mappers are created and configured by users with the Manage CSV mappers capability. They are then available to users who import CSV files, for instance inside a report or in the global import view.
The mapper contains representations of STIX 2.1 entities and relationships, in order for the parser to properly extract them. One mapper is dedicated to parsing a specific CSV file structure, and thus dedicated mappers should be created for every specific CSV structure you might need to ingest in the platform.
"},{"location":"administration/csv-mappers/#create-a-new-csv-mapper","title":"Create a new CSV Mapper","text":"
In the Data menu, select the Processing submenu, and in the right menu select CSV Mappers. You are presented with a list of all the mappers set in the platform. Note that you can delete or update any mapper from the context menu via the burger button beside each mapper.
Click on the button + in the bottom-right corner to add a new Mapper.
Enter a name for your mapper and some basic information about your CSV files:
The value separator used (defaults to the standard comma character)
The presence of a header on the first line
Header management
The parser will not extract any information from the CSV header, if any; it will just skip the first line during parsing.
Then, you need to create every representation, one per entity and relationship type represented in the CSV file. Click on the + button to add an empty representation in the list, and click on the chevron to expand the section and configure the representation.
Depending on the entity type, the form contains the fields that are either required (input outlined in red) or optional. For each field, set the corresponding column mapping (the letter-based index of the column in the CSV table, as presented in common spreadsheet tools).
References to other entities should be picked from the list of all the other representations already defined earlier in the mapper.
You can do the same for all the relationships between entities that might be defined in this particular CSV file structure.
Fields might have options besides the mandatory column index, to help extract relevant data:
Date values are expected in ISO 8601 format, but you can set your own format for the time parser
Multiple values can be extracted by specifying the separator used inside the cell (e.g. + or |)
Or to set default values in case some data is missing in the imported file.
The only parameter required to save a CSV Mapper is a name. The creation and refinement of its representations can be done iteratively.
Nonetheless, all CSV Mappers go through a quick validation that checks if all the representations have all their mandatory fields set. Only valid mappers can be run by the users on their CSV files.
Mapper validity is visible in the list of CSV Mappers as shown below.
"},{"location":"administration/csv-mappers/#test-your-csv-mapper","title":"Test your CSV mapper","text":"
In the creation or edition form, hit the button Test to open a dialog. Select a sample CSV file and hit the Test button.
The code block contains the raw result of the parsing attempt, in the form of a STIX 2.1 bundle in JSON format.
You can then check if the extracted values match the expected entities and relationships.
Partial test
The test conducted in this window relies only on the translation of CSV data according to the chosen representation in the mapper. It does not take into account checks for accurate entity formatting (e.g. IPv4) or specific entity configurations (e.g. mandatory "description" field on reports). Consequently, the entities visible in the test window may not be created during the actual import process.
Test with a small file
We strongly recommend limiting test files to 100 lines and 1MB. Otherwise, the browser may crash.
"},{"location":"administration/csv-mappers/#use-a-mapper-for-importing-a-csv-file","title":"Use a mapper for importing a CSV file","text":"
You can change the default configuration of the import CSV connector in your configuration file.
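For instance, in config/default.json (a sketch, assuming the import_csv_built_in_connector key used by recent versions; check the key names against your platform version):

```json
{
  "import_csv_built_in_connector": {
    "enabled": true,
    "validate_before_import": true
  }
}
```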
In the Data import section, or in the Data tab of an entity, when you upload a CSV file, you can select a mapper to apply to the file. The file will then be parsed following the representation rules set in the mapper.
By default, the imported elements will be added in a new Analyst Workbench where you will be able to check the result of the import.
"},{"location":"administration/csv-mappers/#default-values-for-attributes","title":"Default values for attributes","text":"
In case the CSV file misses some data, you can complete it with default values. To achieve this, you have two possibilities:
Set default values in the settings of the entities,
Set default values directly in the CSV mapper.
"},{"location":"administration/csv-mappers/#set-default-values-in-the-settings-of-the-entities","title":"Set default values in the settings of the entities","text":"
Default value mechanisms
Note that adding default values in settings has an impact on entity creation globally on the platform, not only on CSV mappers. If you want to apply those default values only at the CSV mapper level, please use the second option.
In settings > Customization, you can select an entity type and then set default values for its attributes.
In the configuration of the entity, you have access to the entity's attributes that can be managed.
Click on an attribute to add default value information.
Enter the default value in the input and save the update.
The value filled in will be used when the CSV file lacks data for this attribute.
"},{"location":"administration/csv-mappers/#set-specific-default-values-directly-in-the-csv-mapper","title":"Set specific default values directly in the CSV mapper","text":"
Information retained in case of default value
If you fill in a default value in both the entity settings and the CSV mapper, the one from the CSV mapper will be used.
In the mapper form, you will see next to the column index input a gear icon to add extra information for the attribute. If the attribute can have a customizable default value, then you will be able to set one here.
The example above shows the case of the attribute architecture implementation of a malware. A default value is already set in entity settings for this attribute, with the value [powerpc, x86]. However, we want to override this value with another one for our case: [alpha].
"},{"location":"administration/csv-mappers/#specific-case-of-marking-definitions","title":"Specific case of marking definitions","text":"
For marking definitions, setting a default value is different from other attributes. We are not choosing a particular marking definition to use if none is specified in the CSV file. Instead, we choose a default policy. Two options are available:
Use the default marking definitions of the user. In this case the default marking definitions of the connected user importing the CSV file will be used,
Let the user choose marking definitions. Here the user importing the CSV file will choose marking definitions (among the ones they can see) when selecting the CSV mapper.
Decay rules can be configured in the "Settings > Customization > Decay rule" menu.
There are built-in decay rules that can't be modified and are applied by default to indicators depending on their main observable type. Decay rules are applied from highest to lowest order (the lowest being 0).
You can create new decay rules with higher order to apply them along with (or instead of) the built-in rules.
When you create a decay rule, you can specify on which indicators' main observable types it will apply. If you don't enter any, it will apply to all indicators.
You can also add reaction points which represent the scores at which indicators are updated. For example, if you add one reaction point at 60 and another one at 40, indicators that have an initial score of 80 will be updated with a score of 60, then 40, depending on the decay curve.
The decay curve is based on two parameters:
the decay factor, which represents the speed at which the score falls, and
the lifetime, which represents the time (in days) during which the value will be lowered until it reaches 0.
Finally, the revoke score is the score at which the indicator can be revoked automatically.
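To make the interplay of these parameters concrete, here is an illustrative Python sketch of a decay curve of this general shape; it is an assumption for explanation only, not the exact formula implemented by the platform:

```python
def decay_score(initial_score: float, days_elapsed: float,
                decay_factor: float, lifetime_days: float) -> float:
    """Score falls from its initial value to 0 over `lifetime_days`;
    a higher decay factor makes the early drop steeper."""
    if days_elapsed <= 0:
        return initial_score
    if days_elapsed >= lifetime_days:
        return 0.0
    return initial_score * (1 - (days_elapsed / lifetime_days) ** (1 / decay_factor))

# An indicator starting at 80 crosses hypothetical reaction points (60, 40)
# and a revoke score (20) as time passes:
for day in (0, 7, 30, 90, 180, 365):
    print(day, round(decay_score(80, day, decay_factor=3.0, lifetime_days=365), 1))
```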
Once you have created a new decay rule, you will be able to view its details, along with a life curve graph showing the score evolution over time.
You will also be able to edit your rule, change all its parameters and order, activate or deactivate it (only activated rules are applied), or delete it.
Indicator decay manager
Decay rules are only applied, and indicators' scores updated, if the indicator decay manager is enabled (it is enabled by default).
Please read the dedicated page for all information.
Filigran provides an Enterprise Edition of the platform, whether on-premise or as SaaS.
"},{"location":"administration/enterprise/#what-is-opencti-ee","title":"What is OpenCTI EE?","text":"
OpenCTI Enterprise Edition is based on the open core concept. This means that the source code of OCTI EE remains open source and included in the main GitHub repository of the platform but is published under a specific license. As specified in the GitHub license file:
The OpenCTI Community Edition is licensed under the Apache License, Version 2.0 (the "Apache License").
The OpenCTI Enterprise Edition is licensed under the OpenCTI Enterprise Edition License (the "Enterprise Edition License").
The source files in this repository have a header indicating which license they are under. If no such header is provided, this means that the file belongs to the Community Edition under the Apache License, Version 2.0.
We wrote a complete article to explain the Enterprise Edition; feel free to read it for more information.
Audit logs help you answer "who did what, where, and when?" within your data with the maximum level of transparency. Please read the Activity monitoring page for all information.
"},{"location":"administration/enterprise/#playbooks-and-automation","title":"Playbooks and automation","text":"
OpenCTI playbooks are flexible automation scenarios which can be fully customized and enabled by platform administrators to enrich, filter and modify the data created or updated in the platform. Please read the Playbook automation page for all information.
"},{"location":"administration/enterprise/#organizations-management-and-segregation","title":"Organizations management and segregation","text":"
Organizations segregation is a way to segregate your data based on the organization associated with the users. It is useful when your platform aims to share data with multiple organizations that have access to the same OpenCTI platform. Please read Organizations RBAC for more information.
"},{"location":"administration/enterprise/#full-text-indexing","title":"Full text indexing","text":"
Full text indexing grants improved searches across structured and unstructured data. OpenCTI classic searches are based on metadata fields (e.g. title, description, type) while the advanced indexing capability enables searches to be extended to the document's contents. Please read File indexing for all information.
"},{"location":"administration/enterprise/#more-to-come","title":"More to come","text":"
More features will be available in OpenCTI in the future. Features like:
Generative AI for correlation and content generation.
Supervised machine learning for natural language processing.
A variety of entity customization options are available to optimize data representation, workflow management, and enhance overall user experience. Whether you're fine-tuning processing statuses, configuring entities' attributes, or hiding entities, OpenCTI's customization capabilities provide the flexibility you need to create a tailored environment for your threat intelligence and cybersecurity workflows.
The following chapter aims to provide readers with an understanding of the available customization options by entity type. Customizing entities can be done in "Settings > Customization".
"},{"location":"administration/entities/#hidden-in-interface","title":"Hidden in interface","text":"
This configuration allows you to hide a specific entity type throughout the entire platform. It provides a potent means to simplify the interface and tailor it to your domain expertise. For instance, if you have no interest in disinformation campaigns, you can conceal related entities such as Narratives and Channels from the menus.
You can specify which entities to hide on a platform-wide basis from "Settings > Customization" and from "Settings > Parameters", providing you with a list of hidden entities. Furthermore, you can designate hidden entities for specific Groups and Organizations from "Settings > Security > Groups/Organizations" by editing a Group/Organization.
An overview of hidden entity types is available in the "Hidden entity types" field in "Settings > Parameters".
"},{"location":"administration/entities/#automatic-references-at-file-upload","title":"Automatic references at file upload","text":"
This configuration enables an entity to automatically construct an external reference from the uploaded file.
Enforce references
This configuration enables the requirement of a reference message on an entity creation or modification. This option is helpful if you want to keep strong consistency and traceability of your knowledge and is well suited for manual creation and update.
For now, OpenCTI has a simple workflow approach, represented by the "Processing status" field embedded in each object. By default, this field is disabled for most objects but can be activated through the platform settings:
Click on the small pink pen icon next to "Workflow" to access the object customization window.
Add and configure the desired statuses, defining their order within the workflow.
In addition, the available statuses are defined by a collection of status templates visible in "Settings > Taxonomies > Status templates". This collection can be customized.
The confidence scale can be customized for each entity type by selecting another scale template or by editing the scale values directly. Once you have customized your scale, click on "Update" to save your configuration.
Max confidence level
The above scale also needs to take into account the confidence level per user. To understand the concept, please navigate to this page.
Platform segregation by organization is available under the "OpenCTI Enterprise Edition" license. Please read the dedicated page to have all the information.
File indexing can be configured via the File indexing tab in the Settings menu.
The configuration and impact panel shows all file types that can be indexed, as well as the volume of storage used.
It is also possible to include or exclude files uploaded from the global Data import panel and that are not associated with a specific entity in the platform.
Finally, it is possible to set a maximum file size for indexing (5 MB by default).
This guide aims to give you a full overview of the OpenCTI features and workflows. The platform can be used in various contexts to handle threat management use cases from a technical to a more strategic level.
The OpenCTI Administrative settings console allows administrators to configure many options dynamically within the system. As an Administrator, you can access this settings console, by clicking the settings link.
The Settings Console allows for configuration of various aspects of the system.
This section will show configured and enabled/disabled strategies. The configuration is done in the config/default.json file or via ENV variables detected at launch.
Platform Login Message (optional) - if configured this will be displayed on the login page. This is usually used to have a welcome type message for users before login.
Platform Consent Message (optional) - if configured this will be displayed on the login page. This is usually used to display some type of consent message for users to agree to before login. If enabled, a user must check the checkbox displayed to allow login.
Platform Consent Confirm Text (optional) - This is displayed next to the platform consent checkbox, if Platform Consent Message is configured. Users must agree to the checkbox before the login prompt will be displayed. This message can be configured, but by default reads: I have read and comply with the above statement
"},{"location":"administration/introduction/#dark-theme-color-scheme","title":"Dark Theme Color Scheme","text":"
Various aspects of the Dark Theme can be dynamically configured in this section.
"},{"location":"administration/introduction/#light-theme-color-scheme","title":"Light Theme Color Scheme","text":"
Various aspects of the Light Theme can be dynamically configured in this section.
Within the OpenCTI platform, the merge capability is present in the "Data > Entities" tab and is fairly straightforward to use. To execute a merge, select the set of entities to be merged, then click on the Merge icon.
Merging limitation
It is not possible to merge entities of different types, nor is it possible to merge more than 4 entities at a time (it will have to be done in several stages).
Central to the merging process is the selection of a main entity. This primary entity becomes the anchor, retaining crucial attributes such as name and description. Other entities, while losing specific fields like descriptions, are aliased under the primary entity. This strategic decision preserves vital data while eliminating redundancy.
Once the choice has been made, simply validate to run the task in the background. Depending on the number of entity relationships, and the current workload on the platform, the merge may take more or less time. In the case of a healthy platform and around a hundred relationships per entity, merge is almost instantaneous.
"},{"location":"administration/merging/#data-preservation-and-relationship-continuity","title":"Data preservation and relationship continuity","text":"
A common concern when merging entities lies in the potential loss of information. In the context of OpenCTI, this worry is alleviated. Even if the merged entities were initially created by distinct sources, the platform ensures that data is not lost. Upon merging, the platform automatically generates relationships directly on the merged entity. This strategic approach ensures that all connections, regardless of their origin, are anchored to the consolidated entity. Post-merge, OpenCTI treats these once-separate entities as a singular, unified entity. Subsequent information from varied sources is channeled directly into the entity resulting from the merger. This unified entity becomes the focal point for all future relationships, ensuring the continuity of data and relationships without any loss or fragmentation.
Irreversible process: It's essential to know that a merge operation is irreversible. Once completed, the merged entities cannot be reverted to their original state. Consequently, careful consideration and validation are crucial before initiating the merge process.
Loss of fields in aliased entities: Fields, such as descriptions, in aliased entities - entities that have not been chosen as the main - will be lost during the merge. Ensuring that essential information is captured in the primary entity is crucial to prevent data loss.
Usefulness: To understand the benefits of entity merger, refer to the Merge objects page in the User Guide section of the documentation.
Deduplication mechanism: the platform is equipped with deduplication processes that automatically merge data at creation (either manually or by importing data from different sources) if it meets certain conditions.
"},{"location":"administration/notifier-samples/","title":"Notifier samples","text":""},{"location":"administration/notifier-samples/#configure-teams-webhook","title":"Configure Teams webhook","text":"
To configure a notifier for Teams, allowing notifications to be sent via Teams messages, we followed the guidelines outlined in the Microsoft documentation.
"},{"location":"administration/notifier-samples/#template-message-for-live-trigger","title":"Template message for live trigger","text":"
The Teams template message sent through webhook for a live notification is:
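The original template is not reproduced here; as a purely hypothetical sketch of what such a payload can look like (the field names and the EJS-style placeholder are assumptions to adapt from the platform's built-in samples):

```json
{
  "@type": "MessageCard",
  "@context": "https://schema.org/extensions",
  "themeColor": "0078D7",
  "summary": "OpenCTI notification",
  "title": "<%=notification.name%>",
  "text": "A new event matched one of your live triggers in OpenCTI."
}
```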
Leveraging the platform's built-in connectors, users can create custom notifiers tailored to their unique needs. OpenCTI features three built-in connectors: a webhook connector, a simple mailer connector, and a platform mailer connector. These connectors operate based on registered schemas that describe their interaction methods.
This notifier connector enables users to send notifications to external applications or services through HTTP requests. Users can specify:
Verb: Specifies the HTTP method (GET, POST, PUT, DELETE).
URL: Defines the destination URL for the webhook.
Template: Specifies the message template for the notification.
Parameters and Headers: Customizable parameters and headers sent through the webhook request.
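Conceptually, a webhook notifier combining these fields might look as follows (every value below is a placeholder, not a built-in default):

```json
{
  "verb": "POST",
  "url": "https://hooks.example.com/opencti-notifications",
  "template": "{ \"text\": \"<%=notification.name%>\" }",
  "headers": { "Content-Type": "application/json" }
}
```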
OpenCTI provides two notifier samples by default, designed to communicate with Microsoft Teams through a webhook. A documentation page providing details on these samples is available.
"},{"location":"administration/notifiers/#configuration-and-access","title":"Configuration and access","text":"
Custom notifiers are manageable in the "Settings > Customization > Notifiers" window and can be restricted through Role-Based Access Control (RBAC). Administrators can control access, limiting usage to specific Users, Groups, or Organizations.
For guidance on configuring notification triggers and exploring the usages of notifiers, refer to the dedicated documentation page.
Taxonomies in OpenCTI refer to the structured classification systems that help in organizing and categorizing cyber threat intelligence data. They play a crucial role in the platform by allowing analysts to systematically tag and retrieve information based on predefined categories and terms.
Along with the Customization page, these pages allow the administrator to customize the platform.
Labels in OpenCTI serve as a powerful tool for organizing, categorizing, and prioritizing data. Here's how they can be used effectively:
Tagging and Categorization: Labels can be used to tag malware, incidents, or indicators (IOCs) with specific categories, making it easier to filter and search through large datasets.
Prioritization: By labeling threats based on their severity or impact, security analysts can prioritize their response efforts accordingly.
Correlation and Analysis: Labels help in correlating different pieces of intelligence. For example, if multiple indicators are tagged with the same label, it might indicate a larger campaign or a common adversary.
Automation and Integration: Labels can trigger automated workflows (also called playbooks) within OpenCTI. For instance, a label might automatically initiate further investigation or escalate an incident.
Reporting and Metrics: Labels facilitate the generation of reports and metrics, allowing organizations to track trends through dashboards, measure response effectiveness, and make data-driven decisions.
Sharing and Collaboration: When sharing intelligence with other organizations or platforms, labels provide a common language that helps in understanding the context and relevance of the shared data.
Tip
In order to achieve effective data labeling, it is recommended to establish clear and consistent criteria for your labeling and document them in a policy or guideline.
Kill chain phases are used in OpenCTI to structure and analyze the data related to cyber threats and attacks. They describe the stages of an attack from the perspective of the attacker and provide a framework for identifying, analysing and responding to threats.
OpenCTI supports the following kill chain models:
Lockheed Martin Cyber Kill Chain
MITRE ATT&CK Framework (Entreprise, PRE, Mobile and ICS)
DISARM framework
You can add, edit, or delete kill chain phases in the settings page, and assign them to indicators, attack patterns, incidents, or courses of action in the platform. You can also filter the data by kill chain phase, and view the kill chain phases in a timeline or as a matrix.
Open vocabularies are sets of terms and definitions that are agreed upon by the CTI community. They help to standardize the communication and documentation of cyber threat information. This section allows you to customize a set of available fields by adding vocabulary. Almost all of the drop-down menus available in the entities can be modified from this panel.
Open vocabularies in OpenCTI are mainly based on the STIX standard.
Status templates are predefined statuses that can be assigned to different entities in OpenCTI, such as reports, incidents, or cases (incident responses, requests for information and requests for takedown).
They help to track the progress of the analysis and response activities by defining statuses that are used in the workflows.
Platform segregation by organization is available under the "OpenCTI Enterprise Edition" license. Please read the dedicated page to have all the information.
Platform administrators can promote members of an organization as "Organization administrator". This elevated role grants them the necessary capabilities to create, edit and delete users from the corresponding Organization. Additionally, administrators have the flexibility to define a list of groups that can be granted to newly created members by the organization administrators. This feature simplifies the process of granting appropriate access and privileges to individuals joining the organization.
The platform administrator can promote/demote an organization admin through its user edition form.
Organization admin rights
The \"Organization admin\" has restricted access to Settings. They can only manage the members of the organizations for which they have been promoted as \"admins\".
This section allows the administrator to edit the following settings:
Platform title
Platform favicon URL
Sender email address: email address displayed as sender when sending notifications. The technical sender is defined in the SMTP configuration.
Theme
Language
Hidden entity types: allows you to customize which types of entities you want to see or hide in the platform. This can help you focus on the relevant information and avoid cluttering the platform with unnecessary data.
This is where the Enterprise edition can be enabled.
This section gives important information about the platform, like the version used, the edition, the architecture mode (Standalone or Cluster) and the number of nodes used.
Through the \"Remove Filigran logos\" toggle, the administrator has the option to hide the Filigran logo on the login page and the sidebar.
This section gives you the possibility to set and display Announcements in the platform. Those announcements will be visible to every user in the platform, on top of the interface.
They can be used to inform some or all of your users of important information, like a scheduled downtime, an upcoming upgrade, or even to share important tips regarding the usage of the platform.
An Announcement can be accompanied by a "Dismiss" button. When clicked by a user, it makes the message disappear for this user.
This option can be deactivated to have a permanent announcement.
⚠️ Only one announcement is shown at a time, with priority given to dismissible ones. If there are no dismissible announcements, the most recent non-dismissible one is shown.
This section informs the administrator of the statuses of the different managers used in the platform. More information about the managers can be found here. It also shows the versions used of the search engine database, RabbitMQ and Redis.
In cluster mode, the fact that a manager appears as enabled means that it is active in at least one node.
The Policies configuration window (in "Settings > Security > Policies") encompasses essential settings that govern the organizational sharing, authentication strategies, password policies, login messages, and banner appearance within the OpenCTI platform.
"},{"location":"administration/policies/#platform-main-organization","title":"Platform main organization","text":"
Allows setting a main organization for the entire platform. Users belonging to the main organization enjoy unrestricted access to all data stored in the platform. In contrast, users affiliated with other organizations will only have visibility into data explicitly shared with them.
Numerous repercussions linked to the activation of this feature
This feature has implications for the entire platform and must be fully understood before being used. For example, it is mandatory to have organizations set up for each user, otherwise they won't be able to log in. It is also advisable to include connectors' users in the platform main organization to avoid import problems.
The authentication strategies section provides insights into the configured authentication methods. Additionally, an "Enforce Two-Factor Authentication" button is available, allowing administrators to mandate 2FA activation for users, enhancing overall account security.
Please see the Authentication section for further details on available authentication strategies.
This section encompasses a comprehensive set of parameters defining the local password policy. Administrators can specify requirements such as minimum/maximum number of characters, symbols, digits, and more to ensure robust password security across the platform. Here are all the parameters available:
Number of chars must be greater than or equals to: Define the minimum length required for passwords.
Number of chars must be lower or equals to (0 equals no maximum): Set an upper limit for password length.
Number of symbols must be greater or equals to: Specify the minimum number of symbols required in a password.
Number of digits must be greater or equals to: Set the minimum number of numeric characters in a password.
Number of words (split on hyphen, space) must be greater or equals to: Enforce a minimum count of words in a password.
Number of lowercase chars must be greater or equals to: Specify the minimum number of lowercase characters.
Number of uppercase chars must be greater or equals to: Specify the minimum number of uppercase characters.
Login messages
Allows defining messages on the login page to customize and highlight your platform's security policy. Three distinct messages can be customized:
Platform login message: Appears above the login form to convey important information or announcements.
Platform consent message: A consent message that obscures the login form until users check the approval box, ensuring informed user consent.
Platform consent confirm text: A message accompanying the consent box, providing clarity on the consent confirmation process.
The platform banner configuration section allows administrators to display a custom banner message at the top and bottom of the screen. This feature enables customization for enhanced visual communication and branding within the OpenCTI platform. It can be used to add a disclaimer or system purpose.
This configuration has two parameters:
Platform banner level: Options defining the banner background color (Green, Red, or Yellow).
Platform banner text: Field referencing the message to be displayed within the banner.
The rules engine comprises a set of predefined rules (named inference rules) that govern how new relationships are inferred based on existing data. These rules are carefully crafted to ensure logical and accurate relationship creation. Here is the list of existing inference rules:
"},{"location":"administration/reasoning/#raise-incident-based-on-sighting","title":"Raise incident based on sighting","text":"Conditions Creations A non-revoked Indicator is sighted in an Entity Creation of an Incident linked to the sighted Indicator and the targeted Entity"},{"location":"administration/reasoning/#sightings-of-observables-via-observed-data","title":"Sightings of observables via observed data","text":"Conditions Creations An Indicator is based on an Observable contained in an Observed Data Creation of a sighting between the Indicator and the creating Identity of the Observed Data"},{"location":"administration/reasoning/#sightings-propagation-from-indicator","title":"Sightings propagation from indicator","text":"Conditions Creations An Indicator based on an Observable is sighted in an Entity The Observable is sighted in the Entity"},{"location":"administration/reasoning/#sightings-propagation-from-observable","title":"Sightings propagation from observable","text":"Conditions Creations An Indicator is based on an Observable sighted in an Entity The Indicator is sighted in the Entity"},{"location":"administration/reasoning/#relation-propagation-via-an-observable","title":"Relation propagation via an observable","text":"Conditions Creations An observable is related to two Entities Create a related to relationship between the two Entities"},{"location":"administration/reasoning/#attribution-propagation","title":"Attribution propagation","text":"Conditions Creations An Entity A is attributed to an Entity B and this Entity B is itself attributed to an Entity C The Entity A is attributed to Entity C"},{"location":"administration/reasoning/#belonging-propagation","title":"Belonging propagation","text":"Conditions Creations An Entity A is part of an Entity B and this Entity B is itself part of an Entity C The Entity A is part of Entity C"},{"location":"administration/reasoning/#location-propagation","title":"Location propagation","text":"Conditions Creations A Location A is located at a Location B and this Location B is itself located at a Location C The Location A is located at Location C"},{"location":"administration/reasoning/#organization-propagation-via-participation","title":"Organization propagation via participation","text":"Conditions Creations A User is affiliated with an Organization B, which is part of an Organization C The User is affiliated to the Organization C"},{"location":"administration/reasoning/#identities-propagation-in-reports","title":"Identities propagation in reports","text":"Conditions Creations A Report contains an Identity B and this Identity B is part of an Identity C The Report contains Identity C, as well as the Relationship between Identity B and Identity C"},{"location":"administration/reasoning/#locations-propagation-in-reports","title":"Locations propagation in reports","text":"Conditions Creations A Report contains a Location B and this Location B is located at a Location C The Report contains Location B, as well as the Relationship between Location B and Location C"},{"location":"administration/reasoning/#observables-propagation-in-reports","title":"Observables propagation in reports","text":"Conditions Creations A Report contains an Indicator and this Indicator is based on an Observable The Report contains the Observable, as well as the Relationship between the Indicator and the Observable"},{"location":"administration/reasoning/#usage-propagation-via-attribution","title":"Usage propagation via attribution","text":"Conditions Creations An 
Entity A, attributed to an Entity C, uses an Entity B The Entity C uses the Entity B"},{"location":"administration/reasoning/#inference-of-targeting-via-a-sighting","title":"Inference of targeting via a sighting","text":"Conditions Creations An Indicator, sighted at an Entity C, indicates an Entity B The Entity B targets the Entity C"},{"location":"administration/reasoning/#targeting-propagation-via-attribution","title":"Targeting propagation via attribution","text":"Conditions Creations An Entity A, attributed to an Entity C, targets an Entity B The Entity C targets the Entity B"},{"location":"administration/reasoning/#targeting-propagation-via-belonging","title":"Targeting propagation via belonging","text":"Conditions Creations An Entity A targets an Identity B, part of an Identity C The Entity A targets the Identity C"},{"location":"administration/reasoning/#targeting-propagation-via-location","title":"Targeting propagation via location","text":"Conditions Creations An Entity targets a Location B and this Location B is located at a Location C The Entity targets the Location C"},{"location":"administration/reasoning/#targeting-propagation-when-located","title":"Targeting propagation when located","text":"Conditions Creations An Entity A targets an Entity B and this target is located at Location D. The Entity A targets the Location D"},{"location":"administration/reasoning/#rule-execution","title":"Rule execution","text":""},{"location":"administration/reasoning/#rule-activation","title":"Rule activation","text":"
When a rule is activated, a background task is initiated. This task scans all platform data, identifying existing relationships that meet the conditions defined by the rule. Subsequently, it creates new objects (entities and/or relationships), expanding the network of insights within your threat intelligence environment. Then, activated rules operate continuously. Whenever a relationship is created or modified, and this change aligns with the conditions specified in an active rule, the reasoning mechanism is triggered. This ensures real-time relationship inference.
Deactivating a rule leads to the deletion of all objects and relationships created by it. This cleanup process maintains the accuracy and reliability of your threat intelligence database.
"},{"location":"administration/reasoning/#access-restrictions-and-data-impact","title":"Access restrictions and data impact","text":"
Access to the rule engine panel is restricted to administrators only. Regular users do not have visibility into this section of the platform. Administrators possess the authority to activate or deactivate rules.
The rules engine empowers OpenCTI with the capability to automatically establish intricate relationships within your data. However, these rules can lead to a very large number of objects created. Even if the operation is reversible, an administrator should consider the impact of activating a rule.
Usefulness: To understand the benefits and results of these rules, refer to the Inferences and reasoning page in the User Guide section of the documentation.
New inference rule: Given the potential impact of a rule on your data, users are not allowed to add new rules. However, users can submit rule suggestions via a GitHub issue for evaluation. These suggestions are carefully evaluated by our team, ensuring continuous improvement and innovation.
Retention rules serve the purpose of establishing data retention times, specifying when data should be automatically deleted from the platform. Users can define filters to target specific objects. Any object meeting these criteria that hasn't been updated within the designated time frame will be permanently deleted.
Note that the data deleted by an active retention policy will not appear in the trash and thus cannot be restored.
Before activating a retention rule, users have the option to verify its impact using the "Verify" button. This action provides insight into the number of objects that currently match the rule's criteria and would be deleted if the rule is activated.
Verify before activation
Always use the \"Verify\" feature to assess the potential impact of a retention rule before activating it. Once the rule is activated, data deletion will begin, and retrieval of the deleted data will not be possible.
Retention rules contribute to maintaining a streamlined and efficient data lifecycle within OpenCTI, ensuring that outdated or irrelevant information is systematically removed from the platform, thereby optimizing disk space usage.
Data segregation in the context of Cyber Threat Intelligence refers to the practice of categorizing and separating different types of data or information related to cybersecurity threats based on specific criteria.
This separation helps organizations manage and analyze threat intelligence more effectively and securely and the goal of data segregation is to ensure that only those individuals who are authorized to view a particular set of data have access to that set of data.
Practically, \"Need-to-know basis\" and \"classification level\" are data segregation measures.
Marking definitions are essential in the context of data segregation to ensure that data is appropriately categorized and protected based on its sensitivity or classification level. Marking definitions establish a standardized framework for classifying data.
Marking Definition objects are unique among STIX objects in the STIX 2.1 standard in that they cannot be versioned. This restriction is in place to prevent the possibility of indirect alterations to the markings associated with a STIX Object.
Multiple markings can be added to the same object. Certain categories of marking definitions or trust groups may enforce rules that specify which markings take precedence over others or how some markings can be added to complement existing ones.
In OpenCTI, data is segregated based on knowledge marking. The diagram provided below illustrates the manner in which OpenCTI establishes connections between pieces of information to authorize data access for a user:
"},{"location":"administration/segregation/#manage-markings","title":"Manage markings","text":""},{"location":"administration/segregation/#create-new-markings","title":"Create new markings","text":"
To create a marking, you must first possess the capability Manage marking definitions. For further information on user administration, please refer to the Users and Role Based Access Control page.
Once you have access to the settings, navigate to "Settings > Security > Marking Definitions" to create a new marking.
A marking consists of the following attributes:
Type: Specifies the marking group to which it belongs.
Definition: The name assigned to the marking.
Color: The color associated with the marking.
Order: Determines the hierarchical order among markings of the same type.
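For example, these four attributes map directly onto a creation call in the pycti client (a minimal sketch, assuming the marking_definition.create helper; URL, token and values are placeholders):

```python
from pycti import OpenCTIApiClient

client = OpenCTIApiClient("https://opencti.example.com", "YOUR_API_TOKEN")  # placeholders

marking = client.marking_definition.create(
    definition_type="TLP",       # Type: the marking group it belongs to
    definition="TLP:EXAMPLE",    # Definition: the name assigned to the marking
    x_opencti_color="#ffa000",   # Color associated with the marking
    x_opencti_order=2,           # Order among markings of the same type
)
```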
The configuration of authorized markings for a user is determined at the Group level. To access entities and relationships associated with specific markings, the user must belong to a group that has been granted access to those markings.
There are two ways in which markings can be accessed:
The user is a member of a group that has been granted access to the marking.
The user is a member of a group that has access to a marking of the same type, with an equal or higher hierarchical order.
Access to an object with several markings
Access to all markings attached to an object is required in order to access it (not only one).
Automatically grant access to the new marking
To allow a group to automatically access a newly created marking definition, you can check Automatically authorize this group to new marking definition.
To apply a default marking when creating a new entity or relationship, you can choose which marking to add by default from the list of allowed markings. You can add only one marking per type, but you can have multiple types. This configuration is also done at the Group level.
Need a configuration change
Simply adding markings as default markings is insufficient to display the markings when creating an entity or relationship. You also need to enable default markings in the customization settings of an entity or relationship. For example, to enable default markings for a new report, navigate to "Settings > Customization > Report > Markings" and toggle the option to Activate/Deactivate default values.
This configuration allows defining, for each type of marking definition, up to which level data can be shared externally (via Public dashboard or file export).
The marking definitions that can be shared by a group are the ones that are allowed for this group and whose order is lower than or equal to the order of the maximum shareable markings defined for each marking type.
Users with the Bypass capability can share all the markings.
By default, every marking of a given marking type is shareable.
For example, in the capture below, for the TLP marking type, only data with a marking definition that is allowed and has a level equal to or below GREEN will be shareable. No data with a statement marking definition will be shared at all.
"},{"location":"administration/segregation/#management-of-multiple-markings","title":"Management of multiple markings","text":"
In scenarios where multiple markings of the same type but different orders are added, the platform will retain only the marking with the highest order for each type. This consolidation can occur in various instances:
During entity creation, if multiple markings are selected.
During entity updates, whether manually or via a connector, if additional markings are introduced.
When multiple entities are merged, their respective markings will be amalgamated.
For example:
Create a new report and add markings PAP:AMBER,PAP:RED,TLP:AMBER+STRICT,TLP:CLEAR and a statement CC-BY-SA-4.0 DISARM Foundation
The final markings kept are: PAP:RED, TLP:AMBER+STRICT and CC-BY-SA-4.0 DISARM Foundation
"},{"location":"administration/segregation/#update-an-object-manually","title":"Update an object manually","text":"
When updating an entity or a relationship:
if you add a marking with the same type and a different order, a pop-up will be displayed to confirm the choice,
if you add a marking with the same type and the same order, the marking will be added,
if you add a marking with a different type, the marking will be added.
"},{"location":"administration/segregation/#import-data-from-a-connector","title":"Import data from a connector","text":"
As a result of this mechanism, when importing data from a connector, the connector is unable to downgrade a marking for an entity if a marking of the same type is already present on it.
The Traffic Light Protocol is implemented by default as marking definitions in OpenCTI. It allows you to segregate information by TLP levels in your platform and restrict access to marked data if users are not authorized to see the corresponding marking.
The Traffic Light Protocol (TLP) was designed by the Forum of Incident Response and Security Teams (FIRST) to provide a standardized method for classifying and handling sensitive information, based on four categories of sensitivity.
For more details, the diagram provided below illustrates how are categorized the marking definitions:
Support packages are useful for troubleshooting issues that occur on the OpenCTI platform. Administrators can request to create and download a support package that contains recent platform error logs and usage statistics.
Support package content
Even if we do our best to prevent logging any data, the support package may contain some sensitive information that you may not want to share with everyone. Before creating a ticket with your support package, take some time to check whether you can safely share the content depending on your security policy.
A Support Package can be requested from the "Settings > Support" menu.
When clicking on "Generate support package", a support event is propagated to every platform instance to request the needed information. Every instance that receives this message will process the request and send the files to the platform. During this processing, the interface displays the expected support package name in an IN PROGRESS state, waiting for completion. After the process finishes, the support package moves to the READY state and the download and delete buttons are activated.
In case of platform instability, some logs might not be retrieved and the support package will be incomplete.
If some instances fail to send their data, you will be able to force the download of a partial zip after 1 minute. If a support package takes more than 5 minutes, its status will be moved to "timeout".
"},{"location":"administration/users/","title":"Users and Role Based Access Control","text":""},{"location":"administration/users/#introduction","title":"Introduction","text":"
In OpenCTI, the RBAC system is not only related to what users can or cannot do in the platform (aka capabilities) but also to the system of data segregation. Platform behaviors such as default home dashboards, default triggers and digests, as well as default hidden menus or entities, can also be defined across groups and organizations.
Roles are used in the platform to grant the given groups some capabilities, defining what users in those groups can or cannot do.
"},{"location":"administration/users/#list-of-capabilities","title":"List of capabilities","text":"Capability Description Bypass all capabilities Just bypass everything including data segregation and enforcements. Access knowledge Access in read-only to all the knowledge in the platform. Access to collaborative creation Create notes and opinions (and modify its own) on entities and relations. Create / Update knowledge Create and update existing entities and relationships. Restrict organization access Share entities and relationships with other organizations. Delete knowledge Delete entities and relationships. Manage authorized members Restrict the access to an entity to a user, group or organization. Bypass enforced reference If external references enforced in a type of entity, be able to bypass the enforcement. Upload knowledge files Upload files in the Data and Content section of entities. Import knowledge Trigger the ingestion of an uploaded file. Download knowledge export Download the exports generated in the entities (in the Data section). Generate knowledge export Trigger the export of the knowledge of an entity. Ask for knowledge enrichment Trigger an enrichment for a given entity. Access dashboards Access to existing custom dashboards. Create / Update dashboards Create and update custom dashboards. Delete dashboards Delete existing custom dashboards. Manage public dashboards Manage public dashboards. Access investigations Access to existing investigations. Create / Update investigations Create and update investigations. Delete investigations Delete existing investigations. Access connectors Read information in the Data > Connectors section. Manage connector state Reset the connector state to restart ingestion from the beginning. Connectors API usage: register, ping, export push ... Connectors specific permissions for register, ping, push export files, etc. Access data sharing Access and consume data such as TAXII collections, CSV feeds and live streams. Manage data sharing Share data such as TAXII collections, CSV feeds and live streams or custom dashboards. Access ingestion Access (read only) remote OCTI streams, TAXII feeds, RSS feeds, CSV feeds. Manage ingestion Create, update, delete any remote OCTI streams, TAXII feeds, RSS feeds, CSV feeds. Manage CSV mappers Create, update and delete CSV mappers. Access to admin functionalities Parent capability allowing users to only view the settings. Access administration parameters Access and manage overall parameters of the platform in Settings > Parameters. Manage credentials Access and manage roles, groups, users, organizations and security policies. Manage marking definitions Update and delete marking definitions. Manage customization Customize entity types, rules, notifiers retention policies and decays rules. Manage taxonomies Manage labels, kill chain phases, vocabularies, status templates, cases templates. Access to security activity Access to activity log. Access to file indexing Manage file indexing. Access to support Generate and download support packages."},{"location":"administration/users/#manage-roles","title":"Manage roles","text":"
You can manage the roles in Settings > Security > Roles.
To create a role, just click on the + button:
Then you will be able to define the capabilities of the role:
You can manage the users in Settings > Security > Users. If you are using Single-Sign-On (SSO), the users in OpenCTI are automatically created upon login.
To create a user, just click on the + button:
"},{"location":"administration/users/#manage-a-user","title":"Manage a user","text":"
When accessing a user, it is possible to:
Visualize information including the token
Modify it, reset 2FA if necessary
Manage its sessions
Manage its triggers and digests
Visualize the history and operations
Manage its max confidence levels
From this view you can edit the user's information by clicking the \"Update\" button, which opens a panel with several tabs.
Overview tab: edit all basic information such as the name or language
Password tab: change the password for this user
Groups tab: select the groups this user belongs to
Organization Admin tab: see Organization administration
Confidences tab: manage the user's maximum confidence level and overrides per entity type
Mandatory max confidence level
A user without a max confidence level won't have the ability to create, delete or update any data in the platform. Please be sure that your users are always either assigned to a group that has a confidence level defined, or have an override of this group confidence level.
Groups are the main way to manage permissions and data segregation, as well as platform customization, for the users belonging to them. You can manage the groups in Settings > Security > Groups.
Here is the description of the available group parameters.
| Parameter | Description |
|---|---|
| Auto new markings | If a new marking definition is created, this group will automatically be granted access to it. |
| Default membership | If a new user is created (manually or upon SSO), it will be added to this group. |
| Roles | Roles and capabilities granted to the users belonging to this group. |
| Default dashboard | Customize the home dashboard for the users belonging to this group. |
| Default markings | In Settings > Customization > Entity types, if a default marking definition is enabled, the default markings of the group are used. |
| Allowed markings | Grant the group access to the defined marking definitions; more details in data segregation. |
| Max shareable markings | Grant the group authorization to share marking definitions. |
| Triggers and digests | Define default triggers and digests for the users belonging to this group. |
| Max confidence level | Define the maximum confidence level for the group: it will impact the capacity to update entities and the confidence level of an entity newly created by a user of the group. |
Max confidence level when a user has multiple groups
A user with multiple groups will have the highest confidence level of all their groups. For instance, if a user is part of group A (max confidence level = 100) and group B (max confidence level = 50), then the user's max confidence level will be 100.
"},{"location":"administration/users/#manage-a-group","title":"Manage a group","text":"
When managing a group, you can define the members and all above configurations.
Users can belong to organizations, which is an additional layer of data segregation and customization. To find out more about this part, please refer to the page on organization segregation.
Platform administrators can promote members of an organization as \"Organization administrator\". This elevated role grants them the necessary capabilities to create, edit and delete users from the corresponding Organization. Additionally, administrators have the flexibility to define a list of groups that can be granted to newly created members by the organization administrators. This feature simplifies the process of granting appropriate access and privileges to individuals joining the organization.
The platform administrator can promote/demote an organization admin through their user edit form.
Organization admin rights
The \"Organization admin\" has restricted access to Settings. They can only manage the members of the organizations for which they have been promoted as \"admins\".
Activity unified interface and logging are available under the \"OpenCTI Enterprise Edition\" license.
Please read the dedicated page to have all the information.
As explained in the overview page, all administration actions are logged by default. However, knowledge is not all logged by default due to the performance impact on the platform.
For this reason, you need to explicitly activate extended listening on a user, group or organization.
Listening will start just after the configuration; past events will not be taken into account.
The OpenCTI activity capability is the way to unify what really happens in the platform. In the events section, you will have access to the UI that answers "who did what, where, and when?" within your data with the maximum level of transparency.
By default, the events screen only shows you the administration actions done by the users.
If you also want to see the information about knowledge, you can simply activate the filter in the bar to get a complete overview of all user actions.
Don't hesitate to read the overview page again to have a better understanding of the difference between audit and basic/extended knowledge.
OpenCTI activity capability is the way to unify what's really happening in the platform. With this feature you will be able to answer \"who did what, where, and when?\" within your data with the maximum level of transparency.
Enabling activity helps your security, auditing, and compliance entities monitor the platform for possible vulnerabilities or external data misuse.
The basic knowledge refers to all STIX data knowledge inside OpenCTI. Every create/update/delete action on that knowledge is accessible through the history. That basic activity is handled by the history manager and can also be found directly on each entity.
The extended knowledge refers to extra data tracked for specific user activity. As this kind of tracking is expensive, it will only be done for the specific users/groups/organizations explicitly configured in the configuration window.
Having all the history in the user interface (events) is sometimes not enough for proactive monitoring. For this reason, you can configure specific triggers to receive notifications on audit events. As with personal triggers, you can configure live ones that will be sent directly, or digests, depending on your needs.
Under the hood, we technically use the strategies provided by PassportJS. We integrate a subset of the strategies available with passport. If you need more, we can integrate other strategies.
The cert parameter is mandatory (PEM format) because it is used to validate the SAML response.
The private_key (PEM format) is optional and is only required if you want to sign the SAML client request.
Certificates
Be careful to put the cert / private_key keys in PEM format. Indeed, a lot of systems generally export the keys in X509 / PKCS12 formats, so you will need to convert them. Here is an example to extract PEM from PKCS12:
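For instance, with OpenSSL (the file names below are hypothetical placeholders):

```bash
# Extract the private key from the PKCS12 keystore (PEM format, unencrypted)
openssl pkcs12 -in keystore.p12 -nocerts -nodes -out private_key.pem
# Extract the client certificate (PEM format)
openssl pkcs12 -in keystore.p12 -nokeys -clcerts -out cert.pem
```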
This strategy allows using the OpenID Connect Protocol to handle authentication and is based on Node OpenID Client, which is more powerful than the passport one.
By default, the claims are mapped based on the content of the JWT access_token. If you want to map claims which are in another JWT (such as the id_token), you can define dedicated environment variables for them.
If this mode is activated and the headers are available, the user will be automatically logged in without any action or notice. The logout URI will remove the session and redirect to the configured URI. If not specified, the redirect will be done to the request referer, and so the header authentication will be done again.
"},{"location":"deployment/authentication/#automatically-create-group-on-sso","title":"Automatically create group on SSO","text":"
The variable auto_create_group can be added in the options of some strategies (LDAP, SAML and OpenID). If this variable is true, the groups of a user that logs in will automatically be created if they don't exist.
More precisely, if the user that tries to authenticate has groups that don't exist in OpenCTI but exist in the SSO configuration, there are two cases:
if auto_create_group = true in the SSO configuration: the groups are created at the platform initialization and the user will be mapped to them.
else: an error is raised.
Example
We assume that Group1 exists in the platform and newGroup doesn't exist. The user that tries to log in has the group newGroup. If auto_create_group = true in the SSO configuration, the group named newGroup will be created at the platform initialization and the user will be mapped to it. If auto_create_group = false or is undefined, the user can't log in and an error is raised.
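As an illustration, assuming the SAML strategy is configured through environment variables following the platform's PROVIDERS__SAML__CONFIG__* mapping (the exact variable names depend on your strategy and version), the option could be enabled like this:

```yaml
environment:
  - PROVIDERS__SAML__STRATEGY=SamlStrategy
  # Automatically create the SSO-provided groups if they do not exist yet
  - PROVIDERS__SAML__CONFIG__AUTO_CREATE_GROUP=true
```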
"},{"location":"deployment/authentication/#examples","title":"Examples","text":""},{"location":"deployment/authentication/#ldap-then-fallback-to-local","title":"LDAP then fallback to local","text":"
In this example the users have a login form and need to enter a login and password. The authentication is done on LDAP first, then locally if LDAP authentication failed, and finally fails if none of them succeeded. Here is an example for the production.json file:
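A minimal sketch of such a production.json fragment, with illustrative LDAP connection values (key names may differ slightly depending on your platform version):

```json
{
  "providers": {
    "ldap": {
      "strategy": "LdapStrategy",
      "config": {
        "url": "ldaps://ldap.mydomain.com:636",
        "bind_dn": "cn=admin,dc=mydomain,dc=com",
        "bind_credentials": "ChangeMe",
        "search_base": "ou=users,dc=mydomain,dc=com",
        "search_filter": "(cn={{username}})",
        "mail_attribute": "mail"
      }
    },
    "local": {
      "strategy": "LocalStrategy"
    }
  }
}
```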
The OpenCTI platform technological stack has been designed to be able to scale horizontally. All dependencies such as Elastic or Redis can be deployed in cluster mode and performances can be drastically increased by deploying multiple platform and worker instances.
MinIO is an open source server able to serve S3 buckets. It can be deployed in cluster mode and is compatible with several storage backends. OpenCTI is compatible with any tool following the S3 standard.
As shown on the schema, the best practices for cluster mode, to avoid any congestion in the technological stack, are:
Deploy platform(s) dedicated to end users and connectors registration
Deploy platform(s) dedicated to workers / ingestion process
We recommend 3 to 4 workers maximum per OpenCTI instance.
The ingestion platforms will never be accessed directly by end users.
When enabling clustering, the number of nodes is displayed in Settings > Parameters.
"},{"location":"deployment/clustering/#managers-and-schedulers","title":"Managers and schedulers","text":"
Also, since some managers like the rule engine, the task manager and the notification manager can take some resources in the OpenCTI NodeJS process, it is highly recommended to disable them in the frontend cluster. OpenCTI automatically handles the distribution and the launching of the engines across all nodes in the cluster, except where they are explicitly disabled in the configuration.
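For example, a node dedicated to end users could disable the heaviest engines using the environment variables documented in the configuration section; this is only an illustrative selection:

```yaml
environment:
  # Disable resource-intensive engines on frontend nodes
  - RULE_ENGINE__ENABLED=false
  - TASK_SCHEDULER__ENABLED=false
  - NOTIFICATION_MANAGER__ENABLED=false
```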
The purpose of this section is to learn how to configure OpenCTI to have it tailored for your production and development needs. It is possible to check all default parameters implemented in the platform in the default.json file.
Here are the configuration keys, for both containers (environment variables) and manual deployment.
Parameters equivalence
The equivalent of a config variable in environment variables is obtained by using a double underscore (__) for each level of the configuration key.
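For example, the following keys from the tables below map as:

```text
app:admin:password  ->  APP__ADMIN__PASSWORD
elasticsearch:url   ->  ELASTICSEARCH__URL
```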
"},{"location":"deployment/configuration/#platform","title":"Platform","text":""},{"location":"deployment/configuration/#api-frontend","title":"API & Frontend","text":""},{"location":"deployment/configuration/#basic-parameters","title":"Basic parameters","text":"Parameter Environment variable Default value Description app:port APP__PORT 4000 Listen port of the application app:base_path APP__BASE_PATH Specific URI (ie. /opencti) app:base_url APP__BASE_URL http://localhost:4000 Full URL of the platform (should include the base_path if any) app:request_timeout APP__REQUEST_TIMEOUT 1200000 Request timeout, in ms (default 20 minutes) app:session_timeout APP__SESSION_TIMEOUT 1200000 Session timeout, in ms (default 20 minutes) app:session_idle_timeout APP__SESSION_IDLE_TIMEOUT 0 Idle timeout (locking the screen), in ms (default 0 minute - disabled) app:session_cookie APP__SESSION_COOKIE false Use memory/session cookie instead of persistent one app:admin:email APP__ADMIN__EMAIL admin@opencti.io Default login email of the admin user app:admin:password APP__ADMIN__PASSWORD ChangeMe Default password of the admin user app:admin:token APP__ADMIN__TOKEN ChangeMe Default token (must be a valid UUIDv4) app:health_access_key APP__HEALTH_ACCESS_KEY ChangeMe Access key for the /health endpoint. Must be changed - will not respond to default value. Access with /health?health_access_key=ChangeMe"},{"location":"deployment/configuration/#network-and-security","title":"Network and security","text":"Parameter Environment variable Default value Description http_proxy HTTP_PROXY Proxy URL for HTTP connection (example: http://proxy:80080) https_proxy HTTPS_PROXY Proxy URL for HTTPS connection (example: http://proxy:80080) no_proxy NO_PROXY Comma separated list of hostnames for proxy exception (example: localhost,127.0.0.0/8,internal.opencti.io) app:https_cert:cookie_secure APP__HTTPS_CERT__COOKIE_SECURE false Set the flag \"secure\" for session cookies. app:https_cert:ca APP__HTTPS_CERT__CA Empty list [] Certificate authority paths or content, only if the client uses a self-signed certificate. 
app:https_cert:key APP__HTTPS_CERT__KEY Certificate key path or content app:https_cert:crt APP__HTTPS_CERT__CRT Certificate crt path or content app:https_cert:reject_unauthorized APP__HTTPS_CERT__REJECT_UNAUTHORIZED If not false, the server certificate is verified against the list of supplied CAs"},{"location":"deployment/configuration/#logging","title":"Logging","text":""},{"location":"deployment/configuration/#errors","title":"Errors","text":"Parameter Environment variable Default value Description app:app_logs:logs_level APP__APP_LOGS__LOGS_LEVEL info The application log level app:app_logs:logs_files APP__APP_LOGS__LOGS_FILES true If application logs is logged into files app:app_logs:logs_console APP__APP_LOGS__LOGS_CONSOLE true If application logs is logged to console (useful for containers) app:app_logs:logs_max_files APP__APP_LOGS__LOGS_MAX_FILES 7 Maximum number of daily files in logs app:app_logs:logs_directory APP__APP_LOGS__LOGS_DIRECTORY ./logs File logs directory"},{"location":"deployment/configuration/#audit","title":"Audit","text":"Parameter Environment variable Default value Description app:audit_logs:logs_files APP__AUDIT_LOGS__LOGS_FILES true If audit logs is logged into files app:audit_logs:logs_console APP__AUDIT_LOGS__LOGS_CONSOLE true If audit logs is logged to console (useful for containers) app:audit_logs:logs_max_files APP__AUDIT_LOGS__LOGS_MAX_FILES 7 Maximum number of daily files in logs app:audit_logs:logs_directory APP__AUDIT_LOGS__LOGS_DIRECTORY ./logs Audit logs directory"},{"location":"deployment/configuration/#telemetry","title":"Telemetry","text":"Parameter Environment variable Default value Description app:telemetry:metrics:enabled APP__TELEMETRY__METRICS__ENABLED false Enable the metrics collection. app:telemetry:metrics:exporter_otlp APP__TELEMETRY__METRICS__EXPORTER_OTLP Port to expose the OTLP endpoint. app:telemetry:metrics:exporter_prometheus APP__TELEMETRY__METRICS__EXPORTER_PROMETHEUS 14269 Port to expose the Prometheus endpoint."},{"location":"deployment/configuration/#maps-references","title":"Maps & references","text":"Parameter Environment variable Default value Description app:map_tile_server_dark APP__MAP_TILE_SERVER_DARK https://map.opencti.io/styles/filigran-dark2/{z}/{x}/{y}.png The address of the OpenStreetMap provider with dark theme style app:map_tile_server_light APP__MAP_TILE_SERVER_LIGHT https://map.opencti.io/styles/filigran-light2/{z}/{x}/{y}.png The address of the OpenStreetMap provider with light theme style app:reference_attachment APP__REFERENCE_ATTACHMENT false External reference mandatory attachment"},{"location":"deployment/configuration/#functional-customization","title":"Functional customization","text":"Parameter Environment variable Default value Description app:artifact_zip_password APP__ARTIFACT_ZIP_PASSWORD infected Artifact encrypted archive default password relations_deduplication:past_days RELATIONS_DEDUPLICATION__PAST_DAYS 30 De-duplicate relations based on start_time and stop_time - n days relations_deduplication:next_days RELATIONS_DEDUPLICATION__NEXT_DAYS 30 De-duplicate relations based on start_time and stop_time + n days relations_deduplication:created_by_based RELATIONS_DEDUPLICATION__CREATED_BY_BASED false Take into account the author to duplicate even if stat_time / stop_time are matching relations_deduplication:types_overrides:relationship_type:past_days RELATIONS_DEDUPLICATION__RELATIONSHIP_TYPE__PAST_DAYS Override the past days for a specific type of relationship (ex. 
targets) relations_deduplication:types_overrides:relationship_type:next_days RELATIONS_DEDUPLICATION__RELATIONSHIP_TYPE__NEXT_DAYS Override the next days for a specific type of relationship (ex. targets) relations_deduplication:types_overrides:relationship_type:created_by_based RELATIONS_DEDUPLICATION__RELATIONSHIP_TYPE__CREATED_BY_BASED Override the author duplication for a specific type of relationship (ex. targets)"},{"location":"deployment/configuration/#technical-customization","title":"Technical customization","text":"Parameter Environment variable Default value Description app:graphql:playground:enabled APP__GRAPHQL__PLAYGROUND__ENABLED true Enable the playground on /graphql app:graphql:playground:force_disabled_introspection APP__GRAPHQL__PLAYGROUND__FORCE_DISABLED_INTROSPECTION true Introspection is allowed to auth users but can be disabled in needed app:concurrency:retry_count APP__CONCURRENCY__RETRY_COUNT 200 Number of try to get the lock to work an element (create/update/merge, ...) app:concurrency:retry_delay APP__CONCURRENCY__RETRY_DELAY 100 Delay between 2 lock retry (in milliseconds) app:concurrency:retry_jitter APP__CONCURRENCY__RETRY_JITTER 50 Random jitter to prevent concurrent retry (in milliseconds) app:concurrency:max_ttl APP__CONCURRENCY__MAX_TTL 30000 Global maximum time for lock retry (in milliseconds)"},{"location":"deployment/configuration/#dependencies","title":"Dependencies","text":""},{"location":"deployment/configuration/#xtm-suite","title":"XTM Suite","text":"Parameter Environment variable Default value Description xtm:openbas_url XTM__OPENBAS_URL OpenBAS URL xtm:openbas_token XTM__OPENBAS_TOKEN OpenBAS token xtm:openbas_reject_unauthorized XTM__OPENBAS_REJECT_UNAUTHORIZED false Enable TLS certificate check xtm:openbas_disable_display XTM__OPENBAS_DISABLE_DISPLAY false Disable OpenBAS posture in the UI"},{"location":"deployment/configuration/#elasticsearch","title":"ElasticSearch","text":"Parameter Environment variable Default value Description elasticsearch:engine_selector ELASTICSEARCH__ENGINE_SELECTOR auto elk or opensearch, default is auto, please put elk if you use token auth. elasticsearch:engine_check ELASTICSEARCH__ENGINE_CHECK false Disable Search Engine compatibility matrix verification. Caution: OpenCTI was developed in compliance with the compatibility matrix. Setting the parameter to true may result in negative impacts. elasticsearch:url ELASTICSEARCH__URL http://localhost:9200 URL(s) of the ElasticSearch (supports http://user:pass@localhost:9200 and list of URLs) elasticsearch:username ELASTICSEARCH__USERNAME Username can be put in the URL or with this parameter elasticsearch:password ELASTICSEARCH__PASSWORD Password can be put in the URL or with this parameter elasticsearch:api_key ELASTICSEARCH__API_KEY API key for ElasticSearch token auth. 
Please set also engine_selector to elk elasticsearch:index_prefix ELASTICSEARCH__INDEX_PREFIX opencti Prefix for the indices elasticsearch:ssl:reject_unauthorized ELASTICSEARCH__SSL__REJECT_UNAUTHORIZED true Enable TLS certificate check elasticsearch:ssl:ca ELASTICSEARCH__SSL__CA Custom certificate path or content elasticsearch:search_wildcard_prefix ELASTICSEARCH__SEARCH_WILDCARD_PREFIX false Search includes words with automatic fuzzy comparison elasticsearch:search_fuzzy ELASTICSEARCH__SEARCH_FUZZY false Search will include words not starting with the search keyword"},{"location":"deployment/configuration/#redis","title":"Redis","text":"Parameter Environment variable Default value Description redis:mode REDIS__MODE single Connect to redis in \"single\", \"sentinel or \"cluster\" mode redis:namespace REDIS__NAMESPACE Namespace (to use as prefix) redis:hostname REDIS__HOSTNAME localhost Hostname of the Redis Server redis:hostnames REDIS__HOSTNAMES Hostnames definition for Redis cluster or sentinel mode: a list of host:port objects. redis:port REDIS__PORT 6379 Port of the Redis Server redis:sentinel_master_name REDIS__SENTINEL_MASTER_NAME Name of your Redis Sentinel Master (mandatory in sentinel mode) redis:use_ssl REDIS__USE_SSL false Is the Redis Server has TLS enabled redis:username REDIS__USERNAME Username of the Redis Server redis:password REDIS__PASSWORD Password of the Redis Server redis:ca REDIS__CA [] List of path(s) of the CA certificate(s) redis:trimming REDIS__TRIMMING 2000000 Number of elements to maintain in the stream. (0 = unlimited)"},{"location":"deployment/configuration/#rabbitmq","title":"RabbitMQ","text":"Parameter Environment variable Default value Description rabbitmq:hostname RABBITMQ__HOSTNAME localhost 7 Hostname of the RabbitMQ server rabbitmq:port RABBITMQ__PORT 5672 Port of the RabbitMQ server rabbitmq:port_management RABBITMQ__PORT_MANAGEMENT 15672 Port of the RabbitMQ Management Plugin rabbitmq:username RABBITMQ__USERNAME guest RabbitMQ user rabbitmq:password RABBITMQ__PASSWORD guest RabbitMQ password rabbitmq:queue_type RABBITMQ__QUEUE_TYPE \"classic\" RabbitMQ Queue Type (\"classic\" or \"quorum\") - - - - rabbitmq:use_ssl RABBITMQ__USE_SSL false Use TLS connection rabbitmq:use_ssl_cert RABBITMQ__USE_SSL_CERT Path or cert content rabbitmq:use_ssl_key RABBITMQ__USE_SSL_KEY Path or key content rabbitmq:use_ssl_pfx RABBITMQ__USE_SSL_PFX Path or pfx content rabbitmq:use_ssl_ca RABBITMQ__USE_SSL_CA [] List of path(s) of the CA certificate(s) rabbitmq:use_ssl_passphrase RABBITMQ__SSL_PASSPHRASE Passphrase for the key certificate rabbitmq:use_ssl_reject_unauthorized RABBITMQ__SSL_REJECT_UNAUTHORIZED false Reject rabbit self signed certificate - - - - rabbitmq:management_ssl RABBITMQ__MANAGEMENT_SSL false Is the Management Plugin has TLS enabled rabbitmq:management_ssl_reject_unauthorized RABBITMQ__SSL_REJECT_UNAUTHORIZED true Reject management self signed certificate"},{"location":"deployment/configuration/#s3-bucket","title":"S3 Bucket","text":"Parameter Environment variable Default value Description minio:endpoint MINIO__ENDPOINT localhost Hostname of the S3 Service. Example if you use AWS Bucket S3: s3.us-east-1.amazonaws.com (if minio:bucket_region value is us-east-1). This parameter value can be omitted if you use Minio as an S3 Bucket Service. minio:port MINIO__PORT 9000 Port of the S3 Service. For AWS Bucket S3 over HTTPS, this value can be changed (usually 443). minio:use_ssl MINIO__USE_SSL false Indicates whether the S3 Service has TLS enabled. 
For AWS Bucket S3 over HTTPS, this value could be true. minio:access_key MINIO__ACCESS_KEY ChangeMe Access key for the S3 Service. minio:secret_key MINIO__SECRET_KEY ChangeMe Secret key for the S3 Service. minio:bucket_name MINIO__BUCKET_NAME opencti-bucket S3 bucket name. Useful to change if you use AWS. minio:bucket_region MINIO__BUCKET_REGION us-east-1 Region of the S3 bucket if you are using AWS. This parameter value can be omitted if you use Minio as an S3 Bucket Service. minio:use_aws_role MINIO__USE_AWS_ROLE false Indicates whether to use AWS role auto credentials. When this parameter is configured, the minio:access_key and minio:secret_key parameters are not necessary."},{"location":"deployment/configuration/#smtp-service","title":"SMTP Service","text":"Parameter Environment variable Default value Description smtp:hostname SMTP__HOSTNAME SMTP Server hostname smtp:port SMTP__PORT 465 SMTP Port (25 or 465 for TLS) smtp:use_ssl SMTP__USE_SSL false SMTP over TLS smtp:reject_unauthorized SMTP__REJECT_UNAUTHORIZED false Enable TLS certificate check smtp:username SMTP__USERNAME SMTP Username if authentication is needed smtp:password SMTP__PASSWORD SMTP Password if authentication is needed"},{"location":"deployment/configuration/#ai-service","title":"AI Service","text":"
AI deployment and cloud services
There are several possibilities for Enterprise Edition customers to use OpenCTI AI endpoints:
Use the Filigran AI Service leveraging our custom AI model using the token given by the support team.
Use OpenAI or MistralAI cloud endpoints using your own tokens.
Deploy or use local AI endpoints (Filigran can provide you with the custom model).
| Parameter | Environment variable | Default value | Description |
|---|---|---|---|
| ai:enabled | AI__ENABLED | true | Enable AI capabilities |
| ai:type | AI__TYPE | mistralai | AI type (mistralai or openai) |
| ai:endpoint | AI__ENDPOINT | | Endpoint URL (empty means default cloud service) |
| ai:token | AI__TOKEN | | Token for endpoint credentials |
| ai:model | AI__MODEL | | Model to be used for text generation (depending on type) |
| ai:model_images | AI__MODEL_IMAGES | | Model to be used for image generation (depending on type) |

Using a credentials provider
In some cases, it may not be possible to put dependency credentials directly in environment variables or static configuration. The platform can then retrieve them from a credentials provider. Here is the list of supported providers:
For each dependency, special configuration keys are available to ensure the platform retrieves credentials during the start process. Not all dependencies support this mechanism; here is the exhaustive list:
| Dependency | Prefix |
|---|---|
| ElasticSearch | elasticsearch |
| S3 Storage | minio |
| Redis | redis |
| OpenID secrets | oic |

Common configurations

| Parameter | Environment variable | Default value | Description |
|---|---|---|---|
| {prefix}:credentials_provider:https_cert:reject_unauthorized | {PREFIX}__CREDENTIALS_PROVIDER__HTTPS_CERT__REJECT_UNAUTHORIZED | false | Reject unauthorized TLS connections |
| {prefix}:credentials_provider:https_cert:crt | {PREFIX}__CREDENTIALS_PROVIDER__HTTPS_CERT__CRT | | Path to the HTTPS certificate |
| {prefix}:credentials_provider:https_cert:key | {PREFIX}__CREDENTIALS_PROVIDER__HTTPS_CERT__KEY | | Path to the HTTPS key |
| {prefix}:credentials_provider:https_cert:ca | {PREFIX}__CREDENTIALS_PROVIDER__HTTPS_CERT__CA | | Path to the HTTPS CA certificate |

CyberArk

| Parameter | Environment variable | Default value | Description |
|---|---|---|---|
| {prefix}:credentials_provider:cyberark:uri | {PREFIX}__CREDENTIALS_PROVIDER__CYBERARK__URI | | The URL of the CyberArk endpoint for credentials retrieval (GET request) |
| {prefix}:credentials_provider:cyberark:app_id | {PREFIX}__CREDENTIALS_PROVIDER__CYBERARK__APP_ID | | The application ID used for the dependency within CyberArk |
| {prefix}:credentials_provider:cyberark:safe | {PREFIX}__CREDENTIALS_PROVIDER__CYBERARK__SAFE | | The safe key used for the dependency within CyberArk |
| {prefix}:credentials_provider:cyberark:object | {PREFIX}__CREDENTIALS_PROVIDER__CYBERARK__OBJECT | | The object key used for the dependency within CyberArk |
| {prefix}:credentials_provider:cyberark:default_splitter | {PREFIX}__CREDENTIALS_PROVIDER__CYBERARK__DEFAULT_SPLITTER | : | Default splitter of the credentials results; for "username:password", the default is ":" |
| {prefix}:credentials_provider:cyberark:field_targets | {PREFIX}__CREDENTIALS_PROVIDER__CYBERARK__FIELD_TARGETS | [] | Field targets in the data content response after splitting |
Here is an example for ElasticSearch:
Environment variables:
```yaml
- ELASTICSEARCH__CREDENTIALS_PROVIDER__CYBERARK__URI=http://my.cyberark.com/AIMWebService/api/Accounts
- ELASTICSEARCH__CREDENTIALS_PROVIDER__CYBERARK__APP_ID=opencti-elastic
- ELASTICSEARCH__CREDENTIALS_PROVIDER__CYBERARK__SAFE=mysafe-key
- ELASTICSEARCH__CREDENTIALS_PROVIDER__CYBERARK__OBJECT=myobject-key
- "ELASTICSEARCH__CREDENTIALS_PROVIDER__CYBERARK__DEFAULT_SPLITTER=:" # As default is already ":", may not be necessary
- "ELASTICSEARCH__CREDENTIALS_PROVIDER__CYBERARK__FIELD_TARGETS=[\"username\",\"password\"]"
```
And here is an example for the S3 storage (MinIO) using a custom client certificate:

```yaml
- MINIO__CREDENTIALS_PROVIDER__HTTPS_CERT__CRT=/cert_volume/mycert.crt
- MINIO__CREDENTIALS_PROVIDER__HTTPS_CERT__KEY=/cert_volume/mycert.key
- MINIO__CREDENTIALS_PROVIDER__HTTPS_CERT__CA=/cert_volume/ca.crt
- MINIO__CREDENTIALS_PROVIDER__CYBERARK__URI=http://my.cyberark.com/AIMWebService/api/Accounts
- MINIO__CREDENTIALS_PROVIDER__CYBERARK__APP_ID=opencti-s3
- MINIO__CREDENTIALS_PROVIDER__CYBERARK__SAFE=mysafe-key
- MINIO__CREDENTIALS_PROVIDER__CYBERARK__OBJECT=myobject-key
- "MINIO__CREDENTIALS_PROVIDER__CYBERARK__DEFAULT_SPLITTER=:" # As default is already ":", may not be necessary
- "MINIO__CREDENTIALS_PROVIDER__CYBERARK__FIELD_TARGETS=[\"access_key\",\"secret_key\"]"
```
"},{"location":"deployment/configuration/#engines-schedules-and-managers","title":"Engines, Schedules and Managers","text":"Parameter Environment variable Default value Description rule_engine:enabled RULE_ENGINE__ENABLED true Enable/disable the rule engine rule_engine:lock_key RULE_ENGINE__LOCK_KEY rule_engine_lock Lock key of the engine in Redis - - - - history_manager:enabled HISTORY_MANAGER__ENABLED true Enable/disable the history manager history_manager:lock_key HISTORY_MANAGER__LOCK_KEY history_manager_lock Lock key for the manager in Redis - - - - task_scheduler:enabled TASK_SCHEDULER__ENABLED true Enable/disable the task scheduler task_scheduler:lock_key TASK_SCHEDULER__LOCK_KEY task_manager_lock Lock key for the scheduler in Redis task_scheduler:interval TASK_SCHEDULER__INTERVAL 10000 Interval to check new task to do (in ms) - - - - sync_manager:enabled SYNC_MANAGER__ENABLED true Enable/disable the sync manager sync_manager:lock_key SYNC_MANAGER__LOCK_KEY sync_manager_lock Lock key for the manager in Redis sync_manager:interval SYNC_MANAGER__INTERVAL 10000 Interval to check new sync feeds to consume (in ms) - - - - expiration_scheduler:enabled EXPIRATION_SCHEDULER__ENABLED true Enable/disable the scheduler expiration_scheduler:lock_key EXPIRATION_SCHEDULER__LOCK_KEY expired_manager_lock Lock key for the scheduler in Redis expiration_scheduler:interval EXPIRATION_SCHEDULER__INTERVAL 300000 Interval to check expired indicators (in ms) - - - - retention_manager:enabled RETENTION_MANAGER__ENABLED true Enable/disable the retention manager retention_manager:lock_key RETENTION_MANAGER__LOCK_KEY retention_manager_lock Lock key for the manager in Redis retention_manager:interval RETENTION_MANAGER__INTERVAL 60000 Interval to check items to be deleted (in ms) - - - - notification_manager:enabled NOTIFICATION_MANAGER__ENABLED true Enable/disable the notification manager notification_manager:lock_live_key NOTIFICATION_MANAGER__LOCK_LIVE_KEY notification_live_manager_lock Lock live key for the manager in Redis notification_manager:lock_digest_key NOTIFICATION_MANAGER__LOCK_DIGEST_KEY notification_digest_manager_lock Lock digest key for the manager in Redis notification_manager:interval NOTIFICATION_MANAGER__INTERVAL 10000 Interval to push notifications - - - - publisher_manager:enabled PUBLISHER_MANAGER__ENABLED true Enable/disable the publisher manager publisher_manager:lock_key PUBLISHER_MANAGER__LOCK_KEY publisher_manager_lock Lock key for the manager in Redis publisher_manager:interval PUBLISHER_MANAGER__INTERVAL 10000 Interval to send notifications / digests (in ms) - - - - ingestion_manager:enabled INGESTION_MANAGER__ENABLED true Enable/disable the ingestion manager ingestion_manager:lock_key INGESTION_MANAGER__LOCK_KEY ingestion_manager_lock Lock key for the manager in Redis ingestion_manager:interval INGESTION_MANAGER__INTERVAL 300000 Interval to check for new data in remote feeds - - - - playbook_manager:enabled PLAYBOOK_MANAGER__ENABLED true Enable/disable the playbook manager playbook_manager:lock_key PLAYBOOK_MANAGER__LOCK_KEY publisher_manager_lock Lock key for the manager in Redis playbook_manager:interval PLAYBOOK_MANAGER__INTERVAL 60000 Interval to check new playbooks - - - - activity_manager:enabled ACTIVITY_MANAGER__ENABLED true Enable/disable the activity manager activity_manager:lock_key ACTIVITY_MANAGER__LOCK_KEY activity_manager_lock Lock key for the manager in Redis - - - - connector_manager:enabled CONNECTOR_MANAGER__ENABLED true Enable/disable the connector manager 
connector_manager:lock_key CONNECTOR_MANAGER__LOCK_KEY connector_manager_lock Lock key for the manager in Redis connector_manager:works_day_range CONNECTOR_MANAGER__WORKS_DAY_RANGE 7 Days range before considering the works as too old connector_manager:interval CONNECTOR_MANAGER__INTERVAL 10000 Interval to check the state of the works - - - - import_csv_built_in_connector:enabled IMPORT_CSV_CONNECTOR__ENABLED true Enable/disable the csv import connector import_csv_built_in_connector:validate_before_import IMPORT_CSV_CONNECTOR__VALIDATE_BEFORE_IMPORT false Validates the bundle before importing - - - - file_index_manager:enabled FILE_INDEX_MANAGER__ENABLED true Enable/disable the file indexing manager file_index_manager:stream_lock_key FILE_INDEX_MANAGER__STREAM_LOCK file_index_manager_stream_lock Stream lock key for the manager in Redis file_index_manager:interval FILE_INDEX_MANAGER__INTERVAL 60000 Interval to check for new files - - - - indicator_decay_manager:enabled INDICATOR_DECAY_MANAGER__ENABLED true Enable/disable the indicator decay manager indicator_decay_manager:lock_key INDICATOR_DECAY_MANAGER__LOCK_KEY indicator_decay_manager_lock Lock key for the manager in Redis indicator_decay_manager:interval INDICATOR_DECAY_MANAGER__INTERVAL 60000 Interval to check for indicators to update indicator_decay_manager:batch_size INDICATOR_DECAY_MANAGER__BATCH_SIZE 10000 Number of indicators handled by the manager - - - - garbage_collection_manager:enabled GARBAGE_COLLECTION_MANAGER__ENABLED true Enable/disable the trash manager garbage_collection_manager:lock_key GARBAGE_COLLECTION_MANAGER__LOCK_KEY garbage_collection_manager_lock Lock key for the manager in Redis garbage_collection_manager:interval GARBAGE_COLLECTION_MANAGER__INTERVAL 60000 Interval to check for trash elements to delete garbage_collection_manager:batch_size GARBAGE_COLLECTION_MANAGER__BATCH_SIZE 10000 Number of trash elements to delete at once garbage_collection_manager:deleted_retention_days GARBAGE_COLLECTION_MANAGER__DELETED_RETENTION_DAYS 7 Days after which elements in trash are deleted - - - - telemetry_manager:lock_key TELEMETRY_MANAGER__LOCK_LOCK telemetry_manager_lock Lock key for the manager in Redis
Manager's duties
A description of each manager's duties is available on a dedicated page.
"},{"location":"deployment/configuration/#worker-and-connector","title":"Worker and connector","text":"
Workers and connectors can be configured manually using the configuration file config.yml or through environment variables.
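For instance, a worker running in Docker is typically configured with environment variables such as the following (illustrative values; the parameters are detailed in the tables below):

```yaml
environment:
  # URL and token used by the worker to reach the OpenCTI API
  - OPENCTI_URL=http://opencti:8080
  - OPENCTI_TOKEN=ChangeMe
  - WORKER_LOG_LEVEL=info
```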
| Parameter | Environment variable | Default value | Description |
|---|---|---|---|
| opencti:url | OPENCTI_URL | | The URL of the OpenCTI platform |
| opencti:token | OPENCTI_TOKEN | | A token of an administrator account with the bypass capability |
| mq:use_ssl | / | / | Depending on the API configuration (fetched from the API) |
| mq:use_ssl_ca | MQ_USE_SSL_CA | | Path or CA content |
| mq:use_ssl_cert | MQ_USE_SSL_CERT | | Path or cert content |
| mq:use_ssl_key | MQ_USE_SSL_KEY | | Path or key content |
| mq:use_ssl_passphrase | MQ_USE_SSL_PASSPHRASE | | Passphrase for the key certificate |
| mq:use_ssl_reject_unauthorized | MQ_USE_SSL_REJECT_UNAUTHORIZED | false | Reject RabbitMQ self-signed certificates |

Worker specific configuration

Logging

| Parameter | Environment variable | Default value | Description |
|---|---|---|---|
| worker:log_level | WORKER_LOG_LEVEL | info | The log level (error, warning, info or debug) |

Telemetry

| Parameter | Environment variable | Default value | Description |
|---|---|---|---|
| worker:telemetry_enabled | WORKER_TELEMETRY_ENABLED | false | Enable the Prometheus endpoint |
| worker:telemetry_prometheus_port | WORKER_PROMETHEUS_TELEMETRY_PORT | 14270 | Port of the Prometheus endpoint |
| worker:telemetry_prometheus_host | WORKER_PROMETHEUS_TELEMETRY_HOST | 0.0.0.0 | Listen address of the Prometheus endpoint |

Connector specific configuration
For specific connector configuration, you need to check each connector behavior.
Are you looking for the available connectors? The list is in the OpenCTI Ecosystem.
Connectors are the cornerstone of the OpenCTI platform and allow organizations to easily ingest, enrich or export data. According to their functionality and use case, they are categorized in the following classes.
These connectors automatically retrieve information from an external organization, application, or service, and convert it to STIX 2.1 bundles. Then, they import it into OpenCTI using the workers.
When a new object is created in the platform or on user request, it is possible to trigger an internal enrichment connector to look up and/or search for the object in external organizations, applications, or services. If the object is found, the connectors will generate a STIX 2.1 bundle which will increase the level of knowledge about the concerned object.
These connectors connect to a platform live stream and continuously do something with the received events. In most cases, they are used to consume OpenCTI data and insert it in third-party platforms such as SIEMs, XDRs, EDRs, etc. In some cases, stream connectors can also query the external system on a regular basis and act as import connectors, for instance to gather alerts and sightings related to CTI data and push them to OpenCTI (bi-directional).
Information stored in OpenCTI can be extracted into different file formats like .csv or .json (STIX 2.1).
"},{"location":"deployment/connectors/#connector-configuration","title":"Connector configuration","text":""},{"location":"deployment/connectors/#connector-users-and-tokens","title":"Connector users and tokens","text":"
All connectors have to be able to access the OpenCTI API. To allow this connection, they have 2 mandatory configuration parameters, the OPENCTI_URL and the OPENCTI_TOKEN.
Connectors tokens
Be careful, we strongly recommend using a dedicated token for each connector running in the platform. So you have to create a specific user for each of them.
Also, while most connectors can run with a user belonging to the Connectors group (with the Connector role), the Internal Export Files connectors should run with a user who is Administrator (with the bypass capability), because they impersonate the user requesting the export in order to avoid data leaks.
| Type | Required role | Used permissions |
|---|---|---|
| EXTERNAL_IMPORT | Connector | Import data with the connector user. |
| INTERNAL_ENRICHMENT | Connector | Enrich data with the connector user. |
| INTERNAL_IMPORT_FILE | Connector | Import data with the connector user. |
| INTERNAL_EXPORT_FILE | Administrator | Export data with the user who requested the export. |
| STREAM | Connector | Consume the streams with the connector user. |

Parameters
In addition to these 2 parameters, connectors have other mandatory parameters that need to be set in order for them to work.
A complete example of a connector docker-compose.yml file is shown in the MISP connector example below.
By default, connectors connect to RabbitMQ using the parameters and credentials directly given by the API during the connector registration process. In some cases, you may need to override them.
Be aware that all connectors reach RabbitMQ based on the RabbitMQ configuration provided by the OpenCTI platform. The connector must be able to reach RabbitMQ on the specified hostname and port. If you have a specific Docker network configuration, please be sure to adapt your docker-compose.yml file in such a way that the connector container gets attached to the OpenCTI network, e.g.:
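For example, assuming the OpenCTI stack created an external network named opencti_default (the actual name depends on your deployment), the connector service could declare:

```yaml
connector-misp:
  # ... connector configuration ...
  networks:
    - opencti_default

networks:
  opencti_default:
    external: true
```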
"},{"location":"deployment/connectors/#connector-token","title":"Connector token","text":""},{"location":"deployment/connectors/#create-the-user","title":"Create the user","text":"
As mentioned previously, it is strongly recommended to run each connector with its own user. The Internal Export File connectors should be launched with a user that belongs to a group which has an "Administrator" role (with the bypass all capability enabled).
By default, a group named "Connectors" already exists in the platform. So just create a new user with the name [C] Name of the connector in Settings > Security > Users.
"},{"location":"deployment/connectors/#put-the-user-in-the-group","title":"Put the user in the group","text":"
Just go to the user you have just created and add it to the Connectors group.
Then just get the token of the user displayed in the interface.
You can either directly run the Docker image of connectors or add them to your current docker-compose.yml file.
"},{"location":"deployment/connectors/#add-a-connector-to-your-deployment","title":"Add a connector to your deployment","text":"
For instance, to enable the MISP connector, you can add a new service to your docker-compose.yml file:
```yaml
connector-misp:
  image: opencti/connector-misp:latest
  environment:
    - OPENCTI_URL=http://localhost
    - OPENCTI_TOKEN=ChangeMe
    - CONNECTOR_ID=ChangeMe
    - CONNECTOR_TYPE=EXTERNAL_IMPORT
    - CONNECTOR_NAME=MISP
    - CONNECTOR_SCOPE=misp
    - CONNECTOR_LOG_LEVEL=info
    - MISP_URL=http://localhost # Required
    - MISP_KEY=ChangeMe # Required
    - MISP_SSL_VERIFY=False # Required
    - MISP_CREATE_REPORTS=True # Required, create report for MISP event
    - MISP_REPORT_CLASS=MISP event # Optional, report_class if creating report for event
    - MISP_IMPORT_FROM_DATE=2000-01-01 # Optional, import all event from this date
    - MISP_IMPORT_TAGS=opencti:import,type:osint # Optional, list of tags used for import events
    - MISP_INTERVAL=1 # Required, in minutes
  restart: always
```
"},{"location":"deployment/connectors/#launch-a-standalone-connector","title":"Launch a standalone connector","text":"
To launch a standalone connector, you can use the docker-compose.yml file of the connector itself. Just download the latest release and start the connector:
```bash
$ wget https://github.com/OpenCTI-Platform/connectors/archive/{RELEASE_VERSION}.zip
$ unzip {RELEASE_VERSION}.zip
$ cd connectors-{RELEASE_VERSION}/misp/
```
Change the configuration in the docker-compose.yml according to the parameters of the platform and of the targeted service. Then launch the connector:
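Presumably with the usual detached-mode command, as for the platform itself:

```bash
docker-compose up -d
```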
The connector status can be displayed in the dedicated section of the platform available in Data > Ingestion > Connectors. You will be able to see the statistics of the RabbitMQ queue of the connector:
Problem
If you encounter problems deploying OpenCTI or connectors, you can consult the troubleshooting page.
All components of OpenCTI are shipped both as Docker images and manual installation packages.
Production deployment
For production deployment, we recommend deploying all components in containers, including dependencies, using native cloud services or orchestration systems such as Kubernetes.
To have more details about deploying OpenCTI and its dependencies in cluster mode, please read the dedicated section.
Use Docker
Deploy OpenCTI using Docker and the default docker-compose.yml provided in the docker repository.
Setup
Manual installation
Deploy dependencies and launch the platform manually using the packages released in the GitHub releases.
Just download the appropriate Docker Desktop version for your operating system.
"},{"location":"deployment/installation/#clone-the-repository","title":"Clone the repository","text":"
Docker helpers are available in the Docker GitHub repository.
```bash
mkdir -p /path/to/your/app && cd /path/to/your/app
git clone https://github.com/OpenCTI-Platform/docker.git
cd docker
```
"},{"location":"deployment/installation/#configure-the-environment","title":"Configure the environment","text":"
ElasticSearch / OpenSearch configuration
We strongly recommend that you add the following ElasticSearch / OpenSearch parameter:
```text
thread_pool.search.queue_size=5000
```
Check the OpenCTI Integration User Permissions in OpenSearch/ElasticSearch for detailed information about the user permissions required for the OpenSearch/ElasticSearch integration.
Before running the docker-compose command, the docker-compose.yml file should be configured. By default, the docker-compose.yml file is using environment variables available in the file .env.sample.
You can either rename the file .env.sample as .env and enter the values or just directly edit the docker-compose.yml with the values for your environment.
Configuration static parameters
The complete list of available static parameters is available in the configuration section.
Here is an example to quickly generate the .env file under Linux, especially all the default UUIDv4:
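A minimal sketch, assuming GNU sed and that the .env.sample file shipped in the Docker repository marks the values to replace with ChangeMe:

```bash
cp .env.sample .env
# Replace each ChangeMe placeholder with a freshly generated UUIDv4
while grep -q ChangeMe .env; do
  sed -i "0,/ChangeMe/s//$(cat /proc/sys/kernel/random/uuid)/" .env
done
```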
If your docker-compose deployment does not support .env files, just export all environment variables before launching the platform:
```bash
export $(cat .env | grep -v "#" | xargs)
```
As OpenCTI has a dependency on ElasticSearch, you have to set vm.max_map_count before running the containers, as mentioned in the ElasticSearch documentation.
```bash
sudo sysctl -w vm.max_map_count=1048575
```
To make this parameter persistent, add the following to the end of your /etc/sysctl.conf:
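Based on the command above, the entry would be:

```text
vm.max_map_count=1048575
```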
"},{"location":"deployment/installation/#run-opencti","title":"Run OpenCTI","text":""},{"location":"deployment/installation/#using-single-node-docker","title":"Using single node Docker","text":"
After changing your .env file run docker-compose in detached (-d) mode:
```bash
sudo systemctl start docker.service
# Run docker-compose in detached
docker-compose up -d
```
In order to have the best experience with Docker, we recommend using the Docker stack feature. In this mode you will have the capacity to easily scale your deployment.
```bash
# If your virtual machine is not a part of a Swarm cluster, please use:
docker swarm init
```
Put your environment variables in /etc/environment:
```bash
# If you already exported your variables to .env from above:
sudo bash -c 'cat .env >> /etc/environment'
sudo docker stack deploy --compose-file docker-compose.yml opencti
```
Installation done
You can now go to http://localhost:8080 and log in with the credentials configured in your environment variables.
"},{"location":"deployment/installation/#manual-installation","title":"Manual installation","text":""},{"location":"deployment/installation/#prerequisites","title":"Prerequisites","text":""},{"location":"deployment/installation/#installation-of-dependencies","title":"Installation of dependencies","text":"
You have to install all the needed dependencies for the main application and the workers. The example below is for Debian-based systems:
"},{"location":"deployment/installation/#download-the-application-files","title":"Download the application files","text":"
First, you have to download and extract the latest release file. Then select the version to install depending on your operating system:
For Linux:
If your OS supports libc (Ubuntu, Debian, ...) you have to install the opencti-release-{RELEASE_VERSION}.tar.gz version.
If your OS uses musl (Alpine, ...) you have to install the opencti-release-{RELEASE_VERSION}_musl.tar.gz version.
For Windows:
We don't provide any Windows release for now. However it is still possible to check the code out, manually install the dependencies and build the software.
```bash
mkdir /path/to/your/app && cd /path/to/your/app
wget https://github.com/OpenCTI-Platform/opencti/releases/download/{RELEASE_VERSION}/opencti-release-{RELEASE_VERSION}.tar.gz
tar xvfz opencti-release-{RELEASE_VERSION}.tar.gz
```
"},{"location":"deployment/installation/#install-the-main-platform","title":"Install the main platform","text":""},{"location":"deployment/installation/#configure-the-application","title":"Configure the application","text":"
The main application has just one JSON configuration file to change and a few Python modules to install.
```bash
cd opencti
cp config/default.json config/production.json
```
Change the config/production.json file according to your configuration of ElasticSearch, Redis, RabbitMQ and S3 bucket as well as default credentials (the ADMIN_TOKEN must be a valid UUID).
"},{"location":"deployment/installation/#install-the-python-modules","title":"Install the Python modules","text":"
```bash
cd src/python
pip3 install -r requirements.txt
cd ../..
```
"},{"location":"deployment/installation/#start-the-application","title":"Start the application","text":"
The application is just a NodeJS process; the creation of the database schema and the data migration will be done at startup.
Please verify that your yarn version is greater than 4 and your Node.js version is greater than or equal to v19. Please note that some Node.js versions are outdated in Linux package managers; you can download a recent one from https://nodejs.org/en/download or, alternatively, nvm can help you choose a recent version of Node.js: https://github.com/nvm-sh/nvm
OpenCTI platform is based on a NodeJS runtime, with a memory limit of 8GB by default. If you encounter OutOfMemory exceptions, this limit could be changed:
```yaml
- NODE_OPTIONS=--max-old-space-size=8096
```
"},{"location":"deployment/installation/#workers-and-connectors","title":"Workers and connectors","text":"
OpenCTI workers and connectors are Python processes. If you want to limit the memory of a process, we recommend using Docker directly to do that. You can find more information in the official Docker documentation.
ElasticSearch is also a Java process. In order to set up the Java memory allocation, you can use the environment variable ES_JAVA_OPTS. You can find more information in the official ElasticSearch documentation.
Redis has a very small footprint on keys but will consume memory for the stream. By default, the size of the stream is limited to 2 million events, which represents a memory footprint of around 8 GB. You can find more information in the Redis Docker hub.
The RabbitMQ memory configuration can be found in the official RabbitMQ documentation. RabbitMQ will consume memory up to a specific threshold, so it should be configured along with the Docker memory limitation.
OpenCTI supports multiple ways to integrate with other systems which do not have native connectors or plugins to the platform. Here are the technical features available to ease the connection and the integration of the platform with other applications.
Connectors list
If you are looking for the list of OpenCTI connectors or native integration, please check the OpenCTI Ecosystem.
"},{"location":"deployment/integrations/#native-feeds-and-streams","title":"Native feeds and streams","text":"
To ease integrations with other products, OpenCTI has built-in capabilities to deliver the data to third-parties.
It is possible to create as many CSV feeds as needed, based on filters and accessible over HTTP. CSV feeds are available in Data > Data sharing > CSV feeds.
When creating a CSV feed, you need to select one or multiple types of entities to make available. Then, you must assign a field (of an entity type) to each column in the CSV:
Details
For more information about CSV feeds, filters and configuration, please check the Native feeds page.
Most of the modern cybersecurity systems such as SIEMs, EDRs, XDRs and even firewalls support the TAXII protocol which is basically a paginated HTTP STIX feed. OpenCTI implements a TAXII 2.1 server with the ability to create as many TAXII collections as needed in Data > Data sharing > TAXII Collections.
TAXII collections are a sub-selection of the knowledge available in the platform and rely on filters. For instance, it is possible to create TAXII collections for pieces of malware with a given label, for indicators with a score greater than n, etc.
After implementing CSV feeds and TAXII collections, we figured out that those 2 stateless APIs are definitely not enough to tackle advanced information sharing challenges such as:
Real-time transmission of the information (i.e. avoid hundreds of systems pulling data every 5 minutes).
Dependencies resolution (i.e. an intrusion set created by an organization, but the organization is not in the TAXII collection).
Partial updates for huge entities such as reports (i.e. just having the update event).
Delete events when necessary (i.e. to handle indicator expiration in third-party systems, for instance).
That's why we've developed the live streams. They are available in Data > Data sharing > Live streams. As with TAXII collections, it is possible to create as many streams as needed using filters.
Streams implement the HTTP SSE (Server-sent events) protocol and give applications the possibility to consume a real time pure STIX 2.1 stream. Stream connectors in the OpenCTI Ecosystem are using live streams to consume data and do something such as create / update / delete information in SIEMs, XDRs, etc.
Your API key can be found in your profile, available by clicking on the top right icon.
Using basic authentication
```text
Username: Your platform username
Password: Your platform password
Authorization: Basic c2FtdWVsLmhhc3NpbmVBZmlsaWdyYW4uaW86TG91aXNlMTMwNCM=
```
Using client certificate authentication
To know how to configure the client certificate authentication, please consult the authentication configuration section.
"},{"location":"deployment/integrations/#api-and-libraries","title":"API and libraries","text":""},{"location":"deployment/integrations/#graphql-api","title":"GraphQL API","text":"
To allow analysts and developers to implement more custom or complex use cases, a full GraphQL API is available in the application on the /graphql endpoint.
The API can be queried using various GraphQL clients such as Postman, but you can leverage any HTTP client to forge GraphQL queries using POST methods.
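For example, a minimal query with curl (the URL and token are illustrative values, and the about field is assumed here as a simple smoke-test query):

```bash
curl -s https://opencti.mydomain.com/graphql \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ChangeMe" \
  -d '{"query": "{ about { version } }"}'
```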
The playground is available on the /graphql endpoint. A link button is also available in the profile of your user.
All the schema documentation is directly available in the playground.
If you are already logged in to OpenCTI in the same browser, you should be able to directly run some requests. If you are not authenticated or want to authenticate only through the playground, you can use a header configuration with your profile token.
Example of configuration (bottom left of the playground):
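Presumably along these lines, using your profile token as a bearer token (illustrative value):

```json
{
  "Authorization": "Bearer ChangeMe"
}
```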
Additional GraphQL documentation
To find out more about GraphQL and the playground, you can find two additional documentation pages: the GraphQL API page and the GraphQL playground page.
Since not everyone is familiar with GraphQL APIs, we've developed a Python library to ease the interaction with it. The library is pretty easy to use. To initiate the client:
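A minimal sketch with the pycti library (the URL and token are illustrative values):

```python
from pycti import OpenCTIApiClient

# Illustrative values: replace with your platform URL and your user token
api_url = "https://opencti.mydomain.com"
api_token = "ChangeMe"

client = OpenCTIApiClient(api_url, api_token)
```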
The activity manager in OpenCTI is a component that monitors and logs the user actions in the platform such as login, settings updates, and user activities if configured (read, update, etc.).
The expiration scheduler is responsible for monitoring expired elements in the platform. It cancels the access rights of expired user accounts and revokes expired indicators from the platform.
The synchronization manager enables the data sharing between multiple OpenCTI platforms. It allows the user to create and configure synchronizers which are processes that connect to the live streams of remote OpenCTI platforms and import the data into the local platform.
The retention manager is a component that allows the user to define rules to help delete data in OpenCTI that is no longer relevant or useful. This helps to optimize the performance and storage of the OpenCTI platform and ensures the quality and accuracy of the data.
The playbook manager handles the automation scenarios which can be fully customized and enabled by platform administrators to enrich, filter and modify the data created or updated in the platform.
Please read the Playbook automation page to get more information.
"},{"location":"deployment/managers/#file-index-manager","title":"File index manager","text":"
The file indexing manager extracts and indexes the text content of the files, and stores it in the database. It allows users to search for text content within files uploaded to the platform.
The telemetry manager periodically collects statistical data about platform usage.
More information about data telemetry can be found here.
"},{"location":"deployment/map/","title":"Deploy on-premise map server with OpenCTI styles","text":""},{"location":"deployment/map/#introduction","title":"Introduction","text":"
The OpenStreetMap tiles for the planet will take 80GB. Here are the instructions to deploy a local OpenStreetMap server with the OpenCTI styles.
"},{"location":"deployment/map/#create-directory-for-the-data-and-upload-planet-data","title":"Create directory for the data and upload planet data","text":"
When you launch the map server container, you will need to mount a volume with the planet tiles data. Create the directory for the data:
mkdir /var/YOUR_DATA_DIR
We host the free-to-use planet tiles; just download the planet data from filigran.io.
Once the server is running, you should see the list of available styles:
Click on \"Viewer\", and take the URL:
👉 http://YOUR_URL/styles/{ID}/...
In the OpenCTI configuration, just put:
Parameter | Environment variable | Value | Description
app:map_tile_server_dark | APP__MAP_TILE_SERVER_DARK | http://{YOUR_MAP_SERVER}/styles/{ID_DARK}/{z}/{x}/{y}.png | The address of the OpenStreetMap provider with dark theme style
app:map_tile_server_light | APP__MAP_TILE_SERVER_LIGHT | http://{YOUR_MAP_SERVER}/styles/{ID_LIGHT}/{z}/{x}/{y}.png | The address of the OpenStreetMap provider with light theme style
Before starting the installation, let's discover how OpenCTI is working, which dependencies are needed and what are the minimal requirements to deploy it in production.
The platform is the central part of the OpenCTI technological stack. It gives users access to the user interface and also provides the GraphQL API used by connectors and workers to insert data. In the context of a production deployment, you may need to scale horizontally and launch multiple platforms behind a load balancer connected to the same databases (ElasticSearch, Redis, S3, RabbitMQ).
The workers are standalone Python processes consuming messages from the RabbitMQ broker in order to do asynchronous write queries. You can launch as many workers as you need to increase the write performances. At some point, the write performances will be limited by the throughput of the ElasticSearch database cluster.
Number of workers
If you need to increase performances, it is better to launch more platforms to handle worker queries. The recommended setup is to have at least one platform for 3 workers (i.e. 9 workers distributed over 3 platforms).
The connectors are third-party pieces of software (Python processes) that can play five different roles on the platform:
Type | Description | Examples
EXTERNAL_IMPORT | Pull data from remote sources, convert it to STIX2 and insert it on the OpenCTI platform. | MITRE Datasets, MISP, CVE, AlienVault, Mandiant, etc.
INTERNAL_ENRICHMENT | Listen for new OpenCTI entities or user requests, pull data from remote sources to enrich. | Shodan, DomainTools, IpInfo, etc.
INTERNAL_IMPORT_FILE | Extract data from files uploaded to OpenCTI through the UI or the API. | STIX 2.1, PDF, Text, HTML, etc.
INTERNAL_EXPORT_FILE | Generate exports from OpenCTI data, based on a single object or a list. | STIX 2.1, CSV, PDF, etc.
STREAM | Consume a platform data stream and do something with events. | Splunk, Elastic Security, Q-Radar, etc.
List of connectors
You can find all currently available connectors in the OpenCTI Ecosystem.
"},{"location":"deployment/overview/#infrastructure-requirements","title":"Infrastructure requirements","text":""},{"location":"deployment/overview/#dependencies","title":"Dependencies","text":"Component Version CPU RAM Disk type Disk space ElasticSearch / OpenSearch >= 8.0 / >= 2.9 2 cores \u2265 8GB SSD \u2265 16GB Redis >= 7.1 1 core \u2265 1GB SSD \u2265 16GB RabbitMQ >= 3.11 1 core \u2265 512MB Standard \u2265 2GB S3 / MinIO >= RELEASE.2023-02 1 core \u2265 128MB SSD \u2265 16GB"},{"location":"deployment/overview/#platform_1","title":"Platform","text":"Component CPU RAM Disk type Disk space OpenCTI Core 2 cores \u2265 8GB None (stateless) - Worker(s) 1 core \u2265 128MB None (stateless) - Connector(s) 1 core \u2265 128MB None (stateless) -
Clustering
To have more details about deploying OpenCTI and its dependencies in cluster mode, please read the dedicated section.
OpenCTI is an open and modular platform. A lot of connectors, plugins and clients are created by Filigran and by the community. You can find here other resources available to complete your OpenCTI journey.
Access monthly sectorial analysis from our expert team, based on knowledge and data collected by our partners.
Case studies
Explore the Filigran case studies about stories and usages of the platform among our communities and customers.
"},{"location":"deployment/rollover/","title":"Indices and rollover policies","text":"
Default rollover policies
Since OpenCTI 5.9.0, rollover policies are automatically created when the platform is initialized for the first time. If your platform has been initialized using an older version of OpenCTI or if you would like to understand (and customize) rollover policies please read the following documentation.
ElasticSearch and OpenSearch both support rollover on indices. OpenCTI has been designed to use aliases for indices and so supports index lifecycle policies very well. By default, OpenCTI initializes indices with a suffix of -00001 and uses wildcards to query indices. When rollover policies are implemented (the default starting with OpenCTI 5.9.x if you initialized your platform at this version or later), indices are split to keep a reasonable volume of data in shards.
"},{"location":"deployment/rollover/#opencti-integration-user-permissions-in-opensearchelasticsearch","title":"OpenCTI Integration User Permissions in OpenSearch/ElasticSearch","text":"
Index Permissions
Patterns: opencti* (Dependent on the parameter elasticsearch:index_prefix value)
Permissions: indices_all
Cluster Permissions
cluster_composite_ops_ro
cluster_manage_index_templates
cluster:admin/ingest/pipeline/put
cluster:admin/opendistro/ism/policy/write
cluster:monitor/health
cluster:monitor/main
cluster:monitor/state
indices:admin/index_template/put
indices:data/read/scroll/clear
indices:data/read/scroll
indices:data/write/bulk
About indices:* in Cluster Permissions
It is crucial to include indices:* permissions in Cluster Permissions for the proper functioning of the OpenCTI integration. Removing these, even if already present in Index Permissions, may result in startup issues for the OpenCTI Platform.
By default, a rollover policy is applied on all indices used by OpenCTI.
opencti_deleted_objects
opencti_files
opencti_history
opencti_inferred_entities
opencti_inferred_relationships
opencti_internal_objects
opencti_internal_relationships
opencti_stix_core_relationships
opencti_stix_cyber_observable_relationships
opencti_stix_cyber_observables
opencti_stix_domain_objects
opencti_stix_meta_objects
opencti_stix_meta_relationships
opencti_stix_sighting_relationships
For your information, the indices which can grow rapidly are:
Index opencti_stix_meta_relationships: it contains all the nested relationships between objects and labels / marking definitions / external references / authors, etc.
Index opencti_history: it contains the history log of all objects in the platform.
Index opencti_stix_cyber_observables: it contains all observables stored in the platform.
Index opencti_stix_core_relationships: it contains all main STIX relationships stored in the platform.
Here is the recommended policy (initialized starting 5.9.X):
Maximum primary shard size: 50 GB
Maximum age: 365 days
Maximum documents: 75,000,000
"},{"location":"deployment/rollover/#applying-rollover-policies-on-existing-indices","title":"Applying rollover policies on existing indices","text":"
Procedure information
Please read the following only if your platform has been initialized before 5.9.0; otherwise lifecycle policies have been created automatically (but you can still customize them).
Unfortunately, to implement rollover policies on ElasticSearch / OpenSearch indices, you will need to re-index all the data into new indices using ElasticSearch capabilities.
Then, in the OpenCTI configuration, change the ElasticSearch / OpenSearch default prefix to octi (default is opencti).
"},{"location":"deployment/rollover/#create-the-rollover-policy","title":"Create the rollover policy","text":"
Create a rollover policy named octi-ilm-policy (in Kibana, Management > Index Lifecycle Policies):
Maximum primary shard size: 50 GB
Maximum age: 365 days
Maximum documents: 75,000,000
"},{"location":"deployment/rollover/#create-index-templates","title":"Create index templates","text":"
In Kibana, clone the opencti-index-template to have one index template per OpenCTI index with the appropriate rollover policy, index pattern and rollover alias (in Kibana, Management > Index Management > Index Templates).
Create the following index templates:
octi_deleted_objects
octi_files
octi_history
octi_inferred_entities
octi_inferred_relationships
octi_internal_objects
octi_internal_relationships
octi_stix_core_relationships
octi_stix_cyber_observable_relationships
octi_stix_cyber_observables
octi_stix_domain_objects
octi_stix_meta_objects
octi_stix_meta_relationships
octi_stix_sighting_relationships
Here is the overview of all templates (you should have something with octi_ instead of opencti_).
"},{"location":"deployment/rollover/#apply-rollover-policy-on-all-index-templates","title":"Apply rollover policy on all index templates","text":"
Then, going back to the index lifecycle policies screen, click on the "+" button of the octi-ilm-policy to add the policy to each previously created index template, with the proper "Alias for rollover index".
"},{"location":"deployment/rollover/#bootstrap-all-new-indices","title":"Bootstrap all new indices","text":"
Before we can re-index, we need to create the new indices with aliases.
PUT octi_history-000001
{
  "aliases": {
    "octi_history": {
      "is_write_index": true
    }
  }
}
Repeat this step for all indices:
octi_deleted_objects
octi_files
octi_history
octi_inferred_entities
octi_inferred_relationships
octi_internal_objects
octi_internal_relationships
octi_stix_core_relationships
octi_stix_cyber_observable_relationships
octi_stix_cyber_observables
octi_stix_domain_objects
octi_stix_meta_objects
octi_stix_meta_relationships
octi_stix_sighting_relationships
"},{"location":"deployment/rollover/#re-index-all-indices","title":"Re-index all indices","text":"
Using the reindex API, re-index all indices one by one:
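For example, a minimal reindex call for the history index looks as follows; repeat it for each index, adjusting the source and destination names (large indices may need extra settings such as slicing):

POST _reindex
{
  "source": {
    "index": "opencti_history"
  },
  "dest": {
    "index": "octi_history"
  }
}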
This page aims to explain the typical errors you can have with your OpenCTI platform.
"},{"location":"deployment/troubleshooting/#finding-the-relevant-logs","title":"Finding the relevant logs","text":"
It is highly recommended to monitor the error logs of the platforms, workers and connectors. All the components have log outputs in an understandable JSON format. If necessary, it is always possible to increase the log level. In production, it is recommended to have the log level set to error.
After 5 retries, if an element required to create another element is missing, the platform raises an exception. It usually comes from a connector that generates inconsistent STIX 2.1 bundles.
Cant upsert entity. Too many entities resolved
OpenCTI received an entity which matches too many other entities in the platform. In this condition we cannot take a decision. We need to dig into the data bundle to identify why it matches too many entities and fix the data in the bundle or the platform according to what you expect.
Execution timeout, too many concurrent call on the same entities
The platform supports multi workers and multiple parallel creation but different parameters can lead to some locking timeout in the execution.
Throughput capacity of your ElasticSearch
Number of workers started at the same time
Dependencies between data
Merging capacity of OpenCTI
If you have this kind of error, limit the number of workers deployed. Try to find the right balance between the number of workers, connectors and the ElasticSearch sizing.
Depending on your installation mode, upgrade path may change.
Migrations
The platform is taking care of all necessary underlying migrations in the databases if any. You can upgrade OpenCTI from any version to the latest one, including skipping multiple major releases.
The GraphQL playground is an integrated development environment (IDE) provided by OpenCTI for exploring and testing GraphQL APIs. It offers a user-friendly interface that allows developers to interactively query the GraphQL schema, experiment with different queries, and visualize the responses.
The Playground provides a text editor where developers can write GraphQL queries, mutations, and subscriptions. As you type, the Playground offers syntax highlighting, autocompletion, and error checking to aid in query composition.
Developers can access comprehensive documentation for the GraphQL schema directly within the Playground. This documentation includes descriptions of all available types, fields, and directives, making it easy to understand the data model and construct queries.
The playground keeps track of previously executed queries, allowing developers to revisit and reuse queries from previous sessions. This feature streamlines the development process by eliminating the need to retype complex queries.
Upon executing a query, the playground displays the response data in a structured and readable format. JSON responses are presented in a collapsible tree view, making it easy to navigate nested data structures and inspect individual fields.
Developers can explore the GraphQL schema using the built-in schema viewer. This feature provides a graphical representation of the schema, showing types, fields, and their relationships. Developers can explore the schema and understand its structure.
To access the GraphQL playground, navigate to the GraphQL endpoint of your OpenCTI instance: https://[your-opencti-instance]/graphql. Then, follow these steps to utilize the playground:
Query editor: Write GraphQL queries, mutations, and subscriptions in the text editor. Use syntax highlighting and autocompletion to speed up query composition.
Documentation explorer: Access documentation for the GraphQL schema by clicking on the "Docs" tab on the right. Browse types, fields, and descriptions to understand the available data and query syntax.
Query history: View and execute previously executed queries from the "History" tab on the top. Reuse queries and experiment with variations without retyping.
Response pane: Visualize query responses in the response pane. Expand and collapse sections to navigate complex data structures and inspect individual fields.
Schema viewer: Explore the GraphQL schema interactively using the "Schema" tab on the right. Navigate types, fields, and relationships to understand the data model and plan queries.
A connector in OpenCTI is a service that runs next to the platform and can be implemented in almost any programming language that has STIX2 support. Connectors are used to extend the functionality of OpenCTI and allow operators to shift some of the processing workload to external services. To use the conveniently provided OpenCTI connector SDK you need to use Python3 at the moment.
We chose to have a very decentralized approach on connectors, in order to bring maximum freedom to developers and vendors. So a connector on OpenCTI can be defined as a standalone Python 3 process that pushes an understandable format of data to an ingestion queue of messages.
Each connector must implement a long-running process that can be launched just by executing the main Python file. The only mandatory dependency is the OpenCTIConnectorHelper class that enables the connector to send data to OpenCTI.
First, think about your use case to choose an appropriate connector type: what do you want to achieve with your connector? The following table gives you an overview of the current connector types and some typical use cases:
Connector types
Type | Typical use cases | Example connector
EXTERNAL_IMPORT | Integrate external TI providers, integrate external TI platforms | AlienVault
INTERNAL_ENRICHMENT | Enhance existing data with additional knowledge | AbuseIP
INTERNAL_IMPORT_FILE | (Bulk) import knowledge from files | Import document
INTERNAL_EXPORT_FILE | (Bulk) export knowledge to files | STIX 2.1, CSV
STREAM | Integrate external TI providers, integrate external TI platforms | Elastic Security
After you've selected your connector type, make yourself familiar with STIX2 and the supported relationships in OpenCTI. Having some knowledge about the internal data models will help you a lot with the implementation of your idea.
To develop and test your connector, you need a running OpenCTI instance with the frontend and the messaging broker accessible. If you don't plan on developing anything for the OpenCTI platform or the frontend, the easiest setup for connector development is using the Docker setup. For more details, see here.
To give you an easy starting point we prepared an example connector in the public repository you can use as template to bootstrap your development.
Some prerequisites we recommend for following this tutorial:
Code editor with good Python3 support (e.g. Visual Studio Code with the Python extension pack)
Python3 + setuptools is installed and configured
Command shell (either Linux/Mac terminal or WSL on Windows)
In the terminal check out the connectors repository and copy the template connector to $myconnector (replace it with your name throughout the following text examples).
$ pip3 install black flake8 pycti
# Fork the current repository, then clone your fork
$ git clone https://github.com/YOUR-USERNAME/connectors.git
$ cd connectors
$ git remote add upstream https://github.com/OpenCTI-Platform/connectors.git
# Create a branch for your feature/fix
$ git checkout -b [branch-name]
# Copy the appropriate template directory for the connector type
$ cp -r templates/$connector_type $connector_type/$myconnector
$ cd $connector_type/$myconnector
$ ls -R
Dockerfile docker-compose.yml requirements.txt
README.md entrypoint.sh src

./src:
lib main.py

./src/lib:
$connector_type.py
"},{"location":"development/connectors/#changing-the-template","title":"Changing the template","text":"
There are a few files in the template we need to change for our connector to be unique. You can check all the places where you need to change your connector name with the following command (the output will look similar):
$ grep -Ri template .

README.md:# OpenCTI Template Connector
README.md:| `connector_type` | `CONNECTOR_TYPE` | Yes | Must be `Template_Type` (this is the connector type). |
README.md:| `connector_name` | `CONNECTOR_NAME` | Yes | Option `Template` |
README.md:| `connector_scope` | `CONNECTOR_SCOPE` | Yes | Supported scope: Template Scope (MIME Type or Stix Object) |
README.md:| `template_attribute` | `TEMPLATE_ATTRIBUTE` | Yes | Additional setting for the connector itself |
docker-compose.yml: connector-template:
docker-compose.yml: image: opencti/connector-template:4.5.5
docker-compose.yml: - CONNECTOR_TYPE=Template_Type
docker-compose.yml: - CONNECTOR_NAME=Template
docker-compose.yml: - CONNECTOR_SCOPE=Template_Scope # MIME type or Stix Object
entrypoint.sh:cd /opt/opencti-connector-template
Dockerfile:COPY src /opt/opencti-template
Dockerfile: cd /opt/opencti-connector-template && \
src/main.py:class Template:
src/main.py: "TEMPLATE_ATTRIBUTE", ["template", "attribute"], config, True
src/main.py: connectorTemplate = Template()
src/main.py: connectorTemplate.run()
src/config.yml.sample: type: 'Template_Type'
src/config.yml.sample: name: 'Template'
src/config.yml.sample: scope: 'Template_Scope' # MIME type or SCO
Required changes:
Change Template or template mentions to your connector name, e.g. ImportCsv or importcsv
Change TEMPLATE mentions to your connector name e.g. IMPORTCSV
Change Template_Scope mentions to the required scope of your connector. For processing imported files, this can be the MIME type, e.g. application/pdf; for enriching existing information in OpenCTI, define the STIX object's name, e.g. Report. Multiple scopes can be separated by a simple comma.
Change Template_Type to the connector type you wish to develop. The OpenCTI types are defined hereafter:
EXTERNAL_IMPORT
INTERNAL_ENRICHMENT
INTERNAL_EXPORT_FILE
INTERNAL_IMPORT_FILE
STREAM
"},{"location":"development/connectors/#development","title":"Development","text":""},{"location":"development/connectors/#initialize-the-opencti-connector-helper","title":"Initialize the OpenCTI connector helper","text":"
After getting the configuration parameters of your connector, you have to initialize the OpenCTI connector helper by using the pycti Python library. This is shown in the following example:
import os

import yaml
from pycti import OpenCTIConnectorHelper, get_config_variable

class TemplateConnector:
    def __init__(self):
        # Instantiate the connector helper from config
        config_file_path = os.path.dirname(os.path.abspath(__file__)) + "/config.yml"
        config = (
            yaml.load(open(config_file_path), Loader=yaml.SafeLoader)
            if os.path.isfile(config_file_path)
            else {}
        )
        self.helper = OpenCTIConnectorHelper(config)
        self.custom_attribute = get_config_variable(
            "TEMPLATE_ATTRIBUTE", ["template", "attribute"], config
        )
Since there are some basic differences in the tasks of the different connector classes, the structure is also a bit class dependent. While the external-import and the stream connectors run independently at a regular interval or constantly, the other 3 connector classes only run when requested by the OpenCTI platform.
While the self-triggered connectors run independently, the OpenCTI-triggered connectors need to define a callback function, which is executed when the platform requests the connector to start its work. This is done via self.helper.listen(self._process_message). The difference in setup can be seen in the appended examples.
import time

from pycti import OpenCTIConnectorHelper, get_config_variable

class TemplateConnector:
    def __init__(self) -> None:
        # Initialization procedures
        [...]

    def _process_message(self, data: dict) -> str:
        # Main procedure

    # Start the main loop
    def start(self) -> None:
        self.helper.listen(self._process_message)

if __name__ == "__main__":
    try:
        template_connector = TemplateConnector()
        template_connector.start()
    except Exception as e:
        print(e)
        time.sleep(10)
        exit(0)
"},{"location":"development/connectors/#write-and-read-operations","title":"Write and Read Operations","text":"
When using the OpenCTIConnectorHelper class, there are two ways of reading data from or writing data to the OpenCTI platform:
via the OpenCTI API interface via self.helper.api
via the OpenCTI worker via self.helper.send_stix2_bundle
"},{"location":"development/connectors/#sending-data-to-the-opencti-platform","title":"Sending data to the OpenCTI platform","text":"
The recommended way of creating or updating data in the OpenCTI platform is via the OpenCTI worker. This enables the connector to just send and forget about thousands of entities at once, without having to think about the ingestion order, performance or error handling.
⚠️ Please DO NOT use the API interface to create new objects in connectors.
The OpenCTI connector helper method send_stix2_bundle must be used to send data to OpenCTI. The send_stix2_bundle function takes 2 arguments.
A serialized STIX2 bundle as a string (mandatory)
A list of entities types that should be ingested (optional)
Here is an example using the STIX2 Python library:
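(A minimal sketch; the indicator values are placeholders and self.helper is the connector helper initialized earlier.)

from stix2 import Bundle, Indicator

# Build a STIX 2.1 indicator (values are placeholders)
indicator = Indicator(
    name="C2 server",
    pattern="[domain-name:value = 'www.example.com']",
    pattern_type="stix",
)

# Serialize the bundle and hand it over to the OpenCTI worker
bundle = Bundle(objects=[indicator], allow_custom=True).serialize()
self.helper.send_stix2_bundle(bundle)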
"},{"location":"development/connectors/#reading-from-the-opencti-platform","title":"Reading from the OpenCTI platform","text":"
Read queries to the OpenCTI platform can be made using the API interface, and the returned STIX IDs can then be attached to reports to create the relationship between those two entities.
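For instance, a minimal read via the API interface; the entity type and the name used for the lookup are hypothetical:

# Read an entity via the API interface (filter values are hypothetical)
entity = self.helper.api.intrusion_set.read(
    filters={
        "mode": "and",
        "filters": [
            {"key": "name", "values": ["APT28"], "operator": "eq", "mode": "or"}
        ],
        "filterGroups": [],
    }
)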
If you want to add the found entity via objects_refs to another SDO, simply add a list of stix_ids to the SDO. Here's an example using the entity from the code snippet above:
from stix2 import Report

[...]

report = Report(
    id=report["standard_id"],
    object_refs=[entity["standard_id"]],
)
When something crashes for a user, you as a developer want to know as much as possible about this incident to easily improve your code and remove this issue. To do so, it is very helpful if your connector documents what it does. Use info messages for big changes like the beginning or the finishing of an operation, but to facilitate your bug removal attempts, implement debug messages for minor operation changes to document different steps in your code.
When encountering a crash, the connector's user can easily restart the troubling connector with the debug logging activated.
CONNECTOR_LOG_LEVEL=debug
Using those additional log messages, the bug report is more enriched with information about the possible cause of the problem. Here's an example of how the logging should be implemented:
def run(self) -> None:
    self.helper.log_info("Template connector starts")
    results = self._ask_for_news()
    [...]

def _ask_for_news(self) -> list:
    overall = []
    for i in range(0, 10):
        self.helper.log_debug(f"Asking about news with count '{i}'")
        # Do something that produces `result`
        self.helper.log_debug(f"Result: '{result}'")
        overall.append(result)
    return overall
Please make sure that the debug messages are rich in useful information, but that they are not redundant and that the user is not drowned in unnecessary information.
If you are still unsure about how to implement certain things in your connector, we advise you to have a look at the code of other connectors of the same type. Maybe they are already using an approach suitable for addressing your problem.
"},{"location":"development/connectors/#opencti-triggered-connector-special-cases","title":"OpenCTI triggered Connector - Special cases","text":""},{"location":"development/connectors/#data-layout-of-dictionary-from-callback-function","title":"Data Layout of Dictionary from Callback function","text":"
OpenCTI sends the connector a few instructions via the data dictionary in the callback function. Depending on the connector type, the data dictionary content is a bit different. Here are a few examples for each connector type.
Internal Import Connector
{ \n \"file_id\": \"<fileId>\",\n \"file_mime\": \"application/pdf\", \n \"file_fetch\": \"storage/get/<file_id>\", // Path to get the file\n \"entity_id\": \"report--82843863-6301-59da-b783-fe98249b464e\", // Context of the upload\n}\n
Internal Enrichment Connector
{ \n \"entity_id\": \"<stixCoreObjectId>\" // StixID of the object wanting to be enriched\n}\n
Internal Export Connector
{ \n \"export_scope\": \"single\", // 'single' or 'list'\n \"export_type\": \"simple\", // 'simple' or 'full'\n \"file_name\": \"<fileName>\", // Export expected file name\n \"max_marking\": \"<maxMarkingId>\", // Max marking id\n \"entity_type\": \"AttackPattern\", // Exported entity type\n // ONLY for single entity export\n \"entity_id\": \"<entity.id>\", // Exported element\n // ONLY for list entity export\n \"list_params\": \"[<parameters>]\" // Parameters for finding entities\n}\n
"},{"location":"development/connectors/#self-triggered-connector-special-cases","title":"Self triggered Connector - Special cases","text":""},{"location":"development/connectors/#initiating-a-work-before-pushing-data","title":"Initiating a 'Work' before pushing data","text":"
For self-triggered connectors, OpenCTI has to be told about new jobs to process and import. This is done by registering a so-called work before sending the STIX bundle and signalling the end of the work. Here is an example:
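(A minimal sketch, assuming a helper initialized as in the previous sections and a serialized bundle to send.)

# Register a new work before sending the bundle
friendly_name = "Template connector run"
work_id = self.helper.api.work.initiate_work(
    self.helper.connect_id, friendly_name
)

# Send the bundle within the context of this work
self.helper.send_stix2_bundle(bundle, work_id=work_id)

# Signal the end of the work
message = "Connector successfully run"
self.helper.api.work.to_processed(work_id, message)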
By implementing the work registration, they will show up as shown in this screenshot for the MITRE ATT&CK connector:
The connector is also responsible for making sure that it runs in certain intervals. In most cases, the intervals are definable in the connector config and then only need to be set and updated during the runtime.
import time
from datetime import datetime

from pycti import get_config_variable

class TemplateConnector:
    def __init__(self) -> None:
        # Initialization procedures
        [...]
        self.template_interval = get_config_variable(
            "TEMPLATE_INTERVAL", ["template", "interval"], config, True
        )

    def get_interval(self) -> int:
        return int(self.template_interval) * 60 * 60 * 24

    def run(self) -> None:
        self.helper.log_info("Fetching knowledge...")
        while True:
            try:
                # Get the current timestamp and check
                timestamp = int(time.time())
                current_state = self.helper.get_state()
                if current_state is not None and "last_run" in current_state:
                    last_run = current_state["last_run"]
                    self.helper.log_info(
                        "Connector last run: "
                        + datetime.utcfromtimestamp(last_run).strftime(
                            "%Y-%m-%d %H:%M:%S"
                        )
                    )
                else:
                    last_run = None
                    self.helper.log_info("Connector has never run")
                # If the last_run is more than interval-1 day
                if last_run is None or (
                    (timestamp - last_run)
                    > ((int(self.template_interval) - 1) * 60 * 60 * 24)
                ):
                    timestamp = int(time.time())
                    now = datetime.utcfromtimestamp(timestamp)
                    friendly_name = "Connector run @ " + now.strftime(
                        "%Y-%m-%d %H:%M:%S"
                    )

                    ###
                    # RUN CODE HERE
                    # (work_id is expected to be created in this section,
                    # e.g. via initiate_work as in the example above)
                    ###

                    # Store the current timestamp as a last run
                    self.helper.log_info(
                        "Connector successfully run, storing last_run as "
                        + str(timestamp)
                    )
                    self.helper.set_state({"last_run": timestamp})
                    message = (
                        "Last_run stored, next run in: "
                        + str(round(self.get_interval() / 60 / 60 / 24, 2))
                        + " days"
                    )
                    self.helper.api.work.to_processed(work_id, message)
                    self.helper.log_info(message)
                    time.sleep(60)
                else:
                    new_interval = self.get_interval() - (timestamp - last_run)
                    self.helper.log_info(
                        "Connector will not run, next run in: "
                        + str(round(new_interval / 60 / 60 / 24, 2))
                        + " days"
                    )
                    time.sleep(60)
            except Exception as e:
                # Minimal error handling so the loop keeps running
                self.helper.log_error(str(e))
                time.sleep(60)
"},{"location":"development/connectors/#running-the-connector","title":"Running the connector","text":"
For development purposes, it is easier to simply run the python script locally until everything works as it should.
$ virtualenv env
$ source ./env/bin/activate
$ pip3 install -r requirements.txt
$ cp config.yml.sample config.yml
# Define the opencti url and token, as well as the connector's id
$ vim config.yml
$ python3 main.py
INFO:root:Listing Threat-Actors with filters null.
INFO:root:Connector registered with ID: a2de809c-fbb9-491d-90c0-96c7d1766000
INFO:root:Starting ping alive thread
...
Before submitting a Pull Request, please test your code for different use cases and scenarios. We don't have an automatic testing suite for the connectors yet, thus we highly depend on developers thinking about creative scenarios their code could encounter.
"},{"location":"development/connectors/#prepare-for-release","title":"Prepare for release","text":"
If you plan to provide your connector to be used by the community (❤️), your code should pass the following (minimum) criteria.
# Linting with flake8 contains no errors or warnings
$ flake8 --ignore=E,W
# Verify formatting with black
$ black .
All done! ✨ 🍰 ✨
1 file left unchanged.
# Verify import sorting
$ isort --profile black .
Fixing /path/to/connector/file.py
# Push your feature/fix on Github
$ git add [file(s)]
$ git commit -m "[connector_name] descriptive message"
$ git push origin [branch-name]
# Open a pull request with the title "[connector_name] message"
If you have any trouble with this just reach out to the OpenCTI core team. We are happy to assist with this.
As OpenCTI has a dependency on ElasticSearch, you have to set the vm.max_map_count before running the containers, as mentioned in the ElasticSearch documentation.
$ sudo sysctl -w vm.max_map_count=262144
"},{"location":"development/environment_ubuntu/#nodejs-and-yarn","title":"NodeJS and yarn","text":"
The platform is developed on nodejs technology, so you need to install node and the yarn package manager.
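As a sketch, on a Debian/Ubuntu host this could look like the following; the package names are assumptions, and you may prefer an installation method that pins the Node version required by the platform:

$ sudo apt-get update
$ sudo apt-get install -y nodejs npm
$ sudo npm install -g yarn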
The development stack requires some base software that needs to be installed.
"},{"location":"development/environment_windows/#docker-or-podman","title":"Docker or podman","text":"
Platform dependencies in development are deployed through container management, so you need to install a container stack.
We currently support docker and podman.
Docker Desktop from - https://docs.docker.com/desktop/install/windows-install/
Install the new WSL2 kernel from https://docs.microsoft.com/windows/wsl/wsl2-kernel. This will require a reboot.
Shell out to CMD as Administrator and run the following powershell command:
wsl --set-default-version 2
Reboot computer and continue to next step
Load Docker Application
NOTE DOCKER LICENSE - You are agreeing to the license for non-commercial open source project use. OpenCTI is open source, and the version you would possibly be contributing to enhancing is the unpaid non-commercial/non-enterprise version. If your intention is different, please consult with your organization's legal/licensing department.
Leave Docker Desktop running
"},{"location":"development/environment_windows/#nodejs-and-yarn","title":"NodeJS and yarn","text":"
The platform is developed on nodejs technology, so you need to install node and the yarn package manager.
Install NodeJS from - https://nodejs.org/download/release/v16.20.0/node-v16.20.0-x64.msi
Select the option for installing Chocolatey on the Tools for Native Modules screen
This will automatically install https://chocolatey.org/packages/visualstudio2019-workload-vctools for you
Includes Python 3.11.4
Shell out to CMD prompt as Administrator and install/run:
For workers and connectors, a Python runtime is needed. Even if you already have a Python runtime installed through the node installation, on Windows some nodejs packages will be recompiled with the Python and C++ runtimes.
For this reason Visual Studio Build Tools is required.
Install Visual Studio Build Tools from - https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools
Check off Desktop Development with C++
Run install
"},{"location":"development/environment_windows/#git-and-dev-tool","title":"Git and dev tool","text":"
Download GIT for Windows (64-bit Setup)- https://git-scm.com/download/win
Just use defaults on each screen
Install your preferred IDE
Intellij community edition - https://www.jetbrains.com/idea/download/
This summary gives you a detailed setup description for initiating the OpenCTI development environment necessary for developing on the OpenCTI platform, a client library or the connectors. This page documents how to set up an "All-in-One" development environment for OpenCTI. The devenv will contain data of 3 different repositories:
The GraphQL API is developed in JS and with some python code. As it's an \"all-in-one\" installation, the python environment will be installed in a virtual environment.
The API can be specifically configured with files depending on the starting profile. By default, the default.json file is used and will be correctly configured for local usage except for admin password.
So you need to create a development profile file. You can duplicate the default file and adapt it if needed.
cd ~/opencti/opencti-platform/opencti-graphql/config
cp default.json development.json
At minimum adapt the admin part for the password and token.
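For instance, a minimal development.json override could look like the following; the structure follows the app:admin:* configuration keys and all values are placeholders:

{
  "app": {
    "admin": {
      "email": "admin@opencti.io",
      "password": "MyStrongPassword",
      "token": "REPLACE_WITH_A_UUID_V4"
    }
  }
}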
To run the tests, you will need to create a test.json configuration file. You can use the same dependencies by only adapting the prefix for all dependencies.
Tests are using dedicated indices in the Elastic database (prefixed with test-* or the prefix that you have set up in test.json).
The following command will run the complete test suite using vitest, which might take more than 30 minutes. It starts by cleaning up the test database and seeding a minimal dataset. The file vitest.config.test.ts can be edited to run only a specific file pattern.
yarn test:dev
We also provide utility scripts to ease the development of new tests, especially integration tests that rely on the sample data loaded after executing 00-inject/loader-test.ts.
To solely initialize the test database with this sample dataset run:
yarn test:dev:init
And then, execute the following command to run the pattern specified in the file vitest.config.test.ts, or add a file name to the command line to run only this test file.
yarn test:dev:resume
This last command will NOT clean up & initialize the test database and thus will be quicker to execute.
Based on development source you can build the package for production. This package will be minified and optimized with esbuild.
$ cd opencti-frontend
$ yarn build
$ cd ../opencti-graphql
$ yarn build
After the build you can start the production build with yarn serv. This build will use the production.json configuration file.
$ cd ../opencti-graphql
$ yarn serv
"},{"location":"development/platform/#continuous-integration-and-features-cross-repository","title":"Continuous Integration and features cross repository","text":"
When a feature requires changes in two or more repositories among opencti, connectors and client-python, a specific convention must be used to have the continuous integration build them all together.
"},{"location":"development/platform/#naming-convention-of-branch","title":"Naming convention of branch","text":"
The pull request on the opencti repository should be named (issue or bug)/number plus an optional suffix, for example: issue/7062-contributing
The pull request on connector or client-python should refer to the opencti one by starting with \"opencti/\" and then the same name. Example: opencti/issue/7062-contributing
Note that if there are several matches, the first one is taken. So for example having issue/7062-contributing and issue/7062 both marked as "multi-repository" is not a good idea.
To install the latest Python client library, please use pip:
$ pip3 install pycti
"},{"location":"development/python/#using-the-helper-functions","title":"Using the helper functions","text":"
The main class OpenCTIApiClient contains everything you need to interact with the platform; you just have to initialize it.
The following example shows how you create an indicator in OpenCTI using the python library with TLP marking and OpenCTI compatible date format.
from dateutil.parser import parse
from pycti import OpenCTIApiClient
from stix2 import TLP_GREEN

# OpenCTI API client initialization
opencti_api_client = OpenCTIApiClient("https://myopencti.server", "mysupersecrettoken")

# Define an OpenCTI compatible date
date = parse("2019-12-01").strftime("%Y-%m-%dT%H:%M:%SZ")

# Get the OpenCTI marking for stix2 TLP_GREEN
TLP_GREEN_CTI = opencti_api_client.marking_definition.read(id=TLP_GREEN["id"])

# Use the client to create an indicator in OpenCTI
indicator = opencti_api_client.indicator.create(
    name="C2 server of the new campaign",
    description="This is the C2 server of the campaign",
    pattern_type="stix",
    pattern="[domain-name:value = 'www.5z8.info']",
    x_opencti_main_observable_type="Domain-Name",
    valid_from=date,
    update=True,
    markingDefinitions=[TLP_GREEN_CTI["id"]],
)
OpenCTI provides a comprehensive API based on GraphQL, allowing users to perform various actions programmatically. The API enables users to interact with OpenCTI's functionality and data, offering a powerful tool for automation, integration, and customization. All actions that can be performed through the platform's graphical interface are also achievable via the API.
Access to the OpenCTI API requires authentication using standard authentication mechanisms. Access rights to data via the API will be determined by the access privileges of the user associated with the API key. For authentication, users need to include the following headers in their API requests:
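For instance, with a user API token (the value below is a placeholder):

Content-Type: application/json
Authorization: Bearer YOUR_API_TOKEN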
The OpenCTI API consists of various endpoints corresponding to different functionalities and operations within the platform. These endpoints allow users to perform actions such as querying data, creating or updating entities, and more. Users can refer to the Understand GraphQL section to understand how it works.
Documentation for the OpenCTI API, including schema definitions, the list of filters available and queryable fields, is available through the OpenCTI platform. It can be found on the GraphQL playground. However, query examples and mutation examples are not yet available. In the meantime, users can explore the available endpoints and their functionality by inspecting network traffic in the browser's developer tools or by examining the source code of the Python client.
GraphQL is a powerful query language for APIs that enables clients to request exactly the data they need. Unlike traditional REST APIs, which expose fixed endpoints and return predefined data structures, GraphQL APIs allow clients to specify the shape and structure of the data they require.
"},{"location":"reference/api/#core-concepts","title":"Core concepts","text":""},{"location":"reference/api/#schema-definition-language-sdl","title":"Schema Definition Language (SDL)","text":"
GraphQL APIs are defined by a schema, which describes the types of data that can be queried and the relationships between them. The schema is written using the Schema Definition Language (SDL), which defines types, fields, and their relationships.
GraphQL uses a query language to request data from the server. Clients can specify exactly which fields they need and how they are related, enabling precise data retrieval without over-fetching or under-fetching.
Resolvers are functions responsible for fetching the requested data. Each field in the GraphQL schema corresponds to a resolver function, which determines how to retrieve the data from the underlying data sources.
"},{"location":"reference/api/#how-it-works","title":"How it Works","text":"
Schema definition: The API provider defines a GraphQL schema using SDL, specifying the types and fields available for querying.
Query execution: Clients send GraphQL queries to the server, specifying the data they need. The server processes the query, resolves each field, and constructs a response object with the requested data.
Validation and execution: The server validates the query against the schema to ensure it is syntactically and semantically correct. If validation passes, the server executes the query, invoking the appropriate resolver functions to fetch the requested data.
Data retrieval: Resolvers fetch data from the relevant data sources, such as databases, APIs, or other services. They transform the raw data into the shape specified in the query and return it to the client.
Response formation: Once all resolvers have completed, the server assembles the response object containing the requested data and sends it back to the client.
As a cyber threat intelligence platform, OpenCTI offers functionalities that enable users to move quickly from raw data to operational intelligence by building up high-quality, structured information.
To do so, the platform provides a number of essential capabilities, such as automated data deduplication, merging of similar entities while preserving relationship integrity, the ability to modulate the confidence levels on your intelligence, and the presence of inference rules to automate the creation of logical relationships among your data.
The purpose of this page is to list the features of the platform that contribute to the intelligibility and quality of intelligence.
The first essential data intelligence mechanism in OpenCTI is the deduplication of information and relations.
This advanced functionality not only enables you to check whether a piece of data, information or a relationship is not a duplicate of an existing element, but will also, under certain conditions, enrich the element already present.
If the new duplicated entity has new content, the pre-existing entity can be enriched with the new information from the duplicate.
It works as follows (see details in the dedicated page):
For entities: based on a specific ID generated by the platform from the entity's properties (the "ID contributing properties" listed on the dedicated page).
For relationships: based on type, source, target, start time, and stop time.
For observables: a specific ID is also generated by the platform, this time based on the specifications of the STIX model.
The ability to update and enrich is determined by the confidence level and quality level of the entities and relationships (see diagram on page deduplication).
OpenCTI's merging function is one of the platform's crucial data intelligence elements.
From the Data > Entities tab, this feature lets you merge up to 4 entities of the same type. A parent entity is selected and assigned up to three child entities.
The benefit of this feature is to centralize a number of similar elements from different sources without losing data or degrading the quality of the information. During merging, the platform will create relationships to anchor all the data to the consolidated entity.
This enrichment function consolidates the data and avoids duplication, but above all initiates a structured intelligence process while preserving the integrity of pre-existing relationships as presented here.
"},{"location":"reference/data-intelligence/#confidence-level-and-data-segregation","title":"Confidence level and data segregation","text":"
Another key element of OpenCTI's data intelligence is its ability to apply confidence levels and to segregate the data present in the platform.
The confidence level is directly linked to users and Role Based Access Control. It is applied to a user directly or indirectly via the confidence level of the group to which the user belongs. This element is fundamental as it defines the levels of data manipulation to which the user (real or connector) is entitled.
The correct application of confidence levels is all the more important as it will determine the confidence level of the data manipulated by a user. It is therefore a decisive mechanism, since it underpins the confidence you have in the content of your instance.
While it is important to apply a level of trust to your users or groups, it is also important to define a way of categorizing and protecting your data.
Data segregation makes it possible to apply marking definitions and therefore establish a standardized framework for classifying data.
These marking definitions, like the classic Traffic Light Protocols (TLP) implemented by default in the platform, will determine whether a user can access a specific data set. The marking will be applied at the group level to which the user belongs, which will determine the data to which the user has access and therefore the data that the user can potentially handle.
In OpenCTI, data intelligence is not just about the ability to segregate, qualify or enrich data. OpenCTI's inference rules enable you to mobilize the data on your platform effectively and operationally.
These predefined rules enable the user to speed up cyber threat management. For example, inferences can be used to automatically identify incidents based on a sighting, to create sightings on observables based on new observed data, to propagate relationships based on an observable, etc.
In all, the platform includes some twenty high-performance inference rules that considerably speed up the analysis and response to threats (see the full list here).
These rules are based on a logical interpretation of the data, resulting in a pre-analysis of the information by creating relationships that will enrich the intelligence in the platform. There are three main benefits: efficiency, completeness and accuracy. These user benefits can be found here.
Note: while these rules are present in the platform, they are not activated by default.
Once activated, they scan all the data in your platform in the background to identify all the existing relationships that meet the conditions of the rules. Then, the rules operate continuously to create relationships. If you deactivate a rule, all the objects and relationships it has created will be deleted.
These actions can only be carried out by an administrator of the instance.
This page will be automatically generated to reference the platform's data model. We are doing our best to implement this automatic generation as quickly as possible.
"},{"location":"reference/filters-migration/","title":"Filters format migration for OpenCTI 5.12","text":"
Version 5.12 of OpenCTI introduces breaking changes to the filters format used in the API. This documentation describes how you can migrate your scripts or programs that call the OpenCTI API when updating from a version of OpenCTI older than 5.12.
"},{"location":"reference/filters-migration/#why-this-migration","title":"Why this migration?","text":"
Before OpenCTI 5.12, it was not possible to construct complex filter combinations: we couldn't embed filters within filters, use different boolean modes (and/or), filter on all available attributes or relations for a given entity type, or even test for empty fields of any sort.
A legacy of years of development, the former format and filtering mechanics were not suited to such tasks, and a profound refactoring was necessary to make it happen.
Here are the main pain points we identified beforehand:
The filters frontend and backend formats were very different, requiring careful conversions.
The filters were static lists of keys, depending on each given entity type and maintained by hand.
The operator (eq, not_eq, etc.) was inside the key (e.g. entity_type_not_eq), limiting operator combination and requiring error-prone parsing.
The frontend format imposed a unique form of combination (and between filters, or between values inside each filter, and nothing else possible).
The flat list structure made filter nesting impossible by nature.
Filters and query options were mixed in GraphQL queries for the same purpose (for instance, option types analogous to a filter on key entity_type).
// filter formats in OpenCTI < 5.12

type Filter = {
  key: string, // a key in the list of the available filter keys for the given entity type
  values: string[],
  operator: string,
  filterMode: 'and' | 'or',
}

// "give me Reports labelled with labelX or labelY"
const filters = [
  {
    "key": "entity_type",
    "values": ["Report"],
    "operator": "eq",
    "filterMode": "or"
  },
  {
    "key": "labelledBy",
    "values": ["<id-for-labelX>", "<id-for-labelY>"],
    "operator": "eq",
    "filterMode": "or"
  },
]
The new format brings a lot of short-term benefits and is compatible with our long-term vision of the filtering capabilities in OpenCTI. We chose a simple recursive structure that allows complex combinations of any sort, with respect to basic boolean logic, as shown in the sketch below.
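For reference, a sketch of the new format in the same notation as above, reconstructed from the FilterGroup description later in this documentation:

// filter format in OpenCTI >= 5.12

type FilterGroup = {
  mode: 'and' | 'or',
  filters: Filter[],
  filterGroups: FilterGroup[], // recursive structure enabling filter nesting
}

type Filter = {
  key: string,
  values: string[],
  operator: string, // eq, not_eq, nil, not_nil, gt, lt, etc.
  mode: 'and' | 'or', // boolean mode between the values
}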
The list of operators is fixed and can be extended during future developments.
Because changing filters format impacts almost everything in the platform, we decided to do a complete refactoring once and for all. We want this migration process to be clear and easy.
"},{"location":"reference/filters-migration/#what-has-been-changed","title":"What has been changed","text":"
The new filter implementation brings major changes in the way filters are processed and executed.
We changed the filter formats (see the FilterGroup type above):
In the frontend, an operator and a mode are stored for each key.
The new format enables filter nesting thanks to the new attribute 'filterGroups'.
The keys are of type string (no more static list of enums).
The 'values' attribute can no longer contain null values (use the nil operator instead).
We also renamed some filter keys, to be consistent with the entities schema definitions.
We implemented the handling of the different operators and modes in the backend.
We introduced new void operators (nil / not_nil) to test the presence or absence of value in any field.
"},{"location":"reference/filters-migration/#how-to-migrate-your-own-filters","title":"How to migrate your own filters","text":"
We wrote a migration script to convert all stored filters created prior to version 5.12. These filters will thus be migrated automatically when starting your updated platform.
However, you might have your own connectors, queries, or python scripts that use the graphql API or the python client. If this is the case, you must change the filter format if you want to run the code against OpenCTI >= 5.12.
If values contains a null value, you need to convert the filter by using the new nil / not_nil operators. Here's the procedure:
Extract one filter dedicated to null
If the operator was 'eq', switch to operator 'nil'; if the operator was 'not_eq', switch to operator 'not_nil'.
values = []
Extract another filter for all the other values.
// \"Must have a label that is not Label1 or Label2\"\nconst oldFilter = {\n key: 'labelledBy',\n values: [null, 'id-for-Label1', 'id-for-Label2'],\n operator: 'not_eq',\n filterMode: 'and',\n}\n\nconst newFilters = {\n mode: 'and',\n filters: [\n {\n key: 'objectLabel',\n values: ['id-label-1', 'id-for-Label2'],\n operator: 'not_eq',\n mode: 'and',\n },\n {\n key: 'objectLabel',\n values: [],\n operator: 'not_nil',\n mode: 'and',\n },\n ],\n filterGroups: [],\n}\n
Switch to nested filter to preserve logic
To preserve the logic of your old filter you might need to compose nested filter groups. This could happen, for instance, when using the eq operator with null values for one filter, combined in and mode with other filters, as in the sketch below.
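For instance, a sketch of such a conversion: an old filter meaning "no label, or Label1" must keep its internal or logic inside a nested filter group when combined in and mode with other filters (the ids are placeholders):

// old: { key: 'labelledBy', values: [null, 'id-for-Label1'], operator: 'eq', filterMode: 'or' }
const newFilters = {
  mode: 'and',
  filters: [
    // ... other filters combined in 'and' mode stay at this level
  ],
  filterGroups: [
    {
      mode: 'or',
      filters: [
        { key: 'objectLabel', values: [], operator: 'nil', mode: 'or' },
        { key: 'objectLabel', values: ['id-for-Label1'], operator: 'eq', mode: 'or' },
      ],
      filterGroups: [],
    },
  ],
}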
Dynamic filters are not stored in the database; they enable filtering views in the UI, e.g. filters in entity lists, investigations, knowledge graphs. They are saved as URL parameters, and can be saved in local storage.
These filters are not migrated automatically and are lost when moving to 5.12. This concerns the filters saved for each view, that are restored when coming back to the same view. You will need to reconstruct the filters by hand in the UI; these new filters will be properly saved and restored afterward.
Also, when going to an url with filters in the old format, OpenCTI will display a warning and remove the filter parameters. Only URLs built by OpenCTI 5.12 are compatible with it, so you will need to reconstruct the filters by hand and save / share your updated links.
There are two types of filters (dynamic and stored), used in many locations in the platform:
in entities lists: to display only the entities matching the filters. If an export or a background task is generated, only the filtered data will be taken into account,
in investigations and knowledge graphs: to display only the entities matching the filters,
in dashboards: to create widget with only the entities matching the filters,
in feeds, TAXII collections, triggers, streams, playbooks, background tasks: to process only the data or events matching the filters.
Dynamic filters are not stored in the database; they enable filtering views in the UI, e.g. filters in entity lists, investigations, knowledge graphs.
However, they are still persistent on the platform frontend side. The filters used in a view are saved as URL parameters, so you can save and share links of these filtered views.
Also, your web browser saves in local storage the filters that you set in various places of the platform, allowing them to be restored when you come back to the same view. You can then keep working from where you left off.
Stored filters are attributes of an entity, and are therefore stored in the database. They are stored as an attribute in the object itself, e.g. filters in dashboards, feeds, TAXII collections, triggers, streams, playbooks.
"},{"location":"reference/filters/#create-a-filter","title":"Create a filter","text":"
To create a filter, add every key you need using the 'Add filter' select box. It will give you the possible attributes on which you can filter in the current view.
A grey box appears and allows you to select:
the operator to use, and
the values to compare (if the operator is not "empty" or "not_empty").
You can add as many filters as you want, even use the same key twice with different operators and values.
The boolean modes (and / or) are either global (between the different filters) or local (between the values inside a filter). Both can be switched with a single click, changing the logic of your filtering.
Since OpenCTI 5.12, the platform uses a new filter format called FilterGroup. The FilterGroup model enables complex filter nesting with different boolean operators, which greatly extends the filtering capabilities in every part of the platform.
In a given filter group, the mode (and or or) represents the boolean operation between the different filters and filterGroups arrays. The filters and filterGroups arrays are composed of objects of type Filter and FilterGroup.
The Filter has 4 properties (a complete example follows this list):
a key, representing the kind of data we want to target (example: objectLabel to filter on labels or createdBy to filter on the author),
an array of values, representing the values we want to compare to,
an operator representing the operation we want to apply between the key and the values,
a mode (and or or) to apply between the values if there are several.
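Putting these pieces together, here is a minimal illustrative FilterGroup (the ids are hypothetical placeholders):

```javascript
// "Has Label1 AND is created by X or Y"
const filters = {
  mode: 'and', // boolean mode between the filters and filterGroups arrays
  filters: [
    { key: 'objectLabel', values: ['id-for-Label1'], operator: 'eq', mode: 'or' },
    { key: 'createdBy', values: ['id-for-X', 'id-for-Y'], operator: 'eq', mode: 'or' }, // "or" between the two values
  ],
  filterGroups: [], // nested FilterGroup objects would go here
};
```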
The available operators are the following:

| Value | Meaning | Additional information |
|---|---|---|
| eq | equal | |
| not_eq | different | |
| gt | greater than | against textual values, the alphabetical ordering is used |
| gte | greater than or equal | against textual values, the alphabetical ordering is used |
| lt | lower than | against textual values, the alphabetical ordering is used |
| lte | lower than or equal | against textual values, the alphabetical ordering is used |
| nil | empty / no value | nil does not require anything inside values |
| not_nil | non-empty / any value | not_nil does not require anything inside values |
In addition, there are operators:
starts_with / not_starts_with / ends_with / not_ends_with / contains / not_contains, available for searching in short string fields (name, value, title, etc.),
search, available in short string and text fields.
There is a small difference between search and contains: search finds any occurrence of the specified words, regardless of order, while contains specifically looks for the exact sequence of words you provide. For instance, a search for threat report also matches a field containing "report on a threat", whereas contains only matches the exact phrase "threat report".
Always use single-key filters
Multi-key filters are not supported across the platform and are reserved for specific, internal cases.
Only a specific set of keys can be used in the filters.
Automatic key checking prevents typing errors when constructing filters via the API. If a user writes an unhandled key (object-label instead of objectLabel, for instance), the API will return an error instead of an empty list. This way, we make sure the platform does not provide misleading results.
Some keys do not exist in the schema definition, but are allowed in addition. They describe a special behavior.
It is the case for:
sightedBy: entities to which X is linked via a STIX sighting relationship,
workflow_id: status id of the entities, or status template id of the status of the entities,
representative: entities whose representative (name for reports, value for some observables, composition of the source and target names for a relationship...) matches the filter,
connectedToId: the listened instances for an instance trigger.
For some keys, negative equality filtering is not supported yet (not_eq operator). For instance, this is the case for:
fromId
fromTypes
toId
toTypes
The regardingOf filter key has a special format and enables targeting the entities that have a relationship of a certain type with certain entities. Here is an example of a filter to fetch the entities related to entity X:
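The example block below is a sketch, assuming nested relationship_type and id keys inside the values array (the entity id is a placeholder):

```javascript
const filters = {
  mode: 'and',
  filters: [
    {
      key: 'regardingOf',
      operator: 'eq',
      mode: 'and',
      values: [
        // the type of relationship to follow
        { key: 'relationship_type', values: ['related-to'] },
        // the target entity (hypothetical id for entity X)
        { key: 'id', values: ['id-of-entity-X'] },
      ],
    },
  ],
  filterGroups: [],
};
```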
"},{"location":"reference/filters/#limited-support-in-stream-events-filtering","title":"Limited support in stream events filtering","text":"
Filters that run against the event stream do not use the complete schema definition in terms of filtering keys.
This concerns:
Live streams,
CSV feeds,
TAXII collection,
Triggers,
Playbooks.
For filters used in this context, only some keys are supported for the moment:
confidence
objectAssignee
createdBy
creator
x_opencti_detection
indicator_types
objectLabel
x_opencti_main_observable_type
objectMarking
objects
pattern_type
priority
revoked
severity
x_opencti_score
entity_type
x_opencti_workflow_id
connectedToId (for the instance triggers)
fromId (the instance in the "from" of a relationship)
fromTypes (the entity type in the "from" of a relationship)
toId (the instance in the "to" of a relationship)
toTypes (the entity type in the "to" of a relationship)
"},{"location":"reference/fips/","title":"SSL FIPS 140-2 deployment","text":""},{"location":"reference/fips/#introduction","title":"Introduction","text":"
For organizations that need to deploy OpenCTI in an SSL FIPS 140-2 compliant environment, we provide FIPS compliant OpenCTI images for all components of the platform. Please note that you will also need to deploy the dependencies (ElasticSearch / OpenSearch, Redis, etc.) with FIPS 140-2 SSL to have a fully compliant OpenCTI technology stack.
OpenCTI SSL FIPS 140-2 compliant builds
The OpenCTI platform, worker and connector SSL FIPS 140-2 compliant images are based on packaged Alpine Linux with OpenSSL 3 and FIPS mode enabled, maintained by the Filigran engineering team.
"},{"location":"reference/fips/#dependencies","title":"Dependencies","text":""},{"location":"reference/fips/#aws-native-services-in-fedramp-compliant-environment","title":"AWS Native Services in FedRAMP compliant environment","text":"
It is important to note that OpenCTI is fully compatible with AWS native services, and all dependencies are available in both FedRAMP Moderate (East / West) and FedRAMP High (GovCloud) scopes.
Redis does not provide FIPS 140-2 SSL compliant Docker images but fully supports custom tls-ciphersuites that can be configured to use the system FIPS 140-2 OpenSSL library.
Alternatively, you can use a Stunnel TLS endpoint to ensure encrypted communication between OpenCTI and Redis. There are a few examples available, here or here.
RabbitMQ does not provide FIPS 140-2 SSL compliant Docker images but, like Redis, supports custom cipher suites. It is also confirmed that, since RabbitMQ version 3.12.5, the associated Erlang build (> 26.1) supports FIPS mode on OpenSSL 3.
Alternatively, you can use a Stunnel TLS endpoint to ensure encrypted communication between OpenCTI and RabbitMQ.
If you cannot use an S3 endpoint already deployed in your FIPS 140-2 SSL compliant environment, MinIO provides FIPS 140-2 SSL compliant Docker images which are then very easy to deploy within your environment.
For the platform, we provide FIPS 140-2 SSL compliant Docker images. Just use the appropriate tag to ensure you are deploying the FIPS compliant version and follow the standard Docker deployment procedure.
For the worker, we provide FIPS 140-2 SSL compliant Docker images. Just use the appropriate tag to ensure you are deploying the FIPS compliant version and follow the standard Docker deployment procedure.
All connectors have FIPS 140-2 SSL compliant Docker images. For each connector you need to deploy, please use the tag {version}-fips instead of {version} and follow the standard deployment procedure. An example is available on Docker Hub.
In order to provide a real-time way to consume STIX CTI information, OpenCTI provides data events in a stream that can be consumed to react on creation, update, deletion and merge. This way of getting information out of OpenCTI is highly efficient and already used by some connectors.
OpenCTI currently uses Redis Streams as the technical layer. Each time something is modified in the OpenCTI database, a specific event is added to the stream.
In order to provide an easy-to-consume protocol, we decided to expose an SSE (https://fr.wikipedia.org/wiki/Server-sent_events) HTTP URL linked to the standard login system of OpenCTI. Any user with the correct access rights can access http://opencti_instance/stream and open an SSE connection to start receiving live events. You can of course consume the stream directly in Redis, but you will then have to manage access and rights yourself.
```
id: {Event stream id} -> Like 1620249512318-0
event: {Event type} -> create / update / delete
data: { -> The complete event data
    version -> The version number of the event
    type -> The inner type of the event
    scope -> The scope of the event [internal or external]
    data: {STIX data} -> The STIX representation of the data.
    message -> A simple string to easily understand the event
    origin: {Data Origin} -> Complex object with different information about the origin of the event
    context: {Event context} -> Complex object with meta information depending on the event type
}
```
The event id can be used to consume the stream starting from this specific point.
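As an illustration, a minimal Node.js consumer might look like the following sketch (assuming the eventsource npm package, a hypothetical instance URL and a hypothetical API token):

```javascript
// Minimal SSE consumer sketch for the OpenCTI live stream.
const EventSource = require('eventsource');

const source = new EventSource('http://opencti_instance/stream', {
  headers: { authorization: 'Bearer <your-api-token>' }, // hypothetical token
});

// Event names match the event types documented above (create / update / delete).
source.addEventListener('create', (event) => {
  const payload = JSON.parse(event.data);
  console.log(payload.data); // the STIX representation of the created object
});
```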
The current STIX data representation is based on the STIX 2.1 format, using the extension mechanism. Please take a look at the STIX documentation for more information.
For a delete event, it's simply the data in STIX format just before its deletion. You will also find in the context the automated deletions due to automatic dependency management.
For an update event, the platform publishes the complete STIX data along with patch information. Thanks to the patches, it's possible to rebuild the previous version and easily understand what happened in the update.
Patch and reverse_patch follow the official jsonpatch specification. You can find more information on the jsonpatch page.
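For illustration, a hypothetical update context could carry a patch / reverse_patch pair like the following (the field and values are made up; the operations follow the jsonpatch specification):

```json
{
  "patch": [
    { "op": "replace", "path": "/confidence", "value": 80 }
  ],
  "reverse_patch": [
    { "op": "replace", "path": "/confidence", "value": 60 }
  ]
}
```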
A merge event is a combination of an update of the merge target and deletions of the sources. In this event, you will find the same patch and reverse_patch as in an update, plus the list of elements merged into the target in the "sources" attribute.
The stream hosted at the /stream URL contains all the raw events of the platform, always filtered by the user's rights (marking based). It's a technical stream, a bit complex to use, but very useful for internal processing or some specific connectors like backup/restore.
This stream is live by default but, if you want to catch up, you can simply add the from parameter to your query. This parameter accepts a timestamp in milliseconds as well as an event id, e.g. http://localhost/stream?from=1620249512599
Stream size?
The raw stream is really important in the platform and needs to be sized according to the retention period you want to ensure. The more retention you have, the more safely you can reprocess past information. We usually recommend one month of retention, which usually corresponds to about 2,000,000 events. This limit can be configured with the redis:trimming option; please check the deployment configuration page.
Live streams aim to simplify your usage of the stream through connectors, providing a way to create streams with specific filters through the UI. After creating such a stream, it is simply accessible from /stream/{STREAM_ID}.
It's very useful for various cases of data externalization or synchronization, e.g. with Splunk or Tanium.
This stream provides different interesting mechanics:
Streaming the initial list of instances matching your filters from the main database when connecting, if you use the recover parameter,
Automatic dependency resolution to guarantee the consistency of the distributed information,
Automatic event translation depending on element segregation.
If you want to dig into the internal behavior, you can check this complete diagram:
no-dependencies (query parameter or header, default false). Can be used to prevent automatic dependency resolution. To be used with caution.
listen-delete (query parameter or header, default true). Can be used to prevent receiving deletion events. To be used with caution.
with-inferences (query parameter or header, default false). Can be used to add inferences events (from rule engine) in the stream.
"},{"location":"reference/streaming/#from-and-recover","title":"From and Recover","text":"
from and recover are two different options that need to be explained.
from (query parameter) is always the parameter that describes the initial date / event id you want to start from. It can also be set up with the request header from or last-event-id.
recover (query parameter) is an option that lets you consume the initial events from the database instead of the stream. It can also be set up with the request header recover or recover-date.
This difference is transparent for the consumer but very important for getting old information as an initial snapshot. It also lets you consume information that is no longer within the stream retention period.
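For example (hypothetical host, stream id and values):

```javascript
// Replay events still present in the stream, starting from a given event id:
const fromUrl = 'http://localhost/stream/STREAM_ID?from=1620249512599';
// Rebuild an initial snapshot from the database (useful when the data you need
// is older than the stream retention), then continue live:
const recoverUrl = 'http://localhost/stream/STREAM_ID?recover=2021-05-05T00:00:00Z';
```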
The next diagram will help you to understand the concept:
In OpenCTI, taxonomies serve as structured classification systems that aid in organizing and categorizing intelligence data. This reference guide provides an exhaustive description of the platform's customizable fields within the taxonomies' framework. Users can modify, add, or delete values within the available vocabularies to tailor the classification system to their specific requirements.
For broader documentation on the taxonomies section, please consult the appropriate page.
Default values are based on those defined in the STIX standard but can be tailored to better suit the organization's needs.
| Name | Used in | Default value |
|---|---|---|
| Account type vocabulary (account-type-ov) | User account | facebook, ldap, nis, openid, radius, skype, tacacs, twitter, unix, windows-domain, windows-local |
| Attack motivation vocabulary (attack-motivation-ov) | Threat actor group | accidental, coercion, dominance, ideology, notoriety, organizational-gain, personal-gain, personal-satisfaction, revenge, unpredictable |
| Attack resource level vocabulary (attack-resource-level-ov) | Threat actor group | club, contest, government, individual, organization, team |
| Case priority vocabulary (case_priority_ov) | Incident response | P1, P2, P3, P4 |
| Case severity vocabulary (case_severity_ov) | Incident response | critical, high, medium, low |
| Channel type vocabulary (channel_type_ov) | Channel | Facebook, Twitter |
| Collection layers vocabulary (collection_layers_ov) | Data source | cloud-control-plane, container, host, network, OSINT |
| Event type vocabulary (event_type_ov) | Event | conference, financial, holiday, international-summit, local-election, national-election, sport-competition |
| Eye color vocabulary (eye_color_ov) | Threat actor individual | black, blue, brown, green, hazel, other |
| Gender vocabulary (gender_ov) | Threat actor individual | female, male, nonbinary, other |
| Grouping context vocabulary (grouping_context_ov) | Grouping | malware-analysis, suspicious-activity, unspecified |
| Hair color vocabulary (hair_color_ov) | Threat actor individual | bald, black, blond, blue, brown, gray, green, other, red |
| Implementation language vocabulary (implementation_language_ov) | Malware | applescript, bash, c, c++, c#, go, java, javascript, lua, objective-c, perl, php, powershell, python, ruby, scala, swift, typescript, visual-basic, x86-32, x86-64 |
| Incident response type vocabulary (incident_response_type_ov) | Incident response | data-leak, ransomware |
| Incident severity vocabulary (incident_severity_ov) | Incident | critical, high, medium, low |
| Incident type vocabulary (incident_type_ov) | Incident | alert, compromise, cybercrime, data-leak, information-system-disruption, phishing, reputation-damage, typosquatting |
| Indicator type vocabulary (indicator_type_ov) | Indicator | anomalous-activity, anonymization, attribution, benign, compromised, malicious-activity, unknown |
| Infrastructure type vocabulary (infrastructure_type_ov) | Infrastructure | amplification, anonymization, botnet, command-and-control, control-system, exfiltration, firewall, hosting-malware, hosting-target-lists, phishing, reconnaissance, routers-switches, staging, unknown, workstation |
| Integrity level vocabulary (integrity_level_ov) | Process | high, medium, low, system |
| Malware capabilities vocabulary (malware_capabilities_ov) | Malware | accesses-remote-machines, anti-debugging, anti-disassembly, anti-emulation, anti-memory-forensics, anti-sandbox, anti-vm, captures-input-peripherals, captures-output-peripherals, captures-system-state-data, cleans-traces-of-infection, commits-fraud, communicates-with-c2, compromises-data-availability, compromises-data-integrity, compromises-system-availability, controls-local-machine, degrades-security-software, degrades-system-updates, determines-c2-server, emails-spam, escalates-privileges, evades-av, exfiltrates-data, fingerprints-host, hides-artifacts, hides-executing-code, infects-files, infects-remote-machines, installs-other-components, persists-after-system-reboot, prevents-artifact-access, prevents-artifact-deletion, probes-network-environment, self-modifies, steals-authentication-credentials, violates-system-operational-integrity |
| Malware result vocabulary (malware_result_ov) | Malware analysis | benign, malicious, suspicious, unknown |
| Malware type vocabulary (malware_type_ov) | Malware | adware, backdoor, bootkit, bot, ddos, downloader, dropper, exploit-kit, keylogger, ransomware, remote-access-trojan, resource-exploitation, rogue-security-software, rootkit, screen-capture, spyware, trojan, unknown, virus, webshell, wiper, worm |
| Marital status vocabulary (marital_status_ov) | Threat actor individual | annulled, divorced, domestic_partner, legally_separated, married, never_married, polygamous, separated, single, widowed |
| Note types vocabulary (note_types_ov) | Note | analysis, assessment, external, feedback, internal |
| Opinion vocabulary (opinion_ov) | Opinion | agree, disagree, neutral, strongly-agree, strongly-disagree |
| Pattern type vocabulary (pattern_type_ov) | Indicator | eql, pcre, shodan, sigma, snort, spl, stix, suricata, tanium-signal, yara |
| Permissions vocabulary (permissions_ov) | Attack pattern | Administrator, root, User |
| Platforms vocabulary (platforms_ov) | Data source | android, Azure AD, Containers, Control Server, Data Historian, Engineering Workstation, Field Controller/RTU/PLC/IED, Google Workspace, Human-Machine Interface, IaaS, Input/Output Server, iOS, linux, macos, Office 365, PRE, SaaS, Safety Instrumented System/Protection Relay, windows |
| Processor architecture vocabulary (processor_architecture_ov) | Malware | alpha, arm, ia-64, mips, powerpc, sparc, x86, x86-64 |
| Reliability vocabulary (reliability_ov) | Report, Organization | A - Completely reliable, B - Usually reliable, C - Fairly reliable, D - Not usually reliable, E - Unreliable, F - Reliability cannot be judged |
| Report types vocabulary (report_types_ov) | Report | internal-report, threat-report |
| Request for information types vocabulary (request_for_information_types_ov) | Request for information | none |
| Request for takedown types vocabulary (request_for_takedown_types_ov) | Request for takedown | brand-abuse, phishing |
| Service status vocabulary (service_status_ov) | Process | SERVICE_CONTINUE_PENDING, SERVICE_PAUSE_PENDING, SERVICE_PAUSED, SERVICE_RUNNING, SERVICE_START_PENDING, SERVICE_STOP_PENDING, SERVICE_STOPPED |
| Service type vocabulary (service_type_ov) | Process | SERVICE_FILE_SYSTEM_DRIVER, SERVICE_KERNEL_DRIVER, SERVICE_WIN32_OWN_PROCESS, SERVICE_WIN32_SHARE_PROCESS |
| Start type vocabulary (start_type_ov) | Process | SERVICE_AUTO_START, SERVICE_BOOT_START, SERVICE_DEMAND_START, SERVICE_DISABLED, SERVICE_SYSTEM_ALERT |
| Threat actor group role vocabulary (threat_actor_group_role_ov) | Threat actor group | agent, director, independent, infrastructure-architect, infrastructure-operator, malware-author, sponsor |
| Threat actor group sophistication vocabulary (threat_actor_group_sophistication_ov) | Threat actor group | advanced, expert, innovator, intermediate, minimal, none, strategic |
| Threat actor group type vocabulary (threat_actor_group_type_ov) | Threat actor group | activist, competitor, crime-syndicate, criminal, hacker, insider-accidental, insider-disgruntled, nation-state, sensationalist, spy, terrorist, unknown |
| Threat actor individual role vocabulary (threat_actor_individual_role_ov) | Threat actor individual | agent, director, independent, infrastructure-architect, infrastructure-operator, malware-author, sponsor |
| Threat actor individual sophistication vocabulary (threat_actor_individual_sophistication_ov) | Threat actor individual | advanced, expert, innovator, intermediate, minimal, none, strategic |
| Threat actor individual type vocabulary (threat_actor_individual_type_ov) | Threat actor individual | activist, competitor, crime-syndicate, criminal, hacker, insider-accidental, insider-disgruntled, nation-state, sensationalist, spy, terrorist, unknown |
| Tool types vocabulary (tool_types_ov) | Tool | credential-exploitation, denial-of-service, exploitation, information-gathering, network-capture, remote-access, unknown, vulnerability-scanning |

Customization
Users can customize the taxonomies by modifying the available values or adding new ones. These modifications enable users to adapt the classification system to their specific intelligence requirements. Within each vocabulary list, users also have the flexibility to customize the order of the dropdown menu associated with the taxonomy, allowing them to prioritize certain values or arrange them in a manner that aligns with their classification needs. Users can additionally track the usage count of each vocabulary, providing insight into frequency of use and helping identify the most relevant and impactful classifications. These customization options empower users to tailor the taxonomy system to their unique intelligence requirements, enhancing the efficiency and effectiveness of intelligence analysis within the OpenCTI platform.
The application collects statistical data related to its usage and performance.
Confidentiality
The OpenCTI platform does not collect any information related to threat intelligence knowledge, which remains strictly confidential. Also, the collection is strictly anonymous and personally identifiable information is NOT collected (including IP addresses).
All data collected is anonymized and aggregated to protect the privacy of individual users, in compliance with all privacy regulations.
"},{"location":"reference/usage-telemetry/#purpose-of-the-telemetry","title":"Purpose of the telemetry","text":"
The collected data is used for the following purposes:
Improving the functionality and performance of the application.
Analyzing user behavior to enhance user experience.
Generating aggregated and anonymized statistics for internal and external reporting.
"},{"location":"reference/usage-telemetry/#important-thing-to-know","title":"Important thing to know","text":"
The platform sends the metrics to the hostname telemetry.filigran.io using the OTLP protocol (over HTTPS). The format of the data is OpenTelemetry JSON.
The metrics push is done every 6 hours if OpenCTI was able to connect to the hostname when the telemetry manager started. Metrics are also written in specific log files in order to be included in the support package.
Ask AI is available under the "OpenCTI Enterprise Edition" license.
Please read the dedicated page for all relevant information.
"},{"location":"usage/ask-ai/#prerequisites-for-using-ask-ai","title":"Prerequisites for using Ask AI","text":"
There are several possibilities for Enterprise Edition customers to use OpenCTI AI endpoints:
Use the Filigran AI Service leveraging our custom AI model using the token given by the support team.
Use OpenAI or MistralAI cloud endpoints using your own tokens.
Deploy or use local AI endpoints (Filigran can provide you with the custom model).
Please read the configuration documentation.
Beta Feature
Ask AI is a beta feature as we are currently fine-tuning our models. Consider checking important information.
"},{"location":"usage/ask-ai/#how-it-works","title":"How it works","text":"
Even if, in the future, we would like to leverage AI for RAG, for the moment we mostly use AI to analyze and produce text or images based on data sent directly in the prompt.
This means that if you are using the Filigran AI endpoint or a local one, your data is never used to re-train or adapt the model: everything relies on a pre-trained and fixed model. When using the Ask AI button in the platform, a prompt is generated with the proper instructions to produce the expected result and use it in the context of the button (in forms, the rich text editor, etc.).
We are hosting a scalable AI endpoint for all SaaS and on-prem Enterprise Edition customers. This endpoint is based on MistralAI, with a model that will be adapted over time to be more effective when processing threat intelligence related content.
The model, which is still in beta version, will be adapted in the upcoming months to reach maturity at the end of 2024. It can be shared with on-prem enterprise edition customers under NDA.
"},{"location":"usage/ask-ai/#functionalities-of-ask-ai","title":"Functionalities of Ask AI","text":"
Ask AI is represented by a dedicated icon wherever one of its functionalities is available.
"},{"location":"usage/ask-ai/#assistance-for-writing-meaningful-content","title":"Assistance for writing meaningful content","text":"
Ask AI can assist you in writing better textual content, for example better titles, names, descriptions and detailed content of objects.
Fix spelling & grammar: try to improve the text from a formulation and grammar perspective.
Make it shorter/longer: try to shorten or lengthen the text.
Change tone: try to change the tone of the text. You can select if you want the text to be written for Strategic (Management, decision makers), Tactical (for team leaders) or Operational (for technical CTI analysts) audiences.
Summarize: try to summarize the text in bullet points.
Explain: try to explain the context of the subject's text based on what is available to the LLM.
"},{"location":"usage/ask-ai/#assistance-for-importing-data-from-documents","title":"Assistance for importing data from documents","text":"
From the Content tab of a Container (Reports, Groupings and Cases), Ask AI can also assist you in importing data contained in uploaded documents into OpenCTI for further exploitation.
Generate report document: Generate a text report based on the knowledge graph (entities and relationships) of this container.
Summarize associated files: Generate a summary of the selected files (or all files associated to this container).
Try to convert the selected files (or all files associated to this container) into a STIX 2.1 bundle that you will then be able to use at your convenience (for example, importing it into the platform).
A short video on the FiligranHQ YouTube channel presents the capabilities of Ask AI: https://www.youtube.com/watch?v=lsP3VVsk5ds.
"},{"location":"usage/ask-ai/#improving-generated-elements-of-ask-ai","title":"Improving generated elements of Ask AI","text":"
Be aware that the text quality is highly dependent on the capabilities of the associated LLM.
That is why every generated text by Ask AI is provided in a dedicated panel, allowing you to verify and rectify any error the LLM could have made.
Playbooks automation is available under the \"OpenCTI Enterprise Edition\" license. Please read the dedicated page to have all information.
OpenCTI playbooks are flexible automation scenarios which can be fully customized and enabled by platform administrators to enrich, filter and modify the data created or updated in the platform.
Playbook automation is accessible in the user interface under Data > Processing > Automation.
Right needed
You need the \"Manage credentials\" capability to use the Playbooks automation, because you will be able to manipulate data simple users cannot access.
You will then be able to:
add labels depending on enrichment results to be used in threat intelligence-driven detection feeds,
create reports and cases based on various criteria,
trigger enrichments or webhooks in given conditions,
modify attributes such as first_seen and last_seen based on other pieces of knowledge, etc.
Starting with a component listening to a data stream, each subsequent component in the playbook processes a received STIX bundle. These components have the ability to modify the bundle and subsequently transmit the altered result to connected components.
In this paradigm, components can send out the STIX 2.1 bundle to multiple components, enabling the development of multiple branches within your playbook.
A well-designed playbook ends with a component executing an action based on the processed information. For instance, this may involve writing the STIX 2.1 bundle to a data stream.
Validate ingestion
The STIX bundle processed by the playbook won't be written in the platform without specifying it using the appropriate component, i.e. "Send for ingestion".
"},{"location":"usage/automation/#create-a-playbook","title":"Create a Playbook","text":"
It is possible to create as many playbooks as needed; they run independently. You can give a name and description to each playbook.
The first step to define in the playbook is the "triggering event", which can be any knowledge event (create, update or delete) with customizable filters. To do so, click on the grey rectangle in the center of the workspace and choose the component to "listen knowledge events". Configure it with adequate filters. You can use the same filters as in other parts of the platform.
Then you have flexible choices for the next steps to:
filter the initial knowledge,
enrich data using external sources and internal rules,
modify entities and relationships by applying patches,
write the data, send notifications,
etc.
Do not forget to start your Playbook when ready, with the Start option of the burger button placed near the name of your Playbook.
By clicking the burger button of a component, you can replace it with another one.
By clicking on the arrow icon in the bottom right corner of a component, you can develop a new branch at the same level.
By clicking the "+" button on a link between components, you can insert a component between the two.
"},{"location":"usage/automation/#components-of-playbooks","title":"Components of playbooks","text":""},{"location":"usage/automation/#log-data-in-standard-output","title":"Log data in standard output","text":"
Will write the received STIX 2.1 bundle in platform logs with configurable log level and then send out the STIX 2.1 bundle unmodified.
"},{"location":"usage/automation/#send-for-ingestion","title":"Send for ingestion","text":"
Will pass the STIX 2.1 bundle to be written in the data stream. This component has no output and should end a branch of your playbook.
Will allow you to define a filter and apply it to the received STIX 2.1 bundle. The component has 2 outputs: one for the data matching the filter and one for the remainder.
By default, filtering is applied to entities having triggered the playbook. You can toggle the corresponding option to apply it to all elements in the bundle (elements that might result from enrichment for example).
"},{"location":"usage/automation/#enrich-through-connector","title":"Enrich through connector","text":"
Will send the received STIX 2.1 bundle to a compatible enrichment connector and send out the modified bundle.
Will add, replace or remove compatible attributes of the entities contained in the received STIX 2.1 bundle and send out the modified bundle.
By default, modification is applied to entities having triggered the playbook. You can toggle the corresponding option to apply it to all elements in the bundle (elements that might result from enrichment for example).
Will modify the received STIX 2.1 bundle to include the entities into a container of the type you configured. By default, wrapping is applied to entities having triggered the playbook. You can toggle the corresponding option to apply it to all elements in the bundle (elements that might result from enrichment for example).
"},{"location":"usage/automation/#share-with-organizations","title":"Share with organizations","text":"
Will share every entity in the received STIX 2.1 bundle with the Organizations you configured. Your platform needs to have a platform main organization declared in Settings/Parameters.
Will apply a complex built-in automation rule. This kind of rule might impact performance. Current rules are:
First/Last seen computing extension from report publication date: will populate first seen and last seen date of entities contained in the report based on its publication date,
Resolve indicators based on observables (add in bundle): will retrieve all indicators linked to the bundle's observables from the database,
Resolve observables an indicator is based on (add in bundle): retrieve all observables linked to the bundle's indicator from the database,
Resolve container references (add in bundle): will add to the bundle all the relationships and entities the container contains (if the entity having triggered the playbook is not a container, the output of this component will be empty),
Resolve neighbors relations and entities (add in bundle): will add to the bundle all relations of the entity having triggered the playbook, as well as all entities at the end of these relations, i.e. the \"first neighbors\" (if the entity is a container, the output of this component will be empty).
"},{"location":"usage/automation/#send-to-notifier","title":"Send to notifier","text":"
Will generate a Notification each time a STIX 2.1 bundle is received.
"},{"location":"usage/automation/#promote-observable-to-indicator","title":"Promote observable to indicator","text":"
Will generate indicators based on the observables contained in the received STIX 2.1 bundle.
By default, it is applied to entities having triggered the playbook. You can toggle the corresponding option to apply it to all observables in the bundle (e.g. observables that might result from predefined rule).
You can also add all indicators and relationships generated by this component in the entity having triggered the playbook, if this entity is a container.
"},{"location":"usage/automation/#extract-observables-from-indicator","title":"Extract observables from indicator","text":"
Will extract observables based on indicators contained in the received STIX 2.1 bundle.
By default, it is applied to entities having triggered the playbook. You can toggle the corresponding option to apply it to all indicators in the bundle (e.g. indicators that might result from enrichment).
You can also add all observables and relationships generated by this component in the entity having triggered the playbook, if this entity is a container.
At the top right of the interface, you can access execution trace of your playbook and consult the raw data after every step of your playbook execution.
Rule tasks can be seen and activated in Settings > Customization > Rules engine. Knowledge and user tasks can be seen and managed in Data > Background Tasks. The scope of each task is indicated.
If a rule task is enabled, it scans the whole platform data and creates entities or relationships whenever a configuration corresponds to the task's rules. The created data are called 'inferred data'. Each time an event occurs in the platform, the rule engine checks if inferred data should be updated/created/deleted.
Knowledge tasks are background tasks updating or deleting entities and correspond to mass operations on these data. To create one, select entities via the checkboxes in an entity list, and choose the action to perform via the toolbar.
To create a knowledge task, the user should have the capability to Update Knowledge (or the capability to delete knowledge if the task action is a deletion).
To see a knowledge task in the Background task section, the user should be the creator of the task, or have the KNOWLEDGE capability.
To delete a knowledge task from the Background task section, the user should be the creator of the task, or have the KNOWLEDGE_UPDATE capability.
User tasks are background tasks updating or deleting notifications. This can be done from the Notification section, by selecting several notifications via the checkboxes and choosing an action via the toolbar.
A user can create a user task on their own notifications only.
To see or delete a user task, the user should be the creator of the task or have the SET_ACCESS capability.
"},{"location":"usage/case-management/","title":"Case management","text":""},{"location":"usage/case-management/#why-case-management","title":"Why Case management?","text":"
Compiling CTI data in one place, deduplicating and correlating it to transform it into Intelligence is very important. But ultimately, you need to act based on this Intelligence. Some situations will need to be taken care of, like cybersecurity incidents, requests for information or requests for takedown. Some actions will then need to be traced, coordinated and overseen. Some actions will include feedback and content delivery.
OpenCTI includes Cases to allow organizations to manage situations and organize their team's work. Better still, by doing case management in OpenCTI, you handle your cases with all the context and Intelligence you need at hand.
"},{"location":"usage/case-management/#how-to-manage-your-case-in-opencti","title":"How to manage your Case in OpenCTI?","text":"
Multiple situations can be modeled in OpenCTI as a Case: an Incident Response, a Request for Takedown or a Request for Information.
All Cases can contain any entities and relationships you need to represent the Intelligence context related to the situation. At the beginning of your case, you may find yourself with only some Observables sighted in a system. At the end, you may have Indicators, Threat Actors, impacted systems and attack patterns, all representing your findings, ready to be presented and exported as a graph, PDF report, timeline, etc.
Some Cases may need collaborative work and specific Tasks to be performed by people with the relevant skillsets. OpenCTI allows you to associate Tasks with your Cases and assign them to users in the platform. As some types of situation may need the same tasks to be done, it is also possible to pre-define lists of tasks to be applied to your case. You can define these lists in the Settings/Taxonomies/Case templates panel. Then you just need to add them from the overview of your desired Case.
Tip: A user can have a custom dashboard showing all the tasks that have been assigned to them.
As with other objects in OpenCTI, you can also leverage Notes to add investigation and analysis related comments, helping you shape the content of your case with unstructured data and trace all the work that has been done.
You can also use Opinions to collect how the Case has been handled, helping you to build Lessons Learned.
To trace the evolution of your Case and define specific resolution workflows, you can use the Status (which can be defined in Settings/Taxonomies/Status templates).
At the end of your Case, you will certainly want to report on what has been done. OpenCTI allows you to export the content of the Case in a simple but customizable PDF (currently being refactored). But of course, your company has its own document templates, right? With OpenCTI, you will be able to include some nice graphics in them. For example, a Matrix view of the attacker's attack patterns or even a graph display of how things are connected.
Also, we are currently working on a more meaningful Timeline view that will also be exportable.
"},{"location":"usage/case-management/#use-case-example-a-suspicious-observable-is-sighted-by-a-defense-system-is-it-important","title":"Use case example: A suspicious observable is sighted by a defense system. Is it important?","text":"
Daily, your SIEM and EDR are fed Indicators of Compromise from your OpenCTI instance.
Today, your SIEM has sighted the domain name "bad.com", matching one of them. Its alert has been transferred to OpenCTI and has created a Sighting relationship between your System "SIEM perimeter A" and the Observable "bad.com".
You are alerted immediately, because you have activated the inference rule creating a corresponding Incident in this situation, and you have created an alert based on new Incidents that sends you an email notification and a Teams message (webhook).
In OpenCTI, you can clearly see the link between the alerting System, the sighted Observable and the corresponding Indicator. Better still, you can also see all the context of the Indicator. It is linked to a notorious and recent phishing campaign targeting your activity sector. "bad.com" is clearly something to investigate ASAP.
You quickly select all the context you have found and add it to a new Incident Response case. You set the priority to High, given the context, and the severity to Low, as you don't yet know whether someone really interacted with "bad.com".
You also assign the case to one of your colleagues, on duty for investigative work. To guide them, you also create a Task in your case for verifying whether an actual interaction happened with "bad.com".
In the STIX 2.1 standard, some STIX Domain Objects (SDOs) can be considered as "containers of knowledge", using the object_refs attribute to reference multiple other objects as nested references. In object_refs, it is possible to refer to entities and relationships.
"},{"location":"usage/containers/#implementation","title":"Implementation","text":""},{"location":"usage/containers/#types-of-container","title":"Types of container","text":"
In OpenCTI, containers are displayed differently than other entities, because they contain pieces of knowledge. Here is the list of containers in the platform:
| Type of entity | STIX standard | Description |
|---|---|---|
| Report | Native | Reports are collections of threat intelligence focused on one or more topics, such as a description of a threat actor, malware, or attack technique, including context and related details. |
| Grouping | Native | A Grouping object explicitly asserts that the referenced STIX Objects have a shared context, unlike a STIX Bundle (which explicitly conveys no context). |
| Observed Data | Native | Observed Data conveys information about cyber security related entities such as files, systems, and networks using the STIX Cyber-observable Objects (SCOs). |
| Note | Native | A Note is intended to convey informative text to provide further context and/or to provide additional analysis not contained in the STIX Objects. |
| Opinion | Native | An Opinion is an assessment of the correctness of the information in a STIX Object produced by a different entity. |
| Case | Extension | A case, whether an Incident Response, a Request for Information or a Request for Takedown, is used to convey an epic with a set of tasks. |
| Task | Extension | A task, generally used in the context of a case, is intended to convey information about something that must be done in a limited timeframe. |

Containers behavior
In the platform, it is always possible to visualize the list of entities and/or observables referenced in a container (Container > Entities or Observables) but also to add / remove entities from the container.
As containers can also contain relationships, which are generally linked to the other entities in the container, it is also possible to visualize the container as a graph (Container > Knowledge)
"},{"location":"usage/containers/#containers-of-an-entity-or-a-relationship","title":"Containers of an entity or a relationship","text":"
On the entity or the relationship side, you can always find all containers where the object is contained using the top menu Analysis:
In all container lists, you can also filter containers based on one or multiple contained object(s):
OpenCTI provides a simple way to share a visualisation of a custom dashboard with anyone, even people outside of the platform. We call these visualisations public dashboards.
Public dashboards are a snapshot of a custom dashboard at a specific moment in time. This way, you can share a version of a custom dashboard, then modify your custom dashboard without worrying about the impact on the public dashboards you have created.
Conversely, if you want your public dashboard to be updated with the latest version of the associated custom dashboard, you can do it in a few clicks by recreating a public dashboard using the same name as the one to update.
To be able to share custom dashboards you need to have the Manage data sharing & ingestion capability.
"},{"location":"usage/dashboards-share/#create-a-public-dashboard","title":"Create a public dashboard","text":"
On the top-right of your custom dashboard page you will find a button that will open a panel to manage the public dashboards associated to this custom dashboard.
In this panel you will find two parts:
- at the top, a form allowing you to create public dashboards,
- below, the list of the public dashboards you have created.
"},{"location":"usage/dashboards-share/#form-to-create-a-new-public-dashboard","title":"Form to create a new public dashboard","text":"
First you need to specify a name for your public dashboard. This name will be displayed on the dashboard page. The name is also used to generate an ID for your public dashboard that will be used in the URL to access the dashboard.
The ID is generated as follows: all spaces are replaced with the - symbol and special characters are removed. This ID also needs to be unique in the platform, as it is used in the URL to access the dashboard.
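A minimal sketch of this generation rule (not the exact platform code):

```javascript
// Turn a public dashboard name into its URL id, following the rule above.
function toPublicDashboardId(name) {
  return name
    .replace(/\s+/g, '-')           // replace all spaces with "-"
    .replace(/[^a-zA-Z0-9-]/g, ''); // remove special characters
}

console.log(toPublicDashboardId('My Q3 threat landscape!')); // "My-Q3-threat-landscape"
```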
Then you can choose if the public dashboard is enabled or not. A disabled dashboard means that you cannot access the public dashboard through the custom URL. But you can still manage it from this panel.
Finally, you choose the maximum level of marking definitions for the data to be displayed in the public dashboard. For example, if you choose TLP:AMBER, then the data fetched by the widgets inside the public dashboard will be at maximum TLP:AMBER; you won't retrieve data with a RED marking definition.
Also note that the list of marking definitions you can see is based on your current marking access on the platform and the maximum sharable marking definitions defined in your groups.
Define maximum shareable markings in groups
As a platform administrator, you can define, for each group and each type of marking definition, the maximum marking definitions shareable through a Public Dashboard, regardless of the definition set by users in their public dashboards.
"},{"location":"usage/dashboards-share/#list-of-the-public-dashboards","title":"List of the public dashboards","text":"
When you have created a public dashboard, it will appear in the list below the form.
In this list each item represents a public dashboard you have created. For each you can see its name, path of the URL, max marking definitions, the date of creation, the status to know if the dashboard is enabled or not and some actions.
The possible actions are: copy the link of the public dashboard, disable or enable the dashboard and delete the dashboard.
To share a public dashboard, just copy the link and give the URL to the person you want to share it with. The dashboard will be visible even if the person is not connected to OpenCTI.
OpenCTI provides an adaptable and entirely customizable dashboard functionality. The flexibility of OpenCTI's dashboard ensures a tailored and insightful visualization of data, fostering a comprehensive understanding of the platform's knowledge, relationships, and activities.
You have the flexibility to tailor the arrangement of widgets on your dashboard, optimizing organization and workflow efficiency. Widgets can be intuitively placed to highlight key information. Additionally, you can resize widgets from the bottom right corner based on the importance of the information, enabling adaptation to specific analytical needs. This technical flexibility ensures a fluid, visually optimized user experience.
Moreover, the top banner of the dashboard offers a convenient feature to configure the timeframe for displayed data. This can be accomplished through the selection of a relative time period, such as \"Last 7 days\", or by specifying fixed \"Start\" and \"End\" dates, allowing users to precisely control the temporal scope of the displayed information.
In OpenCTI, the power to create custom dashboards comes with a flexible access control system, allowing users to tailor visibility and rights according to their collaborative needs.
When a user crafts a personalized dashboard, by default, it remains visible only to the dashboard creator. At this stage, the creator holds administrative access rights. However, they can extend access and rights to others using the \"Manage access\" button, denoted by a locker icon, located at the top right of the dashboard page.
Levels of access:
View: Access to view the dashboard.
Edit: View + Permission to modify and update the dashboard and its widgets.
Manage: Edit + Ability to delete the dashboard and to control user access and rights.
It's crucial to emphasize that at least one user must retain admin access to ensure ongoing management capabilities for the dashboard.
Knowledge access restriction
The platform's data access restrictions also apply to dashboards. The data displayed in the widgets is subject to the user's access rights within the platform. Therefore, an admin user will not see the same results in the widgets as a user with limited access, such as viewing only TLP:CLEAR data (assuming the platform contains data beyond TLP:CLEAR).
"},{"location":"usage/dashboards/#share-dashboard-and-widget-configurations","title":"Share dashboard and widget configurations","text":"
OpenCTI provides functionality for exporting, importing and duplicating dashboard and widget configurations, facilitating the seamless transfer of customized dashboard setups between instances or users.
The dashboard configuration will be saved as a JSON file, with the title formatted as [YYYYMMDD]_octi_dashboard_[dashboard title].
The widget configuration will likewise be saved as a JSON file, with the title formatted as [YYYYMMDD]_octi_widget_[widget view]. In both cases, the expected configuration file content has the following general shape:
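Based on the required properties described under "Configuration compatibility" below, such a file can be sketched as follows; the configuration body here is purely illustrative:

```json
{
  "openCTI_version": "5.12.0",
  "type": "dashboard",
  "configuration": { "...": "the exported dashboard or widget definition" }
}
```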
When exporting a dashboard or widget configuration, all filters will be exported as is. Filters on objects that do not exist in the receiving platform will need manual deletion after import. Filters to be deleted can be identified by their "delete" barred value.
Dashboards can be imported from the custom dashboards list:
Hover over the Add button (+) in the right bottom corner.
Click on the Import dashboard button (cloud with an upward arrow).
Select your file.
To import a widget, the same mechanism is used, but from a dashboard view.
Configuration compatibility
Only JSON files with the required properties will be accepted, including "openCTI_version: [5.12.0 and above]", "type: [dashboard|widget]", and a "configuration". This applies to both dashboard and widget configurations.
Dashboards can be duplicated from either the custom dashboards list or the dashboard view.
To duplicate a dashboard from the custom dashboards list:
Click on the burger menu button at the end of the dashboard line.
Select Duplicate.
To duplicate a widget, the same mechanism is used, but from the burger menu button in the upper right-hand corner of the widget.
To duplicate a dashboard from the dashboard view:
Navigate to the desired dashboard.
Click on the Duplicate the dashboard button (two stacked sheets) located in the top-right corner of the dashboard.
Upon successful duplication, a confirmation message is displayed for a short duration, accompanied by a link for easy access to the new dashboard view. Nevertheless, the new dashboard can still be found in the dashboards list.
Dashboard access
The user importing or duplicating a dashboard becomes the only one with access to it. Then, access can be managed as usual.
To enable a unified approach in the description of threat intelligence knowledge as well as importing and exporting data, the OpenCTI data model is based on the STIX 2.1 standard. Thus, we highly recommend taking a look at the STIX Introductory Walkthrough and at the different kinds of STIX relationships to get a better understanding of how OpenCTI works.
Some of the more important STIX naming shortcuts are:
STIX Domain Objects (SDO): Attack Patterns, Malware, Threat Actors, etc.
STIX Cyber Observable (SCO): IP Addresses, domain names, hashes, etc.
In some cases, the model has been extended to be able to:
Support more types of SCOs to model information systems such as cryptocurrency wallets, user agents, etc.
Support more types of SDOs to model disinformation and cybercrime such as channels, events, narratives, etc.
Support more types of SROs to extend the new SDOs, such as amplifies, publishes, etc.
"},{"location":"usage/data-model/#implementation-in-the-platform","title":"Implementation in the platform","text":""},{"location":"usage/data-model/#diagram-of-types","title":"Diagram of types","text":"
You can find below the diagram of all types of entities and relationships available in OpenCTI.
"},{"location":"usage/data-model/#attributes-and-properties","title":"Attributes and properties","text":"
To get a comprehensive list of available properties for a given type of entity or relationship, you can use the GraphQL playground schema available in your "Profile > Playground". Then you can click on Schema and, for instance, search for the keyword IntrusionSet:
"},{"location":"usage/dates/","title":"Meaning of dates","text":"
In OpenCTI, entities can contain various dates, each representing different types of information. The available dates vary depending on the entity types.
In OpenCTI, dates play a crucial role in understanding the context and history of entities. Here's a breakdown of the different dates you might encounter in the platform:
"Platform creation date": This date signifies the moment the entity was created within OpenCTI. On the API side, this timestamp corresponds to the created_at field. It reflects the initiation of the entity within the OpenCTI environment.
"Original creation date": This date reflects the original creation date of the data on the source's side. It becomes relevant if the source provides this information and if the connector responsible for importing the data takes it into account. In cases where the source date is unavailable or not considered, this date defaults to the import date (i.e. the "Platform creation date"). On the API side, this timestamp corresponds to the created field.
"Modification date": This date captures the most recent modification made to the entity, whether a connector automatically modifies it or a user manually edits the entity. On the API side, this timestamp corresponds to the updated_at field. It serves as a reference point for tracking the latest changes made to the entity.
Date not shown on GUI: There is an additional date which is not visible on the entity in the GUI. This date is the modified field on the API. It reflects the original update date of the data on the source's side. The difference between modified and updated_at is identical to the difference between created and created_at.
Understanding these dates is pivotal for contextualizing the information within OpenCTI, ensuring a comprehensive view of entity history and evolution.
The technical dates refer to the dates directly associated with data management within the platform. The API fields corresponding to technical dates are:
created_at: Indicates the date and time when the entity was created in the platform.
updated_at: Represents the date and time of the most recent update to the entity in the platform.
The functional dates are the dates that are functionally significant, often indicating specific events or milestones; a combined example follows the list below. The API fields corresponding to functional dates are:
created: Denotes the date and time when the entity was created on the source's side.
modified: Represents the date and time of the most recent modification to the entity on the source's side.
start_time: Indicates the start date and time associated with a relationship.
stop_time: Indicates the stop date and time associated with a relationship.
first_seen: Represents the initial date and time when the entity/activity was observed.
last_seen: Represents the most recent date and time when the entity/activity was observed.
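As an illustration, a relationship could expose the following combination of technical and functional dates (the values are hypothetical):

```json
{
  "created_at": "2024-03-01T10:15:00.000Z",
  "updated_at": "2024-03-02T08:00:00.000Z",
  "created": "2024-02-20T09:00:00.000Z",
  "modified": "2024-02-28T17:30:00.000Z",
  "start_time": "2024-01-01T00:00:00.000Z",
  "stop_time": "2024-01-15T00:00:00.000Z"
}
```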
One of the core concepts of the OpenCTI knowledge graph is the set of underlying mechanisms implemented to accurately de-duplicate and consolidate (aka upsert) information about entities and relationships.
When an object is created in the platform, whether manually by a user or automatically by the connectors / workers chain, the platform checks if something already exists based on some properties of the object. If the object already exists, it will return the existing object and, in some cases, update it as well.
Technically, OpenCTI generates deterministic IDs based on the properties listed below (aka "ID Contributing Properties") to prevent duplicates. It is also important to note that there is a special link between name and aliases: an entity cannot have overlapping aliases, or an alias already used as the name of another entity.
"},{"location":"usage/deduplication/#entities","title":"Entities","text":"Type Attributes Area (name OR x_opencti_alias) AND x_opencti_location_type Attack Pattern (name OR alias) AND optional x_mitre_id Campaign name OR alias Channel name OR alias City (name OR x_opencti_alias) AND x_opencti_location_type Country (name OR x_opencti_alias) AND x_opencti_location_type Course Of Action (name OR alias) AND optional x_mitre_id Data Component name OR alias Data Source name OR alias Event name OR alias Feedback Case name AND created (date) Grouping name AND context Incident name OR alias Incident Response Case name OR alias Indicator pattern OR alias Individual (name OR x_opencti_alias) and identity_class Infrastructure name OR alias Intrusion Set name OR alias Language name OR alias Malware name OR alias Malware Analysis name OR alias Narrative name OR alias Note None Observed Data name OR alias Opinion None Organization (name OR x_opencti_alias) and identity_class Position (name OR x_opencti_alias) AND x_opencti_location_type Region name OR alias Report name AND published (date) RFI Case name AND created (date) RFT Case name AND created (date) Sector (name OR alias) and identity_class Task None Threat Actor name OR alias Tool name OR alias Vulnerability name OR alias
Names and aliases management
The name and aliases of an entity define a set of unique values, so it's not possible to have the name equal to an alias and vice versa.
For STIX Cyber Observables, OpenCTI also generates deterministic IDs based on the STIX specification, using the "ID Contributing Properties" defined for each type of observable.
In cases where an entity already exists in the platform, incoming creations can trigger updates to the existing entity's attributes. This logic has been implemented to converge the knowledge base towards the highest confidence and quality levels for both entities and relationships.
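To make the upsert behavior concrete, here is a minimal client-python sketch (placeholder URL and token) that creates the same Malware twice; since the name is an ID contributing property, both calls are expected to resolve to the same entity:

```python
from pycti import OpenCTIApiClient

api = OpenCTIApiClient("https://opencti.example.com", "YOUR_API_TOKEN")

# Two creations with the same name: the second call upserts the first entity
first = api.malware.create(name="Emotet", description="First import")
second = api.malware.create(name="Emotet", description="Second import, upserted")

print(first["id"] == second["id"])  # expected: True, thanks to deterministic IDs
```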
To understand in detail how the deduplication mechanism works in the context of the maximum confidence level, you can navigate through this diagram (section deduplication):
"},{"location":"usage/delete-restore/","title":"Delete and restore knowledge","text":"
Knowledge can be deleted from OpenCTI either in an overview of an object or using background tasks. When an object is deleted, all its relationships and references to other objects are also deleted.
The deletion event is written to the stream, so that it can trigger automated playbooks or synchronize another platform.
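As a minimal sketch, deletion events can be consumed from the platform live stream. This assumes the SSE /stream endpoint and bearer-token authentication of your deployment (URL and token are placeholders):

```python
import json
import requests

url = "https://opencti.example.com/stream"
headers = {"Authorization": "Bearer YOUR_API_TOKEN", "Accept": "text/event-stream"}

# Read the server-sent events stream and keep only "delete" events
with requests.get(url, headers=headers, stream=True) as response:
    event_type = None
    for line in response.iter_lines(decode_unicode=True):
        if line.startswith("event:"):
            event_type = line.split(":", 1)[1].strip()
        elif line.startswith("data:") and event_type == "delete":
            payload = json.loads(line.split(":", 1)[1].strip())
            print("Deleted object:", payload["data"]["id"])
```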
Since OpenCTI 6.1, a record of the deleted objects is kept for a given period of time, allowing them to be restored on demand. This does not impact the stream events or the other side effects of the deletion: the object is still deleted.
A view called \"Trash\" displays all \"delete\" operations, entities and relationships alike.
A delete operation contains not only the entity or relationship that has been deleted, but also all the relationships and references from (to) this main object to (from) other elements in the platform.
You can sort, filter or search this table using the usual UI controls. Sorting and filtering are limited to the type of object, its representation (most of the time, the name of the object), the user who deleted the object, the date and time of deletion, and the marking of the object.
Note that the delete operations (i.e. the entries in this table view) inherit the marking of the main entity that was deleted, and thus follow the same access restrictions as the object that was deleted.
You can individually restore or permanently delete an object from the trash view using the burger menu at the end of the line.
Alternatively, you can use the checkboxes at the start of the line to select a subset of deleted objects, and trigger a background task to restore or permanently delete them by batch.
Restoring an element creates it again in the platform with the same information it had before its deletion. It also restores all the relationships from or to this object that were deleted during the deletion operation. If the object had attached files (uploaded or exported), they are also restored.
When it comes to restoring a deleted object from the trash, the current implementation has several limitations. First and foremost, if an object in the trash has lost a relationship dependency (i.e. the other side of a relationship from or to this object is no longer in the live database), you will not be able to restore the object.
In such a case, if the missing dependency is in the trash too, you can manually restore this dependency first and then retry.
If the missing dependency has been permanently deleted, the object cannot be recovered.
In other words:
- No partial restore: the object and all its relationships must be restored in one pass.
- No "cascading" restore: restoring one object does not automatically restore all linked objects in the trash.
Enriching the data within the OpenCTI platform is made seamless through the integration of enrichment connectors. These connectors facilitate the retrieval of additional data from external sources or portals.
Enrichment can be conducted automatically in two distinct modes:
Upon data arrival: Configuring the connector to run automatically when data arrives in OpenCTI ensures a real-time enrichment process, supplementing the platform's data. However, it's advisable to avoid automatic enrichment for quota-based connectors to paid sources, to prevent quickly depleting all quotas. Additionally, automatic enrichment contributes to increased data volume: at a large scale, with hundreds of thousands of objects, the disk space occupied by this data can be substantial and should be taken into account, especially if disk space is a concern. The automatic execution is configured at the connector level using the "auto: true|false" parameter.
Targeted enrichment via playbooks: Enrichment can also be performed in a more targeted manner using playbooks. This approach allows for a customized enrichment strategy, focusing on specific objects and optimizing the relevance of the retrieved data.
Manually initiating the enrichment process is straightforward. Simply locate the button with the cloud icon at the top right of an entity.
Clicking on this icon unveils a side panel displaying a list of available connectors that can be activated for the given object. If no connectors appear in the panel, it indicates that no enrichment connectors are available for the specific type of object in focus.
Activation of an enrichment connector triggers a contact with the designated remote source, importing a set of data into OpenCTI to enrich the selected object. Each enrichment connector operates uniquely, focusing on a specific set of object types it can enrich and a distinct set of data it imports. Depending on the connector, it may establish relationships, add external references, or complete object information, thereby contributing to the comprehensiveness of information within the platform.
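Enrichment can also be requested programmatically. The sketch below goes through the client-python raw query helper; the askEnrichment mutation name and both IDs are assumptions to check against your own playground schema:

```python
from pycti import OpenCTIApiClient

api = OpenCTIApiClient("https://opencti.example.com", "YOUR_API_TOKEN")

# Assumed mutation name; verify it in your platform's GraphQL schema
mutation = """
mutation AskEnrichment($id: ID!, $connectorId: ID!) {
  stixCoreObjectEdit(id: $id) {
    askEnrichment(connectorId: $connectorId) {
      id
    }
  }
}
"""

# Placeholder IDs: the object to enrich and the enrichment connector to run
api.query(mutation, {
    "id": "00000000-0000-0000-0000-000000000000",
    "connectorId": "11111111-1111-1111-1111-111111111111",
})
```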
The list of available connectors can be found in our connectors catalog. In addition, further documentation on connectors is available on the dedicated documentation page.
Impact of the max confidence level
The maximum confidence level per user can have an impact on enrichment connectors, preventing them from updating data in the platform. To understand the concept and the potential issues you could face, please navigate to this page.
When you click on \"Analyses\" in the left-side bar, you see all the \"Analyses\" tabs, visible on the top bar on the left. By default, the user directly access the \"Reports\" tab, but can navigate to the other tabs as well.
From the Analyses section, users can access the following tabs:
Reports: See Reports as a sort of container to detail and structure what is contained in a specific report, either from a source or written by yourself. Think of it as an Intelligence Production in OpenCTI.
Groupings: Groupings are containers, like Reports, but do not represent an Intelligence Production. They regroup Objects sharing an explicit context. For example, a Grouping might represent a set of data that, in time, given sufficient analysis, would mature to convey an incident or threat report as a Report container.
Malware Analyses: As defined by the STIX 2.1 standard, a Malware Analysis captures the metadata and results of a particular static or dynamic analysis performed on a malware instance or family.
Notes: Through this tab, you can find all the Notes that have been written in the platform, for example to add some analyst's unstructured knowledge about an Object.
External references: Intelligence is never created from nothing. External references give users a way to link sources or reference documents to any Object in the platform.
Reports are one of the central components of the platform. It is from a Report that knowledge is extracted and integrated in the platform for further navigation, analyses and exports. Always tying the information back to a report allows the user to identify the source of any piece of information in the platform at all times.
In the MITRE STIX 2.1 documentation, a Report is defined as such:
Reports are collections of threat intelligence focused on one or more topics, such as a description of a threat actor, malware, or attack technique, including context and related details. They are used to group related threat intelligence together so that it can be published as a comprehensive cyber threat story.
As a result, a Report object in OpenCTI is a set of attributes and metadata defining and describing a document outside the platform, which can be a threat intelligence report from a security research team, a blog post, a press article, a video, a conference extract, a MISP event, or any other type of document and source.
When clicking on the Reports tab at the top left, you see the list of all the Reports you have access to, in accordance with your allowed marking definitions. You can then search and filter on some common and specific attributes of reports.
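The same list can be retrieved programmatically, for example with the client-python full-text search (URL and token are placeholders):

```python
from pycti import OpenCTIApiClient

api = OpenCTIApiClient("https://opencti.example.com", "YOUR_API_TOKEN")

# Full-text search on Reports, limited to the first 25 results
reports = api.report.list(search="APT", first=25)
for report in reports:
    print(report["name"], report["published"])
```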
"},{"location":"usage/exploring-analysis/#visualizing-knowledge-within-a-report","title":"Visualizing Knowledge within a Report","text":"
When clicking on a Report, you land on the Overview tab. For a Report, the following tabs are accessible:
Overview: as described here.
Knowledge: a complex tab that regroups all the structured Knowledge contained in the report, accessible through different views (See below for a dive-in). As described here.
Content: a tab to upload or create outcome documents displaying the content of the Report (for example PDF, text, HTML or markdown files). The content of the document is displayed to ease access to the Knowledge in a readable format. As described here.
Entities: a table containing all SDOs (STIX Domain Objects) contained in the Report, with search and filters available. It also displays if the SDO has been added directly or through inferences with the reasoning engine.
Observables: a table containing all SCOs (STIX Cyber Observables) contained in the Report, with search and filters available. It also displays if the SCO has been added directly or through inferences with the reasoning engine.
Data: as described here.
Exploring and modifying the structured Knowledge contained in a Report can be done through different lenses.
In Graph view, STIX SDOs are displayed as graph nodes and relationships as graph links. Nodes are colored depending on their type. Direct relationships are displayed as plain links and inferred relationships as dotted links. At the top right, you will find a series of icons. From there you can change the current type of view. Here you can also perform global actions on the Knowledge of the Report. Let's highlight 2 of them:
- Suggestions: This tool suggests logical relationships to add between the contained Objects to give more consistency to your Knowledge.
- Share with an Organization: if you have designated a main Organization in the platform settings, you can share your Report and its content with users of another Organization.
At the bottom, you have many options to manipulate the graph:
- Multiple options for shaping the graph and applying forces to the nodes and links
- Multiple selection options
- Multiple filters, including a time range selector allowing you to see the evolution of the Knowledge within the Report.
- Multiple creation and edition tools to modify the Knowledge contained in the Report.
Through this view, you can map existing or new Objects directly from readable content, allowing you to quickly append structured Knowledge to your Report before refining it with relationships and details. This view is a great place to see the continuum between the unstructured and structured Knowledge of a specific Intelligence Production.
This view allows you to see the structured Knowledge chronologically. It is really useful when the report describes an attack or a campaign that lasted some time, and the analyst paid attention to the dates. The view can be filtered and can display relationships too.
The correlation view is a great way to visualize and find other Reports related to your current subject of interest. This graph displays all Reports related to the important nodes contained in your current Report, for example Objects like Malware or Intrusion Sets.
If your Report describes, let's say, an attack, a campaign, or an understanding of an Intrusion Set, it should contain multiple Attack Pattern Objects to structure the Knowledge about the TTPs of the Threat Actor. Those attack patterns can be displayed as highlighted matrices, by default the MITRE ATT&CK Enterprise matrix. As some matrices can be huge, the view can also be filtered to only display the attack patterns described in the Report.
Groupings are an alternative to Reports for grouping Objects sharing a context without describing an Intelligence Production.
In the MITRE STIX 2.1 documentation, a Grouping is defined as such:
A Grouping object explicitly asserts that the referenced STIX Objects have a shared context, unlike a STIX Bundle (which explicitly conveys no context). A Grouping object should not be confused with an intelligence product, which should be conveyed via a STIX Report. A STIX Grouping object might represent a set of data that, in time, given sufficient analysis, would mature to convey an incident or threat report as a STIX Report object. For example, a Grouping could be used to characterize an ongoing investigation into a security event or incident. A Grouping object could also be used to assert that the referenced STIX Objects are related to an ongoing analysis process, such as when a threat analyst is collaborating with others in their trust community to examine a series of Campaigns and Indicators.
When clicking on the Groupings tab at the top of the interface, you see the list of all the Groupings you have access to, in accordance with your allowed marking definitions. You can then search and filter on some common and specific attributes of the groupings.
Clicking on a Grouping, you land on its Overview tab. For a Grouping, the following tabs are accessible:
- Overview: as described here.
- Knowledge: a complex tab that regroups all the structured Knowledge contained in the Grouping, as for a Report, except for the Timeline view. As described here.
- Entities: a table containing all SDOs (STIX Domain Objects) contained in the Grouping, with search and filters available. It also displays if the SDO has been added directly or through inferences with the reasoning engine.
- Observables: a table containing all SCOs (STIX Cyber Observables) contained in the Grouping, with search and filters available. It also displays if the SCO has been added directly or through inferences with the reasoning engine.
- Data: as described here.
Malware Analyses are an important part of Cyber Threat Intelligence, allowing a precise understanding of what a malware really does on the host and how, but also how and from where it receives its commands and communicates its results.
In OpenCTI, Malware Analyses can be created from enrichment connectors that take an Observable as input and perform a scan on an online service platform to bring back results. As such, Malware Analyses can be done on Files, Domains and URLs.
In the MITRE STIX 2.1 documentation, a Malware Analysis is defined as such:
Malware Analysis captures the metadata and results of a particular static or dynamic analysis performed on a malware instance or family.
When clicking on the Malware Analyses tab at the top of the interface, you see the list of all the Malware Analyses you have access to, in accordance with your allowed marking definitions. You can then search and filter on some common and specific attributes of the Malware Analyses.
Clicking on a Malware Analysis, you land on its Overview tab. The following tabs are accessible:
- Overview: this view contains some additions to the common Overview described here. You will find details about how the analysis has been performed, the global result regarding the maliciousness of the analysed artifact, and all the Observables that have been found during the analysis.
- Knowledge: if your Malware Analysis is linked to other Objects that are not part of the analysis result, they will be displayed here. As described here.
- Data: as described here.
- History: as described here.
Not all Knowledge can be structured. To allow any user to share their insights about a specific piece of Knowledge, a Note can be created for every Object and relationship in OpenCTI they have access to. All the Notes are listed within the Analyses menu to allow a global review of these unstructured additions to the global Knowledge.
In the MITRE STIX 2.1 documentation, a Note is defined as such:
A Note is intended to convey informative text to provide further context and/or to provide additional analysis not contained in the STIX Objects, Marking Definition objects, or Language Content objects which the Note relates to. Notes can be created by anyone (not just the original object creator).
Clicking on a Note, you land on its Overview tab. The following tabs are accessible:
- Overview: as described here.
- Data: as described here.
- History: as described here.
Intelligence is never created from nothing. External references give users a way to link sources or reference documents to any Object in the platform. All external references are listed within the Analyses menu to directly access the sources of the structured Knowledge.
In the MITRE STIX 2.1 documentation, External References are defined as such:
External references are used to describe pointers to information represented outside of STIX. For example, a Malware object could use an external reference to indicate an ID for that malware in an external database or a report could use references to represent source material.
Clicking on an External reference, you land on its Overview tab. The following tabs are accessible:
- Overview: as described here.
When you click on \"Arsenal\" in the left-side bar, you access all the \"Arsenal\" tabs, visible on the top bar on the left. By default, the user directly access the \"Malware\" tab, but can navigate to the other tabs as well.
From the Arsenal section, users can access the following tabs:
Malware: Malware represents any piece of code specifically designed to damage, disrupt, or gain unauthorized access to computer systems, networks, or user data.
Channels: Channels, in the context of cybersecurity, refer to places or means through which actors disseminate information. This category is used in particular in the context of FIMI (Foreign Information Manipulation Interference).
Tools: Tools represent legitimate, installed software or hardware applications on an operating system that can be misused by attackers for malicious purposes. (e.g. LOLBAS).
Vulnerabilities: Vulnerabilities are weaknesses or flaws that can be exploited by attackers to compromise the security, integrity, or availability of a computer system or network.
Malware encompasses a broad category of malicious pieces of code built, deployed, and operated by intrusion sets. Malware can take many forms, including viruses, worms, Trojans, ransomware, spyware, and more. These entities are created by individuals or groups, including nation-states, state-sponsored groups, corporations, or hacktivist collectives.
Use the Malware SDO to model and track these threats comprehensively, facilitating in-depth analysis, response, and correlation with other security data.
When clicking on the Malware tab on the top left, you see the list of all the Malware you have access to, in accordance with your allowed marking definitions. These malware are displayed as Cards where you can find a summary of the important Knowledge associated with each of them: description, aliases, related intrusion sets, countries and sectors they target, and labels. You can then search and filter on some common and specific attributes of Malware.
At the top right of each Card, you can click the star icon to mark it as a favorite. This pins the card at the top of the list. You will also be able to easily display all your favorites in your Custom Dashboards.
"},{"location":"usage/exploring-arsenal/#visualizing-knowledge-associated-with-a-malware","title":"Visualizing Knowledge associated with a Malware","text":"
When clicking on a Malware card, you land on its Overview tab. For a Malware, the following tabs are accessible:
Overview: as described here.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Malware. Different thematic views are proposed to easily see the victimology, the threat actors and intrusion sets using the Malware, etc. As described here.
Channels - such as forums, websites and social media platforms (e.g. Twitter, Telegram) - are mediums for disseminating news, knowledge, and messages to a broad audience. While they offer benefits like open communication and outreach, they can also be leveraged for nefarious purposes, such as spreading misinformation, coordinating cyberattacks, or promoting illegal activities.
Monitoring and managing content within Channels aids in analyzing threats, activities, and indicators associated with various threat actors, campaigns, and intrusion sets.
When clicking on the Channels tab at the top left, you see the list of all the Channels you have access to, in accordance with your allowed marking definitions. These channels are displayed in a list where you can find certain fields characterizing the entity: type of channel, labels, and dates. You can then search and filter on some common and specific attributes of Channels.
"},{"location":"usage/exploring-arsenal/#visualizing-knowledge-associated-with-a-channel","title":"Visualizing Knowledge associated with a Channel","text":"
When clicking on a Channel in the list, you land on its Overview tab. For a Channel, the following tabs are accessible:
Overview: as described here.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Channel. Different thematic views are proposed to easily see the victimology, the threat actors and the intrusion sets using the Channel, etc. As described here.
Tools refer to legitimate, pre-installed software applications, command-line utilities, or scripts that are present on a compromised system. These objects enable you to model and monitor the activities of these tools, which can be misused by attackers.
When clicking on the Tools tab at the top left, you see the list of all the Tools you have access to, in accordance with your allowed marking definitions. These tools are displayed in a list where you can find certain fields characterizing the entity: labels and dates. You can then search and filter on some common and specific attributes of Tools.
"},{"location":"usage/exploring-arsenal/#visualizing-knowledge-associated-with-an-observed-data","title":"Visualizing Knowledge associated with an Observed Data","text":"
When clicking on a Tool in the list, you land on its Overview tab. For a Tool, the following tabs are accessible:
Overview: as described here.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Tool. Different thematic views are proposed to easily see the threat actors, the intrusion sets and the malware using the Tool. As described here.
Vulnerabilities represent weaknesses or flaws in software, hardware, configurations, or systems that can be exploited by malicious actors. This object assists in managing and tracking the organization's security posture by identifying areas that require attention and remediation, while also providing insights into associated intrusion sets, malware and campaigns where relevant.
When clicking on the Vulnerabilities tab at the top left, you see the list of all the Vulnerabilities you have access to, in accordance with your allowed marking definitions. These vulnerabilities are displayed in a list where you can find certain fields characterizing the entity: CVSS3 severity, labels, dates and creators (in the platform). You can then search and filter on some common and specific attributes of Vulnerabilities.
"},{"location":"usage/exploring-arsenal/#visualizing-knowledge-associated-with-an-observed-data_1","title":"Visualizing Knowledge associated with an Observed Data","text":"
When clicking on a Vulnerability in the list, you land on its Overview tab. For a Vulnerability, the following tabs are accessible:
Overview: as described here.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Vulnerability. Different thematic views are proposed to easily see the threat actors, the intrusion sets and the malware exploiting the Vulnerability. As described here.
When you click on \"Cases\" in the left-side bar, you access all the \"Cases\" tabs, visible on the top bar on the left. By default, the user directly access the \"Incident Responses\" tab, but can navigate to the other tabs as well.
As Analyses, Cases can contain other objects. This way, by adding context and results of your investigations in the case, you will be able to get an up-to-date overview of the ongoing situation, and later produce more easily an incident report.
From the Cases section, users can access the following tabs:
Incident Responses: This type of Case is dedicated to the management of incidents. An Incident Response case does not represent an incident, but all the context and actions that will encompass the response to a specific incident.
Request for Information: CTI teams are often asked to provide extensive information and analysis on a specific subject, be it related to an ongoing incident or a particular trending threat. Request for Information cases allow you to store context and actions relative to this type of request and its response.
Request for Takedown: When an organization is targeted by an attack campaign, a typical response action can be to request the takedown of elements of the attack infrastructure, for example a domain name impersonating the organization to phish its employees, or an email address used to deliver phishing content. As takedowns in most cases need to involve external providers and be effective quickly, they often require specific workflows. Request for Takedown cases give you a dedicated space to manage these specific actions.
Tasks: In every case, tasks need to be performed in order to solve it. The Tasks tab allows you to review all created tasks, to quickly see tasks past their due date, or to quickly see every task assigned to a specific user.
Feedbacks: If you use your platform to interact with other teams and provide them with CTI Knowledge, some users may want to give you feedback about it. These feedbacks can easily be considered as another type of case to solve, as they will often refer to Knowledge inconsistencies or gaps.
"},{"location":"usage/exploring-cases/#incident-response-request-for-information-request-for-takedown","title":"Incident Response, Request for Information & Request for Takedown","text":""},{"location":"usage/exploring-cases/#general-presentation","title":"General presentation","text":"
Incident Response, Request for Information & Request for Takedown cases are an important part of the case management system in OpenCTI. Here, you can organize the work of your team to respond to cybersecurity situations. You can also give context to the team and other users on the platform about the situation and the actions (to be) taken.
To manage the situation, you can issue Tasks and assign them to users in the platform, by directly creating a Task or by applying a Case template that will append a list of predefined tasks.
To bring context, you can use your Case as a container (like Reports or Groupings), allowing you to add any Knowledge from your platform in it. You can also use this possibility to trace your investigation, your Case playing the role of an Incident report. You will find more information about case management here.
Incident Response, Request for Information & Request for Takedown are not STIX 2.1 Objects.
When clicking on the Incident Response, Request for Information & Request for Takedown tabs at the top, you see the list of all the Cases you have access to, in accordance with your allowed marking definitions. You can then search and filter on some common and specific attributes.
"},{"location":"usage/exploring-cases/#visualizing-knowledge-within-an-incident-response-request-for-information-request-for-takedown","title":"Visualizing Knowledge within an Incident Response, Request for Information & Request for Takedown","text":"
When clicking on an Incident Response, Request for Information or Request for Takedown, you land on the Overview tab. The following tabs are accessible:
Overview: the Overview of Cases is slightly different from the usual one (described here). A Case's Overview also displays the list of the tasks associated with the case. It also lets you highlight the Incident, Report or Sighting at the origin of the case. If other Cases contain some of the same Observables as your Case, they are displayed as Related Cases in the Overview.
Knowledge: a complex tab that regroups all the structured Knowledge contained in the Case, accessible through different views (See below for a dive-in). As described here.
Content: a tab to upload or create outcome documents displaying the content of the Case (for example PDF, text, HTML or markdown files). The content of the document is displayed to ease access to the Knowledge in a readable format. As described here.
Entities: a table containing all SDOs (STIX Domain Objects) contained in the Case, with search and filters available. It also displays if the SDO has been added directly or through inferences with the reasoning engine.
Observables: a table containing all SCOs (STIX Cyber Observables) contained in the Case, with search and filters available. It also displays if the SCO has been added directly or through inferences with the reasoning engine.
Data: as described here.
Exploring and modifying the structured Knowledge contained in a Case can be done through different lenses.
In Graph view, STIX SDOs are displayed as graph nodes and relationships as graph links. Nodes are colored depending on their type. Direct relationships are displayed as plain links and inferred relationships as dotted links. At the top right, you will find a series of icons. From there you can change the current type of view. Here you can also perform global actions on the Knowledge of the Case. Let's highlight 2 of them:
Suggestions: This tool suggests logical relationships to add between the contained Objects to give more consistency to your Knowledge.
Share with an Organization: if you have designated a main Organization in the platform settings, you can share your Case and its content with users of another Organization. At the bottom, you have many options to manipulate the graph:
Multiple options for shaping the graph and applying forces to the nodes and links
Multiple selection options
Multiple filters, including a time range selector allowing you to see the evolution of the Knowledge within the Case.
Multiple creation and edition tools to modify the Knowledge contained in the Case.
Through this view, you can map existing or new Objects directly from a readable content, allowing you to quickly append structured Knowledge in your Case before refining it with relationships and details. This view is a great place to see the continuum between unstructured and structured Knowledge.
This view allows you to see the structured Knowledge chronologically. It is particularly useful in the context of a Case, allowing you to see the chain of events, either from the attack perspective, the defense perspective or both. The view can be filtered and can display relationships too.
Tasks are actions to be performed in the context of a Case (Incident Response, Request for Information, Request for Takedown). Usually, a task is assigned to a user, but important tasks may involve more participants.
When clicking on the Tasks tab at the top of the interface, you see the list of all the Tasks you have access to, in accordance with your allowed marking definitions. You can then search and filter on some common and specific attributes of the tasks.
Clicking on a Task, you land on its Overview tab. For a Task, the following tabs are accessible:
When a user fills in a feedback form from their Profile/Feedback menu, it becomes accessible here.
This feature gives you the opportunity to engage with other users of your platform and to respond directly to their concerns about it or about the Knowledge, without the need for third-party software.
Clicking on a Feedback, you land on its Overview tab. For a Feedback, the following tabs are accessible:
OpenCTI's Entities objects provide a comprehensive framework for modeling various targets and attack victims within your threat intelligence data. With five distinct Entity object types, you can represent sectors, events, organizations, systems, and individuals. This robust classification empowers you to contextualize threats effectively, enhancing the depth and precision of your analysis.
When you click on "Entities" in the left-side bar, you access all the "Entities" tabs, visible on the top bar on the left. By default, the user directly accesses the "Sectors" tab, but can navigate to the other tabs as well.
From the Entities section, users can access the following tabs:
Sectors: areas of activity.
Events: events in the real world.
Organizations: groups with specific aims such as companies and government entities.
Systems: technologies such as platforms and software.
Sectors represent specific domains of activity, defining areas such as energy, government, health, finance, and more. Utilize sectors to categorize targeted industries or sectors of interest, providing valuable context for threat intelligence analysis within distinct areas of the economy.
When clicking on the Sectors tab at the top left, you see the list of all the Sectors you have access to, in accordance with your allowed marking definitions.
"},{"location":"usage/exploring-entities/#visualizing-knowledge-associated-with-a-sector","title":"Visualizing Knowledge associated with a Sector","text":"
When clicking on a Sector in the list, you land on its Overview tab. For a Sector, the following tabs are accessible:
Overview: as described here.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Sector. Different thematic views are proposed to easily see the related entities, the threats, the incidents, etc. linked to the Sector. As described here.
Analyses: as described here.
Sightings: a table containing all Sightings relationships corresponding to events in which an Indicator (IP, domain name, URL, etc.) is sighted in the Sector.
Events encompass occurrences like international sports events, summits (e.g., G20), trials, conferences, or any significant happening in the real world. By modeling events, you can analyze threats associated with specific occurrences, allowing for targeted investigations surrounding high-profile incidents.
When clicking on the Events tab at the top left, you see the list of all the Events you have access to, in accordance with your allowed marking definitions.
"},{"location":"usage/exploring-entities/#visualizing-knowledge-associated-with-an-event","title":"Visualizing Knowledge associated with an Event","text":"
When clicking on an Event in the list, you land on its Overview tab. For an Event, the following tabs are accessible:
Overview: as described here.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Event. Different thematic views are proposed to easily see the related entities, the threats, the locations, etc. linked to the Event. As described here.
Analyses: as described here.
Sightings: a table containing all Sightings relationships corresponding to events in which an Indicator (IP, domain name, URL, etc.) is sighted during an attack against the Event.
Organizations include diverse entities such as companies, government bodies, associations, non-profits, and other groups with specific aims. Modeling organizations enables you to understand the threat landscape concerning various entities, facilitating investigations into cyber-espionage, data breaches, or other malicious activities targeting specific groups.
When clicking on the Organizations tab at the top left, you see the list of all the Organizations you have access to, in accordance with your allowed marking definitions.
"},{"location":"usage/exploring-entities/#visualizing-knowledge-associated-with-an-organization","title":"Visualizing Knowledge associated with an Organization","text":"
When clicking on an Organization in the list, you land on its Overview tab. For an Organization, the following tabs are accessible:
Overview: as described here.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Organization. Different thematic views are proposed to easily see the related entities, the threats, the locations, etc. linked to the Organization. As described here.
Analyses: as described here.
Sightings: a table containing all Sightings relationships corresponding to events in which an Indicator (IP, domain name, URL, etc.) is sighted in the Organization.
Data: as described here.
History: as described here.
Furthermore, an Organization can be observed from an "Author" perspective. It is possible to change this viewpoint to the right of the entity name, using the "Display as" drop-down menu (see screenshot below). This different perspective is accessible in the Overview, Knowledge and Analyses tabs. When switched to "Author" mode, the observed data pertains to the entity's description as an author within the platform:
Overview: The "Latest created relationships" and "Latest containers about the object" panels are replaced by the "Latest containers authored by this entity" panel.
Knowledge: A tab that presents an overview of the data authored by the Organization (i.e. counters and a graph).
Analyses: The list of all Analyses (Report, Groupings) and Cases (Incident response, Request for Information, Request for Takedown) for which the Organization is the author.
Systems represent software applications, platforms, frameworks, or specific tools like WordPress, VirtualBox, Firefox, Python, etc. Modeling systems allows you to focus on threats related to specific software or technology, aiding in vulnerability assessments, patch management, and securing critical applications.
When clicking on the Systems tab at the top left, you see the list of all the Systems you have access to, in accordance with your allowed marking definitions.
"},{"location":"usage/exploring-entities/#visualizing-knowledge-associated-with-a-system","title":"Visualizing Knowledge associated with a System","text":"
When clicking on a System in the list, you land on its Overview tab. For a System, the following tabs are accessible:
Overview: as described here.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the System. Different thematic views are proposed to easily see the related entities, the threats, the incidents, etc. linked to the System. As described here.
Analyses: as described here.
Sightings: a table containing all Sightings relationships corresponding to events in which an Indicator (IP, domain name, URL, etc.) is sighted in the System.
Data: as described here.
History: as described here.
Furthermore, a System can be observed from an "Author" perspective. It is possible to change this viewpoint to the right of the entity name, using the "Display as" drop-down menu (see screenshot below). This different perspective is accessible in the Overview, Knowledge and Analyses tabs. When switched to "Author" mode, the observed data pertains to the entity's description as an author within the platform:
Overview: The "Latest created relationships" and "Latest containers about the object" panels are replaced by the "Latest containers authored by this entity" panel.
Knowledge: A tab that presents an overview of the data authored by the System (i.e. counters and a graph).
Analyses: The list of all Analyses (Report, Groupings) and Cases (Incident response, Request for Information, Request for Takedown) for which the System is the author.
Individuals represent specific persons relevant to your threat intelligence analysis. This category includes targeted individuals, or influential figures in various fields. Modeling individuals enables you to analyze threats related to specific people, enhancing investigations into cyber-stalking, impersonation, or other targeted attacks.
When clicking on the Individuals tab at the top left, you see the list of all the Individuals you have access to, in accordance with your allowed marking definitions.
"},{"location":"usage/exploring-entities/#visualizing-knowledge-associated-with-an-individual","title":"Visualizing Knowledge associated with an Individual","text":"
When clicking on an Individual in the list, you land on its Overview tab. For an Individual, the following tabs are accessible:
Overview: as described here.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Individual. Different thematic views are proposed to easily see the related entities, the threats, the locations, etc. linked to the Individual. As described here.
Analyses: as described here.
Sightings: a table containing all Sightings relationships corresponding to events in which an Indicator (IP, domain name, URL, etc.) is sighted in the Individual.
Data: as described here.
History: as described here.
Furthermore, an Individual can be observed from an "Author" perspective. It is possible to change this viewpoint to the right of the entity name, using the "Display as" drop-down menu (see screenshot below). This different perspective is accessible in the Overview, Knowledge and Analyses tabs. When switched to "Author" mode, the observed data pertains to the entity's description as an author within the platform:
Overview: The "Latest created relationships" and "Latest containers about the object" panels are replaced by the "Latest containers authored by this entity" panel.
Knowledge: A tab that presents an overview of the data authored by the Individual (i.e. counters and a graph).
Analyses: The list of all Analyses (Report, Groupings) and Cases (Incident response, Request for Information, Request for Takedown) for which the Individual is the author.
When you click on \"Events\" in the left-side bar, you access all the \"Events\" tabs, visible on the top bar on the left. By default, the user directly access the \"Incidents\" tab, but can navigate to the other tabs as well.
From the Events section, users can access the following tabs:
Incidents: In OpenCTI, Incidents correspond to a negative event happening on an information system. This can include a cyberattack (intrusion, phishing, etc.), a consolidated security alert generated by a SIEM or EDR that needs to be qualified, and so on. It can also refer to an information warfare attack in the context of countering disinformation.
Sightings: Sightings correspond to events in which an Observable (IP, domain name, certificate, etc.) is detected by or within an information system, an individual or an organization. Most often, this corresponds to a security event transmitted by a SIEM or an EDR.
Observed Data: Observed Data has been added in OpenCTI for compliance with the STIX 2.1 standard. You can see it as a pseudo-container that contains Observables, like a line of firewall log, for example. Currently, it is rarely used.
Incidents usually represent negative events impacting resources you want to protect, but local definitions can vary a lot, from a simple security event sent by a SIEM to a massive-scale supply chain attack impacting a whole activity sector.
In MITRE STIX 2.1, the Incident SDO has not yet been finalized and is the subject of ongoing work as part of a forthcoming STIX extension.
When clicking on the Incidents tab at the top left, you see the list of all the Incidents you have access to, in accordance with your allowed marking definitions.
"},{"location":"usage/exploring-events/#visualizing-knowledge-associated-with-an-incident","title":"Visualizing Knowledge associated with an Incident","text":"
When clicking on an Incident in the list, you land on its Overview tab. For an Incident, the following tabs are accessible:
Overview: as described here, with the particularity of displaying two distribution graphs of its related Entities (STIX SDOs) and Observables (STIX SCOs).
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Incident. Different thematic views are proposed to easily see the victimology, arsenal, techniques and so on used in the context of the Incident. As described here.
Content: this specific tab allows you to preview, manage and write deliverables associated with the Incident. For example, an analytic report to share with other teams, a markdown file to feed a collaborative wiki, etc. As described here.
Sightings correspond to events in which an Observable (IP, domain name, URL, etc.) is detected by or within an information system, an individual or an organization. Most often, this corresponds to a security event transmitted by a SIEM or an EDR.
In OpenCTI, as we are in a cybersecurity context, Sightings are associated with Indicators of Compromise (IoCs) and the notions of "True positive" and "False positive".
It is important to note that a Sighting is a type of relationship (not a STIX SDO or STIX SCO) between an Observable and an Entity or Location.
When clicking on the Sightings tab at the top left, you see the list of all the Sightings you have access to, in accordance with your allowed marking definitions.
"},{"location":"usage/exploring-events/#visualizing-knowledge-associated-with-a-sighting","title":"Visualizing Knowledge associated with a Sighting","text":"
When clicking on a Sighting in the list, you land on its Overview tab. As for other relationships in the platform, a Sighting's overview displays common related metadata, containers, external references, notes and the entities linked by the relationship.
In addition, this overview displays:
- Qualification: whether the Sighting is a True Positive or a False Positive.
- Count: the number of times the event has been seen.
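For reference, a Sighting relationship can also be created with client-python. This is a minimal sketch with placeholder IDs; the x_opencti_negative flag is assumed to carry the True/False Positive qualification:

```python
from pycti import OpenCTIApiClient

api = OpenCTIApiClient("https://opencti.example.com", "YOUR_API_TOKEN")

# Placeholder IDs: the sighted Indicator and the Organization that observed it
sighting = api.stix_sighting_relationship.create(
    fromId="00000000-0000-0000-0000-000000000000",
    toId="11111111-1111-1111-1111-111111111111",
    count=3,                    # number of times the event has been seen
    x_opencti_negative=False,   # False: True Positive, True: False Positive
)
print(sighting["id"])
```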
Observed Data correspond to an extract from a log that contains Observables.
In the MITRE STIX 2.1, the Observed Data SDO is defined as such:
Observed Data conveys information about cybersecurity related entities such as files, systems, and networks using the STIX Cyber-observable Objects (SCOs). For example, Observed Data can capture information about an IP address, a network connection, a file, or a registry key. Observed Data is not an intelligence assertion, it is simply the raw information without any context for what it means.
When clicking on the Observed Data tab at the top left, you see the list of all the Observed Data you have access to, in accordance with your allowed marking definitions.
"},{"location":"usage/exploring-events/#visualizing-knowledge-associated-with-an-observed-data","title":"Visualizing Knowledge associated with an Observed Data","text":"
When clicking on an Observed Data in the list, you land on its Overview tab. The following tabs are accessible:
Overview: as described here, with the particularity of displaying a distribution graph of its related Observables (STIX SCOs).
Entities: a sortable and filterable list of all Entities (SDOs) in relation with the Observed Data.
Observables: a sortable and filterable list of all Observables (SCOs) in relation with the Observed Data.
OpenCTI's Locations objects provide a comprehensive framework for representing various geographic entities within your threat intelligence data. With five distinct Location object types, you can precisely define regions, countries, areas, cities, and specific positions. This robust classification empowers you to contextualize threats geographically, enhancing the depth and accuracy of your analysis.
When you click on "Locations" in the left-side bar, you access all the "Locations" tabs, visible on the top bar on the left. By default, the user directly accesses the "Regions" tab, but can navigate to the other tabs as well.
From the Locations section, users can access the following tabs:
Regions: very large geographical territories, such as a continent.
Countries: the world's countries.
Areas: geographical areas of varying extent, often without a precisely defined limit.
Cities: the world's cities and urban centers.
Positions: precise geographical points, such as monuments or buildings.
Regions encapsulate broader geographical territories, often representing continents or significant parts of continents. Examples include EMEA (Europe, Middle East, and Africa), Asia, Western Europe, and North America. Utilize regions to categorize large geopolitical areas and gain macro-level insights into threat patterns.
When clicking on the Regions tab at the top left, you see the list of all the Regions you have access to, in accordance with your allowed marking definitions.
"},{"location":"usage/exploring-locations/#visualizing-knowledge-associated-with-a-region","title":"Visualizing Knowledge associated with a Region","text":"
When clicking on a Region in the list, you land on its Overview tab. For a Region, the following tabs are accessible:
Overview: as described here, with the particularity of not having a Details section but a map locating the Region.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Region. Different thematic views are proposed to easily see the related entities, the threats, the incidents, etc. linked to the Region. As described here.
Analyses: as described here.
Sightings: a table containing all Sightings relationships corresponding to events in which an Indicator (IP, domain name, URL, etc.) is sighted in a Region.
Countries represent individual nations across the world. With this object type, you can specify detailed information about a particular country, enabling precise localization of threat intelligence data. Countries are fundamental entities in geopolitical analysis, offering a focused view of activities within national borders.
When clicking on the Countries tab at the top left, you see the list of all the Countries you have access to, in accordance with your allowed marking definitions.
"},{"location":"usage/exploring-locations/#visualizing-knowledge-associated-with-a-country","title":"Visualizing Knowledge associated with a Country","text":"
When clicking on a Country in the list, you land on its Overview tab. For a Country, the following tabs are accessible:
Overview: as described here, with the particularity of not having a Details section but a map locating the Country.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Country. Different thematic views are proposed to easily see the related entities, the threats, the incidents, etc. linked to the Country. As described here.
Analyses: as described here.
Sightings: a table containing all Sightings relationships corresponding to events in which an Indicator (IP, domain name, URL, etc.) is sighted in a Country.
Areas define specific geographical regions of interest, such as the Persian Gulf, the Balkans, or the Caucasus. Use areas to identify unique zones with distinct geopolitical, cultural, or strategic significance. This object type facilitates nuanced analysis of threats within defined geographic contexts.
When clicking on the Areas tab at the top left, you see the list of all the Areas you have access to, in accordance with your allowed marking definitions.
"},{"location":"usage/exploring-locations/#visualizing-knowledge-associated-with-an-area","title":"Visualizing Knowledge associated with an Area","text":"
When clicking on an Area in the list, you land on its Overview tab. For an Area, the following tabs are accessible:
Overview: as described here, with the particularity of not having a Details section but a map locating the Area.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Area. Different thematic views are proposed to easily see the related entities, the threats, the incidents, etc. linked to the Area. As described here.
Analyses: as described here.
Sightings: a table containing all Sightings relationships corresponding to events in which an Indicator (IP, domain name, URL, etc.) is sighted in an Area.
Cities provide granular information about urban centers worldwide. From major metropolises to smaller towns, cities are crucial in understanding localized threat activities. With this object type, you can pinpoint threats at the urban level, aiding in tactical threat assessments and response planning.
When clicking on the Cities tab at the top left, you see the list of all the Cities you have access to, in accordance with your allowed marking definitions.
"},{"location":"usage/exploring-locations/#visualizing-knowledge-associated-with-a-city","title":"Visualizing Knowledge associated with a City","text":"
When clicking on a City in the list, you land on its Overview tab. For a City, the following tabs are accessible:
Overview: as described here, with the particularity of not having a Details section but a map locating the City.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the City. Different thematic views are proposed to easily see the related entities, the threats, the incidents, etc. linked to the City. As described here.
Analyses: as described here.
Sightings: a table containing all Sightings relationships corresponding to events in which an Indicator (IP, domain name, URL, etc.) is sighted in a City.
Positions represent highly precise geographical points, such as monuments, buildings, or specific event locations. This object type allows you to define exact coordinates, enabling accurate mapping of events or incidents. Positions enhance the granularity of your threat intelligence data, facilitating precise geospatial analysis.
When clicking on the Positions tab at the top left, you see the list of all the Positions you have access to, in accordance with your allowed marking definitions.
"},{"location":"usage/exploring-locations/#visualizing-knowledge-associated-with-a-position","title":"Visualizing Knowledge associated with a Position","text":"
When clicking on a Position in the list, you land on its Overview tab. For a Position, the following tabs are accessible:
Overview: as described here, with the particularity of displaying a map locating the Position.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Position. Different thematic views are proposed to easily see the related entities, the threats, the incidents, etc. linked to the Position. As described here.
Analyses: as described here.
Sightings: a table containing all Sightings relationships corresponding to events in which an Indicator (IP, domain name, URL, etc.) is sighted at a Position.
When you click on \"Observations\" in the left-side bar, you access all the \"Observations\" tabs, visible on the top bar on the left. By default, the user directly access the \"Observables\" tab, but can navigate to the other tabs as well.
From the Observations section, users can access the following tabs:
Observables: An Observable represents an immutable object. Observables can encompass a wide range of entities such as IPv4 addresses, domain names, email addresses, and more.
Artefacts: In OpenCTI, an Artefact is a particular Observable. It may contain a file, such as a malware sample.
Indicators: An Indicator is a detection object. It is defined by a search pattern, which could be expressed in various formats such as STIX, Sigma, YARA, among others.
Infrastructures: An Infrastructure describes any systems, software services and any associated physical or virtual resources intended to support some purpose (e.g. C2 servers used as part of an attack, devices or servers that are part of defense, database servers targeted by an attack, etc.).
An Observable is a distinct entity from the Indicator within OpenCTI and represents an immutable object. Observables can encompass a wide range of entities such as IPv4 addresses, domain names, email addresses, and more. Importantly, Observables don't inherently imply malicious intent, they can include items like legitimate IP addresses or domains associated with an organization. Additionally, they serve as raw data points without the additional detection context found in Indicators.
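To make this concrete, here is a minimal sketch of creating such an Observable programmatically with the client-python library (pycti); the platform URL, token and values are placeholders:

```python
# Minimal sketch, assuming a reachable OpenCTI instance and a valid API token.
from pycti import OpenCTIApiClient

client = OpenCTIApiClient("https://opencti.example.com", "YOUR_API_TOKEN")

# simple_observable_key / simple_observable_value is a pycti convenience
# for creating an observable from a single attribute.
observable = client.stix_cyber_observable.create(
    simple_observable_key="IPv4-Addr.value",
    simple_observable_value="198.51.100.23",
    x_opencti_description="IP address seen in proxy logs; not inherently malicious.",
)
print(observable["id"])
```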
When clicking on the Observables tab at the top left, you see the list of all the Observables you have access to, in accordance with your allowed marking definitions.
Visualizing Knowledge associated with an Observable
When clicking on an Observable in the list, you land on its Overview tab. For an Observable, the following tabs are accessible:
Overview: as described here, with the particularity of displaying the Indicators composed with the Observable.
Knowledge: a tab listing all its relationships and nested objects.
Analyses: as described here.
Sightings: a table containing all Sightings relationships corresponding to events in which the Observable (IP, domain name, URL, etc.) has been sighted.
An Artefact is a particular Observable. It may contain a file, such as a malware sample. Files can be uploaded or downloaded in encrypted archives, providing an additional layer of security against potential manipulation of malicious payloads.
When clicking on the Artefacts tab at the top left, you see the list of all the Artefacts you have access to, in accordance with your allowed marking definitions.
Visualizing Knowledge associated with an Artefact
When clicking on an Artefact in the list, you land on its Overview tab. For an Artefact, the following tabs are accessible:
Overview: as described here, with the particularity of allowing you to download the attached file.
Knowledge: a tab listing all its relationships and nested objects.
Analyses: as described here.
Sightings: a table containing all Sightings relationships corresponding to events in which the Artefact has been sighted.
An Indicator is a detection object. It is defined by a search pattern, which could be expressed in various formats such as STIX, Sigma, YARA, among others. This pattern serves as a key to identify potential threats within the data. Furthermore, an Indicator includes additional information that enriches its detection context (a brief creation sketch follows the list below). This information encompasses:
Validity dates: Indicators are accompanied by a time frame, specifying the duration of their relevance, and modeled by the Valid from and Valid until dates.
Actionable fields: Linked to the validity dates, the Revoked and Detection fields can be used to sort Indicators for detection purposes.
Kill chain phase: They indicate the phase within the cyber kill chain where they are applicable, offering insights into the progression of a potential threat.
Types: Indicators are categorized based on their nature, aiding in classification and analysis.
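As referenced above, here is a minimal sketch of creating an Indicator with this detection context through client-python (pycti); the URL, token, pattern, score and dates are placeholder values:

```python
# Minimal sketch, assuming a reachable OpenCTI instance and a valid API token.
from pycti import OpenCTIApiClient

client = OpenCTIApiClient("https://opencti.example.com", "YOUR_API_TOKEN")

indicator = client.indicator.create(
    name="Suspicious domain",
    pattern_type="stix",  # the format of the search pattern (STIX, Sigma, YARA, ...)
    pattern="[domain-name:value = 'bad.example.org']",
    x_opencti_main_observable_type="Domain-Name",
    valid_from="2024-01-01T00:00:00Z",   # start of the validity period
    valid_until="2024-06-01T00:00:00Z",  # past this date, the indicator is revoked
    x_opencti_score=80,
    x_opencti_detection=True,            # actionable field used for detection sorting
)
print(indicator["standard_id"])
```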
When clicking on the Indicators tab at the top left, you see the list of all the Indicators you have access to, in accordance with your allowed marking definitions.
Visualizing Knowledge associated with an Indicator
When clicking on an Indicator in the list, you land on its Overview tab. For an Indicator, the following tabs are accessible:
Overview: as described here, with the particularity of displaying the Observables on which it is based.
Knowledge: a tab listing all its relationships.
Analyses: as described here.
Sightings: a table containing all Sightings relationships corresponding to events in which the Indicator has been sighted.
An Infrastructure refers to a set of resources, tools, systems, or services employed by a threat to conduct their activities. It represents the underlying framework or support system that facilitates malicious operations, such as the command and control (C2) servers in an attack. Notably, like Observables, an Infrastructure doesn't inherently imply malicious intent. It can also represent legitimate resources affiliated with an organization (e.g. devices or servers that are part of defense, database servers targeted by an attack, etc.).
When clicking on the Infrastructures tab at the top left, you see the list of all the Infrastructures you have access to, in accordance with your allowed marking definitions.
Visualizing Knowledge associated with an Infrastructure
When clicking on an Infrastructure in the list, you land on its Overview tab. For an Infrastructure, the following tabs are accessible:
Overview: as described here, with the particularity of displaying distribution graphs of its related Observables (STIX SCOs).
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Infrastructure. Different thematic views are proposed to easily see the threats, the arsenal, the observations, etc. linked to the Infrastructure. As described here.
When you click on \"Techniques\" in the left-side bar, you access all the \"Techniques\" tabs, visible on the top bar on the left. By default, the user directly access the \"Attack pattern\" tab, but can navigate to the other tabs as well.
From the Techniques section, users can access the following tabs:
Attack pattern: attack patterns used by threat actors to perform their attacks. By default, OpenCTI is provisioned with attack patterns from the MITRE ATT&CK matrices (for CTI) and the DISARM matrix (for FIMI).
Narratives: In OpenCTI, narratives used by threat actors can be represented and linked to other Objects. Narratives are mainly used in the context of disinformation campaigns where it is important to trace which narratives have been and are still used by threat actors.
Courses of action: A Course of Action is an action taken either to prevent an attack or to respond to an attack that is in progress. It may describe technical, automatable responses (applying patches, reconfiguring firewalls) but can also describe higher level actions like employee training or policy changes. For example, a course of action to mitigate a vulnerability could describe applying the patch that fixes it.
Data sources: Data sources represent the various subjects/topics of information that can be collected by sensors/logs.
Data components: Data components identify specific properties/values of a data source relevant to detecting a given ATT&CK technique or sub-technique.
Attack patterns are used by threat actors to perform their attacks. By default, OpenCTI is provisioned with attack patterns from the MITRE ATT&CK matrices and CAPEC (for CTI) and the DISARM matrix (for FIMI).
In the MITRE STIX 2.1 documentation, an Attack pattern is defined as follows:
Attack Patterns are a type of TTP that describe ways that adversaries attempt to compromise targets. Attack Patterns are used to help categorize attacks, generalize specific attacks to the patterns that they follow, and provide detailed information about how attacks are performed. An example of an attack pattern is "spear phishing": a common type of attack where an attacker sends a carefully crafted e-mail message to a party with the intent of getting them to click a link or open an attachment to deliver malware. Attack Patterns can also be more specific; spear phishing as practiced by a particular threat actor (e.g., they might generally say that the target won a contest) can also be an Attack Pattern.
When clicking on the Attack pattern tab at the top left, you access the list of all the Attack patterns you have access to, in accordance with your allowed marking definitions. You can then search and filter on some common and specific attributes of attack patterns.
Visualizing Knowledge associated with an Attack pattern
When clicking on an Attack pattern, you land on its Overview tab. For an Attack pattern, the following tabs are accessible:
Overview: the Overview of an Attack pattern is a bit different from the usual one described here. The "Details" box is more structured and contains information about:
the parent or sub-techniques (as in the MITRE ATT&CK matrices),
the related kill chain phases,
the platforms on which the Attack pattern is usable,
the permissions required to apply it,
the related detection techniques,
the Courses of action to mitigate the Attack pattern,
the Data components in which to find data to detect the usage of the Attack pattern.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Attack pattern. Different thematic views are proposed to easily see the Threat actors and Intrusion sets using this technique, linked incidents, etc.
In OpenCTI, narratives used by threat actors can be represented and linked to other Objects. Narratives are mainly used in the context of disinformation campaigns where it is important to trace which narratives have been and are still used by threat actors.
An example of Narrative can be "The country A is weak and corrupted" or "The ongoing operation aims to free people".
A Narrative can be a means in the context of a broader attack or the goal of the operation, a vision to impose.
When clicking on the Narrative tab at the top left, you access the list of all the Narratives you have access to, in accordance with your allowed marking definitions. You can then search and filter on some common and specific attributes of narratives.
Visualizing Knowledge associated with a Narrative
When clicking on a Narrative, you land on its Overview tab. For a Narrative, the following tabs are accessible:
Overview: as described here.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Narrative. Different thematic views are proposed to easily see the Threat actors and Intrusion sets using the Narrative, etc.
Analyses: as described here.
Data: as described here.
History: as described here.
Courses of action
General presentation
In the MITRE STIX 2.1 documentation, a Course of Action is defined as follows:
A Course of Action is an action taken either to prevent an attack or to respond to an attack that is in progress. It may describe technical, automatable responses (applying patches, reconfiguring firewalls) but can also describe higher level actions like employee training or policy changes. For example, a course of action to mitigate a vulnerability could describe applying the patch that fixes it.
When clicking on the Courses of action tab at the top left, you access the list of all the Courses of action you have access to, in accordance with your allowed marking definitions. You can then search and filter on some common and specific attributes of Courses of action.
Visualizing Knowledge associated with a Course of action
When clicking on a Course of Action, you land on its Overview tab. For a Course of action, the following tabs are accessible:
Overview: the Overview of a Course of action is a bit different from the usual one described here. In the "Details" box, the mitigated Attack patterns are listed.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Course of action. Different thematic views are proposed to easily see the Attack patterns mitigated by the Course of action, etc.
Analyses: as described here.
Data: as described here.
History: as described here.
Data sources & Data components
General presentation
In the MITRE ATT&CK documentation, Data sources are defined as such :
Data sources represent the various subjects/topics of information that can be collected by sensors/logs. Data sources also include data components, which identify specific properties/values of a data source relevant to detecting a given ATT&CK technique or sub-technique.
Visualizing Knowledge associated with a Data source or a Data component
When clicking on a Data source or a Data component, you land on its Overview tab. For a Data source or a Data component, the following tabs are accessible:
When you click on \"Threats\" in the left-side bar, you access all the \"Threats\" tabs, visible on the top bar on the left. By default, the user directly access the \"Threat Actor (Group)\" tab, but can navigate to the other tabs as well.
From the Threats section, users can access the following tabs:
Threat actors (Group): Threat actor (Group) represents a physical group of attackers operating an Intrusion set, using malware and attack infrastructure, etc.
Threat actors (Individual): Threat actor (Individual) represents a real attacker that can be described by physical and personal attributes and motivations. A Threat actor (Individual) operates an Intrusion set, uses malware and infrastructure, etc.
Intrusion sets: Intrusion set is an important concept in the Cyber Threat Intelligence field. It is a consistent set of technical and non-technical elements corresponding to the what, how and why of a Threat actor's actions. It is particularly useful for associating multiple attacks and malicious actions with a defined Threat, even without sufficient information regarding who performed them. Often, as your understanding of the threat grows, you will link an Intrusion set to a Threat actor (either a Group or an Individual).
Campaigns: Campaign represents a series of attacks taking place in a certain period of time and/or targeting a consistent subset of Organizations/Individuals.
Threat actors (Group and Individual)
General presentation
Threat actors are the humans who are building, deploying and operating intrusion sets. A threat actor can be a single individual or a group of attackers (which may be composed of individuals). A group of attackers may be a nation-state, a state-sponsored group, a corporation, a group of hacktivists, etc.
Beware: groups of attackers might be modelled as "Intrusion sets" in feeds, as there is sometimes a misunderstanding in the industry between the group of people and the technical/operational intrusion set they operate.
When clicking on the Threat actor (Group or Individual) tabs at the top left, you see the list of all the group or individual Threat actors you have access to, in accordance with your allowed marking definitions. These groups or individuals are displayed as Cards where you can find a summary of the important Knowledge associated with each of them: description, aliases, malware they use, countries and industries they target, labels. You can then search and filter on some common and specific attributes of Threat actors.
At the top right of each Card, you can click the star icon to mark it as a favorite. This will pin the card at the top of the list. You will also be able to easily display all your favorites in your Custom Dashboards.
Demographic and Biographic Information
Individual Threat actors have unique properties to represent demographic and biographic information. Currently tracked demographics include their countries of residence, citizenships, date of birth, gender, and more.
Biographic information includes their eye and hair color, as well as known heights and weights.
An Individual Threat actor can also be tracked as employed by an Organization or a Threat Actor group. This relationship can be set under the knowledge tab.
Visualizing Knowledge associated with a Threat actor
When clicking on a Threat actor Card, you land on its Overview tab. For a Threat actor, the following tabs are accessible:
Overview: as described here.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Threat actor. Different thematic views are proposed to easily see the victimology, arsenal and techniques used by the Threat actor, etc. As described here.
An intrusion set is a consistent group of technical elements such as "tactics, techniques and procedures" (TTPs), tools, malware and infrastructure used by a threat actor against one or several victims, who usually share some characteristics (field of activity, country or region), to reach a similar goal whoever the victim is. The intrusion set may be deployed once or several times and may evolve with time. Several intrusion sets may be linked to one threat actor. All the entities described below may be linked to one intrusion set. There are many debates in the Threat Intelligence community on how to define an intrusion set and how to distinguish several intrusion sets with regard to:
their differences,
their evolutions,
the possible reuse,
"false flag" types of attacks.
As OpenCTI is very customizable, each organization or individual may use these categories as they wish. Alternatively, it is also possible to rely on the import feed for the choice of categories.
When clicking on the Intrusion set tab on the top left, you see the list of all the Intrusion sets you have access to, in accordance with your allowed marking definitions. These intrusion sets are displayed as Cards where you can find a summary of the important Knowledge associated with each of them: description, aliases, malware they use, countries and industries they target, labels. You can then search and filter on some common and specific attributes of Intrusion sets.
At the top right of each Card, you can click the star icon to mark it as a favorite. This will pin the card at the top of the list. You will also be able to easily display all your favorites in your Custom Dashboards.
Visualizing Knowledge associated with an Intrusion set
When clicking on an Intrusion set Card, you land on its Overview tab. The following tabs are accessible:
Overview: as described here.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Intrusion Set. Different thematic views are proposed to easily see the victimology, arsenal and techniques used by the Intrusion Set, etc. As described here.
A campaign can be defined as "a series of malicious activities or attacks (sometimes called a "wave of attacks") taking place within a limited period of time, against a defined group of victims, associated with a similar intrusion set and characterized by the use of one or several identical malware towards the various victims and common TTPs". However, a campaign is an investigation element and may not be widely recognized. Thus, a provider might define a series of attacks as a campaign and another as an intrusion set. Campaigns can be attributed to an Intrusion set.
When clicking on the Campaign tab on the top left, you see the list of all the Campaigns you have access to, in accordance with your allowed marking definitions. These campaigns are displayed as Cards where you can find a summary of the important Knowledge associated with each of them: description, aliases, malware used, countries and industries they target, labels. You can then search and filter on some common and specific attributes of Campaigns.
At the top right of each Card, you can click the star icon to mark it as a favorite. This will pin the card at the top of the list. You will also be able to easily display all your favorites in your Custom Dashboards.
Visualizing Knowledge associated with a Campaign
When clicking on a Campaign Card, you land on its Overview tab. The following tabs are accessible:
Overview: as described here.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Campaign. Different thematic views are proposed to easily see the victimology, arsenal and techniques used in the context of the Campaign. As described here.
With the OpenCTI platform, you can manually export your intelligence content in the following formats:
JSON,
CSV,
PDF,
TXT.
Export in structured or document format
Generate an export
To export one or more entities, you have two possibilities. First, you can click on the "Open export panel" button. The list of pre-existing exports will open, and in the bottom right-hand corner you can configure and generate a new export.
This opens the export settings panel, where you can customize your export according to four fields:
the desired export format (text/csv, application/pdf, application/vnd.oasis.stix+json, text/plain),
the export type (simple or full),
the max marking definition levels of the elements to be included in the export (a TLP level, for instance). The list of available max markings is limited by the user's allowed markings and their maximum shareable markings (more details about maximum shareable marking definitions in data segregation). For a marking definition type to be taken into account here, a marking definition of this type must be provided. For example, if you select TLP:GREEN for this field, AMBER and RED elements will be excluded, but PAP markings will not be taken into account unless one is selected too.
the file marking definition levels of the export (a TLP level, for instance). This marking on the file itself will then restrict access to it in accordance with users' marking definition levels. For example, if a file has the markings TLP:RED and INTERNAL, a user will need to have these markings to see and access the file in the platform.
The second way is to click directly on the "Generate an Export" button to export the content of an entity in the desired format. The same settings panel will open.
Both ways add your export in the Exported files list in the Data tab.
All entities in your instance can be exported either directly via Generate Export or indirectly via Export List in .json and .csv formats.
Export a list of entities
You have the option to export either a single element, such as a report, or a collection of elements, such as multiple reports. These exports may contain not only the entity itself but also related elements, depending on the type of export you select: "simple" or "full". See the Export types (simple and full) section.
You can also choose to export a list of entities within a container. To do so, go to the container's Entities tab. For example, for a report, if you only want to retrieve entities of type Attack pattern and Indicator to design a detection strategy, go to the Entities tab and select specific elements for export.
Export types (simple and full)
When you wish to export only the content of a specific entity such as a report, you can choose a "simple" export type.
If you also wish to export associated content, you can choose a "full" export. With this type of export, the entity will be exported along with all entities directly associated with the central one (first neighbors).
Exports list panel
Once an export has been created, you can find it in the export list panel. Simply click on a particular export to download it.
You can also generate a new export directly from the Exports list, as explained in the Generate an export section.
Feeds are configured in the "Data > Data sharing" window. Configuration for all feed types is uniform and relies on the following parameters:
Filter setup: The feed can have specific filters to publish only a subset of the platform overall knowledge. Any data that meets the criteria established by the user's feed filters will be shared (e.g. specific types of entities, labels, marking definitions, etc.).
Access control: A feed can be either public, i.e. accessible without authentication, or restricted. By default, it's accessible to any user with the "Access data sharing" capability, but it's possible to increase restrictions by limiting access to a specific user, group, or organization.
By carefully configuring filters and access controls, you can tailor the behavior of Live streams, TAXII collections, and CSV feeds to align with your specific data-sharing needs.
Live streams, an exclusive OpenCTI feature, increase the capacity for real-time data sharing by serving STIX 2.1 bundles as TAXII collections with advanced capabilities. What distinguishes them is their dynamic nature, which includes the creation, updating, and deletion of data. Unlike TAXII, Live streams comprehensively resolve relationships and dependencies, ensuring a more nuanced and interconnected exchange of information. This is particularly beneficial in scenarios where sharing involves entities with complex relationships, providing a richer context for the shared data.
In scenarios involving data sharing between two OpenCTI platforms, Live streams emerge as the preferred mechanism. These streams operate like TAXII collections but are notably enhanced, supporting:
create, update and delete events depending on the parameters,
caching already created entities in the last 5 minutes,
resolving relationships and dependencies even outside of the filters,
being publicly accessible (without authentication).
Resolve relationships and dependencies
Dependencies and relationships of entities shared via Live streams, as determined by specified filters, are automatically shared even beyond the confines of these filters. This means that interconnected data, which may not directly meet the filter criteria, is still included in the Live stream. However, OpenCTI data segregation mechanisms are still applied. They allow restricting access to shared data based on factors such as markings or organization. It's imperative to carefully configure and manage these access controls to ensure that no confidential data is shared.
To better understand how live streams are working, let's take a few examples, from simple to complex.
Given a live stream with filters Entity type: Indicator AND Label: detection. Let's see what happens with an indicator with:
Marking definition: TLP:GREEN
Author: CrowdStrike
Relation: indicates to the malware Emotet
| Action | Result in stream (with Avoid dependencies resolution=true) | Result in stream (with Avoid dependencies resolution=false) |
| --- | --- | --- |
| 1. Create an indicator | Nothing | Nothing |
| 2. Add the label detection | Create TLP:GREEN, create CrowdStrike, create the indicator | Create TLP:GREEN, create CrowdStrike, create the malware Emotet, create the indicator, create the relationship indicates |
| 3. Remove the label detection | Delete the indicator | Delete the indicator and the relationship |
| 4. Add the label detection | Create the indicator | Create the indicator, create the relationship indicates |
| 5. Delete the indicator | Delete the indicator | Delete the indicator and the relationship |
Details on how to consume these Live streams can be found on the dedicated page.
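As a quick illustration of such a consumer, the sketch below reads a live stream as Server-Sent Events with the Python requests library; the domain, stream ID and token are placeholders, and the exact payload shape may vary across platform versions:

```python
# Minimal sketch of a live stream consumer (SSE over HTTP).
import json
import requests

STREAM_URL = "https://opencti.example.com/stream/STREAM_ID"  # placeholder stream ID
headers = {"Authorization": "Bearer YOUR_API_TOKEN", "Accept": "text/event-stream"}

with requests.get(STREAM_URL, headers=headers, stream=True) as resp:
    resp.raise_for_status()
    event_type = None
    for raw in resp.iter_lines(decode_unicode=True):
        if raw.startswith("event:"):
            event_type = raw.split(":", 1)[1].strip()  # create / update / delete
        elif raw.startswith("data:"):
            payload = json.loads(raw.split(":", 1)[1])
            print(event_type, payload.get("data", {}).get("id"))
```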
OpenCTI has an embedded TAXII API endpoint which provides valid STIX 2.1 bundles. If you wish to know more about the TAXII standard, please read the official introduction.
In OpenCTI you can create as many TAXII 2.1 collections as needed.
After creating a new collection, every system with a proper access token can consume the collection using different kinds of authentication (basic, bearer, etc.).
As when using the GraphQL API, TAXII 2.1 collections have a classic pagination system that should be handled by the consumer. Also, it's important to understand that element dependencies (nested IDs) inside the collection are not always contained/resolved in the bundle, so consistency needs to be handled at the client level.
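For illustration, a minimal consumer handling this pagination could look like the sketch below, where the domain, collection ID and token are placeholders and the standard TAXII 2.1 envelope fields (more, next) drive the loop:

```python
# Minimal sketch of TAXII 2.1 pagination against an OpenCTI collection.
import requests

BASE = "https://opencti.example.com/taxii2/root"
COLLECTION = "426e3acb-db50-4118-be7e-648fab67c16c"  # placeholder collection ID
headers = {
    "Authorization": "Bearer YOUR_API_TOKEN",
    "Accept": "application/taxii+json;version=2.1",
}

url = f"{BASE}/collections/{COLLECTION}/objects"
params = {}
while True:
    envelope = requests.get(url, headers=headers, params=params).json()
    for obj in envelope.get("objects", []):
        print(obj["type"], obj["id"])
    if not envelope.get("more"):  # no further pages
        break
    params["next"] = envelope["next"]  # cursor for the next page
```

Remember that nested IDs referenced by the returned objects may not be resolved in the bundle, so consistency must still be handled at the client level.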
The CSV feed facilitates the automatic generation of a CSV file, accessible via a URL. The CSV file is regenerated and updated at user-defined intervals, providing flexibility. The entries in the file correspond to the information that matches the filters applied and that were created or modified in the platform during the time interval (between the last generation of the CSV and the new one).
CSV size limit
The CSV file generated has a limit of 5,000 entries. If more than 5,000 entities are retrieved by the platform, only the most recent 5,000 will be shared in the file.
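As an illustration, the sketch below fetches such a feed and parses it with the Python standard library; the URL stands for the one displayed in the feed configuration, and the bearer token is only required for restricted feeds (both are placeholders):

```python
# Minimal sketch: downloading and parsing a CSV feed.
import csv
import io
import requests

FEED_URL = "https://opencti.example.com/feeds/csv/FEED_ID"  # placeholder URL
headers = {"Authorization": "Bearer YOUR_API_TOKEN"}  # omit for a public feed

resp = requests.get(FEED_URL, headers=headers)
resp.raise_for_status()

rows = list(csv.reader(io.StringIO(resp.text)))
print(f"{len(rows)} entries in this generation (capped at 5,000)")
```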
This guide aims to give you a full overview of the OpenCTI features and workflows. The platform can be used in various contexts to handle threats management use cases from a technical to a more strategic level. OpenCTI has been designed as a knowledge graph, taking inputs (threat intelligence feeds, sightings & alerts, vulnerabilities, assets, artifacts, etc.) and generating outputs based on built-in capabilities and / or connectors.
Here are some examples of use cases:
Cyber Threat Intelligence knowledge base
Detection as code feeds for XDR, EDR, SIEMs, firewalls, proxies, etc.
Incident response artifacts & cases management
Vulnerabilities management
Reporting, alerting and dashboarding on a subset of data
The welcome page gives any visitor on the OpenCTI platform an overview of what's happening on the platform. It can be replaced by a custom dashboard, created by a user (or the default dashboard set up in a role, a group or an organization).
Indicators in the dashboard
Numbers

| Component | Description |
| --- | --- |
| Intrusion sets | Number of intrusion sets. |
| Malware | Number of malware. |
| Reports | Number of reports. |
| Indicators | Number of indicators. |

Charts & lists

| Component | Description |
| --- | --- |
| Most active threats (3 last months) | Top active threats (threat actor, intrusion set and campaign) during the last 3 months. |
| Most targeted victims (3 last months) | Intensity of the targeting tied to the number of "targets" relationships for given entities (organization, sector, location, etc.) during the last 3 months. |
| Relationships created | Volume of relationships created over the past 12 months. |
| Most active malware (3 last months) | Top active malware during the last 3 months. |
| Most active vulnerabilities (3 last months) | List of the vulnerabilities with the greatest number of relations over the last 3 months. |
| Targeted countries (3 last months) | Intensity of the targeting tied to the number of "targets" relationships for a given country over the past 3 months. |
| Latest reports | Last reports ingested in the platform. |
| Most active labels (3 last months) | Top labels given to entities during the last 3 months. |
Explore the platform
To start exploring the platform and understand how information is structured, we recommend starting with the overview documentation page.
Automated imports in OpenCTI streamline the process of data ingestion, allowing users to effortlessly bring in valuable intelligence from diverse sources. This page focuses on the automated methods of importing data, which serve as bridges between OpenCTI and diverse external systems, formatting it into a STIX bundle, and importing it into the OpenCTI platform.
Connectors in OpenCTI serve as dynamic gateways, facilitating the import of data from a wide array of sources and systems. Every connector is designed to handle specific data types and structures of the source, allowing OpenCTI to efficiently ingest the data.
The behavior of each connector is defined by its development, determining the types of data it imports and its configuration options. This flexibility allows users to customize the import process to their specific needs, ensuring a seamless and personalized data integration experience.
The level of configuration granularity regarding the imported data type varies with each connector. Nevertheless, connectors empower users to specify the date from which they wish to fetch data. This capability is particularly useful during the initial activation of a connector, enabling the retrieval of historical data. Following this, the connector operates in real-time, continuously importing new data from the source.
OpenCTI's connector ecosystem covers a broad spectrum of sources, enhancing the platform's capability to integrate data from various contexts, from threat intelligence providers to specialized databases. The list of available connectors can be found in our connectors catalog. Connectors are categorized into three types: import connectors (the focus here), enrichment connectors, and stream consumers. Further documentation on connectors is available on the dedicated documentation page.
In summary, automated imports through connectors empower OpenCTI users with a scalable, efficient, and customizable mechanism for data ingestion, ensuring that the platform remains enriched with the latest and most relevant intelligence.
In OpenCTI, the \"Data > Ingestion\" section provides users with built-in functions for automated data import. These functions are designed for specific purposes and can be configured to seamlessly ingest data into the platform. Here, we'll explore the configuration process for the four built-in functions: Live Streams, TAXII Feeds, RSS Feeds, and CSV Feeds.
Live Streams enable users to consume data from another OpenCTI platform, fostering collaborative intelligence sharing. Here's a step-by-step guide to configure the Live stream synchronizer:
Remote OpenCTI URL: Provide the URL of the remote OpenCTI platform (e.g., https://[domain]; don't include the path).
Remote OpenCTI token: Provide the user token. An administrator from the remote platform must supply this token, and the associated user must have the \"Access data sharing\" privilege.
After filling in the URL and user token, validate the configuration.
Once validated, select a live stream to which you have access.
Additional configuration options:
User responsible for data creation: Define the user responsible for creating data received from this stream. Best practice is to dedicate one user per source for organizational clarity. Please see the section "Best practices" below for more information.
Starting synchronization: Specify the date of the oldest data to retrieve. Leave the field empty to import everything.
Take deletions into account: Enable this option to delete data from your platform if it was deleted on the providing stream. (Note: Data won't be deleted if another source has imported it previously.)
Verify SSL certificate: Check the validity of the certificate of the domain hosting the remote platform.
Avoid dependencies resolution: Import only entities without their relationships. For instance, if the stream shares malware, all the malware's relationships will be retrieved by default. This option enables you to choose not to recover them.
Use perfect synchronization: This option is specifically for synchronizing two platforms. If an imported entity already exists on the platform, the one from the stream will overwrite it.
TAXII Feeds in OpenCTI provide a robust mechanism for ingesting TAXII collections from TAXII servers or other OpenCTI instances. Configuring TAXII ingester involves specifying essential details to seamlessly integrate threat intelligence data. Here's a step-by-step guide to configure TAXII ingesters:
TAXII server URL: Provide the root API URL of the TAXII server. For collections from another OpenCTI instance, the URL is in the form https://[domain]/taxii2/root.
TAXII collection: Enter the ID of the TAXII collection to be ingested. For collections from another OpenCTI instance, the ID follows the format 426e3acb-db50-4118-be7e-648fab67c16c.
Authentication type (if necessary): Enter the authentication type. For non-public collections from another OpenCTI instance, the authentication type is "Bearer token". Enter the token of a user with access to the collection (similar to point 2 of the Live streams configuration above).
TAXII root API URL
Many ISAC TAXII configuration instructions will provide the URL for the collection or discovery service. In these cases, remove the last path segment from the TAXII server URL in order to use it in OpenCTI. E.g., use https://[domain]/tipapi/tip21, and not https://[domain]/tipapi/tip21/collections.
Additional configuration options:
User responsible for data creation: Define the user responsible for creating data received from this TAXII feed. Best practice is to dedicate one user per source for organizational clarity. Please see the section "Best practices" below for more information.
Import from date: Specify the date of the oldest data to retrieve. Leave the field empty to import everything.
RSS Feeds functionality enables users to seamlessly ingest items in report form from specified RSS feeds. Configuring RSS Feeds involves providing essential details and selecting preferences to tailor the import process. Here's a step-by-step guide to configure RSS ingesters:
RSS Feed URL: Provide the URL of the RSS feed from which items will be imported.
Additional configuration options:
User responsible for data creation: Define the user responsible for creating data received from this RSS feed. Best practice is to dedicate one user per source for organizational clarity. Please see the section "Best practices" below for more information.
Import from date: Specify the date of the oldest data to retrieve. Leave the field empty to import everything.
Default report types: Indicate the report type to be applied to the imported report.
Default author: Indicate the default author to be applied to the imported report. Please see the section "Best practices" below for more information.
Default marking definitions: Indicate the default markings to be applied to the imported reports.
CSV feed ingester enables users to import CSV files exposed on URLs. Here's a step-by-step guide to configure CSV ingesters:
CSV URL: Provide the URL of the exposed CSV file from which items will be imported.
CSV Mappers: Choose the CSV mapper to be used to import the data.
Authentication type (if necessary): Enter the authentication type.
CSV mapper
CSV feed functionality is based on CSV mappers. It is necessary to create the appropriate CSV mapper to import the data contained in the file. See the page dedicated to the CSV mapper.
Additional configuration options:
User responsible for data creation: Define the user responsible for creating data received from this CSV feed. Best practice is to dedicate one user per source for organizational clarity. Please see the section "Best practices" below for more information.
Import from date: Specify the date of the oldest data to retrieve. Leave the field empty to import everything.
In CSV mappers, if you created a representative for Marking definition, you could have chosen between 2 options:
let the user choose marking definitions,
use the default marking definitions of the user.
This configuration applies when using a CSV mapper for a CSV ingester. If you select a CSV mapper containing the option "Use default marking definitions of the user", the default marking definitions of the user you chose to be responsible for the data creation will be applied to all imported data. If you select a CSV mapper containing the option "Let the user choose marking definitions", you will be presented with the list of all the marking definitions of the user you chose to be responsible for the data creation (and not yours!).
To finalize the creation, click on "Verify" to run a check on the submitted URL with the selected CSV mapper. A valid URL-CSV mapper combination results in the identification of up to 50 entities.
To start your new ingester, click on "Start" in the burger menu.
CSV feed ingestion is made possible thanks to the connector "ImportCSV". You can track the progress in "Data > Ingestion > Connectors". On a regular basis, the ingestion is updated when new data is added to the CSV feed.
Best practices for feed import
Ensuring a secure and well-organized environment is paramount in OpenCTI. Here are two recommended best practices to enhance security, traceability, and overall organizational clarity:
Create a dedicated user for each source: Generate a user specifically for feed import, following the convention [F] Source name for clear identification. Assign the user to the "Connectors" group to streamline user management and permissions related to data creation. Please see here for more information on this good practice.
Establish a dedicated Organization for the source: Create an organization named after the data source for clear identification. Assign the newly created organization to the "Default author" field in the feed import configuration if available.
By adhering to these best practices, you ensure independence in managing rights for each import source through dedicated user and organization structures. In addition, you enable clear traceability to the entity's creator, facilitating source evaluation, dashboard creation, data filtering and other administrative tasks.
Users can streamline the data ingestion process using various automated import capabilities. Each method proves beneficial in specific circumstances.
Connectors act as bridges to retrieve data from diverse sources and format it for seamless ingestion into OpenCTI.
Live Streams enable collaborative intelligence sharing across OpenCTI instances, fostering real-time updates and efficient data synchronization.
TAXII Feeds provide a standardized mechanism for ingesting threat intelligence data from TAXII servers or other OpenCTI instances.
RSS Feeds facilitate the import of items in report form from specified RSS feeds, offering a straightforward way to stay updated on relevant intelligence.
By leveraging these automated import functionalities, OpenCTI users can build a comprehensive, up-to-date threat intelligence database. The platform's adaptability and user-friendly configuration options ensure that intelligence workflows remain agile, scalable, and tailored to the unique needs of each organization.
Import from files
Import mechanisms
The platform provides a seamless process for automatically parsing data from various file formats. This capability is facilitated by two distinct mechanisms.
File import connectors: Currently, there are two connectors designed for importing files and automatically identifying entities.
ImportFileStix: Designed to handle STIX-structured files (JSON or XML format).
ImportDocument: A versatile connector supporting an array of file formats, including PDF, text, HTML, and Markdown.
CSV mappers: The CSV mapper is a tailored functionality to facilitate the import of data stored in CSV files. For more in-depth information on using CSV mappers, refer to the CSV Mappers documentation page.
Both mechanisms can be employed wherever file uploads are possible. This includes the "Data" tabs of all entities and the dedicated panel named "Data import and analyst workbenches" located in the top right-hand corner (database logo with a small gear). Importing files from these two locations is not entirely equal; refer to the "Relationship handling from entity's Data tab" section below for details on this matter.
For the ImportDocument connector, the identification process involves searching for existing entities in the platform and scanning the document for relevant information. In addition, the connector uses regular expressions (regex) to detect IP addresses and domains within the document.
As for the ImportFileStix connector and the CSV mappers, there is no identification mechanism. The imported data will be, respectively, the data defined in the STIX bundle or the data defined by the configuration of the CSV mapper used.
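To illustrate the kind of regex-based scanning mentioned above, here is a small sketch with deliberately simplified patterns; these are illustrative examples, not the connector's actual expressions:

```python
# Illustrative sketch of regex-based extraction of IPs and domains from text.
import re

IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
DOMAIN_RE = re.compile(r"\b(?:[a-z0-9-]+\.)+[a-z]{2,}\b", re.IGNORECASE)

text = "Beaconing to update.bad.example.org was observed from 203.0.113.7."
print(IPV4_RE.findall(text))    # ['203.0.113.7']
print(DOMAIN_RE.findall(text))  # ['update.bad.example.org']
```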
Upload file: Navigate to the desired location, such as the "Data" tabs of an entity or the "Data import and analyst workbenches" panel. Then, upload the file containing the relevant data by clicking on the small cloud with the arrow inside, next to "Uploaded files".
Entity identification: For a CSV file, select the connector and CSV mapper to be used by clicking on the icon with an upward arrow in a circle. If it's not a CSV file, the connector will launch automatically. Then, the file import connectors or CSV mappers will identify entities within the uploaded document.
Workbench review and validation: Entities identified by connectors are not immediately integrated into the platform's knowledge base. Instead, they are thoughtfully placed in a workbench, awaiting review and validation by an analyst. Workbenches function as draft spaces, ensuring that no data is officially entered into the platform until the workbench has undergone the necessary validation process. For more information on workbenches, refer to the Analyst workbench documentation page.
Review workbenches
Import connectors may introduce errors in identifying object types or add "unknown" entities. Workbenches were established with the intent of reviewing the output of connectors before validation. Therefore, it is crucial to be vigilant when examining the workbench to prevent the import of incorrect data into the platform.
Additional information
No workbench for CSV mapper
It's essential to note that CSV mappers operate differently from other import mechanisms. Unlike connectors, CSV mappers do not generate workbenches. Instead, the data identified by CSV mappers is imported directly into the platform without an intermediary workbench stage.
Relationship handling from entity's "Data" tab
When importing a document directly from an entity's "Data" tab, there can be an automatic addition of relationships between the objects identified by connectors and the entity in focus. The process differs depending on the type of entity in which the import occurs:
If the entity is a container (e.g., Report, Grouping, and Cases), the identified objects in the imported file will be linked to the entity (upon workbench validation). In the context of a container, the object is said to be \"contained\".
For entities that are not containers, a distinct behavior unfolds. In this scenario, the identified objects are not linked to the entity, except for Observables. "Related-to" relationships between the Observables and the entity are automatically added to the workbench and created after its validation.
File import in Content tab
Expanding the scope of file imports, users can seamlessly add files in the Content tab of Analyses or Cases. In this scenario, the file is directly added as an attachment without utilizing an import mechanism.
In order to initiate file imports, users must possess the requisite capability: "Upload knowledge files". This capability ensures that only authorized users can contribute and manage knowledge files within the OpenCTI platform, maintaining a controlled and secure environment for data uploads.
Deprecation warning
Using the ImportDocument connector to parse CSV files is now disallowed, as it produces inconsistent results. Please configure and use CSV mappers dedicated to your specific CSV content for reliable parsing. CSV mappers can be created and configured in the administration interface.
OpenCTI enforces strict rules to determine the period during which an indicator is effective for usage. This period is defined by the valid_from and valid_until dates. All along its lifecycle, the indicator score will decrease according to configured decay rules. After the indicator expires, the object is marked as revoked and the detection field is automatically set to false. Here, we outline how these dates are calculated within the OpenCTI platform and how the score is updated with decay rules.
Setting validity dates
Data source provided the dates
If a data source provides valid_from and valid_until dates when creating an indicator on the platform, these dates are used without modification. However, if the creation is performed from the UI and the indicator is eligible to be managed by a decay rule, the platform will replace this valid_until with the one calculated by the Decay rule.
Fallback rules for unspecified dates
If a data source does not provide validity dates, OpenCTI applies the decay rule matching the indicator to determine these dates. The valid_until date is computed based on the revoke score of the decay rule: it is set at the exact time at which the indicator will reach the revoke score. Past the valid_until date, the indicator is marked as revoked.
Indicators have an initial score at creation, either provided by the data source or 50 by default. Over time, this score decreases according to the configured decay rules. The score is updated at each reaction point defined for the decay rule matching the indicator at creation.
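To make the mechanics tangible, the sketch below derives valid_until as the moment the score reaches the revoke score. The power-law curve and every parameter value are assumptions for illustration, not necessarily the exact formula or defaults used by the platform's decay rules:

```python
# Illustrative sketch of indicator score decay (assumed power-law curve).
from datetime import datetime, timedelta

initial_score = 50    # default when the data source provides no score
revoke_score = 20     # hypothetical revoke score of the matching decay rule
lifetime_days = 365   # hypothetical rule lifetime
decay_factor = 3.0    # hypothetical steepness parameter

def score_at(day: float) -> float:
    """Score after `day` days, under the assumed power-law decay."""
    return initial_score * (1 - (day / lifetime_days) ** (1 / decay_factor))

# valid_until is the first day at which the score reaches the revoke score.
day = next(d for d in range(lifetime_days + 1) if score_at(d) <= revoke_score)
valid_from = datetime(2024, 1, 1)
print("valid_until:", valid_from + timedelta(days=day))
```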
Understanding how OpenCTI calculates validity periods and scores is essential for effective threat intelligence analysis. These rules ensure that your indicators are accurate and up-to-date, providing a reliable foundation for threat intelligence data.
Inferences and reasoning
Overview
OpenCTI's inferences and reasoning capability is a robust engine that automates the process of relationship creation within your threat intelligence data. This capability, situated at the core of OpenCTI, allows logical rules to be applied to existing relationships, resulting in the automatic generation of new, pertinent connections.
Understanding inferences and reasoning
Inferences and reasoning serve as OpenCTI's intelligent engine. It interprets your data logically. By activating specific predefined rules (of which there are around twenty), OpenCTI can deduce new relationships from the existing ones. For instance, if there's a connection indicating an Intrusion Set targets a specific country, and another relationship stating that this country is part of a larger region, OpenCTI can automatically infer that the Intrusion Set also targets the broader region.
Completeness: Fills relationship gaps, ensuring a comprehensive and interconnected threat intelligence database.
Accuracy: Minimizes manual input errors by deriving relationships from predefined, accurate logic.
How it operates
When you activate an inference rule, OpenCTI continuously analyzes your existing relationships and applies the defined logical rules. These rules are logical statements that define conditions for new relationships. When the set of conditions is met, OpenCTI creates the corresponding relationship automatically.
For example, if you activate a rule as follows:
IF [Entity A targets Identity B] AND [Identity B is part of Identity C] THEN [Entity A targets Identity C]
OpenCTI will apply this rule to existing data. If it finds an Intrusion Set ("Entity A") targeting a specific country ("Identity B") and that country is part of a larger region ("Identity C"), the platform will automatically establish a relationship between the Intrusion Set and the region.
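A minimal sketch of this transitive inference over an in-memory set of relationships (illustrative only, not the rules engine's actual implementation):

```python
# Illustrative sketch: inferring "targets" relationships through "part of".
targets = {("IntrusionSet-A", "France")}          # Entity A targets Identity B
part_of = {("France", "Western Europe")}          # Identity B is part of Identity C

inferred = {
    (entity, region)
    for (entity, place) in targets
    for (member, region) in part_of
    if place == member and (entity, region) not in targets
}
print(inferred)  # {('IntrusionSet-A', 'Western Europe')}
```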
Administration: To find out about existing inference rules and enable/disable them, refer to the Rules engine page in the Administration section of the documentation.
Playbooks: OpenCTI playbooks are highly customizable automation scenarios. This seamless integration allows for further automation, making your threat intelligence processes even more efficient and tailored to your specific needs. More information in our blog post.
Manual data creation in OpenCTI is an intuitive process that occurs throughout the platform. This page provides guidance on two key aspects of manual creation: Entity creation and Relationship creation.
Navigate to the relevant section: Be on the section of the platform related to the object type you want to create.
Click on the \"+\" icon: Locate the \"+\" icon located at the bottom right of the window.
Fill in entity-specific fields: A form on the right side of the window will appear, allowing you to fill in the specific fields of the entity. Certain fields are inherently obligatory, and administrators have the option to designate additional mandatory fields (see here for more information).
Click on \"Create\": Once you've filled in the desired fields, click on \"create\" to initiate the entity creation process.
Before delving into the creation of relationships between objects in OpenCTI, it's crucial to grasp some foundational concepts. Here are key points to understand:
On several aspects, including relationships, two categories of objects must be differentiated: containers (e.g., Reports, Groupings, and Cases) and the others. Containers aren't related to objects; they contain them.
Relationships, like all other entities, are objects. They possess fields, can be linked, and share characteristics identical to other entities.
Relationships are inherently directional, comprising a "from" entity and a "to" entity. Understanding this directionality is essential for accurate relationship creation.
OpenCTI supports various relationship types, and their usage depends on the entity types being linked. For example, a "targets" relationship might link malware to an organization, while linking malware to an intrusion set might involve a different relationship type.
Now, let's explore the process of creating relationships. To do this, we will differentiate the case of containers from the others.
When it comes to creating relationships within containers in OpenCTI, the process is straightforward. Follow these steps to attach objects to a container:
Navigate to the container: Go to the specific container to which you want to attach an object. This could be a Report, a Grouping, or a Case.
Access the "Entities" tab: Within the container, locate and access the "Entities" tab.
Click on the "+" icon: Find the "+" icon located at the bottom right of the window.
Search for entities: A side window will appear. Search for the entities you want to add to the container.
Add entities to the container: Click on the desired entities. They will be added directly to the container.
When creating relationships not involving a container, the creation method is distinct. Follow these steps to create relationships between entities (a programmatic equivalent is sketched after the steps):
Navigate to one of the entities: Go to one of the entities you wish to link. Please be aware that the entity from which you create the relationship will be designated as the "from" entity of that relationship. So choose carefully the entity from which to create the relationship, as it will impact the outcome.
Access the "Knowledge" tab: Within the entity, go to the "Knowledge" tab.
Select the relevant categories: In the right banner, navigate to the categories that correspond to the object to be linked. The available categories depend on the type of entity you are currently on. For example, if you are on a malware and want to link to a sector, choose "victimology".
Click on the "+" icon: Find the "+" icon located at the bottom right of the window.
Search for entities: A side window will appear. Search for the entities you want to link.
Add entities and click on "Continue": Click on the entities you wish to link. Multiple entities can be selected. Then click on "Continue" at the bottom right.
Fill in the relationship form: As relationships are objects, a creation form similar to creating an entity will appear.
Click on \"Create\": Once you've filled in the desired fields, click on \"create\" to initiate the relationship creation process.
While the aforementioned methods are primary for creating entities and relationships, OpenCTI offers versatility, allowing users to create objects in various locations within the platform. Here's a non-exhaustive list of additional places that facilitate on-the-fly creation:
Creating entities during relationship creation: During the "Search for entities" phase (see above) of the relationship creation process, click on the "+" icon to create a new entity directly.
Knowledge graph: Within the knowledge graph - found in the knowledge tab of the containers or in the investigation functionality - users can seamlessly create entities or relationships.
Inside a workbench: The workbench serves as another interactive space where users can create entities and relationships efficiently.
These supplementary methods offer users flexibility and convenience, allowing them to adapt their workflow to various contexts within the OpenCTI platform. As users explore the platform, they will naturally discover additional means of creating entities and relationships.
Max confidence level
When creating knowledge in the platform, the maximum confidence level of the user is used. Please navigate to this page to understand this concept and the impact it can have on knowledge creation.
OpenCTI's merge capability stands as a pivotal tool for optimizing threat intelligence data, allowing you to consolidate multiple entities of the same type. This mechanism serves as a powerful cleanup tool, harmonizing the platform and unifying scattered information. In this section, we explore the significance of this feature, the process of merging entities, and the strategic considerations involved.
In the ever-expanding landscape of threat intelligence and the multitude of names chosen by different data sources, data cleanliness is essential. Duplicates and fragmented information hinder efficient analysis. The merge capability is a strategic solution for amalgamating related entities into a cohesive unit. Central to the merging process is the selection of a main entity. This primary entity becomes the anchor, retaining crucial attributes such as name and description. Other entities, while losing specific fields like descriptions, are aliased under the primary entity. This strategic decision preserves vital data while eliminating redundancy.
One of the key features of the merge capability is its ability to preserve relationships. While merging entities, their interconnected relationships are not lost. Instead, they seamlessly integrate into the new, merged entity. This ensures that the intricate web of relationships within the data remains intact, fostering a comprehensive understanding of the threat landscape.
OpenCTI's merge capability helps improve the quality of threat intelligence data. By consolidating entities and centralizing relationships, OpenCTI empowers analysts to focus on insights and strategies, unburdened by data silos or fragmentation. However, exercising caution and foresight in the merging process is essential, ensuring a robust and streamlined knowledge base.
Administration: To understand how to merge entities and the considerations to take into account, refer to the Merging page in the Administration section of the documentation.
Deduplication mechanism: the platform is equipped with deduplication processes that automatically merge data at creation (either manually or by importing data from different sources) if it meets certain conditions.
"},{"location":"usage/nested/","title":"Nested references and objects","text":""},{"location":"usage/nested/#stix-standard","title":"STIX standard","text":""},{"location":"usage/nested/#definition","title":"Definition","text":"
In the STIX 2.1 standard, objects can:
Refer to other objects directly in their attributes, by referencing one or multiple IDs.
Have other objects directly embedded in the entity.
In OpenCTI, all nested references and objects are modeled as relationships, making it possible to pivot more easily on labels, external references, kill chain phases, marking definitions, etc.
When importing and exporting data to/from OpenCTI, the translation between nested references and objects and full-fledged nodes and edges is automated and therefore transparent for the users. Here is an illustrative example:
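The original example is shown as a graph in the documentation; as a stand-in, here is a hedged sketch of the flattening idea, with a hypothetical STIX 2.1 Report (IDs shortened, edge names simplified) whose nested references each become one edge:

```python
# Hypothetical STIX 2.1 object with nested references (IDs shortened for readability).
report = {
    "type": "report",
    "id": "report--1111",
    "name": "Campaign analysis",
    "created_by_ref": "identity--2222",                   # nested reference to an author
    "object_marking_refs": ["marking-definition--3333"],  # nested references to markings
    "external_references": [                              # nested embedded objects
        {"source_name": "ACME blog", "url": "https://example.com/post"}
    ],
}

# Sketch of the flattening OpenCTI performs: every nested reference or embedded
# object becomes a node plus an edge, so you can pivot on authors, markings, etc.
nodes, edges = [report["id"]], []
edges.append((report["id"], "created-by", report["created_by_ref"]))
for marking in report["object_marking_refs"]:
    edges.append((report["id"], "object-marking", marking))
for ref in report["external_references"]:
    edges.append((report["id"], "external-reference", ref["url"]))

print(edges)
```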
"},{"location":"usage/notifications/","title":"Notifications and alerting","text":"
In the evolving landscape of cybersecurity, timely awareness is crucial. OpenCTI empowers users to stay informed and act swiftly through its robust notifications and alerting system. This feature allows users to create personalized triggers that actively monitor the platform for specific events and notify them promptly when these conditions are met.
From individual users tailoring their alert preferences to administrators orchestrating collaborative triggers for Groups or Organizations, OpenCTI's notification system is a versatile tool for keeping cybersecurity stakeholders in the loop.
The main menu "Notifications and triggers" for creating and managing notifications is located in the top right-hand corner, with the bell icon.
In OpenCTI, triggers serve as personalized mechanisms for users to stay informed about specific events that align with their cybersecurity priorities. Users can create and manage triggers to tailor their notification experience. Each trigger operates by actively listening to events based on predefined filters and event types, promptly notifying users via chosen notifiers when conditions are met.
Individual user triggers: Each user possesses the autonomy to craft their own triggers, finely tuned to their unique preferences and responsibilities. By setting up personalized filters and selecting preferred notifiers, users ensure that they receive timely and relevant notifications aligned with their specific focus areas.
Administrative control: Platform administrators have the capability to create and manage triggers for Users, Groups and Organizations. This provides centralized control and the ability to configure triggers that address collective cybersecurity objectives. Users within the designated Group or Organization will benefit from triggers with read-only access rights. These triggers are to be created directly on the User, Group or Organization with whom to share them, in "Settings > Security > Users|Groups|Organizations".
Leveraging the filters, users can meticulously define the criteria that activate their triggers. This level of granularity ensures that triggers are accurate, responding precisely to events that matter most. Users can tailor filters to consider various parameters such as object types, markings, sources, or other contextual details. They can also allow notifications for the assignment of a Task, a Case, etc.
Beyond filters, a trigger can be configured to respond to three event types: creation, modification, and deletion.
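As an illustration of how a trigger combines filters and event types, here is a minimal sketch; the trigger structure and field names below are assumptions for the example, not OpenCTI's internal format:

```python
# Hypothetical sketch of a trigger evaluating a platform event stream.
trigger = {
    "event_types": {"create", "update"},  # creation and modification, not deletion
    "filters": {"entity_type": "Report", "markings": "TLP:AMBER"},
    "notifiers": ["User interface"],
}

def should_notify(event: dict) -> bool:
    """Return True when the event matches the trigger's event types and filters."""
    if event["type"] not in trigger["event_types"]:
        return False
    data = event["data"]
    return all(data.get(key) == value for key, value in trigger["filters"].items())

event = {"type": "create",
         "data": {"entity_type": "Report", "markings": "TLP:AMBER"}}
print(should_notify(event))  # True
```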
Instance triggers offer a targeted approach to live monitoring by allowing users to set up triggers specific to one or several entities. These triggers, when activated, keep a vigilant eye on a predefined set of events related to the selected entities, ensuring that you stay instantly informed about crucial changes.
On an entity's overview, locate the "Instance trigger quick subscription" button with the bell icon at the top right.
Click on the button to create the instance trigger.
(Optional) Click on it again to modify the instance trigger created.
"},{"location":"usage/notifications/#events-monitored-by-instance-triggers","title":"Events monitored by instance triggers","text":"
An instance trigger set on an entity X actively monitors the following events:
Update/Deletion of X: Stay informed when the selected entity undergoes changes or is deleted.
Creation/Deletion of relationships: Receive notifications about relationships being added to or removed from X.
Creation/Deletion of related entities: Be alerted when entities that have X in their refs - i.e. contain X, are shared with X, are created by X, etc. - are created or deleted.
Adding/Removing X in ref: Stay in the loop when X is included in or excluded from the refs of other entities (i.e. adding X as the author of an entity, adding X in a report, etc.).
Entity deletion notification
It's important to note that the notification of entity deletion can occur in two scenarios:
Real entity deletion: When the entity is genuinely deleted from the platform.
Visibility loss: When a modification to the entity results in the user losing visibility for that entity.
Digests provide an efficient way to streamline and organize your notifications. By grouping notifications based on selected triggers and specifying the delivery period (daily, weekly, monthly), you gain the flexibility to receive consolidated updates at your preferred time, as opposed to immediate notifications triggered by individual events.
Configure digest: Set the parameters, including triggers to be included and the frequency of notifications (daily, weekly, monthly).
Choose the notifier(s): Select the notification method(s) (e.g. within the OpenCTI interface, via email, etc.).
"},{"location":"usage/notifications/#benefits-of-digests","title":"Benefits of digests","text":"
Organized notifications: Digests enable you to organize and categorize notifications, preventing a flood of individual alerts.
Customized delivery: Choose the frequency of digest delivery based on your preferences, whether it's a daily overview, a weekly summary, or a monthly roundup.
Reduced distractions: Receive notifications at a scheduled time, minimizing interruptions and allowing you to focus on critical tasks.
Digests enhance your control over notification management, ensuring a more structured and convenient approach to staying informed about important events.
In OpenCTI, notifiers serve as the channels for delivering notifications, allowing users to stay informed about critical events. The platform offers two built-in notifiers: "Default mailer" for email notifications and "User interface" for in-platform alerts.
OpenCTI features built-in notifier connectors that empower users to create personalized notifiers for notification and activity alerting. Three essential connectors are available:
Platform mailer connector: Enables sending notifications directly within the OpenCTI platform.
Simple mailer connector: Offers a straightforward approach to email notifications with simplified configuration options.
Generic webhook connector: Facilitates communication through webhooks.
OpenCTI provides two samples of webhook notifiers designed for Teams integration.
"},{"location":"usage/notifications/#configuration-and-access","title":"Configuration and Access","text":"
Notifiers are manageable in the "Settings > Customization > Notifiers" window and can be restricted through Role-Based Access Control (RBAC). Administrators can restrict access to specific Users, Groups, or Organizations, ensuring controlled usage.
For guidance on configuring custom notifiers and detailed setup instructions, refer to the dedicated documentation page.
The following chapter aims at giving the reader a step-by-step description of what is available on the platform and the meaning of the different tabs and entries.
When the user connects to the platform, the home page is the Dashboard. This Dashboard contains several visuals summarizing the types and quantity of data recently imported into the platform.
Dashboard
To get more information about the components of the default dashboard, you can consult the Getting started page.
The left side panel allows the user to navigate through different windows and access different views and categories of knowledge.
The first part of the platform in the left menu is dedicated to what we call the "hot knowledge": the entities and relationships which are added on a daily basis in the platform and which generally require work and analysis from the users.
Analyses: all containers which convey relevant knowledge such as reports, groupings and malware analyses.
Cases: all types of case like incident responses, requests for information, for takedown, etc.
Events: all incidents & alerts coming from operational systems as well as sightings.
Observations: all technical data in the platform such as observables, artifacts and indicators.
The second part of the platform in the left menu is dedicated to the "cold knowledge": the entities and relationships used in the hot knowledge. You can see this as the "encyclopedia" of all the pieces of knowledge you need to get context: threats, countries, sectors, etc.
Threats: all threats entities from campaigns to threat actors, including intrusion sets.
Arsenal: all tools and pieces of malware used and/or targeted by threats, including vulnerabilities.
Techniques: all objects related to tactics and techniques used by threats (TTPs, etc.).
Entities: all non-geographical contextual information such as sectors, events, organizations, etc.
Locations: all geographical contextual information, from cities to regions, including precise positions.
In the Settings > Parameters, it is possible for the platform administrator to hide categories in the platform for all users.
"},{"location":"usage/overview/#hide-categories-in-roles","title":"Hide categories in roles","text":"
In OpenCTI, the different roles are highly customizable. It is possible to define default dashboards, triggers, etc., but also to hide categories in the roles:
"},{"location":"usage/overview/#presentation-of-a-typical-page-in-opencti","title":"Presentation of a typical page in OpenCTI","text":"
While OpenCTI features numerous entities and tabs, many of them share similarities, with only minor differences arising from specific characteristics. These differences may involve the inclusion or exclusion of certain fields, depending on the nature of the entity.
This part only details the general outline of a "typical" OpenCTI page. The specifics of the different entities will be detailed in the corresponding pages below (Activities and Knowledge).
In the Overview tab on the entity, you will find all properties of the entity as well as the recent activities.
First, you will find the Details section, where all the properties specific to the type of entity you are looking at are displayed; below is an example with a piece of malware:
Then, the Basic information section displays the properties common to all objects in OpenCTI, such as the marking definition, the author, the labels (i.e. tags), etc.
Below these two sections, you will find the latest modifications in the Knowledge base related to the Entity:
Latest created relationships: display the latest relationships that have been created from or to this Entity, for example the latest Indicators of Compromise and associated Threat Actors of a Malware.
Latest containers about the object: display all the Cases and Analyses that contain this Entity, for example the latest Reports about a Malware.
External references: display all the external sources associated with the Entity. You will often find here links to external reports or webpages from which the Entity's information came.
History: display the latest chronological modifications of the Entity and its relationships that occurred in the platform, in order to trace back any alteration.
Last, all Notes written by users of the platform about this Entity are displayed in order to access unstructured analysis comments.
In the Knowledge tab, which is the central part of the entity, you will find all the Knowledge related to the current entity. The Knowledge tab is different for Analyses (Report, Groupings) and Cases (Incident response, Request for Information, Request for Takedown) entities than for all the other entity types.
The Knowledge tab of those entities (which represent Analyses or Cases that can contain a collection of Objects) is the place to integrate and link together entities. For more information on how to integrate information in OpenCTI using the Knowledge tab of a report, please refer to the Manual creation part.
The Knowledge tab of any other entity (that does not aim to contain a collection of Objects) gathers all the entities which have at some point been linked to the entity the user is looking at. For instance, as shown in the following capture, the Knowledge tab of the Intrusion Set APT29 gives access to the list of all entities APT29 is attributed to, all victims the intrusion set has targeted, all its campaigns, TTPs, malware, etc. For entities to appear in these tabs under Knowledge, they need to have been linked to the entity directly or have been computed by the inference engine.
"},{"location":"usage/overview/#focus-on-indicators-and-observables","title":"Focus on Indicators and Observables","text":"
The Indicators and Observables section offers 3 display modes:
The entities view, which displays the indicators/observables linked to the entity.
The relationship view, which displays the various relationships between the indicators/observables linked to the entity and the entity itself.
The contextual view, which displays the indicators/observables contained in the cases and analyses that contain the entity.
The Content tab allows for uploading and creating outcome documents related to the content of the current entity (in PDF, text, HTML or markdown files). This tab enables you to preview, manage and write deliverables associated with the entity, for example an analytic report to share with other teams, or a markdown file to feed a collaborative wiki.
The Content tab is available for a subset of entities: Report, Incident, Incident response, Request for Information, and Request for Takedown.
The Analyses tab contains the list of all Analyses (Report, Groupings) and Cases (Incident response, Request for Information, Request for Takedown) in which the entity has been identified.
By default, this tab displays a list, but you can also display the content of all the listed Analyses on a graph, allowing you to explore all their Knowledge and get a glance at the context around the Entity.
The Observables tab (for Reports and Observed data): a table containing all the SCOs (STIX Cyber Observables) contained in the Report or the Observed data, with search and filters available. It also displays whether the SCO has been added directly or through inferences with the reasoning engine.
The Entities tab (for Reports and Observed data): a table containing all the SDOs (STIX Domain Objects) contained in the Report or the Observed data, with search and filters available. It also displays whether the SDO has been added directly or through inferences with the reasoning engine.
The Sightings tab (for Indicators and Observables): a table containing all the Sightings relationships, corresponding to events in which Indicators (IP, domain name, URL, etc.) are detected by or within an information system, an individual or an organization. Most often, this corresponds to a security event transmitted by a SIEM or an EDR.
"},{"location":"usage/pivoting/","title":"Pivot and investigate","text":"
In OpenCTI, all data are structured as an extensive knowledge graph, where every element is interconnected. The investigation functionality provides a powerful tool for pivoting on any entity or relationship within the platform. Pivoting enables users to explore and analyze connections between entities and relationships, facilitating a comprehensive understanding of the data.
To access investigations, navigate to the top right corner of the toolbar:
Access restriction
When an investigation is created, it is initially visible only to the creator, allowing them to work on the investigation before deciding to share it. The sharing mechanism is akin to that of dashboards. For further details, refer to the Access control section in the dashboard documentation page.
You can add any existing entity of the platform to your investigation.
After adding an entity, you can select it and view its details in the panel that appears on the right of the screen.
On each node, you'll notice a bullet with a number inside, serving as a visual indication of how many entities are linked to it but not currently displayed in the graph. Keep in mind that this number is an approximation, which is why there's a "+" next to it. If no bullet is displayed, it means there's nothing to expand from this node.
To incorporate these linked entities into the graph, you just have to expand the nodes. Use the button with the 4-arrows logo in the mentioned menu, or double-click on the entity directly. This action opens a new window where you can choose the types of entities and relationships you wish to expand.
For instance, in the image above, selecting the target Malware and the relationship Uses implies expanding in the investigation graph all Malware linked to this node with a relationship of type Uses.
"},{"location":"usage/pivoting/#roll-back-expansion","title":"Roll back expansion","text":"
Expanding a graph can add a lot of entities and relations, making it not only difficult to read but sometimes counterproductive since it brings entities and relations that are not useful to your investigations. To solve this problem, there is a button to undo the last expansion.
When clicking on this button, you will retrieve the state your graph was in before the expansion. As a result, please note that all add or remove actions made since the last expansion will be lost: in other words, if you have expanded your graph and then added some entities to it, the entities you added will no longer be in your graph after clicking on the rollback button.
You can roll back your investigation graph up to the last 10 expand actions.
You can create a relationship between entities directly within your investigation. To achieve this, select multiple entities by clicking on them while holding down the shift key. A button then appears at the bottom right to create one or more relationships, depending on the number of entities selected.
Relationship creation
Creating a relationship in the investigation graph will generate the relationship in your knowledge base.
"},{"location":"usage/pivoting/#capitalize-on-an-investigation","title":"Capitalize on an investigation","text":""},{"location":"usage/pivoting/#export-investigation","title":"Export investigation","text":"
Users have the capability to export investigations, providing a way to share, document, or archive their findings.
PDF and image formats: Users can export investigations in either PDF or image format, offering flexibility in sharing and documentation.
STIX bundle: The platform allows the export of the entire content of an investigation graph as a STIX bundle. In the STIX format, all objects within the investigation graph are automatically aggregated into a Report object.
"},{"location":"usage/pivoting/#turn-investigation-into-a-container","title":"Turn investigation into a container","text":"
Users can efficiently collect and consolidate the findings of an investigation by adding the content into dedicated containers. The contents of an investigation can be imported into various types of containers, including:
Grouping
Incident Response
Report
Request for Information
Request for Takedown
You have the flexibility to choose between creating a new container on the fly or adding the investigation content to an existing container.
After clicking on the ADD button, the browser will redirect to the Knowledge tab of the container to which you added the content of your investigation. If you added it to multiple containers, the redirection will be to the first one in the list.
"},{"location":"usage/reliability-confidence/","title":"Reliability and Confidence","text":""},{"location":"usage/reliability-confidence/#generalities","title":"Generalities","text":"
In (Cyber) Threat Intelligence, the evaluation of information sources and of information quality is one of the most important aspects of the work. It is of the utmost importance to assess situations by taking into account the reliability of the sources and the credibility of the information.
This concept is foundational in OpenCTI and has a real impact on:
the data deduplication process
the data stream filtering for ingestion and sharing
"},{"location":"usage/reliability-confidence/#what-is-the-reliability-of-a-source","title":"What is the Reliability of a source?","text":"
The Reliability of a source of information is a measurement of the trust that the analyst can have in the source, based on its technical capabilities or history. Is the source a reliable partner with a long sharing history? A competitor? Unknown?
The reliability of sources is often stated at the organizational level, as it requires an overview of the whole history with each source.
In the Intelligence field, Reliability is often notated with the NATO Admiralty code.
"},{"location":"usage/reliability-confidence/#what-is-confidence-of-an-information","title":"What is Confidence of an information?","text":"
The reliability of a source is important, but even a trusted source can be wrong. Information in itself has a credibility, based on what is known about the subject and the level of corroboration by other sources.
Credibility is often stated at the analyst team level, as experts of the subject are able to judge the information within its context.
In the Intelligence field, Confidence is often notated with the NATO Admiralty code.
Why Confidence instead of Credibility?
Using both Reliability and Credibility is an advanced use case for most CTI teams. It requires a mature organization and a well-staffed team. For most internal CTI teams, a simple confidence level is enough to forge assessments, in particular for teams that concentrate on technical CTI.
Thus, in OpenCTI, we have made the choice to fuse the notion of Credibility with the Confidence level that is commonly used by the majority of users. Users now have the liberty to push their practice forward and use both Confidence and Reliability in their daily assessments.
"},{"location":"usage/reliability-confidence/#reliability-open-vocabulary","title":"Reliability open vocabulary","text":"
Reliability value can be set for every Entity in the platform that can be Author of Knowledge:
Organizations
Individuals
Systems
and also Reports
Reliability on Reports allows you to specify the reliability associated with the original author of the report, if you received it through a provider.
For all Knowledge in the platform, the reliability of the source of the Knowledge (author) is displayed in the Overview. This way, you can always forge your assessment of the provided Knowledge regarding the reliability of the author.
You can also now filter entities by the reliability of their author.
Tip
This way, you may choose to feed your work with only Knowledge provided by reliable sources.
Reliability is an open vocabulary that can be customized in Settings > Taxonomies > Vocabularies: reliability_ov.
Info
The default setting is the Reliability scale from the NATO Admiralty code, but you can define whatever best fits your organization.
The Confidence level is available on many entity types in the platform, including:
Cases: Incident Response, Request for Information, Request for Takedown, Feedback
Events: Incident, Sighting, Observed data
Observations: Indicator, Infrastructure
Threats: Threat actor (Group), Threat actor (Individual), Intrusion Set, Campaign
Arsenal: Malware, Channel, Tool, Vulnerability
For all of these entities, the Confidence level is displayed in the Overview, along with the Reliability. This way, you can rapidly assess the Knowledge with the Confidence level representing the credibility/quality of the information.
The Confidence level is a numerical value between 0 and 100, but multiple "ticks" can be defined and labelled to provide a meaningful scale.
Confidence level can be customized for each entity type in Settings > Customization > Entity type.
As such customization can be cumbersome, three confidence level templates are provided in OpenCTI:
Admiralty: corresponding to the Admiralty code's credibility scale
Objective: corresponding to a fully objective scale, aiming to leave any subjectivity behind. With this scale, an information's confidence is:
"Cannot be judged": there is no data regarding the credibility of the information.
"Told": the information is known because it has been told to the source. The source doesn't verify it by any means.
"Induced": the information is the result of analysis work and is based on other similar information assumed to be true.
"Deduced": the information is the result of analysis work, and is a logical conclusion of other information assumed to be true.
"Witnessed": the source has itself observed the described situation or object.
Standard: the historic confidence level scale in OpenCTI, defining Low, Med and High levels of confidence.
It is always possible to modify an existing template to define a custom scale adapted to your context.
Tip
If you use the Admiralty code setting for both Reliability and Confidence, you will find yourself with the equivalent of the NATO confidence notation in the Overview of your different entities (A1, B2, C3, etc.).
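To make the "ticks" concept concrete, here is a small sketch of an Admiralty-style credibility scale expressed as labelled ticks over the 0-100 range; the exact cut-off values below are assumptions for illustration, not the platform's canonical template:

```python
# Hypothetical Admiralty-style confidence scale: each tick maps a minimum
# numerical value (0-100) to a label, mirroring the "ticks" concept above.
ADMIRALTY_TICKS = [
    (80, "1 - Confirmed by other sources"),
    (60, "2 - Probably True"),
    (40, "3 - Possibly True"),
    (20, "4 - Doubtful"),
    (1,  "5 - Improbable"),
    (0,  "6 - Truth cannot be judged"),
]

def label_for(confidence: int) -> str:
    """Return the tick label matching a numerical confidence level."""
    for minimum, label in ADMIRALTY_TICKS:
        if confidence >= minimum:
            return label
    return ADMIRALTY_TICKS[-1][1]

print(label_for(65))  # "2 - Probably True"
```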
We know that in organizations, different users do not always have the same expertise or seniority. As a result, some specific users can be more "trusted" than others when creating or updating knowledge. Additionally, because connectors, TAXII feeds and streams are each linked to one user, it is important to be able to differentiate which connector, stream or TAXII feed is more trustworthy than others.
This is why we have introduced the concept of max confidence level to tackle this use case.
A max confidence level per user allows organizations to fine-tune their users to ensure created and updated knowledge stays as consistent as possible.
The maximum confidence level can be set at the Group level or at the User level, and can be overridden by entity type for fine-tuning your confidence policy.
"},{"location":"usage/reliability-confidence/#overall-way-of-working","title":"Overall way of working","text":"
The overall idea is that users with a max confidence level lower than a confidence level of an entity cannot update or delete this entity.
Also, in a conservative approach, when two confidence levels are possible, we would always take the lowest one.
To have a detailed understanding of the concept, please browse through this diagram:
User and group confidence level configuration shall be viewed as:
a maximum confidence level between 0 and 100 (optional for users, mandatory for groups);
a list of overrides (a max confidence level between 0 and 100) per entity type (optional).
The user's effective confidence level is the result of this configuration from multiple sources (user and their groups).
To compute this value, OpenCTI uses the following strategy (a code sketch follows the list):
effective maximum confidence is the maximum value found in the user's groups;
effective overrides per entity type are cumulated from all groups, taking the maximum value if several overrides are set on the same entity type;
if a user maximum confidence level is set, it overrides everything from groups, including the overrides per entity type defined at group level;
if not, but the user has specific overrides per entity type, they override the corresponding confidence levels per entity type coming from groups;
if a user has the administrator's "Bypass" capability, the effective confidence level will always be 100, without overrides, regardless of the group and user configuration on confidence level.
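Here is the announced sketch of this strategy in Python; it is an illustration of the rules above, not the actual OpenCTI implementation, and the ConfidenceConfig structure is hypothetical:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class ConfidenceConfig:
    max_level: Optional[int] = None  # 0-100; mandatory for groups, optional for users
    overrides: Dict[str, int] = field(default_factory=dict)  # per entity type

def effective_confidence(user: ConfidenceConfig,
                         groups: List[ConfidenceConfig],
                         has_bypass: bool = False) -> ConfidenceConfig:
    # "Bypass" capability: always 100, without overrides.
    if has_bypass:
        return ConfidenceConfig(max_level=100, overrides={})

    # A user-level max confidence overrides everything coming from groups,
    # including the per-entity-type overrides defined at group level.
    if user.max_level is not None:
        return ConfidenceConfig(max_level=user.max_level,
                                overrides=dict(user.overrides))

    # Effective maximum is the highest value found in the user's groups.
    group_max = max((g.max_level for g in groups if g.max_level is not None),
                    default=None)

    # Overrides are cumulated from all groups, keeping the highest value
    # when several groups override the same entity type.
    merged: Dict[str, int] = {}
    for g in groups:
        for entity_type, level in g.overrides.items():
            merged[entity_type] = max(level, merged.get(entity_type, 0))

    # The user's own per-entity-type overrides replace the matching group ones.
    merged.update(user.overrides)
    return ConfidenceConfig(max_level=group_max, overrides=merged)

# Example: two groups, no user-level maximum.
analyst = ConfidenceConfig(overrides={"Report": 80})
print(effective_confidence(analyst, [
    ConfidenceConfig(max_level=60, overrides={"Report": 40}),
    ConfidenceConfig(max_level=70),
]))
# ConfidenceConfig(max_level=70, overrides={'Report': 80})
```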
The following diagram describes the different use-cases you can address with this system.
"},{"location":"usage/reliability-confidence/#how-to-set-a-confidence-level","title":"How to set a confidence level","text":"
You can set up a maximum confidence level from the Confidences tab in the edition panel of your user or group. The value can be selected between 0 and 100, or using the admiralty scale selector.
At the group level, the maximum confidence level is mandatory, but is optional at the user level (you have to enable it using the corresponding toggle button).
"},{"location":"usage/reliability-confidence/#how-to-override-a-max-confidence-level-per-entity-type","title":"How to override a max confidence level per entity type","text":"
You also have the possibility to override the max confidence level per entity type, limited to STIX Domain Objects.
You can visualize the user's effective confidence level in the user's details view by hovering over the corresponding tooltip. It describes where the different values come from.
"},{"location":"usage/reliability-confidence/#usage-in-opencti","title":"Usage in OpenCTI","text":""},{"location":"usage/reliability-confidence/#example-with-the-admiralty-code-template","title":"Example with the admiralty code template","text":"
Your organization has received a report from a CTI provider. At your organization level, this provider is considered reliable most of the time, and its reliability level has been set to "B - Usually Reliable" (your organization uses the Admiralty code).
This report concerns the ransomware threat landscape and has been analysed by your CTI analyst specialized in cybercrime. This analyst has granted a confidence level of "2 - Probably True" to the information.
As a technical analyst, through the combined Reliability and Confidence notations, you now know that the technical elements of this report are probably worth consideration.
"},{"location":"usage/reliability-confidence/#example-with-the-objective-template","title":"Example with the Objective template","text":"
As a CTI analyst in a governmental CSIRT, you build up Knowledge that will be shared within the platform to beneficiaries. Your CSIRT is considered a reliable source by your beneficiaries, even if you play the role of a proxy with other sources, but your beneficiaries need some insight into how the Knowledge has been built and gathered.
For that, you use the "Objective" confidence scale in your platform. When the Knowledge results from your CSIRT's own investigations, either incident response or attack infrastructure investigation, you set the confidence level to "Witnessed", "Deduced" or "Induced" (depending on whether you directly observed the data or inferred it during your research). When the information has not been verified by the CSIRT but has value to be shared with beneficiaries, you can use the "Told" level to make it clear to them that the information is probably valuable but has not been verified.
"},{"location":"usage/search/","title":"Search for knowledge","text":"
In OpenCTI, you have access to different capabilities to search for knowledge in the platform. In most cases, a search by keyword can be refined with additional filters, for instance on the type of object, the author, etc.
The global search is always available in the top bar of the platform.
This search covers all STIX Domain Objects (SDOs) and STIX Cyber Observables (SCOs) in the platform. The search results are sorted according to the following behaviour (a code sketch follows the list):
Priority 1 for exact matching of the keyword in one attribute of the objects.
Priority 2 for partial matching of the keyword in the name, the aliases and the description attributes (full text search).
Priority 3 for partial matching of the keyword in all other attributes (full text search).
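Here is the announced sketch of this three-tier ordering; it is a simplification (OpenCTI delegates scoring to its search engine), and the field handling below is an assumption for illustration:

```python
# Hypothetical sketch of the three-tier ranking described above.
FULL_TEXT_FIELDS = ("name", "aliases", "description")  # priority-2 fields

def priority(obj: dict, keyword: str) -> int:
    kw = keyword.lower()
    values = {key: str(value).lower() for key, value in obj.items()}
    if any(kw == value for value in values.values()):
        return 1  # exact match of the keyword in one attribute
    if any(kw in values.get(name, "") for name in FULL_TEXT_FIELDS):
        return 2  # partial match in name, aliases or description
    if any(kw in value for value in values.values()):
        return 3  # partial match in any other attribute
    return 4      # no match at all

objects = [{"name": "Emotet loader"}, {"name": "Emotet"},
           {"name": "TrickBot", "x_note": "often drops emotet"}]
print(sorted(objects, key=lambda o: priority(o, "emotet")))
# exact match first, then full-text matches, then other-attribute matches
```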
If you get unexpected results, it is always possible to add some filters after the initial search:
Also, using the Advanced search button, it is possible to directly put filters in a global search:
Advanced filters
You have access to advanced filters all across the UI. If you want to know more about how to use these filters with the API or the Python library, don't hesitate to read the dedicated page.
"},{"location":"usage/search/#full-text-search-in-files-content","title":"Full text search in files content","text":"
Enterprise edition
Full text search in files content is available under the \"OpenCTI Enterprise Edition\" license.
Please read the dedicated page for all the information.
It's possible to extend the global search by keywords to the content of documents uploaded to the platform via the Data import tab, or directly linked to an entity via its Data tab.
It is particularly useful to enable Full text indexing to avoid missing important information that may not have been structured within the platform. This situation can arise due to a partial automatic import of document content, limitations of a connector, and, of course, errors during manual processing.
In order to search in files, you need to configure file indexing.
The bulk search capability is available in the top bar of the platform and allows you to copy-paste a list of keywords or objects (i.e. a list of domains, IP addresses, vulnerabilities, etc.) to search in the platform:
When searching in bulk, OpenCTI is only looking for an exact match in some properties:
name
aliases
x_opencti_aliases
x_mitre_id
value
subject
abstract
hashes.MD5
hashes.SHA-1
hashes.SHA-256
hashes.SHA-512
x_opencti_additional_names
When something is not found, it appears in the list as Unknown and will be excluded if you choose to export your search results in a JSON STIX bundle or in a CSV file.
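As a simplified illustration of this exact-match behaviour (a sketch, not the platform's implementation; the helper functions are hypothetical):

```python
# Hypothetical sketch of the bulk search exact-match behaviour described above.
EXACT_MATCH_PROPERTIES = [
    "name", "aliases", "x_opencti_aliases", "x_mitre_id", "value", "subject",
    "abstract", "hashes.MD5", "hashes.SHA-1", "hashes.SHA-256", "hashes.SHA-512",
    "x_opencti_additional_names",
]

def get_property(obj, dotted_path):
    """Resolve a dotted path like 'hashes.MD5' inside a STIX-like dict."""
    for part in dotted_path.split("."):
        obj = obj.get(part) if isinstance(obj, dict) else None
    return obj

def bulk_search(keywords, objects):
    results = {}
    for keyword in keywords:
        hits = []
        for obj in objects:
            for prop in EXACT_MATCH_PROPERTIES:
                value = get_property(obj, prop)
                # exact match only, on scalar values or inside list values
                if value == keyword or (isinstance(value, list) and keyword in value):
                    hits.append(obj)
                    break
        results[keyword] = hits or "Unknown"  # unmatched keywords appear as Unknown
    return results

observables = [{"type": "domain-name", "value": "evil.example.com"}]
print(bulk_search(["evil.example.com", "1.2.3.4"], observables))
# {'evil.example.com': [{...}], '1.2.3.4': 'Unknown'}
```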
Some other screens can contain search bars for specific purposes. For instance, in the graph views to filter the nodes displayed on the graph:
"},{"location":"usage/tips-widget-creation/","title":"Pro-tips on widget creation","text":"
The creation of widgets has been covered previously. To help users be more confident in creating widgets, here are some details to master widget creation.
"},{"location":"usage/tips-widget-creation/#how-to-choose-the-appropriate-widget-visualization-for-your-use-case","title":"How to choose the appropriate widget visualization for your use case?","text":"
Use these widgets when you would like to display information about one single type of object (entity or relation).
Widget visualizations: number, list, list (distribution), timeline, donuts, radar, map, bookmark, tree map.
Use case example: view the number of malware in the platform (number widget), view the top 10 threat actor groups targeting a specific country (distribution list widget), etc.
Use case example: view the number of malware, intrusion sets and threat actor groups added over the course of the last month in the platform (line or area widget).
Type of object in widget
These widgets need to use the same "type" of object to work properly. You always need to add relationships in the filter view if you have selected the "knowledge graph" perspective. If you have selected the knowledge graph perspective, adding "Entities" (click on + Entities) will not work, since you would not be counting the same things.
"},{"location":"usage/tips-widget-creation/#break-down-widgets","title":"Break down widgets","text":"
Use this widget if you want to divide your data set into smaller parts to make it clearer and more useful for analysis.
Widget visualization: horizontal bars.
Use case example: view the list of malware targeting a country, broken down by type of malware.
"},{"location":"usage/tips-widget-creation/#adding-datasets-to-your-widget","title":"Adding datasets to your widget","text":"
Adding datasets can serve two purposes: comparing data, or breaking down a view to gain a deeper understanding of what a specific dataset is composed of.
"},{"location":"usage/tips-widget-creation/#use-case-1-compare-several-datasets","title":"Use Case 1: compare several datasets","text":"
As mentioned in the How to choose the appropriate widget visualization for your use case? section, you can add data sets to compare different data. Make sure to add the same type of objects (entities or relations) to be able to compare the same objects, by using the access buttons +, + Relationships, or + Entities.
You can add up to 5 different data sets. The Label field allows you to name a data set, and this label can then be shown as a legend in the widget using the Display legend button in the widget parameters (see the next section).
"},{"location":"usage/tips-widget-creation/#use-case-2-break-down-your-chart","title":"Use case 2: break down your chart","text":"
As mentioned in the How to choose the appropriate widget visualization for your use case? section, you can add data sets to decompose your graph into smaller meaningful chunks. The points below present some use cases that will help you understand how to structure your data.
You can break down a view either by entity or by relations, depending on what you need to count.
"},{"location":"usage/tips-widget-creation/#break-down-by-entity","title":"Break down by entity","text":"
Use case example: I need to understand which countries are the most targeted by malware, and have a breakdown for each country by malware type.
Process:
To achieve this use case, you first need to select the horizontal bar visualization.
Then you need to select the knowledge graph perspective.
In the filters view:
Input your main query: Source type = Malware AND Target type = Countries AND Relation type = Targets. Add a label to your dataset.
Add an entity data set by using the + Entities access button.
Add the following filters: Entity type = Malware AND In regards of = targets. Add a label to your dataset.
In the parameter view:
Attribute (of your relation) = entity (so that you display the different entities' values)
Display the source toggle = off
Attribute (of your entity malware) = Malware type (since you want to break down your relations by malware type)
As a result, you get a list of countries broken down by malware types.
"},{"location":"usage/tips-widget-creation/#break-down-by-relation","title":"Break down by relation","text":"
Use case example: I need to understand which are the top targeting malware, and have a breakdown of the top targets per malware.
Process:
To achieve this use case, you first need to select the horizontal bar visualization.
Then you need to select the knowledge graph perspective.
In the filters view:
Input your main query: Source type = Malware AND Relation type = Targets. Add a label to your dataset.
Add a relation data set by using the + Relationships access button.
Add the following filters: Source type = Malware AND Relation type = targets. Add a label to your dataset.
In the parameter view:
Attribute (of your relation): entity (so that you display the different entities' values)
Display the source toggle = on
Attribute (of your entity malware) = Malware type (since you want to break down your relations by malware type)
Display the source toggle = off
As a result, you get a list of malware with the breakdown of their top targets.
"},{"location":"usage/tips-widget-creation/#more-use-cases","title":"More use cases","text":"
To see more use cases, feel free to have a look at this blog post, which provides additional information.
Creating widgets on the dashboard involves a four-step configuration process. By navigating through these configuration steps, users can design widgets that meet their specific requirements.
Users can select from 15 diverse visualization options to highlight different aspects of their data. This includes simple views like counters and lists, as well as more intricate views like heatmaps and trees. The chosen visualization impacts the available perspectives and parameters, making it crucial to align the view with the desired data observations. Here are a few insights:
Line and Area views: Ideal for visualizing activity volumes over time.
Horizontal bar views: Designed to identify top entities that best satisfy applied filters (e.g., top malware targeting the Finance sector).
Tree views: Useful for comparing activity volumes.
A perspective is the way the platform will count the data to display in your widgets:
Entities Perspective: Focuses on entities, allowing observation of simple knowledge based on defined filters and criteria. The count will be based on entities only.
Knowledge Graph Perspective: Concentrates on relationships, displaying intricate knowledge derived from relationships between entities and specified filters. The count will be based on relations only.
Activity & History Perspective: Centers on activities within the platform, not the knowledge content. This perspective is valuable for monitoring user and connector activities, evaluating data sources, and more.
Filters vary based on the selected perspective, defining the dataset to be utilized in the widget. Filters are instrumental in narrowing down the scope of data for a more focused analysis.
While filters in the "Entities" and "Activity & History" perspectives align with the platform's familiar search and feed creation filters, the "Knowledge Graph" perspective introduces a more intricate filter configuration. Therefore, it needs to be addressed in more detail.
"},{"location":"usage/widgets/#filter-in-the-context-of-knowledge-graph","title":"Filter in the context of Knowledge Graph","text":"
Two types of filters are available in the Knowledge Graph perspective:
Main query filter
Classic filters (gray): Define the relationships to be retrieved, forming the basis on which the widget displays data. Remember, statistics in the Knowledge Graph perspective are based on relationships.
Pre-query filters
Pre-query filters are used to provide your main query with a specific dataset. In other words, instead of querying the whole data set of your platform, you can target a subset of data matching certain criteria. There are two types of pre-query filters:
Dynamic filters on the source (orange): Refine data by filtering on entities positioned as the source (in the "from" position) of the relationship.
Dynamic filters on the target (green): Refine data by filtering on entities positioned as the target (in the "to" position) of the relationship.
Pre-query limitation
The pre-query is limited to 5000 results. If your pre-query yields more than 5000 results, your widget will only display statistics based on 5000 of the matching results, resulting in a skewed view. To avoid this issue, be specific in your pre-query filters.
Example scenario:
Let's consider an example scenario: Analyzing the initial access attack patterns used by intrusion sets targeting the finance sector.
Classic filters: Define the relationships associated with the use of attack patterns by intrusion sets.
Dynamic filters on the source (Orange): Narrow down the data by filtering on intrusion sets targeting the finance sector.
Dynamic filters on the target (Green): Narrow down the data by filtering on attack patterns associated with the kill chain's initial access phase.
By leveraging these advanced filters, users can conduct detailed analyses within the Knowledge Graph perspective, unlocking insights that are crucial for understanding intricate relationships and statistics.
In certain views, you can access buttons like +, + Relationships, or + Entities. These buttons enable you to incorporate different data into the same widget for comparative analysis. For instance, in a Line view, adding a second set of filters will display two curves in the widget, each corresponding to one of the filtered data sets. Depending on the view, you can work with 1 to 5 sets of filters. The Label field allows you to name a data set, and this label can then be shown as a legend in the widget using the Display legend button in the widget parameters (see the next section).
Parameters depend on the chosen visualization and allow users to define widget titles, choose displayed elements from the filtered data, select data reference date, and configure various other parameters specific to each visualization.
For the \"Knowledge Graph\" perspective, a critical parameter is the Display the source toggle. This feature empowers users to choose whether the widget displays entities from the source side or the target side of the relationships.
Toggle ON (\"Display the source\"): The widget focuses on entities positioned as the source of the relationships (in the \"from\" position).
Toggle OFF (\"Display the target\"): The widget shifts its focus to entities positioned as the target of the relationships (in the \"to\" position).
This level of control ensures that your dashboard aligns precisely with your analytical objectives, offering a tailored perspective based on your data and relationships.
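Putting the pieces together, here is a minimal sketch of how a Knowledge Graph widget could count data once the main-query filter, the pre-query filters and the toggle are applied; the relationship structure and flags below are hypothetical stand-ins for platform filters:

```python
# Hypothetical sketch of Knowledge Graph widget counting: main-query filters
# select relationships, pre-query filters narrow the source/target entities,
# and the "Display the source" toggle picks which side is counted.
relationships = [
    {"type": "uses", "from": {"name": "IS-1", "targets_finance": True},
     "to": {"name": "Phishing", "kill_chain": "initial-access"}},
    {"type": "uses", "from": {"name": "IS-2", "targets_finance": False},
     "to": {"name": "Exploit Public-Facing App", "kill_chain": "initial-access"}},
]

def widget_counts(rels, display_source: bool) -> dict:
    counts = {}
    for rel in rels:
        if rel["type"] != "uses":                        # classic (main query) filter
            continue
        if not rel["from"]["targets_finance"]:           # pre-query filter on source
            continue
        if rel["to"]["kill_chain"] != "initial-access":  # pre-query filter on target
            continue
        side = rel["from"] if display_source else rel["to"]
        counts[side["name"]] = counts.get(side["name"], 0) + 1
    return counts

print(widget_counts(relationships, display_source=False))  # {'Phishing': 1}
```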
To successfully configure widgets in OpenCTI, having a solid understanding of the platform's data modeling is essential. Knowing specific relationships, entities, and their attributes helps refine filters accurately. Let's explore two examples.
Scenario 1:
Consider the scenario where you aim to visualize relationships between intrusion sets and attack patterns. In this case, the relevant relationship type connecting intrusion sets to attack patterns is labeled "Uses" (as illustrated in the "Filters" section).
Scenario 2:
Suppose your goal is to retrieve all reports associated with the finance sector. In this case, it's essential to use the correct filter for the finance sector. Instead of placing the finance sector in the "Related entity" filter, it should be placed in the "Contains" filter. Since a Report is a container object (like Cases and Groupings), it contains entities within it and is not related to entities.
"},{"location":"usage/widgets/#key-data-modeling-aspects","title":"Key data modeling aspects","text":"
Entities: Recognizing containers (e.g. Reports, Cases and Groupings) and understanding the difference with non-containers.
Relationships: Identifying the relationship types connecting entities.
Attributes: Understanding entity and relationship attributes for effective filtering.
Having this prerequisite knowledge allows you to navigate the widget configuration process seamlessly, ensuring accurate and insightful visualizations based on your specific data requirements.
Workbenches serve as dedicated workspaces for manipulating data before it is officially imported into the platform.
"},{"location":"usage/workbench/#location-of-use","title":"Location of use","text":"
The workbenches are located at various places within the platform:
"},{"location":"usage/workbench/#data-import-and-analyst-workbenches-window","title":"Data import and analyst workbenches window","text":"
This window encompasses all the necessary tools for importing a file. Files imported through this interface will subsequently be processed by the import connectors, resulting in the creation of workbenches. Additionally, analysts can manually create a workbench by clicking on the "+" icon at the bottom right of the window.
"},{"location":"usage/workbench/#data-tabs-of-all-entities","title":"Data tabs of all entities","text":"
Workbenches are also accessible through the \"Data\" tabs of entities, providing convenient access to import data associated with the entity.
Workbenches are automatically generated upon the import of a file through an import connector. When an import connector is initiated, it scans files for recognizable entities and subsequently creates a workbench. All identified entities are placed within this workbench for analyst review. Alternatively, analysts have the option to manually create a workbench by clicking on the "+" icon at the bottom right of the "Data import and analyst workbenches" window.
The workbench being a draft space, analysts use it to review connector proposals before finalizing them for import. Within the workbench, analysts have the flexibility to add, delete, or modify entities to meet specific requirements.
Once the content within the workbench is deemed acceptable, the analyst must initiate the ingestion process by clicking on Validate this workbench. This action writes the data into the knowledge base.
Workbenches are drafting spaces
Until the workbench is validated, the contained data remains in draft form and is not recorded in the knowledge base. This ensures that only reviewed and approved data is officially integrated into the platform.
For more information on importing files, refer to the Import from files documentation page.
Confidence level of created knowledge through workbench
The confidence level of knowledge created through a workbench is affected by the confidence level of the user. Please navigate to this page to understand this in more detail.
"},{"location":"usage/workflows/","title":"Workflows and assignation","text":"
Efficiently manage and organize your work within the OpenCTI platform by leveraging workflows and assignment. These capabilities provide a structured approach to tracking the status of objects and assigning responsibilities to users.
Workflows are designed to trace the status of objects in the system. They are represented by the "Processing status" field embedded in each object. By default, this field is disabled for most objects but can be activated through the platform settings. For details on activating and configuring workflows, refer to the dedicated documentation page.
Enabling workflows enhances visibility into the progress and status of different objects, providing a comprehensive view for effective management.
Certain objects, including Reports, Cases, and Tasks, come equipped with "Assignees" and "Participants" attributes. These attributes serve the purpose of designating individuals responsible for the object and those who actively participate in it.
Attributes can be set as mandatory or with default values, streamlining the assignment process. Users can also be assigned or designated as participants manually, contributing to a collaborative and organized workflow. For details on configuring attributes, refer to the dedicated documentation page.
Users can stay informed about assignments through notification triggers. By setting up notification triggers, users receive alerts when an object is assigned to them. This ensures timely communication and proactive engagement with assigned tasks or responsibilities.
"}]}
\ No newline at end of file
+{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"],"fields":{"title":{"boost":1000.0},"text":{"boost":1.0},"tags":{"boost":1000000.0}}},"docs":[{"location":"","title":"OpenCTI Documentation Space","text":"
Welcome to the OpenCTI Documentation space. Here you will be able to find all documents, meeting notes and presentations about the platform.
Release notes
Please be sure to also take a look at the OpenCTI release notes; they may contain important information about releases and deployments.
OpenCTI is an open source platform allowing organizations to manage their cyber threat intelligence knowledge and observables. It has been created in order to structure, store, organize and visualize technical and non-technical information about cyber threats.
Learn how to deploy and configure the platform as well as launch connectors to get the first data in OpenCTI.
Deploy now
User Guide
Understand how to use the platform, explore the knowledge, import and export information, create dashboard, etc.
Explore
Administration
Know how to administrate OpenCTI, create users and groups using RBAC / segregation, put retention policies and custom taxonomies.
Customize
Need more help?
We are doing our best to keep this documentation complete, accurate and up to date.
If you still have questions or you find something which is not sufficiently explained, join the Filigran Community on Slack.
"},{"location":"#latest-blog-posts","title":"Latest blog posts","text":"
All tutorials are published directly on the Medium blog; this section provides a comprehensive list of the most important ones.
Introducing decay rules implementation for Indicators in OpenCTI Mar 25, 2024
Cyber Threat Intelligence is made to be used. To be useful, it must be relevant and on time. It is why managing the lifecycle of Indicators of Compromise...
Read
Introducing advanced filtering possibilities in OpenCTI Feb 5, 2024
CTI databases are usually vast and made of complex, inter-dependent objects ingested from various sources. In this challenging context, cyber analysts need...
Read
Breaking change: evolution of the way Connector, Streams and Feeds import data in OpenCTI Jan 29, 2024
How Connectors, Feeds and Streams use Confidence level currently...
In OpenCTI, CSV Mappers allow you to parse CSV files into STIX 2.1 objects. The mappers are created and configured by users with the Manage CSV mappers capability. They are then available to users who import CSV files, for instance inside a report or in the global import view.
The mapper contains representations of STIX 2.1 entities and relationships, in order for the parser to properly extract them. One mapper is dedicated to parsing a specific CSV file structure, and thus dedicated mappers should be created for every specific CSV structure you might need to ingest in the platform.
"},{"location":"administration/csv-mappers/#create-a-new-csv-mapper","title":"Create a new CSV Mapper","text":"
In the Data menu, select the Processing submenu, then select CSV Mappers in the right menu. You are presented with a list of all the mappers set in the platform. Note that you can delete or update any mapper from the context menu via the burger button beside each mapper.
Click on the button + in the bottom-right corner to add a new Mapper.
Enter a name for your mapper and some basic information about your CSV files:
The separator used between values (defaults to the standard comma character)
The presence of a header on the first line
Header management
The parser will not extract any information from the CSV header, if any; it will just skip the first line during parsing.
Then, you need to create every representation, one per entity and relationship type represented in the CSV file. Click on the + button to add an empty representation in the list, and click on the chevron to expand the section and configure the representation.
Depending on the entity type, the form contains the fields that are either required (input outlined in red) or optional. For each field, set the corresponding column mapping (the letter-based index of the column in the CSV table, as presented in common spreadsheet tools).
References to other entities should be picked from the list of all the other representations already defined earlier in the mapper.
You can do the same for all the relationships between entities that might be defined in this particular CSV file structure.
Fields might have options besides the mandatory column index to help extract relevant data, as illustrated in the sketch after this list:
Date values are expected in ISO 8601 format, but you can set your own format to the time parser
Multiple values can be extracted by specifying the separator used inside the cell (e.g. + or |)
Default values can also be set in case some data is missing in the imported file.
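For illustration only, here is a small Python sketch of how the first two options behave; the date format and the separator are hypothetical examples, not platform defaults:

    from datetime import datetime

    # A custom date format passed to the time parser:
    parsed_date = datetime.strptime("05/02/2024 14:30", "%d/%m/%Y %H:%M")

    # Multiple values extracted by splitting the cell on the configured separator:
    labels = "phishing|malware|apt".split("|")  # -> ['phishing', 'malware', 'apt']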
The only parameter required to save a CSV Mapper is a name. The creation and refinement of its representations can be done iteratively.
Nonetheless, all CSV Mappers go through a quick validation that checks if all the representations have all their mandatory fields set. Only valid mappers can be run by the users on their CSV files.
Mapper validity is visible in the list of CSV Mappers as shown below.
"},{"location":"administration/csv-mappers/#test-your-csv-mapper","title":"Test your CSV mapper","text":"
In the creation or edition form, hit the Test button to open a dialog. Select a sample CSV file and hit the Test button.
The code block contains the raw result of the parsing attempt, in the form of a STIX 2.1 bundle in JSON format.
You can then check if the extracted values match the expected entities and relationships.
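For illustration, a successful test on a CSV row mapped to a Malware representation could produce a bundle similar to the following sketch (all values and IDs are hypothetical placeholders, not actual platform output):

    # Hypothetical sketch of a test result: a minimal STIX 2.1 bundle built
    # from one CSV row mapped to a Malware representation (IDs are placeholders).
    bundle = {
        "type": "bundle",
        "id": "bundle--<generated-uuid>",
        "objects": [
            {
                "type": "malware",
                "id": "malware--<generated-uuid>",
                "name": "Agent Tesla",  # value extracted from the mapped column
                "is_family": True,
            }
        ],
    }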
Partial test
The test conducted in this window relies only on the translation of CSV data according to the chosen representation in the mapper. It does not take into account checks for accurate entity formatting (e.g. IPv4) or specific entity configurations (e.g. mandatory \"description\" field on reports). Consequently, the entities visible in the test window may not be created during the actual import process.
Test with a small file
We strongly recommend limiting test files to 100 lines and 1MB. Otherwise, the browser may crash.
"},{"location":"administration/csv-mappers/#use-a-mapper-for-importing-a-csv-file","title":"Use a mapper for importing a CSV file","text":"
You can change the default configuration of the import csv connector in your configuration file.
In the Data import section, or in the Data tab of an entity, when you upload a CSV file, you can select a mapper to apply to the file. The file will then be parsed following the representation rules set in the mapper.
By default, the imported elements will be added to a new Analyst Workbench where you will be able to check the result of the import.
"},{"location":"administration/csv-mappers/#default-values-for-attributes","title":"Default values for attributes","text":"
In case the CSV file misses some data, you can complete it with default values. To achieve this, you have two possibilities:
Set default values in the settings of the entities,
Set default values directly in the CSV mapper.
"},{"location":"administration/csv-mappers/#set-default-values-in-the-settings-of-the-entities","title":"Set default values in the settings of the entities","text":"
Default value mechanisms
Note that adding default values in settings has an impact on entity creation globally on the platform, not only on CSV mappers. If you want to apply those default values only at the CSV mapper level, please use the second option.
In Settings > Customization, you can select an entity type and then set default values for its attributes.
In the configuration of the entity, you have access to the entity's attributes that can be managed.
Click on the attribute to add default value information.
Enter the default value in the input and save the update.
The value provided will be used when the CSV file lacks data for this attribute.
"},{"location":"administration/csv-mappers/#set-specific-default-values-directly-in-the-csv-mapper","title":"Set specific default values directly in the CSV mapper","text":"
Information retained in case of default value
If you set a default value both in the entity settings and in the CSV mapper, the one from the CSV mapper will be used.
In the mapper form, you will see next to the column index input a gear icon to add extra information for the attribute. If the attribute can have a customizable default value, then you will be able to set one here.
The example above shows the case of the malware attribute architecture implementation. Here, a default value is already set in the entity settings for this attribute, with the value [powerpc, x86]. However, we want to override this value with another one for our case: [alpha].
"},{"location":"administration/csv-mappers/#specific-case-of-marking-definitions","title":"Specific case of marking definitions","text":"
For marking definitions, setting a default value is different from other attributes. We are not choosing a particular marking definition to use if none is specified in the CSV file. Instead, we choose a default policy. Two options are available:
Use the default marking definitions of the user. In this case the default marking definitions of the connected user importing the CSV file will be used,
Let the user choose marking definitions. Here the user importing the CSV file will choose marking definitions (among the ones they can see) when selecting the CSV mapper.
Decay rules can be configured in the \"Settings > Customization > Decay rule\" menu.
There are built-in decay rules that can't be modified and are applied by default to indicators depending on their main observable type. Decay rules are applied from highest to lowest order (the lowest being 0).
You can create new decay rules with higher order to apply them along with (or instead of) the built-in rules.
When you create a decay rule, you can specify on which indicators' main observable types it will apply. If you don't enter any, it will apply to all indicators.
You can also add reaction points which represent the scores at which indicators are updated. For example, if you add one reaction point at 60 and another one at 40, indicators that have an initial score of 80 will be updated with a score of 60, then 40, depending on the decay curve.
The decay curve is based on two parameters (a simplified sketch follows this list):
the decay factor, which represents the speed at which the score falls, and
the lifetime, which represents the time (in days) during which the value will be lowered until it reaches 0.
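As an illustration only, here is a minimal Python sketch of a polynomial decay curve driven by these two parameters; it is an assumption for explanatory purposes, not the exact formula implemented by the platform:

    # Illustrative sketch, not OpenCTI's exact implementation: the score falls
    # from its initial value to 0 over `lifetime_days` days, and `decay_factor`
    # controls how fast it drops at the beginning of the curve.
    def decayed_score(initial_score, days_elapsed, decay_factor, lifetime_days):
        ratio = min(max(days_elapsed / lifetime_days, 0.0), 1.0)
        return initial_score * (1.0 - ratio ** (1.0 / decay_factor))

    # With an initial score of 80, reaction points at 60 and 40 are crossed
    # one after the other as the computed score decays over time.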
Finally, the revoke score is the score at which the indicator can be revoked automatically.
Once you have created a new decay rule, you will be able to view its details, along with a life curve graph showing the score evolution over time.
You will also be able to edit your rule, change all its parameters and order, activate or deactivate it (only activated rules are applied), or delete it.
Indicator decay manager
Decay rules are only applied, and indicators' scores updated, if the indicator decay manager is enabled (it is enabled by default).
Please read the dedicated page for more information.
Filigran provides an Enterprise Edition of the platform, whether on-premises or as SaaS.
"},{"location":"administration/enterprise/#what-is-opencti-ee","title":"What is OpenCTI EE?","text":"
OpenCTI Enterprise Edition is based on the open core concept. This means that the source code of OCTI EE remains open source and included in the main GitHub repository of the platform but is published under a specific license. As specified in the GitHub license file:
The OpenCTI Community Edition is licensed under the Apache License, Version 2.0 (the "Apache License").
The OpenCTI Enterprise Edition is licensed under the OpenCTI Enterprise Edition License (the "Enterprise Edition License").
The source files in this repository have a header indicating which license they are under. If no such header is provided, this means that the file belongs to the Community Edition under the Apache License, Version 2.0.
We wrote a complete article to explain the Enterprise Edition; feel free to read it for more information.
Audit logs help you answer "who did what, where, and when?" within your data with the maximum level of transparency. Please read the Activity monitoring page for more information.
"},{"location":"administration/enterprise/#playbooks-and-automation","title":"Playbooks and automation","text":"
OpenCTI playbooks are flexible automation scenarios which can be fully customized and enabled by platform administrators to enrich, filter and modify the data created or updated in the platform. Please read the Playbook automation page for more information.
"},{"location":"administration/enterprise/#organizations-management-and-segregation","title":"Organizations management and segregation","text":"
Organizations segregation is a way to segregate your data based on the organization associated with the users. It is useful when your platform aims to share data with multiple organizations that have access to the same OpenCTI platform. Please read Organizations RBAC for more information.
"},{"location":"administration/enterprise/#full-text-indexing","title":"Full text indexing","text":"
Full text indexing grants improved searches across structured and unstructured data. OpenCTI classic searches are based on metadata fields (e.g. title, description, type), while the advanced indexing capability extends searches to documents' contents. Please read File indexing for more information.
"},{"location":"administration/enterprise/#more-to-come","title":"More to come","text":"
More features will be available in OpenCTI in the future, such as:
Generative AI for correlation and content generation.
Supervised machine learning for natural language processing.
A variety of entity customization options are available to optimize data representation, workflow management, and enhance overall user experience. Whether you're fine-tuning processing statuses, configuring entities' attributes, or hiding entities, OpenCTI's customization capabilities provide the flexibility you need to create a tailored environment for your threat intelligence and cybersecurity workflows.
The following chapter aims to provide readers with an understanding of the available customization options by entity type. Entity customization can be done in "Settings > Customization".
"},{"location":"administration/entities/#hidden-in-interface","title":"Hidden in interface","text":"
This configuration allows you to hide a specific entity type throughout the entire platform. It provides a potent means to simplify the interface and tailor it to your domain expertise. For instance, if you have no interest in disinformation campaigns, you can conceal related entities such as Narratives and Channels from the menus.
You can specify which entities to hide on a platform-wide basis from \"Settings > Customization\" and from \"Settings > Parameters\", providing you with a list of hidden entities. Furthermore, you can designate hidden entities for specific Groups and Organizations from \"Settings > Security > Groups/Organizations\" by editing a Group/Organization.
An overview of hidden entity types is available in the \"Hidden entity types\" field in \"Settings > Parameters.\"
"},{"location":"administration/entities/#automatic-references-at-file-upload","title":"Automatic references at file upload","text":"
This configuration enables an entity to automatically construct an external reference from the uploaded file.
This configuration makes a reference message required when an entity is created or modified. This option is helpful if you want to keep strong consistency and traceability of your knowledge and is well suited for manual creation and updates.
For now, OpenCTI has a simple workflow approach. Workflows are represented by the "Processing status" field embedded in each object. By default, this field is disabled for most objects but can be activated through the platform settings:
Click on the small pink pen icon next to \"Workflow\" to access the object customization window.
Add and configure the desired statuses, defining their order within the workflow.
In addition, the available statuses are defined by a collection of status templates visible in \"Settings > Taxonomies > Status templates\". This collection can be customized.
The confidence scale can be customized for each entity type by selecting another scale template or by editing the scale values directly. Once you have customized your scale, click on "Update" to save your configuration.
Max confidence level
The above scale also needs to take into account the confidence level per user. To understand the concept, please navigate to this page.
Platform segregation by organization is available under the \"OpenCTI Enterprise Edition\" license. Please read the dedicated page to have all the information.
File indexing can be configured via the File indexing tab in the Settings menu.
The configuration and impact panel shows all file types that can be indexed, as well as the volume of storage used.
It is also possible to include or exclude files uploaded from the global Data import panel that are not associated with a specific entity in the platform.
Finally, it is possible to set a maximum file size for indexing (5 MB by default).
This guide aims to give you a full overview of the OpenCTI features and workflows. The platform can be used in various contexts to handle threat management use cases from a technical to a more strategic level.
The OpenCTI Administrative settings console allows administrators to configure many options dynamically within the system. As an Administrator, you can access this settings console by clicking the settings link.
The Settings Console allows for configuration of various aspects of the system.
This section will show configured and enabled/disabled strategies. The configuration is done in the config/default.json file or via ENV variables detected at launch.
Platform Login Message (optional) - if configured this will be displayed on the login page. This is usually used to have a welcome type message for users before login.
Platform Consent Message (optional) - if configured this will be displayed on the login page. This is usually used to display some type of consent message for users to agree to before login. If enabled, a user must check the checkbox displayed to allow login.
Platform Consent Confirm Text (optional) - This is displayed next to the platform consent checkbox, if Platform Consent Message is configured. Users must agree to the checkbox before the login prompt will be displayed. This message can be configured, but by default reads: I have read and comply with the above statement
"},{"location":"administration/introduction/#dark-theme-color-scheme","title":"Dark Theme Color Scheme","text":"
Various aspects of the Dark Theme can be dynamically configured in this section.
"},{"location":"administration/introduction/#light-theme-color-scheme","title":"Light Theme Color Scheme","text":"
Various aspects of the Light Theme can be dynamically configured in this section.
Within the OpenCTI platform, the merge capability is present in the "Data > Entities" tab and is fairly straightforward to use. To execute a merge, select the set of entities to be merged, then click on the Merge icon.
Merging limitation
It is not possible to merge entities of different types, nor is it possible to merge more than 4 entities at a time (it will have to be done in several stages).
Central to the merging process is the selection of a main entity. This primary entity becomes the anchor, retaining crucial attributes such as name and description. Other entities, while losing specific fields like descriptions, are aliased under the primary entity. This strategic decision preserves vital data while eliminating redundancy.
Once the choice has been made, simply validate to run the task in the background. Depending on the number of entity relationships and the current workload on the platform, the merge may take more or less time. On a healthy platform with around a hundred relationships per entity, the merge is almost instantaneous.
"},{"location":"administration/merging/#data-preservation-and-relationship-continuity","title":"Data preservation and relationship continuity","text":"
A common concern when merging entities lies in the potential loss of information. In the context of OpenCTI, this worry is alleviated. Even if the merged entities were initially created by distinct sources, the platform ensures that data is not lost. Upon merging, the platform automatically generates relationships directly on the merged entity. This strategic approach ensures that all connections, regardless of their origin, are anchored to the consolidated entity. Post-merge, OpenCTI treats these once-separate entities as a singular, unified entity. Subsequent information from varied sources is channeled directly into the entity resulting from the merger. This unified entity becomes the focal point for all future relationships, ensuring the continuity of data and relationships without any loss or fragmentation.
Irreversible process: It's essential to know that a merge operation is irreversible. Once completed, the merged entities cannot be reverted to their original state. Consequently, careful consideration and validation are crucial before initiating the merge process.
Loss of fields in aliased entities: Fields, such as descriptions, in aliased entities - entities that have not been chosen as the main - will be lost during the merge. Ensuring that essential information is captured in the primary entity is crucial to prevent data loss.
Usefulness: To understand the benefits of entity merger, refer to the Merge objects page in the User Guide section of the documentation.
Deduplication mechanism: the platform is equipped with deduplication processes that automatically merge data at creation (either manually or by importing data from different sources) if it meets certain conditions.
"},{"location":"administration/notifier-samples/","title":"Notifier samples","text":""},{"location":"administration/notifier-samples/#configure-teams-webhook","title":"Configure Teams webhook","text":"
To configure a notifier for Teams, allowing you to send notifications via Teams messages, we followed the guidelines outlined in the Microsoft documentation.
"},{"location":"administration/notifier-samples/#template-message-for-live-trigger","title":"Template message for live trigger","text":"
The Teams template message sent through the webhook for a live notification follows the Teams MessageCard format.
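As a rough, hypothetical illustration (not the verbatim built-in template, which is richer and uses the notification's template variables), such a payload could be posted like this:

    import requests

    # Hypothetical minimal Teams MessageCard payload.
    payload = {
        "@type": "MessageCard",
        "@context": "https://schema.org/extensions",
        "summary": "OpenCTI live notification",
        "sections": [{
            "activityTitle": "New live notification",
            "text": "An object matching your trigger was created.",
        }],
    }
    requests.post("https://example.webhook.office.com/webhookb2/<id>", json=payload)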
Leveraging the platform's built-in connectors, users can create custom notifiers tailored to their unique needs. OpenCTI features three built-in connectors: a webhook connector, a simple mailer connector, and a platform mailer connector. These connectors operate based on registered schemas that describe their interaction methods.
This notifier connector enables users to send notifications to external applications or services through HTTP requests. Users can specify the following, combined into a single request as sketched after this list:
Verb: Specifies the HTTP method (GET, POST, PUT, DELETE).
URL: Defines the destination URL for the webhook.
Template: Specifies the message template for the notification.
Parameters and Headers: Customizable parameters and headers sent through the webhook request.
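Conceptually, these settings combine into one HTTP call. The sketch below is an illustration only, not the connector's actual code, and all values are hypothetical examples:

    import requests

    def send_webhook(verb, url, rendered_template, params=None, headers=None):
        # The platform renders the notification template before sending;
        # here the already-rendered body is passed through unchanged.
        return requests.request(verb, url, params=params, headers=headers,
                                data=rendered_template)

    send_webhook("POST", "https://hooks.example.com/opencti",
                 '{"message": "New report created"}',
                 headers={"Content-Type": "application/json"})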
OpenCTI provides two notifier samples by default, designed to communicate with Microsoft Teams through a webhook. A documentation page providing details on these samples is available.
"},{"location":"administration/notifiers/#configuration-and-access","title":"Configuration and access","text":"
Custom notifiers are manageable in the \"Settings > Customization > Notifiers\" window and can be restricted through Role-Based Access Control (RBAC). Administrators can control access, limiting usage to specific Users, Groups, or Organizations.
For guidance on configuring notification triggers and exploring the usages of notifiers, refer to the dedicated documentation page.
Taxonomies in OpenCTI refer to the structured classification systems that help in organizing and categorizing cyber threat intelligence data. They play a crucial role in the platform by allowing analysts to systematically tag and retrieve information based on predefined categories and terms.
Along with the Customization page, these pages allow the administrator to customize the platform.
Labels in OpenCTI serve as a powerful tool for organizing, categorizing, and prioritizing data. Here's how they can be used effectively:
Tagging and Categorization: Labels can be used to tag malware, incidents, or indicators (IOCs) with specific categories, making it easier to filter and search through large datasets.
Prioritization: By labeling threats based on their severity or impact, security analysts can prioritize their response efforts accordingly.
Correlation and Analysis: Labels help in correlating different pieces of intelligence. For example, if multiple indicators are tagged with the same label, it might indicate a larger campaign or a common adversary.
Automation and Integration: Labels can trigger automated workflows (also called playbooks) within OpenCTI. For instance, a label might automatically initiate further investigation or escalate an incident.
Reporting and Metrics: Labels facilitate the generation of reports and metrics, allowing organizations to track trends through dashboards, measure response effectiveness, and make data-driven decisions.
Sharing and Collaboration: When sharing intelligence with other organizations or platforms, labels provide a common language that helps in understanding the context and relevance of the shared data.
Tip
In order to achieve effective data labeling, it is recommended to establish clear and consistent criteria for your labeling and to document them in a policy or guideline.
Kill chain phases are used in OpenCTI to structure and analyze the data related to cyber threats and attacks. They describe the stages of an attack from the perspective of the attacker and provide a framework for identifying, analysing and responding to threats.
OpenCTI supports the following kill chain models:
Lockheed Martin Cyber Kill Chain
MITRE ATT&CK Framework (Enterprise, PRE, Mobile and ICS)
DISARM framework
You can add, edit, or delete kill chain phases in the settings page, and assign them to indicators, attack patterns, incidents, or courses of action in the platform. You can also filter the data by kill chain phase, and view the kill chain phases in a timeline or as a matrix.
Open vocabularies are sets of terms and definitions agreed upon by the CTI community. They help to standardize the communication and documentation of cyber threat information. This section allows you to customize a set of available fields by adding vocabulary. Almost all of the drop-down menus available in the entities can be modified from this panel.
Open vocabularies in OpenCTI are mainly based on the STIX standard.
Status templates are predefined statuses that can be assigned to different entities in OpenCTI, such as reports, incidents, or cases (incident responses, requests for information and requests for takedown).
They help to track the progress of the analysis and response activities by defining statuses that are used in the workflows.
Platform segregation by organization is available under the \"OpenCTI Enterprise Edition\" license. Please read the dedicated page to have all the information.
Platform administrators can promote members of an organization as \"Organization administrator\". This elevated role grants them the necessary capabilities to create, edit and delete users from the corresponding Organization. Additionally, administrators have the flexibility to define a list of groups that can be granted to newly created members by the organization administrators. This feature simplifies the process of granting appropriate access and privileges to individuals joining the organization.
The platform administrator can promote/demote an organization admin through its user edition form.
Organization admin rights
The \"Organization admin\" has restricted access to Settings. They can only manage the members of the organizations for which they have been promoted as \"admins\".
This section allows the administrator to edit the following settings:
Platform title
Platform favicon URL
Sender email address: email address displayed as sender when sending notifications. The technical sender is defined in the SMTP configuration.
Theme
Language
Hidden entity types: allows you to customize which types of entities you want to see or hide in the platform. This can help you focus on the relevant information and avoid cluttering the platform with unnecessary data.
This is where the Enterprise edition can be enabled.
This section gives important information about the platform, such as the version used, the edition, the architecture mode (Standalone or Cluster) and the number of nodes used.
Through the \"Remove Filigran logos\" toggle, the administrator has the option to hide the Filigran logo on the login page and the sidebar.
This section gives you the possibility to set and display Announcements in the platform. Those announcements will be visible to every user in the platform, on top of the interface.
They can be used to inform some or all of your users of important information, like a scheduled downtime, an upcoming upgrade, or even to share important tips regarding the usage of the platform.
An Announcement can be accompanied by a "Dismiss" button. When clicked by a user, it makes the message disappear for this user.
This option can be deactivated to have a permanent announcement.
⚠️ Only one announcement is shown at a time, with priority given to dismissible ones. If there are no dismissible announcements, the most recent non-dismissible one is shown.
This section informs the administrator of the statuses of the different managers used in the platform. More information about the managers can be found here. It also shows the versions used for the search engine database, RabbitMQ and Redis.
In cluster mode, the fact that a manager appears as enabled means that it is active in at least one node.
The Policies configuration window (in \"Settings > Security > Policies\") encompasses essential settings that govern the organizational sharing, authentication strategies, password policies, login messages, and banner appearance within the OpenCTI platform.
"},{"location":"administration/policies/#platform-main-organization","title":"Platform main organization","text":"
Allows setting a main organization for the entire platform. Users belonging to the main organization enjoy unrestricted access to all data stored in the platform. In contrast, users affiliated with other organizations will only have visibility into data explicitly shared with them.
Numerous repercussions linked to the activation of this feature
This feature has implications for the entire platform and must be fully understood before being used. For example, it is mandatory to have organizations set up for each user, otherwise they won't be able to log in. It is also advisable to include connectors' users in the platform main organization to avoid import problems.
The authentication strategies section provides insights into the configured authentication methods. Additionally, an \"Enforce Two-Factor Authentication\" button is available, allowing administrators to mandate 2FA activation for users, enhancing overall account security.
Please see the Authentication section for further details on available authentication strategies.
This section encompasses a comprehensive set of parameters defining the local password policy. Administrators can specify requirements such as minimum/maximum number of characters, symbols, digits, and more to ensure robust password security across the platform. Here are all the parameters available:
Number of chars must be greater than or equals to: define the minimum length required for passwords.
Number of chars must be lower or equals to (0 equals no maximum): set an upper limit for password length.
Number of symbols must be greater or equals to: specify the minimum number of symbols required in a password.
Number of digits must be greater or equals to: set the minimum number of numeric characters in a password.
Number of words (split on hyphen, space) must be greater or equals to: enforce a minimum count of words in a password.
Number of lowercase chars must be greater or equals to: specify the minimum number of lowercase characters.
Number of uppercase chars must be greater or equals to: specify the minimum number of uppercase characters.
Login messages
Allows defining messages on the login page to customize and highlight your platform's security policy. Three distinct messages can be customized:
Platform login message: Appears above the login form to convey important information or announcements.
Platform consent message: A consent message that obscures the login form until users check the approval box, ensuring informed user consent.
Platform consent confirm text: A message accompanying the consent box, providing clarity on the consent confirmation process.
The platform banner configuration section allows administrators to display a custom banner message at the top and bottom of the screen. This feature enables customization for enhanced visual communication and branding within the OpenCTI platform. It can be used to add a disclaimer or system purpose.
This configuration has two parameters:
Platform banner level: Options defining the banner background color (Green, Red, or Yellow).
Platform banner text: Field referencing the message to be displayed within the banner.
The rules engine comprises a set of predefined rules (named inference rules) that govern how new relationships are inferred based on existing data. These rules are carefully crafted to ensure logical and accurate relationship creation. Here is the list of existing inference rules:
"},{"location":"administration/reasoning/#raise-incident-based-on-sighting","title":"Raise incident based on sighting","text":"Conditions Creations A non-revoked Indicator is sighted in an Entity Creation of an Incident linked to the sighted Indicator and the targeted Entity"},{"location":"administration/reasoning/#sightings-of-observables-via-observed-data","title":"Sightings of observables via observed data","text":"Conditions Creations An Indicator is based on an Observable contained in an Observed Data Creation of a sighting between the Indicator and the creating Identity of the Observed Data"},{"location":"administration/reasoning/#sightings-propagation-from-indicator","title":"Sightings propagation from indicator","text":"Conditions Creations An Indicator based on an Observable is sighted in an Entity The Observable is sighted in the Entity"},{"location":"administration/reasoning/#sightings-propagation-from-observable","title":"Sightings propagation from observable","text":"Conditions Creations An Indicator is based on an Observable sighted in an Entity The Indicator is sighted in the Entity"},{"location":"administration/reasoning/#relation-propagation-via-an-observable","title":"Relation propagation via an observable","text":"Conditions Creations An observable is related to two Entities Create a related to relationship between the two Entities"},{"location":"administration/reasoning/#attribution-propagation","title":"Attribution propagation","text":"Conditions Creations An Entity A is attributed to an Entity B and this Entity B is itself attributed to an Entity C The Entity A is attributed to Entity C"},{"location":"administration/reasoning/#belonging-propagation","title":"Belonging propagation","text":"Conditions Creations An Entity A is part of an Entity B and this Entity B is itself part of an Entity C The Entity A is part of Entity C"},{"location":"administration/reasoning/#location-propagation","title":"Location propagation","text":"Conditions Creations A Location A is located at a Location B and this Location B is itself located at a Location C The Location A is located at Location C"},{"location":"administration/reasoning/#organization-propagation-via-participation","title":"Organization propagation via participation","text":"Conditions Creations A User is affiliated with an Organization B, which is part of an Organization C The User is affiliated to the Organization C"},{"location":"administration/reasoning/#identities-propagation-in-reports","title":"Identities propagation in reports","text":"Conditions Creations A Report contains an Identity B and this Identity B is part of an Identity C The Report contains Identity C, as well as the Relationship between Identity B and Identity C"},{"location":"administration/reasoning/#locations-propagation-in-reports","title":"Locations propagation in reports","text":"Conditions Creations A Report contains a Location B and this Location B is located at a Location C The Report contains Location B, as well as the Relationship between Location B and Location C"},{"location":"administration/reasoning/#observables-propagation-in-reports","title":"Observables propagation in reports","text":"Conditions Creations A Report contains an Indicator and this Indicator is based on an Observable The Report contains the Observable, as well as the Relationship between the Indicator and the Observable"},{"location":"administration/reasoning/#usage-propagation-via-attribution","title":"Usage propagation via attribution","text":"Conditions Creations An 
Entity A, attributed to an Entity C, uses an Entity B The Entity C uses the Entity B"},{"location":"administration/reasoning/#inference-of-targeting-via-a-sighting","title":"Inference of targeting via a sighting","text":"Conditions Creations An Indicator, sighted at an Entity C, indicates an Entity B The Entity B targets the Entity C"},{"location":"administration/reasoning/#targeting-propagation-via-attribution","title":"Targeting propagation via attribution","text":"Conditions Creations An Entity A, attributed to an Entity C, targets an Entity B The Entity C targets the Entity B"},{"location":"administration/reasoning/#targeting-propagation-via-belonging","title":"Targeting propagation via belonging","text":"Conditions Creations An Entity A targets an Identity B, part of an Identity C The Entity A targets the Identity C"},{"location":"administration/reasoning/#targeting-propagation-via-location","title":"Targeting propagation via location","text":"Conditions Creations An Entity targets a Location B and this Location B is located at a Location C The Entity targets the Location C"},{"location":"administration/reasoning/#targeting-propagation-when-located","title":"Targeting propagation when located","text":"Conditions Creations An Entity A targets an Entity B and this target is located at Location D. The Entity A targets the Location D"},{"location":"administration/reasoning/#rule-execution","title":"Rule execution","text":""},{"location":"administration/reasoning/#rule-activation","title":"Rule activation","text":"
When a rule is activated, a background task is initiated. This task scans all platform data, identifying existing relationships that meet the conditions defined by the rule. Subsequently, it creates new objects (entities and/or relationships), expanding the network of insights within your threat intelligence environment. Then, activated rules operate continuously. Whenever a relationship is created or modified, and this change aligns with the conditions specified in an active rule, the reasoning mechanism is triggered. This ensures real-time relationship inference.
Deactivating a rule leads to the deletion of all objects and relationships created by it. This cleanup process maintains the accuracy and reliability of your threat intelligence database.
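To make the mechanism concrete, here is a minimal Python sketch, not the platform's implementation, of how a rule like "Attribution propagation" infers new relationships from existing ones:

    # Minimal sketch of "Attribution propagation": if A is attributed to B and
    # B is attributed to C, infer that A is attributed to C.
    def infer_attributions(attributed_to):
        inferred = set()
        for (a, b) in attributed_to:
            for (b2, c) in attributed_to:
                if b == b2 and a != c and (a, c) not in attributed_to:
                    inferred.add((a, c))
        return inferred

    rels = {("Intrusion-Set-A", "Group-B"), ("Group-B", "Actor-C")}
    print(infer_attributions(rels))  # {('Intrusion-Set-A', 'Actor-C')}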
"},{"location":"administration/reasoning/#access-restrictions-and-data-impact","title":"Access restrictions and data impact","text":"
Access to the rule engine panel is restricted to administrators only. Regular users do not have visibility into this section of the platform. Administrators possess the authority to activate or deactivate rules.
The rules engine empowers OpenCTI with the capability to automatically establish intricate relationships within your data. However, these rules can lead to a very large number of created objects. Even if the operation is reversible, an administrator should consider the impact of activating a rule.
Usefulness: To understand the benefits and results of these rules, refer to the Inferences and reasoning page in the User Guide section of the documentation.
New inference rule: Given the potential impact of a rule on your data, users are not allowed to add new rules. However, users can submit rule suggestions via a GitHub issue for evaluation. These suggestions are carefully evaluated by our team, ensuring continuous improvement and innovation.
Retention rules serve the purpose of establishing data retention times, specifying when data should be automatically deleted from the platform. Users can define filters to target specific objects. Any object meeting these criteria that hasn't been updated within the designated time frame will be permanently deleted.
Note that the data deleted by an active retention policy will not appear in the trash and thus cannot be restored.
Before activating a retention rule, users have the option to verify its impact using the \"Verify\" button. This action provides insight into the number of objects that currently match the rule's criteria and would be deleted if the rule is activated.
Verify before activation
Always use the \"Verify\" feature to assess the potential impact of a retention rule before activating it. Once the rule is activated, data deletion will begin, and retrieval of the deleted data will not be possible.
Retention rules contribute to maintaining a streamlined and efficient data lifecycle within OpenCTI, ensuring that outdated or irrelevant information is systematically removed from the platform, thereby optimizing disk space usage.
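The eligibility logic described above can be sketched as follows; this is a hypothetical helper for explanation, not the platform's code:

    from datetime import datetime, timedelta, timezone

    # An object is eligible for deletion when it matches the rule's filters
    # and has not been updated within the retention window.
    def eligible_for_deletion(obj, max_retention_days, matches_filters):
        cutoff = datetime.now(timezone.utc) - timedelta(days=max_retention_days)
        return matches_filters(obj) and obj["updated_at"] < cutoff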
Data segregation in the context of Cyber Threat Intelligence refers to the practice of categorizing and separating different types of data or information related to cybersecurity threats based on specific criteria.
This separation helps organizations manage and analyze threat intelligence more effectively and securely. The goal of data segregation is to ensure that only those individuals who are authorized to view a particular set of data have access to that set of data.
Practically, \"Need-to-know basis\" and \"classification level\" are data segregation measures.
Marking definitions are essential in the context of data segregation to ensure that data is appropriately categorized and protected based on its sensitivity or classification level. Marking definitions establish a standardized framework for classifying data.
Marking Definition objects are unique among STIX objects in the STIX 2.1 standard in that they cannot be versioned. This restriction is in place to prevent the possibility of indirect alterations to the markings associated with a STIX Object.
Multiple markings can be added to the same object. Certain categories of marking definitions or trust groups may enforce rules that specify which markings take precedence over others or how some markings can be added to complement existing ones.
In OpenCTI, data is segregated based on knowledge marking. The diagram provided below illustrates the manner in which OpenCTI establishes connections between pieces of information to authorize data access for a user:
"},{"location":"administration/segregation/#manage-markings","title":"Manage markings","text":""},{"location":"administration/segregation/#create-new-markings","title":"Create new markings","text":"
To create a marking, you must first possess the capability Manage marking definitions. For further information on user administration, please refer to the Users and Role Based Access Control page.
Once you have access to the settings, navigate to \"Settings > Security > Marking Definitions\" to create a new marking.
A marking consists of the following attributes:
Type: Specifies the marking group to which it belongs.
Definition: The name assigned to the marking.
Color: The color associated with the marking.
Order: Determines the hierarchical order among markings of the same type.
The configuration of authorized markings for a user is determined at the Group level. To access entities and relationships associated with specific markings, the user must belong to a group that has been granted access to those markings.
There are two ways in which markings can be accessed:
The user is a member of a group that has been granted access to the marking.
The user is a member of a group that has access to a marking of the same type, with an equal or higher hierarchical order.
Access to an object with several markings
Access to all markings attached to an object is required in order to access it (not only one).
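Taken together, the access rules above can be sketched in a few lines of Python; this is an illustration of the described logic, not the platform's code:

    # A user can access an object only if every marking on the object is
    # covered by a marking allowed to the user's groups, where "covered"
    # means same type and an equal or higher order.
    def can_access(allowed_markings, object_markings):
        def covered(marking):
            return any(m["type"] == marking["type"] and m["order"] >= marking["order"]
                       for m in allowed_markings)
        return all(covered(m) for m in object_markings)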
Automatically grant access to the new marking
To allow a group to automatically access a newly created marking definition, you can check Automatically authorize this group to new marking definition.
To apply a default marking when creating a new entity or relationship, you can choose which marking to add by default from the list of allowed markings. You can add only one marking per type, but you can have multiple types. This configuration is also done at the Group level.
Need a configuration change
Simply adding markings as default markings is insufficient to display the markings when creating an entity or relationship. You also need to enable default markings in the customization settings of an entity or relationship. For example, to enable default markings for a new report, navigate to "Settings > Customization > Report > Markings" and toggle the option to Activate/Deactivate default values.
This configuration allows defining, for each type of marking definition, up to which level data is allowed to be shared externally (via Public dashboard or file export).
The marking definitions that can be shared by a group are the ones that are allowed for this group and whose order is lower than or equal to the order of the maximum shareable markings defined for each marking type.
Users with the Bypass capability can share all the markings.
By default, every marking of a given marking type is shareable.
For example, in the capture below, for the TLP marking type, only data with a marking definition that is allowed and has a level equal to or below GREEN will be shareable. No data with a statement marking definition will be shared at all.
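The shareability check can be sketched like this; a hypothetical helper to illustrate the rule, not the platform's code:

    # A marking is shareable when it is allowed for the group and its order
    # does not exceed the maximum shareable order configured for its type.
    # By default (no configured maximum), every marking of a type is shareable.
    def is_shareable(marking, allowed_markings, max_shareable_order):
        limit = max_shareable_order.get(marking["type"], marking["order"])
        return marking in allowed_markings and marking["order"] <= limit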
"},{"location":"administration/segregation/#management-of-multiple-markings","title":"Management of multiple markings","text":"
In scenarios where multiple markings of the same type but different orders are added, the platform will retain only the marking with the highest order for each type. This consolidation can occur in various instances:
During entity creation, if multiple markings are selected.
During entity updates, whether manually or via a connector, if additional markings are introduced.
When multiple entities are merged, their respective markings will be amalgamated.
For example:
Create a new report and add markings PAP:AMBER, PAP:RED, TLP:AMBER+STRICT, TLP:CLEAR and a statement CC-BY-SA-4.0 DISARM Foundation
The final markings kept are: PAP:RED, TLP:AMBER+STRICT and CC-BY-SA-4.0 DISARM Foundation
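The consolidation rule can be sketched as follows; a simplification for ordered marking types such as TLP and PAP, not the platform's code:

    # For each marking type, keep only the marking with the highest order.
    # Statement markings form a type of their own, so they are kept alongside
    # the TLP and PAP markings in the example above.
    def consolidate(markings):
        kept = {}
        for m in markings:
            current = kept.get(m["type"])
            if current is None or m["order"] > current["order"]:
                kept[m["type"]] = m
        return list(kept.values())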
"},{"location":"administration/segregation/#update-an-object-manually","title":"Update an object manually","text":"
When updating an entity or a relationship:
adding a marking with the same type and a different order displays a pop-up to confirm the choice,
adding a marking with the same type and the same order adds the marking,
adding a marking with a different type adds the marking.
"},{"location":"administration/segregation/#import-data-from-a-connector","title":"Import data from a connector","text":"
As a result of this mechanism, when importing data from a connector, the connector is unable to downgrade a marking for an entity if a marking of the same type is already present on it.
The Traffic Light Protocol is implemented by default as marking definitions in OpenCTI. It allows you to segregate information by TLP levels in your platform and restrict access to marked data if users are not authorized to see the corresponding marking.
The Traffic Light Protocol (TLP) was designed by the Forum of Incident Response and Security Teams (FIRST) to provide a standardized method for classifying and handling sensitive information, based on four categories of sensitivity.
For more details, the diagram provided below illustrates how the marking definitions are categorized:
Support packages are useful for troubleshooting issues that occur on the OpenCTI platform. Administrators can request the creation and download of a support package that contains recent platform error logs and usage statistics.
Support package content
Even if we do our best to prevent logging any data, the support package may contain some sensitive information that you may not want to share with everyone. Before creating a ticket with your support package, take some time to check whether you can safely share the content, depending on your security policy.
A Support Package can be requested from the "Settings > Support" menu.
On a click on "Generate support package", a support event is propagated to every platform instance to request the needed information. Every instance that receives this message processes the request and sends the files to the platform. During this processing, the interface displays the expected support package name in an IN PROGRESS state while waiting for completion. Once the process finishes, the support package moves to the READY state and the download and delete buttons are activated.
In case of platform instability, some logs might not be retrieved and the support package will be incomplete.
If some instances fail to send their data, you will be able to force the download of a partial zip after 1 minute. If a support package takes more than 5 minutes, its status will be moved to "timeout".
"},{"location":"administration/users/","title":"Users and Role Based Access Control","text":""},{"location":"administration/users/#introduction","title":"Introduction","text":"
In OpenCTI, the RBAC system relates not only to what users can or cannot do in the platform (aka capabilities) but also to the system of data segregation. Platform behavior such as default home dashboards, default triggers and digests, as well as default hidden menus or entities, can also be defined across groups and organizations.
Roles are used in the platform to grant groups capabilities that define what users in those groups can and cannot do.
"},{"location":"administration/users/#list-of-capabilities","title":"List of capabilities","text":"Capability Description Bypass all capabilities Just bypass everything including data segregation and enforcements. Access knowledge Access in read-only to all the knowledge in the platform. Access to collaborative creation Create notes and opinions (and modify its own) on entities and relations. Create / Update knowledge Create and update existing entities and relationships. Restrict organization access Share entities and relationships with other organizations. Delete knowledge Delete entities and relationships. Manage authorized members Restrict the access to an entity to a user, group or organization. Bypass enforced reference If external references enforced in a type of entity, be able to bypass the enforcement. Upload knowledge files Upload files in the Data and Content section of entities. Import knowledge Trigger the ingestion of an uploaded file. Download knowledge export Download the exports generated in the entities (in the Data section). Generate knowledge export Trigger the export of the knowledge of an entity. Ask for knowledge enrichment Trigger an enrichment for a given entity. Access dashboards Access to existing custom dashboards. Create / Update dashboards Create and update custom dashboards. Delete dashboards Delete existing custom dashboards. Manage public dashboards Manage public dashboards. Access investigations Access to existing investigations. Create / Update investigations Create and update investigations. Delete investigations Delete existing investigations. Access connectors Read information in the Data > Connectors section. Manage connector state Reset the connector state to restart ingestion from the beginning. Connectors API usage: register, ping, export push ... Connectors specific permissions for register, ping, push export files, etc. Access data sharing Access and consume data such as TAXII collections, CSV feeds and live streams. Manage data sharing Share data such as TAXII collections, CSV feeds and live streams or custom dashboards. Access ingestion Access (read only) remote OCTI streams, TAXII feeds, RSS feeds, CSV feeds. Manage ingestion Create, update, delete any remote OCTI streams, TAXII feeds, RSS feeds, CSV feeds. Manage CSV mappers Create, update and delete CSV mappers. Access to admin functionalities Parent capability allowing users to only view the settings. Access administration parameters Access and manage overall parameters of the platform in Settings > Parameters. Manage credentials Access and manage roles, groups, users, organizations and security policies. Manage marking definitions Update and delete marking definitions. Manage customization Customize entity types, rules, notifiers retention policies and decays rules. Manage taxonomies Manage labels, kill chain phases, vocabularies, status templates, cases templates. Access to security activity Access to activity log. Access to file indexing Manage file indexing. Access to support Generate and download support packages."},{"location":"administration/users/#manage-roles","title":"Manage roles","text":"
You can manage the roles in Settings > Security > Roles.
To create a role, just click on the + button:
Then you will be able to define the capabilities of the role:
You can manage the users in Settings > Security > Users. If you are using Single-Sign-On (SSO), the users in OpenCTI are automatically created upon login.
To create a user, just click on the + button:
"},{"location":"administration/users/#manage-a-user","title":"Manage a user","text":"
When accessing a user, it is possible to:
Visualize information including the token
Modify it, reset 2FA if necessary
Manage its sessions
Manage its triggers and digests
Visualize the history and operations
Manage its max confidence levels
From this view you can edit the user's information by clicking the \"Update\" button, which opens a panel with several tabs.
Overview tab: edit all basic information such as the name or language
Password tab: change the password for this user
Groups tab: select the groups this user belongs to
Organization Admin tab: see Organization administration
Confidences tab: manage the user's maximum confidence level and overrides per entity type
Mandatory max confidence level
A user without a max confidence level won't have the ability to create, delete or update any data in the platform. Please make sure that your users are always either assigned to a group that has a confidence level defined, or given an override of this group confidence level.
Groups are the main way to manage permissions, data segregation and platform customization for the users belonging to them. You can manage the groups in Settings > Security > Groups.
Here is the description of the group available parameters.
Auto new markings: if a new marking definition is created, this group will automatically be granted access to it.
Default membership: if a new user is created (manually or upon SSO), it will be added to this group.
Roles: roles and capabilities granted to the users belonging to this group.
Default dashboard: customize the home dashboard for the users belonging to this group.
Default markings: in Settings > Customization > Entity types, if a default marking definition is enabled, the default markings of the group are used.
Allowed markings: grant the group access to the defined marking definitions; more details in data segregation.
Max shareable markings: grant the group authorization to share marking definitions.
Triggers and digests: define default triggers and digests for the users belonging to this group.
Max confidence level: define the maximum confidence level for the group; it impacts the capacity to update entities and the confidence level of entities newly created by users of the group.
Max confidence level when a user has multiple groups
A user with multiple groups will have the highest confidence level of all their groups. For instance, if a user is part of group A (max confidence level = 100) and group B (max confidence level = 50), then the user's max confidence level will be 100.
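As a simple sketch of this rule:

    # The effective max confidence level is the highest among the user's groups.
    group_levels = [100, 50]  # hypothetical max confidence levels of the groups
    user_max_confidence = max(group_levels)  # -> 100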
"},{"location":"administration/users/#manage-a-group","title":"Manage a group","text":"
When managing a group, you can define the members and all above configurations.
Users can belong to organizations, which is an additional layer of data segregation and customization. To find out more about this part, please refer to the page on organization segregation.
Activity unified interface and logging are available under the \"OpenCTI Enterprise Edition\" license.
Please read the dedicated page to have all the information
As explained in the overview page, all administration actions are logged by default. However, knowledge is not all logged by default due to the performance impact on the platform.
For this reason, you need to explicitly activate extended listening on a user, group or organization.
Listening will start just after the configuration; past events will not be taken into account.
OpenCTI activity capability is the way to unify what's really happening in the platform. In the events section, you will have access to the UI that answers \"who did what, where, and when?\" within your data with the maximum level of transparency.
By default, the events screen only shows the administration actions done by the users.
If you also want to see the information about the knowledge, you can simply activate the filter in the bar to get the complete overview of all user actions.
Don't hesitate to read the overview page again to get a better understanding of the difference between Audit and Basic/Extended knowledge.
Activity unified interface and logging are available under the \"OpenCTI Enterprise Edition\" license.
Please read the dedicated page to have all the information
OpenCTI activity capability is the way to unify what's really happening in the platform. With this feature you will be able to answer \"who did what, where, and when?\" within your data with the maximum level of transparency.
Enabling activity helps your security, auditing, and compliance teams monitor the platform for possible vulnerabilities or external data misuse.
The basic knowledge refers to all STIX data knowledge inside OpenCTI. Every create/update/delete action on that knowledge is accessible through the history. That basic activity is handled by the history manager and can also be found directly on each entity.
The extended knowledge refers to extra data used to track specific user activity. As this kind of tracking is expensive, it will only be done for specific users/groups/organizations explicitly configured in the configuration window.
Having all the history in the user interface (events) is sometimes not enough for proactive monitoring. For this reason, you can configure specific triggers to receive notifications on audit events. Like personal triggers, you can configure live ones that will be sent directly, or digests, depending on your needs.
Under the hood, we technically use the strategies provided by PassportJS. We integrate a subset of the strategies available with passport. If you need more, we can integrate other strategies.
The cert parameter is mandatory (PEM format) because it is used to validate the SAML response.
The private_key (PEM format) is optional and is only required if you want to sign the SAML client request.
Certificates
Be careful to put the cert / private_key keys in PEM format. Indeed, a lot of systems generally export the keys in X509 / PKCS12 formats, so you will need to convert them. Here is an example to extract PEM from PKCS12:
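A sketch using openssl (file names are placeholders):
# Extract the private key in PEM format from a PKCS12 keystore
openssl pkcs12 -in keystore.p12 -nocerts -nodes -out private_key.pem
# Extract the certificate in PEM format
openssl pkcs12 -in keystore.p12 -clcerts -nokeys -out cert.pem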
Starting from OpenCTI 6.2, when the want_assertions_signed and want_authn_response_signed SAML parameters are not present in the OpenCTI configuration, the default is set to \"true\" by the underlying library (passport-saml), whereas previously it was false by default. If you have issues after an upgrade, you can try with both of them set to false.
Here is an example of SAML configuration using environment variables:
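A hedged sketch (issuer, URLs and certificate content are placeholders to adapt to your identity provider; the want_* flags are the ones discussed above):
- PROVIDERS__SAML__STRATEGY=SamlStrategy
- PROVIDERS__SAML__CONFIG__LABEL=Login with SAML
- PROVIDERS__SAML__CONFIG__ISSUER=opencti                # placeholder issuer
- PROVIDERS__SAML__CONFIG__ENTRY_POINT=https://idp.example.com/saml/sso
- PROVIDERS__SAML__CONFIG__SAML_CALLBACK_URL=https://opencti.example.com/auth/saml/callback
- PROVIDERS__SAML__CONFIG__CERT=MIIC...                  # IdP certificate, PEM content
- PROVIDERS__SAML__CONFIG__WANT_ASSERTIONS_SIGNED=false
- PROVIDERS__SAML__CONFIG__WANT_AUTHN_RESPONSE_SIGNED=false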
This strategy allows using the OpenID Connect Protocol to handle the authentication and is based on Node OpenID Client, which is more powerful than the passport one.
By default, the claims are mapped based on the content of the JWT access_token. If you want to map claims which are in other JWT (such as id_token), you can define the following environment variables:
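As an illustration only; the key names below are assumptions following the provider configuration convention and should be verified against your OpenCTI version:
- PROVIDERS__OPENID__CONFIG__GROUPS_MANAGEMENT__TOKEN_REFERENCE=id_token          # assumed key: read group claims from the id_token instead of the access_token
- PROVIDERS__OPENID__CONFIG__ORGANIZATIONS_MANAGEMENT__TOKEN_REFERENCE=id_token   # assumed key: same for organization claims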
If this mode is activated and the headers are available, the user will be automatically logged in without any action or notice. The logout URI will remove the session and redirect to the configured URI. If not specified, the redirect will be done to the request referer and so the header authentication will be done again.
"},{"location":"deployment/authentication/#automatically-create-group-on-sso","title":"Automatically create group on SSO","text":"
The variable auto_create_group can be added in the options of some strategies (LDAP, SAML and OpenID). If this variable is true, the groups of a user that logs in will automatically be created if they don't exist.
More precisely, if the user that tries to authenticate has groups that don't exist in OpenCTI but exist in the SSO configuration, there are two cases:
if auto_create_group = true in the SSO configuration: the groups are created at the platform initialization and the user will be mapped on them.
else: an error is raised.
Example
We assume that Group1 exists in the platform, and newGroup doesn't exist. The user that tries to log in has the group newGroup. If auto_create_group = true in the SSO configuration, the group named newGroup will be created at the platform initialization and the user will be mapped on it. If auto_create_group = false or is undefined, the user can't log in and an error is raised.
"},{"location":"deployment/authentication/#examples","title":"Examples","text":""},{"location":"deployment/authentication/#ldap-then-fallback-to-local","title":"LDAP then fallback to local","text":"
In this example, the users have a login form and need to enter a login and password. The authentication is done on LDAP first, then locally if the user failed to authenticate, and finally fails if neither of them succeeded. Here is an example for the production.json file:
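A hedged sketch of the providers block (strategy names follow the PassportJS-based convention described above; URLs, DNs and attribute names are placeholders to adapt):
"providers": {
  "ldap": {
    "strategy": "LdapStrategy",
    "config": {
      "url": "ldaps://ldap.mydomain.com:636",
      "bind_dn": "cn=Administrator,cn=Users,dc=mydomain,dc=com",
      "bind_credentials": "ChangeMe",
      "search_base": "cn=Users,dc=mydomain,dc=com",
      "search_filter": "(cn={{username}})",
      "mail_attribute": "mail",
      "account_attribute": "givenName"
    }
  },
  "local": {
    "strategy": "LocalStrategy",
    "config": {
      "disabled": false
    }
  }
}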
"},{"location":"deployment/breaking-changes/","title":"Breaking changes and migrations","text":"
This section lists breaking changes introduced in OpenCTI, per version starting with the latest.
Please follow the migration guides if you need to upgrade your platform.
"},{"location":"deployment/breaking-changes/#opencti-62","title":"OpenCTI 6.2","text":""},{"location":"deployment/breaking-changes/#change-to-the-observable-promote","title":"Change to the observable \"promote\"","text":"
The API calls that promote an Observable to Indicator now return the created Indicator instead of the original Observable.
GraphQL API
Mutation StixCyberObservableEditMutations.promote is now deprecated
New Mutation StixCyberObservableEditMutations.promoteToIndicator introduced
Client-Python API
Client-python method client.stix_cyber_observable.promote_to_indicator is now deprecated
New Client-python method client.stix_cyber_observable.promote_to_indicator_v2 introduced
Discontinued Support
Please note that the deprecated methods will be permanently removed in OpenCTI 6.5.
"},{"location":"deployment/breaking-changes/#how-to-migrate","title":"How to migrate","text":"
If you are using custom scripts that make use of the deprecated API methods, please update these scripts.
The changes are straightforward: if you are using the return value of the method, you should now expect the new Indicator instead of the Observable being promoted; adapt your code accordingly.
"},{"location":"deployment/breaking-changes/#change-to-saml-authentication","title":"Change to SAML authentication","text":"
When want_assertions_signed and want_authn_response_signed SAML parameter are not present in OpenCTI configuration, the default is now set to true by the underlying library (passport-saml) when previously it was false by default.
"},{"location":"deployment/breaking-changes/#how-to-migrate_1","title":"How to migrate","text":"
If you have issues after upgrade, you can try with both parameters set to false.
"},{"location":"deployment/breaking-changes/#opencti-512","title":"OpenCTI 5.12","text":""},{"location":"deployment/breaking-changes/#major-changes-to-the-filtering-api","title":"Major changes to the filtering APi","text":"
OpenCTI 5.12 introduces a major rework of the filter engine with breaking changes to the model.
A dedicated blog post describes the reasons behind these changes.
"},{"location":"deployment/breaking-changes/#how-to-migrate_2","title":"How to migrate","text":"
The OpenCTI platform technological stack has been designed to be able to scale horizontally. All dependencies such as Elastic or Redis can be deployed in cluster mode and performances can be drastically increased by deploying multiple platform and worker instances.
MinIO is an open source server able to serve S3 buckets. It can be deployed in cluster mode and is compatible with several storage backends. OpenCTI is compatible with any tool following the S3 standard.
As shown on the schema, the best practices for cluster mode, and to avoid any congestion in the technological stack, are:
Deploy platform(s) dedicated to end users and connectors registration
Deploy platform(s) dedicated to workers / ingestion process
We recommend 3 to 4 workers maximum per OpenCTI instance.
The ingestion platforms will never be accessed directly by end users.
When enabling clustering, the number of nodes is displayed in Settings > Parameters.
"},{"location":"deployment/clustering/#managers-and-schedulers","title":"Managers and schedulers","text":"
Also, since some managers like the rule engine, the task manager and the notification manager can take some resources in the OpenCTI NodeJS process, it is highly recommended to disable them in the frontend cluster. OpenCTI automatically handles the distribution and the launching of the engines across all nodes in the cluster, except where they are explicitly disabled in the configuration.
The purpose of this section is to learn how to configure OpenCTI to have it tailored for your production and development needs. It is possible to check all default parameters implemented in the platform in the default.json file.
Here are the configuration keys, for both containers (environment variables) and manual deployment.
Parameters equivalence
The equivalent of a config variable in environment variables is the usage of a double underscore (__) for each level of config.
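For instance, taking a key documented in the tables below:
# config key: app:admin:password  ->  environment variable:
APP__ADMIN__PASSWORD=ChangeMe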
"},{"location":"deployment/configuration/#platform","title":"Platform","text":""},{"location":"deployment/configuration/#api-frontend","title":"API & Frontend","text":""},{"location":"deployment/configuration/#basic-parameters","title":"Basic parameters","text":"Parameter Environment variable Default value Description app:port APP__PORT 4000 Listen port of the application app:base_path APP__BASE_PATH Specific URI (ie. /opencti) app:base_url APP__BASE_URL http://localhost:4000 Full URL of the platform (should include the base_path if any) app:request_timeout APP__REQUEST_TIMEOUT 1200000 Request timeout, in ms (default 20 minutes) app:session_timeout APP__SESSION_TIMEOUT 1200000 Session timeout, in ms (default 20 minutes) app:session_idle_timeout APP__SESSION_IDLE_TIMEOUT 0 Idle timeout (locking the screen), in ms (default 0 minute - disabled) app:session_cookie APP__SESSION_COOKIE false Use memory/session cookie instead of persistent one app:admin:email APP__ADMIN__EMAIL admin@opencti.io Default login email of the admin user app:admin:password APP__ADMIN__PASSWORD ChangeMe Default password of the admin user app:admin:token APP__ADMIN__TOKEN ChangeMe Default token (must be a valid UUIDv4) app:health_access_key APP__HEALTH_ACCESS_KEY ChangeMe Access key for the /health endpoint. Must be changed - will not respond to default value. Access with /health?health_access_key=ChangeMe"},{"location":"deployment/configuration/#network-and-security","title":"Network and security","text":"Parameter Environment variable Default value Description http_proxy HTTP_PROXY Proxy URL for HTTP connection (example: http://proxy:80080) https_proxy HTTPS_PROXY Proxy URL for HTTPS connection (example: http://proxy:80080) no_proxy NO_PROXY Comma separated list of hostnames for proxy exception (example: localhost,127.0.0.0/8,internal.opencti.io) app:https_cert:cookie_secure APP__HTTPS_CERT__COOKIE_SECURE false Set the flag \"secure\" for session cookies. app:https_cert:ca APP__HTTPS_CERT__CA Empty list [] Certificate authority paths or content, only if the client uses a self-signed certificate. 
app:https_cert:key APP__HTTPS_CERT__KEY Certificate key path or content app:https_cert:crt APP__HTTPS_CERT__CRT Certificate crt path or content app:https_cert:reject_unauthorized APP__HTTPS_CERT__REJECT_UNAUTHORIZED If not false, the server certificate is verified against the list of supplied CAs"},{"location":"deployment/configuration/#logging","title":"Logging","text":""},{"location":"deployment/configuration/#errors","title":"Errors","text":"Parameter Environment variable Default value Description app:app_logs:logs_level APP__APP_LOGS__LOGS_LEVEL info The application log level app:app_logs:logs_files APP__APP_LOGS__LOGS_FILES true If application logs is logged into files app:app_logs:logs_console APP__APP_LOGS__LOGS_CONSOLE true If application logs is logged to console (useful for containers) app:app_logs:logs_max_files APP__APP_LOGS__LOGS_MAX_FILES 7 Maximum number of daily files in logs app:app_logs:logs_directory APP__APP_LOGS__LOGS_DIRECTORY ./logs File logs directory"},{"location":"deployment/configuration/#audit","title":"Audit","text":"Parameter Environment variable Default value Description app:audit_logs:logs_files APP__AUDIT_LOGS__LOGS_FILES true If audit logs is logged into files app:audit_logs:logs_console APP__AUDIT_LOGS__LOGS_CONSOLE true If audit logs is logged to console (useful for containers) app:audit_logs:logs_max_files APP__AUDIT_LOGS__LOGS_MAX_FILES 7 Maximum number of daily files in logs app:audit_logs:logs_directory APP__AUDIT_LOGS__LOGS_DIRECTORY ./logs Audit logs directory"},{"location":"deployment/configuration/#telemetry","title":"Telemetry","text":"Parameter Environment variable Default value Description app:telemetry:metrics:enabled APP__TELEMETRY__METRICS__ENABLED false Enable the metrics collection. app:telemetry:metrics:exporter_otlp APP__TELEMETRY__METRICS__EXPORTER_OTLP Port to expose the OTLP endpoint. app:telemetry:metrics:exporter_prometheus APP__TELEMETRY__METRICS__EXPORTER_PROMETHEUS 14269 Port to expose the Prometheus endpoint."},{"location":"deployment/configuration/#maps-references","title":"Maps & references","text":"Parameter Environment variable Default value Description app:map_tile_server_dark APP__MAP_TILE_SERVER_DARK https://map.opencti.io/styles/filigran-dark2/{z}/{x}/{y}.png The address of the OpenStreetMap provider with dark theme style app:map_tile_server_light APP__MAP_TILE_SERVER_LIGHT https://map.opencti.io/styles/filigran-light2/{z}/{x}/{y}.png The address of the OpenStreetMap provider with light theme style app:reference_attachment APP__REFERENCE_ATTACHMENT false External reference mandatory attachment"},{"location":"deployment/configuration/#functional-customization","title":"Functional customization","text":"Parameter Environment variable Default value Description app:artifact_zip_password APP__ARTIFACT_ZIP_PASSWORD infected Artifact encrypted archive default password relations_deduplication:past_days RELATIONS_DEDUPLICATION__PAST_DAYS 30 De-duplicate relations based on start_time and stop_time - n days relations_deduplication:next_days RELATIONS_DEDUPLICATION__NEXT_DAYS 30 De-duplicate relations based on start_time and stop_time + n days relations_deduplication:created_by_based RELATIONS_DEDUPLICATION__CREATED_BY_BASED false Take into account the author to duplicate even if stat_time / stop_time are matching relations_deduplication:types_overrides:relationship_type:past_days RELATIONS_DEDUPLICATION__RELATIONSHIP_TYPE__PAST_DAYS Override the past days for a specific type of relationship (ex. 
targets) relations_deduplication:types_overrides:relationship_type:next_days RELATIONS_DEDUPLICATION__RELATIONSHIP_TYPE__NEXT_DAYS Override the next days for a specific type of relationship (ex. targets) relations_deduplication:types_overrides:relationship_type:created_by_based RELATIONS_DEDUPLICATION__RELATIONSHIP_TYPE__CREATED_BY_BASED Override the author duplication for a specific type of relationship (ex. targets)"},{"location":"deployment/configuration/#technical-customization","title":"Technical customization","text":"Parameter Environment variable Default value Description app:graphql:playground:enabled APP__GRAPHQL__PLAYGROUND__ENABLED true Enable the playground on /graphql app:graphql:playground:force_disabled_introspection APP__GRAPHQL__PLAYGROUND__FORCE_DISABLED_INTROSPECTION true Introspection is allowed to authenticated users but can be disabled if needed app:concurrency:retry_count APP__CONCURRENCY__RETRY_COUNT 200 Number of tries to get the lock to work on an element (create/update/merge, ...) app:concurrency:retry_delay APP__CONCURRENCY__RETRY_DELAY 100 Delay between 2 lock retries (in milliseconds) app:concurrency:retry_jitter APP__CONCURRENCY__RETRY_JITTER 50 Random jitter to prevent concurrent retries (in milliseconds) app:concurrency:max_ttl APP__CONCURRENCY__MAX_TTL 30000 Global maximum time for lock retry (in milliseconds)"},{"location":"deployment/configuration/#dependencies","title":"Dependencies","text":""},{"location":"deployment/configuration/#xtm-suite","title":"XTM Suite","text":"Parameter Environment variable Default value Description xtm:openbas_url XTM__OPENBAS_URL OpenBAS URL xtm:openbas_token XTM__OPENBAS_TOKEN OpenBAS token xtm:openbas_reject_unauthorized XTM__OPENBAS_REJECT_UNAUTHORIZED false Enable TLS certificate check xtm:openbas_disable_display XTM__OPENBAS_DISABLE_DISPLAY false Disable OpenBAS posture in the UI"},{"location":"deployment/configuration/#elasticsearch","title":"ElasticSearch","text":"Parameter Environment variable Default value Description elasticsearch:engine_selector ELASTICSEARCH__ENGINE_SELECTOR auto elk or opensearch; default is auto, please put elk if you use token auth. elasticsearch:engine_check ELASTICSEARCH__ENGINE_CHECK false Disable Search Engine compatibility matrix verification. Caution: OpenCTI was developed in compliance with the compatibility matrix. Setting the parameter to true may result in negative impacts. elasticsearch:url ELASTICSEARCH__URL http://localhost:9200 URL(s) of the ElasticSearch (supports http://user:pass@localhost:9200 and list of URLs) elasticsearch:username ELASTICSEARCH__USERNAME Username can be put in the URL or with this parameter elasticsearch:password ELASTICSEARCH__PASSWORD Password can be put in the URL or with this parameter elasticsearch:api_key ELASTICSEARCH__API_KEY API key for ElasticSearch token auth.
Please also set engine_selector to elk elasticsearch:index_prefix ELASTICSEARCH__INDEX_PREFIX opencti Prefix for the indices elasticsearch:ssl:reject_unauthorized ELASTICSEARCH__SSL__REJECT_UNAUTHORIZED true Enable TLS certificate check elasticsearch:ssl:ca ELASTICSEARCH__SSL__CA Custom certificate path or content elasticsearch:search_wildcard_prefix ELASTICSEARCH__SEARCH_WILDCARD_PREFIX false Search includes words with automatic fuzzy comparison elasticsearch:search_fuzzy ELASTICSEARCH__SEARCH_FUZZY false Search will include words not starting with the search keyword"},{"location":"deployment/configuration/#redis","title":"Redis","text":"Parameter Environment variable Default value Description redis:mode REDIS__MODE single Connect to redis in \"single\", \"sentinel\" or \"cluster\" mode redis:namespace REDIS__NAMESPACE Namespace (to use as prefix) redis:hostname REDIS__HOSTNAME localhost Hostname of the Redis Server redis:hostnames REDIS__HOSTNAMES Hostnames definition for Redis cluster or sentinel mode: a list of host:port objects. redis:port REDIS__PORT 6379 Port of the Redis Server redis:sentinel_master_name REDIS__SENTINEL_MASTER_NAME Name of your Redis Sentinel Master (mandatory in sentinel mode) redis:use_ssl REDIS__USE_SSL false Whether the Redis Server has TLS enabled redis:username REDIS__USERNAME Username of the Redis Server redis:password REDIS__PASSWORD Password of the Redis Server redis:database REDIS__DATABASE Database of the Redis Server (only works in single mode) redis:ca REDIS__CA [] List of path(s) of the CA certificate(s) redis:trimming REDIS__TRIMMING 2000000 Number of elements to maintain in the stream (0 = unlimited)"},{"location":"deployment/configuration/#rabbitmq","title":"RabbitMQ","text":"Parameter Environment variable Default value Description rabbitmq:hostname RABBITMQ__HOSTNAME localhost Hostname of the RabbitMQ server rabbitmq:port RABBITMQ__PORT 5672 Port of the RabbitMQ server rabbitmq:port_management RABBITMQ__PORT_MANAGEMENT 15672 Port of the RabbitMQ Management Plugin rabbitmq:username RABBITMQ__USERNAME guest RabbitMQ user rabbitmq:password RABBITMQ__PASSWORD guest RabbitMQ password rabbitmq:queue_type RABBITMQ__QUEUE_TYPE \"classic\" RabbitMQ Queue Type (\"classic\" or \"quorum\") - - - - rabbitmq:use_ssl RABBITMQ__USE_SSL false Use TLS connection rabbitmq:use_ssl_cert RABBITMQ__USE_SSL_CERT Path or cert content rabbitmq:use_ssl_key RABBITMQ__USE_SSL_KEY Path or key content rabbitmq:use_ssl_pfx RABBITMQ__USE_SSL_PFX Path or pfx content rabbitmq:use_ssl_ca RABBITMQ__USE_SSL_CA [] List of path(s) of the CA certificate(s) rabbitmq:use_ssl_passphrase RABBITMQ__SSL_PASSPHRASE Passphrase for the key certificate rabbitmq:use_ssl_reject_unauthorized RABBITMQ__SSL_REJECT_UNAUTHORIZED false Reject RabbitMQ self-signed certificates - - - - rabbitmq:management_ssl RABBITMQ__MANAGEMENT_SSL false Whether the Management Plugin has TLS enabled rabbitmq:management_ssl_reject_unauthorized RABBITMQ__SSL_REJECT_UNAUTHORIZED true Reject management self-signed certificates"},{"location":"deployment/configuration/#s3-bucket","title":"S3 Bucket","text":"Parameter Environment variable Default value Description minio:endpoint MINIO__ENDPOINT localhost Hostname of the S3 Service. Example if you use AWS Bucket S3: s3.us-east-1.amazonaws.com (if minio:bucket_region value is us-east-1). This parameter value can be omitted if you use Minio as an S3 Bucket Service. minio:port MINIO__PORT 9000 Port of the S3 Service. For AWS Bucket S3 over HTTPS, this value can be changed (usually 443).
minio:use_ssl MINIO__USE_SSL false Indicates whether the S3 Service has TLS enabled. For AWS Bucket S3 over HTTPS, this value could be true. minio:access_key MINIO__ACCESS_KEY ChangeMe Access key for the S3 Service. minio:secret_key MINIO__SECRET_KEY ChangeMe Secret key for the S3 Service. minio:bucket_name MINIO__BUCKET_NAME opencti-bucket S3 bucket name. Useful to change if you use AWS. minio:bucket_region MINIO__BUCKET_REGION us-east-1 Region of the S3 bucket if you are using AWS. This parameter value can be omitted if you use Minio as an S3 Bucket Service. minio:use_aws_role MINIO__USE_AWS_ROLE false Indicates whether to use AWS role auto credentials. When this parameter is configured, the minio:access_key and minio:secret_key parameters are not necessary."},{"location":"deployment/configuration/#smtp-service","title":"SMTP Service","text":"Parameter Environment variable Default value Description smtp:hostname SMTP__HOSTNAME SMTP Server hostname smtp:port SMTP__PORT 465 SMTP Port (25 or 465 for TLS) smtp:use_ssl SMTP__USE_SSL false SMTP over TLS smtp:reject_unauthorized SMTP__REJECT_UNAUTHORIZED false Enable TLS certificate check smtp:username SMTP__USERNAME SMTP Username if authentication is needed smtp:password SMTP__PASSWORD SMTP Password if authentication is needed"},{"location":"deployment/configuration/#ai-service","title":"AI Service","text":"
AI deployment and cloud services
There are several possibilities for Enterprise Edition customers to use OpenCTI AI endpoints:
Use the Filigran AI Service leveraging our custom AI model using the token given by the support team.
Use OpenAI or MistralAI cloud endpoints using your own tokens.
Deploy or use local AI endpoints (Filigran can provide you with the custom model).
Parameter Environment variable Default value Description ai:enabled AI__ENABLED true Enable AI capabilities ai:type AI__TYPE mistralai AI type (mistralai or openai) ai:endpoint AI__ENDPOINT Endpoint URL (empty means default cloud service) ai:token AI__TOKEN Token for endpoint credentials ai:model AI__MODEL Model to be used for text generation (depending on type) ai:model_images AI__MODEL_IMAGES Model to be used for image generation (depending on type)"},{"location":"deployment/configuration/#using-a-credentials-provider","title":"Using a credentials provider","text":"
In some cases, it may not be possible to put the dependencies' credentials directly in environment variables or static configuration. The platform can then retrieve them from a credentials provider. Here is the list of supported providers:
For each dependency, special configuration keys are available to ensure the platform retrieves credentials during the start process. Not all dependencies support this mechanism; here is the exhaustive list:
Dependency Prefix ElasticSearch elasticsearch S3 Storage minio Redis redis OpenID secrets oic"},{"location":"deployment/configuration/#common-configurations","title":"Common configurations","text":"Parameter Environment variable Default value Description {prefix}:credentials_provider:https_cert:reject_unauthorized {PREFIX}__CREDENTIALS_PROVIDER__HTTPS_CERT__REJECT_UNAUTHORIZED false Reject unauthorized TLS connection {prefix}:credentials_provider:https_cert:crt {PREFIX}__CREDENTIALS_PROVIDER__HTTPS_CERT__CRT Path to the HTTPS certificate {prefix}:credentials_provider:https_cert:key {PREFIX}__CREDENTIALS_PROVIDER__HTTPS_CERT__KEY Path to the HTTPS key {prefix}:credentials_provider:https_cert:ca {PREFIX}__CREDENTIALS_PROVIDER__HTTPS_CERT__CA Path to the HTTPS CA certificate"},{"location":"deployment/configuration/#cyberark","title":"CyberArk","text":"Parameter Environment variable Default value Description {prefix}:credentials_provider:cyberark:uri {PREFIX}__CREDENTIALS_PROVIDER__CYBERARK__URI The URL of the CyberArk endpoint for credentials retrieval (GET request) {prefix}:credentials_provider:cyberark:app_id {PREFIX}__CREDENTIALS_PROVIDER__CYBERARK__APP_ID The used application ID for the dependency within CyberArk {prefix}:credentials_provider:cyberark:safe {PREFIX}__CREDENTIALS_PROVIDER__CYBERARK__SAFE The used safe key for the dependency within CyberArk {prefix}:credentials_provider:cyberark:object {PREFIX}__CREDENTIALS_PROVIDER__CYBERARK__OBJECT The used object key for the dependency within CyberArk {prefix}:credentials_provider:cyberark:default_splitter {PREFIX}__CREDENTIALS_PROVIDER__CYBERARK__DEFAULT_SPLITTER : Default splitter of the credentials results, for \"username:password\", default is \":\" {prefix}:credentials_provider:cyberark:field_targets {PREFIX}__CREDENTIALS_PROVIDER__CYBERARK__FIELD_TARGETS [] Fields targets in the data content response after splitting
Here is an example for ElasticSearch:
Environment variables:
- ELASTICSEARCH__CREDENTIALS_PROVIDER__CYBERARK__URI=http://my.cyberark.com/AIMWebService/api/Accounts\n- ELASTICSEARCH__CREDENTIALS_PROVIDER__CYBERARK__APP_ID=opencti-elastic\n- ELASTICSEARCH__CREDENTIALS_PROVIDER__CYBERARK__SAFE=mysafe-key\n- ELASTICSEARCH__CREDENTIALS_PROVIDER__CYBERARK__OBJECT=myobject-key\n- \"ELASTICSEARCH__CREDENTIALS_PROVIDER__CYBERARK__DEFAULT_SPLITTER=:\" # As default is already \":\", may not be necessary\n- \"ELASTICSEARCH__CREDENTIALS_PROVIDER__CYBERARK__FIELD_TARGETS=[\\\"username\\\",\\\"password\\\"]\"\n
- MINIO__CREDENTIALS_PROVIDER__HTTPS_CERT__CRT=/cert_volume/mycert.crt\n- MINIO__CREDENTIALS_PROVIDER__HTTPS_CERT__KEY=/cert_volume/mycert.key\n- MINIO__CREDENTIALS_PROVIDER__HTTPS_CERT__CA=/cert_volume/ca.crt\n- MINIO__CREDENTIALS_PROVIDER__CYBERARK__URI=http://my.cyberark.com/AIMWebService/api/Accounts\n- MINIO__CREDENTIALS_PROVIDER__CYBERARK__APP_ID=opencti-s3\n- MINIO__CREDENTIALS_PROVIDER__CYBERARK__SAFE=mysafe-key\n- MINIO__CREDENTIALS_PROVIDER__CYBERARK__OBJECT=myobject-key\n- \"MINIO__CREDENTIALS_PROVIDER__CYBERARK__DEFAULT_SPLITTER=:\" # As default is already \":\", may not be necessary\n- \"MINIO__CREDENTIALS_PROVIDER__CYBERARK__FIELD_TARGETS=[\\\"access_key\\\",\\\"secret_key\\\"]\"\n
"},{"location":"deployment/configuration/#engines-schedules-and-managers","title":"Engines, Schedules and Managers","text":"Parameter Environment variable Default value Description rule_engine:enabled RULE_ENGINE__ENABLED true Enable/disable the rule engine rule_engine:lock_key RULE_ENGINE__LOCK_KEY rule_engine_lock Lock key of the engine in Redis - - - - history_manager:enabled HISTORY_MANAGER__ENABLED true Enable/disable the history manager history_manager:lock_key HISTORY_MANAGER__LOCK_KEY history_manager_lock Lock key for the manager in Redis - - - - task_scheduler:enabled TASK_SCHEDULER__ENABLED true Enable/disable the task scheduler task_scheduler:lock_key TASK_SCHEDULER__LOCK_KEY task_manager_lock Lock key for the scheduler in Redis task_scheduler:interval TASK_SCHEDULER__INTERVAL 10000 Interval to check new task to do (in ms) - - - - sync_manager:enabled SYNC_MANAGER__ENABLED true Enable/disable the sync manager sync_manager:lock_key SYNC_MANAGER__LOCK_KEY sync_manager_lock Lock key for the manager in Redis sync_manager:interval SYNC_MANAGER__INTERVAL 10000 Interval to check new sync feeds to consume (in ms) - - - - expiration_scheduler:enabled EXPIRATION_SCHEDULER__ENABLED true Enable/disable the scheduler expiration_scheduler:lock_key EXPIRATION_SCHEDULER__LOCK_KEY expired_manager_lock Lock key for the scheduler in Redis expiration_scheduler:interval EXPIRATION_SCHEDULER__INTERVAL 300000 Interval to check expired indicators (in ms) - - - - retention_manager:enabled RETENTION_MANAGER__ENABLED true Enable/disable the retention manager retention_manager:lock_key RETENTION_MANAGER__LOCK_KEY retention_manager_lock Lock key for the manager in Redis retention_manager:interval RETENTION_MANAGER__INTERVAL 60000 Interval to check items to be deleted (in ms) - - - - notification_manager:enabled NOTIFICATION_MANAGER__ENABLED true Enable/disable the notification manager notification_manager:lock_live_key NOTIFICATION_MANAGER__LOCK_LIVE_KEY notification_live_manager_lock Lock live key for the manager in Redis notification_manager:lock_digest_key NOTIFICATION_MANAGER__LOCK_DIGEST_KEY notification_digest_manager_lock Lock digest key for the manager in Redis notification_manager:interval NOTIFICATION_MANAGER__INTERVAL 10000 Interval to push notifications - - - - publisher_manager:enabled PUBLISHER_MANAGER__ENABLED true Enable/disable the publisher manager publisher_manager:lock_key PUBLISHER_MANAGER__LOCK_KEY publisher_manager_lock Lock key for the manager in Redis publisher_manager:interval PUBLISHER_MANAGER__INTERVAL 10000 Interval to send notifications / digests (in ms) - - - - ingestion_manager:enabled INGESTION_MANAGER__ENABLED true Enable/disable the ingestion manager ingestion_manager:lock_key INGESTION_MANAGER__LOCK_KEY ingestion_manager_lock Lock key for the manager in Redis ingestion_manager:interval INGESTION_MANAGER__INTERVAL 300000 Interval to check for new data in remote feeds - - - - playbook_manager:enabled PLAYBOOK_MANAGER__ENABLED true Enable/disable the playbook manager playbook_manager:lock_key PLAYBOOK_MANAGER__LOCK_KEY publisher_manager_lock Lock key for the manager in Redis playbook_manager:interval PLAYBOOK_MANAGER__INTERVAL 60000 Interval to check new playbooks - - - - activity_manager:enabled ACTIVITY_MANAGER__ENABLED true Enable/disable the activity manager activity_manager:lock_key ACTIVITY_MANAGER__LOCK_KEY activity_manager_lock Lock key for the manager in Redis - - - - connector_manager:enabled CONNECTOR_MANAGER__ENABLED true Enable/disable the connector manager 
connector_manager:lock_key CONNECTOR_MANAGER__LOCK_KEY connector_manager_lock Lock key for the manager in Redis connector_manager:works_day_range CONNECTOR_MANAGER__WORKS_DAY_RANGE 7 Days range before considering the works as too old connector_manager:interval CONNECTOR_MANAGER__INTERVAL 10000 Interval to check the state of the works - - - - import_csv_built_in_connector:enabled IMPORT_CSV_CONNECTOR__ENABLED true Enable/disable the csv import connector import_csv_built_in_connector:validate_before_import IMPORT_CSV_CONNECTOR__VALIDATE_BEFORE_IMPORT false Validates the bundle before importing - - - - file_index_manager:enabled FILE_INDEX_MANAGER__ENABLED true Enable/disable the file indexing manager file_index_manager:stream_lock_key FILE_INDEX_MANAGER__STREAM_LOCK file_index_manager_stream_lock Stream lock key for the manager in Redis file_index_manager:interval FILE_INDEX_MANAGER__INTERVAL 60000 Interval to check for new files - - - - indicator_decay_manager:enabled INDICATOR_DECAY_MANAGER__ENABLED true Enable/disable the indicator decay manager indicator_decay_manager:lock_key INDICATOR_DECAY_MANAGER__LOCK_KEY indicator_decay_manager_lock Lock key for the manager in Redis indicator_decay_manager:interval INDICATOR_DECAY_MANAGER__INTERVAL 60000 Interval to check for indicators to update indicator_decay_manager:batch_size INDICATOR_DECAY_MANAGER__BATCH_SIZE 10000 Number of indicators handled by the manager - - - - garbage_collection_manager:enabled GARBAGE_COLLECTION_MANAGER__ENABLED true Enable/disable the trash manager garbage_collection_manager:lock_key GARBAGE_COLLECTION_MANAGER__LOCK_KEY garbage_collection_manager_lock Lock key for the manager in Redis garbage_collection_manager:interval GARBAGE_COLLECTION_MANAGER__INTERVAL 60000 Interval to check for trash elements to delete garbage_collection_manager:batch_size GARBAGE_COLLECTION_MANAGER__BATCH_SIZE 10000 Number of trash elements to delete at once garbage_collection_manager:deleted_retention_days GARBAGE_COLLECTION_MANAGER__DELETED_RETENTION_DAYS 7 Days after which elements in trash are deleted - - - - telemetry_manager:lock_key TELEMETRY_MANAGER__LOCK_LOCK telemetry_manager_lock Lock key for the manager in Redis
Manager's duties
A description of each manager's duties is available on a dedicated page.
"},{"location":"deployment/configuration/#worker-and-connector","title":"Worker and connector","text":"
The worker and connectors can be configured manually using the configuration file config.yml or through environment variables.
Parameter Environment variable Default value Description opencti:url OPENCTI_URL The URL of the OpenCTI platform opencti:token OPENCTI_TOKEN A token of an administrator account with bypass capability - - - - mq:use_ssl / / Depending on the API configuration (fetched from the API) mq:use_ssl_ca MQ_USE_SSL_CA Path or ca content mq:use_ssl_cert MQ_USE_SSL_CERT Path or cert content mq:use_ssl_key MQ_USE_SSL_KEY Path or key content mq:use_ssl_passphrase MQ_USE_SSL_PASSPHRASE Passphrase for the key certificate mq:use_ssl_reject_unauthorized MQ_USE_SSL_REJECT_UNAUTHORIZED false Reject RabbitMQ self-signed certificates"},{"location":"deployment/configuration/#worker-specific-configuration","title":"Worker specific configuration","text":""},{"location":"deployment/configuration/#logging_1","title":"Logging","text":"Parameter Environment variable Default value Description worker:log_level WORKER_LOG_LEVEL info The log level (error, warning, info or debug)"},{"location":"deployment/configuration/#telemetry_1","title":"Telemetry","text":"Parameter Environment variable Default value Description worker:telemetry_enabled WORKER_TELEMETRY_ENABLED false Enable the Prometheus endpoint worker:telemetry_prometheus_port WORKER_PROMETHEUS_TELEMETRY_PORT 14270 Port of the Prometheus endpoint worker:telemetry_prometheus_host WORKER_PROMETHEUS_TELEMETRY_HOST 0.0.0.0 Listen address of the Prometheus endpoint"},{"location":"deployment/configuration/#connector-specific-configuration","title":"Connector specific configuration","text":"
For specific connector configuration, you need to check each connector behavior.
You are looking for the available connectors? The list is in the OpenCTI Ecosystem.
Connectors are the cornerstone of the OpenCTI platform and allow organizations to easily ingest, enrich or export data. According to their functionality and use case, they are categorized in the following classes.
These connectors automatically retrieve information from an external organization, application, or service, and convert it to STIX 2.1 bundles. Then, they import it into OpenCTI using the workers.
When a new object is created in the platform or on the user request, it is possible to trigger the internal enrichment connector to look up and/or search the object in external organizations, applications, or services. If the object is found, the connectors will generate a STIX 2.1 bundle which will increase the level of knowledge about the concerned object.
These connectors connect to a platform live stream and continuously do something with the received events. In most cases, they are used to consume OpenCTI data and insert them in third-party platforms such as SIEMs, XDRs, EDRs, etc. In some cases, stream connectors can also query the external system on a regular basis and act as an import connector, for instance to gather alerts and sightings related to CTI data and push them to OpenCTI (bi-directional).
Information stored in OpenCTI can be extracted into different file formats like .csv or .json (STIX 2.1).
"},{"location":"deployment/connectors/#connector-configuration","title":"Connector configuration","text":""},{"location":"deployment/connectors/#connector-users-and-tokens","title":"Connector users and tokens","text":"
All connectors have to be able to access the OpenCTI API. To allow this connection, they have 2 mandatory configuration parameters, the OPENCTI_URL and the OPENCTI_TOKEN.
Connectors tokens
Be careful, we strongly recommend using a dedicated token for each connector running in the platform, so you have to create a specific user for each of them.
Also, while most connectors can run with a user belonging to the Connectors group (with the Connector role), the Internal Export Files connectors should be run with a user who is Administrator (with bypass capability), because they impersonate the user requesting the export to avoid data leaks.
Type Required role Used permissions EXTERNAL_IMPORT Connector Import data with the connector user. INTERNAL_ENRICHMENT Connector Enrich data with the connector user. INTERNAL_IMPORT_FILE Connector Import data with the connector user. INTERNAL_EXPORT_FILE Administrator Export data with the user who requested the export. STREAM Connector Consume the streams with the connector user."},{"location":"deployment/connectors/#parameters","title":"Parameters","text":"
In addition to these 2 parameters, connectors have other mandatory parameters that need to be set in order to make them work.
Here is an example of a connector docker-compose.yml file:
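A minimal sketch of the common connector parameters (the image name and scope are placeholders; each connector documents its own additional variables, as in the MISP example further below):
connector-example:
  image: opencti/connector-example:latest   # placeholder image
  environment:
    - OPENCTI_URL=http://opencti:8080
    - OPENCTI_TOKEN=ChangeMe                # token of the dedicated connector user
    - CONNECTOR_ID=ChangeMe                 # a valid UUIDv4, unique per connector
    - CONNECTOR_TYPE=EXTERNAL_IMPORT
    - CONNECTOR_NAME=Example
    - CONNECTOR_SCOPE=example
    - CONNECTOR_LOG_LEVEL=info
  restart: always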
By default, connectors connect to RabbitMQ using parameters and credentials directly given by the API during the connector registration process. In some cases, you may need to override them.
Be aware that all connectors reach RabbitMQ based on the RabbitMQ configuration provided by the OpenCTI platform. The connector must be able to reach RabbitMQ on the specified hostname and port. If you have a specific Docker network configuration, please be sure to adapt your docker-compose.yml file in such a way that the connector container gets attached to the OpenCTI network, e.g.:
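A hedged sketch (the network name opencti_default is an assumption; use the actual network created by your OpenCTI docker-compose project):
connector-example:
  # ... connector environment ...
  networks:
    - opencti_default   # assumed name of the OpenCTI Docker network

networks:
  opencti_default:
    external: true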
"},{"location":"deployment/connectors/#connector-token","title":"Connector token","text":""},{"location":"deployment/connectors/#create-the-user","title":"Create the user","text":"
As mentioned previously, it is strongly recommended to run each connector with its own user. The Internal Export File connectors should be launched with a user that belongs to a group which has an \"Administrator\" role (with bypass all capabilities enabled).
By default, a group named \"Connectors\" already exists in the platform. So just create a new user with the name [C] Name of the connector in Settings > Security > Users.
"},{"location":"deployment/connectors/#put-the-user-in-the-group","title":"Put the user in the group","text":"
Just go to the user you have just created and add it to the Connectors group.
Then just get the token of the user displayed in the interface.
You can either directly run the Docker image of connectors or add them to your current docker-compose.yml file.
"},{"location":"deployment/connectors/#add-a-connector-to-your-deployment","title":"Add a connector to your deployment","text":"
For instance, to enable the MISP connector, you can add a new service to your docker-compose.yml file:
connector-misp:\n image: opencti/connector-misp:latest\n environment:\n - OPENCTI_URL=http://localhost\n - OPENCTI_TOKEN=ChangeMe\n - CONNECTOR_ID=ChangeMe\n - CONNECTOR_TYPE=EXTERNAL_IMPORT\n - CONNECTOR_NAME=MISP\n - CONNECTOR_SCOPE=misp\n - CONNECTOR_LOG_LEVEL=info\n - MISP_URL=http://localhost # Required\n - MISP_KEY=ChangeMe # Required\n - MISP_SSL_VERIFY=False # Required\n - MISP_CREATE_REPORTS=True # Required, create report for MISP event\n - MISP_REPORT_CLASS=MISP event # Optional, report_class if creating report for event\n - MISP_IMPORT_FROM_DATE=2000-01-01 # Optional, import all event from this date\n - MISP_IMPORT_TAGS=opencti:import,type:osint # Optional, list of tags used for import events\n - MISP_INTERVAL=1 # Required, in minutes\n restart: always\n
"},{"location":"deployment/connectors/#launch-a-standalone-connector","title":"Launch a standalone connector","text":"
To launch a standalone connector, you can use the docker-compose.yml file of the connector itself. Just download the latest release and start the connector:
$ wget https://github.com/OpenCTI-Platform/connectors/archive/{RELEASE_VERSION}.zip\n$ unzip {RELEASE_VERSION}.zip\n$ cd connectors-{RELEASE_VERSION}/misp/\n
Change the configuration in the docker-compose.yml according to the parameters of the platform and of the targeted service. Then launch the connector:
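Launching is the standard Compose command:
docker-compose up -d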
The connector status can be displayed in the dedicated section of the platform available in Data > Ingestion > Connectors. You will be able to see the statistics of the RabbitMQ queue of the connector:
Problem
If you encounter problems deploying OpenCTI or connectors, you can consult the troubleshooting page.
All components of OpenCTI are shipped both as Docker images and manual installation packages.
Production deployment
For production deployment, we recommend deploying all components in containers, including dependencies, using native cloud services or orchestration systems such as Kubernetes.
To have more details about deploying OpenCTI and its dependencies in cluster mode, please read the dedicated section.
Use Docker
Deploy OpenCTI using Docker and the default docker-compose.yml provided in the docker repository.
Setup
Manual installation
Deploy dependencies and launch the platform manually using the packages released in the GitHub releases.
Just download the appropriate Docker Desktop version for your operating system.
"},{"location":"deployment/installation/#clone-the-repository","title":"Clone the repository","text":"
Docker helpers are available in the Docker GitHub repository.
mkdir -p /path/to/your/app && cd /path/to/your/app\ngit clone https://github.com/OpenCTI-Platform/docker.git\ncd docker\n
"},{"location":"deployment/installation/#configure-the-environment","title":"Configure the environment","text":"
ElasticSearch / OpenSearch configuration
We strongly recommend that you add the following ElasticSearch / OpenSearch parameter:
thread_pool.search.queue_size=5000\n
Check the OpenCTI Integration User Permissions in OpenSearch/ElasticSearch for detailed information about the user permissions required for the OpenSearch/ElasticSearch integration.
Before running the docker-compose command, the docker-compose.yml file should be configured. By default, the docker-compose.yml file is using environment variables available in the file .env.sample.
You can either rename the file .env.sample as .env and enter the values or just directly edit the docker-compose.yml with the values for your environment.
Configuration static parameters
The complete list of available static parameters is available in the configuration section.
Here is an example to quickly generate the .env file under Linux, especially all the default UUIDv4:
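A hedged sketch for Linux (abridged: the variable names follow .env.sample, and any remaining CONNECTOR_*_ID entries can be generated the same way):
cd ~/docker
(cat << EOF
OPENCTI_ADMIN_EMAIL=admin@opencti.io
OPENCTI_ADMIN_PASSWORD=ChangeMePlease
OPENCTI_ADMIN_TOKEN=$(cat /proc/sys/kernel/random/uuid)
MINIO_ROOT_USER=$(cat /proc/sys/kernel/random/uuid)
MINIO_ROOT_PASSWORD=$(cat /proc/sys/kernel/random/uuid)
RABBITMQ_DEFAULT_USER=guest
RABBITMQ_DEFAULT_PASS=guest
EOF
) > .env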
If your docker-compose deployment does not support .env files, just export all environment variables before launching the platform:
export $(cat .env | grep -v \"#\" | xargs)\n
As OpenCTI has a dependency on ElasticSearch, you have to set vm.max_map_count before running the containers, as mentioned in the ElasticSearch documentation.
sudo sysctl -w vm.max_map_count=1048575\n
To make this parameter persistent, add the following to the end of your /etc/sysctl.conf:
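The line to add (same value as the command above):
vm.max_map_count=1048575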
"},{"location":"deployment/installation/#run-opencti","title":"Run OpenCTI","text":""},{"location":"deployment/installation/#using-single-node-docker","title":"Using single node Docker","text":"
After changing your .env file run docker-compose in detached (-d) mode:
sudo systemctl start docker.service\n# Run docker-compose in detached\ndocker-compose up -d\n
In order to have the best experience with Docker, we recommend using the Docker stack feature. In this mode you will have the capacity to easily scale your deployment.
# If your virtual machine is not a part of a Swarm cluster, please use:\ndocker swarm init\n
Put your environment variables in /etc/environment:
# If you already exported your variables to .env from above:\nsudo bash -c 'cat .env >> /etc/environment'\nsudo docker stack deploy --compose-file docker-compose.yml opencti\n
Installation done
You can now go to http://localhost:8080 and log in with the credentials configured in your environment variables.
"},{"location":"deployment/installation/#manual-installation","title":"Manual installation","text":""},{"location":"deployment/installation/#prerequisites","title":"Prerequisites","text":""},{"location":"deployment/installation/#installation-of-dependencies","title":"Installation of dependencies","text":"
You have to install all the needed dependencies for the main application and the workers. The example below is for Debian-based systems:
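A hedged sketch of the dependency installation (package names are assumptions and may vary by distribution; Node.js and yarn should come from their upstream repositories to get recent versions, as noted in the start section below):
sudo apt-get update
# Build tools and Python for the platform modules and the workers
sudo apt-get install -y build-essential git python3 python3-pip python3-dev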
"},{"location":"deployment/installation/#download-the-application-files","title":"Download the application files","text":"
First, you have to download and extract the latest release file. Then select the version to install depending on your operating system:
For Linux:
If your OS supports libc (Ubuntu, Debian, ...) you have to install the opencti-release_{RELEASE_VERSION}.tar.gz version.
If your OS uses musl (Alpine, ...) you have to install the opencti-release-{RELEASE_VERSION}_musl.tar.gz version.
For Windows:
We don't provide any Windows release for now. However it is still possible to check the code out, manually install the dependencies and build the software.
mkdir /path/to/your/app && cd /path/to/your/app\nwget https://github.com/OpenCTI-Platform/opencti/releases/download/{RELEASE_VERSION}/opencti-release-{RELEASE_VERSION}.tar.gz\ntar xvfz opencti-release-{RELEASE_VERSION}.tar.gz\n
"},{"location":"deployment/installation/#install-the-main-platform","title":"Install the main platform","text":""},{"location":"deployment/installation/#configure-the-application","title":"Configure the application","text":"
The main application has just one JSON configuration file to change and a few Python modules to install.
cd opencti\ncp config/default.json config/production.json\n
Change the config/production.json file according to your configuration of ElasticSearch, Redis, RabbitMQ and S3 bucket as well as default credentials (the ADMIN_TOKEN must be a valid UUID).
"},{"location":"deployment/installation/#install-the-python-modules","title":"Install the Python modules","text":"
cd src/python\npip3 install -r requirements.txt\ncd ../..\n
"},{"location":"deployment/installation/#start-the-application","title":"Start the application","text":"
The application is just a NodeJS process; the creation of the database schema and the migration will be done at startup.
Please verify that the yarn version is greater than 4 and the node version is greater than or equal to v19. Please note that some Node.js versions are outdated in Linux package managers; you can download a recent one from https://nodejs.org/en/download or, alternatively, nvm can help you choose a recent version of Node.js: https://github.com/nvm-sh/nvm
OpenCTI platform is based on a NodeJS runtime, with a memory limit of 8GB by default. If you encounter OutOfMemory exceptions, this limit could be changed:
- NODE_OPTIONS=--max-old-space-size=8096\n
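To then start the platform itself, a hedged sketch (the serv script name is an assumption; check the package.json shipped with the release):
cd opencti
yarn install
NODE_ENV=production yarn serv   # assumed start script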
"},{"location":"deployment/installation/#workers-and-connectors","title":"Workers and connectors","text":"
OpenCTI workers and connectors are Python processes. If you want to limit the memory of the process, we recommend using Docker directly to do that. You can find more information in the official Docker documentation.
ElasticSearch is also a Java process. In order to set up the Java memory allocation, you can use the environment variable ES_JAVA_OPTS. You can find more information in the official ElasticSearch documentation.
Redis has a very small footprint on keys but will consume memory for the stream. By default, the size of the stream is limited to 2 million entries, which represents a memory footprint of around 8 GB. You can find more information in the Redis docker hub.
The RabbitMQ memory configuration can be found in the RabbitMQ official documentation. RabbitMQ will consume memory until a specific threshold, therefore it should be configured along with the Docker memory limitation.
OpenCTI supports multiple ways to integrate with other systems which do not have native connectors or plugins to the platform. Here are the technical features available to ease the connection and the integration of the platform with other applications.
Connectors list
If you are looking for the list of OpenCTI connectors or native integration, please check the OpenCTI Ecosystem.
"},{"location":"deployment/integrations/#native-feeds-and-streams","title":"Native feeds and streams","text":"
To ease integrations with other products, OpenCTI has built-in capabilities to deliver the data to third-parties.
It is possible to create as many CSV feeds as needed, based on filters and accessible in HTTP. CSV feeds are available in Data > Data sharing > CSV feeds.
When creating a CSV feed, you need to select one or multiple types of entities to make available. Then, you must assign a field (of an entity type) to each column in the CSV:
Details
For more information about CSV feeds, filters and configuration, please check the Native feeds page.
Most of the modern cybersecurity systems such as SIEMs, EDRs, XDRs and even firewalls support the TAXII protocol which is basically a paginated HTTP STIX feed. OpenCTI implements a TAXII 2.1 server with the ability to create as many TAXII collections as needed in Data > Data sharing > TAXII Collections.
TAXII collections are a sub-selection of the knowledge available in the platform and rely on filters. For instance, it is possible to create TAXII collections for pieces of malware with a given label, for indicators with a score greater than n, etc.
After implementing CSV feeds and TAXII collections, we figured out that those 2 stateless APIs are definitely not enough when it comes to tackling advanced information sharing challenges such as:
Real time transmission of the information (i.e. avoid hundreds of systems pulling data every 5 minutes).
Dependencies resolution (i.e. an intrusion set created by an organization, but the organization is not in the TAXII collection).
Partial update for huge entities such as reports (i.e. just having the update event).
Delete events when necessary (i.e. to handle indicators expiration in third party systems for instance).
That's why we've developed the live streams. They are available in Data > Data sharing > Live streams. As with TAXII collections, it is possible to create as many streams as needed using filters.
Streams implement the HTTP SSE (Server-sent events) protocol and give applications the possibility to consume a real time pure STIX 2.1 stream. Stream connectors in the OpenCTI Ecosystem are using live streams to consume data and do something such as create / update / delete information in SIEMs, XDRs, etc.
Your API key can be found in your profile, available by clicking on the top right icon.
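For instance, a hedged sketch of an authenticated call to the GraphQL endpoint using the API key as a bearer token (the URL is a placeholder and the about query is assumed to be available in your version):
curl https://your-opencti.example.com/graphql \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"query": "{ about { version } }"}'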
Using basic authentication
Username: Your platform username\nPassword: Your platform password\nAuthorization: Basic c2FtdWVsLmhhc3NpbmVBZmlsaWdyYW4uaW86TG91aXNlMTMwNCM=\n
Using client certificate authentication
To know how to configure the client certificate authentication, please consult the authentication configuration section.
"},{"location":"deployment/integrations/#api-and-libraries","title":"API and libraries","text":""},{"location":"deployment/integrations/#graphql-api","title":"GraphQL API","text":"
To allow analysts and developers to implement more custom or complex use cases, a full GraphQL API is available in the application on the /graphql endpoint.
The API can be queried using various GraphQL clients such as Postman, but you can leverage any HTTP client to forge GraphQL queries using POST methods.
The playground is available on the /graphql endpoint. A link button is also available in the profile of your user.
All the schema documentation is directly available in the playground.
If you are already logged in to OpenCTI with the same browser, you should be able to directly run some requests. If you are not authenticated or want to authenticate only through the playground, you can use a header configuration with your profile token.
Example of configuration (bottom left of the playground):
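A sketch of such a header configuration (the token value is a placeholder):
{
  "Authorization": "Bearer YOUR_PROFILE_TOKEN"
}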
Additional GraphQL documentation
To find out more about GraphQL and the playground, you can find two additional documentation pages: the GraphQL API page and the GraphQL playground page.
Since not everyone is familiar with GraphQL APIs, we've developed a Python library to ease the interaction with it. The library is pretty easy to use. To initiate the client:
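A minimal sketch (URL and token are placeholders):
from pycti import OpenCTIApiClient

# Initialize the client with the platform URL and an API token
opencti_api_client = OpenCTIApiClient(
    "https://your-opencti.example.com",
    "YOUR_API_TOKEN",
)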
The activity manager in OpenCTI is a component that monitors and logs the user actions in the platform such as login, settings update, and user activities if configured (read, update, etc.).
The expiration scheduler is responsible for monitoring expired elements in the platform. It cancels the access rights of expired user accounts and revokes expired indicators from the platform.
The synchronization manager enables the data sharing between multiple OpenCTI platforms. It allows the user to create and configure synchronizers which are processes that connect to the live streams of remote OpenCTI platforms and import the data into the local platform.
The retention manager is a component that allows the user to define rules to help delete data in OpenCTI that is no longer relevant or useful. This helps to optimize the performance and storage of the OpenCTI platform and ensures the quality and accuracy of the data.
The playbook manager handles the automation scenarios which can be fully customized and enabled by platform administrators to enrich, filter and modify the data created or updated in the platform.
Please read the Playbook automation page to get more information.
"},{"location":"deployment/managers/#file-index-manager","title":"File index manager","text":"
The file indexing manager extracts and indexes the text content of the files, and stores it in the database. It allows users to search for text content within files uploaded to the platform.
The telemetry manager periodically collects statistical data about platform usage.
More information about data telemetry can be found here.
"},{"location":"deployment/map/","title":"Deploy on-premise map server with OpenCTI styles","text":""},{"location":"deployment/map/#introduction","title":"Introduction","text":"
The OpenStreetMap tiles for the planet will take 80GB. Here are the instructions to deploy a local OpenStreetMap server with the OpenCTI styles.
"},{"location":"deployment/map/#create-directory-for-the-data-and-upload-planet-data","title":"Create directory for the data and upload planet data","text":"
When you launch the map server container, it will be necessary to mount a volume with the planet tiles data. Just create the directory for the data.
mkdir /var/YOUR_DATA_DIR\n
We have hosted the free-to-use planet tiles; just download the planet data from filigran.io.
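A hedged sketch of launching the tile server with the data directory mounted (the maptiler/tileserver-gl image and its port are assumptions; adapt to your deployment):
docker run -d --restart always -p 8080:8080 -v /var/YOUR_DATA_DIR:/data maptiler/tileserver-gl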
Once the server is running, you should see the list of available styles:
Click on \"Viewer\", and take the URL:
👉 http://YOUR_URL/styles/{ID}/....
In the OpenCTI configuration, just put:
Parameter Environment variable Value Description app:map_tile_server_dark APP__MAP_TILE_SERVER_DARK http://{YOUR_MAP_SERVER}/styles/{ID_DARK}/{z}/{x}/{y}.png The address of the OpenStreetMap provider with dark theme style app:map_tile_server_light APP__MAP_TILE_SERVER_LIGHT http://{YOUR_MAP_SERVER}/styles/{ID_LIGHT}/{z}/{x}/{y}.png The address of the OpenStreetMap provider with light theme style
Before starting the installation, let's discover how OpenCTI works, which dependencies are needed and what the minimal requirements are to deploy it in production.
The platform is the central part of the OpenCTI technological stack. It allows users to access the user interface, but also provides the GraphQL API used by connectors and workers to insert data. In the context of a production deployment, you may need to scale horizontally and launch multiple platforms behind a load balancer connected to the same databases (ElasticSearch, Redis, S3, RabbitMQ).
The workers are standalone Python processes consuming messages from the RabbitMQ broker in order to execute asynchronous write queries. You can launch as many workers as you need to increase write performance. At some point, write performance will be limited by the throughput of the ElasticSearch database cluster.
Number of workers
If you need to increase performance, it is better to launch more platforms to handle worker queries. The recommended setup is to have at least one platform for 3 workers (i.e. 9 workers distributed over 3 platforms).
The connectors are third-party pieces of software (Python processes) that can play five different roles on the platform:
| Type | Description | Examples |
|---|---|---|
| EXTERNAL_IMPORT | Pull data from remote sources, convert it to STIX2 and insert it on the OpenCTI platform. | MITRE Datasets, MISP, CVE, AlienVault, Mandiant, etc. |
| INTERNAL_ENRICHMENT | Listen for new OpenCTI entities or user requests, pull data from remote sources to enrich. | Shodan, DomainTools, IpInfo, etc. |
| INTERNAL_IMPORT_FILE | Extract data from files uploaded on OpenCTI through the UI or the API. | STIX 2.1, PDF, Text, HTML, etc. |
| INTERNAL_EXPORT_FILE | Generate export from OpenCTI data, based on a single object or a list. | STIX 2.1, CSV, PDF, etc. |
| STREAM | Consume a platform data stream and do something with events. | Splunk, Elastic Security, Q-Radar, etc. |
List of connectors
You can find all currently available connectors in the OpenCTI Ecosystem.
"},{"location":"deployment/overview/#infrastructure-requirements","title":"Infrastructure requirements","text":""},{"location":"deployment/overview/#dependencies","title":"Dependencies","text":"Component Version CPU RAM Disk type Disk space ElasticSearch / OpenSearch >= 8.0 / >= 2.9 2 cores \u2265 8GB SSD \u2265 16GB Redis >= 7.1 1 core \u2265 1GB SSD \u2265 16GB RabbitMQ >= 3.11 1 core \u2265 512MB Standard \u2265 2GB S3 / MinIO >= RELEASE.2023-02 1 core \u2265 128MB SSD \u2265 16GB"},{"location":"deployment/overview/#platform_1","title":"Platform","text":"Component CPU RAM Disk type Disk space OpenCTI Core 2 cores \u2265 8GB None (stateless) - Worker(s) 1 core \u2265 128MB None (stateless) - Connector(s) 1 core \u2265 128MB None (stateless) -
Clustering
To have more details about deploying OpenCTI and its dependencies in cluster mode, please read the dedicated section.
OpenCTI is an open and modular platform. A lot of connectors, plugins and clients are created by Filigran and by the community. You can find here other resources available to complete your OpenCTI journey.
Access monthly sectorial analysis from our expert team, based on knowledge and data collected by our partners.
Case studies
Explore the Filigran case studies about stories and usages of the platform among our communities and customers.
"},{"location":"deployment/rollover/","title":"Indices and rollover policies","text":"
Default rollover policies
Since OpenCTI 5.9.0, rollover policies are automatically created when the platform is initialized for the first time. If your platform has been initialized using an older version of OpenCTI or if you would like to understand (and customize) rollover policies please read the following documentation.
ElasticSearch and OpenSearch both support rollover on indices. OpenCTI has been designed to use aliases for indices and so supports index lifecycle policies very well. Thus, by default OpenCTI initializes indices with a suffix of -00001 and uses wildcards to query indices. When rollover policies are implemented (the default starting with OpenCTI 5.9.X if you initialized your platform at this version or later), indices are split to keep a reasonable volume of data in shards.
"},{"location":"deployment/rollover/#opencti-integration-user-permissions-in-opensearchelasticsearch","title":"OpenCTI Integration User Permissions in OpenSearch/ElasticSearch","text":"
Index Permissions
Patterns: opencti* (Dependent on the parameter elasticsearch:index_prefix value)
Permissions: indices_all
Cluster Permissions
cluster_composite_ops_ro
cluster_manage_index_templates
cluster:admin/ingest/pipeline/put
cluster:admin/opendistro/ism/policy/write
cluster:monitor/health
cluster:monitor/main
cluster:monitor/state
indices:admin/index_template/put
indices:data/read/scroll/clear
indices:data/read/scroll
indices:data/write/bulk
About indices:* in Cluster Permissions
It is crucial to include indices:* permissions in Cluster Permissions for the proper functioning of the OpenCTI integration. Removing these, even if already present in Index Permissions, may result in startup issues for the OpenCTI Platform.
By default, a rollover policy is applied on all indices used by OpenCTI.
opencti_deleted_objects
opencti_files
opencti_history
opencti_inferred_entities
opencti_inferred_relationships
opencti_internal_objects
opencti_internal_relationships
opencti_stix_core_relationships
opencti_stix_cyber_observable_relationships
opencti_stix_cyber_observables
opencti_stix_domain_objects
opencti_stix_meta_objects
opencti_stix_meta_relationships
opencti_stix_sighting_relationships
For your information, the indices which can grow rapidly are:
Index opencti_stix_meta_relationships: it contains all the nested relationships between objects and labels / marking definitions / external references / authors, etc.
Index opencti_history: it contains the history log of all objects in the platform.
Index opencti_stix_cyber_observables: it contains all observables stored in the platform.
Index opencti_stix_core_relationships: it contains all main STIX relationships stored in the platform.
Here is the recommended policy (initialized starting 5.9.X):
Maximum primary shard size: 50 GB
Maximum age: 365 days
Maximum documents: 75,000,000
"},{"location":"deployment/rollover/#applying-rollover-policies-on-existing-indices","title":"Applying rollover policies on existing indices","text":"
Procedure information
Please read the following only if your platform has been initialized before 5.9.0; otherwise lifecycle policies have already been created (but you can still customize them).
Unfortunately, to implement rollover policies on ElasticSearch / OpenSearch indices, you will need to re-index all the data into new indices using ElasticSearch capabilities.
Then, in the OpenCTI configuration, change the ElasticSearch / OpenSearch default prefix to octi (default is opencti).
"},{"location":"deployment/rollover/#create-the-rollover-policy","title":"Create the rollover policy","text":"
Create a rollover policy named octi-ilm-policy (in Kibana, Management > Index Lifecycle Policies):
Maximum primary shard size: 50 GB
Maximum age: 365 days
Maximum documents: 75,000,000
"},{"location":"deployment/rollover/#create-index-templates","title":"Create index templates","text":"
In Kibana, clone the opencti-index-template to have one index template per OpenCTI index with the appropriate rollover policy, index pattern and rollover alias (in Kibana, Management > Index Management > Index Templates).
Create the following index templates:
octi_deleted_objects
octi_files
octi_history
octi_inferred_entities
octi_inferred_relationships
octi_internal_objects
octi_internal_relationships
octi_stix_core_relationships
octi_stix_cyber_observable_relationships
octi_stix_cyber_observables
octi_stix_domain_objects
octi_stix_meta_objects
octi_stix_meta_relationships
octi_stix_sighting_relationships
Here is the overview of all templates (you should have something with octi_ instead of opencti_).
"},{"location":"deployment/rollover/#apply-rollover-policy-on-all-index-templates","title":"Apply rollover policy on all index templates","text":"
Then, going back to the index lifecycle policies screen, click the "+" button of the octi-ilm-policy to add the policy to each previously created index template, with the proper "Alias for rollover index".
"},{"location":"deployment/rollover/#bootstrap-all-new-indices","title":"Bootstrap all new indices","text":"
Before we can re-index, we need to create the new indices with aliases.
```
PUT octi_history-000001
{
  "aliases": {
    "octi_history": {
      "is_write_index": true
    }
  }
}
```
Repeat this step for all indices:
octi_deleted_objects
octi_files
octi_history
octi_inferred_entities
octi_inferred_relationships
octi_internal_objects
octi_internal_relationships
octi_stix_core_relationships
octi_stix_cyber_observable_relationships
octi_stix_cyber_observables
octi_stix_domain_objects
octi_stix_meta_objects
octi_stix_meta_relationships
octi_stix_sighting_relationships
"},{"location":"deployment/rollover/#re-index-all-indices","title":"Re-index all indices","text":"
Using the reindex API, re-index all indices one by one:
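For instance, a sketch for the history index (the source alias assumes the default opencti prefix; repeat with the matching source and destination for each index):

```
POST _reindex?wait_for_completion=false
{
  "source": {
    "index": "opencti_history"
  },
  "dest": {
    "index": "octi_history"
  }
}
```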
This page aims to explain the typical errors you can have with your OpenCTI platform.
"},{"location":"deployment/troubleshooting/#finding-the-relevant-logs","title":"Finding the relevant logs","text":"
It is highly recommended to monitor the error logs of the platforms, workers and connectors. All the components have log outputs in an understandable JSON format. If necessary, it is always possible to increase the log level. In production, it is recommended to have the log level set to error.
After 5 retries, if an element required to create another element is missing, the platform raises an exception. It usually comes from a connector that generates inconsistent STIX 2.1 bundles.
Cant upsert entity. Too many entities resolved
OpenCTI received an entity which matches too many other entities in the platform. In this condition, the platform cannot take a decision. You need to dig into the data bundle to identify why it matches so many entities, and fix the data in the bundle or in the platform according to what you expect.
Execution timeout, too many concurrent call on the same entities
The platform supports multi workers and multiple parallel creation but different parameters can lead to some locking timeout in the execution.
Throughput capacity of your ElasticSearch
Number of workers started at the same time
Dependencies between data
Merging capacity of OpenCTI
If you have this kind of error, limit the number of workers deployed. Try to find the right balance of the number of workers, connectors and elasticsearch sizing.
Depending on your installation mode, the upgrade path may change.
Migrations
The platform takes care of all necessary underlying migrations in the databases, if any. You can upgrade OpenCTI from any version to the latest one, including skipping multiple major releases.
The GraphQL playground is an integrated development environment (IDE) provided by OpenCTI for exploring and testing GraphQL APIs. It offers a user-friendly interface that allows developers to interactively query the GraphQL schema, experiment with different queries, and visualize the responses.
The Playground provides a text editor where developers can write GraphQL queries, mutations, and subscriptions. As you type, the Playground offers syntax highlighting, autocompletion, and error checking to aid in query composition.
Developers can access comprehensive documentation for the GraphQL schema directly within the Playground. This documentation includes descriptions of all available types, fields, and directives, making it easy to understand the data model and construct queries.
The playground keeps track of previously executed queries, allowing developers to revisit and reuse queries from previous sessions. This feature streamlines the development process by eliminating the need to retype complex queries.
Upon executing a query, the playground displays the response data in a structured and readable format. JSON responses are presented in a collapsible tree view, making it easy to navigate nested data structures and inspect individual fields.
Developers can explore the GraphQL schema using the built-in schema viewer. This feature provides a graphical representation of the schema, showing types, fields, and their relationships. Developers can explore the schema and understand its structure.
To access the GraphQL playground, navigate to the GraphQL endpoint of your OpenCTI instance: https://[your-opencti-instance]/graphql. Then, follow these steps to utilize the playground:
Query editor: Write GraphQL queries, mutations, and subscriptions in the text editor. Use syntax highlighting and autocompletion to speed up query composition.
Documentation explorer: Access documentation for the GraphQL schema by clicking on the "Docs" tab on the right. Browse types, fields, and descriptions to understand the available data and query syntax.
Query history: View and execute previously executed queries from the "History" tab on the top. Reuse queries and experiment with variations without retyping.
Response pane: Visualize query responses in the response pane. Expand and collapse sections to navigate complex data structures and inspect individual fields.
Schema viewer: Explore the GraphQL schema interactively using the "Schema" tab on the right. Navigate types, fields, and relationships to understand the data model and plan queries.
A connector in OpenCTI is a service that runs next to the platform and can be implemented in almost any programming language that has STIX2 support. Connectors are used to extend the functionality of OpenCTI and allow operators to shift some of the processing workload to external services. To use the conveniently provided OpenCTI connector SDK you need to use Python3 at the moment.
We chose to have a very decentralized approach to connectors, in order to bring maximum freedom to developers and vendors. So a connector on OpenCTI can be defined as a standalone Python 3 process that pushes an understandable format of data to an ingestion queue of messages.
Each connector must implement a long-running process that can be launched just by executing the main Python file. The only mandatory dependency is the OpenCTIConnectorHelper class that enables the connector to send data to OpenCTI.
In the beginning, first think about your use case to choose an appropriate connector type: what do you want to achieve with your connector? The following table gives you an overview of the current connector types and some typical use cases:
Connector types
| Type | Typical use cases | Example connector |
|---|---|---|
| EXTERNAL_IMPORT | Integrate external TI provider, integrate external TI platform | AlienVault |
| INTERNAL_ENRICHMENT | Enhance existing data with additional knowledge | AbuseIP |
| INTERNAL_IMPORT_FILE | (Bulk) import knowledge from files | Import document |
| INTERNAL_EXPORT_FILE | (Bulk) export knowledge to files | STIX 2.1, CSV |
| STREAM | Integrate external TI provider, integrate external TI platform | Elastic Security |
After you've selected your connector type, make yourself familiar with STIX2 and the supported relationships in OpenCTI. Having some knowledge about the internal data models will help you a lot with the implementation of your idea.
To develop and test your connector, you need a running OpenCTI instance with the frontend and the messaging broker accessible. If you don't plan on developing anything for the OpenCTI platform or the frontend, the easiest setup for connector development is using the docker setup. For more details, see here.
To give you an easy starting point we prepared an example connector in the public repository you can use as template to bootstrap your development.
Some prerequisites we recommend for following this tutorial:
Code editor with good Python3 support (e.g. Visual Studio Code with the Python extension pack)
Python3 + setuptools is installed and configured
Command shell (either Linux/Mac terminal or WSL on Windows)
In the terminal check out the connectors repository and copy the template connector to $myconnector (replace it with your name throughout the following text examples).
```bash
$ pip3 install black flake8 pycti
# Fork the current repository, then clone your fork
$ git clone https://github.com/YOUR-USERNAME/connectors.git
$ cd connectors
$ git remote add upstream https://github.com/OpenCTI-Platform/connectors.git
# Create a branch for your feature/fix
$ git checkout -b [branch-name]
# Copy the appropriate template directory for the connector type
$ cp -r templates/$connector_type $connector_type/$myconnector
$ cd $connector_type/$myconnector
$ ls -R
Dockerfile docker-compose.yml requirements.txt
README.md entrypoint.sh src

./src:
lib main.py

./src/lib:
$connector_type.py
```
"},{"location":"development/connectors/#changing-the-template","title":"Changing the template","text":"
There are a few files in the template we need to change for our connector to be unique. You can check all the places where you need to change your connector name with the following command (the output will look similar):
```bash
$ grep -Ri template .

README.md:# OpenCTI Template Connector
README.md:| `connector_type` | `CONNECTOR_TYPE` | Yes | Must be `Template_Type` (this is the connector type). |
README.md:| `connector_name` | `CONNECTOR_NAME` | Yes | Option `Template` |
README.md:| `connector_scope` | `CONNECTOR_SCOPE` | Yes | Supported scope: Template Scope (MIME Type or Stix Object) |
README.md:| `template_attribute` | `TEMPLATE_ATTRIBUTE` | Yes | Additional setting for the connector itself |
docker-compose.yml: connector-template:
docker-compose.yml: image: opencti/connector-template:4.5.5
docker-compose.yml: - CONNECTOR_TYPE=Template_Type
docker-compose.yml: - CONNECTOR_NAME=Template
docker-compose.yml: - CONNECTOR_SCOPE=Template_Scope # MIME type or Stix Object
entrypoint.sh:cd /opt/opencti-connector-template
Dockerfile:COPY src /opt/opencti-template
Dockerfile: cd /opt/opencti-connector-template && \
src/main.py:class Template:
src/main.py: "TEMPLATE_ATTRIBUTE", ["template", "attribute"], config, True
src/main.py: connectorTemplate = Template()
src/main.py: connectorTemplate.run()
src/config.yml.sample: type: 'Template_Type'
src/config.yml.sample: name: 'Template'
src/config.yml.sample: scope: 'Template_Scope' # MIME type or SCO
```
Required changes:
Change Template or template mentions to your connector name e.g. ImportCsv or importcsv
Change TEMPLATE mentions to your connector name e.g. IMPORTCSV
Change Template_Scope mentions to the required scope of your connector. For processing imported files, that can be the MIME type e.g. application/pdf; for enriching existing information in OpenCTI, define the STIX object's name e.g. Report. Multiple scopes can be separated by a simple comma
Change Template_Type to the connector type you wish to develop. The OpenCTI types are defined hereafter:
EXTERNAL_IMPORT
INTERNAL_ENRICHMENT
INTERNAL_EXPORT_FILE
INTERNAL_IMPORT_FILE
STREAM
"},{"location":"development/connectors/#development","title":"Development","text":""},{"location":"development/connectors/#initialize-the-opencti-connector-helper","title":"Initialize the OpenCTI connector helper","text":"
After getting the configuration parameters of your connector, you have to initialize the OpenCTI connector helper by using the pycti Python library. This is shown in the following example:
```python
import os

import yaml
from pycti import OpenCTIConnectorHelper, get_config_variable


class TemplateConnector:
    def __init__(self):
        # Instantiate the connector helper from config
        config_file_path = os.path.dirname(os.path.abspath(__file__)) + "/config.yml"
        config = (
            yaml.load(open(config_file_path), Loader=yaml.SafeLoader)
            if os.path.isfile(config_file_path)
            else {}
        )
        self.helper = OpenCTIConnectorHelper(config)
        self.custom_attribute = get_config_variable(
            "TEMPLATE_ATTRIBUTE", ["template", "attribute"], config
        )
```
Since there are some basic differences in the tasks of the different connector types, the structure is also a bit type dependent. While the external-import and the stream connectors run independently at a regular interval or constantly, the other three connector types only run when requested by the OpenCTI platform.
While self-triggered connectors run on their own, OpenCTI-triggered connectors need to define a callback function which is executed for the connector to start its work. This is done via self.helper.listen(self._process_message). The difference in the setup can be seen in the appended examples.
```python
import time

from pycti import OpenCTIConnectorHelper, get_config_variable


class TemplateConnector:
    def __init__(self) -> None:
        # Initialization procedures
        [...]

    def _process_message(self, data: dict) -> str:
        # Main procedure

    # Start the main loop
    def start(self) -> None:
        self.helper.listen(self._process_message)


if __name__ == "__main__":
    try:
        template_connector = TemplateConnector()
        template_connector.start()
    except Exception as e:
        print(e)
        time.sleep(10)
        exit(0)
```
"},{"location":"development/connectors/#write-and-read-operations","title":"Write and Read Operations","text":"
When using the OpenCTIConnectorHelper class, there are two ways to read data from or write data to the OpenCTI platform:
via the OpenCTI API interface via self.helper.api
via the OpenCTI worker via self.helper.send_stix2_bundle
"},{"location":"development/connectors/#sending-data-to-the-opencti-platform","title":"Sending data to the OpenCTI platform","text":"
The recommended way of creating or updating data in the OpenCTI platform is via the OpenCTI worker. This enables the connector to just send and forget about thousands of entities at once, without having to think about the ingestion order, performance or error handling.
⚠️ **Please DO NOT use the api interface to create new objects in connectors.**
The OpenCTI connector helper method send_stix2_bundle must be used to send data to OpenCTI. The send_stix2_bundle function takes 2 arguments.
A serialized STIX2 bundle as a string (mandatory)
A list of entities types that should be ingested (optional)
Here is an example using the STIX2 Python library:
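The following is a minimal sketch, assuming self.helper is an initialized OpenCTIConnectorHelper and the indicator values are placeholders:

```python
from stix2 import Bundle, Indicator

# Build a STIX2 object (placeholder values) and wrap it in a bundle
indicator = Indicator(
    name="C2 server of the campaign",
    pattern="[domain-name:value = 'www.example.com']",
    pattern_type="stix",
)
bundle = Bundle(objects=[indicator], allow_custom=True)

# Send the serialized bundle to the OpenCTI worker queue
self.helper.send_stix2_bundle(bundle.serialize())
```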
"},{"location":"development/connectors/#reading-from-the-opencti-platform","title":"Reading from the OpenCTI platform","text":"
Read queries to the OpenCTI platform can be achieved using the API interface, and the STIX IDs of the entities found can be attached to reports to create the relationship between those two entities.
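As a sketch of such a read query (the entity name is hypothetical, and self.helper is assumed to be an initialized OpenCTIConnectorHelper):

```python
# Look up an existing entity through the API interface (hypothetical name)
entity = self.helper.api.intrusion_set.read(
    filters={
        "mode": "and",
        "filters": [{"key": "name", "values": ["Sandworm"]}],
        "filterGroups": [],
    }
)
```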
If you want to add the found entity via objects_refs to another SDO, simply add a list of stix_ids to the SDO. Here's an example using the entity from the code snippet above:
```python
from stix2 import Report

[...]

report = Report(
    id=report["standard_id"],
    object_refs=[entity["standard_id"]],
)
```
When something crashes for a user, you as a developer want to know as much as possible about this incident to easily improve your code and remove this issue. To do so, it is very helpful if your connector documents what it does. Use info messages for big changes like the beginning or the finishing of an operation, but to facilitate your bug removal attempts, implement debug messages for minor operation changes to document different steps in your code.
When encountering a crash, the connector's user can easily restart the troubling connector with the debug logging activated.
CONNECTOR_LOG_LEVEL=debug
Using those additional log messages, the bug report is more enriched with information about the possible cause of the problem. Here's an example of how the logging should be implemented:
```python
def run(self) -> None:
    self.helper.log_info("Template connector starts")
    results = self._ask_for_news()
    [...]

def _ask_for_news(self) -> list:
    overall = []
    for i in range(0, 10):
        self.helper.log_debug(f"Asking about news with count '{i}'")
        # Do something that produces `result`
        self.helper.log_debug(f"Result: '{result}'")
        overall.append(result)
    return overall
```
Please make sure that the debug messages are rich in useful information, but that they are not redundant and that the user is not drowned in unnecessary information.
If you are still unsure about how to implement certain things in your connector, we advise you to have a look at the code of other connectors of the same type. Maybe they are already using an approach which is suitable for addressing your problem.
"},{"location":"development/connectors/#opencti-triggered-connector-special-cases","title":"OpenCTI triggered Connector - Special cases","text":""},{"location":"development/connectors/#data-layout-of-dictionary-from-callback-function","title":"Data Layout of Dictionary from Callback function","text":"
OpenCTI sends the connector a few instructions via the data dictionary in the callback function. Depending on the connector type, the data dictionary content is a bit different. Here are a few examples for each connector type.
Internal Import Connector
{ \n \"file_id\": \"<fileId>\",\n \"file_mime\": \"application/pdf\", \n \"file_fetch\": \"storage/get/<file_id>\", // Path to get the file\n \"entity_id\": \"report--82843863-6301-59da-b783-fe98249b464e\", // Context of the upload\n}\n
Internal Enrichment Connector
{ \n \"entity_id\": \"<stixCoreObjectId>\" // StixID of the object wanting to be enriched\n}\n
Internal Export Connector
{ \n \"export_scope\": \"single\", // 'single' or 'list'\n \"export_type\": \"simple\", // 'simple' or 'full'\n \"file_name\": \"<fileName>\", // Export expected file name\n \"max_marking\": \"<maxMarkingId>\", // Max marking id\n \"entity_type\": \"AttackPattern\", // Exported entity type\n // ONLY for single entity export\n \"entity_id\": \"<entity.id>\", // Exported element\n // ONLY for list entity export\n \"list_params\": \"[<parameters>]\" // Parameters for finding entities\n}\n
"},{"location":"development/connectors/#self-triggered-connector-special-cases","title":"Self triggered Connector - Special cases","text":""},{"location":"development/connectors/#initiating-a-work-before-pushing-data","title":"Initiating a 'Work' before pushing data","text":"
For self-triggered connectors, OpenCTI has to be told about new jobs to process and import. This is done by registering a so-called work before sending the STIX bundle and signalling the end of a work. Here is an example:
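A sketch of the work registration flow, assuming self.helper is an initialized OpenCTIConnectorHelper and `bundle` is an already serialized STIX2 bundle (hypothetical here):

```python
import time
from datetime import datetime

# Assumes self.helper is an initialized OpenCTIConnectorHelper and `bundle`
# is a serialized STIX2 bundle (both hypothetical in this sketch)
now = datetime.utcfromtimestamp(int(time.time()))
friendly_name = "Template connector run @ " + now.strftime("%Y-%m-%d %H:%M:%S")

# Register the work on the platform before pushing the bundle
work_id = self.helper.api.work.initiate_work(self.helper.connect_id, friendly_name)

# Send the bundle, attached to this work
self.helper.send_stix2_bundle(bundle, work_id=work_id)

# Signal the end of the work
self.helper.api.work.to_processed(work_id, "Connector successfully run")
```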
By implementing the work registration, the works will show up as shown in this screenshot for the MITRE ATT&CK connector:
The connector is also responsible for making sure that it runs in certain intervals. In most cases, the intervals are definable in the connector config and then only need to be set and updated during the runtime.
```python
class TemplateConnector:
    def __init__(self) -> None:
        # Initialization procedures
        [...]
        self.template_interval = get_config_variable(
            "TEMPLATE_INTERVAL", ["template", "interval"], config, True
        )

    def get_interval(self) -> int:
        return int(self.template_interval) * 60 * 60 * 24

    def run(self) -> None:
        self.helper.log_info("Fetching knowledge...")
        while True:
            try:
                # Get the current timestamp and check
                timestamp = int(time.time())
                current_state = self.helper.get_state()
                if current_state is not None and "last_run" in current_state:
                    last_run = current_state["last_run"]
                    self.helper.log_info(
                        "Connector last run: "
                        + datetime.utcfromtimestamp(last_run).strftime(
                            "%Y-%m-%d %H:%M:%S"
                        )
                    )
                else:
                    last_run = None
                    self.helper.log_info("Connector has never run")
                # If the last_run is more than interval-1 day
                if last_run is None or (
                    (timestamp - last_run)
                    > ((int(self.template_interval) - 1) * 60 * 60 * 24)
                ):
                    timestamp = int(time.time())
                    now = datetime.utcfromtimestamp(timestamp)
                    friendly_name = "Connector run @ " + now.strftime("%Y-%m-%d %H:%M:%S")

                    ###
                    # RUN CODE HERE (register a work, fetch and send data; sets work_id)
                    ###

                    # Store the current timestamp as a last run
                    self.helper.log_info(
                        "Connector successfully run, storing last_run as "
                        + str(timestamp)
                    )
                    self.helper.set_state({"last_run": timestamp})
                    message = (
                        "Last_run stored, next run in: "
                        + str(round(self.get_interval() / 60 / 60 / 24, 2))
                        + " days"
                    )
                    self.helper.api.work.to_processed(work_id, message)
                    self.helper.log_info(message)
                    time.sleep(60)
                else:
                    new_interval = self.get_interval() - (timestamp - last_run)
                    self.helper.log_info(
                        "Connector will not run, next run in: "
                        + str(round(new_interval / 60 / 60 / 24, 2))
                        + " days"
                    )
                    time.sleep(60)
            except (KeyboardInterrupt, SystemExit):
                # Stop the connector gracefully on interruption
                self.helper.log_info("Connector stop")
                exit(0)
```
"},{"location":"development/connectors/#running-the-connector","title":"Running the connector","text":"
For development purposes, it is easier to simply run the python script locally until everything works as it should.
```bash
$ virtualenv env
$ source ./env/bin/activate
$ pip3 install -r requirements.txt
$ cp config.yml.sample config.yml
# Define the opencti url and token, as well as the connector's id
$ vim config.yml
$ python3 main.py
INFO:root:Listing Threat-Actors with filters null.
INFO:root:Connector registered with ID: a2de809c-fbb9-491d-90c0-96c7d1766000
INFO:root:Starting ping alive thread
...
```
Before submitting a Pull Request, please test your code for different use cases and scenarios. We don't have an automatic testing suite for the connectors yet, thus we highly depend on developers thinking about creative scenarios their code could encounter.
"},{"location":"development/connectors/#prepare-for-release","title":"Prepare for release","text":"
If you plan to provide your connector to be used by the community (❤️), your code should pass the following (minimum) criteria.
```bash
# Linting with flake8 contains no errors or warnings
$ flake8 --ignore=E,W
# Verify formatting with black
$ black .
All done! ✨ 🍰 ✨
1 file left unchanged.
# Verify import sorting
$ isort --profile black .
Fixing /path/to/connector/file.py
# Push your feature/fix on Github
$ git add [file(s)]
$ git commit -m "[connector_name] descriptive message"
$ git push origin [branch-name]
# Open a pull request with the title "[connector_name] message"
```
If you have any trouble with this just reach out to the OpenCTI core team. We are happy to assist with this.
As OpenCTI has a dependency on ElasticSearch, you have to set vm.max_map_count before running the containers, as mentioned in the ElasticSearch documentation.
```bash
$ sudo sysctl -w vm.max_map_count=262144
```
"},{"location":"development/environment_ubuntu/#nodejs-and-yarn","title":"NodeJS and yarn","text":"
The platform is developed on nodejs technology, so you need to install node and the yarn package manager.
The development stack requires some base software that needs to be installed.
"},{"location":"development/environment_windows/#docker-or-podman","title":"Docker or podman","text":"
Platform dependencies in development are deployed through container management, so you need to install a container stack.
We currently support docker and podman.
Docker Desktop from - https://docs.docker.com/desktop/install/windows-install/
Install the new version of the WSL2 kernel - https://docs.microsoft.com/windows/wsl/wsl2-kernel. This will require a reboot.
Shell out to CMD as Administrator and run the following powershell command:
wsl --set-default-version 2
Reboot computer and continue to next step
Load Docker Application
NOTE DOCKER LICENSE - You are agreeing to the license for Non-commercial Open Source Project use. OpenCTI is Open Source, and the version you would possibly be contributing to enhancing is the unpaid non-commercial/non-enterprise version. If your intention is different, please consult with your organization's legal/licensing department.
Leave Docker Desktop running
"},{"location":"development/environment_windows/#nodejs-and-yarn","title":"NodeJS and yarn","text":"
The platform is developed on nodejs technology, so you need to install node and the yarn package manager.
Install NodeJS from - https://nodejs.org/download/release/v16.20.0/node-v16.20.0-x64.msi
Select the option for installing Chocolatey on the Tools for Native Modules screen
This will do the install for you automatically - https://chocolatey.org/packages/visualstudio2019-workload-vctools
Includes Python 3.11.4
Shell out to CMD prompt as Administrator and install/run:
For workers and connectors, a python runtime is needed. Even if you already have a python runtime installed through the node installation, on Windows some nodejs packages will be recompiled with the python and C++ runtime.
For this reason Visual Studio Build Tools is required.
Install Visual Studio Build Tools from - https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools
Check off Desktop Development with C++
Run install
"},{"location":"development/environment_windows/#git-and-dev-tool","title":"Git and dev tool","text":"
Download GIT for Windows (64-bit Setup)- https://git-scm.com/download/win
Just use defaults on each screen
Install your preferred IDE
Intellij community edition - https://www.jetbrains.com/idea/download/
This summary should give you a detailed setup description for initiating the OpenCTI setup environment necessary for developing on the OpenCTI platform, a client library or the connectors. This page documents how to set up an "All-in-One" development environment for OpenCTI. The devenv will contain data of 3 different repositories:
The GraphQL API is developed in JS and with some python code. As it's an \"all-in-one\" installation, the python environment will be installed in a virtual environment.
The API can be specifically configured with files depending on the starting profile. By default, the default.json file is used and will be correctly configured for local usage except for admin password.
So you need to create a development profile file. You can duplicate the default file and adapt if you need.
```bash
cd ~/opencti/opencti-platform/opencti-graphql/config
cp default.json development.json
```
At minimum adapt the admin part for the password and token.
To start the tests, you will need to create a test.json configuration file. You can use the same dependencies by only adapting the prefix for all dependencies.
Tests are using dedicated indices in the Elastic database (prefixed with test-* or the prefix that you have set up in test.json).
The following command will run the complete test suite using vitest, which might take more than 30 minutes. It starts by cleaning up the test database and seeding a minimal dataset. The file vitest.config.test.ts can be edited to run only a specific file pattern.
yarn test:dev
We also provide utility scripts to ease the development of new tests, especially integration tests that rely on the sample data loaded after executing 00-inject/loader-test.ts.
To solely initialize the test database with this sample dataset run:
yarn test:dev:init
And then, execute the following command to run the pattern specified in the file vitest.config.test.ts, or add a file name to the command line to run only this test file.
yarn test:dev:resume
This last command will NOT cleanup & initialize the test database and thus will be quicker to execute.
Based on development source you can build the package for production. This package will be minified and optimized with esbuild.
```bash
$ cd opencti-frontend
$ yarn build
$ cd ../opencti-graphql
$ yarn build
```
After the build you can start the production build with yarn serv. This build will use the production.json configuration file.
```bash
$ cd ../opencti-graphql
$ yarn serv
```
"},{"location":"development/platform/#continuous-integration-and-features-cross-repository","title":"Continuous Integration and features cross repository","text":"
When a feature requires changes in two or more repositories among opencti, connectors and client-python, a specific convention must be used to have the continuous integration build them all together.
"},{"location":"development/platform/#naming-convention-of-branch","title":"Naming convention of branch","text":"
The Pull Request on the opencti repository should be named (issue or bug)/number + an optional suffix, for example: issue/7062-contributing
The pull request on connectors or client-python should refer to the opencti one by starting with "opencti/" followed by the same name. Example: opencti/issue/7062-contributing
Note that if there are several matches, the first one is taken. So for example, having issue/7062-contributing and issue/7062 both marked as "multi-repository" is not a good idea.
To install the latest Python client library, please use pip:
```bash
$ pip3 install pycti
```
"},{"location":"development/python/#using-the-helper-functions","title":"Using the helper functions","text":"
The main class OpenCTIApiClient contains everything you need to interact with the platform; you just have to initialize it.
The following example shows how to create an indicator in OpenCTI using the python library, with a TLP marking and an OpenCTI compatible date format.
```python
from dateutil.parser import parse
from pycti import OpenCTIApiClient
from stix2 import TLP_GREEN

# OpenCTI API client initialization
opencti_api_client = OpenCTIApiClient("https://myopencti.server", "mysupersecrettoken")

# Define an OpenCTI compatible date
date = parse("2019-12-01").strftime("%Y-%m-%dT%H:%M:%SZ")

# Get the OpenCTI marking for stix2 TLP_GREEN
TLP_GREEN_CTI = opencti_api_client.marking_definition.read(id=TLP_GREEN["id"])

# Use the client to create an indicator in OpenCTI
indicator = opencti_api_client.indicator.create(
    name="C2 server of the new campaign",
    description="This is the C2 server of the campaign",
    pattern_type="stix",
    pattern="[domain-name:value = 'www.5z8.info']",
    x_opencti_main_observable_type="IPv4-Addr",
    valid_from=date,
    update=True,
    markingDefinitions=[TLP_GREEN_CTI["id"]],
)
```
OpenCTI provides a comprehensive API based on GraphQL, allowing users to perform various actions programmatically. The API enables users to interact with OpenCTI's functionality and data, offering a powerful tool for automation, integration, and customization. All actions that can be performed through the platform's graphical interface are also achievable via the API.
Access to the OpenCTI API requires authentication using standard authentication mechanisms. Access rights to data via the API will be determined by the access privileges of the user associated with the API key. For authentication, users need to include the following headers in their API requests:
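A sketch of the expected headers (the API key value is a placeholder):

```
Authorization: Bearer <YOUR_API_KEY>
Content-Type: application/json
```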
The OpenCTI API consists of various endpoints corresponding to different functionalities and operations within the platform. These endpoints allow users to perform actions such as querying data, creating or updating entities, and more. Users can refer to the Understand GraphQL section to understand how it works.
Documentation for the OpenCTI API, including schema definitions, the list of filters available and queryable fields, is available through the OpenCTI platform. It can be found on the GraphQL playground. However, query examples and mutation examples are not yet available. In the meantime, users can explore the available endpoints and their functionality by inspecting network traffic in the browser's developer tools or by examining the source code of the Python client.
GraphQL is a powerful query language for APIs that enables clients to request exactly the data they need. Unlike traditional REST APIs, which expose fixed endpoints and return predefined data structures, GraphQL APIs allow clients to specify the shape and structure of the data they require.
"},{"location":"reference/api/#core-concepts","title":"Core concepts","text":""},{"location":"reference/api/#schema-definition-language-sdl","title":"Schema Definition Language (SDL)","text":"
GraphQL APIs are defined by a schema, which describes the types of data that can be queried and the relationships between them. The schema is written using the Schema Definition Language (SDL), which defines types, fields, and their relationships.
GraphQL uses a query language to request data from the server. Clients can specify exactly which fields they need and how they are related, enabling precise data retrieval without over-fetching or under-fetching.
Resolvers are functions responsible for fetching the requested data. Each field in the GraphQL schema corresponds to a resolver function, which determines how to retrieve the data from the underlying data sources.
"},{"location":"reference/api/#how-it-works","title":"How it Works","text":"
Schema definition: The API provider defines a GraphQL schema using SDL, specifying the types and fields available for querying.
Query execution: Clients send GraphQL queries to the server, specifying the data they need. The server processes the query, resolves each field, and constructs a response object with the requested data.
Validation and execution: The server validates the query against the schema to ensure it is syntactically and semantically correct. If validation passes, the server executes the query, invoking the appropriate resolver functions to fetch the requested data.
Data retrieval: Resolvers fetch data from the relevant data sources, such as databases, APIs, or other services. They transform the raw data into the shape specified in the query and return it to the client.
Response formation: Once all resolvers have completed, the server assembles the response object containing the requested data and sends it back to the client.
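To make this concrete, here is a minimal sketch of a relay-style query against an OpenCTI instance (the reports connection and its fields follow the schema visible in the playground; treat the exact field names as an assumption to verify there):

```graphql
# Fetch the first 10 reports with a few selected fields
query {
  reports(first: 10) {
    edges {
      node {
        id
        name
        published
      }
    }
  }
}
```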
As a cyber threat intelligence platform, OpenCTI offers functionalities that enable users to move quickly from raw data to operational intelligence by building up high-quality, structured information.
To do so, the platform provides a number of essential capabilities, such as automated data deduplication, merging of similar entities while preserving relationship integrity, the ability to modulate the confidence levels on your intelligence, and the presence of inference rules to automate the creation of logical relationships among your data.
The purpose of this page is to list the features of the platform that contribute to the intelligibility and quality of intelligence.
The first essential data intelligence mechanism in OpenCTI is the deduplication of information and relations.
This advanced functionality not only enables you to check whether a piece of data, information or a relationship is not a duplicate of an existing element, but will also, under certain conditions, enrich the element already present.
If the new duplicated entity has new content, the pre-existing entity can be enriched with the new information from the duplicate.
It works as follows (see details in the dedicated page):
For entities: based on a specific ID generated by the platform from the entity's "ID Contributing Properties" (properties listed on the dedicated page).
For relationships: based on type, source, target, start time, and stop time.
For observables: a specific ID is also generated by the platform, this time based on the specifications of the STIX model.
The ability to update and enrich is determined by the confidence level and quality level of the entities and relationships (see diagram on page deduplication).
OpenCTI's merging function is one of the platform's crucial data intelligence elements.
From the Data > Entities tab, this feature lets you merge up to 4 entities of the same type. A parent entity is selected and assigned up to three child entities.
The benefit of this feature is to centralize a number of similar elements from different sources without losing data or degrading the quality of the information. During merging, the platform will create relationships to anchor all the data to the consolidated entity.
This enrichment function consolidates the data and avoids duplication, but above all initiates a structured intelligence process while preserving the integrity of pre-existing relationships as presented here.
"},{"location":"reference/data-intelligence/#confidence-level-and-data-segregation","title":"Confidence level and data segregation","text":"
Another key element of OpenCTI's data intelligence is its ability to apply confidence levels and to segregate the data present in the platform.
The confidence level is directly linked to users and Role Based Access Control. It is applied to a user directly or indirectly via the confidence level of the group to which the user belongs. This element is fundamental as it defines the levels of data manipulation to which the user (real or connector) is entitled.
The correct application of confidence levels is all the more important as it will determine the confidence level of the data manipulated by a user. It is therefore a decisive mechanism, since it underpins the confidence you have in the content of your instance.
While it is important to apply a level of trust to your users or groups, it is also important to define a way of categorizing and protecting your data.
Data segregation makes it possible to apply marking definitions and therefore establish a standardized framework for classifying data.
These marking definitions, like the classic Traffic Light Protocols (TLP) implemented by default in the platform, will determine whether a user can access a specific data set. The marking will be applied at the group level to which the user belongs, which will determine the data to which the user has access and therefore the data that the user can potentially handle.
In OpenCTI, data intelligence is not just about the ability to segregate, qualify or enrich data. OpenCTI's inference rules enable you to mobilize the data on your platform effectively and operationally.
These predefined rules enable the user to speed up cyber threat management. For example, inferences can be used to automatically identify incidents based on a sighting, to create sightings on observables based on new observed data, to propagate relationships based on an observable, etc.
In all, the platform includes some twenty high-performance inference rules that considerably speed up the analysis and response to threats (see the full list here).
These rules are based on a logical interpretation of the data, resulting in a pre-analysis of the information by creating relationships that will enrich the intelligence in the platform. There are three main benefits: efficiency, completeness and accuracy. These user benefits can be found here.
Note: while these rules are present in the platform, they are not activated by default.
Once activated, they scan all the data in your platform in the background to identify all the existing relationships that meet the conditions of the rules. Then, the rules operate continuously to create relationships. If you deactivate a rule, all the objects and relationships it has created will be deleted.
These actions can only be carried out by an administrator of the instance.
This page will be automatically generated to reference the platform's data model. We are doing our best to implement this automatic generation as quickly as possible.
"},{"location":"reference/filters-migration/","title":"Filters format migration for OpenCTI 5.12","text":"
The version 5.12 of OpenCTI introduces breaking changes to the filters format used in the API. This documentation describes how you can migrate your scripts or programs that call the OpenCTI API, when updating from a version of OpenCTI inferior to 5.12.
"},{"location":"reference/filters-migration/#why-this-migration","title":"Why this migration?","text":"
Before OpenCTI 5.12, it was not possible to construct complex filter combinations: we couldn't embed filters within filters, use different boolean modes (and/or), filter on all available attributes or relations for a given entity type, or even test for empty fields of any sort.
A legacy of years of development, the former format and filtering mechanics were not adapted to such tasks, and a profound refactoring was necessary to make it happen.
Here are the main pain points we identified beforehand:
The filters frontend and backend formats were very different, requiring careful conversions.
The filters were static lists of keys, depending on each given entity type and maintained by hand.
The operator (eq, not_eq, etc.) was inside the key (e.g. entity_type_not_eq), limiting operator combination and requiring error-prone parsing.
The frontend format imposed a unique form of combination (and between filters, or between values inside each filter, and nothing else possible).
The flat list structure made filter imbrication impossible by nature.
Filters and query options were mixed in GQL queries for the same purpose (for instance, option types analogous to a filter on key entity_type).
```ts
// filter formats in OpenCTI < 5.12

type Filter = {
  key: string, // a key in the list of the available filter keys for the given entity type
  values: string[],
  operator: string,
  filterMode: 'and' | 'or',
}

// "give me Reports labelled with labelX or labelY"
const filters = [
  {
    "key": "entity_type",
    "values": ["Report"],
    "operator": "eq",
    "filterMode": "or"
  },
  {
    "key": "labelledBy",
    "values": ["<id-for-labelX>", "<id-for-labelY>"],
    "operator": "eq",
    "filterMode": "or"
  },
]
```
The new format brings a lot of short-term benefits and is compatible with our long-term vision of the filtering capabilities in OpenCTI. We chose a simple recursive structure that allows complex combinations of any sort with respect to basic boolean logic.
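As a reference, here is a sketch of the new format, consistent with the properties detailed in the filters documentation:

```ts
// filter format in OpenCTI >= 5.12

type FilterGroup = {
  mode: 'and' | 'or',
  filters: Filter[],
  filterGroups: FilterGroup[], // nested filter groups, enabling filter imbrication
}

type Filter = {
  key: string, // now a plain string, no more static list of enums
  values: string[],
  operator: string, // eq, not_eq, nil, not_nil, gt, lt, etc.
  mode: 'and' | 'or', // boolean mode between the values
}
```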
The list of operators is fixed and can be extended during future developments.
Because changing filters format impacts almost everything in the platform, we decided to do a complete refactoring once and for all. We want this migration process to be clear and easy.
"},{"location":"reference/filters-migration/#what-has-been-changed","title":"What has been changed","text":"
The new filter implementation brings major changes in the way filters are processed and executed.
We changed the filter formats (see the FilterGroup type above):
In the frontend, an operator and a mode are stored for each key.
The new format enables filters imbrication thanks to the new attribute 'filterGroups'.
The keys are of type string (no more static list of enums).
The 'values' attribute can no longer contain null values (use the nil operator instead).
We also renamed some filter keys, to be consistent with the entities schema definitions.
We implemented the handling of the different operators and modes in the backend.
We introduced new void operators (nil / not_nil) to test the presence or absence of value in any field.
"},{"location":"reference/filters-migration/#how-to-migrate-your-own-filters","title":"How to migrate your own filters","text":"
We wrote a migration script to convert all stored filters created prior to version 5.12. These filters will thus be migrated automatically when starting your updated platform.
However, you might have your own connectors, queries, or python scripts that use the graphql API or the python client. If this is the case, you must change the filter format if you want to run the code against OpenCTI >= 5.12.
If values contains a null value, you need to convert the filter by using the new nil / not_nil operators. Here's the procedure:
Extract one filter dedicated to null
if the operator was eq, switch to operator nil; if the operator was not_eq, switch to operator not_nil
values = []
Extract another filter for all the other values.
// \"Must have a label that is not Label1 or Label2\"\nconst oldFilter = {\n key: 'labelledBy',\n values: [null, 'id-for-Label1', 'id-for-Label2'],\n operator: 'not_eq',\n filterMode: 'and',\n}\n\nconst newFilters = {\n mode: 'and',\n filters: [\n {\n key: 'objectLabel',\n values: ['id-label-1', 'id-for-Label2'],\n operator: 'not_eq',\n mode: 'and',\n },\n {\n key: 'objectLabel',\n values: [],\n operator: 'not_nil',\n mode: 'and',\n },\n ],\n filterGroups: [],\n}\n
Switch to nested filters to preserve logic
To preserve the logic of your old filter you might need to compose nested filter groups. This could happen for instance when using eq operator with null values for one filter, combined in and mode with other filters.
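For instance, a sketch with hypothetical ids, expressing "created by OrganizationX, and labelled Label1 or not labelled at all":

```ts
const newFilters = {
  mode: 'and',
  filters: [
    {
      key: 'createdBy',
      values: ['<id-for-OrganizationX>'],
      operator: 'eq',
      mode: 'or',
    },
  ],
  filterGroups: [
    {
      // nested group: "labelled Label1 OR not labelled at all"
      mode: 'or',
      filters: [
        { key: 'objectLabel', values: ['<id-for-Label1>'], operator: 'eq', mode: 'or' },
        { key: 'objectLabel', values: [], operator: 'nil', mode: 'or' },
      ],
      filterGroups: [],
    },
  ],
}
```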
Dynamic filters are not stored in the database; they enable filtering views in the UI, e.g. filters in entity lists, investigations, knowledge graphs. They are saved as URL parameters, and can be saved in local storage.
These filters are not migrated automatically and are lost when moving to 5.12. This concerns the filters saved for each view, which are restored when coming back to the same view. You will need to reconstruct the filters by hand in the UI; these new filters will be properly saved and restored afterward.
Also, when going to an url with filters in the old format, OpenCTI will display a warning and remove the filter parameters. Only URLs built by OpenCTI 5.12 are compatible with it, so you will need to reconstruct the filters by hand and save / share your updated links.
Filters are used in many locations in the platform:
in entities lists: to display only the entities matching the filters. If an export or a background task is generated, only the filtered data will be taken into account,
in investigations and knowledge graphs: to display only the entities matching the filters,
in dashboards: to create widget with only the entities matching the filters,
in feeds, TAXII collections, triggers, streams, playbooks, background tasks: to process only the data or events matching the filters.
Dynamic filters are not stored in the database; they enable filtering views in the UI, e.g. filters in entity lists, investigations, knowledge graphs.
However, they are still persisted on the platform frontend side. The filters used in a view are saved as URL parameters, so you can save and share links of these filtered views.
Also, your web browser saves in local storage the filters that you set in various places of the platform, allowing you to retrieve them when you come back to the same view. You can then keep working from where you left off.
Stored filters are attributes of an entity, and are therefore stored in the database. They are stored as an attribute in the object itself, e.g. filters in dashboards, feeds, TAXII collections, triggers, streams, playbooks.
"},{"location":"reference/filters/#create-a-filter","title":"Create a filter","text":"
To create a filter, add every key you need using the 'Add filter' select box. It will give you the possible attributes on which you can filter in the current view.
A grey box appears and allows you to select:
the operator to use, and
the values to compare (if the operator is not \"empty\" or \"not_empty\").
You can add as many filters as you want, even use the same key twice with different operators and values.
The boolean modes (and/or) are either global (between every attribute filters) or local (between values inside a filter). Both can be switched with a single click, changing the logic of your filtering.
Since OpenCTI 5.12, the OpenCTI platform uses a new filter format called FilterGroup. The FilterGroup model enables to do complex filters imbrication with different boolean operators, which extends greatly the filtering capabilities in every part of the platform.
In a given filter group, the mode (and or or) represents the boolean operation between the different filters and filterGroups arrays. The filters and filterGroups arrays are composed of objects of type Filter and FilterGroup.
The Filter has 4 properties:
a key, representing the kind of data we want to target (example: objectLabel to filter on labels or createdBy to filter on the author),
an array of values, representing the values we want to compare to,
an operator representing the operation we want to apply between the key and the values,
a mode (and or or) to apply between the values if there are several ones.
| Value | Meaning | Additional information |
|---|---|---|
| eq | equal | |
| not_eq | different | |
| gt | greater than | against textual values, the alphabetical ordering is used |
| gte | greater than or equal | against textual values, the alphabetical ordering is used |
| lt | lower than | against textual values, the alphabetical ordering is used |
| lte | lower than or equal | against textual values, the alphabetical ordering is used |
| nil | empty / no value | nil does not require anything inside values |
| not_nil | non-empty / any value | not_nil does not require anything inside values |
In addition, there are operators:
starts_with / not_starts_with / ends_with / not_ends_with / contains / not contains, available for searching in short string fields (name, value, title, etc.),
search, available in short string and text fields.
There is a small difference between search and contains. search finds any occurrence of specified words, regardless of order, while "contains" specifically looks for the exact sequence of words you provide.
Always use single-key filters
Multi-key filters are not supported across the platform and are reserved to specific, internal cases.
Only a specific set of keys can be used in the filters.
Automatic key checking prevents typing errors when constructing filters via the API. If a user writes an unhandled key (object-label instead of objectLabel, for instance), the API will return an error instead of an empty list. This ensures the platform does not return misleading results.
Some keys do not exist in the schema definition, but are allowed in addition. They describe a special behavior.
It is the case for:
sightedBy: entities to which X is linked via a STIX sighting relationship,
workflow_id: status id of the entities, or status template id of the status of the entities,
representative: entities whose representative (name for reports, value for some observables, composition of the source and target names for a relationship...) matches the filter,
connectedToId: the listened instances for an instance trigger.
For some keys, negative equality filtering is not supported yet (not_eq operator). For instance, it is the case for:
fromId
fromTypes
toId
toTypes
The regardingOf filter key has a special format and enables targeting the entities having a relationship of a certain type with certain entities. Here is an example of a filter to fetch the entities related to entity X:
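A sketch of such a filter, with placeholder values (the relationship type and entity id are illustrative), written as a Python dict as elsewhere in this reference:

```python
# Hedged sketch of a "regardingOf" filter: entities having a "related-to"
# relationship with entity X. The id value is a placeholder.
related_to_x = {
    "mode": "and",
    "filters": [
        {
            "key": "regardingOf",
            "values": [
                {"key": "relationship_type", "values": ["related-to"]},
                {"key": "id", "values": ["<internal-id-of-entity-X>"]},
            ],
        }
    ],
    "filterGroups": [],
}
```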
"},{"location":"reference/filters/#limited-support-in-stream-events-filtering","title":"Limited support in stream events filtering","text":"
Filters that are run against the event stream do not use the complete schema definition in terms of filtering keys.
This concerns:
Live streams,
CSV feeds,
TAXII collections,
Triggers,
Playbooks.
For filters used in this context, only some keys are supported for the moment:
confidence
objectAssignee
createdBy
creator
x_opencti_detection
indicator_types
objectLabel
x_opencti_main_observable_type
objectMarking
objects
pattern_type
priority
revoked
severity
x_opencti_score
entity_type
x_opencti_workflow_id
connectedToId (for the instance triggers)
fromId (the instance in the "from" of a relationship)
fromTypes (the entity type in the "from" of a relationship)
toId (the instance in the "to" of a relationship)
toTypes (the entity type in the "to" of a relationship)
"},{"location":"reference/fips/","title":"SSL FIPS 140-2 deployment","text":""},{"location":"reference/fips/#introduction","title":"Introduction","text":"
For organizations that need to deploy OpenCTI in an SSL FIPS 140-2 compliant environment, we provide FIPS compliant OpenCTI images for all components of the platform. Please note that you will also need to deploy the dependencies (ElasticSearch / OpenSearch, Redis, etc.) with FIPS 140-2 SSL to have a fully compliant OpenCTI technology stack.
OpenCTI SSL FIPS 140-2 compliant builds
The OpenCTI platform, worker and connector SSL FIPS 140-2 compliant images are based on packaged Alpine Linux with OpenSSL 3 and FIPS mode enabled, maintained by the Filigran engineering team.
"},{"location":"reference/fips/#dependencies","title":"Dependencies","text":""},{"location":"reference/fips/#aws-native-services-in-fedramp-compliant-environment","title":"AWS Native Services in FedRAMP compliant environment","text":"
It is important to note that OpenCTI is fully compatible with AWS native services, and all dependencies are available in both FedRAMP Moderate (East / West) and FedRAMP High (GovCloud) scopes.
Redis does not provide FIPS 140-2 SSL compliant Docker images but supports very well custom tls-ciphersuites that can be configured to use the system FIPS 140-2 OpenSSL library.
Alternatively, you can use a Stunnel TLS endpoint to ensure encrypted communication between OpenCTI and Redis. There are a few examples available, here or here.
RabbitMQ does not provide FIPS 140-2 SSL compliant Docker images but, like Redis, supports custom cipher suites. Also, it is confirmed that since RabbitMQ version 3.12.5, the associated Erlang build (> 26.1) supports FIPS mode on OpenSSL 3.
Alternatively, you can use a Stunnel TLS endpoint to ensure encrypted communication between OpenCTI and RabbitMQ.
If you cannot use an S3 endpoint already deployed in your FIPS 140-2 SSL compliant environment, MinIO provides FIPS 140-2 SSL compliant Docker images which are then very easy to deploy within your environment.
For the platform, we provide FIPS 140-2 SSL compliant Docker images. Just use the appropriate tag to ensure you are deploying the FIPS compliant version and follow the standard Docker deployment procedure.
For the worker, we provide FIPS 140-2 SSL compliant Docker images. Just use the appropriate tag to ensure you are deploying the FIPS compliant version and follow the standard Docker deployment procedure.
All connectors have FIPS 140-2 SSL compliant Docker images. For each connector you need to deploy, please use the tag {version}-fips instead of {version} and follow the standard deployment procedure. An example is available on Docker Hub.
In order to provide a real-time way to consume STIX CTI information, OpenCTI provides data events in a stream that can be consumed to react on creation, update, deletion and merge. This way of getting information out of OpenCTI is highly efficient and already used by some connectors.
OpenCTI currently uses Redis Streams as the technical layer. Each time something is modified in the OpenCTI database, a specific event is added to the stream.
In order to provide an easy-to-consume protocol, we decided to expose an SSE (https://fr.wikipedia.org/wiki/Server-sent_events) HTTP URL linked to the standard login system of OpenCTI. Any user with the correct access rights can open an SSE connection on http://opencti_instance/stream and start receiving live events. You can of course consume the stream directly in Redis, but you will then have to manage access and rights yourself.
```
id: {Event stream id}      -> Like 1620249512318-0
event: {Event type}        -> create / update / delete
data: {                    -> The complete event data
    version                -> The version number of the event
    type                   -> The inner type of the event
    scope                  -> The scope of the event [internal or external]
    data: {STIX data}      -> The STIX representation of the data.
    message                -> A simple string to easily understand the event
    origin: {Data Origin}  -> Complex object with different information about the origin of the event
    context: {Event context} -> Complex object with meta information depending on the event type
}
```
The event id can be used to consume the stream from this specific point.
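For illustration, here is a minimal Python consumer for this SSE endpoint; it assumes the third-party sseclient-py package, and the instance URL and token are placeholders.

```python
# Minimal SSE consumer sketch (assumes: pip install requests sseclient-py).
# The instance URL and token are placeholders.
import json
import requests
import sseclient

response = requests.get(
    "https://opencti.example.com/stream",
    headers={"Authorization": "Bearer <OPENCTI_TOKEN>"},
    stream=True,
)
client = sseclient.SSEClient(response)
for event in client.events():
    if event.event in ("create", "update", "delete", "merge"):
        payload = json.loads(event.data)
        # event.id can be stored and replayed later with the "from" parameter
        print(event.id, event.event, payload["message"])
```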
The current STIX data representation is based on the STIX 2.1 format using the extension mechanism. Please take a look at the STIX documentation for more information.
For a delete event, it's simply the data in STIX format just before its deletion. You will also find in the context the automated deletions due to automatic dependency management.
An update event publishes the complete STIX data along with patch information. Thanks to the patches, it's possible to rebuild the previous version and easily understand what happened in the update.
Patch and reverse_patch follow the official jsonpatch specification. You can find more information on the jsonpatch page
A merge is a combination of an update of the merge target and deletions of the sources. In this event you will find the same patch and reverse_patch as in an update, and the list of elements merged into the target in the "sources" attribute.
The stream hosted at the /stream URL contains all the raw events of the platform, always filtered by the user's rights (marking based). It's a technical stream, a bit complex to use, but very useful for internal processing or some specific connectors like backup/restore.
This stream is live by default but, if you want to catch up, you can simply add the from parameter to your query. This parameter accepts a timestamp in milliseconds as well as an event id, e.g. http://localhost/stream?from=1620249512599
Stream size?
The raw stream is really important in the platform and needs to be sized according to the retention period you want to ensure. The more retention you have, the more safely you can reprocess past information. We usually recommend 1 month of retention, which usually corresponds to about 2,000,000 events. This limit can be configured with the redis:trimming option; please check the deployment configuration page.
Live streams aim to simplify your usage of the stream through connectors, providing a way to create streams with specific filters through the UI. After creating such a stream, it is simply accessible from /stream/{STREAM_ID}.
It's very useful for various cases of data externalization and synchronization, like Splunk, Tanium...
This stream provides different interesting mechanics:
Stream the initial list of instances matching your filters when connecting, based on the main database, if you use the recover parameter
Automatic dependency resolution to guarantee the consistency of the information distributed
Automatic event translation depending on the element segregation
If you want to dig into the internal behavior, you can check this complete diagram:
no-dependencies (query parameter or header, default false). Can be used to prevent the automatic dependency resolution. To be used with caution.
listen-delete (query parameter or header, default true). Can be used to prevent receiving deletion events. To be used with caution.
with-inferences (query parameter or header, default false). Can be used to add inference events (from the rule engine) to the stream.
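For illustration, opening a live stream with these parameters could look like the following sketch (placeholder stream id, URL and token):

```python
# Sketch: opening a live stream with the query parameters described above.
# Stream id, URL and token are placeholders.
import requests

response = requests.get(
    "https://opencti.example.com/stream/<STREAM_ID>",
    headers={"Authorization": "Bearer <OPENCTI_TOKEN>"},
    params={
        "no-dependencies": "true",   # skip automatic dependency resolution (use with caution)
        "listen-delete": "false",    # do not receive deletion events (use with caution)
        "with-inferences": "true",   # include events produced by the rule engine
    },
    stream=True,
)
```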
"},{"location":"reference/streaming/#from-and-recover","title":"From and Recover","text":"
From and recover are 2 different options that need to be explained.
from (query parameter) is always the parameter that describes the initial date/event_id you want to start from. It can also be set with the request header from or last-event-id.
recover (query parameter) is an option that lets you consume the initial events from the database and not from the stream. It can also be set with the request header recover or recover-date.
This difference is transparent for the consumer but very important for getting old information as an initial snapshot. It also lets you consume information that is no longer in the stream retention period.
The next diagram will help you to understand the concept:
In OpenCTI, taxonomies serve as structured classification systems that aid in organizing and categorizing intelligence data. This reference guide provides an exhaustive description of the platform's customizable fields within the taxonomies' framework. Users can modify, add, or delete values within the available vocabularies to tailor the classification system to their specific requirements.
For broader documentation on the taxonomies section, please consult the appropriate page.
Default values are based on those defined in the STIX standard but can be tailored to better suit the organization's needs.
| Name | Used in | Default value |
|---|---|---|
| Account type vocabulary (account-type-ov) | User account | facebook, ldap, nis, openid, radius, skype, tacacs, twitter, unix, windows-domain, windows-local |
| Attack motivation vocabulary (attack-motivation-ov) | Threat actor group | accidental, coercion, dominance, ideology, notoriety, organizational-gain, personal-gain, personal-satisfaction, revenge, unpredictable |
| Attack resource level vocabulary (attack-resource-level-ov) | Threat actor group | club, contest, government, individual, organization, team |
| Case priority vocabulary (case_priority_ov) | Incident response | P1, P2, P3, P4 |
| Case severity vocabulary (case_severity_ov) | Incident response | critical, high, medium, low |
| Channel type vocabulary (channel_type_ov) | Channel | Facebook, Twitter |
| Collection layers vocabulary (collection_layers_ov) | Data source | cloud-control-plane, container, host, network, OSINT |
| Event type vocabulary (event_type_ov) | Event | conference, financial, holiday, international-summit, local-election, national-election, sport-competition |
| Eye color vocabulary (eye_color_ov) | Threat actor individual | black, blue, brown, green, hazel, other |
| Gender vocabulary (gender_ov) | Threat actor individual | female, male, nonbinary, other |
| Grouping context vocabulary (grouping_context_ov) | Grouping | malware-analysis, suspicious-activity, unspecified |
| Hair color vocabulary (hair_color_ov) | Threat actor individual | bald, black, blond, blue, brown, gray, green, other, red |
| Implementation language vocabulary (implementation_language_ov) | Malware | applescript, bash, c, c++, c#, go, java, javascript, lua, objective-c, perl, php, powershell, python, ruby, scala, swift, typescript, visual-basic, x86-32, x86-64 |
| Incident response type vocabulary (incident_response_type_ov) | Incident response | data-leak, ransomware |
| Incident severity vocabulary (incident_severity_ov) | Incident | critical, high, medium, low |
| Incident type vocabulary (incident_type_ov) | Incident | alert, compromise, cybercrime, data-leak, information-system-disruption, phishing, reputation-damage, typosquatting |
| Indicator type vocabulary (indicator_type_ov) | Indicator | anomalous-activity, anonymization, attribution, benign, compromised, malicious-activity, unknown |
| Infrastructure type vocabulary (infrastructure_type_ov) | Infrastructure | amplification, anonymization, botnet, command-and-control, control-system, exfiltration, firewall, hosting-malware, hosting-target-lists, phishing, reconnaissance, routers-switches, staging, unknown, workstation |
| Integrity level vocabulary (integrity_level_ov) | Process | high, medium, low, system |
| Malware capabilities vocabulary (malware_capabilities_ov) | Malware | accesses-remote-machines, anti-debugging, anti-disassembly, anti-emulation, anti-memory-forensics, anti-sandbox, anti-vm, captures-input-peripherals, captures-output-peripherals, captures-system-state-data, cleans-traces-of-infection, commits-fraud, communicates-with-c2, compromises-data-availability, compromises-data-integrity, compromises-system-availability, controls-local-machine, degrades-security-software, degrades-system-updates, determines-c2-server, emails-spam, escalates-privileges, evades-av, exfiltrates-data, fingerprints-host, hides-artifacts, hides-executing-code, infects-files, infects-remote-machines, installs-other-components, persists-after-system-reboot, prevents-artifact-access, prevents-artifact-deletion, probes-network-environment, self-modifies, steals-authentication-credentials, violates-system-operational-integrity |
| Malware result vocabulary (malware_result_ov) | Malware analysis | benign, malicious, suspicious, unknown |
| Malware type vocabulary (malware_type_ov) | Malware | adware, backdoor, bootkit, bot, ddos, downloader, dropper, exploit-kit, keylogger, ransomware, remote-access-trojan, resource-exploitation, rogue-security-software, rootkit, screen-capture, spyware, trojan, unknown, virus, webshell, wiper, worm |
| Marital status vocabulary (marital_status_ov) | Threat actor individual | annulled, divorced, domestic_partner, legally_separated, married, never_married, polygamous, separated, single, widowed |
| Note types vocabulary (note_types_ov) | Note | analysis, assessment, external, feedback, internal |
| Opinion vocabulary (opinion_ov) | Opinion | agree, disagree, neutral, strongly-agree, strongly-disagree |
| Pattern type vocabulary (pattern_type_ov) | Indicator | eql, pcre, shodan, sigma, snort, spl, stix, suricata, tanium-signal, yara |
| Permissions vocabulary (permissions_ov) | Attack pattern | Administrator, root, User |
| Platforms vocabulary (platforms_ov) | Data source | android, Azure AD, Containers, Control Server, Data Historian, Engineering Workstation, Field Controller/RTU/PLC/IED, Google Workspace, Human-Machine Interface, IaaS, Input/Output Server, iOS, linux, macos, Office 365, PRE, SaaS, Safety Instrumented System/Protection Relay, windows |
| Processor architecture vocabulary (processor_architecture_ov) | Malware | alpha, arm, ia-64, mips, powerpc, sparc, x86, x86-64 |
| Reliability vocabulary (reliability_ov) | Report, Organization | A - Completely reliable, B - Usually reliable, C - Fairly reliable, D - Not usually reliable, E - Unreliable, F - Reliability cannot be judged |
| Report types vocabulary (report_types_ov) | Report | internal-report, threat-report |
| Request for information types vocabulary (request_for_information_types_ov) | Request for information | none |
| Request for takedown types vocabulary (request_for_takedown_types_ov) | Request for takedown | brand-abuse, phishing |
| Service status vocabulary (service_status_ov) | Process | SERVICE_CONTINUE_PENDING, SERVICE_PAUSE_PENDING, SERVICE_PAUSED, SERVICE_RUNNING, SERVICE_START_PENDING, SERVICE_STOP_PENDING, SERVICE_STOPPED |
| Service type vocabulary (service_type_ov) | Process | SERVICE_FILE_SYSTEM_DRIVER, SERVICE_KERNEL_DRIVER, SERVICE_WIN32_OWN_PROCESS, SERVICE_WIN32_SHARE_PROCESS |
| Start type vocabulary (start_type_ov) | Process | SERVICE_AUTO_START, SERVICE_BOOT_START, SERVICE_DEMAND_START, SERVICE_DISABLED, SERVICE_SYSTEM_ALERT |
| Threat actor group role vocabulary (threat_actor_group_role_ov) | Threat actor group | agent, director, independent, infrastructure-architect, infrastructure-operator, malware-author, sponsor |
| Threat actor group sophistication vocabulary (threat_actor_group_sophistication_ov) | Threat actor group | advanced, expert, innovator, intermediate, minimal, none, strategic |
| Threat actor group type vocabulary (threat_actor_group_type_ov) | Threat actor group | activist, competitor, crime-syndicate, criminal, hacker, insider-accidental, insider-disgruntled, nation-state, sensationalist, spy, terrorist, unknown |
| Threat actor individual role vocabulary (threat_actor_individual_role_ov) | Threat actor individual | agent, director, independent, infrastructure-architect, infrastructure-operator, malware-author, sponsor |
| Threat actor individual sophistication vocabulary (threat_actor_individual_sophistication_ov) | Threat actor individual | advanced, expert, innovator, intermediate, minimal, none, strategic |
| Threat actor individual type vocabulary (threat_actor_individual_type_ov) | Threat actor individual | activist, competitor, crime-syndicate, criminal, hacker, insider-accidental, insider-disgruntled, nation-state, sensationalist, spy, terrorist, unknown |
| Tool types vocabulary (tool_types_ov) | Tool | credential-exploitation, denial-of-service, exploitation, information-gathering, network-capture, remote-access, unknown, vulnerability-scanning |

Customization
Users can customize the taxonomies by modifying the available values or adding new ones. These modifications enable users to adapt the classification system to their specific intelligence requirements. Within each vocabulary list, users also have the flexibility to customize the order of the dropdown menu associated with the taxonomy, allowing them to prioritize certain values or arrange them in a manner that aligns with their classification needs. Additionally, users can track the usage count for each vocabulary, providing insights into the frequency of usage and helping to identify the most relevant and impactful classifications. These customization options empower users to tailor the taxonomy system to their unique intelligence requirements, enhancing the efficiency and effectiveness of intelligence analysis within the OpenCTI platform.
Usage telemetry
The application collects statistical data related to its usage and performance.
Confidentiality
The OpenCTI platform does not collect any information related to threat intelligence knowledge, which remains strictly confidential. Also, the collection is strictly anonymous and personally identifiable information is NOT collected (including IP addresses).
All data collected is anonymized and aggregated to protect the privacy of individual users, in compliance with all privacy regulations.
"},{"location":"reference/usage-telemetry/#purpose-of-the-telemetry","title":"Purpose of the telemetry","text":"
The collected data is used for the following purposes:
Improving the functionality and performance of the application.
Analyzing user behavior to enhance user experience.
Generating aggregated and anonymized statistics for internal and external reporting.
"},{"location":"reference/usage-telemetry/#important-thing-to-know","title":"Important thing to know","text":"
The platform sends the metrics to the hostname telemetry.filigran.io using the OTLP protocol (over HTTPS). The format of the data is OpenTelemetry JSON.
The metrics push is done every 6 hours if OpenCTI was able to connect to the hostname when the telemetry manager started. Metrics are also written to specific log files in order to be included in support packages.
Ask AI
Ask AI is available under the "OpenCTI Enterprise Edition" license.
Please read the dedicated page for all information.
"},{"location":"usage/ask-ai/#prerequisites-for-using-ask-ai","title":"Prerequisites for using Ask AI","text":"
There are several possibilities for Enterprise Edition customers to use OpenCTI AI endpoints:
Use the Filigran AI Service leveraging our custom AI model using the token given by the support team.
Use OpenAI or MistralAI cloud endpoints using your own tokens.
Deploy or use local AI endpoints (Filigran can provide you with the custom model).
Please read the configuration documentation
Beta Feature
Ask AI is a beta feature as we are currently fine-tuning our models. Consider checking important information.
"},{"location":"usage/ask-ai/#how-it-works","title":"How it works","text":"
Even if, in the future, we would like to leverage AI for RAG, for the moment we mostly use AI to analyze and produce texts or images, based on data directly sent in the prompt.
This means that if you are using the Filigran AI endpoint or a local one, your data is never used to re-train or adapt the model; everything relies on a pre-trained and fixed model. When using the Ask AI button in the platform, a prompt is generated with the proper instructions to produce the expected result and use it in the context of the button (in forms, the rich text editor, etc.).
We are hosting a scalable AI endpoint for all SaaS and on-prem Enterprise Edition customers. This endpoint is based on MistralAI, with a model that will be adapted over time to be more effective when processing threat intelligence related content.
The model, which is still in beta version, will be adapted in the upcoming months to reach maturity at the end of 2024. It can be shared with on-prem enterprise edition customers under NDA.
"},{"location":"usage/ask-ai/#functionalities-of-ask-ai","title":"Functionalities of Ask AI","text":"
Ask AI is represented by a dedicated icon wherever one of its functionalities is available.
"},{"location":"usage/ask-ai/#assistance-for-writing-meaningful-content","title":"Assistance for writing meaningful content","text":"
Ask AI can assist you in writing better textual content, for example better titles, names, descriptions and detailed content of objects.
Fix spelling & grammar: try to improve the text from a formulation and grammar perspective.
Make it shorter/longer: try to shorten or lengthen the text.
Change tone: try to change the tone of the text. You can select if you want the text to be written for Strategic (Management, decision makers), Tactical (for team leaders) or Operational (for technical CTI analysts) audiences.
Summarize: try to summarize the text in bullet points.
Explain: try to explain the context of the subject's text based on what is available to the LLM.
"},{"location":"usage/ask-ai/#assistance-for-importing-data-from-documents","title":"Assistance for importing data from documents","text":"
From the Content tab of a Container (Reports, Groupings and Cases), Ask AI can also assist you in importing data contained in uploaded documents into OpenCTI for further exploitation.
Generate report document: Generate a text report based on the knowledge graph (entities and relationships) of this container.
Summarize associated files: Generate a summary of the selected files (or all files associated to this container).
Convert to STIX bundle: try to convert the selected files (or all files associated to this container) into a STIX 2.1 bundle that you will then be able to use at your convenience (for example, importing it into the platform).
A short video on the FiligranHQ YouTube channel presents the capabilities of Ask AI: https://www.youtube.com/watch?v=lsP3VVsk5ds.
"},{"location":"usage/ask-ai/#improving-generated-elements-of-ask-ai","title":"Improving generated elements of Ask AI","text":"
Be aware that the text quality is highly dependent on the capabilities of the associated LLM.
That is why every text generated by Ask AI is provided in a dedicated panel, allowing you to verify and rectify any error the LLM could have made.
Playbooks automation is available under the \"OpenCTI Enterprise Edition\" license. Please read the dedicated page to have all information.
OpenCTI playbooks are flexible automation scenarios which can be fully customized and enabled by platform administrators to enrich, filter and modify the data created or updated in the platform.
Playbook automation is accessible in the user interface under Data > Processing > Automation.
Right needed
You need the \"Manage credentials\" capability to use the Playbooks automation, because you will be able to manipulate data simple users cannot access.
You will then be able to:
add labels depending on enrichment results to be used in threat intelligence-driven detection feeds,
create reports and cases based on various criteria,
trigger enrichments or webhooks in given conditions,
modify attributes such as first_seen and last_seen based on other pieces of knowledge.
Starting with a component listening to a data stream, each subsequent component in the playbook processes a received STIX bundle. These components have the ability to modify the bundle and subsequently transmit the altered result to the connected components.
In this paradigm, components can send out the STIX 2.1 bundle to multiple components, enabling the development of multiple branches within your playbook.
A well-designed playbook ends with a component executing an action based on the processed information. For instance, this may involve writing the STIX 2.1 bundle into a data stream.
Validate ingestion
The STIX bundle processed by the playbook won't be written in the platform unless you specify it using the appropriate component, i.e. "Send for ingestion".
"},{"location":"usage/automation/#create-a-playbook","title":"Create a Playbook","text":"
It is possible to create as many playbooks as needed; they run independently. You can give a name and description to each playbook.
The first step to define in the playbook is the "triggering event", which can be any knowledge event (create, update or delete) with customizable filters. To do so, click on the grey rectangle in the center of the workspace and choose the component to "listen knowledge events". Configure it with adequate filters; you can use the same filters as in other parts of the platform.
Then you have flexible choices for the next steps to:
filter the initial knowledge,
enrich data using external sources and internal rules,
modify entities and relationships by applying patches,
write the data, send notifications,
etc.
Do not forget to start your playbook when ready, with the Start option of the burger button next to the name of your playbook.
By clicking the burger button of a component, you can replace it with another one.
By clicking on the arrow icon in the bottom right corner of a component, you can develop a new branch at the same level.
By clicking the \"+\" button on a link between components, you can insert a component between the two.
"},{"location":"usage/automation/#components-of-playbooks","title":"Components of playbooks","text":""},{"location":"usage/automation/#log-data-in-standard-output","title":"Log data in standard output","text":"
Will write the received STIX 2.1 bundle to the platform logs with a configurable log level, and then send out the STIX 2.1 bundle unmodified.
"},{"location":"usage/automation/#send-for-ingestion","title":"Send for ingestion","text":"
Will pass the STIX 2.1 bundle to be written in the data stream. This component has no output and should end a branch of your playbook.
Will allow you to define a filter and apply it to the received STIX 2.1 bundle. The component has 2 outputs: one for the data matching the filter and one for the remainder.
By default, filtering is applied to entities having triggered the playbook. You can toggle the corresponding option to apply it to all elements in the bundle (elements that might result from enrichment for example).
"},{"location":"usage/automation/#enrich-through-connector","title":"Enrich through connector","text":"
Will send the received STIX 2.1 bundle to a compatible enrichment connector and send out the modified bundle.
Will add, replace or remove compatible attributes of the entities contained in the received STIX 2.1 bundle and send out the modified bundle.
By default, modification is applied to entities having triggered the playbook. You can toggle the corresponding option to apply it to all elements in the bundle (elements that might result from enrichment for example).
Will modify the received STIX 2.1 bundle to include the entities in a container of the type you configured. By default, wrapping is applied to entities having triggered the playbook. You can toggle the corresponding option to apply it to all elements in the bundle (elements that might result from enrichment, for example).
"},{"location":"usage/automation/#share-with-organizations","title":"Share with organizations","text":"
Will share every entity in the received STIX 2.1 bundle with the Organizations you configured. Your platform needs to have a platform main organization declared in Settings > Parameters.
Will apply a complex built-in automation rule. This kind of rule might impact performance. Current rules are:
First/Last seen computing extension from report publication date: will populate first seen and last seen date of entities contained in the report based on its publication date,
Resolve indicators based on observables (add in bundle): will retrieve all indicators linked to the bundle's observables from the database,
Resolve observables an indicator is based on (add in bundle): will retrieve all observables linked to the bundle's indicators from the database,
Resolve container references (add in bundle): will add to the bundle all the relationships and entities the container contains (if the entity having triggered the playbook is not a container, the output of this component will be empty),
Resolve neighbors relations and entities (add in bundle): will add to the bundle all relations of the entity having triggered the playbook, as well as all entities at the end of these relations, i.e. the \"first neighbors\" (if the entity is a container, the output of this component will be empty).
"},{"location":"usage/automation/#send-to-notifier","title":"Send to notifier","text":"
Will generate a Notification each time a STIX 2.1 bundle is received.
"},{"location":"usage/automation/#promote-observable-to-indicator","title":"Promote observable to indicator","text":"
Will generate indicators based on the observables contained in the received STIX 2.1 bundle.
By default, it is applied to the entities having triggered the playbook. You can toggle the corresponding option to apply it to all observables in the bundle (e.g. observables that might result from a predefined rule).
You can also add all indicators and relationships generated by this component to the entity having triggered the playbook, if this entity is a container.
"},{"location":"usage/automation/#extract-observables-from-indicator","title":"Extract observables from indicator","text":"
Will extract observables from the indicators contained in the received STIX 2.1 bundle.
By default, it is applied to the entities having triggered the playbook. You can toggle the corresponding option to apply it to all indicators in the bundle (e.g. indicators that might result from enrichment).
You can also add all observables and relationships generated by this component to the entity having triggered the playbook, if this entity is a container.
At the top right of the interface, you can access the execution traces of your playbook and consult the raw data after every step of the playbook execution.
Rule tasks can be seen and activated in Settings > Customization > Rules engine. Knowledge and user tasks can be seen and managed in Data > Background Tasks. The scope of each task is indicated.
If a rule task is enabled, it leads to a scan of the whole platform data and the creation of entities or relationships whenever a configuration corresponds to the task's rules. The created data are called 'inferred data'. Each time an event occurs in the platform, the rule engine checks whether inferred data should be updated/created/deleted.
Knowledge tasks are background tasks updating or deleting entities and correspond to mass operations on these data. To create one, select entities via the checkboxes in an entity list, and choose the action to perform via the toolbar.
To create a knowledge task, the user should have the capability to Update Knowledge (or the capability to delete knowledge if the task action is a deletion).
To see a knowledge task in the Background task section, the user should be the creator of the task, or have the KNOWLEDGE capability.
To delete a knowledge task from the Background task section, the user should be the creator of the task, or have the KNOWLEDGE_UPDATE capability.
User tasks are background tasks updating or deleting notifications. This can be done from the Notification section, by selecting several notifications via the checkboxes and choosing an action via the toolbar.
A user can create a user task only on their own notifications.
To see or delete a user task, the user should be the creator of the task or have the SET_ACCESS capability.
"},{"location":"usage/case-management/","title":"Case management","text":""},{"location":"usage/case-management/#why-case-management","title":"Why Case management?","text":"
Compiling CTI data in one place, deduplicating and correlating it to transform it into intelligence is very important. But ultimately, you need to act based on this intelligence. Some situations will need to be taken care of, like cybersecurity incidents, requests for information or requests for takedown. Some actions will then need to be traced, coordinated and overseen. Some actions will include feedback and content delivery.
OpenCTI includes Cases to allow organizations to manage situations and organize their team's work. Better still, by doing case management in OpenCTI, you handle your cases with all the context and intelligence you need at hand.
"},{"location":"usage/case-management/#how-to-manage-your-case-in-opencti","title":"How to manage your Case in OpenCTI?","text":"
Multiple situations can be modeled in OpenCTI as a Case: an Incident Response, a Request for Takedown or a Request for Information.
All Cases can contain any entities and relationships you need to represent the intelligence context related to the situation. At the beginning of your case, you may find yourself with only some Observables sighted in a system. At the end, you may have Indicators, Threat Actors, impacted systems and attack patterns, all representing your findings, ready to be presented and exported as a graph, PDF report, timeline, etc.
Some Cases may need collaborative work and specific Tasks to be performed by people with the relevant skill sets. OpenCTI allows you to associate Tasks with your Cases and assign them to users in the platform. As some types of situation may need the same tasks to be done, it is also possible to pre-define lists of tasks to be applied to your case. You can define these lists in the Settings > Taxonomies > Case templates panel. Then you just need to add them from the overview of your desired Case.
Tip: a user can have a custom dashboard showing all the tasks that have been assigned to them.
As with other objects in OpenCTI, you can also leverage Notes to add investigation and analysis related comments, helping you shape the content of your case with unstructured data and trace all the work that has been done.
You can also use Opinions to collect how the Case has been handled, helping you to build Lessons Learned.
To trace the evolution of your Case and define specific resolution workflows, you can use the Status (which can be defined in Settings > Taxonomies > Status templates).
At the end of your Case, you will certainly want to report on what has been done. OpenCTI allows you to export the content of the Case as a simple but customizable PDF (currently being refactored). But of course, your company has its own document templates, right? With OpenCTI, you will be able to include some nice graphics in them, for example a matrix view of the attacker's attack patterns or even a graph display of how things are connected.
Also, we are currently working on a more meaningful Timeline view that it will also be possible to export.
"},{"location":"usage/case-management/#use-case-example-a-suspicious-observable-is-sighted-by-a-defense-system-is-it-important","title":"Use case example: A suspicious observable is sighted by a defense system. Is it important?","text":"
Daily, your SIEM and EDR are fed Indicators of Compromise from your OpenCTI instance.
Today, your SIEM has sighted the domain name "bad.com", matching one of them. Its alert has been transferred to OpenCTI and has created a Sighting relationship between your System "SIEM perimeter A" and the Observable "bad.com".
You are alerted immediately, because you have activated the inference rule creating a corresponding Incident in this situation, and you have created an alert based on new Incidents that sends you an email notification and a Teams message (webhook).
In OpenCTI, you can clearly see the link between the alerting System, the sighted Observable and the corresponding Indicator. Better, you can also see the full context of the Indicator: it is linked to a notorious and recent phishing campaign targeting your activity sector. "bad.com" is clearly something to investigate ASAP.
You quickly select all the context you have found and add it to a new Incident Response case. You set the priority to High, given the context, and the severity to Low, as you don't know yet if someone really interacted with "bad.com".
You also assign the case to one of your colleagues, on duty for investigative work. To guide them, you also create a Task in your case for verifying whether an actual interaction happened with "bad.com".
In the STIX 2.1 standard, some STIX Domain Objects (SDO) can be considered "containers of knowledge", using the object_refs attribute to refer to multiple other objects as nested references. In object_refs, it is possible to refer to entities and relationships.
"},{"location":"usage/containers/#implementation","title":"Implementation","text":""},{"location":"usage/containers/#types-of-container","title":"Types of container","text":"
In OpenCTI, containers are displayed differently than other entities, because they contain pieces of knowledge. Here is the list of containers in the platform:
| Type of entity | STIX standard | Description |
|---|---|---|
| Report | Native | Reports are collections of threat intelligence focused on one or more topics, such as a description of a threat actor, malware, or attack technique, including context and related details. |
| Grouping | Native | A Grouping object explicitly asserts that the referenced STIX Objects have a shared context, unlike a STIX Bundle (which explicitly conveys no context). |
| Observed Data | Native | Observed Data conveys information about cyber security related entities such as files, systems, and networks using the STIX Cyber-observable Objects (SCOs). |
| Note | Native | A Note is intended to convey informative text to provide further context and/or to provide additional analysis not contained in the STIX Objects. |
| Opinion | Native | An Opinion is an assessment of the correctness of the information in a STIX Object produced by a different entity. |
| Case | Extension | A case, whether an Incident Response, a Request for Information or a Request for Takedown, is used to convey an epic with a set of tasks. |
| Task | Extension | A task, generally used in the context of a case, is intended to convey information about something that must be done in a limited timeframe. |

Containers behavior
In the platform, it is always possible to visualize the list of entities and/or observables referenced in a container (Container > Entities or Observables), but also to add / remove entities from the container.
As containers can also contain relationships, which are generally linked to the other entities in the container, it is also possible to visualize the container as a graph (Container > Knowledge).
"},{"location":"usage/containers/#containers-of-an-entity-or-a-relationship","title":"Containers of an entity or a relationship","text":"
On the entity or the relationship side, you can always find all containers where the object is contained using the top menu Analysis:
In all container lists, you can also filter containers based on one or multiple contained object(s):
OpenCTI provides a simple way to share a visualization of a custom dashboard with anyone, even people outside the platform. We call these visualizations public dashboards.
Public dashboards are a snapshot of a custom dashboard at a specific moment in time. This way, you can share a version of a custom dashboard, then modify your custom dashboard without worrying about the impact on the public dashboards you have created.
On the contrary, if you want your public dashboard to be updated with the latest version of the associated custom dashboard, you can do it in a few clicks by recreating a public dashboard with the same name as the one to update.
To be able to share custom dashboards you need to have the Manage data sharing & ingestion capability.
"},{"location":"usage/dashboards-share/#create-a-public-dashboard","title":"Create a public dashboard","text":"
At the top-right of your custom dashboard page, you will find a button that opens a panel to manage the public dashboards associated with this custom dashboard.
This panel has two parts: at the top, a form allowing you to create public dashboards; below, the list of the public dashboards you have created.
"},{"location":"usage/dashboards-share/#form-to-create-a-new-public-dashboard","title":"Form to create a new public dashboard","text":"
First you need to specify a name for your public dashboard. This name will be displayed on the dashboard page. The name is also used to generate an ID for your public dashboard that will be used in the URL to access the dashboard.
The ID is generated as follows: replace all spaces with the symbol - and remove special characters. This ID also needs to be unique in the platform, as it is used in the URL to access the dashboard.
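As an illustration of this rule (not the platform's actual implementation), the generation can be sketched as:

```python
# Sketch of the ID generation rule described above; an illustration only,
# not the platform's actual code.
import re

def public_dashboard_id(name: str) -> str:
    cleaned = re.sub(r"[^A-Za-z0-9 -]", "", name)  # remove special characters
    return re.sub(r"\s+", "-", cleaned.strip())    # replace spaces with "-"

print(public_dashboard_id("Threat landscape (2024)!"))  # Threat-landscape-2024
```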
Then you can choose whether the public dashboard is enabled or not. A disabled dashboard means that you cannot access the public dashboard through the custom URL, but you can still manage it from this panel.
Finally, you choose the maximum level of marking definitions for the data to be displayed in the public dashboard. For example, if you choose TLP:AMBER, then the data fetched by the widgets inside the public dashboard will be at most AMBER; you won't retrieve data with a RED marking definition.
Also note that the list of marking definitions you can see is based on your current marking access in the platform and on the maximum shareable marking definitions defined in your groups.
Define maximum shareable markings in groups
As a platform administrator, you can define, for each group and each type of marking definition, the maximum marking definitions shareable through a public dashboard, regardless of the definition set by users in their public dashboards.
"},{"location":"usage/dashboards-share/#list-of-the-public-dashboards","title":"List of the public dashboards","text":"
When you have created a public dashboard, it will appear in the list below the form.
In this list, each item represents a public dashboard you have created. For each one, you can see its name, the path of the URL, the max marking definitions, the date of creation, the status indicating whether the dashboard is enabled, and some actions.
The possible actions are: copy the link of the public dashboard, disable or enable the dashboard and delete the dashboard.
To share a public dashboard, just copy the link and give the URL to the person you want to share it with. The dashboard will be visible even if the person is not connected to OpenCTI.
OpenCTI provides an adaptable and entirely customizable dashboard functionality. The flexibility of OpenCTI's dashboard ensures a tailored and insightful visualization of data, fostering a comprehensive understanding of the platform's knowledge, relationships, and activities.
You have the flexibility to tailor the arrangement of widgets on your dashboard, optimizing organization and workflow efficiency. Widgets can be intuitively placed to highlight key information. Additionally, you can resize widgets from the bottom right corner based on the importance of the information, enabling adaptation to specific analytical needs. This technical flexibility ensures a fluid, visually optimized user experience.
Moreover, the top banner of the dashboard offers a convenient feature to configure the timeframe for displayed data. This can be accomplished through the selection of a relative time period, such as "Last 7 days", or by specifying fixed "Start" and "End" dates, allowing users to precisely control the temporal scope of the displayed information.
In OpenCTI, the power to create custom dashboards comes with a flexible access control system, allowing users to tailor visibility and rights according to their collaborative needs.
When a user crafts a personalized dashboard, by default it remains visible only to its creator. At this stage, the creator holds administrative access rights. However, they can extend access and rights to others using the "Manage access" button, denoted by a lock icon, located at the top right of the dashboard page.
Levels of access:
View: Access to view the dashboard.
Edit: View + Permission to modify and update the dashboard and its widgets.
Manage: Edit + Ability to delete the dashboard and to control user access and rights.
It's crucial to emphasize that at least one user must retain admin access to ensure ongoing management capabilities for the dashboard.
Knowledge access restriction
The platform's data access restrictions also apply to dashboards. The data displayed in the widgets is subject to the user's access rights within the platform. Therefore, an admin user will not see the same results in the widgets as a user with limited access, such as viewing only TLP:CLEAR data (assuming the platform contains data beyond TLP:CLEAR).
"},{"location":"usage/dashboards/#share-dashboard-and-widget-configurations","title":"Share dashboard and widget configurations","text":"
OpenCTI provides functionality for exporting, importing and duplicating dashboard and widget configurations, facilitating the seamless transfer of customized dashboard setups between instances or users.
The dashboard configuration will be saved as a JSON file, with the title formatted as [YYYYMMDD]_octi_dashboard_[dashboard title].
The widget configuration will be saved as a JSON file, with the title formatted as [YYYYMMDD]_octi_widget_[widget view]. In both cases, the exported file shares the same overall shape, sketched below.
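Based on the required properties listed in the compatibility note below, the overall shape of such a file can be sketched as follows (illustrative values, shown as a Python dict for consistency with the other examples):

```python
# Sketch of the exported configuration shape, based on the required properties
# listed in the compatibility note below. Field values are illustrative.
exported_configuration = {
    "openCTI_version": "5.12.0",  # platform version at export time (5.12.0 and above)
    "type": "dashboard",          # "dashboard" or "widget"
    "configuration": {},          # the dashboard or widget definition itself
}
```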
When exporting a dashboard or widget configuration, all filters are exported as is. Filters on objects that do not exist in the receiving platform will need manual deletion after import. Filters to be deleted can be identified by their barred "delete" value.
Dashboards can be imported from the custom dashboards list:
Hover over the Add button (+) in the right bottom corner.
Click on the Import dashboard button (cloud with an upward arrow).
Select your file.
To import a widget, the same mechanism is used, but from a dashboard view.
Configuration compatibility
Only JSON files with the required properties will be accepted, including "openCTI_version: [5.12.0 and above]", "type: [dashboard|widget]", and a "configuration". This applies to both dashboard and widget configurations.
Dashboards can be duplicated from either the custom dashboards list or the dashboard view.
To duplicate a dashboard from the custom dashboards list:
Click on the burger menu button at the end of the dashboard line.
Select Duplicate.
To duplicate a widget, the same mechanism is used, but from the burger menu button in the upper right-hand corner of the widget.
To duplicate a dashboard from the dashboard view:
Navigate to the desired dashboard.
Click on the Duplicate the dashboard button (two stacked sheets) located in the top-right corner of the dashboard.
Upon successful duplication, a confirmation message is displayed for a short duration, accompanied by a link for easy access to the new dashboard view. Nevertheless, the new dashboard can still be found in the dashboards list.
Dashboard access
The user importing or duplicating a dashboard becomes the only one with access to it. Access can then be managed as usual.
To enable a unified approach in the description of threat intelligence knowledge as well as in importing and exporting data, the OpenCTI data model is based on the STIX 2.1 standard. Thus, we highly recommend taking a look at the STIX Introductory Walkthrough and the different kinds of STIX relationships to get a better understanding of how OpenCTI works.
Some more important STIX naming shortcuts are:
STIX Domain Objects (SDO): Attack Patterns, Malware, Threat Actors, etc.
STIX Cyber Observables (SCO): IP addresses, domain names, hashes, etc.
In some cases, the model has been extended to be able to:
Support more types of SCOs to model information systems, such as cryptocurrency wallets, user agents, etc.
Support more types of SDOs to model disinformation and cybercrime, such as channels, events, narratives, etc.
Support more types of SROs to extend the new SDOs, such as amplifies, publishes, etc.
"},{"location":"usage/data-model/#implementation-in-the-platform","title":"Implementation in the platform","text":""},{"location":"usage/data-model/#diagram-of-types","title":"Diagram of types","text":"
You can find below the diagram of all types of entities and relationships available in OpenCTI.
"},{"location":"usage/data-model/#attributes-and-properties","title":"Attributes and properties","text":"
To get a comprehensive list of available properties for a given type of entity or relationship, you can use the GraphQL playground schema available in "Profile > Playground". Then you can click on schema. You can, for instance, search for the keyword IntrusionSet:
"},{"location":"usage/dates/","title":"Meaning of dates","text":"
In OpenCTI, entities can contain various dates, each representing different types of information. The available dates vary depending on the entity types.
In OpenCTI, dates play a crucial role in understanding the context and history of entities. Here's a breakdown of the different dates you might encounter in the platform:
"Platform creation date": this date signifies the moment the entity was created within OpenCTI. On the API side, this timestamp corresponds to the created_at field. It reflects the initiation of the entity within the OpenCTI environment.
"Original creation date": this date reflects the original creation date of the data on the source's side. It becomes relevant if the source provides this information and if the connector responsible for importing the data takes it into account. In cases where the source date is unavailable or not considered, this date defaults to the import date (i.e. the "Platform creation date"). On the API side, this timestamp corresponds to the created field.
"Modification date": this date captures the most recent modification made to the entity, whether a connector automatically modifies it or a user manually edits the entity. On the API side, this timestamp corresponds to the updated_at field. It serves as a reference point for tracking the latest changes made to the entity.
Date not shown in the GUI: there is an additional date which is not visible on the entity in the GUI: the modified field in the API. This date reflects the original update date of the data on the source's side. The difference between modified and updated_at is identical to the difference between created and created_at.
Understanding these dates is pivotal for contextualizing the information within OpenCTI, ensuring a comprehensive view of entity history and evolution.
The technical dates refer to the dates directly associated with data management within the platform. The API fields corresponding to technical dates are:
created_at: Indicates the date and time when the entity was created in the platform.
updated_at: Represents the date and time of the most recent update to the entity in the platform.
The functional dates are the functionally significant dates, often indicating specific events or milestones. The API fields corresponding to functional dates are:
created: Denotes the date and time when the entity was created on the source's side.
modified: Represents the date and time of the most recent modification to the entity on the source's side.
start_time: Indicates the start date and time associated with a relationship.
stop_time: Indicates the stop date and time associated with a relationship.
first_seen: Represents the initial date and time when the entity/activity was observed.
last_seen: Represents the most recent date and time when the entity/activity was observed.
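For illustration, reading these fields with client-python could look like the following sketch (placeholder URL, token and report id; field availability depends on the queried attributes):

```python
# Sketch: comparing source-side and platform-side dates on a report.
# URL, token and report id are placeholders.
from pycti import OpenCTIApiClient

client = OpenCTIApiClient("https://opencti.example.com", "<API_TOKEN>")
report = client.report.read(id="<report-id>")
print(report["created"], report["created_at"])   # source-side vs platform creation
print(report["modified"], report["updated_at"])  # source-side vs platform modification
```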
One of the core concepts of the OpenCTI knowledge graph is the set of underlying mechanisms implemented to accurately deduplicate and consolidate (aka upsert) information about entities and relationships.
When an object is created in the platform, whether manually by a user or automatically by the connectors / workers chain, the platform checks if something already exists based on some properties of the object. If the object already exists, it will return the existing object and, in some cases, update it as well.
Technically, OpenCTI generates deterministic IDs based on the properties listed below to prevent duplicates (aka "ID Contributing Properties"). It is also important to note that there is a special link between name and aliases, so that entities cannot have overlapping aliases or an alias already used in the name of another entity.
"},{"location":"usage/deduplication/#entities","title":"Entities","text":"Type Attributes Area (name OR x_opencti_alias) AND x_opencti_location_type Attack Pattern (name OR alias) AND optional x_mitre_id Campaign name OR alias Channel name OR alias City (name OR x_opencti_alias) AND x_opencti_location_type Country (name OR x_opencti_alias) AND x_opencti_location_type Course Of Action (name OR alias) AND optional x_mitre_id Data Component name OR alias Data Source name OR alias Event name OR alias Feedback Case name AND created (date) Grouping name AND context Incident name OR alias Incident Response Case name OR alias Indicator pattern OR alias Individual (name OR x_opencti_alias) and identity_class Infrastructure name OR alias Intrusion Set name OR alias Language name OR alias Malware name OR alias Malware Analysis name OR alias Narrative name OR alias Note None Observed Data name OR alias Opinion None Organization (name OR x_opencti_alias) and identity_class Position (name OR x_opencti_alias) AND x_opencti_location_type Region name OR alias Report name AND published (date) RFI Case name AND created (date) RFT Case name AND created (date) Sector (name OR alias) and identity_class Task None Threat Actor name OR alias Tool name OR alias Vulnerability name OR alias
Names and aliases management
The name and aliases of an entity define a set of unique values, so it's not possible to have a name equal to an alias and vice versa.
For STIX Cyber Observables, OpenCTI also generates deterministic IDs based on the STIX specification, using the "ID Contributing Properties" defined for each type of observable.
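As an illustration with client-python (placeholder URL and token), creating the same entity twice upserts instead of duplicating:

```python
# Sketch: deterministic IDs make a second create an upsert, not a duplicate.
# URL and token are placeholders.
from pycti import OpenCTIApiClient

client = OpenCTIApiClient("https://opencti.example.com", "<API_TOKEN>")
first = client.malware.create(name="Emotet")
second = client.malware.create(name="Emotet", description="Updated description")
assert first["id"] == second["id"]  # same name -> same deterministic ID, upserted
```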
In cases where an entity already exists in the platform, incoming creations can trigger updates to the existing entity's attributes. This logic has been implemented to converge the knowledge base towards the highest confidence and quality levels for both entities and relationships.
To understand in detail how the deduplication mechanism works in the context of the maximum confidence level, you can navigate through this complete diagram (section deduplication):
"},{"location":"usage/delete-restore/","title":"Delete and restore knowledge","text":"
Knowledge can be deleted from OpenCTI either in an overview of an object or using background tasks. When an object is deleted, all its relationships and references to other objects are also deleted.
The deletion event is written to the stream, to trigger automated playbooks or synchronize another platform.
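For example, a minimal consumer of the raw event stream that reacts to delete events might look like this sketch, using the third-party sseclient-py package; the URL and token are placeholders, and error handling is omitted:

```python
import json

import requests
import sseclient  # pip install sseclient-py

resp = requests.get(
    "https://opencti.example.com/stream",  # raw live stream endpoint
    headers={"Authorization": "Bearer <api-token>"},
    stream=True,
)

for event in sseclient.SSEClient(resp).events():
    if event.event == "delete":
        payload = json.loads(event.data)
        # React to the deletion, e.g. propagate it to another system
        print("deleted:", payload["data"]["id"])
```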
Since OpenCTI 6.1, a record of the deleted objects is kept for a given period of time, allowing you to restore them on demand. This does not impact the stream events or the other side effects of the deletion: the object is still deleted.
A view called "Trash" displays all "delete" operations, entities and relationships alike.
A delete operation contains not only the entity or relationship that has been deleted, but also all the relationships and references between this main object and other elements in the platform.
You can sort, filter or search this table using the usual UI controls. The available columns are limited to the type of object, its representation (most of the time, the name of the object), the user who deleted the object, the date and time of deletion, and the marking of the object.
Note that the delete operations (i.e. the entries in this table view) inherit the marking of the main entity that was deleted, and thus follow the same access restriction as the object that was deleted.
You can individually restore or permanently delete an object from the trash view using the burger menu at the end of the line.
Alternatively, you can use the checkboxes at the start of the line to select a subset of deleted objects, and trigger a background task to restore or permanently delete them by batch.
Restoring an element creates it again in the platform with the same information it had before its deletion. It also restores all the relationships from or to this object that were deleted during the deletion operation. If the object had attached files (uploaded or exported), they are also restored.
When it comes to restoring a deleted object from the trash, the current implementation has several limitations. First and foremost, if an object in the trash has lost a relationship dependency (i.e. the other side of a relationship from or to this object is no longer in the live database), you will not be able to restore the object.
In such a case, if the missing dependency is in the trash too, you can manually restore this dependency first and then retry.
If the missing dependency has been permanently deleted, the object cannot be recovered.
In other words:
- no partial restore: the object and all its relationships must be restored in one pass
- no "cascading" restore: restoring one object does not automatically restore all linked objects in the trash
Enriching the data within the OpenCTI platform is made seamlessly through the integration of enrichment connectors. These connectors facilitate the retrieval of additional data from external sources or portals.
Enrichment can be conducted automatically in two distinct modes:
Upon data arrival: Configuring the connector to run automatically when data arrives in OpenCTI ensures a real-time enrichment process, supplementing the platform's data. However, it's advisable to avoid automatic enrichment for quota-based connectors to paid sources, to prevent quickly depleting all quotas. Additionally, this automatic enrichment contributes to increased data volume: on a large scale, with hundreds of thousands of objects, the disk space occupied by this data can be substantial and should be taken into account, especially if disk space is a concern. The automatic execution is configured at the connector level using the "auto: true|false" parameter (see the configuration sketch after this list).
Targeted enrichment via playbooks: Enrichment can also be performed in a more targeted manner using playbooks. This approach allows for a customized enrichment strategy, focusing on specific objects and optimizing the relevance of the retrieved data.
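As a sketch, automatic enrichment is typically toggled through the connector's environment configuration; the connector name and image below are hypothetical placeholders:

```yaml
# docker-compose.yml (excerpt), hypothetical enrichment connector
connector-example-enrichment:
  image: opencti/connector-example:latest  # placeholder image
  environment:
    - OPENCTI_URL=http://opencti:8080
    - OPENCTI_TOKEN=ChangeMe
    - CONNECTOR_ID=ChangeMe
    - CONNECTOR_NAME=Example Enrichment
    - CONNECTOR_AUTO=true  # false = manual or playbook-triggered only
```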
Manually initiating the enrichment process is straightforward. Simply locate the button with the cloud icon at the top right of an entity.
Clicking on this icon unveils a side panel displaying a list of available connectors that can be activated for the given object. If no connectors appear in the panel, it indicates that no enrichment connectors are available for the specific type of object in focus.
Activation of an enrichment connector triggers a contact with the designated remote source, importing a set of data into OpenCTI to enrich the selected object. Each enrichment connector operates uniquely, focusing on a specific set of object types it can enrich and a distinct set of data it imports. Depending on the connector, it may establish relationships, add external references, or complete object information, thereby contributing to the comprehensiveness of information within the platform.
The list of available connectors can be found in our connectors catalog. In addition, further documentation on connectors is available on the dedicated documentation page.
Impact of the max confidence level
The maximum confidence level per user can have an impact on enrichment connectors, preventing them from updating data in the platform. To understand the concept and the potential issues you could face, please navigate to this page.
When you click on "Analyses" in the left-side bar, you see all the "Analyses" tabs, visible on the top bar on the left. By default, users directly access the "Reports" tab, but can navigate to the other tabs as well.
From the Analyses section, users can access the following tabs:
Reports: See Reports as a sort of container used to detail and structure what is contained in a specific report, either from a source or written by yourself. Think of it as an Intelligence Production in OpenCTI.
Groupings: Groupings are containers, like Reports, but do not represent an Intelligence Production. They regroup Objects sharing an explicit context. For example, a Grouping might represent a set of data that, in time, given sufficient analysis, would mature to convey an incident or threat report as a Report container.
Malware Analyses: As defined by the STIX 2.1 standard, a Malware Analysis captures the metadata and results of a particular static or dynamic analysis performed on a malware instance or family.
Notes: Through this tab, you can find all the Notes that have been written in the platform, for example to add some analyst's unstructured knowledge about an Object.
External references: Intelligence is never created from nothing. External references give users a way to link sources or reference documents to any Object in the platform.
Reports are one of the central components of the platform. It is from a Report that knowledge is extracted and integrated in the platform for further navigation, analyses and exports. Always tying the information back to a report allows the user to identify the source of any piece of information in the platform at all times.
In the MITRE STIX 2.1 documentation, a Report is defined as such:
Reports are collections of threat intelligence focused on one or more topics, such as a description of a threat actor, malware, or attack technique, including context and related details. They are used to group related threat intelligence together so that it can be published as a comprehensive cyber threat story.
As a result, a Report object in OpenCTI is a set of attributes and metadata defining and describing a document outside the platform, which can be a threat intelligence report from a security research team, a blog post, a press article, a video, a conference extract, a MISP event, or any other type of document and source.
When clicking on the Reports tab at the top left, you see the list of all the Reports you have access to, in accordance with your allowed marking definitions. You can then search and filter on some common and specific attributes of reports.
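To illustrate the container logic, here is a minimal client-python sketch; the names, dates and URL are placeholders:

```python
from pycti import OpenCTIApiClient

client = OpenCTIApiClient("https://opencti.example.com", "<api-token>")

malware = client.malware.create(name="Emotet")

report = client.report.create(
    name="Example threat report",
    description="Knowledge extracted from a blog post",
    published="2024-01-15T00:00:00Z",  # part of the report deduplication key
    report_types=["threat-report"],
)

# Tie the extracted knowledge back to its source report
client.report.add_stix_object_or_stix_relationship(
    id=report["id"],
    stixObjectOrStixRelationshipId=malware["id"],
)
```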
"},{"location":"usage/exploring-analysis/#visualizing-knowledge-within-a-report","title":"Visualizing Knowledge within a Report","text":"
When clicking on a Report, you land on the Overview tab. For a Report, the following tabs are accessible:
Overview: as described here.
Knowledge: a complex tab that regroups all the structured Knowledge contained in the report, accessible through different views (See below for a dive-in). As described here.
Content: a tab to upload or create outcome documents displaying the content of the Report (for example PDF, text, HTML or markdown files). The content of the document is displayed to ease access to the Knowledge through a readable format. As described here.
Entities: a table containing all SDO (STIX Domain Objects) contained in the Report, with search and filters available. It also displays whether the SDO has been added directly or through inferences with the reasoning engine.
Observables: a table containing all SCO (STIX Cyber Observables) contained in the Report, with search and filters available. It also displays whether the SCO has been added directly or through inferences with the reasoning engine.
Data: as described here.
Exploring and modifying the structured Knowledge contained in a Report can be done through different lenses.
In Graph view, STIX SDO are displayed as graph nodes and relationships as graph links. Nodes are colored depending on their type. Direct relationships are displayed as plain links and inferred relationships as dotted links. At the top right, you will find a series of icons. From there you can change the current type of view. Here you can also perform global actions on the Knowledge of the Report. Let's highlight 2 of them:
Suggestions: This tool suggests some logical relationships to add between your contained Objects to give more consistency to your Knowledge.
Share with an Organization: if you have designated a main Organization in the platform settings, you can here share your Report and its content with users of another Organization.
At the bottom, you have many options to manipulate the graph:
Multiple options for shaping the graph and applying forces to the nodes and links
Multiple selection options
Multiple filters, including a time range selector allowing you to see the evolution of the Knowledge within the Report.
Multiple creation and edition tools to modify the Knowledge contained in the Report.
Through this view, you can map existing or new Objects directly from readable content, allowing you to quickly append structured Knowledge to your Report before refining it with relationships and details. This view is a great place to see the continuum between unstructured and structured Knowledge of a specific Intelligence Production.
This view allows you to see the structured Knowledge chronologically. It is really useful when the report describes an attack or a campaign that lasted some time, and the analyst paid attention to the dates. The view can be filtered and can display relationships too.
The correlation view is a great way to visualize and find other Reports related to your current subject of interest. This graph displays all Reports related to the important nodes contained in your current Report, for example Objects like Malware or Intrusion Sets.
If your Report describes, let's say, an attack, a campaign, or an understanding of an Intrusion Set, it should contain multiple Attack Pattern Objects to structure the Knowledge about the TTPs of the Threat Actor. Those attack patterns can be displayed as highlighted matrices, by default the MITRE ATT&CK Enterprise matrix. As some matrices can be huge, the view can also be filtered to only display the attack patterns described in the Report.
Groupings are an alternative to Reports for grouping Objects sharing a context without describing an Intelligence Production.
In the MITRE STIX 2.1 documentation, a Grouping is defined as such:
A Grouping object explicitly asserts that the referenced STIX Objects have a shared context, unlike a STIX Bundle (which explicitly conveys no context). A Grouping object should not be confused with an intelligence product, which should be conveyed via a STIX Report. A STIX Grouping object might represent a set of data that, in time, given sufficient analysis, would mature to convey an incident or threat report as a STIX Report object. For example, a Grouping could be used to characterize an ongoing investigation into a security event or incident. A Grouping object could also be used to assert that the referenced STIX Objects are related to an ongoing analysis process, such as when a threat analyst is collaborating with others in their trust community to examine a series of Campaigns and Indicators.
When clicking on the Groupings tab at the top of the interface, you see the list of all the Groupings you have access to, in accordance with your allowed marking definitions. You can then search and filter on some common and specific attributes of the groupings.
Clicking on a Grouping, you land on its Overview tab. For a Grouping, the following tabs are accessible:
Overview: as described here.
Knowledge: a complex tab that regroups all the structured Knowledge contained in the Grouping, as for a Report, except for the Timeline view. As described here.
Entities: a table containing all SDO (STIX Domain Objects) contained in the Grouping, with search and filters available. It also displays whether the SDO has been added directly or through inferences with the reasoning engine.
Observables: a table containing all SCO (STIX Cyber Observables) contained in the Grouping, with search and filters available. It also displays whether the SCO has been added directly or through inferences with the reasoning engine.
Data: as described here.
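A minimal creation sketch, assuming the grouping API of client-python mirrors the other container helpers (the name and context values are placeholders):

```python
from pycti import OpenCTIApiClient

client = OpenCTIApiClient("https://opencti.example.com", "<api-token>")

grouping = client.grouping.create(
    name="Ongoing investigation: suspicious cluster",
    context="suspicious-activity",  # part of the grouping deduplication key
    description="Objects sharing a context, not an intelligence production",
)
```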
Malware Analyses are an important part of Cyber Threat Intelligence, allowing a precise understanding of what a malware really does on the host and how, but also how and from where it receives its commands and communicates its results.
In OpenCTI, Malware Analyses can be created from enrichment connectors that will take an Observable as input and perform a scan on an online service platform to bring back results. As such, Malware Analyses can be done on File, Domain and URL.
In the MITRE STIX 2.1 documentation, a Malware Analysis is defined as such:
Malware Analyses captures the metadata and results of a particular static or dynamic analysis performed on a malware instance or family.
When clicking on the Malware Analyses tab at the top of the interface, you see the list of all the Malware Analyses you have access to, in accordance with your allowed marking definitions. You can then search and filter on some common and specific attributes of the Malware Analyses.
Clicking on a Malware Analysis, you land on its Overview tab. The following tabs are accessible:
Overview: this view contains some additions to the common Overview here. You will find details about how the analysis has been performed, the global result regarding the maliciousness of the analysed artifact, and all the Observables that have been found during the analysis.
Knowledge: if your Malware Analysis is linked to other Objects that are not part of the analysis result, they will be displayed here. As described here.
Data: as described here.
History: as described here.
Not all Knowledge can be structured. To allow any user to share their insights about a specific piece of Knowledge, they can create a Note for every Object and relationship in OpenCTI they have access to. All the Notes are listed within the Analyses menu, allowing a global review of these unstructured additions to the global Knowledge.
In the MITRE STIX 2.1 documentation, a Note is defined as such:
A Note is intended to convey informative text to provide further context and/or to provide additional analysis not contained in the STIX Objects, Marking Definition objects, or Language Content objects which the Note relates to. Notes can be created by anyone (not just the original object creator).
Clicking on a Note, you land on its Overview tab. The following tabs are accessible:
Overview: as described here.
Data: as described here.
History: as described here.
Intelligence is never created from nothing. External references give users a way to link sources or reference documents to any Object in the platform. All external references are listed within the Analyses menu, giving direct access to the sources of the structured Knowledge.
In the MITRE STIX 2.1 documentation, an External Reference is defined as such:
External references are used to describe pointers to information represented outside of STIX. For example, a Malware object could use an external reference to indicate an ID for that malware in an external database or a report could use references to represent source material.
Clicking on an External Reference, you land on its Overview tab. The following tabs are accessible:
Overview: as described here.
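As a sketch with client-python, an External Reference is created and then attached to an object; the source name and URL are placeholders, and the entity stands for any SDO you have access to:

```python
from pycti import OpenCTIApiClient

client = OpenCTIApiClient("https://opencti.example.com", "<api-token>")

entity = client.malware.create(name="Emotet")

external_reference = client.external_reference.create(
    source_name="Example blog",
    url="https://blog.example.com/emotet-analysis",  # placeholder URL
    description="Original write-up of the campaign",
)

client.stix_domain_object.add_external_reference(
    id=entity["id"],
    external_reference_id=external_reference["id"],
)
```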
When you click on "Arsenal" in the left-side bar, you access all the "Arsenal" tabs, visible on the top bar on the left. By default, users directly access the "Malware" tab, but can navigate to the other tabs as well.
From the Arsenal section, users can access the following tabs:
Malware: Malware represents any piece of code specifically designed to damage, disrupt, or gain unauthorized access to computer systems, networks, or user data.
Channels: Channels, in the context of cybersecurity, refer to places or means through which actors disseminate information. This category is used in particular in the context of FIMI (Foreign Information Manipulation Interference).
Tools: Tools represent legitimate, installed software or hardware applications on an operating system that can be misused by attackers for malicious purposes. (e.g. LOLBAS).
Vulnerabilities: Vulnerabilities are weaknesses or flaws that can be exploited by attackers to compromise the security, integrity, or availability of a computer system or network.
Malware encompasses a broad category of malicious pieces of code built, deployed, and operated by intrusion sets. Malware can take many forms, including viruses, worms, Trojans, ransomware, spyware, and more. These entities are created by individuals or groups, including nation-states, state-sponsored groups, corporations, or hacktivist collectives.
Use the Malware SDO to model and track these threats comprehensively, facilitating in-depth analysis, response, and correlation with other security data.
When clicking on the Malware tab on the top left, you see the list of all the Malware you have access to, in accordance with your allowed marking definitions. These malware are displayed as Cards where you can find a summary of the important Knowledge associated with each of them: description, aliases, related intrusion sets, countries and sectors they target, and labels. You can then search and filter on some common and specific attributes of Malware.
At the top right of each Card, you can click the star icon to mark it as a favorite. It will pin the card at the top of the list. You will also be able to easily display all your favorites in your Custom Dashboards.
"},{"location":"usage/exploring-arsenal/#visualizing-knowledge-associated-with-a-malware","title":"Visualizing Knowledge associated with a Malware","text":"
When clicking on a Malware card, you land on its Overview tab. For a Malware, the following tabs are accessible:
Overview: as described here.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Malware. Different thematic views are proposed to easily see the victimology, the threat actors and intrusion sets using the Malware, etc. As described here.
Channels - such as forums, websites and social media platforms (e.g. Twitter, Telegram) - are mediums for disseminating news, knowledge, and messages to a broad audience. While they offer benefits like open communication and outreach, they can also be leveraged for nefarious purposes, such as spreading misinformation, coordinating cyberattacks, or promoting illegal activities.
Monitoring and managing content within Channels aids in analyzing threats, activities, and indicators associated with various threat actors, campaigns, and intrusion sets.
When clicking on the Channels tab at the top left, you see the list of all the Channels you have access to, in accordance with your allowed marking definitions. These channels are displayed in a list where you can find certain fields characterizing the entity: type of channel, labels, and dates. You can then search and filter on some common and specific attributes of Channels.
"},{"location":"usage/exploring-arsenal/#visualizing-knowledge-associated-with-a-channel","title":"Visualizing Knowledge associated with a Channel","text":"
When clicking on a Channel in the list, you land on its Overview tab. For a Channel, the following tabs are accessible:
Overview: as described here.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Channel. Different thematic views are proposed to easily see the victimology, the threat actors and intrusion sets using the Channel, etc. As described here.
Tools refer to legitimate, pre-installed software applications, command-line utilities, or scripts that are present on a compromised system. These objects enable you to model and monitor the activities of these tools, which can be misused by attackers.
When clicking on the Tools tab at the top left, you see the list of all the Tools you have access to, in accordance with your allowed marking definitions. These tools are displayed in a list where you can find certain fields characterizing the entity: labels and dates. You can then search and filter on some common and specific attributes of Tools.
"},{"location":"usage/exploring-arsenal/#visualizing-knowledge-associated-with-an-observed-data","title":"Visualizing Knowledge associated with an Observed Data","text":"
When clicking on a Tool in the list, you land on its Overview tab. For a Tool, the following tabs are accessible:
Overview: as described here.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Tool. Different thematic views are proposed to easily see the threat actors, the intrusion sets and the malware using the Tool. As described here.
Vulnerabilities represent weaknesses or flaws in software, hardware, configurations, or systems that can be exploited by malicious actors. This object assists in managing and tracking the organization's security posture by identifying areas that require attention and remediation, while also providing insights into associated intrusion sets, malware and campaigns where relevant.
When clicking on the Vulnerabilities tab at the top left, you see the list of all the Vulnerabilities you have access to, in accordance with your allowed marking definitions. These vulnerabilities are displayed in a list where you can find certain fields characterizing the entity: CVSS3 severity, labels, dates and creators (in the platform). You can then search and filter on some common and specific attributes of Vulnerabilities.
"},{"location":"usage/exploring-arsenal/#visualizing-knowledge-associated-with-an-observed-data_1","title":"Visualizing Knowledge associated with an Observed Data","text":"
When clicking on a Vulnerability in the list, you land on its Overview tab. For a Vulnerability, the following tabs are accessible:
Overview: as described here.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Vulnerability. Different thematic views are proposed to easily see the threat actors, the intrusion sets and the malware exploiting the Vulnerability. As described here.
When you click on "Cases" in the left-side bar, you access all the "Cases" tabs, visible on the top bar on the left. By default, users directly access the "Incident Responses" tab, but can navigate to the other tabs as well.
Like Analyses, Cases can contain other objects. This way, by adding the context and results of your investigations to the case, you will be able to get an up-to-date overview of the ongoing situation, and later more easily produce an incident report.
From the Cases section, users can access the following tabs:
Incident Responses: This type of Cases is dedicated to the management of incidents. An Incident Response case does not represent an incident, but all the context and actions that will encompass the response to a specific incident.
Request for Information: CTI teams are often asked to provide extensive information and analysis on a specific subject, be it related to an ongoing incident or a particular trending threat. Request for Information cases allow you to store context and actions relative to this type of request and its response.
Request for Takedown: When an organization is targeted by an attack campaign, a typical response action can be to request the takedown of elements of the attack infrastructure, for example a domain name impersonating the organization to phish its employees, or an email address used to deliver phishing content. As a takedown in most cases requires reaching out to external providers and being effective quickly, it often needs specific workflows. Request for Takedown cases give you a dedicated space to manage these specific actions.
Tasks: In every case, you need tasks to be performed in order to solve it. The Tasks tab allows you to review all created tasks, to quickly spot overdue ones, or to quickly see every task assigned to a specific user.
Feedbacks: If you use your platform to interact with other teams and provide them with CTI Knowledge, some users may want to give you feedback about it. Such feedback can easily be considered as another type of case to solve, as it will often refer to Knowledge inconsistencies or gaps.
"},{"location":"usage/exploring-cases/#incident-response-request-for-information-request-for-takedown","title":"Incident Response, Request for Information & Request for Takedown","text":""},{"location":"usage/exploring-cases/#general-presentation","title":"General presentation","text":"
Incident responses, Request for Information & Request for Takedown cases are an important part of the case management system in OpenCTI. Here, you can organize the work of your team to respond to cybersecurity situations. You can also give context to the team and other users on the platform about the situation and actions (to be) taken.
To manage the situation, you can issue Tasks and assign them to users in the platform, by directly creating a Task or by applying a Case template that will append a list of predefined tasks.
To bring context, you can use your Case as a container (like Reports or Groupings), allowing you to add any Knowledge from your platform in it. You can also use this possibility to trace your investigation, your Case playing the role of an Incident report. You will find more information about case management here.
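A minimal sketch, assuming your client-python version exposes the case objects; the names and field values below are hypothetical:

```python
from pycti import OpenCTIApiClient

client = OpenCTIApiClient("https://opencti.example.com", "<api-token>")

case = client.case_incident.create(
    name="Phishing wave, March 2024",
    description="Response actions for the phishing campaign targeting finance",
    severity="high",  # hypothetical open-vocabulary value
    priority="P2",    # hypothetical open-vocabulary value
)
```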
Incident Response, Request for Information & Request for Takedown are not STIX 2.1 Objects.
When clicking on the Incident Response, Request for Information & Request for Takedown tabs at the top, you see the list of all the Cases you have access to, in accordance with your allowed marking definitions. You can then search and filter on some common and specific attributes.
"},{"location":"usage/exploring-cases/#visualizing-knowledge-within-an-incident-response-request-for-information-request-for-takedown","title":"Visualizing Knowledge within an Incident Response, Request for Information & Request for Takedown","text":"
When clicking on an Incident Response, Request for Information or Request for Takedown, you land on the Overview tab. The following tabs are accessible:
Overview: the Overview of Cases is slightly different from the usual one (described here). The Cases' Overview also displays the list of the tasks associated with the case. It also lets you highlight the Incident, Report or Sighting at the origin of the case. If other cases contain some of the same Observables as your Case, they will be displayed as Related Cases in the Overview.
Knowledge: a complex tab that regroups all the structured Knowledge contained in the Case, accessible through different views (See below for a dive-in). As described here.
Content: a tab to upload or create outcome documents displaying the content of the Case (for example PDF, text, HTML or markdown files). The content of the document is displayed to ease access to the Knowledge through a readable format. As described here.
Entities: a table containing all SDO (STIX Domain Objects) contained in the Case, with search and filters available. It also displays whether the SDO has been added directly or through inferences with the reasoning engine.
Observables: a table containing all SCO (STIX Cyber Observables) contained in the Case, with search and filters available. It also displays whether the SCO has been added directly or through inferences with the reasoning engine.
Data: as described here.
Exploring and modifying the structured Knowledge contained in a Case can be done through different lenses.
In Graph view, STIX SDO are displayed as graph nodes and relationships as graph links. Nodes are colored depending on their type. Direct relationships are displayed as plain links and inferred relationships as dotted links. At the top right, you will find a series of icons. From there you can change the current type of view. Here you can also perform global actions on the Knowledge of the Case. Let's highlight 2 of them:
Suggestions: This tool suggests some logical relationships to add between your contained Objects to give more consistency to your Knowledge.
Share with an Organization: if you have designated a main Organization in the platform settings, you can here share your Case and its content with users of another Organization. At the bottom, you have many options to manipulate the graph:
Multiple options for shaping the graph and applying forces to the nodes and links
Multiple selection options
Multiple filters, including a time range selector allowing you to see the evolution of the Knowledge within the Case.
Multiple creation and edition tools to modify the Knowledge contained in the Case.
Through this view, you can map existing or new Objects directly from readable content, allowing you to quickly append structured Knowledge to your Case before refining it with relationships and details. This view is a great place to see the continuum between unstructured and structured Knowledge.
This view allows you to see the structured Knowledge chronologically. It is particularly useful in the context of a Case, allowing you to see the chain of events, either from the attack perspective, the defense perspective or both. The view can be filtered and can display relationships too.
Tasks are actions to be performed in the context of a Case (Incident Response, Request for Information, Request for Takedown). Usually, a task is assigned to a user, but important tasks may involve more participants.
When clicking on the Tasks tab at the top of the interface, you see the list of all the Tasks you have access to, in accordance with your allowed marking definitions. You can then search and filter on some common and specific attributes of the tasks.
Clicking on a Task, you land on its Overview tab. For a Task, the following tabs are accessible:
When a user fills in a feedback form from their Profile/Feedback menu, it becomes accessible here.
This feature gives you the opportunity to engage with other users of your platform and to respond directly to their concerns about it or the Knowledge, without the need for third-party software.
Clicking on a Feedback, you land on its Overview tab. For a Feedback, the following tabs are accessible:
OpenCTI's Entities objects provide a comprehensive framework for modeling various targets and attack victims within your threat intelligence data. With five distinct Entity object types, you can represent sectors, events, organizations, systems, and individuals. This robust classification empowers you to contextualize threats effectively, enhancing the depth and precision of your analysis.
When you click on "Entities" in the left-side bar, you access all the "Entities" tabs, visible on the top bar on the left. By default, users directly access the "Sectors" tab, but can navigate to the other tabs as well.
From the Entities section, users can access the following tabs:
Sectors: areas of activity.
Events: event in the real world.
Organizations: groups with specific aims such as companies and government entities.
Systems: technologies such as platforms and software.
Individuals: specific persons relevant to your threat intelligence analysis.
Sectors represent specific domains of activity, defining areas such as energy, government, health, finance, and more. Utilize sectors to categorize targeted industries or sectors of interest, providing valuable context for threat intelligence analysis within distinct areas of the economy.
When clicking on the Sectors tab at the top left, you see the list of all the Sectors you have access to, in accordance with your allowed marking definitions.
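For instance, a Sector is created through the generic identity API of client-python; this sketch uses placeholder names:

```python
from pycti import OpenCTIApiClient

client = OpenCTIApiClient("https://opencti.example.com", "<api-token>")

sector = client.identity.create(
    type="Sector",  # the identity_class used for deduplication derives from the type
    name="Energy",
    description="Energy and utilities",
)
```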
"},{"location":"usage/exploring-entities/#visualizing-knowledge-associated-with-a-sector","title":"Visualizing Knowledge associated with a Sector","text":"
When clicking on a Sector in the list, you land on its Overview tab. For a Sector, the following tabs are accessible:
Overview: as described here.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Sector. Different thematic views are proposed to easily see the related entities, the threats, the incidents, etc. linked to the Sector. As described here.
Analyses: as described here.
Sightings: a table containing all Sightings relationships corresponding to events in which an Indicator (IP, domain name, url, etc.) is sighted in the Sector.
Events encompass occurrences like international sports events, summits (e.g., G20), trials, conferences, or any significant happening in the real world. By modeling events, you can analyze threats associated with specific occurrences, allowing for targeted investigations surrounding high-profile incidents.
When clicking on the Events tab at the top left, you see the list of all the Events you have access to, in accordance with your allowed marking definitions.
"},{"location":"usage/exploring-entities/#visualizing-knowledge-associated-with-an-event","title":"Visualizing Knowledge associated with an Event","text":"
When clicking on an Event in the list, you land on its Overview tab. For an Event, the following tabs are accessible:
Overview: as described here.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Event. Different thematic views are proposed to easily see the related entities, the threats, the locations, etc. linked to the Event. As described here.
Analyses: as described here.
Sightings: a table containing all Sightings relationships corresponding to events in which an Indicator (IP, domain name, url, etc.) is sighted during an attack against the Event.
Organizations include diverse entities such as companies, government bodies, associations, non-profits, and other groups with specific aims. Modeling organizations enables you to understand the threat landscape concerning various entities, facilitating investigations into cyber-espionage, data breaches, or other malicious activities targeting specific groups.
When clicking on the Organizations tab at the top left, you see the list of all the Organizations you have access to, in accordance with your allowed marking definitions.
"},{"location":"usage/exploring-entities/#visualizing-knowledge-associated-with-an-organization","title":"Visualizing Knowledge associated with an Organization","text":"
When clicking on an Organization in the list, you land on its Overview tab. For an Organization, the following tabs are accessible:
Overview: as described here.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Organization. Different thematic views are proposed to easily see the related entities, the threats, the locations, etc. linked to the Organization. As described here.
Analyses: as described here.
Sightings: a table containing all Sightings relationships corresponding to events in which an Indicator (IP, domain name, url, etc.) is sighted in the Organization.
Data: as described here.
History: as described here.
Furthermore, an Organization can be observed from an "Author" perspective. It is possible to change this viewpoint to the right of the entity name, using the "Display as" drop-down menu (see screenshot below). This different perspective is accessible in the Overview, Knowledge and Analyses tabs. When switched to "Author" mode, the observed data pertains to the entity's description as an author within the platform:
Overview: The "Latest created relationships" and "Latest containers about the object" panels are replaced by the "Latest containers authored by this entity" panel.
Knowledge: A tab that presents an overview of the data authored by the Organization (i.e. counters and a graph).
Analyses: The list of all Analyses (Report, Groupings) and Cases (Incident response, Request for Information, Request for Takedown) for which the Organization is the author.
Systems represent software applications, platforms, frameworks, or specific tools like WordPress, VirtualBox, Firefox, Python, etc. Modeling systems allows you to focus on threats related to specific software or technology, aiding in vulnerability assessments, patch management, and securing critical applications.
When clicking on the Systems tab at the top left, you see the list of all the Systems you have access to, in accordance with your allowed marking definitions.
"},{"location":"usage/exploring-entities/#visualizing-knowledge-associated-with-a-system","title":"Visualizing Knowledge associated with a System","text":"
When clicking on a System in the list, you land on its Overview tab. For a System, the following tabs are accessible:
Overview: as described here.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the System. Different thematic views are proposed to easily see the related entities, the threats, the incidents, etc. linked to the System. As described here.
Analyses: as described here.
Sightings: a table containing all Sightings relationships corresponding to events in which an Indicator (IP, domain name, url, etc.) is sighted in the System.
Data: as described here.
History: as described here.
Furthermore, a System can be observed from an "Author" perspective. It is possible to change this viewpoint to the right of the entity name, using the "Display as" drop-down menu (see screenshot below). This different perspective is accessible in the Overview, Knowledge and Analyses tabs. When switched to "Author" mode, the observed data pertains to the entity's description as an author within the platform:
Overview: The "Latest created relationships" and "Latest containers about the object" panels are replaced by the "Latest containers authored by this entity" panel.
Knowledge: A tab that presents an overview of the data authored by the System (i.e. counters and a graph).
Analyses: The list of all Analyses (Report, Groupings) and Cases (Incident response, Request for Information, Request for Takedown) for which the System is the author.
Individuals represent specific persons relevant to your threat intelligence analysis. This category includes targeted individuals, or influential figures in various fields. Modeling individuals enables you to analyze threats related to specific people, enhancing investigations into cyber-stalking, impersonation, or other targeted attacks.
When clicking on the Individuals tab at the top left, you see the list of all the Individuals you have access to, in accordance with your allowed marking definitions.
"},{"location":"usage/exploring-entities/#visualizing-knowledge-associated-with-an-individual","title":"Visualizing Knowledge associated with an Individual","text":"
When clicking on an Individual in the list, you land on its Overview tab. For an Individual, the following tabs are accessible:
Overview: as described here.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Individual. Different thematic views are proposed to easily see the related entities, the threats, the locations, etc. linked to the Individual. As described here.
Analyses: as described here.
Sightings: a table containing all Sightings relationships corresponding to events in which an Indicator (IP, domain name, url, etc.) is sighted in the Individual.
Data: as described here.
History: as described here.
Furthermore, an Individual can be observed from an "Author" perspective. It is possible to change this viewpoint to the right of the entity name, using the "Display as" drop-down menu (see screenshot below). This different perspective is accessible in the Overview, Knowledge and Analyses tabs. When switched to "Author" mode, the observed data pertains to the entity's description as an author within the platform:
Overview: The "Latest created relationships" and "Latest containers about the object" panels are replaced by the "Latest containers authored by this entity" panel.
Knowledge: A tab that presents an overview of the data authored by the Individual (i.e. counters and a graph).
Analyses: The list of all Analyses (Report, Groupings) and Cases (Incident response, Request for Information, Request for Takedown) for which the Individual is the author.
When you click on "Events" in the left-side bar, you access all the "Events" tabs, visible on the top bar on the left. By default, users directly access the "Incidents" tab, but can navigate to the other tabs as well.
From the Events section, users can access the following tabs:
Incidents: In OpenCTI, Incidents correspond to a negative event happening on an information system. This can include a cyberattack (intrusion, phishing, etc.), a consolidated security alert generated by a SIEM or EDR that needs to be qualified, and so on. It can also refer to an information warfare attack in the context of countering disinformation.
Sightings: Sightings correspond to the event in which an Observable (IP, domain name, certificate, etc.) is detected by or within an information system, an individual or an organization. Most often, this corresponds to a security event transmitted by a SIEM or an EDR.
Observed Data: Observed Data has been added in OpenCTI for compliance with the STIX 2.1 standard. You can see it as a pseudo-container that contains Observables, like a line of a firewall log, for example. Currently, it is rarely used.
Incidents usually represent negative events impacting resources you want to protect, but local definitions can vary a lot, from a simple security event sent by a SIEM to a massive-scale supply chain attack impacting a whole activity sector.
In MITRE STIX 2.1, the Incident SDO has not yet been finalized and is the subject of significant work as part of a forthcoming STIX extension.
When clicking on the Incidents tab at the top left, you see the list of all the Incidents you have access to, in accordance with your allowed marking definitions.
"},{"location":"usage/exploring-events/#visualizing-knowledge-associated-with-an-incident","title":"Visualizing Knowledge associated with an Incident","text":"
When clicking on an Incident in the list, you land on its Overview tab. For an Incident, the following tabs are accessible:
Overview: as described here, with the particularity to display two distribution graphs of its related Entities (STIX SDO) and Observables (STIX SCO).
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Incident. Different thematic views are proposed to easily see the victimology, arsenal, techniques and so on used in the context of the Incident. As described here.
Content: this specific tab allows you to preview, manage and write deliverables associated with the Incident, for example an analytic report to share with other teams, or a markdown file to feed a collaborative wiki. As described here.
The Sightings correspond to events in which an Observable (IP, domain name, url, etc.) is detected by or within an information system, an individual or an organization. Most often, this corresponds to a security event transmitted by a SIEM or EDR.
In OpenCTI, as we are in a cybersecurity context, Sightings are associated with Indicators of Compromise (IoC) and the notions of "true positive" and "false positive".
It is important to note that Sightings are a type of relationship (not a STIX SDO or STIX SCO) between an Observable and an Entity or a Location.
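As an illustration, a sighting relationship can be created with client-python as in this sketch; the IDs, dates and values are placeholders, and x_opencti_negative carries the true/false positive qualification:

```python
from pycti import OpenCTIApiClient

client = OpenCTIApiClient("https://opencti.example.com", "<api-token>")

indicator = client.indicator.create(
    name="Suspicious IP",
    pattern="[ipv4-addr:value = '198.51.100.1']",
    pattern_type="stix",
    x_opencti_main_observable_type="IPv4-Addr",
)
organization = client.identity.create(type="Organization", name="ACME Corp")

sighting = client.stix_sighting_relationship.create(
    fromId=indicator["id"],
    toId=organization["id"],
    first_seen="2024-02-01T08:00:00Z",
    last_seen="2024-02-01T09:30:00Z",
    count=3,
    x_opencti_negative=False,  # False = qualified as a true positive
)
```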
When clicking on the Sightings tab at the top left, you see the list of all the Sightings you have access to, in accordance with your allowed marking definitions.
"},{"location":"usage/exploring-events/#visualizing-knowledge-associated-with-a-sighting","title":"Visualizing Knowledge associated with a Sighting","text":"
When clicking on a Sighting in the list, you land on its Overview tab. As other relationships in the platform, Sighting's overview displays common related metadata, containers, external references, notes and entities linked by the relationship.
In addition, this overview displays:
Qualification: whether the Sighting is a True Positive or a False Positive
Count: number of times the event has been seen
Observed Data corresponds to an extract from a log that contains Observables.
In the MITRE STIX 2.1, the Observed Data SDO is defined as such:
Observed Data conveys information about cybersecurity related entities such as files, systems, and networks using the STIX Cyber-observable Objects (SCOs). For example, Observed Data can capture information about an IP address, a network connection, a file, or a registry key. Observed Data is not an intelligence assertion, it is simply the raw information without any context for what it means.
When clicking on the Observed Data tab at the top left, you see the list of all the Observed Data you have access to, in accordance with your allowed marking definitions.
"},{"location":"usage/exploring-events/#visualizing-knowledge-associated-with-an-observed-data","title":"Visualizing Knowledge associated with an Observed Data","text":"
When clicking on an Observed Data in the list, you land on its Overview tab. The following tabs are accessible:
Overview: as described here, with the particularity to display a distribution graph of its related Observables (STIX SCO).
Entities: a sortable and filterable list of all Entities (SDO) in relation with the Observed Data
Observables: a sortable and filterable list of all Observables (SCO) in relation with the Observed Data
OpenCTI's Locations objects provides a comprehensive framework for representing various geographic entities within your threat intelligence data. With five distinct Location object types, you can precisely define regions, countries, areas, cities, and specific positions. This robust classification empowers you to contextualize threats geographically, enhancing the depth and accuracy of your analysis.
When you click on "Locations" in the left-side bar, you access all the "Locations" tabs, visible on the top bar on the left. By default, users directly access the "Regions" tab, but can navigate to the other tabs as well.
From the Locations section, users can access the following tabs:
Regions: very large geographical territories, such as a continent.
Countries: the world's countries.
Areas: more or less extensive geographical areas, often without a precisely defined limit.
Cities: the world's cities and urban centers.
Positions: precise geographical points, such as monuments, buildings, or specific event locations.
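As a sketch, Locations are created through the generic location API of client-python; the exact type names may vary slightly by version, and the values below are placeholders:

```python
from pycti import OpenCTIApiClient

client = OpenCTIApiClient("https://opencti.example.com", "<api-token>")

country = client.location.create(type="Country", name="France")

city = client.location.create(
    type="City",  # other types include Region and Position
    name="Paris",
    latitude=48.8566,
    longitude=2.3522,
)
```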
Regions encapsulate broader geographical territories, often representing continents or significant parts of continents. Examples include EMEA (Europe, Middle East, and Africa), Asia, Western Europe, and North America. Utilize regions to categorize large geopolitical areas and gain macro-level insights into threat patterns.
When clicking on the Regions tab at the top left, you see the list of all the Regions you have access to, in accordance with your allowed marking definitions.
"},{"location":"usage/exploring-locations/#visualizing-knowledge-associated-with-a-region","title":"Visualizing Knowledge associated with a Region","text":"
When clicking on a Region in the list, you land on its Overview tab. For a Region, the following tabs are accessible:
Overview: as described here, with the particularity of not having a Details section but a map locating the Region.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Region. Different thematic views are proposed to easily see the related entities, the threats, the incidents, etc. linked to the Region. As described here.
Analyses: as described here.
Sightings: a table containing all Sightings relationships corresponding to events in which an Indicator (IP, domain name, url, etc.) is sighted in a Region.
Countries represent individual nations across the world. With this object type, you can specify detailed information about a particular country, enabling precise localization of threat intelligence data. Countries are fundamental entities in geopolitical analysis, offering a focused view of activities within national borders.
When clicking on the Countries tab at the top left, you see the list of all the Countries you have access to, in accordance with your allowed marking definitions.
"},{"location":"usage/exploring-locations/#visualizing-knowledge-associated-with-a-country","title":"Visualizing Knowledge associated with a Country","text":"
When clicking on a Country in the list, you land on its Overview tab. For a Country, the following tabs are accessible:
Overview: as described here, with the particularity of not having a Details section but a map locating the Country.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Country. Different thematic views are proposed to easily see the related entities, the threats, the incidents, etc. linked to the Country. As described here.
Analyses: as described here.
Sightings: a table containing all Sightings relationships corresponding to events in which an Indicator (IP, domain name, url, etc.) is sighted in a Country.
Areas define specific geographical regions of interest, such as the Persian Gulf, the Balkans, or the Caucasus. Use areas to identify unique zones with distinct geopolitical, cultural, or strategic significance. This object type facilitates nuanced analysis of threats within defined geographic contexts.
When clicking on the Areas tab at the top left, you see the list of all the Areas you have access to, in accordance with your allowed marking definitions.
"},{"location":"usage/exploring-locations/#visualizing-knowledge-associated-with-an-area","title":"Visualizing Knowledge associated with an Area","text":"
When clicking on an Area in the list, you land on its Overview tab. For an Area, the following tabs are accessible:
Overview: as described here, with the particularity of not having a Details section but a map locating the Area.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Area. Different thematic views are proposed to easily see the related entities, the threats, the incidents, etc. linked to the Area. As described here.
Analyses: as described here.
Sightings: a table containing all Sightings relationships corresponding to events in which an Indicator (IP, domain name, url, etc.) is sighted in an Area.
Cities provide granular information about urban centers worldwide. From major metropolises to smaller towns, cities are crucial in understanding localized threat activities. With this object type, you can pinpoint threats at the urban level, aiding in tactical threat assessments and response planning.
When clicking on the Cities tab at the top left, you see the list of all the Cities you have access to, in accordance with your allowed marking definitions.
"},{"location":"usage/exploring-locations/#visualizing-knowledge-associated-with-a-city","title":"Visualizing Knowledge associated with a City","text":"
When clicking on a City in the list, you land on its Overview tab. For a City, the following tabs are accessible:
Overview: as described here, with the particularity of not having a Details section but a map locating the City.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the City. Different thematic views are proposed to easily see the related entities, the threats, the incidents, etc. linked to the City. As described here.
Analyses: as described here.
Sightings: a table containing all Sightings relationships corresponding to events in which an Indicator (IP, domain name, url, etc.) is sighted in a City.
Positions represent highly precise geographical points, such as monuments, buildings, or specific event locations. This object type allows you to define exact coordinates, enabling accurate mapping of events or incidents. Positions enhance the granularity of your threat intelligence data, facilitating precise geospatial analysis.
When clicking on the Positions tab at the top left, you see the list of all the Positions you have access to, in accordance with your allowed marking definitions.
"},{"location":"usage/exploring-locations/#visualizing-knowledge-associated-with-a-position","title":"Visualizing Knowledge associated with a Position","text":"
When clicking on a Position in the list, you land on its Overview tab. For a Position, the following tabs are accessible:
Overview: as described here, with the particularity to display a map locating the Position.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Position. Different thematic views are proposed to easily see the related entities, the threats, the incidents, etc. linked to the Position. As described here.
Analyses: as described here.
Sightings: a table containing all Sightings relationships corresponding to events in which an Indicator (IP, domain name, url, etc.) is sighted at a Position.
When you click on "Observations" in the left-side bar, you access all the "Observations" tabs, visible on the top bar on the left. By default, users directly access the "Observables" tab, but can navigate to the other tabs as well.
From the Observations section, users can access the following tabs:
Observables: An Observable represents an immutable object. Observables can encompass a wide range of entities such as IPv4 addresses, domain names, email addresses, and more.
Artefacts: In OpenCTI, an Artefact is a particular kind of Observable. It may contain a file, such as a malware sample.
Indicators: An Indicator is a detection object. It is defined by a search pattern, which could be expressed in various formats such as STIX, Sigma, YARA, among others.
Infrastructures: An Infrastructure describes any systems, software services and any associated physical or virtual resources intended to support some purpose (e.g. C2 servers used as part of an attack, devices or servers that are part of defense, database servers targeted by an attack, etc.).
An Observable is a distinct entity from the Indicator within OpenCTI and represents an immutable object. Observables can encompass a wide range of entities such as IPv4 addresses, domain names, email addresses, and more. Importantly, Observables don't inherently imply malicious intent; they can include items like legitimate IP addresses or domains associated with an organization. Additionally, they serve as raw data points without the additional detection context found in Indicators.
When clicking on the Observables tab at the top left, you see the list of all the Observables you have access to, in accordance with your allowed marking definitions.
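As a sketch with client-python, an Observable is created from raw observable data (the value is a placeholder):

```python
from pycti import OpenCTIApiClient

client = OpenCTIApiClient("https://opencti.example.com", "<api-token>")

observable = client.stix_cyber_observable.create(
    observableData={
        "type": "ipv4-addr",
        "value": "198.51.100.1",
    }
)
```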
"},{"location":"usage/exploring-observations/#visualizing-knowledge-associated-with-an-observable","title":"Visualizing Knowledge associated with an Observable","text":"
When clicking on an Observable in the list, you land on its Overview tab. For an Observable, the following tabs are accessible:
Overview: as described here, with the particularity to display Indicators composed with the Observable.
Knowledge: a tab listing all its relationships and nested objects.
Analyses: as described here.
Sightings: a table containing all Sightings relationships corresponding to events in which the Observable (IP, domain name, url, etc.) has been sighted.
An Artefact is a particular Observable. It may contain a file, such as a malware sample. Files can be uploaded or downloaded in encrypted archives, providing an additional layer of security against potential manipulation of malicious payloads.
When clicking on the Artefacts tab at the top left, you see the list of all the Artefacts you have access to, in accordance with your allowed marking definitions.
"},{"location":"usage/exploring-observations/#visualizing-knowledge-associated-with-an-artefact","title":"Visualizing Knowledge associated with an Artefact","text":"
When clicking on an Artefact in the list, you land on its Overview tab. For an Artefact, the following tabs are accessible:
Overview: as described here, with the particularity to be able to download the attached file.
Knowledge: a tab listing all its relationships and nested objects.
Analyses: as described here.
Sightings: a table containing all Sightings relationships corresponding to events in which the Artefact has been sighted.
An Indicator is a detection object. It is defined by a search pattern, which could be expressed in various formats such as STIX, Sigma, YARA, among others. This pattern serves as a key to identify potential threats within the data. Furthermore, an Indicator includes additional information that enriches its detection context. This information encompasses:
Validity dates: Indicators are accompanied by a time frame, specifying the duration of their relevance, and modeled by the Valid from and Valid until dates.
Actionable fields: Linked to the validity dates, the Revoked and Detection fields can be used to sort Indicators for detection purposes.
Kill chain phase: They indicate the phase within the cyber kill chain where they are applicable, offering insights into the progression of a potential threat.
Types: Indicators are categorized based on their nature, aiding in classification and analysis.
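To make these attributes concrete, here is a minimal sketch using the OpenCTI Python client (pycti) to create an Indicator with a STIX pattern, validity dates and the detection flag. The URL, token and values are placeholders, and exact parameter names may vary between pycti versions:

```python
from pycti import OpenCTIApiClient

# Placeholder URL and token for illustration
client = OpenCTIApiClient("https://opencti.example.com", "<api-token>")

# Create an Indicator defined by a STIX pattern, with validity dates
# and a detection flag (all values are examples only)
indicator = client.indicator.create(
    name="Suspicious C2 IPv4 address",
    pattern_type="stix",
    pattern="[ipv4-addr:value = '198.51.100.12']",
    x_opencti_main_observable_type="IPv4-Addr",
    valid_from="2024-01-01T00:00:00Z",
    valid_until="2024-03-01T00:00:00Z",
    x_opencti_detection=True,
)
print(indicator["id"])
```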
When clicking on the Indicators tab at the top left, you see the list of all the Indicators you have access to, in accordance with your allowed marking definitions.
"},{"location":"usage/exploring-observations/#visualizing-knowledge-associated-with-an-indicator","title":"Visualizing Knowledge associated with an Indicator","text":"
When clicking on an Indicator in the list, you land on its Overview tab. For an Indicator, the following tabs are accessible:
Overview: as described here, with the particularity to display the Observables on which it is based.
Knowledge: a tab listing all its relationships.
Analyses: as described here.
Sightings: a table containing all Sightings relationships corresponding to events in which the Indicator has been sighted.
An Infrastructure refers to a set of resources, tools, systems, or services employed by a threat to conduct their activities. It represents the underlying framework or support system that facilitates malicious operations, such as the command and control (C2) servers in an attack. Notably, like Observables, an Infrastructure doesn't inherently imply malicious intent. It can also represent legitimate resources affiliated with an organization (e.g. devices or servers that are part of defense, database servers targeted by an attack, etc.).
When clicking on the Infrastructures tab at the top left, you see the list of all the Infrastructures you have access to, in accordance with your allowed marking definitions.
"},{"location":"usage/exploring-observations/#visualizing-knowledge-associated-with-an-infrastructure","title":"Visualizing Knowledge associated with an Infrastructure","text":"
When clicking on an Infrastructure in the list, you land on its Overview tab. For an Infrastructure, the following tabs are accessible:
Overview: as described here, with the particularity to display distribution graphs of its related Observable (STIX SCO).
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Infrastructure. Different thematic views are proposed to easily see the threats, the arsenal, the observations, etc. linked to the Infrastructure. As described here.
When you click on \"Techniques\" in the left-side bar, you access all the \"Techniques\" tabs, visible on the top bar on the left. By default, the user directly access the \"Attack pattern\" tab, but can navigate to the other tabs as well.
From the Techniques section, users can access the following tabs:
Attack pattern: attack patterns used by threat actors to perform their attacks. By default, OpenCTI is provisioned with attack patterns from the MITRE ATT&CK matrices (for CTI) and the DISARM matrix (for FIMI).
Narratives: In OpenCTI, narratives used by threat actors can be represented and linked to other Objects. Narratives are mainly used in the context of disinformation campaigns where it is important to trace which narratives have been and are still used by threat actors.
Courses of action: A Course of Action is an action taken either to prevent an attack or to respond to an attack that is in progress. It may describe technical, automatable responses (applying patches, reconfiguring firewalls) but can also describe higher level actions like employee training or policy changes. For example, a course of action to mitigate a vulnerability could describe applying the patch that fixes it.
Data sources: Data sources represent the various subjects/topics of information that can be collected by sensors/logs. Data sources also include data components, described below.
Data components: Data components identify specific properties/values of a data source relevant to detecting a given ATT&CK technique or sub-technique.
Attack patterns are used by threat actors to perform their attacks. By default, OpenCTI is provisioned with attack patterns from the MITRE ATT&CK matrices and CAPEC (for CTI) and the DISARM matrix (for FIMI).
In the MITRE STIX 2.1 documentation, an Attack Pattern is defined as follows:
Attack Patterns are a type of TTP that describe ways that adversaries attempt to compromise targets. Attack Patterns are used to help categorize attacks, generalize specific attacks to the patterns that they follow, and provide detailed information about how attacks are performed. An example of an attack pattern is "spear phishing": a common type of attack where an attacker sends a carefully crafted e-mail message to a party with the intent of getting them to click a link or open an attachment to deliver malware. Attack Patterns can also be more specific; spear phishing as practiced by a particular threat actor (e.g., they might generally say that the target won a contest) can also be an Attack Pattern.
When clicking on the Attack pattern tab at the top left, you access the list of all the Attack patterns you have access to, in accordance with your allowed marking definitions. You can then search and filter on some common and specific attributes of attack patterns.
"},{"location":"usage/exploring-techniques/#visualizing-knowledge-associated-with-an-attack-pattern","title":"Visualizing Knowledge associated with an Attack pattern","text":"
When clicking on an Attack pattern, you land on its Overview tab. For an Attack pattern, the following tabs are accessible:
Overview: The Overview of an Attack pattern is a bit different from the usual one described here. The "Details" box is more structured and contains information about:
parent or sub-techniques (as in the MITRE ATT&CK matrices),
related kill chain phases,
platforms on which the Attack pattern is usable,
permissions required to apply it,
related detection techniques,
Courses of action to mitigate the Attack pattern,
Data components in which to find data to detect the usage of the Attack pattern.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Attack pattern. Different thematic views are proposed to easily see Threat Actors and Intrusion Sets using this technique, linked incidents, etc.
In OpenCTI, narratives used by threat actors can be represented and linked to other Objects. Narratives are mainly used in the context of disinformation campaigns where it is important to trace which narratives have been and are still used by threat actors.
An example of Narrative can be "The country A is weak and corrupted" or "The ongoing operation aims to free people".
A Narrative can be a means in the context of a broader attack, or the goal of the operation: a vision to impose.
When clicking on the Narratives tab at the top left, you access the list of all the Narratives you have access to, in accordance with your allowed marking definitions. You can then search and filter on some common and specific attributes of narratives.
"},{"location":"usage/exploring-techniques/#visualizing-knowledge-associated-with-a-narrative","title":"Visualizing Knowledge associated with a Narrative","text":"
When clicking on a Narrative, you land on its Overview tab. For a Narrative, the following tabs are accessible:
Overview: as described here.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Narrative. Different thematic views are proposed to easily see the Threat actors and Intrusion sets using the Narrative, etc.
Analyses: as described here.
Data: as described here.
History: as described here.
"},{"location":"usage/exploring-techniques/#courses-of-action","title":"Courses of action","text":""},{"location":"usage/exploring-techniques/#general-presentation_2","title":"General presentation","text":"
In the MITRE STIX 2.1 documentation, a Course of Action is defined as follows:
A Course of Action is an action taken either to prevent an attack or to respond to an attack that is in progress. It may describe technical, automatable responses (applying patches, reconfiguring firewalls) but can also describe higher level actions like employee training or policy changes. For example, a course of action to mitigate a vulnerability could describe applying the patch that fixes it.
When clicking on the Courses of action tab at the top left, you access the list of all the Courses of action you have access to, in accordance with your allowed marking definitions. You can then search and filter on some common and specific attributes of Courses of action.
"},{"location":"usage/exploring-techniques/#visualizing-knowledge-associated-with-a-course-of-action","title":"Visualizing Knowledge associated with a Course of action","text":"
When clicking on a Course of Action, you land on its Overview tab. For a Course of action, the following tabs are accessible:
Overview: The Overview of a Course of action is a bit different from the usual one described here. In the "Details" box, the mitigated Attack patterns are listed.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Course of action.
Analyses: as described here.
Data: as described here.
History: as described here.
"},{"location":"usage/exploring-techniques/#data-sources-data-components","title":"Data sources & Data components","text":""},{"location":"usage/exploring-techniques/#general-presentation_3","title":"General presentation","text":"
In the MITRE ATT&CK documentation, Data sources are defined as such :
Data sources represent the various subjects/topics of information that can be collected by sensors/logs. Data sources also include data components, which identify specific properties/values of a data source relevant to detecting a given ATT&CK technique or sub-technique.
"},{"location":"usage/exploring-techniques/#visualizing-knowledge-associated-with-a-data-source-or-a-data-components","title":"Visualizing Knowledge associated with a Data source or a Data components","text":"
When clicking on a Data source or a Data component, you land on its Overview tab. For a Course of action, the following tabs are accessible:
When you click on \"Threats\" in the left-side bar, you access all the \"Threats\" tabs, visible on the top bar on the left. By default, the user directly access the \"Threat Actor (Group)\" tab, but can navigate to the other tabs as well.
From the Threats section, users can access the following tabs:
Threat actors (Group): Threat actor (Group) represents a physical group of attackers operating an Intrusion set, using malware and attack infrastructure, etc.
Threat actors (Individual): Threat actor (Individual) represents a real attacker that can be described by physical and personal attributes and motivations. A Threat actor (Individual) operates an Intrusion set, uses malware and infrastructure, etc.
Intrusion sets: Intrusion set is an important concept in the Cyber Threat Intelligence field. It is a consistent set of technical and non-technical elements corresponding to what, how and why a Threat actor acts. It is particularly useful for associating multiple attacks and malicious actions to a defined Threat, even without sufficient information regarding who did them. Often, as your understanding of the threat grows, you will link an Intrusion set to a Threat actor (either a Group or an Individual).
Campaigns: Campaign represents a series of attacks taking place in a certain period of time and/or targeting a consistent subset of Organization/Individual.
"},{"location":"usage/exploring-threats/#threat-actors-group-and-individual","title":"Threat actors (Group and Individual)","text":""},{"location":"usage/exploring-threats/#general-presentation","title":"General presentation","text":"
Threat actors are the humans who are building, deploying and operating intrusion sets. A threat actor can be a single individual or a group of attackers (which may be composed of individuals). A group of attackers may be a nation-state, a state-sponsored group, a corporation, a group of hacktivists, etc.
Beware, groups of attackers might be modelled as "Intrusion sets" in feeds, as there is sometimes a misunderstanding in the industry between group of people and the technical/operational intrusion set they operate.
When clicking on the Threat actor (Group or Individual) tabs at the top left, you see the list of all the groups of Threat actors or Individual Threat actors you have access to, in accordance with your allowed marking definitions. These groups or individuals are displayed as Cards where you can find a summary of the important Knowledge associated with each of them: description, aliases, malware they used, countries and industries they target, labels. You can then search and filter on some common and specific attributes of Threat actors.
At the top right of each Card, you can click the star icon to mark it as a favorite. It will pin the Card on top of the list. You will also be able to easily display all your favorites in your Custom Dashboards.
"},{"location":"usage/exploring-threats/#demographic-and-biographic-information","title":"Demographic and Biographic Information","text":"
Individual Threat actors have unique properties to represent demographic and biographic information. Currently tracked demographics include their countries of residence, citizenships, date of birth, gender, and more.
Biographic information includes their eye and hair color, as well as known heights and weights.
An Individual Threat actor can also be tracked as employed by an Organization or a Threat Actor group. This relationship can be set under the knowledge tab.
"},{"location":"usage/exploring-threats/#visualizing-knowledge-associated-with-a-threat-actor","title":"Visualizing Knowledge associated with a Threat actor","text":"
When clicking on a Threat actor Card, you land on its Overview tab. For a Threat actor, the following tabs are accessible:
Overview: as described here.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Threat actor. Different thematic views are proposed to easily see the victimology, arsenal and techniques used by the Threat actor, etc. As described here.
An intrusion set is a consistent group of technical elements such as "tactics, techniques and procedures" (TTPs), tools, malware and infrastructure used by a threat actor against one or a number of victims, who usually share some characteristics (field of activity, country or region), to reach a similar goal whoever the victim is. The intrusion set may be deployed once or several times and may evolve with time. Several intrusion sets may be linked to one threat actor. All the entities described below may be linked to one intrusion set. There are many debates in the Threat Intelligence community on how to define an intrusion set and how to distinguish several intrusion sets with regards to:
their differences
their evolutions
the possible reuse
\"false flag\" type of attacks
As OpenCTI is very customizable, each organization or individual may use these categories as they wish. Alternatively, it is also possible to follow the categorization provided by the import feeds.
When clicking on the Intrusion set tab on the top left, you see the list of all the Intrusion sets you have access to, in accordance with your allowed marking definitions. These intrusion sets are displayed as Cards where you can find a summary of the important Knowledge associated with each of them: description, aliases, malware they used, countries and industries they target, labels. You can then search and filter on some common and specific attributes of Intrusion sets.
At the top right of each Card, you can click the star icon to mark it as a favorite. It will pin the Card on top of the list. You will also be able to easily display all your favorites in your Custom Dashboards.
"},{"location":"usage/exploring-threats/#visualizing-knowledge-associated-with-an-intrusion-set","title":"Visualizing Knowledge associated with an Intrusion set","text":"
When clicking on an Intrusion set Card, you land on its Overview tab. The following tabs are accessible:
Overview: as described here.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Intrusion Set. Different thematic views are proposed to easily see the victimology, arsenal and techniques used by the Intrusion Set, etc. As described here.
A campaign can be defined as "a series of malicious activities or attacks (sometimes called a 'wave of attacks') taking place within a limited period of time, against a defined group of victims, associated to a similar intrusion set and characterized by the use of one or several identical malware towards the various victims and common TTPs". However, a campaign is an investigation element and may not be widely recognized. Thus, a provider might define a series of attacks as a campaign and another as an intrusion set. Campaigns can be attributed to an Intrusion set.
When clicking on the Campaign tab on the top left, you see the list of all the Campaigns you have access to, in accordance with your allowed marking definitions. These campaigns are displayed as Cards where you can find a summary of the important Knowledge associated with each of them: description, aliases, malware used, countries and industries they target, labels. You can then search and filter on some common and specific attributes of Campaigns.
At the top right of each Card, you can click the star icon to mark it as a favorite. It will pin the Card on top of the list. You will also be able to easily display all your favorites in your Custom Dashboards.
"},{"location":"usage/exploring-threats/#visualizing-knowledge-associated-with-a-campaign","title":"Visualizing Knowledge associated with a Campaign","text":"
When clicking on a Campaign Card, you land on its Overview tab. The following tabs are accessible:
Overview: as described here.
Knowledge: a complex tab that regroups all the structured Knowledge linked to the Campaign. Different thematic views are proposed to easily see the victimology, arsenal and techniques used in the context of the Campaign. As described here.
With the OpenCTI platform, you can manually export your intelligence content in the following formats:
JSON,
CSV,
PDF,
TXT.
"},{"location":"usage/export/#export-in-structured-or-document-format","title":"Export in structured or document format","text":""},{"location":"usage/export/#generate-an-export","title":"Generate an export","text":"
To export one or more entities you have two possibilities. First you can click on the button "Open export panel". The list of pre-existing exports will open, and in the bottom right-hand corner you can configure and generate a new export.
This opens the export settings panel, where you can customize your export according to four fields:
desired export format (text/csv, application/pdf, application/vnd.oasis.stix+json, text/plain)
export type (simple or full),
the max marking definition levels of the elements to be included in the export (a TLP level, for instance). The list of the available max markings is limited by the user's allowed markings and their maximum shareable markings (more details about maximum shareable marking definitions in data segregation). For a marking definition type to be taken into account here, a marking definition from this type must be provided. For example, if you select TLP:GREEN for this field, AMBER and RED elements will be excluded, but it will not take into account any PAP markings unless one is selected too (as illustrated in the sketch after this list),
the file marking definition levels of the export (a TLP level, for instance). This marking on the file itself will then restrain access to it in accordance with users' marking definition levels. For example, if a file has the markings TLP:RED and INTERNAL, a user will need to have these markings to see and access the file in the platform.
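To make the max marking behavior concrete, here is a small illustrative sketch of the filtering logic described above, assuming markings are simply ranked within their own definition type. The names and ranks below are illustrative, not OpenCTI's internal model:

```python
# Illustrative marking ranks per definition type (not OpenCTI's internal model)
RANKS = {
    "TLP": ["TLP:CLEAR", "TLP:GREEN", "TLP:AMBER", "TLP:RED"],
    "PAP": ["PAP:CLEAR", "PAP:GREEN", "PAP:AMBER", "PAP:RED"],
}

def allowed(element_markings, max_markings):
    """An element is exported only if, for each type present in the selected
    max markings, none of its markings exceed that maximum. Types absent
    from the max markings are not restricted."""
    maxima = {}
    for m in max_markings:
        maxima[m.split(":")[0]] = RANKS[m.split(":")[0]].index(m)
    for m in element_markings:
        mtype = m.split(":")[0]
        if mtype in maxima and RANKS[mtype].index(m) > maxima[mtype]:
            return False
    return True

# With max marking TLP:GREEN, a TLP:RED element is excluded,
# but a PAP:RED element is not (no PAP maximum was selected).
print(allowed(["TLP:RED"], ["TLP:GREEN"]))  # False
print(allowed(["PAP:RED"], ["TLP:GREEN"]))  # True
```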
The second way is to click directly on the "Generate an Export" button to export the content of an entity in the desired format. The same settings panel will open.
Both ways add your export in the Exported files list in the Data tab.
All entities in your instance can be exported either directly via Generate Export or indirectly via Export List in .json and .csv formats.
"},{"location":"usage/export/#export-a-list-of-entities","title":"Export a list of entities","text":"
You have the option to export either a single element, such as a report, or a collection of elements, such as multiple reports. These exports may contain not only the entity itself but also related elements, depending on the type of export you select: "simple" or "full". See the Export types (simple and full) section.
You can also choose to export a list of entities within a container. To do so, go to the container's entities tab. For example, for a report, if you only want to retrieve entities of type Attack pattern and Indicator to design a detection strategy, go to the entities tab and select the specific elements for export.
"},{"location":"usage/export/#export-types-simple-and-full","title":"Export types (simple and full)","text":"
When you wish to export only the content of a specific entity such as a report, you can choose a "simple" export type.
If you also wish to export associated content, you can choose a "full" export. With this type of export, the entity will be exported along with all entities directly associated with the central one (first neighbors).
"},{"location":"usage/export/#exports-list-panel","title":"Exports list panel","text":"
Once an export has been created, you can find it in the export list panel. Simply click on a particular export to download it.
You can also generate a new export directly from the Exports list, as explained in the Generate an export section.
Feeds are configured in the "Data > Data sharing" window. Configuration for all feed types is uniform and relies on the following parameters:
Filter setup: The feed can have specific filters to publish only a subset of the platform overall knowledge. Any data that meets the criteria established by the user's feed filters will be shared (e.g. specific types of entities, labels, marking definitions, etc.).
Access control: A feed can be either public, i.e. accessible without authentication, or restricted. By default, it's accessible to any user with the "Access data sharing" capability, but it's possible to increase restrictions by limiting access to a specific user, group, or organization.
By carefully configuring filters and access controls, you can tailor the behavior of Live streams, TAXII collections, and CSV feeds to align with your specific data-sharing needs.
Live streams, an exclusive OpenCTI feature, increase the capacity for real-time data sharing by serving STIX 2.1 bundles as TAXII collections with advanced capabilities. What distinguishes them is their dynamic nature, which includes the creation, updating, and deletion of data. Unlike TAXII, Live streams comprehensively resolve relationships and dependencies, ensuring a more nuanced and interconnected exchange of information. This is particularly beneficial in scenarios where sharing involves entities with complex relationships, providing a richer context for the shared data.
In scenarios involving data sharing between two OpenCTI platforms, Live streams emerge as the preferred mechanism. These streams operate like TAXII collections but are notably enhanced, supporting:
create, update and delete events depending on the parameters,
caching already created entities in the last 5 minutes,
resolving relationships and dependencies even out of the filters,
optional public access (without authentication).
Resolve relationships and dependencies
Dependencies and relationships of entities shared via Live streams, as determined by specified filters, are automatically shared even beyond the confines of these filters. This means that interconnected data, which may not directly meet the filter criteria, is still included in the Live stream. However, OpenCTI data segregation mechanisms are still applied. They allow restricting access to shared data based on factors such as markings or organization. It's imperative to carefully configure and manage these access controls to ensure that no confidential data is shared.
To better understand how live streams are working, let's take a few examples, from simple to complex.
Given a live stream with the filters Entity type: Indicator AND Label: detection, let's see what happens with an indicator with:
Marking definition: TLP:GREEN
Author Crowdstrike
Relation indicates to the malware Emotet
| Action | Result in stream (with Avoid dependencies resolution=true) | Result in stream (with Avoid dependencies resolution=false) |
|---|---|---|
| 1. Create an indicator | Nothing | Nothing |
| 2. Add the label detection | Create TLP:GREEN, create CrowdStrike, create the indicator | Create TLP:GREEN, create CrowdStrike, create the malware Emotet, create the indicator, create the relationship indicates |
| 3. Remove the label detection | Delete the indicator | Delete the indicator and the relationship |
| 4. Add the label detection | Create the indicator | Create the indicator, create the relationship indicates |
| 5. Delete the indicator | Delete the indicator | Delete the indicator and the relationship |
Details on how to consume these Live streams can be found on the dedicated page.
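For a rough idea of what a consumer looks like, the sketch below reads a live stream as Server-Sent Events over plain HTTP. The stream ID, URL and token are placeholders, and the dedicated page above remains the reference:

```python
import json
import requests

# Placeholder URL, stream ID and token for illustration
URL = "https://opencti.example.com/stream/<stream-id>"
HEADERS = {"Authorization": "Bearer <api-token>"}

# A live stream is a Server-Sent Events endpoint: each event carries an
# "event:" type (create / update / delete) and a "data:" JSON payload.
with requests.get(URL, headers=HEADERS, stream=True) as resp:
    resp.raise_for_status()
    event_type = None
    for line in resp.iter_lines(decode_unicode=True):
        if line.startswith("event:"):
            event_type = line.split(":", 1)[1].strip()
        elif line.startswith("data:") and event_type in ("create", "update", "delete"):
            payload = json.loads(line.split(":", 1)[1])
            print(event_type, payload.get("data", {}).get("id"))
```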
OpenCTI has an embedded TAXII API endpoint which provides valid STIX 2.1 bundles. If you wish to know more about the TAXII standard, please read the official introduction.
In OpenCTI you can create as many TAXII 2.1 collections as needed.
After creating a new collection, every system with a proper access token can consume the collection using different kinds of authentication (basic, bearer, etc.).
As when using the GraphQL API, TAXII 2.1 collections have a classic pagination system that should be handled by the consumer. Also, it's important to understand that element dependencies (nested IDs) inside the collection are not always contained/resolved in the bundle, so consistency needs to be handled at the client level.
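To illustrate the pagination mentioned above, here is a hedged sketch of a TAXII 2.1 consumer that follows the standard more/next envelope fields; the URL, collection ID and token are placeholders:

```python
import requests

# Placeholder URL, collection ID and token for illustration
URL = "https://opencti.example.com/taxii2/root/collections/<collection-id>/objects/"
HEADERS = {
    "Authorization": "Bearer <api-token>",
    "Accept": "application/taxii+json;version=2.1",
}

# TAXII 2.1 envelopes are paginated: keep requesting while "more" is true,
# passing back the "next" cursor returned by the server.
params = {"limit": 100}
while True:
    envelope = requests.get(URL, headers=HEADERS, params=params).json()
    for obj in envelope.get("objects", []):
        print(obj["type"], obj["id"])
    if not envelope.get("more"):
        break
    params["next"] = envelope["next"]
```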
The CSV feed facilitates the automatic generation of a CSV file, accessible via a URL. The CSV file is regenerated and updated at user-defined intervals, providing flexibility. The entries in the file correspond to the information that matches the filters applied and that were created or modified in the platform during the time interval (between the last generation of the CSV and the new one).
CSV size limit
The CSV file generated has a limit of 5 000 entries. If more than 5 000 entities are retrieved by the platform, only the most recent 5 000 will be shared in the file.
This guide aims to give you a full overview of the OpenCTI features and workflows. The platform can be used in various contexts to handle threats management use cases from a technical to a more strategic level. OpenCTI has been designed as a knowledge graph, taking inputs (threat intelligence feeds, sightings & alerts, vulnerabilities, assets, artifacts, etc.) and generating outputs based on built-in capabilities and / or connectors.
Here are some examples of use cases:
Cyber Threat Intelligence knowledge base
Detection as code feeds for XDR, EDR, SIEMs, firewalls, proxies, etc.
Incident response artifacts & cases management
Vulnerabilities management
Reporting, alerting and dashboarding on a subset of data
The welcome page gives any visitor on the OpenCTI platform an overview of what's happening on the platform. It can be replaced by a custom dashboard, created by a user (or the default dashboard set up in a role, a group or an organization).
"},{"location":"usage/getting-started/#indicators-in-the-dashboard","title":"Indicators in the dashboard","text":""},{"location":"usage/getting-started/#numbers","title":"Numbers","text":"Component Description Intrusion sets Number of intrusion sets . Malware Number of malware. Reports Number of reports. Indicators Number of indicators."},{"location":"usage/getting-started/#charts-lists","title":"Charts & lists","text":"Component Description Most active threats (3 last months) Top active threats (threat actor, intrusion set and campaign) during the last 3 months. Most targeted victims (3 last months) Intensity of the targeting tied to the number of relations targets for a given entities (organization, sector, location, etc.) during the last 3 months. Relationships created Volume of relationships created over the past 12 months. Most active malware (3 last months) Top active malware during the last 3 months. Most active vulnerabilities (3 last months) List of the vulnerabilities with the greatest number of relations over the last 3 months. Targeted countries (3 last months) Intensity of the targeting tied to the number of relations targets for a given country over the past 3 months. Latest reports Last reports ingested in the platform. Most active labels (3 last months) Top labels given to entities during the last 3 months.
Explore the platform
To start exploring the platform and understand how information is structured, we recommend starting with the overview documentation page.
Automated imports in OpenCTI streamline the process of data ingestion, allowing users to effortlessly bring in valuable intelligence from diverse sources. This page focuses on the automated methods of importing data, which serve as bridges between OpenCTI and diverse external systems, formatting it into a STIX bundle, and importing it into the OpenCTI platform.
Connectors in OpenCTI serve as dynamic gateways, facilitating the import of data from a wide array of sources and systems. Every connector is designed to handle specific data types and structures of the source, allowing OpenCTI to efficiently ingest the data.
The behavior of each connector is defined by its development, determining the types of data it imports and its configuration options. This flexibility allows users to customize the import process to their specific needs, ensuring a seamless and personalized data integration experience.
The level of configuration granularity regarding the imported data type varies with each connector. Nevertheless, connectors empower users to specify the date from which they wish to fetch data. This capability is particularly useful during the initial activation of a connector, enabling the retrieval of historical data. Following this, the connector operates in real-time, continuously importing new data from the source.
Resetting the connector state enables you to restart the ingestion process from the very beginning. Additionally, resetting the connector state will purge the RabbitMQ queue for this specific connector.
However, this action requires the "Manage connector state" capability (more details about capabilities: List of capabilities). Without this specific capability, you will not be able to reset the connector state.
When the action is performed, a message is displayed confirming the reset and informing you of the number of messages that will be purged.
Purging a message queue is necessary to remove any accumulated messages that may be outdated or redundant. It helps to avoid reprocessing messages that have already been ingested.
By purging the queue, you ensure that the connector starts with a clean slate, processing only the new data.
OpenCTI's connector ecosystem covers a broad spectrum of sources, enhancing the platform's capability to integrate data from various contexts, from threat intelligence providers to specialized databases. The list of available connectors can be found in our connectors catalog. Connectors are categorized into three types: import connectors (the focus here), enrichment connectors, and stream consumers. Further documentation on connectors is available on the dedicated documentation page.
In summary, automated imports through connectors empower OpenCTI users with a scalable, efficient, and customizable mechanism for data ingestion, ensuring that the platform remains enriched with the latest and most relevant intelligence.
In OpenCTI, the \"Data > Ingestion\" section provides users with built-in functions for automated data import. These functions are designed for specific purposes and can be configured to seamlessly ingest data into the platform. Here, we'll explore the configuration process for the four built-in functions: Live Streams, TAXII Feeds, RSS Feeds, and CSV Feeds.
Live Streams enable users to consume data from another OpenCTI platform, fostering collaborative intelligence sharing. Here's a step-by-step guide to configure Live streams synchroniser:
Remote OpenCTI URL: Provide the URL of the remote OpenCTI platform (e.g., https://[domain]; don't include the path).
Remote OpenCTI token: Provide the user token. An administrator from the remote platform must supply this token, and the associated user must have the \"Access data sharing\" privilege.
After filling in the URL and user token, validate the configuration.
Once validated, select a live stream to which you have access.
Additional configuration options:
User responsible for data creation: Define the user responsible for creating data received from this stream. Best practice is to dedicate one user per source for organizational clarity. Please see the section "Best practices" below for more information.
Starting synchronization: Specify the date of the oldest data to retrieve. Leave the field empty to import everything.
Take deletions into account: Enable this option to delete data from your platform if it was deleted on the providing stream. (Note: Data won't be deleted if another source has imported it previously.)
Verify SSL certificate: Check the validity of the certificate of the domain hosting the remote platform.
Avoid dependencies resolution: Import only entities without their relationships. For instance, if the stream shares malware, all the malware's relationships will be retrieved by default. This option enables you to choose not to recover them.
Use perfect synchronization: This option is specifically for synchronizing two platforms. If an imported entity already exists on the platform, the one from the stream will overwrite it.
TAXII Feeds in OpenCTI provide a robust mechanism for ingesting TAXII collections from TAXII servers or other OpenCTI instances. Configuring a TAXII ingester involves specifying essential details to seamlessly integrate threat intelligence data. Here's a step-by-step guide to configure TAXII ingesters:
TAXII server URL: Provide the root API URL of the TAXII server. For collections from another OpenCTI instance, the URL is in the form https://[domain]/taxii2/root.
TAXII collection: Enter the ID of the TAXII collection to be ingested. For collections from another OpenCTI instance, the ID follows the format 426e3acb-db50-4118-be7e-648fab67c16c.
Authentication type (if necessary): Enter the authentication type. For non-public collections from another OpenCTI instance, the authentication type is "Bearer token". Enter the token of a user with access to the collection (similar to point 2 of the Live streams configuration above).
TAXII root API URL
Many ISAC TAXII configuration instructions will provide the URL for the collection or discovery service. In these cases, remove the last path segment from the TAXII server URL in order to use it in OpenCTI, e.g. use https://[domain]/tipapi/tip21, and not https://[domain]/tipapi/tip21/collections.
Additional configuration options:
User responsible for data creation: Define the user responsible for creating data received from this TAXII feed. Best practice is to dedicate one user per source for organizational clarity. Please see the section "Best practices" below for more information.
Import from date: Specify the date of the oldest data to retrieve. Leave the field empty to import everything.
RSS Feeds functionality enables users to seamlessly ingest items in report form from specified RSS feeds. Configuring RSS Feeds involves providing essential details and selecting preferences to tailor the import process. Here's a step-by-step guide to configure RSS ingesters:
RSS Feed URL: Provide the URL of the RSS feed from which items will be imported.
Additional configuration options:
User responsible for data creation: Define the user responsible for creating data received from this RSS feed. Best practice is to dedicate one user per source for organizational clarity. Please see the section "Best practices" below for more information.
Import from date: Specify the date of the oldest data to retrieve. Leave the field empty to import everything.
Default report types: Indicate the report type to be applied to the imported report.
Default author: Indicate the default author to be applied to the imported report. Please see the section \"Best practices\" below for more information.
Default marking definitions: Indicate the default markings to be applied to the imported reports.
The CSV feed ingester enables users to import CSV files exposed on URLs. Here's a step-by-step guide to configure CSV ingesters:
CSV URL: Provide the URL of the exposed CSV file from which items will be imported.
CSV Mappers: Choose the CSV mapper to be used to import the data.
Authentication type (if necessary): Enter the authentication type.
CSV mapper
CSV feed functionality is based on CSV mappers. It is necessary to create the appropriate CSV mapper to import the data contained in the file. See the page dedicated to the CSV mapper.
Additional configuration options:
User responsible for data creation: Define the user responsible for creating data received from this CSV feed. Best practice is to dedicate one user per source for organizational clarity. Please see the section "Best practices" below for more information.
Import from date: Specify the date of the oldest data to retrieve. Leave the field empty to import everything.
In CSV mappers, if you created a representative for Marking definition, you could have chosen between 2 options:
Let the user choose marking definitions
Use default marking definitions of the user
This configuration applies when using a CSV mapper for a CSV ingester. If you select a CSV mapper containing the option "Use default marking definitions of the user", the default marking definitions of the user you chose to be responsible for the data creation will be applied to all imported data. If you select a CSV mapper containing the option "Let the user choose marking definitions", you will be presented with the list of all the marking definitions of the user you chose to be responsible for the data creation (and not yours!).
To finalize the creation, click on "Verify" to run a check on the submitted URL with the selected CSV mapper. A valid URL-CSV mapper combination results in the identification of up to 50 entities.
To start your new ingester, click on "Start" in the burger menu.
CSV feed ingestion is made possible thanks to the connector "ImportCSV". So you can track the progress in "Data > Ingestion > Connectors". On a regular basis, the ingestion is updated when new data is added to the CSV feed.
"},{"location":"usage/import-automated/#best-practices-for-feed-import","title":"Best practices for feed import","text":"
Ensuring a secure and well-organized environment is paramount in OpenCTI. Here are two recommended best practices to enhance security, traceability, and overall organizational clarity:
Create a dedicated user for each source: Generate a user specifically for feed import, following the convention [F] Source name for clear identification. Assign the user to the "Connectors" group to streamline user management and permissions related to data creation. Please see here for more information on this good practice.
Establish a dedicated Organization for the source: Create an organization named after the data source for clear identification. Assign the newly created organization to the "Default author" field in feed import configuration if available.
By adhering to these best practices, you ensure independence in managing rights for each import source through dedicated user and organization structures. In addition, you enable clear traceability to the entity's creator, facilitating source evaluation, dashboard creation, data filtering and other administrative tasks.
Users can streamline the data ingestion process using various automated import capabilities. Each method proves beneficial in specific circumstances.
Connectors act as bridges to retrieve data from diverse sources and format it for seamless ingestion into OpenCTI.
Live Streams enable collaborative intelligence sharing across OpenCTI instances, fostering real-time updates and efficient data synchronization.
TAXII Feeds provide a standardized mechanism for ingesting threat intelligence data from TAXII servers or other OpenCTI instances.
RSS Feeds facilitate the import of items in report form from specified RSS feeds, offering a straightforward way to stay updated on relevant intelligence.
By leveraging these automated import functionalities, OpenCTI users can build a comprehensive, up-to-date threat intelligence database. The platform's adaptability and user-friendly configuration options ensure that intelligence workflows remain agile, scalable, and tailored to the unique needs of each organization.
"},{"location":"usage/import-files/","title":"Import from files","text":""},{"location":"usage/import-files/#import-mechanisms","title":"Import mechanisms","text":"
The platform provides a seamless process for automatically parsing data from various file formats. This capability is facilitated by two distinct mechanisms.
File import connectors: Currently, there are two connectors designed for importing files and automatically identifying entities.
ImportFileStix: Designed to handle STIX-structured files (json or xml format).
ImportDocument: Versatile connector supporting an array of file formats, including pdf, text, html, and markdown.
CSV mappers: The CSV mapper is a tailored functionality to facilitate the import of data stored in CSV files. For more in-depth information on using CSV mappers, refer to the CSV Mappers documentation page.
Both mechanisms can be employed wherever file uploads are possible. This includes the "Data" tabs of all entities and the dedicated panel named "Data import and analyst workbenches" located in the top right-hand corner (database logo with a small gear). Importing files from these two locations is not entirely equal; refer to the "Relationship handling from entity's Data tab" section below for details on this matter.
For the ImportDocument connector, the identification process involves searching for existing entities in the platform and scanning the document for relevant information. In addition, the connector uses regular expressions (regex) to detect IP addresses and domains within the document.
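As a rough illustration of this kind of detection (not the connector's actual implementation), here is a sketch using simplified regular expressions:

```python
import re

# Simplified patterns for illustration; the connector's real rules are richer
IPV4_RE = re.compile(r"\b(?:(?:25[0-5]|2[0-4]\d|1?\d?\d)\.){3}(?:25[0-5]|2[0-4]\d|1?\d?\d)\b")
DOMAIN_RE = re.compile(r"\b(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z]{2,}\b", re.I)

text = "Beaconing to evil.example.com was observed from 203.0.113.7."
print(IPV4_RE.findall(text))    # ['203.0.113.7']
print(DOMAIN_RE.findall(text))  # ['evil.example.com']
```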
As for the ImportFileStix connector and the CSV mappers, there is no identification mechanism. The imported data will be, respectively, the data defined in the STIX bundle or according to the configuration of the CSV mapper used.
Upload file: Navigate to the desired location, such as the "Data" tabs of an entity or the "Data import and analyst workbenches" panel. Then, upload the file containing the relevant data by clicking on the small cloud with the arrow inside next to "Uploaded files".
Entity identification: For a CSV file, select the connector and CSV mapper to be used by clicking on the icon with an upward arrow in a circle. If it's not a CSV file, the connector will launch automatically. Then, the file import connectors or CSV mappers will identify entities within the uploaded document.
Workbench review and validation: Entities identified by connectors are not immediately integrated into the platform's knowledge base. Instead, they are thoughtfully placed in a workbench, awaiting review and validation by an analyst. Workbenches function as draft spaces, ensuring that no data is officially entered into the platform until the workbench has undergone the necessary validation process. For more information on workbenches, refer to the Analyst workbench documentation page.
Review workbenches
Import connectors may introduce errors in identifying object types or add "unknown" entities. Workbenches were established with the intent of reviewing the output of connectors before validation. Therefore, it is crucial to be vigilant when examining the workbench to prevent the import of incorrect data into the platform.
"},{"location":"usage/import-files/#additional-information","title":"Additional information","text":""},{"location":"usage/import-files/#no-workbench-for-csv-mapper","title":"No workbench for CSV mapper","text":"
It's essential to note that CSV mappers operate differently from other import mechanisms. Unlike connectors, CSV mappers do not generate workbenches. Instead, the data identified by CSV mappers is imported directly into the platform without an intermediary workbench stage.
"},{"location":"usage/import-files/#relationship-handling-from-entitys-data-tab","title":"Relationship handling from entity's \"Data\" tab","text":"
When importing a document directly from an entity's "Data" tab, there can be an automatic addition of relationships between the objects identified by connectors and the entity in focus. The process differs depending on the type of entity in which the import occurs:
If the entity is a container (e.g., Report, Grouping, and Cases), the identified objects in the imported file will be linked to the entity (upon workbench validation). In the context of a container, the object is said to be "contained".
For entities that are not containers, a distinct behavior unfolds. In this scenario, the identified objects are not linked to the entity, except for Observables: "related-to" relationships between the Observables and the entity are automatically added to the workbench and created after its validation.
"},{"location":"usage/import-files/#file-import-in-content-tab","title":"File import in Content tab","text":"
Expanding the scope of file imports, users can seamlessly add files in the Content tab of Analyses or Cases. In this scenario, the file is directly added as an attachment without utilizing an import mechanism.
In order to initiate file imports, users must possess the requisite capability: "Upload knowledge files". This capability ensures that only authorized users can contribute and manage knowledge files within the OpenCTI platform, maintaining a controlled and secure environment for data uploads.
Deprecation warning
Using the ImportDocument connector to parse CSV files is now disallowed as it produces inconsistent results. Please configure and use CSV mappers dedicated to your specific CSV content for reliable parsing. CSV mappers can be created and configured in the administration interface.
OpenCTI enforces strict rules to determine the period during which an indicator is effective for usage. This period is defined by the valid_from and valid_until dates. Throughout its lifecycle, the indicator score will decrease according to the configured decay rules. After the indicator expires, the object is marked as revoked and the detection field is automatically set to false. Here, we outline how these dates are calculated within the OpenCTI platform and how the score is updated with decay rules.
"},{"location":"usage/indicators-lifecycle/#setting-validity-dates","title":"Setting validity dates","text":""},{"location":"usage/indicators-lifecycle/#data-source-provided-the-dates","title":"Data source provided the dates","text":"
If a data source provides valid_from and valid_until dates when creating an indicator on the platform, these dates are used without modification. However, if the creation is performed from the UI and the indicator is eligible to be managed by a decay rule, the platform will replace this valid_until with the one calculated by the decay rule.
"},{"location":"usage/indicators-lifecycle/#fallback-rules-for-unspecified-dates","title":"Fallback rules for unspecified dates","text":"
If a data source does not provide validity dates, OpenCTI applies the decay rule matching the indicator to determine these dates. The valid_until date is computed based on the revoke score of the decay rule: it is set at the exact time at which the indicator will reach the revoke score. Past the valid_until date, the indicator is marked as revoked.
Indicators have an initial score at creation, either provided by the data source or set to 50 by default. Over time, this score is going to decrease according to the configured decay rules. The score is updated at each reaction point defined for the decay rule matching the indicator at creation.
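To make the mechanics concrete, here is a sketch deriving a valid_until date as the moment the score crosses the revoke score. The linear decay curve below is deliberately simplistic and chosen only for illustration; real decay rules define their own curves and reaction points:

```python
from datetime import datetime, timedelta, timezone

# Illustrative parameters (real values come from the matching decay rule)
initial_score = 50
revoke_score = 20
lifetime_days = 180  # days for the score to reach 0 (hypothetical linear decay)

def score_at(days_elapsed: float) -> float:
    # Hypothetical linear decay; platform decay rules use their own curves
    return max(0.0, initial_score * (1 - days_elapsed / lifetime_days))

# valid_until is the exact time at which the score reaches the revoke score
days_to_revoke = lifetime_days * (1 - revoke_score / initial_score)
valid_from = datetime(2024, 1, 1, tzinfo=timezone.utc)
valid_until = valid_from + timedelta(days=days_to_revoke)
print(valid_until.isoformat())  # 2024-04-18T00:00:00+00:00 (108 days later)
```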
Understanding how OpenCTI calculates validity periods and scores is essential for effective threat intelligence analysis. These rules ensure that your indicators are accurate and up-to-date, providing a reliable foundation for threat intelligence data.
"},{"location":"usage/inferences/","title":"Inferences and reasoning","text":""},{"location":"usage/inferences/#overview","title":"Overview","text":"
OpenCTI's inferences and reasoning capability is a robust engine that automates the process of relationship creation within your threat intelligence data. This capability, situated at the core of OpenCTI, allows logical rules to be applied to existing relationships, resulting in the automatic generation of new, pertinent connections.
"},{"location":"usage/inferences/#understanding-inferences-and-reasoning","title":"Understanding inferences and reasoning","text":"
Inferences and reasoning serve as OpenCTI's intelligent engine. It interprets your data logically. By activating specific predefined rules (of which there are around twenty), OpenCTI can deduce new relationships from the existing ones. For instance, if there's a connection indicating an Intrusion Set targets a specific country, and another relationship stating that this country is part of a larger region, OpenCTI can automatically infer that the Intrusion Set also targets the broader region.
Completeness: Fills relationship gaps, ensuring a comprehensive and interconnected threat intelligence database.
Accuracy: Minimizes manual input errors by deriving relationships from predefined, accurate logic.
"},{"location":"usage/inferences/#how-it-operates","title":"How it operates","text":"
When you activate an inference rule, OpenCTI continuously analyzes your existing relationships and applies the defined logical rules. These rules are logical statements that define conditions for new relationships. When the set of conditions is met, OpenCTI creates the corresponding relationship automatically.
For example, if you activate a rule as follows:
IF [Entity A targets Identity B] AND [Identity B is part of Identity C] THEN [Entity A targets Identity C]
OpenCTI will apply this rule to existing data. If it finds an Intrusion Set ("Entity A") targeting a specific country ("Identity B") and that country is part of a larger region ("Identity C"), the platform will automatically establish a relationship between the Intrusion Set and the region.
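As a toy illustration of this reasoning (not the engine's actual implementation), the sketch below applies such a rule to a set of relationship triples, using "located-at" to stand for "is part of":

```python
# Relationship triples: (source, relationship_type, target)
relationships = {
    ("Intrusion Set A", "targets", "Country B"),
    ("Country B", "located-at", "Region C"),
}

# IF [A targets B] AND [B is part of C] THEN [A targets C]
def infer(rels):
    inferred = set()
    for (a, r1, b) in rels:
        for (b2, r2, c) in rels:
            if r1 == "targets" and r2 == "located-at" and b == b2:
                inferred.add((a, "targets", c))
    return inferred - rels

print(infer(relationships))
# {('Intrusion Set A', 'targets', 'Region C')}
```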
Administration: To find out about existing inference rules and enable/disable them, refer to the Rules engine page in the Administration section of the documentation.
Playbooks: OpenCTI playbooks are highly customizable automation scenarios. This seamless integration allows for further automation, making your threat intelligence processes even more efficient and tailored to your specific needs. More information in our blog post.
Manual data creation in OpenCTI is an intuitive process that occurs throughout the platform. This page provides guidance on two key aspects of manual creation: Entity creation and Relationship creation.
Navigate to the relevant section: Be on the section of the platform related to the object type you want to create.
Click on the \"+\" icon: Locate the \"+\" icon located at the bottom right of the window.
Fill in entity-specific fields: A form on the right side of the window will appear, allowing to fill in specific fields of the entity. Certain fields are inherently obligatory, and administrators have the option to designate additional mandatory fields (See here for more information).
Click on \"Create\": Once you've filled in the desired fields, click on \"create\" to initiate the entity creation process.
Before delving into the creation of relationships between objects in OpenCTI, it's crucial to grasp some foundational concepts. Here are key points to understand:
On several aspects, including relationships, two categories of objects must be differentiated: containers (e.g., Reports, Groupings, and Cases) and others. Containers aren't related to objects; they contain them.
Relationships, like all other entities, are objects. They possess fields, can be linked, and share characteristics identical to other entities.
Relationships are inherently directional, comprising a \"from\" entity and a \"to\" entity. Understanding this directionality is essential for accurate relationship creation.
OpenCTI supports various relationship types, and their usage depends on the entity types being linked. For example, a "targets" relationship might link a malware to an organization, while linking a malware to an intrusion set might involve a different relationship type, as illustrated in the sketch below.
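As an illustration of this directionality, here is a minimal sketch using the OpenCTI Python client (pycti); the URL, token and IDs are placeholders, and parameter names may vary between versions:

```python
from pycti import OpenCTIApiClient

# Placeholder URL and token for illustration
client = OpenCTIApiClient("https://opencti.example.com", "<api-token>")

# The "from" entity is the source of the relationship, the "to" entity
# its target: here, a malware (from) "targets" an organization (to).
relationship = client.stix_core_relationship.create(
    fromId="<malware-internal-id>",
    toId="<organization-internal-id>",
    relationship_type="targets",
)
print(relationship["id"])
```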
Now, let's explore the process of creating relationships. To do this, we will differentiate the case of containers from the others.
When it comes to creating relationships within containers in OpenCTI, the process is straightforward. Follow these steps to attach objects to a container:
Navigate to the container: Go to the specific container to which you want to attach an object. This could be a Report, Grouping, or Cases.
Access the \"Entities\" tab: Within the container, locate and access the \"Entities\" tab.
Click on the \"+\" icon: Find the \"+\" icon located at the bottom right of the window.
Search for entities: A side window will appear. Search for the entities you want to add to the container.
Add entities to the container: Click on the desired entities. They will be added directly to the container.
When creating relationships not involving a container, the creation method is distinct. Follow these steps to create relationships between entities:
Navigate to one of the entities: Go to one of the entities you wish to link. Please be aware that the entity from which you create the relationship will be designated as the "from" entity for that relationship. The choice of starting entity should therefore be considered carefully, as it determines the direction of the relationship.
Access the \"Knowledge\" tab: Within the entity, go to the \"Knowledge\" tab.
Select the relevant categories: In the right banner, navigate to the categories that correspond to the object to be linked. The available categories depend on the type of entity you are currently on. For example, if you are on malware and want to link to a sector, choose \"victimology.\"
Click on the \"+\" icon: Find the \"+\" icon located at the bottom right of the window.
Search for entities: A side window will appear. Search for the entities you want to link.
Add entities and click on "Continue": Click on the entities you wish to link. Multiple entities can be selected. Then click on "Continue" at the bottom right.
Fill in the relationship form: As relationships are objects, a creation form similar to creating an entity will appear.
Click on \"Create\": Once you've filled in the desired fields, click on \"create\" to initiate the relationship creation process.
While the aforementioned methods are primary for creating entities and relationships, OpenCTI offers versatility, allowing users to create objects in various locations within the platform. Here's a non-exhaustive list of additional places that facilitate on-the-fly creation:
Creating entities during relationship creation: During the "Search for entities" phase (see above) of the relationship creation process, click on the "+" icon to create a new entity directly.
Knowledge graph: Within the knowledge graph - found in the knowledge tab of the containers or in the investigation functionality - users can seamlessly create entities or relationships.
Inside a workbench: The workbench serves as another interactive space where users can create entities and relationships efficiently.
These supplementary methods offer users flexibility and convenience, allowing them to adapt their workflow to various contexts within the OpenCTI platform. As users explore the platform, they will naturally discover additional means of creating entities and relationships.
Max confidence level
When creating knowledge in the platform, the maximum confidence level of the user is used. Please navigate to this page to understand this concept and the impact it can have on knowledge creation.
OpenCTI's merge capability stands as a pivotal tool for optimizing threat intelligence data, allowing you to consolidate multiple entities of the same type. This mechanism serves as a powerful cleanup tool, harmonizing the platform and unifying scattered information. In this section, we explore the significance of this feature, the process of merging entities, and the strategic considerations involved.
In the ever-expanding landscape of threat intelligence and the multitude of names chosen by different data sources, data cleanliness is essential. Duplicates and fragmented information hinder efficient analysis. The merge capability is a strategic solution for amalgamating related entities into a cohesive unit. Central to the merging process is the selection of a main entity. This primary entity becomes the anchor, retaining crucial attributes such as name and description. Other entities, while losing specific fields like descriptions, are aliased under the primary entity. This strategic decision preserves vital data while eliminating redundancy.
One of the key features of the merge capability is its ability to preserve relationships. While merging entities, their interconnected relationships are not lost. Instead, they seamlessly integrate into the new, merged entity. This ensures that the intricate web of relationships within the data remains intact, fostering a comprehensive understanding of the threat landscape.
OpenCTI's merge capability helps improve the quality of threat intelligence data. By consolidating entities and centralizing relationships, OpenCTI empowers analysts to focus on insights and strategies, unburdened by data silos or fragmentation. However, exercising caution and foresight in the merging process is essential, ensuring a robust and streamlined knowledge base.
Administration: To understand how to merge entities and the considerations to take into account, refer to the Merging page in the Administration section of the documentation.
Deduplication mechanism: the platform is equipped with deduplication processes that automatically merge data at creation (either manually or by importing data from different sources) if it meets certain conditions.
"},{"location":"usage/nested/","title":"Nested references and objects","text":""},{"location":"usage/nested/#stix-standard","title":"STIX standard","text":""},{"location":"usage/nested/#definition","title":"Definition","text":"
In the STIX 2.1 standard, objects can:
Refer to other objects directly in their attributes, by referencing one or multiple IDs.
Have other objects directly embedded in the entity.
In OpenCTI, all nested references and objects are modeled as relationships, to make it possible to pivot more easily on labels, external references, kill chain phases, marking definitions, etc.
When importing and exporting data to/from OpenCTI, the translation between nested references and objects and full-fledged nodes and edges is automated, and therefore transparent for users.
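As an illustration, here is a hand-written STIX 2.1 snippet (all identifiers and values are made up for the example) showing both patterns: references by ID and an embedded object. A minimal sketch in Python, using a plain dictionary:

```python
# A hand-crafted STIX 2.1 Report (illustrative only). "created_by_ref" and
# "object_refs" reference other objects by ID, while "external_references"
# is a list of objects embedded directly in the entity.
report = {
    "type": "report",
    "spec_version": "2.1",
    "id": "report--26ffb872-1dd9-446e-b6f5-d58527e5b5d2",
    "name": "Campaign analysis",
    "published": "2024-01-01T00:00:00.000Z",
    "created_by_ref": "identity--311b2d2d-f010-4473-83ec-1edf84858f4c",
    "object_refs": [
        "malware--c0931cc6-c75e-47e5-9036-78fabc95d4ec",
        "intrusion-set--4e78f46f-a023-4e5f-bc24-71b3ca22ec29",
    ],
    "external_references": [
        {"source_name": "ACME blog", "url": "https://example.com/campaign-report"}
    ],
}
# On import, OpenCTI turns each of these nested references into an edge to a
# full-fledged node (author, contained objects, external reference), which is
# what makes pivoting on them possible.
```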
"},{"location":"usage/notifications/","title":"Notifications and alerting","text":"
In the evolving landscape of cybersecurity, timely awareness is crucial. OpenCTI empowers users to stay informed and act swiftly through its robust notifications and alerting system. This feature allows users to create personalized triggers that actively monitor the platform for specific events and notify them promptly when these conditions are met.
From individual users tailoring their alert preferences to administrators orchestrating collaborative triggers for Groups or Organizations, OpenCTI's notification system is a versatile tool for keeping cybersecurity stakeholders in the loop.
The main menu \"Notifications and triggers\" for creating and managing notifications is located in the top right-hand corner with the bell icon.
In OpenCTI, triggers serve as personalized mechanisms for users to stay informed about specific events that align with their cybersecurity priorities. Users can create and manage triggers to tailor their notification experience. Each trigger operates by actively listening to events based on predefined filters and event types, promptly notifying users via chosen notifiers when conditions are met.
Individual user triggers: Each user possesses the autonomy to craft their own triggers, finely tuned to their unique preferences and responsibilities. By setting up personalized filters and selecting preferred notifiers, users ensure that they receive timely and relevant notifications aligned with their specific focus areas.
Administrative control: Platform administrators have the capability to create and manage triggers for Users, Groups, and Organizations. This provides centralized control and the ability to configure triggers that address collective cybersecurity objectives. Users within the designated Group or Organization benefit from these triggers with read-only access rights. These triggers are created directly on the User, Group, or Organization with whom they are shared, in "Settings > Security > Users|Groups|Organizations".
Leveraging the filters, users can meticulously define the criteria that activate their triggers. This level of granularity ensures that triggers are accurate, responding precisely to events that matter most. Users can tailor filters to consider various parameters such as object types, markings, sources, or other contextual details. They can also allow notifications for the assignment of a Task, a Case, etc.
Beyond filters, a trigger can be configured to respond to three event types: creation, modification, and deletion.
Instance triggers offer a targeted approach to live monitoring by allowing users to set up triggers specific to one or several entities. These triggers, when activated, keep a vigilant eye on a predefined set of events related to the selected entities, ensuring that you stay instantly informed about crucial changes.
On an entity's overview, locate the "Instance trigger quick subscription" button with the bell icon at the top right.
Click on the button to create the instance trigger.
(Optional) Click on it again to modify the instance trigger created.
"},{"location":"usage/notifications/#events-monitored-by-instance-triggers","title":"Events monitored by instance triggers","text":"
An instance trigger set on an entity X actively monitors the following events:
Update/Deletion of X: Stay informed when the selected entity undergoes changes or is deleted.
Creation/Deletion of relationships: Receive notifications about relationships being added or removed from/to X.
Creation/Deletion of related entities: Be alerted when entities that have X in their refs - i.e. contain X, are shared with X, are created by X, etc. - are created or deleted.
Adding/Removing X in ref: Stay in the loop when X is included in or excluded from the refs of other entities - i.e. adding X as the author of an entity, adding X in a report, etc.
Entity deletion notification
It's important to note that the notification of entity deletion can occur in two scenarios:
Real entity deletion: When the entity is genuinely deleted from the platform.
Visibility loss: When a modification to the entity results in the user losing visibility for that entity.
Digests provide an efficient way to streamline and organize your notifications. By grouping notifications based on selected triggers and specifying the delivery period (daily, weekly, monthly), you gain the flexibility to receive consolidated updates at your preferred time, as opposed to immediate notifications triggered by individual events.
Configure digest: Set the parameters, including triggers to be included and the frequency of notifications (daily, weekly, monthly).
Choose the notifier(s): Select the notification method(s) (e.g. within the OpenCTI interface, via email, etc.).
"},{"location":"usage/notifications/#benefits-of-digests","title":"Benefits of digests","text":"
Organized notifications: Digests enable you to organize and categorize notifications, preventing a flood of individual alerts.
Customized delivery: Choose the frequency of digest delivery based on your preferences, whether it's a daily overview, a weekly summary, or a monthly roundup.
Reduced distractions: Receive notifications at a scheduled time, minimizing interruptions and allowing you to focus on critical tasks.
Digests enhance your control over notification management, ensuring a more structured and convenient approach to staying informed about important events.
In OpenCTI, notifiers serve as the channels for delivering notifications, allowing users to stay informed about critical events. The platform offers two built-in notifiers, "Default mailer" for email notifications and "User interface" for in-platform alerts.
OpenCTI features built-in notifier connectors that empower users to create personalized notifiers for notification and activity alerting. Three essential connectors are available:
Platform mailer connector: Enables sending notifications directly within the OpenCTI platform.
Simple mailer connector: Offers a straightforward approach to email notifications with simplified configuration options.
Generic webhook connector: Facilitates communication through webhooks.
OpenCTI provides two samples of webhook notifiers designed for Teams integration.
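As an illustration of the webhook mechanism (the exact payload is driven by the notifier template you configure, and the URL below is a made-up placeholder), a generic webhook notifier ultimately performs an HTTP POST of a JSON body to a configured endpoint:

```python
import requests  # assumed available; any HTTP client would work

# Hypothetical webhook endpoint (e.g. a Teams incoming webhook); replace
# with the URL configured in your notifier.
WEBHOOK_URL = "https://example.webhook.office.com/webhookb2/placeholder"

# Illustrative body: the real structure is defined by the notifier template.
payload = {"text": "OpenCTI notification: a trigger you subscribed to has fired."}

response = requests.post(WEBHOOK_URL, json=payload, timeout=10)
response.raise_for_status()  # surface delivery errors explicitly
```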
"},{"location":"usage/notifications/#configuration-and-access","title":"Configuration and Access","text":"
Notifiers are manageable in the \"Settings > Customization > Notifiers\" window and can be restricted through Role-Based Access Control (RBAC). Administrators can restrict access to specific Users, Groups, or Organizations, ensuring controlled usage.
For guidance on configuring custom notifiers and detailed setup instructions, refer to the dedicated documentation page.
The following chapter aims at giving the reader a step-by-step description of what is available on the platform and the meaning of the different tabs and entries.
When the user connects to the platform, the home page is the Dashboard. This Dashboard contains several visuals summarizing the types and quantity of data recently imported into the platform.
Dashboard
To get more information about the components of the default dashboard, you can consult the Getting started.
The left side panel allows the user to navigate through different windows and access different views and categories of knowledge.
The first part of the platform in the left menu is dedicated to what we call the "hot knowledge": the entities and relationships which are added on a daily basis in the platform and which generally require work and analysis from the users.
Analyses: all containers which convey relevant knowledge such as reports, groupings and malware analyses.
Cases: all types of cases, such as incident responses, requests for information, requests for takedown, etc.
Events: all incidents & alerts coming from operational systems as well as sightings.
Observations: all technical data in the platform such as observables, artifacts and indicators.
The second part of the platform in the left menu is dedicated to the "cold knowledge": the entities and relationships used in the hot knowledge. You can see this as the "encyclopedia" of all the pieces of knowledge you need to get context: threats, countries, sectors, etc.
Threats: all threat entities, from campaigns to threat actors, including intrusion sets.
Arsenal: all tools and pieces of malware used and/or targeted by threats, including vulnerabilities.
Techniques: all objects related to tactics and techniques used by threats (TTPs, etc.).
Entities: all non-geographical contextual information such as sectors, events, organizations, etc.
Locations: all geographical contextual information, from cities to regions, including precise positions.
In Settings > Parameters, it is possible for the platform administrator to hide categories in the platform for all users.
"},{"location":"usage/overview/#hide-categories-in-roles","title":"Hide categories in roles","text":"
In OpenCTI, the different roles are highly customizable. It is possible to defined default dashboards, triggers, etc. but also be able to hide categories in the roles:
"},{"location":"usage/overview/#presentation-of-a-typical-page-in-opencti","title":"Presentation of a typical page in OpenCTI","text":"
While OpenCTI features numerous entities and tabs, many of them share similarities, with only minor differences arising from specific characteristics. These differences may involve the inclusion or exclusion of certain fields, depending on the nature of the entity.
This part only details the general outline of a "typical" OpenCTI page. The specifics of the different entities are detailed in the corresponding pages below (Activities and Knowledge).
In the Overview tab on the entity, you will find all properties of the entity as well as the recent activities.
First, you will find the Details section, which displays all the properties specific to the type of entity you are looking at; an example below with a piece of malware:
Then, the Basic information section displays the properties common to all objects in OpenCTI, such as the marking definition, the author, the labels (i.e. tags), etc.
Below these two sections, you will find the latest modifications in the Knowledge base related to the Entity:
Latest created relationships: display the latest relationships that have been created from or to this Entity. For example, latest Indicators of Compromise and associated Threat Actor of a Malware.
Latest containers about the object: display the latest Cases and Analyses that contain this Entity. For example, the latest Reports about a Malware.
External references: display all the external sources associated with the Entity. You will often find here links to external reports or webpages from which the Entity's information came.
History: display the latest chronological modifications of the Entity and its relationships that occurred in the platform, in order to trace back any alteration.
Lastly, all Notes written by users of the platform about this Entity are displayed, in order to access unstructured analysis comments.
In the Knowledge tab, which is the central part of the entity, you will find all the Knowledge related to the current entity. The Knowledge tab is different for Analyses (Reports, Groupings) and Cases (Incident Response, Request for Information, Request for Takedown) entities than for all the other entity types.
The Knowledge tab of those entities (which represent Analyses or Cases that can contain a collection of Objects) is the place to integrate and link together entities. For more information on how to integrate information in OpenCTI using the knowledge tab of a report, please refer to the Manual creation section.
The Knowledge tab of any other entity (one that does not aim to contain a collection of Objects) gathers all the entities which have been at some point linked to the entity the user is looking at. For instance, as shown in the following capture, the Knowledge tab of the Intrusion Set APT29 gives access to the list of all entities APT29 is attributed to, all victims the intrusion set has targeted, all its campaigns, TTPs, malware, etc. For entities to appear in these tabs under Knowledge, they need to have been linked to the entity directly or have been computed with the inference engine.
"},{"location":"usage/overview/#focus-on-indicators-and-observables","title":"Focus on Indicators and Observables","text":"
The Indicators and Observables section offers 3 display modes: - The entities view, which displays the indicators/observables linked to the entity. - The relationship view, which displays the various relationships between the indicators/observables linked to the entity and the entity itself. - The contextual view, which displays the indicators/observables contained in the cases and analyses that contain the entity.
The Content tab allows for uploading and creating outcome documents related to the content of the current entity (PDF, text, HTML or Markdown files). This tab enables users to preview, manage and write deliverables associated with the entity: for example, an analytic report to share with other teams, a Markdown file to feed a collaborative wiki, etc.
The Content tab is available for a subset of entities: Report, Incident, Incident response, Request for Information, and Request for Takedown.
The Analyses tab contains the list of all Analyses (Report, Groupings) and Cases (Incident response, Request for Information, Request for Takedown) in which the entity has been identified.
By default, this tab displays the list, but you can also display the content of all the listed Analyses on a graph, allowing you to explore all their Knowledge and get a glance at the context around the Entity.
The Observables tab (for Reports and Observed Data): a table containing all SCOs (STIX Cyber Observables) contained in the Report or the Observed Data, with search and filters available. It also displays whether the SCO has been added directly or through inferences with the reasoning engine.
The Entities tab (for Reports and Observed Data): a table containing all SDOs (STIX Domain Objects) contained in the Report or the Observed Data, with search and filters available. It also displays whether the SDO has been added directly or through inferences with the reasoning engine.
Observables:
The Sightings tab (for Indicators and Observables): a table containing all Sightings relationships, corresponding to events in which Indicators (IP, domain name, URL, etc.) are detected by or within an information system, an individual or an organization. Most often, this corresponds to a security event transmitted by a SIEM or EDR.
"},{"location":"usage/pivoting/","title":"Pivot and investigate","text":"
In OpenCTI, all data are structured as an extensive knowledge graph, where every element is interconnected. The investigation functionality provides a powerful tool for pivoting on any entity or relationship within the platform. Pivoting enables users to explore and analyze connections between entities and relationships, facilitating a comprehensive understanding of the data.
To access investigations, navigate to the top right corner of the toolbar:
Access restriction
When an investigation is created, it is initially visible only to the creator, allowing them to work on the investigation before deciding to share it. The sharing mechanism is akin to that of dashboards. For further details, refer to the Access control section in the dashboard documentation page.
We can add any existing entity of the platform to our investigation.
After adding an entity, we can choose the entity and view its details in the panel that appears on the right of the screen.
On each node, we'll notice a bullet with a number inside, serving as a visual indication of how many entities are linked to it but not currently displayed in the graph. Keep in mind that this number is an approximation, which is why there's a "+" next to it. If there's no bullet displayed, it means there's nothing to expand from this node.
To incorporate these linked entities into the graph, we just have to expand the nodes. Utilize the button with a 4-arrows logo in the mentioned menu, or double-click on the entity directly. This action opens a new window where we can choose the types of entities and relationships we wish to expand.
For instance, in the image above, selecting the target Malware and the relationship Uses implies expanding, in the investigation graph, all Malware linked to this node with a relationship of type Uses.
"},{"location":"usage/pivoting/#roll-back-expansion","title":"Roll back expansion","text":"
Expanding a graph can add a lot of entities and relations, making it not only difficult to read but sometimes counterproductive since it brings entities and relations that are not useful to your investigations. To solve this problem, there is a button to undo the last expansion.
When clicking on this button, you retrieve the state in which your graph was before the expansion. As a result, please note that all add or remove actions made since the last expansion will be lost: in other words, if you have expanded your graph and then added some entities to it, the entities you added will no longer be in your graph after clicking the rollback button.
You can roll back your investigation graph up to the last 10 expand actions.
We can create a relationship between entities directly within our investigation. To achieve this, select multiple entities by clicking on them while holding down the shift key. Subsequently, a button appears at the bottom right to create one (or more, depending on the number of entities selected) relationships.
Relationship creation
Creating a relationship in the investigation graph will generate the relationship in your knowledge base.
"},{"location":"usage/pivoting/#capitalize-on-an-investigation","title":"Capitalize on an investigation","text":""},{"location":"usage/pivoting/#export-investigation","title":"Export investigation","text":"
Users have the capability to export investigations, providing a way to share, document, or archive their findings.
PDF and image formats: Users can export investigations in either PDF or image format, offering flexibility in sharing and documentation.
STIX bundle: The platform allows the export of the entire content of an investigation graph as a STIX bundle. In the STIX format, all objects within the investigation graph are automatically aggregated into a Report object.
"},{"location":"usage/pivoting/#turn-investigation-into-a-container","title":"Turn investigation into a container","text":"
Users can efficiently collect and consolidate the findings of an investigation by adding the content into dedicated containers. The contents of an investigation can be imported into various types of containers, including:
Grouping
Incident Response
Report
Request for Information
Request for Takedown
We have the flexibility to choose between creating a new container on the fly or adding investigation content to an existing container.
After clicking on the ADD button, the browser redirects to the Knowledge tab of the container to which we added the content of our investigation. If we added it to multiple containers, the redirection goes to the first one in the list.
"},{"location":"usage/reliability-confidence/","title":"Reliability and Confidence","text":""},{"location":"usage/reliability-confidence/#generalities","title":"Generalities","text":"
In (Cyber) Threat Intelligence, evaluation of information sources and of information quality is one of the most important aspect of the work. It is of the utter most importance to assess situations by taking into account reliability of the sources and credibility of the information.
This concept is foundational in OpenCTI, and have real impact on:
the data deduplication process
the data stream filtering for ingestion and sharing
"},{"location":"usage/reliability-confidence/#what-is-the-reliability-of-a-source","title":"What is the Reliability of a source?","text":"
Reliability of a source of information is a measurement of the trust that the analyst can have about the source, based on the technical capabilities or history of the source. Is the source a reliable partner with long sharing history? A competitor? Unknown?
Reliability of sources are often stated at organizational level, as it requires an overview of the whole history with it.
In the Intelligence field, Reliability is often notated with the NATO Admiralty code.
"},{"location":"usage/reliability-confidence/#what-is-confidence-of-an-information","title":"What is Confidence of an information?","text":"
Reliability of a source is important but even a trusted source can be wrong. Information in itself has a credibility, based on what is known about the subject and the level of corroboration by other sources.
Credibility is often stated at the analyst team level, expert of the subject, able to judge the information with its context.
In the Intelligence field, Confidence is often notated with the NATO Admiralty code.
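For reference, the Admiralty code rates source reliability with a letter (A to F) and information credibility with a digit (1 to 6). The sketch below encodes the two standard scales and combines them into the usual two-character notation:

```python
# Standard Admiralty (NATO) code scales.
RELIABILITY = {
    "A": "Completely reliable",
    "B": "Usually reliable",
    "C": "Fairly reliable",
    "D": "Not usually reliable",
    "E": "Unreliable",
    "F": "Reliability cannot be judged",
}

CREDIBILITY = {
    1: "Confirmed by other sources",
    2: "Probably true",
    3: "Possibly true",
    4: "Doubtful",
    5: "Improbable",
    6: "Truth cannot be judged",
}

def admiralty_notation(reliability: str, credibility: int) -> str:
    """Combine source reliability and information credibility, e.g. 'B2'."""
    assert reliability in RELIABILITY and credibility in CREDIBILITY
    return f"{reliability}{credibility}"

print(admiralty_notation("B", 2))  # -> B2
```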
Why Confidence instead of Credibility?
Using both Reliability and Credibility is an advanced use case for most CTI teams. It requires a mature organization and a well-staffed team. For most internal CTI teams, a simple confidence level is enough to forge assessments, in particular for teams that concentrate on technical CTI.
Thus, in OpenCTI, we have made the choice to fuse the notion of Credibility with the Confidence level that is commonly used by the majority of users. Users now have the liberty to push their practice forward and use both Confidence and Reliability in their daily assessments.
"},{"location":"usage/reliability-confidence/#reliability-open-vocabulary","title":"Reliability open vocabulary","text":"
Reliability value can be set for every Entity in the platform that can be Author of Knowledge:
Organizations
Individuals
Systems
and also Reports
Reliability on Reports allows you to specify the reliability associated with the original author of the report, if you received it through a provider.
For all Knowledge in the platform, the reliability of the source of the Knowledge (author) is displayed in the Overview. This way, you can always forge your assessment of the provided Knowledge regarding the reliability of the author.
You can also now filter entities by the reliability of their author.
Tip
This way, you may choose to feed your work with only Knowledge provided by reliable sources.
Reliability is an open vocabulary that can be customized in Settings > Taxonomies > Vocabularies: reliability_ov.
Info
The setting by default is the Reliability scale from the NATO Admiralty code, but you can define whatever best fits your organization.
Confidence levels can be set on the following entities:
Cases: Incident Response, Request for Information, Request for Takedown, Feedback
Events: Incident, Sighting, Observed data
Observations: Indicator, Infrastructure
Threats: Threat actor (Group), Threat actor (Individual), Intrusion Set, Campaign
Arsenal: Malware, Channel, Tool, Vulnerability
For all of these entities, the Confidence level is displayed in the Overview, along with the Reliability. This way, you can rapidly assess the Knowledge with the Confidence level representing the credibility/quality of the information.
The Confidence level is a numerical value between 0 and 100, but multiple "ticks" can be defined and labelled to provide a meaningful scale.
Confidence level can be customized for each entity type in Settings > Customization > Entity type.
As such customization can be cumbersome, three confidence level templates are provided in OpenCTI:
Admiralty: corresponding to the Admiralty code's credibility scale
Objective: corresponding to a fully objective scale, aiming to leave any subjectivity behind. With this scale, the confidence of a piece of information is:
"Cannot be judged": there is no data regarding the credibility of the information.
"Told": the information is known because it has been told to the source. The source doesn't verify it by any means.
"Induced": the information is the result of analysis work and is based on other similar information assumed to be true.
"Deduced": the information is the result of analysis work, and is a logical conclusion of other information assumed to be true.
"Witnessed": the source has itself observed the described situation or object.
Standard: the historic confidence level scale in OpenCTI, defining Low, Med and High levels of confidence.
It is always possible to modify an existing template to define a custom scale adapted to your context.
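To make the "ticks" idea concrete, here is a small sketch mapping a numerical confidence to a labelled tick. The cut-off values below are illustrative only; the actual thresholds are whatever you configure in your template:

```python
# Illustrative tick definitions: the actual cut-off values are configurable
# per entity type in Settings > Customization; these are not the defaults.
TICKS = [
    (80, "1 - Confirmed by other sources"),
    (60, "2 - Probably true"),
    (40, "3 - Possibly true"),
    (20, "4 - Doubtful"),
    (1,  "5 - Improbable"),
    (0,  "6 - Truth cannot be judged"),
]

def tick_label(confidence: int) -> str:
    """Map a numerical confidence (0-100) to its labelled tick."""
    for threshold, label in TICKS:
        if confidence >= threshold:
            return label
    return TICKS[-1][1]

print(tick_label(75))  # -> "2 - Probably true"
```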
Tip
If you use the Admiralty code setting for both Reliability and Confidence, you will find yourself with the equivalent of the NATO confidence notation in the Overview of your different entities (A1, B2, C3, etc.).
We know that in organizations, different users do not always have the same expertise or seniority. As a result, some specific users can be more "trusted" when creating or updating knowledge than others. Additionally, because connectors, TAXII feeds and streams are each linked to one user, it is important to be able to differentiate which connector, stream or TAXII feed is more trustworthy than the others.
This is why we have introduced the concept of max confidence level to tackle this use case.
Max confidence level per user allows organizations to fine-tune their users to ensure that knowledge created and updated stays as consistent as possible.
The maximum confidence level can be set at the Group level or at the User level, and can be overridden by entity type for fine-tuning your confidence policy.
"},{"location":"usage/reliability-confidence/#overall-way-of-working","title":"Overall way of working","text":"
The overall idea is that users with a max confidence level lower than a confidence level of an entity cannot update or delete this entity.
Also, in a conservative approach, when two confidence levels are possible, the platform always takes the lowest one.
To have a detailed understanding of the concept, please browse through this diagram:
User and group confidence level configuration shall be viewed as:
a maximum confidence level between 0 and 100 (optional for users, mandatory for groups);
a list of overrides (a max confidence level between 0 and 100) per entity type (optional).
The user's effective confidence level is the result of this configuration from multiple sources (user and their groups).
To compute this value, OpenCTI uses the following strategy (sketched in code after this list):
effective maximum confidence is the maximum value found in the user's groups;
effective overrides per entity type are cumulated from all groups, taking the maximum value if several overrides are set on the same entity type;
if a user maximum confidence level is set, it overrides everything from groups, including the overrides per entity type defined at group level;
if not, but the user has specific overrides per entity type, they override the corresponding confidence levels per entity type coming from groups;
if a user has the administrator's "Bypass" capability, the effective confidence level will always be 100 without overrides, regardless of the group and user configuration on confidence level.
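For readers who prefer code to prose, here is a minimal sketch of that resolution logic, assuming user and group configurations are plain dictionaries of the shape described above:

```python
# Sketch of the effective confidence resolution described above.
# A configuration is assumed to be a dict:
#   {"max_confidence": int | None, "overrides": {entity_type: int}}
def effective_confidence(user: dict, groups: list, has_bypass: bool = False) -> dict:
    if has_bypass:
        # Administrator "Bypass": always 100, with no overrides.
        return {"max_confidence": 100, "overrides": {}}

    if user.get("max_confidence") is not None:
        # A user-level max confidence overrides everything from groups,
        # including group-level overrides per entity type.
        return {"max_confidence": user["max_confidence"],
                "overrides": user.get("overrides", {})}

    # Effective maximum = highest value among the user's groups.
    group_max = max(g["max_confidence"] for g in groups)

    # Cumulate group overrides, keeping the highest value per entity type...
    overrides: dict = {}
    for g in groups:
        for entity_type, value in g.get("overrides", {}).items():
            overrides[entity_type] = max(value, overrides.get(entity_type, 0))
    # ...then user overrides replace the matching group ones.
    overrides.update(user.get("overrides", {}))

    return {"max_confidence": group_max, "overrides": overrides}

groups = [{"max_confidence": 50, "overrides": {"Report": 70}},
          {"max_confidence": 80, "overrides": {"Report": 60, "Malware": 40}}]
print(effective_confidence({"overrides": {"Malware": 90}}, groups))
# -> {'max_confidence': 80, 'overrides': {'Report': 70, 'Malware': 90}}
```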
The following diagram describes the different use-cases you can address with this system.
"},{"location":"usage/reliability-confidence/#how-to-set-a-confidence-level","title":"How to set a confidence level","text":"
You can set up a maximum confidence levels from the Confidences tab in the edition panel of your user or group. The value can be selected between 0 and 100, or using the admiralty scale selector.
At the group level, the maximum confidence level is mandatory, but is optional at the user level (you have to enable it using the corresponding toggle button).
"},{"location":"usage/reliability-confidence/#how-to-override-a-max-confidence-level-per-entity-type","title":"How to override a max confidence level per entity type","text":"
You also have the possibility to override a max confidence level per entity type, limited to Stix Domain Objects.
You can visualize the user's effective confidence level in the user's details view, by hovering the corresponding tooltip. It describes where the different values might come from.
"},{"location":"usage/reliability-confidence/#usage-in-opencti","title":"Usage in OpenCTI","text":""},{"location":"usage/reliability-confidence/#example-with-the-admiralty-code-template","title":"Example with the admiralty code template","text":"
Your organization have received a report from a CTI provider. At your organization level, this provider is considered as reliable most of the time and its reliability level has been set to \"B - Usually Reliable\" (your organization uses the Admiralty code).
This report concerns ransomware threat landscape and have been analysed by your CTI analyst specialized in cybercrime. This analyst has granted a confidence level of \"2 - Probably True\" to the information.
As a technical analyst, through the cumulated reliability and Confidence notations, you now know that the technical elements of this report are probably worth consideration.
"},{"location":"usage/reliability-confidence/#example-with-the-objective-template","title":"Example with the Objective template","text":"
As a CTI analyst in a governmental CSIRT, you build up Knowledge that will be shared within the platform to beneficiaries. Your CSIRT is considered as a reliable source by your beneficiaries, even if you play a role of a proxy with other sources, but your beneficiaries need some insights about how the Knowledge has been built/gathered.
For that, you use the \"Objective\" confidence scale in your platform to provide beneficiaries with that. When the Knowledge is the work of the investigation of your CSIRT, either from incident response or attack infrastructure investigation, you set the confidence level to \"Witnessed\", \"Deduced\" or \"Induced\" (depending on if you observed directly the data, or inferred it during your research). When the information has not been verified by the CSIRT but has value to be shared with beneficiaries, you can use the \"Told\" level to make it clear to them that the information is probably valuable but has not been verified.
"},{"location":"usage/search/","title":"Search for knowledge","text":"
In OpenCTI, you have access to different capabilities to be able to search for knowledge in the platform. In most cases, a search by keyword can be refined with additional filters for instance on the type of object, the author etc.
The global search is always available in the top bar of the platform.
This search covers all STIX Domain Objects (SDOs) and STIX Cyber Observables (SCOs) in the platform. The search results are sorted according to the following behaviour:
Priority 1 for exact matching of the keyword in one attribute of the objects.
Priority 2 for partial matching of the keyword in the name, the aliases and the description attributes (full text search).
Priority 3 for partial matching of the keyword in all other attributes (full text search).
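The actual scoring is performed by the search engine; the following sketch only mirrors the documented three-tier ordering, to make it concrete:

```python
def search_priority(keyword: str, obj: dict) -> int:
    """Mirror the documented ordering: lower value = higher priority."""
    kw = keyword.lower()
    attrs = {k: str(v).lower() for k, v in obj.items()}

    if any(kw == value for value in attrs.values()):
        return 1  # Priority 1: exact match in any attribute
    if any(kw in attrs.get(field, "") for field in ("name", "aliases", "description")):
        return 2  # Priority 2: partial match in name, aliases or description
    if any(kw in value for value in attrs.values()):
        return 3  # Priority 3: partial match in any other attribute
    return 4      # no match at all

candidates = [
    {"name": "Dridex", "description": "Loader often seen with emotet campaigns"},
    {"name": "Emotet", "description": "Banking trojan turned loader"},
]
# Sorting by priority puts the exact name match first.
print(sorted(candidates, key=lambda o: search_priority("emotet", o)))
```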
If you get unexpected results, it is always possible to add some filters after the initial search:
Also, using the Advanced search button, it is possible to directly put filters in a global search:
Advanced filters
You have access to advanced filters all across the UI. If you want to know more about how to use these filters with the API or the Python library, don't hesitate to read the dedicated page.
"},{"location":"usage/search/#full-text-search-in-files-content","title":"Full text search in files content","text":"
Enterprise edition
Full text search in files content is available under the \"OpenCTI Enterprise Edition\" license.
Please read the dedicated page for all relevant information.
It's possible to extend the global search by keywords to the content of documents uploaded to the platform via the Data import tab, or directly linked to an entity via its Data tab.
It is particularly useful to enable Full text indexing to avoid missing important information that may not have been structured within the platform. This situation can arise due to a partial automatic import of document content, limitations of a connector, and, of course, errors during manual processing.
In order to search in files, you need to configure file indexing.
The bulk search capability is available in the top bar of the platform and allows you to copy-paste a list of keywords or objects (i.e. a list of domains, a list of IP addresses, a list of vulnerabilities, etc.) to search in the platform:
When searching in bulk, OpenCTI is only looking for an exact match in some properties:
name
aliases
x_opencti_aliases
x_mitre_id
value
subject
abstract
hashes.MD5
hashes.SHA-1
hashes.SHA-256
hashes.SHA-512
x_opencti_additional_names
When something is not found, it appears in the list as Unknown and will be excluded if you choose to export your search result in a JSON STIX bundle or in a CSV file.
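A rough sketch of this exact-match behaviour (the real lookup happens server-side; the property names are those listed above):

```python
BULK_SEARCH_PROPERTIES = [
    "name", "aliases", "x_opencti_aliases", "x_mitre_id", "value",
    "subject", "abstract", "hashes.MD5", "hashes.SHA-1", "hashes.SHA-256",
    "hashes.SHA-512", "x_opencti_additional_names",
]

def bulk_match(keyword: str, obj: dict) -> bool:
    """Exact matches only: bulk search never does partial matching."""
    for prop in BULK_SEARCH_PROPERTIES:
        value = obj
        for part in prop.split("."):  # walk nested properties like hashes.MD5
            value = value.get(part) if isinstance(value, dict) else None
        if value == keyword or (isinstance(value, list) and keyword in value):
            return True
    return False

observable = {"value": "8.8.8.8", "hashes": {"MD5": "d41d8cd98f00b204e9800998ecf8427e"}}
print(bulk_match("8.8.8.8", observable))  # True: exact match on "value"
print(bulk_match("8.8.8", observable))    # False: partial keywords stay "Unknown"
```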
Some other screens can contain search bars for specific purposes. For instance, in the graph views to filter the nodes displayed on the graph:
"},{"location":"usage/tips-widget-creation/","title":"Pro-tips on widget creation","text":"
Previously, the creation of widgets has been covered. To help users being more confident in creating widgets, here are some details to master the widget creation.
"},{"location":"usage/tips-widget-creation/#how-to-choose-the-appropriate-widget-visualization-for-your-use-case","title":"How to choose the appropriate widget visualization for your use case?","text":"
Use these widgets when you would like to display information about one single type of object (entity or relation).
Widget visualizations: number, list, list (distribution), timeline, donuts, radar, map, bookmark, tree map.
Use case example: view the number of malware in the platform (number widget), view the top 10 threat actor groups targeting a specific country (distribution list widget), etc.
Use case example: view the number of malware, intrusion sets and threat actor groups added over the course of the last month in the platform (line or area widget).
Type of object in widget
These widgets need to use the same "type" of object to work properly. You always need to add relationships in the filter view if you have selected the "knowledge graph" perspective. If you have selected the knowledge graph perspective, adding "Entities" (click on + Entities) will not work, since you would not be counting the same things.
"},{"location":"usage/tips-widget-creation/#break-down-widgets","title":"Break down widgets","text":"
Use this widget if you want to divide your data set into smaller parts to make it clearer and more useful for analysis.
Widget visualization: horizontal bars.
Use case example: view the list of malware targeting a country, broken down by type of malware.
"},{"location":"usage/tips-widget-creation/#adding-datasets-to-your-widget","title":"Adding datasets to your widget","text":"
Adding datasets can serve two purposes: comparing data or breakdown a view to have deeper understanding on what a specific dataset is composed of.
"},{"location":"usage/tips-widget-creation/#use-case-1-compare-several-datasets","title":"Use Case 1: compare several datasets","text":"
As mentioned in How to choose the appropriate widget visualization for your use case? section you can add data sets to compare different data. Make sure to add the same type of objects (entities or relations) to be able to compare the same objects, by using access buttons like +, + Relationships, or + Entities.
You can add up to 5 different data sets. The Label field allows you to name a data set, and this label can then be shown as a legend in the widget using the Display legend button in the widget parameters (see the next section).
"},{"location":"usage/tips-widget-creation/#use-case-2-break-down-your-chart","title":"Use case 2: break down your chart","text":"
As mentioned in How to choose the appropriate widget visualization for your use case? section you can add data sets to decompose your graph into smaller meaningful chunks. In the below points, you can find some use cases that will help you understand how to structure your data.
You can break down a view either by entity or by relations, depending on what you need to count.
"},{"location":"usage/tips-widget-creation/#break-down-by-entity","title":"Break down by entity","text":"
Use case example: I need to understand what are the most targeted countries by malware, and have a breakdown for each country by malware type.
Process:
To achieve this use case, you first need to select the horizontal bar visualization.
Then you need to select the knowledge graph perspective.
In the filters view:
Then input your main query Source type = Malware AND Target type = Countries AND Relation type = Targets. Add a label to your dataset.
Add an entity data set by using access button + Entities.
Add the following filters Entity type = Malware AND In regards of = targets. Add a label to your dataset.
In the parameter view:
Attribute (of your relation) = entity (so that you display the different entities values)
Display the source toggle = off
Attribute (of your entity malware) = Malware type (since you want to break down your relations by the malware types)
As a result, you get a list of countries broken down by malware types.
"},{"location":"usage/tips-widget-creation/#break-down-by-relation","title":"Break down by relation","text":"
Use case example: I need to understand what are the top targeting malware and have a breakdown of the top targets per malware
Process:
To achieve this use case, you first need to select the horizontal bar visualization.
Then you need to select the knowledge graph perspective.
In the filters view:
Then input your main query Source type = Malware AND Relation type = Targets. Add a label to your dataset.
Add a relationship data set by using the access button + Relationships.
Add the following filters Source type = Malware AND Relation type = targets. Add a label to your dataset.
In the parameter view:
Attribute (of your relation): entity (so that you display the different entities values)
Display the source toggle = on
Attribute (of your entity malware) = Malware type (since you want to break down your relations by the malware types)
Display the source toggle = off
As a result, you get a list of malware with the breakdown of their top targets.
"},{"location":"usage/tips-widget-creation/#more-use-cases","title":"More use cases","text":"
To see more use cases, feel free to have a look at this blog post that will provide you additional information.
Creating widgets on the dashboard involves a four-step configuration process. By navigating through these configuration steps, users can design widgets that meet their specific requirements.
Users can select from 15 diverse visualization options to highlight different aspects of their data. This includes simple views like counters and lists, as well as more intricate views like heatmaps and trees. The chosen visualization impacts the available perspectives and parameters, making it crucial to align the view with the desired data observations. Here are a few insights:
Line and Area views: Ideal for visualizing activity volumes over time.
Horizontal bar views: Designed to identify top entities that best satisfy applied filters (e.g., top malware targeting the Finance sector).
Tree views: Useful for comparing activity volumes.
A perspective is the way the platform will count the data to display in your widgets:
Entities Perspective: Focuses on entities, allowing observation of simple knowledge based on defined filters and criteria. The count will be based on entities only.
Knowledge Graph Perspective: Concentrates on relationships, displaying intricate knowledge derived from relationships between entities and specified filters. The count will be based on relations only.
Activity & History Perspective: Centers on activities within the platform, not the knowledge content. This perspective is valuable for monitoring user and connector activities, evaluating data sources, and more.
Filters vary based on the selected perspective, defining the dataset to be utilized in the widget. Filters are instrumental in narrowing down the scope of data for a more focused analysis.
While filters in the \"Entities\" and \"Activity & History\" perspectives align with the platform's familiar search and feed creation filters, the \"Knowledge Graph\" perspective introduces a more intricate filter configuration.Therefore, they need to be addressed in more detail.
"},{"location":"usage/widgets/#filter-in-the-context-of-knowledge-graph","title":"Filter in the context of Knowledge Graph","text":"
Two types of filters are available in the Knowledge Graph perspective:
Main query filter
Classic filters (gray): Define the relationships to be retrieved, forming the basis on which the widget displays data. Remember, statistics in the Knowledge Graph perspective are based on relationships.
Pre-query filters
Pre-query filters are used to provide your main query with a specific dataset. In other words, instead of querying the whole data set of your platform, you can target a subset of data matching certain criteria. There are two types of pre-query filters:
Dynamic filters on the source (orange): Refine data by filtering on entities positioned as the source (in the "from" position) of the relationship.
Dynamic filters on the target (green): Refine data by filtering on entities positioned as the target (in the "to" position) of the relationship.
Pre-query limitation
The pre-query is limited to 5000 results. If your pre-query yields more than 5000 results, your widget will only display statistics based on these first 5000 results matching your pre-query, resulting in a misleading view. To avoid this issue, be specific in your pre-query filters.
Example scenario:
Let's consider an example scenario: Analyzing the initial access attack patterns used by intrusion sets targeting the finance sector.
Classic filters: Define the relationships associated with the use of attack patterns by intrusion sets.
Dynamic filters on the source (Orange): Narrow down the data by filtering on intrusion sets targeting the finance sector.
Dynamic filters on the target (Green): Narrow down the data by filtering on attack patterns associated with the kill chain's initial access phase.
By leveraging these advanced filters, users can conduct detailed analyses within the Knowledge Graph perspective, unlocking insights that are crucial for understanding intricate relationships and statistics.
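Conceptually (this is not platform code, and the field names are simplified for the example), the pre-query filters first reduce the candidate source and target entities, then the main query counts the relationships between them:

```python
entities = [
    {"id": "is-1", "type": "Intrusion-Set", "targets_sector": "finance"},
    {"id": "is-2", "type": "Intrusion-Set", "targets_sector": "energy"},
    {"id": "ap-1", "type": "Attack-Pattern", "kill_chain_phase": "initial-access"},
]
relationships = [
    {"relationship_type": "uses", "from": "is-1", "to": "ap-1"},
    {"relationship_type": "uses", "from": "is-2", "to": "ap-1"},
]

# Orange pre-query: intrusion sets targeting the finance sector.
sources = {e["id"] for e in entities
           if e["type"] == "Intrusion-Set" and e.get("targets_sector") == "finance"}
# Green pre-query: attack patterns in the initial access kill chain phase.
targets = {e["id"] for e in entities
           if e["type"] == "Attack-Pattern" and e.get("kill_chain_phase") == "initial-access"}

# Gray main query: the "uses" relationships the widget actually counts.
uses = [r for r in relationships
        if r["relationship_type"] == "uses"
        and r["from"] in sources and r["to"] in targets]
print(len(uses))  # -> 1: only the finance-targeting intrusion set is counted
```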
In certain views, you can access buttons like +, + Relationships, or + Entities. These buttons enable you to incorporate different data into the same widget for comparative analysis. For instance, in a Line view, adding a second set of filters will display two curves in the widget, each corresponding to one of the filtered data sets. Depending on the view, you can work with 1 to 5 sets of filters. The Label field allows you to name a data set, and this label can then be shown as a legend in the widget using the Display legend button in the widget parameters (see the next section).
Parameters depend on the chosen visualization and allow users to define widget titles, choose displayed elements from the filtered data, select data reference date, and configure various other parameters specific to each visualization.
For the \"Knowledge Graph\" perspective, a critical parameter is the Display the source toggle. This feature empowers users to choose whether the widget displays entities from the source side or the target side of the relationships.
Toggle ON (\"Display the source\"): The widget focuses on entities positioned as the source of the relationships (in the \"from\" position).
Toggle OFF (\"Display the target\"): The widget shifts its focus to entities positioned as the target of the relationships (in the \"to\" position).
This level of control ensures that your dashboard aligns precisely with your analytical objectives, offering a tailored perspective based on your data and relationship.
To successfully configure widgets in OpenCTI, having a solid understanding of the platform's data modeling is essential. Knowing specific relationships, entities, and their attributes helps refine filters accurately. Let's explore two examples.
Scenario 1:
Consider the scenario where you aim to visualize relationships between intrusion sets and attack patterns. In this case, the relevant relationship type connecting intrusion sets to attack patterns is labeled "Uses" (as illustrated in the "Filters" section).
Scenario 2:
Suppose your goal is to retrieve all reports associated with the finance sector. In this case, it's essential to use the correct filter for the finance sector. Instead of placing the finance sector in the "Related entity" filter, it should be placed in the "Contains" filter. Since a Report is a container object (like Cases and Groupings), it contains entities within it and is not related to entities.
"},{"location":"usage/widgets/#key-data-modeling-aspects","title":"Key data modeling aspects","text":"
Entities: Recognizing container (e.g. Reports, Cases and Groupings) and understanding the difference with non-container.
Relationships: Identifying the relationship types connecting entities.
Attributes: Understanding entities and relationships attributes for effective filtering.
Having this prerequisite knowledge allows you to navigate the widget configuration process seamlessly, ensuring accurate and insightful visualizations based on your specific data requirements.
Workbenches serve as dedicated workspaces for manipulating data before it is officially imported into the platform.
"},{"location":"usage/workbench/#location-of-use","title":"Location of use","text":"
The workbenches are located at various places within the platform:
"},{"location":"usage/workbench/#data-import-and-analyst-workbenches-window","title":"Data import and analyst workbenches window","text":"
This window encompasses all the necessary tools for importing a file. Files imported through this interface will subsequently be processed by the import connectors, resulting in the creation of workbenches. Additionally, analysts can manually create a workbench by clicking on the \"+\" icon at the bottom right of the window.
"},{"location":"usage/workbench/#data-tabs-of-all-entities","title":"Data tabs of all entities","text":"
Workbenches are also accessible through the \"Data\" tabs of entities, providing convenient access to import data associated with the entity.
Workbenches are automatically generated upon the import of a file through an import connector. When an import connector is initiated, it scans files for recognizable entities and subsequently creates a workbench. All identified entities are placed within this workbench for analyst reviews. Alternatively, analysts have the option to manually create a workbench by clicking on the \"+\" icon at the bottom right of the \"Data import and analyst workbenches\" window.
As the workbench is a draft space, analysts use it to review connector proposals before finalizing them for import. Within the workbench, analysts have the flexibility to add, delete, or modify entities to meet specific requirements.
Once the content within the workbench is deemed acceptable, the analyst must initiate the ingestion process by clicking on Validate this workbench. This action writes the data into the knowledge base.
Workbenches are drafting spaces
Until the workbench is validated, the contained data remains in draft form and is not recorded in the knowledge base. This ensures that only reviewed and approved data is officially integrated into the platform.
For more information on importing files, refer to the Import from files documentation page.
Confidence level of created knowledge through workbench
The confidence level of knowledge created through a workbench is affected by the confidence level of the user. Please navigate to this page to understand this in more detail.
"},{"location":"usage/workflows/","title":"Workflows and assignation","text":"
Efficiently manage and organize your work within the OpenCTI platform by leveraging workflows and assignment. These capabilities provide a structured approach to tracking the status of objects and assigning responsibilities to users.
Workflows are designed to trace the status of objects in the system. They are represented by the "Processing status" field embedded in each object. By default, this field is disabled for most objects but can be activated through the platform settings. For details on activating and configuring workflows, refer to the dedicated documentation page.
Enabling workflows enhances visibility into the progress and status of different objects, providing a comprehensive view for effective management.
Certain objects, including Reports, Cases, and Tasks, come equipped with "Assignees" and "Participants" attributes. These attributes serve the purpose of designating individuals responsible for the object and those who actively participate in it.
Attributes can be set as mandatory or with default values, streamlining the assignment process. Users can also be assigned or designated as participants manually, contributing to a collaborative and organized workflow. For details on configuring attributes, refer to the dedicated documentation page.
Users can stay informed about assignments through notification triggers. By setting up notification triggers, users receive alerts when an object is assigned to them. This ensures timely communication and proactive engagement with assigned tasks or responsibilities.
The behavior of each connector is defined by its development, determining the types of data it imports and its configuration options. This flexibility allows users to customize the import process to their specific needs, ensuring a seamless and personalized data integration experience.
The level of configuration granularity regarding the imported data type varies with each connector. Nevertheless, connectors empower users to specify the date from which they wish to fetch data. This capability is particularly useful during the initial activation of a connector, enabling the retrieval of historical data. Following this, the connector operates in real-time, continuously importing new data from the source.
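On the development side, this "fetch since a date, then keep up in real time" behaviour is usually carried by the connector state. A minimal sketch using the pycti helper (`fetch_from_source` is a hypothetical stand-in for a real source client, and a real connector loads a full configuration rather than an empty one):

```python
from datetime import datetime, timezone

from pycti import OpenCTIConnectorHelper

def fetch_from_source(since: str):
    """Hypothetical source client: yield STIX bundles newer than `since`."""
    return []  # stub for illustration

helper = OpenCTIConnectorHelper(config={})  # a real connector loads a full config

# Read the last run date from the stored connector state (None on first run).
state = helper.get_state() or {}
since = state.get("last_run", "2020-01-01T00:00:00Z")  # initial backfill date

for bundle in fetch_from_source(since):
    helper.send_stix2_bundle(bundle)

# Persist the new cursor: the next run only fetches fresher data, and
# resetting the connector state clears this cursor (full re-ingestion).
helper.set_state({"last_run": datetime.now(timezone.utc).isoformat()})
```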
Reset connector state
Resetting the connector state enables you to restart the ingestion process from the very beginning. Additionally, resetting the connector state will purge the RabbitMQ queue for this specific connector.
However, this action requires the "Manage connector state" capability (more details about capabilities: List of capabilities). Without this specific capability, you will not be able to reset the connector state.
When the action is performed, a message is displayed confirming the reset and informing you about the number of messages that will be purged.
Purging a message queue is necessary to remove any accumulated messages that may be outdated or redundant. It helps to avoid reprocessing messages that have already been ingested. By purging the queue, you ensure that the connector starts with a clean slate, processing only new data.
Connector Ecosystem
OpenCTI's connector ecosystem covers a broad spectrum of sources, enhancing the platform's capability to integrate data from various contexts, from threat intelligence providers to specialized databases. The list of available connectors can be found in our connectors catalog. Connectors are categorized into three types: import connectors (the focus here), enrichment connectors, and stream consumers. Further documentation on connectors is available on the dedicated documentation page.
In summary, automated imports through connectors empower OpenCTI users with a scalable, efficient, and customizable mechanism for data ingestion, ensuring that the platform remains enriched with the latest and most relevant intelligence.