Threat Command Policy Rules
Use Threat Command rules to enforce remediation or other actions on alerts. Unlike global policy rules, Threat Command rules enable more specific search criteria and additional activities.
You can create rules (also called policies) that will apply actions to all alerts of a specific alert type or to a subset of those alerts that match defined criteria.
All Threat Command policies include these parts:
- Which alerts to apply the policy to - The alerts that match the criteria you define in the Alert Profile tab.
- What to do to the matching alerts - You define this in the following tabs:
- Internal remediation - Share alert indicators with integrated security devices. For example, if a phishing website alert contains IOCs, you can share those IOCs with an integrated device so the device can block them. This option is available for some alert scenarios, such as phishing, credentials leakage, and Twitter.
- External remediation - Request a takedown, close alerts, and report malicious sites. The available options change per alert scenario.
- Actions - Assign alerts, add watchers, change severity, add tags, send an email, and close the matched alerts.
Creating a policy consists of two steps: defining which alerts the policy applies to, and defining what to do to the matching alerts. These parts are sketched in the example below.
By default, rules act on new alerts. After defining a new rule, you are given the option to include past alerts in the defined actions.
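To make the structure concrete, the following minimal sketch models a policy's three parts (alert profile, remediation, and actions) as a plain Python data structure. The field names, device name, and IOC group name are illustrative assumptions only; this is not the Threat Command API or an export format.

```python
# Minimal sketch (not the Threat Command API): a policy rule modeled as plain
# Python data. Field names, the device name, and the IOC group name are
# hypothetical and for illustration only. Requires Python 3.9+.
from dataclasses import dataclass, field

@dataclass
class AlertProfile:
    """Which alerts the policy applies to (the Alert Profile tab)."""
    alert_types: list[str] = field(default_factory=list)   # empty list = all scenarios match
    severities: list[str] = field(default_factory=list)    # at least one severity is required
    conditions: dict[str, object] = field(default_factory=dict)

@dataclass
class PolicyRule:
    """Matching criteria plus what to do with the matching alerts."""
    name: str
    profile: AlertProfile
    internal_remediation: list[dict] = field(default_factory=list)  # devices and IOC groups to share
    external_remediation: dict = field(default_factory=dict)        # takedown, auto-close, reporting
    actions: dict = field(default_factory=dict)                     # assign, watchers, severity, tags, email, close

# Example: a rule that shares phishing IOCs with a (hypothetical) firewall,
# requests a takedown, and tags the matching alerts.
phishing_rule = PolicyRule(
    name="Auto-remediate phishing",
    profile=AlertProfile(
        alert_types=["Phishing website"],
        severities=["High"],
        conditions={"Network type": "Clear web"},
    ),
    internal_remediation=[{"device": "Firewall-01", "ioc_groups": ["Blocklist"]}],
    external_remediation={"request_takedown": True, "close_after_remediation": True},
    actions={"tags": ["auto-policy"], "assignee": "soc-analyst"},
)
```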
Create a Threat Command policy
You can use Threat Command rules to match alerts more exactly and to perform internal or external remediation (where applicable).
To create a Threat Command policy:
- From the Automation > Policy page, click the Threat Command alert type for which to create a rule.
- Click the + sign.
- Type a name for the rule.
- Use the Alert Profile tab to define which alerts the policy actions apply to:
- By default, all alert scenarios will match. To limit that, select from the Alert Type list.
The available alert scenarios depend on the Threat Command rule that you are creating.
- From the Alert Severity list, select at least one alert severity to match.
- (Optional) To further define the matching alerts, select from the General Conditions list.
The conditions depend on the alert scenario that you selected.
The most common conditions are described in the General conditions table. There are additional conditions specific to particular scenarios, described in the Alert type conditions table.
- Click Next.
The Internal Remediation tab is shown. Internal remediation is available for some alert scenarios.
- (Optional) Configure internal remediation:
- Select Share alert indicators with devices.
The list of integrated devices is shown, together with the IOC groups that were predefined for each device.
- Select devices and the IOC groups to share per device.
You can also create IOC groups on the fly, as described in Automate Internal Remediation.
Selected IOC groups are displayed in the IOC Groups selected panel.
- Click Next.
The External Remediation tab is shown. External remediation options vary per alert scenario.
- (Optional) Configure external remediation:
- To initiate a takedown, select one of the available options.
- To automatically close matching alerts after they have been successfully remediated, select Close alerts after successful remediation.
- To warn others about potentially malicious websites, select to which agency to report them.
- Click Next.
The Action tab is shown.
- Select actions to perform on matched alerts.
- Click Finish.
When prompted, you can include past alerts, even alerts that are closed.
- (Optional) To include historical alerts, select a time frame and whether to include closed alerts, then click Yes, include. Otherwise, click No thanks.
When the rule is accepted, a confirmation message is displayed, and the rule is shown in the rules list.
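The sketch below, which assumes a simplified dictionary representation of alerts and of the Alert Profile criteria (all field names are hypothetical, not the product's data model), illustrates the matching step and the optional historical backfill: by default only new alerts are acted on, and the time-frame and include-closed choices widen that set.

```python
# Hypothetical sketch of rule matching and historical backfill. Alert and
# profile field names are illustrative only; this is not Threat Command code.
from datetime import datetime, timedelta, timezone

def alert_matches(alert: dict, profile: dict) -> bool:
    """Check the Alert Profile criteria: scenario, severity, and conditions."""
    if profile.get("alert_types") and alert["type"] not in profile["alert_types"]:
        return False
    if alert["severity"] not in profile["severities"]:
        return False
    return all(alert.get(key) == value
               for key, value in profile.get("conditions", {}).items())

def backfill_alerts(profile: dict, past_alerts: list[dict],
                    time_frame_days: int, include_closed: bool = False) -> list[dict]:
    """Select the historical alerts a newly created rule would also act on."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=time_frame_days)
    return [alert for alert in past_alerts
            if alert["found_date"] >= cutoff
            and (include_closed or not alert.get("is_closed", False))
            and alert_matches(alert, profile)]
```

In the product, the same time-frame and include-closed choices are made interactively in the prompt shown after you click Finish.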
View current Threat Command rules
- From the Automation > Policy page, click a Threat Command alert type.
Rules that have already been defined are displayed in the rules list for the selected alert type.
You can edit, duplicate, delete, and stop a rule. For more information, see Editing Policy Rules.
General conditions table
The following table describes the general conditions that apply to most alert scenarios:
Condition | Enforce on alerts... |
---|---|
Select asset type | Of a specified asset type. |
Select assets | Of a specified asset value. |
Source date range* | Whose source was collected during a specified date range. |
Select tags* | That have specific tags. |
Source URL contains* | Whose source URL contains specified text. Separate multiple items with a semicolon. |
Network type | Found in the clear web or dark web. |
Source type | Found in specified sources (such as Hacking forums or Black markets). |
* Use the "Negate" option to find the opposite, for example, alerts whose source date is not within a range, or that do not have tags.
Not all of the general conditions apply to every alert type, and they are sometimes displayed in a different order.
Some alert scenarios use other conditions, described in the Alert type conditions table.
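As a rough illustration of how these conditions and the "Negate" option combine, the sketch below evaluates a few of the general conditions against a simplified alert dictionary. The condition handling and field names are illustrative assumptions, not the product's internal logic.

```python
# Hypothetical sketch of general-condition evaluation with the "Negate" option.
# Field names on the alert dictionary are illustrative only.
from datetime import date

def evaluate_general_condition(alert: dict, condition: dict) -> bool:
    """Evaluate one general condition; 'negate' inverts the result."""
    name = condition["name"]
    value = condition["value"]
    if name == "Select tags":
        result = any(tag in alert.get("tags", []) for tag in value)
    elif name == "Source URL contains":
        # Multiple items are separated with a semicolon.
        result = any(item.strip() in alert.get("source_url", "")
                     for item in value.split(";"))
    elif name == "Source date range":
        start, end = value
        result = start <= alert["source_date"] <= end
    else:
        raise ValueError(f"Condition not modeled in this sketch: {name}")
    return not result if condition.get("negate", False) else result

# Example: match alerts whose source date is NOT within January 2024.
alert = {"source_date": date(2024, 3, 5), "tags": ["phishing"],
         "source_url": "http://example.com"}
condition = {"name": "Source date range",
             "value": (date(2024, 1, 1), date(2024, 1, 31)),
             "negate": True}
assert evaluate_general_condition(alert, condition) is True
```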
Alert type conditions table
The following table describes conditions that apply to only some scenarios (in alphabetical order):
Condition | Enforce on alerts... |
---|---|
Active User Accounts | Where at least one leaked username is "active" or "unknown" in the Active Directory or Azure Active Directory. |
Select App Stores* | Found in specified app stores. |
Select Attacked Company | Where a specified company was attacked. |
Select Certificate Found Issues* | Where the certificate issues are of a specified type (such as Non-trusted certificate or Certificate expired). |
Select Confidentiality Terms* | Where the leaked document contains specified confidentiality terms (such as Sensitive or Confidential). |
Credentials for Internal Page | Where the offered credentials relate to an internal webpage. |
Document Modification Date | Where the leaked document was modified in a specified date range. |
Domain Expired | Whose domain is (or is not) expired. |
Domain Registration Date* | Whose domain was registered in a specified date range. |
Domain type | Where the domain is a domain or is a subdomain. |
Employee Type | Where the employee is (or is not) a VIP. |
Select File Type | Where the leaked document is of a specified file type (such as DOC, PDF). |
Select Git Source* | Where the leak source is GitHub or is GitLab. |
Has A Record | Whose domain has (or does not have) an A record. |
Has MX Record | Whose domain has (or does not have) an MX record. |
Has NS Record | Whose domain has (or does not have) an NS record. |
Host URL Contains* | Whose source URL contains specified text. |
Is Protected | Where the leaked document is (or is not) protected. |
Max Number of Detected Credit Cards | Where the amount of cards offered for sale does not exceed a specified number. |
Max Number of Linked Domains | Where the amount of domains linked to a certificate found from an existing asset does not exceed a specified number. |
Max Number of Matched Objects | Where the amount of matched objects does not exceed a specified number. |
Max Number of Relevant Credentials | Where the amount of credentials offered for sale does not exceed a specified number. |
Max Number Of Secret Mentions | Where the amount of mentioned secrets does not exceed a specified number. |
Max Product Price | Where the price of the offered product does not exceed a specified price. |
Max VirusTotal Malicious Detections | Where the amount of VirusTotal detections does not exceed a specified number. |
Min Number of Detected Credit Cards | Where the amount of cards for sale is at least a specified number. |
Min Number of Linked Domains | Where the amount of domains linked to a certificate found from an existing asset is at least a specified number. |
Min Number of Matched Objects | Where the amount of matched objects is at least a specified number. |
Min Number of Relevant Credentials | Where the amount of credentials offered for sale is at least a specified number. |
Min Number Of Secret Mentions | Where the amount of mentioned secrets is at least a specified number. |
Min Product Price | Where the price of the offered product is at least a specified price. |
Min VirusTotal Malicious Detections | Where the amount of VirusTotal detections is at least a specified number. |
Password is Valid | Whose found password is (or is not) valid. |
Select Product Type* | Of a specified product type (such as Account or Hacking tutorial). |
Select Ransomware Group | Where the leak was published by specified ransomware groups. |
Records leaked Lower Threshold | Where at least a specified number of records were leaked. |
Records leaked Upper Threshold | Where no more than a specified number of records were leaked. |
Select Secret Type* | Where the leaked secret is of a specified type (such as AWS Client ID or GitHub key). |
Select Secret Value Contains* | Where the secret contains specified text. |
Select Source Name* | Found in a specified source (such as all_world_cards or briansclub). |
Select SSL Issues* | Where the SSL issues are of a specified type (such as Handshake failed or weak RSA key). |
Select Store Types* | Found in app stores of a specified type (such as Mirror or Malicious). |
Select Technology | Where the affected technology matches specified technologies. |
Select Vulnerability Origin* | Where the vulnerability origin matches specified origins (such as API or Technology in Use asset). |
Whois Contains | Whose WHOIS record contains specified text. |
WHOIS Update Date* | Whose WHOIS record was updated in a specified date range. |
* Use the "Negate" option to find the opposite, for example, alerts whose source date is not within a range, or that do not have tags.