Threat Command Policy Rules

Use Threat Command rules to enforce remediation or other actions on alerts. Unlike global policy rules, Threat Command rules enable more specific search criteria and additional activities.

You can create rules (also called policies) that will apply actions to all alerts of a specific alert type or to a subset of those alerts that match defined criteria.

All Threat Command policies include these parts:

  • Which alerts to apply the policy to - The alerts that match the criteria you define in the Alert Profile tab.
  • What to do to the matching alerts - You define this in the following tabs:
    • Internal remediation - Share alert indicators with integrated security devices. For example, if a phishing website alert contains IOCs, you can share those with an integrated device for blocking. This option is available for some alert scenarios, such as phishing, credentials leakage, and Twitter.
    • External remediation - Request a takedown, close alerts, and report malicious sites. The available options change per alert scenario.
    • Actions - Assign alerts, add watchers, change severity, add tags, send an email, and close the matched alerts.
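Conceptually, a policy pairs one matching profile with up to three kinds of responses. The sketch below models that structure in Python for illustration only; all field and group names are assumptions, not the actual Threat Command schema or API.

```python
from dataclasses import dataclass, field

# Illustrative model of a Threat Command policy; names are hypothetical.
@dataclass
class ThreatCommandPolicy:
    name: str
    # Alert Profile: which alerts the policy matches
    alert_types: list[str] = field(default_factory=list)   # empty = all scenarios
    severities: list[str] = field(default_factory=list)    # at least one required
    # Internal remediation: device name -> IOC groups shared with it
    share_iocs_with: dict[str, list[str]] = field(default_factory=dict)
    # External remediation
    request_takedown: bool = False
    close_after_remediation: bool = False
    # Actions on matched alerts
    actions: list[str] = field(default_factory=list)       # e.g. ["assign", "tag"]

policy = ThreatCommandPolicy(
    name="Block phishing IOCs",
    alert_types=["Phishing website"],
    severities=["High", "Critical"],
    share_iocs_with={"firewall-01": ["phishing-domains"]},
    request_takedown=True,
)
```

The empty `alert_types` default mirrors the UI behavior described below, where all alert scenarios match unless you narrow the selection.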

Creating a policy consists of two steps: (a) defining which alerts the actions apply to, and (b) defining what to do to the matching alerts.

By default, rules act on new alerts. After defining a new rule, you are given the option to apply its actions to past alerts as well.

Create a Threat Command policy

You can use Threat Command rules to match alerts more exactly and to perform internal or external remediation (where applicable).

To create a Threat Command policy:

  1. From the Automation > Policy page, click the Threat Command alert type for which to create a rule.
  2. Click the + sign.
  3. Type a name for the rule. 
  4. Use the Alert Profile tab to define to which alerts to apply the policy actions:
    1. By default, all alert scenarios will match. To limit that, select from the Alert Type list.
      The options of alert scenarios depend on the Threat Command rule that you are creating.
    2. You must select at least one alert severity to match from the Alert Severity list.
    3. (Optional) To further define the matching alerts, select from the General Conditions list.
      The conditions depend on the alert scenario that you selected.
      The most common conditions are described in the General conditions table. There are additional conditions specific to particular scenarios, described in the Alert type conditions table.
  5. Click Next.
    The Internal Remediation tab is shown. Internal remediation is available for some alert scenarios.
  6. (Optional) Configure internal remediation:
    1. Select Share alert indicators with devices.
      The list of integrated devices is shown, together with IOC groups that were predefined for each device.
    2. Select devices and the IOC groups to share per device.
      You can also create IOC groups on-the-fly, described in Automate Internal Remediation.
      Selected IOC groups are displayed in the IOC Groups selected panel.
  7. Click Next.
    The External Remediation tab is shown. External remediation options vary per alert scenario.
  8. (Optional) Configure external remediation:
    1. To initiate a takedown, select one of the available options.
    2. To automatically close matching alerts after they have been successfully remediated, select Close alerts after successful remediation.
    3. To warn others about potentially malicious websites, select to which agency to report them.
  9. Click Next.
    The Action tab is shown.
  10. Select actions to perform on matched alerts.
  11. Click Finish.
    When prompted, you can include past alerts, even alerts that are closed.
  12. (Optional) To include historical alerts, select a time frame and whether to include closed alerts, then click Yes, include. Otherwise, click No thanks.

When the rule is accepted, a confirmation message is displayed, and the rule is shown in the rules list.
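The Alert Profile matching configured in step 4 can be sketched as follows. This is illustrative pseudologic under stated assumptions, not the product's actual implementation; all field names are hypothetical.

```python
def alert_matches(alert: dict, rule: dict) -> bool:
    """Sketch of how a rule's Alert Profile could filter alerts.
    Field names are assumptions, not the Threat Command schema."""
    # Alert Type: an empty list means "match all alert scenarios"
    if rule["alert_types"] and alert["type"] not in rule["alert_types"]:
        return False
    # Alert Severity: the alert must have one of the selected severities
    if alert["severity"] not in rule["severities"]:
        return False
    # General Conditions: every configured condition must hold
    for condition in rule.get("conditions", []):
        result = condition["test"](alert)
        if condition.get("negate"):      # the "Negate" option inverts the match
            result = not result
        if not result:
            return False
    return True

rule = {
    "alert_types": [],                   # match all scenarios
    "severities": ["High", "Critical"],
    "conditions": [
        {"test": lambda a: "phish" in a["source_url"], "negate": False},
    ],
}
alert = {"type": "Phishing website", "severity": "High",
         "source_url": "http://phish.example.com"}
print(alert_matches(alert, rule))  # True
```

Note that all configured conditions must hold for an alert to match, while the severity list matches if any selected severity applies.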

View current Threat Command rules

  • From the Automation > Policy page, click a Threat Command alert type.
    Rules that have already been defined are displayed in the rules list for the selected alert type.

You can edit, duplicate, delete, and stop a rule. For more information, see Editing Policy Rules.

General conditions table

The following table describes the general conditions that apply to most alert scenarios:

| Condition | Enforce on alerts... |
| --- | --- |
| Select asset type | Of a specified asset type. |
| Select assets | Of a specified asset value. |
| Source date range* | Whose source was collected during a specified date range. |
| Select tags* | That have specific tags. |
| Source URL contains* | Whose source URL contains specified text. Separate multiple items with a semicolon. |
| Network type | Found in the clear web or dark web. |
| Source type | Found in specified sources (such as Hacking forums or Black markets). |

* Use the "Negate" option to find the opposite, for example, alerts whose source date is not within a range, or that do not have tags.

Not all of the general conditions apply to every alert type, and they are sometimes displayed in a different order.
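The "Source URL contains" condition accepts several substrings separated by semicolons. A minimal sketch of that matching behavior, combined with the "Negate" option, might look like this (an assumption-based illustration, not the product's code):

```python
def source_url_contains(alert_url: str, spec: str, negate: bool = False) -> bool:
    """Sketch of the "Source URL contains" condition: the spec holds one or
    more substrings separated by semicolons, and the condition matches if
    any of them appears in the alert's source URL. Names are hypothetical."""
    items = [item.strip() for item in spec.split(";") if item.strip()]
    matched = any(item in alert_url for item in items)
    # The "Negate" option inverts the result, matching URLs that do NOT
    # contain any of the specified items.
    return not matched if negate else matched

print(source_url_contains("http://shop.example.net/login", "paypal;login"))         # True
print(source_url_contains("http://shop.example.net/login", "paypal", negate=True))  # True
```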

Some alert scenarios use other conditions, described in the Alert type conditions table.

Alert type conditions table

The following table describes conditions that apply to only some scenarios (in alphabetical order):

| Condition | Enforce on alerts... |
| --- | --- |
| Active User Accounts | Where at least one leaked username is "active" or "unknown" in the Active Directory or Azure Active Directory. |
| Select App Stores* | Found in specified app stores. |
| Select Attacked Company | Where a specified company was attacked. |
| Select Certificate Found Issues* | Where the certificate issues are of a specified type (such as Non-trusted certificate or Certificate expired). |
| Select Confidentiality Terms* | Where the leaked document contains specified confidentiality terms (such as Sensitive or Confidential). |
| Credentials for Internal Page | Where the offered credentials relate to an internal webpage. |
| Document Modification Date | Where the leaked document was modified in a specified date range. |
| Domain Expired | Whose domain is (or is not) expired. |
| Domain Registration Date* | Whose domain was registered in a specified date range. |
| Domain type | Where the domain is a root domain or a subdomain. |
| Employee Type | Where the employee is (or is not) a VIP. |
| Select File Type | Where the leaked document is of a specified file type (such as DOC, PDF). |
| Select Git Source* | Where the leak source is GitHub or GitLab. |
| Has A Record | Whose domain has (or does not have) an A record. |
| Has MX Record | Whose domain has (or does not have) an MX record. |
| Has NS Record | Whose domain has (or does not have) an NS record. |
| Host URL Contains* | Whose source URL contains specified text. |
| Is Protected | Where the leaked document is (or is not) protected. |
| Max Number of Detected Credit Cards | Where the number of cards offered for sale does not exceed a specified number. |
| Max Number of Linked Domains | Where the number of domains linked to a certificate found from an existing asset does not exceed a specified number. |
| Max Number of Matched Objects | Where the number of matched objects does not exceed a specified number. |
| Max Number of Relevant Credentials | Where the number of credentials offered for sale does not exceed a specified number. |
| Max Number Of Secret Mentions | Where the number of mentioned secrets does not exceed a specified number. |
| Max Product Price | Where the price of the offered product does not exceed a specified price. |
| Max VirusTotal Malicious Detections | Where the number of VirusTotal detections does not exceed a specified number. |
| Min Number of Detected Credit Cards | Where the number of cards for sale is at least a specified number. |
| Min Number of Linked Domains | Where the number of domains linked to a certificate found from an existing asset is at least a specified number. |
| Min Number of Matched Objects | Where the number of matched objects is at least a specified number. |
| Min Number of Relevant Credentials | Where the number of credentials offered for sale is at least a specified number. |
| Min Number Of Secret Mentions | Where the number of mentioned secrets is at least a specified number. |
| Min Product Price | Where the price of the offered product is at least a specified price. |
| Min VirusTotal Malicious Detections | Where the number of VirusTotal detections is at least a specified number. |
| Password is Valid | Whose found password is (or is not) valid. |
| Select Product Type* | Of a specified product type (such as Account or Hacking tutorial). |
| Select Ransomware Group | Where the leak was published by specified ransomware groups. |
| Records leaked Lower Threshold | Where at least a specified number of records were leaked. |
| Records leaked Upper Threshold | Where no more than a specified number of records were leaked. |
| Select Secret Type* | Where the leaked secret is of a specified type (such as AWS Client ID or GitHub key). |
| Select Secret Value Contains* | Where the secret contains specified text. |
| Select SSL Issues* | Where the SSL issues are of a specified type (such as Handshake failed or weak RSA key). |
| Select Store Types* | Found in app stores of a specified type (such as Mirror or Malicious). |
| Select Technology | Where the affected technology matches specified technologies. |
| Select Vulnerability Origin* | Where the vulnerability origin matches specified origins (such as API or Technology in Use asset). |
| Whois Contains | Whose WHOIS record contains specified text. |
| WHOIS Update Date* | Whose WHOIS record was updated in a specified date range. |

* Use the "Negate" option to find the opposite, for example, alerts whose domain was not registered in a specified date range, or that were not found in specified app stores.