Scan Template Best Practices
A scan template is a predefined set of scan attributes that you can select quickly rather than manually defining properties such as target assets, services, and vulnerabilities. We recommend setting up a Best Practice scan template in the Security Console.
To create or view scan templates, go to the Administration page and click Scans > Templates.
Creating the Best Practice scan template
Default templates cannot be modified directly; however, you can copy them, which makes the copies editable for your business use cases.
We recommend copying the Full Audit without Web Spider template by clicking the copy icon next to that scan template.
When naming your custom template, we recommend starting the name with an exclamation mark (!) followed by your company initials (example: !R7 - Full Audit). This makes your templates appear first in the list and makes subsequent scans easier to perform.
At the top of this template, there are three different types of checks:
Asset Discovery
This type of check is required for all scan templates, because it is the initial Nmap process run to find the assets within the site range, fingerprint the OS, and find open ports.
This can be combined with the Use Credentials option to perform authenticated discovery scans, which provide stronger OS fingerprints and a better understanding of what’s running on your target assets. This setting only applies to the Asset Discovery check type because the Vulnerability check type always tries to authenticate to the target host.
If Asset Discovery is the only option selected, these scans do not count against your license. For more information on how licensing works, see Live Licensing.
Vulnerabilities
Combined with Asset Discovery, these checks should be used in a best practice scan template. Vulnerability checks take the assets, open ports, and fingerprints found during Asset Discovery and port scanning, apply any applicable credentials or banner grabs, and perform vulnerability assessments against those assets.
This option counts against your total licensed assets.
Policies
These checks are used to scan your assets to see how they stack up against different hardening guidelines, such as CIS or DISA STIGS. InsightVM has a fully featured policy assessment ability and is part of the defense-in-depth process of securing your environment.
Scanning credentials with administrative/root privileges are required. We recommend enabling this feature in a dedicated scan template without also selecting the Vulnerabilities option, and setting up OS-based scan templates that target specific operating systems.
We generally recommend pursuing policy scanning once your vulnerability management program is established and automated, as policy scanning can be complex and hard to justify in the early stages of your deployment. For easier policy scanning, you can use agent-based policy scans.
General Tab
On the General tab of the scan template, there are many options available. We have provided best practices for each.
Enhanced Logging
Enhanced Logging: This option provides DEBUG-level logging for scans and should be enabled for a scan before submitting its logs to Rapid7’s Support team for a potential false positive issue.
Enable Windows services during a scan
This option is needed if your company has a policy to block remote registry access. If remote registry access is being blocked, this option will bypass it and allow a remote registry scan. We recommend enabling it for Windows assets.
Use Credentials
This option only works when running a discovery-only scan with no vulnerability checks enabled, and only applies to the Asset Discovery check type. When enabled, it uses the Maximum assets scanned simultaneously value when authenticating to your assets. This can slow down discovery scans but provides more accurate asset fingerprinting.
This option allows credentials when running a discovery scan; the assets will still not count toward the license.
Enable Fingerprinting
This option is enabled by default and cannot be disabled if the Vulnerabilities option is selected. However, you can clear the checkbox to disable fingerprinting in a discovery scan. Since fingerprinting can be a large time sink in the discovery process, disabling it can greatly speed up an asset-only discovery scan. In an asset-only scan, there is no attempt to identify the operating system or other fingerprints, leading to decreased visibility.
We recommend keeping this enabled because there is relatively little benefit in only identifying live assets, without knowing other information such as OS, hostnames, etc.
Enable Windows File System Search
This feature was built to help detect vulnerabilities like Log4J by doing a windows file search.
We do not recommend selecting this option unless you have a dedicated scan template which includes vulnerabilities like Log4J. By enabling this feature, we use the Windows search engine, which greatly increases the scan duration and impact on the device or asset being scanned.
The Rapid7 agent can run assessments against vulnerabilities like Log4J checks as well, making this template feature deprecated for assets that have the agent installed.
Maximum assets scanned simultaneously per Scan Engine
This is where you must calculate your needs, as it depends on whether you’re using the Local Scan Engine , the OS being used, whether you’re using a dedicated scan engine, and the memory and CPU levels. It is arguably the most important option in the scan template, as it controls how fast you can scan.
Rapid7 previously released changes which reduce the impact on customers for simultaneous assets and minimize the risk of running out of memory in the Scan Engines. For this to work, we need a 1:4 ratio of CPU to Memory. This means aiming for 4 CPUs and 16 GB or 8 CPUs and 32 GB of memory (2 CPUs and 8 GB also work).
Operating systems also have an impact: Windows-based operating systems tend to use much more memory running the Windows GUI than Linux does (a headless Linux engine has no GUI at all). However, we do not recommend choosing Linux just to save memory. Choose the operating system that best fits your patch management strategy: if you cannot keep it patched, the server holding all of your network’s vulnerability information could become the most vulnerable asset in your environment.
Recommended starting points:
|Engine Type|8GB/2 cores|16GB/4 cores|32GB/8 cores|
|---|---|---|---|
|Windows Dedicated Scan Engine|150|300|700|
|Windows Console Local Scan Engine|50|150|300|
|Linux Dedicated Scan Engine|200|400|800|
|Linux Console Local Scan Engine|75|200|400|
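As an illustration, the recommendations above can be expressed as a small lookup. The dictionary and function here are hypothetical helpers for sizing exercises, not a product API:

```python
# Recommended starting points from the table above, keyed by
# (engine OS, engine type) -> {RAM in GB: simultaneous assets}.
RECOMMENDED_SIMULTANEOUS = {
    ("windows", "dedicated"): {8: 150, 16: 300, 32: 700},
    ("windows", "local"):     {8: 50,  16: 150, 32: 300},
    ("linux",   "dedicated"): {8: 200, 16: 400, 32: 800},
    ("linux",   "local"):     {8: 75,  16: 200, 32: 400},
}

def recommended_assets(os_name: str, engine_type: str, ram_gb: int) -> int:
    """Look up the recommended Maximum assets scanned simultaneously value."""
    return RECOMMENDED_SIMULTANEOUS[(os_name, engine_type)][ram_gb]
```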
What do these numbers mean?
With an average credentialed scan taking around 12 minutes, we scan five volleys of assets per hour. On a 16GB/4 core Linux dedicated engine running 400 simultaneous assets, that means we scan approximately 2,000 assets per hour with a single Scan Engine. If you use more than one Scan Engine, multiply that number by the number of Scan Engines in the Scan Engine pool.
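The throughput arithmetic above can be sketched as a back-of-the-envelope estimate. The 12-minute average and the volley model are the assumptions stated in the text; real scan times vary per asset:

```python
def assets_per_hour(avg_scan_minutes: float,
                    simultaneous_assets: int,
                    engines: int = 1) -> float:
    """Estimate scan throughput as volleys per hour times volley size."""
    volleys_per_hour = 60 / avg_scan_minutes
    return volleys_per_hour * simultaneous_assets * engines

# 12-minute average scans, 400 simultaneous assets:
# 5 volleys/hour * 400 = 2,000 assets/hour per engine.
print(assets_per_hour(12, 400))      # 2000.0
print(assets_per_hour(12, 400, 4))   # 8000.0 with a 4-engine pool
```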
Another option in the Vulnerability Checks tab of the scan template, Skip checks performed by the Insight Agent, refers to Complementary scanning. This option detects whether an agent is installed and only runs the small percentage of checks that the agent cannot perform, allowing these scans to potentially complete more quickly. For this reason, agents are highly recommended on as many assets as possible in your environment.
The final option in the scan template is Maximum scan processes simultaneously used for each asset, with a default value of 10. We recommend leaving this at 10: this value controls how many services are scanned in parallel, and a host should rarely have more than 10 open services. If it does, investigate why, as each open service is a potential access vector for a vulnerability. Use discretion when changing this value; going beyond 10 is generally not recommended.
Asset Discovery
Determining whether target assets are live can be useful in environments that contain large numbers of assets, which can be difficult to keep track of. Filtering out dead assets from the scan job helps reduce scan time and resource consumption.
Verify Live Assets
This section includes the steps we take to determine whether an asset is live.
- First, we try an ICMP ping. If we get a response from a given asset, we’re done and don’t need any additional steps. If we don’t get an ICMP reply, we try ARP; if the Scan Engine is in the local network segment, this may get a reply. If it doesn’t, we move on to TCP, and finally UDP.
- Next, we try TCP, sending a SYN to the listed ports. We expect a SYN-ACK reply, but a response doesn’t necessarily mean the asset is live. Some TCP responses can be a bad sign, such as a TCP reset. TCP resets are often sent by an IDS in an attempt to shroud the network by responding on behalf of assets that don’t exist. This can cause a massive number of “ghost assets”: assets that show up as live with no hostname or operating system, resulting in thousands if not tens of thousands of assets that do not exist. To guard against this, make sure to check the box for “Do not treat TCP reset responses as live assets.” We highly recommend this option for nearly all environments, as firewalls that you scan across often have built-in IDS functionality that can cause this behavior.
- Last, we try UDP, though it’s unlikely that an asset is live if it doesn’t respond to ICMP, ARP, or TCP. When scanning large network ranges like class A or class B networks, unchecking this option is one less probe to try against every dead address and lets the scan get through those dead addresses much faster. It’s up to you whether to disable this, though we generally recommend doing so.
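The probe sequence above can be sketched as a decision function. The probe-result dictionary is a hypothetical stand-in for real network responses; the real engine sends actual packets:

```python
def is_live(probes: dict,
            treat_tcp_reset_as_live: bool = False,
            try_udp: bool = True) -> bool:
    """Walk the liveness probes in the order the Scan Engine tries them:
    ICMP, then ARP, then TCP, then (optionally) UDP."""
    if probes.get("icmp_reply"):
        return True
    if probes.get("arp_reply"):        # only possible on the local segment
        return True
    tcp = probes.get("tcp")            # "syn_ack", "reset", or None
    if tcp == "syn_ack":
        return True
    if tcp == "reset" and treat_tcp_reset_as_live:
        # An IDS may forge resets for addresses that don't exist,
        # creating "ghost assets" if resets are treated as live.
        return True
    if try_udp and probes.get("udp_reply"):
        return True
    return False
```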
Collect More Asset Info
We recommend leaving this option with the default values. We do not recommend using the “Find Other Assets on the Network” and “Collect WHOIS Information” options unless there is a specific use case for reviewing the results because results are only available in logs. While these options do work, this information is not integrated into the database.
- The Fingerprint TCP/IP Stacks option is enabled by default. We recommend leaving it enabled because it provides at least an estimate of the OS, compared to no results at all when scanning credentials are not used.
- The retries setting defines how many times InsightVM repeats the attempt to fingerprint the IP stack. Unless you’re in a high-latency environment, we recommend leaving it at zero.
- The minimum certainty option discards results below the configured minimum. We recommend not changing this setting from the default, because doing so can negatively impact fingerprint results, which in turn impacts vulnerability assessments.
You might see some of the following basic fingerprints:
- Anything below .70 (70%) is an IP stack analysis guess: we look at the ACK and Nmap makes a guess. In general, .60 to .70 tend to be more accurate guesses, and anything below that is a low-certainty guess, which can still be better than no indication at all.
- .80 or 80% generally means we found a banner, usually an HTTP banner.
- .85 or 85% usually means credentials were successful, but they only achieved GUEST access and could not access the registry and could not get a software inventory.
- .90 or 90% generally means an SNMP banner.
- 1.0 or 100% means credentials worked, and you should be getting the exact OS of the asset, which generally equates to the best possible scan results.
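For quick triage of scan results, the certainty bands above can be read as a lookup. The boundaries and labels are an illustrative simplification of the list, not an official mapping:

```python
def interpret_certainty(certainty: float) -> str:
    """Map a fingerprint certainty value to the rough meanings listed above."""
    if certainty >= 1.0:
        return "credentials worked: exact OS, best possible results"
    if certainty >= 0.90:
        return "SNMP banner"
    if certainty >= 0.85:
        return "credentials succeeded but only achieved GUEST access"
    if certainty >= 0.80:
        return "service banner found (usually HTTP)"
    return "IP stack analysis guess"
```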
Report Unauthorized MAC Addresses
This option is not recommended unless your company maintains a list of authorized MAC addresses and has automation to immediately block any unauthorized MAC address; otherwise, you will need to actively look for this value in the scan logs.
Service Discovery
We generally recommend leaving everything default in this section, especially when doing unauthenticated scans where the goal is to get an overall view of the network. There are some use cases which deviate from these defaults which are outlined below.
TCP Scanning
The service discovery option could also be called port discovery. By default, we use a SYN scan (send a SYN, expect a SYN-ACK), which is solid, stealthy, and lightweight. The Well Known Port Numbers list also tends to be sufficient for most use cases. For authenticated scans where you want to run patch checks, you can get much better performance by limiting the scan to the ports known to be used by the credentials provided in the site configuration.
Generally, we only recommend scanning against all 65,535 TCP ports when scanning your external attack surface assets. Never run all 65,535 UDP ports because the scan duration can exceed a month in some circumstances. If you are curious to see what the well-known ports list looks like, you can find the full Nmap command at the beginning of any scan log roughly a couple hundred lines down.
Another use case that could benefit your business is a quarterly or yearly scan of all 65,535 TCP ports, just to get the delta on any ports we are not finding with the Well Known Ports list; you can then add those ports to the additional port list. As for excluded ports, we’ve seen many customers exclude ports 9100-9106 and 515, as scanning them can impact certain printers (RAW and LPR print services).
Service Names File
This file lists each port and the service that commonly resides on it. If scans cannot identify the actual service on a port, the service name in the scan results is derived from this file. Use this at your discretion; it should generally be left alone unless Rapid7 Support recommends changing it to solve a particular problem.
Nmap Service Detection
The Nmap Services Detection option adds roughly 88,700 additional Nmap fingerprints, at the cost of the scan taking up to 20 times longer. We only recommend enabling this option for sites that you do not have good credentials for.
We can also account for some of the scan time increase by increasing the parallelism setting in the next tab, Discovery Performance, at the cost of increased bandwidth used by the scan. We have found that setting this to 500 can significantly mitigate the additional scan time impact.
These core Nmap options are reviewed in depth in the Nmap performance documentation.
Discovery Performance
The Discovery Performance tab covers scan resilience and scan speed: resilience to timeouts and network latency, and the rate at which discovery packets are sent. The default settings are extremely resilient.
Scan Resilience
The default retry limit of 3 assumes you are scanning a network prone to timeouts: we contact an asset up to 4 times (1 initial packet plus 3 retries) to ensure accurate results. However, retrying three times for every dead port significantly increases scan time. If you know the network you are scanning is relatively timeout-free, as most are these days, we recommend lowering this setting to 1.
The Timeout Interval settings are more related to network latency. Most networks today are extremely robust and ping times are usually in the low teens and may occasionally spike to 100, but never over 200. Unless you’re using a Scan Engine to scan devices across the world or via high-latency satellite connections, we recommend lowering the minimums from 500 to 200 and the maximums from 3000 to 500.
The default settings implement an initial timeout of half a second, then 3 seconds for each of the three retry attempts. That’s a total of 9.5 seconds per dead port, and if we scan roughly 1,200 total ports on each live asset (as with the Full Audit without Web Spider template), the time spent waiting for responses adds up quickly. With the changes above, the total wait time drops to 0.7 seconds, roughly a 14X decrease.
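The wait-time arithmetic above works out as follows, using the timeout and retry values from this section:

```python
def dead_port_wait(initial_timeout_s: float,
                   retry_timeout_s: float,
                   retries: int) -> float:
    """Worst-case seconds spent waiting on a single dead port."""
    return initial_timeout_s + retry_timeout_s * retries

default = dead_port_wait(0.5, 3.0, retries=3)   # 9.5 seconds per dead port
tuned   = dead_port_wait(0.2, 0.5, retries=1)   # 0.7 seconds per dead port
print(default / tuned)                          # ~13.6, roughly a 14X decrease
```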
Scan Speed
The Scan Delay is how long we wait between sending volleys of SYNs to ports, and parallelism is the size of each volley. By default, both Scan Delay and Parallelism are 0, and both are dynamically governed by the minimum packets per second rate (min-rate).
The packets per second setting is how you make the discovery phase of the scan run fast, at the cost of bandwidth. TCP was designed in the 1970s, when networks were extremely small, to back off toward minimum rates, so increasing the minimum rate forces the scan to use more bandwidth and run faster.
With the default of 450 packets per second and an average packet size of 1,500 bytes, a scan uses around 600 kB/s of bandwidth. This is not an issue unless you are running over an old ISDN or DSL WAN connection. Most WAN links today are 100 MB/s or more, so this number can be gradually increased to speed up the scan. We generally recommend trying 2,000 and working with your network team to ensure they don’t notice any impact, at which point you can continue to increase it if required.
On a 100 MB/s connection, a couple MB/s of bandwidth is usually inconsequential and can greatly speed up asset discovery. For example, a class A network (over 16 million IPs) at 450 PPS usually takes a little over 3 days to scan; with the PPS set to 2,000, it takes less than a day. These PPS settings are per engine, so if you set 2,000 (~2.5 MB/s) across a pool of 4 engines, the scan might use closer to 10 MB/s of bandwidth.
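The bandwidth figures above follow from packets per second times average packet size. The 1,500-byte average is the assumption stated in the text; real discovery packets vary widely in size, and with that assumption the arithmetic actually lands slightly above the rounded figures quoted above, so treat this as a rough upper-bound sketch:

```python
def scan_bandwidth_mb_per_s(pps: int, avg_packet_bytes: int = 1500) -> float:
    """Approximate scan bandwidth in decimal megabytes per second."""
    return pps * avg_packet_bytes / 1_000_000

print(scan_bandwidth_mb_per_s(450))       # 0.675 MB/s per engine
print(scan_bandwidth_mb_per_s(2000))      # 3.0 MB/s per engine
print(scan_bandwidth_mb_per_s(2000) * 4)  # 12.0 MB/s for a 4-engine pool
```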
Vulnerability Checks
There are seven options at the top of the Vulnerability Checks section that can be turned on or off depending on your business use cases.
Perform unsafe checks
This is a legacy option from when the PCI board required basic DoS and buffer overflow testing for PCI ASV scans. The PCI board no longer requires this, and these checks are no longer used, but they remain in the console. Because these legacy checks could cause damage, we do not recommend checking this box.
Include potential vulnerability checks
Unauthenticated vulnerability checks are not perfectly reliable. To minimize missed vulnerabilities, “potential” checks can be enabled when a higher number of false positives is acceptable. This is not recommended for default scans due to the lower signal-to-noise ratio.
Correlate reliable checks with regular checks
This option generally only applies to RHEL or CentOS operating systems running either PHP or Apache.
Let’s use RHEL running Apache as an example. System administrators like to run Apache on RHEL because updating RHEL automatically updates Apache. The challenge is that RHEL does not update the Apache version banner when it patches Apache, so if you’re not using credentials and we can’t get a ‘Reliable’ check on your OS, we have no choice but to use the ‘Regular’ banner grab, and depending on how many years old that banner is, you may see thousands of false positives.
This can still happen even with credentials if this option is unchecked, so always make sure it is checked. It correlates the reliable credentialed OS information with the regular banner grab to remove vulnerabilities detected from banner information alone, and to recognize other forms of backporting by Red Hat.
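The effect of correlation can be illustrated with simple set logic. The CVE identifiers and set names here are hypothetical placeholders, not real findings:

```python
# Findings derived from an outdated Apache version banner (hypothetical CVEs).
banner_findings = {"CVE-2021-0001", "CVE-2021-0002", "CVE-2022-0003"}

# Findings the reliable credentialed check refuted, e.g. because Red Hat
# backported the fixes without bumping the version in the banner.
refuted_by_credentials = {"CVE-2021-0001", "CVE-2021-0002"}

# With correlation enabled, banner-only findings refuted by the
# credentialed assessment are suppressed before reporting.
reported = banner_findings - refuted_by_credentials
print(sorted(reported))   # ['CVE-2022-0003']
```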
Skip checks performed by the Insight Agent
If we detect that the asset has an agent, we only run the network based checks that the agent cannot, which will both speed up the scan and also increase the accuracy by not trying to correlate scan results with agent results. However, there are three caveats for using this feature.
- Fully elevated credentials, through direct credentials or use of the scan assistant, are required to recognize that an agent assessment was successfully uploaded within the expected time period.
- Additionally, if, for example, you are running an ad-hoc scan, it will skip any checks performed by an agent. Since the majority of checks are found using the agent, there is a good chance that an ad-hoc scan will not produce the results you are looking for.
- If you are going to enable this feature, we recommend it be only enabled for scan templates used in your scheduled scans and keep this disabled on the primary scan template assigned to the site to prevent challenges with ad-hoc scanning.
This feature can also impact the ability to use the Validation Scan feature in Remediation Projects, because Scan Validation needs to know which Scan Engine found the vulnerability to run a quick scan against. If the checks performed by the agent are skipped in all scanning, then there won’t be a record in the database of which Scan Engine can get to that asset and that vulnerability. This is one of the reasons it is common to see potential failures when using the validation scanning feature.
Use Metasploit Remote Check Service when available (Beta)
If your Scan Engine is Linux-based and has the Metasploit Remote Check Service enabled already, enabling this option instructs the Scan Engine to run “Metasploit” vulnerability checks provided by that service. If not enabled on the scan engine, it generates an error message in the scan logs, but otherwise, does not affect your scanning.
If this is enabled, it uses extra memory, which may affect the number of assets scanned simultaneously and configured on the General Tab. We recommend reducing the number slightly (by 25 on Linux) to accommodate the 1 GB of RAM this feature requires.
Enable Scanning Diagnostic checks
Scan diagnostic checks report on scan details (for example, credential success) but do not report vulnerabilities. If you are having trouble with credential success and need a better understanding of why credentials fail, we recommend enabling this setting.
Scan Diagnostics and vulnerabilities
These scan diagnostics present as vulnerabilities, so if credentials fail, expect to see an additional vulnerability on that asset. Vulnerabilities reported by scan diagnostics carry the lowest possible severity and do not impact your risk score.
Store invulnerable results
Enables the storage of invulnerable results. When this is enabled, all vulnerability check results for a device, whether vulnerable or not, are sent back to the Security Console in the scan logs. Some PCI auditors require this. Unless your PCI auditor explicitly requires a list of all vulnerability checks attempted on a target device, we recommend leaving this setting disabled.
When disabled, only the vulnerabilities that were found to be successful on the host will be sent back to the console. Disabling will reduce disk space usage for scan data and speed up your scans, but prohibit reporting on invulnerable data. However, invulnerable data required for correlation will still be collected if vulnerability correlation is enabled.
Invulnerable results and false negatives
The primary use for invulnerable data is troubleshooting false negatives, which are extremely rare. The only way to troubleshoot a false negative is to determine whether the check fired. Given the rarity of false negatives, we highly recommend keeping this option disabled unless you have a specific need for it in an ad-hoc scan.
Selected Checks
You can see all of the vulnerability categories which can be used for other parts of the tool, or search for checks using the By individual check drop-down. Typically, none of these options should be changed.
Other settings
The following tabs allow further scan template configuration:
- File Searching is a very slow process with high impact on target assets. It does not work over an SMBv2 connection, although it can work with CIFS. We don’t recommend it for general use cases.
- Spam relaying: there are far better ways to test for spam relaying, so leave this at the default.
- Database servers can be used if you plan to do policy scanning configuration checks against non-windows databases so you can add the database names.
- Mail servers, CVS servers, and DHCP servers: leave these at the default.
- Telnet servers: use this if you get Telnet false positives; you can add custom Telnet failed-login responses to the regex if you run into default account challenges.
Deploy
You now have a great starting point for your scan template in your InsightVM deployment. Click Save, and thanks to the ! naming convention, this template will be the first result when creating a site to start bringing data into your database!