Scan Configuration Parameters

The scan configuration parameters are available in the AppSpider user interface or through the scan configuration file.

ScanConfig

ScanConfig is the top-level structure in the Scan Configuration File. ScanConfig's composite objects are presented in the Advanced tab of the Scan Configuration dialog. The scalar values of ScanConfig can only be modified by editing the Scan Configuration File. After you manually edit the Scan Configuration File, restart AppSpider.

ScanConfig object

The ScanConfig object contains the following objects:

  • CrawlConfig - Defines the crawler parameters.
  • AttackerConfig - Defines the attacker parameters.
  • AttackPolicyConfig - Defines the attack policy. A list of attack modules for the scan and their parameters.
  • AnalyzerConfig - Defines the analyzer parameters.
  • AuthConfig - The authentication configuration. This structure contains everything related to authentication: login, re-login, and logout detection.
  • ProxyConfig - The proxy settings.
  • RemediationConfig - The parameters for calculating remediation efforts.
  • SSLCertConfig - SSL client certificate settings.
  • NetworkSettingsConfig - The network parameters.
  • PerformanceConfig - The performance parameter.
  • SystemRecommendationsConfig - The parameters for computer hardware recommendations.
  • HTTPHeadersConfig - The HTTP Headers.
  • ManualCrawlingConfig - List of traffic log files to import.
  • AutoSequenceConfig - Automatic sequence discovery settings.
  • MacroConfig - List of macros for the scan.
  • SeleniumConfig - List of Selenium scripts and settings to run Selenium scripts.
  • WebServiceConfig - The web service configuration.
  • ReportConfig - Report generation settings.
  • WAFConfig - Deprecated
  • ScheduleConfig - Deprecated
  • SiteTechnologyConfig - The site technology settings.
  • OneTimeTokenConfig - The parameters of One-Time Tokens (XSRF tokens).
  • CVSSConfig - The CVSS configuration.
  • ParameterParserConfig - The custom URL parameter parsers.
  • ParameterValueConfig - The description of parameters used to populate form controls.
Each parameter is listed with its description, format, default value, additional options (where applicable), and type.

  • Name - Name of the scan configuration. This cannot be blank. String; default: none. Scalar.
  • AppVersion - Version of the application used to create the scan. This is not used by the Scan Engine. String; default: current major version of the Scan Engine. Scalar.
  • Log - Enables or disables logging into the operation log. Boolean; default: 1 (0 = logging is disabled, 1 = logging is enabled). Scalar.
  • Detailed Logging - Enables or disables detailed logging. Boolean; default: 0 (0 = detailed logging is disabled, 1 = detailed logging is enabled). Scalar.
  • IncludeTraffic - Enables or disables detailed logging of the network traffic. Boolean; default: 0 (0 = network traffic logging is disabled, 1 = network traffic logging is enabled). Scalar.
  • WindowsErrors - Deprecated.
  • UseSystemDsn - Deprecated.
  • Recrawl - Deprecated.
  • PauseOnRecoverableError - Controls the behavior of the scanner when it encounters a recoverable error, that is, an error that can often be corrected by the user. Boolean; default: 1 (0 = the scan will fail on a recoverable error, 1 = the scan will pause on a recoverable error). Scalar.
  • ExecuteCommandLineURL - Deprecated.
  • NotifyScanDoneURL - Deprecated.
  • JavaScriptEngine - The browser used by the scan. Enum; default: Internet Explorer. Scalar.
  • MaxDatabaseSize - The maximum size of the Jet database scan data file, after which the scan is stopped. Number; default: 1073741824 (1 GB). Scalar.
  • MaxTrafficFiles - The maximum number of traffic files the scanner will keep. Number; default: 0 (0 = an unlimited number of traffic files will be kept). Scalar.

Detailed Logging

Detailed logging increases the amount of information the Scan Engine logs during scan execution. The extra information appears in the operation log. While detailed logging is useful for debugging, it is recommended to disable it for normal scan execution: enabling it significantly increases the size of the log files on disk and slows the scan.

PauseOnRecoverableError

Some of the errors that can be corrected are:

  • Re-login problem
  • Out of disk space
  • Out of memory

MaxTrafficFiles

The scanner removes old traffic files after the number of traffic files reaches the number specified in this parameter. Files are removed in First In, First Out (FIFO) order.
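The pruning behavior can be sketched in Python (an illustration of the FIFO rule only; the file names and layout here are hypothetical, not AppSpider's actual traffic-file format):

```python
from collections import deque

def prune_traffic_files(files, max_traffic_files):
    """Keep at most max_traffic_files entries, discarding the oldest
    first (FIFO). A value of 0 means keep everything."""
    if max_traffic_files == 0:  # 0 = unlimited, per the MaxTrafficFiles default
        return list(files)
    # deque with maxlen silently drops items from the left (the oldest)
    return list(deque(files, maxlen=max_traffic_files))

# With a limit of 2, the two oldest files are dropped:
print(prune_traffic_files(["t1.log", "t2.log", "t3.log", "t4.log"], 2))
```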

CrawlConfig

Each parameter is listed with its description, format, default value, additional options (where applicable), and type.

  • MaxDomain - Maximum number of domains that AppSpider will crawl. Number; default: 100. Scalar.
  • MaxCrawlResults - Maximum number of web resources that AppSpider is allowed to retrieve from the server during the scan. A web resource is identified by a unique combination of a URL and parameters (query, POST). After that number is reached, crawling stops. Number; default: 5000. Scalar.
  • MaxPerWebSiteCrawlResults - Maximum number of web resources the crawler is allowed to crawl per domain. Number; default: -1 (unlimited). Scalar.
  • MaxPerDirCrawlResults - Maximum number of web resources in any directory that the crawler is allowed to retrieve. Number; default: 500. Scalar.
  • MaxPerLinkCrawlResults - Maximum number of web resources for a given link the crawler is allowed to retrieve. This limits how many resources with the same URL but different variations of POST parameters can be crawled. Number; default: 50. Scalar.
  • MaxPerNormalizedLinkCrawlResult - Maximum number of resources the crawler is allowed to request for a given normalized link. A normalized link is a URL without parameter values. Number; default: 100. Scalar.
  • MaxPerDirChildNodes - Maximum number of child nodes in a directory the crawler is allowed to crawl. A child node is a directory or a file. This parameter does not count grandchildren. Number; default: 300. Scalar.
  • MaxBlackListExtCrawlResults - Number of resources blacklisted based on extension that the crawler is allowed to retrieve, per domain. Number; default: 100. Scalar.
  • MaxAttackFeedbackLinksCount - Maximum number of new links discovered in attack traffic that the crawler will insert into the queue. Number; default: 300. Scalar.
  • MaxPerFileNameCrawlResults - Maximum number of web resources with the same file name the crawler is allowed to analyze. Number; default: 250. Scalar.
  • RecursionDepth - Maximum repetition that AppSpider will tolerate in a URL. Number; default: 2. Scalar.
  • MaxDirDepth - Maximum number of directories AppSpider will look into. URLs with more directories in their path than the value of this parameter are ignored. For example, www.site.com/dir1/dir2/dir3/file.html is ignored if MaxDirDepth is set to a value smaller than 3. Number; default: 10. Scalar.
  • DiscoveryDepth - Maximum discovery depth that AppSpider can go into the site. The discovery depth of a URL is the number of steps required for the user to discover the link. Number; default: -1 (unlimited). Scalar.
  • UrlRepetitionTolerance - Maximum number of identical normalized URLs AppSpider is allowed to crawl. A normalized URL is the URL without query parameter values. Number; default: 25. Scalar.
  • SequenceRepetitionTolerance - Maximum number of similar sequences that AppSpider will try to follow. Number; default: 5. Scalar.
  • MaxReportedImages - Maximum number of discovered image links that AppSpider should store in the database. Number; default: 500. Scalar.
  • MaxReportedLinks - Maximum number of discovered web resources that AppSpider should store in the database in addition to the web resources that will be crawled by the crawler. Number; default: 2500. Scalar.
  • MaxReportedComments - Maximum number of discovered HTML comments that AppSpider should store in the database. Number; default: 500. Scalar.
  • MaxReportedScripts - Maximum number of discovered scripts that AppSpider should store in the database. Number; default: 500. Scalar.
  • MaxReportedEmails - Maximum number of discovered email addresses that AppSpider should store in the database. Number; default: 500. Scalar.
  • MaxReportedForms - Maximum number of discovered forms that AppSpider should store in the database. Number; default: 500. Scalar.
  • MaxBrowserPageWaitTimeout - Maximum time AppSpider should wait for the browser component to load a page and perform all operations. Number (time in milliseconds); default: 60000. Scalar.
  • MaxBrowserWaitTillRequestTimeout - Maximum time AppSpider should wait for the JavaScript on a page to send an AJAX request to the server after firing an event (for example, 'onclick' or 'onmouseover'). Number (time in milliseconds); default: 4000. Scalar.
  • MaxBrowserDOMDepth - Maximum depth of DOMs that AppSpider should try to analyze within an HTML page. DOM depth is the minimum number of user actions (events) required to reach that DOM from the initial DOM of the page. Number; default: 4. Scalar.
  • MaxBrowserEventsPerLink - Maximum number of JavaScript events AppSpider should fire per link. A link is a URL without the query parameters and the fragment. Number; default: 200. Scalar.
  • MaxBrowserEventsPerCrawlResult - Maximum number of JavaScript events AppSpider should fire per web resource. Number; default: 100. Scalar.
  • MaxBrowserEventsPerDOM - Maximum number of JavaScript events AppSpider should fire per DOM view. Number; default: 100. Scalar.
  • NotInsertedLinkCountThreshold - Maximum number of ignored links that should be reported in the User Log. Number; default: 2. Scalar.
  • CrawlPrioritization - Defines the algorithm that will be used to crawl the site. Enum; default: Smart. Values: FIFO (0), Smart (1), DirBreadthFirst (2), FoundBreadthFirst (3), FoundDepthFirst (4), Juicy (5), LoginFormDiscovery (6), Login (7). Scalar.
  • FileNotFoundRegex - Regular expression used by AppSpider to identify custom 404 (file not found) responses. String; default: (page|resource) (you requested )?(was not|cannot be) found|Page not found|404(.0)? - ((File (or directory )?not found)|(Not Found))|HTTP Status 404|404 Not Found. Scalar.
  • ServerErrorRegex - Regular expression used by AppSpider to identify error responses from the web server. String; default: none. Scalar.
  • InvalidURLRegexAttack - Regular expression that identifies URLs coming from attack traffic as invalid so that AppSpider does not attack an invalid URL. String; default: ['\"\\(\\)<>]|\\d([-+]|%2[bd])\\d|repeat\\(|alert\\(|/x\\w{7}\\.txt. Scalar.
  • InvalidURLRegexCrawl - Regular expression that identifies a URL discovered during crawling as invalid so that AppSpider does not crawl and analyze it. String; default: ((\\s|%20)(OR|AND|MOD|ASC|DESC)(\\s|%20)|(<|%3c)(a|div|script|style|iframe|img)|[?&=]x[a-z0-9]{7}$|C=N;O=D|\\?C=M). Scalar.
  • LockCookies - Tells AppSpider whether it should preserve the values of cookies supplied by the user in the scan configuration even if the web server requests a cookie change. Boolean; default: 1 (0 = do not lock cookie values, 1 = lock cookie values). Scalar.
  • CaseSensitivity - Tells AppSpider how to treat the URLs of the website. The website can have either a case-sensitive or a case-insensitive file system on the back end. Enum; default: CaseSensitive. Values: AutoDetect (0), CaseSensitive (1), CaseInsensitive (2). Scalar.
  • UniqueUrlsAcrossWebsites - Deprecated.
  • SaveReferences - Controls whether the crawler should store cross-references in the database. Boolean; default: 0 (0 = do not save cross-references, 1 = save cross-references). Scalar.
  • UseBrowser - Tells the crawler to use the browser to execute JavaScript event handlers. Boolean; default: 1 (0 = do not use the browser, 1 = use the browser). Scalar.
  • ShowBrowser - Tells the crawler to show the browser window while traversing the website's pages. Boolean; default: 0 (0 = do not show the browser, 1 = show the browser). Scalar.
  • StayOnPort - Tells the crawler not to deviate from the port of the original seed URLs. If this option is enabled, all seed URLs should be on the same port. Boolean; default: 0 (0 = the crawler can request URLs from other ports, 1 = the crawler should stay on the port). Scalar.
  • RestrictToMacro - Forces AppSpider to not crawl any links other than the requests sent during macro execution. Boolean; default: 0 (0 = the crawler can discover new links, 1 = crawl only requests from macro execution). Scalar.
  • RestrictToManualCrawling - Forces AppSpider to not crawl any links other than the requests imported from proxy logs. Boolean; default: 0 (0 = the crawler can discover new links, 1 = crawl only imported requests). Scalar.
  • RestrictToSeedList - Forces AppSpider to not crawl any links other than the seed links provided in the scan configuration. Boolean; default: 0 (0 = the crawler can discover new links, 1 = crawl only seed links). Scalar.
  • RestrictToWebService - Forces AppSpider to not crawl any links other than the web service requests. Boolean; default: 0 (0 = the crawler can discover new links, 1 = crawl only web service requests). Scalar.
  • RestrictToSelenium - Forces AppSpider to not crawl any links other than the requests performed during execution of Selenium scripts. Boolean; default: 0 (0 = the crawler can discover new links, 1 = crawl only Selenium requests). Scalar.
  • ImportCookiesFromTraffic - Controls what AppSpider does with cookies that it finds in imported traffic. Boolean; default: 0 (0 = ignore cookies, 1 = import cookies). Scalar.
  • PageEqualThreshhold - The minimum value of the similarity coefficient above which two pages are considered identical. Double; default: 0.95. Scalar.
  • PageSimilarThreshhold - The minimum value of the similarity coefficient above which two pages are considered to have the same structure. Double; default: 0.80. Scalar.
  • Flash - Tells AppSpider whether it should analyze Flash files. Boolean; default: 1 (0 = do not analyze Flash files, 1 = analyze Flash files). Scalar.
  • EnableAdvancedParsers - Internal parameter. The value provided in the scan configuration file is overwritten.
  • SearchForUrls - Tells AppSpider whether it should try to find URLs in places other than the HTML structure: comments, JavaScript, or text. Boolean; default: 1 (0 = do not look for URLs in non-standard locations, 1 = look for URLs in non-standard locations). Scalar.
  • MaxWebResourcesOverhead - How many links AppSpider can add to the crawl queue over the value specified in the MaxCrawlResults parameter. The extra links give the crawler the ability to pick more promising links to crawl; without this parameter, the crawler would stop looking for new links once the queue is full. Number; default: 1000. Scalar.
  • SeedUrlList - List of seed URLs from which AppSpider should start the scan. List.
  • ScopeConstraintList - Rules that specify which URLs AppSpider should crawl. List.
  • BlackListExtensionList - Extensions that the crawler is not allowed to crawl. See MaxBlackListExtCrawlResults for details. List.
  • GrayListExtensionList - Extensions that the crawler is not allowed to crawl if web resources with those extensions have no query parameters. See MaxBlackListExtCrawlResults for details. List.
  • BinaryExtensionList - File extensions that files with binary content usually have. List.
  • TextExtensionList - File extensions that files with text content usually have. List.
  • BinaryContentTypeList - Content types that identify files with binary content. List.
  • HTMLContentTypeList - Content types that identify HTML content. List.
  • TextContentTypeList - Content types that identify text content. List.
  • XMLContentTypeList - Content types that identify XML content. List.
  • BrowserDownloadWhitelistList - URLs that the browser should always download. List.
  • BrowserDoNotDownloadExtentionList - File extensions that should not be downloaded even if requested by the browser. List.
  • BrowserDoNotDownloadContentTypeList - Content types of files that should not be downloaded even if requested by the browser. List.
  • LockedCookieList - Cookie names whose values should not change for the duration of the scan. List.

MaxDomain

A domain is a host name plus protocol (https://www.live.com) in the URL. For instance, windows.microsoft.com and www.microsoft.com are different domains. If AppSpider finds a URL with an explicit IP address that matches a host name in another URL (for instance, http://localhost/page.html and http://127.0.0.1/page.html), AppSpider considers those two different domains. The protocol is also part of the domain name; as a result, https://localhost and http://localhost are two different domains.
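The distinction can be illustrated with a small Python sketch that builds a "domain key" from the scheme and host name, as the examples above describe (illustrative only, not AppSpider internals):

```python
from urllib.parse import urlsplit

def domain_key(url):
    """A 'domain' here is the protocol plus the host name exactly as
    written, so an IP address and a host name never compare equal."""
    parts = urlsplit(url)
    return (parts.scheme.lower(), parts.hostname)

# Same machine, but counted as different domains:
assert domain_key("http://localhost/page.html") != domain_key("http://127.0.0.1/page.html")
assert domain_key("https://localhost/") != domain_key("http://localhost/")
# Same domain regardless of path:
assert domain_key("http://localhost/a") == domain_key("http://localhost/b")
```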

MaxPerLinkCrawlResults

This option limits how many resources that have the same URL but different variations of POST parameters can be crawled.

MaxPerDirChildNodes

This parameter counts both Crawl Results and sub-directories. The crawler stops crawling new resources in a directory once either of the limits specified in MaxPerDirCrawlResults or MaxPerDirChildNodes is reached for that directory. This implies that the value of MaxPerDirCrawlResults should be greater than or equal to the value of MaxPerDirChildNodes.

MaxBlackListExtCrawlResults

Maximum number of web resources on BlackListExtensionList or GrayListExtensionList that AppSpider is allowed to crawl. Even if a resource is blacklisted based on its extension, AppSpider will still crawl a small number of such resources, up to the number specified in this parameter.

MaxAttackFeedbackLinksCount

The crawler monitors the traffic of attack modules and tries to find new links in it. This parameter specifies the maximum number of new web resources found in the responses received by attack modules. Some attack requests result in responses that contain parts of the attack payload or invalid links. For example, it is very common for a website to return a reference to a URL that could help the user solve the problem; that reference often includes a description of the problem.

If an attack sends the following request: GET /users/userinfo?userid=alert('111')

The response to the attack could contain the following statement and a new link:

Invalid parameter to the request /users/userinfo?userid=alert('111'): <a href="/mgmt/reporterror.php=Invalid parameter URL: /users/userinfo?userid=alert('111')">Report error</a>

Every attack would result in a new web resource /mgmt/reporterror.php with different parameter values. It makes sense to analyze that web resource once; it does not make sense to analyze all of its variations. AppSpider tries to automatically detect those invalid links and avoid crawling them. The MaxAttackFeedbackLinksCount configuration parameter is a safety net in case the invalid-link detection algorithm fails.
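The default InvalidURLRegexAttack value from the CrawlConfig table is the first line of defense against such links. A quick Python check (the doubled backslashes from the configuration file are un-doubled for Python, and the IGNORECASE flag is an assumption):

```python
import re

# Default InvalidURLRegexAttack from CrawlConfig: quotes/brackets,
# digit arithmetic, repeat(, alert(, or the /x???????.txt probe pattern.
INVALID_URL_ATTACK = re.compile(
    r"""['"()<>]|\d([-+]|%2[bd])\d|repeat\(|alert\(|/x\w{7}\.txt""",
    re.IGNORECASE,
)

feedback_link = "/mgmt/reporterror.php=Invalid parameter URL: /users/userinfo?userid=alert('111')"
clean_link = "/users/userinfo?userid=42"

print(bool(INVALID_URL_ATTACK.search(feedback_link)))  # matches: contains alert( and quotes
print(bool(INVALID_URL_ATTACK.search(clean_link)))     # no match: looks like a normal URL
```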

MaxPerFileNameCrawlResults

When counting the number of resources, only the file name part of the URL is considered; the path is ignored. For example, the following two URLs are counted together because they have the same file name:

  • http://mysite/customers/get_address.php?id=23
  • http://mysite/suppliers/get_address.php?id=78
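Counting by file name can be sketched as follows (an illustration of the rule, not AppSpider's implementation):

```python
from collections import Counter
from urllib.parse import urlsplit
import posixpath

def file_name(url):
    """Extract only the file name of the URL; the path is ignored."""
    return posixpath.basename(urlsplit(url).path)

urls = [
    "http://mysite/customers/get_address.php?id=23",
    "http://mysite/suppliers/get_address.php?id=78",
]
# Both URLs count against the same MaxPerFileNameCrawlResults bucket.
counts = Counter(file_name(u) for u in urls)
print(counts)
```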

RecursionDepth

This parameter defines how many times a part of a URL can be repeated. For example, if the recursion depth is set to 2, URL 1 will be crawled, while URL 2 will be ignored because its recursion depth is 3.

  1. www.site.com/dir1/dir2/dir1/dir2/file.txt
  2. www.site.com/dir1/dir2/dir1/dir2/dir1/dir2/file.txt

MaxReportedLinks

The crawler almost always discovers more links than it will crawl. Many web resources are ignored because of one or more crawler limitations. This parameter defines how many web resources will be stored in the database on top of the web resources that will be crawled. While such web resources take significantly less space in the database than Crawl Results, they still take some space, so it is recommended to keep this number below the value specified in the MaxCrawlResults parameter.

NotInsertedLinkCountThreshold

AppSpider reports ignored links in the User Log so that the user can easily notice that some URLs were unintentionally ignored and correct the problem. Only important messages should be reported in the User Log. AppSpider ignores many links during crawling (for instance, out-of-domain links); on an average site the number of ignored links can be in the thousands. To avoid cluttering the User Log, it is recommended to keep this number low.

SaveReferences

Storing references significantly increases the size of the database, and it is advised not to enable this feature.

UseBrowser

This flag only affects the use of the browser for crawling. It has no effect on the use of the browser for macros, sequences, or attacks.

ShowBrowser

This flag was designed for debugging various crawling problems. It is advised to disable this feature for regular scans. If it is enabled, it is recommended to make the scan single-threaded so that only one browser window is shown at any given moment.

RestrictToManualCrawling

If the value is set to 1, AppSpider will analyze and attack only the requests that it imported from the proxy logs. If the imported responses contain other links, those links will not be crawled, analyzed, or attacked. If the imported traffic has 5 requests, only those 5 requests will be analyzed and attacked.

PageEqualThreshhold

PageEqualThreshhold deals with randomness in responses. Some pages contain advertisement frames or other content that the website inserts randomly into responses. This parameter allows AppSpider to ignore such random content.

The more similar two pages are, the higher their similarity coefficient. Two responses with a similarity coefficient above the value of this parameter are considered identical by AppSpider. This information is used by several components in AppSpider, for example when comparing a response with the custom 404 response, or when determining whether a response is seen too often and can be ignored. It is recommended not to change the value of this parameter.

PageSimilarThreshhold

PageSimilarThreshhold deals with pages that return logically equivalent responses. For example, the shopping cart's checkout page with a sweater in the shopping cart will look very similar to the page with a scarf in the shopping cart. This parameter helps AppSpider understand which pages are similar. Several attack modules and the analyzer use page-similarity analysis.

The more similar two pages are, the higher their similarity coefficient. Two responses with a similarity coefficient above the value of this parameter are considered similar by AppSpider. It is recommended not to change the value of this parameter.
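The two thresholds can be illustrated with difflib's ratio as a stand-in similarity coefficient (AppSpider's actual coefficient is internal and may be computed differently; the sample pages are invented):

```python
import difflib

PAGE_EQUAL_THRESHOLD = 0.95    # default PageEqualThreshhold
PAGE_SIMILAR_THRESHOLD = 0.80  # default PageSimilarThreshhold

def similarity(page_a, page_b):
    """Stand-in similarity coefficient in [0, 1]; higher = more similar."""
    return difflib.SequenceMatcher(None, page_a, page_b).ratio()

# Logically equivalent checkout pages with different items in the cart:
sweater_page = "<h1>Cart</h1><p>Item: sweater</p><p>Total: $40</p>"
scarf_page   = "<h1>Cart</h1><p>Item: scarf</p><p>Total: $15</p>"

coeff = similarity(sweater_page, scarf_page)
print(coeff > PAGE_SIMILAR_THRESHOLD)  # same structure, so above the similar threshold
print(coeff > PAGE_EQUAL_THRESHOLD)    # but not close enough to count as identical
```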

SearchForUrls

For example, consider an HTML page with a comment block:

```html
<a href="/admin/show_users.php">Show Users</a><br>
<!--
Do not forget that we need to remove '/admin/rebootserver.php' when we are done debugging
-->
<a href="/admin/server_info.php">Server Information</a>
```

If this flag is set to 1, AppSpider will find the URL /admin/rebootserver.php. If the flag is set to 0, AppSpider will not find the URL.
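The difference between the two settings can be sketched as follows (a rough approximation; AppSpider's URL discovery is far more sophisticated than a single regex):

```python
import re

html = """
<a href="/admin/show_users.php">Show Users</a><br>
<!--
Do not forget that we need to remove '/admin/rebootserver.php' when we are done debugging
-->
<a href="/admin/server_info.php">Server Information</a>
"""

# URLs in the HTML structure proper (href attributes) -- SearchForUrls=0.
structural = re.findall(r'href="([^"]+)"', html)

# A rough sweep for path-like strings anywhere in the text, including
# comments -- an approximation of SearchForUrls=1.
everywhere = re.findall(r'(/[\w./-]+\.php)', html)

print(structural)  # only the two linked pages
print(everywhere)  # also picks up /admin/rebootserver.php from the comment
```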

MaxWebResourcesOverhead

Even if those extra resources are added to the queue, the crawler stops crawling once it has crawled the number of links specified in the MaxCrawlResults parameter.

AuthConfig

Each parameter is listed with its description, format, default value, additional options (where applicable), and type.

  • Type - Defines the type of authentication that will be used by AppSpider. Enum; default: None. Values: None (0): no authentication; Form (1): form-based automatic authentication; Macro (2): a macro is used to authenticate the user (the macro is specified in the MacroFile parameter); SessionTakeover (3): the user provides session cookies; SSORedirect (4); Bootstrap (5). Scalar.
  • HttpAuth - Tells AppSpider to use the HTTP username and password from the configuration to log in to sites that use HTTP authentication (Basic, NTLM, Kerberos). Boolean; default: 0 (0 = do not use HTTP authentication credentials, 1 = use HTTP authentication credentials). Scalar.
  • ReloginAfterSessionLoss - Specifies whether AppSpider should re-login after it detects session loss. Boolean; default: 1 (0 = do not re-login, 1 = re-login). Scalar.
  • LogoutDetection - Specifies whether AppSpider should try to detect whether it lost the session. Boolean; default: 1 (0 = do not detect, 1 = detect). Scalar.
  • UserAssistance - Reserved for future use.
  • AssumeSuccessfulLogin - Defines whether AppSpider should check that the user was logged in using the regular expression in the LoggedInRegex parameter, or can simply assume that the user was logged in. Boolean; default: 0 (0 = use the regular expression to detect whether the user was logged in, 1 = assume that the user was logged in). Scalar.
  • VerifyNotLoggedin - Defines whether AppSpider should verify that the session is not logged in before trying to re-login. If the session is still logged in and this flag is set, AppSpider will not try to re-login. Boolean; default: 1 (0 = do not verify before re-login, 1 = verify that the session is not logged in before re-login). Scalar.
  • PostponeLoginAction - Tells AppSpider whether it should postpone crawling a link that is defined in the action attribute of the login form. Boolean; default: 1 (0 = AppSpider will crawl the action link, 1 = AppSpider will postpone crawling the action link). Scalar.
  • CreateNonAuthenticatedSession - Determines whether AppSpider should create a non-authenticated session along with the authenticated session. This flag should only be set if the user provided authentication information in the scan configuration: a login macro, or a username and password for form authentication. Boolean; default: 0 (0 = do not create a non-authenticated session, 1 = create a non-authenticated session). Scalar.
  • TreatFailedReloginAsError - Tells AppSpider what to do when it fails to re-login the user. If the flag is set, the scan stops when re-login fails; if it is not set, AppSpider continues the scan with the logged-out session. Note that a failure of the initial login is always treated as an error. Boolean; default: 1 (0 = do not treat re-login failure as an error and continue the scan, 1 = treat re-login failure as an error). Scalar.
  • BlacklistSinglePasswordForms - Determines whether the crawler should send requests from forms that have one password field. Boolean; default: 0 (0 = allowed to crawl forms with one password field, 1 = do not crawl forms with one password field). Scalar.
  • BlacklistMultiPasswordForms - Determines whether the crawler should send requests from forms that have two password fields. Boolean; default: 1 (0 = allowed to crawl forms with two password fields, 1 = do not crawl forms with two password fields). Scalar.
  • ResetCookies - Tells AppSpider whether it should reset all cookies before every re-login. Boolean; default: 1 (0 = do not reset cookies that were in the session before re-login, 1 = reset all cookies). Scalar.
  • AccountType - Deprecated.
  • UsernameForm - The user name used for form authentication. Only used if Type is set to Form. String; default: none. Scalar.
  • PasswordForm - The user password used for form authentication. Only used if Type is set to Form. String; default: none. Scalar.
  • UsernameHttp - The user name used for HTTP authentication (Basic, NTLM, or Kerberos). For NTLM authentication with a domain, the username format should be <domain>/<username>. String; default: none. Scalar.
  • PasswordHttp - The user password used for HTTP authentication (Basic, NTLM, or Kerberos). String; default: none. Scalar.
  • AutoLogonSecurity - Defines the scope for which AppSpider should use the Windows user identity for Integrated Windows Authentication. Enum; default: AutoLogonSecurityMedium. Values: AutoLogonSecurityLow (0): an authenticated logon using the default credentials is performed for all requests; AutoLogonSecurityMedium (1): an authenticated logon using the default credentials is performed only for requests on the local intranet; AutoLogonSecurityHigh (2): default credentials are not used. Note that AutoLogonSecurityHigh takes effect only if you specify the server by its actual machine name; it does not take effect if you specify the server by "localhost" or by IP address. Scalar.
  • LoginLinkRegex - Regular expression that AppSpider uses to determine whether a link is a login link (a link used in the login process). String; default: ((log|sign)[ -]?(in|on))|auth. Scalar.
  • LoggedInRegex - Regular expression that AppSpider uses to determine whether the user was logged in as a result of login macro execution, login form submission, or any other supported type of authentication. String; default: (sign|log)[ -]?(out|off). Scalar.
  • SessionLossRegex - Regular expression that AppSpider uses to determine whether the user was logged out. This regex is applied only to the HTTP response body, and it is applied to all responses (as opposed to the regular expression in SessionLossOnCanaryPageRegex). String; default: please (re)?login|have been logged out|session has expired. Scalar.
  • SessionLossHeaderRegex - Regular expression that AppSpider uses to determine whether the user was logged out. This regex is applied only to the HTTP headers. String; default: Location: [^\\n]{0,100}((sign|log)(in|on|out)|unauthenticated)\\b. Scalar.
  • LogoutLinkRegex - Regular expression that AppSpider uses to determine whether a link is a logout link. This helps AppSpider stay logged in by not clicking on or requesting logout links. String; default: (sign|log|time)[ -]?(in|on|out|off)|password. Scalar.
  • LogoutPostBodyRegex - Regular expression that AppSpider uses to determine whether a request with POST data can cause session logout. This helps AppSpider stay logged in by not sending such requests. String; default: (sign|log|time)[ -]?(in|on|out|off). Scalar.
  • CanaryPage - The URL that AppSpider periodically requests to determine whether the session was lost. Should be used in conjunction with the SessionLossOnCanaryPageRegex parameter. String; default: none. Scalar.
  • SessionLossOnCanaryPageRegex - Regular expression that AppSpider applies to the canary page response to determine whether the session was lost. Should be used in conjunction with the CanaryPage parameter. String; default: none. Scalar.
  • FormSubmissionScript - Reserved for future use.
  • SessionCookieRegex - Regular expression that AppSpider uses to determine whether a cookie is a session cookie. The regular expression is applied to the cookie's name only. String; default: \\b(CFID|CFTOKEN|SESSION|JSESSIONID|ASPSESSIONID[A-Z0-9]+|PHPSESSID|ASP[.]NET_SessionId)\\b. Scalar.
  • SessionCookieLifespan - The maximum lifespan of a cookie below which the cookie is considered a session cookie. Number (of days); default: 32. Scalar.
  • LogoutDetectionFrequency - Deprecated.
  • DiscoveryMaxLinks - Maximum number of links that the login component can crawl in search of a login form. Number; default: 200. Scalar.
  • LoginMaxLinks - Maximum number of links that the login component can crawl after submitting a login form while looking for the page that indicates that the user session is logged in. Number; default: 50. Scalar.
  • DiscoveryDepth - How deep into the website the crawler should go in search of the login form. The depth of a link is the minimum number of links (steps) that the user must visit to discover it. Number; default: 10. Scalar.
  • LoginDepth - How deep into the website the crawler should go after submitting the login form in search of a page that can confirm the logged-in state. The depth of a link is the minimum number of links (steps) that the user must visit to discover it, starting from the page with the login form. Number; default: 10. Scalar.
  • MaxMacroReloginAttempts - Maximum number of times AppSpider should try to re-login. This parameter is not used for the initial login, which is performed only once. Number; default: 3. Scalar.
  • DiscoveryPrioritization - The algorithm the login form discovery crawler should use. It is not recommended to change this parameter from the value selected by AppSpider by default. Enum; default: LoginFormDiscovery. Values: FIFO (0), Smart (1), DirBreadthFirst (2), FoundBreadthFirst (3), FoundDepthFirst (4), Juicy (5), LoginFormDiscovery (6), Login (7). Scalar.
  • LoginPrioritization - The algorithm the crawler should use after submitting the login form to find a page that would indicate the logged-in state, for example, "Welcome back to acme.com Bob". It is not recommended to change this parameter from the value selected by AppSpider by default. Enum; default: Login. Values: FIFO (0), Smart (1), DirBreadthFirst (2), FoundBreadthFirst (3), FoundDepthFirst (4), Juicy (5), LoginFormDiscovery (6), Login (7). Scalar.
  • MacroFile - The macro file that will be used for authentication. Note that this parameter is used only if Type is set to Macro. Object.
  • ScopeConstraintList - Scope constraints for the login crawler that determine which part of the site the login crawler is allowed to crawl. Note that this parameter is only used when Type is set to Form. List.
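The default SessionCookieRegex can be exercised directly in Python (the configuration file's doubled backslashes are un-doubled here; adding IGNORECASE is an assumption, since cookie-name casing varies by framework):

```python
import re

# Default SessionCookieRegex from the AuthConfig parameters above.
SESSION_COOKIE = re.compile(
    r"\b(CFID|CFTOKEN|SESSION|JSESSIONID|ASPSESSIONID[A-Z0-9]+|"
    r"PHPSESSID|ASP[.]NET_SessionId)\b",
    re.IGNORECASE,
)

# The regex is applied to the cookie's name only:
for name in ["JSESSIONID", "PHPSESSID", "theme", "tracking_id"]:
    print(name, "-> session cookie:", bool(SESSION_COOKIE.search(name)))
```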

AssumeSuccessfulLogin

This parameter is often used in conjunction with macro login when the user can see in the browser that AppSpider logged in and does not want to craft a regular expression that detects a logged in state.

VerifyNotLoggedin

If a false-positive logout is detected in a response during scanning, AppSpider will try to re-login into a session that was still perfectly valid. This parameter configures how AppSpider behaves in that situation. If the value is set to 1, AppSpider first checks whether the user is already logged in before starting the login process. If it is set to 0, AppSpider will reset the session.

AnalyzerConfig

Each parameter is listed with its description, format, default value, additional options (where applicable), and type.

  • Enabled - Deprecated.
  • NotExistingFilePath - Defines the URL request that AppSpider sends to retrieve the response that the web server returns for a non-existing file path. The value of this parameter is appended to the URL of the directory. String; default: /aaaaaaaa.aaa. Scalar.
  • NotExistingDirPath - Defines the URL request that AppSpider sends to retrieve the response that the web server returns for a non-existing directory path. The value of this parameter is appended to the URL of the directory. String; default: /aaaaaaaa/. Scalar.
  • AppendToOriginalValue - Default: 1. Scalar.
  • ReplaceOriginalValue - Default: 0. Scalar.

AttackerConfig

Each parameter is listed with its description, format, default value, additional options (where applicable), and type.

  • ParametersToAttackBeforeLimitingAttacks - Scalar.
  • LinksToAttackBeforeLimitingAttacks - Scalar.
  • MaxSameNameParameterAttackPoints - Determines how many parameter values that have the same name (query or POST) AppSpider is going to attack. Number; default: 50. Scalar.
  • MaxSameCookieParameterAttackPoints - Determines on how many pages a cookie can be attacked by AppSpider. Number; default: 25. Scalar.
  • MaxSameNameParameterAttackPointsPerLink - Determines how many parameter values that have the same name (query or POST) AppSpider is going to attack on links that have the same URL. Number; default: 3. Scalar.
  • MaxNormalizedSameNameParameterAttackPointsPerLink - Determines how many parameters with the same normalized name AppSpider is going to attack on links that have the same URL. A normalized name is the name of the parameter without an array index or any other indexing. Number; default: 10. Scalar.
  • ScopeConstraint - A list of scope constraints that determines which URLs AppSpider can attack. If this list is empty, AppSpider will not attack URLs that do not comply with the constraints specified for the crawler in CrawlConfig. ScopeConstraintList; fields: URL, Method, Match Criteria, Exclusion. List.
  • DefaultDoNotAttackParam - A list of parameter names that AppSpider should not attack. This list should not be changed by the user; for convenience, user-defined parameters that should not be attacked go into a separate parameter, UserDoNotAttackParam. Fields: Parameter Name, Match Criteria. List.
  • UserDoNotAttackParam - A list of parameters that AppSpider should not attack. Fields: Parameter Name, Match Criteria. List.

For instance, if a page has parameters params[1], params[2], and params[3], all of those parameters have the same normalized name: params[].