Configure LLM Vulnerability Scanning
This article explains how to configure settings specific to scanning a Large Language Model (LLM) for vulnerabilities within your application.
To configure LLM vulnerability scanning:
- Toggle the Enable LLM vulnerability scanning switch to the on position.
- Use one of the following options to configure scanning for your LLM feature:
Macro Files (Recommended)
The LLM Chatbot Macro configuration lets you create a sequence of requests that perform actions against a chatbot. Accessing a component of this type within a web application usually requires specific actions, such as navigating to the chatbot page, opening the chatbot interface, completing setup, and entering a prompt.
Macro files record all of your interactions with the chatbot in a .rec file, which the InsightAppSec Scan Engine uses to navigate to your chatbot, interact with it, and run attacks against it. To improve performance, the macro should avoid unnecessary steps.
A macro can be used standalone, as long as it was recorded with the prompt R7-PROMPT entered in the chatbot's prompt field. Otherwise, the macro is used in combination with the CSS Selectors/DOM Elements to identify the correct elements to interact with.
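For context, the interaction sequence a standalone macro needs to capture resembles the following sketch, written here in Playwright-style TypeScript purely for illustration. Actual recordings are .rec files produced by the Rapid7 AppSec Toolkit, not scripts, and the URL and selectors below are hypothetical placeholders for your own application:

```typescript
import { chromium } from "playwright";

// Illustrative only: the URL and selectors are invented placeholders.
// This shows the kind of step sequence a standalone macro should capture,
// ending with the R7-PROMPT placeholder typed into the prompt field.
async function chatbotFlow(): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();

  await page.goto("https://example.com/support"); // navigate to the chatbot page
  await page.click("#chat-launcher");             // open the chatbot interface
  await page.fill("#chat-input", "R7-PROMPT");    // enter the R7-PROMPT placeholder
  await page.click("#chat-send");                 // submit the prompt

  await browser.close();
}

chatbotFlow().catch(console.error);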
To scan using a macro file:
- Record a new macro using the Inline Recording option in the Rapid7 AppSec Toolkit, or upload an existing recording.
- Important: If you're using a macro standalone, you must enter the prompt R7-PROMPT during the recording. This identifies the prompt field.
- Open the Scan Scope > LLM Attacks screen, and click the Add Chat Bot Macro File link.
- Click the Choose File button. This will open the Choose File pop-up.
- Click Upload File and upload the macro from your computer.
- Select the newly uploaded file or an existing macro from the All my Files tab in the pop-up.
- Click the Use Selected File button. The macro file will now appear above the Add Chat Bot Macro File link.
CSS Selectors
Page URL (Required)
Enter the URL of the page where the DOM elements for your LLM interaction are located.
- This URL tells InsightAppSec which page to scan to identify the relevant elements and launch attacks.
- This URL must precisely match your application and scan-level URL constraints.
Document Object Model (DOM) Elements
This section requires you to specify the DOM selectors that point to the key interactive elements of your LLM feature. By configuring these selectors correctly, you enable the InsightAppSec Scan Engine to interact with your LLM feature, send prompts, and analyze the responses for potential vulnerabilities. If you use a Text selector, the element's text, labels, and label-related ARIA tags are checked.
Use your browser's Developer Tools (right-click → Inspect) to find and copy the appropriate CSS selector for each element. A worked example follows the table below.
DOM Selectors and Prompt Values
| Selector / Value | Purpose |
|---|---|
| Open UI Selector | If there is a specific element that needs to be interacted with to open the LLM interface, provide it as a CSS or Text selector. |
| Prompt Input Selector | Enter the CSS or Text selector for the input field where users type their prompts for the LLM. |
| Submit Button Selector | Provide the CSS or Text selector for the button that submits the user's prompt to the LLM. |
| Response Output Selector | Enter the CSS selector for the element where the LLM's responses are displayed. |
| Prompt Value | Enter a valid value to be submitted for the crawler's initial interaction with the chatbot. |
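To make the mapping concrete, here is a sketch against hypothetical chatbot markup. The element IDs and classes are invented for illustration; your application's structure will differ. Each querySelector call, run in your browser's console, should return the intended element:

```typescript
// Hypothetical chatbot markup (your application will differ):
//
//   <button id="chat-launcher">Chat with us</button>
//   <div class="chat-window">
//     <div class="chat-messages"> ... </div>
//     <textarea id="chat-input" aria-label="Type your message"></textarea>
//     <button class="chat-send">Send</button>
//   </div>
//
// Selector values you would enter for this markup:
const openUiSelector = "#chat-launcher";          // Open UI Selector
const promptInputSelector = "#chat-input";        // Prompt Input Selector
const submitButtonSelector = ".chat-send";        // Submit Button Selector
const responseOutputSelector = ".chat-messages";  // Response Output Selector

// Verify each selector resolves to the element you expect.
for (const selector of [
  openUiSelector,
  promptInputSelector,
  submitButtonSelector,
  responseOutputSelector,
]) {
  console.log(selector, document.querySelector(selector));
}
```

If a selector matches more than one element, refine it until it uniquely identifies the target.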
Swagger Files
Alternatively, you can upload a Swagger document for LLM scanning. To use this method, add a Swagger file under the Scan Scope section during scan configuration setup, as shown in the steps and the sample definition below.
- Select an application.
- Click Scan Scope > WSDL & Swagger > Add swagger file.
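As a reference point, a minimal OpenAPI definition for a hypothetical chat endpoint might look like the following sketch. The server URL, path, and schema fields are assumptions; adapt them to your own API and save the result as JSON before uploading:

```typescript
// Minimal OpenAPI 3.0 sketch for a hypothetical /chat endpoint.
// All names here are illustrative assumptions, not a required format.
const chatbotApiSpec = {
  openapi: "3.0.0",
  info: { title: "Example Chatbot API", version: "1.0.0" },
  servers: [{ url: "https://example.com/api" }],
  paths: {
    "/chat": {
      post: {
        summary: "Send a prompt to the LLM and receive a response",
        requestBody: {
          required: true,
          content: {
            "application/json": {
              schema: {
                type: "object",
                properties: { prompt: { type: "string" } },
                required: ["prompt"],
              },
            },
          },
        },
        responses: {
          "200": {
            description: "LLM response",
            content: {
              "application/json": {
                schema: {
                  type: "object",
                  properties: { response: { type: "string" } },
                },
              },
            },
          },
        },
      },
    },
  },
};

// Write this object out as JSON (e.g. chatbot-swagger.json) for upload.
console.log(JSON.stringify(chatbotApiSpec, null, 2));
```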
Rapid7 and your data privacy
InsightAppSec analyzes attack data from LLM interactions using Rapid7's proprietary prompt engineering with foundation LLMs that run safely within our cloud boundary, with no data egress. This helps ensure that we accurately detect vulnerabilities in the target and provide clear, verifiable evidence to support our analysis.
InsightAppSec does not use any customer data for training or fine-tuning our large language models (LLMs), nor do we share your data with any third-party LLMs for their training purposes.
We adhere to stringent data privacy protocols to ensure your information is secure and handled with the utmost care.