Kubernetes Collector Deployment

The Kubernetes Collector deployment enables the Rapid7 SIEM (InsightIDR) Collector to be hosted within a Kubernetes environment, allowing for automated installation and registration to collect event logs.

The deployment consists of the following components:

  • Collector Docker Image - A containerised SIEM Collector deployed as a StatefulSet using a Helm chart. The image ships with the Collector pre-installed and automatically registers with the Rapid7 platform at container startup.
  • Helm Chart - A Helm chart provided for deploying the Collector, with configurable values for parameters such as Collector names, authentication secrets, and resources.
  • Container Registry - A Collector image is available per Rapid7 region in the official DockerHub repository.
  • Platform Registration - The Collector automatically registers with the Rapid7 platform using a Kubernetes secret containing a User API Key.

Prerequisites

Before deployment, you will need the following:

  • A valid Rapid7 SIEM license and platform access with permission to add Collectors
  • Access to DockerHub
  • Kubernetes cluster with Helm and kubectl installed
  • kubectl configured to access your cluster
  • Network connectivity to Rapid7 platform endpoints

Deployment

Step 1: Obtain User API Key

ℹ️ Already have a User API key?

If you have already obtained a User API Key, you can skip this step.

  1. Log in to the Command Platform and go to Administration > User API Keys.
  2. In the User API Keys page, click Generate New User Key.
  3. In the Organization dropdown menu, select the desired organisation.
  4. In the Name field, enter a name for the key.
  5. Click Submit to generate the key. A new window will open and display the generated key.
  6. Copy the key. You will not be able to view it again after you close the window.

Step 2: Add API Key as Kubernetes secret

  1. Open a command prompt and run the following command:
kubectl create secret generic <InsertSecretName> --from-literal=api-key=<InsertUserApiKey>
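Kubernetes stores the api-key value base64-encoded inside the secret. The sketch below mirrors that round trip locally with a hypothetical placeholder value (not a real key), and the comment shows one way to read the real secret back from the cluster:

```shell
# Hypothetical placeholder -- substitute your real User API Key in the
# kubectl create secret command, never in a script committed to source control.
API_KEY="example-user-api-key"

# kubectl create secret base64-encodes the literal before storing it:
ENCODED=$(printf '%s' "$API_KEY" | base64)
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
echo "$DECODED"

# To confirm the real secret in-cluster after running the command above:
#   kubectl get secret <InsertSecretName> -o jsonpath='{.data.api-key}' | base64 -d
```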

Step 3: Retrieve image from DockerHub

  1. Using your web browser, go to https://hub.docker.com/r/rapid7/idr-collector.
  2. In the Tag summary panel, select the image tag that corresponds to the region where your SIEM instance is deployed.
  3. Copy the Docker pull command for that tag and run it on your machine. The image is now downloaded locally.

Step 4: Retrieve Helm chart

  1. Open the command prompt and run the following command to add the Rapid7 Helm chart repository to your Helm client:
helm repo add publicrapid7 https://helm.rapid7.com/helm-charts
  2. Run the following command to update your Helm repositories and ensure you have the latest chart versions:
helm repo update
  3. Run the following command to list all available charts in the repository and identify the relevant Collector chart:
helm search repo publicrapid7
  4. Run the following command to pull down the Collector chart:
helm pull publicrapid7/collector-chart

The chart is downloaded as a .tgz archive.
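The downloaded archive is a compressed tarball that `tar` can unpack. The sketch below builds a throwaway archive to stand in for the real chart file, so the extract step is runnable as-is; the real filename will include the chart version:

```shell
# Build a stand-in archive (the real file looks like collector-chart-<version>.tgz):
mkdir -p collector-chart
echo 'name: collector-chart' > collector-chart/Chart.yaml
tar -czf collector-chart-demo.tgz collector-chart
rm -r collector-chart

# The actual extract step -- run this against the file Helm downloaded:
tar -xzf collector-chart-demo.tgz
ls collector-chart/   # values.yaml lives in this directory in the real chart
```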

Step 5: Configure Deployment Values

  1. Extract the Helm chart from the .tgz archive.
  2. Edit the values.yaml file with your variables:
    • The Collector name must be unique.
    • The API secret must be valid (the secret created in Step 2).
    • The image.tag must match the tag you selected in DockerHub.
    • You can increase the resource and storage values, but each Collector pod requires at least 2 CPUs and 8GB of memory. Ensure your nodes can accommodate this.
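For illustration, a hypothetical values.yaml excerpt reflecting the rules above. Apart from image.tag and persistence.enabled (which appear elsewhere in this guide), the field names here are assumptions and may differ from the actual chart schema — check the values.yaml shipped in the chart for the authoritative keys:

```yaml
# Illustrative excerpt only -- field names other than image.tag and
# persistence.enabled are assumptions; consult the chart's own values.yaml.
collectorName: k8s-collector-01       # must be unique per Collector
apiKeySecretName: rapid7-api-secret   # the secret created in Step 2
image:
  tag: <region-tag>                   # must match the tag selected in DockerHub
resources:
  requests:
    cpu: "2"                          # minimum per Collector pod
    memory: 8Gi                       # minimum per Collector pod
persistence:
  enabled: true
```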

Step 6: Deploy the Collector

  1. Open the command prompt and run the following command to launch a Helm release:
helm install <release-name> ./collector-chart -f ./collector-chart/values.yaml --set persistence.enabled=true
  2. Run the following commands to verify the deployment:
kubectl get pods
kubectl logs <pod-name>
  3. Run the following command to show entrypoint-script logs only:
kubectl logs <pod-name> | grep "\[Collector Entrypoint Script\]"
  4. Run the following command to show Collector logs only:
kubectl logs <pod-name> | grep -v "\[Collector Entrypoint Script\]"
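The two grep filters above split a single log stream into the entrypoint script's registration output and the Collector's own logs. A local sketch of that filtering, with hypothetical sample lines standing in for real `kubectl logs <pod-name>` output:

```shell
# Hypothetical sample output -- real pod logs will differ.
LOGS='[Collector Entrypoint Script] Registering Collector...
INFO Collector service started
[Collector Entrypoint Script] Registration complete'

# Entrypoint-script lines only (the grep with the bracket pattern):
SCRIPT_LOGS=$(printf '%s\n' "$LOGS" | grep '\[Collector Entrypoint Script\]')
echo "$SCRIPT_LOGS"

# Collector lines only (the grep -v variant filters the script lines out):
COLLECTOR_LOGS=$(printf '%s\n' "$LOGS" | grep -v '\[Collector Entrypoint Script\]')
echo "$COLLECTOR_LOGS"
```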

Setting up a portwatcher

A portwatcher listens on a port inside the container, but that port is not reachable from external servers by default. You must update values.yaml to expose it through either a NodePort or LoadBalancer service.

Sample LoadBalancer configuration:

exposure:
  mode: lb
  externalTrafficPolicy: Local
  loadBalancerSourceRanges: []
  lbAnnotations: {}
  service:
    ports:
      - name: portwatcher
        port: 800        # External loadbalancer port
        targetPort: 800  # Container port your portwatcher listens on
        protocol: TCP    # TCP or UDP

Once configured, forward syslog traffic to the loadbalancer's external IP and port so that it reaches the container port. Then set up a portwatcher as normal and confirm that data is flowing.

Troubleshooting

If you encounter issues during setup or operation, use the following troubleshooting steps to quickly identify and resolve common problems.

If the pod fails to start (pending state):

  • Ensure your storage class supports volumes of the requested sizes.
  • Check if your cluster nodes have sufficient CPU/memory for the configured resources.requests.

If the Collector fails to start:

  • Verify the image region matches your SIEM instance.

If registration fails:

  • Confirm the User API Key is valid.
  • Check network connectivity to Rapid7 endpoints.
  • Check firewall rules for outbound HTTPS traffic.
  • Ensure the Collector name is unique. Script logs will show the API response.