Kubernetes Local Scanner

The local scanner supports managed Kubernetes clusters that are not accessible to InsightCloudSec, as well as any self-managed Kubernetes clusters. Once configured with access to each specific cluster, self-managed clusters are harvested and assessed automatically through the local scanner after they are successfully added to InsightCloudSec. The local scanner is deployed using kubectl and Helm on each individual Kubernetes cluster you want to monitor.

  • Self-managed clusters need to be configured to provide access to each specific cluster and will be harvested and assessed automatically through the local scanner after they are successfully onboarded to InsightCloudSec.
  • For managed clusters, when using a local scanner an account is created automatically; clusters that cannot be accessed are marked with an error and placed into the harvesting Paused state.
    • When installing a local scanner for managed clusters, take special care when assigning the cluster ID.
    • We recommend using the provider resource ID; otherwise a new account is created that is detached from the automatically created account, losing the benefits of tag and badge synchronization.
  • To migrate between scanners, refer to the instructions on the Clusters Account Setup & Management page.

Prerequisites

Before using the Kubernetes Local Scanner, verify that your local machine has Helm and kubectl installed. You can run the helm and kubectl commands (individually) to confirm each is available and to set the correct context for your Kubernetes cluster.
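As a quick sanity check, a small shell loop can confirm that both binaries are on your PATH (a sketch; it only checks availability, not connectivity to a cluster):

```shell
# check that kubectl and helm are installed and on the PATH
for tool in kubectl helm; do
  if command -v "$tool" > /dev/null; then
    echo "$tool found"
  else
    echo "$tool missing"
  fi
done
```

If either tool is missing, install it before continuing; running kubectl cluster-info and helm version afterward can confirm connectivity to the target cluster.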

Setup kubectl

If you do not have an existing kubectl setup, refer to the following to connect to your Kubernetes cluster:

Setup Helm

If you do not have an existing Helm installation, refer to the following to connect to your Kubernetes cluster:

  1. Download and install Helm.

    Cluster Network Access Requirements

    The steps below should be executed on all designated clusters. Every cluster must be able to connect to the InsightCloudSec URL over HTTPS (port 443).

    For SaaS customers, if you are utilizing a customer-specific allowlist, ensure the clusters' ingress IP addresses are included.

  2. Connect to the cluster context in which you would like to install Kubernetes Guardrails.

Step 1: Generate an API Key for the Manual Deployment

Generating an API Key is required to identify and authenticate the local scanners (one on each cluster) and allow the scanner to report inventory and Guardrails assessment findings to the InsightCloudSec platform.

  1. Navigate to your InsightCloudSec installation and open Cloud > Kubernetes Clusters.
  2. At the top of the page, click the Manage Kubernetes API Key button.
  3. Click Add API Key.
  4. Provide a name for the Key and ensure the Activate this API Key checkbox is selected, then click Create API Key.
  5. Copy the newly generated API key and store it in a safe place.

Save the Key!

This will be your only opportunity to save this information.
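If you prefer to keep the key out of your shell history and Helm values, you can store it in a Kubernetes Secret now and reference it later with K8sGuardrails.ApiTokenSecret. A minimal sketch (the Secret name ics-api-key is an assumption; api-token is the default key the chart expects):

```shell
# write a Secret manifest holding the API key; "ics-api-key" is a
# hypothetical name, and "api-token" is the chart's default lookup key
cat > ics-api-key-secret.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: ics-api-key
  namespace: rapid7
type: Opaque
stringData:
  api-token: <paste-your-API-key-here>
EOF
# count the lines that reference the api-token key (prints 1)
grep -c 'api-token' ics-api-key-secret.yaml
```

Apply it with kubectl apply -f ics-api-key-secret.yaml once the rapid7 namespace exists.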

Step 2: Install Guardrails

After generating an API key, you're ready to deploy Kubernetes Guardrails on your local cluster. Before deploying anything, familiarize yourself with the Guardrails command:

Guardrails Command Reference
  • K8sGuardrails.ApiToken (Required)
    API Key from InsightCloudSec. The Kubernetes Guardrails API token is used in token-based authentication to allow the Guardrails scanners (agents) to access the InsightCloudSec API and report findings.
    API key authentication is required but the method is flexible: you can use either ApiToken or ApiTokenSecret, but not both. Explore the Generate an API Key section for details.
  • K8sGuardrails.ApiTokenSecret (Required)
    The name of the Kubernetes Secret that contains the API Key. API key authentication is required but the method is flexible: you can use either ApiToken or ApiTokenSecret, but not both.
    String. Provide the name of the Kubernetes Secret containing your K8sGuardrails.ApiToken. InsightCloudSec assumes the key within the Kubernetes Secret is api-token unless defined with K8sGuardrails.ApiTokenSecretKey.
  • K8sGuardrails.ApiTokenSecretKey (Optional)
    Defines the key that maps to the API Key inside the Kubernetes Secret.
    String.
  • Config.BaseUrl (Required)
    Set this to the base URL for your InsightCloudSec installation.
    If unknown, the URL can be retrieved from the InsightCloudSec interface by going to Administration > System Administration > System and copying the Base URL field.
  • Config.ClusterName (Required)
    User-defined cluster name.
    String.
  • Config.ClusterId (Required)
    The ARN field of the discovered remote or cloud-managed cluster. The IDs must match to correlate correctly and to generate coverage reports. If the cluster is self-managed, any provided value is sufficient because there is no discovered cluster to correlate with.
    Navigate to the Resource page or use the API to run the Kubernetes Cluster Without Guardrails Report; reports contain the ARN for each cluster. Refer to Discovery for Existing Clusters for details.
  • Config.Labels (Optional)
    The cluster badges, if provided, are translated into cloud account (cluster) badges that you can use later to navigate and filter Insight findings.
    Example of cluster badges: '\{\"environment\": \"production\"\, \"owner\": \"user@rapid7.com\"\, \"risk\": \"low\"\, \"provider\": \"EKS\"\}'
  • Config.HasNodeFsAccess (Optional)
    Enable this feature to access the Node Scanner (requires additional configuration).
    Boolean.
  • Config.ProxyUrl (Optional)
    Provide a proxy URL to run the Local Scanner behind a proxy.
    String.
  • CronSchedule (Optional)
    Creates periodic and recurring tasks to run the Guardrails scanner. The default scanning schedule (if not specified) is once an hour.
    For scheduling syntax, refer to the Kubernetes CronJob documentation.
  • devopscurlSpec.SelfSignedCert.Enabled (Optional)
    Enable this feature and supply the Self-Cert-Secret-Name or Self-Cert-Pem-Base64 if your InsightCloudSec server is using a self-signed certificate.
    Boolean.
  • devopscurlSpec.SelfSignedCert.CertSecretName (Optional)
    Can be replaced by Self-Cert-Pem-Base64.
    Create a secret in the same namespace and pass the secret name.
  • devopscurlSpec.SelfSignedCert.CertPem (Optional)
    A base64-encoded string of the self-signed certificate PEM file. Can be replaced by Self-Cert-Secret-Name.
    Pass a base64-encoded certificate. This option is less recommended than using the Self-Cert-Secret-Name property. If this option is used, ensure the value is passed via an inline parameter using the --set flag and not hardcoded in the values.yaml file.
  • Config.IsOpenShift (Optional)
    Enable this feature if your Kubernetes cluster is running on OpenShift.
    Boolean.
  • Config.LogLevel (Optional)
    Set this to modify the Kubernetes log level. The default log level is info.
    String.
  • nodeSelector.Os (Optional)
    Specifies the operating system value for the Kubernetes node selector.
    String. For more information, review the Kubernetes documentation.
  • nodeSelector.Arch (Optional)
    Specifies the architecture value for the Kubernetes node selector.
    String. For more information, review the Kubernetes documentation.
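Because helm --set treats unescaped commas as value separators, the badge JSON for Config.Labels must escape its braces, quotes, and commas, which is why the Config.Labels example above is backslash-escaped. A fragment of an install command might look like (badge values are illustrative):

```shell
--set Config.Labels='\{\"environment\": \"production\"\, \"owner\": \"user@rapid7.com\"\}'
```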
  1. Add the Kubernetes Guardrails Helm repo by issuing the following commands:

```shell
helm repo add helm-repo https://helm.rapid7.com/cloudsec
helm search repo

# example helm install command
helm install k8s-guardrails helm-repo/k8s-guardrails -n rapid7 --create-namespace \
  --set K8sGuardrails.ApiToken=<InsightCloudSec-API-token> \
  --set Config.BaseUrl=<InsightCloudSec-Base-URL> \
  --set Config.ClusterName=<InsightCloudSec-Cluster-Name> \
  --set Config.ClusterId=<InsightCloudSec-Cluster-ID> \
  --set Config.Labels=<InsightCloudSec-Cluster-Badges> \
  --set Config.HasNodeFsAccess=false \
  --set CronSchedule=<k8sGuardrails-CronSchedule> \
  --set devopscurlSpec.SelfSignedCert.Enabled=<Enable-Self-Cert> \
  --set devopscurlSpec.SelfSignedCert.CertSecretName=<Self-Cert-Secret-Name>
```

The devopscurlSpec.SelfSignedCert.CertSecretName setting can be replaced with devopscurlSpec.SelfSignedCert.CertPem=<Self-Cert-Pem-Base64>. Review the Guardrails Command Reference and Troubleshooting sections for more information and help.

API Token already stored in a Kubernetes Secret?

If the API Key for the InsightCloudSec platform is already stored in a Kubernetes Secret, you can replace K8sGuardrails.ApiToken with K8sGuardrails.ApiTokenSecret and optionally K8sGuardrails.ApiTokenSecretKey as shown in the following example:

```shell
helm repo add helm-repo https://helm.rapid7.com/cloudsec
helm search repo

# example helm install command
helm install k8s-guardrails helm-repo/k8s-guardrails -n rapid7 --create-namespace \
  --set K8sGuardrails.ApiTokenSecret=<Name-of-K8s-secret-containing-API-key> \
  --set K8sGuardrails.ApiTokenSecretKey=<Key-in-secret-mapping-to-API-key-value> \
  --set Config.BaseUrl=<InsightCloudSec-Base-URL> \
  --set Config.ClusterName=<InsightCloudSec-Cluster-Name> \
  --set Config.ClusterId=<InsightCloudSec-Cluster-ID> \
  --set Config.Labels=<InsightCloudSec-Cluster-Badges> \
  --set Config.HasNodeFsAccess=false \
  --set CronSchedule=<k8sGuardrails-CronSchedule> \
  --set devopscurlSpec.SelfSignedCert.Enabled=<Enable-Self-Cert> \
  --set devopscurlSpec.SelfSignedCert.CertSecretName=<Self-Cert-Secret-Name>
```

Step 3: Cluster Role and Binding Setup (Optional)

Lastly, you can optionally create a Kubernetes Cluster Role and Binding to allow create permission for the subjectaccessreview resource. See the Kubernetes Scanners Overview FAQ for details.

Cluster role and binding are security best practices

Creating a cluster role and binding enables two additional Insights for Kubernetes security best practices.

  1. Authenticate to your Kubernetes Cluster as a cluster administrator.

  2. Create a role with the necessary permissions:

    ```shell
    echo "apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: subjectaccessreview-create
    rules:
    - apiGroups:
      - authorization.k8s.io
      resources:
      - subjectaccessreviews
      verbs:
      - create" \
    | kubectl create -f -
    ```
  3. Create a role binding:

    ```shell
    echo "apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: subjectaccessreview-create-k8s-guardrails-sa
    subjects:
    - kind: ServiceAccount
      name: k8s-guardrails-sa
      namespace: rapid7
    roleRef:
      kind: ClusterRole
      name: subjectaccessreview-create
      apiGroup: rbac.authorization.k8s.io" \
    | kubectl create -f -
    ```

Verify the Deployment

After deploying guardrails and configuring the cluster role and cluster binding, you should verify the deployment is working successfully.

  1. To verify that Kubernetes Guardrails works successfully, you will need to trigger a job manually, using the following command.

    ```shell
    kubectl create job --from=cronjob/k8s-guardrails -n rapid7 k8s-guardrails-manual-001
    ```

  2. Verify that the pod is in the Completed status. Time to completion will depend on the size of the cluster.

    ```shell
    kubectl get pods -n rapid7 | grep k8s-guardrails-manual-001
    ```
  3. Verify that the cluster is marked as monitored and that resources appear with findings on them.
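Rather than repeatedly polling kubectl get pods, you can block until the manually triggered job finishes (a sketch; it requires access to the cluster and assumes the job name used above):

```shell
# wait up to 10 minutes for the manually triggered job to complete
kubectl wait --for=condition=complete --timeout=600s \
  job/k8s-guardrails-manual-001 -n rapid7
```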

Troubleshooting

Specifying Resource Limits

InsightCloudSec includes the ability to specify resource limits and requests for Guardrails containers. The Helm keys to set start with the global.Resources. prefix, followed by the requests and/or limits of your choice. The values and names must follow the valid structure of Kubernetes resource specifications. Edit the following values to your needs (they depend on the cluster's characteristics) and add them to the helm command:

```shell
--set global.Resources.requests.cpu=200m \
--set global.Resources.requests.memory=100Mi \
--set global.Resources.limits.cpu=1 \
--set global.Resources.limits.memory=1Gi
```

For more details on resource limits, refer to the Kubernetes documentation on Resource Management for Pods and Containers.

Using a Self-Signed Certificate

Accessing InsightCloudSec from the Kubernetes scanner is done over TLS. While in most cases a public Certificate Authority is used, some organizations use a private Certificate Authority that requires the Kubernetes scanner to be configured with a self-signed certificate. To configure a self-signed certificate, provide additional parameters to the Helm chart installation that indicate the use of a self-signed certificate and supply the certificate as a base64-encoded string.

An example:

```shell
helm install k8s-guardrails helm-repo/k8s-guardrails -n rapid7 --create-namespace \
  --set K8sGuardrails.ApiToken=token \
  --set Config.ClusterName="Cluster name" \
  --set CronSchedule="30 * * * *" \
  --set Config.BaseUrl=https://self-sign-cert-ics.com \
  --set Config.ClusterId="cluster-id" \
  --set devopscurlSpec.SelfSignedCert.Enabled=true \
  --set devopscurlSpec.SelfSignedCert.CertPem=LS...0t \
  --set Config.HasNodeFsAccess=false
```
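The CertPem value (LS...0t above) is the certificate PEM file encoded as a single-line base64 string. One way to produce and sanity-check it (cert.pem is a placeholder path; a dummy file stands in for your real certificate here):

```shell
# create a placeholder file standing in for your real certificate PEM
printf 'dummy certificate contents\n' > cert.pem
# encode as a single line (GNU base64 wraps output, so strip the newlines)
CERT_B64=$(base64 < cert.pem | tr -d '\n')
# round-trip to confirm the encoding is intact (prints the original contents)
echo "$CERT_B64" | base64 -d
```

Pass the resulting string via --set devopscurlSpec.SelfSignedCert.CertPem="$CERT_B64" rather than hardcoding it in values.yaml.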

Discovery for Existing Clusters

To identify clusters that are not currently covered by the local scanner, use either the InsightCloudSec UI or the InsightCloudSec API as described below.

Discovery of Clusters Using the UI

  1. From your InsightCloudSec platform installation, navigate to Inventory > Resources and select the Containers tab.

  2. (Optional) Use the Scopes button at the top of the page to narrow the scope (e.g., cloud accounts, resource groups) used when scanning for clusters that are not yet included in your InsightCloudSec setup.

  3. From the Containers tab, select Clusters to see a list of all of the clusters included in the selected scope.

  4. Navigate to Filters and search for/select the Kubernetes Cluster Without Guardrails Report.

    • Selecting this filter will update the resources to only include clusters that have not been scanned.
    • The cluster ID field that displays will be used when deploying Guardrails to a specific cluster.
  5. Locate the Cluster ID column and note the cluster where you want to deploy Guardrails.
    You will have to scroll to the right to see all of the columns.

Cluster ID

InsightCloudSec uses the Cluster ID to identify clusters. Using the Cluster ID allows Rapid7 to associate clusters discovered via the InsightCloudSec platform (either through the UI or API) with clusters onboarded through the local scanning capability.

Discovery of Clusters Using the API

For information on using the InsightCloudSec API, refer to the Getting Started documentation.

  1. Log in to the InsightCloudSec API by providing your username and password in the request body of a POST to v2/public/user/login.

  2. Use the session_id from the response in the X-Auth-Token header. Use the following request body in a POST to v2/public/resource/query:

    ```json
    {
      "selected_resource_type": "containercluster",
      "filters": [
        {
          "name": "divvy.query.kubernetes_cluster_without_guardrails_report",
          "config": {}
        }
      ],
      "offset": 0,
      "limit": 100
    }
    ```
  3. The resources list will display clusters that have not been scanned; the ARN field will be used when deploying Guardrails to a specific cluster.
    Save the ARN details for clusters where you want to configure Kubernetes Security Guardrails.
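The two API calls above can be sketched with curl and jq (the base URL and credentials are placeholders; jq is used only to extract session_id from the login response):

```shell
BASE_URL="https://insightcloudsec.example.com"   # placeholder installation URL
# 1) log in and capture the session_id from the response
SESSION_ID=$(curl -s -X POST "$BASE_URL/v2/public/user/login" \
  -H 'Content-Type: application/json' \
  -d '{"username": "<user>", "password": "<password>"}' | jq -r '.session_id')
# 2) query clusters that are not yet covered by Guardrails
curl -s -X POST "$BASE_URL/v2/public/resource/query" \
  -H "X-Auth-Token: $SESSION_ID" \
  -H 'Content-Type: application/json' \
  -d '{"selected_resource_type": "containercluster", "filters": [{"name": "divvy.query.kubernetes_cluster_without_guardrails_report", "config": {}}], "offset": 0, "limit": 100}'
```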

Manage Existing API Keys

Clicking the Add Kubernetes API Key button enables you to generate new API Keys, manage a key's status (activated or deactivated), and delete unused keys.

The current setup supports up to 5 API keys to allow for API key rotation. The clusters will be installed as a single Organization (within InsightCloudSec).

Applying New API Keys

Assuming you have applied the suggested naming convention for the Helm repository and installation, the command for updating your Kubernetes Scanner deployment with a new API key should look like:

```shell
helm upgrade k8s-guardrails helm-repo/k8s-guardrails -n rapid7 --set K8sGuardrails.ApiToken=<new token>
```

Uninstall the local scanner

You can remove the Kubernetes Local Scanner at any time using helm and kubectl.

To remove the Kubernetes Local Scanner:

  1. Uninstall the Guardrails package:

    ```shell
    helm uninstall k8s-guardrails -n rapid7
    ```

    Want to verify the uninstallation first?

    You can use the --dry-run flag to verify what is uninstalled before committing to the command.

  2. Remove the remaining Guardrails components:

    ```shell
    kubectl delete role <NAME_OF_ROLE> -n <NAMESPACE>
    kubectl delete rolebinding <NAME_OF_ROLEBINDING> -n <NAMESPACE>
    kubectl delete cronjob k8s-guardrails-manual-001 -n rapid7
    kubectl delete -f r7-node-scan.yaml [-n r7-node-scan]
    kubectl delete namespace r7-node-scan
    ```

What's Next?

Refer to the Kubernetes Security Guardrails for an overview of this feature and a summary of the prerequisites.

Jump to the Using Kubernetes Security Guardrails page to view details on using the feature and exploring the imported data.