Kubernetes Local Scanner | InsightCloudSec Documentation

The local scanner supports managed Kubernetes clusters that are not accessible to InsightCloudSec, as well as any self-managed Kubernetes clusters. Once a cluster is configured to provide access and is successfully added to InsightCloudSec, the local scanner harvests and assesses it automatically. The local scanner is deployed using kubectl and Helm on each individual Kubernetes cluster you want to monitor.

  • Self-managed clusters need to be configured to provide access to each specific cluster and will be harvested and assessed automatically through the local scanner after they are successfully onboarded to InsightCloudSec.
  • For managed clusters, when using a local scanner, an account is created automatically; clusters that cannot be accessed are marked with an error and placed into the harvesting Paused state.
    • When installing a local scanner for managed clusters, special care should be given to assigning the cluster ID.
    • We recommend using the provider resource ID; otherwise, a new account is created that is detached from the automatically created account, losing the benefits of tag and badge synchronization.
  • To migrate between scanners, refer to the instructions on the Clusters Account Setup & Management page.

Prerequisites

Before using the Kubernetes Local Scanner, verify that your local machine has Helm and kubectl installed. Run the helm and kubectl commands individually to confirm each tool is available and that kubectl is set to the correct context for your Kubernetes cluster.

Setup kubectl

If you do not have an existing kubectl setup, refer to the following to connect to your Kubernetes cluster:

Setup Helm

If you do not have an existing Helm installation, refer to the following to connect to your Kubernetes cluster:

  1. Download and install Helm.

Cluster Network Access Requirements

The steps below should be executed on all designated clusters. Every cluster must be able to connect to the InsightCloudSec URL over HTTPS (port 443).

For SaaS customers, if you are utilizing a customer-specific allowlist, ensure the clusters’ ingress IP addresses are included.
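One way to confirm connectivity from inside a cluster is to run a temporary curl pod against your InsightCloudSec URL; the pod name and image below are illustrative, and `<InsightCloudSec-Base-URL>` is a placeholder for your installation's URL:

```shell
# Illustrative connectivity check: start a throwaway curl pod in the
# cluster and request the InsightCloudSec URL over HTTPS/443.
kubectl run ics-connectivity-check --rm -it --restart=Never \
  --image=curlimages/curl -- \
  curl -sS -o /dev/null -w '%{http_code}\n' 'https://<InsightCloudSec-Base-URL>'
```

Any HTTP status code printed (for example, 200 or 302) indicates the cluster can reach the URL on port 443; a connection error indicates the traffic is being blocked.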

  1. Connect to the cluster context where you would like to install Kubernetes Guardrails.

Step 1: Generate an API Key for the Manual Deployment

Generating an API Key is required to identify and authenticate the local scanners (one on each cluster) and allow the scanner to report inventory and Guardrails assessment findings to the InsightCloudSec platform.

  1. Navigate to your InsightCloudSec installation and click to open Cloud > Kubernetes Clusters.
  2. On the top of the page click the Manage Kubernetes API Key button.
  3. Click Add API Key.
  4. Provide a name for the Key and ensure the Activate this API Key checkbox is selected, then click Create API Key.
  5. Copy the newly generated API key and store it in a safe place.
⚠️

Save the Key!

This will be your only opportunity to save this information.

Step 2: Install Guardrails

After generating an API key, you’re ready to deploy Kubernetes Guardrails on your local cluster. Before deploying anything, familiarize yourself with the Guardrails command:

Guardrails Command Reference

| Property | Type | Required? | Description |
| --- | --- | --- | --- |
| K8sGuardrails.ApiToken | String | Required | API Key from InsightCloudSec that allows the Guardrails scanners (agents) to access the InsightCloudSec API and report findings. You can use either ApiToken or ApiTokenSecret, but not both. |
| K8sGuardrails.ApiTokenSecret | String | Required | Name of the Kubernetes Secret that contains the API Key. You can use either ApiToken or ApiTokenSecret, but not both. InsightCloudSec assumes the key within the Kubernetes Secret is api-token unless defined with K8sGuardrails.ApiTokenSecretKey. |
| K8sGuardrails.ApiTokenSecretKey | String | Optional | Name of the key that maps to the API Key inside the Kubernetes Secret. |
| Config.BaseUrl | String | Required | The base URL for your InsightCloudSec installation. If unknown, retrieve it from InsightCloudSec by going to Settings > System Administration > System and copying the Base URL field. |
| Config.ClusterName | String | Required | User-defined cluster name. |
| Config.ClusterId | String | Required | The ARN field of the discovered remote or cloud-managed cluster. See Discovery for Existing Clusters for instructions. The IDs must match to correlate correctly and to generate coverage reports. Cluster ID is not required if the cluster is self-managed. |
| Config.Labels | String-encoded JSON map | Optional | The cluster badges to translate into cloud account badges that you can use to filter Insight findings. Example: '{\"environment\": \"production\"\, \"owner\": \"user@rapid7.com\"\, \"risk\": \"low\"\, \"provider\": \"EKS\"}' |
| Config.HasNodeFsAccess | Boolean | Optional | Enables access for the Node Scanner (requires additional configuration). |
| Config.ProxyUrl | String | Optional | Proxy URL to run the Local Scanner behind. |
| CronSchedule | String | Optional | Cron schedule for the Guardrails scanner. The default scanning schedule is once an hour. To learn about Kubernetes CronJob scheduling, see the Kubernetes documentation. |
| devopscurlSpec.SelfSignedCert.Enabled | Boolean | Optional | Enables support for self-signed certificates on your InsightCloudSec server. |
| devopscurlSpec.SelfSignedCert.CertSecretName | String | Optional | Name of a Kubernetes Secret that contains the certificate. The Secret must reside in the same namespace and include a file named cert.pem. Use this instead of CertPem when possible. |
| devopscurlSpec.SelfSignedCert.CertPem | String | Optional | A base64-encoded PEM certificate. Recommended only if CertSecretName is not used. Use --set to pass the value inline (do not hardcode it in values.yaml). |
| Config.IsOpenShift | Boolean | Optional | Set to true if your Kubernetes cluster is running on OpenShift. |
| Config.LogLevel | String | Optional | Modifies the Kubernetes log level. The default log level is info. |
| nodeSelector.Os | String | Optional | Specifies the operating system value for the Kubernetes node selector. To learn more about Kubernetes node selector OSes, see the Kubernetes documentation. |
| nodeSelector.Arch | String | Optional | Specifies the architecture value for the Kubernetes node selector. To learn more about Kubernetes node selector architecture, see the Kubernetes documentation. |
  1. Add the Kubernetes Guardrails Helm repo by issuing the following commands:
helm repo add helm-repo https://helm.rapid7.com/cloudsec
helm search repo
# example helm install command
helm install k8s-guardrails helm-repo/k8s-guardrails -n rapid7 --create-namespace \
  --set K8sGuardrails.ApiToken=<InsightCloudSec-API-token> \
  --set Config.BaseUrl=<InsightCloudSec-Base-URL> \
  --set Config.ClusterName=<InsightCloudSec-Cluster-Name> \
  --set Config.ClusterId=<InsightCloudSec-Cluster-ID> \
  --set Config.Labels=<InsightCloudSec-Cluster-Badges> \
  --set Config.HasNodeFsAccess=false \
  --set CronSchedule=<k8sGuardrails-CronSchedule> \
  --set devopscurlSpec.SelfSignedCert.Enabled=<Enable-Self-Cert> \
  --set devopscurlSpec.SelfSignedCert.CertSecretName=<Self-Cert-Secret-Name>

The devopscurlSpec.SelfSignedCert.CertSecretName setting can be replaced with devopscurlSpec.SelfSignedCert.CertPem=<Self-Cert-Pem-Base64>. Review the Guardrails Command Reference and Troubleshooting sections for more information and help.
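If you use the CertPem route, the certificate must be base64-encoded first. Assuming your CA certificate is in a file named ca.pem (a placeholder name), one way to produce the value is:

```shell
# Encode the PEM certificate as a single-line base64 string for
# devopscurlSpec.SelfSignedCert.CertPem. "ca.pem" is a placeholder filename.
base64 -w0 ca.pem          # GNU coreutils (Linux)
# On macOS, use: base64 -i ca.pem
```

Pass the resulting string via `--set` rather than storing it in values.yaml, as noted in the Guardrails Command Reference.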

ℹ️

API Token already stored in a Kubernetes Secret?

If the API Key for the InsightCloudSec platform is already stored in a Kubernetes Secret, you can replace K8sGuardrails.ApiToken with K8sGuardrails.ApiTokenSecret and optionally K8sGuardrails.ApiTokenSecretKey as shown in the following example:

helm repo add helm-repo https://helm.rapid7.com/cloudsec
helm search repo
# example helm install command
helm install k8s-guardrails helm-repo/k8s-guardrails -n rapid7 --create-namespace \
  --set K8sGuardrails.ApiTokenSecret=<Name-of-K8s-secret-containing-API-key> \
  --set K8sGuardrails.ApiTokenSecretKey=<Key-in-secret-mapping-to-API-key-value> \
  --set Config.BaseUrl=<InsightCloudSec-Base-URL> \
  --set Config.ClusterName=<InsightCloudSec-Cluster-Name> \
  --set Config.ClusterId=<InsightCloudSec-Cluster-ID> \
  --set Config.Labels=<InsightCloudSec-Cluster-Badges> \
  --set Config.HasNodeFsAccess=false \
  --set CronSchedule=<k8sGuardrails-CronSchedule> \
  --set devopscurlSpec.SelfSignedCert.Enabled=<Enable-Self-Cert> \
  --set devopscurlSpec.SelfSignedCert.CertSecretName=<Self-Cert-Secret-Name>
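If the Secret does not exist yet, you can create it first; in this sketch, "ics-api-key" is an illustrative Secret name, and the key name api-token matches the default the scanner expects (per the Guardrails Command Reference):

```shell
# Hypothetical example: create the Secret that
# K8sGuardrails.ApiTokenSecret will reference. The scanner looks for the
# key "api-token" unless K8sGuardrails.ApiTokenSecretKey overrides it.
kubectl create secret generic ics-api-key \
  -n rapid7 \
  --from-literal=api-token='<InsightCloudSec-API-token>'
```

You would then install with `--set K8sGuardrails.ApiTokenSecret=ics-api-key` and omit `K8sGuardrails.ApiToken`.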

Step 3: Cluster Role and Binding Setup (Optional)

Lastly, you can optionally create a Kubernetes Cluster Role and Binding to allow create permission for the subjectaccessreview resource. See the Kubernetes Scanners Overview FAQ for details.

ℹ️

Cluster role and binding are security best practices

Creating a cluster role and binding enables two additional Insights for Kubernetes security best practices.

  1. Authenticate to your Kubernetes Cluster as a cluster administrator.

  2. Create a role with the necessary permissions:

    echo "apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: subjectaccessreview-create
rules:
- apiGroups:
  - authorization.k8s.io
  resources:
  - subjectaccessreviews
  verbs:
  - create" | kubectl create -f -
  3. Create a role binding:

    echo "apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: subjectaccessreview-create-k8s-guardrails-sa
subjects:
- kind: ServiceAccount
  name: k8s-guardrails-sa
  namespace: rapid7
roleRef:
  kind: ClusterRole
  name: subjectaccessreview-create
  apiGroup: rbac.authorization.k8s.io" | kubectl create -f -
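After creating the role and binding, one way to confirm they took effect is to impersonate the Guardrails service account and test the verb; the names below assume the defaults used in this section:

```shell
# Optional check: impersonate the Guardrails service account
# (default name k8s-guardrails-sa in the rapid7 namespace) and ask
# whether it may create subjectaccessreviews. Prints "yes" if allowed.
kubectl auth can-i create subjectaccessreviews \
  --as=system:serviceaccount:rapid7:k8s-guardrails-sa
```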

Verify the Deployment

After deploying Guardrails and configuring the cluster role and binding, you should verify that the deployment is working.

  1. To verify that Kubernetes Guardrails is working, trigger a job manually using the following command:

    kubectl create job --from=cronjob/k8s-guardrails -n rapid7 k8s-guardrails-manual-001

  2. Verify that the pod is in the completed status. Time to completion will depend on the size of the cluster.

    kubectl get pods -n rapid7 | grep k8s-guardrails-manual-001

  3. Verify that the cluster is marked as monitored and that resources appear with findings on them.
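The wait in step 2 can also be scripted; this sketch assumes the default namespace and the job name created in step 1:

```shell
# Wait for the manually triggered job to finish, then show the tail of
# its logs. Assumes the rapid7 namespace and the job created above.
kubectl wait --for=condition=complete -n rapid7 \
  job/k8s-guardrails-manual-001 --timeout=15m
kubectl logs -n rapid7 job/k8s-guardrails-manual-001 | tail -n 20
```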

Troubleshooting

Specifying Resource Limits

InsightCloudSec includes the ability to specify resource limits and requests for the Guardrails containers. The Helm keys to set start with global.Resources. followed by the requests and/or limits of your choice; the values and names must follow the valid structure of Kubernetes resource specifications. Edit the following values to your needs (they depend on the cluster's characteristics) and add them to the helm command:

--set global.Resources.requests.cpu=200m \
--set global.Resources.requests.memory=100Mi \
--set global.Resources.limits.cpu=1 \
--set global.Resources.limits.memory=1Gi

For more details on resource limits, refer to the Kubernetes documentation on Resource Management for Pods and Containers.

Using a Self-Signed Certificate

The Kubernetes scanner accesses InsightCloudSec over TLS. In most cases a public Certificate Authority is used, but some organizations use a private Certificate Authority, which requires the Kubernetes scanner to be configured with a self-signed certificate. To configure a self-signed certificate, provide additional parameters to the Helm chart installation that indicate the use of a self-signed certificate and supply the certificate as a base64-encoded value.

An example:

helm install k8s-guardrails helm-repo/k8s-guardrails -n rapid7 --create-namespace \
  --set K8sGuardrails.ApiToken=token \
  --set Config.ClusterName="Cluster name" \
  --set CronSchedule="30 * * * *" \
  --set Config.BaseUrl=https://self-sign-cert-ics.com \
  --set Config.ClusterId="cluster-id" \
  --set devopscurlSpec.SelfSignedCert.Enabled=true \
  --set devopscurlSpec.SelfSignedCert.CertPem=LS...0t \
  --set Config.HasNodeFsAccess=false

Discovery for Existing Clusters

To identify clusters that are not currently covered by the local scanner, follow the steps below. You can identify clusters using either the InsightCloudSec UI or the InsightCloudSec API.

Discovery of Clusters Using the UI

  1. From your InsightCloudSec platform installation, navigate to Inventory > Resources and select the Containers tab.

  2. (Optional) Use the Scopes button at the top of the page to narrow the scope (e.g., cloud accounts, resource groups) to use when scanning for clusters that are not yet included in your InsightCloudSec setup.

  3. From the Containers tab, select Clusters to see a list of all of the clusters included in the selected scope.

  4. Navigate to Filters and search for/select the Kubernetes Cluster Without Guardrails Report.

    • Selecting this filter will update the resources to only include clusters that have not been scanned.
    • The cluster ID field that displays will be used when deploying Guardrails to a specific cluster.
  5. Locate the Cluster ID column and note the Cluster you want to deploy Guardrails in.
    You will have to scroll to the right to see all of the columns.

ℹ️

Cluster ID

InsightCloudSec uses the Cluster ID to identify clusters. Using the Cluster ID allows Rapid7 to associate clusters discovered via the InsightCloudSec platform (either through the UI or API) with clusters onboarded through the local scanning capability.

Discovery of Clusters Using the API

For information on using the InsightCloudSec API, refer to the Getting Started documentation.

  1. Log in to the InsightCloudSec API by sending your username and password in the request body of a POST to v2/public/user/login.
  2. Use the session_id from the response in the X-Auth-Token header, then use the following request body in a POST to v2/public/resource/query:

    {
      "selected_resource_type": "containercluster",
      "filters": [
        {
          "name": "divvy.query.kubernetes_cluster_without_guardrails_report",
          "config": {}
        }
      ],
      "offset": 0,
      "limit": 100
    }

  3. The resources list will display clusters that have not been scanned; the ARN field is used when deploying Guardrails to a specific cluster.
    Save the ARN details for clusters where you want to configure Kubernetes Security Guardrails.
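The two API calls above can be sketched with curl; jq is an assumption here (used only to extract session_id from the login response), and BASE_URL, USER, and PASS are placeholders for your environment:

```shell
# Hypothetical sketch of the login + query flow described above.
BASE_URL='https://<InsightCloudSec-Base-URL>'

# 1. Log in and capture the session_id from the response.
SESSION_ID=$(curl -s -X POST "$BASE_URL/v2/public/user/login" \
  -H 'Content-Type: application/json' \
  -d '{"username": "USER", "password": "PASS"}' | jq -r '.session_id')

# 2. Query for clusters without Guardrails, authenticating via X-Auth-Token.
curl -s -X POST "$BASE_URL/v2/public/resource/query" \
  -H "X-Auth-Token: $SESSION_ID" \
  -H 'Content-Type: application/json' \
  -d '{
        "selected_resource_type": "containercluster",
        "filters": [
          {"name": "divvy.query.kubernetes_cluster_without_guardrails_report", "config": {}}
        ],
        "offset": 0,
        "limit": 100
      }'
```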

Manage Existing API Keys

Clicking the Manage Kubernetes API Key button enables you to generate new API keys, manage a key's status (activated or deactivated), and delete unused keys.

Our current setup supports up to 5 API keys for API key rotation. The clusters will be installed as a single Organization (within InsightCloudSec).

ℹ️

Applying New API Keys

Assuming you have applied the suggested naming convention for the Helm repository and installation, the command for updating your Kubernetes Scanner deployment for new API keys should look like:

    helm upgrade k8s-guardrails helm-repo/k8s-guardrails -n rapid7 --set K8sGuardrails.ApiToken=<new token>

Uninstall the local scanner

You can remove the Kubernetes Local Scanner at any time using helm and kubectl.

To remove the Kubernetes Local Scanner:

  1. Uninstall the Guardrails package:

    helm uninstall k8s-guardrails -n rapid7

ℹ️

Want to verify the uninstallation first?

You can use the --dry-run flag to verify what is uninstalled before committing to the command.

  2. Remove the remaining Guardrails components:

    kubectl delete role <NAME_OF_ROLE> -n <NAMESPACE>
    kubectl delete rolebinding <NAME_OF_ROLEBINDING> -n <NAMESPACE>
    kubectl delete cronjob k8s-guardrails-manual-001 -n rapid7
    kubectl delete -f r7-node-scan.yaml [-n r7-node-scan]
    kubectl delete namespace r7-node-scan

What’s Next?

Refer to the Kubernetes Security Guardrails for an overview of this feature and a summary of the prerequisites.

Jump to the Using Kubernetes Security Guardrails page to view details on using the feature and exploring the imported data.