Kubernetes Local Scanner

The local scanner supports managed Kubernetes clusters that are not accessible to InsightCloudSec, as well as any self-managed Kubernetes clusters. Once configured with access to each specific cluster, self-managed clusters are harvested and assessed automatically through the local scanner after they are successfully added to InsightCloudSec.

Feature Overview

  • Self-managed clusters must be configured to provide access to each specific cluster; they are harvested and assessed automatically through the local scanner after they are successfully onboarded to InsightCloudSec.
  • For managed clusters, when using a local scanner, an account is created automatically; clusters that cannot be accessed are marked with an error and placed into the Paused harvesting state.
    • When installing a local scanner for managed clusters, take special care when assigning the cluster ID.
    • We recommend using the provider resource ID; otherwise a new account is created that is detached from the automatically created account, losing the benefits of tag and badge synchronization.
  • To migrate between scanners, refer to the instructions on the Clusters Account Setup & Management page.

Setup & Configuration

Generating an API Key

Generating an API Key is required to identify and authenticate the local scanners (one per cluster) and to allow each scanner to report inventory and Guardrails assessment findings to the InsightCloudSec platform.

  1. Navigate to your InsightCloudSec installation and open Cloud > Kubernetes Clusters.
  2. At the top of the page, click the Add Kubernetes API Key button.
  3. Click Add API Key.
  4. Provide a name for the Key and ensure the Activate this API Key checkbox is selected, then click Create API Key.
  5. Copy the newly generated API key and store it in a safe place.

Save

This will be your only opportunity to save this information.

Manage Existing API Keys

Clicking the Add Kubernetes API Key button enables you to generate new API Keys, manage a key's status (activated or deactivated), and delete unused keys.

Our current setup supports up to two API keys to allow for API key rotation. The clusters will be installed under a single Organization (within InsightCloudSec).

Managing your Kubernetes API Keys

Applying New API Keys

Assuming you have applied the suggested naming convention for the Helm repository and installation, the command for updating your Kubernetes Scanner deployment for new API keys should look like:
helm upgrade k8s-guardrails helm-repo/k8s-guardrails -n rapid7 --set K8sGuardrails.ApiToken=<new token>

Verifying Configuration Requirements

Before using the Kubernetes Security Guardrails feature, verify that your local machine is set up with helm and kubectl.

To do this, run the helm and kubectl commands (individually) to set the correct context against your Kubernetes cluster. Helm is required to install the Guardrails scanner.
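As a quick sanity check, commands like the following confirm both tools are on your PATH and show which cluster context kubectl currently points at (a sketch; nothing here is specific to InsightCloudSec):

```shell
# Check that kubectl and helm are installed, then print the active context.
for tool in kubectl helm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool found at $(command -v "$tool")"
  else
    echo "$tool is not installed"
  fi
done
# Show the cluster context kubectl is currently set to (if kubectl exists).
command -v kubectl >/dev/null 2>&1 && kubectl config current-context || true
```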

Setup for kubectl

If you do not have an existing kubectl setup, refer to the following to connect to your Kubernetes cluster:

Setup for Helm

If you do not have an existing Helm setup, refer to the following to connect to your Kubernetes cluster:

  1. Download and install Helm.

Cluster Network Access Requirements

The steps below should be executed on all designated clusters.

Every cluster must have network access to the InsightCloudSec server's IP and Port.

For SaaS customers, your cluster(s) will be making requests to your InsightCloudSec installation on one of two IP addresses specific to your installation. You can obtain these IP addresses by performing a DNS lookup on your installation's domain name.
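For example, a lookup like the following returns the addresses to allow egress for; the domain shown is a placeholder for your installation's actual domain:

```shell
# Resolve the IP addresses behind the installation's domain so they can be
# allow-listed from each cluster. Replace the placeholder domain with your own.
ICS_DOMAIN="mycompany.divvycloud.com"
nslookup "$ICS_DOMAIN" 2>/dev/null \
  || getent hosts "$ICS_DOMAIN" \
  || echo "DNS lookup for $ICS_DOMAIN failed; check network connectivity"
```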

  1. Connect to the cluster context in which you would like to install k8s Guardrails.

  2. Add the k8s Guardrails Helm repo by issuing the following commands. The devopscurlSpec.SelfSignedCert.CertSecretName setting can be replaced with devopscurlSpec.SelfSignedCert.CertPem=<Self-Cert-Pem-Base64>. See the Helm Install Guardrails Documentation for more information and troubleshooting.

    text
    helm repo add helm-repo https://helm.rapid7.com/cloudsec
    helm search repo

    # example helm install command
    helm install k8s-guardrails helm-repo/k8s-guardrails -n rapid7 --create-namespace \
    --set K8sGuardrails.ApiToken=<InsightCloudSec-API-token> \
    --set Config.BaseUrl=<InsightCloudSec-Base-URL> \
    --set Config.ClusterName=<InsightCloudSec-Cluster-Name> \
    --set Config.ClusterId=<InsightCloudSec-Cluster-ID> \
    --set Config.Labels=<InsightCloudSec-Cluster-Badges> \
    --set Config.HasNodeFsAccess=false \
    --set CronSchedule=<k8sGuardrails-CronSchedule> \
    --set devopscurlSpec.SelfSignedCert.Enabled=<Enable-Self-Cert> \
    --set devopscurlSpec.SelfSignedCert.CertSecretName=<Self-Cert-Secret-Name>
Helm Install Guardrails Documentation and Troubleshooting

  • K8sGuardrails.ApiToken (Mandatory)
    The API Key defined on the InsightCloudSec platform. The k8s Guardrails API token is used in token-based authentication to allow the Guardrails scanners (agents) to access the platform API and report findings.
    Note: Ensure you generate a secure API token.
    In the InsightCloudSec interface, navigate to Cloud > Clouds, select Add k8s API Key, then enter the API Key to be used to link InsightCloudSec with your Kubernetes clusters.
  • Config.BaseUrl (Mandatory)
    Set this to the base URL for your InsightCloudSec platform installation.
    If unknown, the URL can be retrieved from the InsightCloudSec interface: navigate to Administration > System Administration, select System, and copy the Base URL field.
    This URL should be used alongside the path to the endpoint: <base-url>. Example: https://mycompany.divvycloud.com/
  • Config.ClusterName (Mandatory)
    User-defined cluster name.
  • Config.ClusterId (Mandatory)
    Must match the ARN field of the discovered cluster in order to correlate correctly and to generate coverage reports.
    Navigate to the Resource page or use the API to get the Kubernetes Cluster Without Guardrails Report; the report contains the ARN for each cluster. Refer to Discovery for Existing Clusters for details.
  • Config.Labels (Optional)
    The cluster badges, if provided, are translated into cloud account (cluster) badges that you can use later to navigate and filter Insight findings.
    Example of Cluster-Badges:
    '\{\"environment\": \"production\"\, \"owner\": \"user@rapid7.com\"\, \"risk\": \"low\"\, \"provider\": \"EKS\"\}'
  • Config.HasNodeFsAccess (Optional)
    Boolean type. Enable this feature to allow access for the Node Scanner (requires additional configuration).
  • CronSchedule (Optional)
    Creates periodic and recurring tasks to run the Guardrails scanner. The default scanning schedule (if not specified) is once an hour. For CronJob scheduling, refer to the following information.
  • devopscurlSpec.SelfSignedCert.Enabled
    Boolean type. Enable this feature and supply the Self-Cert-Secret-Name or Self-Cert-Pem-Base64 if your ICS server is using a self-signed cert.
  • devopscurlSpec.SelfSignedCert.CertSecretName (Optional)
    Can be replaced by Self-Cert-Pem-Base64. Create a secret in the same namespace and pass the secret name.
  • devopscurlSpec.SelfSignedCert.CertPem (Optional)
    A base64-encoded string of the self-signed certificate PEM file. Can be replaced by Self-Cert-Secret-Name; pass a base64-encoded certificate.
    This option is less recommended than using the Self-Cert-Secret-Name property. If this option is used, ensure the value is passed via an inline parameter using the --set flag and not hardcoded in the values.yaml file.
  • Config.IsOpenShift (Optional)
    Boolean type. Enable this feature if your Kubernetes cluster is running on OpenShift.
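For the CertSecretName option, the secret must already exist in the scanner's namespace before installation. A minimal sketch, assuming the CA PEM is in a local file named ca.pem and using ics-ca-cert as an example secret name (neither name is required by the chart):

```shell
# Sketch: create the secret referenced by devopscurlSpec.SelfSignedCert.CertSecretName.
# "ics-ca-cert" and "ca.pem" are example names, not values required by the chart.
if command -v kubectl >/dev/null 2>&1; then
  kubectl create secret generic ics-ca-cert --from-file=ca.pem -n rapid7 \
    || echo "secret creation failed; is ca.pem present and the cluster reachable?"
else
  echo "kubectl not found; run this from a machine with cluster access"
fi
```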
  3. To verify that k8s-guardrails works successfully, trigger a job manually using the following command.

    text
    kubectl create job --from=cronjob/k8s-guardrails -n rapid7 k8s-guardrails-manual-001

  4. Verify that the pod reaches the Completed status. Time to completion will depend on the size of the cluster.

    text
    kubectl get pods -n rapid7 | grep k8s-guardrails-manual-001

  5. Verify that the cluster is marked as monitored and that resources appear with findings on them.

Specifying Resource Limits

InsightCloudSec includes the ability to specify resource limits and requests for Guardrails containers.

The helm key to set should start with the following YAML hierarchy: "<container spec>.Resources." followed by the desired resource requests/limits, where <container spec> is one of advisorSpec, mergerSpec, inventoryscannerSpec, exporterSpec, or devopscurlSpec. For example:

yaml
advisorSpec.Resources.requests.cpu=200m
advisorSpec.Resources.requests.memory=100Mi
advisorSpec.Resources.limits.cpu=1
advisorSpec.Resources.limits.memory=1Gi

For more info and how to configure refer to Kubernetes documentation on Resource Management for Pods and Containers
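These keys map directly onto helm `--set` flags. A sketch of the corresponding upgrade command, assuming the release and repository names suggested earlier on this page (the command is echoed here rather than executed):

```shell
# Map the resource requests/limits above onto helm --set flags.
HELM_RESOURCE_ARGS="--set advisorSpec.Resources.requests.cpu=200m \
--set advisorSpec.Resources.requests.memory=100Mi \
--set advisorSpec.Resources.limits.cpu=1 \
--set advisorSpec.Resources.limits.memory=1Gi"
# Echo the full command; remove the echo to apply it for real.
echo helm upgrade k8s-guardrails helm-repo/k8s-guardrails -n rapid7 $HELM_RESOURCE_ARGS
```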

Using a Self-Signed Certificate

Accessing InsightCloudSec from the Kubernetes scanner is done over TLS. While in most cases a public Certificate Authority is used, some organizations use a private Certificate Authority that requires the Kubernetes scanner to be configured with a self-signed certificate.

Configuring a self-signed certificate is done by providing additional parameters to the helm chart installation indicating the use of a self-signed certificate and providing the certificate in the format of base64 encoding.

An example is below:

text
helm install k8s-guardrails helm-repo/k8s-guardrails -n rapid7 --create-namespace --set K8sGuardrails.ApiToken=token --set Config.ClusterName="Cluster name" --set CronSchedule="30 * * * *" --set Config.BaseUrl=https://self-sign-cert-ics.com --set Config.ClusterId="cluster-id" --set devopscurlSpec.SelfSignedCert.Enabled=true --set devopscurlSpec.SelfSignedCert.CertPem=LS...0t --set Config.HasNodeFsAccess=false

Discovery for Existing Clusters

To identify clusters that are not currently covered, follow the steps below using either the InsightCloudSec UI or the InsightCloudSec API.

Discovery of Clusters Using the UI

  1. From your InsightCloudSec platform installation, navigate to Inventory > Resources and select the Containers tab.

  2. (Optionally) Use the Scopes button at the top of the page to narrow the scope (e.g., cloud accounts, resource groups) to use when scanning for clusters that are not yet included in your InsightCloudSec setup.

  3. From the Containers tab, select Clusters to see a list of all of the clusters included in the selected scope.

  4. Navigate to Filters and search for/select the Kubernetes Cluster Without Guardrails Report.

    • Selecting this filter will update the resources to only include clusters that have not been scanned.
    • The cluster ID field that displays will be used when deploying Guardrails to a specific cluster.
  5. Locate the Cluster ID column and note the Cluster you want to deploy Guardrails in.
    You will have to scroll to the right to see all of the columns.

Cluster ID

InsightCloudSec uses the Cluster ID to identify clusters. Using the Cluster ID allows us to correlate between clusters discovered via the InsightCloudSec platform (either through the UI or API) vs. clusters onboarded through the Alcide scanning capability.

Discovery of Clusters Using the API

For information on using the InsightCloudSec API, refer to the Getting Started documentation.

  1. Log in to the InsightCloudSec (DivvyCloud) API by sending your username and password in the request body of a POST to v2/public/user/login.
  2. Use the session_id from the response in the X-Auth-Token header. Use the following request body in a POST to v2/public/resource/query:
json
{
    "selected_resource_type": "containercluster",
    "filters": [{
        "name": "divvy.query.kubernetes_cluster_without_guardrails_report",
        "config": {}
    }],
    "offset": 0,
    "limit": 100
}
  3. The resources list will display clusters that have not been scanned; the ARN field will be used when deploying Guardrails to a specific cluster.
    Save the ARN details for clusters where you want to configure Kubernetes Security Guardrails.
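The two API calls above can be sketched as a small script. The base URL and credentials are placeholders, the curl calls are commented out because they require network access to your installation, and `jq` is assumed for extracting the session_id:

```shell
# Sketch of the discovery flow via the public API.
# BASE_URL and the login credentials are placeholders (assumptions).
BASE_URL="https://mycompany.divvycloud.com"
LOGIN_BODY='{"username": "me@example.com", "password": "example-password"}'
QUERY_BODY='{"selected_resource_type": "containercluster", "filters": [{"name": "divvy.query.kubernetes_cluster_without_guardrails_report", "config": {}}], "offset": 0, "limit": 100}'

# 1. Obtain a session_id (uncomment to run; requires jq):
# TOKEN=$(curl -s -X POST "$BASE_URL/v2/public/user/login" \
#   -H 'Content-Type: application/json' -d "$LOGIN_BODY" | jq -r '.session_id')

# 2. Query clusters without Guardrails, passing the token in X-Auth-Token:
# curl -s -X POST "$BASE_URL/v2/public/resource/query" \
#   -H "X-Auth-Token: $TOKEN" -H 'Content-Type: application/json' \
#   -d "$QUERY_BODY"
```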

What's Next?

Refer to the Kubernetes Security Guardrails page for an overview of this feature and a summary of the prerequisites.

Jump to the Using Kubernetes Security Guardrails page to view details on using the feature and exploring the imported data.