Kubernetes Local Scanner
The local scanner supports managed Kubernetes clusters that are not accessible to InsightCloudSec, as well as any self-managed Kubernetes clusters. The local scanner is deployed using kubectl and helm for each individual Kubernetes cluster you want to monitor.
- Self-managed clusters need to be configured to provide access to each specific cluster and will be harvested and assessed automatically through the local scanner after they are successfully onboarded to InsightCloudSec.
- For managed clusters, an account is created automatically when using a local scanner; clusters that cannot be accessed will be marked with an error and placed into the Paused harvesting state.
- When installing a local scanner for managed clusters, special care should be given to assigning the cluster ID.
- We recommend using the provider resource ID; otherwise, a new account will be created that is detached from the automatically created account, losing the benefits of tag and badge synchronization.
- To migrate between scanners, refer to the instructions on the Clusters Account Setup & Management page.
Prerequisites
Before using the Kubernetes Local Scanner, you will need to verify that your local machine is set up with helm and kubectl. To do this, you can run the helm and kubectl commands (individually) to set the correct context against your Kubernetes cluster.
Set up kubectl
If you do not have an existing kubectl setup, refer to the following to connect to your Kubernetes cluster:
- To install kubectl on OSX
- To install kubectl on Linux
- To install kubectl on Windows
Set up Helm
If you do not have an existing Helm installation, refer to the following to connect to your Kubernetes cluster:
- Download and install Helm.
Cluster Network Access Requirements
The steps below should be executed on all designated clusters. Every cluster must be able to connect to the InsightCloudSec URL using https/443. For SaaS customers, if you are utilizing a customer-specific allowlist, ensure the clusters' egress IP addresses are included.
- Connect to the cluster context in which you would like to install Kubernetes Guardrails:
- For GKE clusters
- For EKS clusters
- For AKS clusters
- For local clusters such as kind, minikube, or kubeadm
Step 1: Generate an API Key for the Manual Deployment
Generating an API Key is required to identify and authenticate the local scanners (one on each cluster) and allow the scanner to report inventory and Guardrails assessment findings to the InsightCloudSec platform.
- Navigate to your InsightCloudSec installation and click to open Cloud > Kubernetes Clusters.
- At the top of the page, click the Manage Kubernetes API Key button.
- Click Add API Key.
- Provide a name for the Key and ensure the Activate this API Key checkbox is selected, then click Create API Key.
- Copy the newly generated API key and store it in a safe place.
Save the Key!
This will be your only opportunity to save this information.
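If you plan to pass the key by reference with K8sGuardrails.ApiTokenSecret rather than inline, one option is to wrap it in a Kubernetes Secret now. The following is a minimal sketch, not an official procedure: the secret name guardrails-api-key is a placeholder of our choosing, and api-token is the key name the scanner assumes unless K8sGuardrails.ApiTokenSecretKey is set.

```shell
# Illustrative sketch: wrap the copied API key in a Kubernetes Secret manifest.
# "guardrails-api-key" is a placeholder name; "api-token" is the default key
# name assumed unless K8sGuardrails.ApiTokenSecretKey is set.
API_KEY='<your-copied-API-key>'
B64_KEY=$(printf '%s' "$API_KEY" | base64 | tr -d '\n')

cat > guardrails-api-key-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: guardrails-api-key
  namespace: rapid7
type: Opaque
data:
  api-token: ${B64_KEY}
EOF

# Apply to the cluster (requires the rapid7 namespace to exist):
#   kubectl apply -f guardrails-api-key-secret.yaml
cat guardrails-api-key-secret.yaml
```

The Secret name is then what you would supply as K8sGuardrails.ApiTokenSecret in Step 2.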
Step 2: Install Guardrails
After generating an API key, you're ready to deploy Kubernetes Guardrails on your local cluster. Before deploying anything, familiarize yourself with the Guardrails command:
Guardrails Command Reference
Property | Description | Instructions |
---|---|---|
K8sGuardrails.ApiToken | Required API Key from InsightCloudSec. The Kubernetes Guardrails API token is used in token-based authentication to allow the Guardrails scanners (agents) to access the InsightCloudSec API and report findings. API key authentication is required but the method is flexible: you can use either ApiToken or ApiTokenSecret but not both. | Explore the Generate an API Key section for details. |
K8sGuardrails.ApiTokenSecret | Required The name of the Kubernetes Secret that contains the API Key. API key authentication is required but the method is flexible: you can use either ApiToken or ApiTokenSecret but not both. | String. Provide the name of the Kubernetes Secret containing your K8sGuardrails.ApiToken. InsightCloudSec assumes the key within the Kubernetes Secret is api-token unless defined with K8sGuardrails.ApiTokenSecretKey. |
K8sGuardrails.ApiTokenSecretKey | Optional Defines the Key that maps to the API Key inside of the Kubernetes Secret. | String |
Config.BaseUrl | Required Set this to the base URL for your InsightCloudSec installation. | If unknown, the URL can be retrieved from the InsightCloudSec interface by going to Administration > System Administration > System and copying the Base URL field. |
Config.ClusterName | Required User-defined cluster name. | String |
Config.ClusterId | Required The ARN field of the discovered remote or cloud-managed cluster. The IDs must match to correlate correctly and to generate coverage reports. If the cluster is self-managed, any provided value is sufficient because there is no discovered cluster to correlate with. | Navigate to the Resource page or use an API to get Kubernetes Cluster Without Guardrails Report. Reports contain ARN for each cluster. Refer to Discovery for Existing Clusters for details. |
Config.Labels | Optional The cluster badges, if provided, will be translated into cloud account (cluster) badges that you can use later to navigate/filter Insight findings. | Example of Cluster-Badges: '\{\"environment\": \"production\"\, \"owner\": \"user@rapid7.com\"\, \"risk\": \"low\"\, \"provider\": \"EKS\"\}' |
Config.HasNodeFsAccess | Optional Enable this feature to access the Node Scanner (requires additional configuration) | Boolean type |
Config.ProxyUrl | Optional Provide a proxy URL to run the Local Scanner behind a proxy | String |
CronSchedule | Optional Creates periodic and recurring tasks to run the Guardrails scanner. The default scanning schedule (if not specified) is once an hour. | For CronJob Scheduling refer to the following information. |
devopscurlSpec.SelfSignedCert.Enabled | Enable this feature and supply Self-Cert-Secret-Name or Self-Cert-Pem-Base64 if your InsightCloudSec server is using a self-signed cert. | Boolean type |
devopscurlSpec.SelfSignedCert.CertSecretName | Optional Can be replaced by Self-Cert-Pem-Base64 | Create a secret in the same namespace and pass the secret name. |
devopscurlSpec.SelfSignedCert.CertPem | Optional A base64-encoded string of the self-signed certificate PEM file. Can be replaced by Self-Cert-Secret-Name | Pass a base64-encoded certificate. This option is less recommended than using the Self-Cert-Secret-Name property. If this option is used, ensure the value is passed via an inline parameter using the --set flag and not hardcoded in the values.yaml file. |
Config.IsOpenShift | Optional Enable this feature if your Kubernetes cluster is running on OpenShift. | Boolean type |
Config.LogLevel | Optional Set this to modify the Kubernetes log level. The default log level is info | String |
nodeSelector.Os | Optional Specifies the operating system value for the Kubernetes node selector. | String. For more information, review the Kubernetes documentation. |
nodeSelector.Arch | Optional Specifies the architecture value for the Kubernetes node selector. | String. For more information, review the Kubernetes documentation. |
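The CronSchedule property accepts standard five-field cron syntax. The sketch below shows illustrative schedule values (examples, not recommendations) and a quick sanity check on the field count:

```shell
# Illustrative CronSchedule values (standard five-field cron syntax);
# these schedules are examples, not recommendations:
#   "30 * * * *"   - at minute 30 of every hour
#   "0 */2 * * *"  - every two hours, on the hour
CRON='0 */2 * * *'

# Sanity check: a cron expression has exactly five whitespace-separated fields.
# set -f disables globbing so the * characters are not expanded to filenames.
set -f
set -- $CRON
echo "fields: $#"   # prints "fields: 5"
set +f
```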
- Add the Kubernetes Guardrails Helm repo by issuing the following commands:
```shell
helm repo add helm-repo https://helm.rapid7.com/cloudsec
helm search repo

# example helm install command
helm install k8s-guardrails helm-repo/k8s-guardrails -n rapid7 --create-namespace \
  --set K8sGuardrails.ApiToken=<InsightCloudSec-API-token> \
  --set Config.BaseUrl=<InsightCloudSec-Base-URL> \
  --set Config.ClusterName=<InsightCloudSec-Cluster-Name> \
  --set Config.ClusterId=<InsightCloudSec-Cluster-ID> \
  --set Config.Labels=<InsightCloudSec-Cluster-Badges> \
  --set Config.HasNodeFsAccess=false \
  --set CronSchedule=<k8sGuardrails-CronSchedule> \
  --set devopscurlSpec.SelfSignedCert.Enabled=<Enable-Self-Cert> \
  --set devopscurlSpec.SelfSignedCert.CertSecretName=<Self-Cert-Secret-Name>
```
The devopscurlSpec.SelfSignedCert.CertSecretName setting can be replaced with devopscurlSpec.SelfSignedCert.CertPem=<Self-Cert-Pem-Base64>. Review the Guardrails Command Reference and Troubleshooting sections for more information and help.
API Token already stored in a Kubernetes Secret?
If the API Key for the InsightCloudSec platform is already stored in a Kubernetes Secret, you can replace K8sGuardrails.ApiToken with K8sGuardrails.ApiTokenSecret and optionally K8sGuardrails.ApiTokenSecretKey as shown in the following example:
```shell
helm repo add helm-repo https://helm.rapid7.com/cloudsec
helm search repo

# example helm install command
helm install k8s-guardrails helm-repo/k8s-guardrails -n rapid7 --create-namespace \
  --set K8sGuardrails.ApiTokenSecret=<Name-of-K8s-secret-containing-API-key> \
  --set K8sGuardrails.ApiTokenSecretKey=<Key-in-secret-mapping-to-API-key-value> \
  --set Config.BaseUrl=<InsightCloudSec-Base-URL> \
  --set Config.ClusterName=<InsightCloudSec-Cluster-Name> \
  --set Config.ClusterId=<InsightCloudSec-Cluster-ID> \
  --set Config.Labels=<InsightCloudSec-Cluster-Badges> \
  --set Config.HasNodeFsAccess=false \
  --set CronSchedule=<k8sGuardrails-CronSchedule> \
  --set devopscurlSpec.SelfSignedCert.Enabled=<Enable-Self-Cert> \
  --set devopscurlSpec.SelfSignedCert.CertSecretName=<Self-Cert-Secret-Name>
```
Step 3: Cluster Role and Binding Setup (Optional)
Lastly, you can optionally create a Kubernetes Cluster Role and Binding to allow create permission for the subjectaccessreviews resource. See the Kubernetes Scanners Overview FAQ for details.
Cluster role and binding are security best practices
Creating a cluster role and binding enables two additional Insights for Kubernetes security best practices.
- Authenticate to your Kubernetes cluster as a cluster administrator.
- Create a role with the necessary permissions:

```shell
echo "apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: subjectaccessreview-create
rules:
- apiGroups:
  - authorization.k8s.io
  resources:
  - subjectaccessreviews
  verbs:
  - create" \
| kubectl create -f -
```

- Create a role binding:

```shell
echo "apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: subjectaccessreview-create-k8s-guardrails-sa
subjects:
- kind: ServiceAccount
  name: k8s-guardrails-sa
  namespace: rapid7
roleRef:
  kind: ClusterRole
  name: subjectaccessreview-create
  apiGroup: rbac.authorization.k8s.io" \
| kubectl create -f -
```
Verify the Deployment
After deploying Guardrails and configuring the cluster role and binding, you should verify the deployment is working successfully.
To verify that Kubernetes Guardrails works successfully, trigger a job manually using the following command:

```shell
kubectl create job --from=cronjob/k8s-guardrails -n rapid7 k8s-guardrails-manual-001
```

Verify that the pod is in the Completed status. Time to completion will depend on the size of the cluster.

```shell
kubectl get pods -n rapid7 | grep k8s-guardrails-manual-001
```

Verify that the cluster is marked as monitored and that resources appear with findings on them.
Troubleshooting
Specifying Resource Limits
InsightCloudSec includes the ability to specify resource limits and requests for Guardrails containers. The Helm key to set should start with the YAML hierarchy global.Resources. followed by the requests and/or limits of your choice. The values and names must follow a valid structure of Kubernetes resources. Edit the following values to your needs (these values depend on the cluster's characteristics) and add them to the helm command:

```shell
--set global.Resources.requests.cpu=200m \
--set global.Resources.requests.memory=100Mi \
--set global.Resources.limits.cpu=1 \
--set global.Resources.limits.memory=1Gi
```
For more details on resource limits, refer to the Kubernetes documentation on Resource Management for Pods and Containers
Using a Self-Signed Certificate
Accessing InsightCloudSec from the Kubernetes scanner is done over TLS. While in most cases a public Certificate Authority is used, some organizations use a private Certificate Authority, which requires the Kubernetes scanner to be configured with a self-signed certificate. To configure a self-signed certificate, provide additional parameters to the Helm chart installation indicating the use of a self-signed certificate and supplying the certificate as a base64-encoded string.
An example:

```shell
helm install k8s-guardrails helm-repo/k8s-guardrails -n rapid7 --create-namespace \
  --set K8sGuardrails.ApiToken=token \
  --set Config.ClusterName="Cluster name" \
  --set CronSchedule="30 * * * *" \
  --set Config.BaseUrl=https://self-sign-cert-ics.com \
  --set Config.ClusterId="cluster-id" \
  --set devopscurlSpec.SelfSignedCert.Enabled=true \
  --set devopscurlSpec.SelfSignedCert.CertPem=LS...0t \
  --set Config.HasNodeFsAccess=false
```
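If you need to produce the base64 value for CertPem, you can encode the CA certificate PEM file directly. The following is a hedged sketch: the ca.pem written below is a placeholder stand-in (so the example is self-contained); in practice you would point at your private CA certificate file.

```shell
# "ca.pem" is a placeholder; in practice this is your private CA certificate.
printf -- '-----BEGIN CERTIFICATE-----\nMIIB...placeholder...\n-----END CERTIFICATE-----\n' > ca.pem

# Encode the PEM as a single-line base64 string for CertPem
CERT_B64=$(base64 < ca.pem | tr -d '\n')

# Prefer passing the value inline with --set rather than hardcoding it in values.yaml:
#   helm install ... --set devopscurlSpec.SelfSignedCert.CertPem="$CERT_B64"
echo "$CERT_B64"
```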
Discovery for Existing Clusters
To identify clusters that are not currently covered by the local scanner, refer to the following steps. You can identify clusters using either the InsightCloudSec UI or the InsightCloudSec API.
Discovery of Clusters Using the UI
From your InsightCloudSec platform installation, navigate to Inventory > Resources and select the Containers tab.
(Optional) Use the Scopes button at the top of the page to narrow the scope (e.g., cloud accounts, resource groups) to use when scanning for clusters that are not yet included in your InsightCloudSec setup.
From the Containers tab, select Clusters to see a list of all of the clusters included in the selected scope.
Navigate to Filters and search for/select the Kubernetes Cluster Without Guardrails Report.
- Selecting this filter will update the resources to include only clusters that have not been scanned.
- The Cluster ID field that displays will be used when deploying Guardrails to a specific cluster.
Locate the Cluster ID column and note the cluster where you want to deploy Guardrails. You may have to scroll to the right to see all of the columns.
Cluster ID
InsightCloudSec uses the Cluster ID to identify clusters. Using the Cluster ID allows Rapid7 to associate clusters discovered via the InsightCloudSec platform (either through the UI or API) with clusters onboarded through the local scanning capability.
Discovery of Clusters Using the API
For information on using the InsightCloudSec API, refer to the Getting Started documentation.
- Log in to the InsightCloudSec API using your username and password in the request body in a POST to v2/public/user/login.
- Use the session_id from the response in the X-Auth-Token header. Use the following request body in a POST to v2/public/resource/query:

```json
{
  "selected_resource_type": "containercluster",
  "filters": [
    {
      "name": "divvy.query.kubernetes_cluster_without_guardrails_report",
      "config": {}
    }
  ],
  "offset": 0,
  "limit": 100
}
```

- The resources list will display clusters that have not been scanned; the ARN field will be used when deploying Guardrails to a specific cluster.
Save the ARN details for clusters where you want to configure Kubernetes Security Guardrails.
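The API steps above can be sketched with curl. This is a hedged illustration rather than an official client: the base URL and credentials are placeholders, and the network calls are shown commented out because they require a live InsightCloudSec instance.

```shell
# Placeholder for your environment
BASE_URL='https://your-insightcloudsec.example.com'

# 1) Log in and capture session_id from the JSON response:
#    curl -s -X POST "$BASE_URL/v2/public/user/login" \
#      -H 'Content-Type: application/json' \
#      -d '{"username": "<user>", "password": "<password>"}'

# 2) Query clusters without Guardrails, sending session_id as X-Auth-Token:
QUERY_BODY='{
  "selected_resource_type": "containercluster",
  "filters": [
    {
      "name": "divvy.query.kubernetes_cluster_without_guardrails_report",
      "config": {}
    }
  ],
  "offset": 0,
  "limit": 100
}'
#    curl -s -X POST "$BASE_URL/v2/public/resource/query" \
#      -H "X-Auth-Token: <session_id>" \
#      -H 'Content-Type: application/json' \
#      -d "$QUERY_BODY"
echo "$QUERY_BODY"
```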
Manage Existing API Keys
Clicking the Add Kubernetes API Key button enables you to generate new API Keys, manage a key's status (activated, deactivated), and delete unused keys.
Our current setup supports up to 5 API keys for API key rotation. The clusters will be installed as a single Organization (within InsightCloudSec).
Applying New API Keys
Assuming you have applied the suggested naming convention for the Helm repository and installation, the command for updating your Kubernetes Scanner deployment with a new API key should look like:

```shell
helm upgrade k8s-guardrails helm-repo/k8s-guardrails -n rapid7 --set K8sGuardrails.ApiToken=<new token>
```
Uninstall the local scanner
You can remove the Kubernetes Local Scanner at any time using helm and kubectl.
To remove the Kubernetes Local Scanner:
Uninstall the Guardrails package:

```shell
helm uninstall k8s-guardrails -n rapid7
```

Want to verify the uninstallation first?
You can use the --dry-run flag to preview what will be uninstalled before committing to the command.
Remove the remaining Guardrails components:

```shell
kubectl delete role <NAME_OF_ROLE> -n <NAMESPACE>
kubectl delete rolebinding <NAME_OF_ROLEBINDING> -n <NAMESPACE>
kubectl delete job k8s-guardrails-manual-001 -n rapid7
kubectl delete -f r7-node-scan.yaml [-n r7-node-scan]
kubectl delete namespace r7-node-scan
```
What's Next?
Refer to the Kubernetes Security Guardrails for an overview of this feature and a summary of the prerequisites.
Jump to the Using Kubernetes Security Guardrails page to view details on using the feature and exploring the imported data.