Kubernetes Node Scanner

Section 4.1 of the CIS Benchmark for Kubernetes (version 1.6.0) includes a set of security checks that require access to information on the file system of Kubernetes cluster nodes. To meet this requirement, you must deploy a thin Rapid7 Node Scanner as a daemonset. This scanner is particularly important for manually managed Kubernetes deployments, which are more prone to deployment mistakes than clusters managed by a cloud provider. The guide below walks you through the installation of the scanner, along with the required configuration.

Prerequisites

  • This scanner is relevant for standard Kubernetes clusters and has been tested on AWS' Elastic Kubernetes Service (EKS), GCP's Google Kubernetes Engine (GKE), and Azure Kubernetes Service (AKS)

    • It is not, however, suitable for clusters with fully-managed node infrastructure, such as AWS EKS on Fargate and GKE Autopilot
  • A Kubernetes cluster that was configured according to the Kubernetes Local Scanner or Kubernetes Remote Scanner documentation

  • Standard clusters (see the first bullet) that are scanned by a local Kubernetes Scanner require the following option to be added to the helm command (the default is false); a full command sketch follows this list:

    bash
    --set Config.HasNodeFsAccess=true
  • Save the Node Scanner YAML code below as a file named r7-node-scan.yaml and ensure it's available wherever you run kubectl against the cluster you're configuring
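
For reference, here is a minimal sketch of passing that option when installing or upgrading the local scanner via helm. The release and chart names are placeholders, not the real values; take those from the Kubernetes Local Scanner documentation for your deployment:

bash
# <release-name> and <chart> are placeholders for your local scanner release.
helm upgrade <release-name> <chart> \
  --reuse-values \
  --set Config.HasNodeFsAccess=true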

Node Scanner YAML

Deployment Limitations

Take into consideration any cluster deployment limitations, such as node affinity, that might require changes in the Node Scanner YAML in order to allow the daemonset pods to run on all nodes.
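
For example, if your nodes carry a custom taint, the daemonset's pod spec needs a matching toleration. This is a hypothetical sketch; the dedicated=scanning taint below is a placeholder for whatever taints your nodes actually use:

yaml
# Hypothetical: nodes tainted with "dedicated=scanning:NoSchedule" would
# reject the scanner pods unless the pod spec tolerates the taint.
tolerations:
  - key: dedicated
    operator: Equal
    value: scanning
    effect: NoSchedule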

yaml
# ****** IMPORTANT. Read this ************
# First prepare a secret in the SAME NAMESPACE the daemonset will be deployed in:
# kubectl -n same-namespace-of-daemonset create secret generic r7-node-scanner-key --from-literal=encKey=$(openssl rand -hex 32)
# If the secret was updated, make sure to restart the pods.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: r7-node-scanner
spec:
  selector:
    matchLabels:
      app: r7-node-scanner
  template:
    metadata:
      labels:
        app: r7-node-scanner
    spec:
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
      ### runAsUser is only possible if K8S config files are visible to that user ###
      containers:
        # Do not modify this container's name. r7-k8s-scanner will be looking for it by name.
        - name: r7-node-scanner
          image: alpine
          command: ["/bin/sh", "-c"]
          args:
            - |
              apk add openssl

              cat > /workdir/r7-node-scanner.sh << "EOF"
              #!/bin/sh

              # POSIX_LINE_MAX is the minimum LINE_MAX value required by the POSIX standard
              POSIX_LINE_MAX=2048
              OUTPUT_VERSION=v0.0.1

              WORK_DIR=/workdir
              RESULTS_FILE=scan.json-lines
              FILE_NAMES=file_names.list

              ENC_KEY=$(cat /mnt/enc-key/encKey)
              ENC_IV=$(openssl rand -hex 16)
              if [ -z "$ENC_KEY" ]; then
                echo "ENC_KEY is empty"
                exit 1
              fi
              if [ -z "$ENC_IV" ]; then
                echo "ENC_IV is empty"
                exit 1
              fi

              while true; do
                START_TIME=$(date +%s)

                mkdir -p $WORK_DIR
                cd $WORK_DIR
                rm -f $RESULTS_FILE
                rm -f $FILE_NAMES

                HOST_FS_ROOT=/mnt/host-fs

                # Collect candidate Kubernetes config file paths from the host file system
                chroot $HOST_FS_ROOT find /etc /var -maxdepth 7 -type f | grep kube | grep -v "fluent\|/var/log\|lib/kubelet/pods\|/var/lib/cni" >> $FILE_NAMES

                for f in $(cat $FILE_NAMES); do
                  # Save ps output containing the file name. Stream directly to file to avoid an overly long command line.
                  printf "{\"psB64\": \"" >> $RESULTS_FILE
                  chroot $HOST_FS_ROOT ps ax | grep -v grep | grep $f | base64 | tr -d '\n' >> $RESULTS_FILE
                  chroot $HOST_FS_ROOT [ -f $f ] && \
                    chroot $HOST_FS_ROOT stat -c "\", \"file\": \"%n\", \"perms\": %a, \"gid\": %g, \"group\": \"%G\", \"uid\": %u, \"user\": \"%U\" }" "$f" >> $RESULTS_FILE
                done

                tar czf $RESULTS_FILE.tar.gz $RESULTS_FILE
                FILES_LINE_COUNT=$(wc -l $RESULTS_FILE | awk '{print $1}')
                rm $RESULTS_FILE

                TGZ_SIZE=$(ls -l $RESULTS_FILE.tar.gz | awk '{print $5}')

                # Encrypt the archive and emit it as base64 lines
                cat $RESULTS_FILE.tar.gz | \
                  openssl enc -e -aes-256-cbc -K $ENC_KEY -iv $ENC_IV | \
                  base64 -w $POSIX_LINE_MAX > $RESULTS_FILE.tgz.base64

                rm $RESULTS_FILE.tar.gz
                BASE64_LINES_COUNT=$(wc -l $RESULTS_FILE.tgz.base64 | awk '{print $1}')

                cat $RESULTS_FILE.tgz.base64
                rm $RESULTS_FILE.tgz.base64

                END_TIME=$(date +%s)
                TOTAL_EXEC_SEC=$(( END_TIME - START_TIME ))

                SUMMARY=$(echo "{
                  'ver': '$OUTPUT_VERSION',
                  'totalSec': $TOTAL_EXEC_SEC,
                  'filesLineCount': $FILES_LINE_COUNT,
                  'tgzSize': $TGZ_SIZE,
                  'ivHex': '$ENC_IV',
                  'base64LineCount': $BASE64_LINES_COUNT,
                  'nodeName': '$NODE_NAME',
                  'podName': '$POD_NAME'
                }" | tr "'" '"')

                echo $SUMMARY

                touch $LAST_SUCCESS_FLAG_FILE
                sleep $EXECUTION_INTERVAL_SECONDS
              done

              EOF

              chmod 550 /workdir/r7-node-scanner.sh
              # Run with exec to keep the command line clean (without exec, the entire script preparation would show up in "ps ax")
              exec /workdir/r7-node-scanner.sh
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: LAST_SUCCESS_FLAG_FILE
              value: "/workdir/last-success-flag-file"
            - name: EXECUTION_INTERVAL_SECONDS
              value: "360"
          volumeMounts:
            - name: host-fs
              mountPath: /mnt/host-fs
              readOnly: true
            - name: workdir
              mountPath: /workdir
            - name: r7-node-scanner-key
              mountPath: /mnt/enc-key
      volumes:
        - name: host-fs
          hostPath:
            path: /
        - name: workdir
          emptyDir: {}
        - name: r7-node-scanner-key
          secret:
            secretName: r7-node-scanner-key
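
The scanner prints each scan to the pod log as base64 lines of an AES-256-CBC-encrypted tar archive, followed by a JSON summary line that carries the IV (ivHex). The configured Rapid7 scanner consumes this output automatically, but for troubleshooting you can decode a captured scan by hand. This is a minimal sketch, assuming the r7-node-scan namespace used below and a scan.b64 file containing only the base64 lines of one scan:

bash
# Fetch the shared key from the cluster secret (the hex string created during installation).
ENC_KEY=$(kubectl -n r7-node-scan get secret r7-node-scanner-key -o jsonpath='{.data.encKey}' | base64 -d)

# IV_HEX is the "ivHex" value from the JSON summary line of the same scan.
base64 -d scan.b64 | openssl enc -d -aes-256-cbc -K "$ENC_KEY" -iv "$IV_HEX" | tar xz
# Extraction yields the plain scan.json-lines results file.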

Installation

Cluster Administrator Required

Due to its required permissions, the installation needs to be performed locally on each cluster by the cluster administrator.
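
As a quick sanity check, you can confirm that your current credentials can create the objects this guide uses. A sketch, assuming the r7-node-scan namespace chosen in step 1:

bash
kubectl auth can-i create namespace
kubectl auth can-i create secret -n r7-node-scan
kubectl auth can-i create daemonset -n r7-node-scan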

  1. Create a namespace of your choice for the Rapid7 Node Scanner. For this guide, we used r7-node-scan as an example:

    bash
    kubectl create namespace r7-node-scan
  2. Create a secret to encrypt the scanner data. This secret will be used by the cluster's configured scanner (local or remote) to decrypt and consume the scan results, so anyone with access to it can read those results. Ensure that the openssl command is installed, then create the secret with the following command:

    bash
    kubectl -n r7-node-scan create secret generic r7-node-scanner-key --from-literal=encKey=$(openssl rand -hex 32)
  3. After ensuring the Node Scanner YAML file is available to the cluster, deploy the Rapid7 Node Scanner by applying the file using the following command:

Isolated Environment

The Node Scanner runs apk add openssl on startup, which requires internet access. In isolated environments, you can replace the image with your own Alpine image that has openssl pre-installed.

bash
kubectl -n r7-node-scan apply -f r7-node-scan.yaml
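
Once applied, you can verify the deployment; a short sketch, assuming the r7-node-scan namespace. The daemonset should report one ready pod per schedulable node, and the secret's key should decode to 64 hex characters (32 random bytes):

bash
# Confirm the daemonset rolled out and a pod is running on each node.
kubectl -n r7-node-scan rollout status daemonset/r7-node-scanner
kubectl -n r7-node-scan get pods -o wide

# Confirm the encryption key decodes to the expected length (prints 64).
kubectl -n r7-node-scan get secret r7-node-scanner-key -o jsonpath='{.data.encKey}' | base64 -d | wc -c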

To complement the Rapid7 Kubernetes Node Scanner, several Insights are available to check compliance throughout your cloud environments. Once the Node Scanner is up and running, you can use the Ensure that r7-node-scanner is deployed Insight (introduced in InsightCloudSec version 23.7.25) to see which clusters have (or do not have) the Node Scanner deployed. The CIS - Kubernetes 1.6.0 Compliance Pack has also been updated to feature the following Insights (which require the Node Scanner to be deployed):

  • Ensure that the kubelet service file permissions are set to 644 or more restrictive
  • Ensure that the kubelet service file ownership is set to root:root
  • If proxy kubeconfig file exists ensure permissions are set to 644 or more restrictive
  • If proxy kubeconfig file exists ensure ownership is set to root:root
  • Ensure that the --kubeconfig kubelet.conf file permissions are set to 644 or more restrictive
  • Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root
  • Ensure that the certificate authorities file permissions are set to 644 or more restrictive
  • Ensure that the client certificate authorities file ownership is set to root:root
  • Ensure that the kubelet --config configuration file has permissions set to 644 or more restrictive
  • Ensure that the kubelet --config configuration file ownership is set to root:root