Profile applicability: Level 1
If kubelet is running, ensure that the file ownership of its kubeconfig file is set to root:root.
The kubeconfig file for kubelet controls various parameters for the kubelet service on the worker node. Set its ownership to root:root to maintain the integrity of the file.
Note
See the GKE documentation for the default value.

Impact

Overly permissive file access increases the security risk to the platform.

Audit

Using Google Cloud Console
  1. Go to Kubernetes Engine by visiting Google Cloud Console Kubernetes Engine page.
  2. Click on the desired cluster to open the Details page, then click on the desired Node pool to open the Node pool Details page.
  3. Note the name of the desired node.
  4. Go to VM Instances by visiting the Google Cloud Console VM Instances page.
  5. Find the desired node and click on 'SSH' to open an SSH connection to the node.
Using Command Line
Method 1
SSH to the worker nodes.
To check whether the kubelet service is running:
sudo systemctl status kubelet
The output should show Active: active (running).
Run the following command on each node to find the appropriate kubeconfig file:
ps -ef | grep kubelet
The output of the above command should return something similar to --kubeconfig /var/lib/kubelet/kubeconfig, which is the location of the kubeconfig file.
Run this command to obtain the kubeconfig file ownership:
stat -c %U:%G /var/lib/kubelet/kubeconfig
The output of the above command gives you the kubeconfig file's ownership. Verify that the ownership is set to root:root.
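The path extraction and ownership check above can be combined into a small script. The kubeconfig_path helper below is a hypothetical name; it handles both the --kubeconfig /path and --kubeconfig=/path forms that may appear in the kubelet command line:

```shell
# Hypothetical helper: pull the --kubeconfig value out of a kubelet
# command line (supports "--kubeconfig /path" and "--kubeconfig=/path").
kubeconfig_path() {
  echo "$1" | tr ' ' '\n' | awk '
    $0 == "--kubeconfig" { getline; print; exit }
    sub(/^--kubeconfig=/, "") { print; exit }'
}

# Example input, similar to a line from `ps -ef | grep kubelet`:
line='root 1234 1 0 10:00 ? 00:01:02 /home/kubernetes/bin/kubelet --kubeconfig=/var/lib/kubelet/kubeconfig --v=2'
cfg="$(kubeconfig_path "$line")"
echo "$cfg"              # -> /var/lib/kubelet/kubeconfig
# stat -c %U:%G "$cfg"   # run this on the node; expect root:root
```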
Method 2
Create and Run a Privileged Pod.
You will need to run a pod that is privileged enough to access the host's file system. This can be achieved by deploying a pod that uses the hostPath volume to mount the node's file system into the pod.
Here's an example of a simple pod definition that mounts the root of the host to /host within the pod:
apiVersion: v1
kind: Pod
metadata:
  name: file-check
spec:
  volumes:
  - name: host-root
    hostPath:
      path: /
      type: Directory
  containers:
  - name: nsenter
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: host-root
      mountPath: /host
    securityContext:
      privileged: true
Save this to a file (e.g., file-check-pod.yaml) and create the pod:
kubectl apply -f file-check-pod.yaml
Once the pod is running, you can exec into it to check file ownership on the node:
kubectl exec -it file-check -- sh
From a shell inside the pod, you can access the node's file system through the /host directory and check the ownership of the file:
ls -l /host/var/lib/kubelet/kubeconfig
The output of the above command gives you the kubeconfig file's ownership. Verify that the ownership is set to root:root.
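To make the verification explicit, the comparison can be wrapped in a small helper. This is a sketch assuming the /host mount from the pod definition above; check_owner is a hypothetical name:

```shell
# Hypothetical helper: report whether an owner:group string matches root:root.
check_owner() {
  if [ "$1" = "root:root" ]; then
    echo "PASS"
  else
    echo "FAIL (found: $1)"
  fi
}

# Usage inside the pod, against the node's file system:
#   check_owner "$(stat -c %U:%G /host/var/lib/kubelet/kubeconfig)"
```

When the audit is finished, delete the privileged pod so it does not linger on the cluster: kubectl delete pod file-check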

Remediation

Run the following command (adjusting for the kubeconfig file location on your system) on each worker node. For example:
chown root:root /var/lib/kubelet/kubeconfig
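The remediation can be made idempotent by checking ownership first. The sketch below uses needs_fix as a hypothetical helper name, with the kubeconfig path found during the audit:

```shell
# Hypothetical sketch: only chown when ownership is not already root:root.
needs_fix() {
  [ "$(stat -c %U:%G "$1")" != "root:root" ]
}

# Run as root on each worker node (path from the audit step):
#   if needs_fix /var/lib/kubelet/kubeconfig; then
#     chown root:root /var/lib/kubelet/kubeconfig
#   fi
```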