Title: [Q13-Q32] CKS Dumps Free Test Engine Player Verified Updated [Oct 11, 2024]

Q&As with Explanations Verified & Correct Answers

QUESTION 13
SIMULATION
Create a network policy named allow-np that allows Pods in the namespace staging to connect to port 80 of other Pods in the same namespace.
Ensure that the Network Policy:
1. Does not allow access to Pods not listening on port 80.
2. Does not allow access from Pods not in the namespace staging.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-np
  namespace: staging
spec:
  podSelector: {}            # selects all Pods in the staging namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}        # only Pods in the same namespace (staging)
    ports:
    - protocol: TCP          # ingress traffic allowed only on port 80
      port: 80

QUESTION 14
Using the runtime detection tool Falco, analyse the container behaviour for at least 30 seconds, using filters that detect newly spawning and executing processes. Store the incident file at /opt/falco-incident.txt, containing the detected incidents, one per line, in the format:
[timestamp],[uid],[user-name],[processName]

QUESTION 15
SIMULATION
Given an existing Pod named nginx-pod running in the namespace test-system, fetch the service-account-name used and put the content in /candidate/KSC00124.txt.
Create a new Role named dev-test-role in the namespace test-system, which can perform update operations on resources of type namespaces.
Create a new RoleBinding named dev-test-role-binding, which binds the newly created Role to the Pod's ServiceAccount (found in the nginx Pod running in namespace test-system).
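Question 15 ships without a worked answer. A minimal sketch of the Role and RoleBinding, assuming (hypothetically) that the ServiceAccount found in the Pod spec turns out to be named default, might look like:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dev-test-role
  namespace: test-system
rules:
- apiGroups: [""]              # core API group
  resources: ["namespaces"]
  verbs: ["update"]            # only update, per the task
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-test-role-binding
  namespace: test-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dev-test-role
subjects:
- kind: ServiceAccount
  name: default                # hypothetical; substitute the name fetched from the Pod
  namespace: test-system
```

The ServiceAccount name itself can be fetched first with something like kubectl get pod nginx-pod -n test-system -o jsonpath='{.spec.serviceAccountName}' and redirected into /candidate/KSC00124.txt.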
QUESTION 16
SIMULATION
Before making any changes, build the Dockerfile with tag base:v1.
Now analyze and edit the given Dockerfile (based on ubuntu:16.04), fixing two instructions present in the file. Check from a security aspect and a reduce-size point of view.
Dockerfile:
FROM ubuntu:latest
RUN apt-get update -y
RUN apt install nginx -y
COPY entrypoint.sh /
RUN useradd ubuntu
ENTRYPOINT ["/entrypoint.sh"]
USER ubuntu
entrypoint.sh:
#!/bin/bash
echo "Hello from CKS"
After fixing the Dockerfile, build the docker image with the tag base:v2.
To verify: check the size of the image before and after the build.

QUESTION 17
You must complete this task on the following cluster/nodes:
Cluster: immutable-cluster
Master node: master1
Worker node: worker1
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context immutable-cluster
Context: It is best practice to design containers to be stateless and immutable.
Task: Inspect Pods running in namespace prod and delete any Pod that is either not stateless or not immutable.
Use the following strict interpretation of stateless and immutable:
1. Pods being able to store data inside containers must be treated as not stateless. Note: You don't have to worry whether data is actually stored inside containers or not already.
2. Pods being configured to be privileged in any way must be treated as potentially not stateless or not immutable.
Reference:
https://kubernetes.io/docs/concepts/policy/pod-security-policy/
https://cloud.google.com/architecture/best-practices-for-operating-containers

QUESTION 18
SIMULATION
Fix all issues via configuration and restart the affected components to ensure the new settings take effect.
Fix all of the following violations that were found against the API server:
a. Ensure the --authorization-mode argument includes RBAC
b. Ensure the --authorization-mode argument includes Node
c.
Ensure that the --profiling argument is set to false
Fix all of the following violations that were found against the Kubelet:
a. Ensure the --anonymous-auth argument is set to false.
b. Ensure that the --authorization-mode argument is set to Webhook.
Fix all of the following violations that were found against the ETCD:
a. Ensure that the --auto-tls argument is not set to true
Hint: Make use of the kube-bench tool.

API server:
Ensure the --authorization-mode argument includes RBAC.
Turn on Role Based Access Control. Role Based Access Control (RBAC) allows fine-grained control over the operations that different entities can perform on different objects in the cluster. It is recommended to use the RBAC authorization mode.
Fix - Buildtime
Kubernetes:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
+   - kube-apiserver
+   - --authorization-mode=RBAC,Node
    image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver-should-pass
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/kubernetes/
      name: k8s
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: certs
    - mountPath: /etc/pki
      name: pki
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes
    name: k8s
  - hostPath:
      path: /etc/ssl/certs
    name: certs
  - hostPath:
      path: /etc/pki
    name: pki

Ensure the --authorization-mode argument includes Node.
Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the --authorization-mode parameter to a value that includes Node.
--authorization-mode=Node,RBAC
Audit:
/bin/ps -ef | grep kube-apiserver | grep -v grep
Expected result: 'Node,RBAC' has 'Node'

Ensure that the --profiling argument is set to false.
Remediation: Edit the API server pod specification file
/etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the below parameter.
--profiling=false
Audit:
/bin/ps -ef | grep kube-apiserver | grep -v grep
Expected result: 'false' is equal to 'false'

Fix all of the following violations that were found against the Kubelet:
1) Ensure the --anonymous-auth argument is set to false.
Remediation: If using a Kubelet config file, edit the file to set authentication: anonymous: enabled to false. If using executable arguments, edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and set the below parameter in the KUBELET_SYSTEM_PODS_ARGS variable.
--anonymous-auth=false
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
Audit:
/bin/ps -fC kubelet
Audit Config:
/bin/cat /var/lib/kubelet/config.yaml
Expected result: 'false' is equal to 'false'
2) Ensure that the --authorization-mode argument is set to Webhook.
Audit:
docker inspect kubelet | jq -e '.[0].Args[] | match("--authorization-mode=Webhook").string'
Returned value: --authorization-mode=Webhook

Fix all of the following violations that were found against the ETCD:
a. Ensure that the --auto-tls argument is not set to true.
Do not use self-signed certificates for TLS. etcd is a highly-available key value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should not be available to unauthenticated clients.
You should enable client authentication via valid certificates to secure access to the etcd service.
Remediation: Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master node and either remove the --auto-tls parameter or set it to --auto-tls=false.
Fix - Buildtime
Kubernetes (note: the example below, named etcd-should-fail, still sets --auto-tls=true and therefore fails the check):
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
+   - etcd
+   - --auto-tls=true
    image: k8s.gcr.io/etcd-amd64:3.2.18
    imagePullPolicy: IfNotPresent
    livenessProbe:
      exec:
        command:
        - /bin/sh
        - -ec
        - ETCDCTL_API=3 etcdctl --endpoints=https://[192.168.22.9]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo
      failureThreshold: 8
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: etcd-should-fail
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
status: {}

QUESTION 19
Context:
A default-deny NetworkPolicy avoids accidentally exposing a Pod in a namespace that doesn't have any other NetworkPolicy defined.
Task:
Create a new default-deny NetworkPolicy named defaultdeny in the namespace testing for all traffic of type Egress.
The new NetworkPolicy must deny all Egress traffic in the namespace testing.
Apply the newly created default-deny NetworkPolicy to all Pods running in namespace testing.

QUESTION 20
Given an existing Pod named nginx-pod running in the namespace test-system, fetch the service-account-name used and put the content in /candidate/KSC00124.txt.
Create a new Role named dev-test-role in the namespace test-system, which can perform update operations on resources of type namespaces.
Create a new RoleBinding named dev-test-role-binding, which binds the newly created Role to the Pod's ServiceAccount (found in the nginx Pod running in namespace test-system).

QUESTION 21
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context prod-account
Context:
A Role bound to a Pod's ServiceAccount grants overly permissive permissions. Complete the following tasks to reduce the set of permissions.
Task:
Given an existing Pod named web-pod running in the namespace database:
1. Edit the existing Role bound to the Pod's ServiceAccount test-sa to only allow performing get operations, only on resources of type Pods.
2. Create a new Role named test-role-2 in the namespace database, which only allows performing update operations, only on resources of type statefulsets.
3. Create a new RoleBinding named test-role-2-bind binding the newly created Role to the Pod's ServiceAccount.
Note: Don't delete the existing RoleBinding.

$ k edit role test-role -n database
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: "2021-06-04T11:12:23Z"
  name: test-role
  namespace: database
  resourceVersion: "1139"
  selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/database/roles/test-role
  uid: 49949265-6e01-499c-94ac-5011d6f6a353
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get        # was "*"; replaced so only get is allowed
$ k create role test-role-2 -n database --resource statefulset --verb update
$ k create rolebinding test-role-2-bind -n database --role test-role-2 --serviceaccount=database:test-sa

Explanation:
[desk@cli]$ k get pods -n database
NAME      READY   STATUS    RESTARTS   AGE   LABELS
web-pod   1/1     Running   0          34s   run=web-pod
[desk@cli]$ k get roles -n database
test-role
[desk@cli]$ k edit role test-role -n database
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: "2021-06-13T11:12:23Z"
  name: test-role
  namespace: database
  resourceVersion: "1139"
  selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/database/roles/test-role
  uid:
49949265-6e01-499c-94ac-5011d6f6a353
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get        # replace "*" with get
[desk@cli]$ k create role test-role-2 -n database --resource statefulset --verb update
role.rbac.authorization.k8s.io/test-role-2 created
[desk@cli]$ k create rolebinding test-role-2-bind -n database --role test-role-2 --serviceaccount=database:test-sa
rolebinding.rbac.authorization.k8s.io/test-role-2-bind created
Reference: https://kubernetes.io/docs/reference/access-authn-authz/rbac/

QUESTION 22
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context test-account
Task: Enable audit logs in the cluster.
To do so, enable the log backend, and ensure that:
1. logs are stored at /var/log/kubernetes/logs.txt
2. log files are retained for 5 days
3. at maximum, a number of 10 old audit log files are retained
A basic policy is provided at /etc/kubernetes/log-policy/audit-policy.yaml. It only specifies what not to log.
Note: The base policy is located on the cluster's master node.
Edit and extend the basic policy to log:
1. Nodes changes at RequestResponse level
2. The request body of persistentvolumes changes in the namespace frontend
3.
ConfigMap and Secret changes in all namespaces at the Metadata level
Also, add a catch-all rule to log all other requests at the Metadata level.
Note: Don't forget to apply the modified policy.

$ vim /etc/kubernetes/log-policy/audit-policy.yaml
- level: RequestResponse
  userGroups: ["system:nodes"]
- level: Request
  resources:
  - group: ""        # core API group
    resources: ["persistentvolumes"]
  namespaces: ["frontend"]
- level: Metadata
  resources:
  - group: ""
    resources: ["configmaps", "secrets"]
- level: Metadata
$ vim /etc/kubernetes/manifests/kube-apiserver.yaml
Add these:
- --audit-policy-file=/etc/kubernetes/log-policy/audit-policy.yaml
- --audit-log-path=/var/log/kubernetes/logs.txt
- --audit-log-maxage=5
- --audit-log-maxbackup=10

Explanation:
[desk@cli] $ ssh master1
[master1@cli] $ vim /etc/kubernetes/log-policy/audit-policy.yaml
apiVersion: audit.k8s.io/v1   # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
- "RequestReceived"
rules:
# Don't log watch requests by the "system:kube-proxy" on endpoints or services
- level: None
  users: ["system:kube-proxy"]
  verbs: ["watch"]
  resources:
  - group: ""        # core API group
    resources: ["endpoints", "services"]
# Don't log authenticated requests to certain non-resource URL paths.
- level: None
  userGroups: ["system:authenticated"]
  nonResourceURLs:
  - "/api*"          # Wildcard matching.
  - "/version"
# Add your changes below
- level: RequestResponse
  userGroups: ["system:nodes"]             # rule for nodes
- level: Request
  resources:
  - group: ""        # core API group
    resources: ["persistentvolumes"]       # rule for persistentvolumes
  namespaces: ["frontend"]                 # only in the frontend namespace
- level: Metadata
  resources:
  - group: ""        # core API group
    resources: ["configmaps", "secrets"]   # rule for configmaps & secrets
- level: Metadata                          # catch-all for everything else
[master1@cli] $ vim /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint:
10.0.0.5:6443
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=10.0.0.5
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --audit-policy-file=/etc/kubernetes/log-policy/audit-policy.yaml   # Add this
    - --audit-log-path=/var/log/kubernetes/logs.txt                      # Add this
    - --audit-log-maxage=5                                               # Add this
    - --audit-log-maxbackup=10                                           # Add this
...output truncated
Note: the log volume and policy volume are already mounted in /etc/kubernetes/manifests/kube-apiserver.yaml, so there is no need to mount them.
Reference: https://kubernetes.io/docs/tasks/debug-application-cluster/audit/

QUESTION 23
Context:
Cluster: gvisor
Master node: master1
Worker node: worker1
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context gvisor
Context: This cluster has been prepared to support the runtime handler runsc, as well as the traditional one.
Task:
Create a RuntimeClass named not-trusted using the prepared runtime handler named runsc.
Update all Pods in the namespace server to run on the new runtime.

Find all the Pods/Deployments and edit the runtimeClassName parameter to not-trusted under spec:
[desk@cli] $ k edit deploy nginx
spec:
  runtimeClassName: not-trusted
# Add this

Explanation:
[desk@cli] $ vim runtime.yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: not-trusted
handler: runsc
[desk@cli] $ k apply -f runtime.yaml
[desk@cli] $ k get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-6798fc88e8-chp6r   1/1     Running   0          11m
nginx-6798fc88e8-fs53n   1/1     Running   0          11m
nginx-6798fc88e8-ndved   1/1     Running   0          11m
[desk@cli] $ k get deploy
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   3/3     3            3           5m
[desk@cli] $ k edit deploy nginx

QUESTION 24
You must complete this task on the following cluster/nodes:
Cluster: apparmor
Master node: master
Worker node: worker1
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context apparmor
Given: AppArmor is enabled on the worker1 node.
Task:
On the worker1 node:
1. Enforce the prepared AppArmor profile located at /etc/apparmor.d/nginx
2. Edit the prepared manifest file located at /home/cert_masters/nginx.yaml to apply the AppArmor profile
3. Create the Pod using this manifest

[desk@cli] $ ssh worker1
[worker1@cli] $ apparmor_parser -q /etc/apparmor.d/nginx
[worker1@cli] $ aa-status | grep nginx
nginx-profile-1
[worker1@cli] $ logout
[desk@cli] $ vim nginx-deploy.yaml
Add these lines under metadata:
annotations:    # Add this line
  container.apparmor.security.beta.kubernetes.io/<container-name>: localhost/nginx-profile-1
[desk@cli] $ kubectl apply -f nginx-deploy.yaml
pod/nginx-deploy created
Reference: https://kubernetes.io/docs/tutorials/clusters/apparmor/

QUESTION 25
Context:
This cluster uses containerd as CRI runtime.
Containerd's default runtime handler is runc.
Containerd has been prepared to support an additional runtime handler, runsc (gVisor).
Task:
Create a RuntimeClass named sandboxed using the prepared runtime handler named runsc.
Update all Pods in the namespace server to run on gVisor.

QUESTION 26
SIMULATION
Create a RuntimeClass named untrusted using the prepared runtime handler named runsc.
Create a Pod of image alpine:3.13.2 in the Namespace default to run on the gVisor runtime class.
Verify: Exec the Pod and run dmesg; you will see output like this:

QUESTION 27
Create a Pod named nginx-pod inside the namespace testing. Create a service for the nginx-pod named nginx-svc, using the ingress of your choice; run the ingress on TLS, secure port.

QUESTION 28
Context:
A Role bound to a Pod's ServiceAccount grants overly permissive permissions. Complete the following tasks to reduce the set of permissions.
Task:
Given an existing Pod named web-pod running in the namespace security:
Edit the existing Role bound to the Pod's ServiceAccount sa-dev-1 to only allow performing watch operations, only on resources of type services.
Create a new Role named role-2 in the namespace security, which only allows performing update operations, only on resources of type namespaces.
Create a new RoleBinding named role-2-binding binding the newly created Role to the Pod's ServiceAccount.

QUESTION 29
Create a RuntimeClass named untrusted using the prepared runtime handler named runsc.
Create a Pod of image alpine:3.13.2 in the Namespace default to run on the gVisor runtime class.
Verify: Exec the Pod and run dmesg; you will see output like this:

QUESTION 30
Cluster: scanner
Master node: controlplane
Worker node: worker1
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context scanner
Given: You may use Trivy's documentation.
Task: Use the Trivy open-source container scanner to detect images with severe vulnerabilities used by Pods in the namespace nato.
Look for images with High or Critical severity vulnerabilities and delete the Pods that use those images.
Note: Trivy is pre-installed on the cluster's master node. Use the cluster's master node to run Trivy.

QUESTION 31
Given an existing Pod named nginx-pod running in the namespace test-system, fetch the service-account-name used and put the content in /candidate/KSC00124.txt.
Create a new Role named dev-test-role in the namespace test-system, which can perform update operations on resources of type namespaces.
Create a new RoleBinding named dev-test-role-binding, which binds the newly created Role to the Pod's ServiceAccount (found in the nginx Pod running in namespace test-system).

QUESTION 32
SIMULATION
Create a network policy named restrict-np to restrict access to the Pod nginx-test running in namespace testing.
Only allow the following Pods to connect to Pod nginx-test:
1. Pods in the namespace default
2. Pods with label version: v1 in any namespace
Make sure to apply the network policy.
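Question 32 ships without a worked answer. A minimal sketch of restrict-np, assuming (hypothetically) that nginx-test carries the label run: nginx-test and that the default namespace has the standard kubernetes.io/metadata.name: default label, might look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-np
  namespace: testing
spec:
  podSelector:
    matchLabels:
      run: nginx-test          # hypothetical label; match the actual labels on nginx-test
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: default   # all Pods in namespace default
    - namespaceSelector: {}                      # any namespace...
      podSelector:
        matchLabels:
          version: v1                            # ...but only Pods labeled version: v1
```

Note that the two entries under from are separate list items, so they are OR-ed, while the namespaceSelector and podSelector inside the second item are AND-ed. Apply it with kubectl apply -f.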
Verified CKS dumps Q&As Latest CKS Download: https://www.actualtests4sure.com/CKS-test-questions.html
Post date: 2024-10-11 12:45:31