Monday, November 29, 2010

note cks temp

 



Contents

 

CKS

Chapter 3: Cluster Setup and Hardening

Kube-bench

Trivy

ufw

SecurityContext

Limit user/Group

Capability

Seccomp

Log audit

 

Certification exam scope

Cluster Setup (10%)
Use Network security policies to restrict cluster level access
Use CIS benchmark to review the security configuration of Kubernetes components (etcd, kubelet, kubedns, kubeapi)
Properly set up Ingress objects with security control
Protect node metadata and endpoints
Minimize use of, and access to, GUI elements
Verify platform binaries before deploying

Cluster Hardening (15%)
Restrict access to Kubernetes API
Use Role Based Access Controls to minimize exposure
Exercise caution in using service accounts e.g. disable defaults, minimize permissions on newly created ones
Update Kubernetes frequently

System Hardening (15%)
Minimize host OS footprint (reduce attack surface)
Minimize IAM roles
Minimize external access to the network
Appropriately use kernel hardening tools such as AppArmor, seccomp

Minimize Microservice Vulnerabilities (20%)
Setup appropriate OS level security domains
Manage Kubernetes secrets
Use container runtime sandboxes in multi-tenant environments (e.g. gvisor, kata containers)
Implement pod to pod encryption by use of mTLS

Supply Chain Security (20%)
Minimize base image footprint
Secure your supply chain: whitelist allowed registries, sign and validate images
Use static analysis of user workloads (e.g.Kubernetes resources, Docker files)
Scan images for known vulnerabilities

Monitoring, Logging and Runtime Security (20%)
Perform behavioral analytics of syscall process and file activities at the host and container level to detect malicious activities
Detect threats within physical infrastructure, apps, networks, data, users and workloads
Detect all phases of attack regardless where it occurs and how it spreads
Perform deep analytical investigation and identification of bad actors within environment
Ensure immutability of containers at runtime
Use Audit Logs to monitor access

 

---

CKS

 

For comprehensive security we need to protect all 4 layers: Physical -> K8s cluster -> Container -> Code.

 

 

Chapter 3: Cluster Setup and Hardening

 

 

 

Kube-bench

Download: https://github.com/aquasecurity/kube-bench/releases

wget https://github.com/aquasecurity/kube-bench/releases/download/v0.9.2/kube-bench_0.9.2_linux_amd64.deb

dpkg -i kube-bench_0.9.2_linux_amd64.deb

 

kube-bench run                   # scan everything

kube-bench run --targets etcd    # or master|node|controlplane|etcd|policies

kube-bench --config-dir /etc/kube-bench/cfg --config /etc/kube-bench/cfg/config.yaml -v10

 

To understand exactly how kube-bench checks each item, you can download the .tar.gz release and extract it. It contains the config set under /etc/kube-bench/cfg with the commands/tests it runs against the Kubernetes cluster.
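After remediating a failed item, a single check can be re-run by its CIS id instead of repeating the whole scan. A minimal sketch, assuming the --check and --json flags and check 1.2.1 as an example id:

kube-bench run --targets master --check 1.2.1        # re-run only one CIS check after fixing it
kube-bench run --targets node --json > node.json     # write results as JSON for later review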

 

 

Trivy

Download: https://github.com/aquasecurity/trivy/releases

wget https://github.com/aquasecurity/trivy/releases/download/v0.57.1/trivy_0.57.1_Linux-64bit.deb

dpkg -i trivy_0.57.1_Linux-64bit.deb

trivy image nginx:1.26.0

trivy k8s --report summary kubernetes-admin@kubernetes

trivy image --severity HIGH,CRITICAL nginx:1.26.0
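For exam-style offline tasks, an image tarball can be scanned directly and the report saved to a file. A minimal sketch, assuming the --input and --output flags and the file names below as examples:

docker save nginx:1.26.0 -o nginx.tar                                        # export the image to a tar archive
trivy image --input nginx.tar                                                # scan the saved archive
trivy image --severity HIGH,CRITICAL --output /root/scan.txt nginx:1.26.0   # write the report to a file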

 

 

 

 

ufw

#Refer: https://blog.rtsp.us/ufw-uncomplicated-firewall-cheat-sheet-a9fe61933330

#Refer:  https://manpages.ubuntu.com/manpages/oracular/en/man8/ufw.8.html

ufw enable|disable|reload

ufw show added

ufw show listening

ufw status [verbose|numbered]

 

# Set the default policy for ALL incoming/outgoing/routed traffic

ufw default allow|deny|reject [incoming|outgoing|routed]

ufw default reject incoming

ufw default allow outgoing

ufw default deny routed

 

# Basic rule

ufw allow 80/tcp

ufw allow ssh|http|https

 

# Full UFW rule syntax, arguments in this order

ufw [rule]

  [delete] [insert NUM] [prepend]

  allow|deny|reject|limit

  [in|out [on INTERFACE]]

  [log|log-all]

  [proto PROTOCOL]

  [from ADDRESS [port PORT | app APPNAME ]]

  [to ADDRESS [port PORT | app APPNAME ]]

  [comment COMMENT]

 

# Example:

## specific incoming interface

ufw allow in on eth0 proto tcp to any port 22

ufw allow in on eth0 to any port ssh

## specific source ip

ufw allow from 192.168.1.0/24 proto tcp to any port 22

ufw allow from 172.16.1.10 proto tcp to any port 80

ufw allow from 172.16.1.10 proto tcp to any port 443

## or both

ufw allow in on eth0 from 192.168.1.0/24 to any port 22
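The same pattern applies when locking down a Kubernetes node: expose only the cluster ports to the node subnet. A sketch assuming the standard ports (6443 kube-apiserver, 2379-2380 etcd, 10250 kubelet) and 192.168.1.0/24 as an example node network:

ufw allow from 192.168.1.0/24 proto tcp to any port 6443        # kube-apiserver
ufw allow from 192.168.1.0/24 proto tcp to any port 2379:2380   # etcd (control-plane nodes only)
ufw allow from 192.168.1.0/24 proto tcp to any port 10250       # kubelet API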



# Enable ufw logging and set the log level

ufw logging on|off|LEVEL

ufw logging full

tail -f /var/log/ufw.log

 

#Other

ufw show REPORT

ufw app list|info|default|update

ufw [delete] [insert NUM] [prepend] allow|deny|reject|limit [in|out] [log|log-all] [ PORT[/PROTOCOL] | APPNAME ] [comment COMMENT]

ufw [rule] [delete] [insert NUM] [prepend] allow|deny|reject|limit [in|out [on INTERFACE]] [log|log-all] [proto PROTOCOL] [from ADDRESS [port PORT | app APPNAME ]] [to ADDRESS [port PORT | app APPNAME ]] [comment COMMENT]

ufw route [delete] [insert NUM] [prepend] allow|deny|reject|limit [in|out on INTERFACE] [log|log-all] [proto PROTOCOL] [from ADDRESS [port PORT | app APPNAME]] [to ADDRESS [port PORT | app APPNAME]] [comment COMMENT]

ufw delete NUM

 

#Reset

ufw reset



SecurityContext

Limit user/Group

 

Capability

 

Seccomp


Log audit
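A sketch of enabling kube-apiserver audit logging: an audit policy file plus the flags that point to it (paths, namespaces, and retention values are examples; the policy file and log directory must also be mounted into the kube-apiserver static Pod):

# /etc/kubernetes/audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata                  # log who touched secrets, without request bodies
  resources:
  - group: ""
    resources: ["secrets"]
- level: RequestResponse           # full detail for an example namespace
  namespaces: ["prod"]
- level: None                      # drop everything else

# Flags added to /etc/kubernetes/manifests/kube-apiserver.yaml
- --audit-policy-file=/etc/kubernetes/audit-policy.yaml
- --audit-log-path=/var/log/kubernetes/audit/audit.log
- --audit-log-maxage=30
- --audit-log-maxbackup=10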


 https://blog.csdn.net/sinat_33076015/category_12426913.html

https://ai-feier.github.io/p/2024-cks-%E9%A2%98%E5%BA%93/

https://github.com/jayendrapatil/kubernetes-exercises/blob/main/topics/README.md



Wednesday, March 3, 2010

temp-note-k8s-v2

 https://iceburn.medium.com/kubectl-useful-commands-f5f47c0773f


Kubectl Useful Commands

Kubernetes Shortcuts

Backup

root@vagrant:/home/vagrant# kubectl get all -A -o yaml > backup.yaml

Explain

root@vagrant:/home/vagrant# kubectl explain sc --recursive | less

Pods

#List Pod
root@vagrant:/home/vagrant# kubectl get pods
root@vagrant:/home/vagrant# kubectl get pods -o wide
root@vagrant:/home/vagrant# kubectl get pods -n kube-system
root@vagrant:/home/vagrant# kubectl get pods --selector app=test-application,env=develop
root@vagrant:/home/vagrant# kubectl get pods -l app=test-application,env=develop
root@vagrant:/home/vagrant# kubectl get pods --all-namespaces
root@vagrant:/home/vagrant# kubectl get pods --show-labels
#Pod Status
root@vagrant:/home/vagrant# kubectl describe pod mypod
#Create Pod
root@vagrant:/home/vagrant# kubectl run mypod --image nginx
#Edit Pod
root@vagrant:/home/vagrant# kubectl edit pod mypod
root@vagrant:/home/vagrant# kubectl get pod mypod -o yaml > mypod.yaml
#Create Pod from YML file
root@vagrant:/home/vagrant# kubectl create -f mypod.yml
root@vagrant:/home/vagrant# kubectl apply -f mypod.yml
#Delete Pod
root@vagrant:/home/vagrant# kubectl delete pod mypod

ReplicaSet

#Create ReplicaSet
root@vagrant:/home/vagrant# wget https://kubernetes.io/examples/controllers/frontend.yaml
root@vagrant:/home/vagrant# cat frontend.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
root@vagrant:/home/vagrant# kubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml
#Get ReplicaSet
root@vagrant:/home/vagrant# kubectl get rs
#Delete ReplicaSet
root@vagrant:/home/vagrant# kubectl delete rs frontend

Deployment

#Scale Deployment
root@vagrant:/home/vagrant# kubectl replace -f application.yml
root@vagrant:/home/vagrant# kubectl scale --replicas=10 -f application.yml
root@vagrant:/home/vagrant# kubectl scale --replicas=10 replicaset application
#Generate YML File From Deployment
root@vagrant:/home/vagrant# kubectl create deployment --image=nginx nginx --replicas=2 --dry-run=client -o yaml > nginx.yaml
root@vagrant:/home/vagrant# kubectl create deployment httpd-name --image=httpd
root@vagrant:/home/vagrant# kubectl scale deployment httpd-name --replicas=10
#Rollout
root@vagrant:/home/vagrant# kubectl rollout status deployment/httpd-name
root@vagrant:/home/vagrant# kubectl rollout history deployment/httpd-name --revision=1
root@vagrant:/home/vagrant# kubectl rollout undo deployment/httpd-name

Configuration Examples

apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: myrole
spec:
  containers:
  - name: nginx
    image: nginx
    env:
    - name: DB_NAME
      value: MyDB
    - name: DB_URL
      valueFrom:
        configMapKeyRef:
          name: config-url
          key: db_url
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: config-passwd
          key: db_password
#Create ConfigMap
root@vagrant:/home/vagrant# kubectl create configmap testconfigmap --from-literal=TestKey1=TestValue1 --from-literal=TestKey2=TestValue2
root@vagrant:/home/vagrant# kubectl create configmap testconfigmap --from-file=/opt/test_file
#Test
root@vagrant:/home/vagrant# kubectl get configmaps
root@vagrant:/home/vagrant# kubectl describe configmaps
root@vagrant:/home/vagrant# kubectl describe configmap testconfigmap
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: myrole
spec:
  containers:
  - name: nginx
    image: nginx
    envFrom:
    - configMapRef:
        name: testconfigmap
#Create Secrets
root@vagrant:/home/vagrant# kubectl create secret generic testsecret --from-literal=Key1=Value1 --from-literal=Key2=Value2
root@vagrant:/home/vagrant# kubectl create secret generic testsecret --from-file=/opt/secret
#Test
root@vagrant:/home/vagrant# kubectl get secrets
root@vagrant:/home/vagrant# kubectl describe secrets
root@vagrant:/home/vagrant# kubectl get secret testsecret
root@vagrant:/home/vagrant# kubectl describe secret testsecret
root@vagrant:/home/vagrant# kubectl get secret testsecret -o wide
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: myrole
spec:
  containers:
  - name: nginx
    image: nginx
    envFrom:
    - secretRef:
        name: testsecret
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: myrole
spec:
  securityContext:              # pod level: applies to all containers
    runAsUser: 1000
  containers:
  - name: nginx
    image: nginx
    command: ["printenv"]
    args: ["HOSTNAME"]
    securityContext:            # container level overrides pod level
      runAsUser: 2000
      capabilities:             # capabilities can only be set per container
        add: ["NET_ADMIN"]      # must be a valid Linux capability name
#Create Service Account
root@vagrant:/home/vagrant# kubectl create serviceaccount testsa
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: myrole
spec:
  serviceAccountName: testsa    # preferred over the deprecated serviceAccount field
  containers:
  - name: nginx
    image: nginx
    envFrom:
    - secretRef:
        name: testsecret
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: myrole
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:
        memory: "1Mi"
        cpu: 0.2
      limits:
        memory: "1Gi"
        cpu: 1
    envFrom:
    - secretRef:
        name: testsecret

The possible taint effects are: NoSchedule, PreferNoSchedule, NoExecute

#Create Taints
root@vagrant:/home/vagrant# kubectl taint nodes vagrant example-key=blue:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: myrole
spec:
  containers:
  - name: nginx
    image: nginx
    envFrom:
    - secretRef:
        name: testsecret
  tolerations:
  - key: "example-key"
    operator: "Equal"
    value: "blue"
    effect: "NoSchedule"
#Remove
root@vagrant:/home/vagrant# kubectl taint nodes vagrant example-key=blue:NoSchedule-
#Create Selector
root@vagrant:/home/vagrant# kubectl label nodes vagrant label-key=label-name
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: myrole
spec:
  containers:
  - name: nginx
    image: nginx
    envFrom:
    - secretRef:
        name: testsecret
  nodeSelector:
    label-key: label-name

Services

root@vagrant:/home/vagrant# kubectl expose deployment testdeployment --name=nginx-service --type=NodePort --target-port=8080 --port=80
root@vagrant:/home/vagrant# kubectl expose pod mypod --port=80 --name=nginx-service --type=NodePort
root@vagrant:/home/vagrant# kubectl create service nodeport mypod --tcp=80:80 --node-port=30080

Namespace

#Get Pods
root@vagrant:/home/vagrant# kubectl get pods --namespace=develop
root@vagrant:/home/vagrant# kubectl get pods -n develop
root@vagrant:/home/vagrant# kubectl get pods --all-namespaces
root@vagrant:/home/vagrant# kubectl get ns
#Change Default Namespace
root@vagrant:/home/vagrant# kubectl config set-context --current --namespace=develop

Readiness Probe / Liveness Probe

#HTTP Test
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: myrole
spec:
  containers:
  - name: nginx
    image: nginx
    readinessProbe:            # or livenessProbe
      httpGet:
        path: /health
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 5
      failureThreshold: 8
    envFrom:
    - secretRef:
        name: testsecret
#TCP Test
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: myrole
spec:
  containers:
  - name: nginx
    image: nginx
    readinessProbe:            # or livenessProbe
      tcpSocket:
        port: 80
    envFrom:
    - secretRef:
        name: testsecret
#Run Command
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: myrole
spec:
  containers:
  - name: nginx
    image: nginx
    readinessProbe:            # or livenessProbe
      exec:
        command:
        - cat
        - probe.htm
    envFrom:
    - secretRef:
        name: testsecret

Logs

root@vagrant:/home/vagrant# kubectl logs -f pod-name

Jobs

#Create Jobs
root@vagrant:/home/vagrant# kubectl create job test-job --image=nginx
#Get Jobs
root@vagrant:/home/vagrant# kubectl get jobs test-job
root@vagrant:/home/vagrant# kubectl get jobs

Recovering from an accidental chown -R user1:user1 /etc

1. Problem: mistakenly typed chown -R user1:user1 /etc. 2. Solutions: Option 1: find an old backup of /etc (recovery rate is nearly ~100%). Option 2: temporarily find a ...
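A minimal sketch of the reference-machine idea, assuming a healthy host of the same OS release is reachable as good-host and its /etc is copied locally first (host name and paths are examples; GNU find/chown assumed):

# Copy a known-good /etc as a reference tree
rsync -a good-host:/etc/ /root/etc-ref/

# Re-apply owner and group on each local file from the reference copy
cd /root/etc-ref
find . -exec chown --reference={} /etc/{} \;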