1. Installation
Installing cilium-cli
To install Cilium, prepare a cluster that has no CNI (Container Network Interface) plugin yet, so the nodes report a NotReady status. If kube-proxy pods are already present, Cilium will take over their traffic handling; if there is no kube-proxy, Cilium runs standalone as a full kube-proxy replacement.
https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/#install-the-cilium-cli
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
cilium install --version 1.18.1
cilium status --wait
cilium hubble enable
Check the cilium agent, cilium-envoy, and cilium-operator pods:
kubectl get pods -n kube-system -l k8s-app=cilium
kubectl get pods -n kube-system | grep '^cilium-envoy'
kubectl get pods -n kube-system -l app.kubernetes.io/name=cilium-operator
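Optionally, the cilium CLI ships an end-to-end connectivity check; it deploys test workloads into a cilium-test namespace and reports pass/fail per scenario:
cilium connectivity test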
Installing Cilium with Helm
https://docs.cilium.io/en/stable/installation/k8s-install-helm/#install-cilium
helm repo add cilium https://helm.cilium.io/
helm show values cilium/cilium | grep -A 5 'relay:'
# Install with Hubble enabled
helm install cilium cilium/cilium \
  --namespace kube-system \
  --version 1.18.1 \
  --create-namespace --values values.yaml
Enable debug
# values.yaml, line 24: set to true to turn on debug logging
debug:
  enabled: true
Enable Hubble UI
# values.yaml, line 1196
hubble:
  enabled: true
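Note that hubble.enabled alone does not expose the UI; in the standard chart the relay and UI components are separate values. A minimal sketch of the relevant values.yaml section:
hubble:
  enabled: true
  relay:
    enabled: true
  ui:
    enabled: true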
2. Configuring Cilium
Disabling kube-proxy
kubectl -n kube-system delete ds kube-proxy
kubectl -n kube-system delete cm kube-proxy
# Edit values.yaml and upgrade the Helm release
# values.yaml, line 2109:
kubeProxyReplacement: "true"
# Alternatively, edit the cilium-config ConfigMap in the kube-system namespace
kubectl -n kube-system rollout restart deployment cilium-operator
# Then verify:
kubectl -n kube-system exec ds/cilium -- cilium-dbg status --verbose | grep KubeProxyReplacement
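If you went the cilium-config route, the agent pods only read that ConfigMap at startup, so restart the DaemonSet as well:
kubectl -n kube-system rollout restart ds/cilium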
3. Network Policy in Cilium
Part 3.1: Lab Type 1
Lab 1
Case 2: Basic pod-to-pod
There are three pods: frontend, backend, and db. Currently the frontend can still call the db directly. Block that, and allow only the backend to reach the db.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: my-policy
  namespace: prod
spec:
  endpointSelector:
    matchLabels:
      role: db
  ingress:
  - fromEndpoints:
    - matchLabels:
        role: backend
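A quick sanity check, assuming the lab images include curl and the db pod serves HTTP (both are assumptions about this particular lab):
kubectl -n prod exec frontend -- curl -m 3 http://<db-pod-ip>   # should now time out
kubectl -n prod exec backend -- curl -m 3 http://<db-pod-ip>    # should still succeed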
Case 3: matchExpressions
We want to add an extra matching requirement to the YAML: the source pods must also sit in one of a set of namespaces.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: backend-ingress-from-frontend
  namespace: staging
spec:
  endpointSelector:
    matchLabels:
      role: backend
  ingress:
  - fromEndpoints:
    - matchLabels:
        role: frontend
      matchExpressions:
      - key: k8s:io.kubernetes.pod.namespace
        operator: In
        values:
        - staging
        - prod
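To see which flows a policy is dropping while you iterate (requires Hubble, enabled in section 1):
cilium hubble port-forward &
hubble observe --namespace staging --verdict DROPPED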
Case 4: Egress basic
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: my-policy
  namespace: prod
spec:
  endpointSelector:
    matchLabels:
      role: frontend
  egress:
  - toEndpoints:
    - matchLabels:
        role: backend
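Worth remembering: once any egress rule selects a pod, all other egress from it is denied by default, including DNS lookups. If the frontend resolves the backend by service name, an extra rule like the sketch below is usually added alongside (the same kube-dns pattern the later cases here use):
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s:k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY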
Case 5: Allow all traffic within one namespace
For the dev namespace only, open all traffic between all pods.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-dev-namespace-traffic
  namespace: dev
spec:
  endpointSelector: {}
  ingress:
  - fromEndpoints:
    - matchLabels: {}
  egress:
  - toEndpoints:
    - matchLabels: {}
Case 6:
# k get pod -n prod --show-labels
NAME        READY   STATUS    RESTARTS   AGE     LABELS
backend     1/1     Running   0          21m     role=backend
db          1/1     Running   0          21m     role=db
frontend    1/1     Running   0          21m     role=frontend
inventory   1/1     Running   0          3m38s   role=inventory
orders      1/1     Running   0          3m38s   role=orders
products    1/1     Running   0          3m37s   role=products
In the prod namespace, several new pods, including orders, inventory, and products, have been created. Your task is to configure a network policy orders-egress-to-inventory-products with the necessary permissions so that pods with the label role=orders can egress on port 3000 to the inventory and products pods.
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "orders-egress-to-inventory-products"
namespace: prod
spec:
endpointSelector:
matchLabels:
role: orders
egress:
- toEndpoints:
- matchExpressions:
- key: role
operator: In
values:
- inventory
- products
- matchLabels:
k8s:io.kubernetes.pod.namespace: kube-system
k8s:k8s-app: kube-dns
toPorts:
- ports:
- port: "3000"
protocol: TCP
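To verify, assuming the inventory and products pods listen on 3000 and the images include curl:
kubectl -n prod exec orders -- curl -m 3 http://<inventory-pod-ip>:3000   # allowed
kubectl -n prod exec orders -- curl -m 3 http://<products-pod-ip>:3000    # allowed
kubectl -n prod exec orders -- curl -m 3 http://<db-pod-ip>               # dropped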
Case 7:
In the admin namespace, allow role=admin to egress on port 4000 to any role=user pod, and on port 5000 to any role=products pod, across all namespaces. Use name: admin-egress-policy. Since a namespaced policy's toEndpoints selectors default to the policy's own namespace, "across all namespaces" is expressed here with a CiliumClusterwideNetworkPolicy, pinning the source to the admin namespace via the namespace label.
k -n admin get pod --show-labels
NAME    READY   STATUS    RESTARTS   AGE     LABELS
admin   1/1     Running   0          3m27s   role=admin
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: admin-egress-policy
spec:
  endpointSelector:
    matchLabels:
      role: admin
      k8s:io.kubernetes.pod.namespace: admin
  egress:
  - toEndpoints:
    - matchLabels:
        role: user
    toPorts:
    - ports:
      - port: "4000"
        protocol: TCP
  - toEndpoints:
    - matchLabels:
        role: products
    toPorts:
    - ports:
      - port: "5000"
        protocol: TCP
Case 8:
k -n prod get pod --show-labels
NAME        READY   STATUS    RESTARTS   AGE   LABELS
backend     1/1     Running   0          28m   role=backend
db          1/1     Running   0          28m   role=db
frontend    1/1     Running   0          28m   role=frontend
inventory   1/1     Running   0          11m   role=inventory
orders      1/1     Running   0          11m   role=orders
payment     1/1     Running   0          23s   role=payment
products    1/1     Running   0          11m   role=products
The payment service (located in the prod namespace) requires communication with an external card validation service, which is accessible at the IP address 200.100.17.1. Create an egress policy cidr-rule that enables the payment service to send traffic to the external card validation service specifically on port 443.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: cidr-rule
  namespace: prod
spec:
  endpointSelector:
    matchLabels:
      role: payment
  egress:
  - toCIDR:
    - 200.100.17.1/32
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
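One way to confirm the rule matches without relying on the (likely simulated) external endpoint answering is to watch verdicts for the payment pod in Hubble:
kubectl -n prod exec payment -- curl -m 3 https://200.100.17.1   # may still time out, but must no longer be policy-dropped
hubble observe --namespace prod --verdict DROPPED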
Case 9:
The payment service must also be configured to communicate with an external fraud detection service located at the IP range 100.10.0.0/24, excluding the address 100.10.0.50. Add an additional rule to the previously configured policy cidr-rule for the payment service and update it to enable communication with the external fraud detection service on port 3000.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: cidr-rule
  namespace: prod
spec:
  endpointSelector:
    matchLabels:
      role: payment
  egress:
  - toCIDR:
    - 200.100.17.1/32
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
  - toCIDRSet:
    - cidr: 100.10.0.0/24
      except:
      - "100.10.0.50/32"
    toPorts:
    - ports:
      - port: "3000"
        protocol: TCP
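Spot-checking the except block (curl in the payment image is an assumption):
kubectl -n prod exec payment -- curl -m 3 http://100.10.0.10:3000   # allowed by the CIDR set
kubectl -n prod exec payment -- curl -m 3 http://100.10.0.50:3000   # dropped by the exception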
Case 11:
End users will interact with the application by accessing the webapp hosted on the products service. Configure an ingress policy my-policy to allow all traffic from outside the cluster to the products service (role=products).
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "my-policy"
namespace: prod
spec:
endpointSelector:
matchLabels:
role: products
ingress:
- fromEntities:
- world
Case 12:
In the admin namespace, a monitoring service pod has been set up with role=monitoring. This service will need to talk to all the nodes in the cluster. Configure an egress policy my-policy to explicitly allow it to talk to all nodes by configuring role=monitoring pods to egress to the host and remote-node entities (so they can reach all cluster nodes).
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: my-policy
  namespace: admin
spec:
  endpointSelector:
    matchLabels:
      role: monitoring
  egress:
  - toEntities:
    - host
    - remote-node
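A quick check, assuming the monitoring image has ping and the nodes accept ICMP (both assumptions):
kubectl -n admin exec <monitoring-pod> -- ping -c 2 <any-node-ip>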
Case 13:
In the prod namespace, configure a network policy my-policy to allow ingress on HTTP port 80 to pods with label role=user from any pod in the same namespace, and only for these HTTP methods/paths:
- GET /users
- POST /users
- PATCH /users
- GET /auth/token
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: my-policy
  namespace: prod
spec:
  endpointSelector:
    matchLabels:
      role: user
  ingress:
  - fromEndpoints:
    - matchLabels: {}
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: GET
          path: /users
        - method: POST
          path: /users
        - method: PATCH
          path: /users
        - method: GET
          path: /auth/token
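With L7 rules in place, disallowed requests are answered by Cilium's Envoy proxy with "403 Access denied" instead of being dropped at the packet level. A quick check (curl in the client image is an assumption):
kubectl -n prod exec frontend -- curl -s -o /dev/null -w '%{http_code}\n' http://<user-pod-ip>/users              # allowed by policy
kubectl -n prod exec frontend -- curl -s -o /dev/null -w '%{http_code}\n' -X DELETE http://<user-pod-ip>/users    # 403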
Case 14:
A warehouse service has been established in the prod namespace. Configure a policy named my-policy for the warehouse service to enable it to send DNS requests to the kube-dns server located in the kube-system namespace for the following Fully Qualified Domain Names (FQDNs):
- kodekloud.com
- app.kodekloud.com
- api.kodekloud.com
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: my-policy
  namespace: prod
spec:
  endpointSelector:
    matchLabels:
      app: warehouse
  egress:
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s:k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchName: kodekloud.com
        - matchName: app.kodekloud.com
        - matchName: api.kodekloud.com
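To verify (nslookup being available in the warehouse image is an assumption):
kubectl -n prod exec <warehouse-pod> -- nslookup app.kodekloud.com   # resolves
kubectl -n prod exec <warehouse-pod> -- nslookup google.com          # rejected by the DNS rule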
Case 15:
Let's make sure that all pods in our cluster can talk to the kube-dns server. Create a CiliumClusterwideNetworkPolicy with name allow-dns-clusterwide to allow all pods to egress DNS queries (port 53, protocol ANY, any FQDN) to the kube-dns server. Routing DNS through the proxy with matchPattern: "*" is also what lets Cilium observe lookups, which toFQDNs-based egress rules rely on.
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: allow-dns-clusterwide
spec:
  endpointSelector: {}
  egress:
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s:k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
Part 3.2: Lab Type 2
Case 2:
In the video-app namespace, an upload pod should only egress to video-encoding on TCP port 5000. The existing policy allows port 4000, so upload cannot reach video-encoding on the correct port. Update the my-policy CiliumNetworkPolicy so that port 5000 is permitted.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: my-policy
  namespace: video-app
spec:
  endpointSelector:
    matchLabels:
      role: upload
  egress:
  - toEndpoints:
    - matchLabels:
        role: video-encoding
    toPorts:
    - ports:
      - port: "5000"
        protocol: TCP
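After updating, confirm the new port is allowed (assuming video-encoding listens on 5000 and the upload image has curl):
kubectl -n video-app exec upload -- curl -m 3 http://<video-encoding-pod-ip>:5000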
Case 3:
In the video-app namespace, the subscription pod should only receive TCP port 80 traffic from pods labeled role=content-access. For some reason, all pods are still able to communicate with the subscription service. Find out the cause and update my-policy accordingly.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: my-policy
  namespace: video-app
spec:
  endpointSelector:
    matchLabels:
      role: subscription
  ingress:
  - fromEndpoints:
    - matchLabels:
        role: content-access
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
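When traffic is allowed that shouldn't be, it is often another policy selecting the same pod. Listing everything that may apply is a good first step:
kubectl -n video-app get cnp
kubectl get ccnp
kubectl -n kube-system exec ds/cilium -- cilium-dbg endpoint list   # per-endpoint policy enforcement status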
Case 4:
The admin pod in the admin namespace must connect to the content-management pod in video-app on TCP port 443. Two policies exist: content-management-policy (in video-app) and admin-policy (also in video-app). Figure out why the admin service can't talk to the content-management service.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: content-management-policy
  namespace: video-app
spec:
  endpointSelector:
    matchLabels:
      role: content-management
  ingress:
  - fromEndpoints:
    - matchLabels:
        role: admin
        k8s:io.kubernetes.pod.namespace: admin
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: admin-policy
  namespace: admin
spec:
  endpointSelector:
    matchLabels:
      role: admin
  egress:
  - toEndpoints:
    - matchLabels:
        role: content-management
        k8s:io.kubernetes.pod.namespace: video-app
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
The cause: admin-policy had been created in the video-app namespace, where it never selects the admin pod, because a namespaced policy only applies to endpoints in its own namespace. Recreating it in the admin namespace, as above, fixes the connection.
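Hubble can confirm where the flow dies while debugging cross-namespace cases like this (the pod names are assumptions):
hubble observe --from-pod admin/admin --to-pod video-app/<content-management-pod>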
Case 5:
The subscription service in the video-app namespace communicates with the notification service on port 3000 in the video-app namespace. Recently, an engineer implemented an egress policy for the subscription service to permit egress traffic to the notification service. However, after applying the policy, the application encountered issues. The engineer confirmed that the subscription service could access the notification service Pod's IP on port 3000, yet the application remained non-functional. Review and fix the policy subscription-policy.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: subscription-policy
  namespace: video-app
spec:
  endpointSelector:
    matchLabels:
      role: subscription
  egress:
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s:k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
  - toEndpoints:
    - matchLabels:
        role: notification
    toPorts:
    - ports:
      - port: "3000"
        protocol: TCP
Reaching the Pod IP worked while the service name did not, which points at DNS: the original egress policy allowed only port 3000, so lookups to kube-dns were dropped. Adding the DNS egress rule (and declaring the policy as a namespaced CiliumNetworkPolicy rather than a cluster-wide kind carrying a namespace field) restores name resolution.
Case 6:
A cluster-wide policy named external-lockdown is currently blocking all external ingress (fromEntities: world), but it's also preventing pods from talking to each other internally. Update external-lockdown so it continues to block external traffic yet allows intra-cluster pod-to-pod communication. Deny rules take precedence over allow rules in Cilium, so broadening the ingress allow does not reopen external access.
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: external-lockdown
spec:
  endpointSelector: {}
  ingressDeny:
  - fromEntities:
    - world
  ingress:
  - fromEntities:
    - all