1. Installation
(When migrating from Calico: after applying the Cilium YAML, restart each worker node so the change fully takes effect; otherwise the new Cilium pods will stay in Pending.)
1.1 Installing Cilium with Helm
Run kubeadm reset, then kubeadm init again and re-join the nodes. Apply Cilium, then delete kube-proxy, delete Cilium, and apply Cilium again.
https://docs.cilium.io/en/stable/installation/k8s-install-helm/#install-cilium
helm repo add cilium https://helm.cilium.io/
helm show values cilium/cilium > values.yaml
grep -A 5 'relay:' values.yaml
helm pull cilium/cilium
tar -xvzf cilium-1.18.2.tgz
helm show chart cilium/cilium
# Install (NOTE: edit values.yaml with the options below first)
helm install cilium cilium/cilium \
  --namespace kube-system \
  --version 1.18.2 \
  --create-namespace --values values.yaml
# Install from the downloaded chart directory
helm install cilium cilium/ --namespace kube-system --create-namespace --values values.yaml
Check the cilium agent, cilium-envoy, and cilium-operator pods:
Options worth enabling in the Helm values.yaml (install the cilium CLI and hubble CLI before running these). Line numbers refer to the 1.18.2 chart:

- Required initial settings:
  k8sServiceHost: "master01"   # line 78
  k8sServicePort: "6443"       # line 81
- Debug mode:
  # line 24
  debug:
    enabled: false   # set true to enable
- Hubble UI (default):
  # line 1202
  hubble:
    enabled: true
  # line 1498: enable Hubble Relay
    relay:
      enabled: false   # set true to enable
  # line 1723: Hubble UI
    ui:
      # -- Whether to enable the Hubble UI.
      enabled: true
  # line 1891: to reach the Hubble UI from a LoadBalancer/node IP
      service:
        type: NodePort
- Disable kube-proxy (best done as the last step):
  kubectl -n kube-system delete ds kube-proxy
  kubectl -n kube-system delete cm kube-proxy
  # in values.yaml:
  kubeProxyReplacement: "true"
  helm upgrade cilium cilium/ --namespace kube-system --create-namespace --values values.yaml
  sleep 10
  kubectl -n kube-system rollout restart deployment cilium-operator
  kubectl -n kube-system rollout restart daemonset cilium
- Audit mode: logs pod-to-pod connection verdicts to see what is allowed or denied, convenient to inspect with kubectl or hubble:
  policyAuditMode: true   # restart the deployment/daemonset to apply
  # check after applying:
  kubectl -n kube-system exec cilium-xxxxx -- hubble observe flows -t policy-verdict --last 1
1.2 Installing with the cilium CLI only
To install Cilium this way, prepare a cluster with no CNI (container network interface) installed; the nodes will show NotReady. If kube-proxy pods already exist, Cilium will take over from them; if kube-proxy is absent, Cilium runs standalone as the kube-proxy replacement.
https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/#install-the-cilium-cli
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
cilium install --version 1.18.1
cilium status --wait
cilium hubble enable
1.3 Installing the Hubble CLI
https://docs.cilium.io/en/stable/observability/hubble/setup/#install-the-hubble-client
HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
HUBBLE_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then HUBBLE_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-${HUBBLE_ARCH}.tar.gz{,.sha256sum}
sha256sum --check hubble-linux-${HUBBLE_ARCH}.tar.gz.sha256sum
sudo tar xzvfC hubble-linux-${HUBBLE_ARCH}.tar.gz /usr/local/bin
rm hubble-linux-${HUBBLE_ARCH}.tar.gz{,.sha256sum}
1.4 A few shortcuts to check Cilium
kubectl -n kube-system exec ds/cilium -- cilium-dbg status --verbose   # inspect the current cilium config
or
kubectl get cm cilium-config -n kube-system -o yaml | grep -i ingress | grep -i enable-ingress-controller
2. Configuring Cilium
Disabling kube-proxy
kubectl -n kube-system delete ds kube-proxy
kubectl -n kube-system delete cm kube-proxy
# Edit values.yaml and upgrade the Helm release
# line 2109:
kubeProxyReplacement: "true"
helm upgrade cilium cilium/ --namespace kube-system --create-namespace --values values.yaml
# Then restart the deployment and daemonset to reload the configuration
kubectl -n kube-system rollout restart deployment cilium-operator
kubectl -n kube-system rollout restart daemonset cilium
# Then verify:
kubectl -n kube-system exec ds/cilium -- cilium-dbg status --verbose | grep KubeProxyReplacement
3. Network Policy in Cilium
A sample CiliumNetworkPolicy in full format, with all options:
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "complete-policy-example"
  namespace: "your-namespace"
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend
        "k8s:io.kubernetes.pod.namespace": "another-namespace"
    fromCIDR:
    - "192.168.1.0/24"
    - "2001:db8::/32"
    fromEntities:
    # - "world"          # any IP outside the cluster
    # - "host"           # the local host/node
    # - "cluster"        # any IP inside the cluster
    # - "remote-node"    # other nodes in the cluster
    # - "kube-apiserver" # the Kubernetes API server
    - "all"
    toPorts:
    - ports:
      - port: "8080"
        protocol: "TCP"
      - port: "8081"
        protocol: "UDP"
      rules:
        dns:
        - matchPattern: "*.example.com"
        - matchName: "api.cilium.io"
        http:
        - method: "GET"
          path: "/public/.*"
          host: "api.example.com"
          headers:
          - "X-My-Header: true"
        - method: "POST"
          path: "/private"
        kafka:
        - role: "produce"
          topic: "my-topic"
        - role: "consume"
          apiKey: "fetch"
  egress:
  - toEndpoints:
    - matchLabels:
        app: database
    toCIDR:
    - "10.0.0.0/8"
    toEntities:
    - "world"
    toPorts:
    - ports:
      - port: "5432"
        protocol: "TCP"
    - ports:
      - port: "53"
        protocol: "UDP"
      rules:
        dns:
        - matchPattern: "*"
Example 2:
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: example-cnp
  namespace: default
spec:
  description: "Policy for myapp"
  endpointSelector:
    matchLabels:
      app: myapp
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "GET"
          path: "/api/v1/data"
  - fromCIDR:
    - 192.168.1.0/24
    toPorts:
    - ports:
      - port: "5432"
        protocol: TCP
  egress:
  - toEndpoints:
    - matchLabels:
        app: database
    toPorts:
    - ports:
      - port: "5432"
        protocol: TCP
  - toCIDRSet:
    - cidr: 10.0.0.0/8
      except:
      - 10.0.0.1/32
  - toFQDNs:
    - matchPattern: "*.example.com"
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
Example 3:
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: <policy-name>
  namespace: <namespace>
spec:
  endpointSelector:
    matchLabels:
      <label-key>: <label-value>
  ingress:
  - fromEndpoints:
    - matchLabels:
        <label-key>: <label-value>
    fromCIDR:
    - <CIDR-block>
    fromEntities:          # e.g., host, world
    - <entity-name>
    toPorts:
    - ports:
      - port: "<port-number>"
        protocol: <TCP|UDP|SCTP>
      rules:
        http:
        - method: "<HTTP-method>"   # e.g., GET, POST
          path: "<path-regex>"      # e.g., /api/v1/.*
        kafka:
        - topic: "<kafka-topic-name>"
          apiKeys:
          - "<kafka-api-key>"
  egress:
  - toEndpoints:
    - matchLabels:
        <label-key>: <label-value>
    toCIDR:
    - <CIDR-block>
    toEntities:            # e.g., host, world
    - <entity-name>
    toPorts:
    - ports:
      - port: "<port-number>"
        protocol: <TCP|UDP|SCTP>
      rules:
        http:
        - method: "<HTTP-method>"
          path: "<path-regex>"
        kafka:
        - topic: "<kafka-topic-name>"
          apiKeys:
          - "<kafka-api-key>"
    toFQDNs:               # specific fully qualified domain names
    - matchPattern: "<domain-pattern>"   # e.g., *.example.com
    toServices:            # allow traffic to specific Services
    - serviceName: "<service-name>"
      namespace: "<service-namespace>"
Main lab: across namespaces
Problem:
- Two namespaces, frontend and backend, containing an nginx pod (port 80) and a tomcat pod (port 8080) respectively
- Create deny-all rules for both ingress and egress in each namespace
- Open a rule so the internet can reach the nginx service on port 30080
- Open a rule so nginx can call the tomcat service on port 8080
- Debug commands in case some rule is still missing
Solution:
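The source leaves this solution blank, so here is a hedged sketch. The pod labels (app=nginx, app=tomcat) and the assumption that NodePort 30080 fronts nginx's pod port 80 are mine, not the source's; adjust them to the actual manifests. The deny-all step uses a plain Kubernetes NetworkPolicy, whose default-deny semantics are unambiguous and which Cilium enforces alongside its own CRDs:

```yaml
# 1) Default deny (ingress + egress) -- apply once per namespace
#    (frontend shown; repeat with namespace: backend).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: frontend
spec:
  podSelector: {}
  policyTypes: ["Ingress", "Egress"]
---
# 2) Let the internet reach nginx; NodePort 30080 targets the pod
#    port 80, and external clients arrive as the "world" entity.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-world-to-nginx
  namespace: frontend
spec:
  endpointSelector:
    matchLabels:
      app: nginx          # assumed label
  ingress:
  - fromEntities:
    - world
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
---
# 3) nginx -> tomcat on 8080, plus DNS so the service name resolves.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: nginx-to-tomcat
  namespace: frontend
spec:
  endpointSelector:
    matchLabels:
      app: nginx
  egress:
  - toEndpoints:
    - matchLabels:
        app: tomcat       # assumed label
        k8s:io.kubernetes.pod.namespace: backend
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s:k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
---
# 4) Mirror ingress rule on the tomcat side.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: tomcat-from-nginx
  namespace: backend
spec:
  endpointSelector:
    matchLabels:
      app: tomcat
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: nginx
        k8s:io.kubernetes.pod.namespace: frontend
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
```

To debug rules that are still missing, `hubble observe --verdict DROPPED` (optionally with `--namespace frontend`) shows exactly which flow is being denied.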
Main lab: same namespace
Problem:
- Create a namespace named private-ns containing one busybox pod and one nginx pod
- Create deny-all rules for both ingress and egress in this namespace
- Open rules so that busybox can call nginx
- Debug commands in case some rule is still missing
Solution:
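The source leaves this solution blank as well; a hedged sketch, assuming the pods carry the default `kubectl run` labels run=busybox and run=nginx:

```yaml
# Default deny for the whole namespace (plain K8s NetworkPolicy,
# enforced by Cilium).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: private-ns
spec:
  podSelector: {}
  policyTypes: ["Ingress", "Egress"]
---
# busybox -> nginx on TCP 80, plus DNS for name resolution.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: busybox-to-nginx
  namespace: private-ns
spec:
  endpointSelector:
    matchLabels:
      run: busybox
  egress:
  - toEndpoints:
    - matchLabels:
        run: nginx
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s:k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
---
# Matching ingress rule on the nginx side.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: nginx-from-busybox
  namespace: private-ns
spec:
  endpointSelector:
    matchLabels:
      run: nginx
  ingress:
  - fromEndpoints:
    - matchLabels:
        run: busybox
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
```

If something is still blocked, `hubble observe --namespace private-ns --verdict DROPPED` shows the offending flow.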
Part 3.1: Labs, Type 1
Lab 1
Case 2: Basic pod-to-pod
There are frontend, backend, and db pods, and frontend can currently still call the DB. We need to block that and allow only backend. Approach:
1. Block all traffic to the db pods
2. Open traffic only for backend to call the DB
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: my-policy
  namespace: prod
spec:
  endpointSelector:
    matchLabels:
      role: db
  ingress:
  - fromEndpoints:
    - matchLabels:
        role: backend
Case 3: free-form matchExpressions
We want to add an extra, more specific requirement in the YAML:
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: backend-ingress-from-frontend
  namespace: staging
spec:
  endpointSelector:
    matchLabels:
      role: backend
  ingress:
  - fromEndpoints:
    - matchLabels:
        role: frontend
      matchExpressions:
      - key: k8s:io.kubernetes.pod.namespace
        operator: In
        values:
        - staging
        - prod
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: comprehensive-policy-with-matchExpressions
spec:
  # Apply this policy to pods that satisfy all of the following conditions
  endpointSelector:
    matchExpressions:
    - key: app
      operator: In
      values:
      - web-server
      - frontend
    - key: env
      operator: NotIn
      values:
      - dev
    - key: owner
      operator: Exists
    - key: test
      operator: DoesNotExist
  ingress:
  - fromEndpoints:
    - matchExpressions:
      - key: app
        operator: In
        values:
        - monitor
        - logger
      - key: security-level
        operator: Exists
  egress:
  - toEndpoints:
    - matchExpressions:
      - key: tier
        operator: In
        values:
        - database
      - key: db-version
        operator: NotIn
        values:
        - "1.0"
Case 4: Egress basic
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: my-policy
  namespace: prod
spec:
  endpointSelector:
    matchLabels:
      role: frontend
  egress:
  - toEndpoints:
    - matchLabels:
        role: backend
Case 5: Allow all within one namespace
For the dev namespace only, open all traffic between all pods.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-dev-namespace-traffic
  namespace: dev
spec:
  endpointSelector: {}
  ingress:
  - fromEndpoints:
    - matchLabels: {}
  egress:
  - toEndpoints:
    - matchLabels: {}
Case 6: matchLabels and matchExpressions
# k get pod -n prod --show-labels
NAME        READY   STATUS    RESTARTS   AGE     LABELS
backend     1/1     Running   0          21m     role=backend
db          1/1     Running   0          21m     role=db
frontend    1/1     Running   0          21m     role=frontend
inventory   1/1     Running   0          3m38s   role=inventory
orders      1/1     Running   0          3m38s   role=orders
products    1/1     Running   0          3m37s   role=products
In the prod namespace, several new pods including orders, inventory, and products have been created. Your task is to configure a network policy orders-egress-to-inventory-products with the necessary permissions so that pods with the label role=orders can egress on port 3000 to the inventory and products pods.
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "orders-egress-to-inventory-products"
  namespace: prod
spec:
  endpointSelector:
    matchLabels:
      role: orders
  egress:
  - toEndpoints:
    - matchExpressions:
      - key: role
        operator: In
        values:
        - inventory
        - products
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s:k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "3000"
        protocol: TCP
Case 7: Egress with two endpoints
In the admin namespace, allow role=admin to egress on port 4000 to any role=user pods, and on port 5000 to any role=products pods, across all namespaces. Use name: admin-egress-policy
k -n admin get pod --show-labels
NAME    READY   STATUS    RESTARTS   AGE     LABELS
admin   1/1     Running   0          3m27s   role=admin
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: admin-egress-policy
spec:
  endpointSelector:
    matchLabels:
      role: admin
      k8s:io.kubernetes.pod.namespace: admin
  egress:
  - toEndpoints:
    - matchLabels:
        role: user
    toPorts:
    - ports:
      - port: "4000"
        protocol: TCP
  - toEndpoints:
    - matchLabels:
        role: products
    toPorts:
    - ports:
      - port: "5000"
        protocol: TCP
Case 8: Egress with CIDR
k -n prod get pod --show-labels
NAME        READY   STATUS    RESTARTS   AGE   LABELS
backend     1/1     Running   0          28m   role=backend
db          1/1     Running   0          28m   role=db
frontend    1/1     Running   0          28m   role=frontend
inventory   1/1     Running   0          11m   role=inventory
orders      1/1     Running   0          11m   role=orders
payment     1/1     Running   0          23s   role=payment
products    1/1     Running   0          11m   role=products
The payment service (located in the prod namespace) requires communication with an external card validation service, which is accessible at the IP address 200.100.17.1. Create an egress policy cidr-rule that enables the payment service to send traffic to the external card validation service specifically on port 443.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: cidr-rule
  namespace: prod
spec:
  endpointSelector:
    matchLabels:
      role: payment
  egress:
  - toCIDR:
    - 200.100.17.1/32
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
Case 9: Egress with except CIDR
The payment service must also be configured to communicate with an external fraud detection service located at the IP range 100.10.0.0/24, excluding the address 100.10.0.50. Add an additional rule to the previously configured policy cidr-rule for the payment service and update it to enable communication with the external fraud detection service on port 3000.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: cidr-rule
  namespace: prod
spec:
  endpointSelector:
    matchLabels:
      role: payment
  egress:
  - toCIDR:
    - 200.100.17.1/32
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
  - toCIDRSet:
    - cidr: 100.10.0.0/24
      except:
      - "100.10.0.50/32"
    toPorts:
    - ports:
      - port: "3000"
        protocol: TCP
Case 11: Allow all ingress traffic
End users will interact with the application through the webapp hosted on the products service. Configure an ingress policy my-policy to allow all traffic from outside the cluster to the products service (role=products).
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "my-policy"
  namespace: prod
spec:
  endpointSelector:
    matchLabels:
      role: products
  ingress:
  - fromEntities:
    - world
Case 12: Allow all egress traffic
In the admin namespace, a monitoring service pod has been set up with role=monitoring. This service will need to talk to all the nodes in the cluster. Configure an egress policy my-policy to explicitly allow it to talk to all nodes by configuring role=monitoring pods to egress to the host and remote-node entities (so they can reach all cluster nodes).
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: my-policy
  namespace: admin
spec:
  endpointSelector:
    matchLabels:
      role: monitoring
  egress:
  - toEntities:
    - host
    - remote-node
Case 13A: HTTP method rules
In the prod namespace, configure a network policy my-policy to allow ingress on HTTP port 80 to pods with label role=user from any pod in the same namespace, and only for these HTTP methods/paths:
- GET /users
- POST /users
- PATCH /users
- GET /auth/token
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: my-policy
  namespace: prod
spec:
  endpointSelector:
    matchLabels:
      role: user
  ingress:
  - fromEndpoints:
    - matchLabels: {}
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: GET
          path: /users
        - method: POST
          path: /users
        - method: PATCH
          path: /users
        - method: GET
          path: /auth/token
Case 13B: DNS rules
A warehouse service has been established in the prod namespace. Configure a policy named my-policy for the warehouse service to enable it to send DNS requests to the kube-dns server located in the kube-system namespace for the following fully qualified domain names (FQDNs):
- kodekloud.com
- app.kodekloud.com
- api.kodekloud.com
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: my-policy
  namespace: prod
spec:
  endpointSelector:
    matchLabels:
      app: warehouse
  egress:
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s:k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchName: kodekloud.com
        - matchName: app.kodekloud.com
        - matchName: api.kodekloud.com
Case 14: Allow all DNS to the kube-dns server
Let's make sure that all pods in our cluster can talk to the kube-dns server. Create a CiliumClusterwideNetworkPolicy with name: allow-dns-clusterwide to allow all pods to egress DNS queries (port 53 ANY, any FQDN) to the kube-dns server.
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: allow-dns-clusterwide
spec:
  endpointSelector: {}
  egress:
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s:k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
Part 3.2: Labs, Type 2
Case 2:
In the video-app namespace, an upload pod should only egress to video-encoding on TCP port 5000. The existing policy allows port 4000, so upload cannot reach video-encoding on the correct port. Update the my-policy CiliumNetworkPolicy so that port 5000 is permitted.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: my-policy
  namespace: video-app
spec:
  endpointSelector:
    matchLabels:
      role: upload
  egress:
  - toEndpoints:
    - matchLabels:
        role: video-encoding
    toPorts:
    - ports:
      - port: "5000"
        protocol: TCP
Case 3:
In the video-app namespace, the subscription pod should only receive TCP port 80 traffic from pods labeled role=content-access. For some reason, all pods are still able to communicate with the subscription service. Find out the cause and update my-policy accordingly.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: my-policy
  namespace: video-app
spec:
  endpointSelector:
    matchLabels:
      role: subscription
  ingress:
  - fromEndpoints:
    - matchLabels:
        role: content-access
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
Case 4:
The admin pod in the admin namespace must connect to the content-management pod in video-app on TCP port 443. Two policies exist: content-management-policy (in video-app) and admin-policy (also in video-app). Figure out why the admin service can't talk to the content-management service.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: content-management-policy
  namespace: video-app
spec:
  endpointSelector:
    matchLabels:
      role: content-management
  ingress:
  - fromEndpoints:
    - matchLabels:
        role: admin
        k8s:io.kubernetes.pod.namespace: admin
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
---
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: admin-policy
  namespace: admin
spec:
  endpointSelector:
    matchLabels:
      role: admin
  egress:
  - toEndpoints:
    - matchLabels:
        role: content-management
        k8s:io.kubernetes.pod.namespace: video-app
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
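The source gives no answer here, so two hedged hypotheses (mine, not the source's). First, the statement places admin-policy in video-app while its YAML uses namespace: admin; a namespaced egress policy only selects pods in its own namespace, so if the live policy really sits in video-app it matches no admin pods and, assuming a deny-all policy also exists in admin, traffic is dropped until the policy is recreated in the admin namespace as shown above. Second, if the admin pod connects by service name, an egress policy that allows only port 443 blocks DNS resolution; the missing rule would look like:

```yaml
# Possible extra egress rule for admin-policy: allow DNS to kube-dns
# so content-management's service name can resolve (hypothetical fix).
- toEndpoints:
  - matchLabels:
      k8s:io.kubernetes.pod.namespace: kube-system
      k8s:k8s-app: kube-dns
  toPorts:
  - ports:
    - port: "53"
      protocol: ANY
```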
Case 5:
The subscription service in the video-app namespace communicates with the notification service on port 3000 in the video-app namespace. Recently, an engineer implemented an egress policy for the subscription service to permit egress traffic to the notification service. However, after applying the policy, the application encountered issues. The engineer confirmed that the subscription service could access the notification service Pod's IP on port 3000, yet the application remained non-functional. Review and fix the policy subscription-policy.
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: subscription-policy
  namespace: video-app
spec:
  endpointSelector:
    matchLabels:
      role: subscription
  egress:
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s:k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
  - toEndpoints:
    - matchLabels:
        role: notification
    toPorts:
    - ports:
      - port: "3000"
        protocol: TCP
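The source does not state the fix. One likely culprit (my reading, not the source's): the manifest declares kind: CiliumClusterwideNetworkPolicy yet carries metadata.namespace, which cluster-wide policies ignore, so it selects role=subscription endpoints in every namespace and does not behave like the intended namespaced policy. A hedged correction keeps the same rules but uses the namespaced kind:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy      # namespaced kind instead of cluster-wide
metadata:
  name: subscription-policy
  namespace: video-app
spec:
  endpointSelector:
    matchLabels:
      role: subscription
  egress:
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s:k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
  - toEndpoints:
    - matchLabels:
        role: notification
    toPorts:
    - ports:
      - port: "3000"
        protocol: TCP
```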
Case 6:
A cluster-wide policy named external-lockdown is currently blocking all external ingress (fromEntities: world), but it's also preventing pods from talking to each other internally. Update external-lockdown so it continues to block external traffic yet allows intra-cluster pod-to-pod communication.
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: external-lockdown
spec:
  endpointSelector: {}
  ingressDeny:
  - fromEntities:
    - world
  ingress:
  - fromEntities:
    - all
4. Service Mesh
4.1 Cilium Ingress
https://docs.cilium.io/en/stable/network/servicemesh/ingress/
To run Cilium Ingress, the following features need to be enabled:
- nodePort.enabled=true              # line 2270
- kubeProxyReplacement=true
- l7Proxy=true (enabled by default)
- ingressController.enabled=true     # line 905
- ingressController.default=true     # line 911, makes Cilium the default ingress class
- ingressController.loadbalancerMode=shared
helm upgrade cilium cilium/ --namespace kube-system --create-namespace --values values.yaml
kubectl -n kube-system rollout restart deployment cilium-operator
kubectl -n kube-system rollout restart daemonset cilium
# Verify:
k get ingressclasses.networking.k8s.io
NAME     CONTROLLER                     PARAMETERS   AGE
cilium   cilium.io/ingress-controller   <none>       64s
nginx    k8s.io/ingress-nginx           <none>       214d
Example 1
#1. Deployment (nginx)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
#2. Service (ClusterIP)
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
#3. Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ing
spec:
  ingressClassName: cilium
  rules:
  - host: nginx.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-svc
            port:
              number: 80
Example 2:
In our cluster, we are hosting two applications: funchat.com and streameasy.com. Create an Ingress rule named multi-app-ingress in the default namespace with the following routing configuration:
- Route funchat.com/auth to the service chat-auth
- Route funchat.com/messages to the service chat-messages
- Route streameasy.com/video to the service streameasy-video
- Route streameasy.com/moderation to the service streameasy-moderation
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-app-ingress
  namespace: default
spec:
  ingressClassName: cilium   # optional if Cilium is set as the default
  rules:
  - host: funchat.com
    http:
      paths:
      - path: /auth
        pathType: Prefix
        backend:
          service:
            name: chat-auth
            port:
              number: 80
      - path: /messages
        pathType: Prefix
        backend:
          service:
            name: chat-messages
            port:
              number: 80
  - host: streameasy.com
    http:
      paths:
      - path: /video
        pathType: Prefix
        backend:
          service:
            name: streameasy-video
            port:
              number: 80
      - path: /moderation
        pathType: Prefix
        backend:
          service:
            name: streameasy-moderation
            port:
              number: 80
4.2 Cilium GatewayAPI
https://docs.cilium.io/en/stable/network/servicemesh/gateway-api/gateway-api/
Ingress has limitations that make it hard to manage in several areas, so we switch to the Gateway API.
To run Cilium Gateway API, the following features need to be enabled:
- nodePort.enabled=true    # line 2270
- kubeProxyReplacement=true
- gatewayAPI.enabled=true
- l7Proxy=true (enabled by default)
- Install the Gateway API CRDs from https://gateway-api.sigs.k8s.io/guides/?h=crds#getting-started-with-gateway-api; choose the experimental-install.yaml variant:
  kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.3.0/experimental-install.yaml
Create a GatewayClass -> create a Gateway -> create an HTTPRoute.
After applying gatewayAPI.enabled=true, a GatewayClass named cilium is created automatically.
Exercise 1:
Create a Gateway named my-gateway in the default namespace using the Cilium GatewayClass.
Listeners:
- HTTP on port 80
- HTTPS on port 443, TLS terminated using the stored secret my-cert
Allow routes only from the same namespace.
After creating the gateway, a LoadBalancer service is created for it (listening on a ClusterIP until an external IP is assigned).
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway
  namespace: default
spec:
  gatewayClassName: cilium
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: Same
  - name: https
    protocol: HTTPS
    port: 443
    tls:
      mode: Terminate
      certificateRefs:
      - kind: Secret
        name: my-cert
        namespace: default
    allowedRoutes:
      namespaces:
        from: Same
Exercise 2, continuing:
Create an HTTPRoute named multi-app-route in the default namespace bound to my-gateway with the following routing:
- Host blog.example.com, paths /home → service blog-home, /api → service blog-api
- Host shop.example.com, paths /cart → service shop-cart, /checkout → service shop-checkout
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: multi-app-route
  namespace: default
spec:
  parentRefs:
  - name: my-gateway
  rules:
  # Blog routes
  - matches:
    - path:
        type: PathPrefix
        value: /home
      headers:
      - name: Host
        value: blog.example.com
    backendRefs:
    - name: blog-home
      kind: Service
      port: 80
  - matches:
    - path:
        type: PathPrefix
        value: /api
      headers:
      - name: Host
        value: blog.example.com
    backendRefs:
    - name: blog-api
      kind: Service
      port: 80
  # Shop routes
  - matches:
    - path:
        type: PathPrefix
        value: /cart
      headers:
      - name: Host
        value: shop.example.com
    backendRefs:
    - name: shop-cart
      kind: Service
      port: 80
  - matches:
    - path:
        type: PathPrefix
        value: /checkout
      headers:
      - name: Host
        value: shop.example.com
    backendRefs:
    - name: shop-checkout
      kind: Service
      port: 80
4.3 NetworkPolicy for Ingress and Gateway API
This policy does not seem quite right yet; it needs more testing.
apiVersion: "cilium.io/v2"
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: "allow-external"
spec:
  description: "Allow traffic from the outside world to ingress"
  endpointSelector: {}
  ingress:
  - fromEntities:
    - world
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: allow-ingress-egress
spec:
  description: "Allow all the egress traffic from the reserved ingress identity to any endpoints in the cluster"
  endpointSelector:
    matchExpressions:
    - key: reserved:ingress
      operator: Exists
  egress:
  - toEntities:
    - cluster
4.4 Cilium Encryption
Reference: https://docs.cilium.io/en/latest/security/network/encryption-ipsec/
This encryption applies to pod-to-pod traffic crossing nodes, including between clusters in a cluster mesh; traffic between pods on the same node is not encrypted.
kubectl create -n kube-system secret generic cilium-ipsec-keys --from-literal=keys="3+ rfc4106(gcm(aes)) $(dd if=/dev/urandom count=20 bs=1 2> /dev/null | xxd -p -c 64) 128"
# Edit values.yaml in the following places:
encryption.enabled=true
encryption.type=ipsec
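As a values.yaml fragment, those two options nest like this (a sketch; verify the keys with `helm show values cilium/cilium` for your chart version):

```yaml
encryption:
  enabled: true
  type: ipsec
```

After `helm upgrade` and a rollout restart, `kubectl -n kube-system exec ds/cilium -- cilium-dbg encrypt status` should report the loaded IPsec keys (verify the subcommand name on your version).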
4.5 mTLS
Original link: https://docs.cilium.io/en/stable/network/servicemesh/mutual-authentication/mutual-authentication/
To enable mTLS, edit the following items in values.yaml:
encryption.enabled=true
authentication.enabled=true
authentication.mutual.spire.enabled=true
authentication.mutual.spire.install.enabled=true
Then run helm upgrade and restart the daemonset/deployment.
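As a values.yaml fragment, the dotted options above nest like this (a sketch for the 1.18.x chart; key names should be verified with `helm show values cilium/cilium`):

```yaml
encryption:
  enabled: true
authentication:
  enabled: true
  mutual:
    spire:
      enabled: true
      install:
        enabled: true
```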
5. Cluster Mesh
6. Observability
The full Hubble documentation: https://docs.cilium.io/en/stable/observability/hubble/
First, enable Hubble, Hubble Relay, and the Hubble UI in the Helm values file:
# line 1202
hubble:
  enabled: true
# line 1498: enable Hubble Relay
  relay:
    enabled: false   # set true to enable Relay (required for the UI)
# line 1723: Hubble UI
  ui:
    # -- Whether to enable the Hubble UI.
    enabled: true
Next, expose the Hubble API. There are two ways:
cilium hubble port-forward
OR
kubectl -n kube-system port-forward service/hubble-relay 4245:80
Then check with hubble status (the command takes a while before it returns a result):
hubble status
hubble observe --pod xxxxxx
Note: by default Hubble only traces L3/L4. To trace L7, you must define a network policy with L7 rules attached (GET/POST/PUT/DELETE…).
An example policy for L7 tracing:
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-http
spec:
  endpointSelector:
    matchLabels:
      run: backend
  ingress:
  - fromEndpoints:
    - matchLabels:
        run: frontend
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: GET
        - method: POST
        - method: PATCH
        - method: PUT
hubble observe --pod backend | grep GET
Oct 12 18:19:08.199: default/frontend:48188 (ID:26240) -> default/backend:80 (ID:62209) http-request FORWARDED (HTTP/1.1 GET http://10.0.0.105/)
Oct 12 18:19:08.200: default/frontend:48188 (ID:26240) <- default/backend:80 (ID:62209) http-response FORWARDED (HTTP/1.1 200 2ms (GET http://10.0.0.105/))
7. BGP & Extended Networking
7.1 EgressGateway
Enabling EgressGateway
# vim values.yaml
bpf:
  masquerade: true
kubeProxyReplacement: "true"
egressGateway:
  # -- Enables egress gateway to redirect and SNAT the traffic that leaves the cluster.
  enabled: true

helm upgrade cilium cilium/cilium -n kube-system -f values.yaml
sleep 10
kubectl rollout restart deploy cilium-operator -n kube-system
kubectl rollout restart ds cilium -n kube-system
Lab 1:
From pod-1 on master-node1, send traffic to pod-1 on worker-node1, then run tcpdump on the destination node to see where the traffic appears to come from:
sudo tcpdump -i any -n -A 'port 80'
Result:
Repeat in the opposite direction with the corresponding pods.
Result:
Lab 2:
Building on Lab 1, we want to force all outbound traffic through worker-node1, no matter how it is generated. Try it with egressGateway.
Step 1: enable the egress gateway as described above.
Step 2: label the node that will act as the egress gateway: kubectl label node node01 egress-gateway=true
Step 3: create a CiliumEgressGatewayPolicy.
This policy forces all pods labeled app=app-node1 to send their outbound traffic through the worker node labeled egress-gateway=true.
apiVersion: cilium.io/v2
kind: CiliumEgressGatewayPolicy
metadata:
  name: egress-policy
spec:
  selectors:
  - podSelector:
      matchLabels:
        app: app-node1
  destinationCIDRs:
  - "0.0.0.0/0"
  egressGateway:
    nodeSelector:
      matchLabels:
        egress-gateway: "true"
    egressIP: <NODE01_IP>   # replace with the node's IP
Step 4: repeat the test from Lab 1. The source should now always be worker-node1.
7.2 LoadBalancer IPAM
7.3 BGP
7.4 L2 Announcement
8. Optimization