Wednesday, February 26, 2025

Nginx proxy to kube-apiserver, acting as an audit log

The problem: One fine day, a rather weird idea popped into my head:

  • Can nginx sit in front of the KubeAPI as a reverse proxy?
  • With the same IP:port but different domains, can we reach different clusters? (see the sketch below)
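
The second question is not implemented in this post, but here is a hedged sketch of the idea (the domain names, cert paths and the second cluster's address are hypothetical): nginx can pick the backend from the requested name (SNI) by using separate server_name blocks on the same listen port, so the server address written in each kubeconfig decides which cluster you reach. Each block would also need the proxy_ssl_* / proxy_set_header settings from the Part 1 config below, pointed at its own cluster's client certs.

# Sketch only - adjust names, certs and addresses to your environment
upstream cluster_a { server 192.168.88.12:6443; }
upstream cluster_b { server 10.10.10.12:6443; }       # some other cluster

server {
    listen 6444 ssl;
    server_name k8s-a.example.local;                   # name used in kubeconfig A
    ssl_certificate     /etc/nginx/ssl/k8s-a.crt;      # cert valid for this name
    ssl_certificate_key /etc/nginx/ssl/k8s-a.key;
    location / { proxy_pass https://cluster_a; }
}

server {
    listen 6444 ssl;
    server_name k8s-b.example.local;                   # name used in kubeconfig B
    ssl_certificate     /etc/nginx/ssl/k8s-b.crt;
    ssl_certificate_key /etc/nginx/ssl/k8s-b.key;
    location / { proxy_pass https://cluster_b; }
}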

Implementation:

Part 1: Nginx proxy_pass to kube-api

Architecture: Admin ---> Nginx ---> KubeAPI

Step 1.1: Install OpenResty (or Nginx)

You can install either nginx or OpenResty. I chose OpenResty (installing OpenResty gives you nginx as well) because Part 2 will use its Lua support.

# For CentOS ( https://openresty.org/en/linux-packages.html )
cd /etc/yum.repos.d/
curl -O https://openresty.org/package/centos/openresty.repo
yum install openresty
systemctl start openresty
systemctl enable openresty

# For Ubuntu
Follow the instructions for your Ubuntu version: https://openresty.org/en/linux-packages.html#ubuntu . Here I am using 22.04
wget -O - https://openresty.org/package/pubkey.gpg | sudo gpg --dearmor -o /usr/share/keyrings/openresty.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/openresty.gpg] http://openresty.org/package/ubuntu $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/openresty.list > /dev/null
apt-get update
apt-get -y install --no-install-recommends openresty
systemctl start openresty
systemctl enable openresty
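
A quick sanity check that the service came up (paths assume the default OpenResty prefix /usr/local/openresty):

/usr/local/openresty/nginx/sbin/nginx -v
systemctl status openresty --no-pager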

Step 1.2: Configure nginx.conf

We will reuse the certs that kube-apiserver already has:

cat /etc/kubernetes/manifests/kube-apiserver.yaml    
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key

Add the new proxy configuration as follows:

# vim /usr/local/openresty/nginx/conf/nginx.conf

upstream kubeapi_cluster {
    server 192.168.88.12:6443 max_fails=3 fail_timeout=5s;
    server 192.168.88.13:6443 max_fails=3 fail_timeout=5s;
    server 192.168.88.14:6443 max_fails=3 fail_timeout=5s;
  }

server {
    listen 6444 ssl;   # 6443 is already taken by kube-apiserver on this node
    server_name _;

    ssl_certificate /etc/kubernetes/pki/apiserver.crt;
    ssl_certificate_key /etc/kubernetes/pki/apiserver.key;

    location / {
        proxy_pass https://kubeapi_cluster/;

        proxy_http_version                 1.1;
        proxy_cache_bypass                 $http_upgrade;

        # Proxy SSL
        proxy_ssl_server_name              on;

        # Proxy headers
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Scheme $scheme;
        proxy_set_header X-Forwarded-Proto  $scheme;
        proxy_set_header X-Forwarded-For    $remote_addr;
        proxy_set_header X-Real-IP          $remote_addr;

        # Proxy timeouts
        proxy_connect_timeout              600s;
        proxy_send_timeout                 600s;
        proxy_read_timeout                 600s;

        proxy_ssl_certificate /etc/kubernetes/pki/apiserver-kubelet-client.crt;
        proxy_ssl_certificate_key /etc/kubernetes/pki/apiserver-kubelet-client.key;
    }
}
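
Before reloading, it is worth validating the edited config first (path assumes the default OpenResty install):

/usr/local/openresty/nginx/sbin/nginx -t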


Reload OpenResty nginx:
systemctl reload openresty

Update the kubeconfig to point at the new port: 6443 -> 6444

sed -i -e 's/6443/6444/g'  ~/.kube/config
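
Alternatively, for a one-off test without touching the kubeconfig, kubectl's --server flag can point at the proxy directly (the address below is this lab's proxy node):

kubectl --server=https://192.168.88.12:6444 get node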

Test: run a kubectl get against the API and check the nginx logs

# kubectl get node -v=6
I1117 13:26:50.223517   36661 round_trippers.go:553] GET https://192.168.88.12:6444/api?timeout=32s 200 OK in 25 milliseconds
I1117 13:26:50.231582   36661 round_trippers.go:553] GET https://192.168.88.12:6444/apis?timeout=32s 200 OK in 5 milliseconds
I1117 13:26:50.262655   36661 round_trippers.go:553] GET https://192.168.88.12:6444/api/v1/nodes?limit=500 200 OK in 5 milliseconds
NAME           STATUS   ROLES           AGE     VERSION
master-node    Ready    control-plane   4d20h   v1.30.0
worker-node1   Ready    <none>          4d20h   v1.29.8
# tail /usr/local/openresty/nginx/logs/access.log -n 1
192.168.88.12 - - [17/Nov/2024:13:26:50 +0000] "GET /api/v1/nodes?limit=500 HTTP/1.1" 200 9242 "-" "kubectl/v1.30.0 (linux/amd64) kubernetes/7c48c2b"

*** Another idea came to mind: could we dump the full request and response into the logs as well?... on to Part 2

Part 2: Nginx acting as audit logs

Extend nginx.conf with a log_format, an access_log and the Lua snippets below:

vim /usr/local/openresty/nginx/conf/nginx.conf

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

  #Add this log format:
    log_format log_req_resp escape=none '$remote_addr - $remote_user [$time_local] '
           ' "$request" $status $body_bytes_sent ${request_time}ms '
           '| PRINT_REQUEST_BODY: $request_body '
           '| PRINT_REQUEST_HEADER:"$req_header" '
           '| PRINT_RESPONSE_HEADER:"$resp_header" '
           '| PRINT_RESPONSE_BODY:"$resp_body" ';

    access_log  logs/access.log log_req_resp;

upstream kubeapi_cluster {
    server 192.168.88.12:6443 max_fails=3 fail_timeout=5s;
    server 192.168.88.13:6443 max_fails=3 fail_timeout=5s;
    server 192.168.88.14:6443 max_fails=3 fail_timeout=5s;
  }

   #Default 80
   server {
        listen       80;
        server_name  localhost;
        location / {
            root   html;
            index  index.html index.htm;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
    
    # Port 6444: proxy and log request/response
    server {
    listen 6444 ssl;
    server_name _;

    ssl_certificate /etc/kubernetes/pki/apiserver.crt;
    ssl_certificate_key /etc/kubernetes/pki/apiserver.key;

    location / {

        #Step2: Capture RESPONSE_BODY
        lua_need_request_body on;

        set $resp_body "";
        body_filter_by_lua '
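          -- nginx runs this filter once per response chunk: ngx.arg[1] holds the
          -- chunk data and ngx.arg[2] is true on the final chunk. Each chunk is
          -- capped at ~1 MB, accumulated in ngx.ctx, and copied into $resp_body
          -- for the access log when the response ends.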
          local resp_body = string.sub(ngx.arg[1], 1, 1000000)
          ngx.ctx.buffered = (ngx.ctx.buffered or "") .. resp_body
          if ngx.arg[2] then
             ngx.var.resp_body = ngx.ctx.buffered
          end
        ';

        #Step3: Capture REQUEST_HEADER, RESPONSE_HEADER
        set $req_header "";
        set $resp_header "";
        header_filter_by_lua '
          local h = ngx.req.get_headers()
          for k, v in pairs(h) do
              ngx.var.req_header = ngx.var.req_header .. k.."="..v.." "
          end
          local rh = ngx.resp.get_headers()
          for k, v in pairs(rh) do
              ngx.var.resp_header = ngx.var.resp_header .. k.."="..v.." "
          end
        ';


        proxy_pass https://kubeapi_cluster/;

        proxy_http_version                 1.1;
        proxy_cache_bypass                 $http_upgrade;

        # Proxy SSL
        proxy_ssl_server_name              on;

        # Proxy headers
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        proxy_set_header Host               $host;
        proxy_set_header X-Forwarded-Scheme $scheme;
        proxy_set_header X-Forwarded-Proto  $scheme;
        proxy_set_header X-Forwarded-For    $remote_addr;
        proxy_set_header X-Real-IP          $remote_addr;

        # Proxy timeouts
        proxy_connect_timeout              600s;
        proxy_send_timeout                 600s;
        proxy_read_timeout                 600s;

        proxy_ssl_certificate /etc/kubernetes/pki/apiserver-kubelet-client.crt;
        proxy_ssl_certificate_key /etc/kubernetes/pki/apiserver-kubelet-client.key;
    }
}


}
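
As in Part 1, validate and reload so the new log format takes effect:

/usr/local/openresty/nginx/sbin/nginx -t && systemctl reload openresty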



Result

# kubectl get node -v=6
I1117 13:37:22.883328   39935 round_trippers.go:553] GET https://192.168.88.12:6444/api/v1/nodes?limit=500 200 OK in 13 milliseconds
NAME           STATUS   ROLES           AGE     VERSION
master-node    Ready    control-plane   4d20h   v1.30.0
worker-node1   Ready    <none>          4d20h   v1.29.8

tail -n1 /usr/local/openresty/nginx/logs/access.log
192.168.88.12 -  [17/Nov/2024:13:34:58 +0000]  "GET /api/v1/nodes?limit=500 HTTP/1.1" 200 9242 0.010ms 
| PRINT_REQUEST_BODY:  
| PRINT_REQUEST_HEADER:"host=192.168.88.12:6444 user-agent=kubectl/v1.30.0 (linux/amd64) kubernetes/7c48c2b accept=application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json accept-encoding=gzip kubectl-command=kubectl get kubectl-session=8b063f81-279c-47dc-a39b-fc42abff2dcb " 
| PRINT_RESPONSE_HEADER:"cache-control=no-cache, private content-type=application/json connection=keep-alive x-kubernetes-pf-prioritylevel-uid=c7bb2e92-7125-47ee-b8af-b0da52a06199 audit-id=d4859fdd-7e70-4791-81ea-5d5a02d02d1d x-kubernetes-pf-flowschema-uid=c2ed420f-f18f-4d80-aaf1-291dd5dc931d " 
| PRINT_RESPONSE_BODY:"{"kind":"Table","apiVersion":"meta.k8s.io/v1","metadata":{"resourceVersion":"120353"},"columnDefinitions":[{"name":"Name","type":"string","format":"name","description":"Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names","priority":0},{"name":"Status","type":"string","format":"","description":"The status of the node","priority":0},{"name":"Roles","type":"string","format":"","description":"The roles of the node","priority":0},{"name":"Age","type":"string","format":"","description":"CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC.\n\nPopulated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata","priority":0},{"name":"Version","type":"string","format":"","description":"Kubelet Version reported by the node.","priority":0},{"name":"Internal-IP","type":"string","format":"","description":"List of addresses reachable to the node. Queried from cloud provider, if available. More info: https://kubernetes.io/docs/concepts/nodes/node/#addresses Note: This field is declared as mergeable, but the merge key is not sufficiently unique, which can cause data corruption when it is merged. Callers should instead use a full-replacement patch. See https://pr.k8s.io/79391 for an example. Consumers should assume that addresses can change during the lifetime of a Node. However, there are some exceptions where this may not be possible, such as Pods that inherit a Node's address in its own status or consumers of the downward API (status.hostIP).","priority":1},{"name":"External-IP","type":"string","format":"","description":"List of addresses reachable to the node. Queried from cloud provider, if available. More info: https://kubernetes.io/docs/concepts/nodes/node/#addresses Note: This field is declared as mergeable, but the merge key is not sufficiently unique, which can cause data corruption when it is merged. Callers should instead use a full-replacement patch. See https://pr.k8s.io/79391 for an example. Consumers should assume that addresses can change during the lifetime of a Node. However, there are some exceptions where this may not be possible, such as Pods that inherit a Node's address in its own status or consumers of the downward API (status.hostIP).","priority":1},{"name":"OS-Image","type":"string","format":"","description":"OS Image reported by the node from /etc/os-release (e.g. Debian GNU/Linux 7 (wheezy)).","priority":1},{"name":"Kernel-Version","type":"string","format":"","description":"Kernel Version reported by the node from 'uname -r' (e.g. 3.16.0-0.bpo.4-amd64).","priority":1},{"name":"Container-Runtime","type":"string","format":"","description":"ContainerRuntime Version reported by the node through runtime remote API (e.g. 
containerd://1.4.2).","priority":1}],"rows":[{"cells":["master-node","Ready","control-plane","4d20h","v1.30.0","192.168.88.12","\u003cnone\u003e","Ubuntu 22.04.5 LTS","5.15.0-125-generic","containerd://1.7.22"],"object":{"kind":"PartialObjectMetadata","apiVersion":"meta.k8s.io/v1","metadata":{"name":"master-node","uid":"a0262311-4b3b-4261-bb97-db648705e311","resourceVersion":"119778","creationTimestamp":"2024-11-12T16:57:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"master-node","kubernetes.io/os":"linux","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","projectcalico.org/IPv4Address":"192.168.88.12/24","projectcalico.org/IPv4IPIPTunnelAddr":"172.16.77.128","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-11-12T16:57:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-11-12T16:57:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-11-13T18:48:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}}},{"manager":"calico-node","operation":"Update","apiVersion":"v1","time":"2024-11-17T12:54:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}},"subresource":"status"},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-11-17T13:30:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{}}}},"subresource":"status"}]}}},{"cells":["worker-node1","Ready","\u003cnone\u003e","4d20h","v1.29.8","192.168.88.13","\u003cnone\u003e","Ubuntu 22.04.5 
LTS","5.15.0-125-generic","containerd://1.7.22"],"object":{"kind":"PartialObjectMetadata","apiVersion":"meta.k8s.io/v1","metadata":{"name":"worker-node1","uid":"604ca6f0-252d-4566-82bf-129163a96ea2","resourceVersion":"120025","creationTimestamp":"2024-11-12T16:58:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"worker-node1","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","projectcalico.org/IPv4Address":"192.168.88.13/24","projectcalico.org/IPv4IPIPTunnelAddr":"172.16.180.192","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-11-12T16:58:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-11-12T16:58:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"calico-node","operation":"Update","apiVersion":"v1","time":"2024-11-17T12:55:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}},"subresource":"status"},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-11-17T12:55:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-11-17T13:32:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:allocatable":{"f:cpu":{},"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:cpu":{},"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{}}}},"subresource":"status"}]}}}]}

The result is not as clean as enabling Kubernetes audit logs directly via an audit policy, because the returned bodies are large and will fill the disk very quickly. Still, it is a workable alternative: we can use logrotate to compress the logs and keep them only for a fixed retention period.
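
A minimal logrotate sketch of that idea (the log path matches the access_log above under the default OpenResty prefix; the pid path is the stock default, adjust if yours differs):

cat <<'EOF' > /etc/logrotate.d/openresty
/usr/local/openresty/nginx/logs/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    sharedscripts
    postrotate
        # ask nginx to reopen its log files after rotation
        [ -f /usr/local/openresty/nginx/logs/nginx.pid ] && kill -USR1 $(cat /usr/local/openresty/nginx/logs/nginx.pid)
    endscript
}
EOF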

(*This post is just an idea and has not been applied in production yet... ^^)

(References: https://gist.github.com/gilangvperdana/2c4877c8efb729534e7f7c55e6e1e2d3

https://medium.com/@rifewang/what-happens-after-running-a-kubectl-command-8aeed20ed5c4 )


My Viblo https://viblo.asia/p/nginx-proxy-toi-kube-apiserver-va-nginx-lam-audit-logs-lai-request-response-EbNVQgoRJvR 

Tuesday, January 28, 2025

RBAC user/group

#The problem:

- fresher1 has just joined the company and is in the "developer" group, which gets read-only access to every resource in the "testenv" namespace

- senior1 is also in the "developer" group but additionally has permission to create and delete pods (granted to his user individually)


#Generate key & csr

openssl genrsa -out "fresher1-key.pem" 2048

openssl genrsa -out "senior1-key.pem" 2048

openssl req -new -key "fresher1-key.pem" -out "fresher1-csr.csr" -subj "/CN=fresher1/O=developer"

openssl req -new -key "senior1-key.pem" -out "senior1-csr.csr" -subj "/CN=senior1/O=developer"
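
The CN in each subject becomes the Kubernetes username and O becomes the group, which is what the RoleBindings below match against. A quick check that the subjects came out as intended:

openssl req -in fresher1-csr.csr -noout -subject
# expect the subject to contain CN=fresher1 and O=developer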


#Import csr to k8s

cat <<EOF > fresher1-csr.yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: fresher1
spec:
  request: $(cat fresher1-csr.csr | base64 -w0)
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
EOF


cat <<EOF > senior1-csr.yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: senior1
spec:
  request: $(cat senior1-csr.csr | base64 -w0)
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
EOF


kubectl apply -f fresher1-csr.yaml

kubectl apply -f senior1-csr.yaml


#Approve the CSRs and extract the signed certs

kubectl certificate approve fresher1

kubectl certificate approve senior1

kubectl get csr fresher1 -o jsonpath='{.status.certificate}'| base64 -d > fresher1-crt.crt

kubectl get csr senior1 -o jsonpath='{.status.certificate}'| base64 -d > senior1-crt.crt


#Grant RBAC to the "developer" group

kubectl create ns testenv

kubectl -n testenv create role developer-readonly --verb=get,list --resource=*

kubectl -n testenv create rolebinding developer-readonly --role=developer-readonly --group=developer

#Grant additional RBAC for senior1

kubectl -n testenv create role developer-modify --verb=delete,create --resource=*

kubectl -n testenv create rolebinding developer-modify --role=developer-modify     --user=senior1
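
The RBAC can be sanity-checked before building any kubeconfig by impersonating the user and group (your current admin context must be allowed to impersonate, which cluster-admin is):

kubectl -n testenv auth can-i list pods   --as=fresher1 --as-group=developer   # yes
kubectl -n testenv auth can-i create pods --as=fresher1 --as-group=developer   # no
kubectl -n testenv auth can-i create pods --as=senior1  --as-group=developer   # yes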


#Create kube-config

kubectl config set-credentials fresher1 --client-key=fresher1-key.pem --client-certificate=fresher1-crt.crt --embed-certs=true

kubectl config set-credentials senior1 --client-key=senior1-key.pem --client-certificate=senior1-crt.crt --embed-certs=true

kubectl config set-context fresher1 --cluster=kubernetes --user=fresher1

kubectl config set-context senior1 --cluster=kubernetes --user=senior1


#Test: create should fail (fresher1)

k -n testenv --context=fresher1 run nginx --image=nginx

k -n testenv --context=fresher1 expose pod nginx --target-port=80 --port=80 --type=ClusterIP

#Test: create (senior1) and list (both users) should pass

k -n testenv --context=senior1 run nginx --image=nginx

k -n testenv --context=senior1 expose pod nginx --target-port=80 --port=80 --type=ClusterIP

k -n testenv --context=fresher1 get svc

k -n testenv --context=senior1 get svc
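
The delete permission from "developer-modify" can be checked the same way: fresher1 should get Forbidden, senior1 should succeed.

k -n testenv --context=fresher1 delete pod nginx   # Forbidden
k -n testenv --context=senior1 delete pod nginx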



