Dok Docs

kube-apiserver-ha

Overview

In a typical environment, where no tools such as a VIP are available, Nginx is deployed on the worker nodes and requests from the workers to the kube-apiserver are reverse-proxied through it to achieve high availability.

Introduction

On every node, whether it runs control-plane components or ordinary worker components such as kube-proxy and kube-controller-manager, those components do not access the kube-apiserver directly. By default they would use the address ip:6443; DOK instead deploys an Nginx component on each node as a reverse proxy. Components access the local address 127.0.0.1:8443 and the request is proxied to any available kube-apiserver, thereby achieving high availability of the kube-apiserver.
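For example, on a kubeadm-based node the component kubeconfigs then point at the local proxy instead of a specific apiserver address (a quick check; file paths assume the standard kubeadm layout):

# The kubeconfig used by kubelet should reference the local reverse proxy
grep 'server:' /etc/kubernetes/kubelet.conf
# expected output: server: https://127.0.0.1:8443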

The reverse proxy can also be deployed as a static Pod, so that the kubelet process manages it, checks its status, restarts it when needed, and so on. For the principles behind static Pods, refer to the official documentation.

Reverse Proxy

A natural question is whether kubelet/kube-proxy accessing the kube-apiserver through a reverse proxy affects other components. It does not: the original port 6443 remains available, so health checks and metrics can still be obtained directly through port 6443.
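A quick way to confirm that both paths stay reachable (192.168.0.1 is a placeholder control-plane address; an unauthenticated request may return 401/403, which still proves connectivity):

# Direct access to a kube-apiserver on the original port
curl -k https://192.168.0.1:6443/healthz
# Access through the local reverse proxy
curl -k https://127.0.0.1:8443/healthz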

Simple Test

# Pull an Nginx image with stream (TCP) proxying support
nerdctl pull docker.io/tekn0ir/nginx-stream
# Run the proxy, mounting the stream/http configuration directories from the host
nerdctl run -d -p 8443:8443 -v /root/stream.conf.d:/opt/nginx/stream.conf.d -v /root/http.conf.d:/opt/nginx/http.conf.d --name nginx tekn0ir/nginx-stream
# Docker equivalent on the host network (-p is unnecessary with --net host)
docker run -d --net host -v /root/stream.conf.d:/opt/nginx/stream.conf.d -v /root/http.conf.d:/opt/nginx/http.conf.d --name nginx tekn0ir/nginx-stream
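For the test above to proxy anything, a stream configuration has to exist in the mounted directory first. A minimal sketch, assuming the tekn0ir/nginx-stream image includes files from stream.conf.d inside its stream block (the upstream addresses are placeholders):

mkdir -p /root/stream.conf.d /root/http.conf.d
cat > /root/stream.conf.d/kube-apiserver.conf <<'EOF'
# Placeholder endpoints; replace with the real kube-apiserver addresses
upstream kube_apiserver {
    server 192.168.0.1:6443;
    server 192.168.0.2:6443;
    server 192.168.0.3:6443;
}
server {
    listen 8443;
    proxy_pass kube_apiserver;
}
EOF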

In the future, the plan is to deploy the reverse proxy through a static Pod, but the circular-dependency problem needs to be solved first.

Phase I

Install Nginx

In Phase I, Nginx is installed only on the worker nodes as the reverse proxy for the kube-apiserver. Users can also find the Nginx-related rpm packages in the installation package and install them by themselves.

rpm -ivh /root/dok-release/bin/kernel/rh-nginx120* /root/dok-release/bin/kernel/scl-utils*
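After installation, the SCL build of Nginx runs as a normal systemd service (the unit name below follows the rh-nginx120 collection convention; verify it against the installed package):

systemctl enable --now rh-nginx120-nginx
systemctl status rh-nginx120-nginx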

Logs

Logs are collected in /var/log/nginx/tcp-access.log.
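To follow the proxied TCP connections on a node:

tail -f /var/log/nginx/tcp-access.log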


Phase II

Installing Nginx from rpm imposes requirements on the kernel version and package sources, so Phase II deploys the proxy directly as a container. This container is deployed before kubeadm init; that is, once kubelet starts, it waits for the kube-apiserver to come up through this proxy.
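A minimal sketch of the /etc/nginx/nginx.conf mounted into the container below (the upstream addresses are placeholders; since the container uses the host network, the proxy listens on 127.0.0.1:8443 directly):

cat > /etc/nginx/nginx.conf <<'EOF'
worker_processes 1;
events {
    worker_connections 1024;
}
stream {
    # Placeholder endpoints; replace with the real kube-apiserver addresses
    upstream kube_apiserver {
        server 192.168.0.1:6443;
        server 192.168.0.2:6443;
        server 192.168.0.3:6443;
    }
    server {
        listen 127.0.0.1:8443;
        proxy_pass kube_apiserver;
    }
}
EOF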

nerdctl -n k8s.io run --network host --name reverse-nginx-proxy -v /etc/nginx/nginx.conf:/etc/nginx/nginx.conf:ro -d nginx:1.20.1-alpine

Phase III (Pending)

As mentioned above, the future plan is to deploy the reverse proxy through a static Pod. The proxying relationship on the network is no different from deploying Nginx directly; the main differences are that a static Pod removes the need to deal with different operating systems, package sources, Nginx versions, and dependencies, lets kubelet assist in managing the reverse-proxy Pod, and allows its logs to be collected together with those of ordinary Pods. For a concrete implementation, refer to [kube-ha-proxy.yaml](kube-ha-proxy.yaml); the plan is to have kubeadm read a patch.yaml through parameters such as --experimental-patches /root/dok-release/bin/k8s/ and apply it as a patch to the kube-apiserver.
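The referenced kube-ha-proxy.yaml is not reproduced here; the following hypothetical static Pod sketches the shape it could take (image, names, and paths are assumptions). Where to place the file is exactly the open problem described below:

cat > /root/kube-ha-proxy.yaml <<'EOF'
# Hypothetical sketch, not the actual kube-ha-proxy.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-ha-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: nginx
    image: nginx:1.20.1-alpine
    volumeMounts:
    - name: nginx-conf
      mountPath: /etc/nginx/nginx.conf
      readOnly: true
  volumes:
  - name: nginx-conf
    hostPath:
      path: /etc/nginx/nginx.conf
      type: File
EOF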

The current problem is that the /etc/kubernetes/manifests directory created by kubeadm cannot simply take the YAML of a custom static Pod: the Pod does not start after the file is placed there. This is not easy to implement without modifying kubeadm.

FAQ

Reference

  1. kube-apiserver high availability scheme based on an Nginx proxy (基于nginx代理的kube-apiserver高可用方案)