make-pkg-release
dok-release-package.sh builds the complete installation package required by DOK and uploads it to a COS bucket. If you do not need the upload step, comment out the last few lines of the script. It is recommended to run it on a machine with unrestricted access to the external network, which greatly speeds up the script. If you insist on running it locally or in a mainland-China environment, you will have to configure proxies for the various downloads, which is troublesome and not recommended.
The guiding principle of dok-release-package.sh is that every component is open source and easy to obtain over the Internet; the few custom configuration files are generated inside the script itself via cat heredocs and similar techniques. In short, by running the script any user can produce a DOK installation package. Because the script iterates quickly, this article may not keep pace with it, so for the exact details please refer to the script directly.
Docker is still the tool most people are familiar with and the easiest to install and obtain, so the installation package is built with docker rather than ctr or nerdctl. On the target clusters, however, docker has been dropped entirely as part of image optimization, and users operate through tools such as crictl/nerdctl.
yum install -y yum-utils pigz tar wget tree
# in the pipeline environment that builds the installation package, installing docker via yum is the most convenient option (nerdctl would also work)
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
systemctl enable docker
systemctl start docker
This directory structure is important: many of these paths are hard-coded, and DOK may run into problems during cluster creation if they are changed.
# build the directory structure of the installation package
rm -rf /root/dok-release
mkdir -p /root/dok-release
mkdir /root/dok-release/app
mkdir /root/dok-release/docs
mkdir /root/dok-release/network
mkdir /root/dok-release/bin
mkdir /root/dok-release/bin/{cni,k8s,runc,containerd,tools,kernel}
mkdir /root/dok-release/conf
mkdir /root/dok-release/image
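The same layout can also be created and sanity-checked in one shot. This is an equivalent sketch, not the script's own code; ROOT defaults to a temp directory here so it is safe to try anywhere, whereas the script uses /root/dok-release:

```shell
# create the full package layout under ROOT and verify it (requires bash for brace expansion)
ROOT="${ROOT:-$(mktemp -d)}"
mkdir -p "$ROOT"/{app,docs,network,conf,image} \
         "$ROOT"/bin/{cni,k8s,runc,containerd,tools,kernel}
[ -d "$ROOT/bin/k8s" ] && [ -d "$ROOT/conf" ] && echo "layout ok"
```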
The k8s suite refers to the binaries and configuration files needed to install a Kubernetes cluster: kubeadm, kubelet, kubectl, plus runc, containerd, and so on.
cd /root/dok-release/bin/k8s/ || exit
# bin
wget -c https://dl.k8s.io/v1.21.7/bin/linux/amd64/kubelet
wget -c https://dl.k8s.io/v1.21.7/bin/linux/amd64/kubeadm
wget -c https://dl.k8s.io/v1.21.7/bin/linux/amd64/kubectl
chmod +x kubelet kubeadm kubectl  # binaries downloaded from dl.k8s.io are not executable by default
# configs
wget -c https://raw.githubusercontent.com/kubernetes/release/master/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service
wget -c https://raw.githubusercontent.com/kubernetes/release/master/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf
# runc related
cd /root/dok-release/bin/runc/ || exit
wget -c https://github.com/opencontainers/runc/releases/download/v1.1.3/runc.amd64 && mv runc.amd64 runc && chmod +x runc
# Containerd related
cd /root/dok-release/bin/containerd/ || exit
wget -c https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
wget -c https://github.com/containerd/containerd/releases/download/v1.5.5/containerd-1.5.5-linux-amd64.tar.gz
tar zxvf containerd-1.5.5-linux-amd64.tar.gz && rm -f containerd-1.5.5-linux-amd64.tar.gz
mv bin/* . && rm -rf bin/
containerd config default > config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' config.toml
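After the sed, the relevant fragment of config.toml looks like this (section path as in containerd 1.5's default CRI configuration):

```toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
```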
Afterwards, pay close attention to /root/dok-release/bin/k8s/dok.yaml, the template file kubeadm uses to create the cluster. On master0 the parameter that matters most is controlPlaneEndpoint, which DOK rewrites to 127.0.0.1:8443 during cluster creation: the value is written into the kubelet configuration file, and kubelet relies on it to communicate with kube-apiserver. For why it is this particular value, see the kube-apiserver-ha chapter of the DOK documentation. The file itself is rather long, so it is not reproduced here.
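As a rough illustration, such a kubeadm template might contain something like the following. The field names are real kubeadm v1beta2 API fields, but the concrete values here are placeholders, not DOK's actual file:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.21.7
# DOK rewrites this value to 127.0.0.1:8443 on master0 before kubeadm init
controlPlaneEndpoint: "127.0.0.1:8443"
networking:
  podSubnet: "10.244.0.0/16"
```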
Some CNI plugins, calico for example, install their binaries through an installer Pod. DOK's default network plugin flannel does nothing of the sort, so DOK itself must place the CNI plugin binaries into the designated directory on every node.
# CNI related
cd /root/dok-release/bin/cni/ || exit
wget -c https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz && tar zxvf cni-plugins-linux-amd64-v1.1.1.tgz && rm -rf cni-plugins-linux-amd64-v1.1.1.tgz
There are quite a few tools, included mainly so that the cluster installs smoothly and is easy to operate and maintain afterwards, e.g. k9s and etcdctl, along with some software pre-installed when the components are set up; see the script for the full list.
Offline images are an important part of DOK, covering both the Kubernetes cluster itself and the Helm applications. The images these containers need are downloaded in advance and packed into the installation package; at cluster-creation time the image archives are distributed to every node and loaded into containerd via nerdctl load. The image list can be found in the script file. The following explains how these images are collected and downloaded locally.
# k8s images
/root/dok-release/bin/k8s/kubeadm config images pull --kubernetes-version 1.21.7
# CNI images
cd /root/dok-release/network || exit
# flannel
wget -c https://raw.githubusercontent.com/flannel-io/flannel/v0.18.1/Documentation/kube-flannel.yml
# calico
wget -c https://projectcalico.docs.tigera.io/manifests/calico.yaml
images=$(grep -irn "image: " * | grep -v "#image" | awk -F "image: " '{print $2}' | sort | uniq)
for i in $images; do
    echo "pulling image: $i"
    docker pull "$i"
done
# harbor images
helm repo add harbor https://helm.goharbor.io
helm repo update
images=$(helm template harbor harbor/harbor --version 1.8.2 --set metrics.enabled=true --set metrics.serviceMonitor.enabled=true | grep -i "image: " | awk -F "image: " '{print $2}' | sort | uniq)
for i in $images; do
    echo "pulling image: $i"
    docker pull "$i"
done
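The pulled images then have to be exported into the image/ directory of the package. The snippet below is a sketch of how that can be done, not DOK's exact code: the helper that flattens an image reference into a safe file name is hypothetical, and the save/load commands are shown as comments because they require a running docker daemon and a node:

```shell
# hypothetical helper: turn an image reference into a filesystem-safe archive name
image_to_tar() {
    echo "$1" | sed 's#[/:]#_#g'
}
image_to_tar "k8s.gcr.io/kube-apiserver:v1.21.7"   # prints k8s.gcr.io_kube-apiserver_v1.21.7

# for each pulled image (sketch; requires docker and pigz):
# docker save "$i" | pigz > "/root/dok-release/image/$(image_to_tar "$i").tgz"
# on each node, DOK later loads the archive, e.g.:
# pigz -dc "$(image_to_tar "$i").tgz" | nerdctl -n k8s.io load
```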
Helm apps are installed as follows: helm pull fetches the chart as a tgz package; after decompression, a file named dok-values.yaml is created inside it to inject the values DOK needs for deployment; in the final stage of cluster creation, helm install -f dok-values.yaml is executed on master0. That is the whole logic of application deployment in DOK.
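A sketch of that flow for the harbor chart packaged above. The values shown are placeholders, not DOK's real settings, and the helm commands are commented out because they need network access and a running cluster:

```shell
# helm pull harbor/harbor --version 1.8.2 && tar zxvf harbor-1.8.2.tgz
mkdir -p harbor
cat > harbor/dok-values.yaml <<'EOF'
# placeholder values -- DOK injects its own here
expose:
  type: nodePort
externalURL: https://harbor.example.local
EOF
# later, on master0, in the final stage of cluster creation:
# helm install harbor ./harbor -f harbor/dok-values.yaml
grep -q "externalURL" harbor/dok-values.yaml && echo "dok-values.yaml ready"
```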
When new components are introduced, the installation package has to be rebuilt. It is not recommended to re-run the whole script from scratch: going through every step just to add a few commands is not cost-effective. Here is a brief note on how to update the package at minimal cost.
# suppose there is already dok-release-without-app-image.gz on a certain host
tar zxvf dok-release-without-app-image.gz
# after decompression, add the required files according to the organization method of the installation package
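After adding the new files, re-compress the tree. A minimal sketch, with illustrative file names (the real archive name comes from the script):

```shell
# pretend we just added a new chart to the unpacked tree
mkdir -p dok-release/app
echo "placeholder chart" > dok-release/app/new-app.yaml

# repack and verify the new file made it into the archive
tar zcf dok-release-without-app-image.gz dok-release
tar ztf dok-release-without-app-image.gz | grep new-app && echo "repack ok"
```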