1. Overview

This document covers preparing the required container images and software packages on a machine with Internet access. That machine must have Podman installed (it is used to export the images).

2. Prepare the resources

Copy the script below to a machine with network access (or configure the http_proxy / https_proxy environment variables), then run it.

The offline installation package produced by this script supports amd64 only and has only been tested on Debian bullseye.
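
For example, assuming the script below has been saved as prepare-offline.sh (the filename is an arbitrary choice), it could be run behind a proxy roughly like this:

# hypothetical proxy address; skip the two exports if the machine has direct Internet access
export http_proxy="http://proxy.example.com:8080"
export https_proxy="http://proxy.example.com:8080"
bash prepare-offline.sh
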
#!/usr/bin/env bash
set -e
# https://github.com/kubernetes/kubernetes/releases
VERSION_KUBEADM="1.25.2"
# https://github.com/kubernetes-sigs/cri-tools/releases
VERSION_CRICTL="1.25.0"
VERSION_KUBEADM_CONF="0.4.0"
ARCH="amd64"

WORK_PATH=/tmp/kubernetes
SRC_PATH=$WORK_PATH/src
DIST_PATH=$WORK_PATH/kubernetes-$VERSION_KUBEADM
BUILD_ROOT=$DIST_PATH/rootfs
IMAGE_ROOT=$DIST_PATH/images
if [ -d $IMAGE_ROOT ]; then
    rm -r $IMAGE_ROOT || :
fi
mkdir -p $SRC_PATH $DIST_PATH $BUILD_ROOT $IMAGE_ROOT

#====== down
c_down() {
    if [ ! -f "$SRC_PATH/$2" ]; then
        wget "$1" -c -O "$SRC_PATH/$2.tmp" || exit 1
        mv "$SRC_PATH/$2.tmp" "$SRC_PATH/$2"
    fi
}

KUBERNETES_BIN_URL="https://storage.googleapis.com/kubernetes-release/release"
c_down $KUBERNETES_BIN_URL/v$VERSION_KUBEADM/bin/linux/$ARCH/kubeadm kubeadm
c_down $KUBERNETES_BIN_URL/v$VERSION_KUBEADM/bin/linux/$ARCH/kubelet kubelet
c_down $KUBERNETES_BIN_URL/v$VERSION_KUBEADM/bin/linux/$ARCH/kubectl kubectl
c_down https://github.com/kubernetes-sigs/cri-tools/releases/download/v${VERSION_CRICTL}/crictl-v${VERSION_CRICTL}-linux-${ARCH}.tar.gz crictl.tgz
c_down https://raw.githubusercontent.com/kubernetes/release/v${VERSION_KUBEADM_CONF}/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service kubelet.service
c_down https://raw.githubusercontent.com/kubernetes/release/v${VERSION_KUBEADM_CONF}/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf 10-kubeadm.conf

# down images
c_down_image() {
    IMAGE_NAME="$1"
    IMAGE_SAVE_NAME="$(echo "$IMAGE_NAME" | sed -e 's@/@-@g' -e 's@:@-@g' -e 's/@/-/g')"
    CURRENT_DOCKER_PATH=$IMAGE_ROOT/$IMAGE_SAVE_NAME
    if [ ! -f "$CURRENT_DOCKER_PATH" ]; then
        podman pull "$1"
        podman save "$1" -o "$CURRENT_DOCKER_PATH"
    fi
    echo "$IMAGE_NAME $IMAGE_SAVE_NAME" >>$DIST_PATH/images.txt
}
chmod +x $SRC_PATH/kubeadm
IFS=" " read -r -a LIST <<<"$(HTTP_PROXY=no HTTPS_PROXY=no $SRC_PATH/kubeadm config images list 2>/dev/null | xargs echo)"
# shellcheck disable=SC2128
for IMAGE in ${LIST[*]}; do
    c_down_image "$IMAGE"
done
# required by containerd
c_down_image "k8s.gcr.io/pause:3.6"
# required by the k8s CNI plugin (flannel)
c_down_image "docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0"
c_down_image "docker.io/rancher/mirrored-flannelcni-flannel:v0.19.1"

# ============ BUILD =========
rm -r $BUILD_ROOT
mkdir -p $BUILD_ROOT/usr/local/bin \
    $BUILD_ROOT/usr/lib/systemd/system \
    $BUILD_ROOT/etc/systemd/system/kubelet.service.d \
    $BUILD_ROOT/etc/modules-load.d/ \
    $BUILD_ROOT/etc/sysctl.d \
    $BUILD_ROOT/usr/share/bash-completion/completions
cp $SRC_PATH/kubeadm $SRC_PATH/kubelet $SRC_PATH/kubectl $BUILD_ROOT/usr/local/bin
tar Czxf $BUILD_ROOT/usr/local/bin $SRC_PATH/crictl.tgz
chmod -R +x $BUILD_ROOT/usr/local/bin/

sed -e "s|/usr/bin|/usr/local/bin|g" <$SRC_PATH/kubelet.service | tee $BUILD_ROOT/usr/lib/systemd/system/kubelet.service >/dev/null
sed -e "s|/usr/bin|/usr/local/bin|g" <$SRC_PATH/10-kubeadm.conf | tee $BUILD_ROOT/etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null

$BUILD_ROOT/usr/local/bin/crictl completion bash | tee $BUILD_ROOT/usr/share/bash-completion/completions/crictl >/dev/null
$BUILD_ROOT/usr/local/bin/kubeadm completion bash | tee $BUILD_ROOT/usr/share/bash-completion/completions/kubeadm >/dev/null
$BUILD_ROOT/usr/local/bin/kubectl completion bash | tee $BUILD_ROOT/usr/share/bash-completion/completions/kubectl >/dev/null

cat <<EOF | tee $BUILD_ROOT/etc/modules-load.d/kubernetes.conf >/dev/null
overlay
br_netfilter
EOF
cat <<EOF | tee $BUILD_ROOT/etc/sysctl.d/zz-kubernetes.conf >/dev/null
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# =========================== package ===============================
cat <<"EOD" | tee $DIST_PATH/install.sh >/dev/null
#!/bin/bash
set -e
# install to / by default; override by setting INSTALL_PATH before running
INSTALL_PATH=${INSTALL_PATH:-/}
PWD_PATH=$(
    cd $(dirname ${BASH_SOURCE[0]})
    pwd
)
BUILD_ROOT_PATH=$PWD_PATH/rootfs
IMAGES_PATH=$PWD_PATH/images
cd $PWD_PATH || exit 1
pre(){
    chown -Rv root:root $BUILD_ROOT_PATH ||:
    chmod -Rv 755 $BUILD_ROOT_PATH/usr ||:
    chmod -Rv 644 $BUILD_ROOT_PATH/etc $BUILD_ROOT_PATH/usr/lib ||:
}
copy(){
    cp -rvf $BUILD_ROOT_PATH/* $INSTALL_PATH
}
include-img(){
    LIST=$(
        cd $IMAGES_PATH || exit
        ls *
    )
    for IMAGE in ${LIST[*]}; do
        # import into containerd's k8s.io namespace; fall back to podman if ctr is unavailable
        ctr -n=k8s.io image import $IMAGES_PATH/"$IMAGE" ||
            podman load --input $IMAGES_PATH/"$IMAGE"
    done

}
configure(){
    modprobe br_netfilter
    sysctl --system
    systemctl daemon-reload
    systemctl enable --now kubelet
}
install(){
    pre
    copy
    configure
    include-img
}
if [ ! "$1" ]; then
    install
fi
EOD

chmod +x $DIST_PATH/install.sh
(
    cd "$(dirname $DIST_PATH)" || exit 1
    tar zcvf "$(basename $DIST_PATH)".tgz "$(basename $DIST_PATH)"
)

echo "软件包准备完成,数据保存在 $(dirname $DIST_PATH)/$(basename $DIST_PATH).tgz"
exit 0

If everything goes well, the data is packaged and compressed into /tmp/kubernetes/kubernetes-1.25.2.tgz; upload this archive to the target machine.
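
For example, with scp (the destination host and directory here are assumptions; adjust them to your environment):

scp /tmp/kubernetes/kubernetes-1.25.2.tgz root@dragon-k8s-master:/tmp/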

3. Import and initialize the environment

After uploading the offline archive to the target machine, extract it and prepare the environment with the following commands:

tar Czxvf /tmp kubernetes-1.25.2.tgz
cd /tmp/kubernetes-1.25.2 || exit 1
bash -x install.sh

Once it completes, the script installs the binaries and configuration needed for the remaining steps and imports the container images into the system, including the images for the CNI network component. If everything succeeds, you can move on to installing Kubernetes proper.
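
As a quick sanity check that the images were imported (assuming containerd's ctr client is available on the node):

# should list the kubeadm control-plane images plus the pause and flannel images imported by install.sh
ctr -n k8s.io images ls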

4. Deploy Kubernetes

With the environment prepared, you can begin deploying Kubernetes.

4.1. Reset Kubernetes (optional)

If a deployment attempt fails, you can reset Kubernetes and try again.

# reset kubeadm
yes | kubeadm reset
# remove leftover CNI plugin configuration
rm -rf /etc/cni/net.d/

4.2. Deploy the control-plane node

Kubernetes requires the conntrack package.
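
A minimal check, assuming a Debian-based node (on an offline machine, install the package from local media or an internal mirror rather than the public repositories):

command -v conntrack >/dev/null || apt-get install -y conntrack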

The control-plane node is deployed on the dragon-k8s-master machine. Connect to it over SSH and run the following commands:

# initialize the control-plane node
kubeadm init \
    --upload-certs \
    --pod-network-cidr 10.254.0.0/16 \
    --node-name node-master  --kubernetes-version=v1.25.2 | tee ~/kubectl-installer.log
export KUBECONFIG=/etc/kubernetes/admin.conf
# check node status; output here indicates success
kubectl get nodes

If everything went well, kubectl get nodes prints output like the following:

NAME          STATUS     ROLES           AGE     VERSION
node-master   NotReady   control-plane   7m54s   v1.25.2

4.3. Deploy the CNI plugin

After the Kubernetes control-plane node is up, a CNI network plugin must be deployed so that the nodes can reach each other. Here we use flannel.

Note that some of flannel's default configuration has been changed here: WireGuard is used for inter-node traffic, and the in-cluster network is set to 10.254.0.0/16 (matching the --pod-network-cidr used above). Save the manifest below to a file and apply it with kubectl as shown after the manifest.

---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
  - kind: ServiceAccount
    name: flannel
    namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.254.0.0/16",
      "Backend": {
        "Type": "wireguard"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni-plugin
          #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
          image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
          command:
            - cp
          args:
            - -f
            - /flannel
            - /opt/cni/bin/flannel
          volumeMounts:
            - name: cni-plugin
              mountPath: /opt/cni/bin
        - name: install-cni
          #image: flannelcni/flannel:v0.19.1 for ppc64le and mips64le (dockerhub limitations may apply)
          image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.1
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          #image: flannelcni/flannel:v0.19.1 for ppc64le and mips64le (dockerhub limitations may apply)
          image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.1
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN", "NET_RAW"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: EVENT_QUEUE_DEPTH
              value: "5000"
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
            - name: xtables-lock
              mountPath: /run/xtables.lock
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni-plugin
          hostPath:
            path: /opt/cni/bin
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
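
Assuming the manifest above has been saved as kube-flannel.yml (any filename works), apply it on the control-plane node:

kubectl apply -f kube-flannel.yml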

Run watch kubectl get pods --all-namespaces and wait until the output looks like the example below; at that point the deployment has succeeded.

NAMESPACE     NAME                                  READY   STATUS    RESTARTS      AGE
kube-system   coredns-6d4b75cb6d-5tmll              1/1     Running   0             15m
kube-system   coredns-6d4b75cb6d-j98fd              1/1     Running   0             15m
kube-system   etcd-node-master                      1/1     Running   2             15m
kube-system   kube-apiserver-node-master            1/1     Running   2 (16m ago)   16m
kube-system   kube-controller-manager-node-master   1/1     Running   2             16m
kube-flannel  kube-flannel-ds-d54gd                 1/1     Running   0             3m33s
kube-system   kube-proxy-n95j9                      1/1     Running   0             15m
kube-system   kube-scheduler-node-master            1/1     Running   2             16m

At this point you can use the wg command to inspect the WireGuard setup that flannel deployed.
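
For example (wg is provided by the wireguard-tools package; the interface name may differ between flannel versions):

# shows the WireGuard interface flannel created, its peers and transfer counters
wg show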

4.4. Deploy the worker nodes

When the control-plane node was initialized, kubeadm printed the command for joining worker nodes to the console; you can also retrieve it with tail -n 2 ~/kubectl-installer.log. Note that the token expires after 24 hours; if it has expired, run kubeadm token create on the control-plane node to generate a new one.
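
The join command follows the usual kubeadm form, roughly as sketched below; the address, token, and hash are placeholders to be taken from your own kubeadm init output:

kubeadm join <control-plane-ip>:6443 \
    --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --node-name node-worker-01
# on the control-plane node, this prints a complete, fresh join command
kubeadm token create --print-join-command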

When joining the cluster, give each node a unique name via the --node-name parameter. After the workers have joined, run kubectl get nodes on the control-plane node; the output looks like the following:

NAME             STATUS   ROLES           AGE    VERSION
node-master      Ready    control-plane   46m    v1.25.2
node-worker-01   Ready    <none>          13m    v1.25.2
node-worker-02   Ready    <none>          105s   v1.25.2

At this point, the Kubernetes deployment is complete.