1. Overview

This approach has a known problem (some data may be lost when the master node is shut down). Do not use it!

After Kubernetes is deployed, a directory on one of the nodes can be selected as the cluster's data storage directory.

To keep the data safe and make external backups easier, the data is physically stored on the master node.

The deployment requires the following image, and the nfs-common package must be installed on every node in the cluster.

registry.k8s.io/sig-storage/nfs-provisioner:v3.0.0
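
The nfs-common package provides the NFS client tools the nodes need in order to mount the exported volumes. A minimal sketch of installing it (shown for Debian/Ubuntu; use the equivalent package on other distributions):

# Run on every node in the cluster (Debian/Ubuntu example)
sudo apt-get update
sudo apt-get install -y nfs-common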

2. Import RBAC Rules and Initialization

Before you begin, import the RBAC rules and initialization configuration below.

apiVersion: v1
kind: Namespace
metadata:
  name: nfs-ganesha
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: nfs-server-provisioner
    release: nfs-server
  name: nfs-server-provisioner
  namespace: nfs-ganesha

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-server-provisioner
  labels:
    app: nfs-server-provisioner
    release: nfs-server
rules:
  - apiGroups: [ "" ]
    resources: [ "persistentvolumes" ]
    verbs: [ "get", "list", "watch", "create", "delete" ]
  - apiGroups: [ "" ]
    resources: [ "persistentvolumeclaims" ]
    verbs: [ "get", "list", "watch", "update" ]
  - apiGroups: [ "storage.k8s.io" ]
    resources: [ "storageclasses" ]
    verbs: [ "get", "list", "watch" ]
  - apiGroups: [ "" ]
    resources: [ "events" ]
    verbs: [ "list", "watch", "create", "update", "patch" ]
  - apiGroups: [ "" ]
    resources: [ "services", "endpoints" ]
    verbs: [ "get" ]
  - apiGroups: [ "extensions" ]
    resources: [ "podsecuritypolicies" ]
    resourceNames: [ "nfs-provisioner" ]
    verbs: [ "use" ]
  - apiGroups: [ "" ]
    resources: [ "endpoints" ]
    verbs: [ "get", "list", "watch", "create", "delete", "update", "patch" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app: nfs-server-provisioner
    release: nfs-server
  name: nfs-server-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nfs-server-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-server-provisioner
    namespace: nfs-ganesha
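
Assuming the manifest above is saved to a file (the name nfs-ganesha-rbac.yaml is only an example), a minimal sketch of importing it:

# Create the namespace, ServiceAccount, ClusterRole and ClusterRoleBinding
kubectl apply -f nfs-ganesha-rbac.yaml
# Confirm the ServiceAccount was created
kubectl -n nfs-ganesha get serviceaccount nfs-server-provisioner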

3. Map a Node Directory

For the detailed steps, refer to RookNFS; only the final configuration is shown here.

Before importing the configuration, create or mount the directory /share on the corresponding node, for example as sketched below.
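
A minimal sketch of preparing the directory on the target node (the device name /dev/sdb1 is only an illustration; a plain directory on the node's filesystem also works):

# Run on the node that will hold the data (node-master in this example)
sudo mkdir -p /share
# Optional: mount a dedicated disk at /share (device name is an example)
sudo mount /dev/sdb1 /share
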
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-ganesha-local
  labels:
    app: nfs-ganesha
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 2000Gi
  local:
    # Directory on the node to expose over NFS
    path: /share
  # Node affinity: pin the PV to a specific node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - 'node-master' # Pin to the node-master node; change to match your environment
  persistentVolumeReclaimPolicy: Retain
  storageClassName: sc-nfs-ganesha-local
  volumeMode: Filesystem
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-nfs-ganesha-local
  labels:
    app: nfs-ganesha
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-ganesha-local
  namespace: nfs-ganesha
  labels:
    app: nfs-ganesha
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2000Gi
  storageClassName: sc-nfs-ganesha-local
  volumeMode: Filesystem
  volumeName: pv-nfs-ganesha-local
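
Assuming the manifest above is saved as nfs-ganesha-local-pv.yaml (the file name is only an example), a minimal sketch of importing it and checking the result:

kubectl apply -f nfs-ganesha-local-pv.yaml
# The PVC typically stays Pending until the NFS Server pod consumes it,
# because the StorageClass uses volumeBindingMode: WaitForFirstConsumer
kubectl -n nfs-ganesha get pv,pvc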

4. Deploy the NFS Server

Bind the PVC created in the previous step to a StatefulSet, expose the relevant ports through a Service, and finally reference the provisioner from a StorageClass. In this configuration the provisioner is nfs.3rd.storage.k8s.io/nfs-server-provisioner. Because of the node affinity configured earlier and the tolerations configured below, the NFS Server will be scheduled onto the master node.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sts-nfs-ganesha
  labels:
    app: nfs-ganesha
  namespace: nfs-ganesha
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-ganesha
  serviceName: svc-nfs-ganesha
  template:
    metadata:
      labels:
        app: nfs-ganesha
    spec:
      terminationGracePeriodSeconds: 100
      serviceAccountName: nfs-server-provisioner
      containers:
        - name: nfs-server-provisioner
          image: "registry.k8s.io/sig-storage/nfs-provisioner:v3.0.0"
          imagePullPolicy: IfNotPresent
          ports:
            - name: nfs
              containerPort: 2049
              protocol: TCP
            - name: nfs-udp
              containerPort: 2049
              protocol: UDP
            - name: nlockmgr
              containerPort: 32803
              protocol: TCP
            - name: nlockmgr-udp
              containerPort: 32803
              protocol: UDP
            - name: mountd
              containerPort: 20048
              protocol: TCP
            - name: mountd-udp
              containerPort: 20048
              protocol: UDP
            - name: rquotad
              containerPort: 875
              protocol: TCP
            - name: rquotad-udp
              containerPort: 875
              protocol: UDP
            - name: rpcbind
              containerPort: 111
              protocol: TCP
            - name: rpcbind-udp
              containerPort: 111
              protocol: UDP
            - name: statd
              containerPort: 662
              protocol: TCP
            - name: statd-udp
              containerPort: 662
              protocol: UDP
          securityContext:
            capabilities:
              add:
                - DAC_READ_SEARCH
                - SYS_RESOURCE
          args:
            - "-provisioner=nfs.3rd.storage.k8s.io/nfs-server-provisioner"
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: SERVICE_NAME
              value: svc-nfs-ganesha
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: export-volume
              mountPath: /export
      volumes:
        - name: export-volume
          persistentVolumeClaim:
            claimName: pvc-nfs-ganesha-local
      tolerations: # Tolerations so the pod can be scheduled onto the master (control-plane) node
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
          operator: Exists
        - effect: NoSchedule
          key: node-role.kubernetes.io/control-plane
          operator: Exists
        - effect: NoSchedule
          key: node.kubernetes.io/not-ready
          operator: Exists
        - effect: NoSchedule
          key: node.kubernetes.io/unreachable
          operator: Exists
        - effect: NoSchedule
          key: node.kubernetes.io/disk-pressure
          operator: Exists
        - effect: NoSchedule
          key: node.kubernetes.io/memory-pressure
          operator: Exists
        - effect: NoSchedule
          key: node.kubernetes.io/pid-pressure
          operator: Exists
        - effect: NoSchedule
          key: node.kubernetes.io/unschedulable
          operator: Exists
        - effect: NoSchedule
          key: node.kubernetes.io/network-unavailable
          operator: Exists
---
apiVersion: v1
kind: Service
metadata:
  name: svc-nfs-ganesha
  labels:
    app: nfs-ganesha
  namespace: nfs-ganesha
spec:
  type: ClusterIP
  ports:
    - port: 2049
      targetPort: nfs
      protocol: TCP
      name: nfs
    - port: 2049
      targetPort: nfs-udp
      protocol: UDP
      name: nfs-udp
    - port: 32803
      targetPort: nlockmgr
      protocol: TCP
      name: nlockmgr
    - port: 32803
      targetPort: nlockmgr-udp
      protocol: UDP
      name: nlockmgr-udp
    - port: 20048
      targetPort: mountd
      protocol: TCP
      name: mountd
    - port: 20048
      targetPort: mountd-udp
      protocol: UDP
      name: mountd-udp
    - port: 875
      targetPort: rquotad
      protocol: TCP
      name: rquotad
    - port: 875
      targetPort: rquotad-udp
      protocol: UDP
      name: rquotad-udp
    - port: 111
      targetPort: rpcbind
      protocol: TCP
      name: rpcbind
    - port: 111
      targetPort: rpcbind-udp
      protocol: UDP
      name: rpcbind-udp
    - port: 662
      targetPort: statd
      protocol: TCP
      name: statd
    - port: 662
      targetPort: statd-udp
      protocol: UDP
      name: statd-udp
  selector:
    app: nfs-ganesha
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: 'sc-nfs-share'
  labels:
    app: nfs-ganesha
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: nfs.3rd.storage.k8s.io/nfs-server-provisioner
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
  - vers=4
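
Assuming the manifest above is saved as nfs-ganesha-server.yaml (the file name is only an example), a minimal sketch of deploying it and waiting for the server to start:

kubectl apply -f nfs-ganesha-server.yaml
# Wait until the NFS Server pod is Running on the master node
kubectl -n nfs-ganesha get pods -l app=nfs-ganesha -o wide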

5. Verify the Deployment

5.1. Check the Deployment Result

Use the following command to check the deployment result.

kubectl get pods,svc,ingress,configmaps,sc -n nfs-ganesha

5.2. Import the Test Configuration

Import the following configuration to create the test Pods.

# Test resources
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test-nfs-ganesha
  labels:
    app: nfs-ganesha
    mode: test
spec:
  storageClassName: 'sc-nfs-share'
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nfs-ganesha
    mode: test
    role: web-frontend
  name: deploy-test-nfs-ganesha-web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nfs-ganesha
      mode: test
      role: web-frontend
  template:
    metadata:
      labels:
        app: nfs-ganesha
        mode: test
        role: web-frontend
    spec:
      containers:
        - name: web
          image: nginx
          ports:
            - name: web
              containerPort: 80
          volumeMounts:
            # name must match the volume name below
            - name: nfs-ganesha-vol
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: nfs-ganesha-vol
          persistentVolumeClaim:
            claimName: pvc-test-nfs-ganesha
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nfs-ganesha
    mode: test
    role: busybox
  name: deploy-test-nfs-ganesha-busybox
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nfs-ganesha
      mode: test
      role: busybox
  template:
    metadata:
      labels:
        app: nfs-ganesha
        mode: test
        role: busybox
    spec:
      containers:
        - image: busybox
          command:
            - sh
            - -c
            - "while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep $(($RANDOM % 5 + 5)); done"
          imagePullPolicy: IfNotPresent
          name: busybox
          volumeMounts:
            # name must match the volume name below
            - name: nfs-ganesha-vol
              mountPath: "/mnt"
      volumes:
        - name: nfs-ganesha-vol
          persistentVolumeClaim:
            claimName: pvc-test-nfs-ganesha
---
kind: Service
apiVersion: v1
metadata:
  name: svc-test-nfs-ganesha
  labels:
    app: nfs-ganesha
    mode: test
spec:
  ports:
    - port: 80
  selector:
    role: web-frontend
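
Assuming the test manifest above is saved as nfs-ganesha-test.yaml (the file name is only an example), a minimal sketch of importing it into the default namespace:

kubectl apply -f nfs-ganesha-test.yaml
# Watch until the test pods are Running
kubectl get pods -l app=nfs-ganesha,mode=test -w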

5.3. Verify the Test

After the import finishes, use kubectl get pods to check that the resources were created. Once the Pods are running, run the following commands; if data is returned, the deployment works:

# Verify that the resources were created
kubectl get deploy,pvc,pods -l mode=test,app=nfs-ganesha
# Test the result
CLUSTER_IP=$(kubectl get services svc-test-nfs-ganesha -o jsonpath='{.spec.clusterIP}')
POD_NAME=$(kubectl get pod -l app=nfs-ganesha,role=busybox -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD_NAME" -- wget -qO- http://"$CLUSTER_IP"

If a 403 error is returned, it is most likely a filesystem permission issue; you can inspect the files manually with kubectl exec.
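
For example, a minimal sketch of such a manual check (resource and path names follow the test manifest above):

# Inspect ownership and permissions of the shared files from a web pod
POD_NAME=$(kubectl get pod -l app=nfs-ganesha,role=web-frontend -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD_NAME" -- ls -l /usr/share/nginx/html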