Any discussion of StorageClass inevitably touches on PVs and PVCs; I won't describe those in depth here (links to related posts will be added later). Before diving into the details, let's first be clear about what a StorageClass does and why it is useful: it lets a workload that needs storage bind a PVC to a dynamically provisioned PV, which the Pod then mounts and uses.
The StorageClass resource
Each StorageClass contains the fields provisioner, parameters, and reclaimPolicy, which are used when a PersistentVolume belonging to the class needs to be dynamically provisioned. For example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
  - debug
volumeBindingMode: Immediate
Since we want to back the StorageClass with NFS, first install the NFS server, configure the export directory, and enable the service at boot. The NFS client must also be installed and enabled on every node of the cluster.
# Install the NFS server (on the master / NFS host)
[root@wulaoer.org ~]# yum install -y nfs-utils
# Create the shared directory and export it
[root@wulaoer.org ~]# mkdir -p /nfs/data
[root@wulaoer.org ~]# chmod -R 777 /nfs/data
[root@wulaoer.org ~]# echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports
# Enable and start the NFS services
[root@wulaoer.org ~]# systemctl enable rpcbind
[root@wulaoer.org ~]# systemctl enable nfs-server
[root@wulaoer.org ~]# systemctl start rpcbind
[root@wulaoer.org ~]# systemctl start nfs-server
# Reload the export table and verify it took effect
[root@wulaoer.org ~]# exportfs -r
[root@wulaoer.org ~]# exportfs
# Run the following on every node of the Kubernetes cluster
[root@wulaoer.org ~]# yum install -y nfs-utils rpcbind
[root@wulaoer.org ~]# systemctl start rpcbind
[root@wulaoer.org ~]# systemctl enable rpcbind
[root@wulaoer.org ~]# systemctl start nfs
[root@wulaoer.org ~]# systemctl enable nfs
# Note: every Kubernetes node must be able to reach the NFS server; if it cannot, check the firewall.
# Test from a worker node (10.211.55.14 is the master's IP)
[root@wulaoer.org ~]# showmount -e 10.211.55.14
Export list for 10.211.55.14:
/nfs/data *
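Before wiring the provisioner to this export, you can sanity-check that Pods can reach it by mounting the share directly with an in-line nfs volume. This is just a sketch; the Pod name and busybox image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-mount-test        # illustrative name
spec:
  containers:
    - name: shell
      image: busybox          # any small image with a shell works
      # write a marker file into the share, then keep the Pod alive
      command: ["sh", "-c", "echo hello > /mnt/test.txt && sleep 3600"]
      volumeMounts:
        - name: nfs-vol
          mountPath: /mnt
  volumes:
    - name: nfs-vol
      nfs:
        server: 10.211.55.14  # the NFS server configured above
        path: /nfs/data
```

If the Pod starts and test.txt appears under /nfs/data on the server, NFS is mountable from that node.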
With the NFS server deployed, the next step is to connect it to Kubernetes through a StorageClass. The provisioner that does this runs as a component inside the cluster, so it needs RBAC authorization. Only change the namespace if your deployment requires it.
---
apiVersion: v1
kind: ServiceAccount  # ServiceAccount that grants the NFS provisioner its permissions in the cluster
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
# The ClusterRole, ClusterRoleBinding, Role, and RoleBinding below are standard
# permission bindings for the provisioner and can be applied as-is.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
Apply the RBAC manifest:
[root@Mater storageclass]# kubectl apply -f rbac.yaml
serviceaccount/nfs-client-provisioner unchanged
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner unchanged
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner unchanged
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner unchanged
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner unchanged
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  # name of the StorageClass; choose your own
  name: nfs-storage
  annotations:
    # Marks this class as the cluster default. KubeSphere, for example, requires
    # a default StorageClass, so this annotation is set to "true" here.
    storageclass.kubernetes.io/is-default-class: "true"
# name of the provisioner; must match PROVISIONER_NAME in the Deployment below
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  # whether to archive the volume's contents when the PV is deleted
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  # run a single replica
  replicas: 1
  # how existing Pods are replaced with new ones
  strategy:
    # Recreate kills the old Pod before starting a new one
    type: Recreate
  # select the backend Pod
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner  # the ServiceAccount created above
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0
          #image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2  # alternative NFS provisioner image
          # resources:
          #   limits:
          #     cpu: 10m
          #   requests:
          #     cpu: 10m
          volumeMounts:
            - name: nfs-client-root          # volume defined below
              mountPath: /persistentvolumes  # mount path inside the container
          env:
            - name: PROVISIONER_NAME  # provisioner name; must match the StorageClass provisioner above
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER        # NFS server address; change to your own server's IP
              value: 10.211.55.14
            - name: NFS_PATH          # directory exported by the NFS server
              value: /nfs/data
      volumes:
        - name: nfs-client-root  # must match the volumeMount name above
          nfs:
            server: 10.211.55.14  # NFS server address; keep consistent with NFS_SERVER
            path: /nfs/data       # exported directory; keep consistent with NFS_PATH
Apply the StorageClass and the provisioner Deployment:
[root@Mater storageclass]# kubectl apply -f sc.yaml
storageclass.storage.k8s.io/nfs-storage unchanged
deployment.apps/nfs-client-provisioner unchanged
Now list the StorageClasses. There are two: the `standard` class from the earlier example and the newly created `nfs-storage`, which is marked as the default.
[root@Mater ~]# kubectl get storageclass
NAME                    PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-storage (default)   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  21s
standard                kubernetes.io/aws-ebs                         Retain          Immediate           true                   15m
Next, create a PVC and check that the provisioner creates a backing directory in the NFS share.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc  # name of the PVC
spec:
  # access mode; ReadWriteMany means the volume can be mounted read-write by many nodes
  accessModes:
    - ReadWriteMany
  resources:        # resource requirements of the claim
    requests:
      storage: 200Mi  # request 200Mi of space
  storageClassName: nfs-storage
Apply the PVC:
[root@Mater storageclass]# kubectl apply -f pvc.yaml
persistentvolumeclaim/nginx-pvc unchanged
[root@Mater storageclass]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nginx-pvc Bound pvc-85e99ea4-80e6-4f6e-a305-54b620f9ca64 200Mi RWX nfs-storage 8m29s
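To confirm the bound claim is actually usable, a minimal Pod can mount it. This is a sketch; the Pod name and nginx image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pvc-test  # illustrative name
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: nginx-pvc  # the PVC created above
```

Anything the Pod writes under the mount path lands in the per-PV subdirectory the provisioner created inside /nfs/data.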
Verification passes: the PVC is provisioned on NFS, and the share contains a directory named after the PV, so the experiment works. During testing, however, the PVC was at first stuck in Pending; inspecting it gave output like this:
[root@Mater storageclass]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nginx-pvc Bound pvc-29c5e68b-abd4-4822-a338-440b9e66421e 200Mi RWX nfs-storage 4s
[root@Mater storageclass]# kubectl describe pvc nginx-pvc
Name: nginx-pvc
Namespace: default
StorageClass: nfs-storage
Status: Bound
Volume: pvc-29c5e68b-abd4-4822-a338-440b9e66421e
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
volume.kubernetes.io/storage-provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 200Mi
Access Modes: RWX
VolumeMode: Filesystem
Used By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 6m13s persistentvolume-controller waiting for a volume to be created, either by external provisioner "k8s-sigs.io/nfs-subdir-external-provisioner" or manually created by system administrator
Many answers online suggest modifying the kube-apiserver configuration, but whenever I changed it the apiserver restarted and stopped working; that workaround is said to apply to version 1.20, and my cluster is newer than that. The actual cause turned out to be that some nodes were missing the NFS client, so follow the installation steps above on every node to avoid this kind of unnecessary problem.
