Deploying the latest Kubernetes 1.21.1 cluster (3 masters + 3 nodes)

Resource planning

This deployment builds a multi-master Kubernetes cluster using kubeadm.

The OS is 64-bit CentOS 7; this walkthrough uses CentOS 7.8.

I wrote an earlier deployment guide: https://blog.csdn.net/wt334502157/article/details/83992120

That one covers the now-dated v1.12, so this article walks through deploying the latest 1.21.1 release; following the steps one by one should reliably produce a working cluster.

Server inventory:

172.28.54.176 master01
172.28.54.177 master02
172.28.54.178 master03
172.28.54.179 node01
172.28.54.180 node02
172.28.54.181 node03

The servers are Alibaba Cloud instances, which makes reclaiming them after the experiment very convenient.

Preparation

Set hostnames

[Note] This walkthrough uses the root user; non-root users can prefix privileged commands with sudo.

172.28.54.176 --> hostnamectl set-hostname master01
172.28.54.177 --> hostnamectl set-hostname master02
172.28.54.178 --> hostnamectl set-hostname master03
172.28.54.179 --> hostnamectl set-hostname node01
172.28.54.180 --> hostnamectl set-hostname node02
172.28.54.181 --> hostnamectl set-hostname node03

Configure hosts resolution

# Add hosts entries on every machine to simplify later steps
[root@master01 ~]# vim /etc/hosts

::1     localhost       localhost.localdomain   localhost6      localhost6.localdomain6
127.0.0.1       localhost       localhost.localdomain   localhost4      localhost4.localdomain4

172.28.54.176   iZ0jlcc8f7mcmzj0z0a397Z iZ0jlcc8f7mcmzj0z0a397Z

# K8S_1.21 cluster
172.28.54.176   master01
172.28.54.177   master02
172.28.54.178   master03
172.28.54.179   node01
172.28.54.180   node02
172.28.54.181   node03
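The same entries have to land on all six machines; a minimal idempotent sketch is below. The loop and file names are illustrative: TARGET would be /etc/hosts on a real node, but the demo works against a scratch copy so it is safe to try anywhere.

```shell
# Hypothetical helper: append the cluster entries exactly once.
# On a real node, set TARGET=/etc/hosts; a scratch copy is used here.
TARGET=/tmp/hosts.demo
cp /etc/hosts "$TARGET"

if ! grep -q 'master01' "$TARGET"; then
  cat >> "$TARGET" <<'EOF'
# K8S_1.21 cluster
172.28.54.176   master01
172.28.54.177   master02
172.28.54.178   master03
172.28.54.179   node01
172.28.54.180   node02
172.28.54.181   node03
EOF
fi
cat "$TARGET"
```

Because of the grep guard, re-running the snippet does not duplicate the block.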

Disable the firewall

On every server, stop firewalld and disable it at boot:

[root@master01 ~]# systemctl stop firewalld && systemctl disable firewalld

Flush firewall rules

Flush the firewall rules on every server to rule out network-level interference with the experiment:

[root@master01 ~]# iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT

Disable the swap partition

kubelet fails to start if a swap partition is enabled, so disable swap on every server:

[root@master01 ~]# swapoff -a

swapoff -a only disables swap until the next boot; since a reboot re-applies the mounts defined in /etc/fstab, comment out the corresponding swap entry there so the partition is not mounted again at startup:

[root@master01 ~]# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
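To see exactly what that sed pattern does, here is a throwaway demo on a sample fstab fragment (the device names are made up for illustration):

```shell
# Sample fstab lines (hypothetical device names)
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/centos-root /     xfs  defaults 0 0
/dev/mapper/centos-swap swap  swap defaults 0 0
EOF

# Same command as above, pointed at the demo file:
# any line containing " swap " gets a leading '#'
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.demo
cat /tmp/fstab.demo
```

Only the swap line ends up commented; the root filesystem entry is untouched.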

Disable SELinux

Disable SELinux on every server; otherwise later K8S volume mounts may fail with Permission denied due to security restrictions:

[root@master01 ~]# setenforce 0

For a permanent change, set SELINUX=disabled in /etc/selinux/config:

[root@master01 ~]# cat /etc/selinux/config | grep -vE "^#|^$" | grep SELINUX
SELINUX=disabled
SELINUXTYPE=targeted

Time synchronization

Synchronize the clock on every server:

[root@master01 ~]# yum install ntpdate -y && ntpdate time.windows.com

Kernel parameter tuning

Tune the kernel parameters on every server to prepare the underlying runtime environment:

[root@master01 ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

[root@master01 ~]# sysctl --system
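One assumption worth making explicit: the two net.bridge.* keys only exist while the br_netfilter kernel module is loaded, and on a stock CentOS 7 install it often is not. A hedged addition (not in the original walkthrough) that loads it and persists it across reboots; the modprobe is guarded so the snippet also runs where the module is unavailable, e.g. in a container:

```shell
# Load the bridge netfilter module now (ignore failure on kernels without it)
modprobe br_netfilter 2>/dev/null || true

# Persist across reboots via systemd-modules-load
mkdir -p /etc/modules-load.d
echo br_netfilter > /etc/modules-load.d/k8s.conf
cat /etc/modules-load.d/k8s.conf
```

After this, sysctl --system should apply the bridge keys without "No such file or directory" errors.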

Configure yum repositories

Configure the docker and kubernetes yum repositories on every server so that component installation does not fail.

docker-ce

[root@master01 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

kubernetes

[root@master01 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Generate the metadata cache with yum makecache fast:

[root@master01 ~]# yum makecache fast

Install system tools

Install common tools on every server to head off dependency problems later:

[root@master01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2

Clean up any existing docker environment

Remove any old docker packages on every server to avoid a polluted environment; skip this on brand-new machines:

[root@master01 ~]# yum remove docker \
      docker-client \
      docker-client-latest \
      docker-common \
      docker-latest \
      docker-latest-logrotate \
      docker-logrotate \
      docker-engine

Installation

Install docker-ce

Install docker-ce on every server:

[root@master01 ~]# yum -y install docker-ce

Start docker and enable it at boot:

[root@master01 ~]# systemctl start docker && systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

Check on every server that docker is running healthily:

[root@master01 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-05-19 13:51:29 CST; 46s ago
     Docs: https://docs.docker.com
 Main PID: 11975 (dockerd)
   CGroup: /system.slice/docker.service
           └─11975 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
Hint: Some lines were ellipsized, use -l to show in full.

Configure a registry mirror on every server

If you are on Alibaba Cloud, log in to the console, open Container Registry (search for it if the entry is hard to find), and look under image tools for the registry mirror/accelerator.

[root@master01 ~]# mkdir -p /etc/docker
[root@master01 ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://dm8leeoe.mirror.aliyuncs.com"]
}
EOF


[root@master01 ~]# cat /etc/docker/daemon.json 
{
  "registry-mirrors": ["https://dm8leeoe.mirror.aliyuncs.com"]
}

Reload the configuration and restart docker:

[root@master01 ~]# systemctl daemon-reload && sudo systemctl restart docker
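A related tweak worth considering at this step: the node-join output later prints a warning that docker's cgroupfs driver differs from kubelet's recommended systemd driver. If you want to silence it, daemon.json can carry both the mirror and the driver setting. Treat this as an optional variant, not something the original walkthrough applies; the mirror URL is the one from above.

```json
{
  "registry-mirrors": ["https://dm8leeoe.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

After editing, run systemctl daemon-reload && systemctl restart docker again for it to take effect.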

Install kubelet, kubeadm and kubectl

Install the K8S components on every server, start kubelet, and enable it at boot. Note that this installs the latest packaged version (1.21.1 at the time of writing); to be explicit you can also pin versions, e.g. yum install -y kubelet-1.21.1 kubeadm-1.21.1 kubectl-1.21.1.

[root@master01 ~]# yum install -y kubectl kubelet kubeadm && systemctl enable kubelet && systemctl start kubelet

Install nginx and configure the master endpoints

Install nginx to reverse-proxy port 6443 of the apiserver on the 3 masters.

Install nginx

[root@master01 ~]#  yum install -y nginx

Edit the nginx configuration

[root@master01 ~]# vim /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

stream {

   upstream apiserver {
      server 172.28.54.176:6443;
      server 172.28.54.177:6443;
      server 172.28.54.178:6443;
 }

    server {
        listen   16443;
        proxy_pass apiserver;
    }
}

[Note 1] The server IPs in upstream apiserver are the 3 planned master addresses; if you later scale out the masters, update the upstream list in the nginx configuration to match.

[Note 2] Remember to change listen to 16443. If you use 6443 here, nginx occupies that port and the apiserver will fail to start when the cluster comes up.

Check the nginx configuration

[root@master01 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Start nginx

[root@master01 ~]# nginx

Initialize with kubeadm init

[root@master01 ~]# kubeadm init --control-plane-endpoint  172.28.54.176:16443 --kubernetes-version=1.21.1  --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.10.0.0/16 --pod-network-cidr=10.122.0.0/16 --upload-certs

--control-plane-endpoint 172.28.54.176:16443 matches the 16443 listen port in the nginx configuration

--kubernetes-version=1.21.1 the version to deploy, here the latest release 1.21.1

--image-repository registry.aliyuncs.com/google_containers uses the Alibaba Cloud image mirror; without it the image pulls fail

--service-cidr=10.10.0.0/16 service network settings

--pod-network-cidr=10.122.0.0/16 pod network settings

--upload-certs uploads the certificates; required for a multi-master deployment

Console output [some info lines omitted]:


[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 172.28.54.176:16443 --token p3ae38.tw83t8wn4cfnrpns \
	--discovery-token-ca-cert-hash sha256:ceeec59274e3c45437452176cf4a0b14c99d6d08174542559293e24ab2acb1f9 \
	--control-plane --certificate-key 43fe21c5f5852a5bc1a96c02aaeeddbb0a626870139a91045237bd0f8cc6859c

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.28.54.176:16443 --token p3ae38.tw83t8wn4cfnrpns \
	--discovery-token-ca-cert-hash sha256:ceeec59274e3c45437452176cf4a0b14c99d6d08174542559293e24ab2acb1f9 

[Note] Copy both kubeadm join commands somewhere local for convenient pasting later, so that new console output does not scroll them away.
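If the commands are lost anyway, they can be regenerated: the bootstrap token has a 24-hour TTL, kubeadm can print a fresh worker join line on demand, and the certificate key can be re-uploaded for control-plane joins. A guarded sketch (the guard only makes the snippet safe to run on machines without kubeadm; on a real master you would run the two kubeadm commands directly):

```shell
if command -v kubeadm >/dev/null 2>&1; then
  # Fresh worker join command (new token plus the current CA cert hash)
  kubeadm token create --print-join-command
  # Fresh certificate key for joining additional control-plane nodes
  kubeadm init phase upload-certs --upload-certs
else
  echo "kubeadm not available on this machine"
fi
```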

The console output calls for a few follow-up operations:

[root@master01 ~]# mkdir -p $HOME/.kube
[root@master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master01 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf

[Note 1] Depending on the environment, some image pulls may fail during initialization, since the source images come from different registries. Based on the reported errors, you can pull an equivalent image individually and re-tag it, for example:

[root@master01 ~]# docker pull louwy001/coredns-coredns:v1.8.0
v1.8.0: Pulling from louwy001/coredns-coredns
c6568d217a00: Pull complete 
5984b6d55edf: Pull complete 
Digest: sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61
Status: Downloaded newer image for louwy001/coredns-coredns:v1.8.0
docker.io/louwy001/coredns-coredns:v1.8.0
[root@master01 ~]# docker tag docker.io/louwy001/coredns-coredns:v1.8.0 registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
[root@master01 ~]# docker images
REPOSITORY                                                TAG       IMAGE ID       CREATED        SIZE
louwy001/coredns-coredns                                  v1.8.0    296a6d5035e2   6 months ago   42.5MB
registry.aliyuncs.com/google_containers/coredns/coredns   v1.8.0    296a6d5035e2   6 months ago   42.5MB
[root@master01 ~]# 

[Note 2] If initialization fails or you run into a thorny problem, troubleshoot from the specific errors first. Once you have a solution, run kubeadm reset to return to the pre-init state before initializing again, and also delete the directories that the kubeadm reset console output mentions.

join master

Join master02 and master03; the steps are identical on both:

[root@master02 ~]# kubeadm join 172.28.54.176:16443 --token p3ae38.tw83t8wn4cfnrpns --discovery-token-ca-cert-hash sha256:ceeec59274e3c45437452176cf4a0b14c99d6d08174542559293e24ab2acb1f9 --control-plane --certificate-key 43fe21c5f5852a5bc1a96c02aaeeddbb0a626870139a91045237bd0f8cc6859c

[preflight] Running pre-flight checks
[mark-control-plane] Marking the node master02 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@master02 ~]# mkdir -p $HOME/.kube
[root@master02 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master02 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

join node

Join node01, node02 and node03; the steps are identical on each:

[root@node01 ~]# kubeadm join 172.28.54.176:16443 --token p3ae38.tw83t8wn4cfnrpns --discovery-token-ca-cert-hash sha256:ceeec59274e3c45437452176cf4a0b14c99d6d08174542559293e24ab2acb1f9 

[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

First look at the cluster

[root@master01 ~]# kubectl get nodes
NAME       STATUS     ROLES                  AGE   VERSION
master01   NotReady   control-plane,master   23m   v1.21.1
master02   NotReady   control-plane,master   21m   v1.21.1
master03   NotReady   control-plane,master   18m   v1.21.1
node01     NotReady   <none>                 21m   v1.21.1
node02     NotReady   <none>                 20m   v1.21.1
node03     NotReady   <none>                 19m   v1.21.1

Deploy the Calico network add-on

[root@master01 ~]# wget https://docs.projectcalico.org/manifests/calico.yaml    
[root@master01 ~]# kubectl apply -f calico.yaml 
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
[root@master01 ~]# 

Check the cluster again

[root@master01 ~]# kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
master01   Ready    control-plane,master   27m   v1.21.1
master02   Ready    control-plane,master   25m   v1.21.1
master03   Ready    control-plane,master   22m   v1.21.1
node01     Ready    <none>                 25m   v1.21.1
node02     Ready    <none>                 24m   v1.21.1
node03     Ready    <none>                 23m   v1.21.1
[root@master01 ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-b656ddcfc-hcgkt   1/1     Running   0          99s
kube-system   calico-node-8pdhh                         1/1     Running   0          99s
kube-system   calico-node-999ks                         1/1     Running   0          99s
kube-system   calico-node-cc9sg                         1/1     Running   0          99s
kube-system   calico-node-nwc52                         1/1     Running   0          99s
kube-system   calico-node-rhhld                         1/1     Running   0          99s
kube-system   calico-node-vghpr                         1/1     Running   0          99s
kube-system   coredns-545d6fc579-94hl7                  1/1     Running   0          27m
kube-system   coredns-545d6fc579-qdffv                  1/1     Running   0          27m
kube-system   etcd-master01                             1/1     Running   0          28m
kube-system   etcd-master02                             1/1     Running   0          25m
kube-system   etcd-master03                             1/1     Running   0          23m
kube-system   kube-apiserver-master01                   1/1     Running   0          28m
kube-system   kube-apiserver-master02                   1/1     Running   0          25m
kube-system   kube-apiserver-master03                   1/1     Running   0          23m
kube-system   kube-controller-manager-master01          1/1     Running   1          28m
kube-system   kube-controller-manager-master02          1/1     Running   0          25m
kube-system   kube-controller-manager-master03          1/1     Running   0          23m
kube-system   kube-proxy-7sm7m                          1/1     Running   0          24m
kube-system   kube-proxy-94966                          1/1     Running   0          25m
kube-system   kube-proxy-f95c2                          1/1     Running   0          27m
kube-system   kube-proxy-mwt7k                          1/1     Running   0          25m
kube-system   kube-proxy-rx7bw                          1/1     Running   0          23m
kube-system   kube-proxy-xc854                          1/1     Running   0          25m
kube-system   kube-scheduler-master01                   1/1     Running   0          28m
kube-system   kube-scheduler-master02                   1/1     Running   0          25m
kube-system   kube-scheduler-master03                   1/1     Running   0          23m

At this point, a 3-master + 3-node cluster is fully deployed and ready for normal use.

Deploy the dashboard

Deploy the dashboard for GUI-based management.

Install the components

Download the yaml file and apply it:

[root@master01 ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
[root@master01 ~]# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
[root@master01 ~]# 
[root@master01 ~]# kubectl get svc --all-namespaces
NAMESPACE              NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default                kubernetes                  ClusterIP   10.10.0.1       <none>        443/TCP                  41m
kube-system            kube-dns                    ClusterIP   10.10.0.10      <none>        53/UDP,53/TCP,9153/TCP   41m
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.10.255.20    <none>        8000/TCP                 5m56s
kubernetes-dashboard   kubernetes-dashboard        ClusterIP   10.10.236.120   <none>        443/TCP                  5m56s

Change the svc access mode

The kubernetes-dashboard service is of type ClusterIP, which cannot be reached from a remote browser, so change it to NodePort:

[root@master01 ~]# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":443,"targetPort":8443}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
  creationTimestamp: "2021-05-19T08:14:27Z"
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  resourceVersion: "5158"
  uid: dd04a057-d6f5-48c3-9bf7-6c8693a82811
spec:
  clusterIP: 10.10.236.120
  clusterIPs:
  - 10.10.236.120
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort			# change this to NodePort, then save and quit
status:
  loadBalancer: {}
# Check the svc again: TYPE is now NodePort, with PORT 31324
[root@master01 ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.10.255.20    <none>        8000/TCP        16m
kubernetes-dashboard        NodePort    10.10.236.120   <none>        443:31324/TCP   16m
[root@master01 ~]# 
[root@master01 ~]# netstat -tnlpu|grep 31324
tcp        0      0 0.0.0.0:31324           0.0.0.0:*               LISTEN      6175/kube-proxy     
[root@master01 ~]#   
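Instead of the interactive kubectl edit above, the same change can be made non-interactively, which scripts better. A hedged one-liner using kubectl's strategic merge patch (guarded only so the snippet runs on machines without kubectl; on a master you would run the patch directly):

```shell
if command -v kubectl >/dev/null 2>&1; then
  # Flip the service type in place; equivalent to the edit above
  kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard \
    -p '{"spec":{"type":"NodePort"}}'
else
  echo "kubectl not available on this machine"
fi
```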

The dashboard can now be reached via a master's public IP and that port.

Add an access policy

[Note] On Alibaba Cloud you must add a security-group rule in the console first, or the browser will be refused on that port. If you are not on Alibaba Cloud, skip this step.

Look up the dashboard token

The browser needs a token to access the dashboard, so look up the current one:

[root@master01 ~]# kubectl get secret -n kubernetes-dashboard
NAME                               TYPE                                  DATA   AGE
default-token-twkp2                kubernetes.io/service-account-token   3      28m
kubernetes-dashboard-certs         Opaque                                0      28m
kubernetes-dashboard-csrf          Opaque                                1      28m
kubernetes-dashboard-key-holder    Opaque                                2      28m
kubernetes-dashboard-token-w825g   kubernetes.io/service-account-token   3      28m

# this token is used to log in to the dashboard
[root@master01 ~]# kubectl describe secret/kubernetes-dashboard-token-w825g -n kubernetes-dashboard
Name:         kubernetes-dashboard-token-w825g
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: 74e19a15-b52e-442e-a5ee-9c7100ae4973

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1066 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImRaQ2Njd2NiTTFwY1B1NWJTanZfbk9razVpd1BHS1oyUFhEckFLdTBlcFEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi13ODI1ZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6Ijc0ZTE5YTE1LWI1MmUtNDQyZS1hNWVlLTljNzEwMGFlNDk3MyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.eaaDf2Ldd9oFZ7rOPvTZ16NzCj9QORq2w9o-8_3QD6Wp5_Glb3CavOPps-i_cMTeI83fd32BLdRX147ps-0UzkfhZd2K2e7SgRCR7P_GLLVEX6VwWpnqYkDJJj-3RzeTs3YDKtuPat7BNFyy0LmAmhM_AyxUXkzqXWOrI8-UpL1CWNDPmYYZH4mvU7-Ix75Hmq27Zm0ZyuTr5IdtLzLUtrhbibI9vALRBWm0gU22haw_tmkD1tL_fmhhWWjMHdHGqE7Jbn5W3XlFlEenJSrBu50oHTKQrIuOAmFth3yKiwEn7aowQ0wg-WiK_kJLRbw45n0XT1618x1A1ER3DrzdNw
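The describe output above already decodes the token, but it can also be pulled straight from the secret's data, which kubectl stores base64-encoded. The jsonpath one-liner is shown as a comment since it needs the live cluster; the decode step itself is demonstrated with a made-up stand-in value:

```shell
# On the cluster (secret name taken from the listing above):
#   kubectl -n kubernetes-dashboard get secret kubernetes-dashboard-token-w825g \
#     -o jsonpath='{.data.token}' | base64 -d
# Local demo of the base64 decode step with a stand-in value:
echo 'ZXhhbXBsZS10b2tlbg==' | base64 -d; echo
```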

Access from a browser

With the network policy added, browse to the public IP plus port 31324 (master01's public IP is 8.130.29.76):

https://8.130.29.76:31324/

Paste the token in and click Sign in.

The page then shows an error:
"system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard" cannot list resource "namespaces"
The error indicates a permission problem: the dashboard account cannot list resources in the cluster's other namespaces.

Create a role to elevate permissions

Create a new dashboardAccount.yaml file with the following content:

[root@master01 ~]# vim dashboardAccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: aks-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: aks-dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: aks-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-head
  labels:
    k8s-app: kubernetes-dashboard-head
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-head
  namespace: kube-system

Apply the file:

[root@master01 ~]# kubectl apply -f dashboardAccount.yaml 
serviceaccount/aks-dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/aks-dashboard-admin created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-head created
[root@master01 ~]# 

Fetch the token of the newly created custom secret (the generated secret name can be found with kubectl get secret -n kube-system):

[root@master01 ~]# kubectl describe secret/aks-dashboard-admin-token-xzwwd -n kube-system
Name:         aks-dashboard-admin-token-xzwwd
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: aks-dashboard-admin
              kubernetes.io/service-account.uid: 0c539fcd-6242-46ed-85d5-5586e3b9d0bf

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1066 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImRaQ2Njd2NiTTFwY1B1NWJTanZfbk9razVpd1BHS1oyUFhEckFLdTBlcFEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJha3MtZGFzaGJvYXJkLWFkbWluLXRva2VuLXh6d3dkIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFrcy1kYXNoYm9hcmQtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIwYzUzOWZjZC02MjQyLTQ2ZWQtODVkNS01NTg2ZTNiOWQwYmYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWtzLWRhc2hib2FyZC1hZG1pbiJ9.bEs7S4u2fLvRVRrvPzlWI57PjUYgrNc5eChd5eaqAhZThLlxwVtenrTSzrdWZzQufqyRQEYdldp4iXffT_VKCVX19LZvvrUCStlczUQ8tFkXkJTDFhSdxIVmtiAoy8ramxfH7kY7x83AR7lVMloUJb4fInGVRv8ZBoWmrAYz_SLNxbgpaNSqPNqTK2H7KXbsBoKY4EQLtx1KO71y2pusvilQSQCTwuBJyGinkn9BtcjZ8a8PRcM0k8m6iMYb4-6bydxq09eG9YmBwIAR_q9f2DDOhwybnjBG9shykz9BX-jwpN-YOhHEKuXGWcpvowohO2rE1-dGrqUiTyXj29JQww

Sign out of the dashboard in the browser and log in again with the new token.

Login now succeeds and the cluster resources are visible.

That completes the entire deployment.
