Kubernetes 1.9 Production High-Availability in Practice -- 002: apiserver HA Installation and Deployment


The apiserver is installed and deployed in a high-availability configuration. This article follows on from the previous one, "Kubernetes 1.9 Production High-Availability in Practice -- 001: ETCD HA Cluster Deployment".

In this article we focus on how to deploy the apiserver and how to configure it for high availability.

The files used in this configuration can be downloaded from: https://pan.baidu.com/s/1wyhV_kBpIqZ_MdS2Ghb8sg

01. ApiServer server configuration

Prepare the following servers (only yds-dev-svc01-master01 and yds-dev-svc01-master02 are configured as apiservers in this article):

192.168.3.53 yds-dev-svc01-master01
192.168.3.54 yds-dev-svc01-master02
192.168.3.46 yds-dev-svc01-master03
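So that the nodes can reach each other by hostname, it may help to add matching entries to /etc/hosts on every node; a minimal sketch using the addresses listed above:

cat >> /etc/hosts <<EOF
192.168.3.53 yds-dev-svc01-master01
192.168.3.54 yds-dev-svc01-master02
192.168.3.46 yds-dev-svc01-master03
EOF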

01.01. Server configuration

Configure the following on yds-dev-svc01-master01:

[root@localhost ~]# hostnamectl set-hostname yds-dev-svc01-master01
[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens32
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens32
UUID=7d6fb2ed-364c-415f-9b02-0e54436ff1ec
DEVICE=ens32
ONBOOT=yes
IPADDR=192.168.3.53
NETMASK=255.255.255.0
GATEWAY=192.168.3.1
DNS1=192.168.3.10
DNS2=61.139.2.69

Set the kernel parameters:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf

# If sysctl -p fails with:
#   sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
#   sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
# load the bridge netfilter module first, then rerun sysctl:
modprobe br_netfilter
ls /proc/sys/net/bridge

After completing the configuration, log out and log back in.
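Note that modprobe only loads the module until the next reboot. To have br_netfilter loaded automatically on boot, a minimal sketch using systemd's modules-load.d directory (the file name is an arbitrary choice):

cat <<EOF > /etc/modules-load.d/br_netfilter.conf
br_netfilter
EOF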

Configure the following on yds-dev-svc01-master02:

[root@yds-dev-svc01-master02 ~]# hostnamectl set-hostname yds-dev-svc01-master02
[root@yds-dev-svc01-master02 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens32
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens32
UUID=7d6fb2ed-364c-415f-9b02-0e54436ff1ec
DEVICE=ens32
ONBOOT=yes
IPADDR=192.168.3.54
NETMASK=255.255.255.0
GATEWAY=192.168.3.1
DNS1=192.168.3.10
DNS2=61.139.2.69

Set the kernel parameters:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf

# If sysctl -p fails with:
#   sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
#   sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
# load the bridge netfilter module first, then rerun sysctl:
modprobe br_netfilter
ls /proc/sys/net/bridge

After completing the configuration, log out and log back in.

Upgrade the server packages to the latest versions, then reboot:

yum update -y; yum install -y epel-release
reboot

01.02. Prepare the installation files

[root@yds-dev-svc01-master01 ~]# ls /usr/bin/kube*
/usr/bin/kube-apiserver  /usr/bin/kube-controller-manager  /usr/bin/kubectl  /usr/bin/kube-scheduler
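A minimal sketch of how these binaries might be put in place, assuming you are working from the official kubernetes-server-linux-amd64.tar.gz for v1.9 (for example via the download link above):

tar -xzf kubernetes-server-linux-amd64.tar.gz
cp kubernetes/server/bin/kube-apiserver /usr/bin/
cp kubernetes/server/bin/kube-controller-manager /usr/bin/
cp kubernetes/server/bin/kube-scheduler /usr/bin/
cp kubernetes/server/bin/kubectl /usr/bin/
chmod +x /usr/bin/kube*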

02. Check the installed binaries

Check the kube-apiserver version:

[root@yds-dev-svc01-master01 ~]# /usr/bin/kube-apiserver --version
Kubernetes v1.9.0

03. Certificate configuration

Remember the server where we created the ETCD certificates? Its hostname is yds-dev-svc01-etcd01. For convenience, we will continue to use that server to create certificates.

03.01. Create the kubernetes certificate

On yds-dev-svc01-etcd01, change into the certificate directory:

[root@yds-dev-svc01-etcd01 key]# pwd
/tmp/key
[root@yds-dev-svc01-etcd01 key]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  etcd.csr  etcd-csr.json  etcd-key.pem  etcd.pem

03.02. Create the kubernetes certificate signing request

The hosts list below must cover every address the apiserver will be reached at: both master IPs, the keepalived virtual IP configured later (192.168.3.55), and the first IP of the service CIDR (10.254.0.1).

[root@yds-dev-svc01-etcd01 key]# cat kubernetes-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.3.53",
    "192.168.3.54",
    "192.168.3.55",
    "10.254.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Chengdu",
      "L": "Chengdu",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

03.03. Generate the kubernetes certificate and private key

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

03.04. Check the generated certificates

[root@yds-dev-svc01-etcd01 key]# ls kubernetes*
kubernetes.csr  kubernetes-csr.json  kubernetes-key.pem  kubernetes.pem

03.05. Verify the certificate

[root@yds-dev-svc01-etcd01 key]# openssl x509 -noout -text -in kubernetes.pem
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            1b:1e:a5:49:91:9a:f0:65:54:09:3d:04:1a:c5:3c:18:f2:88:6a:18
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=CN, ST=chengdu, L=chengdu, O=k8s, OU=System, CN=kubernetes
        Validity
            Not Before: Apr  8 09:19:00 2018 GMT
            Not After : Apr  5 09:19:00 2028 GMT
        Subject: C=CN, ST=Chengdu, L=Chengdu, O=k8s, OU=System, CN=kubernetes
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:bf:b0:3f:c5:2a:d6:e2:73:6e:bc:82:a1:c0:3d:
                    72:c0:5d:83:07:40:57:00:6c:58:c3:83:75:5b:63:
                    18:e8:10:68:6c:1a:49:e9:74:a4:b8:a0:3a:6b:db:
                    9b:d2:1b:1a:27:53:9d:4b:4e:20:e7:74:40:a8:8b:
                    85:62:6c:52:02:09:7e:a5:51:25:97:4f:c7:b8:44:
                    69:74:7a:50:64:0b:0d:17:1d:42:3a:04:d5:ea:18:
                    65:ed:17:4e:c3:71:5e:a1:af:76:b2:2f:4f:af:3d:
                    9c:89:d7:ec:ee:33:54:83:99:ea:dd:a3:79:7e:c0:
                    6b:18:44:74:a5:0e:86:1c:f8:26:e1:19:73:27:d8:
                    35:8e:ae:e7:2d:70:45:46:65:85:12:c1:4c:a6:c2:
                    d8:b7:6a:f5:ef:e1:c4:14:3d:92:d7:c8:7f:12:4a:
                    40:84:26:9f:2d:86:8b:46:1f:d9:5a:45:1a:12:8d:
                    97:c7:3e:a8:6c:a4:ae:de:36:dd:72:01:07:ff:7a:
                    63:b4:14:56:ca:1b:ea:89:99:fe:fe:0b:c1:38:1a:
                    5e:25:f1:3e:51:f5:3c:ea:da:56:d5:c7:a4:9e:3b:
                    29:2e:52:48:42:de:cd:fd:d0:aa:86:82:a5:ef:2e:
                    53:cb:6f:00:9c:6c:e6:c9:4b:06:12:45:cb:f5:13:
                    72:0f
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Server Authentication, TLS Web Client Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Subject Key Identifier:
                83:97:B3:C0:33:6B:AE:88:34:B7:D9:DD:1E:E4:2D:0E:4B:39:3B:F9
            X509v3 Authority Key Identifier:
                keyid:CC:1B:74:74:30:7C:44:AD:4A:51:EC:B7:AC:69:0F:E9:7C:66:F3:01
            X509v3 Subject Alternative Name:
                DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster, DNS:kubernetes.default.svc.cluster.local, IP Address:127.0.0.1, IP Address:192.168.3.53, IP Address:192.168.3.54, IP Address:192.168.3.55, IP Address:10.254.0.1
    Signature Algorithm: sha256WithRSAEncryption
         2b:a3:69:46:9b:8d:6b:11:69:39:de:31:d7:77:44:a9:43:d3:
         97:ea:80:86:da:ba:3b:e9:f7:24:58:f5:a3:48:39:e4:f2:97:
         fa:f8:f6:ae:98:75:df:aa:84:b9:0e:69:8e:c5:4a:33:21:27:
         0a:6d:b8:07:f1:4a:59:5b:0e:b1:10:fa:9e:e0:5e:ca:6b:c4:
         25:ee:c4:3e:6e:f7:12:5e:71:a5:10:bd:0c:4c:a3:51:79:2d:
         1d:2a:15:7f:d1:d1:4e:d3:b2:a8:fe:72:f6:20:14:dc:ef:97:
         db:dd:f6:86:0b:c4:4c:cb:07:92:a5:25:25:1a:70:a3:d2:85:
         9d:7c:b5:ba:68:a8:aa:33:b9:1e:a0:12:c9:af:d3:4c:25:d4:
         99:9c:e1:9a:d0:13:04:e8:1b:26:c5:7e:24:6c:14:a0:a8:aa:
         5b:5f:66:16:a5:2d:78:68:96:e5:cb:82:99:19:65:fa:01:44:
         de:08:d2:f5:c0:1d:62:72:88:ff:9a:35:3c:4e:b4:cf:31:b2:
         3f:7d:09:e7:5f:f7:a2:30:79:ca:91:b7:1e:34:ef:5f:8a:f7:
         26:05:1c:ff:6f:78:7e:ae:a7:47:42:35:3c:3b:f2:b9:2b:ab:
         47:1c:f1:f1:3b:67:6a:b2:17:23:7c:db:90:21:4a:88:31:1d:
         0f:c1:ad:1a

Check that:

- the Issuer field matches ca-csr.json;
- the Subject field matches kubernetes-csr.json;
- the X509v3 Subject Alternative Name field matches the hosts list in kubernetes-csr.json;
- the X509v3 Key Usage and Extended Key Usage fields match the kubernetes profile in ca-config.json.
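To spot-check just the SAN entries instead of reading the whole dump, a quick sketch that pipes the same openssl output through grep:

openssl x509 -noout -text -in kubernetes.pem | grep -A1 'Subject Alternative Name'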

03.06. Distribute the certificates

Copy the generated certificates into the Kubernetes configuration directory /etc/kubernetes/ssl/.

Copy the certificates to yds-dev-svc01-master01. The directory /etc/kubernetes/ssl/ must be created on the target server first.

scp etcd.pem etcd-key.pem root@192.168.3.53:/etc/kubernetes/ssl/
scp ca.pem ca-key.pem kubernetes.pem kubernetes-key.pem root@192.168.3.53:/etc/kubernetes/ssl/

Copy the certificates to yds-dev-svc01-master02. The directory /etc/kubernetes/ssl/ must be created on the target server first.

scp etcd.pem etcd-key.pem root@192.168.3.54:/etc/kubernetes/ssl/
scp ca.pem ca-key.pem kubernetes.pem kubernetes-key.pem root@192.168.3.54:/etc/kubernetes/ssl/

04. Configuration files

04.01. Audit policy file

Create the audit policy file:

cat > audit-policy.yaml <<EOF
# Log all requests at the Metadata level.
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
- level: Metadata
EOF

Put the audit-policy.yaml file into the /etc/kubernetes directory:

cp audit-policy.yaml /etc/kubernetes/
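The apiserver configuration below writes the audit log to /var/log/kubernetes/audit.log; assuming that path, make sure the directory exists on each master:

mkdir -p /var/log/kubernetes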

04.02. Create the TLS Bootstrapping token

Run the following on yds-dev-svc01-master01 and yds-dev-svc01-master02:

export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /etc/kubernetes/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
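Running this separately on each master produces two different tokens. For kubelets that bootstrap through the virtual IP (where a request may land on either apiserver), the token file should arguably be identical on both masters; section 10.06 below copies master01's token.csv to master02, which keeps them consistent. A sketch that makes this explicit by generating the token once and distributing it:

export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /etc/kubernetes/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
scp /etc/kubernetes/token.csv root@192.168.3.54:/etc/kubernetes/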

05. Configure and start kube-apiserver

We configure yds-dev-svc01-master01 first; once it is configured and tested, we configure yds-dev-svc01-master02.

05.01. Create the kube-apiserver configuration file

[root@yds-dev-svc01-master01 ~]# cat /etc/kubernetes/apiserver
###
## kubernetes system config
##
## The following values are used to configure the kube-apiserver
##
#
## The address on the local server to listen to.
KUBE_API_ADDRESS="--advertise-address=192.168.3.53 --bind-address=0.0.0.0 --insecure-bind-address=127.0.0.1"
#
## The port on the local server to listen on.
KUBE_API_PORT="--port=8080 --secure-port=6443"
#
## Port minions listen on
#KUBELET_PORT="--kubelet-port=10250"
#
## Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://192.168.3.50:2379,https://192.168.3.51:2379,https://192.168.3.52:2379"
#
## Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
#
## default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction"
#
## Add your own!
KUBE_API_ARGS="--apiserver-count=2 --authorization-mode=RBAC,Node --runtime-config=rbac.authorization.k8s.io/v1beta1 --kubelet-https=true --token-auth-file=/etc/kubernetes/token.csv --service-node-port-range=30000-32767 --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem --etcd-cafile=/etc/kubernetes/ssl/ca.pem --etcd-certfile=/etc/kubernetes/ssl/etcd.pem --etcd-keyfile=/etc/kubernetes/ssl/etcd-key.pem --enable-swagger-ui=true --event-ttl=1h"
KUBE_AUDIT="--audit-policy-file=/etc/kubernetes/audit-policy.yaml --audit-log-path=/var/log/kubernetes/audit.log --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100"

--service-node-port-range=30000-32767 sets the port range that NodePort services may be allocated from.

05.02. Create the apiserver systemd unit file

Create the kube-apiserver systemd unit file kube-apiserver.service:

[root@yds-dev-svc01-master01 ~]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver \
        $KUBE_LOGTOSTDERR \
        $KUBE_AUDIT \
        $KUBE_LOG_LEVEL \
        $KUBE_ETCD_SERVERS \
        $KUBE_API_ADDRESS \
        $KUBE_API_PORT \
        $KUBELET_PORT \
        $KUBE_ALLOW_PRIV \
        $KUBE_SERVICE_ADDRESSES \
        $KUBE_ADMISSION_CONTROL \
        $KUBE_API_ARGS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

05.03. Create the kube-apiserver config file

[root@yds-dev-svc01-master01 ~]# cat /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://127.0.0.1:8080"

The settings in the config file are shared by kube-apiserver.service, kube-controller-manager.service, kube-scheduler.service, kubelet.service, and kube-proxy.service. KUBE_MASTER is configured with plain HTTP here because it is mainly used by the controller-manager, scheduler, and proxy on this same server, which talk to the apiserver's insecure port on 127.0.0.1.

05.04. Open the firewall port

firewall-cmd --add-port=6443/tcp --permanent
firewall-cmd --reload

05.05. Start the apiserver service

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver

05.06. Verify the apiserver service

Access the API with curl:

[root@yds-dev-svc01-master01 ~]# curl -L --cacert /etc/kubernetes/ssl/ca.pem https://192.168.3.53:6443/api
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.3.53:6443"
    }
  ]
}
[root@yds-dev-svc01-master01 kubernetes]# curl -L http://127.0.0.1:8080/api
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.3.53:6443"
    }
  ]
}
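Beyond /api, the /healthz endpoint gives a simple pass/fail signal; on the local insecure port it needs no credentials. A quick sketch (the expected response body is the single word ok):

curl http://127.0.0.1:8080/healthz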

06. Configure kube-controller-manager

06.01. Create the controller-manager configuration

Create the configuration file /etc/kubernetes/controller-manager:

[root@yds-dev-svc01-master01 ~]# cat /etc/kubernetes/controller-manager
###
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 --service-cluster-ip-range=10.254.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem --root-ca-file=/etc/kubernetes/ssl/ca.pem --leader-elect=true"

--leader-elect=true must be set to ensure that only one kube-controller-manager in the cluster is active at a time.

06.02. Create the kube-controller-manager service file

Create the file /usr/lib/systemd/system/kube-controller-manager.service:

[root@yds-dev-svc01-master01 ~]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

06.03. Start kube-controller-manager

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager

07. Configure kube-scheduler

07.01. Create the scheduler configuration

Create the configuration file /etc/kubernetes/scheduler:

[root@yds-dev-svc01-master01 ~]# cat /etc/kubernetes/scheduler
###
# kubernetes scheduler config

# default config should be adequate

# Add your own!
KUBE_SCHEDULER_ARGS="--leader-elect=true --address=127.0.0.1"

--leader-elect=true must be set to ensure that only one kube-scheduler in the cluster is active at a time.

07.02. Create the kube-scheduler systemd unit file

Create the file /usr/lib/systemd/system/kube-scheduler.service:

[root@yds-dev-svc01-master01 ~]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

07.03. Start the scheduler

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
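Once both masters are running (section 10), you can check which node currently holds the controller-manager and scheduler leadership; in this version leader election records the holder in an annotation on an Endpoints object in kube-system. A sketch of the check (requires kubectl, configured in section 08):

kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep control-plane.alpha.kubernetes.io/leader
kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep control-plane.alpha.kubernetes.io/leader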

08. Configure the kubectl management tool

The kubectl tool is used to manage the k8s cluster; it is best not to install it on the apiserver machines themselves.

Here we return to the yds-dev-svc01-etcd01 server. Recall that the directory where we create certificates is:

cd /tmp/key/

08.01. Create the admin certificate signing request

$ cat admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

08.02. Generate the admin certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
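As with the kubernetes certificate, the subject can be spot-checked; O=system:masters is the group that RBAC's default cluster-admin binding grants full access to:

openssl x509 -noout -subject -in admin.pem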

08.03. Distribute the certificate

scp admin-key.pem admin.pem root@192.168.3.53:/etc/kubernetes/ssl/

08.04. Configure the tool

Set the KUBE_APISERVER variable:

export KUBE_APISERVER="https://192.168.3.53:6443"

Set the cluster parameters:

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER}

Set the client credentials:

kubectl config set-credentials admin \
  --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --embed-certs=true \
  --client-key=/etc/kubernetes/ssl/admin-key.pem

Set the context parameters:

kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin

Set the default context:

kubectl config use-context kubernetes

09. Check the cluster status

[root@yds-dev-svc01-master01 ~]# kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}

The output shows that controller-manager and scheduler are both healthy, which tells us the installation is working.

10. Configure yds-dev-svc01-master02

We have configured yds-dev-svc01-master01; now we configure yds-dev-svc01-master02. Since both machines are apiservers, we only need to copy the configuration from yds-dev-svc01-master01 to yds-dev-svc01-master02 and make a few adjustments.

10.01. Copy the certificates into /etc/kubernetes/ssl

[root@yds-dev-svc01-master01 ~]# scp -r /etc/kubernetes/ssl root@192.168.3.54:/etc/kubernetes/
root@192.168.3.54's password:
ca.pem                100% 1359     2.9MB/s   00:00
ca-key.pem            100% 1679     2.7MB/s   00:00
kubernetes.pem        100% 1627     2.9MB/s   00:00
kubernetes-key.pem    100% 1679     3.5MB/s   00:00
etcd.pem              100% 1436     3.1MB/s   00:00
etcd-key.pem          100% 1679     3.9MB/s   00:00
admin-key.pem         100% 1675    83.4KB/s   00:00
admin.pem             100% 1399   185.6KB/s   00:00

10.02. Copy /etc/kubernetes/apiserver

[root@yds-dev-svc01-master01 kubernetes]# scp apiserver root@192.168.3.54:/etc/kubernetes/
root@192.168.3.54's password:
apiserver             100% 1645   122.6KB/s   00:00

After copying, change --advertise-address in the following line from 192.168.3.53 to 192.168.3.54:

KUBE_API_ADDRESS="--advertise-address=192.168.3.53 --bind-address=0.0.0.0 --insecure-bind-address=127.0.0.1"

to

KUBE_API_ADDRESS="--advertise-address=192.168.3.54 --bind-address=0.0.0.0 --insecure-bind-address=127.0.0.1"

10.03. Copy /etc/kubernetes/config

[root@yds-dev-svc01-master01 kubernetes]# scp config root@192.168.3.54:/etc/kubernetes/
root@192.168.3.54's password:
config                100%  657    60.4KB/s   00:00

10.04. Copy /etc/kubernetes/controller-manager

[root@yds-dev-svc01-master01 kubernetes]# scp controller-manager root@192.168.3.54:/etc/kubernetes/
root@192.168.3.54's password:
controller-manager    100%  517    49.9KB/s   00:00

10.05. Copy /etc/kubernetes/scheduler

[root@yds-dev-svc01-master01 kubernetes]# scp scheduler root@192.168.3.54:/etc/kubernetes/
root@192.168.3.54's password:
scheduler             100%  150    25.8KB/s   00:00

10.06. Copy /etc/kubernetes/token.csv

[root@yds-dev-svc01-master01 kubernetes]# scp token.csv root@192.168.3.54:/etc/kubernetes/
root@192.168.3.54's password:
token.csv             100%   84     6.4KB/s   00:00

10.07. Copy the apiserver, controller-manager, and scheduler systemd unit files

[root@yds-dev-svc01-master01 system]# cd /usr/lib/systemd/system/
[root@yds-dev-svc01-master01 system]# scp kube-* root@192.168.3.54:/usr/lib/systemd/system/
root@192.168.3.54's password:
kube-apiserver.service             100%  611    57.5KB/s   00:00
kube-controller-manager.service    100%  432    53.9KB/s   00:00
kube-scheduler.service             100%  438    53.5KB/s   00:00

10.08. Copy the binaries

[root@yds-dev-svc01-master01 ~]# scp /usr/bin/kube* root@192.168.3.54:/usr/bin/
root@192.168.3.54's password:
kube-apiserver             100%  200MB  12.5MB/s   00:16
kube-controller-manager    100%  130MB  10.9MB/s   00:12
kubectl                    100%   64MB  10.7MB/s   00:06
kube-scheduler             100%   59MB  58.7MB/s   00:01

10.09. Copy the audit policy file

[root@yds-dev-svc01-master01 ~]# scp /etc/kubernetes/audit-policy.yaml root@192.168.3.54:/etc/kubernetes/
root@192.168.3.54's password:

10.10. Configure the yds-dev-svc01-master02 server

With everything copied over, perform the final configuration on yds-dev-svc01-master02 and start the services:

chmod +x /usr/bin/kube*
firewall-cmd --add-port=6443/tcp --permanent
firewall-cmd --reload
systemctl daemon-reload
systemctl enable kube-apiserver kube-controller-manager kube-scheduler
systemctl start kube-apiserver kube-controller-manager kube-scheduler
systemctl status kube-apiserver kube-controller-manager kube-scheduler
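Before moving on to keepalived, it is worth repeating the curl check from section 05.06 against master02 directly; the expected output is the same, with serverAddress 192.168.3.54:6443:

curl -L --cacert /etc/kubernetes/ssl/ca.pem https://192.168.3.54:6443/api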

At this point, both apiserver nodes can run independently. Next, we configure high availability for the apiserver.

11. Configure apiserver high availability

11.01. Install keepalived

Install keepalived on yds-dev-svc01-master01 and yds-dev-svc01-master02. The version installed here is keepalived-1.4.2.

yum install -y gcc openssl-devel wget
cd /tmp
wget http://www.keepalived.org/software/keepalived-1.4.2.tar.gz
tar -xvzf keepalived-1.4.2.tar.gz
cd keepalived-1.4.2
./configure --prefix=/usr/local/keepalived
make && make install
ln -s /usr/local/keepalived/sbin/keepalived /usr/sbin/keepalived

Create the systemd unit file:

[root@yds-dev-svc01-master01 ~]# cat /usr/lib/systemd/system/keepalived.service
[Unit]
Description=LVS and VRRP High Availability Monitor
After=network-online.target syslog.target
Wants=network-online.target

[Service]
Type=forking
PIDFile=/var/run/keepalived.pid
KillMode=process
EnvironmentFile=-/usr/local/keepalived/etc/sysconfig/keepalived
ExecStart=/usr/local/keepalived/sbin/keepalived $KEEPALIVED_OPTIONS
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target
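We opened TCP 6443 earlier, but VRRP itself (IP protocol 112) must also be allowed between the two peers, or each node will consider itself the master and both will hold the VIP. A sketch assuming firewalld is in use, run on both masters:

firewall-cmd --add-rich-rule='rule protocol value="vrrp" accept' --permanent
firewall-cmd --reload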

11.02. Configure keepalived on yds-dev-svc01-master01

cat >/etc/keepalived/keepalived.conf <<EOF
global_defs {
    router_id yds-dev-svc01-master01
}
vrrp_script CheckK8sMaster {
    script "curl -o /dev/null -s -w %{http_code} -k https://192.168.3.53:6443"
    interval 3
    timeout 3
    fall 2
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 110
    priority 120
    advert_int 1
    mcast_src_ip 192.168.3.53
    nopreempt
    authentication {
        auth_type PASS
        auth_pass ydstest
    }
    unicast_peer {
        192.168.3.54
    }
    virtual_ipaddress {
        192.168.3.55/24
    }
    track_script {
        CheckK8sMaster
    }
}
EOF

11.03. Configure keepalived on yds-dev-svc01-master02

cat >/etc/keepalived/keepalived.conf <<EOF
global_defs {
    router_id yds-dev-svc01-master02
}
vrrp_script CheckK8sMaster {
    script "curl -o /dev/null -s -w %{http_code} -k https://192.168.3.54:6443"
    interval 3
    timeout 3
    fall 2
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens32
    virtual_router_id 110
    priority 100
    advert_int 1
    mcast_src_ip 192.168.3.54
    nopreempt
    authentication {
        auth_type PASS
        auth_pass ydstest
    }
    unicast_peer {
        192.168.3.53
    }
    virtual_ipaddress {
        192.168.3.55/24
    }
    track_script {
        CheckK8sMaster
    }
}
EOF

11.04. Start keepalived

Start keepalived on yds-dev-svc01-master01 and yds-dev-svc01-master02:

systemctl daemon-reload
systemctl enable keepalived
systemctl restart keepalived
systemctl status keepalived

11.05. High availability test

1. Check the IP addresses. On yds-dev-svc01-master01:

[root@yds-dev-svc01-master01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:48:d8:a8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.53/24 brd 192.168.3.255 scope global ens32
       valid_lft forever preferred_lft forever
    inet 192.168.3.55/24 scope global secondary ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::9cd:60a3:99e2:48ff/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::fbd2:5239:fe68:ea3d/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::2a36:8b76:9a1d:7d50/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever

On yds-dev-svc01-master02:

[root@yds-dev-svc01-master02 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:fc:62:1d brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.54/24 brd 192.168.3.255 scope global ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::9cd:60a3:99e2:48ff/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::fbd2:5239:fe68:ea3d/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::2a36:8b76:9a1d:7d50/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever

We can see that 192.168.3.55 currently lives on yds-dev-svc01-master01. Let's access 192.168.3.55:

[root@yds-dev-svc01-master02 ~]# curl -k https://192.168.3.55:6443
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {
  },
  "code": 403
}

The service responds normally (the 403 is RBAC rejecting the anonymous user, which confirms the apiserver answered).

Now we stop kube-apiserver on yds-dev-svc01-master01.

On yds-dev-svc01-master01:

[root@yds-dev-svc01-master01 ~]# systemctl stop kube-apiserver
[root@yds-dev-svc01-master01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:48:d8:a8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.53/24 brd 192.168.3.255 scope global ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::9cd:60a3:99e2:48ff/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::fbd2:5239:fe68:ea3d/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::2a36:8b76:9a1d:7d50/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever

On yds-dev-svc01-master02:

[root@yds-dev-svc01-master02 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:fc:62:1d brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.54/24 brd 192.168.3.255 scope global ens32
       valid_lft forever preferred_lft forever
    inet 192.168.3.55/24 scope global secondary ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::9cd:60a3:99e2:48ff/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::fbd2:5239:fe68:ea3d/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::2a36:8b76:9a1d:7d50/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever

The IP 192.168.3.55 has moved to yds-dev-svc01-master02. Test access again:

[root@yds-dev-svc01-master02 ~]# curl -k https://192.168.3.55:6443
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {
  },
  "code": 403
}

The service responds normally.
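Since the VIP now fronts both apiservers, it can make sense to point kubectl at it instead of at master01 directly, so that management access also survives a failover; a sketch reusing the set-cluster command from section 08.04 (192.168.3.55 is already in the kubernetes certificate's SANs):

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://192.168.3.55:6443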

With that, our apiserver high-availability setup is complete.
