Introduction
When building a highly available Kubernetes cluster, Keepalived can provide failover for a virtual IP address (and, combined with LVS, load balancing), improving the stability and reliability of the cluster. This article describes how to deploy a Kubernetes v1.16 high-availability cluster on CentOS 7 and use Keepalived to fail the cluster IP address over between the master nodes.
Environment Preparation
- Three CentOS 7 servers, referred to as master1, master2, and worker1
- Virtual machines with Docker and the Kubernetes components installed
- Working network connectivity between the three servers
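For convenience, the hostnames can be resolved through /etc/hosts on all three machines. Below is a minimal sketch with placeholder addresses; substitute the real IP addresses of your servers.
# Example /etc/hosts entries on all three servers (placeholder IPs)
192.168.1.11  master1
192.168.1.12  master2
192.168.1.13  worker1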
Step 1: Deploy the etcd Cluster
1. Install etcd
On both master1 and master2, install the etcd package:
$ sudo yum install etcd -y
2. Configure etcd
On master1 and master2, edit /etc/etcd/etcd.conf and set the node name (unique per node), the node's IP address, and the ports. Note that a two-member etcd cluster has no fault tolerance, since losing either member breaks quorum; an odd number of members (three or more) is recommended for production.
# Edit etcd.conf on master1
$ sudo vi /etc/etcd/etcd.conf
# Modify the following lines
ETCD_NAME=master1
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://<master1-ip>:2380"
ETCD_LISTEN_CLIENT_URLS="http://<master1-ip>:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://<master1-ip>:2380"
ETCD_INITIAL_CLUSTER="master1=http://<master1-ip>:2380,master2=http://<master2-ip>:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-token"
ETCD_ADVERTISE_CLIENT_URLS="http://<master1-ip>:2379"
# Edit etcd.conf on master2
$ sudo vi /etc/etcd/etcd.conf
# Modify the following lines
ETCD_NAME=master2
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://<master2-ip>:2380"
ETCD_LISTEN_CLIENT_URLS="http://<master2-ip>:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://<master2-ip>:2380"
ETCD_INITIAL_CLUSTER="master1=http://<master1-ip>:2380,master2=http://<master2-ip>:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-token"
ETCD_ADVERTISE_CLIENT_URLS="http://<master2-ip>:2379"
3. Start the etcd service
# Start etcd on master1
$ sudo systemctl enable etcd
$ sudo systemctl start etcd
# Start etcd on master2
$ sudo systemctl enable etcd
$ sudo systemctl start etcd
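To confirm that the two members have formed a healthy cluster, etcdctl can be queried from either master. The commands below are a sketch and assume the etcd v2 API that the CentOS 7 etcd package uses by default; adjust the endpoint to your address.
# Verify cluster membership and health from master1
$ etcdctl --endpoints=http://<master1-ip>:2379 member list
$ etcdctl --endpoints=http://<master1-ip>:2379 cluster-health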
Step 2: Deploy the Kubernetes Control Plane
1. Install kube-apiserver, kube-controller-manager, and kube-scheduler
Install kube-apiserver, kube-controller-manager, and kube-scheduler on both master1 and master2.
$ sudo yum install kubernetes-master -y
2. Configure kube-apiserver, kube-controller-manager, and kube-scheduler
On master1 and master2, edit /etc/kubernetes/apiserver and /etc/kubernetes/config. Each API server should list both etcd endpoints so that it keeps working if one etcd member goes down.
# Edit apiserver on master1
$ sudo vi /etc/kubernetes/apiserver
# Modify the following lines
KUBE_ETCD_SERVERS="--etcd-servers=http://<master1-ip>:2379,http://<master2-ip>:2379"
KUBE_API_ADDRESS="--advertise-address=<master1-ip>"
KUBE_API_PORT="--secure-port=6443"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_API_ARGS="--authorization-mode=Node,RBAC"
# Edit apiserver on master2
$ sudo vi /etc/kubernetes/apiserver
# Modify the following lines
KUBE_ETCD_SERVERS="--etcd-servers=http://<master1-ip>:2379,http://<master2-ip>:2379"
KUBE_API_ADDRESS="--advertise-address=<master2-ip>"
KUBE_API_PORT="--secure-port=6443"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_API_ARGS="--authorization-mode=Node,RBAC"
# Edit config on master1
$ sudo vi /etc/kubernetes/config
# Modify the following lines
KUBE_MASTER="--master=http://<master1-ip>:8080"
# Edit config on master2
$ sudo vi /etc/kubernetes/config
# Modify the following lines
KUBE_MASTER="--master=http://<master2-ip>:8080"
3. Start kube-apiserver, kube-controller-manager, and kube-scheduler
# Start kube-apiserver, kube-controller-manager, and kube-scheduler on master1
$ sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
$ sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
# Start kube-apiserver, kube-controller-manager, and kube-scheduler on master2
$ sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
$ sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
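As a quick sanity check, assuming the packaged API server still serves the legacy insecure port 8080 locally, the healthz endpoint and the control-plane component statuses can be queried on each master:
# On master1 and master2, check that the API server answers
$ curl http://127.0.0.1:8080/healthz     # expected output: ok
$ kubectl -s http://127.0.0.1:8080 get componentstatuses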
Step 3: Deploy the Kubernetes Node
1. Install kubelet and kube-proxy
Install kubelet and kube-proxy on worker1.
$ sudo yum install kubernetes-node -y
2. Configure kubelet and kube-proxy
On worker1, edit /etc/kubernetes/kubelet and /etc/kubernetes/proxy. Point both components at the virtual IP that Keepalived will manage in Step 4, so the worker keeps reaching an API server after a master failover.
# Edit kubelet on worker1
$ sudo vi /etc/kubernetes/kubelet
# Modify the following lines
KUBELET_ADDRESS="--address=<worker1-ip>"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=<worker1-hostname>"
KUBELET_API_SERVER="--api-servers=http://<virtual-ip>:8080"
KUBELET_ARGS="--cluster-dns=10.254.0.10 --cluster-domain=cluster.local"
# Edit proxy on worker1
$ sudo vi /etc/kubernetes/proxy
# Modify the following lines
KUBE_PROXY_ARGS="--bind-address=<worker1-ip> --cluster-cidr=10.254.0.0/16 --hostname-override=<worker1-hostname> --kubeconfig=/var/lib/kube-proxy/kubeconfig.conf"
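KUBE_PROXY_ARGS above references a kubeconfig at /var/lib/kube-proxy/kubeconfig.conf that the package does not create. The following is a minimal sketch of such a file, pointing at the virtual IP from Step 4 over the insecure HTTP port so that kube-proxy keeps working after a master failover; adjust the address, port, and credentials to your environment.
# /var/lib/kube-proxy/kubeconfig.conf (minimal sketch)
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://<virtual-ip>:8080
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-proxy
  name: default
current-context: default
users:
- name: kube-proxy
  user: {}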
3. Start kubelet and kube-proxy
# Start kubelet and kube-proxy on worker1
$ sudo systemctl enable kubelet kube-proxy
$ sudo systemctl start kubelet kube-proxy
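Once the kubelet has registered, worker1 should show up from either master. The command below assumes kubectl on the master is pointed at the local API server:
# On master1, confirm that worker1 has joined the cluster
$ kubectl -s http://127.0.0.1:8080 get nodes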
Step 4: Deploy Keepalived
1. Install Keepalived
Install Keepalived on master1 and master2.
$ sudo yum install keepalived -y
2. Configure Keepalived
On master1 and master2, edit /etc/keepalived/keepalived.conf. master1 acts as MASTER with the higher priority and master2 as BACKUP; both advertise the same virtual router ID and virtual IP.
# Edit keepalived.conf on master1
$ sudo vi /etc/keepalived/keepalived.conf
# Modify the following lines
vrrp_instance VI_1 {
    state MASTER
    interface <network-interface>
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        <virtual-ip>
    }
}
# Edit keepalived.conf on master2
$ sudo vi /etc/keepalived/keepalived.conf
# Modify the following lines
vrrp_instance VI_1 {
    state BACKUP
    interface <network-interface>
    virtual_router_id 51
    priority 99
    virtual_ipaddress {
        <virtual-ip>
    }
}
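Optionally, Keepalived can track the health of the local kube-apiserver so the virtual IP only stays on a node whose API server is actually responding. The snippet below is a sketch, not part of the original configuration: add the vrrp_script block above vrrp_instance VI_1 in keepalived.conf on both masters and reference it from a track_script section (the check command and interval are assumptions).
# Optional kube-apiserver health check for keepalived.conf on both masters
vrrp_script check_apiserver {
    script "/usr/bin/curl -sfk https://127.0.0.1:6443/healthz"
    interval 3
    fall 3
    rise 2
}
# ...and inside vrrp_instance VI_1:
#     track_script {
#         check_apiserver
#     }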
3. Start Keepalived
# Start Keepalived on master1
$ sudo systemctl enable keepalived
$ sudo systemctl start keepalived
# Start Keepalived on master2
$ sudo systemctl enable keepalived
$ sudo systemctl start keepalived
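At this point the virtual IP should be bound to the configured interface on master1. A simple way to verify failover, assuming the priorities above are in effect, is to stop Keepalived on master1 and watch the address move to master2:
# On master1: the virtual IP should appear on the configured interface
$ ip addr show <network-interface>
# Simulate a failure on master1 and confirm the VIP moves
$ sudo systemctl stop keepalived      # run on master1
$ ip addr show <network-interface>    # run on master2: the VIP should now be listed here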
Conclusion
With the steps above we have deployed a highly available Kubernetes cluster consisting of an etcd cluster, a redundant Kubernetes control plane, and a worker node, with Keepalived providing failover of the virtual IP address between the masters. This improves the stability and reliability of the cluster and helps keep applications highly available.