Kubernetes Container Orchestration Architecture in Practice: High-Availability Cluster Deployment and Resource Scheduling Optimization

 

Introduction

With the rapid growth of container technology, Kubernetes has become the de facto standard for container orchestration and the core infrastructure on which enterprises build cloud-native applications. Realizing its full potential, however, takes more than understanding the core concepts: you also need to master high-availability cluster deployment and resource scheduling optimization.

This article examines Kubernetes' core architectural design, then walks through high-availability cluster deployment, resource scheduling strategies, and network configuration optimization, providing practical guidance for building a stable and efficient containerized infrastructure.

Kubernetes Core Architecture Design

Separating the Control Plane from Worker Nodes

Kubernetes separates the control plane from the worker nodes. This design keeps the system scalable and reliable: cluster-wide decisions are made by the control plane, while the nodes simply run workloads.

Control plane components:

  • API Server: the unified entry point to the cluster, exposing the RESTful API
  • etcd: a distributed key-value store holding all cluster state
  • Scheduler: makes Pod scheduling decisions
  • Controller Manager: runs the built-in controllers that reconcile cluster state

Worker node components:

  • kubelet: the node agent responsible for Pod lifecycle management
  • kube-proxy: the network proxy implementing service discovery and load balancing
  • Container runtime: such as containerd or Docker
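
On a kubeadm-provisioned cluster the control-plane components run as static Pods in the kube-system namespace, so both sides of this split can be inspected with kubectl (output differs on managed offerings such as GKE or EKS, where the control plane is hidden):

# Control-plane and add-on Pods
kubectl get pods -n kube-system -o wide
# Nodes with their kubelet and container runtime versions
kubectl get nodes -o wide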

Declarative API Design

Kubernetes uses a declarative API: you describe the desired state of the system in YAML or JSON, and controllers continuously reconcile the actual state toward it. This simplifies management and improves reliability, as the Deployment below illustrates.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.20
        ports:
        - containerPort: 80
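
Applying the manifest and letting the Deployment controller reconcile it looks like this (assuming the YAML above is saved as nginx-deployment.yaml):

kubectl apply -f nginx-deployment.yaml
kubectl rollout status deployment/nginx-deployment
# Three Pods should be Running, matching .spec.replicas
kubectl get pods -l app=nginx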

High-Availability Cluster Deployment

Architecture Design Principles

A highly available Kubernetes cluster should follow these principles:

  1. No single point of failure: every critical component is deployed redundantly
  2. Durable data: etcd data is stored reliably and backed up regularly
  3. Load balancing: clients reach the API Server through a load balancer
  4. Automatic recovery: failures are detected and recovered from without manual intervention

Highly Available etcd Deployment

As the core data store of Kubernetes, etcd must itself be highly available. Deploy it as a cluster with an odd number of members (typically 3 or 5) so that a quorum survives the loss of a minority of nodes.

# Example etcd cluster configuration
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.1.10:2380,etcd-2=https://192.168.1.11:2380,etcd-3=https://192.168.1.12:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-1"

etcd cluster deployment script:

#!/bin/bash
# etcd high-availability deployment script

# Install the etcd binaries
ETCD_VERSION="v3.5.0"
wget https://github.com/etcd-io/etcd/releases/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz
tar -xzf etcd-${ETCD_VERSION}-linux-amd64.tar.gz
sudo cp etcd-${ETCD_VERSION}-linux-amd64/etcd* /usr/local/bin/

# Create etcd configuration and data directories
sudo mkdir -p /etc/etcd /var/lib/etcd
# Create a dedicated system user if it does not exist yet, then hand it the data directory
sudo useradd -r -s /sbin/nologin etcd 2>/dev/null || true
sudo chown -R etcd:etcd /var/lib/etcd

# Generate a CA certificate (simplified example; production clusters should use a proper PKI such as cfssl)
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 3650 -out ca.pem -subj "/CN=etcd-ca"
# Issue server/peer certificates signed by this CA and copy ca.pem, kubernetes.pem and
# kubernetes-key.pem into /etc/etcd/ on every member; the unit file below expects them there.

# Create the etcd systemd unit and start the service
sudo tee /etc/systemd/system/etcd.service > /dev/null << EOF
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \\
  --name etcd-1 \\
  --cert-file=/etc/etcd/kubernetes.pem \\
  --key-file=/etc/etcd/kubernetes-key.pem \\
  --peer-cert-file=/etc/etcd/kubernetes.pem \\
  --peer-key-file=/etc/etcd/kubernetes-key.pem \\
  --trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://192.168.1.10:2380 \\
  --listen-peer-urls https://192.168.1.10:2380 \\
  --listen-client-urls https://192.168.1.10:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls https://192.168.1.10:2379 \\
  --initial-cluster-token etcd-cluster-1 \\
  --initial-cluster etcd-1=https://192.168.1.10:2380,etcd-2=https://192.168.1.11:2380,etcd-3=https://192.168.1.12:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
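
Once all members are up, verify cluster health and leadership with etcdctl; a minimal check, assuming the certificate paths from the unit file above:

ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.1.10:2379,https://192.168.1.11:2379,https://192.168.1.12:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem \
  endpoint health

# Show member status, including which node is currently the leader
ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.1.10:2379,https://192.168.1.11:2379,https://192.168.1.12:2379 \
  --cacert=/etc/etcd/ca.pem --cert=/etc/etcd/kubernetes.pem --key=/etc/etcd/kubernetes-key.pem \
  endpoint status --write-out=table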

Highly Available API Server Deployment

The API Server is the entry point to the cluster. Run an instance on each control-plane node (the static Pod below) and put a load balancer in front of them to make the endpoint highly available.

API Server configuration example (static Pod):

apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    image: k8s.gcr.io/kube-apiserver:v1.22.0
    command:
    - kube-apiserver
    - --advertise-address=192.168.1.20
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://192.168.1.10:2379,https://192.168.1.11:2379,https://192.168.1.12:2379
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key

Load balancer configuration (HAProxy example):

# HAProxy configuration file
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode tcp
    option tcplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend k8s-api
    bind *:6443
    default_backend k8s-api-backend

backend k8s-api-backend
    balance roundrobin
    server master1 192.168.1.20:6443 check
    server master2 192.168.1.21:6443 check
    server master3 192.168.1.22:6443 check
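
A quick sanity check of the load-balanced endpoint (this assumes HAProxy runs on a dedicated load-balancer host or VIP, so binding :6443 does not collide with a local kube-apiserver; /healthz is readable anonymously under the default RBAC bindings):

# Should print "ok" as long as at least one backend API Server is healthy
curl -k https://<load-balancer-address>:6443/healthz
# More detailed readiness information
curl -k "https://<load-balancer-address>:6443/readyz?verbose"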

Highly Available Controller Manager and Scheduler

The controller manager and scheduler achieve high availability through leader election: an instance runs on every control-plane node, but only the elected leader actively reconciles while the others stand by (note --leader-elect=true below).

apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - name: kube-controller-manager
    image: k8s.gcr.io/kube-controller-manager:v1.22.0
    command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=0.0.0.0
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-cidr=10.244.0.0/16
    - --cluster-name=kubernetes
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --node-cidr-mask-size=24
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/12
    - --use-service-account-credentials=true
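
With --leader-elect=true each replica competes for a lock; on recent Kubernetes versions this lock is a Lease object in kube-system (names as created on kubeadm-style clusters), so the active leader can be checked with:

kubectl -n kube-system get lease kube-controller-manager kube-scheduler
# holderIdentity shows which control-plane node is currently leading
kubectl -n kube-system get lease kube-controller-manager -o jsonpath='{.spec.holderIdentity}{"\n"}'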

Resource Scheduling Optimization

Resource Requests and Limits

Sensible requests and limits are the foundation of scheduling optimization: requests tell the scheduler how much CPU and memory a Pod needs, and therefore where it can fit, while limits cap actual consumption so that a single workload cannot starve its neighbors.

apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: demo-container
    image: nginx
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"

Node Affinity and Pod Anti-Affinity

Node affinity constrains which nodes a Pod may be scheduled on based on node labels, while Pod anti-affinity keeps related Pods apart, for example spreading replicas across nodes. Both come in "required" (hard) and "preferred" (soft) variants.

Node affinity example:

apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/e2e-az-name
            operator: In
            values:
            - e2e-az1
            - e2e-az2
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key
            operator: In
            values:
            - another-node-label-value
  containers:
  - name: with-node-affinity
    image: k8s.gcr.io/pause:2.0

Pod anti-affinity example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-cache
spec:
  selector:
    matchLabels:
      app: store
  replicas: 3
  template:
    metadata:
      labels:
        app: store
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: redis-server
        image: redis:3.2-alpine

Taints and Tolerations

A taint on a node repels Pods: only Pods that declare a matching toleration can be scheduled onto (or, with the NoExecute effect, keep running on) that node. This makes it possible to dedicate nodes to particular workloads.

# Add a taint to a node
kubectl taint nodes node1 key=value:NoSchedule

# Pod toleration configuration
apiVersion: v1
kind: Pod
metadata:
  name: tolerant-pod
spec:
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"
  containers:
  - name: nginx
    image: nginx
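
A taint is removed by repeating its spec with a trailing "-", and listing the taints on every node is useful when diagnosing unexpectedly Pending Pods:

# Remove the taint added above (note the trailing "-")
kubectl taint nodes node1 key=value:NoSchedule-
# Show taints per node
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints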

Custom Schedulers

For special requirements you can run a custom scheduler alongside the default one. The sketch below lists Pods that have not been assigned a node and binds each one to a node chosen by a custom function; production schedulers are usually built on the scheduler framework rather than written from scratch like this.

// Custom scheduler example (simplified sketch)
package main

import (
    "context"
    "log"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Load the kubeconfig
    config, err := clientcmd.BuildConfigFromFlags("", "path/to/kubeconfig")
    if err != nil {
        log.Fatal(err)
    }

    // Create the clientset
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        log.Fatal(err)
    }

    // List Pods that have not been assigned to a node yet
    // (a real scheduler would also filter on spec.schedulerName)
    pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
        FieldSelector: "spec.nodeName=",
    })
    if err != nil {
        log.Fatal(err)
    }

    for _, pod := range pods.Items {
        // Pick a node for this Pod using the custom placement logic
        nodeName := customScheduler(pod)
        if nodeName != "" {
            // Bind the Pod to the chosen node
            err := clientset.CoreV1().Pods(pod.Namespace).Bind(context.TODO(), &v1.Binding{
                ObjectMeta: metav1.ObjectMeta{
                    Name:      pod.Name,
                    Namespace: pod.Namespace,
                },
                Target: v1.ObjectReference{
                    Kind: "Node",
                    Name: nodeName,
                },
            }, metav1.CreateOptions{})
            if err != nil {
                log.Printf("Failed to bind pod %s: %v", pod.Name, err)
            }
        }
    }
}

func customScheduler(pod v1.Pod) string {
    // Implement the custom scheduling algorithm here.
    // This is only a placeholder; a real implementation would filter and score nodes.
    return "selected-node-name"
}
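
To route a workload to a scheduler like the sketch above, set spec.schedulerName on the Pod; the default scheduler then leaves it alone. The name my-custom-scheduler is a placeholder and must match whatever name your scheduler filters on:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: custom-scheduled-pod
spec:
  schedulerName: my-custom-scheduler   # placeholder; must match your scheduler's name
  containers:
  - name: nginx
    image: nginx:1.20
EOF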

Network Configuration Optimization

CNI Plugin Selection and Configuration

Choosing the right CNI (Container Network Interface) plugin is critical for network performance. Common options include Calico, Flannel, and Cilium; the example below uses Calico.

Calico configuration example:

# Calico DaemonSet configuration (simplified; the official manifest includes additional containers and RBAC objects)
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: calico-node
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  template:
    metadata:
      labels:
        k8s-app: calico-node
    spec:
      hostNetwork: true
      tolerations:
      - effect: NoSchedule
        operator: Exists
      - effect: NoExecute
        operator: Exists
      serviceAccountName: calico-node
      containers:
      - name: calico-node
        image: calico/node:v3.20.0
        env:
        - name: DATASTORE_TYPE
          value: kubernetes
        - name: FELIX_LOGSEVERITYSCREEN
          value: info
        - name: CLUSTER_TYPE
          value: k8s,bgp
        - name: CALICO_DISABLE_FILE_LOGGING
          value: "true"
        - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
          value: ACCEPT
        - name: FELIX_IPV6SUPPORT
          value: "false"
        - name: WAIT_FOR_DATASTORE
          value: "true"
        - name: CALICO_IPV4POOL_CIDR
          value: "10.244.0.0/16"
        - name: CALICO_IPV4POOL_IPIP
          value: Always
        - name: FELIX_IPINIPMTU
          value: "1440"
        - name: FELIX_BPFENABLED
          value: "true"
        volumeMounts:
        - name: lib-modules
          mountPath: /lib/modules
          readOnly: true
        - name: var-run-calico
          mountPath: /var/run/calico
        - name: var-lib-calico
          mountPath: /var/lib/calico
        - name: xtables-lock
          mountPath: /run/xtables.lock
          readOnly: false
        securityContext:
          privileged: true
      volumes:
      - name: lib-modules
        hostPath:
          path: /lib/modules
      - name: var-run-calico
        hostPath:
          path: /var/run/calico
      - name: var-lib-calico
        hostPath:
          path: /var/lib/calico
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
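
After applying the manifest, confirm that Calico is healthy on every node (calicoctl is a separate tool and has to be installed first; the second command is optional):

# All calico-node Pods should be Running and Ready
kubectl get pods -n kube-system -l k8s-app=calico-node -o wide
# With calicoctl installed on a node, check BGP peering status
sudo calicoctl node status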

Network Policy Configuration

NetworkPolicy objects restrict which Pods may communicate with each other, improving security. The policy below applies to Pods labeled role=db in the default namespace and allows only the listed ingress sources and egress destinations.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
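
A common baseline is to deny all traffic in a namespace first and then open only the flows you need with policies like the one above. A minimal default-deny sketch for the default namespace:

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: default
spec:
  podSelector: {}        # matches every Pod in the namespace
  policyTypes:
  - Ingress
  - Egress
EOF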

Service Discovery Optimization

Well-configured Services and Ingresses improve service discovery and load balancing. A headless Service (clusterIP: None) paired with a StatefulSet gives each replica a stable DNS identity instead of a single virtual IP, which stateful applications often require.

# Headless Service configuration
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
spec:
  clusterIP: None
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80

---
# StatefulSet configuration
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx-headless"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.20
        ports:
        - containerPort: 80
          name: web
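
Because the StatefulSet references the headless Service, every replica gets a stable DNS name of the form <pod>.<service>.<namespace>.svc.cluster.local. A quick in-cluster check (assuming the objects above live in the default namespace):

# Resolve a single replica and the headless Service itself (the latter returns all Pod IPs)
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup web-0.nginx-headless.default.svc.cluster.local
kubectl run dns-test2 --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup nginx-headless.default.svc.cluster.local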

Monitoring and Log Management

Prometheus Monitoring

Prometheus is the most widely adopted monitoring solution in the Kubernetes ecosystem; the Prometheus Operator lets you manage it declaratively through custom resources such as Prometheus, ServiceMonitor, and PrometheusRule.

# Prometheus custom resource (managed by the Prometheus Operator)
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
  labels:
    prometheus: prometheus
spec:
  replicas: 2
  serviceAccountName: prometheus
  serviceMonitorSelector:
    matchLabels:
      team: frontend
  resources:
    requests:
      memory: 400Mi
  enableAdminAPI: false
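
The serviceMonitorSelector above only picks up ServiceMonitor objects labeled team: frontend, so scrape targets are declared roughly like this (the Service selector and port name are examples and must match your application's Service):

kubectl apply -f - <<'EOF'
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: frontend-app
  labels:
    team: frontend        # must match serviceMonitorSelector in the Prometheus resource
spec:
  selector:
    matchLabels:
      app: frontend-app   # example label on the target Service
  endpoints:
  - port: metrics         # named Service port that exposes /metrics
    interval: 30s
EOF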

Prometheus alerting rule configuration:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: prometheus-rules
spec:
  groups:
  - name: kubernetes-apps
    rules:
    - alert: KubePodCrashLooping
      expr: rate(kube_pod_container_status_restarts_total[15m]) * 60 * 5 > 0
      for: 15m
      labels:
        severity: warning
      annotations:
        summary: Pod is crash looping (instance {{ $labels.instance }})
        description: "Pod {{ $labels.pod }} is crash looping\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"

Log Collection and Analysis

Use the EFK stack (Elasticsearch, Fluentd, Kibana) or Loki for centralized log collection and analysis.

Fluentd configuration example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: kube-system
data:
  fluent.conf: |
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/fluentd-containers.log.pos
      tag kubernetes.*
      read_from_head true
      <parse>
        @type json
        time_format %Y-%m-%dT%H:%M:%S.%NZ
      </parse>
    </source>

    <filter kubernetes.**>
      @type kubernetes_metadata
    </filter>

    <match kubernetes.**>
      @type elasticsearch
      logstash_format true
      host elasticsearch-logging
      port 9200
      logstash_prefix kubernetes
      <buffer>
        @type file
        path /var/log/fluentd-buffers/kubernetes.system.buffer
        flush_mode interval
        retry_type exponential_backoff
        flush_thread_count 2
        flush_interval 5s
        retry_forever
        retry_max_interval 30
        chunk_limit_size 2M
        queue_limit_length 8
        overflow_action block
      </buffer>
    </match>
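
To confirm that logs are actually flowing, check that Logstash-style indices (prefix kubernetes, per logstash_prefix above) are being created in the Elasticsearch instance the <match> block points at; adjust the namespace to wherever Elasticsearch runs:

kubectl -n kube-system run es-check --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s "http://elasticsearch-logging:9200/_cat/indices/kubernetes-*?v"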

Security Best Practices

RBAC Permission Management

Role-based access control (RBAC) is the core of Kubernetes authorization: Roles (or ClusterRoles) define which verbs are allowed on which resources, and RoleBindings grant those rules to users, groups, or ServiceAccounts.

# Create a ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
  namespace: default

---
# Create a Role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

---
# Bind the Role to the ServiceAccount
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: ServiceAccount
  name: my-service-account
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
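
The binding can be verified without deploying anything by impersonating the ServiceAccount:

# Should print "yes"
kubectl auth can-i list pods \
  --as=system:serviceaccount:default:my-service-account --namespace=default
# Should print "no", since the Role does not grant delete
kubectl auth can-i delete pods \
  --as=system:serviceaccount:default:my-service-account --namespace=default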

Pod and Network Security Policies

Complement NetworkPolicy with pod-level security controls. Note that PodSecurityPolicy was deprecated in Kubernetes 1.21 and removed in 1.25; on newer clusters use the built-in Pod Security Admission (or a policy engine such as OPA Gatekeeper or Kyverno) instead. The policy below is shown for clusters that still support the PSP API.

# PodSecurityPolicy configuration (the policy/v1beta1 API was removed in Kubernetes 1.25)
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
    - ALL
  volumes:
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'downwardAPI'
    - 'persistentVolumeClaim'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    rule: 'MustRunAsNonRoot'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
      - min: 1
        max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
      - min: 1
        max: 65535
  readOnlyRootFilesystem: false
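
On Kubernetes 1.23 and later, the equivalent guardrails are applied per namespace through Pod Security Admission labels; a rough equivalent of the restricted policy above:

# Enforce the "restricted" Pod Security Standard in a namespace
kubectl label namespace default \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/warn=restricted \
  pod-security.kubernetes.io/audit=restricted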

Performance Tuning Recommendations

kubelet Configuration Tuning

# Optimized kubelet configuration file
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
staticPodPath: /etc/kubernetes/manifests
syncFrequency: 1m0s
fileCheckFrequency: 20s
httpCheckFrequency: 20s
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: systemd
hairpinMode: promiscuous-bridge
serializeImagePulls: false
maxPods: 110
podCIDR: 10.244.0.0/24
resolvConf: /run/systemd/resolve/resolv.conf
cpuManagerPolicy: static
kubeReserved:
  cpu: 200m
  memory: 256Mi
systemReserved:
  cpu: 200m
  memory: 256Mi
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
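
To confirm which configuration a running kubelet actually loaded, the API server can proxy the node's /configz endpoint (replace <node-name>; jq is only used for pretty-printing):

kubectl get --raw "/api/v1/nodes/<node-name>/proxy/configz" | jq .kubeletconfig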

API Server Performance Tuning

# Optimized kube-apiserver startup flags
--max-requests-inflight=3000
--max-mutating-requests-inflight=1000
--request-timeout=1m0s
--min-request-timeout=300
--target-ram-mb=0
--kubelet-timeout=10s
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
