Emerging Trends in Kubernetes-Native AI Deployment: Running KubeRay and KServe in Production

 
Introduction

With the rapid development of artificial intelligence, enterprises face a growing need to deploy AI applications. Traditional deployment approaches often suffer from low resource utilization, poor scalability, and complex operations. Kubernetes, the de facto standard for container orchestration, provides a powerful infrastructure foundation for AI workloads. Within the Kubernetes ecosystem, KubeRay and KServe have emerged as two key AI deployment solutions and are becoming the new trend in building cloud-native AI platforms.

This article takes a close look at the architecture, deployment and configuration, performance tuning, and troubleshooting of KubeRay and KServe, offering practical guidance for building a production-grade cloud-native AI platform.

KubeRay: Kubernetes-Native Management for Ray Clusters

KubeRay Architecture

KubeRay is the official Kubernetes-native management tool for the Ray project, designed to simplify deploying and operating Ray clusters on Kubernetes. Its core architecture consists of the following components:

  1. KubeRay Operator: watches the Ray CRDs (Custom Resource Definitions) and automatically creates and manages Ray clusters according to their specs
  2. RayCluster CRD: a custom resource that describes the configuration of a Ray cluster
  3. RayService CRD: manages Ray Serve applications, with support for autoscaling and service discovery
  4. RayJob CRD: manages the lifecycle of Ray jobs (a minimal example is sketched below)
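
The RayCluster and RayService resources are shown in full in the sections that follow. RayJob does not get a dedicated section, so here is a minimal sketch of one; the entrypoint script path and the cluster sizing are placeholder assumptions rather than values taken from an official sample:

apiVersion: ray.io/v1alpha1
kind: RayJob
metadata:
  name: rayjob-sample
spec:
  # Command executed against the cluster once it is ready
  entrypoint: python /home/ray/samples/my_job.py
  # Tear the ephemeral cluster down after the job completes
  shutdownAfterJobFinishes: true
  rayClusterSpec:
    rayVersion: '2.9.0'
    headGroupSpec:
      rayStartParams:
        dashboard-host: '0.0.0.0'
      template:
        spec:
          containers:
          - name: ray-head
            image: rayproject/ray:2.9.0-py310
    workerGroupSpecs:
    - replicas: 1
      minReplicas: 1
      maxReplicas: 2
      groupName: small-group
      rayStartParams: {}
      template:
        spec:
          containers:
          - name: ray-worker
            image: rayproject/ray:2.9.0-py310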

KubeRay Deployment and Configuration

Installing the KubeRay Operator

# Add the KubeRay Helm repository
helm repo add kuberay https://ray-project.github.io/kuberay-helm/

# Update the Helm repositories
helm repo update

# Install the KubeRay operator
helm install kuberay-operator kuberay/kuberay-operator --version 1.0.0
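
After installation it is worth confirming that the operator pod is running and that the Ray CRDs were registered. The label selector below assumes the default Helm chart values:

# The operator deployment should report a Running pod
kubectl get pods -l app.kubernetes.io/name=kuberay-operator

# RayCluster, RayService and RayJob CRDs should be present
kubectl get crds | grep ray.io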

Creating a Ray Cluster

apiVersion: ray.io/v1alpha1
kind: RayCluster
metadata:
  name: raycluster-complete
spec:
  rayVersion: '2.9.0'
  # Ray head node configuration
  headGroupSpec:
    rayStartParams:
      dashboard-host: '0.0.0.0'
      num-cpus: '1'
      node-ip-address: $MY_POD_IP
    template:
      spec:
        containers:
        - name: ray-head
          image: rayproject/ray:2.9.0-py310
          env:
          # Expose the pod IP so that $MY_POD_IP in rayStartParams resolves correctly
          - name: MY_POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
          ports:
          - containerPort: 6379
            name: gcs
          - containerPort: 8265
            name: dashboard
          - containerPort: 10001
            name: client
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh","-c","ray stop"]
          volumeMounts:
            - mountPath: /tmp/ray
              name: ray-logs
          resources:
            limits:
              cpu: "1"
              memory: "2Gi"
            requests:
              cpu: "500m"
              memory: "1Gi"
        volumes:
          - name: ray-logs
            emptyDir: {}
  # Ray worker node configuration
  workerGroupSpecs:
  - replicas: 1
    minReplicas: 0
    maxReplicas: 10
    groupName: small-group
    rayStartParams:
      num-cpus: '1'
    template:
      spec:
        containers:
        - name: ray-worker
          image: rayproject/ray:2.9.0-py310
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh","-c","ray stop"]
          volumeMounts:
            - mountPath: /tmp/ray
              name: ray-logs
          resources:
            limits:
              cpu: "1"
              memory: "1Gi"
            requests:
              cpu: "500m"
              memory: "500Mi"
        volumes:
          - name: ray-logs
            emptyDir: {}
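
Assuming the manifest above is saved as raycluster.yaml, a typical workflow is to apply it, wait for the head and worker pods to become Ready, and then port-forward the dashboard for inspection. The service name follows the <cluster-name>-head-svc convention created by the operator:

# Create the cluster
kubectl apply -f raycluster.yaml

# Watch the head and worker pods come up
kubectl get pods -l ray.io/cluster=raycluster-complete -w

# Access the Ray dashboard at http://localhost:8265
kubectl port-forward svc/raycluster-complete-head-svc 8265:8265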

Configuring a RayService

apiVersion: ray.io/v1alpha1
kind: RayService
metadata:
  name: rayservice-sample
spec:
  serveConfigV2: |
    applications:
    - name: my-app
      import_path: fruit.deployment:fruit_app
      runtime_env:
        working_dir: "https://github.com/ray-project/test_dag/archive/41d09119cbdf8450599f993f51318e9e27c59098.zip"
      deployments:
      - name: MangoStand
        num_replicas: 1
        user_config:
          price: 3
        autoscaling_config:
          min_replicas: 1
          max_replicas: 3
          target_num_ongoing_requests_per_replica: 10
  rayClusterConfig:
    rayVersion: '2.9.0'
    headGroupSpec:
      rayStartParams:
        dashboard-host: '0.0.0.0'
        num-cpus: '1'
        node-ip-address: $MY_POD_IP
      template:
        spec:
          containers:
          - name: ray-head
            image: rayproject/ray:2.9.0-py310
            env:
            # Expose the pod IP so that $MY_POD_IP in rayStartParams resolves correctly
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            ports:
            - containerPort: 6379
              name: gcs
            - containerPort: 8265
              name: dashboard
            - containerPort: 10001
              name: client
            lifecycle:
              preStop:
                exec:
                  command: ["/bin/sh","-c","ray stop"]
            volumeMounts:
              - mountPath: /tmp/ray
                name: ray-logs
            resources:
              limits:
                cpu: "1"
                memory: "2Gi"
              requests:
                cpu: "500m"
                memory: "1Gi"
          volumes:
            - name: ray-logs
              emptyDir: {}
    workerGroupSpecs:
    - replicas: 1
      minReplicas: 0
      maxReplicas: 10
      groupName: small-group
      rayStartParams:
        num-cpus: '1'
      template:
        spec:
          containers:
          - name: ray-worker
            image: rayproject/ray:2.9.0-py310
            lifecycle:
              preStop:
                exec:
                  command: ["/bin/sh","-c","ray stop"]
            volumeMounts:
              - mountPath: /tmp/ray
                name: ray-logs
            resources:
              limits:
                cpu: "1"
                memory: "1Gi"
              requests:
                cpu: "500m"
                memory: "500Mi"
          volumes:
            - name: ray-logs
              emptyDir: {}
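
Once the RayService reports Ready, the Serve application is reachable through the Kubernetes Service that KubeRay creates for it. The sketch below assumes the default serve service name <rayservice-name>-serve-svc, the default Serve HTTP port 8000, and the request payload of the upstream fruit-stand sample; adjust the route and body to your own application:

# Forward the Serve HTTP port to localhost
kubectl port-forward svc/rayservice-sample-serve-svc 8000:8000

# Send a test request to the sample application
curl -X POST http://localhost:8000/ -H 'Content-Type: application/json' -d '["MANGO", 2]'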

KubeRay Performance Tuning

Resource Optimization

apiVersion: ray.io/v1alpha1
kind: RayCluster
metadata:
  name: raycluster-optimized
spec:
  rayVersion: '2.9.0'
  headGroupSpec:
    rayStartParams:
      dashboard-host: '0.0.0.0'
      num-cpus: '2'
      num-gpus: '0'
      object-store-memory: '1073741824'  # 1GB
      redis-max-memory: '536870912'     # 512MB
    template:
      spec:
        containers:
        - name: ray-head
          image: rayproject/ray:2.9.0-py310
          env:
          - name: RAY_DISABLE_DOCKER_CPU_WARNING
            value: "1"
          - name: RAY_OBJECT_STORE_ALLOW_SLOW_STORAGE
            value: "1"
          resources:
            limits:
              cpu: "2"
              memory: "4Gi"
              ephemeral-storage: "10Gi"
            requests:
              cpu: "1"
              memory: "2Gi"
              ephemeral-storage: "5Gi"
  workerGroupSpecs:
  - replicas: 3
    minReplicas: 1
    maxReplicas: 10
    groupName: optimized-workers
    rayStartParams:
      num-cpus: '2'
      num-gpus: '0'
    template:
      spec:
        containers:
        - name: ray-worker
          image: rayproject/ray:2.9.0-py310
          env:
          - name: RAY_DISABLE_DOCKER_CPU_WARNING
            value: "1"
          resources:
            limits:
              cpu: "2"
              memory: "2Gi"
              ephemeral-storage: "5Gi"
            requests:
              cpu: "1"
              memory: "1Gi"
              ephemeral-storage: "2Gi"
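
Whether these settings actually take effect is easy to verify from inside the head pod: ray status prints the CPU, GPU, and object-store resources that Ray itself sees, which should match the rayStartParams above. The pod name below is a placeholder:

# Find the head pod of the optimized cluster
kubectl get pods -l ray.io/cluster=raycluster-optimized,ray.io/node-type=head

# Print the resources reported by Ray and the autoscaler
kubectl exec -it <head-pod-name> -- ray status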

Network Optimization

apiVersion: ray.io/v1alpha1
kind: RayCluster
metadata:
  name: raycluster-network-optimized
spec:
  rayVersion: '2.9.0'
  enableInTreeAutoscaling: true
  headGroupSpec:
    rayStartParams:
      dashboard-host: '0.0.0.0'
      node-manager-port: '8076'
      object-manager-port: '8077'
    template:
      spec:
        # Use host networking to reduce network overhead
        hostNetwork: true
        dnsPolicy: ClusterFirstWithHostNet
        containers:
        - name: ray-head
          image: rayproject/ray:2.9.0-py310
          ports:
          - containerPort: 6379
            name: gcs
            hostPort: 6379
          - containerPort: 8265
            name: dashboard
            hostPort: 8265
          - containerPort: 10001
            name: client
            hostPort: 10001
          - containerPort: 8076
            name: node-manager
            hostPort: 8076
          - containerPort: 8077
            name: object-manager
            hostPort: 8077

KServe: A Kubernetes-Native Machine Learning Serving Platform

KServe Architecture

KServe is a Kubernetes-native model serving platform that provides a unified API and management layer for deploying, scaling, and managing machine learning models. Its core architecture includes:

  1. InferenceService CRD: the core custom resource that defines the configuration of a model service
  2. Predictor: the model server component, supporting multiple frameworks (TensorFlow, PyTorch, XGBoost, and more)
  3. Transformer: pre-processing and post-processing component
  4. Explainer: model explainer that provides interpretability for model decisions
  5. Model Registry: model registry with support for multiple model storage backends

KServe Deployment and Configuration

Installing KServe

# Install the KServe CRDs
kubectl apply -f https://github.com/kserve/kserve/releases/download/v0.12.0/kserve_crds.yaml

# Install the KServe core components
kubectl apply -f https://github.com/kserve/kserve/releases/download/v0.12.0/kserve.yaml
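
Note that the KServe controller requires cert-manager to be installed first, and the serverless deployment mode additionally depends on Knative Serving and a networking layer such as Istio. A quick health check of the control plane after installation might look like this:

# Controller and webhook pods should be Running in the kserve namespace
kubectl get pods -n kserve

# The InferenceService CRD should be registered
kubectl get crd inferenceservices.serving.kserve.io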

Deploying a TensorFlow Model

apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
  name: "tensorflow-model"
spec:
  predictor:
    tensorflow:
      storageUri: "gs://kfserving-examples/models/tensorflow/flowers"
      resources:
        requests:
          cpu: "100m"
          memory: "256Mi"
        limits:
          cpu: "1"
          memory: "1Gi"
      runtimeVersion: "2.12.0"
      protocolVersion: "v2"
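
After the InferenceService is applied, KServe publishes a URL in the resource status once the predictor becomes ready; the commands below show how to check readiness and read that URL:

# READY should turn True once the predictor pods are up
kubectl get inferenceservice tensorflow-model

# The externally routable URL is recorded in the status
kubectl get inferenceservice tensorflow-model -o jsonpath='{.status.url}'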

Deploying a PyTorch Model

apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
  name: "pytorch-model"
spec:
  predictor:
    pytorch:
      storageUri: "gs://kfserving-examples/models/pytorch/mnist"
      resources:
        requests:
          cpu: "100m"
          memory: "256Mi"
        limits:
          cpu: "1"
          memory: "1Gi"
      runtimeVersion: "2.1.0"
      protocolVersion: "v2"

Deploying a Scikit-learn Model

apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
  name: "sklearn-model"
spec:
  predictor:
    sklearn:
      storageUri: "gs://kfserving-examples/models/sklearn/iris"
      resources:
        requests:
          cpu: "100m"
          memory: "256Mi"
        limits:
          cpu: "1"
          memory: "1Gi"
      runtimeVersion: "1.3.0"
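
A deployed scikit-learn model can be smoke-tested with a plain HTTP request. The sketch below assumes an Istio ingress gateway exposed at $INGRESS_HOST:$INGRESS_PORT and uses the four-feature iris payload from the upstream example; with the default v1 protocol, predictions are served at /v1/models/<name>:predict:

# Resolve the model's virtual host from the InferenceService status
SERVICE_HOSTNAME=$(kubectl get inferenceservice sklearn-model -o jsonpath='{.status.url}' | cut -d "/" -f 3)

# Send two iris samples for prediction
curl -H "Host: ${SERVICE_HOSTNAME}" -H "Content-Type: application/json" \
  "http://${INGRESS_HOST}:${INGRESS_PORT}/v1/models/sklearn-model:predict" \
  -d '{"instances": [[6.8, 2.8, 4.8, 1.4], [6.0, 3.4, 4.5, 1.6]]}'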

Advanced Configuration Examples

Deployment with a Transformer

apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
  name: "transformer-example"
spec:
  transformer:
    containers:
    - image: kserve/image-transformer:v0.12.0
      name: user-container
      env:
      - name: STORAGE_URI
        value: gs://kfserving-examples/models/tensorflow/flowers
      resources:
        requests:
          cpu: "100m"
          memory: "256Mi"
        limits:
          cpu: "1"
          memory: "1Gi"
  predictor:
    tensorflow:
      storageUri: "gs://kfserving-examples/models/tensorflow/flowers"
      resources:
        requests:
          cpu: "100m"
          memory: "256Mi"
        limits:
          cpu: "1"
          memory: "1Gi"

Custom Model Server

apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
  name: "custom-model-server"
spec:
  predictor:
    containers:
    - image: custom-model-server:latest
      name: kserve-container
      env:
      - name: MODEL_STORAGE_URI
        value: gs://my-bucket/models/custom-model
      ports:
      - containerPort: 8080
        protocol: TCP
      resources:
        requests:
          cpu: "200m"
          memory: "512Mi"
        limits:
          cpu: "1"
          memory: "2Gi"

Performance Tuning Best Practices

Resource Quota Management

Setting Resource Quotas for KServe

apiVersion: v1
kind: ResourceQuota
metadata:
  name: kserve-quota
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    requests.storage: 100Gi
    persistentvolumeclaims: "10"
---
apiVersion: v1
kind: LimitRange
metadata:
  name: kserve-limit-range
spec:
  limits:
  - default:
      cpu: "1"
      memory: 2Gi
    defaultRequest:
      cpu: "500m"
      memory: 1Gi
    type: Container

Setting Resource Quotas for KubeRay

apiVersion: v1
kind: ResourceQuota
metadata:
  name: kuberay-quota
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    pods: "50"

Autoscaling Configuration

KServe Horizontal Autoscaling

apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
  name: "autoscaled-model"
spec:
  predictor:
    minReplicas: 1
    maxReplicas: 10
    scaleTarget: 50  # target CPU utilization
    scaleMetric: cpu
    tensorflow:
      storageUri: "gs://kfserving-examples/models/tensorflow/flowers"
      resources:
        requests:
          cpu: "100m"
          memory: "256Mi"
        limits:
          cpu: "1"
          memory: "1Gi"
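
Whether the scaling policy behaves as intended is easiest to observe under synthetic load. The sketch below assumes the hey load generator is installed and that SERVICE_HOSTNAME, INGRESS_HOST, and INGRESS_PORT are resolved as in the earlier request example; watching the pods in a second terminal shows replicas being added and removed:

# Generate 30 seconds of load with 50 concurrent clients
hey -z 30s -c 50 -m POST -host "${SERVICE_HOSTNAME}" \
  -H "Content-Type: application/json" \
  -d '{"instances": [[6.8, 2.8, 4.8, 1.4]]}' \
  "http://${INGRESS_HOST}:${INGRESS_PORT}/v1/models/autoscaled-model:predict"

# In another terminal, watch the predictor replicas scale
kubectl get pods -l serving.kserve.io/inferenceservice=autoscaled-model -w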

KubeRay Autoscaling

apiVersion: ray.io/v1alpha1
kind: RayCluster
metadata:
  name: autoscaled-raycluster
spec:
  enableInTreeAutoscaling: true
  autoscalerOptions:
    idleTimeoutSeconds: 60
    upscalingMode: Default
  workerGroupSpecs:
  - replicas: 1
    minReplicas: 0
    maxReplicas: 20
    groupName: autoscaled-workers
    rayStartParams:
      num-cpus: '2'
    template:
      spec:
        containers:
        - name: ray-worker
          image: rayproject/ray:2.9.0-py310
          resources:
            limits:
              cpu: "2"
              memory: "2Gi"
            requests:
              cpu: "1"
              memory: "1Gi"
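
With enableInTreeAutoscaling set, the autoscaler runs as a sidecar container (typically named autoscaler) in the head pod and only adds workers when pending tasks or actors request resources that the existing workers cannot satisfy. Its decisions can be observed directly; the pod name below is a placeholder:

# Pending resource demands and planned node additions
kubectl exec -it <head-pod-name> -- ray status

# Autoscaler sidecar logs in the head pod
kubectl logs <head-pod-name> -c autoscaler

# Watch worker pods being created and removed
kubectl get pods -l ray.io/cluster=autoscaled-raycluster -w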

Network Optimization

Enabling the Service Mesh

apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
  name: "mesh-enabled-model"
  annotations:
    sidecar.istio.io/inject: "true"
spec:
  predictor:
    tensorflow:
      storageUri: "gs://kfserving-examples/models/tensorflow/flowers"
      resources:
        requests:
          cpu: "100m"
          memory: "256Mi"
        limits:
          cpu: "1"
          memory: "1Gi"

Monitoring and Logging

Prometheus Monitoring Configuration

KServe Monitoring

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kserve-monitor
  labels:
    app: kserve
spec:
  selector:
    matchLabels:
      app: kserve
  endpoints:
  - port: metrics
    interval: 30s

KubeRay Monitoring

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kuberay-monitor
  labels:
    app: kuberay
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: kuberay
  endpoints:
  - port: metrics
    interval: 30s

Log Collection Configuration

Fluentd Configuration

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
data:
  fluent.conf: |
    <source>
      @type tail
      path /var/log/containers/*_kserve_*.log
      pos_file /var/log/fluentd-kserve.pos
      tag kserve.*
      format json
      time_key time
      time_format %Y-%m-%dT%H:%M:%S.%NZ
    </source>
    
    <source>
      @type tail
      path /var/log/containers/*_kuberay_*.log
      pos_file /var/log/fluentd-kuberay.pos
      tag kuberay.*
      format json
      time_key time
      time_format %Y-%m-%dT%H:%M:%S.%NZ
    </source>
    
    <match kserve.**>
      @type elasticsearch
      host elasticsearch
      port 9200
      logstash_format true
      logstash_prefix kserve
    </match>
    
    <match kuberay.**>
      @type elasticsearch
      host elasticsearch
      port 9200
      logstash_format true
      logstash_prefix kuberay
    </match>

Troubleshooting and Diagnostics

Diagnosing Common Issues

Checking InferenceService Status

# Check InferenceService status
kubectl get inferenceservice

# Show detailed status information
kubectl describe inferenceservice tensorflow-model

# Check pod status
kubectl get pods -l serving.kserve.io/inferenceservice=tensorflow-model

# View logs
kubectl logs -l serving.kserve.io/inferenceservice=tensorflow-model -c kserve-container

Checking RayCluster Status

# Check RayCluster status
kubectl get raycluster

# Show detailed status information
kubectl describe raycluster raycluster-complete

# Check pod status
kubectl get pods -l ray.io/cluster=raycluster-complete

# View head node logs
kubectl logs -l ray.io/node-type=head -l ray.io/cluster=raycluster-complete

Diagnosing Performance Issues

Monitoring Resource Usage

# Check pod resource usage
kubectl top pods -l serving.kserve.io/inferenceservice=tensorflow-model

# Check node resource usage
kubectl top nodes

# Inspect resource requests and limits with kubectl describe
kubectl describe pod <pod-name>

Checking Network Connectivity

# Check service connectivity
kubectl exec -it <pod-name> -- curl -v http://<service-name>:8080

# Check whether the port is reachable
kubectl port-forward service/<service-name> 8080:8080

# Check DNS resolution
kubectl exec -it <pod-name> -- nslookup <service-name>

Security Best Practices

Access Control Configuration

Network Policy Configuration

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: kserve-ingress
spec:
  podSelector:
    matchLabels:
      serving.kserve.io/inferenceservice: tensorflow-model
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: kserve-system
    ports:
    - protocol: TCP
      port: 8080

RBAC Configuration

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: kserve-models
  name: model-deployer
rules:
- apiGroups: ["serving.kserve.io"]
  resources: ["inferenceservices"]
  verbs: ["get", "list", "create", "update", "delete"]
- apiGroups: [""]
  resources: ["pods", "services", "configmaps"]
  verbs: ["get", "list", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: model-deployer-binding
  namespace: kserve-models
subjects:
- kind: User
  name: model-deployer
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: model-deployer
  apiGroup: rbac.authorization.k8s.io

Secrets Management

Using Secrets to Store Sensitive Information

apiVersion: v1
kind: Secret
metadata:
  name: model-storage-credentials
type: Opaque
data:
  aws-access-key-id: <base64-encoded-access-key>
  aws-secret-access-key: <base64-encoded-secret-key>
---
apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
  name: "secure-model"
spec:
  predictor:
    tensorflow:
      storageUri: "s3://my-model-bucket/models/tensorflow/flowers"
      env:
      - name: AWS_ACCESS_KEY_ID
        valueFrom:
          secretKeyRef:
            name: model-storage-credentials
            key: aws-access-key-id
      - name: AWS_SECRET_ACCESS_KEY
        valueFrom:
          secretKeyRef:
            name: model-storage-credentials
            key: aws-secret-access-key
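
Besides injecting credentials as environment variables, KServe can also pick them up from a ServiceAccount that references an annotated Secret, which keeps the InferenceService spec itself free of credential wiring. A sketch of that pattern, with illustrative annotation values:

apiVersion: v1
kind: Secret
metadata:
  name: s3-credentials
  annotations:
    serving.kserve.io/s3-endpoint: s3.amazonaws.com
    serving.kserve.io/s3-usehttps: "1"
    serving.kserve.io/s3-region: us-east-1
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: <access-key>
  AWS_SECRET_ACCESS_KEY: <secret-key>
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: model-sa
secrets:
- name: s3-credentials
---
apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
  name: "secure-model-sa"
spec:
  predictor:
    serviceAccountName: model-sa
    tensorflow:
      storageUri: "s3://my-model-bucket/models/tensorflow/flowers"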

Production Deployment Recommendations

High Availability Configuration

Multi-Replica Deployment

apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
  name: "ha-model"
spec:
  predictor:
    minReplicas: 3
    maxReplicas: 10
    tensorflow:
      storageUri: "gs://kfserving-examples/models/tensorflow/flowers"
      resources:
        requests:
          cpu: "200m"
          memory: "512Mi"
        limits:
          cpu: "1"
          memory: "2Gi"

Cross-Region Deployment

apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
  name: "multi-region-model"
spec:
  predictor:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: topology.kubernetes.io/region
              operator: In
              values:
              - us-west-1
              - us-east-1
    tensorflow:
      storageUri: "gs://kfserving-examples/models/tensorflow/flowers"

Backup and Recovery Strategy

Scheduled Backup Configuration

apiVersion: batch/v1
kind: CronJob
metadata:
  name: kserve-backup
spec:
  schedule: "0 2 * * *"  # run every day at 2:00 AM
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: backup
            image: kserve/backup-tool:latest
            command:
            - /bin/sh
            - -c
            - |
              kubectl get inferenceservice -o yaml > /backup/inferenceservices-$(date +%Y%m%d).yaml
              gsutil cp /backup/inferenceservices-*.yaml gs://backup-bucket/kserve/
            volumeMounts:
            - name: backup-volume
              mountPath: /backup
          volumes:
          - name: backup-volume
            emptyDir: {}
          restartPolicy: OnFailure
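
Restoring from such a backup is simply the reverse of the export: fetch the desired snapshot from the bucket and re-apply it (the date and bucket path below are placeholders):

# Fetch a specific day's snapshot
gsutil cp gs://backup-bucket/kserve/inferenceservices-20240101.yaml .

# Recreate the InferenceServices in the cluster
kubectl apply -f inferenceservices-20240101.yaml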

Summary

As two major AI deployment solutions in the Kubernetes ecosystem, KubeRay and KServe provide strong foundations for building cloud-native AI platforms. From the walkthrough in this article we can conclude:

  1. KubeRay focuses on Kubernetes-native management of Ray clusters and suits AI applications that need distributed computing
  2. KServe focuses on serving machine learning models and provides a unified model-serving interface
  3. Both offer rich configuration options, including resource management, autoscaling, and network optimization
  4. In production, performance tuning, monitoring and alerting, and security configuration deserve particular attention

As AI technology continues to evolve, Kubernetes-native AI deployment is becoming a key piece of infrastructure for enterprise digital transformation. With KubeRay and KServe selected and configured appropriately, organizations can build an efficient, stable, and scalable cloud-native AI platform.
