I'm trying to use the Kubernetes HPA and cluster-autoscaler together in production, but I'd like to observe when scaling is triggered so that I can notify operators and developers. Can I hook a custom webhook into any part of the scaling lifecycle, or which events should I watch, given that cluster events are already being collected into a separate Elasticsearch (ES) instance?
fanguonan2009 answered: Is there any webhook or event after Kubernetes HPA autoscaling is triggered?
I would use Container Lifecycle Hooks, more specifically PostStart and PreStop.
In the Kubernetes documentation we can read the following:
There are two hooks that are exposed to Containers:
PostStart
This hook is executed immediately after a container is created. However, there is no guarantee that the hook will execute before the container ENTRYPOINT. No parameters are passed to the handler.
PreStop
This hook is called immediately before a container is terminated due to an API request or management event such as a liveness probe failure, preemption, resource contention and others. A call to the preStop hook fails if the container is already in a terminated or completed state. It is blocking, meaning it is synchronous, so it must complete before the call to delete the container can be sent. No parameters are passed to the handler.
A more detailed description of the termination behavior can be found in Termination of Pods.
You can use them to execute a specific command, such as run.sh, or to issue an HTTP request against a specific endpoint on the Container. An example Pod might look like this:
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
      preStop:
        exec:
          command: ["/bin/sh", "-c", "nginx -s quit; while killall -0 nginx; do sleep 1; done"]
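Instead of exec, a lifecycle hook can also issue an HTTP request against the container, which fits the webhook-style notification asked about. A minimal sketch (the /notify-stop path and port 8080 are hypothetical; your application would need to expose such an endpoint):

```
    lifecycle:
      preStop:
        httpGet:
          path: /notify-stop   # hypothetical endpoint your app exposes
          port: 8080
```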
For cluster scaling, you can watch the events from the apiserver or from the command line. If you run kubectl get event while a node is being scaled up, you might see something like the following:
$ kubectl get event --watch
LAST SEEN TYPE REASON KIND MESSAGE
5m26s Normal RegisteredNode Node Node gke-standard-cluster-1-default-pool-a663f7f4-8dxk event: Registered Node gke-standard-cluster-1-default-pool-a663f7f4-8dxk in Controller
5m26s Normal RegisteredNode Node Node gke-standard-cluster-1-default-pool-a663f7f4-pk6v event: Registered Node gke-standard-cluster-1-default-pool-a663f7f4-pk6v in Controller
5m26s Normal RegisteredNode Node Node gke-standard-cluster-1-default-pool-a663f7f4-wt5m event: Registered Node gke-standard-cluster-1-default-pool-a663f7f4-wt5m in Controller
13m Normal Starting Node Starting kubelet.
13m Normal NodeHasSufficientMemory Node Node gke-standard-cluster-1-pool-1-557d58bb-73x0 status is now: NodeHasSufficientMemory
13m Normal NodeHasNoDiskPressure Node Node gke-standard-cluster-1-pool-1-557d58bb-73x0 status is now: NodeHasNoDiskPressure
13m Normal NodeHasSufficientPID Node Node gke-standard-cluster-1-pool-1-557d58bb-73x0 status is now: NodeHasSufficientPID
13m Normal NodeAllocatableEnforced Node Updated Node Allocatable limit across pods
13m Normal NodeReady Node Node gke-standard-cluster-1-pool-1-557d58bb-73x0 status is now: NodeReady
13m Normal Starting Node Starting kube-proxy.
13m Normal RegisteredNode Node Node gke-standard-cluster-1-pool-1-557d58bb-73x0 event: Registered Node gke-standard-cluster-1-pool-1-557d58bb-73x0 in Controller
5m26s Normal RegisteredNode Node Node gke-standard-cluster-1-pool-1-557d58bb-73x0 event: Registered Node gke-standard-cluster-1-pool-1-557d58bb-73x0 in Controller
13m Normal Starting Node Starting kubelet.
13m Normal NodeHasSufficientMemory Node Node gke-standard-cluster-1-pool-1-557d58bb-fbpz status is now: NodeHasSufficientMemory
13m Normal NodeHasNoDiskPressure Node Node gke-standard-cluster-1-pool-1-557d58bb-fbpz status is now: NodeHasNoDiskPressure
13m Normal NodeHasSufficientPID Node Node gke-standard-cluster-1-pool-1-557d58bb-fbpz status is now: NodeHasSufficientPID
13m Normal NodeAllocatableEnforced Node Updated Node Allocatable limit across pods
13m Normal NodeReady Node Node gke-standard-cluster-1-pool-1-557d58bb-fbpz status is now: NodeReady
13m Normal RegisteredNode Node Node gke-standard-cluster-1-pool-1-557d58bb-fbpz event: Registered Node gke-standard-cluster-1-pool-1-557d58bb-fbpz in Controller
13m Normal Starting Node Starting kube-proxy.
5m26s Normal RegisteredNode Node Node gke-standard-cluster-1-pool-1-557d58bb-fbpz event: Registered Node gke-standard-cluster-1-pool-1-557d58bb-fbpz in Controller
13m Normal Starting Node Starting kubelet.
13m Normal NodeHasSufficientMemory Node Node gke-standard-cluster-1-pool-1-557d58bb-v5v5 status is now: NodeHasSufficientMemory
13m Normal NodeHasNoDiskPressure Node Node gke-standard-cluster-1-pool-1-557d58bb-v5v5 status is now: NodeHasNoDiskPressure
13m Normal NodeHasSufficientPID Node Node gke-standard-cluster-1-pool-1-557d58bb-v5v5 status is now: NodeHasSufficientPID
13m Normal NodeAllocatableEnforced Node Updated Node Allocatable limit across pods
13m Normal NodeReady Node Node gke-standard-cluster-1-pool-1-557d58bb-v5v5 status is now: NodeReady
13m Normal Starting Node Starting kube-proxy.
13m Normal RegisteredNode Node Node gke-standard-cluster-1-pool-1-557d58bb-v5v5 event: Registered Node gke-standard-cluster-1-pool-1-557d58bb-v5v5 in Controller
5m26s Normal RegisteredNode Node Node gke-standard-cluster-1-pool-1-557d58bb-v5v5 event: Registered Node gke-standard-cluster-1-pool-1-557d58bb-v5v5 in Controller
Kubernetes generates events when autoscaling is triggered. Check the output of the kubectl get events command. You can watch these events and notify the relevant folks.
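As a minimal sketch of that idea, you could filter the watched event stream for autoscaling-related reasons and hand matches to a notification command. The reason names below (SuccessfulRescale from the HPA controller; TriggeredScaleUp and ScaledUpGroup from the cluster-autoscaler; RegisteredNode for new nodes) are what these components typically emit, but verify them against your cluster's output:

```shell
# filter_scaling_events: reads event lines on stdin and prints only the
# autoscaling-related ones, so they can be piped to a notification hook.
filter_scaling_events() {
  grep --line-buffered -E 'SuccessfulRescale|TriggeredScaleUp|ScaledUpGroup|RegisteredNode'
}

# Usage against a live cluster (replace ./notify.sh with your own script
# or webhook call -- it is a hypothetical placeholder):
#   kubectl get events --all-namespaces --watch --no-headers \
#     | filter_scaling_events \
#     | while read -r line; do ./notify.sh "$line"; done
```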