Error deployment exceeded its progress deadline


Contents

  1. Deployments
  2. Use Case
  3. Creating a Deployment
  4. Pod-template-hash label
  5. Updating a Deployment
  6. Rollover (aka multiple updates in-flight)
  7. Label selector updates
  8. Rolling Back a Deployment
  9. Checking Rollout History of a Deployment
  10. Rolling Back to a Previous Revision
  11. Scaling a Deployment
  12. Proportional scaling
  13. Pausing and Resuming a rollout of a Deployment
  14. Deployment status
  15. Progressing Deployment
  16. Complete Deployment
  17. Failed Deployment
  18. Operating on a failed deployment
  19. Clean up Policy
  20. Canary Deployment
  21. Writing a Deployment Spec
  22. Pod Template
  23. Replicas
  24. Selector
  25. Strategy
  26. Recreate Deployment
  27. Rolling Update Deployment
  28. Progress Deadline Seconds
  29. Min Ready Seconds
  30. Revision History Limit
  31. Paused
  32. What’s next


Deployments

A Deployment provides declarative updates for Pods and ReplicaSets.

You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.

Use Case

The following are typical use cases for Deployments:

  • Create a Deployment to rollout a ReplicaSet. The ReplicaSet creates Pods in the background. Check the status of the rollout to see if it succeeds or not.
  • Declare the new state of the Pods by updating the PodTemplateSpec of the Deployment. A new ReplicaSet is created and the Deployment manages moving the Pods from the old ReplicaSet to the new one at a controlled rate. Each new ReplicaSet updates the revision of the Deployment.
  • Rollback to an earlier Deployment revision if the current state of the Deployment is not stable. Each rollback updates the revision of the Deployment.
  • Scale up the Deployment to facilitate more load.
  • Pause the rollout of a Deployment to apply multiple fixes to its PodTemplateSpec and then resume it to start a new rollout.
  • Use the status of the Deployment as an indicator that a rollout has stuck.
  • Clean up older ReplicaSets that you don’t need anymore.

Creating a Deployment

The following is an example of a Deployment. It creates a ReplicaSet to bring up three nginx Pods:
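
For reference, a minimal manifest consistent with the fields described below looks like this (the containerPort value of 80 is an assumption, since the port is not mentioned in the text):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80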

In this example:

A Deployment named nginx-deployment is created, indicated by the .metadata.name field. This name will become the basis for the ReplicaSets and Pods which are created later. See Writing a Deployment Spec for more details.

The Deployment creates a ReplicaSet that creates three replicated Pods, indicated by the .spec.replicas field.

The .spec.selector field defines how the created ReplicaSet finds which Pods to manage. In this case, you select a label that is defined in the Pod template ( app: nginx ). However, more sophisticated selection rules are possible, as long as the Pod template itself satisfies the rule.

The template field contains the following sub-fields:

  • The Pods are labeled app: nginx using the .metadata.labels field.
  • The Pod template’s specification, or .template.spec field, indicates that the Pods run one container, nginx , which runs the nginx Docker Hub image at version 1.14.2.
  • Create one container and name it nginx using the .spec.template.spec.containers[0].name field.

Before you begin, make sure your Kubernetes cluster is up and running. Follow the steps given below to create the above Deployment:

Create the Deployment by running the following command:
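
For example, assuming the manifest above has been saved locally as nginx-deployment.yaml (the file name is an assumption here):

kubectl apply -f nginx-deployment.yaml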

Run kubectl get deployments to check if the Deployment was created.

If the Deployment is still being created, the output is similar to the following:

When you inspect the Deployments in your cluster, the following fields are displayed:

  • NAME lists the names of the Deployments in the namespace.
  • READY displays how many replicas of the application are available to your users. It follows the pattern ready/desired.
  • UP-TO-DATE displays the number of replicas that have been updated to achieve the desired state.
  • AVAILABLE displays how many replicas of the application are available to your users.
  • AGE displays the amount of time that the application has been running.

Notice how the number of desired replicas is 3 according to .spec.replicas field.

To see the Deployment rollout status, run kubectl rollout status deployment/nginx-deployment .

The output is similar to:

Run the kubectl get deployments again a few seconds later. The output is similar to this:

Notice that the Deployment has created all three replicas, and all replicas are up-to-date (they contain the latest Pod template) and available.

To see the ReplicaSet ( rs ) created by the Deployment, run kubectl get rs . The output is similar to this:

ReplicaSet output shows the following fields:

  • NAME lists the names of the ReplicaSets in the namespace.
  • DESIRED displays the desired number of replicas of the application, which you define when you create the Deployment. This is the desired state.
  • CURRENT displays how many replicas are currently running.
  • READY displays how many replicas of the application are available to your users.
  • AGE displays the amount of time that the application has been running.

Notice that the name of the ReplicaSet is always formatted as [DEPLOYMENT-NAME]-[HASH] . This name will become the basis for the Pods which are created.

The HASH string is the same as the pod-template-hash label on the ReplicaSet.

To see the labels automatically generated for each Pod, run kubectl get pods --show-labels. The output is similar to:

The created ReplicaSet ensures that there are three nginx Pods.

You must specify an appropriate selector and Pod template labels in a Deployment (in this case, app: nginx ).

Do not overlap labels or selectors with other controllers (including other Deployments and StatefulSets). Kubernetes doesn’t stop you from overlapping, and if multiple controllers have overlapping selectors those controllers might conflict and behave unexpectedly.

Pod-template-hash label

The pod-template-hash label is added by the Deployment controller to every ReplicaSet that a Deployment creates or adopts.

This label ensures that child ReplicaSets of a Deployment do not overlap. It is generated by hashing the PodTemplate of the ReplicaSet and using the resulting hash as the label value that is added to the ReplicaSet selector, Pod template labels, and in any existing Pods that the ReplicaSet might have.

Updating a Deployment

Follow the steps given below to update your Deployment:

Let’s update the nginx Pods to use the nginx:1.16.1 image instead of the nginx:1.14.2 image.
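
One way to do this, matching the example Deployment and container name above, is:

kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1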

or use the following command:
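
For instance, the shorter resource reference works as well:

kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1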

The output is similar to:

Alternatively, you can edit the Deployment and change .spec.template.spec.containers[0].image from nginx:1.14.2 to nginx:1.16.1 :
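
For example, opening the Deployment in your default editor:

kubectl edit deployment/nginx-deployment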

The output is similar to:

To see the rollout status, run:
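
For instance, for the example Deployment above:

kubectl rollout status deployment/nginx-deployment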

The output is similar to this:

Get more details on your updated Deployment:

After the rollout succeeds, you can view the Deployment by running kubectl get deployments . The output is similar to this:

Run kubectl get rs to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.

The output is similar to this:

Running get pods should now show only the new Pods:

The output is similar to this:

Next time you want to update these Pods, you only need to update the Deployment’s Pod template again.

Deployment ensures that only a certain number of Pods are down while they are being updated. By default, it ensures that at least 75% of the desired number of Pods are up (25% max unavailable).

Deployment also ensures that only a certain number of Pods are created above the desired number of Pods. By default, it ensures that at most 125% of the desired number of Pods are up (25% max surge).

For example, if you look at the above Deployment closely, you will see that it first creates a new Pod, then deletes an old Pod, and creates another new one. It does not kill old Pods until a sufficient number of new Pods have come up, and does not create new Pods until a sufficient number of old Pods have been killed. It makes sure that at least 3 Pods are available and that at max 4 Pods in total are available. In case of a Deployment with 4 replicas, the number of Pods would be between 3 and 5.

Get details of your Deployment:
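
One way to do this, assuming the nginx-deployment example above:

kubectl describe deployment nginx-deployment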

The output is similar to this:

Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211) and scaled it up to 3 replicas directly. When you updated the Deployment, it created a new ReplicaSet (nginx-deployment-1564180365) and scaled it up to 1 and waited for it to come up. Then it scaled down the old ReplicaSet to 2 and scaled up the new ReplicaSet to 2 so that at least 3 Pods were available and at most 4 Pods were created at all times. It then continued scaling up and down the new and the old ReplicaSet, with the same rolling update strategy. Finally, you’ll have 3 available replicas in the new ReplicaSet, and the old ReplicaSet is scaled down to 0.

Rollover (aka multiple updates in-flight)

Each time a new Deployment is observed by the Deployment controller, a ReplicaSet is created to bring up the desired Pods. If the Deployment is updated, the existing ReplicaSets that control Pods whose labels match .spec.selector but whose template does not match .spec.template are scaled down. Eventually, the new ReplicaSet is scaled to .spec.replicas and all old ReplicaSets are scaled to 0.

If you update a Deployment while an existing rollout is in progress, the Deployment creates a new ReplicaSet as per the update and starts scaling it up, and rolls over the ReplicaSet that it was scaling up previously: that ReplicaSet is added to the list of old ReplicaSets and starts being scaled down.

For example, suppose you create a Deployment to create 5 replicas of nginx:1.14.2 , but then update the Deployment to create 5 replicas of nginx:1.16.1 , when only 3 replicas of nginx:1.14.2 had been created. In that case, the Deployment immediately starts killing the 3 nginx:1.14.2 Pods that it had created, and starts creating nginx:1.16.1 Pods. It does not wait for the 5 replicas of nginx:1.14.2 to be created before changing course.

Label selector updates

It is generally discouraged to make label selector updates and it is suggested to plan your selectors up front. In any case, if you need to perform a label selector update, exercise great caution and make sure you have grasped all of the implications.

  • Selector additions require the Pod template labels in the Deployment spec to be updated with the new label too, otherwise a validation error is returned. This change is a non-overlapping one, meaning that the new selector does not select ReplicaSets and Pods created with the old selector, resulting in orphaning all old ReplicaSets and creating a new ReplicaSet.
  • Selector updates, that is, changing the existing value in a selector key, result in the same behavior as additions.
  • Selector removals, that is, removing an existing key from the Deployment selector, do not require any changes in the Pod template labels. Existing ReplicaSets are not orphaned, and a new ReplicaSet is not created, but note that the removed label still exists in any existing Pods and ReplicaSets.

Rolling Back a Deployment

Sometimes, you may want to rollback a Deployment; for example, when the Deployment is not stable, such as crash looping. By default, all of the Deployment’s rollout history is kept in the system so that you can rollback anytime you want (you can change that by modifying revision history limit).

Suppose that you made a typo while updating the Deployment, by putting the image name as nginx:1.161 instead of nginx:1.16.1 :
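
For example, the typo could be introduced with a command like this (note the misspelled tag):

kubectl set image deployment/nginx-deployment nginx=nginx:1.161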

The output is similar to this:

The rollout gets stuck. You can verify it by checking the rollout status:

The output is similar to this:

Press Ctrl-C to stop the above rollout status watch. For more information on stuck rollouts, read more here.

You see that the number of old replicas ( nginx-deployment-1564180365 and nginx-deployment-2035384211 ) is 2, and new replicas (nginx-deployment-3066724191) is 1.

The output is similar to this:

Looking at the Pods created, you see that 1 Pod created by new ReplicaSet is stuck in an image pull loop.

The output is similar to this:

Get the description of the Deployment:

The output is similar to this:

To fix this, you need to rollback to a previous revision of Deployment that is stable.

Checking Rollout History of a Deployment

Follow the steps given below to check the rollout history:

First, check the revisions of this Deployment:
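
For the example Deployment, that would be something like:

kubectl rollout history deployment/nginx-deployment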

The output is similar to this:

CHANGE-CAUSE is copied from the Deployment annotation kubernetes.io/change-cause to its revisions upon creation. You can specify the CHANGE-CAUSE message by:

  • Annotating the Deployment with kubectl annotate deployment/nginx-deployment kubernetes.io/change-cause="image updated to 1.16.1"
  • Manually editing the manifest of the resource.

To see the details of each revision, run:
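
For example, to inspect revision 2 (the revision number here is only an illustration):

kubectl rollout history deployment/nginx-deployment --revision=2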

The output is similar to this:

Rolling Back to a Previous Revision

Follow the steps given below to rollback the Deployment from the current version to the previous version, which is version 2.

Now you’ve decided to undo the current rollout and rollback to the previous revision:
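
For the example above, that would look like:

kubectl rollout undo deployment/nginx-deployment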

The output is similar to this:

Alternatively, you can rollback to a specific revision by specifying it with --to-revision:
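
For instance, to return to revision 2:

kubectl rollout undo deployment/nginx-deployment --to-revision=2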

The output is similar to this:

For more details about rollout related commands, read kubectl rollout .

The Deployment is now rolled back to a previous stable revision. As you can see, a DeploymentRollback event for rolling back to revision 2 is generated from Deployment controller.

To check that the rollback was successful and the Deployment is running as expected, run:
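
One way is to list the Deployment and check its READY and UP-TO-DATE columns:

kubectl get deployment nginx-deployment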

The output is similar to this:

Get the description of the Deployment:

The output is similar to this:

Scaling a Deployment

You can scale a Deployment by using the following command:
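
For example, scaling the example Deployment to 10 replicas (the count is arbitrary here):

kubectl scale deployment/nginx-deployment --replicas=10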

The output is similar to this:

Assuming horizontal Pod autoscaling is enabled in your cluster, you can set up an autoscaler for your Deployment and choose the minimum and maximum number of Pods you want to run based on the CPU utilization of your existing Pods.
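
A sketch of such an autoscaler, with purely illustrative bounds:

kubectl autoscale deployment/nginx-deployment --min=10 --max=15 --cpu-percent=80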

The output is similar to this:

Proportional scaling

RollingUpdate Deployments support running multiple versions of an application at the same time. When you or an autoscaler scales a RollingUpdate Deployment that is in the middle of a rollout (either in progress or paused), the Deployment controller balances the additional replicas in the existing active ReplicaSets (ReplicaSets with Pods) in order to mitigate risk. This is called proportional scaling.

For example, you are running a Deployment with 10 replicas, maxSurge=3, and maxUnavailable=2.

Ensure that the 10 replicas in your Deployment are running.

The output is similar to this:

You update to a new image which happens to be unresolvable from inside the cluster.
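
For example, by switching to a tag that does not exist (nginx:sometag is just a placeholder):

kubectl set image deployment/nginx-deployment nginx=nginx:sometag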

The output is similar to this:

The image update starts a new rollout with ReplicaSet nginx-deployment-1989198191, but it’s blocked due to the maxUnavailable requirement that you mentioned above. Check out the rollout status:

Then a new scaling request for the Deployment comes along. The autoscaler increments the Deployment replicas to 15. The Deployment controller needs to decide where to add these new 5 replicas. If you weren’t using proportional scaling, all 5 of them would be added in the new ReplicaSet. With proportional scaling, you spread the additional replicas across all ReplicaSets. Bigger proportions go to the ReplicaSets with the most replicas and lower proportions go to ReplicaSets with less replicas. Any leftovers are added to the ReplicaSet with the most replicas. ReplicaSets with zero replicas are not scaled up.

In our example above, 3 replicas are added to the old ReplicaSet and 2 replicas are added to the new ReplicaSet. The rollout process should eventually move all replicas to the new ReplicaSet, assuming the new replicas become healthy. To confirm this, run:

The output is similar to this:

The rollout status confirms how the replicas were added to each ReplicaSet.

The output is similar to this:

Pausing and Resuming a rollout of a Deployment

When you update a Deployment, or plan to, you can pause rollouts for that Deployment before you trigger one or more updates. When you’re ready to apply those changes, you resume rollouts for the Deployment. This approach allows you to apply multiple fixes in between pausing and resuming without triggering unnecessary rollouts.

For example, with a Deployment that was created:

Get the Deployment details:

The output is similar to this:

Get the rollout status:

The output is similar to this:

Pause by running the following command:
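
For the example Deployment above, that would be:

kubectl rollout pause deployment/nginx-deployment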

The output is similar to this:

Then update the image of the Deployment:
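
For instance, reusing the image update from earlier in this page:

kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1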

The output is similar to this:

Notice that no new rollout started:

The output is similar to this:

Get the rollout status to verify that the existing ReplicaSet has not changed:

The output is similar to this:

You can make as many updates as you wish, for example, update the resources that will be used:
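
For example, setting resource limits on the nginx container (the values are only illustrative):

kubectl set resources deployment/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi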

The output is similar to this:

The initial state of the Deployment prior to pausing its rollout will continue its function, but new updates to the Deployment will not have any effect as long as the Deployment rollout is paused.

Eventually, resume the Deployment rollout and observe a new ReplicaSet coming up with all the new updates:
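
For the example Deployment, resuming looks like this:

kubectl rollout resume deployment/nginx-deployment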

The output is similar to this:

Watch the status of the rollout until it’s done.

The output is similar to this:

Get the status of the latest rollout:

The output is similar to this:

Deployment status

A Deployment enters various states during its lifecycle. It can be progressing while rolling out a new ReplicaSet, it can be complete, or it can fail to progress.

Progressing Deployment

Kubernetes marks a Deployment as progressing when one of the following tasks is performed:

  • The Deployment creates a new ReplicaSet.
  • The Deployment is scaling up its newest ReplicaSet.
  • The Deployment is scaling down its older ReplicaSet(s).
  • New Pods become ready or available (ready for at least MinReadySeconds).

When the rollout becomes “progressing”, the Deployment controller adds a condition with the following attributes to the Deployment’s .status.conditions :

  • type: Progressing
  • status: "True"
  • reason: NewReplicaSetCreated | reason: FoundNewReplicaSet | reason: ReplicaSetUpdated

You can monitor the progress for a Deployment by using kubectl rollout status .

Complete Deployment

Kubernetes marks a Deployment as complete when it has the following characteristics:

  • All of the replicas associated with the Deployment have been updated to the latest version you’ve specified, meaning any updates you’ve requested have been completed.
  • All of the replicas associated with the Deployment are available.
  • No old replicas for the Deployment are running.

When the rollout becomes “complete”, the Deployment controller sets a condition with the following attributes to the Deployment’s .status.conditions :

  • type: Progressing
  • status: "True"
  • reason: NewReplicaSetAvailable

This Progressing condition will retain a status value of "True" until a new rollout is initiated. The condition holds even when availability of replicas changes (which does instead affect the Available condition).

You can check if a Deployment has completed by using kubectl rollout status . If the rollout completed successfully, kubectl rollout status returns a zero exit code.

The output is similar to this:

and the exit status from kubectl rollout is 0 (success):

Failed Deployment

Your Deployment may get stuck trying to deploy its newest ReplicaSet without ever completing. This can occur due to some of the following factors:

  • Insufficient quota
  • Readiness probe failures
  • Image pull errors
  • Insufficient permissions
  • Limit ranges
  • Application runtime misconfiguration

One way you can detect this condition is to specify a deadline parameter in your Deployment spec: ( .spec.progressDeadlineSeconds ). .spec.progressDeadlineSeconds denotes the number of seconds the Deployment controller waits before indicating (in the Deployment status) that the Deployment progress has stalled.

The following kubectl command sets the spec with progressDeadlineSeconds to make the controller report lack of progress of a rollout for a Deployment after 10 minutes:
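
For reference, the patch is along these lines (the same command also appears verbatim later on this page):

kubectl patch deployment/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}'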

The output is similar to this:

Once the deadline has been exceeded, the Deployment controller adds a DeploymentCondition with the following attributes to the Deployment’s .status.conditions :

  • type: Progressing
  • status: "False"
  • reason: ProgressDeadlineExceeded

This condition can also fail early, in which case it is set to a status value of "False" due to reasons such as ReplicaSetCreateError. Also, the deadline is not taken into account anymore once the Deployment rollout completes.

See the Kubernetes API conventions for more information on status conditions.

You may experience transient errors with your Deployments, either due to a low timeout that you have set or due to any other kind of error that can be treated as transient. For example, let’s suppose you have insufficient quota. If you describe the Deployment you will notice the following section:

The output is similar to this:

If you run kubectl get deployment nginx-deployment -o yaml , the Deployment status is similar to this:

Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the reason for the Progressing condition:

You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other controllers you may be running, or by increasing quota in your namespace. If you satisfy the quota conditions and the Deployment controller then completes the Deployment rollout, you’ll see the Deployment’s status update with a successful condition (status: "True" and reason: NewReplicaSetAvailable).

type: Available with status: "True" means that your Deployment has minimum availability. Minimum availability is dictated by the parameters specified in the deployment strategy. type: Progressing with status: "True" means that your Deployment is either in the middle of a rollout and it is progressing or that it has successfully completed its progress and the minimum required new replicas are available (see the Reason of the condition for the particulars; in our case reason: NewReplicaSetAvailable means that the Deployment is complete).

You can check if a Deployment has failed to progress by using kubectl rollout status . kubectl rollout status returns a non-zero exit code if the Deployment has exceeded the progression deadline.

The output is similar to this:

and the exit status from kubectl rollout is 1 (indicating an error):

Operating on a failed deployment

All actions that apply to a complete Deployment also apply to a failed Deployment. You can scale it up/down, roll back to a previous revision, or even pause it if you need to apply multiple tweaks in the Deployment Pod template.

Clean up Policy

You can set .spec.revisionHistoryLimit field in a Deployment to specify how many old ReplicaSets for this Deployment you want to retain. The rest will be garbage-collected in the background. By default, it is 10.

Canary Deployment

If you want to roll out releases to a subset of users or servers using the Deployment, you can create multiple Deployments, one for each release, following the canary pattern described in managing resources.

Writing a Deployment Spec

As with all other Kubernetes configs, a Deployment needs .apiVersion , .kind , and .metadata fields. For general information about working with config files, see deploying applications, configuring containers, and using kubectl to manage resources documents.

When the control plane creates new Pods for a Deployment, the .metadata.name of the Deployment is part of the basis for naming those Pods. The name of a Deployment must be a valid DNS subdomain value, but this can produce unexpected results for the Pod hostnames. For best compatibility, the name should follow the more restrictive rules for a DNS label.

A Deployment also needs a .spec section.

Pod Template

The .spec.template and .spec.selector are the only required fields of the .spec .

The .spec.template is a Pod template. It has exactly the same schema as a Pod, except it is nested and does not have an apiVersion or kind .

In addition to required fields for a Pod, a Pod template in a Deployment must specify appropriate labels and an appropriate restart policy. For labels, make sure not to overlap with other controllers. See selector.

Only a .spec.template.spec.restartPolicy equal to Always is allowed, which is the default if not specified.

Replicas

.spec.replicas is an optional field that specifies the number of desired Pods. It defaults to 1.

Should you manually scale a Deployment, for example via kubectl scale deployment deployment --replicas=X, and then update that Deployment based on a manifest (for example, by running kubectl apply -f deployment.yaml), then applying that manifest overwrites the manual scaling that you previously did.

If a HorizontalPodAutoscaler (or any similar API for horizontal scaling) is managing scaling for a Deployment, don’t set .spec.replicas .

Instead, allow the Kubernetes control plane to manage the .spec.replicas field automatically.

Selector

.spec.selector is a required field that specifies a label selector for the Pods targeted by this Deployment.

.spec.selector must match .spec.template.metadata.labels , or it will be rejected by the API.

In API version apps/v1 , .spec.selector and .metadata.labels do not default to .spec.template.metadata.labels if not set. So they must be set explicitly. Also note that .spec.selector is immutable after creation of the Deployment in apps/v1 .

A Deployment may terminate Pods whose labels match the selector if their template is different from .spec.template or if the total number of such Pods exceeds .spec.replicas . It brings up new Pods with .spec.template if the number of Pods is less than the desired number.

If you have multiple controllers that have overlapping selectors, the controllers will fight with each other and won’t behave correctly.

Strategy

.spec.strategy specifies the strategy used to replace old Pods by new ones. .spec.strategy.type can be "Recreate" or "RollingUpdate". "RollingUpdate" is the default value.

Recreate Deployment

All existing Pods are killed before new ones are created when .spec.strategy.type==Recreate .

Rolling Update Deployment

The Deployment updates Pods in a rolling update fashion when .spec.strategy.type==RollingUpdate . You can specify maxUnavailable and maxSurge to control the rolling update process.

Max Unavailable

.spec.strategy.rollingUpdate.maxUnavailable is an optional field that specifies the maximum number of Pods that can be unavailable during the update process. The value can be an absolute number (for example, 5) or a percentage of desired Pods (for example, 10%). The absolute number is calculated from percentage by rounding down. The value cannot be 0 if .spec.strategy.rollingUpdate.maxSurge is 0. The default value is 25%.

For example, when this value is set to 30%, the old ReplicaSet can be scaled down to 70% of desired Pods immediately when the rolling update starts. Once new Pods are ready, old ReplicaSet can be scaled down further, followed by scaling up the new ReplicaSet, ensuring that the total number of Pods available at all times during the update is at least 70% of the desired Pods.

Max Surge

.spec.strategy.rollingUpdate.maxSurge is an optional field that specifies the maximum number of Pods that can be created over the desired number of Pods. The value can be an absolute number (for example, 5) or a percentage of desired Pods (for example, 10%). The value cannot be 0 if MaxUnavailable is 0. The absolute number is calculated from the percentage by rounding up. The default value is 25%.

For example, when this value is set to 30%, the new ReplicaSet can be scaled up immediately when the rolling update starts, such that the total number of old and new Pods does not exceed 130% of desired Pods. Once old Pods have been killed, the new ReplicaSet can be scaled up further, ensuring that the total number of Pods running at any time during the update is at most 130% of desired Pods.
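
For reference, this is how the two fields appear in a Deployment manifest; the 25% values shown are the defaults and match the manifest quoted later on this page:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%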

Progress Deadline Seconds

.spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports back that the Deployment has failed progressing, surfaced as a condition with type: Progressing, status: "False", and reason: ProgressDeadlineExceeded in the status of the resource. The Deployment controller will keep retrying the Deployment. This defaults to 600. In the future, once automatic rollback is implemented, the Deployment controller will roll back a Deployment as soon as it observes such a condition.

If specified, this field needs to be greater than .spec.minReadySeconds .

Min Ready Seconds

.spec.minReadySeconds is an optional field that specifies the minimum number of seconds for which a newly created Pod should be ready without any of its containers crashing, for it to be considered available. This defaults to 0 (the Pod will be considered available as soon as it is ready). To learn more about when a Pod is considered ready, see Container Probes.

Revision History Limit

A Deployment’s revision history is stored in the ReplicaSets it controls.

.spec.revisionHistoryLimit is an optional field that specifies the number of old ReplicaSets to retain to allow rollback. These old ReplicaSets consume resources in etcd and crowd the output of kubectl get rs . The configuration of each Deployment revision is stored in its ReplicaSets; therefore, once an old ReplicaSet is deleted, you lose the ability to rollback to that revision of Deployment. By default, 10 old ReplicaSets will be kept, however its ideal value depends on the frequency and stability of new Deployments.

More specifically, setting this field to zero means that all old ReplicaSets with 0 replicas will be cleaned up. In this case, a new Deployment rollout cannot be undone, since its revision history is cleaned up.

Paused

.spec.paused is an optional boolean field for pausing and resuming a Deployment. The only difference between a paused Deployment and one that is not paused, is that any changes into the PodTemplateSpec of the paused Deployment will not trigger new rollouts as long as it is paused. A Deployment is not paused by default when it is created.

What’s next

  • Learn about Pods.
  • Run a Stateless Application Using a Deployment.
  • Deployment is a top-level resource in the Kubernetes REST API. Read the Deployment object definition to understand the API for deployments.
  • Read about PodDisruptionBudget and how you can use it to manage application availability during disruptions.


Source

I am new to Kubernetes; may I know what I should add or change here?
Which jobs are failing:
azure/k8s-deploy@v1.2

Which test(s) are failing:
Deployment

Since when has it been failing:
It has been failing since the first deployment.

strategy:  none
/usr/bin/kubectl apply -f /tmp/Deployment_scoutasiacluster-dbd1_1602667425570,/tmp/Service_scoutasiacluster-dbd1_1602667425571 --namespace scoutasiacluster8fce
deployment.apps/scoutasiacluster-dbd1 configured
service/scoutasiacluster-dbd1 unchanged
/usr/bin/kubectl rollout status Deployment/scoutasiacluster-dbd1 --namespace scoutasiacluster8fce
Waiting for deployment "scoutasiacluster-dbd1" rollout to finish: 1 out of 2 new replicas have been updated...
Waiting for deployment "scoutasiacluster-dbd1" rollout to finish: 1 out of 2 new replicas have been updated...
Waiting for deployment "scoutasiacluster-dbd1" rollout to finish: 1 out of 2 new replicas have been updated...
Waiting for deployment "scoutasiacluster-dbd1" rollout to finish: 1 out of 2 new replicas have been updated...
Waiting for deployment "scoutasiacluster-dbd1" rollout to finish: 1 out of 2 new replicas have been updated...
Waiting for deployment "scoutasiacluster-dbd1" rollout to finish: 1 out of 2 new replicas have been updated...
Waiting for deployment "scoutasiacluster-dbd1" rollout to finish: 1 out of 2 new replicas have been updated...
Waiting for deployment "scoutasiacluster-dbd1" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "scoutasiacluster-dbd1" rollout to finish: 1 old replicas are pending termination...
error: deployment "scoutasiacluster-dbd1" exceeded its progress deadline
Waiting for deployment "scoutasiacluster-dbd1" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "scoutasiacluster-dbd1" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "scoutasiacluster-dbd1" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "scoutasiacluster-dbd1" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "scoutasiacluster-dbd1" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "scoutasiacluster-dbd1" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "scoutasiacluster-dbd1" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "scoutasiacluster-dbd1" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "scoutasiacluster-dbd1" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "scoutasiacluster-dbd1" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "scoutasiacluster-dbd1" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "scoutasiacluster-dbd1" rollout to finish: 1 old replicas are pending termination...
Error: Error: error: deployment "scoutasiacluster-dbd1" exceeded its progress deadline
/usr/bin/kubectl describe Deployment scoutasiacluster-dbd1 --namespace scoutasiacluster8fce
Name:                   scoutasiacluster-dbd1
Namespace:              scoutasiacluster8fce
CreationTimestamp:      Wed, 14 Oct 2020 06:09:46 +0000
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 14
Selector:               app=scoutasiacluster-dbd1
Replicas:               2 desired | 2 updated | 3 total | 0 available | 3 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=scoutasiacluster-dbd1
  Containers:
   scoutasiacluster-dbd1:
    Image:        ***.azurecr.io/scoutasiacluster:d4a2078b03d4fa031146c48c83b623735d9b59ee
    Port:         4902/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      False   MinimumReplicasUnavailable
  Progressing    False   ProgressDeadlineExceeded
OldReplicaSets:  scoutasiacluster-dbd1-65c476cfdc (1/1 replicas created)
NewReplicaSet:   scoutasiacluster-dbd1-557f685568 (2/2 replicas created)
Events:
  Type    Reason             Age                  From                   Message
  ----    ------             ----                 ----                   -------
  Normal  ScalingReplicaSet  53m                  deployment-controller  Scaled down replica set scoutasiacluster-dbd1-55dcbb85cd to 1
  Normal  ScalingReplicaSet  53m                  deployment-controller  Scaled up replica set scoutasiacluster-dbd1-69c78c8dbf to 1
  Normal  ScalingReplicaSet  46m (x2 over 3h13m)  deployment-controller  Scaled down replica set scoutasiacluster-dbd1-55dcbb85cd to 0
  Normal  ScalingReplicaSet  46m                  deployment-controller  Scaled up replica set scoutasiacluster-dbd1-69c78c8dbf to 2
  Normal  ScalingReplicaSet  36m                  deployment-controller  Scaled down replica set scoutasiacluster-dbd1-d6454856d to 0
  Normal  ScalingReplicaSet  36m                  deployment-controller  Scaled up replica set scoutasiacluster-dbd1-694bdbc585 to 1
  Normal  ScalingReplicaSet  35m                  deployment-controller  Scaled down replica set scoutasiacluster-dbd1-69c78c8dbf to 1
  Normal  ScalingReplicaSet  35m                  deployment-controller  Scaled up replica set scoutasiacluster-dbd1-694bdbc585 to 2
  Normal  ScalingReplicaSet  34m                  deployment-controller  Scaled down replica set scoutasiacluster-dbd1-69c78c8dbf to 0
  Normal  ScalingReplicaSet  34m                  deployment-controller  Scaled up replica set scoutasiacluster-dbd1-79798c79c9 to 1
  Normal  ScalingReplicaSet  33m                  deployment-controller  Scaled down replica set scoutasiacluster-dbd1-694bdbc585 to 1
  Normal  ScalingReplicaSet  33m                  deployment-controller  Scaled up replica set scoutasiacluster-dbd1-5f98b46d6 to 1
  Normal  ScalingReplicaSet  32m                  deployment-controller  Scaled down replica set scoutasiacluster-dbd1-694bdbc585 to 0
  Normal  ScalingReplicaSet  20m                  deployment-controller  Scaled down replica set scoutasiacluster-dbd1-79798c79c9 to 0
  Normal  ScalingReplicaSet  20m                  deployment-controller  Scaled up replica set scoutasiacluster-dbd1-7d4c8cf88b to 1
  Normal  ScalingReplicaSet  19m                  deployment-controller  Scaled down replica set scoutasiacluster-dbd1-5f98b46d6 to 1
  Normal  ScalingReplicaSet  19m                  deployment-controller  Scaled up replica set scoutasiacluster-dbd1-65c476cfdc to 1
  Normal  ScalingReplicaSet  18m                  deployment-controller  Scaled up replica set scoutasiacluster-dbd1-65c476cfdc to 2
  Normal  ScalingReplicaSet  18m                  deployment-controller  Scaled down replica set scoutasiacluster-dbd1-5f98b46d6 to 0
  Normal  ScalingReplicaSet  18m                  deployment-controller  Scaled down replica set scoutasiacluster-dbd1-65c476cfdc to 1
  Normal  ScalingReplicaSet  18m                  deployment-controller  Scaled up replica set scoutasiacluster-dbd1-557f685568 to 1
  Normal  ScalingReplicaSet  17m (x2 over 32m)    deployment-controller  (combined from similar events): Scaled up replica set scoutasiacluster-dbd1-557f685568 to 2
  Normal  ScalingReplicaSet  17m                  deployment-controller  Scaled down replica set scoutasiacluster-dbd1-7d4c8cf88b to 0
/usr/bin/kubectl get service/scoutasiacluster-dbd1 -o json --namespace scoutasiacluster8fce
{
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {
        "annotations": {
            "kubectl.kubernetes.io/last-applied-configuration": "{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"scoutasiacluster-dbd1"},"name":"scoutasiacluster-dbd1","namespace":"scoutasiacluster8fce"},"spec":{"ports":[{"name":"http","port":4902,"protocol":"TCP","targetPort":4902}],"selector":{"app":"scoutasiacluster-dbd1"},"type":"LoadBalancer"}}n"
        },
        "creationTimestamp": "2020-10-14T06:09:46Z",
        "finalizers": [
            "service.kubernetes.io/load-balancer-cleanup"
        ],
        "labels": {
            "app": "scoutasiacluster-dbd1"
        },
        "name": "scoutasiacluster-dbd1",
        "namespace": "scoutasiacluster8fce",
        "resourceVersion": "163012",
        "selfLink": "/api/v1/namespaces/scoutasiacluster8fce/services/scoutasiacluster-dbd1",
        "uid": "29cfb024-3cd0-4d48-a032-97ac497bf97c"
    },
    "spec": {
        "clusterIP": "x.x.x.x",
        "externalTrafficPolicy": "Cluster",
        "ports": [
            {
                "name": "http",
                "nodePort": 31494,
                "port": 4902,
                "protocol": "TCP",
                "targetPort": 4902
            }
        ],
        "selector": {
            "app": "scoutasiacluster-dbd1"
        },
        "sessionAffinity": "None",
        "type": "LoadBalancer"
    },
    "status": {
        "loadBalancer": {
            "ingress": [
                {
                    "ip": "x.x.x.x"
                }
            ]
        }
    }
}
ServiceExternalIP scoutasiacluster-dbd1 x.x.x.x
Error: Error: RolloutStatusTimedout
on: [push]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@master
    
    - uses: azure/docker-login@v1
      with:
        login-server: saesvcseasitet.azurecr.io
        username: ${{ secrets.acr_saesvcseasitet_username }}
        password: ${{ secrets.acr_saesvcseasitet_password }}
    
    - name: Build and push image to ACR
      id: build-image
      run: |
        docker build "$GITHUB_WORKSPACE/" -f  "Dockerfile" -t saesvcseasitet.azurecr.io/scoutasiacluster:${{ github.sha }}
        docker push saesvcseasitet.azurecr.io/scoutasiacluster:${{ github.sha }}
    
    - uses: azure/k8s-set-context@v1
      with:
         kubeconfig: ${{ secrets.aks_scoutasiacluster_kubeConfig }}
      id: login
    
    - name: Create namespace
      run: |
        namespacePresent=`kubectl get namespace | grep scoutasiacluster8fce | wc -l`
        if [ $namespacePresent -eq 0 ]
        then
            echo `kubectl create namespace scoutasiacluster8fce`
        fi

    - uses: azure/k8s-create-secret@v1
      with:
        namespace: scoutasiacluster8fce
        container-registry-url: saesvcseasitet.azurecr.io
        container-registry-username: ${{ secrets.acr_saesvcseasitet_username }}
        container-registry-password: ${{ secrets.acr_saesvcseasitet_password }}
        secret-name: scoutasiacludockerauth
       
    - uses: azure/k8s-deploy@v1.2
      with:
        namespace: scoutasiacluster8fce
        manifests: |
          manifests/deployment.yml
          manifests/service.yml
        images: |
          saesvcseasitet.azurecr.io/scoutasiacluster:${{ github.sha }}
        imagepullsecrets: |
          scoutasiacludockerauth
kind: Deployment
apiVersion: apps/v1
metadata:
  name: scoutasiacluster-dbd1
  namespace: scoutasiacluster8fce
  selfLink: >-
    /apis/apps/v1/namespaces/scoutasiacluster8fce/deployments/scoutasiacluster-dbd1
  uid: 0b046dbf-90de-4c40-8e26-11ee8f5d804f
  resourceVersion: '218963'
  generation: 16
  creationTimestamp: '2020-10-14T06:09:46Z'
  annotations:
    deployment.kubernetes.io/revision: '16'
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"scoutasiacluster-dbd1","namespace":"scoutasiacluster8fce"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"scoutasiacluster-dbd1"}},"template":{"metadata":{"labels":{"app":"scoutasiacluster-dbd1"}},"spec":{"containers":[{"image":"saesvcseasitet.azurecr.io/scoutasiacluster:8b6d5618483648ca6f023dc6b804282ca36f680b","name":"scoutasiacluster-dbd1","ports":[{"containerPort":4902}]}],"imagePullSecrets":[{"name":"scoutasiacludockerauth"}]}}}}
spec:
  replicas: 2
  selector:
    matchLabels:
      app: scoutasiacluster-dbd1
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: scoutasiacluster-dbd1
    spec:
      containers:
        - name: scoutasiacluster-dbd1
          image: >-
            saesvcseasitet.azurecr.io/scoutasiacluster:8b6d5618483648ca6f023dc6b804282ca36f680b
          ports:
            - containerPort: 4902
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      imagePullSecrets:
        - name: scoutasiacludockerauth
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
status:
  observedGeneration: 16
  replicas: 3
  updatedReplicas: 2
  unavailableReplicas: 3
  conditions:
    - type: Available
      status: 'False'
      lastUpdateTime: '2020-10-14T09:24:27Z'
      lastTransitionTime: '2020-10-14T09:24:27Z'
      reason: MinimumReplicasUnavailable
      message: Deployment does not have minimum availability.
    - type: Progressing
      status: 'True'
      lastUpdateTime: '2020-10-14T12:37:01Z'
      lastTransitionTime: '2020-10-14T12:37:01Z'
      reason: ReplicaSetUpdated
      message: ReplicaSet "scoutasiacluster-dbd1-6fbb8bcf69" is progressing.

Testgrid link:

Reason for failure:
It waits too long and then the rollout fails.

Anything else we need to know:

A Deployment enters various states during its lifecycle. It can be progressing while rolling out a new ReplicaSet, it can be complete, or it can fail to progress.

Progressing Deployment

Kubernetes marks a Deployment as progressing when one of the following tasks is performed:

  • The Deployment creates a new ReplicaSet.
  • The Deployment is scaling up its newest ReplicaSet.
  • The Deployment is scaling down its older ReplicaSet(s).
  • New Pods become ready or available (ready for at least MinReadySeconds).

You can monitor the progress of a Deployment by using kubectl rollout status.

Complete Deployment

Kubernetes marks a Deployment as complete when it has the following characteristics:

  • All of the replicas associated with the Deployment have been updated to the latest version you have specified, meaning any updates you have requested have been completed.
  • All of the replicas associated with the Deployment are available.
  • No old replicas for the Deployment are running.

You can check whether a Deployment has completed by using kubectl rollout status. If the rollout completed successfully, kubectl rollout status returns a zero exit code.

kubectl rollout status deployment.v1.apps/nginx-deployment

The output is similar to this:

Waiting for rollout to finish: 2 of 3 updated replicas are available...
deployment.apps/nginx-deployment successfully rolled out
$ echo $?
0

Failed Deployment

Your Deployment may get stuck trying to deploy its newest ReplicaSet without ever completing. This can occur due to some of the following factors:

  • Insufficient quota
  • Readiness probe failures
  • Image pull errors
  • Insufficient permissions
  • Limit ranges
  • Application runtime misconfiguration

One way you can detect this condition is to specify a deadline parameter in your Deployment spec: .spec.progressDeadlineSeconds. It denotes the number of seconds the Deployment controller waits before indicating (in the Deployment status) that the Deployment progress has stalled.

The following kubectl command sets the spec with progressDeadlineSeconds to make the controller report lack of progress for a Deployment after 10 minutes:

kubectl patch deployment.v1.apps/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}'

The output is similar to this:

deployment.apps/nginx-deployment patched

Once the deadline has been exceeded, the Deployment controller adds a DeploymentCondition with the following attributes to the Deployment’s .status.conditions:

  • Type=Progressing
  • Status=False
  • Reason=ProgressDeadlineExceeded

Note: Kubernetes takes no action on a stalled Deployment other than reporting a status condition with Reason=ProgressDeadlineExceeded. Higher-level orchestrators can take advantage of this and act accordingly, for example, roll back the Deployment to its previous version.

Note: If you pause a Deployment, Kubernetes does not check progress against the specified deadline. You can safely pause a Deployment in the middle of a rollout and resume it without triggering the condition for exceeding the deadline.

You may experience transient errors with your Deployments, either due to a low timeout that you have set or due to any other kind of error that can be treated as transient. For example, suppose you have insufficient quota. If you describe the Deployment, you will notice the following section:

kubectl describe deployment nginx-deployment

The output is similar to this:

<...>
Conditions:
  Type            Status  Reason
  ----            ------  ------
  Available       True    MinimumReplicasAvailable
  Progressing     True    ReplicaSetUpdated
  ReplicaFailure  True    FailedCreate
<...>

If you run kubectl get deployment nginx-deployment -o yaml, the Deployment status is similar to this:

status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: 2016-10-04T12:25:39Z
    lastUpdateTime: 2016-10-04T12:25:39Z
    message: Replica set "nginx-deployment-4262182780" is progressing.
    reason: ReplicaSetUpdated
    status: "True"
    type: Progressing
  - lastTransitionTime: 2016-10-04T12:25:42Z
    lastUpdateTime: 2016-10-04T12:25:42Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2016-10-04T12:25:39Z
    lastUpdateTime: 2016-10-04T12:25:39Z
    message: 'Error creating: pods "nginx-deployment-4262182780-" is forbidden: exceeded quota:
      object-counts, requested: pods=1, used: pods=3, limited: pods=2'
    reason: FailedCreate
    status: "True"
    type: ReplicaFailure
  observedGeneration: 3
  replicas: 2
  unavailableReplicas: 2

Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the reason for the Progressing condition:

Conditions:
  Type            Status  Reason
  ----            ------  ------
  Available       True    MinimumReplicasAvailable
  Progressing     False   ProgressDeadlineExceeded
  ReplicaFailure  True    FailedCreate

You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other controllers you may be running, or by increasing the quota in your namespace. If you satisfy the quota conditions and the Deployment controller then completes the rollout, you will see the Deployment's status update with a successful condition (Status=True and Reason=NewReplicaSetAvailable).

Conditions:
  Type          Status  Reason
  ----          ------  ------
  Available     True    MinimumReplicasAvailable
  Progressing   True    NewReplicaSetAvailable

Type=Available with Status=True means that your Deployment has minimum availability. Minimum availability is dictated by the parameters specified in the deployment strategy. Type=Progressing with Status=True means that your Deployment is either in the middle of a rollout and progressing, or that it has successfully completed and the minimum required new replicas are available (in our case, Reason=NewReplicaSetAvailable means that the rollout is complete).

You can check whether a Deployment has failed to progress by using kubectl rollout status. kubectl rollout status returns a non-zero exit code if the Deployment has exceeded its progress deadline.

kubectl rollout status deployment.v1.apps/nginx-deployment

Output:

Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
error: deployment "nginx" exceeded its progress deadline
$ echo $?
1

Operating on a failed Deployment

All actions that apply to a complete Deployment also apply to a failed Deployment. You can scale it up or down, roll back to a previous revision, or even pause it if you need to apply multiple tweaks to the Deployment's Pod template.

Clean up Policy

You can set the .spec.revisionHistoryLimit field in a Deployment to specify how many old ReplicaSets for this Deployment you want to retain. The rest will be garbage-collected in the background. By default, it is 10.

Note: Explicitly setting this field to 0 will result in cleaning up all the history of your Deployment, so the Deployment will not be able to roll back.
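
For illustration, a minimal sketch of where this field sits in a Deployment spec (the value 5 is an arbitrary example, not taken from the text above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  revisionHistoryLimit: 5   # keep only the five most recent old ReplicaSets for rollback
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2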

Canary Deployment

If you want to roll out releases to a subset of users or servers using the Deployment, you can create multiple Deployments, one for each release, following the canary pattern.
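
A minimal sketch of this pattern, assuming hypothetical track labels, image tags, and replica counts (none of which come from this article): the two Deployments share the app label that a Service would select on, while the track label keeps their ReplicaSets separate.

# Stable release: receives most of the traffic.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-stable
spec:
  replicas: 9
  selector:
    matchLabels:
      app: myapp
      track: stable
  template:
    metadata:
      labels:
        app: myapp
        track: stable
    spec:
      containers:
      - name: myapp
        image: myapp:1.0.0
---
# Canary release: a single replica running the new version.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      track: canary
  template:
    metadata:
      labels:
        app: myapp
        track: canary
    spec:
      containers:
      - name: myapp
        image: myapp:1.1.0

A Service selecting only app: myapp would then spread traffic across both Deployments roughly in proportion to their replica counts.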


See also:

  • Controllers in Kubernetes: Deployment, creation
  • Controllers in Kubernetes: Deployment, updates
  • Controllers in Kubernetes: Deployment, rollback

Here’s a post about deploying applications to Kubernetes and associated things to take into account. This post was originally published in 2019, but is good stuff today – if you encounter something that should be updated, please let us know!

When writing and setting up software, it’s natural for us to focus on just the happy path. After all, that’s the path that everyone wants. Unfortunately, software can fail quite often, so we need to give the unhappy paths some attention as well.

Kubernetes is no exception here. When deploying software to Kubernetes, it’s easy to focus on the happy path without properly checking that everything went as expected. In this article, I’ll talk about what is typically missing when deploying applications to Kubernetes, and demonstrate how to improve it.

Typical flow for deploying applications to Kubernetes

In Kubernetes, most service-style applications use Deployments to run applications on Kubernetes. Using Deployments, you can describe how to run your application container as a Pod in Kubernetes and how many replicas of the application to run. Kubernetes will then take care of running as many replicas as specified.

Here’s an example deployment manifest in YAML format for running three instances of a simple hello world web app:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: polarsquad/hello-world-app:master
        name: hello-world
        ports:
        - containerPort: 3000

One of the key features of Deployments is how they manage application updates. By default, updating the Deployment manifest in Kubernetes causes the application to be updated in a rolling fashion. This way you'll have the previous version of the deployment running while the new one is brought up. In the Deployment manifest, you can specify how many replicas to bring up and down at once during updates.

For example, we can add a rolling update strategy to the spec section of the manifest where we bring one replica up at a time, and make sure there are no missing healthy replicas at any point during the upgrade.

spec:
 strategy:
   type: RollingUpdate
   rollingUpdate:
     maxUnavailable: 0
     maxSurge: 1

The update is usually performed either by patching the manifest directly or by applying a full Deployment manifest from the file system. From Kubernetes' point of view, it makes no difference. If the contents of the manifest update are valid, then Kubernetes will happily accept the update. Most of the time, an application update simply changes the container image tag or some of the environment variable configuration you might have.

To automate the process, you might choose to deploy your app in your CI pipeline using kubectl.

kubectl apply -f deployment.yaml

So now you have a pattern and a flow for getting your app to run on Kubernetes. Everything good, right? Unfortunately, no!

It’s a great start, but it’s usually not enough. Applying a deployment to Kubernetes finishes once Kubernetes has accepted the manifest, not once the rollout has completed. kubectl apply does not verify that your application even starts. This deployment flow is demonstrated in the picture below.

In order to properly check that the update proceeds as expected, we need assistance from another kubectl command.

Rollout to the rescue!

This is where kubectl’s rollout command becomes handy! We can use it to check how our deployment is doing.

By default, the command waits until all of the Pods in the deployment have been started successfully. When the deployment succeeds, the command exits with return code zero to indicate success.

$ kubectl rollout status deployment myapp
Waiting for deployment "myapp" rollout to finish: 0 of 3 updated replicas are available…
Waiting for deployment "myapp" rollout to finish: 1 of 3 updated replicas are available…
Waiting for deployment "myapp" rollout to finish: 2 of 3 updated replicas are available…
deployment "myapp" successfully rolled out

If the deployment fails, the command exits with a non-zero return code to indicate a failure.

If you’re already using kubectl to deploy applications from CI, using rollout to verify your deployment in CI will be a breeze. By running rollout directly after deploying changes, we can block the CI task from completing until the application deployment finishes. We can then use the return code from rollout to either pass or fail the CI task.

So far so good, but how does Kubernetes know when an application deployment succeeds?

Readiness probes and deadlines

In order for Kubernetes to know when an application is ready, it needs some help from the application. Kubernetes uses readiness probes to examine how the application is doing. Once an application instance starts responding to the readiness probe with a positive response, the instance is considered ready for use.

For web services, the simplest implementation is an HTTP GET endpoint that starts responding with a 200 OK status code when the server starts. In our hello world app, we could consider the app healthy when the index page can be loaded. Here’s the readiness probe configuration for our hello world app:

readinessProbe:
  httpGet:
    path: /
    port: 3000

A more sophisticated implementation of the health check might perform some background checks to verify that everything is ready for the application to serve requests, and serve that information through a dedicated health endpoint (e.g. /health or /ready). It’s up to the application developers to figure out when the application is ready, and how to respond back to probes.
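
As an illustrative sketch (the /ready path and the timing values are assumptions, not taken from the hello world app), such a probe could look like this:

readinessProbe:
  httpGet:
    path: /ready          # dedicated health endpoint instead of the index page
    port: 3000
  initialDelaySeconds: 5  # give the app a moment to boot before the first probe
  periodSeconds: 10       # re-check readiness every 10 seconds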

Readiness probes tell Kubernetes when an application is ready, but not if the application will ever become ready. If the application keeps failing, it may never respond with a positive response to Kubernetes. How does Kubernetes then know when the deployment is going nowhere?

In our Deployment manifest, we can specify how long Kubernetes should wait for the deployment to progress before it considers the deployment to have failed. If the deployment doesn’t make progress before the deadline, Kubernetes marks the deployment status as failed, which the rollout status command will be able to pick up.
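
In manifest form this is a single field in the Deployment spec. Here is a sketch assuming the 30-second deadline used in the example further below:

spec:
  progressDeadlineSeconds: 30   # mark the rollout as failed if no progress is made for 30 seconds
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1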

$ kubectl rollout status deployment myapp
Waiting for deployment "myapp" rollout to finish: 1 out of 3 new replicas have been updated…
error: deployment "myapp" exceeded its progress deadline

What makes the deadline fantastic is that if the deployment manages to proceed within the deadline, Kubernetes will reset the deadline timer, and start waiting again. This way you don’t have to estimate a deadline for the entire deployment, but just a single instance of the application.

For example, if we set a deadline of 30 seconds, Kubernetes will wait 30 seconds for the application to become ready. If the application becomes ready, Kubernetes will wait another 30 seconds for the next instance to become ready.

Scripting automated rollback

Currently, when a deployment fails in Kubernetes, the deployment process stops, but the pods from the failed deployment are kept around. On deployment failure, your environment may contain pods from both the old and new deployments.

To get back to a stable, working state, we can use the rollout undo command to bring back the working pods and clean up the failed deployment.

$ kubectl rollout undo deployment myapp
deployment.extensions/myapp
$ kubectl rollout status deployment myapp
deployment "myapp" successfully rolled out

Awesome! Now that we have a way to determine when our deployments fail and how to revert the deployment, we can automate the deployment and rollback process with a simple shell script.

kubectl apply -f myapp.yaml
if ! kubectl rollout status deployment myapp; then
    kubectl rollout undo deployment myapp
    kubectl rollout status deployment myapp
    exit 1
fi

We first roll out the changes and then immediately wait for the rollout status. If the rollout succeeds, we continue normally. If it fails, we undo the deployment, wait for the undo to finish, and report back a failure with exit code 1. This flow is demonstrated in the picture below.

There is one major caveat not addressed in the script: kubectl commands may fail because of the network conditions! The script above doesn’t account for any connection failures, which means that the script may interpret a network failure in the rollout command as a failed deployment. Kubectl does retry retriable errors automatically, but it will fail eventually if the Kubernetes API is not available for a long period of time.

Conclusion

In this article, I’ve talked about the typical deployment flow used with service style applications in Kubernetes, and how it’s not enough to ensure safe deployments. I’ve presented a way to extend the deployment flow with status checks and an automated rollback procedure.

One area I haven’t covered is how the same checks and automated rollback are achieved in Helm. Helm deployments have their own additional quirks when it comes to detecting failed deployments. I’ve covered these quirks in this article.

I’ve published the code examples in a GitHub Gist. Thanks for reading!

Deployments

What is a Deployment?

A Deployment provides declarative updates for Pods and Replica Sets (the next-generation Replication Controller).
You only need to describe the desired state in a Deployment object, and the Deployment
controller will change the actual state to the desired state at a controlled rate for you.
You can define Deployments to create new resources, or replace existing ones with new ones.

A typical use case is:

  • Create a Deployment to bring up a Replica Set and Pods.
  • Check the status of a Deployment to see if it succeeds or not.
  • Later, update that Deployment to recreate the Pods (for example, to use a new image).
  • Rollback to an earlier Deployment revision if the current Deployment isn’t stable.
  • Pause and resume a Deployment.

Creating a Deployment

Here is an example Deployment. It creates a Replica Set to
bring up 3 nginx Pods.

(Example manifest: nginx-deployment.yaml, from /docs/concepts/workloads/controllers/nginx-deployment.yaml)

Run the example by downloading the example file and then running this command:

$ kubectl create -f docs/user-guide/nginx-deployment.yaml --record
deployment "nginx-deployment" created

Setting the kubectl flag --record allows you to record the current command in the annotations of the resources being created or updated. This is useful for future introspection; for example, to see the commands executed in each Deployment revision.

Then running get immediately will give:

$ kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3         0         0            0           1s

This indicates that the Deployment’s number of desired replicas is 3 (according to deployment’s .spec.replicas), the number of current replicas (.status.replicas) is 0, the number of up-to-date replicas (.status.updatedReplicas) is 0, and the number of available replicas (.status.availableReplicas) is also 0.

Running get again a few seconds later should give:

$ kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3         3         3            3           18s

This indicates that the Deployment has created all three replicas, and all replicas are up-to-date (contains the latest pod template) and available (pod status is ready for at least Deployment’s .spec.minReadySeconds). Running kubectl get rs and kubectl get pods will show the Replica Set (RS) and Pods created.

$ kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-2035384211   3         3         0       18s

You may notice that the name of the Replica Set is always <the name of the Deployment>-<hash value of the pod template>.

$ kubectl get pods --show-labels
NAME                                READY     STATUS    RESTARTS   AGE       LABELS
nginx-deployment-2035384211-7ci7o   1/1       Running   0          18s       app=nginx,pod-template-hash=2035384211
nginx-deployment-2035384211-kzszj   1/1       Running   0          18s       app=nginx,pod-template-hash=2035384211
nginx-deployment-2035384211-qqcnn   1/1       Running   0          18s       app=nginx,pod-template-hash=2035384211

The created Replica Set will ensure that there are three nginx Pods at all times.

Note: You must specify an appropriate selector and pod template labels for a Deployment (in this case, app = nginx), i.e. ones that don’t overlap with other controllers (including Deployments, Replica Sets, Replication Controllers, etc.). Kubernetes won’t stop you from doing that, and if you end up with multiple controllers that have overlapping selectors, those controllers will fight with each other and won’t behave correctly.

Updating a Deployment

Note: a Deployment’s rollout is triggered if and only if the Deployment’s pod template (i.e. .spec.template) is changed,
e.g. updating labels or container images of the template. Other updates, such as scaling the Deployment, will not trigger a rollout.

Suppose that we now want to update the nginx Pods to start using the nginx:1.9.1 image
instead of the nginx:1.7.9 image.

$ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
deployment "nginx-deployment" image updated

Alternatively, we can edit the Deployment and change .spec.template.spec.containers[0].image from nginx:1.7.9 to nginx:1.9.1:

$ kubectl edit deployment/nginx-deployment
deployment "nginx-deployment" edited

To see its rollout status, simply run:

$ kubectl rollout status deployment/nginx-deployment
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
deployment "nginx-deployment" successfully rolled out

After the rollout succeeds, you may want to get the Deployment:

$ kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3         3         3            3           36s

The number of up-to-date replicas indicates that the Deployment has updated the replicas to the latest configuration.
The current replicas indicates the total replicas this Deployment manages, and the available replicas indicates the
number of current replicas that are available.

We can run kubectl get rs to see that the Deployment updated the Pods by creating a new Replica Set and scaling it up to 3 replicas, as well as scaling down the old Replica Set to 0 replicas.

$ kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-1564180365   3         3         0       6s
nginx-deployment-2035384211   0         0         0       36s

Running get pods should now show only the new Pods:

$ kubectl get pods
NAME                                READY     STATUS    RESTARTS   AGE
nginx-deployment-1564180365-khku8   1/1       Running   0          14s
nginx-deployment-1564180365-nacti   1/1       Running   0          14s
nginx-deployment-1564180365-z9gth   1/1       Running   0          14s

Next time we want to update these Pods, we only need to update the Deployment’s pod template again.

Deployment can ensure that only a certain number of Pods may be down while they are being updated. By
default, it ensures that at least 75% of the desired number of Pods are up (25% max unavailable).

Deployment can also ensure that only a certain number of Pods may be created above the desired number of Pods. By default, it ensures that at most 25% more than the desired number of Pods are up (25% max surge).

For example, if you look at the above Deployment closely, you will see that
it first created a new Pod, then deleted some old Pods and created new ones. It
does not kill old Pods until a sufficient number of new Pods have come up, and does not create new Pods until a sufficient number of old Pods have been killed. It makes sure that the number of available Pods is at least 2 and the total number of Pods is at most 4.

$ kubectl describe deployments
Name:           nginx-deployment
Namespace:      default
CreationTimestamp:  Tue, 15 Mar 2016 12:01:06 -0700
Labels:         app=nginx
Selector:       app=nginx
Replicas:       3 updated | 3 total | 3 available | 0 unavailable
StrategyType:       RollingUpdate
MinReadySeconds:    0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:     <none>
NewReplicaSet:      nginx-deployment-1564180365 (3/3 replicas created)
Events:
  FirstSeen LastSeen    Count   From                     SubobjectPath   Type        Reason              Message
  --------- --------    -----   ----                     -------------   --------    ------              -------
  36s       36s         1       {deployment-controller }                 Normal      ScalingReplicaSet   Scaled up replica set nginx-deployment-2035384211 to 3
  23s       23s         1       {deployment-controller }                 Normal      ScalingReplicaSet   Scaled up replica set nginx-deployment-1564180365 to 1
  23s       23s         1       {deployment-controller }                 Normal      ScalingReplicaSet   Scaled down replica set nginx-deployment-2035384211 to 2
  23s       23s         1       {deployment-controller }                 Normal      ScalingReplicaSet   Scaled up replica set nginx-deployment-1564180365 to 2
  21s       21s         1       {deployment-controller }                 Normal      ScalingReplicaSet   Scaled down replica set nginx-deployment-2035384211 to 0
  21s       21s         1       {deployment-controller }                 Normal      ScalingReplicaSet   Scaled up replica set nginx-deployment-1564180365 to 3

Here we see that when we first created the Deployment, it created a Replica Set (nginx-deployment-2035384211) and scaled it up to 3 replicas directly.
When we updated the Deployment, it created a new Replica Set (nginx-deployment-1564180365) and scaled it up to 1 and then scaled down the old Replica Set to 2, so that at least 2 Pods were available and at most 4 Pods were created at all times.
It then continued scaling up and down the new and the old Replica Set, with the same rolling update strategy. Finally, we’ll have 3 available replicas in the new Replica Set, and the old Replica Set is scaled down to 0.

Multiple Updates

Each time a new deployment object is observed by the deployment controller, a Replica Set is
created to bring up the desired Pods if there is no existing Replica Set doing so.
Existing Replica Sets controlling Pods whose labels match .spec.selector but whose
template does not match .spec.template are scaled down.
Eventually, the new Replica Set will be scaled to .spec.replicas and all old Replica Sets will
be scaled to 0.

If you update a Deployment while an existing rollout is in progress,
the Deployment will create a new Replica Set as per the update and start scaling that up, and
will roll over the Replica Set that it was scaling up previously; it will add it to its list of old Replica Sets and will
start scaling it down.

For example, suppose you create a Deployment to create 5 replicas of nginx:1.7.9,
but then update the Deployment to create 5 replicas of nginx:1.9.1, when only 3
replicas of nginx:1.7.9 had been created. In that case, the Deployment will immediately start
killing the 3 nginx:1.7.9 Pods that it had created, and will start creating
nginx:1.9.1 Pods. It will not wait for 5 replicas of nginx:1.7.9 to be created
before changing course.

Rolling Back a Deployment

Sometimes you may want to roll back a Deployment; for example, when the Deployment is not stable, such as crash looping.
By default, the last two revisions of a Deployment’s rollout history are kept in the system so that you can roll back anytime you want
(you can change that by modifying the revision history limit).

Note: a Deployment’s revision is created when a Deployment’s rollout is triggered. This means that the new revision is created
if and only if the Deployment’s pod template (i.e. .spec.template) is changed, e.g. updating labels or container images of the template.
Other updates, such as scaling the Deployment, will not create a Deployment revision — so that we can facilitate simultaneous manual- or
auto-scaling. This implies that when you rollback to an earlier revision, only the Deployment’s pod template part will be rolled back.

Suppose that we made a typo while updating the Deployment, by putting the image name as nginx:1.91 instead of nginx:1.9.1:

$ kubectl set image deployment/nginx-deployment nginx=nginx:1.91
deployment "nginx-deployment" image updated

The rollout will be stuck.

$ kubectl rollout status deployments nginx-deployment
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...

Press Ctrl-C to stop the above rollout status watch. For more information on stuck rollouts, read more here.

You will also see that both the number of old replicas (nginx-deployment-1564180365 and nginx-deployment-2035384211) and new replicas (nginx-deployment-3066724191) are 2.

$ kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-1564180365   2         2         0       25s
nginx-deployment-2035384211   0         0         0       36s
nginx-deployment-3066724191   2         2         2       6s

Looking at the Pods created, you will see that the 2 Pods created by the new Replica Set are stuck in an image pull loop.

$ kubectl get pods
NAME                                READY     STATUS             RESTARTS   AGE
nginx-deployment-1564180365-70iae   1/1       Running            0          25s
nginx-deployment-1564180365-jbqqo   1/1       Running            0          25s
nginx-deployment-3066724191-08mng   0/1       ImagePullBackOff   0          6s
nginx-deployment-3066724191-eocby   0/1       ImagePullBackOff   0          6s

Note that the Deployment controller will stop the bad rollout automatically, and will stop scaling up the new Replica Set.

$ kubectl describe deployment
Name:           nginx-deployment
Namespace:      default
CreationTimestamp:  Tue, 15 Mar 2016 14:48:04 -0700
Labels:         app=nginx
Selector:       app=nginx
Replicas:       2 updated | 3 total | 2 available | 2 unavailable
StrategyType:       RollingUpdate
MinReadySeconds:    0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:     nginx-deployment-1564180365 (2/2 replicas created)
NewReplicaSet:      nginx-deployment-3066724191 (2/2 replicas created)
Events:
  FirstSeen LastSeen    Count   From                    SubobjectPath   Type        Reason              Message
  --------- --------    -----   ----                    -------------   --------    ------              -------
  1m        1m          1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set nginx-deployment-2035384211 to 3
  22s       22s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set nginx-deployment-1564180365 to 1
  22s       22s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled down replica set nginx-deployment-2035384211 to 2
  22s       22s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set nginx-deployment-1564180365 to 2
  21s       21s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled down replica set nginx-deployment-2035384211 to 0
  21s       21s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set nginx-deployment-1564180365 to 3
  13s       13s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set nginx-deployment-3066724191 to 1
  13s       13s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled down replica set nginx-deployment-1564180365 to 2
  13s       13s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set nginx-deployment-3066724191 to 2

To fix this, we need to roll back to a previous revision of the Deployment that is stable.

Checking Rollout History of a Deployment

First, check the revisions of this deployment:

$ kubectl rollout history deployment/nginx-deployment
deployments "nginx-deployment":
REVISION    CHANGE-CAUSE
1           kubectl create -f docs/user-guide/nginx-deployment.yaml --record
2           kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
3           kubectl set image deployment/nginx-deployment nginx=nginx:1.91

Because we recorded the command while creating this Deployment using --record, we can easily see the changes we made in each revision.

To further see the details of each revision, run:

$ kubectl rollout history deployment/nginx-deployment --revision=2
deployments "nginx-deployment" revision 2
  Labels:       app=nginx
          pod-template-hash=1159050644
  Annotations:  kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
  Containers:
   nginx:
    Image:      nginx:1.9.1
    Port:       80/TCP
     QoS Tier:
        cpu:      BestEffort
        memory:   BestEffort
    Environment Variables:      <none>
  No volumes.

Rolling Back to a Previous Revision

Now we’ve decided to undo the current rollout and roll back to the previous revision:

$ kubectl rollout undo deployment/nginx-deployment
deployment "nginx-deployment" rolled back

Alternatively, you can roll back to a specific revision by specifying it with --to-revision:

$ kubectl rollout undo deployment/nginx-deployment --to-revision=2
deployment "nginx-deployment" rolled back

For more details about rollout related commands, read kubectl rollout.

The Deployment is now rolled back to a previous stable revision. As you can see, a DeploymentRollback event for rolling back to revision 2 is generated by the Deployment controller.

$ kubectl get deployment
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3         3         3            3           30m

$ kubectl describe deployment
Name:           nginx-deployment
Namespace:      default
CreationTimestamp:  Tue, 15 Mar 2016 14:48:04 -0700
Labels:         app=nginx
Selector:       app=nginx
Replicas:       3 updated | 3 total | 3 available | 0 unavailable
StrategyType:       RollingUpdate
MinReadySeconds:    0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:     <none>
NewReplicaSet:      nginx-deployment-1564180365 (3/3 replicas created)
Events:
  FirstSeen LastSeen    Count   From                    SubobjectPath   Type        Reason              Message
  --------- --------    -----   ----                    -------------   --------    ------              -------
  30m       30m         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set nginx-deployment-2035384211 to 3
  29m       29m         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set nginx-deployment-1564180365 to 1
  29m       29m         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled down replica set nginx-deployment-2035384211 to 2
  29m       29m         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set nginx-deployment-1564180365 to 2
  29m       29m         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled down replica set nginx-deployment-2035384211 to 0
  29m       29m         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set nginx-deployment-3066724191 to 2
  29m       29m         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set nginx-deployment-3066724191 to 1
  29m       29m         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled down replica set nginx-deployment-1564180365 to 2
  2m        2m          1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled down replica set nginx-deployment-3066724191 to 0
  2m        2m          1       {deployment-controller }                Normal      DeploymentRollback  Rolled back deployment "nginx-deployment" to revision 2
  29m       2m          2       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set nginx-deployment-1564180365 to 3

Clean up Policy

You can set the .spec.revisionHistoryLimit field to specify how much revision history of this Deployment you want to keep. By default,
all revision history will be kept; explicitly setting this field to 0 disallows rolling the Deployment back.

Scaling a Deployment

You can scale a Deployment by using the following command:

$ kubectl scale deployment nginx-deployment --replicas 10
deployment "nginx-deployment" scaled

Assuming horizontal pod autoscaling is enabled
in your cluster, you can set up an autoscaler for your Deployment and choose the minimum and maximum number of
Pods you want to run based on the CPU utilization of your existing Pods.

$ kubectl autoscale deployment nginx-deployment --min=10 --max=15 --cpu-percent=80
deployment "nginx-deployment" autoscaled

RollingUpdate Deployments support running multiple versions of an application at the same time. When you
or an autoscaler scales a RollingUpdate Deployment that is in the middle of a rollout (either in progress
or paused), then the Deployment controller will balance the additional replicas in the existing active
ReplicaSets (ReplicaSets with Pods) in order to mitigate risk. This is called proportional scaling.

For example, you are running a Deployment with 10 replicas, maxSurge=3, and maxUnavailable=2.

$ kubectl get deploy
NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment     10        10        10           10          50s

You update to a new image which happens to be unresolvable from inside the cluster.

$ kubectl set image deploy/nginx-deployment nginx=nginx:sometag
deployment "nginx-deployment" image updated

The image update starts a new rollout with ReplicaSet nginx-deployment-1989198191 but it’s blocked due to the
maxUnavailable requirement that we mentioned above.

$ kubectl get rs
NAME                          DESIRED   CURRENT   READY     AGE
nginx-deployment-1989198191   5         5         0         9s
nginx-deployment-618515232    8         8         8         1m

Then a new scaling request for the Deployment comes along. The autoscaler increments the Deployment replicas
to 15. The Deployment controller needs to decide where to add these new 5 replicas. If we weren’t using
proportional scaling, all 5 of them would be added in the new ReplicaSet. With proportional scaling, we
spread the additional replicas across all ReplicaSets. Bigger proportions go to the ReplicaSets with the
most replicas and lower proportions go to ReplicaSets with fewer replicas. Any leftovers are added to the
ReplicaSet with the most replicas. ReplicaSets with zero replicas are not scaled up.

In our example above, 3 replicas will be added to the old ReplicaSet and 2 replicas will be added to the
new ReplicaSet. The rollout process should eventually move all replicas to the new ReplicaSet, assuming
the new replicas become healthy.

$ kubectl get deploy
NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment     15        18        7            8           7m
$ kubectl get rs
NAME                          DESIRED   CURRENT   READY     AGE
nginx-deployment-1989198191   7         7         0         7m
nginx-deployment-618515232    11        11        11        7m

Pausing and Resuming a Deployment

You can also pause a Deployment mid-way and then resume it. A use case is to support canary deployment.

Update the Deployment again and then pause the Deployment with kubectl rollout pause:

$ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1; kubectl rollout pause deployment/nginx-deployment
deployment "nginx-deployment" image updated
deployment "nginx-deployment" paused

Note that any current state of the Deployment will continue its function, but new updates to the Deployment will not have an effect as long as the Deployment is paused.

The Deployment was still in progress when we paused it, so the actions of scaling up and down Replica Sets are paused too.

$ kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-1564180365   2         2         2       1h
nginx-deployment-2035384211   2         2         0       1h
nginx-deployment-3066724191   0         0         0       1h

In a separate terminal, watch for rollout status changes and you’ll see the rollout won’t continue:

$ kubectl rollout status deployment/nginx-deployment
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...

To resume the Deployment, simply do kubectl rollout resume:

$ kubectl rollout resume deployment/nginx-deployment
deployment "nginx-deployment" resumed

Then the Deployment will continue and finish the rollout:

$ kubectl rollout status deployment/nginx-deployment
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment spec update to be observed...
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
deployment nginx-deployment successfully rolled out
$ kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-1564180365   3         3         3       1h
nginx-deployment-2035384211   0         0         0       1h
nginx-deployment-3066724191   0         0         0       1h

Note: You cannot rollback a paused Deployment until you resume it.

Deployment status

A Deployment enters various states during its lifecycle. It can be progressing while rolling out a new ReplicaSet,
it can be complete, or it can fail to progress.

Progressing Deployment

Kubernetes marks a Deployment as progressing when one of the following tasks is performed:

  • The Deployment is in the process of creating a new ReplicaSet.
  • The Deployment is scaling up an existing ReplicaSet.
  • The Deployment is scaling down an existing ReplicaSet.
  • New pods become available.

You can monitor the progress for a Deployment by using kubectl rollout status.

Complete Deployment

Kubernetes marks a Deployment as complete when it has the following characteristics:

  • The Deployment has minimum availability. Minimum availability means that the Deployment’s number of available replicas
    equals or exceeds the number required by the Deployment strategy.
  • All of the replicas associated with the Deployment have been updated to the latest version you’ve specified, meaning any
    updates you’ve requested have been completed.
  • No old pods for the Deployment are running.

You can check if a Deployment has completed by using kubectl rollout status. If the rollout completed successfully, kubectl rollout status returns a zero exit code.

$ kubectl rollout status deploy/nginx
Waiting for rollout to finish: 2 of 3 updated replicas are available...
deployment "nginx" successfully rolled out
$ echo $?
0

Failed Deployment

Your Deployment may get stuck trying to deploy its newest ReplicaSet without ever completing. This can occur due to some of the following factors:

  • Insufficient quota
  • Readiness probe failures
  • Image pull errors
  • Insufficient permissions
  • Limit ranges
  • Application runtime misconfiguration

One way you can detect this condition is to specify a deadline parameter in your Deployment spec: (spec.progressDeadlineSeconds). spec.progressDeadlineSeconds denotes the number of seconds the Deployment controller waits before indicating (via the Deployment status) that the Deployment progress has stalled.

The following kubectl command sets the spec with progressDeadlineSeconds to make the controller report lack of progress for a Deployment after 10 minutes:

$ kubectl patch deployment/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}'
"nginx-deployment" patched

Once the deadline has been exceeded, the Deployment controller adds a DeploymentCondition with the following attributes to
the Deployment’s status.conditions:

  • Type=Progressing
  • Status=False
  • Reason=ProgressDeadlineExceeded

See the Kubernetes API conventions for more information on status conditions.

Note that in version 1.5, Kubernetes will take no action on a stalled Deployment other than to report a status condition with
Reason=ProgressDeadlineExceeded.

Note: If you pause a Deployment, Kubernetes does not check progress against your specified deadline. You can safely pause a Deployment in the middle of a rollout and resume without triggering the condition for exceeding the deadline.

You may experience transient errors with your Deployments, either due to a low timeout that you have set or due to any other kind
of error that can be treated as transient. For example, let’s suppose you have insufficient quota. If you describe the Deployment
you will notice the following section:

$ kubectl describe deployment nginx-deployment
<...>
Conditions:
  Type            Status  Reason
  ----            ------  ------
  Available       True    MinimumReplicasAvailable
  Progressing     True    ReplicaSetUpdated
  ReplicaFailure  True    FailedCreate
<...>

If you run kubectl get deployment nginx-deployment -o yaml, the Deployment status might look like this:

status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: 2016-10-04T12:25:39Z
    lastUpdateTime: 2016-10-04T12:25:39Z
    message: Replica set "nginx-deployment-4262182780" is progressing.
    reason: ReplicaSetUpdated
    status: "True"
    type: Progressing
  - lastTransitionTime: 2016-10-04T12:25:42Z
    lastUpdateTime: 2016-10-04T12:25:42Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2016-10-04T12:25:39Z
    lastUpdateTime: 2016-10-04T12:25:39Z
    message: 'Error creating: pods "nginx-deployment-4262182780-" is forbidden: exceeded quota:
      object-counts, requested: pods=1, used: pods=3, limited: pods=2'
    reason: FailedCreate
    status: "True"
    type: ReplicaFailure
  observedGeneration: 3
  replicas: 2
  unavailableReplicas: 2

Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the reason for the Progressing condition:

Conditions:
  Type            Status  Reason
  ----            ------  ------
  Available       True    MinimumReplicasAvailable
  Progressing     False   ProgressDeadlineExceeded
  ReplicaFailure  True    FailedCreate

You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other controllers you may be running,
or by increasing quota in your namespace. If you satisfy the quota conditions and the Deployment controller then completes the Deployment
rollout, you’ll see the Deployment’s status update with a successful condition (Status=True and Reason=NewReplicaSetAvailable).

Conditions:
  Type          Status  Reason
  ----          ------  ------
  Available     True    MinimumReplicasAvailable
  Progressing   True    NewReplicaSetAvailable

Type=Available with Status=True means that your Deployment has minimum availability. Minimum availability is dictated
by the parameters specified in the deployment strategy. Type=Progressing with Status=True means that your Deployment
is either in the middle of a rollout and it is progressing or that it has successfully completed its progress and the minimum
required new replicas are available (see the Reason of the condition for the particulars — in our case
Reason=NewReplicaSetAvailable means that the Deployment is complete).

You can check if a Deployment has failed to progress by using kubectl rollout status. kubectl rollout status returns a non-zero exit code if the Deployment has exceeded the progression deadline.

$ kubectl rollout status deploy/nginx
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
error: deployment "nginx" exceeded its progress deadline
$ echo $?
1

Operating on a failed deployment

All actions that apply to a complete Deployment also apply to a failed Deployment. You can scale it up/down, roll back
to a previous revision, or even pause it if you need to apply multiple tweaks in the Deployment pod template.

Use Cases

Canary Deployment

If you want to roll out releases to a subset of users or servers using the Deployment, you can create multiple Deployments, one for each release,
following the canary pattern described in managing resources.

Writing a Deployment Spec

As with all other Kubernetes configs, a Deployment needs apiVersion, kind, and
metadata fields. For general information about working with config files,
see deploying applications, configuring containers, and using kubectl to manage resources documents.

A Deployment also needs a .spec section.

Pod Template

The .spec.template is the only required field of the .spec.

The .spec.template is a pod template. It has exactly
the same schema as a Pod, except it is nested and does not have an
apiVersion or kind.

In addition to required fields for a Pod, a pod template in a Deployment must specify appropriate
labels (i.e. don’t overlap with other controllers, see selector) and an appropriate restart policy.

Only a .spec.template.spec.restartPolicy equal to Always is allowed, which is the default
if not specified.
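
A minimal sketch of such a pod template, with the labels and the (default) restart policy spelled out:

template:
  metadata:
    labels:
      app: nginx             # must be matched by .spec.selector and must not overlap other controllers
  spec:
    restartPolicy: Always    # the only value allowed in a Deployment pod template (and the default)
    containers:
    - name: nginx
      image: nginx:1.7.9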

Replicas

.spec.replicas is an optional field that specifies the number of desired Pods. It defaults
to 1.

Selector

.spec.selector is an optional field that specifies a label selector for the Pods
targeted by this deployment.

If specified, .spec.selector must match .spec.template.metadata.labels, or it will
be rejected by the API. If .spec.selector is unspecified, .spec.selector.matchLabels will be defaulted to
.spec.template.metadata.labels.
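
For example, a selector that matches the pod template labels might look like this minimal sketch:

spec:
  selector:
    matchLabels:
      app: nginx          # must match ...
  template:
    metadata:
      labels:
        app: nginx        # ... the labels declared on the pod template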

Deployment may kill Pods whose labels match the selector, in the case that their
template is different from .spec.template or if the total number of such Pods
exceeds .spec.replicas. It will bring up new Pods with .spec.template if
the number of Pods is less than the desired number.

Note that you should not create other pods whose labels match this selector, either directly, via another Deployment or via another controller such as Replica Sets or Replication Controllers. Otherwise, the Deployment will think that those pods were created by it. Kubernetes will not stop you from doing this.

If you have multiple controllers that have overlapping selectors, the controllers will fight with each other and won’t behave correctly.

Strategy

.spec.strategy specifies the strategy used to replace old Pods with new ones.
.spec.strategy.type can be "Recreate" or "RollingUpdate". "RollingUpdate" is
the default value.

Recreate Deployment

All existing Pods are killed before new ones are created when
.spec.strategy.type==Recreate.
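
A minimal sketch of selecting this strategy in the spec:

spec:
  strategy:
    type: Recreate   # all old Pods are terminated before any new Pods are created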

Rolling Update Deployment

The Deployment updates Pods in a rolling update fashion
when .spec.strategy.type==RollingUpdate.
You can specify maxUnavailable and maxSurge to control
the rolling update process.

Max Unavailable

.spec.strategy.rollingUpdate.maxUnavailable is an optional field that specifies the
maximum number of Pods that can be unavailable during the update process.
The value can be an absolute number (e.g. 5) or a percentage of desired Pods
(e.g. 10%).
The absolute number is calculated from percentage by rounding up.
This can not be 0 if .spec.strategy.rollingUpdate.maxSurge is 0.
By default, a fixed value of 1 is used.

For example, when this value is set to 30%, the old Replica Set can be scaled down to
70% of desired Pods immediately when the rolling update starts. Once new Pods are
ready, old Replica Set can be scaled down further, followed by scaling up the new Replica Set,
ensuring that the total number of Pods available at all times during the
update is at least 70% of the desired Pods.

Max Surge

.spec.strategy.rollingUpdate.maxSurge is an optional field that specifies the
maximum number of Pods that can be created above the desired number of Pods.
Value can be an absolute number (e.g. 5) or a percentage of desired Pods
(e.g. 10%).
This can not be 0 if MaxUnavailable is 0.
The absolute number is calculated from percentage by rounding up.
By default, a value of 1 is used.

For example, when this value is set to 30%, the new Replica Set can be scaled up immediately when
the rolling update starts, such that the total number of old and new Pods do not exceed
130% of desired Pods. Once old Pods have been killed,
the new Replica Set can be scaled up further, ensuring that the total number of Pods running
at any time during the update is at most 130% of desired Pods.
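
Putting the two fields together, a rolling update tuned to the 30% examples above could be sketched as:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 30%   # at least 70% of the desired Pods stay available during the update
      maxSurge: 30%         # at most 130% of the desired Pods exist at any point during the update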

Progress Deadline Seconds

.spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want
to wait for your Deployment to progress before the system reports back that the Deployment has
failed progressing, surfaced as a condition with Type=Progressing, Status=False,
and Reason=ProgressDeadlineExceeded in the status of the resource. The deployment controller will keep
retrying the Deployment. In the future, once automatic rollback is implemented, the deployment
controller will roll back a Deployment as soon as it observes such a condition.

If specified, this field needs to be greater than .spec.minReadySeconds.
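
As a sketch of that constraint (the values are illustrative), the two fields might be set together like this:

spec:
  minReadySeconds: 10            # a new Pod must stay ready for 10 seconds to count as available
  progressDeadlineSeconds: 600   # must be greater than .spec.minReadySeconds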

Min Ready Seconds

.spec.minReadySeconds is an optional field that specifies the
minimum number of seconds for which a newly created Pod should be ready
without any of its containers crashing, for it to be considered available.
This defaults to 0 (the Pod will be considered available as soon as it is ready).
To learn more about when a Pod is considered ready, see Container Probes.

Rollback To

.spec.rollbackTo is an optional field with the configuration the Deployment is rolling back to. Setting this field will trigger a rollback, and this field will be cleared every time a rollback is done.

Revision

.spec.rollbackTo.revision is an optional field specifying the revision to rollback to. This defaults to 0, meaning rollback to the last revision in history.

Revision History Limit

A deployment’s revision history is stored in the replica sets it controls.

.spec.revisionHistoryLimit is an optional field that specifies the number of old Replica Sets to retain to allow rollback. Its ideal value depends on the frequency and stability of new deployments. All old Replica Sets will be kept by default, consuming resources in etcd and crowding the output of kubectl get rs, if this field is not set. The configuration of each Deployment revision is stored in its Replica Sets; therefore, once an old Replica Set is deleted, you lose the ability to roll back to that revision of the Deployment.

More specifically, setting this field to zero means that all old Replica Sets with 0 replicas will be cleaned up.
In this case, a new Deployment rollout cannot be undone, since its revision history is cleaned up.

Paused

.spec.paused is an optional boolean field for pausing and resuming a Deployment. It defaults to false (a Deployment is not paused).
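
A minimal sketch (illustrative only) of a Deployment spec created in a paused state, so that pod template changes do not trigger rollouts until it is resumed:

spec:
  paused: true   # queue up pod template changes without starting a rollout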

Alternative to Deployments

kubectl rolling update

Kubectl rolling update updates Pods and Replication Controllers in a similar fashion.
But Deployments are recommended, since they are declarative, server side, and have additional features, such as rolling back to any previous revision even after the rolling update is done.

A Deployment provides declarative updates for Pods and ReplicaSets.

You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.

Use Case

The following are typical use cases for Deployments:

  • Create a Deployment to rollout a ReplicaSet. The ReplicaSet creates Pods in the background. Check the status of the rollout to see if it succeeds or not.
  • Declare the new state of the Pods by updating the PodTemplateSpec of the Deployment. A new ReplicaSet is created and the Deployment manages moving the Pods from the old ReplicaSet to the new one at a controlled rate. Each new ReplicaSet updates the revision of the Deployment.
  • Rollback to an earlier Deployment revision if the current state of the Deployment is not stable. Each rollback updates the revision of the Deployment.
  • Scale up the Deployment to facilitate more load.
  • Pause the rollout of a Deployment to apply multiple fixes to its PodTemplateSpec and then resume it to start a new rollout.
  • Use the status of the Deployment as an indicator that a rollout has stuck.
  • Clean up older ReplicaSets that you don’t need anymore.

Creating a Deployment

The following is an example of a Deployment. It creates a ReplicaSet to bring up three nginx Pods:

controllers/nginx-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

In this example:

  • A Deployment named nginx-deployment is created, indicated by the .metadata.name field.

  • The Deployment creates three replicated Pods, indicated by the .spec.replicas field.

  • The .spec.selector field defines how the Deployment finds which Pods to manage. In this case, you select a label that is defined in the Pod template (app: nginx). However, more sophisticated selection rules are possible, as long as the Pod template itself satisfies the rule.

  • The template field contains the following sub-fields:

    • The Pods are labeled app: nginx using the .metadata.labels field.
    • The Pod template’s specification, or .template.spec field, indicates that the Pods run one container, nginx, which runs the nginx Docker Hub image at version 1.14.2.
    • Create one container and name it nginx using the .spec.template.spec.containers[0].name field.

Before you begin, make sure your Kubernetes cluster is up and running. Follow the steps given below to create the above Deployment:

  1. Create the Deployment by running the following command:

    kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml
    
  2. Run kubectl get deployments to check if the Deployment was created.

    If the Deployment is still being created, the output is similar to the following:

    NAME               READY   UP-TO-DATE   AVAILABLE   AGE
    nginx-deployment   0/3     0            0           1s
    

    When you inspect the Deployments in your cluster, the following fields are displayed:

    • NAME lists the names of the Deployments in the namespace.
    • READY displays how many replicas of the application are available to your users. It follows the pattern ready/desired.
    • UP-TO-DATE displays the number of replicas that have been updated to achieve the desired state.
    • AVAILABLE displays how many replicas of the application are available to your users.
    • AGE displays the amount of time that the application has been running.

    Notice how the number of desired replicas is 3 according to .spec.replicas field.

  3. To see the Deployment rollout status, run kubectl rollout status deployment/nginx-deployment.

    The output is similar to:

    Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
    deployment "nginx-deployment" successfully rolled out
    
  4. Run the kubectl get deployments again a few seconds later. The output is similar to this:

    NAME               READY   UP-TO-DATE   AVAILABLE   AGE
    nginx-deployment   3/3     3            3           18s
    

    Notice that the Deployment has created all three replicas, and all replicas are up-to-date (they contain the latest Pod template) and available.

  5. To see the ReplicaSet (rs) created by the Deployment, run kubectl get rs. The output is similar to this:

    NAME                          DESIRED   CURRENT   READY   AGE
    nginx-deployment-75675f5897   3         3         3       18s
    

    ReplicaSet output shows the following fields:

    • NAME lists the names of the ReplicaSets in the namespace.
    • DESIRED displays the desired number of replicas of the application, which you define when you create the Deployment. This is the desired state.
    • CURRENT displays how many replicas are currently running.
    • READY displays how many replicas of the application are available to your users.
    • AGE displays the amount of time that the application has been running.

    Notice that the name of the ReplicaSet is always formatted as [DEPLOYMENT-NAME]-[RANDOM-STRING]. The random string is randomly generated and uses the pod-template-hash as a seed.

  6. To see the labels automatically generated for each Pod, run kubectl get pods --show-labels. The output is similar to:

    NAME                                READY     STATUS    RESTARTS   AGE       LABELS
    nginx-deployment-75675f5897-7ci7o   1/1       Running   0          18s       app=nginx,pod-template-hash=3123191453
    nginx-deployment-75675f5897-kzszj   1/1       Running   0          18s       app=nginx,pod-template-hash=3123191453
    nginx-deployment-75675f5897-qqcnn   1/1       Running   0          18s       app=nginx,pod-template-hash=3123191453
    

    The created ReplicaSet ensures that there are three nginx Pods.

Pod-template-hash label

The pod-template-hash label is added by the Deployment controller to every ReplicaSet that a Deployment creates or adopts.

This label ensures that child ReplicaSets of a Deployment do not overlap. It is generated by hashing the PodTemplate of the ReplicaSet and using the resulting hash as the label value that is added to the ReplicaSet selector, Pod template labels, and in any existing Pods that the ReplicaSet might have.
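
As a rough sketch (the hash value below is only illustrative, not one your cluster will generate), the ReplicaSet created for the nginx Deployment above ends up carrying the label in all three places:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-deployment-75675f5897
  labels:
    app: nginx
    pod-template-hash: 75675f5897     # added by the Deployment controller
spec:
  selector:
    matchLabels:
      app: nginx
      pod-template-hash: 75675f5897   # keeps this ReplicaSet's Pods distinct from other revisions
  template:
    metadata:
      labels:
        app: nginx
        pod-template-hash: 75675f5897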

Updating a Deployment

Follow the steps given below to update your Deployment:

  1. Let’s update the nginx Pods to use the nginx:1.16.1 image instead of the nginx:1.14.2 image.

    kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1
    

    or use the following command:

    kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
    

    The output is similar to:

    deployment.apps/nginx-deployment image updated
    

    Alternatively, you can edit the Deployment and change .spec.template.spec.containers[0].image from nginx:1.14.2 to nginx:1.16.1:

    kubectl edit deployment/nginx-deployment
    

    The output is similar to:

    deployment.apps/nginx-deployment edited
    
  2. To see the rollout status, run:

    kubectl rollout status deployment/nginx-deployment
    

    The output is similar to this:

    Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
    

    or

    deployment "nginx-deployment" successfully rolled out
    

Get more details on your updated Deployment:

  • After the rollout succeeds, you can view the Deployment by running kubectl get deployments. The output is similar to this:

    NAME               READY   UP-TO-DATE   AVAILABLE   AGE
    nginx-deployment   3/3     3            3           36s
    
  • Run kubectl get rs to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.

    The output is similar to this:

    NAME                          DESIRED   CURRENT   READY   AGE
    nginx-deployment-1564180365   3         3         3       6s
    nginx-deployment-2035384211   0         0         0       36s
    
  • Running get pods should now show only the new Pods:

    The output is similar to this:

    NAME                                READY     STATUS    RESTARTS   AGE
    nginx-deployment-1564180365-khku8   1/1       Running   0          14s
    nginx-deployment-1564180365-nacti   1/1       Running   0          14s
    nginx-deployment-1564180365-z9gth   1/1       Running   0          14s
    

    Next time you want to update these Pods, you only need to update the Deployment’s Pod template again.

    Deployment ensures that only a certain number of Pods are down while they are being updated. By default, it ensures that at least 75% of the desired number of Pods are up (25% max unavailable).

    Deployment also ensures that only a certain number of Pods are created above the desired number of Pods. By default, it ensures that at most 125% of the desired number of Pods are up (25% max surge).

    For example, if you look at the above Deployment closely, you will see that it first created a new Pod, then deleted some old Pods, and created new ones. It does not kill old Pods until a sufficient number of new Pods have come up, and does not create new Pods until a sufficient number of old Pods have been killed. It makes sure that at least 2 Pods are available and that at most 4 Pods in total are available.

  • Get details of your Deployment:

    kubectl describe deployments
    

    The output is similar to this:

    Name:                   nginx-deployment
    Namespace:              default
    CreationTimestamp:      Thu, 30 Nov 2017 10:56:25 +0000
    Labels:                 app=nginx
    Annotations:            deployment.kubernetes.io/revision=2
    Selector:               app=nginx
    Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
    StrategyType:           RollingUpdate
    MinReadySeconds:        0
    RollingUpdateStrategy:  25% max unavailable, 25% max surge
    Pod Template:
      Labels:  app=nginx
       Containers:
        nginx:
          Image:        nginx:1.16.1
          Port:         80/TCP
          Environment:  <none>
          Mounts:       <none>
        Volumes:        <none>
      Conditions:
        Type           Status  Reason
        ----           ------  ------
        Available      True    MinimumReplicasAvailable
        Progressing    True    NewReplicaSetAvailable
      OldReplicaSets:  <none>
      NewReplicaSet:   nginx-deployment-1564180365 (3/3 replicas created)
      Events:
        Type    Reason             Age   From                   Message
        ----    ------             ----  ----                   -------
        Normal  ScalingReplicaSet  2m    deployment-controller  Scaled up replica set nginx-deployment-2035384211 to 3
        Normal  ScalingReplicaSet  24s   deployment-controller  Scaled up replica set nginx-deployment-1564180365 to 1
        Normal  ScalingReplicaSet  22s   deployment-controller  Scaled down replica set nginx-deployment-2035384211 to 2
        Normal  ScalingReplicaSet  22s   deployment-controller  Scaled up replica set nginx-deployment-1564180365 to 2
        Normal  ScalingReplicaSet  19s   deployment-controller  Scaled down replica set nginx-deployment-2035384211 to 1
        Normal  ScalingReplicaSet  19s   deployment-controller  Scaled up replica set nginx-deployment-1564180365 to 3
        Normal  ScalingReplicaSet  14s   deployment-controller  Scaled down replica set nginx-deployment-2035384211 to 0
    

    Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211) and scaled it up to 3 replicas directly. When you updated the Deployment, it created a new ReplicaSet (nginx-deployment-1564180365) and scaled it up to 1 and then scaled down the old ReplicaSet to 2, so that at least 2 Pods were available and at most 4 Pods were created at all times. It then continued scaling up and down the new and the old ReplicaSet, with the same rolling update strategy. Finally, you’ll have 3 available replicas in the new ReplicaSet, and the old ReplicaSet is scaled down to 0.

Rollover (aka multiple updates in-flight)

Each time a new Deployment is observed by the Deployment controller, a ReplicaSet is created to bring up the desired Pods. If the Deployment is updated, the existing ReplicaSets that control Pods whose labels match .spec.selector but whose template does not match .spec.template are scaled down. Eventually, the new ReplicaSet is scaled to .spec.replicas and all old ReplicaSets are scaled to 0.

If you update a Deployment while an existing rollout is in progress, the Deployment creates a new ReplicaSet as per the update and starts scaling that up, and rolls over the ReplicaSet that it was scaling up previously — it will add it to its list of old ReplicaSets and start scaling it down.

For example, suppose you create a Deployment to create 5 replicas of nginx:1.14.2, but then update the Deployment to create 5 replicas of nginx:1.16.1, when only 3 replicas of nginx:1.14.2 had been created. In that case, the Deployment immediately starts killing the 3 nginx:1.14.2 Pods that it had created, and starts creating nginx:1.16.1 Pods. It does not wait for the 5 replicas of nginx:1.14.2 to be created before changing course.

Label selector updates

It is generally discouraged to make label selector updates and it is suggested to plan your selectors up front. In any case, if you need to perform a label selector update, exercise great caution and make sure you have grasped all of the implications.

  • Selector additions require the Pod template labels in the Deployment spec to be updated with the new label too, otherwise a validation error is returned. This change is a non-overlapping one, meaning that the new selector does not select ReplicaSets and Pods created with the old selector, resulting in orphaning all old ReplicaSets and creating a new ReplicaSet.
  • Selector updates change the existing value in a selector key and result in the same behavior as additions.
  • Selector removals remove an existing key from the Deployment selector and do not require any changes in the Pod template labels. Existing ReplicaSets are not orphaned, and a new ReplicaSet is not created, but note that the removed label still exists in any existing Pods and ReplicaSets.

Rolling Back a Deployment

Sometimes, you may want to roll back a Deployment; for example, when the Deployment is not stable, such as crash looping. By default, all of the Deployment’s rollout history is kept in the system so that you can roll back anytime you want (you can change that by modifying the revision history limit).

  • Suppose that you made a typo while updating the Deployment, by putting the image name as nginx:1.161 instead of nginx:1.16.1:

    kubectl set image deployment/nginx-deployment nginx=nginx:1.161 
    

    The output is similar to this:

    deployment.apps/nginx-deployment image updated
    
  • The rollout gets stuck. You can verify it by checking the rollout status:

    kubectl rollout status deployment/nginx-deployment
    

    The output is similar to this:

    Waiting for rollout to finish: 1 out of 3 new replicas have been updated...
    
  • Press Ctrl-C to stop the above rollout status watch. For more information on stuck rollouts, read more here.

  • Run kubectl get rs. You see that the number of old replicas (from nginx-deployment-1564180365 and nginx-deployment-2035384211) adds up to 3, and the number of new replicas (from nginx-deployment-3066724191) is 1.

    The output is similar to this:

    NAME                          DESIRED   CURRENT   READY   AGE
    nginx-deployment-1564180365   3         3         3       25s
    nginx-deployment-2035384211   0         0         0       36s
    nginx-deployment-3066724191   1         1         0       6s
    
  • Looking at the Pods created (kubectl get pods), you see that 1 Pod created by the new ReplicaSet is stuck in an image pull loop.

    The output is similar to this:

    NAME                                READY     STATUS             RESTARTS   AGE
    nginx-deployment-1564180365-70iae   1/1       Running            0          25s
    nginx-deployment-1564180365-jbqqo   1/1       Running            0          25s
    nginx-deployment-1564180365-hysrc   1/1       Running            0          25s
    nginx-deployment-3066724191-08mng   0/1       ImagePullBackOff   0          6s
    
  • Get the description of the Deployment:

    kubectl describe deployment
    

    The output is similar to this:

    Name:           nginx-deployment
    Namespace:      default
    CreationTimestamp:  Tue, 15 Mar 2016 14:48:04 -0700
    Labels:         app=nginx
    Selector:       app=nginx
    Replicas:       3 desired | 1 updated | 4 total | 3 available | 1 unavailable
    StrategyType:       RollingUpdate
    MinReadySeconds:    0
    RollingUpdateStrategy:  25% max unavailable, 25% max surge
    Pod Template:
      Labels:  app=nginx
      Containers:
       nginx:
        Image:        nginx:1.161
        Port:         80/TCP
        Host Port:    0/TCP
        Environment:  <none>
        Mounts:       <none>
      Volumes:        <none>
    Conditions:
      Type           Status  Reason
      ----           ------  ------
      Available      True    MinimumReplicasAvailable
      Progressing    True    ReplicaSetUpdated
    OldReplicaSets:     nginx-deployment-1564180365 (3/3 replicas created)
    NewReplicaSet:      nginx-deployment-3066724191 (1/1 replicas created)
    Events:
      FirstSeen LastSeen    Count   From                    SubObjectPath   Type        Reason              Message
      --------- --------    -----   ----                    -------------   --------    ------              -------
      1m        1m          1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set nginx-deployment-2035384211 to 3
      22s       22s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set nginx-deployment-1564180365 to 1
      22s       22s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled down replica set nginx-deployment-2035384211 to 2
      22s       22s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set nginx-deployment-1564180365 to 2
      21s       21s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled down replica set nginx-deployment-2035384211 to 1
      21s       21s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set nginx-deployment-1564180365 to 3
      13s       13s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled down replica set nginx-deployment-2035384211 to 0
      13s       13s         1       {deployment-controller }                Normal      ScalingReplicaSet   Scaled up replica set nginx-deployment-3066724191 to 1
    

    To fix this, you need to rollback to a previous revision of Deployment that is stable.

Checking Rollout History of a Deployment

Follow the steps given below to check the rollout history:

  1. First, check the revisions of this Deployment:

    kubectl rollout history deployment/nginx-deployment
    

    The output is similar to this:

    deployments "nginx-deployment"
    REVISION    CHANGE-CAUSE
    1           kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml
    2           kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
    3           kubectl set image deployment/nginx-deployment nginx=nginx:1.161
    

    CHANGE-CAUSE is copied from the Deployment annotation kubernetes.io/change-cause to its revisions upon creation. You can specify the CHANGE-CAUSE message by:

    • Annotating the Deployment with kubectl annotate deployment/nginx-deployment kubernetes.io/change-cause="image updated to 1.16.1"
    • Manually editing the manifest of the resource.
  2. To see the details of each revision, run:

    kubectl rollout history deployment/nginx-deployment --revision=2
    

    The output is similar to this:

    deployments "nginx-deployment" revision 2
      Labels:       app=nginx
              pod-template-hash=1159050644
      Annotations:  kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
      Containers:
       nginx:
        Image:      nginx:1.16.1
        Port:       80/TCP
         QoS Tier:
            cpu:      BestEffort
            memory:   BestEffort
        Environment Variables:      <none>
      No volumes.
    

Rolling Back to a Previous Revision

Follow the steps given below to roll back the Deployment from the current version to the previous version, which is version 2.

  1. Now you’ve decided to undo the current rollout and rollback to the previous revision:

    kubectl rollout undo deployment/nginx-deployment
    

    The output is similar to this:

    deployment.apps/nginx-deployment rolled back
    

    Alternatively, you can roll back to a specific revision by specifying it with --to-revision:

    kubectl rollout undo deployment/nginx-deployment --to-revision=2
    

    The output is similar to this:

    deployment.apps/nginx-deployment rolled back
    

    For more details about rollout related commands, read kubectl rollout.

    The Deployment is now rolled back to a previous stable revision. As you can see, a DeploymentRollback event for rolling back to revision 2 is generated from the Deployment controller.

  2. To check whether the rollback was successful and the Deployment is running as expected, run:

    kubectl get deployment nginx-deployment
    

    The output is similar to this:

    NAME               READY   UP-TO-DATE   AVAILABLE   AGE
    nginx-deployment   3/3     3            3           30m
    
  3. Get the description of the Deployment:

    kubectl describe deployment nginx-deployment
    

    The output is similar to this:

    Name:                   nginx-deployment
    Namespace:              default
    CreationTimestamp:      Sun, 02 Sep 2018 18:17:55 -0500
    Labels:                 app=nginx
    Annotations:            deployment.kubernetes.io/revision=4
                            kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
    Selector:               app=nginx
    Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
    StrategyType:           RollingUpdate
    MinReadySeconds:        0
    RollingUpdateStrategy:  25% max unavailable, 25% max surge
    Pod Template:
      Labels:  app=nginx
      Containers:
       nginx:
        Image:        nginx:1.16.1
        Port:         80/TCP
        Host Port:    0/TCP
        Environment:  <none>
        Mounts:       <none>
      Volumes:        <none>
    Conditions:
      Type           Status  Reason
      ----           ------  ------
      Available      True    MinimumReplicasAvailable
      Progressing    True    NewReplicaSetAvailable
    OldReplicaSets:  <none>
    NewReplicaSet:   nginx-deployment-c4747d96c (3/3 replicas created)
    Events:
      Type    Reason              Age   From                   Message
      ----    ------              ----  ----                   -------
      Normal  ScalingReplicaSet   12m   deployment-controller  Scaled up replica set nginx-deployment-75675f5897 to 3
      Normal  ScalingReplicaSet   11m   deployment-controller  Scaled up replica set nginx-deployment-c4747d96c to 1
      Normal  ScalingReplicaSet   11m   deployment-controller  Scaled down replica set nginx-deployment-75675f5897 to 2
      Normal  ScalingReplicaSet   11m   deployment-controller  Scaled up replica set nginx-deployment-c4747d96c to 2
      Normal  ScalingReplicaSet   11m   deployment-controller  Scaled down replica set nginx-deployment-75675f5897 to 1
      Normal  ScalingReplicaSet   11m   deployment-controller  Scaled up replica set nginx-deployment-c4747d96c to 3
      Normal  ScalingReplicaSet   11m   deployment-controller  Scaled down replica set nginx-deployment-75675f5897 to 0
      Normal  ScalingReplicaSet   11m   deployment-controller  Scaled up replica set nginx-deployment-595696685f to 1
      Normal  DeploymentRollback  15s   deployment-controller  Rolled back deployment "nginx-deployment" to revision 2
      Normal  ScalingReplicaSet   15s   deployment-controller  Scaled down replica set nginx-deployment-595696685f to 0
    

Scaling a Deployment

You can scale a Deployment by using the following command:

kubectl scale deployment/nginx-deployment --replicas=10

The output is similar to this:

deployment.apps/nginx-deployment scaled

Assuming horizontal Pod autoscaling is enabled in your cluster, you can set up an autoscaler for your Deployment and choose the minimum and maximum number of Pods you want to run based on the CPU utilization of your existing Pods.

kubectl autoscale deployment/nginx-deployment --min=10 --max=15 --cpu-percent=80

The output is similar to this:

deployment.apps/nginx-deployment scaled

Proportional scaling

RollingUpdate Deployments support running multiple versions of an application at the same time. When you or an autoscaler scales a RollingUpdate Deployment that is in the middle of a rollout (either in progress or paused), the Deployment controller balances the additional replicas in the existing active ReplicaSets (ReplicaSets with Pods) in order to mitigate risk. This is called proportional scaling.

For example, you are running a Deployment with 10 replicas, maxSurge=3, and maxUnavailable=2.

  • Ensure that the 10 replicas in your Deployment are running.

    The output is similar to this:

    NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    nginx-deployment     10        10        10           10          50s
    
  • You update to a new image which happens to be unresolvable from inside the cluster.

    kubectl set image deployment/nginx-deployment nginx=nginx:sometag
    

    The output is similar to this:

    deployment.apps/nginx-deployment image updated
    
  • The image update starts a new rollout with ReplicaSet nginx-deployment-1989198191, but it’s blocked due to the maxSurge and maxUnavailable requirements that you mentioned above: the old ReplicaSet cannot drop below 8 available Pods (10 minus maxUnavailable=2), and the total Pod count cannot exceed 13 (10 plus maxSurge=3), which caps the new ReplicaSet at 5 Pods. Check out the ReplicaSets:

    kubectl get rs
    

    The output is similar to this:
    
    NAME                          DESIRED   CURRENT   READY     AGE
    nginx-deployment-1989198191   5         5         0         9s
    nginx-deployment-618515232    8         8         8         1m
    
  • Then a new scaling request for the Deployment comes along. The autoscaler increments the Deployment replicas to 15. The Deployment controller needs to decide where to add these new 5 replicas. If you weren’t using proportional scaling, all 5 of them would be added to the new ReplicaSet. With proportional scaling, you spread the additional replicas across all ReplicaSets. Bigger proportions go to the ReplicaSets with the most replicas and lower proportions go to ReplicaSets with fewer replicas. Any leftovers are added to the ReplicaSet with the most replicas. ReplicaSets with zero replicas are not scaled up.

In our example above, 3 replicas are added to the old ReplicaSet and 2 replicas are added to the new ReplicaSet (the old ReplicaSet holds 8 of the 13 existing Pods, so it receives the larger share of the 5 additional replicas). The rollout process should eventually move all replicas to the new ReplicaSet, assuming the new replicas become healthy. To confirm this, run:

kubectl get deploy

The output is similar to this:

NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment     15        18        7            8           7m

The rollout status confirms how the replicas were added to each ReplicaSet:

kubectl get rs

The output is similar to this:

NAME                          DESIRED   CURRENT   READY     AGE
nginx-deployment-1989198191   7         7         0         7m
nginx-deployment-618515232    11        11        11        7m

Pausing and Resuming a rollout of a Deployment

When you update a Deployment, or plan to, you can pause rollouts for that Deployment before you trigger one or more updates. When you’re ready to apply those changes, you resume rollouts for the Deployment. This approach allows you to apply multiple fixes in between pausing and resuming without triggering unnecessary rollouts.

  • For example, with a Deployment that was created:

    Get the Deployment details:

    kubectl get deploy
    

    The output is similar to this:

    NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    nginx     3         3         3            3           1m
    

    Get the rollout status:

    kubectl get rs
    

    The output is similar to this:

    NAME               DESIRED   CURRENT   READY     AGE
    nginx-2142116321   3         3         3         1m
    
  • Pause by running the following command:

    kubectl rollout pause deployment/nginx-deployment
    

    The output is similar to this:

    deployment.apps/nginx-deployment paused
    
  • Then update the image of the Deployment:

    kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
    

    The output is similar to this:

    deployment.apps/nginx-deployment image updated
    
  • Notice that no new rollout started:

    kubectl rollout history deployment/nginx-deployment
    

    The output is similar to this:

    deployments "nginx"
    REVISION  CHANGE-CAUSE
    1   <none>
    
  • Get the rollout status to verify that the existing ReplicaSet has not changed:

    kubectl get rs
    

    The output is similar to this:

    NAME               DESIRED   CURRENT   READY     AGE
    nginx-2142116321   3         3         3         2m
    
  • You can make as many updates as you wish, for example, update the resources that will be used:

    kubectl set resources deployment/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi
    

    The output is similar to this:

    deployment.apps/nginx-deployment resource requirements updated
    

    The initial state of the Deployment prior to pausing its rollout will continue its function, but new updates to the Deployment will not have any effect as long as the Deployment rollout is paused.

  • Eventually, resume the Deployment rollout and observe a new ReplicaSet coming up with all the new updates:

    kubectl rollout resume deployment/nginx-deployment
    

    The output is similar to this:

    deployment.apps/nginx-deployment resumed
    
  • Watch the status of the rollout until it’s done:

    kubectl get rs -w
    

    The output is similar to this:

    NAME               DESIRED   CURRENT   READY     AGE
    nginx-2142116321   2         2         2         2m
    nginx-3926361531   2         2         0         6s
    nginx-3926361531   2         2         1         18s
    nginx-2142116321   1         2         2         2m
    nginx-2142116321   1         2         2         2m
    nginx-3926361531   3         2         1         18s
    nginx-3926361531   3         2         1         18s
    nginx-2142116321   1         1         1         2m
    nginx-3926361531   3         3         1         18s
    nginx-3926361531   3         3         2         19s
    nginx-2142116321   0         1         1         2m
    nginx-2142116321   0         1         1         2m
    nginx-2142116321   0         0         0         2m
    nginx-3926361531   3         3         3         20s
    
  • Get the status of the latest rollout:

    kubectl get rs
    

    The output is similar to this:

    NAME               DESIRED   CURRENT   READY     AGE
    nginx-2142116321   0         0         0         2m
    nginx-3926361531   3         3         3         28s
    

Deployment status

A Deployment enters various states during its lifecycle. It can be progressing while rolling out a new ReplicaSet, it can be complete, or it can fail to progress.

Progressing Deployment

Kubernetes marks a Deployment as progressing when one of the following tasks is performed:

  • The Deployment creates a new ReplicaSet.
  • The Deployment is scaling up its newest ReplicaSet.
  • The Deployment is scaling down its older ReplicaSet(s).
  • New Pods become ready or available (ready for at least MinReadySeconds).

You can monitor the progress for a Deployment by using kubectl rollout status.

Complete Deployment

Kubernetes marks a Deployment as complete when it has the following characteristics:

  • All of the replicas associated with the Deployment have been updated to the latest version you’ve specified, meaning any updates you’ve requested have been completed.
  • All of the replicas associated with the Deployment are available.
  • No old replicas for the Deployment are running.

You can check if a Deployment has completed by using kubectl rollout status. If the rollout completed successfully, kubectl rollout status returns a zero exit code.

kubectl rollout status deployment/nginx-deployment

The output is similar to this:

Waiting for rollout to finish: 2 of 3 updated replicas are available...
deployment "nginx-deployment" successfully rolled out

and the exit status from kubectl rollout is 0 (success):

0

Failed Deployment

Your Deployment may get stuck trying to deploy its newest ReplicaSet without ever completing. This can occur due to some of the following factors:

  • Insufficient quota
  • Readiness probe failures
  • Image pull errors
  • Insufficient permissions
  • Limit ranges
  • Application runtime misconfiguration

One way you can detect this condition is to specify a deadline parameter in your Deployment spec: (.spec.progressDeadlineSeconds). .spec.progressDeadlineSeconds denotes the number of seconds the Deployment controller waits before indicating (in the Deployment status) that the Deployment progress has stalled.

The following kubectl command sets the spec with progressDeadlineSeconds to make the controller report lack of progress for a Deployment after 10 minutes:

kubectl patch deployment/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}'

The output is similar to this:

deployment.apps/nginx-deployment patched

Once the deadline has been exceeded, the Deployment controller adds a DeploymentCondition with the following attributes to the Deployment’s .status.conditions:

  • Type=Progressing
  • Status=False
  • Reason=ProgressDeadlineExceeded

See the Kubernetes API conventions for more information on status conditions.

You may experience transient errors with your Deployments, either due to a low timeout that you have set or due to any other kind of error that can be treated as transient. For example, let’s suppose you have insufficient quota. If you describe the Deployment you will notice the following section:

kubectl describe deployment nginx-deployment

The output is similar to this:

<...>
Conditions:
  Type            Status  Reason
  ----            ------  ------
  Available       True    MinimumReplicasAvailable
  Progressing     True    ReplicaSetUpdated
  ReplicaFailure  True    FailedCreate
<...>

If you run kubectl get deployment nginx-deployment -o yaml, the Deployment status is similar to this:

status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: 2016-10-04T12:25:39Z
    lastUpdateTime: 2016-10-04T12:25:39Z
    message: Replica set "nginx-deployment-4262182780" is progressing.
    reason: ReplicaSetUpdated
    status: "True"
    type: Progressing
  - lastTransitionTime: 2016-10-04T12:25:42Z
    lastUpdateTime: 2016-10-04T12:25:42Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2016-10-04T12:25:39Z
    lastUpdateTime: 2016-10-04T12:25:39Z
    message: 'Error creating: pods "nginx-deployment-4262182780-" is forbidden: exceeded quota:
      object-counts, requested: pods=1, used: pods=3, limited: pods=2'
    reason: FailedCreate
    status: "True"
    type: ReplicaFailure
  observedGeneration: 3
  replicas: 2
  unavailableReplicas: 2

Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the reason for the Progressing condition:

Conditions:
  Type            Status  Reason
  ----            ------  ------
  Available       True    MinimumReplicasAvailable
  Progressing     False   ProgressDeadlineExceeded
  ReplicaFailure  True    FailedCreate

You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other controllers you may be running, or by increasing quota in your namespace. If you satisfy the quota conditions and the Deployment controller then completes the Deployment rollout, you’ll see the Deployment’s status update with a successful condition (Status=True and Reason=NewReplicaSetAvailable).

Conditions:
  Type          Status  Reason
  ----          ------  ------
  Available     True    MinimumReplicasAvailable
  Progressing   True    NewReplicaSetAvailable

Type=Available with Status=True means that your Deployment has minimum availability. Minimum availability is dictated by the parameters specified in the deployment strategy. Type=Progressing with Status=True means that your Deployment is either in the middle of a rollout and it is progressing or that it has successfully completed its progress and the minimum required new replicas are available (see the Reason of the condition for the particulars — in our case Reason=NewReplicaSetAvailable means that the Deployment is complete).

You can check if a Deployment has failed to progress by using kubectl rollout status. kubectl rollout status returns a non-zero exit code if the Deployment has exceeded the progression deadline.

kubectl rollout status deployment/nginx-deployment

The output is similar to this:

Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
error: deployment "nginx" exceeded its progress deadline

and the exit status from kubectl rollout is 1 (indicating an error):

1

Operating on a failed deployment

All actions that apply to a complete Deployment also apply to a failed Deployment. You can scale it up/down, roll back to a previous revision, or even pause it if you need to apply multiple tweaks in the Deployment Pod template.

Clean up Policy

You can set .spec.revisionHistoryLimit field in a Deployment to specify how many old ReplicaSets for this Deployment you want to retain. The rest will be garbage-collected in the background. By default, it is 10.

Canary Deployment

If you want to roll out releases to a subset of users or servers using the Deployment, you can create multiple Deployments, one for each release, following the canary pattern described in managing resources.
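
As a hedged sketch of that pattern (the Deployment names, the track label, and the canary image tag here are assumptions for illustration, not values from this document), two Deployments share the app: nginx label but differ by a track label, and a Service that selects only app: nginx sends traffic to both:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-stable            # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      track: stable
  template:
    metadata:
      labels:
        app: nginx
        track: stable
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-canary            # hypothetical name
spec:
  replicas: 1                   # small share of traffic goes to the canary
  selector:
    matchLabels:
      app: nginx
      track: canary
  template:
    metadata:
      labels:
        app: nginx
        track: canary
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.0     # hypothetical release under test
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx                  # deliberately omits track, so it spans both Deployments
  ports:
  - port: 80
    targetPort: 80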

Writing a Deployment Spec

As with all other Kubernetes configs, a Deployment needs .apiVersion, .kind, and .metadata fields. For general information about working with config files, see deploying applications, configuring containers, and using kubectl to manage resources documents. The name of a Deployment object must be a valid DNS subdomain name.

A Deployment also needs a .spec section.

Pod Template

The .spec.template and .spec.selector are the only required fields of the .spec.

The .spec.template is a Pod template. It has exactly the same schema as a Pod, except it is nested and does not have an apiVersion or kind.

In addition to required fields for a Pod, a Pod template in a Deployment must specify appropriate labels and an appropriate restart policy. For labels, make sure not to overlap with other controllers. See selector.

Only a .spec.template.spec.restartPolicy equal to Always is allowed, which is the default if not specified.
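
A minimal sketch of the template portion of a Deployment spec, showing the labels and the only permitted restart policy:

spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      restartPolicy: Always   # optional to specify; Always is both the default and the only allowed value
      containers:
      - name: nginx
        image: nginx:1.14.2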

Replicas

.spec.replicas is an optional field that specifies the number of desired Pods. It defaults to 1.

Should you manually scale a Deployment, for example via kubectl scale deployment deployment --replicas=X, and then you update that Deployment based on a manifest (for example: by running kubectl apply -f deployment.yaml), then applying that manifest overwrites the manual scaling that you previously did.

If a HorizontalPodAutoscaler (or any similar API for horizontal scaling) is managing scaling for a Deployment, don’t set .spec.replicas.

Instead, allow the Kubernetes control plane to manage the .spec.replicas field automatically.
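
As a sketch, the declarative equivalent of the kubectl autoscale command used earlier would look roughly like this (assuming the autoscaling/v2 API is available in your cluster); note that the Deployment manifest it targets should leave .spec.replicas unset:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-deployment
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 10
  maxReplicas: 15
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80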

Selector

.spec.selector is a required field that specifies a label selector for the Pods targeted by this Deployment.

.spec.selector must match .spec.template.metadata.labels, or it will be rejected by the API.
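
For example, in a sketch of a valid pairing, every label required by the selector also appears in the Pod template:

spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx   # must include every key/value that the selector requires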

In API version apps/v1, .spec.selector and .metadata.labels do not default to .spec.template.metadata.labels if not set. So they must be set explicitly. Also note that .spec.selector is immutable after creation of the Deployment in apps/v1.

A Deployment may terminate Pods whose labels match the selector if their template is different from .spec.template or if the total number of such Pods exceeds .spec.replicas. It brings up new Pods with .spec.template if the number of Pods is less than the desired number.

If you have multiple controllers that have overlapping selectors, the controllers will fight with each other and won’t behave correctly.

Strategy

.spec.strategy specifies the strategy used to replace old Pods by new ones. .spec.strategy.type can be "Recreate" or "RollingUpdate". "RollingUpdate" is the default value.

Recreate Deployment

All existing Pods are killed before new ones are created when .spec.strategy.type==Recreate.
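
A sketch of selecting this strategy in the manifest:

spec:
  strategy:
    type: Recreate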

Rolling Update Deployment

The Deployment updates Pods in a rolling update fashion when .spec.strategy.type==RollingUpdate. You can specify maxUnavailable and maxSurge to control the rolling update process.

Max Unavailable

.spec.strategy.rollingUpdate.maxUnavailable is an optional field that specifies the maximum number of Pods that can be unavailable during the update process. The value can be an absolute number (for example, 5) or a percentage of desired Pods (for example, 10%). The absolute number is calculated from percentage by rounding down. The value cannot be 0 if .spec.strategy.rollingUpdate.maxSurge is 0. The default value is 25%.

For example, when this value is set to 30%, the old ReplicaSet can be scaled down to 70% of desired Pods immediately when the rolling update starts. Once new Pods are ready, old ReplicaSet can be scaled down further, followed by scaling up the new ReplicaSet, ensuring that the total number of Pods available at all times during the update is at least 70% of the desired Pods.

Max Surge

.spec.strategy.rollingUpdate.maxSurge is an optional field that specifies the maximum number of Pods that can be created over the desired number of Pods. The value can be an absolute number (for example, 5) or a percentage of desired Pods (for example, 10%). The value cannot be 0 if MaxUnavailable is 0. The absolute number is calculated from the percentage by rounding up. The default value is 25%.

For example, when this value is set to 30%, the new ReplicaSet can be scaled up immediately when the rolling update starts, such that the total number of old and new Pods does not exceed 130% of desired Pods. Once old Pods have been killed, the new ReplicaSet can be scaled up further, ensuring that the total number of Pods running at any time during the update is at most 130% of desired Pods.
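
For reference, a sketch of how these two fields sit under the strategy, using the default percentages (either field may also take an absolute number of Pods):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%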

Progress Deadline Seconds

.spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports back that the Deployment has failed progressing, surfaced as a condition with Type=Progressing, Status=False, and Reason=ProgressDeadlineExceeded in the status of the resource. The Deployment controller will keep retrying the Deployment. This defaults to 600. In the future, once automatic rollback is implemented, the Deployment controller will roll back a Deployment as soon as it observes such a condition.

If specified, this field needs to be greater than .spec.minReadySeconds.

Min Ready Seconds

.spec.minReadySeconds is an optional field that specifies the minimum number of seconds for which a newly created Pod should be ready without any of its containers crashing, for it to be considered available. This defaults to 0 (the Pod will be considered available as soon as it is ready). To learn more about when a Pod is considered ready, see Container Probes.
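
A sketch combining this field with the progress deadline discussed above (the 30-second value is only an illustration; progressDeadlineSeconds must stay greater than minReadySeconds):

spec:
  progressDeadlineSeconds: 600
  minReadySeconds: 30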

Revision History Limit

A Deployment’s revision history is stored in the ReplicaSets it controls.

.spec.revisionHistoryLimit is an optional field that specifies the number of old ReplicaSets to retain to allow rollback. These old ReplicaSets consume resources in etcd and crowd the output of kubectl get rs. The configuration of each Deployment revision is stored in its ReplicaSets; therefore, once an old ReplicaSet is deleted, you lose the ability to roll back to that revision of the Deployment. By default, 10 old ReplicaSets will be kept; however, the ideal value depends on the frequency and stability of new Deployments.

More specifically, setting this field to zero means that all old ReplicaSets with 0 replicas will be cleaned up. In this case, a new Deployment rollout cannot be undone, since its revision history is cleaned up.
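
A sketch of setting the limit explicitly:

spec:
  revisionHistoryLimit: 10   # set to 0 only if you are willing to give up the ability to undo a rollout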

Paused

.spec.paused is an optional boolean field for pausing and resuming a Deployment. The only difference between a paused Deployment and one that is not paused, is that any changes into the PodTemplateSpec of the paused Deployment will not trigger new rollouts as long as it is paused. A Deployment is not paused by default when it is created.
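
For completeness, a sketch of creating a Deployment already paused, directly from the manifest:

spec:
  paused: true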

What’s next

  • Learn about Pods.
  • Run a Stateless Application Using a Deployment.
  • Deployment is a top-level resource in the Kubernetes REST API. Read the Deployment object definition to understand the API for deployments.
  • Read about PodDisruptionBudget and how you can use it to manage application availability during disruptions.
