Configure Service Accounts for Pods
A service account provides an identity for processes that run in a Pod.
This is a user introduction to Service Accounts. See also the Cluster Admin Guide to Service Accounts.
When you (a human) access the cluster (for example, using kubectl), you are authenticated by the apiserver as a particular User Account (currently this is usually admin, unless your cluster administrator has customized your cluster).
Processes in containers inside pods can also contact the apiserver. When they do, they are authenticated as a particular Service Account (for example, default).
Use the Default Service Account to access the API server.
When you create a pod, if you do not specify a service account, it is automatically assigned the default service account in the same namespace.
If you get the raw json or yaml for a pod you have created (for example, kubectl get pods/podname -o yaml), you can see the spec.serviceAccountName field has been automatically set.
kubectl get pods platform-website-deployment-77b9cfc887-l9g66 -o yaml -n alpha
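The relevant part of the output looks roughly like this (trimmed to the fields of interest; the pod name and namespace are just the example queried above):

apiVersion: v1
kind: Pod
metadata:
  name: platform-website-deployment-77b9cfc887-l9g66
  namespace: alpha
spec:
  serviceAccountName: default
  ...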
In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
automountServiceAccountToken: false
...
In version 1.6+, you can also opt out of automounting API credentials for a particular pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false
  ...
The pod spec takes precedence over the service account if both specify an automountServiceAccountToken value.
Use Multiple Service Accounts.
Every namespace has a default service account resource called default.
You can list this and any other serviceAccount resources in the namespace with this command:
kubectl get serviceAccounts
You can create additional ServiceAccount objects like this:
$ cat > /tmp/serviceaccount.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
EOF
$ kubectl create -f /tmp/serviceaccount.yaml
serviceaccount "build-robot" created
If you get a complete dump of the service account object, like this:
$ kubectl get serviceaccounts/build-robot -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2015-06-16T00:12:59Z
  name: build-robot
  namespace: default
  resourceVersion: "272500"
  selfLink: /api/v1/namespaces/default/serviceaccounts/build-robot
  uid: 721ab723-13bc-11e5-aec2-42010af0021e
secrets:
- name: build-robot-token-bvbk5
then you will see that a token has automatically been created and is referenced by the service account.
You may use authorization plugins to set permissions on service accounts.
To use a non-default service account, simply set the spec.serviceAccountName field of a pod to the name of the service account you wish to use.
The service account has to exist at the time the pod is created, or it will be rejected.
You cannot update the service account of an already created pod.
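For example, a minimal Pod that runs under the build-robot service account created above might look like the following sketch; the pod name and image are only illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: build-robot-pod
spec:
  serviceAccountName: build-robot
  containers:
  - name: main
    image: k8s.gcr.io/busybox
    command: ["sleep", "3600"]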
You can clean up the service account from this example like this:
$ kubectl delete serviceaccount/build-robot
Manually create a service account API token.
Suppose we have an existing service account named “build-robot” as mentioned above, and we create a new secret manually.
$ cat > /tmp/build-robot-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: build-robot-secret
  annotations:
    kubernetes.io/service-account.name: build-robot
type: kubernetes.io/service-account-token
EOF
$ kubectl create -f /tmp/build-robot-secret.yaml
secret "build-robot-secret" created
Now you can confirm that the newly built secret is populated with an API token for the “build-robot” service account.
Any tokens for non-existent service accounts will be cleaned up by the token controller.
$ kubectl describe secrets/build-robot-secret
Name:           build-robot-secret
Namespace:      default
Labels:         <none>
Annotations:    kubernetes.io/service-account.name=build-robot
                kubernetes.io/service-account.uid=da68f9c6-9d26-11e7-b84e-002dc52800da

Type:   kubernetes.io/service-account-token

Data
====
ca.crt:         1338 bytes
namespace:      7 bytes
token:          ...
Add ImagePullSecrets to a service account
First, create an imagePullSecret, as described here.
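If you have not created the imagePullSecret yet, one common way is kubectl create secret docker-registry; the name myregistrykey and the placeholder values below are only examples:

kubectl create secret docker-registry myregistrykey --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>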
Next, verify it has been created. For example:
$ kubectl get secrets myregistrykey
NAME            TYPE                             DATA      AGE
myregistrykey   kubernetes.io/dockerconfigjson   1         1d
Next, modify the default service account for the namespace to use this secret as an imagePullSecret.
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'
Interactive version requiring manual edit:
$ kubectl get serviceaccounts default -o yaml > ./sa.yaml
$ cat sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2015-08-07T22:02:39Z
  name: default
  namespace: default
  resourceVersion: "243024"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: 052fb0f4-3d50-11e5-b066-42010af0d7b6
secrets:
- name: default-token-uudge
$ vi sa.yaml
[editor session not shown]
[delete line with key "resourceVersion"]
[add lines with "imagePullSecrets:"]
$ cat sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2015-08-07T22:02:39Z
  name: default
  namespace: default
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: 052fb0f4-3d50-11e5-b066-42010af0d7b6
secrets:
- name: default-token-uudge
imagePullSecrets:
- name: myregistrykey
$ kubectl replace serviceaccount default -f ./sa.yaml
serviceaccounts/default
Now, any new pods created in the current namespace will have this added to their spec:
spec:
  imagePullSecrets:
  - name: myregistrykey
Pull an Image from a Private Registry
This section shows how to create a Pod that uses a Secret to pull an image from a private Docker registry or repository.
Log in to Docker
You must authenticate with a registry in order to pull a private image:
docker login
When prompted, enter your Docker username and password.
The login process creates or updates a config.json file that holds an authorization token.
View the config.json file:
cat ~/.docker/config.json
The output contains a section similar to this:
{ "auths": { "https://index.docker.io/v1/": { "auth": "c3R...zE2" } } }
Note: If you use a Docker credentials store, you won't see that auth entry, but a credsStore entry with the name of the store as its value.
Create a Secret in the cluster that holds your authorization token
A Kubernetes cluster uses a Secret of docker-registry type to authenticate with a container registry to pull a private image.
Create this Secret, naming it regcred:
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
where:
- <your-registry-server> is your Private Docker Registry FQDN (https://index.docker.io/v1/ for DockerHub).
- <your-name> is your Docker username.
- <your-pword> is your Docker password.
- <your-email> is your Docker email.
You have successfully set your Docker credentials in the cluster as a Secret called regcred.
Inspecting the Secret regcred
To understand the contents of the regcred Secret you just created, start by viewing the Secret in YAML format:
kubectl get secret regcred --output=yaml

# The output is similar to this:
apiVersion: v1
data:
  .dockerconfigjson: eyJodHRwczovL2luZGV4L ... J0QUl6RTIifX0=
kind: Secret
metadata:
  ...
  name: regcred
  ...
type: kubernetes.io/dockerconfigjson
The value of the .dockerconfigjson field is a base64 representation of your Docker credentials.
To understand what is in the .dockerconfigjson field, convert the secret data to a readable format:
kubectl get secret regcred --output="jsonpath={.data.\.dockerconfigjson}" | base64 -d
The output is similar to this:
{"auths":{"yourprivateregistry.com":{"username":"janedoe","password":"xxxxxxxxxxx","email":"jdoe@example.com","auth":"c3R...zE2"}}}
To understand what is in the auth field, convert the base64-encoded data to a readable format:
echo "c3R...zE2" | base64 -d
The output, username and password concatenated with a :, is similar to this:
janedoe:xxxxxxxxxxx
Notice that the Secret data contains the authorization token similar to your local ~/.docker/config.json file.
You have successfully set your Docker credentials as a Secret called regcred in the cluster.
Create a Pod that uses your Secret
Here is a configuration file for a Pod that needs access to your Docker credentials in regcred:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: regcred
Download the above file:
wget -O my-private-reg-pod.yaml https://k8s.io/docs/tasks/configure-pod-container/private-reg-pod.yaml
In file my-private-reg-pod.yaml, replace <your-private-image> with the path to an image in a private registry such as:
janedoe/jdoe-private:v1
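With the placeholder substituted, my-private-reg-pod.yaml would look something like this sketch (using the example image name above):

apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: janedoe/jdoe-private:v1
  imagePullSecrets:
  - name: regcred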
To pull the image from the private registry, Kubernetes needs credentials.
The imagePullSecrets field in the configuration file specifies that Kubernetes should get the credentials from a Secret named regcred.
Create a Pod that uses your Secret, and verify that the Pod is running:
kubectl create -f my-private-reg-pod.yaml
kubectl get pod private-reg
Configure Liveness and Readiness Probes
This page shows how to configure liveness and readiness probes for Containers.
The kubelet uses liveness probes to know when to restart a Container.
For example, liveness probes could catch a deadlock, where an application is running, but unable to make progress.
Restarting a Container in such a state can help to make the application more available despite bugs.
The kubelet uses readiness probes to know when a Container is ready to start accepting traffic.
A Pod is considered ready when all of its Containers are ready.
One use of this signal is to control which Pods are used as backends for Services.
When a Pod is not ready, it is removed from Service load balancers.
Define a liveness command
Many applications running for long periods of time eventually transition to broken states, and cannot recover except by being restarted. Kubernetes provides liveness probes to detect and remedy such situations.
In this exercise, you create a Pod that runs a Container based on the k8s.gcr.io/busybox image. Here is the configuration file for the Pod:
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
In the configuration file, you can see that the Pod has a single Container.
The periodSeconds field specifies that the kubelet should perform a liveness probe every 5 seconds.
The initialDelaySeconds field tells the kubelet that it should wait 5 seconds before performing the first probe.
To perform a probe, the kubelet executes the command cat /tmp/healthy in the Container. If the command succeeds, it returns 0, and the kubelet considers the Container to be alive and healthy. If the command returns a non-zero value, the kubelet kills the Container and restarts it.
# When the Container starts, it executes this command:
/bin/sh -c "touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600"
For the first 30 seconds of the Container's life, there is a /tmp/healthy file. So during the first 30 seconds, the command cat /tmp/healthy returns a success code. After 30 seconds, cat /tmp/healthy returns a failure code.
# Create the Pod:
kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/exec-liveness.yaml

# Within 30 seconds, view the Pod events:
kubectl describe pod liveness-exec

# The output indicates that no liveness probes have failed yet:
FirstSeen LastSeen Count From                SubobjectPath              Type    Reason     Message
--------- -------- ----- ----                -------------              ------- ------     -------
24s       24s      1     {default-scheduler}                            Normal  Scheduled  Successfully assigned liveness-exec to worker0
23s       23s      1     {kubelet worker0}   spec.containers{liveness}  Normal  Pulling    pulling image "k8s.gcr.io/busybox"
23s       23s      1     {kubelet worker0}   spec.containers{liveness}  Normal  Pulled     Successfully pulled image "k8s.gcr.io/busybox"
23s       23s      1     {kubelet worker0}   spec.containers{liveness}  Normal  Created    Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
23s       23s      1     {kubelet worker0}   spec.containers{liveness}  Normal  Started    Started container with docker id 86849c15382e
# After 35 seconds, view the Pod events again:
kubectl describe pod liveness-exec

# At the bottom of the output, there are messages indicating that the liveness probes have failed,
# and the containers have been killed and recreated.
FirstSeen LastSeen Count From                SubobjectPath              Type     Reason     Message
--------- -------- ----- ----                -------------              -------  ------     -------
37s       37s      1     {default-scheduler}                            Normal   Scheduled  Successfully assigned liveness-exec to worker0
36s       36s      1     {kubelet worker0}   spec.containers{liveness}  Normal   Pulling    pulling image "k8s.gcr.io/busybox"
36s       36s      1     {kubelet worker0}   spec.containers{liveness}  Normal   Pulled     Successfully pulled image "k8s.gcr.io/busybox"
36s       36s      1     {kubelet worker0}   spec.containers{liveness}  Normal   Created    Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
36s       36s      1     {kubelet worker0}   spec.containers{liveness}  Normal   Started    Started container with docker id 86849c15382e
2s        2s       1     {kubelet worker0}   spec.containers{liveness}  Warning  Unhealthy  Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
# Wait another 30 seconds, and verify that the Container has been restarted:
kubectl get pod liveness-exec

# The output shows that RESTARTS has been incremented:
NAME            READY     STATUS    RESTARTS   AGE
liveness-exec   1/1       Running   1          1m
Define a liveness HTTP request
Another kind of liveness probe uses an HTTP GET request.
Here is the configuration file for a Pod that runs a container based on the k8s.gcr.io/liveness image.
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args:
    - /server
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
        httpHeaders:
        - name: X-Custom-Header
          value: Awesome
      initialDelaySeconds: 3
      periodSeconds: 3
In the configuration file, you can see that the Pod has a single Container.
The periodSeconds field specifies that the kubelet should perform a liveness probe every 3 seconds.
The initialDelaySeconds field tells the kubelet that it should wait 3 seconds before performing the first probe.
To perform a probe, the kubelet sends an HTTP GET request to the server that is running in the Container and listening on port 8080. If the handler for the server's /healthz path returns a success code, the kubelet considers the Container to be alive and healthy. If the handler returns a failure code, the kubelet kills the Container and restarts it.
Any code greater than or equal to 200 and less than 400 indicates success. Any other code indicates failure.
You can see the source code for the server in server.go.
For the first 10 seconds that the Container is alive, the /healthz handler returns a status of 200. After that, the handler returns a status of 500.
http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) { duration := time.Now().Sub(started) if duration.Seconds() > 10 { w.WriteHeader(500) w.Write([]byte(fmt.Sprintf("error: %v", duration.Seconds()))) } else { w.WriteHeader(200) w.Write([]byte("ok")) } })
The kubelet starts performing health checks 3 seconds after the Container starts.
So the first couple of health checks will succeed.
But after 10 seconds, the health checks will fail, and the kubelet will kill and restart the Container.
# To try the HTTP liveness check, create a Pod:
kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/http-liveness.yaml

# After 10 seconds, view Pod events to verify that liveness probes have failed and the Container has been restarted:
kubectl describe pod liveness-http
Define a TCP liveness probe
A third type of liveness probe uses a TCP Socket.
With this configuration, the kubelet will attempt to open a socket to your container on the specified port. If it can establish a connection, the container is considered healthy; if it can't, it is considered a failure.
apiVersion: v1
kind: Pod
metadata:
  name: goproxy
  labels:
    app: goproxy
spec:
  containers:
  - name: goproxy
    image: k8s.gcr.io/goproxy:0.1
    ports:
    - containerPort: 8080
    readinessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
As you can see, configuration for a TCP check is quite similar to an HTTP check.
This example uses both readiness and liveness probes.
The kubelet will send the first readiness probe 5 seconds after the container starts.
This will attempt to connect to the goproxy container on port 8080.
If the probe succeeds, the pod will be marked as ready.
The kubelet will continue to run this check every 10 seconds.
In addition to the readiness probe, this configuration includes a liveness probe.
The kubelet will run the first liveness probe 15 seconds after the container starts.
Just like the readiness probe, this will attempt to connect to the goproxy container on port 8080. If the liveness probe fails, the container will be restarted.
# To try the TCP liveness check, create a Pod:
kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/tcp-liveness-readiness.yaml

# After 15 seconds, view Pod events to verify that liveness probes are being performed:
kubectl describe pod goproxy
Use a named port
You can use a named ContainerPort for HTTP or TCP liveness checks:
ports:
- name: liveness-port
  containerPort: 8080
  hostPort: 8080

livenessProbe:
  httpGet:
    path: /healthz
    port: liveness-port
Define readiness probes
Sometimes, applications are temporarily unable to serve traffic.
For example, an application might need to load large data or configuration files during startup.
In such cases, you don’t want to kill the application, but you don’t want to send it requests either.
Kubernetes provides readiness probes to detect and mitigate these situations.
A pod with containers reporting that they are not ready does not receive traffic through Kubernetes Services.
Readiness probes are configured similarly to liveness probes.
The only difference is that you use the readinessProbe field instead of the livenessProbe field.
readinessProbe:
  exec:
    command:
    - cat
    - /tmp/healthy
  initialDelaySeconds: 5
  periodSeconds: 5
Configuration for HTTP and TCP readiness probes also remains identical to liveness probes.
Readiness and liveness probes can be used in parallel for the same container.
Using both can ensure that traffic does not reach a container that is not ready for it, and that containers are restarted when they fail.
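For example, a single container can carry both kinds of probe side by side. The following is only a sketch; the /healthz path, port 8080, and image placeholder are assumptions about your application:

containers:
- name: my-app
  image: <your-image>
  ports:
  - containerPort: 8080
  readinessProbe:
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 5
  livenessProbe:
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 15
    periodSeconds: 20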
Configure Probes
Probes have a number of fields that you can use to more precisely control the behavior of liveness and readiness checks:
- initialDelaySeconds: Number of seconds after the container has started before liveness or readiness probes are initiated.
- periodSeconds: How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.
- timeoutSeconds: Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1.
- successThreshold: Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness. Minimum value is 1.
- failureThreshold: When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up. Giving up in case of liveness probe means restarting the Pod. In case of readiness probe the Pod will be marked Unready. Defaults to 3. Minimum value is 1.
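For example, a liveness probe combining several of these fields might look like this; the values are only illustrative, not recommendations:

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5
  timeoutSeconds: 2
  successThreshold: 1
  failureThreshold: 3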
HTTP probes have additional fields that can be set on httpGet:
- host: Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead.
- scheme: Scheme to use for connecting to the host (HTTP or HTTPS). Defaults to HTTP.
- path: Path to access on the HTTP server.
- httpHeaders: Custom headers to set in the request. HTTP allows repeated headers.
- port: Name or number of the port to access on the container. Number must be in the range 1 to 65535.
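Putting several of these fields together, an httpGet probe might be configured like this sketch; the HTTPS scheme, path, port, and header shown are example values, not required settings:

livenessProbe:
  httpGet:
    scheme: HTTPS
    path: /healthz
    port: 8443
    httpHeaders:
    - name: Accept
      value: application/json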
For an HTTP probe, the kubelet sends an HTTP request to the specified path and port to perform the check.
The kubelet sends the probe to the pod's IP address, unless the address is overridden by the optional host field in httpGet.
If the scheme field is set to HTTPS, the kubelet sends an HTTPS request skipping the certificate verification.
In most scenarios, you do not want to set the host field.
Here's one scenario where you would set it. Suppose the Container listens on 127.0.0.1 and the Pod's hostNetwork field is true. Then host, under httpGet, should be set to 127.0.0.1.
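A sketch of that scenario; the port, path, and image placeholder are illustrative:

spec:
  hostNetwork: true
  containers:
  - name: my-app
    image: <your-image>
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 8080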
If your pod relies on virtual hosts, which is probably the more common case, you should not use host, but rather set the Host header in httpHeaders.
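For example, to probe a virtual host (the name www.example.com here is only a placeholder) without setting host, you can set the Host header instead:

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
    httpHeaders:
    - name: Host
      value: www.example.com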