  • The kubectl patch command

    The patch command

    kubectl patch — Update field(s) of a resource using strategic merge patch

    Synopsis

    kubectl patch [Options]

    Description

    Update field(s) of a resource using strategic merge patch, a JSON merge patch, or a JSON patch.

    JSON and YAML formats are accepted.

    Options

    --allow-missing-template-keys=true

    If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.

    --dry-run=false

    If true, only print the object that would be sent, without sending it.

    -f, --filename=[]

    Filename, directory, or URL to files identifying the resource to update

    -k, --kustomize=""

    Process the kustomization directory. This flag can't be used together with -f or -R.

    --local=false

    If true, patch will operate on the content of the file, not the server-side resource.

    -o, --output=""

    Output format. One of: json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-file.

    -p, --patch=""

    The patch to be applied to the resource JSON file.

    --record=false

    Record current kubectl command in the resource annotation. If set to false, do not record the command. If set to true, record the command. If not set, default to updating the existing annotation value only if one already exists.

    -R, --recursive=false

    Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.

    --template=""

    Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].

    --type="strategic"

    The type of patch being provided; one of [json merge strategic]

    Options Inherited from Parent Commands

    --alsologtostderr=false

    log to standard error as well as files

    --application-metrics-count-limit=100

    Max number of application metrics to store (per container)

    --as=""

    Username to impersonate for the operation

    --as-group=[]

    Group to impersonate for the operation, this flag can be repeated to specify multiple groups.

    --azure-container-registry-config=""

    Path to the file containing Azure container registry configuration information.

    --boot-id-file="/proc/sys/kernel/random/boot_id"

    Comma-separated list of files to check for boot-id. Use the first one that exists.

    --cache-dir="/builddir/.kube/http-cache"

    Default HTTP cache directory

    --certificate-authority=""

    Path to a cert file for the certificate authority

    --client-certificate=""

    Path to a client certificate file for TLS

    --client-key=""

    Path to a client key file for TLS

    --cloud-provider-gce-lb-src-cidrs=130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16

    CIDRs opened in GCE firewall for LB traffic proxy health checks

    --cluster=""

    The name of the kubeconfig cluster to use

    --container-hints="/etc/cadvisor/container_hints.json"

    location of the container hints file

    --containerd="/run/containerd/containerd.sock"

    containerd endpoint

    --containerd-namespace="k8s.io"

    containerd namespace

    --context=""

    The name of the kubeconfig context to use

    --default-not-ready-toleration-seconds=300

    Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.

    --default-unreachable-toleration-seconds=300

    Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.

    --docker="unix:///var/run/docker.sock"

    docker endpoint

    --docker-env-metadata-whitelist=""

    a comma-separated list of environment variable keys that needs to be collected for docker containers

    --docker-only=false

    Only report docker containers in addition to root stats

    --docker-root="/var/lib/docker"

    DEPRECATED: docker root is read from docker info (this is a fallback, default: /var/lib/docker)

    --docker-tls=false

    use TLS to connect to docker

    --docker-tls-ca="ca.pem"

    path to trusted CA

    --docker-tls-cert="cert.pem"

    path to client certificate

    --docker-tls-key="key.pem"

    path to private key

    --enable-load-reader=false

    Whether to enable cpu load reader

    --event-storage-age-limit="default=0"

    Max length of time for which to store events (per type). Value is a comma separated list of key values, where the keys are event types (e.g.: creation, oom) or "default" and the value is a duration. Default is applied to all non-specified event types

    --event-storage-event-limit="default=0"

    Max number of events to store (per type). Value is a comma separated list of key values, where the keys are event types (e.g.: creation, oom) or "default" and the value is an integer. Default is applied to all non-specified event types

    --global-housekeeping-interval=1m0s

    Interval between global housekeepings

    --housekeeping-interval=10s

    Interval between container housekeepings

    --insecure-skip-tls-verify=false

    If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure

    --kubeconfig=""

    Path to the kubeconfig file to use for CLI requests.

    --log-backtrace-at=:0

    when logging hits line file:N, emit a stack trace

    --log-cadvisor-usage=false

    Whether to log the usage of the cAdvisor container

    --log-dir=""

    If non-empty, write log files in this directory

    --log-file=""

    If non-empty, use this log file

    --log-file-max-size=1800

    Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.

    --log-flush-frequency=5s

    Maximum number of seconds between log flushes

    --logtostderr=true

    log to standard error instead of files

    --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"

    Comma-separated list of files to check for machine-id. Use the first one that exists.

    --match-server-version=false

    Require server version to match client version

    -n, --namespace=""

    If present, the namespace scope for this CLI request

    --password=""

    Password for basic authentication to the API server

    --profile="none"

    Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)

    --profile-output="profile.pprof"

    Name of the file to write the profile to

    --request-timeout="0"

    The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.

    -s, --server=""

    The address and port of the Kubernetes API server

    --skip-headers=false

    If true, avoid header prefixes in the log messages

    --skip-log-headers=false

    If true, avoid headers when opening log files

    --stderrthreshold=2

    logs at or above this threshold go to stderr

    --storage-driver-buffer-duration=1m0s

    Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction

    --storage-driver-db="cadvisor"

    database name

    --storage-driver-host="localhost:8086"

    database host:port

    --storage-driver-password="root"

    database password

    --storage-driver-secure=false

    use secure connection with database

    --storage-driver-table="stats"

    table name

    --storage-driver-user="root"

    database username

    --token=""

    Bearer token for authentication to the API server

    --update-machine-info-interval=5m0s

    Interval between machine info updates.

    --user=""

    The name of the kubeconfig user to use

    --username=""

    Username for basic authentication to the API server

    -v, --v=0

    number for the log level verbosity

    --version=false

    Print version information and quit

    --vmodule=

    comma-separated list of pattern=N settings for file-filtered logging

    Example

      # Partially update a node using a strategic merge patch. Specify the patch as JSON.
      kubectl patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}'
      
      # Partially update a node using a strategic merge patch. Specify the patch as YAML.
      kubectl patch node k8s-node-1 -p $'spec:\n unschedulable: true'
      
      # Partially update a node identified by the type and name specified in "node.json" using strategic merge patch.
      kubectl patch -f node.json -p '{"spec":{"unschedulable":true}}'
      
      # Update a container's image; spec.containers[*].name is required because it's a merge key.
      kubectl patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}'
      
      # Update a container's image using a json patch with positional arrays.
      kubectl patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]'

     # Update a configmap using a JSON patch (key and value elided in the original)
     kubectl patch configmap platform-adsproxyapi-config --type=json -p '[{"op":"replace", "path": "/data/****", "value":"****"}]' -n ***
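
     The --type flag also accepts "merge" for a JSON merge patch (RFC 7396, discussed below). A hedged sketch, where the deployment name and replica count are hypothetical:

      # Set the replica count via a merge patch; the patch body simply lists the fields to change.
      kubectl patch deployment my-deploy --type=merge -p '{"spec":{"replicas":3}}'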

     

    JSON Patch and JSON Merge Patch

    Partly as a side effect of the PATCH HTTP verb gaining attention in recent years, people started to come up with ideas for JSON-driven patch formats which declaratively describe the differences between two JSON documents.

    The number of home-grown solutions is probably countless, but two formats have been published by the IETF as RFC documents to solve this problem: RFC 6902 (JSON Patch) and RFC 7396 (JSON Merge Patch).

    Both have advantages and disadvantages, and neither will fit everybody's use cases, so let's have a quick look at which one to use.

    JSON Patch

    The JSON Patch format is similar to a database transaction: it is an array of mutating operations on a JSON document, which is executed atomically by a proper implementation. It is basically a series of "add", "remove", "replace", "move" and "copy" operations.

    As a short example, let's consider the following JSON document:

    {
    	"users" : [
    		{ "name" : "Alice" , "email" : "alice@example.org" },
    		{ "name" : "Bob" , "email" : "bob@example.org" }
    	]
    }
    

    We can run the following patch on it, which changes Alice’s email address and then adds a new element to the array:

    [
    	{
    		"op" : "replace" ,
    		"path" : "/users/0/email" ,
    		"value" : "alice@wonderland.org"
    	},
    	{
    		"op" : "add" ,
    		"path" : "/users/-" ,
    		"value" : {
    			"name" : "Christine",
    			"email" : "christine@example.org"
    		}
    	}
    ]

    The result will be:

    {
    	"users" : [
    		{ "name" : "Alice" , "email" : "alice@wonderland.org" },
    		{ "name" : "Bob" , "email" : "bob@example.org" },
    		{ "name" : "Christine" , "email" : "christine@example.org" }
    	]
    }
    

     

    So the outline of the operations described in a JSON Patch is:

    • the "op" key denotes the operation
    • the arguments of the operation are described by the other keys
    • there is always a "path" argument, which is a JSON Pointer (RFC 6901) pointing to the document fragment that is the target of the operation
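
    The "move" and "copy" operations mentioned earlier take a "from" argument (also a JSON Pointer) instead of a "value". An illustrative sketch against the users document above:

    [
    	{ "op" : "copy" , "from" : "/users/0" , "path" : "/users/-" },
    	{ "op" : "move" , "from" : "/users/1/email" , "path" : "/users/1/contact" }
    ]

    The first operation appends a copy of Alice’s record to the array; the second renames Bob’s "email" key to a hypothetical "contact" key.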

    An interesting option of the JSON Patch specification is its "test" operator: its evaluation doesn’t come with any side effects, so it isn’t a data-manipulating operator.

    Instead it can be used to describe assertions on the document at given points of the JSON Patch execution. If the "test" evaluates to false, an error occurs, subsequent operations won’t be executed, and the document is rolled back to its initial state.

    I think "test" can be useful for checking preconditions before a patch executes, or as a safety net at the end of execution to check that everything looks all right. Patches are run atomically by implementations, so if a "test" finds an inconsistency in the document, you can safely assume that the document is still in its consistent (initial) state after the patch failure.
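
    As a sketch, this patch only replaces Alice’s email if it still has the expected old value; otherwise the whole patch fails and the document is left untouched:

    [
    	{ "op" : "test" , "path" : "/users/0/email" , "value" : "alice@example.org" },
    	{ "op" : "replace" , "path" : "/users/0/email" , "value" : "alice@wonderland.org" }
    ]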

    JSON Merge Patch

    Alongside JSON Patch there is another JSON-based format, JSON Merge Patch (RFC 7396), which can be used for more or less the same purpose, i.e. it describes a changed version of a JSON document.

    The conceptual difference compared to JSON Patch is that JSON Merge Patch is similar to a diff file. It simply contains the nodes of the document which should be different after execution.

    As a quick example (taken from the spec) if we have the following document:

    {
    	"a": "b",
    	"c": {
    		"d": "e",
    		"f": "g"
    	}
    }
    

    Then we can run the following patch on it:

    {
    	"a":"z",
    	"c": {
    		"f": null
    	}
    }
    

    which will change the value of "a" to "z" and will delete the "f" key.
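
    The resulting document, for reference, is:

    {
    	"a": "z",
    	"c": {
    		"d": "e"
    	}
    }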

    The simplicity of the format may look promising at first glance, since most probably anyone who understands the schema of the original document will instantly understand a merge patch document too. It is just a standardization of what one may naturally call a patch of a JSON document.

    But this simplicity comes with some limitations:

    • Deletion happens by setting a key to null. This inherently means that it isn’t possible to change a key’s value to null, since such a modification cannot be described by a merge patch document.
    • Arrays cannot be manipulated by merge patches. If you want to add an element to an array, or mutate any of its elements, then you have to include the entire array in the merge patch document, even if the actual change is minimal (see the sketch after this list).
    • The execution of a merge patch document never results in an error. Any malformed patch will be merged, so it is a very liberal format. This is not necessarily good, since you will probably need to perform programmatic checks after the merge, or run a JSON Schema validation on the result.
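
    As a sketch of the array limitation: to append Christine to the users document from the JSON Patch section, a merge patch has to restate the whole array, existing elements included:

    {
    	"users" : [
    		{ "name" : "Alice" , "email" : "alice@example.org" },
    		{ "name" : "Bob" , "email" : "bob@example.org" },
    		{ "name" : "Christine" , "email" : "christine@example.org" }
    	]
    }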

    Summary

    JSON Merge Patch is a naively simple format with limited usability. It is probably a good choice if you are building something small, with a very simple JSON schema, but want to offer a quickly understandable, more or less working method for clients to update JSON documents. A REST API designed for public consumption but without appropriate client libraries might be a good example.

    For more complex use cases I’d pick JSON Patch, since it is applicable to any JSON document (unlike merge patch, which is not able to set keys to null). The specification also ensures atomic execution and robust error reporting.
