  • k8s ingress deployment, testing, and a deeper look

    1. There are two ways to deploy the ingress controller; here it is deployed as a DaemonSet. (Note the nodeSelector in the manifest below; labeling the nodes is shown right after the manifest.)

    apiVersion: v1
    kind: Namespace
    metadata:
      name: ingress-nginx
    
    ---
    
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: default-http-backend
      labels:
        app.kubernetes.io/name: default-http-backend
        app.kubernetes.io/part-of: ingress-nginx
      namespace: ingress-nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app.kubernetes.io/name: default-http-backend
          app.kubernetes.io/part-of: ingress-nginx
      template:
        metadata:
          labels:
            app.kubernetes.io/name: default-http-backend
            app.kubernetes.io/part-of: ingress-nginx
        spec:
          terminationGracePeriodSeconds: 60
          containers:
            - name: default-http-backend
              # Any image is permissible as long as:
              # 1. It serves a 404 page at /
              # 2. It serves 200 on a /healthz endpoint
              image: k8s.gcr.io/defaultbackend-amd64:1.5
              livenessProbe:
                httpGet:
                  path: /healthz
                  port: 8080
                  scheme: HTTP
                initialDelaySeconds: 30
                timeoutSeconds: 5
              ports:
                - containerPort: 8080
              resources:
                limits:
                  cpu: 10m
                  memory: 20Mi
                requests:
                  cpu: 10m
                  memory: 20Mi
    
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: default-http-backend
      namespace: ingress-nginx
      labels:
        app.kubernetes.io/name: default-http-backend
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      ports:
        - port: 80
          targetPort: 8080
      
      selector:
        app.kubernetes.io/name: default-http-backend
        app.kubernetes.io/part-of: ingress-nginx
    
    ---
    
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: nginx-configuration
      namespace: ingress-nginx
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    
    ---
    
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: tcp-services
      namespace: ingress-nginx
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    
    ---
    
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: udp-services
      namespace: ingress-nginx
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    
    ---
    
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nginx-ingress-serviceaccount
      namespace: ingress-nginx
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRole
    metadata:
      name: nginx-ingress-clusterrole
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    rules:
      - apiGroups:
          - ""
        resources:
          - configmaps
          - endpoints
          - nodes
          - pods
          - secrets
        verbs:
          - list
          - watch
      - apiGroups:
          - ""
        resources:
          - nodes
        verbs:
          - get
      - apiGroups:
          - ""
        resources:
          - services
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - "extensions"
        resources:
          - ingresses
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - ""
        resources:
          - events
        verbs:
          - create
          - patch
      - apiGroups:
          - "extensions"
        resources:
          - ingresses/status
        verbs:
          - update
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: Role
    metadata:
      name: nginx-ingress-role
      namespace: ingress-nginx
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    rules:
      - apiGroups:
          - ""
        resources:
          - configmaps
          - pods
          - secrets
          - namespaces
        verbs:
          - get
      - apiGroups:
          - ""
        resources:
          - configmaps
        resourceNames:
          # Defaults to "<election-id>-<ingress-class>"
          # Here: "<ingress-controller-leader>-<nginx>"
          # This has to be adapted if you change either parameter
          # when launching the nginx-ingress-controller.
          - "ingress-controller-leader-nginx"
        verbs:
          - get
          - update
      - apiGroups:
          - ""
        resources:
          - configmaps
        verbs:
          - create
      - apiGroups:
          - ""
        resources:
          - endpoints
        verbs:
          - get
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: RoleBinding
    metadata:
      name: nginx-ingress-role-nisa-binding
      namespace: ingress-nginx
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: nginx-ingress-role
    subjects:
      - kind: ServiceAccount
        name: nginx-ingress-serviceaccount
        namespace: ingress-nginx
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
    metadata:
      name: nginx-ingress-clusterrole-nisa-binding
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: nginx-ingress-clusterrole
    subjects:
      - kind: ServiceAccount
        name: nginx-ingress-serviceaccount
        namespace: ingress-nginx
    
    ---
    apiVersion: extensions/v1beta1
    #apiVersion: v1
    kind: DaemonSet
    metadata:
      name: nginx-ingress-controller
      namespace: ingress-nginx
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    spec:
    #  replicas: 1    # Note: do not set replicas when deploying as a DaemonSet; by default one pod is scheduled on every node that matches the label
    #  selector:                    # commented out
    #    matchLabels:
    #      app.kubernetes.io/name: ingress-nginx  # commented out
    #      app.kubernetes.io/part-of: ingress-nginx  # commented out
    #      isIngress: "true"   # commented out (used for debugging the selector)
      template:
        metadata:
          labels:
            app.kubernetes.io/name: ingress-nginx
            app.kubernetes.io/part-of: ingress-nginx
          annotations:
            prometheus.io/port: "10254"
            prometheus.io/scrape: "true"
        spec:
          serviceAccountName: nginx-ingress-serviceaccount
          nodeSelector:
            isIngress: "true"
          hostNetwork: true                     # Use the host network: the controller binds port 80 on the node, so the pod fails to start if port 80 is already in use on that host.
          containers:
            - name: nginx-ingress-controller
              image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0
              args:
                - /nginx-ingress-controller
                - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
                - --configmap=$(POD_NAMESPACE)/nginx-configuration
                - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
                - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
                - --publish-service=$(POD_NAMESPACE)/ingress-nginx
                - --annotations-prefix=nginx.ingress.kubernetes.io
              securityContext:
                capabilities:
                  drop:
                    - ALL
                  add:
                    - NET_BIND_SERVICE
                # www-data -> 33
                runAsUser: 33
              env:
                - name: POD_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
                - name: POD_NAMESPACE
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
              ports:
                - name: http
                  containerPort: 80
                - name: https
                  containerPort: 443
              livenessProbe:
                failureThreshold: 3
                httpGet:
                  path: /healthz
                  port: 10254
                  scheme: HTTP
                initialDelaySeconds: 10
                periodSeconds: 10
                successThreshold: 1
                timeoutSeconds: 1
              readinessProbe:
                failureThreshold: 3
                httpGet:
                  path: /healthz
                  port: 10254
                  scheme: HTTP
                periodSeconds: 10
                successThreshold: 1
                timeoutSeconds: 1
    
    ---
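
    Because the DaemonSet above carries nodeSelector isIngress: "true", the controller pods are scheduled only onto nodes that have that label. A minimal sketch of labeling the nodes first (the node names here are placeholders, not from this cluster):

    # label the nodes that should run the ingress controller
    kubectl label node node01 isIngress=true
    kubectl label node node02 isIngress=true
    # confirm which nodes carry the label
    kubectl get nodes -l isIngress=true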
    

    2. Deployment results

    kubectl apply -f mandatory.yaml
    
    [root@VM_0_48_centos ingress-nginx]# kubectl  get pods  -n ingress-nginx
    NAME                                    READY   STATUS    RESTARTS   AGE
    default-http-backend-85b8b595f9-j5twg   1/1     Running   1          30h
    nginx-ingress-controller-mrvzh          1/1     Running   0          104m
    nginx-ingress-controller-pgp9t          1/1     Running   0          105m
    nginx-ingress-controller-vd7v6          1/1     Running   0          104m
    [root@VM_0_48_centos ingress-nginx]# 
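
    Since the controller is a DaemonSet, you can also check it directly; roughly (one DESIRED/READY entry per labeled node):

    # DaemonSet status and pod placement
    kubectl get daemonset -n ingress-nginx
    kubectl get pods -n ingress-nginx -o wide   # shows which node each controller pod landed on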

    3. Test

    [root@VM_0_48_centos ingress-nginx]# cat test-deploy-demon.yaml 
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
    spec:
      selector:
        app: myapp
        release: canary
      ports:
      - name: http
        port: 80
        targetPort: 80
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata: 
      name: myapp-deploy
    spec:
      replicas: 1
      selector: 
        matchLabels:
          app: myapp
          release: canary
      template:
        metadata:
          labels:
            app: myapp
            release: canary
        spec:
          containers:
          - name: myapp
            image: ikubernetes/myapp:v2
            ports:
            - name: httpd
              containerPort: 80
    [root@VM_0_48_centos ingress-nginx]# cat test.yaml 
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: ingress-myapp
      namespace: default                     # Note: keep this the same namespace as the Service.
      annotations: 
        kubernetes.io/ingress.class: ""
    spec:
      tls:  # enable HTTPS with a TLS certificate
      - hosts:
        - test.xiajq.com
        secretName: tomcat-ingress-secret  ##
      rules:
      - host: test.xiajq.com # in production this domain should be resolvable by public DNS
        http:
          paths:
          - path: 
            backend:
              serviceName: myapp
              servicePort: 80
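
    Applying the two files above and checking that the Ingress was picked up might look like this (a sketch; output omitted):

    kubectl apply -f test-deploy-demon.yaml
    kubectl apply -f test.yaml
    # the Ingress should list the host test.xiajq.com and the backend myapp:80
    kubectl get ingress ingress-myapp -n default
    kubectl describe ingress ingress-myapp -n default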

    4. If you are not using HTTPS, you can simply add a local hosts entry and access the service. For HTTPS you also need to create a certificate and a secret:

    openssl genrsa -out tls.key 2048
    openssl req -new -x509 -key tls.key -out tls.crt -subj /C=CN/ST=Beijing/L=Beijing/O=DevOps/CN=test.xiajq.com
    kubectl create secret tls tomcat-ingress-secret --cert=tls.crt --key=tls.key # create the TLS secret
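
    Before the Ingress references it, you can confirm the secret exists and is of type kubernetes.io/tls (a quick sketch):

    kubectl get secret tomcat-ingress-secret -n default
    kubectl describe secret tomcat-ingress-secret -n default   # should list tls.crt and tls.key data entries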

    5. After adding the local hosts entry, test access:

    https://test.xiajq.com/
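
    A minimal sketch of that local test; 192.168.1.10 is a placeholder for the IP of any node running the controller (hostNetwork: true means ports 80/443 are served by the node itself):

    # point the test domain at an ingress node
    echo "192.168.1.10 test.xiajq.com" >> /etc/hosts
    # -k skips verification because the certificate is self-signed
    curl -k https://test.xiajq.com/
    # plain HTTP is answered with a 308 redirect to HTTPS because the Ingress has a tls section
    curl -I http://test.xiajq.com/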

    6. How ingress resolution works. Under the hood it is still nginx acting as the reverse proxy; the difference is that nginx runs on each node (via the DaemonSet) and is granted RBAC permission to watch Kubernetes endpoints. It therefore detects changes to the backend endpoints dynamically and regenerates the proxy configuration, i.e. nginx.conf, so that traffic keeps reaching the current backend pods.
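
    One way to see that endpoint watching in action: scale the backend from step 3 and check that the controller picks up the new endpoints without being restarted (a sketch):

    # scale the backend; the controller sees the new endpoints through its watch
    kubectl scale deployment myapp-deploy --replicas=3
    kubectl get endpoints myapp
    # the controller log shows the dynamic backend reconfiguration
    kubectl logs -n ingress-nginx nginx-ingress-controller-mrvzh | tail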

    -- Check the ports listening on the node
    [root@VM_0_2_centos ~]# netstat -ntlup|grep 80 
    tcp        0      0 172.19.0.2:2380         0.0.0.0:*               LISTEN      1187/etcd           
    tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      10193/nginx: master 
    
    -- Exec into the controller pod
    kubectl exec -it nginx-ingress-controller-mrvzh -n ingress-nginx /bin/bash
    
    -- The generated /etc/nginx/nginx.conf inside the pod (excerpt):
        include /etc/nginx/mime.types;
            default_type text/html;
    
            gzip on;
            gzip_comp_level 5;
            gzip_http_version 1.1;
            gzip_min_length 256;
            gzip_types application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component;
            gzip_proxied any;
            gzip_vary on;
    
            # Custom headers for response
    
            server_tokens on;
    
            # disable warnings
            uninitialized_variable_warn off;
    
            # Additional available variables:
            # $namespace
            # $ingress_name
            # $service_name
            # $service_port
            log_format upstreaminfo '$the_real_ip - [$the_real_ip] - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id';
    
            map $request_uri $loggable {
    
                    default 1;
            }
    
            access_log /var/log/nginx/access.log upstreaminfo if=$loggable;
    
            error_log  /var/log/nginx/error.log notice;
    
            resolver 183.60.83.19 183.60.82.98 valid=30s ipv6=off;
    
            # See https://www.nginx.com/blog/websocket-nginx
            map $http_upgrade $connection_upgrade {
                    default          upgrade;
    
                    # See http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
                    ''               '';
    
            }
    
            # The following is a sneaky way to do "set $the_real_ip $remote_addr"
            # Needed because using set is not allowed outside server blocks.
            map '' $the_real_ip {
    
                    default          $remote_addr;
    
            }
    
            # trust http_x_forwarded_proto headers correctly indicate ssl offloading
            map $http_x_forwarded_proto $pass_access_scheme {
                    default          $http_x_forwarded_proto;
                    ''               $scheme;
            }
    
            map $http_x_forwarded_port $pass_server_port {
                    default           $http_x_forwarded_port;
                    ''                $server_port;
            }
    
            # Obtain best http host
            map $http_host $this_host {
                    default          $http_host;
                    ''               $host;
            }
    
            map $http_x_forwarded_host $best_http_host {
                    default          $http_x_forwarded_host;
                    ''               $this_host;
            }
    
            # validate $pass_access_scheme and $scheme are http to force a redirect
            map "$scheme:$pass_access_scheme" $redirect_to_https {
                    default          0;
                    "http:http"      1;
                    "https:http"     1;
            }
    
            map $pass_server_port $pass_port {
                    443              443;
                    default          $pass_server_port;
            }
    
            # Reverse proxies can detect if a client provides a X-Request-ID header, and pass it on to the backend server.
            # If no such header is provided, it can provide a random value.
            map $http_x_request_id $req_id {
                    default   $http_x_request_id;
    
                    ""        $request_id;
    
            }
    
            server_name_in_redirect off;
            port_in_redirect        off;
    
            ssl_protocols TLSv1.2;
    
            # turn on session caching to drastically improve performance
    
            ssl_session_cache builtin:1000 shared:SSL:10m;
            ssl_session_timeout 10m;
    
            # allow configuring ssl session tickets
            ssl_session_tickets on;
    
            # slightly reduce the time-to-first-byte
            ssl_buffer_size 4k;
    
            # allow configuring custom ssl ciphers
            ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
            ssl_prefer_server_ciphers on;
    
            ssl_ecdh_curve auto;
    
            proxy_ssl_session_reuse on;
    
            upstream upstream_balancer {
                    server 0.0.0.1; # placeholder
    
                    balancer_by_lua_block {
                            balancer.balance()
                    }
    
                    keepalive 32;
    
            }
    
            # Global filters
    
            ## start server _
            server {
                    server_name _ ;
    
                    listen 80 default_server reuseport backlog=511;
    
                    set $proxy_upstream_name "-";
    
                    listen 443  default_server reuseport backlog=511 ssl http2;
    
                    # PEM sha: 472974aa2c90732bd8c67a1e5ef601e2f5915003
                    ssl_certificate                         /etc/ingress-controller/ssl/default-fake-certificate.pem;
                    ssl_certificate_key                     /etc/ingress-controller/ssl/default-fake-certificate.pem;
    
                    location / {
    
                            set $namespace      "";
                            set $ingress_name   "";
                            set $service_name   "";
                            set $service_port   "0";
                            set $location_path  "/";
    
                            rewrite_by_lua_block {
    
                                    balancer.rewrite()
    
                            }
    
                            log_by_lua_block {
    
                                    balancer.log()
    
                                    monitor.call()
                            }
    
                            if ($scheme = https) {
                                    more_set_headers                        "Strict-Transport-Security: max-age=15724800; includeSubDomains";
                            }
    
                            access_log off;
    
                            port_in_redirect off;
    
                            set $proxy_upstream_name "upstream-default-backend";
    
                            client_max_body_size                    1m;
    
                            proxy_set_header Host                   $best_http_host;
    
                            # Pass the extracted client certificate to the backend
    
                            # Allow websocket connections
                            proxy_set_header                        Upgrade           $http_upgrade;
    
                            proxy_set_header                        Connection        $connection_upgrade;
    
                            proxy_set_header X-Request-ID           $req_id;
                            proxy_set_header X-Real-IP              $the_real_ip;
    
                            proxy_set_header X-Forwarded-For        $the_real_ip;
    
                            proxy_set_header X-Forwarded-Host       $best_http_host;
                            proxy_set_header X-Forwarded-Port       $pass_port;
                            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
    
                            proxy_set_header X-Original-URI         $request_uri;
    
                            proxy_set_header X-Scheme               $pass_access_scheme;
    
                            # Pass the original X-Forwarded-For
                            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
    
                            # mitigate HTTPoxy Vulnerability
                            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
                            proxy_set_header Proxy                  "";
    
                            # Custom headers to proxied server
    
                            proxy_connect_timeout                   5s;
                            proxy_send_timeout                      60s;
                            proxy_read_timeout                      60s;
    
                            proxy_buffering                         off;
                            proxy_buffer_size                       4k;
                            proxy_buffers                           4 4k;
                            proxy_request_buffering                 on;
    
                            proxy_http_version                      1.1;
    
                            proxy_cookie_domain                     off;
                            proxy_cookie_path                       off;
    
                            # In case of errors try the next upstream server before returning an error
                            proxy_next_upstream                     error timeout;
                            proxy_next_upstream_tries               3;
    
                            proxy_pass http://upstream_balancer;
    
                            proxy_redirect                          off;
    
                    }
    
                    # health checks in cloud providers require the use of port 80
                    location /healthz {
    
                            access_log off;
                            return 200;
                    }
    
                    # this is required to avoid error if nginx is being monitored
                    # with an external software (like sysdig)
                    location /nginx_status {
    
                            allow 127.0.0.1;
    
                            deny all;
    
                            access_log off;
                            stub_status on;
                    }
    
            }
            ## end server _
    
            ## start server dashboard.xiajq.com
            server {
                    server_name dashboard.xiajq.com ;
    
                    listen 80;
    
                    set $proxy_upstream_name "-";
    
                    listen 443  ssl http2;
    
                    # PEM sha: 472974aa2c90732bd8c67a1e5ef601e2f5915003
                    ssl_certificate                         /etc/ingress-controller/ssl/default-fake-certificate.pem;
                    ssl_certificate_key                     /etc/ingress-controller/ssl/default-fake-certificate.pem;
    
                    location / {
    
                            set $namespace      "ingress-nginx";
                            set $ingress_name   "ingress-myapp";
                            set $service_name   "kubernetes-dashboard";
                            set $service_port   "443";
                            set $location_path  "/";
    
                            rewrite_by_lua_block {
    
                                    balancer.rewrite()
    
                            }
    
                            log_by_lua_block {
    
                                    balancer.log()
    
                                    monitor.call()
                            }
    
                            if ($scheme = https) {
                                    more_set_headers                        "Strict-Transport-Security: max-age=15724800; includeSubDomains";
                            }
    
                            port_in_redirect off;
    
                            set $proxy_upstream_name "";
    
                            # enforce ssl on server side
                            if ($redirect_to_https) {
    
                                    return 308 https://$best_http_host$request_uri;
    
                            }
    
                            client_max_body_size                    1m;
    
                            proxy_set_header Host                   $best_http_host;
    
                            # Pass the extracted client certificate to the backend
    
                            # Allow websocket connections
                            proxy_set_header                        Upgrade           $http_upgrade;
    
                            proxy_set_header                        Connection        $connection_upgrade;
    
                            proxy_set_header X-Request-ID           $req_id;
                            proxy_set_header X-Real-IP              $the_real_ip;
    
                            proxy_set_header X-Forwarded-For        $the_real_ip;
    
                            proxy_set_header X-Forwarded-Host       $best_http_host;
                            proxy_set_header X-Forwarded-Port       $pass_port;
                            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
    
                            proxy_set_header X-Original-URI         $request_uri;
    
                            proxy_set_header X-Scheme               $pass_access_scheme;
    
                            # Pass the original X-Forwarded-For
                            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
    
                            # mitigate HTTPoxy Vulnerability
                            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
                            proxy_set_header Proxy                  "";
    
                            # Custom headers to proxied server
    
                            proxy_connect_timeout                   5s;
                            proxy_send_timeout                      60s;
                            proxy_read_timeout                      60s;
    
                            proxy_buffering                         off;
                            proxy_buffer_size                       4k;
                            proxy_buffers                           4 4k;
                            proxy_request_buffering                 on;
    
                            proxy_http_version                      1.1;
    
                            proxy_cookie_domain                     off;
                            proxy_cookie_path                       off;
    
                            # In case of errors try the next upstream server before returning an error
                            proxy_next_upstream                     error timeout;
                            proxy_next_upstream_tries               3;
    
                            # No endpoints available for the request
                            return 503;
    
                    }
    
            }
            ## end server dashboard.xiajq.com
    
            ## start server test.xiajq.com   ## after an Ingress resource is created, this server block is generated automatically in nginx.conf
            server {
                    server_name test.xiajq.com ;
    
                    listen 80;
    
                    set $proxy_upstream_name "-";
    
                    listen 443  ssl http2;   ## port 443 enabled
    
                    # PEM sha: 6b96e02eb5f1eaa3d45486073017e7fd8ce3a40e
                    ssl_certificate                         /etc/ingress-controller/ssl/default-tomcat-ingress-secret.pem;   # certificate path; this is the certificate we created
                    ssl_certificate_key                     /etc/ingress-controller/ssl/default-tomcat-ingress-secret.pem;
    
                    ssl_trusted_certificate                 /etc/ingress-controller/ssl/default-tomcat-ingress-secret-full-chain.pem;  ##
                    ssl_stapling                            on;
                    ssl_stapling_verify                     on;
    
                    location / {
     
                            set $namespace      "default";            # namespace
                            set $ingress_name   "ingress-myapp";      # ingress name
                            set $service_name   "myapp";              # service name
                            set $service_port   "80";                 # port
                            set $location_path  "/";                  # path
    
                            rewrite_by_lua_block {
    
                                    balancer.rewrite()
    
                            }
    
                            log_by_lua_block {
    
                                    balancer.log()
    
                                    monitor.call()
                            }
    
                            if ($scheme = https) {
                                    more_set_headers                        "Strict-Transport-Security: max-age=15724800; includeSubDomains";
                            }
    
                            port_in_redirect off;
    
                            set $proxy_upstream_name "default-myapp-80";
    
                            # enforce ssl on server side
                            if ($redirect_to_https) {
    
                                    return 308 https://$best_http_host$request_uri;
    
                            }
    
                            client_max_body_size                    1m;
    
                            proxy_set_header Host                   $best_http_host;
    
                            # Pass the extracted client certificate to the backend
    
                            # Allow websocket connections
                            proxy_set_header                        Upgrade           $http_upgrade;
    
                            proxy_set_header                        Connection        $connection_upgrade;
    
                            proxy_set_header X-Request-ID           $req_id;
                            proxy_set_header X-Real-IP              $the_real_ip;
    
                            proxy_set_header X-Forwarded-For        $the_real_ip;
    
                            proxy_set_header X-Forwarded-Host       $best_http_host;
                            proxy_set_header X-Forwarded-Port       $pass_port;
                            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
    
                            proxy_set_header X-Original-URI         $request_uri;
    
                            proxy_set_header X-Scheme               $pass_access_scheme;
    
                            # Pass the original X-Forwarded-For
                            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
    
                            # mitigate HTTPoxy Vulnerability
                            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
                            proxy_set_header Proxy                  "";
    
                            # Custom headers to proxied server
    
                            proxy_connect_timeout                   5s;
                            proxy_send_timeout                      60s;
                            proxy_read_timeout                      60s;
    
                            proxy_buffering                         off;
                            proxy_buffer_size                       4k;
                            proxy_buffers                           4 4k;
                            proxy_request_buffering                 on;
    
                            proxy_http_version                      1.1;
    
                            proxy_cookie_domain                     off;
                            proxy_cookie_path                       off;
    
                            # In case of errors try the next upstream server before returning an error
                            proxy_next_upstream                     error timeout;
                            proxy_next_upstream_tries               3;
    
                            proxy_pass http://upstream_balancer;
    
                            proxy_redirect                          off;
    
                    }
    
            }
            ## end server test.xiajq.com
    
            # backend for when default-backend-service is not configured or it does not have endpoints
            server {
                    listen 8181 default_server reuseport backlog=511;
    
                    set $proxy_upstream_name "-";
    
                    location / {
                            return 404;
                    }
            }
    
            # default server, used for NGINX healthcheck and access to nginx stats
            server {
                    listen 18080 default_server reuseport backlog=511;
    
                    set $proxy_upstream_name "-";
    
                    location /healthz {
    
                            access_log off;
                            return 200;
                    }
    
                    location /is-dynamic-lb-initialized {
    
                            access_log off;
    
                            content_by_lua_block {
                                    local configuration = require("configuration")
                                    local backend_data = configuration.get_backends_data()
                                    if not backend_data then
                                    ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
                                    return
                                    end
    
                                    ngx.say("OK")
                                    ngx.exit(ngx.HTTP_OK)
                            }
                    }
    
                    location /nginx_status {
                            set $proxy_upstream_name "internal";
    
                            access_log off;
                            stub_status on;
                    }
    
                    location /configuration {
                            access_log off;
    
                            allow 127.0.0.1;
    
                            deny all;
    
                            # this should be equals to configuration_data dict
                            client_max_body_size                    10m;
                            proxy_buffering                         off;
    
                            content_by_lua_block {
                                    configuration.call()
                            }
                    }
    
                    location / {
    
                            set $proxy_upstream_name "upstream-default-backend";
                            proxy_set_header    Host   $best_http_host;
    
                            proxy_pass          http://upstream_balancer;
    
                    }
    
            }
    }
    
    stream {
            log_format log_stream [$time_local] $protocol $status $bytes_sent $bytes_received $session_time;
    
            access_log /var/log/nginx/access.log log_stream;
    
            error_log  /var/log/nginx/error.log;
    
            # TCP services
    
            # UDP services
    
    }
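
    The empty "# TCP services" / "# UDP services" sections at the end of the stream block map to the tcp-services and udp-services ConfigMaps created in step 1: adding an entry there makes the controller expose a raw TCP/UDP port. A hedged sketch (the default/redis service and the ports are made-up example values):

    # expose node port 9000 -> default/redis:6379 through the tcp-services ConfigMap
    kubectl patch configmap tcp-services -n ingress-nginx \
      --type merge -p '{"data":{"9000":"default/redis:6379"}}'
    # the regenerated nginx.conf should then contain a "listen 9000" server inside the stream {} block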

    7. Ingress usage example: adding an IP whitelist

    Restricting access to an Ingress by client IP:
    
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: whitelist
      annotations:
        # multiple CIDRs may be given as a comma-separated list
        nginx.ingress.kubernetes.io/whitelist-source-range: "1.1.1.1/24"
    spec:
      rules:
      - host: whitelist.test.net
        http:
          paths:
          - path: /
            backend:
              serviceName: webserver
              servicePort: 80
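
    Requests from a client IP outside the whitelisted range are rejected by the controller with an HTTP 403; a quick check (192.168.1.10 is again a placeholder node IP):

    # expect 403 from outside 1.1.1.1/24, 200 from inside the range
    curl -I -H "Host: whitelist.test.net" http://192.168.1.10/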

    YAML download address: https://github.com/kubernetes/ingress-nginx/tree/nginx-0.20.0/deploy

      

  • Original article: https://www.cnblogs.com/xiajq/p/11394951.html