  • How Load Balancing Policies Work

    https://docs.cloud.oracle.com/en-us/iaas/Content/Balance/Reference/lbpolicies.htm

    After you create a load balancer, you can apply policies to control traffic distribution to your backend servers. The Load Balancing service supports three primary policy types: Round Robin, Least Connections, and IP Hash.

    When processing load or capacity varies among backend servers, you can refine each of these policy types with backend server weighting. Weighting affects the proportion of requests directed to each server. For example, a server weighted '3' receives three times the number of connections as a server weighted '1'. You assign weights based on criteria of your choosing, such as each server's traffic-handling capacity.
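
    To make the proportion concrete, here is a minimal sketch in Python of weight-proportional rotation; it is not the service's actual implementation, and the backend names and weights are hypothetical.

        import itertools

        # Hypothetical backends: (name, assigned weight).
        backends = [("server-a", 3), ("server-b", 1)]

        def weighted_rotation(backends):
            # Expand each backend into one slot per unit of weight, so a
            # weight-3 server receives three times the connections of a
            # weight-1 server over a full cycle.
            slots = [name for name, weight in backends for _ in range(weight)]
            return itertools.cycle(slots)

        rotation = weighted_rotation(backends)
        print([next(rotation) for _ in range(4)])
        # ['server-a', 'server-a', 'server-a', 'server-b']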

    Load balancer policy decisions apply differently to TCP load balancers, cookie-based session persistent HTTP requests (sticky requests), and non-sticky HTTP requests.

    • A TCP load balancer considers policy and weight criteria to direct an initial incoming request to a backend server. All subsequent packets on this connection go to the same endpoint.
    • An HTTP load balancer configured to handle cookie-based session persistence forwards requests to the backend server specified by the cookie's session information.
    • For non-sticky HTTP requests, the load balancer applies policy and weight criteria to every incoming request and determines an appropriate backend server. Multiple requests from the same client could be directed to different servers.

    Round Robin

    Round Robin is the default load balancer policy. This policy distributes incoming traffic sequentially to each server in a backend set list. After each server has received a connection, the load balancer repeats the list in the same order.

    Round Robin is a simple load balancing algorithm. It works best when all the backend servers have similar capacity and the processing load required by each request does not vary significantly.
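
    A minimal sketch of the idea in Python; the backend addresses are made up, and the real service configures backend sets through the Console or API rather than in code like this.

        class RoundRobinPicker:
            """Cycle through the backend set in order, restarting from
            the top once every server has received a connection."""

            def __init__(self, servers):
                self.servers = servers
                self.next_index = 0

            def pick(self):
                server = self.servers[self.next_index]
                self.next_index = (self.next_index + 1) % len(self.servers)
                return server

        picker = RoundRobinPicker(["10.0.0.2:8080", "10.0.0.3:8080", "10.0.0.4:8080"])
        print([picker.pick() for _ in range(6)])
        # Each address appears twice, always in the same sequence.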

    Least Connections

    The Least Connections policy routes incoming non-sticky request traffic to the backend server with the fewest active connections. This policy helps you maintain an equal distribution of active connections with backend servers. As with the round robin policy, you can assign a weight to each backend server and further control traffic distribution.

     Tip

    In TCP use cases, a connection can be active but have no current traffic. Such connections do not serve as a good load metric.
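
    A minimal sketch of the selection step, assuming the balancer tracks an active-connection count per backend; the counts below are invented for illustration.

        # Hypothetical per-backend active-connection counts.
        active_connections = {
            "10.0.0.2:8080": 12,
            "10.0.0.3:8080": 7,
            "10.0.0.4:8080": 9,
        }

        def least_connections(conn_counts):
            # Choose the backend with the fewest active connections.
            # Per the tip above: an idle-but-open TCP connection still
            # counts, which is why this can be an imperfect load metric.
            return min(conn_counts, key=conn_counts.get)

        target = least_connections(active_connections)  # "10.0.0.3:8080"
        active_connections[target] += 1                 # assign the new request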

    IP Hash

    The IP Hash policy uses an incoming request's source IP address as a hashing key to route non-sticky traffic to the same backend server, so requests from a particular client are always directed to the same backend server as long as that server is available. This policy honors server weight settings when establishing the initial connection.

    You cannot add a backend server marked as Backup to a backend set that uses the IP Hash policy.

     Warning

    Multiple clients that connect to the load balancer through a proxy or NAT router appear to have the same IP address. If you apply the IP Hash policy to your backend set, the load balancer routes traffic based on the incoming IP address and sends these proxied client requests to the same backend server. If the proxied client pool is large, the requests could flood a backend server.
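
    A minimal sketch of source-IP hashing in Python; the hash function and modulo mapping are illustrative assumptions, not the service's documented algorithm.

        import hashlib

        backend_set = ["10.0.0.2:8080", "10.0.0.3:8080", "10.0.0.4:8080"]

        def ip_hash(client_ip, servers):
            # Hash the source IP and map it onto the available backends,
            # so the same client keeps landing on the same server.
            digest = hashlib.sha256(client_ip.encode()).hexdigest()
            return servers[int(digest, 16) % len(servers)]

        print(ip_hash("203.0.113.7", backend_set))
        print(ip_hash("203.0.113.7", backend_set))  # same backend every time
        # Clients behind one proxy or NAT share a source IP, so they all
        # hash to a single backend -- the flooding risk in the warning above.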

