  • [Storm] Understanding Parallelism

    Tasks & executors relation 

    Q1. However, I'm a bit confused by the concept of "task". Is a task a running instance of a component (spout or bolt)? Does an executor having multiple tasks mean the same component is executed multiple times by that executor?

    A1: Yes, and yes 

    A task is simply an instance of a component (spout or bolt). During execution, the executor thread calls that task's nextTuple or execute method.

    Q2. Moreover, in a general parallelism sense, Storm will spawn a dedicated thread (executor) for a spout or bolt, but what does an executor (thread) having multiple tasks contribute to the parallelism?

    A2: Running more than one task per executor does not increase the level of parallelism -- an executor always has one thread that it uses for all of its tasks, which means that tasks run serially on an executor.

    Running multiple tasks does not increase parallelism, because an executor is a single thread, which means it executes all of its tasks serially.

    • The number of executor threads can be changed after the topology has been started (see storm rebalance command).
    • The number of tasks of a topology is static.

    And by definition, there is the invariant of #executors <= #tasks.

    The number of tasks in a topology is fixed, but the number of executors (threads) can be changed dynamically. By definition, #executors <= #tasks.
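    This relationship can be sketched in plain Java (illustrative names, not the Storm API): one executor thread serially invoking several task instances, which shows why adding tasks to a single executor adds no parallelism.

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch (not the Storm API) of one executor thread driving
// three task instances of the same bolt. Class and method names here
// are illustrative only.
public class ExecutorSketch {
    // A "task" is just an instance of the component's processing logic.
    static class BoltTask {
        final List<String> processed = new ArrayList<>();
        void execute(String tuple) { processed.add(tuple); }
    }

    // One executor = one thread; its tasks therefore run serially:
    // each tuple is dispatched to exactly one task, one tuple at a time.
    static List<BoltTask> run() {
        List<BoltTask> tasks = new ArrayList<>();
        for (int i = 0; i < 3; i++) tasks.add(new BoltTask());
        Thread executor = new Thread(() -> {
            String[] stream = {"a", "b", "c", "d", "e", "f"};
            for (int i = 0; i < stream.length; i++) {
                tasks.get(i % tasks.size()).execute(stream[i]); // round-robin dispatch
            }
        });
        executor.start();
        try {
            executor.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return tasks;
    }

    public static void main(String[] args) {
        List<BoltTask> tasks = run();
        for (int i = 0; i < tasks.size(); i++) {
            System.out.println("task " + i + " -> " + tasks.get(i).processed);
        }
    }
}
```

    Here 1 executor runs 3 tasks (#executors <= #tasks), but throughput is still that of a single thread.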

    So one reason for having 2+ tasks per executor thread is to give you the flexibility to expand/scale up the topology through the storm rebalance command in the future without taking the topology offline. For instance, imagine you start out with a Storm cluster of 15 machines but already know that next week another 10 boxes will be added. Here you could opt for running the topology at the anticipated parallelism level of 25 machines already on the 15 initial boxes (which is, of course, slower than 25 boxes). Once the additional 10 boxes are integrated you can then storm rebalance the topology to make full use of all 25 boxes without any downtime.

    Another reason to run 2+ tasks per executor is for (primarily functional) testing. For instance, if your dev machine or CI server is only powerful enough to run, say, 2 executors alongside all the other stuff running on the machine, you can still run 30 tasks (here: 15 per executor) to see whether code such as your custom Storm grouping is working as expected.

    Common reasons for running 2+ tasks per executor:

    • To give the topology the flexibility to scale up its parallelism later, while it is running
    • For (primarily functional) testing

    In practice, we normally run 1 task per executor.

    PS: Note that Storm will actually spawn a few more threads behind the scenes. For instance, each executor has its own "send thread" that is responsible for handling outgoing tuples. There are also "system-level" background threads for e.g. acking tuples that run alongside "your" threads. IIRC the Storm UI counts those acking threads in addition to "your" threads.

    In practice, we usually run with #executors = #tasks.

    How Storm partitions data

    We know that "a stream grouping defines how that stream should be partitioned among the bolt's tasks." Simply put, partitioning defines how data is distributed among a component's tasks. Storm provides the following partitioning algorithms for data flowing between spouts/bolts:

    1. Shuffle grouping: Tuples are randomly distributed across the bolt's tasks in a way such that each task is guaranteed to get an equal number of tuples.
    2. Fields grouping: The stream is partitioned by the fields specified in the grouping. For example, if the stream is grouped by the "user-id" field, tuples with the same "user-id" will always go to the same task, but tuples with different "user-id"'s may go to different tasks.
    3. All grouping: The stream is replicated across all the bolt's tasks. Use this grouping with care.
    4. Global grouping: The entire stream goes to a single one of the bolt's tasks. Specifically, it goes to the task with the lowest id.
    5. None grouping: This grouping specifies that you don't care how the stream is grouped. Currently, none groupings are equivalent to shuffle groupings. Eventually though, Storm will push down bolts with none groupings to execute in the same thread as the bolt or spout they subscribe from (when possible).
    6. Direct grouping: This is a special kind of grouping. A stream grouped this way means that the producer of the tuple decides which task of the consumer will receive this tuple. Direct groupings can only be declared on streams that have been declared as direct streams. Tuples emitted to a direct stream must be emitted using one of the [emitDirect](javadocs/backtype/storm/task/OutputCollector.html#emitDirect(int, int, java.util.List)) methods. A bolt can get the task ids of its consumers either by using the provided TopologyContext or by keeping track of the output of the emit method in OutputCollector (which returns the task ids that the tuple was sent to).
    7. Local or shuffle grouping: If the target bolt has one or more tasks in the same worker process, tuples will be shuffled to just those in-process tasks. Otherwise, this acts like a normal shuffle grouping.
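    A fields grouping (item 2 above) can be sketched as hashing the grouping field modulo the number of consumer tasks. Storm's actual implementation differs in detail; this simplified sketch only demonstrates the invariant that equal field values always reach the same task.

```java
import java.util.List;

// Simplified sketch of fields grouping: hash the grouping field and take
// it modulo the consumer's task count. Not Storm's real implementation;
// it only shows why tuples with the same "user-id" always go to the
// same task, while different "user-id"s may land on different tasks.
public class FieldsGroupingSketch {
    static int chooseTask(String userId, int numTasks) {
        // floorMod keeps the index non-negative even for negative hash codes
        return Math.floorMod(userId.hashCode(), numTasks);
    }

    public static void main(String[] args) {
        int numTasks = 4;
        for (String id : List.of("alice", "bob", "alice")) {
            System.out.println(id + " -> task " + chooseTask(id, numTasks));
        }
        // Same user-id always lands on the same task.
        assert chooseTask("alice", numTasks) == chooseTask("alice", numTasks);
    }
}
```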

    But how should the data entering a spout be partitioned? Storm itself does not answer this question. A few approaches:

      1. User-defined partitioning, e.g. shuffling the data based on some field

      2. Sidestep partitioning entirely: set the spout's parallelism to 1 and its task count to 1

      3. Run the spout with parallelism > 1 and let multiple threads consume the data competitively; this may require a lock to keep shared state consistent.
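    Approach 1 can be sketched as deterministic ownership: each of several spout instances takes only the records whose key hashes to its own index, so the instances never compete for the same record and no lock is needed. The names below are hypothetical, plain Java rather than the Storm spout API.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of approach 1: several spout instances share one source, and
// each instance consumes only the records whose key hashes to its index.
// Hypothetical names; this is plain Java, not the Storm spout API.
public class SpoutPartitionSketch {
    static boolean ownedBy(String key, int spoutIndex, int numSpouts) {
        return Math.floorMod(key.hashCode(), numSpouts) == spoutIndex;
    }

    static List<String> consume(List<String> source, int spoutIndex, int numSpouts) {
        List<String> taken = new ArrayList<>();
        for (String record : source) {
            if (ownedBy(record, spoutIndex, numSpouts)) taken.add(record);
        }
        return taken;
    }

    public static void main(String[] args) {
        List<String> source = List.of("u1", "u2", "u3", "u4", "u5", "u6");
        List<String> s0 = consume(source, 0, 2);
        List<String> s1 = consume(source, 1, 2);
        System.out.println("spout 0: " + s0);
        System.out.println("spout 1: " + s1);
        // Together the instances cover the source exactly once.
        assert s0.size() + s1.size() == source.size();
    }
}
```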

    Reference

    http://stackoverflow.com/questions/17257448/what-is-the-task-in-storm-parallelism

    http://www.cnblogs.com/yufengof/p/storm-worker-executor-task.html

    http://storm.apache.org/releases/0.9.6/Understanding-the-parallelism-of-a-Storm-topology.html

  • Original article: https://www.cnblogs.com/qingwen/p/5662829.html