  • Windows Concurrency Runtime: the Windows parallel programming model

    At first I assumed this was just another multitasking scheduler.

    It turns out to be closer to OpenMP or the Windows thread pool.


    Overview of the Concurrency Runtime (Microsoft's concurrency runtime)


    This document provides an overview of the Concurrency Runtime. It describes the benefits of the Concurrency Runtime, when to use it, and how its components interact with each other and with the operating system and applications.

    A runtime for concurrency provides uniformity and predictability to applications and application components that run simultaneously. Two examples of the benefits of the Concurrency Runtime are cooperative task scheduling and cooperative blocking.

    The Concurrency Runtime uses a cooperative task scheduler that implements a work-stealing algorithm to efficiently distribute work among computing resources. For example, consider an application that has two threads that are both managed by the same runtime. If one thread finishes its scheduled task, it can offload work from the other thread. This mechanism balances the overall workload of the application. (In other words, cooperative scheduling redistributes tasks among threads dynamically: with two threads, whichever finishes first can take over work queued for the other.)

    The Concurrency Runtime also provides synchronization primitives that use cooperative blocking to synchronize access to resources. For example, consider a task that must have exclusive access to a shared resource. By blocking cooperatively, the runtime can use the remaining quantum to perform another task as the first task waits for the resource. This mechanism promotes maximum usage of computing resources. (cooperative synchronization primitives)

    The Concurrency Runtime is divided into four components: the Parallel Patterns Library (PPL), the Asynchronous Agents Library, the Task Scheduler, and the Resource Manager. These components reside between the operating system and applications. The following illustration shows how the Concurrency Runtime components interact among the operating system and applications:

    [Figure: The Concurrency Runtime Architecture]

    The Concurrency Runtime is highly composable, that is, you can combine existing functionality to do more. The Concurrency Runtime composes many features, such as parallel algorithms, from lower-level components.

    The Concurrency Runtime also provides synchronization primitives that use cooperative blocking to synchronize access to resources. For more information about these synchronization primitives, see Synchronization Data Structures.

    The following sections provide a brief overview of what each component provides and when to use it.

    Parallel Patterns Library (fine-grained parallelism: data parallelism and task parallelism)

    The Parallel Patterns Library (PPL) provides general-purpose containers and algorithms for performing fine-grained parallelism. The PPL enables imperative data parallelism by providing parallel algorithms that distribute computations on collections or on sets of data across computing resources. It also enables task parallelism by providing task objects that distribute multiple independent operations across computing resources.

    Use the Parallel Patterns Library when you have a local computation that can benefit from parallel execution. For example, you can use the Concurrency::parallel_for algorithm to transform an existing for loop to act in parallel.

    For more information about the Parallel Patterns Library, see Parallel Patterns Library (PPL).

    Asynchronous Agents Library (coarse-grained parallelism: an actor-based programming model and message-passing interfaces)

    The Asynchronous Agents Library (or just Agents Library) provides both an actor-based programming model and message passing interfaces for coarse-grained dataflow and pipelining tasks. Asynchronous agents enable you to make productive use of latency by performing work as other components wait for data.

    Use the Agents Library when you have multiple entities that communicate with each other asynchronously. For example, you can create an agent that reads data from a file or network connection and then uses the message passing interfaces to send that data to another agent.

    For more information about the Agents Library, see Asynchronous Agents Library.

    Task Scheduler

    The Task Scheduler schedules and coordinates tasks at run time. The Task Scheduler is cooperative and uses a work-stealing algorithm to achieve maximum usage of processing resources.

    The Concurrency Runtime provides a default scheduler so that you do not have to manage infrastructure details. However, to meet the quality needs of your application, you can also provide your own scheduling policy or associate specific schedulers with specific tasks.

    For more information about the Task Scheduler, see Task Scheduler (Concurrency Runtime).

    Resource Manager

    The role of the Resource Manager is to manage computing resources, such as processors and memory. The Resource Manager responds to workloads as they change at run time by assigning resources to where they can be most effective.

    The Resource Manager serves as an abstraction over computing resources and primarily interacts with the Task Scheduler. Although you can use the Resource Manager to fine-tune the performance of your libraries and applications, you typically use the functionality that is provided by the Parallel Patterns Library, the Agents Library, and the Task Scheduler. These libraries use the Resource Manager to dynamically rebalance resources as workloads change.


    Many of the types and algorithms that are defined by the Concurrency Runtime are implemented as C++ templates. Some of these types and algorithms take as a parameter a routine that performs work. This parameter can be a lambda function, a function object, or a function pointer. These entities are also referred to as work functions or work routines.

    Lambda expressions are an important new Visual C++ language feature because they provide a succinct way to define work functions for parallel processing. Function objects and function pointers enable you to use the Concurrency Runtime with your existing code. However, we recommend that you use lambda expressions when you write new code because of the safety and productivity benefits that they provide.

    The following example compares the syntax of lambda functions, function objects, and function pointers in multiple calls to the Concurrency::parallel_for_each algorithm. Each call to parallel_for_each uses a different technique to compute the square of each element in a std::array object.

    // comparing-work-functions.cpp
    // compile with: /EHsc
    #include <ppl.h>
    #include <array>
    #include <iostream>
    
    using namespace Concurrency;
    using namespace std;
    
    // Function object (functor) class that computes the square of its input.
    template<class Ty>
    class SquareFunctor
    {
    public:
       void operator()(Ty& n) const
       {
          n *= n;
       }
    };
    
    // Function that computes the square of its input.
    template<class Ty>
    void square_function(Ty& n)
    {
       n *= n;
    }
    
    int wmain()
    {
       // Create an array object that contains 5 values.
       array<int, 5> values = { 1, 2, 3, 4, 5 };
    
       // Use a lambda function, a function object, and a function pointer to 
       // compute the square of each element of the array in parallel.
    
       // Use a lambda function to square each element.
       parallel_for_each(values.begin(), values.end(), [](int& n){n *= n;});
    
       // Use a function object (functor) to square each element.
       parallel_for_each(values.begin(), values.end(), SquareFunctor<int>());
    
       // Use a function pointer to square each element.
       parallel_for_each(values.begin(), values.end(), &square_function<int>);
    
       // Print each element of the array to the console.
       for_each(values.begin(), values.end(), [](int& n) { 
          wcout << n << endl;
       });
    }
    
    
    

    This example produces the following output.

    1
    256
    6561
    65536
    390625
    

    For more information about lambda functions in C++, see Lambda Expressions in C++.


    The following table shows the header files that are associated with each component of the Concurrency Runtime:

    Component                          Header Files
    ---------------------------------  ----------------------------------------------
    Parallel Patterns Library (PPL)    ppl.h, concurrent_queue.h, concurrent_vector.h
    Asynchronous Agents Library        agents.h
    Task Scheduler                     concrt.h
    Resource Manager                   concrtrm.h

    The Concurrency Runtime is declared in the Concurrency namespace. The Concurrency::details namespace supports the Concurrency Runtime framework, and is not intended to be used directly from your code.

    The Concurrency Runtime is provided as part of the C Runtime Library (CRT). For more information about how to build an application that uses the CRT, see C Run-Time Libraries.


    georgh says: TPL is much more powerful than OpenMP, because (1) it can rebalance work dynamically to keep the load even, and (2) it gives you control over how parent and child threads are scheduled. Worth a try.

    References:

    http://msdn.microsoft.com/en-us/library/dd504870(v=vs.100).aspx

    http://www.drdobbs.com/parallel/concurrency-runtime-crt-the-task-schedul/227500291?pgno=2
    http://technet.microsoft.com/zh-cn/library/ff461340

  • Original post: https://www.cnblogs.com/catkins/p/5270755.html