  • Joblib -- a lightweight pipelining tool

    Joblib

    https://joblib.readthedocs.io/en/latest/index.html

    https://github.com/joblib/joblib

    A lightweight pipelining tool.

    (1) Memoization is transparent to use and lazily re-evaluates only when needed.

    (2) Simple parallel computing is made easy.

    Joblib is a set of tools to provide lightweight pipelining in Python. In particular:

    1. transparent disk-caching of functions and lazy re-evaluation (memoize pattern)
    2. easy simple parallel computing

    Joblib is optimized to be fast and robust on large data in particular and has specific optimizations for numpy arrays. It is BSD-licensed.

    Vision (in brief)

    Provide better performance, plus checkpoint-and-resume for long-running jobs.

    (1) Avoid computing the same thing twice -- better performance.

    (2) Persist to disk transparently -- minimal impact on existing code, and jobs can resume after a crash.

    Vision

    The vision is to provide tools to easily achieve better performance and reproducibility when working with long running jobs.

    • Avoid computing the same thing twice: code is often rerun again and again, for instance when prototyping computational-heavy jobs (as in scientific development), but hand-crafted solutions to alleviate this issue are error-prone and often lead to unreproducible results.
    • Persist to disk transparently: efficiently persisting arbitrary objects containing large data is hard. Using joblib’s caching mechanism avoids hand-written persistence and implicitly links the file on disk to the execution context of the original Python object. As a result, joblib’s persistence is good for resuming an application status or computational job, eg after a crash.

    Joblib addresses these problems while leaving your code and your flow control as unmodified as possible (no framework, no new paradigms).

    Main features (in brief)

    (1) Transparent and fast disk-caching of output values.

    (2) An excellent helper for embarrassingly parallel tasks.

    (3) Fast, compressed persistence.

    Main features

    1. Transparent and fast disk-caching of output value: a memoize or make-like functionality for Python functions that works well for arbitrary Python objects, including very large numpy arrays. Separate persistence and flow-execution logic from domain logic or algorithmic code by writing the operations as a set of steps with well-defined inputs and outputs: Python functions. Joblib can save their computation to disk and rerun it only if necessary:

      >>> from joblib import Memory
      >>> cachedir = 'your_cache_dir_goes_here'
      >>> mem = Memory(cachedir)
      >>> import numpy as np
      >>> a = np.vander(np.arange(3)).astype(float)
      >>> square = mem.cache(np.square)
      >>> b = square(a)                                   
      ________________________________________________________________________________
      [Memory] Calling square...
      square(array([[0., 0., 1.],
             [1., 1., 1.],
             [4., 2., 1.]]))
      ___________________________________________________________square - 0...s, 0.0min
      
      >>> c = square(a)
      >>> # The above call did not trigger an evaluation
      
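    Memory.cache can also be applied as a decorator, which keeps caching concerns out of the call sites. A minimal sketch (the cache directory here is a throwaway temp dir chosen for illustration; a real application would use a persistent path):

    ```python
    from tempfile import mkdtemp
    from joblib import Memory

    # Hypothetical cache location; use a persistent directory in real code.
    mem = Memory(mkdtemp(), verbose=0)

    calls = []

    @mem.cache
    def expensive(x):
        calls.append(x)          # record each real evaluation
        return x ** 2

    print(expensive(4))          # first call: evaluated and written to disk
    print(expensive(4))          # second call: read back from the cache
    print(len(calls))            # the function body ran only once
    ```

    The second call returns the cached result without re-running the function body, which is the memoize pattern the docs describe.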
    2. Embarrassingly parallel helper: to make it easy to write readable parallel code and debug it quickly:

      >>> from joblib import Parallel, delayed
      >>> from math import sqrt
      >>> Parallel(n_jobs=1)(delayed(sqrt)(i**2) for i in range(10))
      [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0]
      
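    The same helper scales beyond a single worker by changing n_jobs, and the execution backend can be selected explicitly. A small sketch, assuming the default process-based "loky" backend is available:

    ```python
    from math import sqrt
    from joblib import Parallel, delayed

    # n_jobs=2 uses two workers; n_jobs=-1 would use all available cores.
    # "loky" (process-based) is joblib's default backend; "threading"
    # avoids pickling overhead when the workload releases the GIL.
    result = Parallel(n_jobs=2, backend="loky")(
        delayed(sqrt)(i ** 2) for i in range(10)
    )
    print(result)
    ```

    Results come back in submission order, so the output matches the n_jobs=1 run above.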
    3. Fast compressed Persistence: a replacement for pickle to work efficiently on Python objects containing large data ( joblib.dump & joblib.load ).

    Persistence

    https://joblib.readthedocs.io/en/latest/persistence.html

    This example uses simple data types.

    The same mechanism works for arbitrary objects.

    For example, persisting a trained machine-learning model, as in:

    https://blog.csdn.net/weixin_45252110/article/details/98883571

    A simple example

    First create a temporary directory:

    >>> from tempfile import mkdtemp
    >>> savedir = mkdtemp()
    >>> import os
    >>> filename = os.path.join(savedir, 'test.joblib')
    

    Then create an object to be persisted:

    >>> import numpy as np
    >>> to_persist = [('a', [1, 2, 3]), ('b', np.arange(10))]
    

    which is saved into filename:

    >>> import joblib
    >>> joblib.dump(to_persist, filename)  
    ['...test.joblib']
    

    The object can then be reloaded from the file:

    >>> joblib.load(filename)
    [('a', [1, 2, 3]), ('b', array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]))]
    
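    Compression is controlled by the compress argument to joblib.dump, and a compressor can also be inferred from the file suffix (e.g. .gz). A minimal sketch with a hypothetical file name:

    ```python
    import os
    from tempfile import mkdtemp

    import numpy as np
    import joblib

    # Hypothetical path; the .gz suffix selects gzip compression.
    filename = os.path.join(mkdtemp(), 'data.joblib.gz')

    data = np.arange(1000)
    # compress=3 trades CPU time for a smaller file on disk.
    joblib.dump(data, filename, compress=3)

    restored = joblib.load(filename)
    print(np.array_equal(data, restored))
    ```

    For arrays with repetitive content the compressed file can be much smaller than a plain pickle, at a modest cost in dump/load time.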

    Pipelining -- based on parallel API

    https://joblib.readthedocs.io/en/latest/auto_examples/parallel_random_state.html#sphx-glr-auto-examples-parallel-random-state-py

    With the Parallel API, a set of independent tasks can be executed concurrently.

    Parallel returns only after all tasks have finished; execution then continues with the code that follows.

    import numpy as np
    from joblib import Parallel, delayed

    # Helper from the joblib example: each task seeds its own RandomState,
    # so results are reproducible across runs and backends.
    def stochastic_function_seeded(max_value, random_state):
        rng = np.random.RandomState(random_state)
        return rng.randint(max_value, size=5)

    backend = 'loky'
    random_state = np.random.randint(np.iinfo(np.int32).max, size=5)

    random_vector = Parallel(n_jobs=2, backend=backend)(delayed(
        stochastic_function_seeded)(10, rng) for rng in random_state)
    print(random_vector, backend)

    # Re-running with the same seeds yields the same vectors.
    random_vector = Parallel(n_jobs=2, backend=backend)(delayed(
        stochastic_function_seeded)(10, rng) for rng in random_state)
    print(random_vector, backend)
  • Original post: https://www.cnblogs.com/lightsong/p/13815862.html