Jittor Framework API

This is the API documentation for Jittor's main module, which can be obtained via import jittor.

class jittor.ExitHooks

exc_handler(exc_type, exc, *args)

    exit(code=0)

    hook()

class jittor.Function(*args, **kw)

    Function Module for customized backward operations

Example 1 (a Function can have multiple inputs and multiple outputs, and the user can store values for the backward computation):

    import jittor as jt

    from jittor import Function

     

    class MyFunc(Function):

        def execute(self, x, y):

            self.x = x

            self.y = y

            return x*y, x/y

     

        def grad(self, grad0, grad1):

            return grad0 * self.y, grad1 * self.x

    a = jt.array(3.0)

    b = jt.array(4.0)

    func = MyFunc()

    c,d = func(a, b)

    da, db = jt.grad(c+d*3, [a, b])

    assert da.data == 4

    assert db.data == 9

Example 2 (a Function can return None for no gradient, and an incoming gradient can also be None):

    import jittor as jt

    from jittor import Function

     

    class MyFunc(Function):

        def execute(self, x, y):

            self.x = x

            self.y = y

            return x*y, x/y

     

        def grad(self, grad0, grad1):

            assert grad1 is None

            return grad0 * self.y, None

    a = jt.array(3.0)

    b = jt.array(4.0)

    func = MyFunc()

    c,d = func(a, b)

    d.stop_grad()

    da, db = jt.grad(c+d*3, [a, b])

    assert da.data == 4

    assert db.data == 0

classmethod apply(*args, **kw)

dfs(parents, k, callback, callback_leave=None)

class jittor.Module(*args, **kw)

    apply(func)

    children()

dfs(parents, k, callback, callback_leave=None)

eval()

execute(*args, **kw)

    extra_repr()

    is_training()

    load(path)

    load_parameters(params)

    load_state_dict(params)

    modules()

    mpi_param_broadcast(root=0)

    named_modules()

    named_parameters()

    parameters()

    register_forward_hook(func)

    register_pre_forward_hook(func)

    save(path)

    state_dict()

    train()

jittor.argmax(x, dim, keepdims: bool = False)

jittor.argmin(x, dim, keepdims: bool = False)

jittor.array(data, dtype=None)

jittor.attrs(var)

jittor.clamp(x, min_v=None, max_v=None)
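
A minimal illustrative sketch of clamp (element-wise clipping of x into [min_v, max_v]):

import jittor as jt

x = jt.array([-1.5, 0.3, 2.7])
y = jt.clamp(x, min_v=0.0, max_v=1.0)
print(y)  # expected: [0.0, 0.3, 1.0]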

    jittor.clean()

    jittor.detach(x)

    jittor.dirty_fix_pytorch_runtime_error()

This function should be called before importing PyTorch.

    Example:

    import jittor as jt

    jt.dirty_fix_pytorch_runtime_error()

    import torch

    jittor.display_memory_info()

    jittor.fetch(*args)

    Async fetch vars with function closure.

    Example 1:

for img, label in your_dataset:

        pred = your_model(img)

        loss = critic(pred, label)

        acc = accuracy(pred, label)

        jt.fetch(acc, loss,

            lambda acc, loss:

                print(f"loss:{loss} acc:{acc}"

        )

    Example 2:

    for i,(img,label) in enumerate(your_dataset):

        pred = your_model(img)

        loss = critic(pred, label)

        acc = accuracy(pred, label)

# variable i will be bound into the function closure

        jt.fetch(i, acc, loss,

            lambda i, acc, loss:

                print(f"#{i}, loss:{loss} acc:{acc}"

        )

class jittor.flag_scope(**jt_flags)
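
flag_scope temporarily overrides jittor flags inside a with-block and restores them on exit. A minimal sketch, using the log_v flag documented below:

import jittor as jt

with jt.flag_scope(log_v=1):
    # verbose logging is enabled only inside this block
    a = jt.array([1.0, 2.0]) + 1
# previous flag values are restored here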

jittor.flatten(input, start_dim=0, end_dim=-1)

flatten dimensions by reshape
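
An illustrative sketch, assuming the reshape-based semantics above:

import jittor as jt

x = jt.random((2, 3, 4))
print(jt.flatten(x).shape)               # all dims merged: (24,)
print(jt.flatten(x, start_dim=1).shape)  # dims 1..-1 merged: (2, 12)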

jittor.format(v, spec)

jittor.full(shape, val, dtype='float32')

jittor.full_like(x, val)
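
A minimal sketch of full and full_like:

import jittor as jt

a = jt.full((2, 3), 7.0)   # 2x3 var filled with 7.0
b = jt.full_like(a, 0.5)   # same shape as a, filled with 0.5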

    jittor.get_len(var)

jittor.grad(loss, targets)
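
A minimal sketch of jt.grad; the returned list matches the order of targets:

import jittor as jt

x = jt.array([1.0, 2.0, 3.0])
loss = (x * x).sum()      # loss = sum(x^2)
dx, = jt.grad(loss, [x])  # d(loss)/dx = 2*x
print(dx)                 # expected: [2, 4, 6]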

    jittor.jittor_exit()

    jittor.liveness_info()

    jittor.load(path)

class jittor.log_capture_scope(**jt_flags)

    log capture scope

    example:

    with jt.log_capture_scope(log_v=0) as logs:

        LOG.v("...")

    print(logs)

jittor.make_module(func, exec_n_args=1)

jittor.masked_fill(x, mask, value)
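
An illustrative sketch, assuming elements of x where mask is true are replaced by value:

import jittor as jt

x = jt.array([1.0, 2.0, 3.0])
mask = jt.array([False, True, False])
print(jt.masked_fill(x, mask, -1.0))  # expected: [1, -1, 3]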

class jittor.no_grad(**jt_flags)

no_grad scope: all variables created inside this scope will stop gradient computation.

    Example:

    import jittor as jt

     

    with jt.no_grad():

        ...

jittor.norm(x, k, dim)

jittor.normal(mean, std, size=None, dtype='float32')

jittor.ones(shape, dtype='float32')

    jittor.ones_like(x)

jittor.permute(x, *dim)

    Declaration: VarHolder* transpose(VarHolder* x, NanoVector axes=NanoVector())
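
An illustrative sketch; per the declaration above, permute is transpose with an explicit axis order:

import jittor as jt

x = jt.random((2, 3, 4))
y = jt.permute(x, 2, 0, 1)  # move the last axis to the front
print(y.shape)              # expected: (4, 2, 3)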

jittor.pow(x, y)

class jittor.profile_scope(warmup=0, rerun=0, **jt_flags)

    profile scope

    example:

    with jt.profile_scope() as report:

        ......

    print(report)

jittor.rand(*size, dtype='float32', requires_grad=False)

jittor.randn(*size, dtype='float32', requires_grad=False)

jittor.reshape(x, *shape)

    Declaration: VarHolder* reshape(VarHolder* x, NanoVector shape)

jittor.safepickle(obj, path)

    jittor.safeunpickle(path)

jittor.save(params_dict, path)

    jittor.single_process_scope(rank=0)

Code in this scope will only be executed by a single process.

All MPI code inside this scope has no effect: mpi.world_rank() and mpi.local_rank() will return 0, and mpi.world_size() will return 1.

    example:

    @jt.single_process_scope(rank=0)

    def xxx():

        ...

jittor.size(v, dim=None)

jittor.sqr(x)

jittor.squeeze(x, dim)

    jittor.start_grad(x)

    jittor.std(x)

    jittor.to_bool(v)

    jittor.to_float(v)

    jittor.to_int(v)

jittor.transpose(x, *dim)

    Declaration: VarHolder* transpose(VarHolder* x, NanoVector axes=NanoVector())

jittor.type_as(a, b)

jittor.unsqueeze(x, dim)
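
An illustrative sketch of unsqueeze, together with squeeze (documented above) as its inverse:

import jittor as jt

x = jt.random((3, 4))
y = jt.unsqueeze(x, 0)         # insert a new dim at position 0
print(y.shape)                 # expected: (1, 3, 4)
print(jt.squeeze(y, 0).shape)  # remove it again: (3, 4)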

jittor.view(x, *shape)

    Declaration: VarHolder* reshape(VarHolder* x, NanoVector shape)

    jittor.vtos(v)

jittor.zeros(shape, dtype='float32')

    jittor.zeros_like(x)

    jittor.core

The following is Jittor's core API, which can be accessed directly via jittor.core.XXX or jittor.XXX.

class jittor_core.DumpGraphs

    inputs

    Declaration: vector<vector<int>> inputs;

    nodes_info

    Declaration: vector<string> nodes_info;

    outputs

    Declaration: vector<vector<int>> outputs;

class jittor_core.MemInfo

    total_cpu_ram

    Declaration: int64 total_cpu_ram;

    total_cuda_ram

    Declaration: int64 total_cuda_ram;

class jittor_core.NanoString

class jittor_core.NanoVector

    append()

    Declaration: inline void push_back_check_overflow(int64 v)

class jittor_core.RingBuffer

    clear()

    Declaration: inline void clear()

    is_stop()

    Declaration: inline bool is_stop()

    keep_numpy_array()

    Declaration: inline void keep_numpy_array(bool keep)

    pop()

    Declaration: PyObject* pop()

    push()

    Declaration: void push(PyObject* obj)

    recv()

    Declaration: PyObject* pop()

    send()

    Declaration: void push(PyObject* obj)

    stop()

    Declaration: inline void stop()

    total_pop()

    Declaration: inline uint64 total_pop()

    total_push()

    Declaration: inline uint64 total_push()

    jittor_core.Var

Alias of jittor_core.jittor_core.Var

    jittor_core.cleanup()

    Declaration: void cleanup()

    jittor_core.clear_trace_data()

    Declaration: void clear_trace_data()

    jittor_core.display_memory_info()

Declaration: void display_memory_info(const char* fileline="", bool dump_var=false, bool red_color=false)

    jittor_core.dump_all_graphs()

    Declaration: DumpGraphs dump_all_graphs()

    jittor_core.dump_trace_data()

    Declaration: PyObject* dump_trace_data()

    jittor_core.fetch_sync()

    Declaration: vector<ArrayArgs> fetch_sync(const vector<VarHolder*>& vh)

class jittor_core.flags

    addr2line_path

    Document:

addr2line_path(type:string, default:""): Path of addr2line.

    Declaration: string _get_addr2line_path()

    cache_path

    Document:

cache_path(type:string, default:""): Cache path of jittor

    Declaration: string _get_cache_path()

    cc_flags

    Document:

cc_flags(type:string, default:""): Flags of C++ compiler

    Declaration: string _get_cc_flags()

    cc_path

    Document:

cc_path(type:string, default:""): Path of C++ compiler

    Declaration: string _get_cc_path()

    cc_type

    Document:

cc_type(type:string, default:""): Type of C++ compiler (clang, icc, g++)

    Declaration: string _get_cc_type()

    check_graph

    Document:

    check_graph(type:int, default:0): Unify graph sanity check.

    Declaration: int _get_check_graph()

    compile_options

    Document:

compile_options(type:fast_shared_ptr<loop_options_t>, default:{}): Override the default loop transform options

    Declaration: fast_shared_ptr<loop_options_t> _get_compile_options()

    cuda_archs

    Document:

    cuda_archs(type:vector<int>, default:{}): Cuda arch

    Declaration: vector<int> _get_cuda_archs()

    enable_tuner

    Document:

    enable_tuner(type:int, default:1): Enable tuner.

    Declaration: int _get_enable_tuner()

    exclude_pass

    Document:

exclude_pass(type:string, default:""): Don't run a certain pass.

    Declaration: string _get_exclude_pass()

    extra_gdb_cmd

    Document:

extra_gdb_cmd(type:string, default:""): Extra commands passed to GDB, separated by ';'.

    Declaration: string _get_extra_gdb_cmd()

    gdb_attach

    Document:

gdb_attach(type:int, default:0): Attach GDB to this process.

    Declaration: int _get_gdb_attach()

    gdb_path

    Document:

gdb_path(type:string, default:""): Path of GDB.

    Declaration: string _get_gdb_path()

    has_pybt

    Document:

    has_pybt(type:int, default:0): GDB has pybt or not.

    Declaration: int _get_has_pybt()

    jit_search_kernel

    Document:

    jit_search_kernel(type:int, default:0): Jit search for the fastest kernel.

    Declaration: int _get_jit_search_kernel()

    jit_search_rerun

    Document:

    jit_search_rerun(type:int, default:10):

    Declaration: int _get_jit_search_rerun()

    jit_search_warmup

    Document:

    jit_search_warmup(type:int, default:2):

    Declaration: int _get_jit_search_warmup()

    jittor_path

    Document:

jittor_path(type:string, default:""): Source path of jittor

    Declaration: string _get_jittor_path()

    l1_cache_size

    Document:

    l1_cache_size(type:int, default:32768): size of level 1 cache (byte)

    Declaration: int _get_l1_cache_size()

    lazy_execution

    Document:

lazy_execution(type:int, default:1): Enabled by default. If disabled, eager execution is used immediately instead of lazy execution; this makes error messages and traceback information better, but raises memory consumption and lowers performance.

    Declaration: int _get_lazy_execution()

    log_silent

    Document:

    log_silent(type:int, default:0): The log will be completely silent.

    Declaration: int _get_log_silent()

    log_sync

    Document:

    log_sync(type:int, default:0): Set log printed synchronously.

    Declaration: int _get_log_sync()

    log_v

    Document:

    log_v(type:int, default:0): Verbose level of logging

    Declaration: int _get_log_v()

    log_vprefix

    Document:

log_vprefix(type:string, default:""): Verbose level of logging prefix

example: log_vprefix='op=1,node=2,executor.cc:38$=1000' Declaration: string _get_log_vprefix()

    no_grad

    Document:

    no_grad(type:bool, default:0): No grad for all jittor Var creation

    Declaration: bool _get_no_grad()

    nvcc_flags

    Document:

nvcc_flags(type:string, default:""): Flags of CUDA C++ compiler

    Declaration: string _get_nvcc_flags()

    nvcc_path

    Document:

nvcc_path(type:string, default:""): Path of CUDA C++ compiler

    Declaration: string _get_nvcc_path()

    profiler_enable

    Document:

    profiler_enable(type:int, default:0): Enable profiler.

    Declaration: int _get_profiler_enable()

    profiler_hide_relay

    Document:

    profiler_hide_relay(type:int, default:0): Profiler hide relayed op.

    Declaration: int _get_profiler_hide_relay()

    profiler_rerun

    Document:

    profiler_rerun(type:int, default:0): Profiler rerun.

    Declaration: int _get_profiler_rerun()

    profiler_warmup

    Document:

    profiler_warmup(type:int, default:0): Profiler warmup.

    Declaration: int _get_profiler_warmup()

    python_path

    Document:

python_path(type:string, default:""): Path of python interpreter

    Declaration: string _get_python_path()

    rewrite_op

    Document:

    rewrite_op(type:int, default:1): Rewrite source file of jit operator or not

    Declaration: int _get_rewrite_op()

    stat_allocator_total_alloc_byte

    Document:

stat_allocator_total_alloc_byte(type:size_t, default:0): Total allocated bytes

    Declaration: size_t _get_stat_allocator_total_alloc_byte()

    stat_allocator_total_alloc_call

    Document:

stat_allocator_total_alloc_call(type:size_t, default:0): Number of alloc function calls

    Declaration: size_t _get_stat_allocator_total_alloc_call()

    stat_allocator_total_free_byte

    Document:

stat_allocator_total_free_byte(type:size_t, default:0): Total freed bytes

    Declaration: size_t _get_stat_allocator_total_free_byte()

    stat_allocator_total_free_call

    Document:

stat_allocator_total_free_call(type:size_t, default:0): Number of free function calls

    Declaration: size_t _get_stat_allocator_total_free_call()

    trace_depth

    Document:

    trace_depth(type:int, default:10): trace depth for GDB.

    Declaration: int _get_trace_depth()

    trace_py_var

    Document:

    trace_py_var(type:int, default:0): Trace py stack max depth for debug.

    Declaration: int _get_trace_py_var()

    try_use_32bit_index

    Document:

    try_use_32bit_index(type:int, default:0): If not overflow, try to use 32 bit type as index type.

    Declaration: int _get_try_use_32bit_index()

    update_queue_auto_flush_delay

    Document:

update_queue_auto_flush_delay(type:int, default:2): when the size of an update queue is greater than this value, the update queue triggers an auto flush (default 2).

    Declaration: int _get_update_queue_auto_flush_delay()

    use_cuda

    Document:

use_cuda(type:int, default:0): Whether to use CUDA: 1 to try to use CUDA, 2 to force CUDA.

    Declaration: int _get_use_cuda()
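
This flag is normally set from Python, as in the CUDA examples later in this document:

import jittor as jt

jt.flags.use_cuda = 1  # 1: try to use CUDA; 2: force CUDA; 0: CPU only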

    use_cuda_managed_allocator

    Document:

    use_cuda_managed_allocator(type:int, default:1): Enable cuda_managed_allocator

    Declaration: int _get_use_cuda_managed_allocator()

    use_nfef_allocator

    Document:

    use_nfef_allocator(type:int, default:0): Enable never free exact fit allocator

    Declaration: int _get_use_nfef_allocator()

    use_parallel_op_compiler

    Document:

use_parallel_op_compiler(type:int, default:16): Number of threads used by the parallel op compiler (default 16); setting this value to 0 disables the parallel op compiler.

    Declaration: int _get_use_parallel_op_compiler()

    use_sfrl_allocator

    Document:

    use_sfrl_allocator(type:int, default:1): Enable sfrl allocator

    Declaration: int _get_use_sfrl_allocator()

    use_stat_allocator

    Document:

    use_stat_allocator(type:int, default:0): Enable stat allocator

    Declaration: int _get_use_stat_allocator()

    jittor_core.gc()

    Declaration: void gc_all()

    jittor_core.get_device_count()

    Declaration: inline int get_device_count()

    jittor_core.get_mem_info()

    Declaration: inline MemInfo get_mem_info()

    jittor_core.grad()

    Declaration: vector<VarHolder*> _grad(VarHolder* loss, const vector<VarHolder*>& targets)

    jittor_core.graph_check()

    Declaration: void do_graph_check()

    jittor_core.hash()

    Document:

    simple hash function

    Declaration: inline uint hash(const char* input)

    jittor_core.number_of_hold_vars()

    Declaration: inline static uint64 get_number_of_hold_vars()

    jittor_core.number_of_lived_ops()

    Declaration: inline static int64 get_number_of_lived_ops()

    jittor_core.number_of_lived_vars()

    Declaration: inline static int64 get_number_of_lived_vars()

    jittor_core.print_trace()

    Declaration: inline static void __print_trace()

    jittor_core.seed()

    Declaration: void set_seed(int seed)

    jittor_core.set_lock_path()

    Declaration: void set_lock_path(string path)

    jittor_core.set_seed()

    Declaration: void set_seed(int seed)

    jittor_core.sync()

    Declaration: void sync(const vector<VarHolder*>& vh=vector<VarHolder*>(), bool device_sync=false)

    jittor_core.sync_all()

    Declaration: void sync_all(bool device_sync=false)
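
Since execution is lazy by default (see the lazy_execution flag above), a sketch of using sync_all to wait for all queued operations, e.g. when timing code (accessible as jt.sync_all per the note at the top of this core section):

import jittor as jt
import time

x = jt.random((1000, 1000))
start = time.time()
y = (x * x).sum()
jt.sync_all(device_sync=True)  # block until all pending ops (and the device) finish
print("elapsed:", time.time() - start)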

    jittor_core.tape_together()

    Declaration: void tape_together(

    const vector<VarHolder*>& taped_inputs, const vector<VarHolder*>& taped_outputs, GradCallback&& grad_callback

    )

    jittor.ops

This is the API documentation for Jittor's basic operator module, which can be accessed directly via jittor.ops.XXX or jittor.XXX.

    jittor_core.ops.abs()

    Declaration: VarHolder* abs(VarHolder* x)

    jittor_core.ops.acos()

    Declaration: VarHolder* acos(VarHolder* x)

    jittor_core.ops.acosh()

    Declaration: VarHolder* acosh(VarHolder* x)

    jittor_core.ops.add()

    Declaration: VarHolder* add(VarHolder* x, VarHolder* y)

    jittor_core.ops.all_()

    Declaration: VarHolder* reduce_logical_and(VarHolder* x, int dim, bool keepdims=false) Declaration: VarHolder* reduce_logical_and_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false) Declaration: VarHolder* reduce_logical_and__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.ops.any_()

    Declaration: VarHolder* reduce_logical_or(VarHolder* x, int dim, bool keepdims=false) Declaration: VarHolder* reduce_logical_or_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false) Declaration: VarHolder* reduce_logical_or__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.ops.arccos()

    Declaration: VarHolder* acos(VarHolder* x)

    jittor_core.ops.arccosh()

    Declaration: VarHolder* acosh(VarHolder* x)

    jittor_core.ops.arcsin()

    Declaration: VarHolder* asin(VarHolder* x)

    jittor_core.ops.arcsinh()

    Declaration: VarHolder* asinh(VarHolder* x)

    jittor_core.ops.arctan()

    Declaration: VarHolder* atan(VarHolder* x)

    jittor_core.ops.arctanh()

    Declaration: VarHolder* atanh(VarHolder* x)

    jittor_core.ops.arg_reduce()

    Declaration: vector<VarHolder*> arg_reduce(VarHolder* x, NanoString op, int dim, bool keepdims)

    jittor_core.ops.argsort()

    Document: *

Argsort Operator performs an indirect sort by a given key or compare function.

    x is input, y is output index, satisfy:

    x[y[0]] <= x[y[1]] <= x[y[2]] <= … <= x[y[n]]

    or

    key(y[0]) <= key(y[1]) <= key(y[2]) <= … <= key(y[n])

    or

    compare(y[0], y[1]) && compare(y[1], y[2]) && …

• [in] x: input var for sort
• [in] dim: the dim to sort along
• [in] descending: whether the elements are sorted in descending order (default False).
• [in] dtype: type of return indexes
• [out] index: index has the same size as the sorted dim
• [out] value: sorted value

    Example:

    index, value = jt.argsort([11,13,12])

    # return [0 2 1], [11 12 13]

    index, value = jt.argsort([11,13,12], descending=True)

    # return [1 2 0], [13 12 11]

    index, value = jt.argsort([[11,13,12], [12,11,13]])

    # return [[0 2 1],[1 0 2]],  [[11 12 13],[11 12 13]]

    index, value = jt.argsort([[11,13,12], [12,11,13]], dim=0)

    # return [[0 1 0],[1 0 1]],  [[11 11 12],[12 13 13]]

    Declaration: vector<VarHolder*> argsort(VarHolder* x, int dim=-1, bool descending=false, NanoString dtype=ns_int32)

    jittor_core.ops.array()

    Declaration: VarHolder* array__(PyObject* obj)

    jittor_core.ops.array_()

    Declaration: VarHolder* array_(ArrayArgs&& args)

    jittor_core.ops.asin()

    Declaration: VarHolder* asin(VarHolder* x)

    jittor_core.ops.asinh()

    Declaration: VarHolder* asinh(VarHolder* x)

    jittor_core.ops.atan()

    Declaration: VarHolder* atan(VarHolder* x)

    jittor_core.ops.atanh()

    Declaration: VarHolder* atanh(VarHolder* x)

    jittor_core.ops.binary()

    Declaration: VarHolder* binary(VarHolder* x, VarHolder* y, NanoString p)

    jittor_core.ops.bitwise_and()

    Declaration: VarHolder* bitwise_and(VarHolder* x, VarHolder* y)

    jittor_core.ops.bitwise_not()

    Declaration: VarHolder* bitwise_not(VarHolder* x)

    jittor_core.ops.bitwise_or()

    Declaration: VarHolder* bitwise_or(VarHolder* x, VarHolder* y)

    jittor_core.ops.bitwise_xor()

    Declaration: VarHolder* bitwise_xor(VarHolder* x, VarHolder* y)

    jittor_core.ops.bool()

    Declaration: VarHolder* bool_(VarHolder* x)

    jittor_core.ops.broadcast()

    Declaration: VarHolder* broadcast_to(VarHolder* x, NanoVector shape, NanoVector dims=NanoVector()) Declaration: VarHolder* broadcast_to_(VarHolder* x, VarHolder* y, NanoVector dims=NanoVector())

    jittor_core.ops.broadcast_var()

    Declaration: VarHolder* broadcast_to_(VarHolder* x, VarHolder* y, NanoVector dims=NanoVector())

    jittor_core.ops.candidate()

    Document: *

Candidate Operator performs an indirect candidate filter given a fail condition.

    x is input, y is output index, satisfy:

    not fail_cond(y[0], y[1]) and

    not fail_cond(y[0], y[2]) and not fail_cond(y[1], y[2]) and

    ...

    ... and not fail_cond(y[m-2], y[m-1])

    Where m is number of selected candidates.

    Pseudo code:

    y = []

    for i in range(n):

        pass = True

        for j in y:

            if (@fail_cond):

pass = False

                break

        if (pass):

            y.append(i)

    return y

    • [in] x: input var for filter
    • [in] fail_cond: code for fail condition
    • [in] dtype: type of return indexes
    • [out] index: .

    Example:

jt.candidate(jt.random((100,2)), '(@x(j,0)>@x(i,0))or(@x(j,1)>@x(i,1))')

    # return y satisfy:

    #    x[y[0], 0] <= x[y[1], 0] and x[y[1], 0] <= x[y[2], 0] and ... and x[y[m-2], 0] <= x[y[m-1], 0] and

    #    x[y[0], 1] <= x[y[1], 1] and x[y[1], 1] <= x[y[2], 1] and ... and x[y[m-2], 1] <= x[y[m-1], 1]

    Declaration: VarHolder* candidate(VarHolder* x, string&& fail_cond, NanoString dtype=ns_int32)

    jittor_core.ops.cast()

    Declaration: VarHolder* unary(VarHolder* x, NanoString op)

    jittor_core.ops.ceil()

    Declaration: VarHolder* ceil(VarHolder* x)

    jittor_core.ops.clone()

    Declaration: VarHolder* clone(VarHolder* x)

    jittor_core.ops.code()

    Document: *

    Code Operator for easily customized op.

• [in] shape: the output shape, an integer array
• [in] dtype: the output data type
• [in] inputs: A list of input jittor Vars
• [in] cpu_src: cpu source code string, built-in values:
  • in{x}, in{x}_shape{y}, in{x}_stride{y}, in{x}_type, in{x}_p, @in0(…)
  • out{x}, out{x}_shape{y}, out{x}_stride{y}, out{x}_type, out{x}_p, @out0(…)
  • out, out_shape{y}, out_stride{y}, out_type, out_p, @out(…)
• [in] cpu_header: cpu header code string.
• [in] cuda_src: cuda source code string.
• [in] cuda_header: cuda header code string.

    Example-1:

    from jittor import Function

    import jittor as jt

     

    class Func(Function):

        def execute(self, x):

            self.save_vars = x

            return jt.code(x.shape, x.dtype, [x],

                cpu_src='''

                    for (int i=0; i<in0_shape0; i++)

                        @out(i) = @in0(i)*@in0(i)*2;

                ''')

     

        def grad(self, grad_x):

            x = self.save_vars

            return jt.code(x.shape, x.dtype, [x, grad_x],

                cpu_src='''

                    for (int i=0; i<in0_shape0; i++)

                        @out(i) = @in1(i)*@in0(i)*4;

                ''')

     

    a = jt.random([10])

    func = Func()

    b = func(a)

    print(b)

    print(jt.grad(b,a))

    Example-2:

    a = jt.array([3,2,1])

    b = jt.code(a.shape, a.dtype, [a],

        cpu_header="""

            #include <algorithm>

            @alias(a, in0)

            @alias(b, out)

        """,

        cpu_src="""

            for (int i=0; i<a_shape0; i++)

                @b(i) = @a(i);

            std::sort(&@b(0), &@b(in0_shape0));

        """

    )

    assert (b.data==[1,2,3]).all()

    Example-3:

    #This example shows how to set multiple outputs in code op.

    a = jt.array([3,2,1])

    b,c = jt.code([(1,), (1,)], [a.dtype, a.dtype], [a],

        cpu_header="""

            #include <iostream>

            using namespace std;

        """,

        cpu_src="""

            @alias(a, in0)

            @alias(b, out0)

            @alias(c, out1)

            @b(0) = @c(0) = @a(0);

            for (int i=0; i<a_shape0; i++) {

                @b(0) = std::min(@b(0), @a(i));

                @c(0) = std::max(@c(0), @a(i));

            }

            cout << "min:" << @b(0) << " max:" << @c(0) << endl;

        """

    )

    assert b.data == 1, b

    assert c.data == 3, c

    Example-4:

    #This example shows how to use dynamic shape of jittor variables.

    a = jt.array([5,-4,3,-2,1])

     

# negative shape for max size of varying dimension

    b,c = jt.code([(-5,), (-5,)], [a.dtype, a.dtype], [a],

        cpu_src="""

            @alias(a, in0)

            @alias(b, out0)

            @alias(c, out1)

            int num_b=0, num_c=0;

            for (int i=0; i<a_shape0; i++) {

                if (@a(i)>0)

                    @b(num_b++) = @a(i);

                else

                    @c(num_c++) = @a(i);

            }

            b->set_shape({num_b});

            c->set_shape({num_c});

        """

    )

    assert (b.data == [5,3,1]).all()

    assert (c.data == [-4,-2]).all()

    CUDA Example-1:

    #This example shows how to use CUDA in code op.

    import jittor as jt

    from jittor import Function

    jt.flags.use_cuda = 1

     

    class Func(Function):

        def execute(self, a, b):

            self.save_vars = a, b

            return jt.code(a.shape, a.dtype, [a,b],

                cuda_src='''

                    __global__ static void kernel1(@ARGS_DEF) {

                        @PRECALC

                        int i = threadIdx.x + blockIdx.x * blockDim.x;

                        int stride = blockDim.x * gridDim.x;

                        for (; i<in0_shape0; i+=stride)

                            @out(i) = @in0(i)*@in1(i);

                    }

                    kernel1<<<(in0_shape0-1)/1024+1, 1024>>>(@ARGS);

                ''')

     

        def grad(self, grad):

            a, b = self.save_vars

            return jt.code([a.shape, b.shape], [a.dtype, b.dtype], [a, b, grad],

                cuda_src='''

                    __global__ static void kernel2(@ARGS_DEF) {

                        @PRECALC

                        int i = threadIdx.x + blockIdx.x * blockDim.x;

                        int stride = blockDim.x * gridDim.x;

                        for (; i<in0_shape0; i+=stride) {

                            @out0(i) = @in2(i)*@in1(i);

                            @out1(i) = @in2(i)*@in0(i);

                        }

                    }

                    kernel2<<<(in0_shape0-1)/1024+1, 1024>>>(@ARGS);

                ''')

           

    a = jt.random([100000])

    b = jt.random([100000])

    func = Func()

    c = func(a,b)

    print(c)

    print(jt.grad(c, [a, b]))

    CUDA Example-2:

    #This example shows how to use multi dimension data with CUDA.

    import jittor as jt

    from jittor import Function

    jt.flags.use_cuda = 1

     

    class Func(Function):

        def execute(self, a, b):

            self.save_vars = a, b

            return jt.code(a.shape, a.dtype, [a,b],

                cuda_src='''

                    __global__ static void kernel1(@ARGS_DEF) {

                        @PRECALC

                        for (int i=blockIdx.x; i<in0_shape0; i+=gridDim.x)

                        for (int j=threadIdx.x; j<in0_shape1; j+=blockDim.x)

                            @out(i,j) = @in0(i,j)*@in1(i,j);

                    }

                    kernel1<<<32, 32>>>(@ARGS);

                ''')

     

        def grad(self, grad):

            a, b = self.save_vars

            return jt.code([a.shape, b.shape], [a.dtype, b.dtype], [a, b, grad],

                cuda_src='''

                    __global__ static void kernel2(@ARGS_DEF) {

                        @PRECALC

                        for (int i=blockIdx.x; i<in0_shape0; i+=gridDim.x)

                        for (int j=threadIdx.x; j<in0_shape1; j+=blockDim.x) {

                            @out0(i,j) = @in2(i,j)*@in1(i,j);

                            @out1(i,j) = @in2(i,j)*@in0(i,j);

                        }

                    }

                    kernel2<<<32, 32>>>(@ARGS);

                ''')

           

    a = jt.random((100,100))

    b = jt.random((100,100))

    func = Func()

    c = func(a,b)

    print(c)

    print(jt.grad(c, [a, b]))

Declaration: VarHolder* code(NanoVector shape, NanoString dtype, vector<VarHolder*>&& inputs={}, string&& cpu_src="", vector<string>&& cpu_grad_src={}, string&& cpu_header="", string&& cuda_src="", vector<string>&& cuda_grad_src={}, string&& cuda_header="") Declaration: vector<VarHolder*> code_(vector<NanoVector>&& shapes, vector<NanoString>&& dtypes, vector<VarHolder*>&& inputs={}, string&& cpu_src="", vector<string>&& cpu_grad_src={}, string&& cpu_header="", string&& cuda_src="", vector<string>&& cuda_grad_src={}, string&& cuda_header="") Declaration: vector<VarHolder*> code__(vector<VarHolder*>&& inputs, vector<VarHolder*>&& outputs, string&& cpu_src="", vector<string>&& cpu_grad_src={}, string&& cpu_header="", string&& cuda_src="", vector<string>&& cuda_grad_src={}, string&& cuda_header="")

    jittor_core.ops.copy()

    Declaration: VarHolder* copy(VarHolder* x)

    jittor_core.ops.cos()

    Declaration: VarHolder* cos(VarHolder* x)

    jittor_core.ops.cosh()

    Declaration: VarHolder* cosh(VarHolder* x)

    jittor_core.ops.divide()

    Declaration: VarHolder* divide(VarHolder* x, VarHolder* y)

    jittor_core.ops.empty()

    Declaration: VarHolder* empty(NanoVector shape, NanoString dtype=ns_float32)

    jittor_core.ops.equal()

    Declaration: VarHolder* equal(VarHolder* x, VarHolder* y)

    jittor_core.ops.erf()

    Declaration: VarHolder* erf(VarHolder* x)

    jittor_core.ops.exp()

    Declaration: VarHolder* exp(VarHolder* x)

    jittor_core.ops.fetch()

    Declaration: VarHolder* fetch(vector<VarHolder*>&& inputs, FetchFunc&& func)

    jittor_core.ops.float32()

    Declaration: VarHolder* float32_(VarHolder* x)

    jittor_core.ops.float64()

    Declaration: VarHolder* float64_(VarHolder* x)

    jittor_core.ops.floor()

    Declaration: VarHolder* floor(VarHolder* x)

    jittor_core.ops.floor_divide()

    Declaration: VarHolder* floor_divide(VarHolder* x, VarHolder* y)

    jittor_core.ops.getitem()

    Declaration: VarHolder* getitem(VarHolder* x, VarSlices&& slices)

    jittor_core.ops.greater()

    Declaration: VarHolder* greater(VarHolder* x, VarHolder* y)

    jittor_core.ops.greater_equal()

    Declaration: VarHolder* greater_equal(VarHolder* x, VarHolder* y)

    jittor_core.ops.index()

    Document: *

Index Operator generates the index of a shape.

It performs the equivalent Python-pseudo implementation below:

    n = len(shape)-1

    x = np.zeros(shape, dtype)

    for i0 in range(shape[0]): # 1-st loop

        for i1 in range(shape[1]): # 2-nd loop

            ...... # many loops

            for in in range(shape[n]) # n+1 -th loop

                x[i0,i1,...,in] = i@dim

• [in] shape: the output shape, an integer array
• [in] dim: the dim of the index.
• [in] dtype: the data type string, default int32

    Example:

    print(jt.index([2,2], 0)())

    # output: [[0,0],[1,1]]

    print(jt.index([2,2], 1)())

    # output: [[0,1],[0,1]]

    Declaration: VarHolder* index(NanoVector shape, int64 dim, NanoString dtype=ns_int32) Declaration: vector<VarHolder*> index_(NanoVector shape, NanoString dtype=ns_int32)Document: * shape dependency version of index op

jt.index_var(a, 1) is similar to jt.index(a.shape, 1)

    Declaration: VarHolder* index__(VarHolder* a, int64 dim, NanoString dtype=ns_int32)Document: * shape dependency version of index op

jt.index_var(a) is similar to jt.index(a.shape)

    Declaration: vector<VarHolder*> index___(VarHolder* a, NanoString dtype=ns_int32)

    jittor_core.ops.index_var()

    Document: * shape dependency version of index op

jt.index_var(a, 1) is similar to jt.index(a.shape, 1)

    Declaration: VarHolder* index__(VarHolder* a, int64 dim, NanoString dtype=ns_int32)Document: * shape dependency version of index op

jt.index_var(a) is similar to jt.index(a.shape)

    Declaration: vector<VarHolder*> index___(VarHolder* a, NanoString dtype=ns_int32)

    jittor_core.ops.int16()

    Declaration: VarHolder* int16_(VarHolder* x)

    jittor_core.ops.int32()

    Declaration: VarHolder* int32_(VarHolder* x)

    jittor_core.ops.int64()

    Declaration: VarHolder* int64_(VarHolder* x)

    jittor_core.ops.int8()

    Declaration: VarHolder* int8_(VarHolder* x)

    jittor_core.ops.left_shift()

    Declaration: VarHolder* left_shift(VarHolder* x, VarHolder* y)

    jittor_core.ops.less()

    Declaration: VarHolder* less(VarHolder* x, VarHolder* y)

    jittor_core.ops.less_equal()

    Declaration: VarHolder* less_equal(VarHolder* x, VarHolder* y)

    jittor_core.ops.log()

    Declaration: VarHolder* log(VarHolder* x)

    jittor_core.ops.logical_and()

    Declaration: VarHolder* logical_and(VarHolder* x, VarHolder* y)

    jittor_core.ops.logical_not()

    Declaration: VarHolder* logical_not(VarHolder* x)

    jittor_core.ops.logical_or()

    Declaration: VarHolder* logical_or(VarHolder* x, VarHolder* y)

    jittor_core.ops.logical_xor()

    Declaration: VarHolder* logical_xor(VarHolder* x, VarHolder* y)

    jittor_core.ops.max()

    Declaration: VarHolder* reduce_maximum(VarHolder* x, int dim, bool keepdims=false) Declaration: VarHolder* reduce_maximum_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false) Declaration: VarHolder* reduce_maximum__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.ops.maximum()

    Declaration: VarHolder* maximum(VarHolder* x, VarHolder* y)

    jittor_core.ops.mean()

    Declaration: VarHolder* reduce_mean(VarHolder* x, int dim, bool keepdims=false) Declaration: VarHolder* reduce_mean_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false) Declaration: VarHolder* reduce_mean__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.ops.min()

    Declaration: VarHolder* reduce_minimum(VarHolder* x, int dim, bool keepdims=false) Declaration: VarHolder* reduce_minimum_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false) Declaration: VarHolder* reduce_minimum__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.ops.minimum()

    Declaration: VarHolder* minimum(VarHolder* x, VarHolder* y)

    jittor_core.ops.mod()

    Declaration: VarHolder* mod(VarHolder* x, VarHolder* y)

    jittor_core.ops.multiply()

    Declaration: VarHolder* multiply(VarHolder* x, VarHolder* y)

    jittor_core.ops.negative()

    Declaration: VarHolder* negative(VarHolder* x)

    jittor_core.ops.not_equal()

    Declaration: VarHolder* not_equal(VarHolder* x, VarHolder* y)

    jittor_core.ops.numpy_code()

    Document: *

    Numpy Code Operator for easily customized op.

    • [in] shape: the output shape, a integer array
    • [in] dtype: the output data type
    • [in] inputs: A list of input jittor Vars
    • [in] forward: function, represents forward python function
    • [in] backward: A list of function, represents gradiant for each input

    Example-1:

    def forward_code(np, data):

        a = data["inputs"][0]

        b = data["outputs"][0]

        np.add(a,a,out=b)

     

    def backward_code(np, data):

        dout = data["dout"]

        out = data["outputs"][0]

        np.copyto(out, dout*2.0)

     

    a = jt.random((5,1))

    b = jt.numpy_code(

        a.shape,

        a.dtype,

        [a],

        forward_code,

        [backward_code],

    )

    Example-2:

    def forward_code(np, data):

        a,b = data["inputs"]

        c,d = data["outputs"]

        np.add(a,b,out=c)

        np.subtract(a,b,out=d)

     

    def backward_code1(np, data):

        dout = data["dout"]

        out = data["outputs"][0]

        np.copyto(out, dout)

     

    def backward_code2(np, data):

        dout = data["dout"]

        out_index = data["out_index"]

        out = data["outputs"][0]

        if out_index==0:

            np.copyto(out, dout)

        else:

            np.negative(dout, out)

     

    a = jt.random((5,1))

    b = jt.random((5,1))

    c, d = jt.numpy_code(

        [a.shape, a.shape],

        [a.dtype, a.dtype],

        [a, b],

        forward_code,

        [backward_code1,backward_code2],

    )

    Declaration: VarHolder* numpy_code(NanoVector shape, NanoString dtype, vector<VarHolder*>&& inputs, NumpyFunc&& forward, vector<NumpyFunc>&& backward) Declaration: vector<VarHolder*> numpy_code_(vector<NanoVector>&& shapes, vector<NanoString>&& dtypes, vector<VarHolder*>&& inputs, NumpyFunc&& forward, vector<NumpyFunc>&& backward) Declaration: VarHolder* numpy_code__(NanoVector shape, NanoString dtype, vector<VarHolder*>&& inputs, NumpyFunc&& forward) Declaration: vector<VarHolder*> numpy_code___(vector<NanoVector>&& shapes, vector<NanoString>&& dtypes, vector<VarHolder*>&& inputs, NumpyFunc&& forward)

    jittor_core.ops.pow()

    Declaration: VarHolder* pow(VarHolder* x, VarHolder* y)

    jittor_core.ops.prod()

    Declaration: VarHolder* reduce_multiply(VarHolder* x, int dim, bool keepdims=false) Declaration: VarHolder* reduce_multiply_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false) Declaration: VarHolder* reduce_multiply__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.ops.product()

    Declaration: VarHolder* reduce_multiply(VarHolder* x, int dim, bool keepdims=false) Declaration: VarHolder* reduce_multiply_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false) Declaration: VarHolder* reduce_multiply__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.ops.random()

    Declaration: VarHolder* random(NanoVector shape, NanoString dtype=ns_float32, NanoString type=ns_uniform)

    jittor_core.ops.reduce()

    Declaration: VarHolder* reduce(VarHolder* x, NanoString op, int dim, bool keepdims=false) Declaration: VarHolder* reduce_(VarHolder* x, NanoString op, NanoVector dims=NanoVector(), bool keepdims=false)

    jittor_core.ops.reduce_add()

    Declaration: VarHolder* reduce_add(VarHolder* x, int dim, bool keepdims=false) Declaration: VarHolder* reduce_add_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false) Declaration: VarHolder* reduce_add__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.ops.reduce_bitwise_and()

    Declaration: VarHolder* reduce_bitwise_and(VarHolder* x, int dim, bool keepdims=false) Declaration: VarHolder* reduce_bitwise_and_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false) Declaration: VarHolder* reduce_bitwise_and__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.ops.reduce_bitwise_or()

    Declaration: VarHolder* reduce_bitwise_or(VarHolder* x, int dim, bool keepdims=false) Declaration: VarHolder* reduce_bitwise_or_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false) Declaration: VarHolder* reduce_bitwise_or__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.ops.reduce_bitwise_xor()

    Declaration: VarHolder* reduce_bitwise_xor(VarHolder* x, int dim, bool keepdims=false) Declaration: VarHolder* reduce_bitwise_xor_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false) Declaration: VarHolder* reduce_bitwise_xor__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.ops.reduce_logical_and()

    Declaration: VarHolder* reduce_logical_and(VarHolder* x, int dim, bool keepdims=false) Declaration: VarHolder* reduce_logical_and_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false) Declaration: VarHolder* reduce_logical_and__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.ops.reduce_logical_or()

    Declaration: VarHolder* reduce_logical_or(VarHolder* x, int dim, bool keepdims=false) Declaration: VarHolder* reduce_logical_or_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false) Declaration: VarHolder* reduce_logical_or__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.ops.reduce_logical_xor()

    Declaration: VarHolder* reduce_logical_xor(VarHolder* x, int dim, bool keepdims=false) Declaration: VarHolder* reduce_logical_xor_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false) Declaration: VarHolder* reduce_logical_xor__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.ops.reduce_maximum()

    Declaration: VarHolder* reduce_maximum(VarHolder* x, int dim, bool keepdims=false) Declaration: VarHolder* reduce_maximum_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false) Declaration: VarHolder* reduce_maximum__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.ops.reduce_minimum()

    Declaration: VarHolder* reduce_minimum(VarHolder* x, int dim, bool keepdims=false) Declaration: VarHolder* reduce_minimum_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false) Declaration: VarHolder* reduce_minimum__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.ops.reduce_multiply()

    Declaration: VarHolder* reduce_multiply(VarHolder* x, int dim, bool keepdims=false) Declaration: VarHolder* reduce_multiply_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false) Declaration: VarHolder* reduce_multiply__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.ops.reindex()

    Document: *

Reindex Operator is a one-to-many map operator. It performs the equivalent Python-pseudo implementation below:

    # input is x, output is y

    n = len(shape)-1

    m = len(x.shape)-1

    k = len(overflow_conditions)-1

    y = np.zeros(shape, x.dtype)

    for i0 in range(shape[0]): # 1-st loop

        for i1 in range(shape[1]): # 2-nd loop

            ...... # many loops

            for in in range(shape[n]) # n+1 -th loop

                if is_overflow(i0,i1,...,in):

                    y[i0,i1,...,in] = overflow_value

                else:

                    # indexes[i] is a c++ style integer expression consisting of i0,i1,...,in

                    y[i0,i1,...,in] = x[indexes[0],indexes[1],...,indexes[m]]

     

    # is_overflow is defined as following

    def is_overflow(i0,i1,...,in):

        return (

            indexes[0] < 0 || indexes[0] >= x.shape[0] ||

            indexes[1] < 0 || indexes[1] >= x.shape[1] ||

            ......

            indexes[m] < 0 || indexes[m] >= x.shape[m] ||

     

            # overflow_conditions[i] is a c++ style boolean expression consisting of i0,i1,...,in

            overflow_conditions[0] ||

            overflow_conditions[1] ||

            ......

            overflow_conditions[k]

        )

• [in] x: an input jittor Var
• [in] shape: the output shape, an integer array
• [in] indexes: array of C++-style integer expressions; its length should be the same as the number of dimensions of x. Some built-in variables it can use are:
• XDIM, xshape0, ..., xshapen, xstride0, ..., xstriden
• YDIM, yshape0, ..., yshapem, ystride0, ..., ystridem
• i0, i1, ..., in
• @e0(...), @e1(...) for extras input index
• e0p, e1p, ... for extras input pointer
• [in] overflow_value: overflow value
• [in] overflow_conditions: array of C++-style boolean expressions; its length can vary. The built-in variables it can use are the same as for indexes.
• [in] extras: extra vars used for index

Example: convolution implemented by the reindex operation:

    def conv(x, w):

        N,H,W,C = x.shape

        Kh, Kw, _C, Kc = w.shape

        assert C==_C

        xx = x.reindex([N,H-Kh+1,W-Kw+1,Kh,Kw,C,Kc], [

            'i0', # Nid

            'i1+i3', # Hid+Khid

            'i2+i4', # Wid+KWid

            'i5', # Cid

        ])

        ww = w.broadcast_var(xx)

        yy = xx*ww

        y = yy.sum([3,4,5]) # Kh, Kw, C

        return y, yy

    Declaration: VarHolder* reindex(VarHolder* x, NanoVector shape, vector<string>&& indexes, float64 overflow_value=0, vector<string>&& overflow_conditions={}, vector<VarHolder*>&& extras={})Document: * Alias x.reindex([i,j,k]) ->

x.reindex(i.shape, ['@e0(...)','@e1(...)','@e2(...)'], extras=[i,j,k])

    Declaration: VarHolder* reindex_(VarHolder* x, vector<VarHolder*>&& indexes, float64 overflow_value=0, vector<string>&& overflow_conditions={})

    jittor_core.ops.reindex_reduce()

    Document: *

Reindex Reduce Operator is a many-to-one map operator. It performs the equivalent Python-pseudo implementation below:

    # input is y, output is x

    n = len(y.shape)-1

    m = len(shape)-1

    k = len(overflow_conditions)-1

    x = np.zeros(shape, y.dtype)

    x[:] = initial_value(op)

    for i0 in range(y.shape[0]): # 1-st loop

        for i1 in range(y.shape[1]): # 2-nd loop

            ...... # many loops

            for in in range(y.shape[n]) # n+1 -th loop

                # indexes[i] is a c++ style integer expression consisting of i0,i1,...,in

                xi0,xi1,...,xim = indexes[0],indexes[1],...,indexes[m]

                if not is_overflow(xi0,xi1,...,xim):

                    x[xi0,xi1,...,xim] = op(x[xi0,xi1,...,xim], y[i0,i1,...,in])

     

    # is_overflow is defined as following

    def is_overflow(xi0,xi1,...,xim):

        return (

            xi0 < 0 || xi0 >= shape[0] ||

            xi1 < 0 || xi1 >= shape[1] ||

            ......

            xim < 0 || xim >= shape[m] ||

     

            # overflow_conditions[i] is a c++ style boolean expression consisting of i0,i1,...,in

            overflow_conditions[0] ||

            overflow_conditions[1] ||

            ......

            overflow_conditions[k]

        )

• [in] y: an input jittor Var
• [in] op: a string representing the reduce operation type
• [in] shape: the output shape, an integer array
• [in] indexes: array of C++-style integer expressions; its length should be the same as the length of shape. Some built-in variables it can use are:
• XDIM, xshape0, ..., xshapem, xstride0, ..., xstridem
• YDIM, yshape0, ..., yshapen, ystride0, ..., ystriden
• i0, i1, ..., in
• @e0(...), @e1(...) for extras input index
• e0p, e1p, ... for extras input pointer
• [in] overflow_conditions: array of C++-style boolean expressions; its length can vary. The built-in variables it can use are the same as for indexes.
• [in] extras: extra vars used for index

    Example

Pooling implemented by the reindex_reduce operation:

    def pool(x, size, op):

        N,H,W,C = x.shape

        h = (H+size-1)//size

        w = (W+size-1)//size

        return x.reindex_reduce(op, [N,h,w,C], [

            "i0", # Nid

            f"i1/{size}", # Hid

            f"i2/{size}", # Wid

            "i3", # Cid

        ])

    Declaration: VarHolder* reindex_reduce(VarHolder* y, NanoString op, NanoVector shape, vector<string>&& indexes, vector<string>&& overflow_conditions={}, vector<VarHolder*>&& extras={})

    jittor_core.ops.reindex_var()

    Document: * Alias x.reindex([i,j,k]) ->

x.reindex(i.shape, ['@e0(...)','@e1(...)','@e2(...)'], extras=[i,j,k])

    Declaration: VarHolder* reindex_(VarHolder* x, vector<VarHolder*>&& indexes, float64 overflow_value=0, vector<string>&& overflow_conditions={})

    jittor_core.ops.reshape()

    Declaration: VarHolder* reshape(VarHolder* x, NanoVector shape)

    jittor_core.ops.right_shift()

    Declaration: VarHolder* right_shift(VarHolder* x, VarHolder* y)

    jittor_core.ops.round()

    Declaration: VarHolder* round(VarHolder* x)

    jittor_core.ops.setitem()

    Declaration: VarHolder* setitem(VarHolder* x, VarSlices&& slices, VarHolder* y, NanoString op=ns_void)

    jittor_core.ops.sigmoid()

    Declaration: VarHolder* sigmoid(VarHolder* x)

    jittor_core.ops.sin()

    Declaration: VarHolder* sin(VarHolder* x)

    jittor_core.ops.sinh()

    Declaration: VarHolder* sinh(VarHolder* x)

    jittor_core.ops.sqrt()

    Declaration: VarHolder* sqrt(VarHolder* x)

    jittor_core.ops.subtract()

    Declaration: VarHolder* subtract(VarHolder* x, VarHolder* y)

    jittor_core.ops.sum()

    Declaration: VarHolder* reduce_add(VarHolder* x, int dim, bool keepdims=false) Declaration: VarHolder* reduce_add_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false) Declaration: VarHolder* reduce_add__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.ops.tan()

    Declaration: VarHolder* tan(VarHolder* x)

    jittor_core.ops.tanh()

    Declaration: VarHolder* tanh(VarHolder* x)

    jittor_core.ops.tape()

    Declaration: VarHolder* tape(VarHolder* x)

    jittor_core.ops.ternary()

    Declaration: VarHolder* ternary(VarHolder* cond, VarHolder* x, VarHolder* y)

    jittor_core.ops.transpose()

    Declaration: VarHolder* transpose(VarHolder* x, NanoVector axes=NanoVector())

    jittor_core.ops.uint16()

    Declaration: VarHolder* uint16_(VarHolder* x)

    jittor_core.ops.uint32()

    Declaration: VarHolder* uint32_(VarHolder* x)

    jittor_core.ops.uint64()

    Declaration: VarHolder* uint64_(VarHolder* x)

    jittor_core.ops.uint8()

    Declaration: VarHolder* uint8_(VarHolder* x)

    jittor_core.ops.unary()

    Declaration: VarHolder* unary(VarHolder* x, NanoString op)

    jittor_core.ops.where()

    Document: *

Where Operator generates the index of true conditions.

• [in] cond: condition for index generation
• [in] dtype: type of return indexes
• [out] out: returns an array of indexes, with the same length as the number of dims of cond

    Example:

    jt.where([[0,0,1],[1,0,0]])

    # return ( [0,2], [1,0] )

    Declaration: vector<VarHolder*> where(VarHolder* cond, NanoString dtype=ns_int32)

    jittor.Var

This is the API documentation for Jittor's basic variable class. The API can be accessed directly via my_jittor_var.XXX.

    jittor_core.Var.abs()

    Declaration: VarHolder* abs(VarHolder* x)

    jittor_core.Var.acos()

    Declaration: VarHolder* acos(VarHolder* x)

    jittor_core.Var.acosh()

    Declaration: VarHolder* acosh(VarHolder* x)

    jittor_core.Var.add()

    Declaration: VarHolder* add(VarHolder* x, VarHolder* y)

    jittor_core.Var.all_()

    Declaration: VarHolder* reduce_logical_and(VarHolder* x, int dim, bool keepdims=false) Declaration: VarHolder* reduce_logical_and_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false) Declaration: VarHolder* reduce_logical_and__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.Var.any_()

    Declaration: VarHolder* reduce_logical_or(VarHolder* x, int dim, bool keepdims=false) Declaration: VarHolder* reduce_logical_or_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false) Declaration: VarHolder* reduce_logical_or__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.Var.arccos()

    Declaration: VarHolder* acos(VarHolder* x)

    jittor_core.Var.arccosh()

    Declaration: VarHolder* acosh(VarHolder* x)

    jittor_core.Var.arcsin()

    Declaration: VarHolder* asin(VarHolder* x)

    jittor_core.Var.arcsinh()

    Declaration: VarHolder* asinh(VarHolder* x)

    jittor_core.Var.arctan()

    Declaration: VarHolder* atan(VarHolder* x)

    jittor_core.Var.arctanh()

    Declaration: VarHolder* atanh(VarHolder* x)

    jittor_core.Var.arg_reduce()

    Declaration: vector<VarHolder*> arg_reduce(VarHolder* x, NanoString op, int dim, bool keepdims)

    jittor_core.Var.argsort()

    Document: *

Argsort Operator performs an indirect sort by a given key or compare function.

    x is input, y is output index, satisfy:

    x[y[0]] <= x[y[1]] <= x[y[2]] <= … <= x[y[n]]

    or

    key(y[0]) <= key(y[1]) <= key(y[2]) <= … <= key(y[n])

    or

    compare(y[0], y[1]) && compare(y[1], y[2]) && …

• [in] x: input var for sort
• [in] dim: the dim to sort along
• [in] descending: whether the elements are sorted in descending order (default False).
• [in] dtype: type of return indexes
• [out] index: index has the same size as the sorted dim
• [out] value: sorted value

    Example:

    index, value = jt.argsort([11,13,12])

    # return [0 2 1], [11 12 13]

    index, value = jt.argsort([11,13,12], descending=True)

    # return [1 2 0], [13 12 11]

    index, value = jt.argsort([[11,13,12], [12,11,13]])

    # return [[0 2 1],[1 0 2]],  [[11 12 13],[11 12 13]]

    index, value = jt.argsort([[11,13,12], [12,11,13]], dim=0)

    # return [[0 1 0],[1 0 1]],  [[11 11 12],[12 13 13]]

    Declaration: vector<VarHolder*> argsort(VarHolder* x, int dim=-1, bool descending=false, NanoString dtype=ns_int32)

    jittor_core.Var.asin()

    Declaration: VarHolder* asin(VarHolder* x)

    jittor_core.Var.asinh()

    Declaration: VarHolder* asinh(VarHolder* x)

    jittor_core.Var.assign()

    Declaration: VarHolder* assign(VarHolder* v)

    jittor_core.Var.atan()

    Declaration: VarHolder* atan(VarHolder* x)

    jittor_core.Var.atanh()

    Declaration: VarHolder* atanh(VarHolder* x)

    jittor_core.Var.binary()

    Declaration: VarHolder* binary(VarHolder* x, VarHolder* y, NanoString p)

    jittor_core.Var.bitwise_and()

    Declaration: VarHolder* bitwise_and(VarHolder* x, VarHolder* y)

    jittor_core.Var.bitwise_not()

    Declaration: VarHolder* bitwise_not(VarHolder* x)

    jittor_core.Var.bitwise_or()

    Declaration: VarHolder* bitwise_or(VarHolder* x, VarHolder* y)

    jittor_core.Var.bitwise_xor()

    Declaration: VarHolder* bitwise_xor(VarHolder* x, VarHolder* y)

    jittor_core.Var.bool()

    Declaration: VarHolder* bool_(VarHolder* x)

    jittor_core.Var.broadcast()

    Declaration: VarHolder* broadcast_to(VarHolder* x, NanoVector shape, NanoVector dims=NanoVector()) Declaration: VarHolder* broadcast_to_(VarHolder* x, VarHolder* y, NanoVector dims=NanoVector())

    jittor_core.Var.broadcast_var()

    Declaration: VarHolder* broadcast_to_(VarHolder* x, VarHolder* y, NanoVector dims=NanoVector())

    jittor_core.Var.candidate()

    Document: *

Candidate Operator performs an indirect candidate filter given a fail condition.

    x is input, y is output index, satisfy:

    not fail_cond(y[0], y[1]) and

    not fail_cond(y[0], y[2]) and not fail_cond(y[1], y[2]) and

    ...

    ... and not fail_cond(y[m-2], y[m-1])

    Where m is number of selected candidates.

    Pseudo code:

    y = []

    for i in range(n):

        pass = True

        for j in y:

            if (@fail_cond):

pass = False

                break

        if (pass):

            y.append(i)

    return y

    • [in] x: input var for filter
    • [in] fail_cond: code for fail condition
    • [in] dtype: type of return indexes
    • [out] index: .

    Example:

jt.candidate(jt.random((100,2)), '(@x(j,0)>@x(i,0))or(@x(j,1)>@x(i,1))')

    # return y satisfy:

    #    x[y[0], 0] <= x[y[1], 0] and x[y[1], 0] <= x[y[2], 0] and ... and x[y[m-2], 0] <= x[y[m-1], 0] and

    #    x[y[0], 1] <= x[y[1], 1] and x[y[1], 1] <= x[y[2], 1] and ... and x[y[m-2], 1] <= x[y[m-1], 1]

    Declaration: VarHolder* candidate(VarHolder* x, string&& fail_cond, NanoString dtype=ns_int32)

    jittor_core.Var.cast()

    Declaration: VarHolder* unary(VarHolder* x, NanoString op)

    jittor_core.Var.ceil()

    Declaration: VarHolder* ceil(VarHolder* x)

    jittor_core.Var.clone()

    Declaration: VarHolder* clone(VarHolder* x)

    jittor_core.Var.compile_options

    Declaration: inline loop_options_t compile_options()

    jittor_core.Var.copy()

    Declaration: VarHolder* copy(VarHolder* x)

    jittor_core.Var.cos()

    Declaration: VarHolder* cos(VarHolder* x)

    jittor_core.Var.cosh()

    Declaration: VarHolder* cosh(VarHolder* x)

    jittor_core.Var.data

Document: * Get a numpy array which shares the data with the var. Declaration: inline DataView data()
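
An illustrative sketch; per the note above, the returned numpy array shares memory with the var:

import jittor as jt

a = jt.array([1.0, 2.0, 3.0])
arr = a.data           # a numpy.ndarray sharing the var's data
print(type(arr), arr)  # expected: <class 'numpy.ndarray'> [1. 2. 3.]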

    jittor_core.Var.debug_msg()

    Declaration: string debug_msg()

    jittor_core.Var.detach()

    Document:

    detach the grad

    Declaration: inline VarHolder* detach()

    jittor_core.Var.divide()

    Declaration: VarHolder* divide(VarHolder* x, VarHolder* y)

    jittor_core.Var.double()

    Declaration: VarHolder* float64_(VarHolder* x)

    jittor_core.Var.dtype

    Declaration: inline NanoString dtype()

    jittor_core.Var.equal()

    Declaration: VarHolder* equal(VarHolder* x, VarHolder* y)

    jittor_core.Var.erf()

    Declaration: VarHolder* erf(VarHolder* x)

    jittor_core.Var.exp()

    Declaration: VarHolder* exp(VarHolder* x)

    jittor_core.Var.expand()

    Declaration: VarHolder* broadcast_to(VarHolder* x, NanoVector shape, NanoVector dims=NanoVector())

    Declaration: VarHolder* broadcast_to_(VarHolder* x, VarHolder* y, NanoVector dims=NanoVector())

    jittor_core.Var.expand_as()

    Declaration: VarHolder* broadcast_to_(VarHolder* x, VarHolder* y, NanoVector dims=NanoVector())

    jittor_core.Var.fetch_sync()

    Declaration: ArrayArgs fetch_sync()

    jittor_core.Var.float()

    Declaration: VarHolder* float32_(VarHolder* x)

    jittor_core.Var.float32()

    Declaration: VarHolder* float32_(VarHolder* x)

    jittor_core.Var.float64()

    Declaration: VarHolder* float64_(VarHolder* x)

    jittor_core.Var.floor()

    Declaration: VarHolder* floor(VarHolder* x)

    jittor_core.Var.floor_divide()

    Declaration: VarHolder* floor_divide(VarHolder* x, VarHolder* y)

    jittor_core.Var.getitem()

    Declaration: VarHolder* getitem(VarHolder* x, VarSlices&& slices)

    jittor_core.Var.greater()

    Declaration: VarHolder* greater(VarHolder* x, VarHolder* y)

    jittor_core.Var.greater_equal()

    Declaration: VarHolder* greater_equal(VarHolder* x, VarHolder* y)

    jittor_core.Var.index()

    Document: * shape dependency version of index op

    jt.index_var(a, 1) is similar to jt.index(a.shape, 1)

    Declaration: VarHolder* index__(VarHolder* a, int64 dim, NanoString dtype=ns_int32)

    Document: * shape dependency version of index op

    jt.index_var(a) is similar to jt.index(a.shape)

    Declaration: vector<VarHolder*> index___(VarHolder* a, NanoString dtype=ns_int32)

    jittor_core.Var.index_var()

    Document: * shape dependency version of index op

    jt.index_var(a, 1) is similar to jt.index(a.shape, 1)

    Declaration: VarHolder* index__(VarHolder* a, int64 dim, NanoString dtype=ns_int32)

    Document: * shape dependency version of index op

    jt.index_var(a) is similar to jt.index(a.shape)

    Declaration: vector<VarHolder*> index___(VarHolder* a, NanoString dtype=ns_int32)

    jittor_core.Var.int()

    Declaration: VarHolder* int32_(VarHolder* x)

    jittor_core.Var.int16()

    Declaration: VarHolder* int16_(VarHolder* x)

    jittor_core.Var.int32()

    Declaration: VarHolder* int32_(VarHolder* x)

    jittor_core.Var.int64()

    Declaration: VarHolder* int64_(VarHolder* x)

    jittor_core.Var.int8()

    Declaration: VarHolder* int8_(VarHolder* x)

    jittor_core.Var.is_stop_fuse()

    Declaration: inline bool is_stop_fuse()

    jittor_core.Var.is_stop_grad()

    Declaration: inline bool is_stop_grad()

    jittor_core.Var.item()

    Document: * Get the data of a one-item (scalar) var.

    Declaration: ItemData item()

    jittor_core.Var.left_shift()

    Declaration: VarHolder* left_shift(VarHolder* x, VarHolder* y)

    jittor_core.Var.less()

    Declaration: VarHolder* less(VarHolder* x, VarHolder* y)

    jittor_core.Var.less_equal()

    Declaration: VarHolder* less_equal(VarHolder* x, VarHolder* y)

    jittor_core.Var.log()

    Declaration: VarHolder* log(VarHolder* x)

    jittor_core.Var.logical_and()

    Declaration: VarHolder* logical_and(VarHolder* x, VarHolder* y)

    jittor_core.Var.logical_not()

    Declaration: VarHolder* logical_not(VarHolder* x)

    jittor_core.Var.logical_or()

    Declaration: VarHolder* logical_or(VarHolder* x, VarHolder* y)

    jittor_core.Var.logical_xor()

    Declaration: VarHolder* logical_xor(VarHolder* x, VarHolder* y)

    jittor_core.Var.max()

    Declaration: VarHolder* reduce_maximum(VarHolder* x, int dim, bool keepdims=false)

    Declaration: VarHolder* reduce_maximum_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)

    Declaration: VarHolder* reduce_maximum__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.Var.maximum()

    Declaration: VarHolder* maximum(VarHolder* x, VarHolder* y)

    jittor_core.Var.mean()

    Declaration: VarHolder* reduce_mean(VarHolder* x, int dim, bool keepdims=false)

    Declaration: VarHolder* reduce_mean_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)

    Declaration: VarHolder* reduce_mean__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.Var.min()

    Declaration: VarHolder* reduce_minimum(VarHolder* x, int dim, bool keepdims=false)

    Declaration: VarHolder* reduce_minimum_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)

    Declaration: VarHolder* reduce_minimum__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.Var.minimum()

    Declaration: VarHolder* minimum(VarHolder* x, VarHolder* y)

    jittor_core.Var.mod()

    Declaration: VarHolder* mod(VarHolder* x, VarHolder* y)

    jittor_core.Var.multiply()

    Declaration: VarHolder* multiply(VarHolder* x, VarHolder* y)

    jittor_core.Var.name()

    Declaration: inline VarHolder* name(const char* s)

    Declaration: inline const char* name()

    jittor_core.Var.ndim

    Declaration: inline int ndim()

    jittor_core.Var.negative()

    Declaration: VarHolder* negative(VarHolder* x)

    jittor_core.Var.not_equal()

    Declaration: VarHolder* not_equal(VarHolder* x, VarHolder* y)

    jittor_core.Var.numel()

    Declaration: inline int64 numel()

    jittor_core.Var.numpy()

    Declaration: ArrayArgs fetch_sync()

    jittor_core.Var.prod()

    Declaration: VarHolder* reduce_multiply(VarHolder* x, int dim, bool keepdims=false)

    Declaration: VarHolder* reduce_multiply_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)

    Declaration: VarHolder* reduce_multiply__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.Var.product()

    Declaration: VarHolder* reduce_multiply(VarHolder* x, int dim, bool keepdims=false)

    Declaration: VarHolder* reduce_multiply_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)

    Declaration: VarHolder* reduce_multiply__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.Var.reduce()

    Declaration: VarHolder* reduce(VarHolder* x, NanoString op, int dim, bool keepdims=false)

    Declaration: VarHolder* reduce_(VarHolder* x, NanoString op, NanoVector dims=NanoVector(), bool keepdims=false)

    jittor_core.Var.reduce_add()

    Declaration: VarHolder* reduce_add(VarHolder* x, int dim, bool keepdims=false)

    Declaration: VarHolder* reduce_add_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)

    Declaration: VarHolder* reduce_add__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.Var.reduce_bitwise_and()

    Declaration: VarHolder* reduce_bitwise_and(VarHolder* x, int dim, bool keepdims=false)

    Declaration: VarHolder* reduce_bitwise_and_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)

    Declaration: VarHolder* reduce_bitwise_and__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.Var.reduce_bitwise_or()

    Declaration: VarHolder* reduce_bitwise_or(VarHolder* x, int dim, bool keepdims=false)

    Declaration: VarHolder* reduce_bitwise_or_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)

    Declaration: VarHolder* reduce_bitwise_or__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.Var.reduce_bitwise_xor()

    Declaration: VarHolder* reduce_bitwise_xor(VarHolder* x, int dim, bool keepdims=false)

    Declaration: VarHolder* reduce_bitwise_xor_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)

    Declaration: VarHolder* reduce_bitwise_xor__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.Var.reduce_logical_and()

    Declaration: VarHolder* reduce_logical_and(VarHolder* x, int dim, bool keepdims=false)

    Declaration: VarHolder* reduce_logical_and_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)

    Declaration: VarHolder* reduce_logical_and__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.Var.reduce_logical_or()

    Declaration: VarHolder* reduce_logical_or(VarHolder* x, int dim, bool keepdims=false)

    Declaration: VarHolder* reduce_logical_or_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)

    Declaration: VarHolder* reduce_logical_or__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.Var.reduce_logical_xor()

    Declaration: VarHolder* reduce_logical_xor(VarHolder* x, int dim, bool keepdims=false)

    Declaration: VarHolder* reduce_logical_xor_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)

    Declaration: VarHolder* reduce_logical_xor__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.Var.reduce_maximum()

    Declaration: VarHolder* reduce_maximum(VarHolder* x, int dim, bool keepdims=false)

    Declaration: VarHolder* reduce_maximum_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)

    Declaration: VarHolder* reduce_maximum__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.Var.reduce_minimum()

    Declaration: VarHolder* reduce_minimum(VarHolder* x, int dim, bool keepdims=false)

    Declaration: VarHolder* reduce_minimum_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)

    Declaration: VarHolder* reduce_minimum__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.Var.reduce_multiply()

    Declaration: VarHolder* reduce_multiply(VarHolder* x, int dim, bool keepdims=false)

    Declaration: VarHolder* reduce_multiply_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)

    Declaration: VarHolder* reduce_multiply__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.Var.reindex()

    Document: *

    Reindex Operator is a one-to-many map operator. Its behavior is equivalent to the Python-style pseudocode below:

    # input is x, output is y

    n = len(shape)-1

    m = len(x.shape)-1

    k = len(overflow_conditions)-1

    y = np.zeros(shape, x.dtype)

    for i0 in range(shape[0]): # 1-st loop

        for i1 in range(shape[1]): # 2-nd loop

            ...... # many loops

            for in in range(shape[n]): # (n+1)-th loop

                if is_overflow(i0,i1,...,in):

                    y[i0,i1,...,in] = overflow_value

                else:

                    # indexes[i] is a c++ style integer expression consisting of i0,i1,...,in

                    y[i0,i1,...,in] = x[indexes[0],indexes[1],...,indexes[m]]

     

    # is_overflow is defined as follows

    def is_overflow(i0,i1,...,in):

        return (

            indexes[0] < 0 || indexes[0] >= x.shape[0] ||

            indexes[1] < 0 || indexes[1] >= x.shape[1] ||

            ......

            indexes[m] < 0 || indexes[m] >= x.shape[m] ||

     

            # overflow_conditions[i] is a c++ style boolean expression consisting of i0,i1,...,in

            overflow_conditions[0] ||

            overflow_conditions[1] ||

            ......

            overflow_conditions[k]

        )

    • [in] x: an input Jittor Var
    • [in] shape: the output shape, an integer array
    • [in] indexes: array of C++-style integer expressions; its length should equal the number of dimensions of x. Some built-in variables it can use are:
    • XDIM, xshape0, ..., xshapen, xstride0, ..., xstriden
    • YDIM, yshape0, ..., yshapem, ystride0, ..., ystridem
    • i0, i1, ..., in
    • @e0(...), @e1(...) for extra input indexes
    • e0p, e1p, ... for extra input pointers
    • [in] overflow_value: the overflow value
    • [in] overflow_conditions: array of C++-style boolean expressions; its length can vary. The built-in variables it can use are the same as for indexes
    • [in] extras: extra vars used for indexing

    Example: convolution implemented with the reindex operation:

    def conv(x, w):

        N,H,W,C = x.shape

        Kh, Kw, _C, Kc = w.shape

        assert C==_C

        xx = x.reindex([N,H-Kh+1,W-Kw+1,Kh,Kw,C,Kc], [

            'i0', # Nid

            'i1+i3', # Hid+Khid

            'i2+i4', # Wid+KWid

            'i5', # Cid

        ])

        ww = w.broadcast_var(xx)

        yy = xx*ww

        y = yy.sum([3,4,5]) # Kh, Kw, C

        return y, yy
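
    A minimal usage sketch of the conv helper above (the input shapes are illustrative assumptions, not from the original docs):

    import jittor as jt

    x = jt.random((2, 8, 8, 3))   # N,H,W,C input
    w = jt.random((3, 3, 3, 4))   # Kh,Kw,C,Kc kernel
    y, yy = conv(x, w)            # valid convolution via reindex
    print(y.shape)                # expected: [2,6,6,4,]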

    Declaration: VarHolder* reindex(VarHolder* x, NanoVector shape, vector<string>&& indexes, float64 overflow_value=0, vector<string>&& overflow_conditions={}, vector<VarHolder*>&& extras={})

    Document: * Alias x.reindex([i,j,k]) ->

    x.reindex(i.shape, ['@e0(...)','@e1(...)','@e2(...)'], extras=[i,j,k])

    Declaration: VarHolder* reindex_(VarHolder* x, vector<VarHolder*>&& indexes, float64 overflow_value=0, vector<string>&& overflow_conditions={})

    jittor_core.Var.reindex_reduce()

    Document: *

    Reindex Reduce Operator is a many-to-one map operator. Its behavior is equivalent to the Python-style pseudocode below:

    # input is y, output is x

    n = len(y.shape)-1

    m = len(shape)-1

    k = len(overflow_conditions)-1

    x = np.zeros(shape, y.dtype)

    x[:] = initial_value(op)

    for i0 in range(y.shape[0]): # 1-st loop

        for i1 in range(y.shape[1]): # 2-nd loop

            ...... # many loops

            for in in range(y.shape[n]): # (n+1)-th loop

                # indexes[i] is a c++ style integer expression consisting of i0,i1,...,in

                xi0,xi1,...,xim = indexes[0],indexes[1],...,indexes[m]

                if not is_overflow(xi0,xi1,...,xim):

                    x[xi0,xi1,...,xim] = op(x[xi0,xi1,...,xim], y[i0,i1,...,in])

     

    # is_overflow is defined as follows

    def is_overflow(xi0,xi1,...,xim):

        return (

            xi0 < 0 || xi0 >= shape[0] ||

            xi1 < 0 || xi1 >= shape[1] ||

            ......

            xim < 0 || xim >= shape[m] ||

     

            # overflow_conditions[i] is a c++ style boolean expression consisting of i0,i1,...,in

            overflow_conditions[0] ||

            overflow_conditions[1] ||

            ......

            overflow_conditions[k]

        )

    • [in] y: an input Jittor Var
    • [in] op: a string representing the reduce operation type
    • [in] shape: the output shape, an integer array
    • [in] indexes: array of C++-style integer expressions; its length should equal the length of shape. Some built-in variables it can use are:
    • XDIM, xshape0, ..., xshapem, xstride0, ..., xstridem
    • YDIM, yshape0, ..., yshapen, ystride0, ..., ystriden
    • i0, i1, ..., in
    • @e0(...), @e1(...) for extra input indexes
    • e0p, e1p, ... for extra input pointers
    • [in] overflow_conditions: array of C++-style boolean expressions; its length can vary. The built-in variables it can use are the same as for indexes.
    • [in] extras: extra vars used for indexing

    Example

    Pooling implemented with the reindex_reduce operation:

    def pool(x, size, op):

        N,H,W,C = x.shape

        h = (H+size-1)//size

        w = (W+size-1)//size

        return x.reindex_reduce(op, [N,h,w,C], [

            "i0", # Nid

            f"i1/{size}", # Hid

            f"i2/{size}", # Wid

            "i3", # Cid

        ])
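
    A minimal usage sketch of the pool helper above; the op string "maximum" is assumed to be one of the reduce-op names accepted by reindex_reduce:

    import jittor as jt

    x = jt.random((2, 8, 8, 3))   # N,H,W,C input
    y = pool(x, 2, "maximum")     # 2x2 max pooling
    print(y.shape)                # expected: [2,4,4,3,]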

    Declaration: VarHolder* reindex_reduce(VarHolder* y, NanoString op, NanoVector shape, vector<string>&& indexes, vector<string>&& overflow_conditions={}, vector<VarHolder*>&& extras={})

    jittor_core.Var.reindex_var()

    Document: * Alias x.reindex([i,j,k]) ->

    x.reindex(i.shape, ['@e0(...)','@e1(...)','@e2(...)'], extras=[i,j,k])

    Declaration: VarHolder* reindex_(VarHolder* x, vector<VarHolder*>&& indexes, float64 overflow_value=0, vector<string>&& overflow_conditions={})

    jittor_core.Var.requires_grad

    Declaration: inline bool get_requires_grad()

    jittor_core.Var.right_shift()

    Declaration: VarHolder* right_shift(VarHolder* x, VarHolder* y)

    jittor_core.Var.round()

    Declaration: VarHolder* round(VarHolder* x)

    jittor_core.Var.setitem()

    Declaration: VarHolder* setitem(VarHolder* x, VarSlices&& slices, VarHolder* y, NanoString op=ns_void)

    jittor_core.Var.shape

    Declaration: inline NanoVector shape()

    jittor_core.Var.share_with()

    Declaration: inline VarHolder* share_with(VarHolder* other)

    jittor_core.Var.sigmoid()

    Declaration: VarHolder* sigmoid(VarHolder* x)

    jittor_core.Var.sin()

    Declaration: VarHolder* sin(VarHolder* x)

    jittor_core.Var.sinh()

    Declaration: VarHolder* sinh(VarHolder* x)

    jittor_core.Var.sqrt()

    Declaration: VarHolder* sqrt(VarHolder* x)

    jittor_core.Var.stop_fuse()

    Declaration: inline VarHolder* stop_fuse()

    jittor_core.Var.stop_grad()

    Declaration: inline VarHolder* stop_grad()

    jittor_core.Var.subtract()

    Declaration: VarHolder* subtract(VarHolder* x, VarHolder* y)

    jittor_core.Var.sum()

    Declaration: VarHolder* reduce_add(VarHolder* x, int dim, bool keepdims=false) Declaration: VarHolder* reduce_add_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false) Declaration: VarHolder* reduce_add__(VarHolder* x, uint dims_mask, uint keepdims_mask)

    jittor_core.Var.swap()

    Declaration: inline VarHolder* swap(VarHolder* v)

    jittor_core.Var.sync()

    Declaration: void sync(bool device_sync = false)

    jittor_core.Var.tan()

    Declaration: VarHolder* tan(VarHolder* x)

    jittor_core.Var.tanh()

    Declaration: VarHolder* tanh(VarHolder* x)

    jittor_core.Var.tape()

    Declaration: VarHolder* tape(VarHolder* x)

    jittor_core.Var.ternary()

    Declaration: VarHolder* ternary(VarHolder* cond, VarHolder* x, VarHolder* y)

    jittor_core.Var.uint16()

    Declaration: VarHolder* uint16_(VarHolder* x)

    jittor_core.Var.uint32()

    Declaration: VarHolder* uint32_(VarHolder* x)

    jittor_core.Var.uint64()

    Declaration: VarHolder* uint64_(VarHolder* x)

    jittor_core.Var.uint8()

    Declaration: VarHolder* uint8_(VarHolder* x)

    jittor_core.Var.unary()

    Declaration: VarHolder* unary(VarHolder* x, NanoString op)

    jittor_core.Var.uncertain_shape

    Declaration: inline NanoVector uncertain_shape()

    jittor_core.Var.update()

    Document:

    Update a parameter or global variable.

    Unlike assign, it stops gradient between the original var and the assigned var, and the update is executed in the background.

    Declaration: VarHolder* update(VarHolder* v)
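
    A minimal sketch of using update for a gradient-free parameter step (the learning-rate value is an illustrative assumption):

    import jittor as jt

    w = jt.array([1.0, 2.0, 3.0])
    g = jt.array([0.1, 0.2, 0.3])
    w.update(w - 0.01 * g)   # no gradient flows through this assignment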

    jittor_core.Var.where()

    Document: *

    The Where Operator generates the indexes where the condition is true.

    • [in] cond: condition for index generation
    • [in] dtype: type of the returned indexes
    • [out] out: a list of index vars, one per dimension of cond

    Example:

    jt.where([[0,0,1],[1,0,0]])

    # return ( [0,2], [1,0] )

    Declaration: vector<VarHolder*> where(VarHolder* cond, NanoString dtype=ns_int32)

    jittor.Misc

    This is the API documentation for Jittor's basic operator (misc) module. These APIs can be accessed via jittor.misc.XXX or directly via jittor.XXX.

    jittor.misc.all(x, dim=[])

    jittor.misc.any(x, dim)

    jittor.misc.arange(start=0, end=None, step=1, dtype=None)

    jittor.misc.arctan2(y, x)

    jittor.misc.auto_parallel(n, src, **kw)

    Automatically parallelize (on CPU and GPU) an n-d for-loop function, as below:

    Before:

    void inner_func(int n0, int i0, int n1, int i1) {

    }

    for (int i0=0; i0<n0; i0++)

        for (int i1=0; i1<n1; i1++)

            inner_func(n0, i0, n1, i1, ...);

    After:

    @python.jittor.auto_parallel(2)

    void inner_func(int n0, int i0, int n1, int i1) {

    }

    inner_func(n0, 0, n1, 0, ...);

    jittor.misc.chunk(x, chunks, dim=0)

    Splits a var into a specific number of chunks. Each chunk is a view of the input var.

    Last chunk will be smaller if the var size along the given dimension dim is not divisible by chunks.

    Args:

    input (var) – the var to split.

    chunks (int) – number of chunks to return.

    dim (int) – dimension along which to split the var.

    Example:

    >>> x = jt.random((10,3,3))

    >>> res = jt.chunk(x, 2, 0)

    >>> print(res[0].shape, res[1].shape)

    [5,3,3,] [5,3,3,]

    jittor.misc.cross(input, other, dim=-1)

    Returns the cross product of vectors in dimension dim of input and other.

    The cross product can be calculated as (a1,a2,a3) x (b1,b2,b3) = (a2b3-a3b2, a3b1-a1b3, a1b2-a2b1).

    input and other must have the same size, and the size of their dim dimension should be 3.

    If dim is not given, it defaults to the first dimension found with the size 3.

    Args:

    input (Tensor) – the first input tensor.

    other (Tensor) – the second input tensor.

    dim (int, optional) – the dimension to take the cross-product in.

    Example:

    >>> input = jt.random((6,3))

    >>> other = jt.random((6,3))

    >>> jt.cross(input, other, dim=1)

    [[-0.42732686  0.6827885  -0.49206433]

    [ 0.4651107   0.27036983 -0.5580432 ]

    [-0.31933784  0.10543461  0.09676848]

    [-0.58346975 -0.21417202  0.55176204]

    [-0.40861478  0.01496297  0.38638002]

    [ 0.18393655 -0.04907863 -0.17928357]]

    >>> jt.cross(input, other)

    [[-0.42732686  0.6827885  -0.49206433]

    [ 0.4651107   0.27036983 -0.5580432 ]

    [-0.31933784  0.10543461  0.09676848]

    [-0.58346975 -0.21417202  0.55176204]

    [-0.40861478  0.01496297  0.38638002]

    [ 0.18393655 -0.04907863 -0.17928357]]

    jittor.misc.cumprod(x, dim=0)

    jittor.misc.cumsum(x, dim=None)

    x: a jt.var of shape [batch_size, N]

    Returns the cumulative sum of x.
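
    A minimal example, assuming the documented 2-D input layout (print formatting may differ):

    >>> x = jt.array([[1, 2, 3], [4, 5, 6]])
    >>> jt.cumsum(x, dim=1)
    [[1 3 6]
    [4 9 15]]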

    jittor.misc.cumsum_backward(np, data)

    jittor.misc.cumsum_forward(np, data)

    jittor.misc.deg2rad(x)

    jittor.misc.diag(x, diagonal=0)

    jittor.misc.expand(x, shape)

    jittor.misc.flip(x, dim=0)

    Reverses the order of an n-D var along the given axis.

    Args:

    input (var) – the input var.

    dim (int or tuple of ints) – the axis (or axes) to flip on.

    Example:

    >>> x = jt.array([[1,2,3,4]])

    >>> x.flip(1)

    [[4 3 2 1]]

    jittor.misc.gather(x, dim, index)
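
    No documentation is attached to gather here; a minimal sketch, assuming the familiar torch.gather semantics (for dim=1, out[i][j] = x[i][index[i][j]]):

    >>> x = jt.array([[1, 2], [3, 4]])
    >>> jt.gather(x, 1, jt.array([[0, 0], [1, 0]]))
    [[1 1]
    [4 3]]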

    jittor.misc.hypot(a, b)

    jittor.misc.index_fill_(x, dim, indexs, val)

    Fills the elements of the input tensor with value val by selecting the indices in the order given in indexs.

    Args:

    x – the input tensor.

    dim – the dimension along which to index.

    indexs – the indices of the input tensor to fill in.

    val – the value to fill with.
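
    A minimal sketch under the Args above (treating the trailing underscore as an in-place hint is an assumption):

    >>> x = jt.ones((3, 3))
    >>> jt.index_fill_(x, 0, jt.array([0, 2]), -1.0)   # fill rows 0 and 2 with -1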

    jittor.misc.kthvalue(input, k, dim=None, keepdim=False)

    jittor.misc.log2(x)

    jittor.misc.make_grid(x, nrow=8, padding=2, normalize=False, range=None, scale_each=False, pad_value=0)

    jittor.misc.median(x, dim=None, keepdim=False)

    jittor.misc.meshgrid(*tensors)

    Takes N tensors, each of which can be a 1-dimensional vector, and creates N n-dimensional grids, where the i-th grid is defined by expanding the i-th input over dimensions defined by the other inputs.
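
    A minimal example; the exact indexing convention is assumed to follow the description above:

    >>> xs = jt.array([1, 2, 3])
    >>> ys = jt.array([4, 5])
    >>> gx, gy = jt.meshgrid(xs, ys)
    >>> print(gx.shape, gy.shape)
    [3,2,] [3,2,]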

    jittor.misc.nms(dets, thresh)

    dets (jt.array): an N×5 array whose rows are [x1, y1, x2, y2, score], i.e. dets[:,0]→x1, dets[:,1]→y1, dets[:,2]→x2, dets[:,3]→y2, dets[:,4]→score.
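
    A minimal sketch: two heavily overlapping boxes and one separate box; the return value is assumed to be the indices of the kept boxes:

    >>> dets = jt.array([[0., 0., 10., 10., 0.9],
    ...                  [1., 1., 10., 10., 0.8],
    ...                  [20., 20., 30., 30., 0.7]])
    >>> jt.nms(dets, 0.5)   # IoU threshold 0.5; expected to keep boxes 0 and 2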

    jittor.misc.nonzero(x)

    Returns the indexes of the elements of the input tensor which are not equal to zero.
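
    A minimal example; the result is assumed to contain one row of indexes per non-zero element:

    >>> x = jt.array([[0, 1], [2, 0]])
    >>> jt.nonzero(x)
    [[0 1]
    [1 0]]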

    jittor.misc.normalize(input, p=2, dim=1, eps=1e-12)

    Performs L_p normalization of the input over the specified dimension.

    Args:

    input – input array of any shape

    p (float) – the exponent value in the norm formulation. Default: 2

    dim (int) – the dimension to reduce. Default: 1

    eps (float) – small value to avoid division by zero. Default: 1e-12

    Example:

    >>> x = jt.random((6,3))

    [[0.18777736 0.9739261  0.77647036]

    [0.13710196 0.27282116 0.30533272]

    [0.7272278  0.5174613  0.9719775 ]

    [0.02566639 0.37504175 0.32676998]

    [0.0231761  0.5207773  0.70337296]

    [0.58966476 0.49547017 0.36724383]]

    >>> jt.normalize(x)

    [[0.14907198 0.7731768  0.61642134]

    [0.31750825 0.63181424 0.7071063 ]

    [0.5510936  0.39213243 0.736565  ]

    [0.05152962 0.7529597  0.656046  ]

    [0.02647221 0.59484214 0.80340654]

    [0.6910677  0.58067477 0.4303977 ]]

    jittor.misc.python_pass_warper(mod_func, args, kw)

    jittor.misc.rad2deg(x)

    jittor.misc.randperm(n, dtype='int64')

    jittor.misc.repeat(x, *shape)

    Repeats this var along the specified dimensions.

    Args:

    x (var): jittor var.

    shape (tuple): int or tuple. The number of times to repeat this var along each dimension.

    Example:

    >>> x = jt.array([1, 2, 3])

    >>> x.repeat(4, 2)

    [[ 1,  2,  3,  1,  2,  3],

    [ 1,  2,  3,  1,  2,  3],

    [ 1,  2,  3,  1,  2,  3],

    [ 1,  2,  3,  1,  2,  3]]

    >>> x.repeat(4, 2, 1).size()

    [4, 2, 3,]

    jittor.misc.repeat_interleave(x, repeats, dim=None)

    jittor.misc.save_image(x, filepath, nrow: int = 8, padding: int = 2, normalize: bool = False, range=None, scale_each=False, pad_value=0, format=None)

    jittor.misc.searchsorted(sorted, values, right=False)

    Finds the indices in the innermost dimension of sorted at which each element of values would be inserted to keep the order.

    Example:

    sorted = jt.array([[1, 3, 5, 7, 9], [2, 4, 6, 8, 10]])

    values = jt.array([[3, 6, 9], [3, 6, 9]])

    ret = jt.searchsorted(sorted, values)

    assert (ret == [[1, 3, 4], [1, 2, 4]]).all(), ret

     

    ret = jt.searchsorted(sorted, values, right=True)

    assert (ret == [[2, 3, 5], [1, 3, 4]]).all(), ret

     

    sorted_1d = jt.array([1, 3, 5, 7, 9])

    ret = jt.searchsorted(sorted_1d, values)

    assert (ret == [[1, 3, 4], [1, 3, 4]]).all(), ret

    jittor.misc.split(d, split_size, dim)

    Splits the tensor into chunks. Each chunk is a view of the original tensor.

    If split_size is an integer type, then tensor will be split into equally sized chunks (if possible). Last chunk will be smaller if the tensor size along the given dimension dim is not divisible by split_size.

    If split_size is a list, then the tensor will be split into len(split_size) chunks whose sizes along dim are given by split_size.

    Args:

    d (Tensor) – tensor to split.

    split_size (int) or (list(int)) – size of a single chunk or list of sizes for each chunk

    dim (int) – dimension along which to split the tensor.
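
    A minimal example of both calling styles (equal chunks and explicit section sizes):

    >>> x = jt.random((10, 3))
    >>> a, b = jt.split(x, 5, 0)
    >>> print(a.shape, b.shape)
    [5,3,] [5,3,]
    >>> a, b = jt.split(x, [3, 7], 0)
    >>> print(a.shape, b.shape)
    [3,3,] [7,3,]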

    jittor.misc.stack(x, dim=0)

    Concatenates sequence of vars along a new dimension.

    All vars need to be of the same size.

    Args:

    x (sequence of vars) – sequence of vars to concatenate.

    dim (int) – dimension to insert. Has to be between 0 and the number of dimensions of concatenated vars (inclusive).

    Example:

    >>> a1 = jt.array([[1,2,3]])

    >>> a2 = jt.array([[4,5,6]])

    >>> jt.stack([a1, a2], 0)

    [[[1 2 3]]

    [[4 5 6]]]

    jittor.misc.t(x)

    jittor.misc.tolist(x)

    jittor.misc.topk(input, k, dim=None, largest=True, sorted=True)
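
    No example is attached to topk here; a minimal sketch, assuming it returns a (values, indices) pair like its PyTorch counterpart:

    >>> x = jt.array([1, 3, 2, 5, 4])
    >>> values, indices = jt.topk(x, 2)   # expected values: 5, 4 at indices 3, 4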

    jittor.misc.triu_(x, diagonal=0)

    Returns the upper triangular part of a matrix (2-D tensor) or a batch of matrices; the other elements of the result are set to 0.

    The upper triangular part of the matrix is defined as the elements on and above the diagonal.

    Args:

    x – the input tensor.

    diagonal – the diagonal to consider. Default: 0.
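
    A minimal example with the default diagonal (print formatting may differ):

    >>> x = jt.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
    >>> jt.triu_(x)
    [[1 2 3]
    [0 5 6]
    [0 0 9]]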

    jittor.misc.unbind(x, dim=0)

    Removes a var dimension.

    Returns a tuple of all slices along a given dimension, already without it.

    Args:

    input (var) – the var to unbind

    dim (int) – dimension to remove

    Example:

    a = jt.random((3,3))

    b = jt.unbind(a, 0)

    jittor.misc.unique(x)

    Returns the unique elements of the input tensor.

    Args:

    x – the input tensor.
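
    A minimal example; whether the result is sorted is an assumption:

    >>> jt.unique(jt.array([1, 3, 2, 3, 1]))
    [1 2 3]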

    jittor.misc.view_as(x, y)
