  • Acceleration for ML: Paper Reading Notes

    Energy efficient parallel neuromorphic architectures with approximate arithmetic on FPGA

    Motivation

    To address the slow operation, high energy consumption, and heavy resource usage that arise when spiking neural networks (SNNs) are realized in software.

    Problem

    1. Software: slow operation, high energy consumption, and large space/resource requirements.
    2. Analog circuits: hard to reconfigure and intrinsically sensitive to process, voltage, and temperature (PVT) variations.
    3. FPGA: most existing works focus on accelerating SNNs without considering energy consumption or the efficiency of resource utilization.
    4. This work presents parallel neuromorphic processor architectures with approximate arithmetic for SNNs on FPGA (a toy sketch of the idea follows below).

    There is no related-work section in this paper.
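
    As a concrete (though simplified) illustration of what "approximate arithmetic for SNNs" can mean, the sketch below shows a leaky integrate-and-fire (LIF) neuron update in which the leak multiplication truncates low-order bits, trading a little accuracy for cheaper hardware. The Q8.8 fixed-point format, bit widths, and function names here are assumptions for illustration only, not the paper's actual design.

    # Minimal sketch (assumed Q8.8 fixed point; illustrative only, not the paper's design).
    def approx_mul(a, b, frac_bits=8, drop_bits=4):
        """Fixed-point multiply that discards the lowest `drop_bits` of the
        partial product, mimicking a cheaper approximate multiplier."""
        prod = (a * b) >> drop_bits              # truncate low-order bits early
        return prod >> (frac_bits - drop_bits)   # rescale back to Q8.8

    def lif_step(v, spikes_in, weights, leak, threshold):
        """One LIF time step: integrate weighted input spikes, apply an
        approximate leaky decay, fire and reset when the threshold is crossed."""
        for s, w in zip(spikes_in, weights):
            if s:                                # inputs are binary spike events
                v += w                           # integrate synaptic weight
        v = approx_mul(v, leak)                  # leaky decay via approximate multiply
        fired = v >= threshold
        if fired:
            v = 0                                # reset membrane potential
        return v, fired

    # Toy usage (values in Q8.8): two input spikes push the neuron over threshold.
    leak = int(0.9 * 256)
    threshold = int(2.0 * 256)
    v, fired = lif_step(0, [1, 0, 1], [int(0.8 * 256), 0, int(1.5 * 256)], leak, threshold)
    print(v, fired)   # -> 0 True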


    In-Datacenter Performance Analysis of a Tensor Processing Unit

    Motivation

    This paper evaluates a custom ASIC, called a Tensor Processing Unit (TPU), that accelerates the inference phase of neural networks (NNs).

    Problem

    Many NN applications have hard response-time deadlines, so the inference phase must respond quickly to user actions, while CPUs and GPUs, which are tuned for throughput, respond poorly under such latency constraints.

    All related works focus on hardware processing, such as DRAM, hardware protocols, and so on.
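
    For context, the core operation the TPU's matrix unit accelerates is 8-bit integer multiply-accumulate with wide accumulators. The sketch below is a minimal NumPy illustration of that kind of quantized inference arithmetic; it is not the TPU's systolic-array implementation, and the 256x256 tile size and function name are assumptions chosen to echo the paper's matrix-unit dimensions.

    # Minimal sketch of 8-bit quantized matrix-vector inference arithmetic
    # (illustrative only; not the TPU's actual systolic-array implementation).
    import numpy as np

    def int8_matvec(W, x):
        """int8 weights and activations with int32 accumulation, so the 8-bit
        products cannot overflow, as in typical quantized NN inference."""
        assert W.dtype == np.int8 and x.dtype == np.int8
        return W.astype(np.int32) @ x.astype(np.int32)

    rng = np.random.default_rng(0)
    W = rng.integers(-128, 127, size=(256, 256), dtype=np.int8)   # weight tile
    x = rng.integers(-128, 127, size=256, dtype=np.int8)          # activation vector
    y = int8_matvec(W, x)       # int32 partial sums, later rescaled back to int8
    print(y.dtype, y.shape)     # -> int32 (256,)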
