  • Various Optimization Algorithms For Training Neural Network

    From: https://towardsdatascience.com/optimizers-for-training-neural-network-59450d71caf6

    Optimizers help training converge to good results faster

    Gradient Descent

    Stochastic Gradient Descent

    Mini-Batch Gradient Descent
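    The three variants above differ only in how much data each step uses: the full dataset (batch gradient descent), a single example (SGD), or a small random subset (mini-batch). A minimal NumPy sketch on a toy least-squares problem; the data, learning rate, and batch size are illustrative, not from the article:

```python
import numpy as np

# Toy least-squares problem: find w minimizing mean((x*w - y)^2).
# The true weight is 2.0; all constants here are illustrative.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x

def grad(w, xb, yb):
    # Gradient of the mean squared error over a batch.
    return 2.0 * np.mean((xb * w - yb) * xb)

# Batch gradient descent: every step uses the full dataset.
w = 0.0
for _ in range(100):
    w -= 0.1 * grad(w, x, y)

# Mini-batch gradient descent: every step uses a random subset.
# (Stochastic gradient descent is the special case size=1.)
w_mb = 0.0
for _ in range(300):
    idx = rng.choice(len(x), size=32, replace=False)
    w_mb -= 0.1 * grad(w_mb, x[idx], y[idx])
```

    Mini-batch trades the exact gradient for a cheap noisy estimate, which is why it needs more iterations here.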

    Momentum
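    Momentum accumulates a velocity from past gradients so steps build up along directions the gradient keeps agreeing on. A pure-Python sketch on a 1-D quadratic; lr and beta are illustrative choices:

```python
# Momentum on f(w) = (w - 3)^2; the velocity v is a decaying
# accumulation of past gradients (lr and beta are illustrative).
def grad(w):
    return 2.0 * (w - 3.0)

w, v = 0.0, 0.0
lr, beta = 0.1, 0.9
for _ in range(200):
    v = beta * v + grad(w)   # accumulate velocity
    w = w - lr * v           # step along the velocity
```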

    Nesterov Accelerated Gradient

    [Figure: NAG vs. momentum at a local minimum]
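    NAG differs from plain momentum only in where the gradient is evaluated: at the look-ahead point w - lr*beta*v rather than at w, which lets it "brake" before overshooting. A sketch under the same toy setup as above (all constants illustrative):

```python
# Nesterov accelerated gradient on f(w) = (w - 3)^2: the gradient
# is taken at the look-ahead point w - lr*beta*v instead of at w.
def grad(w):
    return 2.0 * (w - 3.0)

w, v = 0.0, 0.0
lr, beta = 0.1, 0.9
for _ in range(200):
    v = beta * v + grad(w - lr * beta * v)  # look-ahead gradient
    w = w - lr * v
```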

    Adagrad

    [Equation: the derivative of the loss function for the given parameters at time step t]

    [Equation: the parameter update for input i at time/iteration t]
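    Adagrad divides each parameter's step by the square root of its accumulated squared gradients, so frequently updated parameters receive smaller and smaller steps. A 1-D sketch of the standard update rule; lr and eps are illustrative:

```python
import math

# Adagrad on f(w) = (w - 3)^2: G accumulates squared gradients,
# so the effective step size shrinks as updates accumulate.
def grad(w):
    return 2.0 * (w - 3.0)

w, G = 0.0, 0.0
lr, eps = 0.5, 1e-8
for _ in range(500):
    g = grad(w)
    G += g * g                          # accumulated squared gradients
    w -= lr * g / (math.sqrt(G) + eps)  # adaptively scaled step
```

    The weakness this exposes is that G only grows, so the step size can decay to nothing; AdaDelta below addresses exactly that.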

    AdaDelta

    [Equation: the AdaDelta parameter update]
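    AdaDelta replaces Adagrad's ever-growing accumulator with decaying averages of squared gradients (Eg) and squared updates (Ed), and uses their ratio as the step size, so no global learning rate is needed. A 1-D sketch; rho and eps are illustrative:

```python
import math

# AdaDelta on f(w) = (w - 3)^2: the ratio of the running RMS of
# past updates (Ed) to that of past gradients (Eg) sets the step.
def grad(w):
    return 2.0 * (w - 3.0)

w = 0.0
Eg, Ed = 0.0, 0.0            # E[g^2] and E[dw^2]
rho, eps = 0.95, 1e-4
for _ in range(2000):
    g = grad(w)
    Eg = rho * Eg + (1 - rho) * g * g
    dw = -(math.sqrt(Ed + eps) / math.sqrt(Eg + eps)) * g
    Ed = rho * Ed + (1 - rho) * dw * dw
    w += dw
```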

    Adam

    [Equation: first- and second-moment estimates of the gradient]

    [Equation: the Adam parameter update]
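    Adam combines a momentum-style first-moment estimate with an RMS-style second-moment estimate, both bias-corrected for their zero initialization. A 1-D sketch; b1 and b2 are the common defaults, while the learning rate here is an illustrative choice:

```python
import math

# Adam on f(w) = (w - 3)^2: m and v are bias-corrected running
# estimates of the gradient's first and second moments.
def grad(w):
    return 2.0 * (w - 3.0)

w, m, v = 0.0, 0.0, 0.0
lr, b1, b2, eps = 0.1, 0.9, 0.999, 1e-8
for t in range(1, 501):
    g = grad(w)
    m = b1 * m + (1 - b1) * g        # first moment (mean)
    v = b2 * v + (1 - b2) * g * g    # second moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)        # bias correction
    v_hat = v / (1 - b2 ** t)
    w -= lr * m_hat / (math.sqrt(v_hat) + eps)
```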

    Comparison between various optimizers

    [Figure: comparison 1]

    [Figure: comparison 2]

    Conclusions

    Source: http://www.cnblogs.com/lightsong/. Copyright is shared by the author and cnblogs. Reposting is welcome, but unless the author agrees otherwise, this notice must be retained and a prominent link to the original article must be given on the reposted page.
  • Original post: https://www.cnblogs.com/lightsong/p/14643083.html