  • Andrew Ng's Deep Learning Specialization - Course 2 Quiz (Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization) - Week 1

    Week 1 Quiz - Practical aspects of deep learning

    1. If you have 10,000,000 examples, how would you split the train/dev/test set?

    【 】98% train, 1% dev, 1% test

    Answer
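    At this scale, the dev and test sets only need to be large enough to give a confident estimate of performance, so 1% (100,000 examples) each is plenty. A minimal sketch of such a split in NumPy (the array sizes and feature count are illustrative, not part of the quiz):

```python
import numpy as np

# Illustrative dataset; only the 98/1/1 ratios matter here.
m = 100_000
X = np.random.rand(m, 20)
y = np.random.randint(0, 2, size=m)

# Shuffle once so train/dev/test all come from the same distribution.
rng = np.random.default_rng(seed=0)
idx = rng.permutation(m)

n_train = int(0.98 * m)  # 98% train
n_dev = int(0.01 * m)    # 1% dev; the remainder is the 1% test set

X_train, y_train = X[idx[:n_train]], y[idx[:n_train]]
X_dev, y_dev = X[idx[n_train:n_train + n_dev]], y[idx[n_train:n_train + n_dev]]
X_test, y_test = X[idx[n_train + n_dev:]], y[idx[n_train + n_dev:]]
```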

    2. The dev and test set should:

    【 】Come from the same distribution

    Answer

    3. If your neural network model seems to have high variance, which of the following would be promising things to try?

    【 】Add regularization

    【 】Get more training data

    Answer

    All of the above are correct

    4. You are working on an automated check-out kiosk for a supermarket, and are building a classifier for apples, bananas and oranges. Suppose your classifier obtains a training set error of 0.5%, and a dev set error of 7%. Which of the following are promising things to try to improve your classifier? (Check all that apply.)

    【 】Increase the regularization parameter lambda

    【 】Get more training data

    Answer

    All of the above are correct

    5. What is weight decay?

    【 】A regularization technique (such as L2 regularization) that results in gradient descent shrinking the weights on every iteration.

    Answer
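    To see where the name comes from, add the L2 penalty to the cost and write out one gradient descent step (standard course notation; this derivation is a sketch, not quoted from the quiz):

```latex
J_{\text{reg}} = J + \frac{\lambda}{2m} \sum_{l} \|W^{[l]}\|_F^2
\quad\Rightarrow\quad
W^{[l]} := W^{[l]} - \alpha \left( dW^{[l]}_{\text{unreg}} + \frac{\lambda}{m} W^{[l]} \right)
         = \left( 1 - \frac{\alpha \lambda}{m} \right) W^{[l]} - \alpha \, dW^{[l]}_{\text{unreg}}
```

    Since (1 - αλ/m) < 1, every iteration multiplies the weights by a factor slightly below one before the usual gradient step, which is exactly the "decay". This also explains question 6: the larger λ is, the stronger the shrink toward 0.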

    6. What happens when you increase the regularization hyperparameter lambda?

    【 】Weights are pushed toward becoming smaller (closer to 0)

    Answer

    7. With the inverted dropout technique, at test time:

    【 】You do not apply dropout (do not randomly eliminate units) and do not keep the 1/keep_prob factor in the calculations used in training

    Answer
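    A minimal sketch of inverted dropout for one layer's activations (plain NumPy, hypothetical function names). The key point is that the 1/keep_prob rescaling happens at training time, so the test-time forward pass needs no correction at all:

```python
import numpy as np

def forward_train(a_prev, keep_prob=0.8):
    """Inverted dropout at training time for one layer's activations."""
    # Randomly keep each unit with probability keep_prob.
    d = np.random.rand(*a_prev.shape) < keep_prob
    a = a_prev * d
    # Scale up by 1/keep_prob so the expected activation is unchanged.
    a /= keep_prob
    return a

def forward_test(a_prev):
    """At test time: no dropout mask and no 1/keep_prob factor."""
    return a_prev

# Expected activations match between train and test.
a = np.ones((4, 5))
print(forward_train(a).mean())  # close to 1.0 on average
print(forward_test(a).mean())   # exactly 1.0
```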

    8. Increasing the parameter keep_prob from (say) 0.5 to 0.6 will likely cause the following: (Check the two that apply)

    【 】Reducing the regularization effect

    【 】Causing the neural network to end up with a lower training set error

    Answer

    All of the above are correct

    9. Which of these techniques are useful for reducing variance (reducing overfitting)? (Check all that apply.)

    【 】Dropout

    【 】L2 regularization

    【 】Data augmentation

    Answer

    All of the above are correct
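    All three shrink the gap between training and dev error. As a concrete illustration of data augmentation (a hypothetical image batch, not course code), horizontally flipping images doubles the training data at no labeling cost:

```python
import numpy as np

# Hypothetical batch of images: (num_examples, height, width, channels).
images = np.random.rand(32, 64, 64, 3)
labels = np.random.randint(0, 3, size=32)   # e.g. apple/banana/orange

# Horizontal flip (reverse the width axis) creates new, equally valid examples.
flipped = images[:, :, ::-1, :]

augmented_images = np.concatenate([images, flipped], axis=0)
augmented_labels = np.concatenate([labels, labels], axis=0)
print(augmented_images.shape)  # (64, 64, 64, 3): twice as much training data
```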

    10. Why do we normalize the inputs x?

    【 】It makes the cost function faster to optimize

    Answer
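    With unnormalized features on very different scales, the cost surface is elongated and gradient descent is forced to use a small learning rate; zero-mean, unit-variance inputs make the surface much more symmetric. A minimal sketch (the feature matrix here is an assumption for illustration):

```python
import numpy as np

# Hypothetical inputs: rows are examples, columns are features on very
# different scales (one in [0, 1], one in the thousands).
X = np.column_stack([np.random.rand(1000), 1000 * np.random.rand(1000)])

mu = X.mean(axis=0)        # per-feature mean
sigma = X.std(axis=0)      # per-feature standard deviation
X_norm = (X - mu) / sigma  # zero mean, unit variance per feature

# Important: reuse the *training* mu and sigma to normalize dev/test data,
# so all sets go through the identical transformation.
```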



    Week 1 Code Assignments:

    ✧Course 2 - Improving Deep Neural Networks - Week 1: Practical aspects of deep learning

    assignment1_1: Initialization

    https://github.com/phoenixash520/CS230-Code-assignments

    assignment1_2: Regularization

    https://github.com/phoenixash520/CS230-Code-assignments

    assignment1_3: Gradient Checking

    https://github.com/phoenixash520/CS230-Code-assignments
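    As a taste of what the Gradient Checking assignment implements (this sketch uses a toy cost, not the assignment's actual model), the idea is to compare the analytic gradient against a centered finite-difference approximation:

```python
import numpy as np

def J(theta):
    """Toy cost: J(theta) = theta_0^2 + 3*theta_1."""
    return theta[0] ** 2 + 3 * theta[1]

def grad_J(theta):
    """Analytic gradient of the toy cost."""
    return np.array([2 * theta[0], 3.0])

def gradient_check(theta, eps=1e-7):
    """Centered difference: (J(theta+eps) - J(theta-eps)) / (2*eps) per component."""
    approx = np.zeros_like(theta)
    for i in range(theta.size):
        plus, minus = theta.copy(), theta.copy()
        plus[i] += eps
        minus[i] -= eps
        approx[i] = (J(plus) - J(minus)) / (2 * eps)
    analytic = grad_J(theta)
    # Relative difference; ~1e-7 or smaller means the gradient is correct.
    return np.linalg.norm(analytic - approx) / (np.linalg.norm(analytic) + np.linalg.norm(approx))

print(gradient_check(np.array([1.5, -2.0])))
```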
