  • Chinese-English glossary of statistical terms

    奥卡姆剃刀:Occam’s razor
    半监督学习:semi-supervised learning
    标注:tagging
    不完全数据:incomplete-data
    参数空间:parameter space
    残差:residual
    测试集:test set
    测试数据:test data
    测试误差:test error
    策略:strategy
    成对马尔可夫性:pairwise Markov property
    词性标注:part of speech tagging
    代价函数:cost function
    代理损失函数:surrogate loss function
    带符号的距离:signed distance
    动态规划:dynamic programming
    对偶算法:dual algorithm
    对偶问题:dual problem
    对数几率:log odds
    对数似然损失函数:log-likelihood loss function
    对数损失函数:logarithmic loss function
    对数线性模型:log linear model
    多数表决规则:majority voting rule
    多项逻辑斯蒂回归模型:multinomial logistic regression model
    多项式核函数:polynomial kernel function
    二项逻辑斯蒂回归模型:binomial logistic regression model
    罚项:penalty term
    泛化能力:generalization ability
    泛化误差:generalization error
    泛化误差上界:generalization error bound
    非监督学习:unsupervised learning
    非线性支持向量机:non-linear support vector machine
    分类:classification
    分类器:classifier
    分类与回归树:classification and regression tree, CART
    分离超平面:separating hyperplane
    风险函数:risk function
    改进的迭代尺度法:improved iterative scaling, IIS
    概率近似正确:probably approximately correct, PAC
    概率无向图模型:probabilistic undirected graphical model
    感知机:perceptron
    高斯核函数:Gaussian kernel function
    高斯混合模型:Gaussian mixture model
    根节点:root node
    估计误差:estimation error
    观测变量:observable variable
    观测序列:observation sequence
    广义拉格朗日函数:generalized Lagrange function
    广义期望极大:generalized expectation maximization, GEM
    规范化因子:normalization factor
    过拟合:over-fitting
    海赛矩阵:Hessian matrix
    函数间隔:functional margin
    合页损失函数:hinge loss function
    核方法:kernel method
    核函数:kernel function
    互信息:mutual information
    划分:partition
    回归:regression
    基尼指数:Gini index
    极大-极大算法:maximization-maximization algorithm
    极大似然估计:maximum likelihood estimation
    几何间隔:geometric margin
    几率:odds
    加法模型:additive model
    假设空间:hypothesis space
    间隔:margin
    监督学习:supervised learning
    剪枝:pruning
    交叉验证:cross validation
    节点:node
    结构风险最小化:structural risk minimization, SRM
    解码:decoding
    近似误差:approximation error
    经验风险:empirical risk
    经验风险最小化:empirical risk minimization, ERM
    经验熵:empirical entropy
    经验损失:empirical loss
    经验条件熵:empirical conditional entropy
    精确率:precision
    径向基函数:radial basis function
    局部马尔科夫性:local Markov property
    决策函数:decision function
    决策树:decision tree
    决策树桩:decision stump
    绝对损失函数:absolute loss function
    拉格朗日乘子:Lagrange multiplier
    拉格朗日对偶性:Lagrange duality
    拉格朗日函数:Lagrange function
    拉普拉斯平滑:Laplace smoothing
    类:class
    类标记:class label
    留一交叉验证:leave-one-out cross validation
    逻辑斯蒂分布:logistic distribution
    逻辑斯蒂回归:logistic regression
    马尔科夫随机场:Markov random field
    曼哈顿距离:Manhattan distance
    模型:model
    模型选择:model selection
    内部节点:internal node
    纳特:nat
    拟牛顿法:quasi-Newton method
    牛顿法:Newton method
    欧氏距离:Euclidean distance
    判别方法:discriminative approach
    判别模型:discriminative model
    偏置:bias
    平方损失函数:quadratic loss function
    评价准则:evaluation criterion
    朴素贝叶斯:naïve Bayes
    朴素贝叶斯算法:naïve Bayes algorithm
    期望极大算法(EM算法):expectation maximization algorithm
    期望损失:expected loss
    前向分步算法:forward stagewise algorithm
    前向-后向算法:forward-backward algorithm
    潜在变量:latent variable
    强化学习:reinforcement learning
    强可学习:strongly learnable
    切分变量:splitting variable
    切分点:splitting point
    全局马尔科夫性:global Markov property
    权值:weight
    权值向量:weight vector
    软间隔最大化:soft margin maximization
    弱可学习:weakly learnable
    熵:entropy
    生成方法:generative approach
    生成模型:generative model
    实例:instance
    势函数:potential function
    输出空间:output space
    输入空间:input space
    数据:data
    算法:algorithm
    随机梯度下降法:stochastic gradient descent
    损失函数:loss function
    特异点:outlier
    特征函数:feature function
    特征空间:feature space
    特征向量:feature vector
    梯度提升:gradient boosting
    梯度下降法:gradient descent
    提升:boosting
    提升树:boosting tree
    提早停止:early stopping
    条件熵:conditional entropy
    条件随机场:conditional random field, CRF
    统计机器学习:statistical machine learning
    统计学习:statistical learning
    统计学习方法:statistical learning method
    统计学习理论:statistical learning theory
    统计学习应用:application of statistical learning
    凸二次规划:convex quadratic programming
    图:graph
    团:clique
    完全数据:complete-data
    维特比算法:Viterbi algorithm
    文本分类:text classification
    误差率:error rate
    希尔伯特空间:Hilbert space
    线性分类模型:linear classification model
    线性分类器:linear classifier
    线性可分数据集:linearly separable data set
    线性可分支持向量机:linear support vector machine in linearly separable case
    线性链:linear chain
    线性链条件随机场:linear chain conditional random field
    线性扫描:linear scan
    线性支持向量机:linear support vector machine
    信息增益:information gain
    信息增益比:information gain ratio
    序列最小最优化:sequential minimal optimization
    学习率:learning rate
    训练集:training set
    训练数据:training data
    训练误差:training error
    验证集:validation set
    叶节点:leaf node
    因子分解:factorization
    隐变量:hidden variable
    隐马尔可夫模型:hidden Markov model, HMM
    硬间隔最大化:hard margin maximization
    有向边:directed edge
    余弦相似度:cosine similarity
    预测:prediction
    原始问题:primal problem
    再生核希尔伯特空间:reproducing kernel Hilbert space, RKHS
    召回率:recall
    正定核函数:positive definite kernel function
    正则化:regularization
    正则化项:regularizer
    支持向量:support vector
    支持向量机:support vector machine, SVM
    指示函数:indicator function
    指数损失函数:exponential loss function
    中位数:median
    状态序列:state sequence
    准确率:accuracy
    字符串核函数:string kernel function
    最大后验概率估计:maximum posterior probability estimation, MAP
    最大间隔法:maximum margin method
    最大熵模型:maximum entropy model
    最大团:maximal clique
    最速下降法:steepest descent
    最小二乘法:least squares
    最小二乘回归树:least squares regression tree
    0-1损失函数:0-1 loss function
    AdaBoost算法:AdaBoost algorithm
    Baum-Welch算法:Baum-Welch algorithm
    BFGS算法:Broyden-Fletcher-Goldfarb-Shanno algorithm, BFGS algorithm
    Broyden类算法:Broyden’s algorithm
    C4.5算法:C4.5 algorithm
    DFP算法:Davidon-Fletcher-Powell algorithm
    EM算法:EM algorithm
    F函数:F function
    Gram矩阵:Gram matrix
    ID3算法:ID3 algorithm
    Jensen不等式:Jensen inequality
    kd树:kd tree
    k近邻法:k-nearest neighbor, k-NN
    L1范数:L1 norm
    L2范数:L2 norm
    Lp范数:Lp norm
    Minkowski距离:Minkowski distance
    Q函数、
    S形曲线:sigmoid curve
    S折交叉验证:S-fold cross validation
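
    To make a few of the entries above concrete, here is a minimal Python sketch, not part of the original glossary, of four of the loss functions it lists: the 0-1 loss, the hinge loss, the logarithmic (log-likelihood) loss, and the quadratic (squared) loss. It assumes binary labels y in {-1, +1} and a raw model score f; the function names are illustrative only.

    import math

    def zero_one_loss(y, f):
        # 0-1损失函数: 1 when the sign of the score disagrees with the label, else 0
        return 0.0 if y * f > 0 else 1.0

    def hinge_loss(y, f):
        # 合页损失函数: max(0, 1 - y*f), the loss used by linear support vector machines
        return max(0.0, 1.0 - y * f)

    def log_loss(y, f):
        # 对数损失函数: log(1 + exp(-y*f)), one common form for logistic regression with labels in {-1, +1}
        return math.log(1.0 + math.exp(-y * f))

    def squared_loss(y, f):
        # 平方损失函数: (y - f)^2, the loss minimized by least-squares regression
        return (y - f) ** 2

    if __name__ == "__main__":
        y, f = 1, -0.3  # a misclassified example: true label +1, negative score
        print(zero_one_loss(y, f))  # 1.0
        print(hinge_loss(y, f))     # 1.3
        print(log_loss(y, f))       # ~0.854
        print(squared_loss(y, f))   # 1.69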

  • Original post: https://www.cnblogs.com/Tdazheng/p/12097926.html