    Bennett acceptance ratio

    From Wikipedia, the free encyclopedia

    The Bennett acceptance ratio method (sometimes abbreviated to BAR) is an algorithm for estimating the difference in free energy between two systems (usually the two systems are simulated on a computer). It was suggested by Charles H. Bennett in 1976.[1]

    Preliminaries

    Take a system in a certain super (i.e. Gibbs) state. By performing a Metropolis Monte Carlo walk it is possible to sample the landscape of states that the system moves between, using the equation

    p(\text{State}_x \rightarrow \text{State}_y) = \min\left(e^{-\beta\,\Delta U},\, 1\right) = M(\beta\,\Delta U)

    where ΔU = U(State_y) − U(State_x) is the difference in potential energy, β = 1/kT (T is the temperature in kelvins, while k is the Boltzmann constant), and M(x) ≡ min(e^{−x}, 1) is the Metropolis function. The resulting states are then sampled according to the Boltzmann distribution of the super state at temperature T. Alternatively, if the system is dynamically simulated in the canonical ensemble (also called the NVT ensemble), the resulting states along the simulated trajectory are likewise distributed. Averaging along the trajectory (in either formulation) is denoted by angle brackets ⟨⋯⟩.
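    For concreteness, this sampling step can be written out in a few lines of Python. The following is only a minimal sketch, not part of the original article: the one-dimensional toy potential, the step size, and all names are illustrative choices.

        import numpy as np

        def metropolis_sample(potential, beta, n_steps=50_000, step_size=0.5, x0=0.0, rng=None):
            """Sample a 1-D system from the Boltzmann distribution exp(-beta*U(x))
            using the Metropolis acceptance rule min(exp(-beta*dU), 1)."""
            rng = np.random.default_rng() if rng is None else rng
            x = x0
            samples = np.empty(n_steps)
            for i in range(n_steps):
                x_trial = x + rng.uniform(-step_size, step_size)   # propose a trial move
                dU = potential(x_trial) - potential(x)             # energy change of the move
                if rng.random() < min(np.exp(-beta * dU), 1.0):    # the Metropolis function M(beta*dU)
                    x = x_trial                                    # accept the move
                samples[i] = x                                     # record the current state either way
            return samples

        # Toy usage: a harmonic potential U(x) = x^2/2 sampled at beta = 1
        samples_A = metropolis_sample(lambda x: 0.5 * x**2, beta=1.0)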

    Suppose that two super states of interest, A and B, are given. We assume that they have a common configuration space, i.e., they share all of their micro states, but the energies associated with these (and hence the probabilities) differ because of a change in some parameter (such as the strength of a certain interaction). The basic question to be addressed is, then: how can the Helmholtz free energy change (ΔF = F_B − F_A) on moving between the two super states be calculated from sampling in both ensembles? Note that the kinetic energy part of the free energy is equal between the states, so it can be ignored. Note also that the Gibbs free energy corresponds to the NpT ensemble.

    The general case

    Bennett shows that for every function f satisfying the condition f(x)/f(−x) ≡ e^{−x} (which is essentially the detailed balance condition), and for every energy offset C, one has the exact relationship

    e^{-\beta(\Delta F - C)} = \frac{\left\langle f\left(\beta(U_\text{B} - U_\text{A} - C)\right)\right\rangle_\text{A}}{\left\langle f\left(\beta(U_\text{A} - U_\text{B} + C)\right)\right\rangle_\text{B}}

    where U_A and U_B are the potential energies of the same configurations, calculated using potential function A (when the system is in super state A) and potential function B (when the system is in super state B), respectively.
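    As an illustration, this general relation can be turned directly into an estimator once the energy differences have been recorded along the two trajectories. The Python sketch below uses illustrative names not taken from Bennett's paper; dU_A and dU_B are assumed to hold U_B − U_A evaluated on samples drawn in ensembles A and B respectively.

        import numpy as np

        def bar_general(f, beta, dU_A, dU_B, C=0.0):
            """Estimate dF from the general Bennett relation, for any f with f(x)/f(-x) = exp(-x).

            dU_A : array of U_B - U_A evaluated on configurations sampled in ensemble A
            dU_B : array of U_B - U_A evaluated on configurations sampled in ensemble B
            """
            numerator   = np.mean(f(beta * (dU_A - C)))    # <f(beta*(U_B - U_A - C))>_A
            denominator = np.mean(f(beta * (C - dU_B)))    # <f(beta*(U_A - U_B + C))>_B
            ratio = numerator / denominator                # estimates exp(-beta*(dF - C))
            return C - np.log(ratio) / beta                # solve for dF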

    The basic case

    Substituting for f the Metropolis function defined above (which satisfies the detailed balance condition), and setting C to zero, gives

    e^{-\beta\,\Delta F} = \frac{\left\langle M\left(\beta(U_\text{B} - U_\text{A})\right)\right\rangle_\text{A}}{\left\langle M\left(\beta(U_\text{A} - U_\text{B})\right)\right\rangle_\text{B}}

    The advantage of this formulation (apart from its simplicity) is that it can be computed without performing two simulations, one in each specific ensemble. Indeed, it is possible to define an extra kind of "potential switching" Metropolis trial move (taken every fixed number of steps), such that the single sampling from the "mixed" ensemble suffices for the computation.
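    One possible reading of this trick, sketched below in Python, is to treat the active potential as an extra label that a Metropolis move may flip; the fraction of simulation time spent under each potential then estimates Z_B/Z_A = e^{−βΔF}. This is an illustrative interpretation with made-up names, not code from Bennett's paper, and it only works well when the two super states overlap strongly.

        import numpy as np

        def try_potential_switch(x, label, U_A, U_B, beta, rng):
            """Attempt a 'potential switching' Metropolis trial move: propose flipping
            which potential (A or B) currently governs the fixed configuration x."""
            dU = (U_B(x) - U_A(x)) if label == "A" else (U_A(x) - U_B(x))
            if rng.random() < min(np.exp(-beta * dU), 1.0):   # same Metropolis acceptance rule
                label = "B" if label == "A" else "A"          # accept the switch
            return label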

    The most efficient case

    Bennett explores which specific expression for ΔF is the most efficient, in the sense of yielding the smallest standard error for a given simulation time. He shows that the optimal choice is to take

    1. f(x) ≡ 1/(1 + e^x), which is essentially the Fermi–Dirac distribution (and indeed satisfies the detailed balance condition).
    2. C ≈ ΔF. This value is, of course, not known (it is exactly what one is trying to compute), but it can be chosen approximately in a self-consistent manner, as in the sketch below.
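    A minimal Python sketch of this self-consistent iteration, assuming equal numbers of roughly uncorrelated samples from the two ensembles; the arrays dU_A and dU_B hold U_B − U_A evaluated on samples from A and from B, and all names are illustrative:

        import numpy as np

        def fermi(x):
            return 1.0 / (1.0 + np.exp(x))                 # the optimal choice f(x) = 1/(1 + e^x)

        def bar_self_consistent(beta, dU_A, dU_B, tol=1e-8, max_iter=100):
            """Solve the BAR equation for dF, updating the offset C towards dF until convergence."""
            C = 0.0
            for _ in range(max_iter):
                num = np.mean(fermi(beta * (dU_A - C)))    # <f(beta*(U_B - U_A - C))>_A
                den = np.mean(fermi(beta * (C - dU_B)))    # <f(beta*(U_A - U_B + C))>_B
                dF = C - np.log(num / den) / beta          # BAR estimate for the current C
                if abs(dF - C) < tol:
                    return dF
                C = dF                                     # self-consistent update C -> dF
            return dF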

    Some assumptions needed for the efficiency are the following:

    1. The densities of the two super states (in their common configuration space) should have a large overlap. Otherwise, a chain of super states between A and B may be needed, such that the overlap of each two consecutive super states is adequate.
    2. The sample size should be large. In particular, as successive states are correlated, the simulation time should be much larger than the correlation time.
    3. The cost of simulating both ensembles should be approximately equal, in which case the system is sampled roughly equally in both super states. Otherwise, the optimal expression for C is modified, and the sampling should devote equal amounts of simulation time (rather than equal numbers of time steps) to the two ensembles.

    Multistate Bennett acceptance ratio

    The multistate Bennett acceptance ratio (MBAR) is a generalization of the Bennett acceptance ratio that calculates the (relative) free energies of several super states at once. It essentially reduces to the BAR method when only two super states are involved.

    Relation to other methods

    The perturbation theory method

    This method, also called free energy perturbation (or FEP), involves sampling from state A only. It requires that all the high-probability configurations of super state B be contained within the high-probability configurations of super state A, which is a much more stringent requirement than the overlap condition stated above.

    The exact (infinite order) result

    e^{-\beta\,\Delta F} = \left\langle e^{-\beta(U_\text{B} - U_\text{A})} \right\rangle_\text{A}

    or

    \Delta F = -kT \cdot \log\left\langle e^{\beta(U_\text{A} - U_\text{B})} \right\rangle_\text{A}

    This exact result can be obtained from the general BAR method, using (for example) the Metropolis function, in the limit C → −∞. Indeed, in that case, the denominator of the general-case expression above tends to 1, while the numerator tends to e^{βC} ⟨e^{−β(U_B − U_A)}⟩_A. A direct derivation from the definitions is more straightforward, though.
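    In code, this exponential-averaging (FEP) estimator needs only the energy differences recorded in ensemble A. A hedged Python sketch (illustrative names), with the usual log-sum-exp shift to avoid overflow:

        import numpy as np

        def fep_estimate(beta, dU_A):
            """Exponential-averaging (FEP) estimate of dF using samples from ensemble A only.
            dU_A : array of U_B - U_A evaluated on configurations sampled in ensemble A."""
            x = -beta * np.asarray(dU_A)
            x_max = x.max()                                        # shift for numerical stability
            log_mean_exp = x_max + np.log(np.mean(np.exp(x - x_max)))
            return -log_mean_exp / beta                            # dF = -(1/beta) * ln <exp(-beta*dU)>_A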

    The second order (approximate) result

    Assuming that U_B − U_A ≪ kT and Taylor-expanding the second exact perturbation theory expression above to second order, one gets the approximation

    \Delta F \approx \left\langle U_\text{B} - U_\text{A} \right\rangle_\text{A} - \frac{\beta}{2}\left( \left\langle (U_\text{B} - U_\text{A})^{2} \right\rangle_\text{A} - \left\langle U_\text{B} - U_\text{A} \right\rangle_\text{A}^{2} \right)

    Note that the first term is the expected value of the energy difference, while the second is essentially its variance.
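    These two quantities translate directly into a short estimator; the following Python sketch (illustrative names) simply takes the sample mean and variance of U_B − U_A over the A trajectory:

        import numpy as np

        def second_order_estimate(beta, dU_A):
            """Second-order approximation: mean of U_B - U_A minus (beta/2) times its variance,
            both estimated from samples of ensemble A."""
            dU_A = np.asarray(dU_A)
            return dU_A.mean() - 0.5 * beta * dU_A.var()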

    The first order inequalities

    Using the convexity of the logarithm appearing in the exact perturbation-analysis result, together with Jensen's inequality, gives an inequality at the first (linear) order; combining it with the analogous result for the B ensemble gives the following version of the Gibbs–Bogoliubov inequality:

    \langle U_\text{B} - U_\text{A} \rangle_\text{B} \leq \Delta F \leq \langle U_\text{B} - U_\text{A} \rangle_\text{A}

    Note that the inequality agrees with the negative sign of the coefficient of the (positive) variance term in the second order result.

    The thermodynamic integration method

    Writing the potential energy as a function of a continuous parameter λ, with U_A = U(λ = 0) and U_B = U(λ = 1), one has the exact result

    \frac{\partial F(\lambda)}{\partial \lambda} = \left\langle \frac{\partial U(\lambda)}{\partial \lambda} \right\rangle_{\lambda}

    This can either be verified directly from the definitions, or obtained as the limit of the above Gibbs–Bogoliubov inequalities when A = λ⁺ and B = λ⁻. We can therefore write

    \Delta F = \int_{0}^{1} \left\langle \frac{\partial U(\lambda)}{\partial \lambda} \right\rangle_{\lambda} \, d\lambda

    which is the thermodynamic integration (or TI) result. It can be approximated by dividing the range between states A and B into many values of λ at which the expectation value is estimated, and performing numerical integration.
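    A hedged Python sketch of that numerical integration step (the λ grid and all names are illustrative; the per-λ averages would come from separate equilibrium simulations):

        import numpy as np

        def thermodynamic_integration(dU_dlambda_means, lambdas):
            """Estimate dF by integrating <dU/dlambda>_lambda over lambda in [0, 1].

            dU_dlambda_means : estimated <dU/dlambda> at each lambda value
            lambdas          : the lambda grid, e.g. np.linspace(0.0, 1.0, 11)
            """
            return np.trapz(dU_dlambda_means, lambdas)   # trapezoidal-rule estimate of dF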

    Implementation

    The Bennett acceptance ratio method is implemented in modern molecular dynamics packages such as GROMACS. Python-based code for MBAR and BAR is available for download at [2].

    References

    1. ^ Charles H. Bennett (1976). "Efficient estimation of free energy differences from Monte Carlo data". Journal of Computational Physics. 22: 245–268.

