    CACHE COHERENCE AND THE MESI PROTOCOL

    From Computer Organization and Architecture: Designing for Performance, Ninth Edition

    HARDWARE SOLUTIONS Hardware-based solutions are generally referred to as cache coherence protocols.
    These solutions provide dynamic recognition at run time of potential inconsistency
    conditions. Because the problem is only dealt with when it actually arises, there
    is more effective use of caches, leading to improved performance over a software
    approach. In addition, these approaches are transparent to the programmer and the
    compiler, reducing the software development burden.
    Hardware schemes differ in a number of particulars, including where the state
    information about data lines is held, how that information is organized, where
    coherence is enforced, and the enforcement mechanisms. In general, hardware schemes
    can be divided into two categories: directory protocols and snoopy protocols.

    DIRECTORY PROTOCOLS Directory protocols collect and maintain information
    about where copies of lines reside. Typically, there is a centralized controller that is
    part of the main memory controller, and a directory that is stored in main memory.
    The directory contains global state information about the contents of the various
    local caches. When an individual cache controller makes a request, the centralized
    controller checks and issues necessary commands for data transfer between
    memory and caches or between caches. It is also responsible for keeping the state
    information up to date; therefore, every local action that can affect the global state
    of a line must be reported to the central controller.
    Typically, the controller maintains information about which processors have
    a copy of which lines. Before a processor can write to a local copy of a line, it
    must request exclusive access to the line from the controller. Before granting this
    exclusive access, the controller sends a message to all processors with a cached
    copy of this line, forcing each processor to invalidate its copy. After receiving
    acknowledgments back from each such processor, the controller grants exclusive
    access to the requesting processor. When another processor tries to read a line
    that is exclusively granted to another processor, it will send a miss notification
    to the controller. The controller then issues a command to the processor
    holding that line that requires the processor to do a write back to main memory. The
    line may now be shared for reading by the original processor and the requesting
    processor.
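    As a rough sketch of the request/invalidate/acknowledge sequence just described, the following C++ fragment models a single directory entry. The class and method names (DirectoryEntry, requestExclusive, requestShared, and the message stubs) are illustrative assumptions, not part of any real machine's controller.

```cpp
#include <set>

// Hypothetical sketch of one directory entry tracking copies of a single line.
class DirectoryEntry {
    std::set<int> sharers;  // processors holding a readable copy
    int owner = -1;         // processor holding the line exclusively, -1 if none

    void sendInvalidate(int p) { /* send invalidate to p, await acknowledgment */ }
    void sendWriteBack(int p)  { /* command p to write the line back to memory */ }

public:
    // A processor must obtain exclusive access before writing its local copy.
    void requestExclusive(int requester) {
        for (int p : sharers)
            if (p != requester)
                sendInvalidate(p);      // every other copy is invalidated first
        sharers = {requester};
        owner = requester;              // exclusive access granted
    }

    // A read miss on a line that may be held exclusively elsewhere.
    void requestShared(int requester) {
        if (owner != -1 && owner != requester) {
            sendWriteBack(owner);       // owner writes the dirty line back to memory
            owner = -1;                 // the line may now be shared for reading
        }
        sharers.insert(requester);
    }
};
```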
    Directory schemes suffer from the drawbacks of a central bottleneck and the
    overhead of communication between the various cache controllers and the central
    controller. However, they are effective in large-scale systems that involve multiple
    buses or some other complex interconnection scheme.

    SNOOPY PROTOCOLS Snoopy protocols distribute the responsibility for
    maintaining cache coherence among all of the cache controllers in a multiprocessor.
    A cache must recognize when a line that it holds is shared with other caches.

    When an update action is performed on a shared cache line, it must be announced
    to all other caches by a broadcast mechanism. Each cache controller is able to
    “snoop” on the network to observe these broadcasted notifications, and react
    accordingly.
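    A minimal sketch of the snooping side is shown below, assuming a hypothetical BusMessage type that carries either an invalidate or an update notification; a real controller works on whole cache lines and tag arrays rather than the single-word map used here.

```cpp
#include <cstdint>
#include <unordered_map>

// Hypothetical broadcast message observed on the shared bus.
struct BusMessage {
    enum class Kind { Invalidate, Update } kind;
    uint64_t lineAddr;
    uint64_t newData;   // used only by Update messages
};

// Each cache controller snoops every broadcast and reacts if it holds the line.
class SnoopingCache {
    std::unordered_map<uint64_t, uint64_t> lines;  // lineAddr -> cached data (simplified)

public:
    void snoop(const BusMessage& msg) {
        auto it = lines.find(msg.lineAddr);
        if (it == lines.end())
            return;                                // not cached here: nothing to do
        if (msg.kind == BusMessage::Kind::Invalidate)
            lines.erase(it);                       // drop the copy; next access misses
        else
            it->second = msg.newData;              // write update: refresh in place
    }
};
```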
    Snoopy protocols are ideally suited to a bus-based multiprocessor, because
    the shared bus provides a simple means for broadcasting and snooping. However,
    because one of the objectives of the use of local caches is to avoid bus accesses, care
    must be taken that the increased bus traffic required for broadcasting and snooping
    does not cancel out the gains from the use of local caches.
    Two basic approaches to the snoopy protocol have been explored: write
    invalidate and write update (or write broadcast). With a write-invalidate protocol, there
    can be multiple readers but only one writer at a time. Initially, a line may be shared
    among several caches for reading purposes. When one of the caches wants to
    perform a write to the line, it first issues a notice that invalidates that line in the other
    caches, making the line exclusive to the writing cache. Once the line is exclusive, the
    owning processor can make cheap local writes until some other processor requires
    the same line.
    With a write-update protocol, there can be multiple writers as well as multiple
    readers. When a processor wishes to update a shared line, the word to be updated is
    distributed to all others, and caches containing that line can update it.
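    The writer's side of the two policies can be contrasted with a short sketch; busInvalidate, busUpdate, and writeLocal are assumed stand-ins for the broadcast mechanism and cache-array operations of a real controller.

```cpp
#include <cstdint>

// Stand-ins for the broadcast mechanism and the local cache array.
void busInvalidate(uint64_t lineAddr) { /* broadcast: other caches invalidate */ }
void busUpdate(uint64_t lineAddr, uint64_t word) { /* broadcast the new word */ }
void writeLocal(uint64_t lineAddr, uint64_t word) { /* write into the local cache */ }

// Write invalidate: make the line exclusive once, then write locally at no bus cost.
void writeWithInvalidate(uint64_t lineAddr, uint64_t word, bool lineIsShared) {
    if (lineIsShared)
        busInvalidate(lineAddr);   // other copies become invalid; line is now exclusive
    writeLocal(lineAddr, word);    // further writes are cheap local hits
}

// Write update: every write broadcasts the new word so all holders stay current.
void writeWithUpdate(uint64_t lineAddr, uint64_t word) {
    writeLocal(lineAddr, word);
    busUpdate(lineAddr, word);     // caches containing the line update their copies
}
```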
    Neither of these two approaches is superior to the other under all circum-
    stances. Performance depends on the number of local caches and the pattern of
    memory reads and writes. Some systems implement adaptive protocols that employ
    both write-invalidate and write-update mechanisms.
    The write-invalidate approach is the most widely used in commercial multiprocessor
    systems, such as the Pentium 4 and PowerPC. It marks the state of every
    cache line (using two extra bits in the cache tag) as modified, exclusive, shared, or
    invalid. For this reason, the write-invalidate protocol is called MESI. In the remainder
    of this section, we will look at its use among local caches across a multiprocessor.
    For simplicity, we do not examine the mechanisms involved in coordinating
    both the level 1 and level 2 caches locally while at the same time coordinating
    across the distributed multiprocessor. This would not add any new
    principles but would greatly complicate the discussion.
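    As a preview of the protocol, the following sketch encodes the four states and the processor-side transitions implied by the description above; the snooped (bus-side) transitions and the details of fetching and writing back lines are omitted, and the function names are illustrative.

```cpp
#include <cstdint>

// The four MESI states, held in two extra bits per cache line.
enum class MesiState : uint8_t { Modified, Exclusive, Shared, Invalid };

// Processor-side transition on a read of this line.
MesiState onLocalRead(MesiState s, bool otherCachesHoldLine) {
    if (s == MesiState::Invalid)                        // read miss: line is fetched
        return otherCachesHoldLine ? MesiState::Shared  // copies exist elsewhere
                                   : MesiState::Exclusive;
    return s;                                           // M, E, S: read hit, unchanged
}

// Processor-side transition on a write of this line.
MesiState onLocalWrite(MesiState s) {
    // From Shared, an invalidate must be broadcast first; from Exclusive the
    // upgrade is silent; from Invalid the line is first obtained exclusively.
    return MesiState::Modified;
}
```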
