  • How processor caches work

    Note: This article is a summary of my study of Igor Ostrovsky's blog post gallery-of-processor-cache-effects.

        The original post explains how CPU caches behave from a program's point of view; the discussion of cache lines in particular was eye-opening. It also walks through many examples that show how the processor reads from the cache, which left a deep impression on me.

        This summary is written in English, partly to exercise my practical English, and partly because I do not know how to translate many of the terms in the original into Chinese appropriately; keeping them in English reads more smoothly.

    This is a summary written after reading Igor Ostrovsky's blog post.

     

     

    Example 1: cache lines

    int[] arr = new int[64 * 1024 * 1024];
    
    // Loop 1
    for (int i = 0; i < arr.Length; i++) arr[i] *= 3;
    
    // Loop 2
    for (int i = 0; i < arr.Length; i += 16) arr[i] *= 3;

      The two loops take 80 ms and 78 ms respectively on the same machine, even though the second loop touches only every 16th element.

      The running time of these loops is dominated by the memory accesses to the array, not by the integer multiplications.

    for (int i = 0; i < arr.Length; i += K) arr[i] *= 3;

        The running time of this loop for different step values (K) barely changes while K goes from 1 to 16, and only starts to drop as K doubles beyond that.

        Today's CPUs do not access memory byte by byte. Instead, they fetch memory in chunks of 64 bytes, called cache lines.

        So proper alignment of data is important for reducing the number of cache lines touched.
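
    Below is a minimal sketch of this measurement (my own, not the code from the original post) that times the strided loop for a range of step values K; the timings stay roughly flat up to K = 16 and only then start to fall.

    // requires: using System; using System.Diagnostics;
    // arr is the 64M-element int[] from Loop 1 above.
    for (int k = 1; k <= 1024; k *= 2)
    {
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < arr.Length; i += k)
            arr[i] *= 3;                 // touch every K-th element
        sw.Stop();
        Console.WriteLine("K = {0,5}: {1} ms", k, sw.ElapsedMilliseconds);
    }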

     Example 2: cache sizes

        L1 caches are per-core.

        L2 caches are shared between pairs of cores.

        In this test environment, each core has a 32 KB L1 data cache and a 32 KB L1 instruction cache, and each pair of cores shares a 4 MB L2 cache.

    // arr is an int[] whose length is a power of two; the measurement is repeated for different array sizes.
    int steps = 64 * 1024 * 1024; // Arbitrary number of steps
    int lengthMod = arr.Length - 1;
    for (int i = 0; i < steps; i++)
    {
        arr[(i * 16) & lengthMod]++; // (x & lengthMod) equals (x % arr.Length) because the length is a power of two
    }

    The processing time changes with the array size: it rises sharply once the array no longer fits into the 32 KB L1 cache, and again once it exceeds the 4 MB L2 cache.
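
    A minimal sketch of the full measurement (assumed, not the exact code from the post) that repeats the loop above for power-of-two array sizes; the jumps in the timings line up with the L1 and L2 sizes.

    // requires: using System; using System.Diagnostics;
    for (int size = 1024; size <= 64 * 1024 * 1024; size *= 2)
    {
        int[] arr = new int[size];        // power-of-two length
        int lengthMod = arr.Length - 1;
        int steps = 64 * 1024 * 1024;     // fixed, arbitrary number of steps

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < steps; i++)
        {
            arr[(i * 16) & lengthMod]++;  // & works as % because the length is a power of two
        }
        sw.Stop();

        Console.WriteLine("{0,8} KB: {1} ms", size * sizeof(int) / 1024, sw.ElapsedMilliseconds);
    }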

       

        Example 3: instruction-level parallelism

    int steps = 256 * 1024 * 1024;
    int[] a = new int[2];
    
    // Loop 1
    for (int i=0; i<steps; i++) { a[0]++; a[0]++; }
    
    // Loop 2
    for (int i=0; i<steps; i++) { a[0]++; a[1]++; }

        Loop 2 is about twice as fast as Loop 1.

        This can be explained from the perspective of the pipeline: in Loop 1 the two increments of a[0] form a read-after-write dependency, so they cannot execute in parallel, while the increments of a[0] and a[1] in Loop 2 are independent and can overlap.

        Example 4: cache associativity

    public static long UpdateEveryKthByte(byte[] arr, int K)
    {
        Stopwatch sw = Stopwatch.StartNew();
        const int rep = 1024*1024; // Number of iterations – arbitrary

        int p = 0;
        for (int i = 0; i < rep; i++)
        {
            arr[p]++;
            p += K;
            if (p >= arr.Length) p = 0;
        }

        sw.Stop();
        return sw.ElapsedMilliseconds;
    }

        Generally speaking, an L2 cache like this one is 16-way set associative.

    That is to say, each cache line can be stored in any one of 16 particular slots in the cache.

    4 MB = 64K cache lines * 64 bytes per line.

    Each cache line is 64 bytes, each set holds 16 slots, and there are 4K sets (4K * 16 * 64 bytes = 4 MB).
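
    As an illustration of what this means for addresses (my own sketch, with assumed helper names, not code from the post): with 64-byte lines and 4K sets, the set is determined by bits 6 to 17 of the byte offset, so offsets that differ by a multiple of 4K * 64 = 262,144 bytes compete for the same 16 slots, which is why UpdateEveryKthByte slows down for certain step values K.

    // Illustrative helper, using the parameters above: 64-byte lines, 16 ways, 4 MB total.
    static int CacheSetOf(long byteOffset)
    {
        const int lineSize = 64;                          // bytes per cache line
        const int ways = 16;                              // slots per set
        const long cacheSize = 4L * 1024 * 1024;          // 4 MB
        const long sets = cacheSize / (lineSize * ways);  // 4096 sets

        long lineIndex = byteOffset / lineSize;           // which cache line the byte falls in
        return (int)(lineIndex % sets);                   // which set that line maps to
    }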

    Example 5: false cache line sharing

        On multi-core machines, caches must be kept coherent. That means when one processor modifies a value in its cache, the other processors can no longer use their stale copy of that value. Since caches work in units of cache lines, the entire cache line is invalidated in the other caches, not just the single value.
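
    A minimal sketch of the effect (my own, assuming .NET tasks, not the exact code from the post): four workers incrementing adjacent array elements share one cache line and keep invalidating it in each other's caches, while workers spaced a full cache line apart do not.

    // requires: using System; using System.Diagnostics; using System.Threading.Tasks;
    static int[] s_counter = new int[1024];

    static long Run(int spacing)
    {
        Stopwatch sw = Stopwatch.StartNew();
        Task[] tasks = new Task[4];
        for (int t = 0; t < 4; t++)
        {
            int pos = t * spacing;                 // element this worker will increment
            tasks[t] = Task.Run(() =>
            {
                for (int j = 0; j < 100000000; j++)
                    s_counter[pos]++;
            });
        }
        Task.WaitAll(tasks);
        sw.Stop();
        return sw.ElapsedMilliseconds;
    }

    // Run(1):  positions 0..3 share one cache line -> false sharing, slow.
    // Run(16): positions are 64 bytes apart, one cache line each -> fast.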

  • Original article: https://www.cnblogs.com/leohan2013/p/3310733.html