  .NET: CLR via C# Thread Basics

    A thread is a Windows concept whose job is to virtualize the CPU.

    Thread Overhead

    • Thread kernel object The operating system allocates and initializes one of these data structures for each thread created in the system. The data structure contains a bunch of properties that describe the thread. This data structure also contains what is called the thread’s context. The context is a block of memory that contains a set of the CPU’s registers. For the x86, x64, and ARM CPU architectures, the thread’s context uses approximately 700, 1,240, or 350 bytes of memory, respectively.
    • Thread environment block (TEB) The TEB is a block of memory allocated and initialized in user mode (address space that application code can quickly access). The TEB consumes 1 page of memory (4 KB on x86, x64, and ARM CPUs). The TEB contains the head of the thread’s exception-handling chain. Each try block that the thread enters inserts a node in the head of this chain; the node is removed from the chain when the thread exits the try block. In addition, the TEB contains the thread’s thread-local storage data and some data structures for use by Graphics Device Interface (GDI) and OpenGL graphics.
    • User-mode stack The user-mode stack is used for local variables and arguments passed to methods. It also contains the address indicating what the thread should execute next when the current method returns. By default, Windows allocates 1 MB of memory for each thread’s user-mode stack. More specifically, Windows reserves the 1 MB of address space and sparsely commits physical storage to it as the thread actually requires it when growing the stack. (A sketch after this list shows how a smaller stack reservation can be requested when a thread is created.)
    • Kernel-mode stack The kernel-mode stack is also used when application code passes arguments to a kernel-mode function in the operating system. For security reasons, Windows copies any arguments passed from user-mode code to the kernel from the thread’s user-mode stack to the thread’s kernel-mode stack. Once copied, the kernel can verify the arguments’ values, and because the application code can’t access the kernel-mode stack, the application can’t modify the arguments’ values after they have been validated and the operating system kernel code begins to operate on them. In addition, the kernel calls methods within itself and uses the kernel-mode stack to pass its own arguments, to store a function’s local variables, and to store return addresses. The kernel-mode stack is 12 KB when running on a 32-bit Windows system and 24 KB when running on a 64-bit Windows system.
    • DLL thread-attach and thread-detach notifications Windows has a policy that whenever a thread is created in a process, all unmanaged DLLs loaded in that process have their DllMain method called, passing a DLL_THREAD_ATTACH flag. Similarly, whenever a thread dies, all DLLs in the process have their DllMain method called, passing it a DLL_THREAD_DETACH flag. Some DLLs need these notifications to perform some special initialization or cleanup for each thread created/destroyed in the process. For example, the C-Runtime library DLL allocates some thread-local storage state that is required should the thread use functions contained within the C-Runtime library.
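
    The sketch below is a minimal illustration of the user-mode stack cost described above, not code from the book: it creates a dedicated thread and asks for a smaller stack reservation than the default 1 MB by using the Thread constructor overload that takes a maxStackSize argument. The 256 KB figure and the DoWork method name are arbitrary choices for the example; the CLR and Windows may round very small requests up to a platform minimum.

        using System;
        using System.Threading;

        public static class StackSizeDemo
        {
            private static void DoWork()
            {
                Console.WriteLine("Worker thread running with a reduced stack reservation.");
            }

            public static void Main()
            {
                // By default, Windows reserves 1 MB of address space for a thread's user-mode stack.
                // The Thread(ThreadStart, Int32) constructor lets us request a smaller reservation;
                // values below the platform minimum are rounded up, so this is a hint, not a guarantee.
                const int maxStackSize = 256 * 1024; // 256 KB, an arbitrary value for illustration

                Thread worker = new Thread(DoWork, maxStackSize);
                worker.Start();
                worker.Join(); // wait for the worker thread to finish
            }
        }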

    Now we’re going to start talking about context switching. A computer with only one CPU in it can do only one thing at a time. Therefore, Windows has to share the actual CPU hardware among all the threads (logical CPUs) that are sitting around in the system.

    At any given moment in time, Windows assigns one thread to a CPU. That thread is allowed to run for a time-slice (sometimes referred to as a quantum). When the time-slice expires, Windows context switches to another thread. Every context switch requires that Windows perform the following actions:

    • Save the values in the CPU’s registers to the currently running thread’s context structure inside the thread’s kernel object.
    • Select one thread from the set of existing threads to schedule next. If this thread is owned by another process, then Windows must also switch the virtual address space seen by the CPU before it starts executing any code or touching any data.
    • Load the values in the selected thread’s context structure into the CPU’s registers.

    After the context switch is complete, the CPU executes the selected thread until its time-slice expires, and then another context switch happens again. Windows performs context switches about every 30 ms. Context switches are pure overhead; that is, there is no memory or performance benefit that comes from context switches. Windows performs context switching to provide end users with a robust and responsive operating system.
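
    To get a rough feel for preemption, here is a small sketch of my own (not from the book): it spins in a tight loop and counts iterations in which the Stopwatch reading jumps by more than about a millisecond, which for a loop body that normally takes microseconds is a crude hint that the thread was context switched out. The 1 ms threshold and the 2-second duration are arbitrary, and large gaps can also come from other causes (garbage collection, power management), so treat the output as illustrative only.

        using System;
        using System.Diagnostics;

        public static class PreemptionProbe
        {
            public static void Main()
            {
                var sw = Stopwatch.StartNew();
                long last = sw.ElapsedTicks;
                long thresholdTicks = Stopwatch.Frequency / 1000; // roughly 1 ms, an arbitrary threshold
                int gaps = 0;

                // Spin for about two seconds, counting iterations where the time between
                // two consecutive readings jumps by more than the threshold.
                while (sw.ElapsedMilliseconds < 2000)
                {
                    long now = sw.ElapsedTicks;
                    if (now - last > thresholdTicks)
                    {
                        gaps++; // the thread was most likely switched out here
                    }
                    last = now;
                }

                Console.WriteLine($"Observed {gaps} gap(s) of more than ~1 ms while spinning for 2 seconds.");
            }
        }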

    The performance hit is much worse than you might think. Yes, a performance hit occurs when Windows context switches to another thread. But there is more to it: while the previous thread was running, its code and data were sitting in the CPU’s caches, so the CPU didn’t have to access RAM (which has significant latency) very often. When Windows context switches to a new thread, that thread is most likely executing different code and accessing different data, none of which is in the CPU’s cache. The CPU must access RAM to populate its cache before it can get back to a good execution speed. But then, about 30 ms later, another context switch occurs.

    A thread can voluntarily end its time-slice early, which happens quite frequently. Threads typically wait for I/O operations (keyboard, mouse, file, network, etc.) to complete. For example, Notepad’s thread usually sits idle with nothing to do; this thread is waiting for input. If the user presses the J key on the keyboard, Windows wakes Notepad’s thread to have it process the J keystroke. It may take Notepad’s thread just 5 ms to process the key, and then it calls a Win32 function that tells Windows that it is ready to process the next input event. If there are no more input events, then Windows puts Notepad’s thread into a wait state (relinquishing the remainder of its time-slice) so that the thread is not scheduled on any CPU until the next input stimulus occurs. This improves overall system performance because threads that are waiting for I/O operations to complete are not scheduled on a CPU and do not waste CPU time; other threads can be scheduled on the CPU instead.
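
    Here is a minimal sketch of that wait pattern (my own illustration, not Notepad’s actual code): a worker thread blocks on an AutoResetEvent and is not scheduled on any CPU until another thread signals it, much like a UI thread waiting for the next input event. The names InputArrived and WorkerLoop are invented for the example.

        using System;
        using System.Threading;

        public static class WaitStateDemo
        {
            // Stands in for "the next input stimulus"; the name is illustrative only.
            private static readonly AutoResetEvent InputArrived = new AutoResetEvent(false);

            private static void WorkerLoop()
            {
                while (true)
                {
                    // WaitOne puts this thread into a wait state; it consumes no CPU time
                    // until the event is signaled.
                    InputArrived.WaitOne();
                    Console.WriteLine("Woke up, processed one 'input event', waiting again.");
                }
            }

            public static void Main()
            {
                var worker = new Thread(WorkerLoop) { IsBackground = true };
                worker.Start();

                // Simulate three input events arriving roughly a second apart.
                for (int i = 0; i < 3; i++)
                {
                    Thread.Sleep(1000);
                    InputArrived.Set(); // wake the worker for one "keystroke"
                }

                Thread.Sleep(500); // give the worker time to print the last message
            }
        }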

    When Windows was originally designed, single-CPU computers were commonplace, and Windows added threads to improve system responsiveness and reliability. Today, threads are also being used to improve scalability, which can happen only on computers that have multiple cores in them.

    CLR Threads and Windows Threads

    Today, the CLR uses the threading capabilities of Windows, so Part V of this book is really focusing on how the threading capabilities of Windows are exposed to developers who write code by using the CLR.
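
    One way to see this relationship is to compare the CLR’s managed thread ID with the Windows thread ID returned by the Win32 GetCurrentThreadId function. The sketch below (my own, using P/Invoke) prints both for a dedicated thread; the two numbers come from different ID spaces but identify the same running thread. Strictly speaking, a CLR host may map managed threads onto OS threads differently, but with the default CLR on Windows a System.Threading.Thread runs on one Windows thread.

        using System;
        using System.Runtime.InteropServices;
        using System.Threading;

        public static class ClrVsWindowsThread
        {
            // Win32 function that returns the identifier of the calling Windows thread.
            [DllImport("kernel32.dll")]
            private static extern uint GetCurrentThreadId();

            public static void Main()
            {
                var t = new Thread(() =>
                {
                    // The CLR's ID for this thread (meaningful only inside the CLR)...
                    Console.WriteLine($"Managed thread ID: {Thread.CurrentThread.ManagedThreadId}");

                    // ...and the ID of the underlying Windows thread it is running on.
                    Console.WriteLine($"Windows thread ID: {GetCurrentThreadId()}");
                });

                t.Start();
                t.Join();
            }
        }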

  Original source: https://www.cnblogs.com/happyframework/p/3408724.html