  • Operating System Concepts Ch.4 Threads

    Overview

    Process creation is heavy-weight while thread creation is light-weight

    Threads can simplify code and increase efficiency

    Benefits: responsiveness, sharing, economy, scalability

    Multicore / multiprocessor systems: distinguish parallelism (tasks execute simultaneously) from concurrency (multiple tasks make progress, possibly interleaved on a single core)

    Data parallelism: distribute subsets of the same data across multiple cores and perform the same operation on each subset

    Task parallelism: distribute threads across cores, each thread performing a unique operation (see the sketch below)
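
    A minimal Pthreads sketch contrasting the two (the array, the helper functions, and their names are invented purely for illustration); compile with cc -pthread:

        #include <pthread.h>
        #include <stdio.h>

        #define N 8
        static int data[N] = {1, 2, 3, 4, 5, 6, 7, 8};

        struct range { int lo, hi; long sum; };

        /* Data parallelism: the same operation (summing) applied to different halves of the data. */
        static void *sum_range(void *arg) {
            struct range *r = arg;
            for (int i = r->lo; i < r->hi; i++)
                r->sum += data[i];
            return NULL;
        }

        /* Task parallelism: a different operation (finding the maximum) running at the same time. */
        static void *find_max(void *arg) {
            int *max = arg;
            for (int i = 0; i < N; i++)
                if (data[i] > *max) *max = data[i];
            return NULL;
        }

        int main(void) {
            pthread_t t1, t2, t3;
            struct range lo = {0, N / 2, 0}, hi = {N / 2, N, 0};
            int max = data[0];

            pthread_create(&t1, NULL, sum_range, &lo);  /* two threads, same operation ...    */
            pthread_create(&t2, NULL, sum_range, &hi);  /* ... on different parts of the data */
            pthread_create(&t3, NULL, find_max, &max);  /* a third thread doing a distinct task */

            pthread_join(t1, NULL);
            pthread_join(t2, NULL);
            pthread_join(t3, NULL);

            printf("sum = %ld, max = %d\n", lo.sum + hi.sum, max);
            return 0;
        }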

    Amdahl's Law: the serial portion of a program limits the speedup obtainable from adding cores
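
    With S the serial fraction of the program and N the number of cores, the law says:

        speedup <= 1 / (S + (1 - S) / N)

    For example, if 25% of the program is serial (S = 0.25) and N = 4, then speedup <= 1 / (0.25 + 0.75 / 4) ≈ 2.29; as N grows without bound, the speedup approaches 1 / S = 4.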

    Multithreading Models

    User Threads vs. Kernel Threads

    User threads: managed by a user-level thread library

    Kernel threads: supported and managed directly by the kernel

    The kernel keeps a thread table for kernel threads, while each process keeps its own thread table for its user threads

    Why user threads? No kernel intervention is needed, so thread operations are efficient

    Why not user threads? If one thread makes a blocking system call, all threads in the process block

    Why kernel threads? They can run in parallel on different processors

    Why not kernel threads? Creating and managing them requires kernel involvement, which is slow

    Models

    Many-to-One (one blocking thread blocks all; threads cannot run in parallel on multicore; few systems use it now)

    One-to-One (more concurrency; the number of threads per process may be restricted because each user thread needs a kernel thread)

    Many-to-Many (the kernel can create a sufficient number of kernel threads to multiplex many user threads)

    Two-Level (allows both 1:1 binding and M:M multiplexing)

    M:M is typical for servers, 1:1 for PCs

    Thread Libraries

    A thread library may be implemented entirely in user space or with kernel-level support

    Pthreads: may be provided as either a user-level or a kernel-level library; it is a specification (the POSIX threads standard), not an implementation
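
    A minimal sketch of typical Pthreads usage (the runner function and the value it computes are invented for illustration); compile with cc -pthread:

        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* Thread start routine: compute 1 + 2 + ... + n and hand the result back. */
        static void *runner(void *param) {
            long n = (long)param;
            long *sum = malloc(sizeof *sum);
            *sum = 0;
            for (long i = 1; i <= n; i++)
                *sum += i;
            return sum;                                   /* collected by pthread_join below */
        }

        int main(void) {
            pthread_t tid;
            void *result;

            pthread_create(&tid, NULL, runner, (void *)10L);  /* NULL = default attributes */
            pthread_join(tid, &result);                       /* wait and collect the return value */
            printf("sum = %ld\n", *(long *)result);
            free(result);
            return 0;
        }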

    Implicit Threading

    Creation and management of threads are done by compilers and run-time libraries rather than by the programmer

    Thread pools: create a number of threads in a pool where they await work (servicing a request with an existing thread is faster than creating one, and the pool bounds the total number of threads)
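
    One well-known implicit-threading facility (not named in these notes) is OpenMP: the programmer marks a parallel region and the compiler plus run-time library create and manage the worker threads, typically keeping them in a pool between regions. A minimal sketch; compile with cc -fopenmp:

        #include <omp.h>
        #include <stdio.h>

        int main(void) {
            int sum = 0;

            /* The OpenMP run time creates the threads and divides the loop
               iterations among them; no explicit thread management is written. */
            #pragma omp parallel for reduction(+:sum)
            for (int i = 1; i <= 100; i++)
                sum += i;

            printf("sum = %d (up to %d threads)\n", sum, omp_get_max_threads());
            return 0;
        }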

    Threading Issues

    fork() and exec()

    Two versions of fork(): duplicate all threads, or duplicate only the calling thread

    exec() replaces the entire process, including all of its threads

    If exec() is called immediately after fork(), duplicating all threads is unnecessary; only the calling thread needs to be copied (see the sketch below)
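
    A sketch of the fork()-then-exec() pattern; since exec() immediately replaces the whole process image, there is no point duplicating any thread other than the caller (ls is used here only as an example program to exec):

        #include <sys/wait.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void) {
            pid_t pid = fork();          /* in a multi-threaded parent, only the
                                            calling thread need exist in the child */
            if (pid == 0) {
                /* child: the entire image, all threads included, is replaced */
                execlp("ls", "ls", "-l", (char *)NULL);
                perror("execlp");        /* reached only if exec fails */
                return 1;
            }
            waitpid(pid, NULL, 0);       /* parent waits for the child */
            return 0;
        }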

    Cancellation

    Asynchronous: terminate the target thread immediately

    Deferred: allow the target thread to periodically check whether it has been cancelled

    Disabled: the cancellation request remains pending until the thread enables cancellation again
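
    A Pthreads sketch of deferred cancellation, where the target thread marks explicit cancellation points with pthread_testcancel() (the worker loop body is invented for illustration):

        #include <pthread.h>
        #include <unistd.h>

        static void *worker(void *arg) {
            (void)arg;
            /* Deferred cancellation is the Pthreads default: a pending request
               is acted on only at cancellation points. */
            pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, NULL);
            for (;;) {
                /* ... perform one unit of work ... */
                pthread_testcancel();    /* explicit cancellation point */
            }
            return NULL;                 /* never reached */
        }

        int main(void) {
            pthread_t tid;
            pthread_create(&tid, NULL, worker, NULL);
            sleep(1);                    /* let the worker run briefly */
            pthread_cancel(tid);         /* request cancellation ...                   */
            pthread_join(tid, NULL);     /* ... honored at the next cancellation point */
            return 0;
        }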

    Signal Handling

    Signals are used to notify a process that a particular event has occurred

    In a multi-threaded process, to which thread should a signal be delivered?

    All threads in a process share the same signal handlers
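
    A common Pthreads pattern that sidesteps the shared-handler question: block the signal before creating workers (new threads inherit the creating thread's signal mask) and let one dedicated thread receive it synchronously with sigwait(). A minimal sketch using SIGUSR1:

        #include <pthread.h>
        #include <signal.h>
        #include <stdio.h>

        static void *sig_thread(void *arg) {
            sigset_t *set = arg;
            int sig;
            sigwait(set, &sig);                      /* receive the signal synchronously */
            printf("received signal %d\n", sig);
            return NULL;
        }

        int main(void) {
            sigset_t set;
            pthread_t tid;

            sigemptyset(&set);
            sigaddset(&set, SIGUSR1);
            pthread_sigmask(SIG_BLOCK, &set, NULL);  /* block it; new threads inherit the mask */

            pthread_create(&tid, NULL, sig_thread, &set);
            pthread_kill(tid, SIGUSR1);              /* deliver the signal to one specific thread */
            pthread_join(tid, NULL);
            return 0;
        }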

    Thread-Local Storage

    Each thread has its own copy of the data, visible across function calls, similar to static data but private to the thread

    Why not use the thread's stack? Stack data lives only for the duration of a function call, while thread-local storage must persist for the thread's whole lifetime
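
    A Pthreads TLS sketch using pthread_key_create / pthread_setspecific / pthread_getspecific (many compilers also provide a __thread or thread_local storage class); the helper function and id values are invented for illustration:

        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>

        static pthread_key_t key;            /* one key, but a separate value per thread */

        static void helper(void) {
            /* The value is visible across function calls without being passed
               as a parameter, yet each thread sees only its own copy. */
            int *id = pthread_getspecific(key);
            printf("helper sees thread-local id %d\n", *id);
        }

        static void *thread_main(void *arg) {
            int *id = malloc(sizeof *id);
            *id = *(int *)arg;
            pthread_setspecific(key, id);    /* bind this thread's private copy to the key */
            helper();
            return NULL;
        }

        int main(void) {
            pthread_t t1, t2;
            int a = 1, b = 2;

            pthread_key_create(&key, free);  /* destructor frees each copy at thread exit */
            pthread_create(&t1, NULL, thread_main, &a);
            pthread_create(&t2, NULL, thread_main, &b);
            pthread_join(t1, NULL);
            pthread_join(t2, NULL);
            return 0;
        }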

    Scheduler Activations

    How can the kernel maintain an appropriate number of kernel threads allocated to the application?

    An intermediate data structure between user and kernel threads: the lightweight process (LWP), which appears to the thread library as a virtual processor on which the process can schedule a user thread to run; each LWP is attached to a kernel thread

    Upcalls: the kernel notifies the application by calling an upcall handler in the thread library

    When a user thread is about to block, the attached kernel thread (and thus its LWP) also blocks. The kernel makes an upcall and allocates a new LWP to the application so another user thread can be scheduled.
