  • Operating System Concepts Ch.4 Threads

    Overview

    Process creation is heavyweight, while thread creation is lightweight.

    Threads can simplify code and increase efficiency.

    Benefits: responsiveness, resource sharing, economy, scalability

    On multicore/multiprocessor systems, distinguish parallelism (tasks execute simultaneously on different cores) from concurrency (more than one task makes progress, possibly interleaved on a single core).

    Data parallelism: subsets of the same data are distributed across multiple cores, and the same operation is performed on each subset.

    Task parallelism: threads are distributed across multiple cores, each performing a unique operation.

    Amdahl's Law: the serial portion of a program limits the speedup obtainable by adding cores (see the formula below).
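    For reference (this formula is standard, not part of the original notes): if S is the fraction of the program that must run serially and N is the number of cores, then

```latex
% Amdahl's Law: upper bound on speedup with N cores and serial fraction S
\text{speedup} \le \frac{1}{S + \frac{1 - S}{N}}
% As N grows without bound, the speedup approaches 1/S.
```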

    Multithreading Models

    User Threads vs. Kernel Threads

    User threads: managed by a user-level thread library.

    Kernel threads: supported directly by the kernel.

    The kernel keeps a thread table for kernel threads, while each process keeps its own thread table for its user threads.

    Why user threads? Thread operations need no kernel intervention, so they are efficient.

    Why not user threads? If one thread makes a blocking system call, all threads in the process block.

    Why kernel threads? Threads of the same process can run in parallel on different processors.

    Why not kernel threads? Thread operations require kernel intervention, so they are slower.

    Models

    Many-to-One: one blocking thread blocks the whole process, no parallelism on multicore; few systems use it now.

    One-to-One: more concurrency, but the number of threads may be restricted because each user thread requires a kernel thread.

    Many-to-Many: multiplexes many user threads onto a sufficient number of kernel threads.

    Two-Level: like Many-to-Many, but also allows a user thread to be bound one-to-one to a kernel thread.

    Roughly: M:M suits servers with many threads, while 1:1 is typical on PCs.

    Thread Libraries

    A thread library may be implemented entirely in user space or be supported by the kernel.

    Pthreads: may be provided as either a user-level or a kernel-level library; it is a specification (the POSIX threads standard), not an implementation. A minimal example follows.
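    As a concrete illustration, a minimal sketch using the standard Pthreads API (the summation task is just an example, not from the notes):

```c
/* Minimal Pthreads program: create one thread, wait for it, read its result. */
#include <pthread.h>
#include <stdio.h>

static long sum;                      /* shared between main and the worker */

static void *runner(void *param) {
    long upper = (long)param;
    sum = 0;
    for (long i = 1; i <= upper; i++)
        sum += i;
    pthread_exit(NULL);               /* terminate the calling thread */
}

int main(void) {
    pthread_t tid;
    pthread_attr_t attr;

    pthread_attr_init(&attr);                        /* default attributes */
    pthread_create(&tid, &attr, runner, (void *)10L);
    pthread_join(tid, NULL);                         /* wait for the worker */
    printf("sum = %ld\n", sum);
    return 0;
}
```

    Compile with gcc -pthread.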

    Implicit Threading

    Creation and management of threads is done by compilers and run-time libraries rather than by programmers.

    Thread pools: create a number of threads in a pool where they await work; servicing a request with an existing thread is faster than creating a new one, and the pool bounds the total number of threads. A minimal sketch follows.
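    A toy fixed-size pool with a bounded work queue, built only from Pthreads primitives (names such as pool_submit are hypothetical; a real pool would also handle shutdown and a full queue):

```c
/* Sketch of a fixed-size thread pool: workers block on a condition variable
 * until a task is queued, then run it. Not production code. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define POOL_SIZE 4
#define QUEUE_CAP 64

typedef void (*task_fn)(void *);
typedef struct { task_fn fn; void *arg; } task_t;

static task_t queue[QUEUE_CAP];
static int head, tail, count;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

static void *worker(void *unused) {
    (void)unused;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0)                       /* wait for work */
            pthread_cond_wait(&not_empty, &lock);
        task_t t = queue[head];
        head = (head + 1) % QUEUE_CAP;
        count--;
        pthread_mutex_unlock(&lock);
        t.fn(t.arg);                             /* run the task */
    }
    return NULL;
}

/* Submit a task; this sketch assumes the queue never fills up. */
static void pool_submit(task_fn fn, void *arg) {
    pthread_mutex_lock(&lock);
    queue[tail] = (task_t){ fn, arg };
    tail = (tail + 1) % QUEUE_CAP;
    count++;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&lock);
}

static void say_hello(void *arg) {
    printf("task %ld runs on a pooled thread\n", (long)arg);
}

int main(void) {
    pthread_t workers[POOL_SIZE];
    for (int i = 0; i < POOL_SIZE; i++)
        pthread_create(&workers[i], NULL, worker, NULL);
    for (long i = 0; i < 8; i++)
        pool_submit(say_hello, (void *)i);
    sleep(1);             /* crude wait so queued tasks finish before exit */
    return 0;
}
```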

    Threading Issues

    fork() and exec()

    Some systems provide two versions of fork(): one duplicates all threads, the other duplicates only the thread that called fork().

    exec() replaces the entire process, including all of its threads, with the new program.

    If exec() is called immediately after fork(), duplicating all threads is unnecessary; duplicating only the calling thread is appropriate (see the sketch below).
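    A small illustration of the common fork-then-exec case on a POSIX system (illustrative only; POSIX fork() duplicates just the calling thread):

```c
/* In a multithreaded program, fork() duplicates only the calling thread
 * (POSIX behaviour); calling exec() right away replaces the whole image,
 * so copying the other threads would have been wasted work anyway. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

static void *background(void *arg) {
    (void)arg;
    for (;;)
        pause();                       /* a second thread that just idles */
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, background, NULL);

    pid_t pid = fork();                /* child contains only this thread */
    if (pid == 0) {
        execlp("ls", "ls", "-l", (char *)NULL);   /* replaces the child image */
        perror("execlp");              /* reached only if exec fails */
        _exit(1);
    }
    waitpid(pid, NULL, 0);
    return 0;
}
```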

    Cancellation

    Asynchronous cancellation: terminates the target thread immediately.

    Deferred cancellation: allows the target thread to periodically check whether it should be cancelled (shown in the Pthreads sketch below).

    Disabled cancellation: the cancellation request remains pending until the thread enables cancellation again.
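    A minimal sketch of deferred cancellation with Pthreads (the worker's loop body is a placeholder):

```c
/* Deferred cancellation: the target thread is only cancelled when it reaches
 * a cancellation point, here an explicit pthread_testcancel(). */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *worker(void *arg) {
    (void)arg;
    /* Deferred is the default type; set explicitly here for clarity.
     * PTHREAD_CANCEL_DISABLE via pthread_setcancelstate() would instead
     * leave a cancellation request pending. */
    pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, NULL);
    for (;;) {
        /* ... do one unit of work ... */
        pthread_testcancel();          /* explicit cancellation point */
    }
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);
    sleep(1);
    pthread_cancel(tid);               /* request cancellation */
    pthread_join(tid, NULL);           /* wait for the thread to exit */
    printf("worker cancelled\n");
    return 0;
}
```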

    Signal Handling

    Signals are used to notify a process that an event has occurred.

    In a multithreaded process, to which thread should a signal be delivered: the thread it applies to, every thread, certain threads, or one designated thread?

    All threads in a process share the same signal handlers (one common pattern is sketched below).
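    One common POSIX pattern (illustrative; the choice of SIGUSR1 is arbitrary): block the signal in every thread and let one dedicated thread consume it with sigwait().

```c
/* The signal disposition is shared by all threads, but delivery can be
 * steered: block SIGUSR1 everywhere and let one thread sigwait() for it. */
#include <pthread.h>
#include <signal.h>
#include <stdio.h>

static sigset_t set;

static void *signal_thread(void *arg) {
    (void)arg;
    int sig;
    sigwait(&set, &sig);               /* only this thread consumes SIGUSR1 */
    printf("received signal %d\n", sig);
    return NULL;
}

int main(void) {
    sigemptyset(&set);
    sigaddset(&set, SIGUSR1);
    /* Block SIGUSR1 in the main thread; created threads inherit the mask. */
    pthread_sigmask(SIG_BLOCK, &set, NULL);

    pthread_t tid;
    pthread_create(&tid, NULL, signal_thread, NULL);
    pthread_kill(tid, SIGUSR1);        /* direct the signal at one thread */
    pthread_join(tid, NULL);
    return 0;
}
```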

    Thread-Local Storage

    Thread-local storage (TLS) gives each thread its own copy of certain data, visible across function calls, similar to static data but unique to each thread.

    Why not just use the thread's stack? For life-cycle reasons: stack (local) data lives only for the duration of one function call, while TLS persists across calls for the lifetime of the thread. A Pthreads sketch follows.
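    A minimal sketch using the Pthreads key API (gcc's __thread or C11 _Thread_local provide the same facility at the language level):

```c
/* Thread-local storage via pthread keys: the same key yields a different
 * value in each thread, and the value is visible across functions. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_key_t key;

static void report(void) {
    int *val = pthread_getspecific(key);   /* this thread's own value */
    printf("thread-local value: %d\n", *val);
}

static void *worker(void *arg) {
    int *val = malloc(sizeof *val);
    *val = (int)(long)arg;
    pthread_setspecific(key, val);         /* private to this thread */
    report();                              /* another function sees it too */
    return NULL;
}

int main(void) {
    pthread_key_create(&key, free);        /* destructor frees each copy */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_key_delete(key);
    return 0;
}
```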

    Scheduler Activations

    How can the kernel maintain an appropriate number of kernel threads allocated to the application?

    An intermediate data structure sits between user and kernel threads: the lightweight process (LWP). To the thread library it appears as a virtual processor on which the process can schedule a user thread to run, and each LWP is attached to a kernel thread.

    Upcalls: the kernel notifies the application of certain events by calling an upcall handler in the thread library.

    When a thread is about to block, its kernel thread blocks and so does the attached LWP; the kernel makes an upcall to inform the application and allocates a new LWP so that another eligible user thread can be scheduled.
