  • Operating System Concepts Ch.4: Threads

    Overview

    Process creation is heavyweight, while thread creation is lightweight

    Threads can simplify code and increase efficiency

    Benefits: responsiveness, resource sharing, economy, scalability

    Key terms: multicore, multiprocessor, parallelism (more than one task literally running at the same time) vs. concurrency (more than one task making progress)

    Data parallelism: subsets of the same data distributed across multiple cores, with the same operation performed on each subset

    Task parallelism: threads distributed across cores, each performing a unique operation
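
    A minimal Pthreads sketch of the distinction (the array size and the helper name sum_range are illustrative assumptions): data parallelism gives every thread the same operation on a different slice of the data; task parallelism would instead give each thread a different function.

        #include <pthread.h>
        #include <stdio.h>

        #define N 1000
        static int data[N];
        static long partial[2];          /* one partial sum per worker thread */

        struct range { int lo, hi, id; };

        /* Data parallelism: every thread runs the same operation (summing)
         * on a different subset of the same array.                          */
        static void *sum_range(void *arg) {
            struct range *r = arg;
            for (int i = r->lo; i < r->hi; ++i)
                partial[r->id] += data[i];
            return NULL;
        }

        int main(void) {
            for (int i = 0; i < N; ++i) data[i] = 1;

            pthread_t t[2];
            struct range r[2] = { {0, N / 2, 0}, {N / 2, N, 1} };
            for (int i = 0; i < 2; ++i)
                pthread_create(&t[i], NULL, sum_range, &r[i]);
            for (int i = 0; i < 2; ++i)
                pthread_join(t[i], NULL);

            printf("sum = %ld\n", partial[0] + partial[1]);
            /* Task parallelism would instead start threads running different
             * functions, e.g. one thread sorting while another searches.    */
            return 0;
        }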

    Amdahl's Law
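
    The standard form of the law: with serial fraction S of a program and N processing cores,

        speedup <= 1 / (S + (1 - S) / N)

    As N grows toward infinity, the speedup approaches 1/S; for example, a program that is 25% serial (S = 0.25) can never run more than 4x faster, no matter how many cores are added.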

    Multithreading Models

    User/Kernel

    User threads: managed by a user-level threads library

    Kernel threads: supported and managed directly by the kernel

    The kernel keeps a thread table for kernel threads, while each process keeps its own thread table for its user threads

    Why user threads? No kernel intervention is needed, so thread operations are efficient

    Why not user threads? If one thread makes a blocking call, all threads in the process block

    Why kernel threads? The kernel can schedule them on different processors in parallel

    Why not kernel threads? Creation and management require kernel involvement, which is slow

    Models

    Many-to-One (one blocking call blocks all threads; cannot run in parallel on multicore; rarely used today)

    One-to-One (more concurrency; the number of threads per process may be restricted due to overhead)

    Many-to-Many (a sufficient number of user threads can be created and multiplexed onto kernel threads)

    Two-Level (allows both M:M multiplexing and binding a user thread 1:1 to a kernel thread)

    M:M is typical for servers, 1:1 for PCs

    Thread Libraries

    A thread library may be implemented entirely in user space, or as a kernel-level library supported by the OS

    Pthreads: may be provided as either a user-level or a kernel-level library; it is a specification, not an implementation
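
    A minimal Pthreads sketch (the worker function and its message are illustrative): create one thread and wait for it to finish.

        #include <pthread.h>
        #include <stdio.h>

        /* Entry point run by the new thread. */
        static void *worker(void *arg) {
            printf("hello from thread %s\n", (const char *)arg);
            return NULL;
        }

        int main(void) {
            pthread_t tid;
            pthread_create(&tid, NULL, worker, "A");   /* create the thread */
            pthread_join(tid, NULL);                   /* wait for it       */
            return 0;
        }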

    Implicit Threading

    Creation and management of threads is done by compilers and run-time libraries rather than by programmers

    Thread pools: create a number of threads up front in a pool, where they await work (servicing a request with an existing thread is faster than creating one, and the pool bounds the total number of threads); a sketch follows below
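
    A minimal fixed-size thread-pool sketch in Pthreads, showing the mechanism the implicit-threading runtimes hide from the programmer. The pool size, queue capacity, and task type are illustrative assumptions; a real pool would also provide orderly shutdown and error handling.

        #include <pthread.h>
        #include <stdio.h>
        #include <unistd.h>

        #define POOL_SIZE 4
        #define QUEUE_CAP 16

        typedef void (*task_fn)(int);

        static struct { task_fn fn; int arg; } queue[QUEUE_CAP];
        static int head, tail, count;
        static pthread_mutex_t lock      = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;
        static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;

        /* Each worker sleeps until a task is queued, then runs it. */
        static void *worker(void *arg) {
            (void)arg;
            for (;;) {
                pthread_mutex_lock(&lock);
                while (count == 0)
                    pthread_cond_wait(&not_empty, &lock);
                task_fn fn = queue[head].fn;
                int a = queue[head].arg;
                head = (head + 1) % QUEUE_CAP;
                count--;
                pthread_cond_signal(&not_full);
                pthread_mutex_unlock(&lock);
                fn(a);                       /* run the task outside the lock */
            }
            return NULL;
        }

        /* Submit work to the pool; blocks if the queue is full. */
        static void submit(task_fn fn, int arg) {
            pthread_mutex_lock(&lock);
            while (count == QUEUE_CAP)
                pthread_cond_wait(&not_full, &lock);
            queue[tail].fn = fn;
            queue[tail].arg = arg;
            tail = (tail + 1) % QUEUE_CAP;
            count++;
            pthread_cond_signal(&not_empty);
            pthread_mutex_unlock(&lock);
        }

        static void print_task(int n) { printf("task %d\n", n); }

        int main(void) {
            pthread_t t[POOL_SIZE];
            for (int i = 0; i < POOL_SIZE; ++i)
                pthread_create(&t[i], NULL, worker, NULL);
            for (int i = 0; i < 8; ++i)
                submit(print_task, i);
            sleep(1);       /* let the workers drain the queue, then exit */
            return 0;
        }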

    Threading Issues

    fork() and exec()

    Two versions of fork(): duplicate all threads, or duplicate only the calling thread

    exec() replaces the entire process, including all of its threads

    If exec() is called immediately after fork(), duplicating all threads is unnecessary
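
    A sketch of the fork()-then-exec() pattern this refers to (the program being exec'ed, /bin/ls, is just an example): since exec() replaces the whole process image anyway, copying every thread into the child would be wasted work.

        #include <stdio.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void) {
            pid_t pid = fork();              /* child needs only the calling thread */
            if (pid == 0) {
                /* exec() replaces the entire address space, threads included. */
                execl("/bin/ls", "ls", (char *)NULL);
                perror("execl");             /* reached only if exec fails */
                return 1;
            }
            waitpid(pid, NULL, 0);           /* parent waits for the child */
            return 0;
        }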

    Cancellation

    Asynchronous: terminates the target thread immediately

    Deferred: allows the target thread to periodically check whether it should be cancelled

    Disabled: cancellation remains pending until the thread enables it
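
    A sketch of deferred cancellation with Pthreads (the loop body is illustrative): the target thread keeps running until it reaches a cancellation point such as pthread_testcancel(); disabling cancellation would instead use pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, NULL).

        #include <pthread.h>
        #include <stdio.h>
        #include <unistd.h>

        static void *worker(void *arg) {
            (void)arg;
            /* Deferred cancellation is the default; the thread is only
             * cancelled at cancellation points such as pthread_testcancel(). */
            pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, NULL);
            for (;;) {
                /* ... do a unit of work ... */
                pthread_testcancel();    /* safe place to honour a pending cancel */
            }
            return NULL;
        }

        int main(void) {
            pthread_t tid;
            pthread_create(&tid, NULL, worker, NULL);
            sleep(1);
            pthread_cancel(tid);         /* request cancellation */
            pthread_join(tid, NULL);     /* wait for it to take effect */
            puts("worker cancelled");
            return 0;
        }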

    Signal Handling

    Signals are used to notify a process that an event has occurred

    Where should a signal be delivered in a multi-threaded process?

    All threads in a process share the same signal handler
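
    One common delivery pattern, as a sketch (the choice of SIGINT and a dedicated handler thread are assumptions, not the only option): block the signal in every thread and let a single thread receive it synchronously with sigwait().

        #include <pthread.h>
        #include <signal.h>
        #include <stdio.h>

        static void *signal_thread(void *arg) {
            sigset_t *set = arg;
            int sig;
            /* Wait synchronously for a signal sent to the process. */
            sigwait(set, &sig);
            printf("received signal %d\n", sig);
            return NULL;
        }

        int main(void) {
            sigset_t set;
            sigemptyset(&set);
            sigaddset(&set, SIGINT);
            /* Block SIGINT in this thread; threads created afterwards
             * inherit the signal mask, so none of them are interrupted. */
            pthread_sigmask(SIG_BLOCK, &set, NULL);

            pthread_t tid;
            pthread_create(&tid, NULL, signal_thread, &set);
            pthread_join(tid, NULL);     /* exits after one Ctrl-C */
            return 0;
        }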

    Thread-Local Storage

    Each thread has its own copy of the data, visible across function calls, similar to static data

    Why not just use the thread's stack? Lifetime: the data must persist across function calls, while stack data does not outlive its function
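
    A minimal thread-local storage sketch (the per-thread counter is illustrative): each thread gets its own copy of the variable, and the copy persists across function calls within that thread, unlike stack data.

        #include <pthread.h>
        #include <stdio.h>

        /* One independent copy of this counter exists per thread
         * (GCC/Clang __thread; C11 spells it _Thread_local).      */
        static __thread int calls;

        static void do_work(void) {
            calls++;                     /* persists across calls in this thread */
        }

        static void *worker(void *arg) {
            (void)arg;
            do_work();
            do_work();
            printf("this thread made %d calls\n", calls);   /* prints 2 */
            return NULL;
        }

        int main(void) {
            pthread_t a, b;
            pthread_create(&a, NULL, worker, NULL);
            pthread_create(&b, NULL, worker, NULL);
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            return 0;
        }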

    Scheduler Activations

    How can the kernel maintain an appropriate number of kernel threads allocated to the application?

    An intermediate data structure sits between user and kernel threads: the lightweight process (LWP), a virtual processor on which the process can schedule a user thread to run; each LWP is attached to a kernel thread

    Upcalls: notifications from the kernel to the upcall handler in the thread library

    When a thread is about to block, its attached LWP also blocks; the kernel makes an upcall to inform the application and allocates a new LWP to it.
