  • Operating Systems Homework Answers

    While reviewing for the final exam, I went through and organized the answers to our operating systems homework, so I figured I would post them on the blog.

    The questions are mostly from Operating System Concepts; the answers either come from the instructor's manual or were written up by myself.

    What is a microkernel? What are the advantages and disadvantages of using the microkernel approach?

    • A microkernel consists of only the essential components of an OS kernel. In such a system, all the nonessential components of a traditional kernel are implemented as system- or user-level programs.
    • Advantages:

      • It is easier to extend: new services are added in user space without changing the kernel, and even when the kernel must be modified, the changes tend to be small because the kernel itself is very small.
      • It is easier to port, since it is easy to adapt the small microkernel to another hardware design.
      • It provides better security and reliability, since most services run as user processes; a failed service does not affect the kernel.
    • Disadvantages:

      • The performance of a microkernel may suffer due to increased system-function overhead.

    Describe what a virtual machine is and list some advantages (benefits) of VMs.

    • A virtual machine (VM) is a software-based emulation of a computer. Virtual machines operate based on the computer architecture and functions of a real or hypothetical computer.
    • Advantages:
      • Each virtual machine is completely isolated from the others, so the various system resources are completely protected.
      • VM makes it easy to switch from one OS/environment to another
      • System development can be done without shutting down running systems

    What is the difference between a system call and an API? Why do we usually use APIs rather than system calls?

    • Differences

      • System calls are closer to the OS, while APIs are closer to application programmers
      • The functions that make up an API typically invoke the actual system calls on behalf of the application programmer
    • Benefits

      • Portability: a program can run on multiple systems as long as they provide the same APIs
      • Actual system calls are often more detailed and more difficult to work with than APIs (contrast the two levels in the sketch below).
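
    To make the distinction concrete, here is a minimal POSIX sketch (the file name data.txt is just a placeholder) that reads the same file first through the portable C standard-library API and then through the UNIX system calls the API is built on:

    ```c
    /* API vs. system call: a minimal POSIX sketch. "data.txt" is a
     * placeholder file name chosen for illustration. */
    #include <fcntl.h>    /* open, O_RDONLY */
    #include <stdio.h>    /* fopen, fread: the API level */
    #include <unistd.h>   /* read, close: the system-call level */

    int main(void) {
        char buf[64];

        /* API level: portable and buffered; works on any hosted C system. */
        FILE *fp = fopen("data.txt", "r");
        if (fp != NULL) {
            size_t n = fread(buf, 1, sizeof buf, fp);
            printf("fread returned %zu bytes\n", n);
            fclose(fp);
        }

        /* System-call level: UNIX-specific file descriptors, no buffering.
         * On a UNIX system, fread() above ultimately issues read() calls. */
        int fd = open("data.txt", O_RDONLY);
        if (fd >= 0) {
            ssize_t n = read(fd, buf, sizeof buf);
            printf("read returned %zd bytes\n", n);
            close(fd);
        }
        return 0;
    }
    ```

    A program written against fopen()/fread() recompiles on any system with a standard C library, while a program written against open()/read() is tied to POSIX; that is the portability benefit listed above.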

    Please describe the concept of process and its five possible states clearly.

    • A process is a program loaded into memory and in execution.
    • States (sketched as a small state machine after this list):
      • New. The process is being created.
      • Running. Instructions are being executed.
      • Waiting. The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
      • Ready. The process is waiting to be assigned to a processor.
      • Terminated. The process has finished execution.
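
    As an illustration, the five-state model can be written down as a small state machine; the enum and transition checker below are illustrative only, not taken from any real kernel:

    ```c
    /* A minimal sketch of the five-state process model. */
    #include <stdbool.h>

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    /* Returns true if the five-state model allows the transition. */
    bool valid_transition(enum proc_state from, enum proc_state to) {
        switch (from) {
        case NEW:     return to == READY;                      /* admitted   */
        case READY:   return to == RUNNING;                    /* dispatched */
        case RUNNING: return to == READY       /* preempted by the scheduler */
                          || to == WAITING     /* blocks on I/O or an event  */
                          || to == TERMINATED; /* exits                      */
        case WAITING: return to == READY;                      /* event done */
        default:      return false;            /* TERMINATED is final        */
        }
    }
    ```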

    Please describe the differences between process and thread.

    • processes are typically independent, while threads exist as subsets of a process
    • processes carry considerably more state information than threads, whereas multiple threads within a process share process state as well as memory and other resources
    • processes have separate address spaces, whereas threads share their address space
    • processes interact only through system-provided inter-process communication mechanisms
    • context switching between threads in the same process is typically faster than context switching between processes (the sketch below demonstrates the shared versus separate address spaces).
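
    A minimal POSIX sketch of the address-space difference: a child process created with fork() increments its own copy of a global counter, while a thread created with pthread_create() increments the very same variable (compile with -pthread):

    ```c
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static int counter = 0;

    static void *thread_body(void *arg) {
        (void)arg;
        counter++;              /* same variable as in main: shared memory */
        return NULL;
    }

    int main(void) {
        pid_t pid = fork();
        if (pid == 0) {         /* child gets a *copy* of the address space */
            counter++;          /* modifies the child's copy only */
            return 0;
        }
        waitpid(pid, NULL, 0);
        printf("after fork:   counter = %d\n", counter);  /* still 0 */

        pthread_t t;
        pthread_create(&t, NULL, thread_body, NULL);
        pthread_join(t, NULL);
        printf("after thread: counter = %d\n", counter);  /* now 1 */
        return 0;
    }
    ```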

    Please describe the differences among short-term, medium-term, and long-term scheduling.

    • The short-term scheduler selects from among the processes that are ready to execute and allocates the CPU to one of them. It schedules process execution.
    • The medium-term scheduler selects processes to swap out of memory, to improve the process mix or to free up memory. It schedules process swapping.
    • The long-term scheduler selects processes from the process pool on disk and loads them into memory for execution. It schedules process loading.

    Please describe the actions taken by a kernel to context-switch between processes.

    When a context switch occurs, the kernel saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run (a simplified PCB sketch follows).
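
    As a rough illustration only, a PCB can be pictured as a struct like the one below; the fields are hypothetical and heavily simplified (a real one, such as Linux's task_struct, has many more fields), and the actual save/restore is architecture-specific assembly in a real kernel:

    ```c
    /* A hypothetical, heavily simplified process control block (PCB).
     * Field names and sizes are illustrative, not from a real kernel. */
    #include <stdint.h>

    struct pcb {
        int      pid;             /* process identifier                    */
        int      state;           /* new / ready / running / waiting / ... */
        uint64_t program_counter; /* saved instruction pointer             */
        uint64_t registers[16];   /* saved general-purpose registers       */
        void    *page_table;      /* memory-management information         */
        int      open_files[16];  /* I/O status / accounting information   */
    };

    /* On a context switch the kernel (1) saves the CPU state of the old
     * process into old->program_counter / old->registers, then (2) loads
     * the saved state from next's PCB; the save/restore itself is done in
     * architecture-specific assembly, elided here. */
    void context_switch(struct pcb *old, struct pcb *next);
    ```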

    What are the benefits of multithreaded programming?

    • Responsiveness. Multithreading an interactive application may allow a program to continue running even if part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the user.
    • Resource sharing. Threads share the memory and the resources of the process to which they belong without extra arrangement.
    • Economy. Because threads share the resources of the process to which they belong, it is more economical to create and context-switch threads.
    • Scalability. The benefits of multithreading can be even greater in a multiprocessor architecture, where threads may be running in parallel on different processing cores.

    Comparison of multithreading models

    Many-to-one Model

    • definition
      • Many user-level threads map to a single kernel thread
    • advantages
      • User threads are controlled by user-level threading library, no system calls required
      • No context switch for user threads
      • Fewer system dependencies; portable
    • disadvantages
      • No parallel execution of threads - can’t exploit multiple processors
      • All threads block when one thread blocks

    One-to-one Model

    • definition
      • Each user-level thread maps to one kernel thread
    • advantages
      • When one thread blocks, other threads can continue to execute.
      • Threads can be executed in parallel by different processors
    • disadvantages
      • Each user thread requires the creation of a kernel thread, which is expensive; too many threads cause heavy overhead for maintaining kernel threads
      • Each thread requires kernel resources, which limits the total number of threads
      • Switching between user threads entails a context switch of the underlying kernel threads

    Many-to-many Model

    • definition
      • Many user-level threads map to an equal or smaller number of kernel threads
    • advantages
      • When one thread blocks, other threads can continue to execute.
      • Threads can be executed in parallel by different processors
      • User threads are controlled by user-level threading library, no system calls required
    • disadvantages
      • The cooperation of the kernel and the user-space threading increases complexity, making it hard to implement
      • If as many user threads as there are kernel threads block, the remaining user threads block too

    Two-level Model

    • definition
      • Many user-level threads are mapped to a smaller or equal number of kernel threads, while a user-level thread can also be bound to a dedicated kernel thread.
    • advantages
      • Same as many-to-many
      • Important threads, bound to their own kernel threads, won't block just because other threads block
    • disadvantages
      • The cooperation of the kernel and the user-space threading increases complexity, making it hard to implement
      • For the many-to-many portion, if as many user threads as there are kernel threads block, the remaining user threads block too

    Why can we optimize a web server with a thread pool?

    Create a number of threads at process startup and place them into a pool, where they sit and wait for work. When a server receives a request, it awakens a thread from this pool, if one is available, and passes it the request for service. Once the thread completes its service, it returns to the pool and awaits more work. If the pool contains no available thread, the server waits until one becomes free.

    1. Servicing a request with an existing thread is faster than waiting to create a thread.
    2. A thread pool limits the number of threads that exist at any one point. This is particularly important on systems that cannot support a large number of concurrent threads.
    3. Separating the task to be performed from the mechanics of creating the task allows us to use different strategies for running the task. For example, the task could be scheduled to execute after a time delay or to execute periodically. (A minimal pool is sketched below.)
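
    A minimal sketch of such a pool, using POSIX threads; the worker count, queue capacity, and task type are arbitrary choices for illustration, and shutdown handling is omitted (compile with -pthread):

    ```c
    #include <pthread.h>
    #include <stdio.h>

    #define WORKERS   4
    #define QUEUE_CAP 64

    typedef void (*task_fn)(int arg);

    static struct { task_fn fn; int arg; } queue[QUEUE_CAP];
    static int head = 0, tail = 0, count = 0;
    static pthread_mutex_t lock      = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;

    /* Each worker blocks (no busy waiting) until a task is queued. */
    static void *worker(void *unused) {
        (void)unused;
        for (;;) {
            pthread_mutex_lock(&lock);
            while (count == 0)
                pthread_cond_wait(&not_empty, &lock);
            task_fn fn = queue[head].fn;
            int arg = queue[head].arg;
            head = (head + 1) % QUEUE_CAP;
            count--;
            pthread_cond_signal(&not_full);
            pthread_mutex_unlock(&lock);
            fn(arg);                 /* run the task outside the lock */
        }
        return NULL;
    }

    /* The server hands a request to an existing worker instead of paying
     * thread-creation cost per request; it blocks if the queue is full,
     * which also bounds the amount of concurrent work. */
    void submit(task_fn fn, int arg) {
        pthread_mutex_lock(&lock);
        while (count == QUEUE_CAP)
            pthread_cond_wait(&not_full, &lock);
        queue[tail].fn = fn;
        queue[tail].arg = arg;
        tail = (tail + 1) % QUEUE_CAP;
        count++;
        pthread_cond_signal(&not_empty);
        pthread_mutex_unlock(&lock);
    }

    static void handle_request(int id) { printf("served request %d\n", id); }

    int main(void) {
        pthread_t tids[WORKERS];
        for (int i = 0; i < WORKERS; i++)
            pthread_create(&tids[i], NULL, worker, NULL);
        for (int i = 0; i < 8; i++)
            submit(handle_request, i);
        pthread_exit(NULL);   /* workers keep running; no shutdown path */
    }
    ```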

    Please describe the differences between preemptive scheduling and nonpreemptive scheduling.

    • Nonpreemptive scheduling
      • scheduling occurs only when a process switches from the running state to the waiting state or when a process terminates
      • A process can occupy the CPU until it terminates or switches to waiting state
      • easy to implement
      • better for batch system
    • Preemptive scheduling
      • can also schedule in other situations, e.g. when an interrupt moves a process from running to ready or from waiting to ready
      • A process may have to release the CPU before it terminates or switches to the waiting state
      • processes won't occupy the CPU for too long
      • larger overhead: context switch
      • need some special hardware e.g. a timer
      • better for time-sharing system

    What is a critical section? What three requirements must a solution to the critical-section problem satisfy?

    A critical section is a segment of code in which a process does something critical, e.g. modifies variables shared with other processes. No two processes may execute in their critical sections at the same time.

    1. Mutual exclusion. If process Pi is executing in its critical section, then no other process can be executing in its critical section.
    2. Progress. If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.
    3. Bounded waiting. There exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted. (Peterson's classic two-thread solution, sketched below, satisfies all three.)
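
    As an illustration, here is a sketch of Peterson's classic two-thread solution. It is written with sequentially consistent C11 atomics, standing in for the textbook assumption that loads and stores are not reordered:

    ```c
    /* Peterson's solution for two threads (i = 0 or 1). */
    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_bool flag[2];   /* flag[i]: thread i wants to enter */
    static atomic_int  turn;      /* whose turn it is to go first     */

    void enter_critical(int i) {
        int other = 1 - i;
        atomic_store(&flag[i], true);   /* announce intent          */
        atomic_store(&turn, other);     /* politely yield the turn  */
        /* busy-wait while the other thread wants in and it is its turn */
        while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
            ;                           /* spin */
    }

    void exit_critical(int i) {
        atomic_store(&flag[i], false);  /* no longer interested */
    }
    ```

    Thread i brackets its critical section with enter_critical(i) and exit_critical(i). Mutual exclusion holds because for both threads to pass the spin loop while both flags are set, turn would have to equal 0 and 1 at once.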

    What is the meaning of the term busy waiting? What other kinds of waiting are there in an operating system? Can busy waiting be avoided altogether? Explain your answer.

    Answer: Busy waiting means that a process is waiting for a condition to be satisfied in a tight loop, without relinquishing the processor. Alternatively, a process could wait by relinquishing the processor and blocking on a condition, to be awakened at some appropriate time in the future.

    Busy waiting can be avoided, but doing so incurs the overhead of putting a process to sleep and waking it up when the appropriate program state is reached; for very short waits on a multiprocessor, busy waiting (as in the spinlock sketched below) can actually be cheaper.
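
    For instance, the test-and-set spinlock sketched below busy-waits: the while loop keeps the processor occupied until the flag is released, rather than putting the caller to sleep the way a blocking pthread mutex would:

    ```c
    /* A busy-waiting spinlock sketch using a C11 test-and-set flag. */
    #include <stdatomic.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;

    void spin_lock(void) {
        /* test-and-set returns the previous value: loop until we are
         * the one that flipped it from clear to set. This loop IS the
         * busy waiting. */
        while (atomic_flag_test_and_set(&lock))
            ;   /* spin: the CPU does no useful work here */
    }

    void spin_unlock(void) {
        atomic_flag_clear(&lock);
    }
    ```

    Spinning of this sort is reasonable only when the lock is expected to be held briefly, typically on a multiprocessor; otherwise a blocking wait wastes fewer cycles.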

    Show that, if the wait() and signal() semaphore operations are not executed atomically, then mutual exclusion may be violated.

    Answer: A wait operation atomically decrements the value associated with a semaphore. If two wait operations are executed on a semaphore whose value is 1 and the two operations are not performed atomically, then both operations may proceed to decrement the semaphore value, thereby violating mutual exclusion. The sketch below shows the interleaving.
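
    A sketch of the violation (sem_value and broken_wait are illustrative names; the bad interleaving is spelled out in the comments):

    ```c
    /* A deliberately broken wait(): the decrement is split into a load,
     * a test, and a store, as it would be if wait() were not atomic. */
    static int sem_value = 1;  /* binary semaphore guarding a critical section */

    void broken_wait(void) {
        while (sem_value <= 0)
            ;                    /* busy-wait until positive */
        /* Both threads can observe sem_value == 1 here...           */
        int tmp = sem_value;     /* T1 reads 1; T2 also reads 1      */
        sem_value = tmp - 1;     /* T1 writes 0; T2 also writes 0    */
        /* ...so both proceed into the critical section: mutual
         * exclusion is violated. An atomic wait() makes the test and
         * the decrement a single indivisible step.                  */
    }
    ```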

    List the 4 necessary conditions of deadlock.

    1. Mutual exclusion. At least one resource must be held in a nonsharable mode; that is, only one process at a time can use the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.
    2. Hold and wait. A process must be holding at least one resource and waiting to acquire additional resources that are currently being held by other processes.
    3. No preemption. Resources cannot be preempted; that is, a resource can be released only voluntarily by the process holding it, after that process has completed its task.
    4. Circular wait. A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn−1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0. (The two-mutex sketch below exhibits all four conditions.)
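
    A classic sketch: thread A holds mutex m1 and waits for m2 while thread B holds m2 and waits for m1, so with unlucky timing neither proceeds (compile with -pthread; the sleep only widens the race window):

    ```c
    #include <pthread.h>
    #include <unistd.h>

    static pthread_mutex_t m1 = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t m2 = PTHREAD_MUTEX_INITIALIZER;

    static void *thread_a(void *unused) {
        (void)unused;
        pthread_mutex_lock(&m1);   /* hold m1 (mutual exclusion)  */
        sleep(1);
        pthread_mutex_lock(&m2);   /* wait for m2 (hold and wait) */
        pthread_mutex_unlock(&m2);
        pthread_mutex_unlock(&m1);
        return NULL;
    }

    static void *thread_b(void *unused) {
        (void)unused;
        pthread_mutex_lock(&m2);   /* opposite order -> circular wait */
        sleep(1);
        pthread_mutex_lock(&m1);
        pthread_mutex_unlock(&m1);
        pthread_mutex_unlock(&m2);
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, thread_a, NULL);
        pthread_create(&b, NULL, thread_b, NULL);
        pthread_join(a, NULL);     /* never returns once deadlocked */
        pthread_join(b, NULL);
        return 0;
    }
    ```

    Breaking any one condition prevents the deadlock; for example, requiring both threads to acquire m1 before m2 imposes a total order on the resources and removes the circular wait.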

    Please explain the difference between internal and external fragmentation.

    Internal fragmentation is the area within a region or page that is not used by the job occupying that region or page; this space is unavailable to the system until the job finishes and the region or page is released. External fragmentation, by contrast, is unused space between allocated regions: the total free memory may be sufficient to satisfy a request, but no single contiguous hole is large enough. Paging suffers only internal fragmentation, while variable-sized contiguous allocation suffers external fragmentation.

    Consider a paging system with the page table stored in memory.

    1. If a memory reference takes 200 nanoseconds, how long does a paged memory reference take?
    2. If we add associative registers, and 75 percent of all page-table references are found in the associative registers, what is the effective memory reference time? (Assume that finding a page-table entry in the associative registers takes zero time, if the entry is there.)

    Answer:

    1. 400 nanoseconds; 200 nanoseconds to access the page table and 200 nanoseconds to access the word in memory.
    2. Effective access time = 0.75 × (200 nanoseconds) + 0.25 × (400 nanoseconds) = 250 nanoseconds.
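
    The same arithmetic as a formula, with the 75% hit ratio as a parameter (a TLB hit costs one memory reference, a miss costs two):

    ```c
    #include <stdio.h>

    int main(void) {
        double t_mem = 200.0;  /* ns per memory reference   */
        double hit   = 0.75;   /* fraction found in the TLB */
        /* EAT = hit * t_mem + (1 - hit) * 2 * t_mem */
        double eat = hit * t_mem + (1.0 - hit) * 2.0 * t_mem;
        printf("EAT = %.0f ns\n", eat);   /* prints 250 ns */
        return 0;
    }
    ```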

    Discuss situations under which the most-frequently-used (MFU) page-replacement algorithm generates fewer page faults than the least-recently-used (LRU) page-replacement algorithm. Also discuss under what circumstances the opposite holds.

    Consider a system that holds four pages in memory and the reference string 1 2 3 4 4 4 5 1. When page 5 is fetched, MFU evicts page 4 (the most frequently used page) while LRU evicts page 1, so MFU incurs no fault on the final reference to page 1 but LRU does. This pattern is unlikely to occur much in practice. For the opposite case, consider the string 1 2 3 4 4 4 5 4: MFU again evicts page 4 and then faults on the final reference to it, while LRU evicts page 1 and hits, so here LRU makes the right decision.
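
    The claim can be checked with a small simulator; the sketch below (assumptions: 4 frames, ties broken by lowest frame index) counts the faults each policy incurs on the two reference strings:

    ```c
    #include <stdio.h>

    #define FRAMES 4

    /* mode 0 = LRU (evict smallest last_use), mode 1 = MFU (evict largest count) */
    static int simulate(const int *refs, int n, int mode) {
        int page[FRAMES], last_use[FRAMES], count[FRAMES], used = 0, faults = 0;
        for (int t = 0; t < n; t++) {
            int hit = -1;
            for (int i = 0; i < used; i++)
                if (page[i] == refs[t]) hit = i;
            if (hit >= 0) { last_use[hit] = t; count[hit]++; continue; }
            faults++;
            int victim = used;             /* use a free frame if one exists */
            if (used < FRAMES) {
                used++;
            } else {
                victim = 0;
                for (int i = 1; i < FRAMES; i++)
                    if (mode == 0 ? last_use[i] < last_use[victim]
                                  : count[i]    > count[victim])
                        victim = i;
            }
            page[victim] = refs[t];
            last_use[victim] = t;
            count[victim] = 1;
        }
        return faults;
    }

    int main(void) {
        const int s1[] = {1, 2, 3, 4, 4, 4, 5, 1};   /* MFU wins */
        const int s2[] = {1, 2, 3, 4, 4, 4, 5, 4};   /* LRU wins */
        printf("s1: LRU=%d MFU=%d\n", simulate(s1, 8, 0), simulate(s1, 8, 1));
        printf("s2: LRU=%d MFU=%d\n", simulate(s2, 8, 0), simulate(s2, 8, 1));
        return 0;
    }
    ```

    It prints s1: LRU=6 MFU=5 and s2: LRU=5 MFU=6, matching the discussion above.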

    What is the cause of thrashing? How does the system detect thrashing? Once it detects thrashing, what can the system do to eliminate this problem?

    Answer: Thrashing is caused by under-allocation of the minimum number of pages required by a process, forcing it to continuously page fault. The system can detect thrashing by evaluating the level of CPU utilization as compared to the level of multiprogramming. It can be eliminated by reducing the level of multiprogramming.

    Consider a file system where a file can be deleted and its disk space reclaimed while links to that file still exist.

    • What problems may occur if a new file is created in the same storage area or with the same absolute path name?
    • How can these problems be avoided?

    Answer: Let F1 be the old file and F2 be the new file. A user wishing to access F1 through an existing link will actually access F2. Note that the access protection for file F1 is used rather than the one associated with F2.

    This problem can be avoided by ensuring that all links to a deleted file are deleted as well. This can be accomplished in several ways:

    1. maintain a list of all links to a file, removing each of them when the file is deleted
    2. retain the links, removing them when an attempt is made to access a deleted file
    3. maintain a file reference list (or counter), deleting the file only after all links or references to that file have been deleted (sketched below).
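
    A minimal sketch of the third option (the names are illustrative, not a real file-system API); UNIX hard links work essentially this way, via the inode's link count:

    ```c
    #include <stdlib.h>

    struct inode {
        int   link_count;  /* number of directory entries pointing here */
        char *data;        /* file contents */
    };

    /* Called once per link removal; the file's storage is reclaimed only
     * at zero links, so a surviving link can never reach storage that has
     * been freed and reused by a new file. */
    void unlink_file(struct inode *ino) {
        if (--ino->link_count == 0) {
            free(ino->data);
            free(ino);
        }
    }
    ```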

    Consider a system that supports 5000 users. Suppose that you want to allow 4990 of these users to be able to access one file.

    1. How would you specify this protection scheme in UNIX?
    2. Could you suggest another protection scheme that can be used more effectively for this purpose than the scheme provided by UNIX?

    Answer

    1. There are two methods for achieving this:
      1. Create an access control list with the names of all 4990 users.
      2. Put these 4990 users in one group and set the group access accordingly. This scheme cannot always be implemented since user groups are restricted by the system.
    2. The universe access information applies to all users unless their name appears in the access-control list with different access permission. With this scheme you simply put the names of the remaining ten users in the access-control list, but with no access privileges allowed.

    Explain the purpose of the open and close operations.

    Answer:

    1. The open operation informs the system that the named file is about to become active.
    2. The close operation informs the system that the named file is no longer in active use by the user who issued the close operation.

    Consider a file system that uses a modified contiguous-allocation scheme with support for extents.

    A file is a collection of extents, with each extent corresponding to a contiguous set of blocks. A key issue in such systems is the degree of variability in the size of the extents. What are the advantages and disadvantages of the following schemes:

    1. All extents are of the same size, and the size is predetermined.
    2. Extents can be of any size and are allocated dynamically.
    3. Extents can be of a few fixed sizes, and these sizes are predetermined.

    Answer:

    1. If all extents are of the same size, and the size is predetermined, then it simplifies the block-allocation scheme. A simple bitmap or free list for extents would suffice.
    2. If the extents can be of any size and are allocated dynamically, then more complex allocation schemes are required. It might be difficult to find an extent of the appropriate size, and there might be external fragmentation. One could use the buddy-system allocator discussed in the previous chapters to design an appropriate allocator.
    3. When the extents can be of a few fixed sizes, and these sizes are predetermined, one would have to maintain a separate bitmap or free list for each possible size. This scheme is of intermediate complexity and of intermediate flexibility in comparison to the earlier schemes.

    Compare the throughput achieved by a RAID Level 5 organization with that achieved by a RAID Level 1 organization for the following:

    1. Read operations on single blocks
    2. Read operations on multiple contiguous blocks

    Answer:

    1. The throughput depends on the number of disks in the RAID system. A RAID Level 5 array comprising a parity block for every set of four blocks spread over five disks can support four to five operations simultaneously. A RAID Level 1 array comprising two disks can support two simultaneous operations. Of course, RAID Level 1 has greater flexibility as to which copy of a block is accessed, which can provide performance benefits by taking the position of the disk head into account.
    2. RAID Level 5 organization achieves greater bandwidth for accesses to multiple contiguous blocks since the adjacent blocks could be simultaneously accessed. Such bandwidth improvements are not possible in RAID Level 1.

    State the advantages and disadvantages of placing functionality in a device controller, rather than in the kernel.

    Answer:

    Advantages:

    1. Bugs are less likely to cause an operating system crash.
    2. Performance can be improved by utilizing dedicated hardware and hard-coded algorithms.
    3. The kernel is simplified by moving algorithms out of it.

    Disadvantages:

    1. Bugs are harder to fix; a new firmware version or new hardware is needed.
    2. Improving the algorithms likewise requires a hardware update rather than just a kernel or device-driver update.
    3. Embedded algorithms could conflict with an application's use of the device, decreasing performance.

    Why might a system use interrupt-driven I/O to manage a single serial port, but polling I/O to manage a front-end processor, such as a terminal concentrator?

    Polling can be more efficient than interrupt-driven I/O when the I/O is frequent and of short duration. A single serial port performs I/O relatively infrequently and should thus use interrupts, but a collection of serial ports such as those in a terminal concentrator can produce many short I/O operations, and interrupting for each one could create a heavy load on the system. A well-timed polling loop can alleviate that load without wasting many resources through looping when no I/O is needed.

    How does DMA increase system concurrency? How does it complicate hardware design?

    DMA increases system concurrency by allowing the CPU to perform tasks while the DMA system transfers data via the system and memory buses. Hardware design is complicated because the DMA controller must be integrated into the system, and the system must allow the DMA controller to be a bus master. Cycle stealing may also be necessary to allow the CPU and DMA controller to share use of the memory bus.

  • Original post: https://www.cnblogs.com/joyeecheung/p/3831994.html