  • Performance Monitor on Windows

    Focus on the following counters and their thresholds:

    Counter (Parent Object) — Recommended Range

    % CPU Time (System): 0–90%. > 90% indicates a potential processor bottleneck and may also indicate a thread contention problem; investigate Context Switches/sec and System Calls/sec for potential thread issues.
    % Privileged Time (System): 0–40%. > 40% indicates excessive system activity; correlate with System Calls/sec.
    Context Switches/sec (System): 0–10,000. > 10,000 may indicate too many threads contending for resources; correlate with System Calls/sec and the thread counts in Windows Task Manager to identify the process responsible.
    File Control Operations/sec (System): ratio dependent. The combined rate of file system operations that are neither reads nor writes (file control/manipulation only, non-data related). Complement of File Data Operations/sec.
    File Data Operations/sec (System): ratio dependent. The combined rate of all read/write operations for all logical drives. Complement of File Control Operations/sec.
    System Calls/sec (System): 0–20,000. > 20,000 indicates potentially excessive Windows system activity; correlate with Context Switches/sec and the thread counts in Windows Task Manager to identify the process responsible.
    Interrupts/sec (System): 0–5,000. > 5,000 indicates possibly excessive hardware interrupts; whether this is justified depends on device activity.
    Pages/sec (Memory): 0–200. > 200 warrants investigation into the memory subsystem; distinguish reads (pages in) from writes (pages out); check for a properly sized paging file and disk configuration. May indicate application memory allocation or heap management problems.
    Average Disk Queue Length (Logical Disk): 0–2. > 2 indicates a potential disk I/O bottleneck as the I/O subsystem's request queue grows; correlate with Average Disk sec/Transfer.
    Average Disk sec/Transfer (Logical Disk): 0–0.020 s. > 0.020 seconds indicates excessive request transfer latency and a potential disk I/O bottleneck; distinguish reads/sec from writes/sec; correlate with Average Disk Queue Length.
    Bytes Total/sec (Network Interface): depends on the interface type (10BaseT, 100BaseT). A potential network I/O bottleneck exists when throughput approaches the theoretical maximum for the interface. For example, 100BaseT's theoretical maximum is 100 Mbits/sec ÷ 8 = 12.5 Mbytes/sec (for 10BaseT, 10 Mbits/sec ÷ 8 = 1.25 Mbytes/sec).
    Packets/sec (Network Interface): depends on the interface type (10BaseT, 100BaseT).

     Common Counters and Recommended Ranges

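As a sanity check, the recommended ranges above can be encoded and compared against sampled values. This is a minimal sketch; the sampled numbers are illustrative, and on a real system the values would come from Performance Monitor, typeperf, or a similar collector.

```python
# Upper bounds taken from the "Recommended Range" column above.
THRESHOLDS = {
    "% CPU Time": 90,                # > 90% suggests a CPU bottleneck
    "% Privileged Time": 40,         # > 40% suggests excessive system activity
    "Context Switches/sec": 10_000,  # > 10,000 suggests thread contention
    "System Calls/sec": 20_000,
    "Interrupts/sec": 5_000,
    "Pages/sec": 200,
    "Avg. Disk Queue Length": 2,
    "Avg. Disk sec/Transfer": 0.020,
}

def flag_bottlenecks(sample):
    """Return the counters in `sample` that exceed their recommended range."""
    return {name: value for name, value in sample.items()
            if name in THRESHOLDS and value > THRESHOLDS[name]}

sample = {
    "% CPU Time": 95,
    "Context Switches/sec": 4_200,
    "Avg. Disk sec/Transfer": 0.035,
}
print(flag_bottlenecks(sample))
# {'% CPU Time': 95, 'Avg. Disk sec/Transfer': 0.035}
```

Counters that come back flagged are starting points for investigation, not verdicts; as the table notes, each one should be correlated with its related counters first.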

    So what are we really measuring with the Physical disk performance object -> Avg. Disk sec/Transfer (or /Read, or /Write) counter? 
    We are measuring all the time spent below the partition manager level. 
    When the IO request is sent by the Partition Manager down the stack, we time stamp it; when it arrives back, we time stamp it again and calculate the difference. That time difference is the latency.
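The timestamping described above can be sketched as follows. Here service_io() is a hypothetical stand-in for everything below the partition manager (drivers, queues, hardware), and the 5 ms delay is purely illustrative.

```python
import time

def service_io():
    # Stand-in for the entire stack below the partition manager.
    time.sleep(0.005)  # pretend the disk subsystem takes ~5 ms

def timed_io():
    start = time.perf_counter()         # time stamp on the way down the stack
    service_io()
    return time.perf_counter() - start  # latency = completion - dispatch

latency = timed_io()
print(f"measured latency: {latency * 1000:.1f} ms")
```

Note that the measurement brackets the whole round trip, which is exactly why queue waits anywhere below the partition manager show up in the number.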

    This means we are accounting for the time spent in the following components:

    1. Class Driver – manages the device type, such as disks or tapes.
    2. Port Driver – manages the transport protocol, such as SCSI, FC, or SATA.
    3. Device Miniport Driver – the device driver for the storage adapter, supplied by the vendor of the device (e.g., a RAID controller or FC HBA).
    4. Disk Subsystem – everything below the Device Miniport Driver. This could be as simple as a cable connected to a single physical hard disk, or as complex as a Storage Area Network.

    How does disk queuing affect the measured latency in Perfmon?
    A disk subsystem can accept only a limited number of IOs at a given time; excess IOs are queued until the disk can accept more. The time an IO spends in the queues below the Partition Manager is included in Perfmon's physical disk latency measurements, so as the queues grow and IOs wait longer, the measured latency grows as well.
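The effect of queuing on measured latency can be illustrated with a toy model: a single disk that services one IO every 5 ms receives a burst of outstanding IOs, so the i-th IO waits behind i others before being serviced. The numbers are illustrative, not from any real device.

```python
def burst_latencies(outstanding, service_ms=5.0):
    """Perfmon-style latency (ms) seen by each IO in a burst of
    `outstanding` simultaneous IOs: queue wait plus service time."""
    return [(i + 1) * service_ms for i in range(outstanding)]

for depth in (1, 4, 16):
    lat = burst_latencies(depth)
    print(f"{depth:2d} outstanding -> avg latency {sum(lat) / len(lat):.1f} ms")
```

Even though the disk itself never slows down in this model, the average measured latency climbs with queue depth, which is the behavior described above.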

    There are multiple queues below the Partition Manager level:

    • Microsoft Port Driver Queue – the SCSIport or Storport queue.
    • Vendor-Supplied Device Driver Queue – the OEM device driver's queue.
    • Hardware Queues – such as the disk controller queue, SAN switch queues, the array controller queue, and the hard disk queue.
    • Although not a queue, we also account for the time the hard disk spends actively servicing the IO, plus the travel time back up to the partition manager level where the IO is marked as completed.

    Finally, pay special attention to the Port Driver Queue (the SCSIport or Storport queue).

    The Port Driver is the last Microsoft component to touch an IO before it is handed off to the vendor-supplied Device Miniport Driver.
    If the Device Miniport Driver can't accept any more IO because its queue and/or the hardware queues below it are saturated, IO starts accumulating in the Port Driver Queue. The size of the Microsoft Port Driver queue is limited only by available system memory (RAM), so it can grow very large and cause large measured latencies.

    In conclusion:
    The time an IO spends in queue is added to the disk latency reported by Perfmon.

    To keep the queue under control you have to tune your applications to limit the maximum number of outstanding I/O operations they generate. That’s a subject for another blog post.
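One common way to implement that limit is to gate IO submission behind a semaphore, so excess requests wait inside the application instead of piling up in the Port Driver Queue. A minimal sketch, assuming a cap of 4 outstanding IOs; do_io() and the bookkeeping are illustrative placeholders for real read/write calls.

```python
import threading

MAX_OUTSTANDING = 4
io_slots = threading.BoundedSemaphore(MAX_OUTSTANDING)
peak = 0        # highest number of simultaneously outstanding IOs observed
in_flight = 0
lock = threading.Lock()

def do_io(_):
    global peak, in_flight
    with io_slots:                      # blocks once 4 IOs are outstanding
        with lock:
            in_flight += 1
            peak = max(peak, in_flight)
        # ... issue the real read/write here ...
        with lock:
            in_flight -= 1

threads = [threading.Thread(target=do_io, args=(i,)) for i in range(32)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("peak outstanding IOs:", peak)    # never exceeds MAX_OUTSTANDING
```

The right cap depends on the storage hardware; the point is that the application, not the Port Driver Queue, absorbs the excess.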

     

  • Original article: https://www.cnblogs.com/jjkv3/p/2946330.html