Performance Test tool for Dynamics 365

     

    The Performance Test tool is available in the D365 F&O client, within the System Administration module.

    System Administration > Periodic Tasks > Run Performance Test

    [screenshot]

    The tool lets you run some very simple, controlled, and repeatable tests that give you an indication of how long specific micro-operations related to data retrieval and modification take.

    How to run the tool

    From there you get a form that looks like this:

    [screenshot]

    This form lets you configure which tests to perform as well as the number of times you want those tests executed. Once you have set up your test, click Run and the selected tests will be executed.

    A couple of things to know here. The more tests you select, the longer the run will obviously take.

    The record count to test must be a value between 10 and 100 000, and it will be brought back within those bounds if it is below or above.
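    Purely as an illustration, the clamping behaves along the lines of this little X++ sketch (the variable names are mine, not the tool’s):

        // Hypothetical clamp of the record count entered by the user.
        int requestedCount = 500000;                                // whatever was typed in
        int recordCount    = min(max(requestedCount, 10), 100000);  // brought back into 10..100 000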

    If you’re looking to test something specific, you should disable most of the other tests.

    Anyhow, with a setting of 1000, the tool will perform 1000 inserts into a table and then run through the other selected tests: 1000 updates on that table, 1000 selects on the clustered index, 1000 selects on the unique index, a record-set insert of 1000 rows into a separate table, 1000 inserts into tempdb, and so on.
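    In X++ terms, the first two of those micro-tests look roughly like the sketch below; PerfTestTable and its IntValue field are made-up names standing in for the tool’s real objects:

        PerfTestTable perfTest;
        int           i;
        int           recordCount = 1000;

        // 1000 single-row inserts into the working table
        for (i = 1; i <= recordCount; i++)
        {
            perfTest.clear();
            perfTest.IntValue = i;
            perfTest.insert();
        }

        // 1000 single-row updates on that same table
        ttsbegin;
        while select forupdate perfTest
        {
            perfTest.IntValue += 1;
            perfTest.update();
        }
        ttscommit;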

    As a recap, the tests mostly relate to database access (insert, update, select, and delete) as well as accessing and manipulating data in tempdb, in-memory tables, and the AOS cache, with every operation repeated as many times as you have specified.

    How to read and understand the results

    Reading the results

    After running the tool, you will get an output that looks like this:

    [screenshot]

    Of course, the list here depends on the tests that were selected.

    One thing to point out right off the bat is that all the values provided here are in milliseconds (as stated on the first result but not repeated for the others).

    Every value corresponds to x operations (1000 in my example), though a few of the tests deserve a closer look.

    The insert test measures the time to perform x insertions of one row each, while the record-list and record-set tests perform one operation that affects x rows.
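    To make that distinction concrete, here is a rough X++ sketch of the set-based flavour: one RecordInsertList flush and one insert_recordset statement, each affecting x rows (the table and field names are invented):

        PerfTestTable     source;
        PerfTestCopyTable target;
        RecordInsertList  insertList = new RecordInsertList(tableNum(PerfTestTable));
        int               i;

        // RecordInsertList: rows are buffered on the AOS and flushed in a few bulk round trips
        for (i = 1; i <= 1000; i++)
        {
            source.clear();
            source.IntValue = i;
            insertList.add(source);
        }
        insertList.insertDatabase();

        // insert_recordset: a single server-side statement copying the rows to another table
        insert_recordset target (IntValue)
            select IntValue from source;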

    Regarding the selects on the clustered index, the test selects one record 10 times, then moves on to the next record, until the total number of operations reaches x (so with 1000 operations we are testing 100 distinct records). This makes it a hybrid test between SQL and AOS caching (unless caching has been disabled in the options), whereas the unique index cache hit test is served entirely from the AOS record cache, and the unique index test without cache and the non-unique index test both go fully to SQL for each select.
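    The clustered-index pattern described above could be sketched like this (again with invented names); when record caching is enabled, only the first of every 10 identical reads should need to go to SQL:

        PerfTestTable perfTest;
        int           totalOperations  = 1000;
        int           repeatsPerRecord = 10;
        int           recordNo, repeat;

        for (recordNo = 1; recordNo <= totalOperations div repeatsPerRecord; recordNo++)
        {
            for (repeat = 1; repeat <= repeatsPerRecord; repeat++)
            {
                // Repeated reads of the same key; the repeats can be served from the AOS cache.
                select firstonly perfTest
                    where perfTest.IntValue == recordNo;   // assumed clustered index key
            }
        }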

    As a recap, we have discussed several different tests: some that are mostly just SQL, some that are pretty much AOS-caching related, and others that are hybrid.

    Interpreting the results

    It is important to understand what this test can help you with and what it cannot help you with.

    First, this test is only running very lightweight queries, as many times as you specify.

    As a result, this gives very little information about how powerful the SQL server is; it tells you more about how fast it can process many small queries, which is typically the type of workload you would expect from an ERP. It also means that you control how long the test is going to run and that, in any case, it will have very little impact on the rest of the system.

    This also means that a test like the insert, which requires very little work from the SQL server (typically less than 0.5 ms to actually perform the insert), will give you a close look at the latency between the AOS and the SQL server, if and only if the DTU on the SQL server and the CPU on the AOS are very low when you run the test.
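    To put rough numbers on that: if 1000 single-row inserts report, say, 2500 ms in total, that is 2.5 ms per insert; with the SQL work itself typically under 0.5 ms, roughly 2 ms of every operation would be round-trip latency between the AOS and the database (the figures here are purely illustrative).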

    It is also a good test to compare the impact of high DTU: if the server is under high DTU you can expect some additional waiting, hence the measured time would increase, and you can compare the results under high and low DTU.

    On the contrary, tests like the SGOC or cache-hit tests should have no correlation with the SQL DTU, but they should be affected by the AOS CPU.

    So, if you want to see the effect of CPU on the AOS, those are the tests you might want to run.

    The SGOC test is a good way to see the impact of having many AOS instances, as this data needs to be written to all AOS instances.
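    For reference, a minimal SysGlobalObjectCache (SGOC) usage sketch looks like this; the scope name and values are made up, and the comments simply restate the point above about the data having to reach every AOS:

        SysGlobalObjectCache cache = classfactory.globalObjectCache();
        str       scope = 'PerfTestDemo';    // made-up scope name
        container value;
        int       i;

        for (i = 1; i <= 1000; i++)
        {
            // Each write is shared state that all AOS instances need to see,
            // which is why this test is sensitive to the number of AOS instances.
            cache.insert(scope, [i], ['someValue', i]);
        }

        value = cache.find(scope, [1]);      // later reads are served from the cache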

    This tool also provides good insight into how much more efficient record-set operations are.

    Apart from providing insight into how certain types of operations are affected by different kinds of load, the tool also lets us see the impact of increasing the number of records.

    For instance, you might want to compare 1000 rows with 10 000 rows, which will show you how these operations scale with the number of records (linearly? logarithmically? etc.).
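    For example, if the record-set insert reports 40 ms at 1000 rows and roughly 400 ms at 10 000 rows, it scales close to linearly; if the larger run takes noticeably less than ten times as long, part of the cost is fixed overhead that gets amortised (those numbers are only illustrative).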

    On another note, and this is very important, this tool is not meant to compare one environment with another. There are multiple reasons for that.

    Different environments might have a different number of AOS instances, which will affect things like caching, especially when it comes to the SGOC.

    Different environments might be in different Azure regions which might have different latency.

    Different environments might have different sizing and different load.

    How it works under the hood and impact

    If we look under the hood, the first thing I’d like to point out is that the tool uses its own tables:

    [screenshot]

    This tells us that the tool is not touching any transactional tables.

    These are fairly simple tables with a couple of generic fields of different types:

    [screenshot]

    If we look a little at how the tool works, we can see that those tables are cleaned before and after each test:

    [screenshot]

    This guarantees that no records are left in those tables before and after every test; the tables used by the tool only contain data while a test is running.
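    The cleanup itself presumably boils down to a set-based delete per working table, something along these lines (table names invented):

        PerfTestTable     perfTest;
        PerfTestCopyTable perfTestCopy;

        // Empty the working tables so every test starts and ends with zero rows.
        delete_from perfTest;
        delete_from perfTestCopy;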

    Then every selected test is simply called in sequence, one at a time.

    [screenshot]
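    In outline, the driver does something like the following; the flags and method names are invented stand-ins for the real ones:

        // Run each enabled test in turn, one after the other.
        if (insertTestEnabled)
        {
            this.runInsertTest(recordCount);
        }
        if (updateTestEnabled)
        {
            this.runUpdateTest(recordCount);
        }
        if (clusteredIndexSelectTestEnabled)
        {
            this.runClusteredIndexSelectTest(recordCount);
        }
        // ... and so on for the remaining selected tests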

    If we look at the detail of a given test, here is what we can see.

    [screenshot]

    We simply start a timer, run the operation, stop the timer, and report the time it took to run the operation.

    We can also see from the above that the data used in the table (I chose the first step, the table population, on purpose) is simply basic data.
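    The measuring pattern can be sketched as follows; whether the real implementation uses a .NET Stopwatch or another timer, the shape is the same, and populateTestTable is an invented helper that fills the table with simple generated values:

        System.Diagnostics.Stopwatch stopwatch = System.Diagnostics.Stopwatch::StartNew();

        this.populateTestTable(recordCount);   // hypothetical step 1: insert basic generated data

        stopwatch.Stop();
        info(strFmt('Populate test: %1 ms', stopwatch.ElapsedMilliseconds));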

    From the above we can see that the impact on the environment from running this test with a “small” value such as 1000 rows is pretty much minimal.

    With that said, we’ve gone through:

    • How to access the tool
    • How to run the tool
    • How to read and interpret the results
    • How the tool works under the hood

    I hope this was useful for you; this concludes this blog entry.
