### Author Topic: A short critical analysis of Paoloni’s code timing algorithm

#### Biterider

« on: January 20, 2023, 06:29:48 AM »
Hi
I would like to clarify a few points regarding the use of the Paoloni code benchmark algorithm.

The calculation of the variances indicates how strongly disturbances from the operating system or the hardware influence the measurements.
Since the algorithm selects only the minimum time value from each measurement step, all other values are discarded. The argument behind this is that the minimum is the undisturbed value the CPU can reach when it processes only the analyzed code. This is a premise that must be checked at the end of the procedure; if it cannot be guaranteed, the entire procedure is no longer valid. More about this later.
The concept expressed as a formula looks like this:

Measured Execution Time = Minimal Execution Time + Random Disturbances
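The idea above can be sketched in a few lines of Python. The sample values below are purely hypothetical timing deltas; only the min-per-step filtering is the point:

```python
# Hypothetical timing samples (e.g. TSC deltas) collected in three
# procedural steps; the numbers are illustrative only.
samples_per_step = {
    1: [120, 118, 140, 118, 200],
    2: [230, 226, 226, 300, 228],
    3: [340, 334, 334, 360, 338],
}

# Keep only the minimum of each step: the presumed undisturbed value.
# Everything above the minimum is treated as random disturbance.
minima = {step: min(times) for step, times in samples_per_step.items()}
print(minima)  # {1: 118, 2: 226, 3: 334}
```

Note that the filtering only helps if at least one sample per step really is undisturbed; that is exactly the premise that has to be verified afterwards.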

A linear regression is then performed over these unperturbed minimum values from each procedural step. The assumption is that the behavior is linear, so the exact execution-time increment per procedural step can be calculated. As a by-product, the overhead is also obtained.

Minimal Execution Time = Overhead + Iterations * Code Execution Time
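A minimal sketch of this regression step, again in Python with hypothetical minima (the slope is the per-iteration code execution time, the intercept is the overhead):

```python
def fit_line(xs, ys):
    # Ordinary least-squares fit: y = intercept + slope * x
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope

# Hypothetical minimal execution times (cycles) per iteration count.
iterations = [100, 200, 300, 400]
minima = [1050, 2050, 3050, 4050]

overhead, cycles_per_iter = fit_line(iterations, minima)
print(overhead, cycles_per_iter)  # 50.0 10.0
```

With these made-up values the fit yields an overhead of 50 cycles and 10 cycles per iteration of the measured code.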

For our purposes, only the slope of the regression line is relevant.
The better the line fits the undisturbed minimum values, the higher the coefficient of determination R² and the better the measurement accuracy.
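R² can be computed directly from the fitted line and the minima. In this sketch the intercept and slope are assumed to have already been obtained from the regression; the data and the small disturbance in one step are hypothetical:

```python
def r_squared(xs, ys, intercept, slope):
    # Coefficient of determination of the fitted line against the data:
    # 1 minus (residual sum of squares / total sum of squares).
    my = sum(ys) / len(ys)
    ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

# Hypothetical minima with a slight residual disturbance in two steps.
iterations = [100, 200, 300, 400]
minima = [1050, 2052, 3050, 4049]

r2 = r_squared(iterations, minima, intercept=50.0, slope=10.0)
```

An R² very close to 1 indicates that the minima really do lie on a line; a visibly lower value is a sign that some steps never produced an undisturbed sample.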

Disturbances that influence the measured execution time originate in the operating system and in the hardware.
The Paoloni algorithm can only compensate for them if an undisturbed minimum value is measured in each procedural step. If this is not the case, more probing is required. If that still does not help, the algorithm must be run in a more suitable environment than Windows, such as UEFI or a Linux-like operating system. This last step requires more effort and is only necessary when the undisturbed minimum values cannot be reached. A very good piece of code for UEFI is available here in the forum.

When looking at the measured values, perfect linearity is sometimes not achieved, possibly because hardware influences affect the execution speed. In this case, the measurement run must be repeated.

One problem with this approach is that the longer the tested code runs, the more likely each individual measurement is to be hit by at least one interruption, so the chance of capturing an unperturbed minimum shrinks, to the point where the procedure is no longer feasible.

References:
http://masm32.com/board/index.php?topic=10604.0
http://masm32.com/board/index.php?topic=10614.0
http://masm32.com/board/index.php?topic=10099.0

Biterider
« Last Edit: January 23, 2023, 02:45:23 AM by Biterider »