### Author Topic: Code benchmark on Windows  (Read 738 times)

#### Biterider

• Moderator
• Member
• Posts: 1082
• ObjAsm Developer
##### Code benchmark on Windows
« on: January 12, 2023, 05:39:00 AM »
Hi
I spent some time adapting the code from this thread http://masm32.com/board/index.php?topic=10099.msg110903#msg110903 to run on Windows.
The code was originally developed for Linux, later adapted for UEFI, and has now been further developed for Windows.

The intent was to see if such a benchmark was feasible and gave reproducible results.
Looking at the numbers and comparing them to the UEFI version, the code execution time looks very similar and the coefficient of determination R2 looks remarkably good.
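For readers unfamiliar with the method, here is a minimal Python sketch of the kind of evaluation the benchmark performs (following the Paoloni approach discussed in the linked thread): take the minimum tick count per ensemble, then fit a line of minimum vs. iteration count, so the intercept estimates the measurement overhead and the slope the code execution time. The function names and the synthetic data are illustrative only, not the actual benchmark code.

```python
# Sketch of a Paoloni-style evaluation: ensemble i runs the measured
# code i times; the per-ensemble minimum tick counts are fitted with
# min(i) = overhead + time_per_iteration * i.
# All names and data below are illustrative, not the benchmark's code.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b, r2)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx                      # slope: ticks per iteration
    a = my - b * mx                    # intercept: measurement overhead
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1.0 - ss_res / ss_tot         # coefficient of determination
    return a, b, r2

# Synthetic per-ensemble tick minima for 0..4 iterations of the code:
# ~78 ticks of fixed overhead plus ~15.5 ticks per iteration.
mins = [78.0, 93.5, 109.0, 124.5, 140.0]
overhead, per_iter, r2 = fit_line(list(range(len(mins))), mins)
```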

If you want to test it, the code is on Github and the binary in the attachment. It requires DebugCenter to get the results.

Biterider

#### HSE

• Member
• Posts: 2494
• AMD 7-32 / i3 10-64
##### Re: Code benchmark on Windows
« Reply #1 on: January 12, 2023, 08:11:03 AM »
Hi Biterider!!

Fantastic! Very simple to test little algorithms.

Code:
```
Spurious =    16          [-]         Number of Spurious min. Values
MaxMaxDiff =  84868       [Tick]      Max. Spread of the Values
MeanVar =     4.166E+0005 [Tick^2]    Mean Variance
VarOfVars =   1.214E+0012 [Tick^4]    Variance of Variances
MinsMSE =     5.546E+0002 [Tick^2]    Linear Regression MSE
R2 =          9.973E-0001 [-]         Determination Coeficient
Overhead =    78.4        [Tick]      Measurement Overhead
Measure =     15.5        [Tick]      Code Execution Time
```
> the coefficient of determination R2 looks remarkably good.

You can't have a bad R2 here, forget that  . You have to look at the variance, and the Variance of Variances is at least 10^6 times bigger than under microLinux or UEFI.

It doesn't look very good at all:

Code: (Without loop)
```
Spurious =    9           [-]         Number of Spurious min. Values
MaxMaxDiff =  89794       [Tick]      Max. Spread of the Values
MeanVar =     1.005E+0005 [Tick^2]    Mean Variance
VarOfVars =   4.369E+0010 [Tick^4]    Variance of Variances
MinsMSE =     2.145E+0000 [Tick^2]    Linear Regression MSE
R2 =          8.739E-0003 [-]         Determination Coeficient
Overhead =    101.4       [Tick]      Measurement Overhead
Measure =     0.0         [Tick]      Code Execution Time
```
Code: (With loop but without target)
```
Spurious =    1           [-]         Number of Spurious min. Values
MaxMaxDiff =  88474       [Tick]      Max. Spread of the Values
MeanVar =     1.095E+0005 [Tick^2]    Mean Variance
VarOfVars =   6.304E+0010 [Tick^4]    Variance of Variances
MinsMSE =     1.918E+0001 [Tick^2]    Linear Regression MSE
R2 =          9.694E-0001 [-]         Determination Coeficient
Overhead =    100.6       [Tick]      Measurement Overhead
Measure =     0.9         [Tick]      Code Execution Time
```
Code: (With loop and target)
```
Spurious =    2           [-]         Number of Spurious min. Values
MaxMaxDiff =  95388       [Tick]      Max. Spread of the Values
MeanVar =     3.608E+0005 [Tick^2]    Mean Variance
VarOfVars =   2.637E+0011 [Tick^4]    Variance of Variances
MinsMSE =     1.112E+0002 [Tick^2]    Linear Regression MSE
R2 =          9.994E-0001 [-]         Determination Coeficient
Overhead =    82.7        [Tick]      Measurement Overhead
Measure =     15.4        [Tick]      Code Execution Time
```
HSE
Equations in Assembly: SmplMath

#### Biterider

• Moderator
• Member
• Posts: 1082
• ObjAsm Developer
##### Re: Code benchmark on Windows
« Reply #2 on: January 13, 2023, 02:40:25 AM »
Hi HSE
I agree with you that the "quality" of the measurement in Windows is not as good as in the other environments mentioned.
The question we need to answer is whether it is good enough for code timing.

The criteria presented by Paoloni and the method he showed seem to me to be much better than the simple and fallacious start-stop measurement method.
Repeatability and linearity are the criteria for me to validate the method. Variance, variance of variances and so on are parameters that define accuracy and dispersion.
The last parameter is much worse under Windows, as expected, but the approach is still reasonable and readily applicable.
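To make the two dispersion parameters concrete, here is a small Python sketch of how such statistics can be computed over ensembles of repeated tick samples: the mean of the per-ensemble variances characterizes the typical measurement noise, while the variance of those variances characterizes how stable the noise itself is. The data and names are made up for illustration; this is not the benchmark's actual code.

```python
# Sketch of the dispersion statistics discussed above, computed over
# ensembles of repeated tick samples. Population variance is used for
# simplicity; the real benchmark may differ. Data is synthetic.

def variance(samples):
    """Population variance of a list of numbers."""
    m = sum(samples) / len(samples)
    return sum((s - m) ** 2 for s in samples) / len(samples)

# Three ensembles of tick counts (synthetic):
ensembles = [
    [100, 104, 102, 110],
    [101, 103, 105, 115],
    [ 99, 120, 100, 101],
]
vars_per_ensemble = [variance(e) for e in ensembles]

mean_var = sum(vars_per_ensemble) / len(vars_per_ensemble)  # "MeanVar"
var_of_vars = variance(vars_per_ensemble)                   # "VarOfVars"
```

A large `var_of_vars` relative to `mean_var` means the noise level itself jumps from ensemble to ensemble, which matches the Windows-vs-UEFI comparison made earlier in the thread.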

Just my 2 cents

Biterider

#### HSE

• Member
• Posts: 2494
• AMD 7-32 / i3 10-64
##### Re: Code benchmark on Windows
« Reply #3 on: January 13, 2023, 08:36:19 AM »
Hi Biterider!

> The criteria presented by Paoloni and the method he showed seem to me to be much better than the simple and fallacious start-stop measurement method.

No doubt.

> Repeatability and linearity are the criteria for me to validate the method.

Not for Paoloni. Repeatability is just an expression of measurement variance, and that is the point. But the results are very stable because sample groups are processed, not individual samples.

> Variance, variance of variances and so on are parameters that define accuracy and dispersion.

Exactly: this is what makes a method reliable or not for Paoloni.

> Just my 2 cents

A complete piece.

I think a more complex design is necessary, because I see some strange things, also in UEFI, perhaps related to core speed variations. Probably that is a processor response to load or temperature. But that is pure speculation.
« Last Edit: January 23, 2023, 02:46:19 AM by Biterider »
Equations in Assembly: SmplMath

#### Biterider

• Moderator
• Member
• Posts: 1082
• ObjAsm Developer
##### Re: Code benchmark on Windows
« Reply #4 on: January 20, 2023, 02:19:02 AM »
Hi
I took a closer look at the data from the benchmark application and found an issue that also occurs in other related projects.

The problem lies in the linear regression, which assumes that the first data point is at X=0, but in reality it is at X=1.
For this reason, the offset, called overhead in this algorithm, needs to be corrected by one slope unit.
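The effect of the indexing convention can be checked numerically: fitting the same minima against X starting at 0 versus X starting at 1 leaves the slope unchanged but shifts the intercept by exactly one slope unit. Which convention the data actually follows therefore decides whether the overhead needs this correction. A hedged Python sketch with illustrative data (not the benchmark's code):

```python
# Numerical check of the point under discussion: refitting the same
# data with X shifted by one changes the intercept by exactly one
# slope unit, while the slope stays the same. Data is illustrative.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

mins = [78.0, 93.5, 109.0, 124.5, 140.0]    # synthetic tick minima
a0, b0 = fit_line(list(range(0, 5)), mins)  # first point at X=0
a1, b1 = fit_line(list(range(1, 6)), mins)  # first point at X=1
# b0 == b1, and a1 == a0 - b0: the intercept moves by one slope unit.
```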

I have made the corrections in the Github projects, but those who use the algorithm must also fix their code.

Biterider

#### HSE

• Member
• Posts: 2494
• AMD 7-32 / i3 10-64
##### Re: Code benchmark on Windows
« Reply #5 on: January 20, 2023, 04:18:26 AM »
Hi Biterider!

> The problem lies in the linear regression, which assumes that the first data is at X=0, but in reality it is at X=1.

No. You begin with x=0:
Code:
```
    xor rsi, rsi
    ···
    xor edi, edi
    ···
    mov ebx, esi
```
Only the slope picks up an extra count, because of:
Code:
```
    dec ebx
```
HSE
Equations in Assembly: SmplMath

#### Biterider

• Moderator
• Member
• Posts: 1082
• ObjAsm Developer
##### Re: Code benchmark on Windows
« Reply #6 on: January 20, 2023, 04:37:39 AM »
Hi HSE
No, it is like I said.
The index starts with zero but for the following calculations you have to think of it as X=1.
That is why you have to correct the offset/overhead value, otherwise you will get the offset at X=1 and that is not what you want.
The axis intersection is at X=0.

Biterider

#### HSE

• Member
• Posts: 2494
• AMD 7-32 / i3 10-64
##### Re: Code benchmark on Windows
« Reply #7 on: January 20, 2023, 05:19:29 AM »
Hi
No..
No...

What can I say? With the same code, here it begins with x=0  :
Code:
```
SIZE_OF_STAT  equ 2
BOUND_OF_LOOP equ 3
```
Code:
```
ebx = 0 [BenchmarkWin.asm, 127]
ebx = 0 [BenchmarkWin.asm, 127]
ebx = 1 [BenchmarkWin.asm, 127]
    testing opcodes [BenchmarkWin.asm, 130]
ebx = 1 [BenchmarkWin.asm, 127]
    testing opcodes [BenchmarkWin.asm, 130]
ebx = 2 [BenchmarkWin.asm, 127]
    testing opcodes [BenchmarkWin.asm, 130]
    testing opcodes [BenchmarkWin.asm, 130]
ebx = 2 [BenchmarkWin.asm, 127]
    testing opcodes [BenchmarkWin.asm, 130]
    testing opcodes [BenchmarkWin.asm, 130]
```
Equations in Assembly: SmplMath

#### Biterider

• Moderator
• Member
• Posts: 1082
• ObjAsm Developer
##### Re: Code benchmark on Windows
« Reply #8 on: January 20, 2023, 05:31:02 AM »
Hi HSE
Sorry, I think I was wrong.
The first iteration does not execute the test code and as such, it is evaluated at X=0.
I'll have to roll back the code.

Biterider

#### HSE

• Member
• Posts: 2494
• AMD 7-32 / i3 10-64
##### Re: Code benchmark on Windows
« Reply #9 on: January 20, 2023, 05:56:46 AM »

Equations in Assembly: SmplMath