Hi Guys,
I see quite a few posts around saying that various routines take x amount of t-cycles.
How do you measure this?
http://masm32.com/board/index.php?topic=49.0
This is one way...
There are other, more current methods as well.
Browse some postings in the Laboratory, where such topics are posted.
Cool. Will take a look. Thanks :biggrin:
Sometimes they use cycle counters tailored to particular code pieces. Generally, a small piece of code is executed x number of times, and the cycles or milliseconds used are counted. This is done several times. The code is then posted as tested, including the timer or cycle counter, to allow testing on other hardware or OS versions.
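For what it's worth, here is a minimal sketch in C of that kind of measurement, using the MSVC __rdtsc intrinsic. The DoWork routine and the repeat counts are only placeholders for whatever code piece you want to time, not anyone's actual timing macro:

    #include <stdio.h>
    #include <intrin.h>                     /* __rdtsc intrinsic (MSVC) */

    /* placeholder for the code piece under test */
    static void DoWork(void)
    {
        volatile int x = 0;
        int i;
        for (i = 0; i < 1000; i++)
            x += i;
    }

    int main(void)
    {
        unsigned __int64 t1, t2, best = (unsigned __int64)-1;
        int run, i;

        /* repeat the whole measurement several times and keep the lowest
           reading, which filters out interrupts and task switches */
        for (run = 0; run < 10; run++)
        {
            t1 = __rdtsc();
            for (i = 0; i < 100; i++)       /* execute the code x times */
                DoWork();
            t2 = __rdtsc();
            if (t2 - t1 < best)
                best = t2 - t1;
        }
        printf("approx. cycles for 100 iterations: %I64u\n", best);
        return 0;
    }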
jj2007, dedndave (btw - where is Dave??), hutch (on a good day :P) and others may be able to explain better...
A few of the members here are quite adept at code optimization. When you do decide what you want to optimize, post it in the Laboratory, where other members can help you test on other platforms - hardware, OS version, etc. - or even give suggestions on areas that need improvement. :idea:
Awesome, thanks again. :icon_cool:
A word of wisdom here: only ever trust real time, not cycle counts. Make the test long enough and measure the duration. The last processor where cycle counts mattered was the i386; the 486 and later used pipelines, later again multiple pipelines, and the only thing that matters now is instruction scheduling.
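Along those lines, a minimal sketch in C of a real-time measurement with QueryPerformanceCounter; the loop body is just a stand-in for the routine under test, and the iteration count is whatever makes the run long enough:

    #include <stdio.h>
    #include <windows.h>

    int main(void)
    {
        LARGE_INTEGER freq, start, stop;
        volatile int x = 0;
        int i;

        QueryPerformanceFrequency(&freq);   /* ticks per second */
        QueryPerformanceCounter(&start);

        /* make the test long enough that timer resolution doesn't matter */
        for (i = 0; i < 100000000; i++)
            x += i;

        QueryPerformanceCounter(&stop);

        printf("elapsed: %.3f ms\n",
               (stop.QuadPart - start.QuadPart) * 1000.0 / freq.QuadPart);
        return 0;
    }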
Yeah, I was thinking for practical purposes to have a massive loop and measure the time taken to complete the loop, adjusting and re-testing.
BTW: Is there any quick way to determine the number of instructions executed per discrete thread, or a cycle / time-slice count? :biggrin:
Sorry if this is off topic....
PPS: I'd like to measure workload performance as a "pressure" value over 1000 instructions.
Perhaps someone has done something like this before, I'm hoping.
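I'm not aware of a quick way to get an exact instruction count without the CPU's hardware performance counters, but as a guess at what you might be after, Windows Vista and later can report the CPU cycles charged to a thread via QueryThreadCycleTime. A minimal sketch in C, with the busy loop standing in for your workload:

    #include <stdio.h>
    #include <windows.h>

    int main(void)
    {
        ULONG64 cycles = 0;
        volatile int x = 0;
        int i;

        /* some work on the current thread */
        for (i = 0; i < 10000000; i++)
            x += i;

        /* cycles the scheduler has charged to this thread so far (Vista+) */
        if (QueryThreadCycleTime(GetCurrentThread(), &cycles))
            printf("cycles charged to this thread: %I64u\n", cycles);
        return 0;
    }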