include \masm32\include\masm32rt.inc
.code
start:
mov ebx, rv(GetProcessHeap)     ; heap handle in ebx
invoke HeapAlloc, ebx, 0, 33    ; allocate 33 bytes
xchg eax, esi                   ; keep the returned pointer in esi
mov byte ptr [esi+33], "x"      ; oops!
invoke HeapValidate, ebx, 0, 0
print LastError$()
invoke HeapFree, ebx, 0, esi
print LastError$()
exit
end start
So far, so clear: 33 bytes allocated, 33 bytes used - or was it? With valid offsets running 0..32, [esi+33] is actually the 34th byte, one past the end of the block (anybody good at math? ;))
And, as we all expect, LastError$() prints "The operation completed successfully." twice. However, if you build this snippet with RichMasm, you'll get a different message:
## HEAP[DebugHeapTestMasm32.exe]:
## Heap block at 00583F08 modified at 00583F31 past requested size of 21
The operation completed successfully.
## HEAP[DebugHeapTestMasm32.exe]:
## Heap block at 00583F08 modified at 00583F31 past requested size of 21
## HEAP[DebugHeapTestMasm32.exe]:
## Invalid address specified to RtlFreeHeap( 00580000, 00583F10 )
The operation completed successfully.
** DebugHeap found errors **
From version 1 March 2016 onwards, RichMasm will debug console mode applications (this includes GUI apps built with subsystem console, so that's how you can test Windows applications).
This feature can be suppressed by inserting OPT_NoDebug anywhere in the code (likewise, OPT_Wait 0 means don't wait for a keypress). It is also skipped if RichMasm finds an int 3 somewhere in the source, as that invokes Olly instead.
My tests with timing code from The Laboratory showed no performance difference compared to non-debugged code.