Whenever I compile my program the compiler inserts int3 instructions (0xCC) in between my code. Sometimes it doesn't, though, so I'm a bit puzzled about why it does that and what int3 actually does. Could anyone explain to me why this instruction is added to my code?
Thanks in advance,
Jannes
Hi Jannes,
It is used for aligning the following procedure. You can use nop (db 90h) to do it as well.
What exactly do you mean by aligning the following procedure? Do you mean an instruction should always start at, say, an even address, so the byte is inserted to keep the procedure from starting on an odd address? Or something totally different? :/
Quote from: gelatine1 on October 27, 2014, 07:55:33 AMWhat exactly do you mean by aligning the following procedure? Do you mean an instruction should always start at, say, an even address, so the byte is inserted to keep the procedure from starting on an odd address?
Exactly. Compilers often try to align procedures to a 16-byte boundary, so that the first bytes of the procedure land at the start of an instruction-cache line / fetch block and the entry is cheaper to fetch and decode. The bytes used to pad up to that boundary are what you are seeing.
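Here is a minimal MASM-style sketch of what that padding looks like. The align directive is the assembler's way of requesting it; whether the gap gets filled with int 3 (0CCh), nop (090h) or multi-byte nops depends on the toolchain, so treat the filler mentioned in the comments as illustrative only:

.code

first proc
    mov eax, 1
    ret                 ; "first" ends partway through a 16-byte block...
first endp

align 16                ; ...so filler bytes are emitted here
                        ; (a compiler typically uses 0CCh or 090h)
second proc             ; "second" now starts on a 16-byte boundary
    mov eax, 2
    ret
second endp

end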
I thought the INT 3 instruction was a special one-byte opcode intended for calling the debug exception handler.
I use it to stop program flow while debugging.
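For example (just a rough sketch, and only useful while a debugger is attached; otherwise the unhandled breakpoint exception terminates the program or launches the just-in-time debugger):

    mov ecx, 12345
    int 3               ; execution stops here in the debugger (same byte as db 0CCh)
    inc ecx             ; you land here when you resume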
the intended purpose of INT 3 is to trap execution at a chosen point in the code stream
it is how debuggers implement software breakpoints (single-stepping uses the trap flag and INT 1 instead)
it is not a privileged instruction - under windows user-mode it simply raises a breakpoint exception (EXCEPTION_BREAKPOINT)
if no debugger is attached but you have one registered as the just-in-time debugger, that debugger will come up when the exception goes unhandled
it is the only single-byte INT instruction (every other INT n is encoded in two bytes, CD nn)
that is done so that it may replace any instruction in the code stream
for example, CLD is a single byte instruction
in order to replace it without clobbering the next instruction, INT 3 has to be only 1 byte in size
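to picture it, here is a hand-written sketch of the bytes (not actual tool output):

; before the breakpoint is set        after the debugger patches it
;   FC               cld                CC               int 3        ; only this one byte changed
;   B8 01 00 00 00   mov eax, 1         B8 01 00 00 00   mov eax, 1   ; next instruction untouched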
when you set a breakpoint in a debugger, it remembers (saves) the original byte at that address
then replaces it with 0CCh, INT 3
when the breakpoint is hit, the trap occurs and the debugger restores the saved byte (and backs EIP up over the 0CCh) so the original instruction can execute when you continue
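roughly what the debugger side looks like in MASM, assuming the usual windows.inc/kernel32 includes; hDebuggee and pBreakAddr are made-up names for the debuggee's process handle and the breakpoint address, and all error checking is left out:

.data
savedByte   db ?                ; original byte at the breakpoint address
int3Byte    db 0CCh             ; the INT 3 opcode
bytesMoved  dd ?

.code
    ; 1) save the original byte, then overwrite it with 0CCh
    invoke ReadProcessMemory,  hDebuggee, pBreakAddr, addr savedByte, 1, addr bytesMoved
    invoke WriteProcessMemory, hDebuggee, pBreakAddr, addr int3Byte,  1, addr bytesMoved
    invoke FlushInstructionCache, hDebuggee, pBreakAddr, 1

    ; ...the debuggee runs until it executes the INT 3 and the debugger
    ;    receives an EXCEPTION_BREAKPOINT debug event...

    ; 2) put the original byte back, move EIP back over the 0CCh,
    ;    then let the debuggee continue (re-arm the breakpoint if it should persist)
    invoke WriteProcessMemory, hDebuggee, pBreakAddr, addr savedByte, 1, addr bytesMoved
    invoke FlushInstructionCache, hDebuggee, pBreakAddr, 1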