Floating-point data is handled 'behind the scenes' by most (if not all) other programming languages; in comparisons, the values are generally treated as absolute quantities, the same way integer data is treated. This is where newcomers to assembly (or even to the subject itself) must be taught to think differently.

Hello sir Raymond,

Why?

As I see it, hardware engineers used binary to build the FPU: current passing through a wire or not, digital instead of analog. Integer numbers.

They did not reinvent the wheel; they probably took the REAL4 hardware and expanded it to REAL8, and so on.

Internally, since radix 2 is used, we can't express every radix-10 number exactly. An example is trying to reach 1.0 by summing fraction bits: 1/2 + 1/4 + 1/8 + 1/16 = 0.5 + 0.25 + 0.125 + 0.0625 = 0.9375. In practice, summing ever-smaller fraction bits this way never reaches 1.0, only 0.999...; using integers we reach 1 exactly. It's necessary to round up in this case, or use a lookup table.
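To make the fraction-bits idea concrete, here is a small sketch (my own illustration, not from the thread) that sums negative powers of two the way the FPU's fraction field does. Note one detail it also shows: 1.0 itself is stored exactly in IEEE 754, as 2**0 with an all-zero fraction, so the "never reaches 1.0" problem only applies to values built purely from fraction bits.

```python
# Sum the first four fraction bits: 1/2 + 1/4 + 1/8 + 1/16.
partial = 0.0
for k in range(1, 5):
    partial += 2.0 ** -k

print(partial)          # 0.9375 -- still short of 1.0

# 1.0 itself, however, is representable exactly (it is 2**0),
# so this comparison is safe:
print(1.0 == 0.5 + 0.5)  # True
```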

But again, even if it's necessary to think in terms of a fractional part, as I do now, the result will be stored in integer binary form: sign, exponent, mantissa; a form/encoding.
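The "stored in integer binary form" point can be checked directly. This sketch (my own, using Python's `struct` module) reinterprets a REAL4 value as a 32-bit integer and slices out the IEEE 754 sign, exponent, and fraction fields:

```python
import struct

def real4_fields(x):
    """Split a REAL4 (IEEE 754 single) into its sign, biased exponent,
    and fraction bit fields."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    sign     = bits >> 31           # 1 bit
    exponent = (bits >> 23) & 0xFF  # 8 bits, biased by 127
    fraction = bits & 0x7FFFFF      # 23 bits
    return sign, exponent, fraction

print(real4_fields(1.0))   # (0, 127, 0): +1.0 * 2**(127-127)
print(real4_fields(-0.5))  # (1, 126, 0): -1.0 * 2**(126-127)
```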

If I think in the integer way, I can have BYTE, WORD, DWORD, QWORD. Any of these values can be seen as a range of bytes, just as REAL8, ... can fit in REAL4 (with truncation).

Well, I understood arithmetic encoding. If I have a message with 99 zeros and 1 one, this message fits Claude Shannon's entropy formula:

H = -(99/100 * log2(99/100) + 1/100 * log2(1/100)) = 0.080793 bits.

So, to represent each symbol in that message (99 zeros and 1 one), 0.080793 bits are necessary on average.

The final message, as a whole, will have a size of 0.080793 * 100 = 8.0793 bits (fractional), or 9 bits as an integer. Even though the fractional part is below .5, normal rounding to 8 bits would lose data; 8.07 bits must be rounded up to 9 bits to represent the message without loss.
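Here is a quick check of that arithmetic (my own sketch; the probabilities 99/100 and 1/100 come from the message above):

```python
import math

# Shannon entropy of a message with 99 zeros and 1 one.
p0, p1 = 99 / 100, 1 / 100
H = -(p0 * math.log2(p0) + p1 * math.log2(p1))
print(round(H, 6))        # 0.080793 bits per symbol

total = H * 100           # ~8.0793 bits for the whole 100-symbol message
print(math.ceil(total))   # 9 -- always round UP, never down, to avoid loss
```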

This is the reason I'm asking: why? The final encoding/form will be stored in bits.