Small favour needed, test precision with sprintf.

Started by hutch--, September 05, 2018, 02:14:34 PM


nidud

#45
deleted

hutch--

 :biggrin:

If you are worried about the final digit count in a FLOAT or DOUBLE, you need a large number library. In the testing I have been doing, you run a range of calculations in DOUBLE (REAL8) and then reverse them to see how much difference there is (if any), and in most instances you end up with the same number. There are of course applications for very large numbers: physics, astronomy, and even locating a position on the planet in latitude and longitude if real accuracy is required, but for the vast majority of tasks DOUBLE is overkill. A 32-bit float is not all that useful for engineering and similar calculations, but with the speed of the floating-point units in modern hardware, graphics are viable in 32-bit floating point.

raymond

Quote: If you are worried about the final digit count in a FLOAT or DOUBLE, you need a large number library.

I beg to differ.
- If you are worried about the final digit count in a FLOAT (real4), you use a DOUBLE, which has significantly better accuracy than the FLOAT.
- If you are worried about the final digit count in a DOUBLE (real8), you use an extended DOUBLE, which has somewhat better accuracy than the DOUBLE.
- Only if you are worried about the final digit count in an extended DOUBLE (real10) would you need to use a large number library (or some other hardware which can process real16s).

As a scientist, I would really need to see concrete examples of applied sciences needing accuracy that requires bignum libraries. The only field I can imagine is theoretical mathematics, which may have very little application in real life.
Whenever you assume something, you risk being wrong half the time.
http://www.ray.masmcode.com

hutch--

Hmmmm,

Astronomy, precision navigation to exact locations, calculations related to travelling near the speed of light, some branches of physics, etc.: anything for which REAL10 does not have a high enough level of precision.

aw27

#49
When we talk about significant digits we are talking about precision, not accuracy. The IEEE standard defines four different precisions: single, double, single-extended, and double-extended.
As a quick distinction: precision is the number of digits a format can carry, while accuracy is how close a value is to the true one.

We also need high-precision floating-point libraries to escape accumulated errors when we do zillions of operations on a data set. Although we know the pyramids were built without any electronic calculator or even a slide rule, and that a Real4 is enough for playing a DirectX game, cutting-edge physics, astronomy and other fields will not make do with the precision levels of the IEEE standard.

nidud

#50
deleted

aw27

The 32-, 64- and 80-bit IEEE formats are built into the FPU, and it is better to conform to them than to invent our own. Beyond that, let's see; there is no need to rush out and pay $99.00 to read IEEE 754-2008.
I personally like the MPFR (*) library. In their own words, it copies the good ideas from ANSI/IEEE 754, namely correct rounding and exceptions. It has no fixed limits: you choose the precision you want.

* I have ported MPFR 3.15 to Windows (not to the Cygwin or MinGW crap) and it is on my website. The MPFR guys do not mention it because it is related to a dispute over, guess what:

The ZIP file containing MPIR and MPFR binaries that you are
distributing does not meet the license requirements for GPL/LGPL
software because it does not contain any license text and it does not
indicate where the source code for MPIR and MPFR and any necessary
build files can be obtained.

It is NOT sufficient to only update your web page.

I would be most grateful if you would either update the ZIP file to
provide these details or, if you are not able to do this, stop
distributing it.


Well, I actually mention it, but the guy has not seen it or did not want a spotlight turned on it. These GNU guys are like that.

Siekmanski

There is (as nidud has already noted) a relatively new 16-bit floating-point format.

sign bit: 1
exponent: 5 bits
mantissa: 10 bits

This format is used in several computer graphics environments (such as OpenGL and DirectX).
The advantage over the 32-bit single-precision format is that it requires half the storage and bandwidth (at the expense of precision and range).

The F16C instruction set:

There are variants that convert four half-precision values in an XMM register or eight in a YMM register.

The instructions are abbreviations for "vector convert packed half to packed single" and vice versa:
VCVTPH2PS xmmreg,xmmrm64 – convert four half-precision floating point values in memory or the bottom half of an XMM register to four single-precision floating-point values in an XMM register.
VCVTPH2PS ymmreg,xmmrm128 – convert eight half-precision floating point values in memory or an XMM register (the bottom half of a YMM register) to eight single-precision floating-point values in a YMM register.
VCVTPS2PH xmmrm64,xmmreg,imm8 – convert four single-precision floating point values in an XMM register to half-precision floating-point values in memory or the bottom half of an XMM register.
VCVTPS2PH xmmrm128,ymmreg,imm8 – convert eight single-precision floating point values in a YMM register to half-precision floating-point values in memory or an XMM register.

The 8-bit immediate argument to VCVTPS2PH selects the rounding mode. Values 0–4 select nearest, down, up, truncate, and the mode set in MXCSR.RC.

Support for these instructions is indicated by bit 29 of ECX after CPUID with EAX=1.

Quote from: AW on September 11, 2018, 05:29:26 AM
X=(ln 10)/(ln 2)=3.3219280948873623478703194294894

64/X = 19.26 => 19/20
52/X = 15.65 => 15/16
23/X = 6.92 => 6/7

Your X is in fact the reciprocal of Log10(2.0) -> 1.0 / 0.30102999566398119521373889472449

Simplified calculation,

X=0.30102999566398119521373889472449

64*X = 19.26
52*X = 15.65
23*X = 6.92
Creative coders use backward thinking techniques as a strategy.

raymond

Quote from: hutch-- on September 12, 2018, 03:27:27 PM
Hmmmm,

Astronomy, precision navigation to exact locations, calculations related to travelling near the speed of light, some branches of physics, etc.: anything for which REAL10 does not have a high enough level of precision.

- Precision navigation to exact locations
The circumference of the earth is generally considered to be 25000 miles. Assuming that such a figure is perfectly exact, it would be approximately equal to 40233600000 millimeters. This means that the location or the distance between any two points on the earth's surface can be computed to within 1 mm of precision with only 11 significant digits. What could be the need for higher precision?

One basic math principle is that the result of a computation cannot be more accurate than the least accurate component used in it. For example, if only the first 4 digits of the value used above for the circumference of the earth are accurate (i.e. the circumference is known only to within +/- 10 miles), any computation using it should not be reported with more than 4 significant digits; even if the result is obtained with a precision of 7 or more significant digits, only the first 4 may be accurate, and any additional ones would only distort the real accuracy of the result.

The speed of light is given in the literature with 9 significant digits (299792458 m/s) with a measurement uncertainty of 4 parts per billion. If, for example, you wanted to convert that constant to ft/s with the same accuracy, you must then use the ratio of feet/meter with a precision of 9 significant digits.

This precision/accuracy debate always reminds me of a detail from when I was working. A document from a U.S. association needed to be modified to include information in the metric system alongside that of the U.S. system. One item pertained to taking a quart sample of water for analysis. Believe it or not, this had been converted into a 0,95 liter sample!!!!

hutch--

Ray,

I think you are reading this the wrong way. Successive levels of precision have always had their place, but you will see over time that increased levels of precision keep being needed. Long ago, what we call integers were simple enough to count on ten fingers; later we had Roman numerals, later still Arabic numerals, and over time fractions. Converting all this to decimal is handy as it currently suits digital hardware, but the range limits start to be a problem in at least some tasks.

The example that Jose posted with the landing area on Mars is a clear case of the need for higher precision. Travelling at very high speed (near light speed) leaves room for a tiny percentage error in a calculation putting you light years out, and again it depends on the level of precision required. Then there are things done at the atomic level where you cannot afford to let accumulated errors creep in. You may not need that precision to calculate the interest on investments, but someone will.

A well-known programmer who was involved in the creation of Microsoft once said that you would never need more than 64k of memory. These types of predictions tend to die very quickly; my current work computer has 64 gig of memory and the next one will probably need much more.

Just as an example, the best angle-measuring tool I own divides degrees into minutes (a Brown & Sharpe vernier protractor), but in most instances a school kid's approximate degree protractor does the job; it purely depends on the task.

jj2007

NASA has a dedicated site: Space Math Problems Sorted by NASA Mission and Program. Maybe somebody can find an example there demonstrating that REAL10 is not enough.

aw27

I had a boss who used to claim that black-and-white Hercules graphics cards would remain the standard for professionals: colour monitors were distracting and only suitable for games. Fear of change and attachment to old habits and old things is a symptom of ageing. I can understand that.  :t

But to stay on this thread's topic, let's read David H. Bailey and see what he thinks.

raymond

Quote from: AW on September 13, 2018, 08:16:55 PM
I had a boss who used to claim that black-and-white Hercules graphics cards would remain the standard for professionals: colour monitors were distracting and only suitable for games. Fear of change and attachment to old habits and old things is a symptom of ageing. I can understand that.  :t

But to stay on this thread's topic, let's read David H. Bailey and see what he thinks.

Very interesting article. But most of the quoted examples do indeed require bignum libraries. Those examples also indicate that the requirement for greater precision applies primarily to a micro-niche of applications in advanced research; very little of it, if any, is related to common activities.

Most probably less than one in a million programmers would ever have a real need for such precision (and may also prefer using a HLL. :()


nidud

#58
deleted

aw27

REAL2 typedef WORD

Visual Studio uses:
typedef uint16_t HALF