Thanks Magnus,

Sometimes you can get a reasonable size reduction by creating pre-calculated lookup tables of indexed values, like the ones used in 256-color indexed pictures: instead of storing the larger 32/64/128-bit values directly, you store a byte or word index into the table.
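A minimal sketch of that idea (the function names and data are my own, just for illustration) — build a table of the distinct values and store one byte per entry:

```python
# Lookup-table (palette) packing: works when the data has at most
# 256 distinct values, so each one fits in a single byte index.
def pack_with_palette(values):
    palette = sorted(set(values))
    assert len(palette) <= 256, "too many distinct values for byte indices"
    index = {v: i for i, v in enumerate(palette)}
    return palette, bytes(index[v] for v in values)

def unpack_with_palette(palette, indices):
    return [palette[i] for i in indices]

# Six 32-bit floats become six bytes plus a 2-entry table:
data = [3.75, 0.5, 3.75, 0.5, 0.5, 3.75]
palette, packed = pack_with_palette(data)
assert unpack_with_palette(palette, packed) == data
```

The win depends entirely on how few distinct values the data actually has, which is the point about it depending on your data.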

But sometimes the fastest and smallest way is to calculate it on the fly in code without lookup tables.

It all depends on the algorithm you create and the type of data you need to work with.

Write smart equations, simplify them to the max, removing all variables you can do without.

Try finding repeatable patterns and index them.

It's always fun to find a smart way to reduce data that doesn't fit so well in standard packing algorithms.

A great candidate for massive size reduction is 3D object vector data.

Symmetric objects you can store as one half and mirror the other half when depacking.
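For example (a sketch, assuming the object is symmetric across the x = 0 plane and vertices are (x, y, z) tuples):

```python
# Store only the half with x > 0; rebuild the rest by flipping x.
# A real depacker would skip vertices lying exactly on the mirror
# plane to avoid duplicates -- omitted here for brevity.
def mirror_half(half_vertices):
    mirrored = [(-x, y, z) for (x, y, z) in half_vertices]
    return half_vertices + mirrored

half = [(1.0, 0.0, 0.0), (2.0, 1.0, 0.5)]
full = mirror_half(half)
assert len(full) == 2 * len(half)
```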

Some objects contain a lot of repeating 3D vector data: 8 tentacles can be saved as 1, and the other 7 can be translated into position when depacking, etc.
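The tentacle trick in sketch form (offsets and vertex layout are made up for the example):

```python
# Store one base part plus a translation per copy; rebuild all
# copies at depack time instead of storing 8x the vertex data.
def instance(base_vertices, offsets):
    parts = []
    for (ox, oy, oz) in offsets:
        parts.append([(x + ox, y + oy, z + oz) for (x, y, z) in base_vertices])
    return parts

tentacle = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)]          # stored once
offsets = [(float(i), 0.0, 0.0) for i in range(8)]     # 8 positions
all_tentacles = instance(tentacle, offsets)
assert len(all_tentacles) == 8
```

For copies that are rotated rather than just shifted, you would store a small rotation per instance instead, which is still far cheaper than the full vertex data.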

Never store normals, calculate them when depacking.
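Recomputing a face normal at depack time is just a cross product of two edges, normalized:

```python
import math

# Face normal from a triangle's three vertices: cross product of
# two edge vectors, scaled to unit length.
def face_normal(a, b, c):
    ux, uy, uz = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    vx, vy, vz = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
    nx, ny, nz = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)

# Triangle in the xy plane -> normal points along +z:
assert face_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)) == (0.0, 0.0, 1.0)
```

Smooth per-vertex normals can then be built by averaging the face normals around each vertex, so none of them ever need to be stored.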

Many object parts can be 100% calculated using trigonometric formulas and bent into shape with curve routines (exp, sin, cos combinations, etc.).
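As a sketch (radius, segment count, and the sine "bend" are arbitrary illustration values, not from any particular object):

```python
import math

# Generate a ring of vertices procedurally instead of storing them;
# the optional sine term bends the ring out of its plane.
def ring(segments, radius, bend=0.0):
    verts = []
    for i in range(segments):
        a = 2 * math.pi * i / segments
        verts.append((radius * math.cos(a),
                      radius * math.sin(a),
                      bend * math.sin(2 * a)))
    return verts

assert len(ring(16, 1.0)) == 16
```

Stack rings like this along an axis with a radius curve and you get tubes, vases, tentacles, and so on, for the cost of a few parameters.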

Another cool technique is subdivision: remove vertices for storage and rebuild them with a subdivision pass when depacking.
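A one-dimensional sketch of the idea on a line strip (real meshes need a proper subdivision scheme; this is only exact when the dropped vertices really were midpoints, otherwise it's lossy):

```python
# Keep every second vertex for storage; rebuild the dropped ones as
# midpoints of their neighbours when depacking.
def pack_strip(vertices):
    return vertices[::2]

def unpack_strip(kept):
    out = []
    for a, b in zip(kept, kept[1:]):
        out.append(a)
        out.append(tuple((p + q) / 2 for p, q in zip(a, b)))  # midpoint
    out.append(kept[-1])
    return out

strip = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0), (4.0, 4.0)]
assert unpack_strip(pack_strip(strip)) == strip
```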

Reducing bit depth without losing resolution:

For example, 8 bits per coordinate can be enough to represent a highly detailed vector object at a screen resolution of 10 bits.

If you divide the object's 3-dimensional space into 3 by 3 by 3 cubes, you can reduce a 32-bit float to an 8-bit value in each cube (by subtracting the cube's offset). When depacking, add each cube's xyz offset back to its coordinates to restore the original values, and store them as floats in memory.

You don't need index numbers to identify the 27 cubes, only the coordinate count per cube to reconstruct the data.

If you divide the 3D space of the object in even more 3D cubes you can reduce the 3D object size even more.
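A sketch of the cube quantization for one axis (the 3-cube grid matches the example above; the 0..1 coordinate range and rounding are my own assumptions):

```python
N = 3               # 3 cubes per axis, as in the 3x3x3 example
CUBE = 1.0 / N      # cube edge length for coordinates in [0, 1)

# Pack: find which cube a coordinate falls in, then store its
# position inside that cube as an 8-bit value (0..255).
def pack(coords):
    packed = []
    for v in coords:
        cube = min(int(v / CUBE), N - 1)
        local = (v - cube * CUBE) / CUBE
        packed.append((cube, int(local * 255)))
    return packed

# Unpack: add the cube's offset back to the 8-bit local value.
def unpack(packed):
    return [cube * CUBE + (q / 255) * CUBE for cube, q in packed]

coords = [0.1, 0.5, 0.9]
restored = unpack(pack(coords))
# Error is bounded by one 8-bit step inside a cube:
assert all(abs(a - b) <= CUBE / 255 + 1e-9 for a, b in zip(coords, restored))
```

Raising N shrinks the cubes, so the same 8 bits cover a smaller range and the reconstruction gets more precise — which is the point about dividing the space into even more cubes. For the full 3D case you do this per axis, and as noted you only need the coordinate count per cube, walked in a fixed cube order, to know which offsets to add back.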

Math rules!