Hi SiekManski

Calculating the average or the standard deviation is another way to do it, for when the colored source bitmap has no pixel with the same luma as the one in the gray image.

What I did in this previous version was search for the nearest value rather than calculate the average.

The data in Source A (the colored one) contains a table of 256 structures whose members hold pointers to the gray/luminance values found in it. The gray value is used as an index.

So, as I posted previously, in the table built from the colored image we have a pointer to every luminance value that exists in it, indexed by the gray value (which is nothing more than the luma range itself).

So, say we are trying to colorize a pixel in the gray image whose gray is 100, but the source image (the colored one) has no pixel that represents that same "gray". In that case I look for the nearest one: I scan the table (containing the structures) for a "gray" of 101, 102, 103, etc., and do the same backwards, looking for 99, 98, 97. Then I simply take the nearest value.

Say the colored image contains this sequence of "grays" (in our array of 256 structures): 96 97 98 103 105 108. Since we are looking for pixel 100 (in the gray image), I get the previous and the next "gray" that are actually present in the colored image; in this case, 98 and 103. Then I simply choose the one closer to 100, which here is 98, because 98 is closer to 100 than 103 is. I use their deltas rather than computing the average between them: Delta1 = 100-98 = 2, Delta2 = 103-100 = 3. The smaller delta (2) marks the value nearest to 100, so I use 98 to fill the gap.
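To make that concrete, here is a minimal sketch of the two-sided search in Python (the set of present grays and the function name are just my illustration; the real code walks the table of structures):

```python
def nearest_gray(present, target):
    """present: the set of gray values (0-255) that actually occur in the
    colored source image.  Returns the occurring value closest to target,
    mirroring the two-sided search described above."""
    if target in present:
        return target
    # Search upwards for the next occurring gray (101, 102, 103, ...).
    up = next((g for g in range(target + 1, 256) if g in present), None)
    # Search downwards for the previous occurring gray (99, 98, 97, ...).
    down = next((g for g in range(target - 1, -1, -1) if g in present), None)
    if up is None:
        return down
    if down is None:
        return up
    # Pick the smaller delta: 98 wins over 103 when the target is 100.
    return down if (target - down) <= (up - target) else up
```

With the sequence above, `nearest_gray({96, 97, 98, 103, 105, 108}, 100)` gives 98, as in the example.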

Of course, we could fill the gap simply by averaging them: (103+98)/2 = 100.5. But remember that we are using luminance/gray as an index into a table? Those indexes are integers, so we could never find the proper entry when the result is a fraction. That's why I'm choosing the nearest one in these preliminary tests.

You can't stretch the chroma randomly. If you do that, you will fall outside the hue range too. The best approach is always to look up the luminance and use the table (described below) to retrieve the chroma/hue back. If the luma doesn't exist, you can do as I said previously and find the nearest one, or estimate the average, or use the neighbors, etc. What you can't do is rely on chroma, or even on hue, to retrieve it back. You always need to use the luminance as the guide for the equations.

The best way to extend past the limitation of 256 colors is to calculate the luminance of the neighbouring pixels, similar to what we do when using a signature to identify images, for example. This way, we can retrieve texture as well. We only need a small matrix to do this: say, 3x3 of the gray pixels and 3x3 of the colored ones. Then we first compare them position by position, looking for a full match. Since the odds that one 3x3 matrix is exactly equal to another are very low, it is unlikely that an error is generated. You may think we would need a huge database of 3x3 samples, but probably not: if we succeed in fixing the CieLCH equations, we will limit the total amount of colors to search and, by consequence, the total amount of luminance combinations as well.
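A rough sketch of that 3x3 neighborhood comparison (assuming the samples from the colored image are kept as (patch, color) pairs; the exact-match-first, nearest-otherwise fallback is my own reading of the idea, not settled code):

```python
def best_patch_match(gray_patch, samples):
    """gray_patch: 3x3 tuple-of-tuples of gray values around the pixel to
    colorize.  samples: list of (patch, color) pairs taken from the colored
    source, where patch is the 3x3 luminance neighborhood.  Returns the color
    whose patch matches exactly, position by position, or failing that, the
    one with the smallest sum of absolute differences."""
    best_color, best_cost = None, None
    for patch, color in samples:
        if patch == gray_patch:          # full positional match
            return color
        cost = sum(abs(a - b)
                   for row_a, row_b in zip(patch, gray_patch)
                   for a, b in zip(row_a, row_b))
        if best_cost is None or cost < best_cost:
            best_color, best_cost = color, cost
    return best_color
```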

Of all those 16 million colors, probably more than half are perceptually similar to another one. I haven't estimated the amount of unique colors yet, because I'm struggling to fix those damn CieLab/CieLCH equations, but the goal is to force the algorithm (the RGBtoCieLCh function and its reverse) to be restricted to a table containing only

256 gray colors (each gray color corresponds to a unique luminance value)

256 chroma values (same as above; each chroma value seems to be limited to the same range as the luma, and the difference between the max and min is extremely low)

360 (or most likely 338) different hue angles

The hue angle is a particular problem. The original equation simply does not fit the results: there is a difference of 1.125 in the chroma/hue fraction. I fixed it as proposed, but the difference remains. Perhaps I'll need to multiply Y by 1.125 and divide X and Z by [2/(3-1.125)] to adjust it properly.

I haven't tested that yet. Whenever I fix one part, the difference appears in another. The problem is that X, Y and Z are attached to each other through the matrix multiplication of Red, Green and Blue with the tristimulus values from sRGB, for example. If I simply scale Y, I'll end up with an error, because X and Z also use the same Red, Green and Blue values, just multiplied by their own tristimulus coefficients.
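For reference, this is the coupling I mean, using the commonly published linear-sRGB to XYZ (D65) coefficients. Each of X, Y and Z mixes the same R, G, B channels, which is why scaling Y alone leaves X and Z inconsistent:

```python
# Standard linear-sRGB -> XYZ (D65) matrix.  Every output component is a
# weighted mix of the same three inputs, so Y cannot be scaled in isolation.
M = [
    [0.4124, 0.3576, 0.1805],   # X row
    [0.2126, 0.7152, 0.0722],   # Y row
    [0.0193, 0.1192, 0.9505],   # Z row
]

def rgb_to_xyz(r, g, b):
    """r, g, b are linear (already gamma-expanded) values in 0..1."""
    return tuple(row[0] * r + row[1] * g + row[2] * b for row in M)
```

For white, `rgb_to_xyz(1.0, 1.0, 1.0)` returns the D65 white point (about 0.9505, 1.0, 1.0890).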

I know I'm close to the solution, because the difference of 1.125 seems to be a fixed value, but I can't actually see where to fix it. The new paper I wrote is a complete mess, btw. I do the equations and write them down in the doc, but when I see an error, I note it on the same paper so I don't forget later what I was doing. :icon_mrgreen: :icon_mrgreen: :icon_mrgreen:

The good thing is that the properties of Luma, Hue and Chroma to be used in the backward computation are all there now :) I simply need to adjust the equation to fit the relation between the chroma and hue values.

What I did find about Chroma is that I can put it in the same table as the Luma range, ending up with something like this:

`Gray/Color   Luminosity (Min)        Chroma Min`
`0            0                       0`
`1            2.741066938704112e-1    26.0519460866180125`
`2            5.48213387740825841e-1  26.49074207984956658`

So, when a pixel has a luminosity in the range of 2.74e-1 to 5.48e-1, it means, in fact, that this pixel will necessarily generate a gray color whose value is "1" (which we can use as an index).

It also means that whenever a pixel falls in the range of gray "1" (no matter what original combination created that gray = 1), it will always have a chromaticity ranging from 26.05194 to a max of 26.49074207...
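If the Min column really does behave as a set of bin edges, mapping a luminosity back to its gray index is a simple sorted-array search. A sketch in Python (only the three rows shown above are listed; a full table would hold all 256 entries):

```python
import bisect

# Minimum luminosity per gray index, as in the table above.
LUM_MIN = [0.0, 2.741066938704112e-1, 5.48213387740825841e-1]

def gray_index(luminosity):
    """Return the gray bin whose [min, next min) range contains the value:
    the rightmost entry whose minimum is <= luminosity."""
    return bisect.bisect_right(LUM_MIN, luminosity) - 1
```

So `gray_index(0.3)` falls in the 2.74e-1..5.48e-1 band and yields 1, matching the example above.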

The problem with the "normal" way is that when they do the conversion they don't check those kinds of limits, and accept extrapolations in the backward computation (CieLCh to RGB). So it will inevitably clip the final values of R, G, B.

Say we are trying to decrease/increase the chromaticity of an image. In the "normal" way we can use whatever luminance we want with whatever chroma value, but doing that we will clip the result, generating a color that simply was not supposed to be there.

So, using a table, it is easier to keep the values restricted to their own boundaries.

If we are looking for luma = 5.6e-1 (or for gray = 2; it really doesn't matter, because they are the same thing: gray = luma range), all we need to do is search the table for the entry where the gray value is 2 (which corresponds to that range of 5.48...e-1 to xxx) and take the chroma from there.
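Under that scheme, keeping a requested chroma inside its gray bin's boundaries could look like this (a minimal sketch; the only filled-in entry uses the 26.05194..26.49074 range quoted above for gray 1, and everything else is my own illustration):

```python
# Per-gray chroma limits, (min, max).  Only gray 1 is filled in here; a
# full table would carry all 256 entries.
CHROMA_RANGE = {
    1: (26.05194608661801, 26.490742079849567),
}

def clamp_chroma(gray, chroma, chroma_range=CHROMA_RANGE):
    """Restrict a requested chroma to the [min, max] recorded for this gray
    bin, so the backward CieLCh -> RGB step never extrapolates and clips."""
    lo, hi = chroma_range[gray]
    return min(max(chroma, lo), hi)
```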

And for the hue, you may be able to use the one you inputted. What it seems is that the difference in chromaticity is not enough to make one color distinguishable from another; what does matter is the hue angle.

Now... all I need to know is whether there is a similar limit for hue as well. (It seems there is, but I simply can't find it using their own "normal" equations.)

The final result I hope we can get is something like this:

`Gray/Color   Luminosity (Min)        Chroma Min            Hue Min`
`0            0                       0`
`1            2.741066938704112e-1    26.0519460866180125   0`
`2            5.48213387740825841e-1  26.49074207984956658  35.1215º`

If the same limits also exist for hue (which seems to be the case), then the backward computation will be extremely easy: just pointers into tables.

What I'm finding in all of this is that the CIE concepts are incorrect. Luminosity is simply the frequency of a luminous waveform, and what they call "chroma" is, in fact, the intensity of the luminosity over a pixel, while hue is the actual "chroma". They claim that Luminosity is completely isolated from Chroma and Hue, but that's pure rubbish. Luma and "Chroma" seem to be faces of the same thing, one related to the wavelength and the other to its intensity. Hue is the actual chroma that makes our eyes identify different colors.