Adobe Math


The subject of Adobe’s quasi 15-bit math has been discussed in the Adobe forums before.   In addition, errors in the Lab mode histograms and in the JavaScript SolidColor object have been posted in the scripting forums and acknowledged by Adobe.   This discussion is simply a recap of some of these observations.   The core problems seem to be related to Lab mode numbers, but since this is the Adobe reference for color conversions it is important.

The fact that RGB-LAB-RGB conversions will alter colors has also been discussed.   I agree that this will happen if the image contains out of gamut colors in one or both of the color spaces.   And that 8-bit rounding can destroy any image.   But in 16-bit mode, with all image colors in gamut, any rounding errors should be minimal.   This does not seem to be the case.

Let me make it clear that the errors I have encountered are not related to floating point math or 15-bit integers.   Floating point rounding errors are on the order of the fourteenth decimal digit.   For numbers in the range of 0 to 255, 15-bit integer errors are on the order of 1/128 (0.0078125).   For color accuracy, both of these are insignificant.

Quasi 15-bit

Image files store color values as binary integer numbers.   We should all be familiar with the RGB decimal values, 0 to 255.   This is the range of integers available when stored as unsigned 8-bit values in binary computer encoding.   Note that the mid-point in this range is 127.5, not an integer.   If we use unsigned 16-bit encoding the range becomes 0 to 65535.   When signed the max value is 32767.   But 255 properly scaled to 16-bit is actually 32760.   Adobe uses a quasi 15-bit range of 0 to 32768 originally inherited from Aldus.   This provides an unambiguous mid-point of 16384 and allows some multiply operations to be performed by bit shifting.   These advantages are clearly debatable with modern personal computers.   So be it.   We can use any rules we want, as long as they are applied consistently.
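The bit-shift advantage mentioned above can be sketched in a few lines of Python.   The scaling and rounding rules here are assumptions for illustration, not Adobe's documented implementation.

```python
MAX15 = 32768  # Adobe's quasi 15-bit maximum (2**15)

def to_quasi15(v8):
    """Scale an 8-bit value (0-255) to the 0-32768 range (assumed rule)."""
    return round(v8 * MAX15 / 255)

def mul_quasi15(a, b):
    # Because the maximum is exactly 2**15, a normalized multiply can be
    # renormalized with a right shift of 15 bits instead of a division.
    return (a * b) >> 15

print(to_quasi15(255))            # 32768 (white)
print(to_quasi15(128))            # 16448; note 127.5 would map exactly to 16384
print(mul_quasi15(32768, 16384))  # 16384, i.e. 1.0 * 0.5 = 0.5
```

Note that with a true 16-bit maximum of 65535 the shift trick would not work, because 65535 is not a power of two.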

Lab mode values are different.   The L value has a decimal range of 0 to 100.   The ab values have a decimal range of –128 to +127.   This is in the Adobe documentation and the Photoshop Color Picker enforces it, just as 0-255 is enforced in RGB values.

In Adobe 15-bit mode the L value is scaled 0 to 32768.   The ab values are scaled –16384 to +16384.   With this scaling the mid-point is not at histogram level 128, but at 127.5, a non-integer.   Note that with true 16-bit scaling the range is –16384 to +16256 and the integer mid-point (zero) is at level 128.   Values above 16256 would be invalid.   So be it.   We can use any rules we want, as long as they are applied consistently.

If you use the Photoshop Color Picker to enter the Lab values 100, -128, +127 you will see 32768, –16384, and +16256 in the 16-bit Info panel.   These Color Picker values are correct in true 16-bit math but not in Adobe 15-bit math.   Houston, we have a problem.

While doing some color testing, I discovered some other simple cases that demonstrate the situation.

The Color Picker and Info Panel

With ProPhoto RGB as your default RGB color space and with an image open in ProPhoto RGB, if you enter the RGB values 238, 203, 0 in the Color Picker the corresponding Lab values will show as 88, 19, 127.   If you then paint with this color, or place the eyedropper over the foreground color the Info panel will show Lab values of 88, 19, 128.   The range of Lab ab values is –128 to +127.   The Color Picker will not allow you to enter an ab value of +128.   This +128 shows as 16384 in the 16-bit display, which is clearly above +127 in 16-bit math.   In this case the max value is ambiguous if not just plain wrong.

Figure 1

For another example, set the Info panel to display both RGB and Lab values in 16-bit.   If you enter the Lab values 100, 0, 0 the L channel shows as 32768 in the 16-bit display.   And the RGB values are all 255 at 32737.   But if you enter RGB 255 it shows as 32768 and the equivalent L value is 100 at 32768.   Clearly the math is not symmetric.

The Histogram

Open a small new image in Lab mode.   Set the histogram to the all channels view.   Then paint with Lab colors.   You will see that ab –1 and 0 are at levels 127 and 128 respectively.   Then use ab +1 and 0.   They are both at level 128.   Now we have an ambiguous mid-point.   An ab value of +127 will correspond to histogram level 254.   And an ab value of –128 will correspond to histogram level 0.   Fine for 16-bit math.

But this does not correlate with quasi 15-bit math.   In that case, +1 should be level 129 and +127 should be level 255.   The histogram shows the negative values scaled correctly but all positive values are incorrect.
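The consistent quasi 15-bit expectation can be checked against these observations with a short sketch.   The observed table below simply restates the levels reported above.

```python
def expected_level(ab):
    """Consistent quasi 15-bit mapping: one histogram level per ab step."""
    return ab + 128

# Levels reported in the text above (ab value -> observed histogram level):
observed = {-128: 0, -1: 127, 0: 128, 1: 128, 127: 254}

for ab in sorted(observed):
    exp, seen = expected_level(ab), observed[ab]
    note = "OK" if exp == seen else "off by %+d" % (seen - exp)
    print("ab %+4d: expected %3d, observed %3d  %s" % (ab, exp, seen, note))
```

Running this shows the negative values matching and every positive value one level low, exactly the inconsistency described.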

The SolidColor JavaScript Object

The JavaScript SolidColor object will accept and transform color values in RGB, Lab, HSB, or other selected color modes.   These transforms are based on your current Photoshop color preference settings.   However, the JavaScript SolidColor object is returning and expecting the wrong Lab values for RGB transformations.

The problem with the SolidColor object is similar to those described with the histogram.   The Lab values returned when the ColorModel is RGB are incorrect.   Similarly, when the ColorModel is Lab the input values need to be adjusted to get the correct RGB equivalents.

For example, neutral RGB values (all equal) should return a,b = 0,0.   Instead they return a,b = -0.5,-0.5.   And Lab input values 0,0 should yield neutral RGB values.   They don’t.

When transforming from Lab to RGB the Lab values first need to be adjusted by subtracting a factor based on the value's position within the range.   The factor works out to (value + 128)/256: for 0 it is 0.5 (128/256), for –128 it is 0.0 (0/256), and for +127 it is 0.9961 (255/256).   Likewise, when transforming from RGB to Lab the returned Lab values must also be adjusted.
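A minimal Python sketch of this adjustment, assuming the factor is (value + 128)/256 as the three quoted points suggest:

```python
def adjust_ab(ab):
    """Adjust a Lab a/b value before handing it to SolidColor (assumed fix)."""
    return ab - (ab + 128) / 256.0

print(adjust_ab(0))     # -0.5, matching the reported a,b = -0.5,-0.5 for neutrals
print(adjust_ab(-128))  # -128.0, no adjustment at the low end
print(adjust_ab(127))   # 126.00390625, i.e. 127 - 255/256
```

The same function, written in ExtendScript, would be applied to the `a` and `b` properties before assigning them to a SolidColor object's lab color.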

The detailed mechanics of these adjustments can be seen in several of my sample scripts.   One of these is the Color Calculator.

Photoshop RAW Images

Photoshop also has its own .RAW file extension.   This is not to be confused with any camera RAW files or formats.   If you save a file in this format it will be flattened and the pixel colors are written as a simple array of bytes.   Three bytes for each pixel in 8-bit mode and six bytes in 16-bit mode.   There is no metadata describing the image at all.

Be aware that in 16-bit mode, these are true 16-bit integers.   They are not Adobe quasi 15-bit integers.   This means that if you want to compare the values to the PS Info displays, they need to be converted to the 15-bit range.   This is not difficult as long as it is recognized.   But while testing a utility that uses this format, I opened a ProPhoto RGB mode RAW image in Photoshop.   All the color values were toasted, giving a weird colorcast throughout the image.   It appears that Photoshop cannot read its own image and preserve the color numbers.   But I will admit that I spent very little time researching this.
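A sketch of how such a comparison could be done in Python follows.   The byte order, the absence of a header, and the scaling formula are all assumptions; the .RAW format itself carries no metadata to confirm them.

```python
import struct

def read_raw_16bit(path, width, height):
    # Interleaved RGB, two bytes per channel, no header, big-endian:
    # all assumptions, since the format stores no metadata.
    with open(path, "rb") as f:
        data = f.read()
    count = width * height * 3
    return struct.unpack(">%dH" % count, data[:count * 2])

def true16_to_quasi15(v16):
    """Scale a true 16-bit value (0-65535) to Adobe's 0-32768 range."""
    return round(v16 * 32768 / 65535)

print(true16_to_quasi15(65535))  # 32768
print(true16_to_quasi15(0))      # 0
```

Width and height must be supplied from outside knowledge, which is exactly the point: the file itself cannot tell you.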

Unrelated: the Lab White Point

This is unrelated to 16-bit or quasi 15-bit math, but the question of what white point should be assumed for Lab mode was raised in one of the forum discussions.   I would like to address it here.   The latest GretagMacbeth Color Checker datasheet gives the target values for each of the 24 patches in both CIE Lab and ICC sRGB color modes.   All 24 patches are in gamut in sRGB.   The National Institute of Standards and Technology accredits the Gretag values.

Converting these Lab values to sRGB with Photoshop, the cyan patch is out of gamut.   The only way I could make it be in gamut was by switching to the Microsoft ICM as the conversion engine.   Even with this, there are still differences in the sRGB values across the board.   And these differences are much greater than simple rounding errors.

Others have noted this also and suggest that it is due to the adaptation of the Lab white point to that of sRGB (D65).   It was further suggested that the white point in Lab mode is D50.   This would be a problem with any such CAT algorithm since the Lab white point is actually Illuminant E.

This is obvious if you dissect the CIE math for color matching.   The XYZ values are the measured values times the light source values times the CIE standard observer values.   Each of these is a set of spectral measurements from approximately 400-700 nm (the visible spectrum).   The standard observer values are given at Illuminant E.   Thus the resulting XYZ values are at the Illuminant of the light source.   When these are converted to Lab values the white point of the source illuminant is removed, leaving the Lab values at Illuminant E.
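The argument can be illustrated with a toy calculation.   The five-sample spectra below are made-up placeholders, not real measurement data; the structure of the math is the point.

```python
# Five-sample stand-ins for 400-700 nm spectral data (hypothetical values).
reflectance = [0.2, 0.5, 0.8, 0.6, 0.3]  # measured surface reflectance
illuminant  = [0.9, 1.0, 1.1, 1.0, 0.9]  # light source spectral power
x_bar       = [0.1, 0.3, 0.3, 0.6, 0.2]  # observer function (placeholder)

def tristimulus(obs):
    # XYZ component = sum over wavelengths of reflectance * source * observer.
    return sum(r * s * o for r, s, o in zip(reflectance, illuminant, obs))

X = tristimulus(x_bar)

# The white point (XYZ of a perfect reflector under the same source)
# carries the identical illuminant term:
white_X = sum(s * o for s, o in zip(illuminant, x_bar))

# Dividing it out, as the Lab formulas do, cancels the light source,
# leaving the result referenced to Illuminant E.
print(X / white_X)
```

Since the same illuminant term appears in both the numerator and the white point, the ratio that feeds the Lab formulas no longer depends on the source illuminant.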

Of course that assumes that the software conforms to the CIE standards.   There are no CIE police.   If you assume Lab D50 when the values are at Illuminant E all RGB conversions will be wrong.   If you force D50 via an adaptive transform, everyone else’s conversions will be wrong.

Once again, it is not so important which rules you follow as long as they are followed consistently.   Most colorimeters and other color measurement devices follow the CIE model.


One might make an argument that the Info displays and SolidColor objects are the only errors.   But I have seen enough to convince me that the math is just plain wrong.   Since Lab is the Adobe reference space, these errors are being propagated throughout the color space conversions.   I see them in the scripting interface, in the histograms, and also in the Info panels.   It looks to me like there is a mixture of true 16-bit math and some quasi 15-bit math in the Adobe Lab routines.   Not consistent!   And just to set the record straight, none of this has anything to do with simple rounding errors.

The problem is not in the PS Info panels or the histogram.   These are only reporting the effects of the core error.   The core error is that Adobe is not consistent in how it converts from 15-bit values to 16-bit values in some places.   It seems to be most prevalent in Lab mode.   So conversions between Lab and RGB are compromised.   And there are places in Photoshop where true 16-bit or floating-point operations are required.   One of these is the JavaScript interface.   I am certain that there are others, even within pure Adobe internals.

I suspect that RGB to Lab conversions are expecting 15-bit input but returning true 16-bit values.   Then Lab to RGB conversions are again expecting 15-bit input but returning true 16-bit values.   There are so many variables in these conversions that it is extremely difficult to prove anything.   The errors are amplified in the Lab ab values, highlighting the problem.   Thus there are issues with the ab mid-point and the ab min/max in some situations.   And I observe that the Lab/RGB conversions seem to introduce errors in 16-bit mode greater than rounding alone can explain.   At least to me.

The problems appear in multiple modes and tasks.   So I believe they are in one common Adobe function.   Without access to the code, I could not even speculate where it is.   And I don't want access to the code.   It is up to Adobe to address it.   I am merely trying to document some of the verifiable observations.

I sincerely hope that Adobe is going to address this in CS3 if not sooner.

That said, I still believe it might be time for Adobe to drop the quasi 15-bit math.   The Leaf Aptus camera backs already capture true 16-bit data.   How long will it be before 16-bit cameras are affordable to more of us?   Some have suggested that these are not true 16-bit in the first place.   But both Leaf and Dalsa (the sensor manufacturer) insist they are.

But the most compelling reason is that every conversion from or to 15-bit math introduces unavoidable rounding errors.   If multiple conversions are required, the errors will quickly multiply.
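A quick Python sketch makes the loss concrete: 65536 possible 16-bit values must collapse into 32769 quasi 15-bit levels, so roughly half of them cannot survive a 16 to 15 to 16 round trip.   The rounding rule here is Python's default, not necessarily Adobe's.

```python
def to15(v16):
    return round(v16 * 32768 / 65535)

def to16(v15):
    return round(v15 * 65535 / 32768)

# Example: 16-bit value 1 maps to 15-bit 1, which maps back to 2.
print(to16(to15(1)))  # 2

# Count every 16-bit value that fails to survive the round trip.
changed = sum(1 for v in range(65536) if to16(to15(v)) != v)
print(changed)
```

No rounding rule can rescue this; it is a pigeonhole problem, and each additional round trip gives the errors another chance to compound.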

I hope you also gained some new insight from this article.   If you have any comments or suggestions, I would welcome your input.   Please send me an email.

Rags Gardner
Rags Int., Inc.
204 Trailwood Drive
Euless, TX 76039
(817) 267-2554
August 22, 2006
December 7, 2006

This page last updated on: Wednesday October 03 2007