

Simple alternative to LAB?



I'm writing some code to combine L with RGB and am inevitably looking at using the LAB colour space. I understand that LAB enables separation of luminance and chrominance, allowing the two types of information to be processed separately, e.g. stretching of luminance, saturation of chrominance. But I've read contradictory things about the lum/chrom separation in LAB. Here, for example, it is suggested that L and A are not independent: http://www.schursastrophotography.com/techniques/LRGBcc.html. My own experience to date is that the L overwhelms the colour data unless the RGB channels are themselves stretched prior to going into LAB space, potentially damaging the RGB ratios.

That led me to wonder about the following simple alternative to LAB.

1. Compute a synthetic luminance (I'm using the conventional weighted sum 0.2125*red  + 0.7154*green + 0.0721*blue).

2. Compute a luminance weight at each pixel, defined as the stretched L divided by the synthetic luminance.

3. Multiply each of R, G and B by the luminance weight.
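
In code, the three steps boil down to something like this (a rough NumPy sketch; the function and argument names are mine, and `eps` is just a guard against division by zero in empty background pixels):

```python
import numpy as np

def simple_lrgb(L_stretched, r, g, b, eps=1e-10):
    # 1. synthetic luminance from the linear RGB channels
    syn_lum = 0.2125 * r + 0.7154 * g + 0.0721 * b
    # 2. per-pixel luminance weight: stretched L over synthetic luminance
    weight = L_stretched / (syn_lum + eps)
    # 3. scale each channel by the weight; R:G:B ratios are unchanged
    return r * weight, g * weight, b * weight
```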

As I see it, the two key results are that

* the synthetic luminance of the new image is equal to the stretched luminance L

* colour ratios are preserved (except perhaps at some edge cases)

Surely this is what we want in an LRGB combination? Or perhaps I'm missing something.

Here's an example for a 200x200 pixel portion of the open cluster NGC 2169 (about 1 minute's worth of 10s subs for each of the LRGB captures with a Lodestar X2 mono guide camera at ~2.1 arcsec/pixel, hence the blockiness). All of the LRGB channels are gradient-subtracted and background-subtracted.

[image: screenshot comparing the four LRGB combinations described below]

Top left is the RGB combination only (i.e. no scaling via luminance -- no L at all in this one). Top right is LAB based on this unscaled RGB but with a power-stretched luminance; note the lack of colour saturation. Bottom left is the simple approach outlined here. Bottom right is LAB applied to the scaled RGB. These are all mildly stretched with a simple power stretch applied to the combined image (i.e. to all channels). I chose a stretch that generates a similar 'stellar depth' and amount of chrominance noise to aid comparison of the lower pair -- this is the comparison that I'm interested in. LAB is actually quite sensitive to the stretch, and to get a similar depth one ends up with quite 'hard-edged' stars.

I see somewhat more intense colours for the fainter stars in the simple approach without any commensurate additional saturation of the brighter stars. No doubt with appropriate processing the two lower panels could turn out to be quite similar. The biggest difference is perhaps in computation time: the simple approach is about 10 times faster.

I guess my question is: does anyone do the simple and obvious thing outlined above, or are there some pitfalls I haven't considered? Or does LAB have some advantages that cannot be obtained as easily with the simpler approach? (I'm aware of the AB saturation manipulation in LAB, for instance, and perhaps that would be harder to achieve, although the saturation is already good as-is with the simpler approach compared to the upper-right LAB panel, for example.)

Any thoughts or comments welcome!

Martin

 

 


It really depends on what you are trying to achieve.

The LAB color space has an orthonormal basis, which means that each component is independent of the others (much like regular 3D space - a change in one coordinate does not affect the other coordinates). The same holds for the RGB color space. In fact, all trichromatic color spaces are designed to use an orthonormal basis.

On the other hand, we can't completely separate the luminance and chrominance components of color - this is because of the way we perceive color. I'll go into a bit of detail, hope that is ok.

Take for example the RGB color space, and let's look at one property of a color space - how much a given "distance" in the color space affects our perception of the properties of a color, and in general how our perception of things shapes the way we see colors.

Let's look at some colors. Here are pure red, pure green and pure blue - meaning they have coordinates (1, 0, 0), (0, 1, 0) and (0, 0, 1) respectively. Each color is equally distant from black (0, 0, 0).

[image: pure red, green and blue swatches]

But take a look at what luminance each color represents:

[image: the same swatches shown as their luminances]

If you "switch" between observing first and second image, you can tell that these are in fact luminances of each of the colors, in fact, here is more elaborate diagram of that:

[image: a more elaborate diagram of the colors and their luminances]

So if we move a certain distance from pure black in the RGB color space we clearly get three separate "pure" colors, but we don't get the same luminosity. This goes to show that RGB is what we call a non-perceptually-uniform space - meaning that for the same total change in coordinates we get a different perceptual change, depending on the starting position and the direction of the move: we can perceive the same vector as a larger or smaller change.

The Lab color space is designed to be perceptually uniform (as far as possible) - which means that the same change in LAB coordinates should represent the same perceptual change in color (or as close as possible). The original CIELAB actually failed at this, but the 2000 revision of the color-difference formula (CIEDE2000) sorted out many of the issues. For further info see here:

https://en.wikipedia.org/wiki/Color_difference

Ok, now let's look at another thing that is important. Look at these two colors:

[image: two color swatches side by side]

These are definitely two different colors next to each other. The right one seems to have more green/blue, while the left one is almost grey. If you observe those two rectangles as distinct colors, without any "notion" of light intensity, your brain will see them as different colors. But if you observe them with lighting information taken into account - say, in the same image, as part of the same wall, one part of it in direct sunlight while the other part is in shade - your brain will interpret both as the same color under different luminance. This is very important to understand: the same color will look different depending on context!

To emphasize this point further - look at this famous optical illusion that exploits this fact:

[image: the checker-shadow optical illusion]

Look at the color of the cylinder (well, I did not get the hue quite right above, but close enough :D ) - but also notice the checkerboard tiles and their color. A and B clearly look like a "black" and a "white" tile, but they are in fact the exact same shade of grey. It is our brain that interprets the luminosity information, and we end up seeing them as different colors altogether.

Now, back to the original question and color processing of astronomical images. It depends on what you are trying to achieve. If you want to try the "scientific" route and get an accurate color representation in the image (yes, it can actually be done, within limits) - and since you will be programming your own processing, what I'm about to suggest is entirely plausible - you can do the following:

1. Take the R, G and B information (calibrated, stacked and wiped, but still linear data) and, based on sensor characteristics or star data, create a 3x3 transform matrix into the CIE XYZ color space. That is an absolute colorimetric space which can record any intensity of any color that the human eye can distinguish, and it is a good baseline to start working from.

The easiest way to obtain the transform would be to plate solve and get the stellar class for a bunch of stars in the image. The next step would be to compute XYZ values based on Planck's law for the radiation of a black body at the corresponding temperature (from the stellar class). A good source of matching functions in spreadsheet form can be found online, like this one:

http://www.brucelindbloom.com/index.html?SpectCalcSpreadsheets.html

Next you measure the R, G and B values for those stars (simple aperture photometry). This way you obtain pairs of vectors for computing the transformation, and a simple least-squares fit gives you the transform matrix. It is actually better not to think of the R, G and B from your sensor as colors at this stage - they are just spectral measurements that you will convert to XYZ values.
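
That least-squares step is only a couple of lines in, say, NumPy (a sketch; array names are mine, with one row per calibration star):

```python
import numpy as np

def fit_rgb_to_xyz(measured_rgb, reference_xyz):
    """measured_rgb: N x 3 aperture-photometry values; reference_xyz: N x 3
    XYZ values computed from stellar class.  Returns a 3x3 matrix M such
    that M @ [R, G, B] approximates [X, Y, Z]."""
    M, *_ = np.linalg.lstsq(measured_rgb, reference_xyz, rcond=None)
    return M.T
```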

Once you have XYZ values, you need to extract the ab (color) information from them. The first thing to do is match the intensity (not luminance) of each pixel: scale each XYZ triple by its sum, or convert to the CIE xyY color space. Discard the Y component at this point (or rather set it to unity) and convert to LAB (here you can opt for a choice of standard illuminant - like D50 or D65 - which will affect the color cast of the final result). Keep the AB part.

Take your luminance and stretch it. Once you are happy with it, add the AB from above to get the full LAB set. Convert to linear RGB and apply a gamma of 2.2 to convert to sRGB.

Voilà - you have accurate color (within both your camera's gamut and the sRGB gamut).
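
For concreteness, a rough sketch of those last two steps, assuming scikit-image for the color-space conversions (any library with XYZ↔Lab and Lab→sRGB transforms would do; array names are mine):

```python
import numpy as np
from skimage import color

def recombine(xyz, L_stretched):
    """xyz: H x W x 3 calibrated linear image; L_stretched: H x W luminance in [0, 1]."""
    # remove intensity: scale each pixel's XYZ by its sum (equivalent to xyY with Y set aside)
    s = xyz.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0
    lab = color.xyz2lab(xyz / s)          # D65 by default; D50 would shift the color cast
    lab[..., 0] = 100.0 * L_stretched     # substitute the stretched luminance for L
    return color.lab2rgb(lab)             # sRGB output (its own transfer curve, close to gamma 2.2)
```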

Hope the above makes sense - if you need clarification, please ask ...

 


Thanks for taking the time to comment Vlaiv.

Finding appropriate ways to perform calibration is on the agenda, ideally with some fallbacks in cases where plate solving is not available, so your suggested process is something I will take into account.

What I'm finding when it comes to colour in AP is that there are a lot of opinions, sometimes with a scientific veneer,  but just as things are getting interesting most discussions become mini-tutorials on how to apply techniques or fix colour issues using PS or PI. 

None of this alters the fact that there is a ground truth out there if instrument and sky calibration is carried out effectively i.e. in the steady state a star continues to emit at a range of wavelengths regardless of who is capturing those photons here on Earth. And it is that information that I am trying to get at and represent as directly as possible. Of course, in the end everything is a visualisation, but some visualisations are more closely related to the source than others. Although incomplete, I like the approach taken here for instance: http://www.vendian.org/mncharity/dir3/starcolor/details.html

What I'm aiming for is an approach which preserves the hue component (after appropriate calibration) and essentially gives the user control over saturation and luminosity stretching and  not too much else. Since this will be incorporated in a near-live observation application, I want to minimise the need for adjustments and automate as much as possible, and quickly. 

I also need to collect more LRGB examples to see whether the simpler approach holds up.

cheers

Martin

 


Well, if you want as accurate a representation as possible, then try to work through what I've outlined above - I think it is quite an accurate approach.

The approach outlined above is very similar to, if not the same as, the one described on the web page you linked.

I don't know how familiar you are with the whole color-space mumbo-jumbo, but it is not too hard to get the basics of it. I don't have any background in the subject and did my research for exactly the same purpose - how to represent colors in AP accurately - and I now think I have a pretty solid understanding of it.

You could try writing a simple piece of software just as an exercise - one that takes a star's temperature (or alternatively its stellar class / color index) and outputs the components in different color spaces - sRGB, LAB and the like.

Planck's law is straightforward, and you will find the XYZ color matching functions on the web page I linked to above. You need to "integrate" the flux from a black body of a particular temperature (there are a couple of places online where you can find stellar class / color index to temperature conversion formulae) times the X, Y and Z matching functions to get the X, Y and Z values for the CIEXYZ color space.
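
A sketch of that integration (names are mine; the wavelength grid and the x̄, ȳ, z̄ matching-function columns would come from the spreadsheets linked above):

```python
import numpy as np

H, C, KB = 6.62607e-34, 2.99792e8, 1.38065e-23  # Planck, speed of light, Boltzmann (SI)

def planck(wl_m, T):
    """Black-body spectral radiance at wavelength wl_m (metres), temperature T (kelvin)."""
    return (2 * H * C**2) / (wl_m**5 * np.expm1(H * C / (wl_m * KB * T)))

def star_xyz(T, wavelengths_nm, xbar, ybar, zbar):
    """Integrate the black-body spectrum against the CIE colour-matching functions."""
    spd = planck(wavelengths_nm * 1e-9, T)
    X = np.trapz(spd * xbar, wavelengths_nm)
    Y = np.trapz(spd * ybar, wavelengths_nm)
    Z = np.trapz(spd * zbar, wavelengths_nm)
    return np.array([X, Y, Z]) / Y          # normalise so Y = 1
```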

After that, it is straightforward - you can look up the transforms to other color spaces online (I think all of them are on Wikipedia).

We could even team up and turn it into a joint open-source project?


I'm implementing 3 or 4 different approaches to combining L with RGB (LAB, HSV, the simpleton approach and perhaps some basic layer technique), and the user will be able to simply select one and see the effect pretty much immediately. I'm not a big fan of HSV but will include it in early versions purely for experimental purposes.

I'd love to work on this together at some point but I need to get the basic colour stuff into the next release of Jocular first. It sounds like we have similar aims in terms of true colour.

I'm working my way through the acronym soup of colour spaces. This is a great site: http://www.handprint.com/HP/WCL/color7.html#CIELAB. It contains some interesting comments, such as how well CIELAB does considering its origin as a "hack", and how it doesn't do so well for yellow and blue (which is worrying given that these are found a lot in AP!). The take-home message for me is that colour spaces like LAB are by no means perfect nor necessarily appropriate, and it is likely to be possible to improve on them.

This is also a good resource on some recent ideas although I'm not sure whether the more complex models have much relevance in AP: http://rit-mcsl.org/fairchild//PDFs/AppearanceLec.pdf

Martin

 


I think it would be beneficial to think in terms of the steps that need to be done in order to color-compose an image.

1. Colorimetric calibration

2. Color transfer on luminance

3. Converting to sRGB color space for display.

Let's look at step two for a moment. Assume we have accurate color information for each pixel and we want to combine it with the stretched luminance. The first step is to remove any luminance information from the color image - or, to put it differently, to ask what color each pixel would have if the light it emitted had exactly the same brightness as every other pixel.

Once we know that - pure "color" information without lightness information - it is a simple step to apply that color information to the stretched luminance. Stretched luminance is just regular luminance in an altered "brightness" state - as if we had rearranged the objects in the sky to bring them closer and hence make them brighter in relation to other objects in the frame.

Anyway, to do all of that we need to work in a color space with a perceptually uniform separation of L and chromaticity. The first step is to alter the color so that L is uniform for each pixel (this is actually open to debate: I think the results will be better if we leave some graduation in L, as it affects saturation - when we hit the noise floor we don't want those noise pixels to be rich in color, so that is where L can taper slowly towards 0, but it should be 1 for most things in the image). Next we substitute that L with the stretched luminance, and we have the result. The only color spaces that let you do that are ones where L is kept separate - HSV, CIELAB or CIELUV.

You can implement all of those, do a comparison and see which one you like best - or maybe let users decide which color space to use?
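
As a rough illustration, the HSV variant of that substitution is only a few lines (a sketch assuming scikit-image; the Lab and Luv versions are analogous, just with different conversion functions):

```python
import numpy as np
from skimage import color

def hsv_substitute(rgb_color, L_stretched):
    """Keep hue and saturation from the colour image; replace V with stretched L."""
    hsv = color.rgb2hsv(np.clip(rgb_color, 0.0, 1.0))
    hsv[..., 2] = np.clip(L_stretched, 0.0, 1.0)   # V channel <- stretched luminance
    return color.hsv2rgb(hsv)
```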

But in order to have correct color information, the data needs to be properly calibrated, since different sensors have different responses across wavelengths. Here things are straightforward - CIEXYZ is the color space to use, and we need some sort of information on how to convert the recorded R, G and B channels into the CIEXYZ color space. Again, a couple of approaches are possible:

1. Rely on per-channel gain factors and the user's preference to do the color balance.

2. Rely on the user having a proper white flat panel - flat-calibrated frames will then be "two point" color balanced (black point and white point).

3. Rely on sensor QE graphs to compute an appropriate transform to CIEXYZ.

4. Rely on plate solving, or alternatively on the user selecting stellar classes (select a star, choose its stellar class, repeat dozens of times).

5. Maybe try to implement some statistical analysis of the recorded stars? Based on the abundance of star types in the sky, do photometry on a bunch of stars in the frame, try to work out which R, G and B values belong to each stellar class, and then compute the XYZ transform.

The last step is really easy - just the application of the proper transforms.


  • 5 weeks later...

Just wanted to let you know that I've done some experimenting, and Lab is no longer on my list of suitable color spaces for channel combination / color calibration.

I've found a very straightforward and accurate way to easily calibrate / color combine, with a few perks as well (keeping star color from saturating, for example).

It turns out that the "best" technique has been known for a long time - at least I've seen it implemented in Nebulosity 4, for example - except that there is a twist to it (maybe that is implemented there as well).

There is no need to use the XYZ color space, because linear RGB (regardless of the color gamut of the particular RGB space - this holds true for the linear part of sRGB as well) is in fact an affine transform of XYZ. This means that any scaling of vectors in XYZ (like boosting the Y component - which is luminance - and the other two accordingly to keep the color information) carries over correspondingly into linear RGB. We can treat a luminance stretch as a boost in luminosity - in fact it is precisely that.
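
In symbols (my notation, not from the original post): if $M$ is the fixed $3\times3$ matrix taking XYZ to linear RGB and $k$ is a per-pixel luminance boost, then

$$M\,(k\,\mathbf{c}_{\mathrm{XYZ}}) = k\,(M\,\mathbf{c}_{\mathrm{XYZ}}) = k\,\mathbf{c}_{\mathrm{RGB,lin}},$$

so a luminance stretch applied in XYZ passes straight through to linear RGB unchanged.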

Anyway, here is a fairly simple outline:

Prepare the stacks (for the color subs this means background wiping - the background needs to have zero mean value) and stretch the luminance as you see fit (DDP or whatever you use). Mind you, the luminance is going to be stretched a bit more later, so you will need to account for this.

This is due to the nature of the sRGB space and computer displays - linear RGB is transformed by the gamma transform, and that is the last step to be applied.

At this point you should do color calibration if you intend to do it (anything from simple per-channel scaling to a regular 3x3 transform matrix). Next we apply the RGB ratio: find max(R, G, B) and divide each of the R, G and B channels by this value. Take care that you (a) don't have negative values in R, G and B, and (b) don't have pixels that are (0, 0, 0), as that will lead to division by zero (the max of 0, 0, 0 is again 0).

You can set all values lower than the median background to that value, or, if you wiped the background, turn all values below 0 into 0 and add some very small offset to each pixel - like 1e-10 or so. This ensures that you have no negative values and no (0, 0, 0) pixels.

Now you multiply those R, G and B ratios by the stretched luminance. As a final step you apply the gamma transform for the sRGB space:

[attached image]
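
Put together, the whole recipe might look something like this (a sketch with my own function and variable names; `L_stretched` is the already-stretched luminance and the channel stacks are background-wiped):

```python
import numpy as np

def srgb_gamma(linear):
    """Standard sRGB transfer function, applied as the very last step."""
    linear = np.clip(linear, 0.0, 1.0)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * linear ** (1 / 2.4) - 0.055)

def rgb_ratio_combine(r, g, b, L_stretched, eps=1e-10):
    # clip negatives and add a tiny offset so no pixel is exactly (0, 0, 0)
    r, g, b = (np.clip(c, 0.0, None) + eps for c in (r, g, b))
    m = np.maximum(np.maximum(r, g), b)            # per-pixel max(R, G, B)
    ratios = np.stack([r / m, g / m, b / m], axis=-1)
    linear = ratios * L_stretched[..., None]       # scale ratios by stretched luminance
    return srgb_gamma(linear)
```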

 

 


Sounds interesting. Have you tried it on many examples yet? I ask because the method I mentioned in the original post works well on some examples and less well than LAB on others. LAB seems a little less sensitive to colour noise. Since my last (and only) session since the original post I've collected LRGB for 5 targets in case you want to try out your method...

Martin


12 minutes ago, Martin Meredith said:

Sounds interesting. Have you tried it on many examples yet? I ask because the method I mentioned in the original post works well on some examples and less well than LAB on others. LAB seems a little less sensitive to colour noise. Since my last (and only) session since the original post I've collected LRGB for 5 targets in case you want to try out your method...

Martin

It would be interesting to compare approaches. If you want, you can post linear stacks for those 5 cases and we can see the results with this method.

I can do both with and without color calibration. Color calibration at this stage is a bit more involved, so it will take some time to process all 5. This is because I still don't have any code written for automating that process - it is in "the pipeline" (any day now :D ). I need to manually check photometric data for each calibration star online.

 


Do we know what Photoshop does with the luminance channel when it is applied as a layer in blend mode Luminosity? Is this equivalent to its appearing as L in LAB colour? (The obvious and useful difference being that, as a Luminosity layer, its opacity is adjustable.)

Olly


13 minutes ago, ollypenrice said:

Do we know what Photoshop does with the luminance channel when it is applied as a layer in blend mode Luminosity? Is this equivalent to its appearing as L in LAB colour? (The obvious and useful difference being that, as a Luminosity layer, its opacity is adjustable.)

Olly

I have no idea :D - here is what Adobe help says on the matter:
 

Quote

 

Luminosity

Creates a result color with the hue and saturation of the base color and the luminance of the blend color. This mode creates the inverse effect of Color mode.

 

It does not, however, say which color space the transform is calculated in. Given that it mentions hue and saturation - probably HSV?

I don't work with PS - I use Gimp, and there, for example, there is no clear distinction between linear RGB and sRGB. You can select how the data is interpreted, but there is no way to transform between the two. So if I, for example, load linear data, I can't tell it to transform that into sRGB gamma-applied data.

We get linear data from our cameras, and if we want to display that image truthfully on a monitor we either need to convert it to sRGB or apply an ICC color management profile - but the software needs to know that the data is linear. I don't know whether it assumes that, or assumes sRGB by default (which would be wrong because the gamma has not been applied), for data that does not have a color space assigned.


On 06/06/2019 at 11:05, vlaiv said:

It would be interesting to compare approaches. If you want, you can post linear stacks for those 5 cases and we can see the results with this method.

I can do both with and without color calibration. Color calibration at this stage is a bit more involved, so it will take some time to process all 5. This is because I still don't have any code written for automating that process - it is in "the pipeline" (any day now :D ). I need to manually check photometric data for each calibration star online.

 

Here's a link to the open cluster NGC 6709. These are linear FITS files for the stacked L, R, G and B. If this is suitable/interesting I'll post the other objects.

https://www.dropbox.com/s/b82b5q7kra1n2b2/NGC6709.zip?dl=0

There isn't a great deal of easily usable colour info on this cluster online that I can find, although the brightest stars do have colour data, e.g. https://www.researchgate.net/publication/230942346_Multicolor_CCD_photometry_and_stellar_evolutionary_analysis_of_NGC_1907_NGC_1912_NGC_2383_NGC_2384_and_NGC_6709_using_synthetic_color-magnitude_diagrams

In LAB space, with the saturation turned down for subtle colours (and no colour balancing), it looks like this using live stacking of LRGB:

[image: NGC 6709, live-stacked LRGB combined in LAB]

When comparing against any colour data, bear in mind that this was taken at around 28 degrees elevation (I was looking for any OC in the south, and there are very few around at this time of year without staying up late ;-). There will be more, and higher, OCs in a month or two!

cheers

Martin

 

 

 


Here is the result of just the color transfer (no color calibration yet - I'm about to do that as well, just to see how much difference it makes):

[image: color-transfer result, mild stretch]

I did not stretch as aggressively as you did. Maybe I should do another round with a more aggressive stretch, for comparison purposes?

[image: color-transfer result, more aggressive stretch]

I would say that this type of color transfer produces more saturated stars? I'm certain that I prefer the less aggressively stretched version, though.

I'm going to have a go at the color calibration now - that might be a problem in such a dense star field (the accuracy of aperture photometry with so many close stars).


@Martin Meredith, here is result of color calibration:

[image: color-calibrated result]

There seems to be very little change, probably because the data was already pretty well balanced. This image is slightly cropped - just to be able to wipe the background better. I had a problem finding suitable stars, both to measure and in the catalog. I managed to get 10 of them, spaced pretty well in terms of temperature. There is a difference in red saturation, and a bit in blue as well (the hottest star I found is about 8200K - which is F type - whitish, very pale blue).

Here is calibration data set:

[image: calibration data set]

Here is calibration comparison (normalized R, G and B values - reference and measured after transform):

[image: calibration comparison of normalized reference vs measured R, G and B values]

The transform matrix (rather strange values, but it works :D ):

[image: transform matrix]

( calibrated red = r*0.345... +g*1577... - b*1.0172.... etc)

I guess this method of color calibration could be improved by taking SNR into account and also by using RANSAC to remove outliers (whether due to errors in the photometric measurements or errors in the B-V / star catalog values). More calibration stars would certainly improve things. Another possible course of action would be to include higher-order terms in the transform (e.g. quadratic).
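
A weighted fit of that sort is only a small change to the plain least-squares fit sketched earlier in the thread (a sketch; `weights` would be derived from the measurement SNR and catalog errors):

```python
import numpy as np

def weighted_fit_rgb_to_xyz(measured_rgb, reference_xyz, weights):
    """Rows with larger weight (better SNR, smaller catalog error) pull the
    3x3 transform harder; weights enter as sqrt(w) scaling of each row."""
    w = np.sqrt(np.asarray(weights, dtype=float))[:, None]
    M, *_ = np.linalg.lstsq(measured_rgb * w, reference_xyz * w, rcond=None)
    return M.T
```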

 


Nice work. Would it be possible for you to identify the stars you're using without too much work on your part?

BTW Did you need to take elevation into account in the end?

One big difference I note is that mine has a number of redder stars (I don't mean the artefacts at the top). For instance, the curved 'triple' of stars of diminishing magnitude just L of centre with the peach, dark peach and red stars is different. 

I think the example I sent was over-stretched (I noticed I'd left it on my most powerful stretch by mistake). I have a liking for dense star fields and the background stars are indeed signal so it is good to see them 🙂 I guess this is the closest I can get to what I personally prefer (using arcsinh stretch for L and power for RGB, and LAB combination). 

cheers

Martin

 

[image: NGC 6709 reprocessed with arcsinh L stretch and LAB combination]


2 hours ago, Martin Meredith said:

Nice work. Would it be possible for you to identify the stars you're using without too much work on your part?

BTW Did you need to take elevation into account in the end?

One big difference I note is that mine has a number of redder stars (I don't mean the artefacts at the top). For instance, the curved 'triple' of stars of diminishing magnitude just L of centre with the peach, dark peach and red stars is different. 

I think the example I sent was over-stretched (I noticed I'd left it on my most powerful stretch by mistake). I have a liking for dense star fields and the background stars are indeed signal so it is good to see them 🙂 I guess this is the closest I can get to what I personally prefer (using arcsinh stretch for L and power for RGB, and LAB combination). 

cheers

Martin

 

[image: NGC 6709 reprocessed with arcsinh L stretch and LAB combination]

You mean mark the calibration stars on the image? It should not be too much trouble - I'll see to it today when I get a bit of free time. I have the star names recorded, so it should not be too much trouble to look up their coordinates and do the marking.

I did not account for any atmospheric effects - the method should handle that by itself. It tries to create a transform so that whatever was recorded (taking into account the instrument and atmospheric response) gets mapped to the calculated linear RGB values. Currently there can be deviations from the exact values (listed in the comparison table) in a couple of cases (or a mix thereof):

1. Poor SNR in the photometric measurement / "polluted" data. I'm using simple aperture photometry in AstroImageJ: the star signal is summed over an inner aperture and the background is subtracted using an outer annulus (three diameters: the outer star diameter and the inner/outer "background" diameters). With dense star fields at low resolution it is quite easy to have background stars inside the background annulus, which skews the results.

2. There is also the question of the SNR of the reference star photometry. Good catalogs list the error of each magnitude measurement, but I did not take these into account. One way to do so would be to use a weighted least-squares fit, where the weights depend on both the measured data SNR and the reference data error.

3. The general problem of mapping the sensor / instrument response to a color space. Each camera has a different QE curve over wavelength, and the filters used have different characteristics - for example, OSC cameras are usually sensitive across the whole range of wavelengths in each color, while RGB filters tend to have very hard cut-offs. This means that each instrument has a different gamut that needs to be matched against human wavelength sensitivity and color perception. In most cases there will be some residual error on some colors because there is no 1:1 mapping. This error can be reduced further by using higher-order terms, but I'm still trying to figure out the best way to go about that - I'm afraid that a non-linear transform might actually increase the mapping error if I'm not careful. So far, I've concluded that the best way is probably to work with unit vectors when transforming. As it stands, I can apply the transform matrix to each pixel regardless of its intensity, because the transform is affine - it works the same for low- and high-intensity pixels. With higher-order terms the outcome would depend on pixel intensity (the square of 0.1 is very different from the square of 1.0), so the idea would be to calculate the transform for unit RGB vectors (I'm already doing that), but also to scale each pixel to unit length before the transform and scale it back by the same amount afterwards.

Will post image with marked stars later.

