
Astrophotography pop art - critique of the process


vlaiv


2 minutes ago, tomato said:

Just adds to the discussion…

It is a good presentation to help people understand that perception of color is one thing and the physics of color something else.

If you take the same color and put it in different contexts, people in one context will see it as brown and in another as orange - it is not the color that changes, it is something else.

This is not limited to brown - there are plenty of other cases where this happens, but again - that does not mean that we cannot measure it, match it and reproduce it.

Take for example this:

[Image: the checker shadow illusion]

You can clearly "see" that A and B are of different color, right?

Well, they are actually the same color:

[Image: proof that squares A and B are the same grey]

There are many things that impact how we perceive a particular stimulus. We get the same light hitting our eye - but it's our brain that decides what color we "see".

In astrophotography, I think we need to make sure the first part is true - that the light hitting our eye is the same "type" of light that was emitted by the celestial target. What color our brain makes out of it is not up to the astrophotographer, as he/she can't control viewing conditions.

 


14 minutes ago, vlaiv said:

It is a good presentation to help people understand that perception of color is one thing and the physics of color something else. [...] We get the same light hitting our eye - but it's our brain that decides what color we "see".

 

So how are the squares created? Differences in brightness?

Olly


21 hours ago, vlaiv said:

 

Would you feel the same if it were something else:

[image]

Thanks vlaiv, for creating this interesting thread.

When I’m processing astroimages I’m quite happy to play about with colour sliders. I find it fun, plus it’s a good way of creating an image that’s a bit different or otherwise interesting compared to the thousands of other photos out there of exactly the same object. Until this thread I hadn’t really thought about why I’m happy to change colours about, but wouldn’t do the same to other properties, as you did in this example 🤔


10 minutes ago, ollypenrice said:

So how are the squares created? Differences in brightness?

Olly

Not sure what you are asking, but I can explain why we see different colors when the physical color is the same - or rather, the principles of color appearance models, which deal with our perception.

When we talk about color in terms of physical quantities we use tristimulus values like RGB or XYZ or LMS and so on - these are coordinates in a particular color space and represent the physical response of our visual system. They do not describe the color we see (our perception, once our brain gets into the mix).

That is handled by three different quantities - Luminance, Hue and Saturation.

If we want to match perception rather than physical characteristics of the light, here is the process:

We record XYZ, the physical component of the light, but we also record the environmental conditions - type of illumination, level of illumination, the general color surrounding our color and things like that.

From all of these we derive the Luminance (or perceived brightness - not to be confused with a luminance filter, although they are somewhat related), Hue and Saturation of the particular color. When we want to reproduce that perceived color under different circumstances - say, when you are sitting in your room at your computer with a different type of illumination, a different background and so on - we do the reverse: we take Luminance, Hue and Saturation, add the new environment factors, and come up with a different set of XYZ values that we need to emit in order to induce the same perceived color.

This is how color appearance models work.

In the case above, we apply the "forward" transform of that CAM model twice.

We have the same XYZ stimulus, but a different environment - the environment here being the relationship to other parts of the image, with our brain interpreting some parts of the image as being in shadow and others as being in light. This creates two different sets of Luminance, Hue and Saturation values - in this case two different Luminance values, because the difference is in the "shadow / light" part.

Although the physics of light says we have the exact same light/color in both cases, our perception of it differs.

 

 


@vlaiv are you saying that I can:
*  gather data from any astro target under any typical astro viewing conditions with my astro camera
*  apply THE raw to XYZ transformation matrix for my astro camera
*  and I'll have a standardised XYZ colour rendition (that you wish the world would accept as such, a bit like we already do for other characteristics: magnitude for star brightness, light year distance, or that stars are round...)
*  and I can then apply a stock XYZ to RGB transform to view the image in an environment of my choosing knowing the colour rendition will be as intended
*  and I can derive the single RAW to XYZ transformation matrix for my camera as a one off exercise in normal terrestrial conditions?

If so I would be very tempted to enter the AP world... all the hours of manual tinkering with images that I see discussed on the forums is the main turn-off for me.

Do you have another thread somewhere that explains in more detail how to go about deriving the XYZ transformation matrix for a given camera... steps and suggested software etc?


4 minutes ago, globular said:

*  gather data from any astro target under any typical astro viewing conditions with my astro camera

Yes, of course

4 minutes ago, globular said:

*  apply THE raw to XYZ transformation matrix for my astro camera

Yes, except there is no THE raw to XYZ. There is a family of RAW to XYZ transforms that you can derive, and each one will be more suitable for a particular case. In principle, though, you can use a single RAW to XYZ transform if you accept a small error in some XYZ values - an error that depends on your camera, the chosen transform and the original spectrum of the light you are recording. In other words, some spectra will be recorded more precisely as XYZ and some less so.

7 minutes ago, globular said:

*  and I'll have a standardised XYZ colour rendition (that you wish the world would accept as such, a bit like we already do for other characteristics: magnitude for star brightness, light year distance, or that stars are round...)
*  and I can then apply a stock XYZ to RGB transform to view the image in an environment of my choosing knowing the colour rendition will be as intended

Yes, this part can be clearly defined - except for one step, and that is the luminance processing step. You can think of it as exposure control: how much of the dark stuff you want to make bright enough to be seen. It also incorporates any sharpening, denoising and all the other things we usually do.

You can also choose whether or not to perform perceptual adaptation in the end. I think that perceptual adaptation would give more pleasing results as it would do what people usually instinctively do - try to boost saturation.

12 minutes ago, globular said:

and I can derive the single RAW to XYZ transformation matrix for my camera as a one off exercise in normal terrestrial conditions?

Yes you can, and the only thing you need is a calibrated reference.

Your calibrated reference can be a DSLR camera, a computer screen that is properly calibrated (they usually are not, and you need a calibration device to perform proper calibration) or perhaps a smartphone screen, if that one is properly calibrated at the factory.

- if you have a DSLR camera, you need to set it to custom white balance and set that to neutral - like this:

[image]

(color balance shift is set to 0,0).

You also need to use Faithful mode on your camera (to avoid any in-camera processing - like vivid colors and similar "enhancements").

The next step is to take any display - calibrated or not - in a dark room, have it display a range of colors, and shoot it with both the DSLR and the astro camera you are attempting to calibrate.

With the DSLR raws you can use the dcraw software and ask it to extract XYZ data for you. You will have the raw data from your astro camera as is.

Then you need to measure pixel values (averaged over each color patch) for each color in both the XYZ image and your RAW image - that gives you pairs of XYZ and RAW vectors.

You can then find the matrix that transforms RAW into XYZ using the least squares method - this can be done in spreadsheet software.

- if you have a calibrated computer screen or calibrated smartphone screen, the procedure is very similar.

You take a set of colors and display them on your computer screen. For each of those colors you calculate the XYZ value from the RGB values - there is a well defined transform from sRGB to XYZ. Just make sure your computer screen / smartphone screen is calibrated for sRGB with a white point of D65 (sometimes calibration is done for D50, although the sRGB color space is in fact defined with D65).
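In case it helps anyone following along, here is a minimal sketch of that sRGB-to-XYZ step in Python/NumPy, using the standard sRGB (D65) matrix and transfer function; the patch list at the end is just a placeholder for whatever colours you actually display:

```python
import numpy as np

# Standard linear-sRGB (D65) to CIE XYZ matrix.
SRGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def srgb_to_xyz(rgb8):
    """Convert 8-bit sRGB patch values (N x 3) to XYZ reference values."""
    c = np.asarray(rgb8, dtype=float) / 255.0
    # Undo the sRGB transfer function ("gamma") to get linear RGB first.
    linear = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    return linear @ SRGB_TO_XYZ.T

# Hypothetical patch list - replace with the colours you display on the screen.
patches_rgb = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (128, 128, 128), (255, 128, 0)]
xyz_reference = srgb_to_xyz(patches_rgb)
```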

Here you are relying on your display device to render the XYZ values properly, instead of relying on the DSLR to read off the XYZ values from the displayed colors.

The rest is the same - record raw data with your camera and derive the transform matrix from the XYZ / RAW pairs of values for the list of colors.
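And a sketch of the least-squares fit itself (the maths is the same in a spreadsheet or in NumPy); `raw_means` is assumed to be the per-patch RAW averages you measured, paired with the XYZ reference values from the previous sketch:

```python
import numpy as np

def fit_raw_to_xyz(raw, xyz):
    """Least-squares 3x3 matrix M such that raw @ M.T is as close as possible to xyz.

    raw, xyz: (N, 3) arrays of per-patch averages, N >= 3 (ideally many more).
    """
    raw = np.asarray(raw, dtype=float)
    xyz = np.asarray(xyz, dtype=float)
    coeffs, residuals, rank, _ = np.linalg.lstsq(raw, xyz, rcond=None)
    return coeffs.T, residuals

# Usage sketch (raw_means measured from your astro camera's image of the patches):
# M, residuals = fit_raw_to_xyz(raw_means, xyz_reference)
# xyz_estimate = raw_pixels @ M.T
```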

 

Once you have your RAW -> XYZ transform, the imaging workflow would be like this:

1. Record raw data

2. Apply transform matrix to get XYZ values.

3. Make a copy of Y - that will be your luminance. If you are doing LRGB, use L here instead of the copy of Y.

4. Stretch that luminance so that you get a pleasing monochromatic image of your target; apply any noise reduction and sharpening.

5. If needed, apply the inverse sRGB gamma function to the luminance. This is needed because software displays the luminance as sRGB and you want it to be linear, as we will apply the sRGB gamma again later.

6. Take the original XYZ values and replace them as follows: Xnew = X * StretchedLum / Y, Ynew = StretchedLum, Znew = Z * StretchedLum / Y.

This step is, in essence, replacing each pixel value with an equivalent pixel where the light is the same, only amplified to match what you saw as luminance.
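A rough sketch of steps 4-6 (NumPy again), with a plain gamma stretch standing in for whatever luminance processing you actually prefer; `xyz` is assumed to be a (height, width, 3) array from step 2:

```python
import numpy as np

def srgb_inverse_gamma(v):
    """Step 5: inverse sRGB transfer function (display-referred -> linear)."""
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def apply_stretched_luminance(xyz, stretched_lum, eps=1e-12):
    """Step 6: keep each pixel's 'type' of light, rescale it to the stretched luminance."""
    X, Y, Z = xyz[..., 0], xyz[..., 1], xyz[..., 2]
    scale = stretched_lum / np.maximum(Y, eps)
    return np.stack([X * scale, stretched_lum, Z * scale], axis=-1)

# lum = xyz[..., 1].copy()                       # step 3
# stretched = (lum / lum.max()) ** (1 / 4.0)     # step 4: toy gamma-4 stretch
# stretched = srgb_inverse_gamma(stretched)      # step 5 (if the stretch was judged on an sRGB display)
# xyz_processed = apply_stretched_luminance(xyz, stretched)   # step 6
```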

This produces the XYZ of the processed image. Here you have a choice: you can convert it directly to sRGB for a finished image without perception transforms (by simply applying the XYZ->sRGB transform), or you can use, for example, the CAM16 color appearance model.

This would be done as follows:

Take XYZ values for each pixel and set CAM16 parameters as follows:

- illuminant to equal energy illuminant (consistent with emissive source)

- adapting luminance to 0 (we are in outer space and there is no surrounding light - alternatively set it to very low value like 5% or 10%)

- background luminance - same as above 0 or 5%-10%

- surround 0 (dark)

This will give you J, a, b values (for CAM16_UCS).

Then take the CAM16 parameters again and set them up as follows:

- illuminant to D65

- adapting luminance to 20%

- background luminance to 20%

- surround to 1.5 (dim to average)

Transform the J, a, b values back to new XYZ values.

In the end, use the standard XYZ to sRGB transform to finish the image.
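For reference, the "standard XYZ to sRGB" finishing step is just the usual matrix plus the sRGB transfer function - a sketch, assuming the image is scaled so that display white sits at Y = 1:

```python
import numpy as np

# Standard CIE XYZ to linear-sRGB (D65) matrix.
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def xyz_to_srgb8(xyz):
    """Convert XYZ to 8-bit sRGB, clipping anything out of range / out of gamut."""
    linear = np.clip(np.asarray(xyz, dtype=float) @ XYZ_TO_SRGB.T, 0.0, 1.0)
    encoded = np.where(linear <= 0.0031308,
                       12.92 * linear,
                       1.055 * linear ** (1 / 2.4) - 0.055)
    return (encoded * 255 + 0.5).astype(np.uint8)
```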


Thanks @vlaiv. On the face of it most of that looks automatable... if there is software with exposed APIs to supply values and perform the actions... surely these exist these days?

One issue springs to mind... solving for multiple points in 3-dimensional space to create what is, as I understand it, a linear transformation matrix will only give a very good fit if the behaviour of RAW and the behaviour of XYZ are fundamentally the same/similar.
In other words, using your temperature conversion analogy above, if you are converting from one linear scale to another linear scale (e.g. F to C) then all is well... but if you have C as one scale and, say, log(1/C) as another scale (a bit odd, but you never know) then a best-fit linear transformation is not going to be very good.
My guess is you have done this enough (or know enough about how RAW and XYZ spaces work) to be happy that a linear transform is workable?
In any case the regression technique used to solve for the transform will throw out enough info to calculate goodness of fit... so I guess I'll know myself when I try it.

And then there is my second worry (or may be it's the same one expressed differently)....

35 minutes ago, vlaiv said:

Yes, except there is no THE raw to XYZ. There is a family of RAW to XYZ transforms that you can derive, and each one will be more suitable for a particular case. In principle, though, you can use a single RAW to XYZ transform if you accept a small error in some XYZ values - an error that depends on your camera, the chosen transform and the original spectrum of the light you are recording. In other words, some spectra will be recorded more precisely as XYZ and some less so.

Your process above is to derive a transformation using colours on a calibrated screen or camera. Presumably this is a fairly restricted range of frequencies, and in particular will not include lots of the frequencies present in different astronomical targets. So how do you produce a family of transformations?

And if you do end up with a family of linear transformations... couldn't you replace them all with a single polynomial transformation, with degrees of freedom set at a level that gives a low / acceptable goodness of fit?

If this is doable then it would seem to help support the hypothesis that this is "fairly accurate". But if it's not doable, doesn't that mean it's maybe not accurate enough and/or not a consistent enough approach to become an international standard?


8 hours ago, globular said:

*  apply THE raw to XYZ transformation matrix for my astro camera
*  and I can derive the single RAW to XYZ transformation matrix for my camera as a one off exercise in normal terrestrial conditions?

Do you have another thread somewhere that explains in more detail how to go about deriving the XYZ transformation matrix for a given camera... steps and suggested software etc?

Therein lies the rub. There is no single accepted raw to XYZ transformation - even if we all used the exact same camera! (Hence the many different "modes" on cameras - see the presentation I linked to earlier.)

You can definitely attempt to create a calibration matrix (as included by all manufacturers), but even the construction of such a matrix is an ill-posed problem to begin with.

Also, you cannot really use a random reflective (like a Macbeth chart) or emissive (like a screen) calibration target either, without ensuring that its SPD is representative of the SPD under which you are expecting to record. As you know, we record objects with many different SPDs (every star has its own unique SPD). Sure, you can standardise on the SPD of our own sun (which is what G2V calibration does), but this is precisely one of those arbitrary steps. Others use the average SPD of all stars in an image, yet others use the SPD of a nearby galaxy.

Is all lost then?

No, not entirely! As @vlaiv alludes to, it is possible to create colour renditions that vary comparatively little and are replicable across many different setups, cameras and exposures (though not entirely in the way he suggests).

Of course, here too, arbitrary assumptions are made, but they are minimised. The key assumption is that, overall, 1. visual spectrum camera-space RGB response is 2. similar.

1. This assumes we operate in the visual spectrum - if your camera has an extended response (many OSCs do), add in a luminance (aka IR/UV cut) filter.

2. This assumes that red, green and blue filters record the same parts of the spectrum. Mercifully, this tends to be very close, no matter the manufacturer. One notable exception is that many consumer-oriented cameras have a bump in sensitivity in the red channel to be able to record violet, whereas many B filters (as used in mono CCDs) do not.

This, in essence, sidesteps the XYZ colour space conversion step entirely and uses a derivative (e.g. still white balanced!) of camera-space RGB directly for screen-space RGB. This is, in fact, what all AP-specific software does (e.g. PI, APP and ST).

All AP software dispenses with camera response correction matrices entirely, as they simply cannot be constructed for AP scenes without introducing arbitrary assumptions. As a matter of fact, StarTools is the only software that will let you apply your DSLR's manufacturer matrix (meant for terrestrial scenes and lighting) if you really wish, but its application is an entirely arbitrary step.

As @vlaiv alludes to as well, the tone curve ("stretch") we use in AP is entirely different (due to things being very faint), which drastically impacts colouring. Local detail / HDR enhancement ("local" stretching) adds to this issue and makes colour even harder to manage (which is one of the reasons why some end up with strange "orange" dust lanes in galaxies, for example - important brightness context that informs the psychovisual aspects of colouring is mangled/lost; see @powerlord's fantastic "Brown - color is weird" video).

Indeed, "the" solution is to set aside the colouring as soon as its calibration has been performed. You then go on to process and stretch the luminance component as needed. The reasoning is simple; objects in outer space should not magically change colour depending on how an earthling stretches the image.

Once you are done processing the luminance portion, you then composite the stretched image and the "unadulterated" colour information (in a colour space of your choosing) to arrive at a "best of both worlds" scenario.

Adhere to this and you can create renditions that have very good colour consistency across different targets, while showing a wealth of colour detail. Crucially, the end result will vary remarkably little between astrophotographers, no matter the conditions or setup/camera used. This approach is StarTools' claim to fame, and is the way it works by default (but it can obviously be bypassed / changed completely). With identical white references, you should be able to achieve results that are replicable by others, and that is - not coincidentally - very close to what scientific experiments (and by extension documentary photography) are about.


For anyone interested in the method outlined in my previous post, this is a test image (TIFF), graciously created and donated to the public by Mark Shelley, to explore colour retention and rendering.

It is a linear image, with added noise and added bias (to mimic, say, light pollution).

This is the TIFF stretched with a gamma of 4.0;

[Image: star colour test image, gamma 4.0 stretch]

This is the image once "light pollution" was modelled and subtracted (in the linear domain of course!), and then stretched with a gamma of 4.0;

[Image: star colour test image, background wiped, gamma 4.0 stretch]

And this is the image once its colours were white balanced independently of the luminance (in the linear domain of course; never "tweak" your colours in PS/GIMP/Affinity once your image is no longer linear - it makes zero sense from a signal processing PoV!) with a factor of 1.16x for red and 1.39x for blue vs 1.0x for green, and subsequently composited with the gamma 4.0 stretched luminance;

Notice how the faintest "stars" still have the same colouring perceptually (CIELAB space was used).

[Image: star colour test image, wiped, gamma 4.0 stretch, CIELAB colour compositing]

You can also, more naively, force R:G:B ratio retention, but this obviously forces perceptual brightness changes depending on colours (blue being notoriously dimmer);

[Image: star colour test image, linear R:G:B ratio retention]

In this rendition, the blue "stars" seem much less bright than the white stars, but the R:G:B ratios are much better respected (ratios are reasonably well preserved, even in the face of the added noise and the modelling and subtraction of a severe, unknown bias). It's a trade-off. But as always, knowing about these trade-offs allows you to make informed decisions.
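For what it's worth, one way to implement that naive R:G:B ratio retention is to scale each pixel's linear RGB so that its brightest channel follows the stretched luminance - a sketch only, and the function and array names are mine, not from any particular package:

```python
import numpy as np

def composite_keep_rgb_ratios(linear_rgb, stretched_lum, eps=1e-12):
    """Preserve each pixel's R:G:B ratios exactly, tying its peak channel to the stretched luminance.

    The cost is the perceptual effect described above: saturated blue pixels end up
    looking dimmer than white pixels given the same 'luminance'.
    """
    peak = np.max(linear_rgb, axis=-1, keepdims=True)
    return linear_rgb / np.maximum(peak, eps) * stretched_lum[..., None]
```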

Regardless, notice also how colouring in bright "stars" is resolved until no colour information is available due to over-exposure.

FWIW, any basic auto-balancing routine (for example a simple "grey world" implementation) should come close to the ~ 1.16:1.0:1.39 R:G:B colour balance.
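A grey-world balance really is only a few lines - a sketch, assuming the bias/background has already been removed and that green is kept as the reference channel:

```python
import numpy as np

def grey_world_factors(linear_rgb):
    """Per-channel multipliers that equalise the channel means, with green fixed at 1.0."""
    means = linear_rgb.reshape(-1, 3).mean(axis=0)
    return means[1] / means   # array of (R, G, B) factors; G is 1.0 by construction

# balanced = linear_rgb * grey_world_factors(linear_rgb)
```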

The benefits of this method should hopefully be clear; it doesn't matter what exposure time was used, how bright the objects are, or how sensitive your camera is in a particular channel - you should get very similar results. All that is required is that the spectral response of the individual channels is "ballpark" equal to that of other cameras. This tends to be the case - the whole point of the individual channel responses, for visual spectrum purposes, is to mimic the response of the human eye to begin with.

In essence, this method exploits a de-facto "standard" amongst all cameras; errors will fluctuate around the average of the specific spectral responses of all cameras. In my experience, however, those deviations tend to be remarkably small (which - again - is to be expected by design), taking into account the aforementioned caveats (filter response of mono CCD filters, the violet "bump", a proper IR/UV cut-off, etc.). All that remains is making sure your white balance/reference can be argued to be "reasonable" (there are many ways to do this, as mentioned before: sampling the average of foreground stars, a nearby galaxy, a G2V star, balancing by known juxtaposed processes like Ha vs OIII, etc.).


10 hours ago, globular said:

One issue springs to mind... solving for multiple points in 3-dimensional space to create what is, as I understand it, a linear transformation matrix will only give a very good fit if the behaviour of RAW and the behaviour of XYZ are fundamentally the same/similar.
In other words, using your temperature conversion analogy above, if you are converting from one linear scale to another linear scale (e.g. F to C) then all is well... but if you have C as one scale and, say, log(1/C) as another scale (a bit odd, but you never know) then a best-fit linear transformation is not going to be very good.
My guess is you have done this enough (or know enough about how RAW and XYZ spaces work) to be happy that a linear transform is workable?
In any case the regression technique used to solve for the transform will throw out enough info to calculate goodness of fit... so I guess I'll know myself when I try it.

A linear transform is well justified (we could even say required) by the nature of light.

We expect our sensor to behave linearly because light behaves linearly - if we amplify the light intensity, we expect the photon count to increase linearly (at a given wavelength), and when we mix two lights, their photon counts add up.

You want to preserve this linearity, and thus the transform has to be linear. We could of course get a better fit to the calibration points using higher order transforms, but that would break the linearity and so would not work well.

10 hours ago, globular said:

Your process above is to derive a transformation using colours on a calibrated screen or camera. Presumably this is a fairly restricted range of frequencies, and in particular will not include lots of the frequencies present in different astronomical targets. So how do you produce a family of transformations?

There are a couple of points to understand here.

The first is that light from most astronomical objects is rather limited in chromaticity. This is because the light comes from stars, and most stars behave like a black body at a certain temperature.

Globular clusters - stars

Open clusters - stars

Reflection nebulae - starlight (reflected parts of the same spectra as above)

Emission nebulae - a handful of emission lines (Ha, Hb, SII, NII, OIII and so on, with the predominant intensity from Ha/Hb and OIII).

[Image: CIE chromaticity diagram]

Above is a chromaticity diagram showing all colors with the luminance part removed (everything shown at full luminance and saturation). There is a small line in the middle of the diagram - that is the Planckian locus - and all star colors fall along that line (or in very close proximity to it).

Here is the same diagram with sRGB triangle:

[Image: chromaticity diagram with the sRGB triangle]

Any image encoded as JPEG or PNG without an explicit color profile is assumed to be an sRGB image - meaning it can show any color within that triangle. Don't be confused by the colors drawn outside the triangle - the actual colors out there are more saturated, but they cannot be shown in the image above, so in-gamut colors are used in their place.

In any case, only the colors of very dim stars can't be shown properly in an sRGB image.

On the other hand, no single spectral line can be shown properly in an sRGB image. That is something we must live with. Wider gamut displays will bring us closer to that capability when they become standard.

Any light source and its color can be represented as a dot in the diagram above. If you mix such lights, the resulting light lies somewhere inside the convex polygon created by the source dots. That is why we can create any color inside that triangle with a simple linear combination of the R, G and B primary colors of the sRGB color space (the vertices of the triangle). Similarly, most emission nebula colors will fall inside the triangle between the three primary emission lines at 500nm (OIII), 656nm (Ha) and 486nm (Hb).
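A quick numerical illustration of that mixing rule: light adds in XYZ, so the chromaticity of a mix always lands between the chromaticities of its components (the XYZ values below are made up purely for the example):

```python
import numpy as np

def chromaticity(xyz):
    """xy chromaticity coordinates of a colour given as XYZ."""
    xyz = np.asarray(xyz, dtype=float)
    return xyz[0] / xyz.sum(), xyz[1] / xyz.sum()

a = np.array([0.3, 0.4, 0.2])   # arbitrary example source 1
b = np.array([0.5, 0.3, 0.9])   # arbitrary example source 2
mix = a + b                     # light is additive, so the XYZ values simply add

print(chromaticity(a), chromaticity(b), chromaticity(mix))
# The mixture's xy point lies on the straight segment between the two source points.
```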

If you are calibrating against colors that can be shown on a computer screen, you are covering quite a bit of the astronomical spectrum - and you'll end up using those colors in your final image anyway, as you can't show any color outside the sRGB gamut (you could if you used a wide gamut image format and viewed the image on a wide gamut display).

Finally, there is something called deltaE - the distance between colors in a color space. If the color space is a uniform color space, then the numerical distance between coordinates corresponds to the perceived difference between the colors. It turns out there is a deltaE below which we simply can't see the difference between two colors - they look the same.

When we use star-based color calibration (often offered in astro software), we are in effect using only points along that Planckian locus to derive the transform matrix. Using colors that are more spread out will lead to better deltaE over more of the diagram above.
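For anyone who wants to put numbers on "deltaE", the simplest (CIE76) version is just the Euclidean distance in CIELAB - a sketch, with D65 assumed as the white point:

```python
import numpy as np

D65_WHITE = np.array([0.95047, 1.0, 1.08883])   # XYZ of D65 white, Y normalised to 1

def xyz_to_lab(xyz, white=D65_WHITE):
    """Convert XYZ to CIELAB relative to the given white point."""
    t = np.asarray(xyz, dtype=float) / white
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def delta_e_76(xyz1, xyz2):
    """CIE76 deltaE; differences of roughly 2 or less are hard to tell apart."""
    return np.linalg.norm(xyz_to_lab(xyz1) - xyz_to_lab(xyz2), axis=-1)
```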

11 hours ago, globular said:

If this is doable then it would seem to help support the hypothesis that this is "fairly accurate". But if it's not doable, doesn't that mean it's maybe not accurate enough and/or not a consistent enough approach to become an international standard?

Camera manufacturers have been using this technique for ages. They manage to achieve fairly good color reproduction across the vast landscape of colors we encounter in daily life.

Check out this for example: https://www.imatest.com/docs/colormatrix/

It shows how to use a color checker chart to create a custom CCM (color correction matrix) for given lighting conditions in daytime photography (you'll see the similarity with the approach above).

I would not recommend using a color checker passport, as that depends on the illumination type and you don't have an XYZ reference (except when calibrating against a DSLR - then you can use anything). For our application it is better to use an emissive source (a computer screen in a dark room), as thinking about white balance can confuse people (we need not think about white balance at this stage at all).

 


19 hours ago, Astro Noodles said:

One might ask whether there is a case to be made for removing the human element entirely, to stop people interfering in the process and ruining the colour etc.

I could (and have in the past) make a case for publishing the raw data at a minimum, together with zero or more interpretations of it.

That way, anyone interested enough in the subject can make their own interpretations and value judgements.


17 hours ago, Lee_P said:

compared to the thousands of other photos out there of exactly the same object.

That is an important point, IMO.

I do not really understand why so many people spend so much time and effort reproducing what many other people have produced thousands of times before.  Why not do something new? It is not as if there were a shortage of new things to do.

Clearly, it must be a failure of my imagination.


14 hours ago, vlaiv said:

Yes, of course. [...] Once you have your RAW -> XYZ transform, the imaging workflow would be like this: [...] In the end, use the standard XYZ to sRGB transform to finish the image.

I really like this explanation. Well done Sir!


14 hours ago, vlaiv said:

Then you need to measure pixel values (averaged over each color patch) for each color in both the XYZ image and your RAW image - that gives you pairs of XYZ and RAW vectors.

You can then find the matrix that transforms RAW into XYZ using the least squares method - this can be done in spreadsheet software.

If I keep these pairs, rather than solving for a best-fit linear transform matrix, can I not then feed them into the image processing workflow and get it to calculate a transform that is optimised for the astro image being taken?

Something like:  for each pixel (or region of pixels) find the pair that is closest; count how many times each pair is used across the image; and solve for a linear transform matrix using the weighted pairs.  i.e. calculate a transform matrix that fits best in the regions most important to the image being processed.

This then seems similar to the well established process of feeding in flats to inform the workflow about imperfections in your light path; this is feeding in information to inform the workflow about the colour profile of your camera.

Anything else you do (with generalised tools or manual slider movements) will have to rely on typical averages, on the assumption that most camera designs are similar, on the judgement or preferences of the user, etc. We could do that with imperfections... but we don't, because we know we'd get it wrong - maybe removing a very interesting feature.
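If anyone wants to try this, weighting is a small tweak to the least-squares fit shown earlier: scale each (RAW, XYZ) pair by the square root of its weight before solving. A sketch, with the choice of weights (pixel counts, brightness, ...) left open:

```python
import numpy as np

def fit_raw_to_xyz_weighted(raw, xyz, weights):
    """Weighted least-squares fit of a 3x3 RAW -> XYZ matrix.

    raw, xyz: (N, 3) calibration pairs; weights: (N,) non-negative importance values.
    """
    raw = np.asarray(raw, dtype=float)
    xyz = np.asarray(xyz, dtype=float)
    w = np.sqrt(np.asarray(weights, dtype=float))[:, None]
    coeffs, *_ = np.linalg.lstsq(raw * w, xyz * w, rcond=None)
    M = coeffs.T
    return M   # apply as: xyz_estimate = raw_pixels @ M.T
```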


4 minutes ago, globular said:

Something like:  for each pixel (or region of pixels) find the pair that is closest; count how many times each pair is used across the image; and solve for a linear transform matrix using the weighted pairs.  i.e. calculate a transform matrix that fits best in the regions most important to the image being processed.

Interesting idea - and yes, that would minimize the overall error over the image.

I'm not sure how much benefit it would bring, though. It needs to be analyzed in the context of deltaE. If you weight the pairs based on how frequently a particular RAW value is encountered in the image, you'll reduce deltaE for the majority of pixels, but raise deltaE for a few.

If you look at most astro images, the majority of the pixels are in fact background pixels that don't contain much color in the first place.

It seems you run the risk of increasing deltaE in the few more important pixels - like stars and objects - in order to lower deltaE for the majority of background pixels?

I guess a better approach would be to pick the calibration colors in such a way as to represent the majority of colors that will be encountered in the actual objects.

Usually this is used:

[Image: colour checker chart]

But even that one is interesting because of the orange / brown thing - the first two squares (top left) are actually very close in hue, although their brightness is different.

I would say the best approach would be to take this diagram:

[Image: sRGB chromaticity in the CIE 1931 diagram]

and sample points inside the triangle in some regular way, then add another set of colors from this:

[Image: black body colour range]

which is the range of colors from a black body at a given temperature.

By the way, there is a way to do all of this purely by calculation - deriving whatever RAW to XYZ transform matrix you choose just by computing it - provided you have the exact QE curves of your imaging system.

Instead of measured triplets, your inputs are then different spectra. In that case you are not limited to the sRGB set of colors for calibration, and you can even do a special type of calibration for emission-type targets, as you can take any combination of Ha/Hb/OIII into account.

The problem is getting exact QE curves for your setup - from what I've seen, published sensor QE curves and measured QE curves often differ significantly.
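A sketch of that purely-computational route, under the (big) assumption that you have trustworthy channel QE curves and the CIE 1931 colour matching functions sampled on a common wavelength grid - every array name below is a placeholder you would fill in yourself:

```python
import numpy as np

def responses(spectra, curves, wavelengths):
    """Integrate each spectrum against each response curve (camera QE channels or CIE CMFs).

    spectra: (N, W) spectral power distributions; curves: (3, W); wavelengths: (W,).
    Returns an (N, 3) array of channel responses.
    """
    return np.trapz(spectra[:, None, :] * curves[None, :, :], wavelengths, axis=-1)

# Placeholders: your measured QE curves, the CIE 1931 CMFs and a set of representative
# spectra (black bodies, Ha/Hb/OIII mixes, ...) all sampled on the same `wavelengths` grid.
# qe  = np.stack([qe_r, qe_g, qe_b])        # camera channel sensitivities
# cmf = np.stack([x_bar, y_bar, z_bar])     # CIE colour matching functions
# raw = responses(spectra, qe, wavelengths)
# xyz = responses(spectra, cmf, wavelengths)
# M = np.linalg.lstsq(raw, xyz, rcond=None)[0].T    # RAW -> XYZ matrix, as before
```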

 


38 minutes ago, Xilman said:

That is an important point, IMO.

I do not really understand why so many people spend so much time and effort reproducing what many other people have produced thousands of times before.  Why not do something new? It is not as if there were a shortage of new things to do.

Clearly, it must be a failure of my imagination.

Hmm, well, it's the nature of AP isn't it? We are not discoverers here. Even the best picture we capture isn't going to be as good as Hubble's. And no one is discovering a new nebula and posting their pics of it.

As has been said before - for me anyway, it's the challenge. As JFK said, we don't do it because it is easy, we do it because it is hard.

Tonight I will be shooting M27 and NGC7635. However much time I spend taking subs and processing, they ain't going to be as good as the best out there. But unlike running the fastest 100m, at least for me that is not the point - I'm not trying to make it the best ever.

For me, the point is the challenge of it all - taking a picture of a pinpoint-sized bit of the sky in my back garden and managing to make it into an awesome looking image I TOOK. It will be mine, not exactly the same as any other one. Maybe the colours are different, maybe the processing is different, maybe my framing, or whatever - but 'I DONE THAT', I think, sums it up.

I'm not really looking for approval by anyone. Nor do I require validation or evaluation against other pictures.

I doubt I'm alone there? The pleasure is in the journey, as they say.

It's no different than, say, cycling. You could ask - what is the point of cycling to get to town X? It's faster to drive, you can take more with you, etc. Well, the point is, I enjoy cycling. That's it. No more purpose required.

I enjoy everything involved in getting an image. If I am happy with the image, then I had a good 'journey' - whether I ended up with something that someone else likes or not is, frankly, of little interest to me - other than perhaps to others who feel the same way as I do.

My favourite shot I've taken so far is this one - I made it bright and 'tie-dye'. AND it is false colour. It's how I want it to look. So in that evaluation it is 100% correct. 😜

 

[image]


 

15 minutes ago, vlaiv said:

It seems you run the risk of increasing deltaE in the few more important pixels - like stars and objects - in order to lower deltaE for the majority of background pixels?

Rather than counting, perhaps using brightness as a weight would help reduce the dominance of the background and ensure the best colour rendition is in the brighter areas?

And as we only need to produce the calibration pairs once per camera (not each time we image, like we do for flats), we could have a large number of colour samples rather than just a handful. (But I agree that colours more likely to appear in astro images are more important if only a limited number of them are used.)


When I go on a sightseeing holiday I take lots of pictures.  In the future I look back on them and they remind me of the trip and provoke happy memories.

I do not come home and collate an album of stock images of all the sights I saw.  They would, no doubt, be better lit, better framed, better images of the sights, but they would not have any of me in them nor the memories of the experience of creating them. 


18 minutes ago, powerlord said:

Hmm, well, it's the nature of AP isn't it? We are not discoverers here. Even the best picture we capture isn't going to be as good as Hubble's. And no one is discovering a new nebula and posting their pics of it.

As has been said before - for me anyway, it's the challenge. As JFK said, we don't do it because it is easy, we do it because it is hard.

I'm not really looking for approval by anyone. Nor do I require validation or evaluation against other pictures.

...

It's no different than, say, cycling. You could ask - what is the point of cycling to get to town X? It's faster to drive, you can take more with you, etc. Well, the point is, I enjoy cycling. That's it. No more purpose required.

I am not arguing for discoveries, or for competing against Hubble. Far from it! Neither am I approving or disapproving of any pictures you wish to take.

When you go cycling (for pleasure, not transport) do you restrict yourself to only one or two standardized journeys or do you like to go exploring routes you have only rarely or never taken before?

What I am arguing for is for people to try taking images of objects which lie off the well-trodden tracks.

There are many, many objects in the sky which very few people image. Some of them are beautiful (in my eye anyway). Why not give some of them a try some time?  That's all.


What sort of colour balance standard is used in this image? Would it matter if the colour balancing was done differently? Are those stars really pink? What colours should it really have? Is the image in any way inferior because of the palette? Would others process it differently? Is it really a good representation, or should it be regarded as pop-art? 

[image]


7 minutes ago, Astro Noodles said:

What sort of colour balance standard is used in this image? Would it matter if the colour balancing was done differently? Are those stars really pink? What colours should it really have? Is the image in any way inferior because of the palette? Would others process it differently? Is it really a good representation, or should it be regarded as pop-art? 

That image is a very distinct case, as the color there has nothing to do with art.

Color here is used simply to represent a certain element. It is purely scientific stuff - Ha is mapped to green, OIII is mapped to blue and SII is mapped to red: SHO == RGB.

If you split the channels of that image, you'll get the SII, Ha and OIII images without any additional processing (they are stretched individually as mono images).

As such, it really does not belong in this discussion, as the color component here does not relate to color but to "chemistry".


1 minute ago, vlaiv said:

That image is a very distinct case, as the color there has nothing to do with art.

Color here is used simply to represent a certain element. It is purely scientific stuff - Ha is mapped to green, OIII is mapped to blue and SII is mapped to red: SHO == RGB.

If you split the channels of that image, you'll get the SII, Ha and OIII images without any additional processing (they are stretched individually as mono images).

As such, it really does not belong in this discussion, as the color component here does not relate to color but to "chemistry".

I would contend that, as you can find the image on t-shirts, mouse mats, coffee mugs etc., it is indeed pop-art. It was you who introduced the idea that images not processed to produce a standardised outcome might be considered pop-art.

You can't determine what is and what is not relevant to this thread by dismissing those who don't want to discuss the specifics that you do. 


6 minutes ago, Astro Noodles said:

I would contend that, as you can find the image on t-shirts, mouse mats, coffee mugs etc., it is indeed pop-art. It was you who introduced the idea that images not processed to produce a standardised outcome might be considered pop-art.

You can't determine what is and what is not relevant to this thread by dismissing those who don't want to discuss the specifics that you do. 

You are quite right - I can't determine what is and what is not relevant to this discussion, and I'm not going to try to force my opinion on people.

The idea behind this thread was to point out that people don't pay much attention to color management in astrophotography. The pop art side of things was just a side joke: I made a mosaic in the first post to demonstrate the vast range of colors produced from the same data (and thus reinforce the idea that color management is neglected), and it resembled pop art by Andy Warhol.

I'm not saying that one should use a "standardized" method of processing their images. What I am saying is that if one wants to faithfully reproduce the color of their subject (the faithfulness part is open to debate), there is a standard set of mathematical operations to follow, and I'd like people to be familiar with them.

Similarly, the image you posted followed a very standardized set of mathematical operations to produce the colors in the image - except those are not the "true / faithful" colors of that object, nor of the elements / wavelengths involved.

In that sense we can discuss the image above - it has a very standardized way of composing things, and yes, multiple people following that standard will produce similar colors as well. Where it fails to relate to my original objection is in the faithfulness of color.

 

