m36

alacant

Just now, alacant said:

Hi. Yes but only ever used levels. 

Thanks, will give it a go.

I'm not recommending that you do that :D - I was just showing that it can be done really easily. I don't like the effect that this sort of processing produces.


Hi,

Without singling anyone out, there may be a few misconceptions in this thread about coloring, histograms, background calibration, noise and clipping.

In case there are, for those interested, I will address some of these:

On colouring

As many already know, coloring in AP is a highly subjective thing. First off, B-V to Kelvin to RGB is fraught with arbitrary assumptions about white points, error margins, filter characteristics. Add to that atmospheric extinction, exposure choices, non-linear camera response, post processing choices (more on those later) and it becomes clear that coloring is... challenging. And that's without squabbling about aesthetics in the mix.

There is, however, one bright spot in all this. And that is that the objects out there 1. don't care how they are being recorded and processed - they will keep emitting exactly the same radiation signature, 2. often have siblings, twins or analogs in terms of chemical makeup, temperature or physical processes going on.

You can exploit points 1 and 2 for the purpose of your visual spectrum color renditions:

1. Recording the same radiation signature will yield the same R:G:B ratios in the linear domain (provided you don't over expose and your camera's response is linear throughout the dynamic range, of course). If I record 1:2:3 for R:G:B for one second, then I should/will record 2:4:6 if my exposure is two seconds instead (or is, for example, twice as bright). The ratios remain constant, just the multiplication factor changes. If I completely ignore the multiplication factor, I can make the radiation signature I'm recording exposure independent. Keeping this exposure-independent radiation signature separate from the desired luminance processing then allows the luminance portion - once finished to taste - to be colored consistently. Remember, objects out there don't magically change color (hue or saturation) depending on how a single human chose his/her exposure setting! (Note that even this coloring is subject to arbitrary decisions with regard to R:G:B ratio color retention. A short numerical sketch of this follows at the end of this section.)

2. Comparable objects (in terms of radiation signatures, processes, chemical makeup) should - it can be argued - look the same. For example, HII areas and the processes within them are fairly well understood; hot, short-lived O and B-class blue giants are often born here and ionise and blow away the gas around their birthplace. I.e. you're looking at blue stars, red Ha emissions, blue reflection nebulosity. Mix the red Ha emissions and blue reflection nebulosity and you get a purple/pink. Once you have processed a few different objects in this manner (or even compare against other people's images, taken with entirely different gear, that have been processed in this manner), you will start noticing very clear similarities in coloring. And you'd be right. HII areas/knots in nearby galaxies will have the exact same coloring and signatures as M42, M17, etc. That's because they're the same stuff, undergoing the same things. An important part of science is ensuring proof is repeatable for anyone who chooses to repeat it.

Or... you can just completely ignore preserving R:G:B ratios and radiation signature in this manner and 'naively' squash and stretch hue and saturation along with the luminance signal, as has been the case in legacy software for the past few decades. I certainly will not pass judgement on that choice (as long as it's an informed, conscious aesthetic choice and not the result of a 'habit' or dogma). If that's your thing - power to you! (in ST it's literally a single click if you're not on board with scientific color constancy)
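To make point 1 concrete, here is a minimal numpy sketch (the numbers are made up purely for illustration) showing that the R:G:B ratios survive a change of exposure while the absolute values do not:

import numpy as np

# Hypothetical linear camera readings (arbitrary units) for the same patch of sky
# at 1 s and at 2 s exposure; the values are made up for illustration only.
rgb_1s = np.array([1.0, 2.0, 3.0])
rgb_2s = 2.0 * rgb_1s                      # linear sensor: double the exposure, double the signal

# Dividing by the brightest channel throws away the exposure-dependent scale factor
ratios_1s = rgb_1s / rgb_1s.max()
ratios_2s = rgb_2s / rgb_2s.max()

print(ratios_1s)                           # approx [0.333 0.667 1.0]
print(ratios_2s)                           # identical: the colour signature is exposure independent
assert np.allclose(ratios_1s, ratios_2s)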

On background calibration and histograms

Having a discernible bell ('Gaussian') curve around a central background value is - in a perfectly 'empty', linear image of a single value/constant signal - caused by shot/Poissonian noise. Once it is stretched, this bell curve will become lopsided (the right side of the mode - the mode being the peak of the bell curve - will expand, while the left side will contract). This bell curve noise signature is not (should not be) visible any more in a finished image, especially if the signal to noise ratio was good (and thus the width/FWHM of the bell curve was small to begin with).

Background subtraction that is based around a filtered local minimum (e.g. determining a "darkest" local background value by looking at the minimum value in an area of the image) will often cause undershoot (i.e. subtracting more than a negative outlier can 'handle', resulting in a negative value). Undershooting is then often corrected by increasing the pedestal to accommodate the - previously - negative value. However, a scalar is often applied to such values first, before raising the pedestal value. I.e. the application of a scalar, rather than truncation, means that values are not clipped! The result is that outliers are still very much in the image, but occupy less dynamic range than they otherwise would, freeing up dynamic range for 'real' signal. This can manifest itself in a further 'squashing' of the area left of the mode, depending on subsequent stretching. Again, the rationale here is that values to the left of the mode are outliers (they are darker than the detected 'real' background, even after filtering and allowing for some deviation from the mode).
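As a toy illustration of the scalar-plus-pedestal idea (not taken from any particular program; the pixel values and the compression factor below are made up), consider:

import numpy as np

# Hypothetical linear pixel values around a background of ~0.10, including two dark outliers.
pixels = np.array([0.11, 0.10, 0.09, 0.12, 0.05, 0.10, 0.03])
background_estimate = 0.09                  # e.g. a filtered local minimum / 'darkest real background'

subtracted = pixels - background_estimate   # the two outliers undershoot (go negative)

# Naive fix: truncate. The outliers are clipped to 0 and their information is gone.
clipped = np.clip(subtracted, 0.0, None)

# Alternative: compress the undershoot with a scalar, then raise the pedestal just enough
# to make the most negative value land at 0. Nothing is clipped; dark outliers survive,
# but occupy less dynamic range than they otherwise would.
undershoot_scale = 0.25                     # arbitrary compression factor for values below 0
compressed = np.where(subtracted < 0, subtracted * undershoot_scale, subtracted)
pedestal = -compressed.min()
rescued = compressed + pedestal

print(clipped)   # outliers flattened to 0
print(rescued)   # outliers preserved, squeezed into a small range just above 0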

Finally, there is nothing prescribing a particular value for the background mode, other than personal taste, catering to poorly calibrated screens, catering to specific color profiles (sRGB specifies a linear response for the first few values, for example), or perhaps the desire to depict/incorporate natural atmospheric/earth-bound phenomena like Gegenschein or airglow. 

I hope this information helps someone, and I wish you all clear skies!

Edited by jager945

39 minutes ago, jager945 said:

Without singling anyone out, there may be a few misconceptions in this thread about coloring, histograms, background calibration, noise and clipping.

I will have to disagree with you on several points.

In fact even you are in disagreement with yourself on certain points, like this one:

40 minutes ago, jager945 said:

Add to that atmospheric extinction, exposure choices, non-linear camera response,

The important point being non-linear camera response - and then in the next segment you write:

41 minutes ago, jager945 said:

1. Recording the same radiation signature will yield the same R:G:B ratios in the linear domain (provided you don't over expose and your camera's response is linear throughout the dynamic range, of course). If I record 1:2:3 for R:G:B for one second, then I should/will record 2:4:6 if my exposure is two seconds instead (or is, for example, twice as bright). The ratios remain constant, just the multiplication factor changes.

Which is the definition of camera linearity - as long as, for the same source and different exposures, the recorded intensities are in the same ratio as the exposure lengths, the camera is linear.

In fact, most cameras these days are very linear in their response, or at least linear enough not to impact color that much.
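As a quick illustration of that definition, a minimal sketch - with simulated frames standing in for real calibrated subs of the same patch of sky - could look like this:

import numpy as np

# Simulated frames standing in for two real, calibrated subs of the same source;
# the photon rate (500 counts per second) is made up for illustration.
rng = np.random.default_rng(1)
frame_1s = rng.poisson(lam=500, size=10_000)     # hypothetical 1 s exposure
frame_2s = rng.poisson(lam=1000, size=10_000)    # hypothetical 2 s exposure of the same source

ratio = frame_2s.mean() / frame_1s.mean()
print(f"intensity ratio ~ {ratio:.3f} (exposure ratio = 2.0)")
# A ratio close to the exposure ratio, across the sensor's working range,
# is what 'linear enough' means in practice.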

46 minutes ago, jager945 said:

As many already know, coloring in AP is a highly subjective thing. First off, B-V to Kelvin to RGB is fraught with arbitrary assumptions about white points, error margins, filter characteristics. Add to that atmospheric extinction, exposure choices, non-linear camera response, post processing choices (more on those later) and it becomes clear that coloring is... challenging. And that's without squabbling about aesthetics in the mix.

This is a common misconception. Color theory, while having a certain tolerance/error, is well defined. There is a well-defined absolute color space - the XYZ color space - and any source out there will have a particular coordinate in XYZ color space, or more importantly an xy chromaticity - because, as you pointed out, the magnitude of the tristimulus vector depends on exposure length among other things.

Given a transform matrix between the camera's color space and the XYZ color space, one can transform raw values into XYZ color space with a certain error, which depends both on the chosen transform matrix and on the characteristics of the camera. The transform that produces the least amount of error is usually taken.

Once we have the xy chromaticity of a source, it is easy to transform that into the color space of the reproduction device. There will again be some errors involved - this time because display devices are not capable of displaying the whole XYZ color space, for example - their gamut is smaller. The choice of the sRGB color space for images that will be shared on the internet and viewed on computer screens is a very good one, as it is the de facto standard and the expected color space if no color profile is included with the image.
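A minimal numpy sketch of that pipeline, with a made-up camera-to-XYZ matrix (a real one has to be derived for the specific camera, and is the main source of error mentioned above) and the standard XYZ-to-linear-sRGB matrix, might look like this:

import numpy as np

# Hypothetical camera-to-XYZ matrix; the numbers below are made up for illustration.
RAW_TO_XYZ = np.array([
    [0.41, 0.36, 0.18],
    [0.21, 0.72, 0.07],
    [0.02, 0.12, 0.95],
])

# Standard XYZ -> linear sRGB (D65) matrix, as published for the sRGB specification.
XYZ_TO_LINEAR_SRGB = np.array([
    [ 3.2404542, -1.5371385, -0.4985314],
    [-0.9692660,  1.8760108,  0.0415560],
    [ 0.0556434, -0.2040259,  1.0572252],
])

raw_rgb = np.array([0.02, 0.05, 0.08])          # made-up linear, colour-balanced pixel

xyz = RAW_TO_XYZ @ raw_rgb
x, y = xyz[0] / xyz.sum(), xyz[1] / xyz.sum()   # xy chromaticity: independent of exposure scaling
linear_srgb = XYZ_TO_LINEAR_SRGB @ xyz          # may still need clipping to the sRGB gamut

print(f"xy chromaticity: ({x:.3f}, {y:.3f})")
print("linear sRGB:", np.round(linear_srgb, 4))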

All of the above means that if two people using different cameras shoot the same target and process color in the same proper way, they will get the same result, within the limits described above (just the camera transform matrix errors, since both will produce images in the same gamut space - sRGB - so that source of error will be the same).

Not only will such images look the same, but if you take the spectra of all the sources in the image and ask people to say how much the recorded image differs in color from those spectra (pixel for pixel), you will get the least total error compared to any other type of color processing - in other words, it will be the best possible match in color.


  Hi vlaiv,

I'm sad that the point I was trying to make was not clear enough. That point boils down to this: processing color information along with luminance (i.e. making linear data non-linear) destroys/mangles said color information in ways that make comparing images impossible.

There are ample examples on the web or on Astrobin of white, desaturated cores in long (but not over-exposed!) exposures of M42, whereas M42's core is really a teal green (due to dominant OIII emissions), a colour which is even visible to the naked eye in a big enough scope. I'm talking about effects like these:

[attached image: side-by-side comparison of two renditions of the M42 core]

Left is stretching color information along with luminance (as often still practiced); right is retaining the RGB color ratios. IMHO the color constancy is instrumental in relaying that the O-III emissions are continuing (and indeed the same) in the darker and brighter parts of the core, while the red serpentine/ribbon feature clearly stretches from the core into the rest of the complex.

Colouring in AP is a highly subjective thing, for a whole host of different reasons already mentioned, but chiefly because color perception is a highly subjective thing to begin with. My favourite example of this;

[attached image: coloured-cube optical illusion]

(it may surprise you to learn that the squares in the middle of each cube face are the same RGB colour)

I'm not looking for an argument. All I can do is wish you the best of luck on your personal journey in trying to understand the fundamentals of image and signal processing.

Wishing you all clear skies,

Edited by jager945

3 minutes ago, alacant said:

Ah, so my non-symmetrical histogram is what you'd expect in a finished image?

TIA

[attached screenshot: histogram dialog over the image]

 

That histogram looks quite "healthy" to me, if it is indeed of the image behind the dialog. The mode (peak of the bell curve) and the noise (and treatment of it) on both sides of the mode seem like a good example of what I was trying to explain.


2 minutes ago, alacant said:

Hi. It's the image posted here, but I'm told that the background is wrong. I honestly can't see it!

Cheers and clear skies

If it's the first image at the start of this thread, then the histogram does look a little weird in that there is very little to the right of the mode (it's mostly all contained in one bin when binned to 256 possible values). Is this what you're seeing too? (Could be some artifact of JPEG compression as well.)

Regardless, the background doesn't look weird to me; I can see plenty of undulation/noise/detail as a result of the right part of the "bell curve" containing plenty of Poissonian noise/detail if I zoom in. What is the main objection of those saying it is "wrong"?

[attached screenshot]


5 minutes ago, jager945 said:

If it's the first image at the start of this thread,

Hi. No, it's this one. To our untrained eyes it looks fine. We don't have high-end equipment; this was taken at 1200mm with an old Canon 450D. I'm not sure we can improve...

[attached image: the image in question]

[attached screenshot]


9 minutes ago, alacant said:

Hi. No, it's this one. To our untrained eyes it looks fine. We don't have high-end equipment; this was taken at 1200mm with an old Canon 450D. I'm not sure we can improve...

I'm sorry... I'm at a loss as to what would be wrong or weird about this image.  😕

Even the most modest equipment can do great things under good skies and competent acquisition techniques.


19 minutes ago, jager945 said:

I'm sorry... I'm at a loss as to what would be wrong or weird about this image.  😕

Even the most modest equipment can do great things under good skies and competent acquisition techniques.

That's not the background which sparked the conversation. It's much better, as several of us have agreed. The one in question is the first in the thread. When I open a screen grab in Ps S3 I see exactly the histogram I would expect to see from this image. It (the screen grab of the original) is obviously black clipped.

[attached screenshot: histogram showing black clipping]

How much of this is arising from the use of a screen grab I don't know but I would be surprised if the fact of using a grab accounted for the clipping illustrated above.

Olly


1 hour ago, jager945 said:

I'm sad that the point I was trying to make was not clear enough. That point boils down to this: processing color information along with luminance (i.e. making linear data non-linear) destroys/mangles said color information in ways that make comparing images impossible.

It really depends on how you process your color image. What do you think about the following approach:

ratio_r = r / max(r, g, b)
ratio_g = g / max(r, g, b)
ratio_b = b / max(r, g, b)

final_r = gamma(inverse_gamma(stretched_luminance)*ratio_r)
final_g = gamma(inverse_gamma(stretched_luminance)*ratio_g)
final_b = gamma(inverse_gamma(stretched_luminance)*ratio_b)

where r,g,b are color balanced - or (r,g,b) = (raw_r, raw_g, raw_b) * raw_to_xyz_matrix * xyz_to_linear_srgb_matrix

This approach keeps the proper RGB ratios from the linear phase regardless of how much you blow out the luminance during processing, so there is no color bleed. It does sacrifice the intended light intensity distribution, but we are already using non-linear transforms on intensity, so it won't matter much.
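A minimal runnable sketch of the recipe above - assuming the gamma in question is the standard sRGB transfer curve (the pseudocode doesn't specify which gamma is meant) and using made-up pixel values - might look like this in Python:

import numpy as np

def srgb_gamma(x):
    # Standard sRGB transfer function (linear -> display-referred)
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1 / 2.4) - 0.055)

def srgb_inverse_gamma(x):
    # Inverse of the sRGB transfer function (display-referred -> linear)
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.04045, x / 12.92, ((x + 0.055) / 1.055) ** 2.4)

# Toy inputs: colour-balanced linear r, g, b and an already-stretched luminance,
# all per pixel in the range 0..1 (values here are made up for illustration).
r, g, b = 0.02, 0.05, 0.08
stretched_luminance = 0.7

peak = max(r, g, b)
ratio_r, ratio_g, ratio_b = r / peak, g / peak, b / peak

lum_linear = srgb_inverse_gamma(stretched_luminance)
final_r = srgb_gamma(lum_linear * ratio_r)
final_g = srgb_gamma(lum_linear * ratio_g)
final_b = srgb_gamma(lum_linear * ratio_b)

print(final_r, final_g, final_b)   # hue/saturation set by the linear ratios,
                                   # brightness set by the stretched luminance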

2 hours ago, jager945 said:

Colouring in AP is a highly subjective thing, for a whole host of different reasons already mentioned, but chiefly because color perception is a highly subjective thing to begin with. My favourite example of this;

Again - it is not a subjective thing unless you make it so. I agree about perception. Take a photograph of anything printed on paper, view it under yellow light, and complain about how color is subjective - it is not a fault in the photograph; it contains the proper color information (within the gamut of the medium used to display the image). I also noticed that you mentioned monitor calibration in your first post as something relevant to this topic - it is irrelevant to proper color calibration. The image will contain the proper information, and on a proper display medium it will show the intended color. The image can't be held responsible for your decision to view it on the wrong display device.

The spectrum of light is a physical thing - it is absolute and not left to interpretation. We are, in a sense, measuring this physical quantity and trying to reproduce it. We are not actually reproducing the spectrum with our displays but a tristimulus value, since our vision system will give the same response to different spectra as long as they stimulate the receptors in our eyes in equal measure. This is a physical process, and that is what we are "capturing" here. What comes after that, and how our brain interprets things, is outside of this realm.


1 hour ago, vlaiv said:

It really depends on how you process your color image. What do you think about the following approach:

ratio_r = r / max(r, g, b)
ratio_g = g / max(r, g, b)
ratio_b = b / max(r, g, b)

final_r = gamma(inverse_gamma(stretched_luminance)*ratio_r)
final_g = gamma(inverse_gamma(stretched_luminance)*ratio_g)
final_b = gamma(inverse_gamma(stretched_luminance)*ratio_b)

where r,g,b are color balanced - or (r,g,b) = (raw_r, raw_g, raw_b) * raw_to_xyz_matrix * xyz_to_linear_srgb_matrix

This approach keeps the proper RGB ratios from the linear phase regardless of how much you blow out the luminance during processing, so there is no color bleed. It does sacrifice the intended light intensity distribution, but we are already using non-linear transforms on intensity, so it won't matter much.

Again - it is not a subjective thing unless you make it so. I agree about perception. Take a photograph of anything printed on paper, view it under yellow light, and complain about how color is subjective - it is not a fault in the photograph; it contains the proper color information (within the gamut of the medium used to display the image). I also noticed that you mentioned monitor calibration in your first post as something relevant to this topic - it is irrelevant to proper color calibration. The image will contain the proper information, and on a proper display medium it will show the intended color. The image can't be held responsible for your decision to view it on the wrong display device.

The spectrum of light is a physical thing - it is absolute and not left to interpretation. We are, in a sense, measuring this physical quantity and trying to reproduce it. We are not actually reproducing the spectrum with our displays but a tristimulus value, since our vision system will give the same response to different spectra as long as they stimulate the receptors in our eyes in equal measure. This is a physical process, and that is what we are "capturing" here. What comes after that, and how our brain interprets things, is outside of this realm.

Yup, that looks like a simple way to process luminance and color separately.

With regard to the colouring, you're almost getting it; you will notice, for example, that color space conversions require you to specify an illuminant. For sRGB it's D65 (6500 K), i.e. an image is supposed to be viewed under cloudy-sky conditions in an office environment.

However, in space there is no "standard" illuminant. That's why it is equally valid to take a G2V star as an illuminant, a random selection of stars as an illuminant, or a nearby galaxy in the frame as an illuminant - yet in photometry, white stars are bluer still than the Sun, which is considered a yellow star. The method you choose here for your color rendition is arbitrary.

Further to color spaces, the CIELab and CIE 1931 color spaces were crafted precisely using the interpretation of colors by people (a group of 17 observers in the case of the 1931 space). Color perception is not absolute. It is subject to interpretation and (cultural) consensus.

Edited by jager945

12 minutes ago, jager945 said:

Yup, that looks like a simple way to process luminance and color separately.

With regard to the colouring, you're almost getting it; you will notice, for example, that color space conversions require you to specify an illuminant. For sRGB it's D65 (6500 K), i.e. an image is supposed to be viewed under cloudy-sky conditions in an office environment.

However, in space there is no "standard" illuminant. That's why it is equally valid to take a G2V star as an illuminant, a random selection of stars as an illuminant, or a nearby galaxy in the frame as an illuminant - yet in photometry, white stars are bluer still than the Sun, which is considered a yellow star. The method you choose here for your color rendition is arbitrary.

I'm not sure that you understand the concept of an illuminant.

You are right that there is no standard illuminant in space - if you have emission sources, then there is no illuminant at all. An illuminant is only important when you try to do color matching for a color that is reflective in nature.

XYZ is an absolute color space, and therefore does not require an illuminant. Viewing under certain predefined conditions - like an office environment with a certain level of lighting - for a given color space ensures that if you take a colored object and look at it, and then look at its image on the screen, the color will be matched in your brain (you will see the same thing).

With astrophotography there is no such object that needs to be illuminated for us to see its color - we are already getting the light/spectra from the stars as they are. If we observe the image on a computer monitor in a dark room, it will be as if we were looking at "boosted" starlight coming through an opening in a wall while we sit in that dark room. If we do the same in a lit-up office - same thing - the monitor will show us the "color" we would see if there were an opening in the wall and amplified starlight coming through it.

Again - I'm not talking about perception of the color, but rather the physical quantity. You can make sure that your images indeed represent the xy chromaticity values derived from XYZ space, and that is what is called proper color, as it will produce the same brain stimuli when viewed on a properly calibrated monitor (within the gamut of that display device) as you would get from looking at the actual starlight, amplified enough to match that intensity, under the same circumstances/conditions - and all of that with minimal error (it's not going to be 100% due to the factors discussed, but it will be the closest match).


32 minutes ago, vlaiv said:

I'm not sure that you understand the concept of an illuminant.

I'm going to respectfully bow out here. 😁

If you wish to learn more about illuminants (and their whitepoints) in the context of color spaces and color space conversions, have a look here;

https://en.wikipedia.org/wiki/Standard_illuminant

Lastly, I will leave you with a link/site that discusses different attempts to generate RGB values from blackbody temperatures, along with some pros and cons of choosing different white points, methods/formulas, sources for each, and the problem of intensity.

http://www.vendian.org/mncharity/dir3/blackbody/

http://www.vendian.org/mncharity/dir3/starcolor/details.html

Clear skies,

Edited by jager945

4 minutes ago, jager945 said:

I'm going to respectfully bow out here. 😁

If you wish to learn more about illuminants (and their whitepoints) in the context of color spaces and color space conversions, have a look here;

https://en.wikipedia.org/wiki/Standard_illuminant

Lastly, I will leave you with a link/site that discusses different attempts to generate RGB values from blackbody temperatures, along with some pros and cons of choosing different white points, methods/formulas, sources for each, and the problem of intensity.

http://www.vendian.org/mncharity/dir3/blackbody/

Clear skies,

I do understand that you might not want to discuss this further, as we are moving away from the topic of this thread, and I agree.

I will just quote the opening sentences of the wiki article that you linked to:


A standard illuminant is a theoretical source of visible light with a profile (its spectral power distribution) which is published. Standard illuminants provide a basis for comparing images or colors recorded under different lighting.

The last part of the sentence was emphasized by me.

We only use D65 because it is part of the sRGB standard - the same as the gamma function associated with that color space. It is a way of encoding color information in that "format", which is the de facto standard for the internet and for images without a color profile.

You really don't need any sort of illuminant information when doing color calibration - just use a custom RAW->XYZ transform (which will depend on the camera) and the standard XYZ->sRGB transform and you are done - no need to define a white point or take an illuminant into consideration.


2 hours ago, alacant said:

They say it's clipped.

That's only a secondary observation in my case. I began by looking at the picture and finding the sky unnaturally dark and smooth or glossy. The idea that it's clipped comes after the observation. There may be another explanation but, as I demonstrated above, the screen grab is showing as black clipped in Ps. 

A background with very consistent pixel values looks smooth, and smoothness is, of course, a property of surfaces. As a 2D object trying to reproduce a 3D experience, the last thing a picture wants is a 'visible' surface. It will seem to reflect the observer's gaze back out of the illusory space it sets out to create.

Also, when we look up at the night sky from a dark site with adapted eyes, it doesn't look all that dark and it certainly doesn't look glossy.

There are, therefore, several different reasons underpinning the common feeling that a natural looking image should have a little grain and a little lightness - but the imager is entirely free to do as he or she wishes. I'm not making up rules.

Olly


55 minutes ago, alacant said:

 

I just downloaded it and the average value of the background is 9 -> it looks very dark to me. Almost like it was clipped and then 'unclipped'. But I know very little :D except to take the free advice - us amateurs on here are in a very privileged position to get constructive feedback from the creme de la creme of AP.


12 hours ago, alacant said:

 

I only took the histo of a screen grab. Have you posted a link to the full linear stack somewhere? If so I've missed it but it would be interesting to look at it in my familiar software. What we haven't considered is that calibration might be eating into the black point. Or maybe even the stacking software is doing something wrong.

Olly


Just now, ollypenrice said:

the histo of a screen grab

Hi

That's almost certainly why your histogram is clipped. I thought it might be misleading for beginners. Anyway, it doesn't matter; any beginner would have lost it way before then!

Our question was about star colour, not background, but as a result of the thread we've been able to fix both :)

Thanks again. I think we've solved all the issues now.

 


11 hours ago, tooth_dr said:

it looks very dark to me

Hi

Yes, thanks. We've got it now, and fixed it:)

Thanks for your feedback. We really do appreciate the time taken to help us.

Clear skies from all here with chocolate con churros after an all night imaging session in Alicante.

Exhausted!

Edited by alacant
