
Problem getting colours right with ZWO533mc



42 minutes ago, jager945 said:

I'd really like to know where the disconnect is here.

Let's try to establish that.

I like your experiment. I think it clearly captures where the disconnect is, and hopefully we will agree on that after I answer your questions.

43 minutes ago, jager945 said:

How is your daylight Macbeth-calibrated matrix going to help you colour-correct the 10 pictures to reveal the "true" colours of that bird accurately in all 10 shots?

The first thing I want to point out is that there are no true colors of the bird. An object does not have a color. Light reflected off the object has color, and the color of that light depends on the properties of the light source and the properties of the object. Thus the bird has properties related to the EM spectrum - reflectance - but not color.

What the Macbeth-calibrated matrix for a DSLR does is help establish a mathematical relationship between two things: a measurement of the spectrum of light (irrespective of whether that spectrum comes from the light source directly or is reflected off an object) and the display standard.

48 minutes ago, jager945 said:

Are you just going to use your D65-acquired matrix and tell me the bird I saw was a drab yellow/brown/orange?

The way the matrix was acquired only determines the domain of accuracy. We can use any illuminant we like if we work with reflectivity, and we can choose one that will be more accurate for color calibration in the domain of interest - or one that provides relatively uniform accuracy across the gamut of interest.

This accuracy depends on how much the spectral response of our sensor differs from the XYZ standard observer. Imagine for a moment that we have a sensor with the same spectral sensitivity as the XYZ standard observer. In that case you can use any illuminant and you will always derive a perfect matrix - the identity, with zero color transform error. In this case it absolutely does not matter what illuminant you use or whether you calibrate against an emissive source. You will always derive the same matrix.

When the sensor has slightly different response curves, it introduces a slight error in this conversion process (the already mentioned ill-posed problem, as it has no single solution), and here we can choose an illuminant that will reduce the error in one part of the gamut versus another.

Due to the way you designed your experiment, I'm confident that a D65-calibrated matrix will introduce minimal perceptual error in the transformed color information. So yes, I can just use a D65 (or D50, for that matter) acquired matrix.

56 minutes ago, jager945 said:

Would you maintain the colors are correct?

I will maintain that the colors are correct within the above mentioned error, and certainly more correct than with no calibration at all.

57 minutes ago, jager945 said:

Would you attempt to colour balance?

If you mean white balance - then no. I'm not interested in how that particular bird would look if I viewed it under certain conditions. I'm only interested in recording what the bird looked like under the conditions that were present at the scene. It is very likely that the 10 images we capture in this experiment will all show different colors on the bird, due to the fact that the bird is flying around, passing lights of different temperature (by the way, intensity does not determine the color of incandescent light - only temperature does).

We often use the term white balance in astrophotography, but that term should be avoided, as the correct term is color correction / calibration. White balance is a term related to the color transform that lets us take an image captured under one type of illumination and make it look like it was taken under another type of illumination. In particular, it is related to the fact that our vision system adapts to the type of lighting, so we tend to see "white" objects as white under different illumination.

1 hour ago, jager945 said:

Would you colour balance the 10 images differently, depending on what type of light lit the bird the most during its flight?

No. As I said above, I'm interested in the color of the light reflected off the bird at any instant, rather than the (wrong) notion of the "color of the bird". I would color correct each of the 10 images in exactly the same way, and they are very likely to produce different sRGB values - that is, a different color to an observer watching the images on screen.

1 hour ago, jager945 said:

Would you just average out all light temperatures for all shots and use that as you white balance?

No. As I said above, we won't be doing white balance.

1 hour ago, jager945 said:

If you decided to color balance with a custom whitepoint, would you maintain the Macbeth D65-illuminant derived color matrix is still applicable?

We won't be doing white balance. We will be doing RAW to XYZ color space conversion as a first step. XYZ is an absolute color space and it does not have a defined white point. In it we just record information about the captured light as a handy triplet of numbers, which is enough to reproduce that color later on with acceptable accuracy (of course, the visible part of the spectrum has infinitely many degrees of freedom, but it turns out that we only need 3 numbers to record the information responsible for human color perception with sufficient accuracy).

After that, if we produce an image that has no ICC color profile - like a regular image that will be posted online (jpg or png without a profile) - we convert the XYZ coordinates into the sRGB color space, which has a white point of D65 and an appropriate gamma function. We will not change anything in the numbers to account for the fact that sRGB uses D65 - that is just a feature of said color space, and we don't need to white balance anything to D65. There is a well defined transform between XYZ and sRGB and we will use that.
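For illustration, here is a minimal Python sketch of that standard XYZ -> sRGB transform (matrix and gamma function as published in the sRGB specification, with XYZ scaled so that D65 white has Y = 1):

def srgb_gamma(c):
    # sRGB gamma encoding of a linear value in 0-1
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def xyz_to_srgb(X, Y, Z):
    # linear sRGB from XYZ (matrix from the sRGB standard), then gamma encode and clip
    r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    b = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
    return tuple(min(max(srgb_gamma(c), 0.0), 1.0) for c in (r, g, b))

As a quick sanity check, feeding in D65 white (X ≈ 0.9505, Y = 1, Z ≈ 1.089) comes back as (1, 1, 1).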

1 hour ago, jager945 said:

Where did I say I implemented a feature I believe is "wrong"? I used the word "inconsequential". That's because, per the actual title of one of the articles I gave to you, colour matrix construction for a tri-stimulus camera is an ill-posed problem (particularly for AP purposes).

If you don't know what "ill-posed" means, it is a term used in mathematics to describe a problem that - in its mildest form - does not have one unique solution (deconvolution is another example of an ill-posed problem in astrophotography).

In other words, a DSLR-manufacturer matrix-corrected rendition in ST is just one more of an infinite set of plausible colour renditions (even within the more stringent terrestrial parameter confines such as a single light source of a known power spectrum) that no-one can claim is the only, right solution. Ergo, the inclusion of this functionality in ST is neat, but ultimately inconsequential (but not "wrong!"); it is not "better" than any other rendition. Being able to use the manufacturer's terrestrial D65-based color matrix is particularly inconsequential outside the confines of terrestrial parameters. But if you really want to, now you can.

I agree that it is one of an infinite set of possible transform matrices. However, we can easily answer whether it is inconsequential or not.

XYZ is an absolute color space and we have a defined metric (deltaE) that will tell us whether two colors in XYZ color space (or, more specifically, in xy chromaticity, so we can skip the intensity part) are perceptually the same or not, or how close they are perceptually.

We know what sort of colors we are likely to encounter in astronomical images. Since most stars have spectra that are very close to black body spectra, their colors will lie very close to the Planckian locus. We also have emission lines. I believe that comprises 95% of all colors we are likely to encounter in astronomical images. The rest are reflection nebulae, which are star light reflected off dust. These are also very close to the Planckian locus, as the reflectivity of dust tends to be fairly uniform across the spectrum (star light is just shifted towards the red or towards the blue when scattered off the dust).

All we need to do is take a specially designed matrix that minimizes the error for the above types of spectra, compare the results to the matrix from the DSLR manufacturer, and see how much perceptual error there is in the XYZ coordinates obtained with the first versus the second. The perceptual error is small - we will get a pretty good color match (probably below the detection threshold in most cases).

In fact, one can easily do that oneself - by using two different matrices, one derived with, say, D65 and the other with another illuminant. One shoots a raw scene and then does the raw -> XYZ -> sRGB conversion with one matrix and then the other (no white balancing of any kind) - and compares the two resulting images.
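Something along these lines (a rough Python sketch; the two matrices below are only placeholders for whatever raw -> XYZ matrices were derived with two different illuminants, and the comparison uses the simple CIE76 deltaE in Lab):

import numpy as np

def xyz_to_lab(xyz, white=(0.9505, 1.0, 1.089)):      # D65 reference white, Y = 1
    def f(t):
        d = 6.0 / 29.0
        return t ** (1.0 / 3.0) if t > d ** 3 else t / (3 * d * d) + 4.0 / 29.0
    fx, fy, fz = (f(c / w) for c, w in zip(xyz, white))
    return np.array([116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)])

def delta_e76(xyz1, xyz2):
    # simple Euclidean distance in CIELAB
    return float(np.linalg.norm(xyz_to_lab(xyz1) - xyz_to_lab(xyz2)))

# placeholder matrices - substitute your own D65-derived and
# alternative-illuminant-derived raw -> XYZ matrices here
M_d65   = np.array([[0.60, 0.30, 0.10], [0.20, 0.70, 0.10], [0.00, 0.10, 0.90]])
M_other = np.array([[0.62, 0.29, 0.09], [0.21, 0.69, 0.10], [0.01, 0.09, 0.90]])

raw = np.array([0.4, 0.5, 0.3])                       # one raw camera triplet
print(delta_e76(M_d65 @ raw, M_other @ raw))          # perceptual difference between the two renditions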

1 hour ago, jager945 said:

I.e. it sums up all the stuff I keep trying to explain to you - picking an illuminant/white reference matters to colour, picking your stretch matters to colour, picking your colourspace/tri-stimulus conversion method matters to colour, picking your luminance/chrominance compositing matters to colour. And none of these choices in AP are prescribed, nor standardised, nor fixed to one magic value or number or method or technique.

You can't choose a D50 white point if you are working with the sRGB standard (you can, but then it is no longer the sRGB standard). If you follow the standard, you can't get different results.

Say you want to determine the color of a star in the sRGB standard. Whether you use the real spectrum of the star, or a black body approximation if you don't have access to the actual spectrum, is not important - the method is the same.

Take the spectrum and calculate the XYZ coordinates using the standard observer. Then convert to the sRGB color space using the standard transform from XYZ to sRGB (a known matrix and gamma function, both part of the sRGB standard).

There is no room for changing parameters - they are defined in the standard. You can change things, but then you are producing values in some other color space, not sRGB, and the fact that you are getting different numbers should not surprise you. That does not mean that color is arbitrary - it just means that you are being arbitrary in your adherence to the standard.
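As a sketch of that star-color procedure (assuming a function cmf(nm) that returns the CIE 1931 standard observer values (x_bar, y_bar, z_bar) from the published table; the final XYZ -> sRGB step is the standard one sketched above):

import math

H, C, K = 6.62607e-34, 2.99792e8, 1.380649e-23   # Planck constant, speed of light, Boltzmann constant

def planck(wl_m, T):
    # black body spectral radiance at wavelength wl_m (metres) and temperature T (kelvin)
    return (2.0 * H * C**2 / wl_m**5) / math.expm1(H * C / (wl_m * K * T))

def star_xyz(T, cmf, step_nm=5):
    # integrate spectrum * matching functions over the visible range;
    # cmf(nm) is assumed to return (x_bar, y_bar, z_bar) from the CIE 1931 table
    X = Y = Z = 0.0
    for nm in range(380, 781, step_nm):
        L = planck(nm * 1e-9, T)
        xb, yb, zb = cmf(nm)
        X += L * xb
        Y += L * yb
        Z += L * zb
    return X / Y, 1.0, Z / Y     # scale so that Y = 1, as the sRGB transform expects

# then e.g. xyz_to_srgb(*star_xyz(5800, cmf)) for a Sun-like star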

1 hour ago, jager945 said:

E.g. the creator of the table you posted as an "accurate star color range", now as of 2016, in hindsight, prefers a whopping 700K yellower D58 rendition of his table for use on his sRGB D65 screen. And that is his absolute right, as it is just as valid of a choice.

Again - that is nonsense - not what you have written (I don't mean to insult you in any way)

but the idea that the sRGB standard somehow depends on the way you calibrate your display. sRGB is a standard and the white point of sRGB is D65. You can't change it.

You can calibrate your monitor to some other white point and that is fine - but you need to create a color profile for your calibration, so that when your operating system displays an image it knows to do a color space conversion: it will convert sRGB back to XYZ and then from XYZ to the color space of your computer screen.

What feels like white has nothing to do with the recorded and reproduced color of light.

I can record the temperature of some object and say - it is 18°C.

I can then take some other object and regulate its temperature to also be 18°C.

You will touch this other object on a scorching summer day and notice that it is pleasantly chilly to the touch, but if you touch it on a very cold winter night, it will feel distinctly warm to you.

What it feels like has nothing to do with the fact that you can record the temperature of an object and make another object be exactly the same temperature.

Similarly, there is no absolute white - what color you see as white will depend on your viewing conditions. But you can record a "spectrum" and replay a "spectrum" (although we are using only three values for it rather than many more - enough for color reproduction) regardless of whether the viewer is viewing under the right conditions or not. I, as the astrophotographer, am responsible for recording and encoding the information properly. The viewer is responsible for having their monitor calibrated properly - there is nothing I can do to change their viewing conditions.

1 hour ago, jager945 said:

I have no idea what "scientifically color calibrated" result would even mean or what that would look like. Where did you read/hear that?

Perhaps you are referring to the optional/default "scientific colour constancy" mode? It helps viewers and researchers more easily see areas of identical hue, regardless of brightness, allowing for easy object comparison (chemical makeup, etc.). It was created to address the sub-optimal practice of desaturating highlights due to stretching chrominance data along with the luminance data. A bright O-III dominant area should ideally - psychovisually - show the same shade of teal green as a dim O-III emission dominant area. It should not be whiter or change hue. The object out there doesn't magically change perceived colour or saturation depending on whether an earthling chose a different stretch or exposure time. An older tool you may be more familiar with, that aimed to do the same thing, is an ArcSinH stretch. However, that still stretches chrominance along with luminance, rather than cleanly separating the two.

Yes, that is my bad - I did mean scientific colour constancy - and since it has been some time since I last used StarTools, I forgot the exact name. Furthermore, I was under the impression that it was somehow related to scientifically accurate color calibration versus artistic (which is mentioned in other options) - but again, that is my fault for not understanding the terminology properly.

Regardless of all of this - here is an idea that you might like to explore:

Extract luminance and let it be stretched in any way desired (no need for a particular stretch).

Find the RGB ratios of the color calibrated data as r/max(r,g,b), g/max(r,g,b), b/max(r,g,b).

In order to transfer color onto the luminance, use the following approach:

srgb_gamma(inverse_srgb_gamma(lum) * rgb_ratio).

That way you are treating the stretched luminance not as luminance but as a light intensity that scales the RGB ratios. Visually, the user will be happy with their processing of the luminance, but since it is being displayed - and the OS assumes sRGB values - the real intensity is gamma encoded. You need to move it back to linear, apply the RGB ratios, and then gamma encode it to sRGB again for final presentation.

R, G and B should be derived from raw -> XYZ -> linear sRGB using any available color correction matrix (raw -> XYZ) - even if that means using a D65-derived one.
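To make that concrete, a minimal per-pixel sketch (srgb_gamma and its inverse being the standard sRGB transfer functions):

def srgb_gamma(c):
    # linear -> sRGB-encoded
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def inverse_srgb_gamma(c):
    # sRGB-encoded -> linear
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def composite(lum, r, g, b):
    # lum: stretched luminance as displayed (0-1, sRGB-encoded)
    # r, g, b: linear, colour-calibrated values for this pixel
    m = max(r, g, b) or 1.0                      # avoid division by zero on black pixels
    ratios = (r / m, g / m, b / m)
    lin = inverse_srgb_gamma(lum)                # treat the stretched luminance as encoded intensity
    return tuple(srgb_gamma(lin * k) for k in ratios)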

You might not agree with me on the merit of doing so - but why not give it a try and make it possible for users of your software to do it?

 


This is all not really helping the original post that much.

Johan, can you upload a single sub as the original raw FITS file? Then we can have a play with it as Gorran suggested; it won't take long for us to collectively see if there is anything wrong with the camera, or if it's just the colour scaling.

cheers

Lee

 

Edited by Magnum

1 hour ago, Magnum said:

This is all not really helping the original post that much.

Johan, can you upload a single sub as the original raw FITS file? Then we can have a play with it as Gorran suggested; it won't take long for us to collectively see if there is anything wrong with the camera, or if it's just the colour scaling.

cheers

Lee

 

It has been suggested a number of times, but it is down to the OP to do so if he wishes / has the time.

You are quite right that this highly technical and rather heated discussion might not be helping OP much - but I don't think it is irrelevant.

I apologize if the tone of discussion caused discomfort to some readers - that certainly was not my intention, and I can only hope that it will indeed provide some value to people that are interested in the topic.

In any case, I would note that if there is no proper color in astronomical images, then the original inquiry by the OP does not really make much sense - there is no way to get the colors right. Or, in other words, we may as well keep the green in the image, because there is no accurate color in astro images anyway, right?


15 minutes ago, vlaiv said:

It has been suggested a number of times, but it is down to the OP to do so if he wishes / has the time.

Don't think he has been back, maybe your discussion has scared him off 😄

Dave


I'm interested in the topic and I'm enjoying ALL the contributions.
I would prefer that personal attacks and/or the devaluing of each other's contributions did not take place - but I accept that this can be difficult when opposing views clash.


I hope he gets back and was not scared off. Could it be that he saved his subs as jpg (8-bit)? Some systems do that by default, and in that case it may be impossible to bring the red and blue channels up.

Edited by gorann

14 minutes ago, gorann said:

I hope he gets back and was not scared off. Could it be that he saved his subs as jpg (8-bit)? Some systems do that by default, and in that case it may be impossible to bring the red and blue channels up.

Could be - but the histogram of the green channel shows the distinct signature of a stretch.

The OP also images with an ASI183 and filters / LRGB, and with a DSLR as well. I don't think the error here is due to 8-bit or jpeg format at capture time.

What could be the issue is DSS. I've experienced that: some of my images - from a DSLR, so not sure if it is applicable here - were clipped in the red part of the spectrum. I think it has something to do with the way DSS decodes / handles raw files.

My flats were not working properly because of that, and I had the red channel completely clipped.

As soon as I used another piece of software (FitsWork) to extract the raw data in FITS format and did all my processing in ImageJ, everything was fine.

The fact that blue and red are similarly clipped here reminds me of the thing I encountered.

Edited by vlaiv
added coma for readability

1 hour ago, vlaiv said:

It has been suggested a number of times, but it is down to the OP to do so if he wishes / has the time.

You are quite right that this highly technical and rather heated discussion might not be helping OP much - but I don't think it is irrelevant.

I apologize if the tone of discussion caused discomfort to some readers - that certainly was not my intention, and I can only hope that it will indeed provide some value to people that are interested in the topic.

 

I didn't mean you or anyone in particular Vlaiv, I was including myself too 😛

But yes, it's all interesting.


7 hours ago, vlaiv said:

The first thing I want to point out is that there are no true colors of the bird. An object does not have a color. Light reflected off the object has color, and the color of that light depends on the properties of the light source and the properties of the object. Thus the bird has properties related to the EM spectrum - reflectance - but not color.

Indeed.  But...

7 hours ago, vlaiv said:

I'm confident that a D65-calibrated matrix will introduce minimal perceptual error in the transformed color information.

and

7 hours ago, vlaiv said:

You can't choose a D50 white point if you are working with the sRGB standard

I think we found the problem. You seem to be under the mistaken impression there is only one illuminant in the story. There are - at the very least - always two (and almost always more).

  1. There is at least one illuminant (and associated white point/power spectrum) for the scene, whether it is the sun, moon, indoor lighting, etc.
  2. There is at least one illuminant (and associated white point/power spectrum) associated with your display medium.  Fun fact, in the case of the sRGB standard, there are actually two more illuminants involved; the encoding ambient illuminant (D50), and the typical ambient illuminant (also D50). The latter two specify the assumptions made with regards to the viewing environment.
  3. There may be one illuminant (and associated white point/power spectrum) involved if using a camera matrix. Any associated matrix/profile and white point are only valid for the scene you based it off. It is only valid for lighting conditions that precisely match that illuminant's power spectrum.

You seem to be under the mistaken impression that your camera matrix and white balance must match the intended display medium's illuminant. That's obviously not the case.

Look up any RGB to XYZ conversion formula, and they will all have parameters/terms that specify the input illuminant characteristics.

Look up any XYZ to RGB conversion formula, and they will all have parameters/terms that specify the output illuminant characteristics.

Nowhere does it state or require input and output to have the same illuminant.

9 hours ago, vlaiv said:

We won't be doing white balance.

This misunderstanding leads to your very curious insistence that the bird, which any observer in that room would see and experience to be multi-colored, was brown/orange/yellow instead. By refusing to color balance, and not taking into account the conditions in the room, you are not replicating what an observer saw or experienced at all. This is fundamentally incompatible with practising documentary photography.

Of course, in the thought experiment, you simply lacked the proper matrix and white balance factors to correct for. The latter is 100% fair of course; you have no way of knowing these precise parameters for the room. But you can definitely approximate them by making some plausible assumptions, based on the content of your data/images.

8 hours ago, vlaiv said:

Imagine for a moment that we have a sensor with the same spectral sensitivity as the XYZ standard observer. In that case you can use any illuminant and you will always derive a perfect matrix - the identity, with zero color transform error. In this case it absolutely does not matter what illuminant you use or whether you calibrate against an emissive source. You will always derive the same matrix.

That's simply not true. It is not at all how that works, even with a "perfect" sensor with 100% even spectral sensitivity. You seem to be under the mistaken impression that your camera matrix that was acquired under D65 conditions will be completely identical to a camera matrix that was acquired under, say, LED lighting. That is not the case (though the results may not differ/matter much depending on the power spectra differences in illuminants after a simple white balance correction).

Try deriving a matrix with a monochromatic light source and see how far that gets you. For example, how much light is reflected per coloured Macbeth square heavily depends on the power spectrum of the illuminant:

[example spectral power distribution plot]

It would be a single peak in the case of monochromatic light, or a complex profile in any other case. At its most extreme, a blue square will reflect no light in the case of a red monochromatic source, etc. If multiple illuminants of different power spectra are at play in a scene (such as stars), things get very complex indeed, to the point of not mattering any more, saturating the entire power spectrum if their light is taken as an aggregate. Hence the common practice of sampling foreground stars or using entire nearby galaxies as a white reference, and eschewing DSLR matrices entirely.

9 hours ago, vlaiv said:

Again - that is nonsense - not what you have written (I don't mean to insult you in any way)

I'm confused; are you saying a source you cited yourself is spouting nonsense? Or just that part that you don't agree with?

9 hours ago, vlaiv said:

sRGB is a standard and the white point of sRGB is D65. You can't change it.

You can calibrate your monitor to some other white point and that is fine - but you need to create a color profile for your calibration, so that when your operating system displays an image it knows to do a color space conversion: it will convert sRGB back to XYZ and then from XYZ to the color space of your computer screen.

That's not what the author is proposing. He is merely proposing to use a different input illuminant of D58 (for the "scene"/stars/temperatures) when converting his star temperature data to XYZ, he is not proposing to use a different output illuminant (for the screen) when converting XYZ to sRGB. It correctly remains at D65. The 2006-updated Perl code is here if there is any doubt;

http://www.vendian.org/mncharity/dir3/blackbody/UnstableURLs/toolv2_pl.txt

&useSystem($SystemsRGB); sets system output space to "sRGB system (D65)"

if(want '-D58') { &set_wp($IlluminantD58); } ; switch to set illuminant to D58 for calculation of XYZ components

9 hours ago, vlaiv said:

Regardless of all of this - here is an idea that you might like to explore:

Thank you. That is more or less how colour calibration in StarTools works; luminance is processed completely independently of chrominance. Whatever stretch you used, however you enhanced detail, has no effect on the colouring. Once you hit the Color module you can change your colouring without affecting brightness at all (you can even select whether you want that to be perceptually constant in CIELAB space, or channel-constant in RGB space). You can even mimic the way an sRGB stretch desaturates emissive-sourced colours if you so wish.

What you can't do in StarTools, however, is have your chrominance data get stretched and squashed along with the luminance/detail. E.g. you will never see orange dust lanes in a galaxy with StarTools - they will always look red or brown; even psychovisual context-based colour perception is taken into account this way (see the "brown" video).


7 hours ago, jager945 said:

There is at least one illuminant (and associated white point/power spectrum) for the scene, whether it is the sun, moon, indoor lighting, etc.

In astrophotography, you said it yourself - we don't have an illuminant in the cosmic scene - we have only the light reaching us.

7 hours ago, jager945 said:

There is at least one illuminant (and associated white point/power spectrum) associated with your display medium.  Fun fact, in the case of the sRGB standard, there are actually two more illuminants involved; the encoding ambient illuminant (D50), and the typical ambient illuminant (also D50). The latter two specify the assumptions made with regards to the viewing environment.

These are only important for defining the standard / the transform from XYZ and the viewing conditions - they are not important to the recorded signal. Once you have the recorded signal - you then just use the defined standard to encode it in sRGB, because it will be viewed on a device that is supposed to work with the sRGB standard (or, if it is calibrated differently, it will manage the color transform via its profile).

8 hours ago, jager945 said:

There may be one illuminant (and associated white point/power spectrum) involved if using a camera matrix. Any associated matrix/profile and white point are only valid for the scene you based it off. It is only valid for lighting conditions that precisely match that illuminant's power spectrum.

You are under the impression that we are supposed to do white balance. We are not.

White balance is only needed if you want to match the perceptual side of things - which we don't, since there is no match for the original scene.

Imagine you have a very light gray wall, and all the other colors in the scene you are viewing (not imaging) are rich, saturated colors. We will perceive the wall as white - it will become our perceptual white point. Now take a piece of white paper and put it against the wall - your perception will instantly change: the white piece of paper is whiter than the wall, so it becomes your white point, and the perception of the wall turns to light gray.

Anyone can do the above experiment. Did we change the physical color associated with the wall? No. If we record it and later "replay" it, it will be the same power spectrum - nothing changed about the physical nature of that light.

I'm not trying to capture perceptual color - we can't do that, as we have no profile for "floating in outer space with robot eyes capable of collecting vast amounts of light and looking at that galaxy" - I'm just trying to capture the physical nature of the light.

If we captured starlight in a box and let it shine so we could view it, set that box on the desk next to our computer screen, and produced the same color on the screen via the method I'm describing - people looking at the two would say: these two have the same color. That is the goal of all of this - color matching, not trying to record a perceptual thing.

8 hours ago, jager945 said:

You seem to be under the mistaken impression that your camera matrix and white balance must match the intended display medium's illuminant. That's obviously not the case.

Look up any RGB to XYZ conversion formula, and they will all have parameters/terms that specify the input illuminant characteristics.

Look up any XYZ to RGB conversion formula, and they will all have parameters/terms that specify the output illuminant characteristics.

Nowhere does it state or require input and output to have the same illuminant.

Both the forward and backward transforms between RGB and XYZ are mathematically related, being inverses of one another, so the same rules certainly apply to both.

As for the actual definition:

[screenshot of the sRGB standard's transform definition: the 3x3 matrix between CIE XYZ (scaled so that D65 white has Y = 1) and linear sRGB, followed by the sRGB gamma function]

The only requirement is that the XYZ values are scaled so that D65 has a Y of 1. This is because sRGB is a relative color space (all intensities are measured against the white and black points, as percentages rather than absolute, unbounded values).

Other than that, there is no mention of an illuminant in the forward transform - nor, similarly, is there any mention of an illuminant in the backward transform.

8 hours ago, jager945 said:

This misunderstanding leads to your very curious insistence that the bird, which any observer in that room would see and experience to be multi-colored, was brown/orange/yellow instead. By refusing to color balance, and not taking into account the conditions in the room, you are not replicating what an observer saw or experienced at all. This is fundamentally incompatible with practising documentary photography.

Of course, in the thought experiment, you simply lacked the proper matrix and white balance factors to correct for. The latter is 100% fair of course; you have no way of knowing these precise parameters for the room. But you can definitely approximate them by making some plausible assumptions, based on the content of your data/images.

That is correct - we are not capturing what the observer experienced. No one can do that, except for the observer writing an essay on the topic, or maybe some strange neurological electrode probe thingy (what can be captured in our brain is really beyond me) - and that is not what documentary photography is all about. In fact, a quick online search for the definition of documentary photography gives this:

Quote

Documentary photography is a style of photography that provides a straightforward and accurate representation of people, places, objects and events, and is often used in reportage

If you put two people in the room, you might easily get two different experiences of the scene - which one is accurate? However, if you put two cameras in the same room and process the data properly, you should get the same result (or a very similar one, with maybe some colors slightly off, depending on each sensor's characteristics / each camera's gamut).

8 hours ago, jager945 said:

That's simply not true. It is not at all how that works, even with a "perfect" sensor with 100% even spectral sensitivity. You seem to be under the mistaken impression that your camera matrix that was acquired under D65 conditions will be completely identical to a camera matrix that was acquired under, say, LED lighting. That is not the case (though the results may not differ/matter much depending on the power spectra differences in illuminants after a simple white balance correction).

Oh it is 100% true - and easily proven by math.

X = ∫ L(λ) x̄(λ) dλ,   Y = ∫ L(λ) ȳ(λ) dλ,   Z = ∫ L(λ) z̄(λ) dλ

Here we have the expression for XYZ, which depends on the matching functions x_bar, y_bar, z_bar and on L - the spectrum of the recorded light (regardless of whether it is emitted from a star or reflected off an object).

If we have a sensor whose three raw channels have exactly the same spectral response as x_bar, y_bar and z_bar, then both XYZ and RAW will produce the same values for the same L (spectrum). You can't have the same expressions with the same numbers produce different results.

Ok?

Since we now have a bunch of triplets - XYZ on one side and RAW on the other - that are identical (we have different XYZ values for different spectra and different RAW values for different spectra, but each XYZ triplet equals its corresponding RAW triplet), there is only one "number" N with the property:

1 * N = 1

34 * N = 34

10001 * N = 10001

I put "number" in quotes because here we are really talking about matrix multiplication rather than numbers. In any case, the solution is always the identity matrix - regardless of what set of spectra (L) you use to derive it.

I made this point to show that any set of spectra can be used to derive the transform matrix - the only difference between set 1 and set 2 will be in the error they produce. If the sensor QE matches the XYZ standard observer matching functions, then the error will be 0 in any case. The more the sensor matches the XYZ curves, the less you have to worry about the choice of spectral set used to derive the transform.
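A quick toy check of that claim in Python (random triplets stand in for the integrated channel responses, and the matrix is fitted by least squares, which is the usual way such transform matrices are derived):

import numpy as np

rng = np.random.default_rng(0)

# toy check: 24 arbitrary "spectra", represented only by their integrated
# responses (rows = spectra, columns = the three channels)
xyz = rng.uniform(0.1, 1.0, size=(24, 3))   # standard observer responses (stand-ins)
raw = xyz.copy()                            # sensor identical to the standard observer

# fit the 3x3 matrix M that minimizes ||raw @ M - xyz|| (least squares)
M, *_ = np.linalg.lstsq(raw, xyz, rcond=None)
print(np.round(M, 6))                       # identity, whatever spectra were used

# perturb the sensor response slightly and the fitted matrix (and its residual
# error) starts to depend on which set of spectra you trained on
raw2 = xyz + rng.normal(0.0, 0.02, size=xyz.shape)
M2, *_ = np.linalg.lstsq(raw2, xyz, rcond=None)
print(np.round(np.abs(raw2 @ M2 - xyz).max(), 4))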

8 hours ago, jager945 said:

That's not what the author is proposing. He is merely proposing to use a different input illuminant of D58 (for the "scene"/stars/temperatures) when converting his star temperature data to XYZ, he is not proposing to use a different output illuminant (for the screen) when converting XYZ to sRGB. It correctly remains at D65. The 2006-updated Perl code is here if there is any doubt;

http://www.vendian.org/mncharity/dir3/blackbody/UnstableURLs/toolv2_pl.txt

&useSystem($SystemsRGB); sets system output space to "sRGB system (D65)"

if(want '-D58') { &set_wp($IlluminantD58); } ; switch to set illuminant to D58 for calculation of XYZ components

Again, that is wrong on the part of that author. There is no illuminant involved in deriving the XYZ color of a star - look at the previous screenshot: the XYZ values are integrals of the matching functions times the power spectrum of the light that we want to record. That is it - no illuminant is ever mentioned. XYZ is an absolute color space used to record the color of light.

People often confuse this because another description is also given:

X = ∫ S(λ) I(λ) x̄(λ) dλ,   Y = ∫ S(λ) I(λ) ȳ(λ) dλ,   Z = ∫ S(λ) I(λ) z̄(λ) dλ
(where S is the spectral reflectance/transmittance of the object, multiplied by the spectral power distribution of the illuminant I)

This simply means that we take a light source and shine it on an object - or through a filter - and the resulting spectrum is given by S * I. This is basic physics and has nothing to do with XYZ calculations as such - it simply states that the reflected or transmitted spectrum will have the original frequencies attenuated by the reflectance / transmittance coefficients of the object / filter.

When recording XYZ, it does not matter which light source was used to shine upon the object - the process is the same. The XYZ values will be different, because the light that reaches the sensor is different, but the process is the same and does not depend on "illuminant" in any way.

8 hours ago, jager945 said:

Thank you. That is more or less how colour calibration in StarTools works; luminance is processed completely independently of chrominance. Whatever stretch you used, however you enhanced detail, has no effect on the colouring. Once you hit the Color module you can change your colouring without affecting brightness at all (you can even select whether you want that to be perceptually constant in CIELAB space, or channel-constant in RGB space). You can even mimic the way an sRGB stretch desaturates emissive-sourced colours if you so wish.

What you can't do in StarTools, however, is have your chrominance data get stretched and squashed along with the luminance/detail. E.g. you will never see orange dust lanes in a galaxy with StarTools - they will always look red or brown; even psychovisual context-based colour perception is taken into account this way (see the "brown" video).

While that is admirable, I must say it's not worth much if one does not do proper color calibration first.

What good is all that if the color you are working with is not correct?


12 hours ago, vlaiv said:

In astrophotography, you said it yourself - we don't have an illuminant in the cosmic scene - we have only the light reaching us.

This is the third time you are misquoting what I said. It really doesn't help this back-and-forth if you keep doing that.

Where did I ever claim there are no illuminants in outer space? Every single star is an illuminant. Emissions can be illuminants. Anything that emits visual spectrum light can act as an illuminant for a scene if that light is reflected.

12 hours ago, vlaiv said:

you then just use the defined standard

What is this "defined" standard? You keep conflating the target medium illuminant (D65 in the case of sRGB) with the source colour space illuminant and transform - for which there is no "defined standard". They are two different things, and both color spaces require specifying. You specify the target medium illuminant (sRGB's D65), but you neglect (or don't know?) that you need to specify the source colour space illuminant as well.

13 hours ago, vlaiv said:

Anyone can do the above experiment.

That's not what a white point is (if you mean white reference). It's not about luminance/brightness. A white reference (for the purpose of this discussion) is a camera space value that defines a stimulus that should be colorless (brightness does not matter - it can be any shade of gray). Using that stimulus, a correction factor can be calculated for the components. Say the white reference is RGB: 63,78,99; then all recorded values should be corrected by a factor R = 99/63 (1.57), G = 99/78 (1.26), B = 99/99 (1.0). You can of course also normalize the factors to the 0-1 range: R = 1.57/1.57 (1.0), G = 1.26/1.57 (0.8), B = 1.0/1.57 (0.64).

See DCRAW's "-a", "-A", "-w" and "-r" switches. The "-a" switch is precisely how early digital cameras found their white balance ("auto white balance"). Many cheap webcams still work like this - flip on auto white balance, show them a pastel colored piece of paper with a smiley face drawn on it with a marker, and see them white balance away the pastel color to a neutral white/gray.
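In code, that correction boils down to something like this (just a sketch, using the numbers from the example above):

def wb_factors(white_ref):
    # camera-space value of a stimulus that should be colourless, e.g. (63, 78, 99)
    m = max(white_ref)
    return tuple(m / c for c in white_ref)       # (63, 78, 99) -> roughly (1.57, 1.27, 1.0)

def apply_wb(rgb, factors):
    # scale each channel so the white reference comes out neutral
    return tuple(c * f for c, f in zip(rgb, factors))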

Quote

White balance is only needed if you want to match the perceptual side of things

Well. Yeah? That's documentary photography, is it not? Why do you think the Mars rovers have a calibration target? They could have just used your magic Macbeth chart on Earth, no? Why do you think all digital cameras have special modes for indoor and incandescent lighting? You must be fun at parties. Do you keep your after-dusk party shots brown/yellow, because that's how they look under a completely non-applicable D65 illuminant (which you seem to be obsessed with)?

13 hours ago, vlaiv said:

we are not capturing what the observer experienced

Oh, you're actually serious... :(

Quote

As for the actual definition:

Why on earth are you giving the back and forward transform for the sRGB standard here?

You seem to be under the mistaken impression that the RAW camera source colour space is some linear version of the sRGB colour space?

Nowhere in any RAW converter code will you find an sRGB -> XYZ forward transform because the RAW source color space is never already sRGB.

What happens in a RAW converter is this;

Camera space RGB -> White balance (as proxy for illuminant) -> XYZ -> matrix correction -> sRGB

You can examine the DCRAW code to verify this.
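In simplified Python, that chain looks roughly like this (the camera matrix below is a placeholder; a real converter pulls it from the camera profile or its built-in tables):

import numpy as np

# placeholder camera -> XYZ matrix; substitute the real one for your camera
CAM_TO_XYZ = np.array([[0.7, 0.2, 0.1],
                       [0.3, 0.6, 0.1],
                       [0.0, 0.1, 0.9]])

# standard XYZ -> linear sRGB matrix (D65 display space)
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def develop(raw_rgb, wb):
    cam = np.asarray(raw_rgb, float) * np.asarray(wb, float)   # white balance (proxy for the scene illuminant)
    xyz = CAM_TO_XYZ @ cam                                      # camera space -> XYZ
    lin = np.clip(XYZ_TO_SRGB @ xyz, 0.0, 1.0)                  # XYZ -> linear sRGB, clipped to gamut
    return np.where(lin <= 0.0031308,                           # sRGB gamma encoding
                    12.92 * lin,
                    1.055 * lin ** (1 / 2.4) - 0.055)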

Quote

Other than that, there is no mention of an illuminant in the forward transform - nor, similarly, is there any mention of an illuminant in the backward transform.

No joke; you're on the wrong Wikipedia page. You're not actually converting between two color spaces here. You're just going back and forth between one color space (sRGB) which obviously has the same illuminant, specified in the sRGB standard.

Quote

easily proven by math.

Indeed. It is easy to prove you are wrong.

Let's input a blue square of a Macbeth chart (peak reflectance around 470nm, aqua-ish) and use a red monochromatic light (650nm) to see if we get any reflection.

The blue square's spectral radiance (watts per steradian per square metre, per metre) will peak at lambda = 470nm and quickly fall off for other wavelengths. Pick any number you like for the spectral radiance of the blue square.

Now integrate that over the target power spectrum (which consists solely of our peak at 650nm) and out rolls precisely 0 for X, Y and Z.

In other words, if a Macbeth square isn't red around 650nm (our square in the above example isn't), then it isn't reflected when we use a power spectrum that does not include any power at that part of the spectrum. Your Macbeth chart will only reflect red squares and provide you no (usable) XYZ values for the other squares.
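To put rough numbers on it (a toy integration; the reflectance curve is a made-up Gaussian around 470nm standing in for the blue square, the illuminant a narrow peak at 650nm):

import math

def blue_square_reflectance(nm):
    # made-up reflectance curve peaking at 470nm, essentially zero by 650nm
    return math.exp(-((nm - 470.0) / 30.0) ** 2)

def red_laser(nm):
    # (near-)monochromatic illuminant centred on 650nm
    return math.exp(-((nm - 650.0) / 1.0) ** 2)

# reflected spectrum = reflectance * illuminant; integrate over the visible range
reflected = sum(blue_square_reflectance(nm) * red_laser(nm) for nm in range(380, 781))
print(reflected)   # effectively zero, so X, Y and Z for that square are effectively zero too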

13 hours ago, vlaiv said:

has nothing to do with XYZ calculations

You say that about formulas that literally start with "X =", "Y =" and "Z =" !?

13 hours ago, vlaiv said:

does not depend on "illuminant"

IT LITERALLY SAYS IN THE SNIPPET YOU POSTED: "multiplied by the spectral power distribution of the illuminant".

At this point I really have to ask - are you trolling me? Please be honest. This nonsense is taking up so much of my time and I really don't want to continue this if it's not to anyone's benefit, and if you're just doing this to get a rise out of me.

This is a 100% honest question, as you don't seem to understand or read your own sources, and you make outrageous claims that even a novice photographer would know are just downright bizarre.


Jager and Vlaiv, you really should find another thread for your discussion. Johan, who just wanted to know if something was wrong with his camera, could by now have given up on SGL forever!

Edited by gorann

  • 3 weeks later...
On 02/04/2021 at 17:33, Budgie1 said:

I had this with my ASI294MC Pro but I could process it out with PixInsight using the DynamicBackgroundExtraction tool. Then, when playing with the settings in DSS I found a setting which removed most of the green cast.

In DSS, go to the Settings Menu > Stacking Settings > Light tab > click where it says "RGB Channels Background Calibration" and you get a drop-down menu, click on "RGB Channels Background Calibration" if there isn't a tick next to it and then click on "Options". In the Background Options window I have the Calibration Method set to "Rational" and the RGB Channels Background Calibration set to "Minimum".

Click OK to save the setting changes and run the stack again and most of the green cast should have been removed. ;)

I'm still collecting images to see what happens. For now it seems that this setting of parameters in DSS removes the green coloring. If that is so, then the problem is in the processing, not in the recording of the images. I want to be sure, and also to understand what happens, so I will spend more time on this.

