
Colour considerations


neil phillips


3 minutes ago, neil phillips said:

Great points Vlaiv. Even the answer to that might be different for some people. Some may prefer the colour under the actual sky conditions at the time; some may prefer the true colour of the image as it would appear in space. It gets more confusing by the second. I think actual true colour is a better way to think about it, as sky conditions just change true colour into Earth's distorted false colour. So, no consensus there - just a myriad of different hues caused by scatter and refraction.

We can always treat the "simplest" baseline color as "accurate".

Since color is a psycho-visual thing, we can take a kind of "Einstein" approach and ask: "what will all observers agree upon?"

The atmosphere is an Earth-bound element - we can't expect it to be relevant for a Martian or a "Lunarian", for example - so the reference is the light as captured in space.

Next, we need to avoid the subjective side (which depends on viewing conditions and the recollection of what a color "felt" or "looked" like) and say the following:

if we present the actual captured color spectrum and the one reproduced by the computer screen to one's eye, side by side, all observers will agree that they match - that they look the same.

This is a simple color-matching approach that does not try to replicate color (in the broad sense) but just to match whatever a person sees under the given conditions.

Luckily for us it is also the simplest, as it does not involve perceptual-space transforms (although those are being developed and are now quite accurate and useful).

In order to do that we only really need two of the three steps above, with the third step replaced by the simple, well-defined XYZ -> sRGB transform.
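For the curious, that last XYZ -> sRGB step can be written down in a few lines. This is a minimal sketch in Python using the standard D65 XYZ-to-sRGB matrix and gamma encoding; the sample XYZ triplet at the end is just an assumed example:

import numpy as np

# Standard linear transform from CIE XYZ (D65 white point) to linear sRGB
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def xyz_to_srgb(xyz):
    """Convert one XYZ triplet (scaled so Y is roughly 0..1) to 8-bit sRGB."""
    rgb_lin = np.clip(XYZ_TO_SRGB @ np.asarray(xyz, dtype=float), 0.0, 1.0)
    # sRGB piecewise gamma encoding
    rgb = np.where(rgb_lin <= 0.0031308,
                   12.92 * rgb_lin,
                   1.055 * rgb_lin ** (1 / 2.4) - 0.055)
    return np.round(rgb * 255).astype(int)

print(xyz_to_srgb([0.3, 0.4, 0.2]))  # assumed example triplet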

 


Thinking about it, we actually need to make sure we all have calibrated equipment and can produce the same color image of a known object.

I tried to illustrate this point on the DSO imaging side of things. There, people give themselves quite a bit of creative freedom in interpreting the colors of deep-sky objects.

We can start with a simple challenge. Let's pick an object that is readily available and has a well-defined color. I think a well-known brand of something can serve for that, as brands want to be recognizable and often do their best to ensure that packaging is the same all over the world.

If we, say, google for images of a Coca-Cola can, we will see the same red in most images taken with smartphones. This is because proper color management has been implemented in smartphones (for the most part, although they have started adding "beauty filters" more and more :D ).

I bet we can replicate that with our mobile phones, but can we do it with our planetary cameras?


2 minutes ago, vlaiv said:

Thinking about it, we actually need to make sure we all have calibrated equipment and can produce the same color image of a known object. [...]

I can see where you're going with this, I think. But there are already too many variables. Not everyone is going to have a calibrated monitor. For a select few this might be interesting, but not necessarily for everyone.

So, for those select few who are interested, what steps would you suggest?


1 minute ago, neil phillips said:

I can see where you're going with this, I think. But there are already too many variables. [...]

Well, yes, a "source of truth" is the major problem here, as we don't have lab equipment to measure color spectra accurately.

I think we can keep things simple and assume that our mobile phones are fairly well calibrated at the factory. We can use those as our calibration devices within the scope of this experiment.

So the idea would be to take a reference color checker pattern, display it on our cell phone, image it with our planetary camera (or cameras - I have two, or even three if I count the ASI1600 + filters as well) and use that to derive a suitable transform.

Then we can take an object that we all have access to and are fairly confident is the same color in all our copies, shoot it and post the results.

The color should match fairly well across all our images.

Once we have that, we have a way of matching target color regardless of the camera model used. That is the important first step.
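One possible way to automate the measuring side of this - a rough sketch that pulls the average raw R, G, B out of each patch of the captured checker frame. The file name, patch positions and patch size here are hypothetical placeholders for whatever your own capture looks like:

import numpy as np
from PIL import Image

# Hypothetical debayered capture of the phone-screen color checker
frame = np.asarray(Image.open("checker_capture.png"), dtype=float)

# Hypothetical top-left corners (row, col) of each patch, and the patch size in pixels
patch_corners = [(100, 80), (100, 240), (100, 400)]
patch_size = 60

for row, col in patch_corners:
    roi = frame[row:row + patch_size, col:col + patch_size]
    mean_rgb = roi.reshape(-1, roi.shape[-1]).mean(axis=0)
    print(f"patch at ({row}, {col}): mean raw RGB = {mean_rgb}")

These per-patch averages are the raw side of the least-squares fit described further down the thread.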


1 minute ago, neil phillips said:

Everyone is familiar with these colours

The problem is that those are not "material" colors but digital ones (they will depend on the screen used to display them). I wonder if any of these items are "universal".

[three attached images]

 


4 minutes ago, vlaiv said:

Well, yes, a "source of truth" is the major problem here, as we don't have lab equipment to measure color spectra accurately. [...]

So you're suggesting putting a lens on the camera to image a reference colour on a mobile phone?


Just now, neil phillips said:

So you're suggesting putting a lens on the camera to image a reference colour on a mobile phone?

Or placing the mobile phone 40-50 m away from the telescope, not focusing all the way (so the screen pixels won't resolve), and taking the average recorded values over some area.

 


In fact, it is best if we pair the camera with the telescope that will be used for recording, as telescopes can impart a "color cast" onto the image.

Not all optics provide a color-neutral image. Reflectors, for example, have a reflectivity curve that is not uniform over the whole spectrum (although the variations are small - around 1-2%).


Just now, neil phillips said:

My garden is not that long lol

Well, that depends on sensor size and focal length. A simple FOV calculation will show how much of the mobile phone will fit on the sensor (see the quick example below).

An alternative is to use a finder-guider or some other optics that doesn't have as long a focal length as the primary scope.
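As a quick sanity check of that FOV calculation, here is a tiny sketch in Python (the sensor width, focal length and distance are assumed example numbers, not anyone's actual setup):

# Field width covered by the sensor at a given distance (thin-lens approximation):
#   field_width = sensor_width * distance / focal_length
sensor_width_mm = 7.4      # assumed example sensor width
focal_length_mm = 1200.0   # assumed example telescope focal length
distance_mm = 40_000.0     # phone placed ~40 m away

field_width_mm = sensor_width_mm * distance_mm / focal_length_mm
print(f"Field width at {distance_mm / 1000:.0f} m: {field_width_mm:.0f} mm")
# ~247 mm here, so a typical ~70 mm wide phone screen fits comfortably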


3 minutes ago, vlaiv said:

In fact, it is best if we pair the camera with the telescope that will be used for recording, as telescopes can impart a "color cast" onto the image. [...]

I was going to say reflectors are neutral.


12 minutes ago, neil phillips said:

Not sure 1% would be a huge problem. Though my garden is, at a guess, 4 or 5 m long.

Not a huge problem of course.

19 minutes ago, neil phillips said:

Wouldn't the camera lens be a better idea? It can be done closer - no telescope needed.

Certainly - if one owns a lens and has the means to attach the camera to it, then sure.

A guide scope is another alternative, as is a finder/guider, provided there is a means to attach the camera to it.

Anything that will project a more or less focused image of the phone screen onto the sensor will do (as will an image of the target object).

Well - that gives me an interesting idea.

How about a pinhole camera?

A simple cover over the camera nosepiece with a tiny hole can be used to project the phone screen image in a dark room.

This can be done at a very short distance. In fact, here is a diagram that explains what happens:

[attached diagram of the pinhole setup]

On the left is the phone screen, the middle vertical bar is the pinhole cover, and on the right is the sensor.

I have marked with arrows the distances that can be used in a simple ratio equation to give the right phone distance to cover the sensor completely. All we need to know is the distance from the pinhole to the sensor, the sensor size and the phone screen size, and we can calculate the minimum phone distance (a worked example follows below).

This is maybe the best option as:

a) it does not require additional optics

b) it can be done indoors and does not require a very large distance
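For anyone who wants the ratio written out, here is a worked example in Python with purely illustrative numbers (the pinhole-to-sensor spacing, sensor width and phone screen width are all assumptions):

# Similar triangles through the pinhole:
#   phone_distance / phone_width = pinhole_to_sensor / sensor_width
pinhole_to_sensor_mm = 17.5   # assumed spacing from nosepiece cover to sensor
sensor_width_mm = 7.4         # assumed sensor width
phone_width_mm = 70.0         # assumed phone screen width

phone_distance_mm = phone_width_mm * pinhole_to_sensor_mm / sensor_width_mm
print(f"Minimum phone distance: {phone_distance_mm:.0f} mm")
# ~166 mm here - easily done indoors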


1 minute ago, vlaiv said:

Well - that gives me an interesting idea. How about a pinhole camera? A simple cover over the camera nosepiece with a tiny hole can be used to project the phone screen image in a dark room. [...]

Sounds better. Rather than a long drawn-out explanation, couldn't you do a YouTube video of the procedure, starting with the pinhole camera recording the colour image reference? And then the next steps, which I am guessing are setting RGB values on the camera to match the colour image on the mobile screen and laptop? It would also help others who aren't sure what we are going on about - much easier for everyone to follow. And it would make a great sticky I could add.


1 minute ago, neil phillips said:

Sounds better. Rather than a long drawn-out explanation, couldn't you do a YouTube video of the procedure? [...]

I can certainly do that.

The thing is, we don't all need to do it. People can calculate the color calibration matrix for a particular camera model and then others can just reuse it. I can post CCMs for the ASI178 and ASI185 (each with either the ZWO UV/IR-cut or the Baader UV/IR-cut filter), and for the ASI1600 with different combinations of filters (Baader CCD LRGB, or some other combination of absorption planetary filters - just to show that one need not use an RGB model for imaging to still produce an RGB image).

With this exercise I was actually hoping for people to confirm what I'm saying and see for themselves that it actually works, rather than taking my word for it (on more than a few occasions people have had doubts about things I've said, although I'm not really conveying an opinion but rather facts that can be verified from a variety of sources).

If many people using this approach produce a color for an object that we all agree is the same, then it must be working, right? (And anyone can compare the image to the object itself and see whether the color is accurate or not.)


2 minutes ago, vlaiv said:

I can certainly do that. The thing is, we don't all need to do it. People can calculate the color calibration matrix for a particular camera model and then others can just reuse it. [...]

No, I think this is one case where people can simply choose to try it or not. So, for me personally, I use a QHY462C and a Baader UV/IR cut - I think it's a popular combination (the ZWO alternatives, like the 224, are also popular). You just need the RGB values for any particular camera by measuring them, correct? And then anyone can just punch those values into the white balance? Am I with you so far?


1 minute ago, neil phillips said:

You just need the RGB values for any particular camera by measuring them, correct? And then anyone can just punch those values into the white balance? Am I with you so far?

Not quite.

It is a bit more complicated than that (but not much).

Instead of using only 3 values as "RGB weights", one produces 9 values as a color calibration matrix.

The 3 weights are just the main diagonal of a 3x3 matrix used in that way, with the other 6 values discarded as 0. If we want a more accurate result, we should use all 9 values (or rather, the full 3x3 matrix).
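To show the difference in code, a small sketch (the numbers in both matrices are made up purely for illustration, not from any real calibration):

import numpy as np

raw = np.array([0.42, 0.55, 0.31])   # hypothetical raw camera R, G, B for one pixel

# "White balance" style correction: only the main diagonal, the other 6 values are 0
wb_only = np.diag([1.9, 1.0, 1.6])

# Full color calibration matrix: all 9 values (illustrative only)
ccm = np.array([
    [ 1.7, -0.3,  0.1],
    [-0.2,  1.4, -0.1],
    [ 0.0, -0.4,  1.9],
])

print("diagonal only:", wb_only @ raw)
print("full 3x3 CCM: ", ccm @ raw)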

Then there is the matter of color space. People think there is a single RGB space, but actually there is an infinite number of RGB spaces, each defined by the actual R, G and B primaries used.

The de facto standard is the sRGB variant of RGB color space, but we don't want to produce the result in an RGB space directly. We want to use the XYZ color space instead, as it is the root of everything color related.

It is an absolute color space that is well defined and modeled on human vision. The Y component closely corresponds to luminance.

sRGB, on the other hand, is a relative color space (the difference being that sRGB has a white point and a black point, while XYZ does not - it is like a photon count: non-negative values without an upper bound or a white point) and it is also gamma corrected (XYZ is a linear color space).

So our first task is to derive the 9 values for a linear color conversion from camera raw data to XYZ space. We use the least-squares method on a number of measured samples (say 20 or so different color patches) to do this. It is a bit of a tedious task, but it can be somewhat automated with ImageJ and a spreadsheet app.
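Here is a minimal sketch of that least-squares step, assuming you already have, for each patch, the camera's mean raw RGB and the known reference XYZ of that patch (the arrays below are placeholder values; a real fit would use all ~20 patches):

import numpy as np

# N x 3 measured mean raw RGB values, one row per color patch (placeholder data)
raw_rgb = np.array([
    [0.41, 0.22, 0.10],
    [0.30, 0.45, 0.20],
    [0.12, 0.18, 0.52],
])

# N x 3 known reference XYZ values for the same patches (placeholder data)
ref_xyz = np.array([
    [0.38, 0.29, 0.10],
    [0.27, 0.39, 0.18],
    [0.15, 0.12, 0.55],
])

# Solve raw_rgb @ M ~= ref_xyz in the least-squares sense
M, residuals, rank, _ = np.linalg.lstsq(raw_rgb, ref_xyz, rcond=None)
ccm = M.T  # so that ccm @ [r, g, b] gives [X, Y, Z]
print(ccm)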

Once we have XYZ data, the rest is just well-defined mathematical transforms - either going directly from XYZ to sRGB, or doing some sort of transform in XYZ space (an atmospheric correction or a perceptual color transform) and then going to the target color space.

XYZ is really the basis for color management - it is the "real raw" color data.
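Putting the pieces together, a rough sketch of how a fitted matrix would be applied to a whole linear frame and then taken to sRGB for display. The frame and the identity "calibration" at the bottom are stand-ins; a real run would use your stacked data and the matrix from the fit above:

import numpy as np

XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def raw_to_srgb(frame_raw, ccm):
    """frame_raw: H x W x 3 linear raw data scaled to 0..1; ccm: fitted 3x3 raw -> XYZ matrix."""
    xyz = frame_raw @ ccm.T                             # camera raw -> XYZ, per pixel
    rgb_lin = np.clip(xyz @ XYZ_TO_SRGB.T, 0.0, 1.0)    # XYZ -> linear sRGB
    rgb = np.where(rgb_lin <= 0.0031308,
                   12.92 * rgb_lin,
                   1.055 * rgb_lin ** (1 / 2.4) - 0.055)  # sRGB gamma encoding
    return (rgb * 255).astype(np.uint8)

frame = np.random.rand(4, 4, 3)   # stand-in for a real stacked, linear frame
ccm = np.eye(3)                   # stand-in for the fitted calibration matrix
print(raw_to_srgb(frame, ccm).shape)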


7 minutes ago, vlaiv said:

Not quite. It is a bit more complicated than that (but not much). Instead of using only 3 values as "RGB weights", one produces 9 values as a color calibration matrix. [...]

OK, I will have to trust you on that - it went over my head. So, you will need 9 values for the colour calibration, and to use the XYZ colour space. Your best bet, Vlaiv, is to try to make a YouTube video of the procedure so we can see what you're doing in real time.

I am sure you will get a lot more interest with a YouTube vid, keeping it as simple as possible so most people can attempt it.

Otherwise it's going to get overly drawn out.


Interesting post, Neil.

I personally find all this very subjective, as nobody has actually been there. (All probe cams are set up by humans on various software platforms.)

When I image with RGB filters, I get a completely different colour set to my OSC camera on Jupiter, even though my histogram levels are correct when processing. No amount of work can bring my RGB captures anywhere near it - the reds are always enhanced, etc. My colour cams will always give a result similar to yours without messing about with them. What is also interesting is that I often use a paid-for piece of software called AVC Labs Photo Enhancer to colour-calibrate the images, and I much prefer it to RegiStax, even though out of habit I often return to RegiStax at various stages of my workflow.

Best Wishes

Harvey. 

PS. I notice you've now moved up to Sunny Suffolk! LOL


6 minutes ago, Barv said:

Interesting post, Neil. I personally find all this very subjective, as nobody has actually been there. [...]

Hi Harvey, good to see you around. Yes, it is a real minefield, isn't it? It would be interesting if astronauts ever went there to confirm its true colour.

Yes, moved to Suffolk 3 years ago now. Don't go out much, mind, with poor health and Covid. But I still love astronomy.

