
M101 data set


Rodd


23 minutes ago, Rodd said:

I know--that is what I am saying. If I process an image and get the perfect color balance, I may still not have the proper color intensity. The balance may be perfect, but if the amounts of RGB are too low you won't see any color.

Notice that the above bar does not go to gray. Gray is another mix of primary colors. What you are calling low amounts of RGB will end up being dark, like the left side of the image - it won't go to gray as if the "color is washed out".

In order to wash out the color you need a different mix of R, G and B. In any case, the above calibration is not good either. I just realized that I missed not one but two important things - the first is photon energy and the second is atmospheric extinction vs wavelength. Could be that images end up more saturated than now :D

 


35 minutes ago, Rodd said:

Looks good.  Why is it so big?  I

That’s my FOV with the ASI 178 and Esprit 150 combination, the idea being to capture detail (whenever conditions allow) on small galaxy targets. It has had varying success, but I will spend more time on this data, having just purchased a PI license and still having plenty of time thanks to the lockdown.


2 hours ago, tomato said:

That’s my FOV with the ASI 178 and Esprit 150 combination, the idea being to capture detail (whenever conditions allow) on small galaxy targets. It has had varying success, but I will spend more time on this data, having just purchased a PI license and still having plenty of time thanks to the lockdown.

Yes, but it isn't FOV that determines scale, it's resolution, which comes from pixel size and focal length. A tiny sensor with the same resolution as a giant sensor will show the object at the same size, just much less of it.
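Rodd's point, that scale comes from pixel size and focal length rather than sensor size, is the standard arcsecond-per-pixel relation. A quick sketch, assuming a roughly 1050 mm focal length for the Esprit 150 (an assumed value, not stated in the thread):

```python
# Image scale in arcseconds per pixel:
#   scale = 206.265 * pixel_size_um / focal_length_mm
# (206.265 converts radians to arcseconds, adjusted for the um/mm units)

def image_scale(pixel_size_um: float, focal_length_mm: float) -> float:
    """Arcseconds of sky per pixel for a given sensor/telescope pairing."""
    return 206.265 * pixel_size_um / focal_length_mm

# Assumed focal length of ~1050 mm for the Esprit 150
asi178 = image_scale(2.4, 1050)   # ASI178, 2.4 um pixels
kaf8300 = image_scale(5.4, 1050)  # KAF-8300, 5.4 um pixels

print(f"ASI178:   {asi178:.2f} arcsec/px")
print(f"KAF-8300: {kaf8300:.2f} arcsec/px")
```

With the same telescope, only the pixel size changes the scale: the smaller sensor just crops the field, it does not enlarge the object.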

Edited by Rodd

3 hours ago, vlaiv said:

Notice that the above bar does not go to gray. Gray is another mix of primary colors. What you are calling low amounts of RGB will end up being dark, like the left side of the image - it won't go to gray as if the "color is washed out".

My oh my--how about this. The first image is M101 calibrated using PI's standard method of color calibration (using a preview for background neutralization and a second preview for color calibration). The second image is the same data color calibrated (and background neutralized) using PI's photometric color calibration tool. The images were stretched the same using the histogram tool. Both were DBE'd first. No other processing was done, so ignore the background. I see a huge difference in these images. The photometric color calibrated image is whiter--and the faint blues stick out more to my eye. The standard CC method image appears greener/grayer. I think I will reprocess the data using the photometric method.

Standard CC

Image05-BN-CC.thumb.jpg.fe9f12f4bc049a16d9c0ffdda65b930c.jpg

Photometric CC

Image05-PCC.thumb.jpg.871c800a77251e130995e81b9966ee9e.jpg


5 hours ago, vlaiv said:

Notice that the above bar does not go to gray. Gray is another mix of primary colors. What you are calling low amounts of RGB will end up being dark, like the left side of the image - it won't go to gray as if the "color is washed out".

Vlad--in PixInsight you can modify saturation without altering the color balance. That is what I am asking about. Making an image more or less saturated does not imply altering color balance--otherwise there would not be a means to change saturation. There is color balance, and there is saturation--two different things.


4 hours ago, Rodd said:

Yes, but it isn't FOV that determines scale, it's resolution, which comes from pixel size and focal length. A tiny sensor with the same resolution as a giant sensor will show the object at the same size, just much less of it.

AFAIK, @tomato has a Moravian with 5.4 um pixels on a KAF-8300 sensor, and an ASI178 with 2.4 um pixels on a small Sony Exmor sensor, both on an Esprit 150. What interests me is whether such a dual rig, with a pixel scale difference of a factor of 2, can resolve much more detail than the KAF alone. My guess is that on most nights it won't make much difference. But on those very rare winter and spring nights, when seeing is at its best and galaxies are in full bloom, those tiny pixels will make a difference.
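wimvb's intuition can be framed with the usual sampling rule of thumb: the pixel scale should be a few times finer than the seeing FWHM. A minimal sketch, using 3 samples per FWHM as the criterion (a common convention, not a hard limit):

```python
def max_useful_scale(seeing_fwhm_arcsec: float, samples_per_fwhm: float = 3.0) -> float:
    """Coarsest pixel scale (arcsec/px) that still critically samples the seeing disc.
    The samples-per-FWHM criterion is a rule of thumb; conventions vary from 2 to 3.5."""
    return seeing_fwhm_arcsec / samples_per_fwhm

# In typical 2" seeing, ~0.67"/px suffices, so ~0.47"/px (the ASI178 on the Esprit)
# gains little over the KAF's ~1.06"/px plus drizzling.
print(f'2.0" seeing: {max_useful_scale(2.0):.2f} arcsec/px')

# In an excellent 1.2" night, ~0.40"/px is needed, and only the small pixels deliver it.
print(f'1.2" seeing: {max_useful_scale(1.2):.2f} arcsec/px')
```

This matches the "only on the best nights" expectation: the fine pixel scale is wasted until the seeing drops below roughly 1.5 arcseconds.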


54 minutes ago, wimvb said:

AFAIK, @tomato has a Moravian with 5.4 um pixels on a KAF-8300 sensor, and an ASI178 with 2.4 um pixels on a small Sony Exmor sensor, both on an Esprit 150. What interests me is whether such a dual rig, with a pixel scale difference of a factor of 2, can resolve much more detail than the KAF alone. My guess is that on most nights it won't make much difference. But on those very rare winter and spring nights, when seeing is at its best and galaxies are in full bloom, those tiny pixels will make a difference.

It’s the 2.4 um pixels. I did not realize they were so small.


5 hours ago, Rodd said:

Vlad--in PixInsight you can modify saturation without altering the color balance. That is what I am asking about. Making an image more or less saturated does not imply altering color balance--otherwise there would not be a means to change saturation. There is color balance, and there is saturation--two different things.

We are clearly using different terminology here. By color balance I mean the ratio of linear R, G and B components for a single color - not any sort of relationship between colors.

Saturation will change that ratio.

image.png.dbaf5cbaf2d7649b85c0fb6f3e217223.png

Here we see the RGB ratio for the color in the rectangle. It is 1 : 0.6 : 0.6 (RGB).

Now look at the values as I increase the saturation:

image.png.eac601f6a965aec96654dafebfc0834c.png

The color is now more saturated than the one above (I just went Color/Saturation and added some).

Now the ratios are no longer the same: 1 : 0.45 : 0.51.

If you change saturation it will change the ratios of RGB, and vice versa.
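vlaiv's measurement is easy to reproduce numerically. A sketch using Python's standard colorsys module: boost the HSV saturation of a 1 : 0.6 : 0.6 colour and read the ratios back. (A real editor's saturation slider may use a different colour model, so the exact numbers differ from the screenshots, but the ratios change either way.)

```python
import colorsys

r, g, b = 1.0, 0.6, 0.6            # the ratio measured in the first screenshot
h, s, v = colorsys.rgb_to_hsv(r, g, b)

# Boost saturation by 50% in HSV space, keeping hue and value fixed
h2, s2, v2 = h, min(1.0, s * 1.5), v
r2, g2, b2 = colorsys.hsv_to_rgb(h2, s2, v2)

print(f"before: 1 : {g / r:.2f} : {b / r:.2f}")
print(f"after:  1 : {g2 / r2:.2f} : {b2 / r2:.2f}")  # the RGB ratios have changed
```

Even though hue and value are untouched, the R:G:B ratio moves from 1 : 0.60 : 0.60 to 1 : 0.40 : 0.40, which is the point being made: saturation changes the RGB mix.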


1 hour ago, vlaiv said:

We are clearly using different terminology here. By color balance I mean the ratio of linear R, G and B components for a single color - not any sort of relationship between colors.

Saturation will change that ratio.

image.png.dbaf5cbaf2d7649b85c0fb6f3e217223.png

Here we see the RGB ratio for the color in the rectangle. It is 1 : 0.6 : 0.6 (RGB).

Now look at the values as I increase the saturation:

image.png.eac601f6a965aec96654dafebfc0834c.png

The color is now more saturated than the one above (I just went Color/Saturation and added some).

Now the ratios are no longer the same: 1 : 0.45 : 0.51.

If you change saturation it will change the ratios of RGB, and vice versa.

I disagree. If you have a ratio of 1/1/1 it is the same as 50/50/50 - more color saturation in the second, same color balance.


2 minutes ago, Rodd said:

I disagree. If you have a ratio of 1/1/1 it is the same as 50/50/50. More color saturation in the second, same color balance

Nope - try putting a 1:1:1 color ratio into Photoshop / Gimp and you will see that it is a completely grey color. So is 50/50/50 - or any other combination that has equal amounts of R, G and B.
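This is straightforward to verify: any equal-valued RGB triple has zero saturation whatever its level, so 1/1/1 and 50/50/50 are both greys, differing only in brightness. With Python's colorsys:

```python
import colorsys

# Equal R, G, B at three different levels (8-bit values normalised to 0..1)
for level in (1 / 255, 50 / 255, 200 / 255):
    h, s, v = colorsys.rgb_to_hsv(level, level, level)
    print(f"RGB {level:.3f} x3 -> saturation {s}, value {v:.3f}")
# saturation is 0.0 at every level: all are greys, only the value changes
```

So the two positions are about different things: scaling the triple changes brightness (value), while saturation stays at zero for any grey.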

 


14 minutes ago, vlaiv said:

Nope - try putting a 1:1:1 color ratio into Photoshop / Gimp and you will see that it is a completely grey color. So is 50/50/50 - or any other combination that has equal amounts of R, G and B.

 

Not in PI. Saturation is different from color balance. Change the numbers so one color is more but the ratios stay the same, like 1/2/4 and 10/20/40. I can have a perfectly balanced histogram with a gray looking image and a colorful image.
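Rodd's 1/2/4 vs 10/20/40 example does hold in HSV terms: scaling all three channels by the same factor leaves hue and saturation untouched and changes only the value (brightness). A quick check with colorsys:

```python
import colorsys

# Two colours with the same 1:2:4 channel ratio, the second 2.5x brighter
h1, s1, v1 = colorsys.rgb_to_hsv(0.1, 0.2, 0.4)
h2, s2, v2 = colorsys.rgb_to_hsv(0.25, 0.5, 1.0)

print(f"H,S first:  {h1:.4f}, {s1:.4f}")
print(f"H,S second: {h2:.4f}, {s2:.4f}")   # hue and saturation are identical
print(f"V: {v1:.2f} vs {v2:.2f}")          # only brightness differs
```

So both sides of this exchange are consistent once the colour model is pinned down: proportional scaling preserves hue and saturation, while a saturation adjustment (as in vlaiv's screenshots) changes the channel ratios themselves.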

Edited by Rodd

8 minutes ago, Rodd said:

I can have a perfectly balanced histogram with a gray looking image and a colorful image

That is because the histogram is something totally different and has no relevance to what we are discussing here.


All very interesting.. here's the data through PI only: DBE, photometric colour calibration, ArcsinhStretch at 15 twice, then LRGB combination at default values - no Ha added. I understand that arcsinh stretching preserves the colour ratios, so I am assuming that these colours are "correct" according to PI (although I'm happy to be corrected). Looks pretty blue to me. (Apologies if this has been done before in the thread or is irrelevant to where the thread currently is.)

Dave

Rodd_L_RGB_crop_DBE_pcc_Arcs15x2_Lumcombine.thumb.jpg.e28caa58ca1f3bdda7cf5aee9473684d.jpg

Edited by Laurin Dave

2 minutes ago, vlaiv said:

That is because the histogram is something totally different and has no relevance to what we are discussing here.

We always talk about different things. An image can be color balanced with one color showing more than another, if that is the true color of the object. Color balance does not imply all colors equal - otherwise all images would be gray.


1 minute ago, Rodd said:

We always talk about different things. An image can be color balanced with one color showing more than another, if that is the true color of the object. Color balance does not imply all colors equal - otherwise all images would be gray.

The term color balance really has no place here. It relates to daytime photography and the fact that you create an image under one set of lighting conditions and want to display it under another - or rather, you want to display the colors of the object close to the viewing conditions. That is what color balance really is. Color balance does not deal with raw camera data, as that part is handled for you by the DSLR camera, which transforms the raw data into actual true color data.

Here in astrophotography we use the raw color data from the camera. It has not been transformed into actual color data, but it is used as if it had been. This gives "strange" colors of objects - or rather, colors that are not true. The problem is, this is how it has always been done, and people are used to such colors in astrophotography. There is also a "more is better" culture present. People like to see more - more faint stuff, more color, more "bling"...

I'm going to try to do a demonstration for you in a minute of what is happening with colors in astrophotography, by means of comparison to daytime photography. I'm going to take a raw image of a colorful object with my DSLR, then show you the out-of-camera JPEG, and then show you what raw color without color calibration looks like (I'm going to treat the raw from the camera as if it were an astro image - the same workflow people use).


25 minutes ago, Laurin Dave said:

here's the data through PI only: DBE, photometric colour calibration

Did you see my comparison between the old CC and the photometric CC? Big difference. I think we can all come up with slightly different results because of settings and where we place our previews. Not sure - maybe it is just that the two methods yield different results (pretty different, though). I'd say yours looks good, and if you were going to process the image I'd say no problem yet.


20 minutes ago, vlaiv said:

The term color balance really has no place here. It relates to daytime photography and the fact that you create an image under one set of lighting conditions and want to display it under another - or rather, you want to display the colors of the object close to the viewing conditions. That is what color balance really is. Color balance does not deal with raw camera data, as that part is handled for you by the DSLR camera, which transforms the raw data into actual true color data.

Here in astrophotography we use the raw color data from the camera. It has not been transformed into actual color data, but it is used as if it had been. This gives "strange" colors of objects - or rather, colors that are not true. The problem is, this is how it has always been done, and people are used to such colors in astrophotography. There is also a "more is better" culture present. People like to see more - more faint stuff, more color, more "bling"...

I'm going to try to do a demonstration for you in a minute of what is happening with colors in astrophotography, by means of comparison to daytime photography. I'm going to take a raw image of a colorful object with my DSLR, then show you the out-of-camera JPEG, and then show you what raw color without color calibration looks like (I'm going to treat the raw from the camera as if it were an astro image - the same workflow people use).

Vlad--semantics aside, color saturation is not color calibration. Color calibration is balancing the colors "with respect to that image". A color calibrated image can look red, or blue, or green, if those colors are more prominent in the actual light. In PI you can increase saturation and maintain color calibration.


5 minutes ago, Rodd said:

Did you see my comparison between the old CC and the photometric CC? Big difference. I think we can all come up with slightly different results because of settings and where we place our previews. Not sure - maybe it is just that the two methods yield different results (pretty different, though). I'd say yours looks good, and if you were going to process the image I'd say no problem yet.

Rodd, how did you stretch your PCC'd image? Did you use ArcsinhStretch? This gives richer colour as (I believe) all channels are stretched by the same amount (with HT the degree of stretch depends on brightness, and it washes the colours away).


1 minute ago, Rodd said:

Color calibration is balancing the colors "with respect to that image" 

No, it is not. At least, we might again be using different terminology.

Color calibration means adjusting color information so that it is properly measured. It is adjusting for instrument differences with respect to color capture. The aims of color calibration are:

- If two people image the same target with different equipment and produce the color of that object, it needs to match between the two of them.

- If a person images two different objects, and we know that those two objects have the same color, the data after color calibration needs to match - it should indicate that those two objects indeed have the same color.

Since color is a perceptual thing, there is one more requirement for color calibration:

- If you take an image of an object (or light source), color calibrate it properly, and show the image and the object side by side, people will all tell you: yes, that is the same color. Therefore color calibration is not just adjusting for instrument response; since color is perceptual, it also needs to satisfy color matching.
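The instrument-correction part of this definition can be sketched numerically. Assume, hypothetically, that we image a source known to be white and derive per-channel scale factors from the camera's raw response. Real photometric calibration (e.g. PI's PCC) fits against catalogue star colours rather than a single white patch, but the principle is similar:

```python
import numpy as np

# Hypothetical raw response of a camera to a known-white reference source.
# A perfect instrument would report equal channels; this one favours green.
raw_white = np.array([0.80, 1.00, 0.65])   # R, G, B (assumed values)

# Per-channel correction factors: scale so the white reference comes out equal.
correction = raw_white.max() / raw_white

# Applying the same factors to any raw pixel corrects its colour
# for this instrument's spectral response.
raw_pixel = np.array([0.40, 0.55, 0.30])
calibrated = raw_pixel * correction

print(raw_white * correction)   # the white reference is now neutral
print(calibrated)
```

This is why two differently calibrated cameras can agree on an object's colour: each applies its own correction, derived from a common reference, before the data is compared.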

 


Now, something that I promised: a comparison of different color data from a DSLR. This is a Canon 750d. The image was taken in both RAW and JPEG at the same time (the camera created both).

Here is the JPEG from the camera - just resized down:

demo-camera.jpg.debe337ce6a02aab9e9c837729b5fced.jpg

The colors rendered in the image faithfully show what the observer was seeing in those conditions (good color balance and factory color calibration for that sensor).

demo-raw-no-process.jpg.b461f8386746b2cbccd201c4aba0832e.jpg

This is the raw image, converted to FITS with FitsWorks, just debayered, with channel composition done in Gimp. It is obvious that this image is still in linear space (the required gamma has not been applied - everything is darker) and no color calibration has been done. The observer saw a very different scene (see the image above).

Here is the raw image with a histogram adjustment in Gimp (just a simple curves stretch, as we would do for astrophotography):

demo-raw-histogram.jpg.10c2bcc3115f65b70d27303e85c6c0dd.jpg

This looks more like the image above in terms of gamma (to be expected, as a simple curves stretch very much resembles a gamma curve in shape), but regardless - the colors are way off.
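The gamma gap in this demonstration can be sketched numerically: sRGB display encoding applies roughly a 1/2.2 power to linear values (the true sRGB curve has a small linear toe near black; the power law is the common approximation), which lifts shadows and midtones strongly:

```python
# Linear sensor values vs an approximate sRGB-like display gamma (1/2.2).
linear = [0.05, 0.18, 0.50, 1.00]
encoded = [v ** (1 / 2.2) for v in linear]

for lin, enc in zip(linear, encoded):
    print(f"linear {lin:.2f} -> display {enc:.2f}")
# shadows and midtones are lifted substantially; this is why an
# unstretched linear image looks far darker than the camera JPEG
```

A curves stretch applied by hand approximates this lift, which is why the stretched raw resembles the JPEG in brightness while the colours, never having been calibrated, remain wrong.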


I fear we're mixing colour terminologies between RGB and HSL (or HSV). Saturation that does not change the colour balance (hue) only makes sense when considering HSL(V). To change saturation in RGB without affecting colour balance requires changing the ratios of RGB and the luminance to which they apply.

Of course, you can convert between the two at will, so you can apply "saturation" tools to RGB values, but they will not appear to work the same.

H53 S75 L50  =  R128 G116 B032 with a luminance of 49%

H53 S50 L50  =  R128 G120 B064 with a luminance of 50%

H53 S25 L50  =  R128 G124 B096 with a luminance of 52% (rounding error as PS only shows integer values)

So both scales describe the exact same colour balance, but in different terms.

I think this derives from RGB typically being used for print and HSL(V) for screen.
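For what it's worth, the quoted triples behave like Photoshop's HSB (i.e. HSV) rather than HSL: with lightness 50 and saturation 75, HSL would put the maximum channel near 223, not 128. Reproducing the first triple with Python's colorsys, taking H in degrees and the S and B components as percentages:

```python
import colorsys

# H53 S75 B50 interpreted as HSV: hue 53/360, saturation 0.75, value 0.50
r, g, b = colorsys.hsv_to_rgb(53 / 360, 0.75, 0.50)
print(round(r * 255), round(g * 255), round(b * 255))  # -> 128 116 32
```

This matches the quoted R128 G116 B032, so the underlying point stands either way: the same colour balance can be expressed in either system.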

Edited by Filroden

16 minutes ago, Laurin Dave said:

Rodd, how did you stretch your PCC'd image? Did you use ArcsinhStretch? This gives richer colour as (I believe) all channels are stretched by the same amount (with HT the degree of stretch depends on brightness, and it washes the colours away).

I stretched each the same using the histogram tool, so both were subjected to the same stretch - that should preserve the difference.


7 minutes ago, Filroden said:

I fear we're mixing colour terminologies between RGB and HSL (or HSV). Saturation that does not change the colour balance (hue) only makes sense when considering HSL(V). To change saturation in RGB without affecting colour balance requires changing the ratios of RGB and the luminance to which they apply.

Of course, you can convert between the two at will, so you can apply "saturation" tools to RGB values but they will not appear to work the same.

H53 S75 L50  =  R128 G116 B032 with a luminance of 49%

H53 S50 L50  =  R128 G120 B064 with a luminance of 50%

H53 S25 L50  =  R128 G124 B096 with a luminance of 52% (rounding error as PS only shows integer values)

So both scales describe the exact same colour balance but in different terms.

I think this derives from RGB typically being used for print and HSL(V) for screen.

Thanks Ken... but then why have a tool for color calibration and a tool for saturation? The tool I refer to increases saturation for all channels - not just red or blue or green. In PI you can increase the saturation of all of them at the same time.

