
M101 data set


Rodd

13 minutes ago, vlaiv said:

Colors rendered in the image faithfully show what the observer was seeing in those conditions (good color balance and factory color calibration for that sensor).

My point exactly--color calibration is not color balance.  One can have a blue image and still have a color-calibrated image--just look at one of the squares--they are all different colors!

 


21 minutes ago, vlaiv said:

No it is not. At least we might again be using different terminology.

Color calibration means adjusting the color information so that it is properly measured. It is adjusting for instrument differences with respect to color capture. The aim of color calibration is:

- If two people image the same target with different equipment and produce the color of that object - it needs to match between the two of them.

- If a person images two different objects and we know that those two objects have the same color - the data after color calibration needs to match; it should indicate that the two objects indeed have the same color.

Since color is a perceptual thing, there is one more requirement for color calibration:

- If you take an image of an object (or light source), color calibrate it properly, and then show the same object (or light source) to people side by side - they will all tell you: yes, that is the same color. Therefore color calibration is not just adjusting for instrument response - since color is a perceptual thing, it is also color matching, or at least it needs to satisfy color matching.

 

The old-style CC in PI was relative--not photometric.  So it is based in the image.  Different scopes will capture different colors based on their quality, etc.  That's what I mean about it being particular to the image.  The software only knows what is in the image--not what it should be.  What was actually captured--not what should have been captured.  In PI, BN and CC (not photometric) are relative things based on the dataset at hand.


Back to the image....Here is a reprocessed data set using photometric color calibration with absolutely no color changes--no saturation boosts at all--just use of curves and other tools to process the image.  It looks pretty bland to me--nice sharpness and depth--but it is bone white! 

[Image: HaLRGB-1]


16 minutes ago, Rodd said:

Thanks Ken.....why have a tool for color calibration and a tool for saturation?  The tool I refer to increases saturation for all channels--not just red or blue or green.  In PI you can increase saturation of all of them at the same time.

You can't increase saturation on a single channel - saturation is a property of color, or rather of the perception of color. You can change it only if you change the color - make it a different color.

15 minutes ago, Rodd said:

Then why color calibrate prior to stretching?  That is the PI workflow.

In my workflow you don't stretch the color information at all. That way, the RGB ratio is preserved while in the linear stage. At the end, a gamma of 2.2 is applied because it is required for the sRGB color space - which we use for our images on the web (the web standard is that an image without color space information is treated as if it were in the sRGB color space).

Here is the workflow that I use:

1. Luminance is stretched and mapped to the 0-1 range.

2. Color is calibrated using a color calibration matrix (the point of my work here is to find a suitable color calibration matrix - the same thing that turns the raw images above into a proper image, as a camera does with its factory-embedded transform matrix).

3. The normalized RGB ratio is computed as r = r/max(r,g,b), g = g/max(r,g,b), b = b/max(r,g,b).

4. Luminance is transformed by inverse gamma 2.2.

5. Luminance is multiplied by r, g and b to produce linear r, g and b.

6. The resulting linear r, g and b are forward transformed with gamma 2.2 to give the sRGB R, G and B channels, which are simply combined into the final image.

The only stretch I do is on luminance. This way you won't have stretch artifacts, color is preserved (and is proper if you do a good calibration), and no star cores will be burned out as long as your R, G and B subs use the full range without saturating - the luminance can saturate; that doesn't matter, as it is the only channel that is stretched and stretching will compress star profiles anyway.
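As a rough sketch (my own illustration of the six steps described above, not actual code from this thread), the workflow might look like this in Python with numpy; the 3x3 calibration matrix `ccm` is assumed to be known already:

```python
import numpy as np

def combine_lum_color(lum, r, g, b, ccm=None):
    """Sketch of the luminance/color workflow described above.

    lum     -- stretched luminance, already mapped to [0, 1] (step 1)
    r, g, b -- linear color channel data, same shape as lum
    ccm     -- optional 3x3 color calibration matrix (assumed known)
    """
    rgb = np.stack([r, g, b], axis=-1).astype(float)

    # Step 2: apply the color calibration matrix, if one is available
    if ccm is not None:
        rgb = rgb @ ccm.T

    # Step 3: normalized RGB ratio - each channel divided by max(r, g, b)
    maxc = np.max(rgb, axis=-1, keepdims=True)
    maxc[maxc == 0] = 1.0                      # avoid division by zero
    ratio = rgb / maxc

    # Step 4: inverse gamma 2.2 on the stretched luminance
    lin_lum = np.clip(lum, 0, 1) ** 2.2

    # Step 5: multiply luminance by the ratios -> linear r, g, b
    lin_rgb = lin_lum[..., None] * ratio

    # Step 6: forward gamma 2.2 -> sRGB-like R, G, B
    return np.clip(lin_rgb, 0, 1) ** (1 / 2.2)
```

Note that for the brightest channel the ratio is 1, so that channel ends up equal to the stretched luminance itself - the color only scales the other channels down.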


2 minutes ago, Rodd said:

just use of curves and other tools to process image

If you apply curves after color calibration, the colors will be wrong. Non-linear transforms don't preserve colors.


2 minutes ago, vlaiv said:

If you apply curves after color calibration, the colors will be wrong. Non-linear transforms don't preserve colors.

Doesn't sound like PixInsight.  Your workflow is not PixInsight.  It's like when Olly tells me his workflow in PS--it means nothing to me.  Since I have never seen a fully processed image from you, it's hard for me to know if what you do is really any different in the end from what others do.  In PixInsight you are supposed to use curves after color calibration.


4 minutes ago, vlaiv said:

If you apply curves after color calibration, the colors will be wrong. Non-linear transforms don't preserve colors.

In PI you can use curves but not on the colors.  You can also use curves on colors--two different uses of the tool.


I guess ultimately we, in the guise of astrophotographers, are making pictures; we are not producing data for scientific research.  For most, the primary objective is to produce an image that looks pleasing and satisfies the basic general expectations of astrophotography, i.e. not blown out in the cores, round stars, good detail in structures, etc.

I like looking at astro images (pictures) with some colour, even if that is somewhat by way of artistic license, à la narrowband.

I like your images, Rodd, as you do have some colour in them, and the level of this colour is to your personal preference which, in my opinion and looking at the positive comments you get, is to the liking of very many people.


5 minutes ago, Rodd said:

In PI you can use curves but not on the colors.  You can also use curves on colors--two different uses of the tool.

I'm starting to feel that this discussion is all over the place and we might be losing sight of the initial direction (at the very beginning you offered your data to others to process so you could compare different approaches).

At some point we started discussing the real color of the object in question. A given workflow can either provide you with that color or not.

The fact that you are using the "standard" workflow in PI does not mean it will provide you with the proper color of the image, nor that it is "the way" images ought to be processed. That holds for the "true color" approach as well. At some point in the discussion we touched on that subject too - how much value there is in rendering the true color of the target, and what people expect out of an image.

If you want to get the true color of the object - regardless of what it actually is - there is a "set of rules" to be followed. The same as with any other data that you measure: if you want your measurement to be accurate and to represent the thing that you measured, you need to know what you are doing with the data and make sure you don't skew it in any way.

While in the linear stage, the ratio of R, G and B represents color. If you change this ratio, you are changing the color, simple as that. This is related to how light works and how our vision works. In other color spaces you don't need to preserve the ratio, but in linear RGB (and, for example, the XYZ color space) you do. Any operation that messes up the ratio of the three components is going to change the color.

Non-linear operations on RGB triplets change those ratios. It does not matter whether you stretch a single channel or stretch the RGB image (which is just applying the same stretch to each of the R, G and B components of the color image) - you are changing the RGB ratios and hence changing the color.

That is perfectly fine if you don't want to be bothered with accurate color - just use the workflow that you are used to and that produces what you perceive as a nice image. But if you want true color, you need to work in a way that allows it - namely, only linear transforms on the RGB color components. Proper colorimetric calibration of your data is also needed (btw, PI's photometric color calibration will not provide you with accurate color - it also uses data from the image and has no concept of color in the sense we are using here; it assumes that color is "astronomical color", or rather color index - the difference between two photometric filter measurements).

I outlined a workflow that will give you proper color if you have properly calibrated color data. The issue is to take the camera and filters (btw, this works on OSC cameras too) and somehow measure and deduce the proper transform matrix needed for color calibration.

That was my goal - to demonstrate this workflow, make a usable method of color calibration, and present as a result the proper color of M101 for reference - what it actually looks like.
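A quick numerical illustration of that point (my own sketch, not from the thread): a linear scaling of a pixel preserves its r:g:b ratio, while a non-linear stretch does not.

```python
import numpy as np

# A linear-RGB pixel; its color is encoded in the r:g:b ratio.
pixel = np.array([0.40, 0.20, 0.10])      # ratio 4 : 2 : 1

linear = pixel * 2.0                      # linear scaling (brightness change)
nonlinear = pixel ** 0.5                  # a square-root "stretch" as a stand-in

print(linear / linear.min())              # ratio preserved: 4 : 2 : 1
print(nonlinear / nonlinear.min())        # ratio now 2 : 1.41 : 1 - shifted toward white
```

The stretched pixel has the same hue family but a washed-out ratio, which is exactly the desaturation effect being described.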


6 minutes ago, RayD said:

I guess ultimately we, in the guise of astrophotographers, are making pictures; we are not producing data for scientific research.  For most, the primary objective is to produce an image that looks pleasing and satisfies the basic general expectations of astrophotography, i.e. not blown out in the cores, round stars, good detail in structures, etc.

I like looking at astro images (pictures) with some colour, even if that is somewhat by way of artistic license, à la narrowband.

I like your images, Rodd, as you do have some colour in them, and the level of this colour is to your personal preference which, in my opinion and looking at the positive comments you get, is to the liking of very many people.

So true. My aim is to provide people with a simple "click here if you want a true color image" approach. If not, there are plenty of different ways to process an image and get a pleasing-looking result.

But in order to arrive at that "click here" point, we do need quite a bit of R&D and some science.


13 minutes ago, RayD said:

I guess ultimately we, in the guise of astrophotographers, are making pictures; we are not producing data for scientific research.  For most, the primary objective is to produce an image that looks pleasing and satisfies the basic general expectations of astrophotography, i.e. not blown out in the cores, round stars, good detail in structures, etc.

I like looking at astro images (pictures) with some colour, even if that is somewhat by way of artistic license, à la narrowband.

I like your images, Rodd, as you do have some colour in them, and the level of this colour is to your personal preference which, in my opinion and looking at the positive comments you get, is to the liking of very many people.

Thanks Ray....I agree...though I have trouble deciding! 😀


59 minutes ago, vlaiv said:

I'm starting to feel that this discussion is all over the place and we might be losing sight of the initial direction (at the very beginning you offered your data to others to process so you could compare different approaches).

At some point we started discussing the real color of the object in question. A given workflow can either provide you with that color or not.

The fact that you are using the "standard" workflow in PI does not mean it will provide you with the proper color of the image, nor that it is "the way" images ought to be processed. That holds for the "true color" approach as well. At some point in the discussion we touched on that subject too - how much value there is in rendering the true color of the target, and what people expect out of an image.

If you want to get the true color of the object - regardless of what it actually is - there is a "set of rules" to be followed. The same as with any other data that you measure: if you want your measurement to be accurate and to represent the thing that you measured, you need to know what you are doing with the data and make sure you don't skew it in any way.

While in the linear stage, the ratio of R, G and B represents color. If you change this ratio, you are changing the color, simple as that. This is related to how light works and how our vision works. In other color spaces you don't need to preserve the ratio, but in linear RGB (and, for example, the XYZ color space) you do. Any operation that messes up the ratio of the three components is going to change the color.

Non-linear operations on RGB triplets change those ratios. It does not matter whether you stretch a single channel or stretch the RGB image (which is just applying the same stretch to each of the R, G and B components of the color image) - you are changing the RGB ratios and hence changing the color.

That is perfectly fine if you don't want to be bothered with accurate color - just use the workflow that you are used to and that produces what you perceive as a nice image. But if you want true color, you need to work in a way that allows it - namely, only linear transforms on the RGB color components. Proper colorimetric calibration of your data is also needed (btw, PI's photometric color calibration will not provide you with accurate color - it also uses data from the image and has no concept of color in the sense we are using here; it assumes that color is "astronomical color", or rather color index - the difference between two photometric filter measurements).

I outlined a workflow that will give you proper color if you have properly calibrated color data. The issue is to take the camera and filters (btw, this works on OSC cameras too) and somehow measure and deduce the proper transform matrix needed for color calibration.

That was my goal - to demonstrate this workflow, make a usable method of color calibration, and present as a result the proper color of M101 for reference - what it actually looks like.

I appreciate your goal, knowledge and expansiveness...it's just that it has a tendency to upset the apple cart.  In one fell swoop you have rendered it impossible for me to process an image at all, as the workflow you mention is a foreign language to me.  I use PI and might as well not produce images at all, as there is no way to do it "correctly".  Anyway--let's get back to images.  Here is my former image that I called the final--and the new image, which has had NO saturation adjustments at all other than a bit of SCNR green.  It may not be correct--but it is more correct, as I did not touch color after calibration--other than non-color-targeted curves adjustments.  Which is better?

Previous

[Image: previous version]

New Image

[Image: HaLRGB-1a2]

 


3 minutes ago, vlaiv said:

I personally prefer the second image from your post above.

Don't know why there were three--the 2nd and 3rd are the same--a duplicate post for some reason.  Thanks--now I have to reconsider everything......again


19 minutes ago, vlaiv said:

I personally prefer the second image from your post above.

I think most would prefer this--an overall saturation boost, not targeting one color or region.

 

[Image: HaLRGB-1a4]

 


First off, all three are outstanding images of M101, which I would be very proud of if I had produced them. Following the interesting discussions on what the ‘real’ colour rendition of this object (and other galaxy images) should be, my preference is the third one. I am now of the view that the established presentation of quite vibrant blue spiral arms is a tad overdone.


2 minutes ago, tomato said:

First off, all three are outstanding images of M101, which I would be very proud of if I had produced them. Following the interesting discussions on what the ‘real’ colour rendition of this object (and other galaxy images) should be, my preference is the third one. I am now of the view that the established presentation of quite vibrant blue spiral arms is a tad overdone.

Thanks--I tend to agree.  But now I have become hypersensitive to color and see the last one as the new vibrant!  A very slippery business, indeed.


Ok, I officially give up on trying to color calibrate this data.

I tried numerous options - like two different QE curves for the ASI1600 that I have - one published by ZWO and one that Christian Buil produced with his spectroscope (btw these two are quite different, which I find highly puzzling). I also tried with atmosphere compensation and without.

In every case, the image was quite a bit away from what I would expect out of proper calibration. I even tried matching the colors of individual stars against measured values - and then it hit me.

I don't have information on the spectrum of the flat panel used, nor the exposure lengths used. You don't need that information if you normalize the master flats (scale them so that the max intensity is 1), but no one really does this unless they want to do some sort of photometry. This introduces another level of color imbalance into the image, as flat frame intensities depend on both the spectrum of the flat light source and the exposure length used.
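The normalization mentioned here could be sketched as follows (Python/numpy, my own illustration): scaling a master flat so its maximum is 1 removes the overall intensity scale, which is the part that depends on the panel's spectrum and the exposure length.

```python
import numpy as np

def normalize_master_flat(flat):
    """Scale a master flat so its maximum value is 1.

    This removes the overall intensity scale - which depends on the flat
    panel's spectrum and the exposure length - leaving only the relative
    vignetting / pixel-response information, so per-filter flats no longer
    introduce a color imbalance between channels.
    """
    flat = np.asarray(flat, dtype=float)
    return flat / flat.max()
```

A flat divided through this way still corrects vignetting identically, since flat-fielding only cares about the relative shape, not the absolute level.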


11 minutes ago, vlaiv said:

Ok, I officially give up on trying to color calibrate this data.

I tried numerous options - like two different QE curves for the ASI1600 that I have - one published by ZWO and one that Christian Buil produced with his spectroscope (btw these two are quite different, which I find highly puzzling). I also tried with atmosphere compensation and without.

In every case, the image was quite a bit away from what I would expect out of proper calibration. I even tried matching the colors of individual stars against measured values - and then it hit me.

I don't have information on the spectrum of the flat panel used, nor the exposure lengths used. You don't need that information if you normalize the master flats (scale them so that the max intensity is 1), but no one really does this unless they want to do some sort of photometry. This introduces another level of color imbalance into the image, as flat frame intensities depend on both the spectrum of the flat light source and the exposure length used.

The panel was a Flatman L--I don't have the literature, I would have to look it up.  Flat durations were 1.3 sec for lum, 2.2 sec for color, and 40 sec for Ha.  I would be surprised if there are discernible differences--certainly not visually apparent.


23 minutes ago, Rodd said:

The panel was a Flatman L--I don't have the literature, I would have to look it up.  Flat durations were 1.3 sec for lum, 2.2 sec for color, and 40 sec for Ha.  I would be surprised if there are discernible differences--certainly not visually apparent.

Could you check something for me - it might help a lot. Can you open up each R, G and B flat and check the max value in PI?

Even better would be if you could select the central, brightest part - like a small circular selection in the center of each flat - and measure the mean value?
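For anyone without PI handy, the same central-region mean could be measured with a short numpy sketch (my own illustration; the radius fraction is an arbitrary choice):

```python
import numpy as np

def central_mean(img, radius_frac=0.1):
    """Mean pixel value inside a small circular region at the image center,
    i.e. the measurement requested above, done outside PI."""
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    radius = radius_frac * min(h, w)
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return img[mask].mean()
```

Comparing this value across the R, G and B master flats would reveal the per-channel intensity differences being discussed.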

