
Pelican Nebula - OSC ASI2600MC - kinda 2nd light.



4 hours ago, vlaiv said:

The problem with photometric color calibration is twofold.

1. It's either not implemented properly (PixInsight) or only partially implemented (Siril)

2. Stellar sources cover only a rather narrow part of the visible gamut; they can be used to correct for atmospheric influence, but won't produce very good results as a complete calibration

In fact, the way Siril implements it is adequate for atmospheric correction if you already have color-calibrated data to begin with.
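To make that concrete, here is a minimal sketch of what such star-based calibration amounts to: per-channel gains fitted so that measured star colour ratios match catalogue-derived ones. All fluxes and expected ratios below are synthetic, purely for illustration; this is not Siril's actual implementation.

```python
import numpy as np

# Hypothetical inputs: measured R, G, B fluxes per star (one row each),
# and the R/G and B/G ratios predicted from catalogue photometry.
measured = np.array([
    [1200.0,  980.0,  640.0],   # reddish star
    [ 850.0,  900.0,  910.0],   # sunlike star
    [ 400.0,  560.0,  760.0],   # bluish star
])
expected_rg = np.array([1.35, 0.96, 0.70])  # catalogue-derived R/G
expected_bg = np.array([0.60, 1.02, 1.40])  # catalogue-derived B/G

# Measured colour ratios relative to the green channel.
rg = measured[:, 0] / measured[:, 1]
bg = measured[:, 2] / measured[:, 1]

# One multiplicative correction per channel (green fixed at 1):
# expected ~ k * measured, solved by least squares.
k_r = np.dot(rg, expected_rg) / np.dot(rg, rg)
k_b = np.dot(bg, expected_bg) / np.dot(bg, bg)

print(f"apply gains R={k_r:.3f}, G=1.000, B={k_b:.3f}")
```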

Another problem is that we need a paradigm shift in the way we process our images. The classic RGB model that most people think in when processing images is suited for content creation, but not for accurate color reproduction. Most software with color management features is geared towards the following goal:

Make colors be perceived the same by the content creator and the content consumer. In other words, if I "design" something on a computer under one set of lighting conditions, and the image is then printed and hung under different lighting in a living room, or perhaps an object is dyed to match and put in a kitchen, we want our perception of that color to be the same.

In astrophotography, if we want correct colors, we need to think in terms of the physics of light (and that is something most people are not very keen on doing): what color would we see if we had that starlight in a box next to our computer? We want the color on the computer screen to match the color of the starlight from the box.
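As a worked example of that "starlight in a box" idea, here is a short sketch that models a star as a blackbody, integrates the spectrum against approximate CIE 1931 colour-matching functions (the piecewise-Gaussian fits from Wyman, Sloan & Shirley, 2013), and converts the result to sRGB. The 5800 K temperature is just an example value.

```python
import numpy as np

def piecewise_gauss(lam, mu, s1, s2):
    # Gaussian with different widths below/above the peak wavelength.
    s = np.where(lam < mu, s1, s2)
    return np.exp(-0.5 * ((lam - mu) / s) ** 2)

def cmf_xyz(lam):  # approximate CIE 1931 colour-matching functions, lam in nm
    x = (1.056 * piecewise_gauss(lam, 599.8, 37.9, 31.0)
         + 0.362 * piecewise_gauss(lam, 442.0, 16.0, 26.7)
         - 0.065 * piecewise_gauss(lam, 501.1, 20.4, 26.2))
    y = (0.821 * piecewise_gauss(lam, 568.8, 46.9, 40.5)
         + 0.286 * piecewise_gauss(lam, 530.9, 16.3, 31.1))
    z = (1.217 * piecewise_gauss(lam, 437.0, 11.8, 36.0)
         + 0.681 * piecewise_gauss(lam, 459.0, 26.0, 13.8))
    return x, y, z

def planck(lam_nm, T):
    # Blackbody spectral radiance (relative units are fine here).
    lam = lam_nm * 1e-9
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return 1.0 / (lam**5 * (np.exp(h * c / (lam * k * T)) - 1.0))

lam = np.arange(380.0, 781.0, 1.0)
spec = planck(lam, 5800.0)                      # sunlike star, example only
xbar, ybar, zbar = cmf_xyz(lam)
XYZ = np.array([np.trapz(spec * f, lam) for f in (xbar, ybar, zbar)])
XYZ /= XYZ[1]                                   # normalise luminance

# XYZ -> linear sRGB (D65), then clip and gamma-encode for display.
M = np.array([[ 3.2406, -1.5372, -0.4986],
              [-0.9689,  1.8758,  0.0415],
              [ 0.0557, -0.2040,  1.0570]])
rgb = np.clip(M @ XYZ, 0.0, None)
rgb /= rgb.max()
srgb = np.where(rgb <= 0.0031308, 12.92 * rgb,
                1.055 * rgb ** (1 / 2.4) - 0.055)
print(np.round(srgb, 3))
```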

Thanks, vlaiv. That is a very thought-provoking post. If photometric colour calibration is not implemented properly in PixInsight, I think that's way beyond my level of engagement in image processing. I can only go by what I see. I guess, like a lot of primitive imagers, I'm hoping to remove colour casts and produce a reasonably pleasing and 'natural' looking image. In the end it's only subjective, of course. My experience has been that background neutralisation and either the Colour Calibration or Photometric CC processes have worked, for my eyes I should emphasise.

If we're going to get into a philosophical discussion about what are the right or wrong colours, that's a real can of worms. What would the Andromeda Galaxy (say) look like to the human eye from closer up? It might well look rather underwhelming compared with the bright, colourful images we produce. After all, we already see a galaxy close up … our own. By eye it's pretty faint and misty, and almost entirely devoid of colour.

Having said all that, I probably should calibrate my screen properly with hardware, and not just for astro pictures either. I've been telling myself that for years. :)


Actually, colour isn't the area I have found most difficult. I'm pretty happy with noise reduction too, having spent hours wrestling with the noise from my old DSLR. This ASI camera is a dream in comparison where noise is concerned. I feel I only have to apply the lightest touch.

The processes I have found most challenging are:

High Dynamic Range Compression
Local Histogram Equalisation
Multiscale Linear Transform


2 minutes ago, Ouroboros said:

If photometric colour calibration is not implemented properly in PixInsight, I think that's way beyond my level of engagement in image processing.

If you read:

https://pixinsight.com/tutorials/PCC/

You will see that it is all about photometric filters and "absolute white balance" (whatever that means; the term is really nonsense if you ask me, since what we see as white is always a perceptual thing that depends on environmental factors and our adaptation).

I completely understand the part about photometric filters and conversion from one set to another, and all of that. However, that has nothing to do with the actual color displayed on the screen. Even the explanation of how PCC works has a section of text dedicated to "The Meaning of Color in Astronomy", or rather to the color index of stars.
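For readers unfamiliar with what conversion between one filter set and another involves: in its simplest form it is a linear fit between instrumental and catalogue colour indices measured on standard stars. The numbers below are entirely synthetic, just to show the shape of the procedure.

```python
import numpy as np

catalog_BV = np.array([0.02, 0.45, 0.65, 1.10, 1.40])   # standard-star B-V
instr_bv   = np.array([-0.30, 0.11, 0.30, 0.72, 1.01])  # measured b-v

# Fit: instrumental colour = zero-point + slope * (B-V).
slope, zero = np.polyfit(catalog_BV, instr_bv, 1)
print(f"b-v = {zero:.3f} + {slope:.3f} (B-V)")

# Invert to estimate a standard colour from a new instrumental measurement.
bv_new = 0.50
print(f"estimated B-V = {(bv_new - zero) / slope:.3f}")
```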

12 minutes ago, Ouroboros said:

If we're going to get into a philosophical discussion about what are the right or wrong colours, that's a real can of worms. What would the Andromeda Galaxy (say) look like to the human eye from closer up? It might well look rather underwhelming compared with the bright, colourful images we produce. After all, we already see a galaxy close up … our own. By eye it's pretty faint and misty, and almost entirely devoid of colour.

That is quite true; however, there is a rather simple way to pose a question that will certainly have a definitive answer:

Given some color produced on the screen, and light from the Andromeda galaxy (or rather a uniform, single-color patch of it) of the same intensity as the light coming from the computer screen, side by side, would you say these two colors are the same, or different?
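That side-by-side judgement has a standard quantitative stand-in: the CIELAB colour difference, Delta-E. Below is a minimal sketch using the simple CIE76 formula with a D65 white point; the two patch values are made up, and differences below roughly 2 are generally hard to tell apart.

```python
import numpy as np

def srgb_to_lab(rgb):
    # sRGB -> linear RGB -> XYZ (D65) -> CIELAB.
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = M @ lin / np.array([0.95047, 1.0, 1.08883])  # D65 white point
    f = np.where(xyz > (6/29) ** 3, np.cbrt(xyz),
                 xyz / (3 * (6/29) ** 2) + 4/29)
    return np.array([116 * f[1] - 16,
                     500 * (f[0] - f[1]),
                     200 * (f[1] - f[2])])

screen_patch = [0.82, 0.71, 0.55]   # colour rendered on the monitor
starlight    = [0.80, 0.72, 0.58]   # hypothetical "light in a box" colour
dE = np.linalg.norm(srgb_to_lab(screen_patch) - srgb_to_lab(starlight))
print(f"Delta-E(76) = {dE:.2f}")
```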

 


7 minutes ago, Ouroboros said:

Actually, colour isn't the area I have found most difficult. I'm pretty happy with noise reduction too, having spent hours wrestling with the noise from my old DSLR. This ASI camera is a dream in comparison where noise is concerned. I feel I only have to apply the lightest touch.

The processes I have found most challenging are:

High Dynamic Range Compression
Local Histogram Equalisation
Multiscale Linear Transform

Out of interest, what does each of these do?

I'm not a PI user, so I don't even know what sort of tools are available, but I'm rather interested in learning what these do.


3 minutes ago, vlaiv said:

Out of interest, what does each of these do?

I'm not a PI user, so I don't even know what sort of tools are available, but I'm rather interested in learning what these do.

Oh dear! You're asking me to do some work. 😉 The best thing would be for you to search the PixInsight site for detailed info.

In summary, high dynamic range compression, enacted by the HDR multiscale transform process, uses wavelets to improve core detail by compressing the dynamic range of an image. Exactly what it's doing, and how, I'm not sure. I have found by stumbling about in its parameter space that it can improve detail in an image considerably. As I recall it didn't do much for this image of the Pelican.
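This is not PixInsight's actual HDRMT algorithm, but the general idea can be sketched in a few lines: split the image into a large-scale brightness layer and a detail layer, compress only the large-scale layer, then recombine, so bright cores are tamed while fine structure survives.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hdr_compress(img, sigma=32.0, strength=0.5):
    """img: 2-D array in [0,1]; smaller strength = harder compression."""
    base = gaussian_filter(img, sigma)   # large-scale brightness layer
    detail = img - base                  # structure smaller than sigma
    out = base ** strength + detail      # compress only the base layer
    return np.clip(out, 0.0, 1.0)

# Synthetic example: a bright stellar "core" on a faint background.
y, x = np.mgrid[-128:128, -128:128]
core = np.exp(-(x**2 + y**2) / (2 * 15.0**2)) + 0.02
result = hdr_compress(core)
print(f"dynamic range before: {core.max()/core.min():.0f}:1, "
      f"after: {result.max()/result.min():.0f}:1")
```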

Local Histogram Equalisation is concerned with improving contrast and sharpness: essentially, it enhances the visibility of structures in areas of low contrast. Applying this to my Pelican image was beneficial, I think, once I'd appropriately masked the image and backed off its aggressiveness somewhat.
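PixInsight's Local Histogram Equalization belongs to the same family as CLAHE (contrast-limited adaptive histogram equalization), so a quick way to experiment with the idea outside PI, assuming scikit-image is installed, is:

```python
from skimage import data, exposure

img = data.moon() / 255.0   # any 2-D image scaled to [0,1]
# kernel_size sets the "scale" of local contrast; clip_limit tames noise
# amplification (the "aggressiveness" knob mentioned above).
out = exposure.equalize_adapthist(img, kernel_size=64, clip_limit=0.02)
print(out.min(), out.max())
```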

Multiscale Linear Transform is used for noise reduction, but also to help sharpen an image in the nonlinear phase of processing in PI. Actually, I don't think I used it at all this time, except for noise reduction.
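The noise-reduction side of that boils down to a multiscale (wavelet) decomposition with per-scale thresholding of the detail coefficients. Here is a minimal sketch with PyWavelets; it is not PI's exact implementation, and the threshold value is an arbitrary illustrative constant.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
truth = np.zeros((128, 128))
truth[40:90, 40:90] = 1.0
noisy = truth + rng.normal(0.0, 0.1, truth.shape)

# Decompose into a coarse approximation and 4 levels of detail.
coeffs = pywt.wavedec2(noisy, "db2", level=4)

# Soft-threshold the detail coefficients at every scale; the coarsest
# approximation (coeffs[0]) is left untouched.
denoised = [coeffs[0]] + [
    tuple(pywt.threshold(d, value=0.15, mode="soft") for d in level)
    for level in coeffs[1:]
]
out = pywt.waverec2(denoised, "db2")[:128, :128]  # trim any padding

print(f"RMS error vs truth: {np.sqrt(((noisy - truth)**2).mean()):.3f} "
      f"before, {np.sqrt(((out - truth)**2).mean()):.3f} after")
```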

 

