
M31 colour adjustments.


ollypenrice

Recommended Posts

1 hour ago, vlaiv said:

Here is HST image for comparison:

[attached image]

[attached image]

and "stellar bar".

I think that HST images in general match star colors very well - which leads me to believe they have been properly calibrated for display.

Is that RGB or HaRGB? Looks like HaRGB to me (which should exaggerate the red) but maybe HST picks up more Ha in the red channel than amateur equipment.


There have been efforts to make synthetic galaxy images from star spectra. They are generally quite dull, as are the Wray images. An example browsable gallery is here:

https://www.illustris-project.org/galaxy_obs/gallery/

Scroll down a few pages and you get some with blue star forming regions.

Regards Andrew 


1 hour ago, gorann said:

Is that RGB or HaRGB? Looks like HaRGB to me (which should exaggerate the red) but maybe HST picks up more Ha in the red channel than amateur equipment.

Have no idea.

The best detail on the image capture process that I could find in a quick search is this:

[attached image]

It tells me that many data sets were combined to create this image. It might well include some sort of Ha filter blend.

In reality, Ha can't be properly displayed on any display device, and it certainly can't be encoded in sRGB format, as it is a pure spectral color.

[attached: CIE 1931 xy chromaticity diagram with the sRGB gamut]

Here is a chromaticity diagram showing all the colors of the sRGB color space compared to the full range of colors that most humans are capable of seeing. The outer rim of this diagram represents pure spectral colors - the rainbow.

Ha, for example, is a more saturated red than any red inside the sRGB region of this diagram (redder than any red a computer screen can display). It is also a bit darker - meaning for the same light intensity, you would get something like this:
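The claim that Ha falls outside the sRGB gamut is easy to check numerically. Here is a minimal Python sketch, assuming approximate CIE 1931 xy coordinates (the 656nm locus point is my rough estimate, not an official value):

```python
# Point-in-triangle test on the CIE 1931 xy chromaticity diagram.
# sRGB primaries per the sRGB standard; the H-alpha point is approximate.
R = (0.64, 0.33)
G = (0.30, 0.60)
B = (0.15, 0.06)
HA = (0.726, 0.274)  # ~656 nm on the spectral locus (approximate)

def sign(p, a, b):
    """Signed area: which side of edge a->b the point p falls on."""
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def inside_triangle(p, a, b, c):
    d1, d2, d3 = sign(p, a, b), sign(p, b, c), sign(p, c, a)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)  # all on the same side => inside

print(inside_triangle(HA, R, G, B))               # False: Ha is out of gamut
print(inside_triangle((0.3127, 0.329), R, G, B))  # True: D65 white is inside
```

Any display whose primary triangle sits strictly inside the spectral locus will fail this test for Ha, since pure spectral colors lie on the locus itself.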

[attached image]

Left is the reddest a display can show, and right is the same-intensity Ha red (lacking a bit of the saturation Ha normally has). This is due to how our vision cells work:

[attached image: cone cell sensitivity curves]

Note that sRGB can't accurately represent any of the rainbow / spectral colors, so every spectrum you see on your computer monitor will be wrong. In the image above, the author really tried to simulate spectral colors as best they could - which shows in the Ha part - by darkening the same red color to mimic what our eyes would see (note the sensitivity of the L type cells).

For that reason we can say that any image containing Ha is a false color image as far as our ability to display it goes, but it also gives us a guide on how best to incorporate pure Ha regions - not as bright red, but rather as dim, deep red, if we want to best represent the visual appearance of the object.

This however does not hold for "color mix" situations. When an Ha region is filled with stars too far away to be resolved (like in another galaxy), or perhaps has OIII present as well, it is much harder to tell the exact color. Luckily, we can actually give some idea of how it should look - or rather, where on the chromaticity diagram it should be located.

If one has two light sources of different color, their combined color will simply lie on the line connecting those two colors in the above graph.
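This line rule follows from additive mixing being linear in CIE XYZ. A small Python sketch (the source colors and luminances here are hypothetical, chosen just for illustration):

```python
# Mix two light sources in XYZ, then project back to xy chromaticity;
# the mixed point falls on the segment joining the two source points.
def xy_to_XYZ(x, y, Y=1.0):
    """Chromaticity (x, y) plus luminance Y -> tristimulus XYZ."""
    return (x * Y / y, Y, (1 - x - y) * Y / y)

def XYZ_to_xy(X, Y, Z):
    s = X + Y + Z
    return (X / s, Y / s)

star = xy_to_XYZ(0.45, 0.41, Y=1.0)   # a warm star-like color (made up)
ha = xy_to_XYZ(0.726, 0.274, Y=0.5)   # approximate H-alpha locus point

mix = tuple(a + b for a, b in zip(star, ha))
print(XYZ_to_xy(*mix))  # lies between (0.45, 0.41) and (0.726, 0.274)
```

Where the mixed point sits along the line depends on the relative intensities of the two sources.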

Stars + Ha will have this color:

[attached image]

I outlined the region of color that such a combination can have. By the way, that series of numbers is what is called the Planckian locus, and it represents the color of black bodies at different temperatures (the source of the above diagram of star colors - each spectral class has a certain temperature, and each temperature has a certain color, like when you heat steel: it first glows deep red, then yellow, then turns white and finally bluish).
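The temperature-to-color behaviour of the Planckian locus can be illustrated directly from Planck's law. This is only a sketch: rather than a full CIE integration, it compares black-body intensity at two sample wavelengths (450nm and 650nm, my choice) across a few temperatures:

```python
import math

H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)
KB = 1.381e-23  # Boltzmann constant (J/K)

def planck(wavelength_m, temp_k):
    """Black-body spectral radiance (W sr^-1 m^-3), Planck's law."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return a / (math.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0)

for T in (3000, 6000, 20000):
    blue = planck(450e-9, T)
    red = planck(650e-9, T)
    print(T, round(blue / red, 3))  # blue/red ratio rises with temperature
```

Cool bodies come out red-dominated (ratio below 1) and hot bodies blue-dominated, which is exactly the deep-red-to-bluish progression described above.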

We get a similar thing when we connect the OIII line on the outer, spectral locus - at 500nm - to the Ha point at 656nm. The line joining them gives all possible colors of Ha/OIII nebulae (the actual color will depend on the relative intensities of each spectral line).

Fun stuff, isn't it?


19 minutes ago, andrew s said:

There have been efforts to make synthetic galaxy images from star spectra. They are generally quite dull, as are the Wray images. An example browsable gallery is here:

https://www.illustris-project.org/galaxy_obs/gallery/

Scroll down a few pages and you get some with blue star forming regions.

Regards Andrew 

This reminds me of an older thread :D

 


2 hours ago, vlaiv said:

Fun stuff, isn't it?

Yes, and it takes a bit of digestion! Thanks Vlaiv! Do I get it right that when we add Ha to the red channel of an RGB image, we really take a high-intensity signal that is near the limit of what our red cones can detect (so it looks very dull to our eyes even if it is a high-intensity signal picked up by our camera) and spread it evenly over the whole red channel, thereby boosting the Ha signal in a way that we could never perceive with our eyes?


4 minutes ago, gorann said:

Yes, and it takes a bit of digestion! Thanks Vlaiv! Do I get it right that when we add Ha to the red channel of an RGB image, we really take a high-intensity signal that is near the limit of what our red cones can detect (so it looks very dull to our eyes even if it is a high-intensity signal picked up by our camera) and spread it evenly over the whole red channel, thereby boosting the Ha signal in a way that we could never perceive with our eyes?

Depends how you look at it.

The fact that you add it to the red channel does not mean you are "spreading" it over the whole R range (which we can loosely define as 600-700nm). This is because of the way our eyes work.

It just mixes in like regular additive light.

[attached image: additive color mixing]

If you add Ha signal to the red channel, it is almost like shining that Ha light along with the rest of the image. If a pixel was completely black before you added Ha, it will now be red. But if it was green (had green light shining from it), it will turn yellow. Similarly, if it was blue, it will turn magenta.
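The additive blend described here can be sketched in a few lines of Python (the `add_ha` helper and its normalized 0-1 pixel values are hypothetical, just for illustration; note that red added to blue gives magenta):

```python
def add_ha(pixel, ha, weight=1.0):
    """Additively blend a normalized Ha intensity into the R channel."""
    r, g, b = pixel
    return (min(r + weight * ha, 1.0), g, b)

print(add_ha((0.0, 0.0, 0.0), 0.8))  # black -> (0.8, 0.0, 0.0): red
print(add_ha((0.0, 0.8, 0.0), 0.8))  # green -> (0.8, 0.8, 0.0): yellow
print(add_ha((0.0, 0.0, 0.8), 0.8))  # blue  -> (0.8, 0.0, 0.8): magenta
```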

Look at that Ha region from the Hubble M51 image again (I picked one at random):

[attached image]

There are places where red is darker but there are places where it turns to white - simply because there are bluish stars there and light mixes to form white light.

Neither our eyes nor a camera can tell whether a particular "red photon" is coming from Ha or from "regular red" if it falls in the ~600-700nm zone. Only a narrowband filter can separate Ha from the rest.

What our eyes and some cameras can tell is whether a particular photon should look pure red or some other hue.

[attached: plot of the CIE RGB color matching functions]

These are not sRGB matching functions - they are CIE RGB matching functions (a different color space) - but since I can't find an image of sRGB matching functions, this will do.

Imagine you have three spectral sources - one at 656nm, a second at 640nm and a third at 630nm.

If you examine the above curves, 640nm will register at a stronger intensity in this color space (the curve is higher - more sensitive - similar to our vision) than 656nm, while 630nm will also have a bit of green in it.

You can now imagine the following scenario:

Light source at 600nm + light source at 650nm.

The first will have some green in it and the second will be pure red but weaker. There is a linear combination of the two that can produce the same color as a single light source at 630nm in this color space.

This is important to understand: there is no single spectrum that corresponds to any particular color (except for spectral colors, which are pure colors of a single wavelength). This is the reason why RGB displays can't produce spectral colors. They can only produce colors that are "in between" on the chromaticity diagram, because they produce colors by mixing (linear combination). That is why we have a triangle for the sRGB color space - at each vertex of that triangle there is an sRGB primary color: R, G and B.
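This metamerism argument can be sketched numerically. The response triples below are hypothetical (loosely shaped like the matching-function curves, not real CIE data); the point is only that a weighted sum of two sources can reproduce a third source's response exactly:

```python
# (R, G, B)-like responses of three narrow sources (made-up values)
src_600 = (0.9, 0.30, 0.0)      # 600 nm: strong red, some green
src_650 = (0.5, 0.00, 0.0)      # 650 nm: pure but weaker red
target_630 = (0.70, 0.09, 0.0)  # 630 nm: in between

# Solve w1*src_600 + w2*src_650 = target_630 from the R and G components
w1 = target_630[1] / src_600[1]            # green fixes the 600 nm weight
w2 = (target_630[0] - w1 * src_600[0]) / src_650[0]

mix = tuple(w1 * a + w2 * b for a, b in zip(src_600, src_650))
print(w1, w2, mix)  # the blend reproduces the 630 nm response
```

Two physically different spectra (the blend vs. the single 630nm line) produce the same response in this color space, which is exactly why no single spectrum corresponds to a given color.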

In that sense, Ha is "spread" over the whole R range - you can't separate it once you add it by a simple sum, or if you shoot through a broadband red filter. But it is not spread in the sense that, if you have only Ha in your red channel, it will not produce the hues of red that other red frequencies might produce - like the one at 620nm, which will have some green in it and won't be pure red.

If you are wondering how a color matching function can be negative (red is below the X axis in some parts) - that is because CIE RGB can encode colors that can't be generated using three light sources that are the primaries of that color space.

[attached image]

CIE RGB is a wider gamut space than sRGB, and in order to build a monitor that can display it, you would need some sort of "laser LEDs" shining at those exact spectral frequencies at the vertices of the color space triangle. Even then, you still could not reproduce OIII, as it lies outside the triangle - it would have a negative R component, and you can't shine "negative" light.


21 minutes ago, gorann said:

Yes, and it takes a bit of digestion! Thanks Vlaiv! Do I get it right that when we add Ha to the red channel of an RGB image, we really take a high-intensity signal that is near the limit of what our red cones can detect (so it looks very dull to our eyes even if it is a high-intensity signal picked up by our camera) and spread it evenly over the whole red channel, thereby boosting the Ha signal in a way that we could never perceive with our eyes?

I've wondered about this and had the same thoughts as you. On the other hand, when we hold an Ha filter up to daylight we have no difficulty seeing colour through it, so its passband must lie in the visible. I remember a thread elsewhere, started by imager Bob Anderson, on what colour Ha really was. I found myself wondering why it wouldn't be the colour we saw when we looked through the filter! (A simple experiment...) To remind myself I've just this minute looked through a Baader 7nm OIII and an Astrodon 3nm Ha filter, naked eye, at the sky. Exactly as you'd expect, the OIII hovered between green and blue depending on where you looked, and the Ha was an exquisitely beautiful deep red. RGB filters likewise behave as we'd expect.

The notion that the Ha line is outside our range maybe comes from the conservative view of visible light passed by DSLR filters. They may be deliberately over-restrictive in order to avoid artifacts generated by the chip and the optics.

Olly


20 minutes ago, ollypenrice said:

I've wondered about this and had the same thoughts as you. On the other hand, when we hold an Ha filter up to daylight we have no difficulty seeing colour through it, so its passband must lie in the visible. I remember a thread elsewhere, started by imager Bob Anderson, on what colour Ha really was. I found myself wondering why it wouldn't be the colour we saw when we looked through the filter! (A simple experiment...) To remind myself I've just this minute looked through a Baader 7nm OIII and an Astrodon 3nm Ha filter, naked eye, at the sky. Exactly as you'd expect, the OIII hovered between green and blue depending on where you looked, and the Ha was an exquisitely beautiful deep red. RGB filters likewise behave as we'd expect.

The notion that the Ha line is outside our range maybe comes from the conservative view of visible light passed by DSLR filters. They may be deliberately over-restrictive in order to avoid artifacts generated by the chip and the optics.

Olly

This is exactly what I wanted to propose in my previous post, when I was talking about the difference between "regular" red and Ha red, but somehow forgot to mention it (I had too much on my mind while trying to explain things as best I could).

I would recommend this experiment with filters - it can be done indoors as well - especially if one wants to compare Ha to regular monitor red: just place a red square and a white square side by side in PS or Gimp on your screen, then look at the white one through the Ha filter while looking at the red one without it - you'll get an instant display of the difference between the colors.

Another very good "exercise" for "color education" is to take a StarAnalyser (if you have one, of course; otherwise a prism can be used) and observe how rich pure spectral colors are. You'll be hard pressed to find such colors anywhere around you, as most colors are "broadband" in nature.


On 15/11/2020 at 16:25, ollypenrice said:

You're most kind!

The question you ask is a good one. Assuming you're in sharp focus you're in the lap of your optics, mostly, when it comes to capture and, while the scopes I used on this are excellent, they cannot give tiny stars on a hard stretch of long subs so it's mostly down to processing. If you're shooting LRGB you already have a set of effectively 'short' subs in your RGB. The point of luminance is to get more signal so, if you want a stack with less, you probably already have it in your RGB. Sometimes that's useful to bear in mind. 

On this image, made a few years ago, I used familiar star reduction techniques. An initial, gentle stretch was done with the stars masked, but this can only be done gently or it will soon show, especially where stars lie on faint nebulosity. It's a help but not the answer. After that I used the Astronomy Tools 'Make Stars Smaller' routine. This is great and was formerly known as Noel's Actions. Well worth having. However, the arrival of Starnet++ transformed star control in DS imaging. It's free and either Standalone or incorporated into Pixinsight.

Basically it removes the stars in a single click - but don't expect miracles. The output image often looks quite artificial and 'blotchy' where large stars have been removed. I took it into Ps and made a three-layer stack with copies of the original top and bottom and the starless image in the middle. I wanted to replace the stars with smaller versions of the original, so I set my top layer to blend mode Lighten and, in curves, pulled down its brightness till the stars were tiny. In blend mode Lighten they were now the only part of the original showing in the blend. By playing with the curve I could get them to look crisp, natural and small. Trial and error. Flatten top onto middle when happy. In one or two places (the satellite galaxies) Starnet had done some damage, but I had an original as my bottom layer so I could just erase the remaining top layer where necessary.

One or two stars needed a cosmetic fix post-Starnet, but nothing drastic.

Olly

Olly,

Many, many thanks for this - very kind of you to share your approach, and so much for me to take away, mull over and practice etc!

Of course, at the moment, 30 minutes of clear skies would be welcome, let alone 30 minute subs! 😁

Thank you again!

