Eternal issue of blown star cores


So, I'm processing the Iris Nebula. I have most of my color: 13 h of 10-minute RGB subs, and I'm still ticking away on a few more hours. I also have 10 hours of Lum in 20- and 10-minute subs.

So I started setting up all the processing and got carried away. Led me down the path to this.

[attached image: current processing of the Iris Nebula]

The problem is the star cores and it comes from the lum side when I blend it in. 

[attached image: star cores blown out after blending in the Lum]

The problem almost always comes in the MaskedStretch, which is both great and horrible. It controls the outer parts of the stars beautifully, but I always end up with blown cores. 

Does anyone have any golden advice on how to get this stretched without blowing it up?


10 hours ago, Datalord said:

Does anyone have any golden advice on how to get this stretched without blowing it up?

In PixInsight, if your luminosity value is greater than about 0.8 (which you can measure with the readout probe), you will find it very difficult to get colour into that part of the image.

If I have an object that I'm imaging, I will generally stretch it until it has a maximum of about 0.8 in the area of interest. This becomes Lum#1. I then take a look at the lum values of the stars, and if a lot of them are very high (e.g. > 0.8) I perform a separate stretch that concentrates just on the star field - this will obviously be much lower, and it creates Lum#2. I then blend the two results to give Lum#3, which is the best compromise between the two stretches.
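One common way to do this kind of blend (the exact blending method isn't specified above) is with a star mask. A minimal numpy sketch, where lum1, lum2 and star_mask are illustrative names rather than anything PixInsight-specific:

```python
# Minimal sketch of blending two stretched luminance images with a star mask.
# lum1: stretch optimized for the object (max ~0.8 in the area of interest)
# lum2: gentler stretch optimized for the star field
# star_mask: float array in [0, 1], 1 on star cores, 0 elsewhere (all names illustrative)
import numpy as np

def blend_stretches(lum1: np.ndarray, lum2: np.ndarray, star_mask: np.ndarray) -> np.ndarray:
    # Take the star-friendly stretch where the mask is 1, the object stretch elsewhere.
    return star_mask * lum2 + (1.0 - star_mask) * lum1
```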

Although this is a matter of personal taste, I personally would not be too concerned about eliminating all stars with white cores, some stars are very bright compared to their neighbours and so to me, it is quite natural for them to look this way.   However, if you have a very bright star (with a white core) which you really don't like then I would recommend minimizing its impact through size reduction.

(If you are interested in seeing my attempt at the Iris Nebula - have a look in my album Deep Sky III) 

Alan


Thanks Alan, that's very tangible advice. I looked over my pre-stretched picture and the cores are above 0.9 in both lum and color. I have some narrowband data I took as a test, which might work well for the star field and cores, so I'll try that.


I can explain a technique for never getting a blown star core, provided the star is not clipped in the color subs (the luminance can actually be clipped - it won't make a difference).

The technique is quite simple and can easily be tried, but this is not a tutorial for any particular software, so anyone trying it with their favorite software will need to figure out how to do each step (all are fairly easy).

Have your color subs stacked and wiped (meaning background gradients removed and the background neutralized to a gray value). Also make sure that you don't push your color subs negative - add an offset so that the background is not 0 and there are no negative pixels.

Now turn this into normalized RGB ratios - here is the easiest way to do it:

Stack those three subs with the maximum stacking method (each resulting pixel will be the maximum value out of R, G and B).

Divide each of the R, G and B subs by the resulting maximum stack to obtain normalized R, G and B subs. These will look ugly, but don't worry about it - this is just the color information. Combine the three into a color image - again, don't worry that you get a color mess at this point.

Now process your luminance layer as if it were a mono image: neutralize the background and do your histogram stretch. Don't worry that star cores clip at 100% at this point - this is the luminosity information, and star cores are supposed to be 100% bright. Don't blow out the DSO core though, as you will lose detail.

Now comes the mixing part. Take the normalized color image and split it into L*a*b* channels. Normally one would discard the L channel, but we won't do that in this method.

Take the L from the color part and multiply it by the stretched luminance. The result is the new L that we will use. Now just recompose the final image with this new L and the a and b channels from the color part, and convert back to RGB.

Now you have a "full color" image without blown star cores.
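For anyone who wants the whole procedure in one place, here is a conceptual numpy/scikit-image sketch of the steps above - not tied to any particular astro package, and with illustrative function and variable names:

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def blend_normalized_color_with_lum(R, G, B, lum_stretched, eps=1e-6):
    # R, G, B: wiped, white-balanced linear colour channels; lum_stretched: the
    # separately processed and stretched luminance. All arrays are float in [0, 1].

    # "Maximum stack" of the three channels: per-pixel maximum of R, G and B.
    max_rgb = np.maximum(np.maximum(R, G), B)

    # Normalized RGB ratios: the brightest channel becomes 1, ratios are preserved.
    norm = np.stack([R, G, B], axis=-1) / (max_rgb[..., None] + eps)

    # Split the normalized colour into L*a*b* and keep L as the "intrinsic" luminance.
    lab = rgb2lab(np.clip(norm, 0.0, 1.0))
    intrinsic_L = lab[..., 0] / 100.0

    # New L = stretched luminance x intrinsic luminance; recompose and convert back to RGB.
    lab[..., 0] = 100.0 * np.clip(lum_stretched * intrinsic_L, 0.0, 1.0)
    return np.clip(lab2rgb(lab), 0.0, 1.0)
```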

 

For those who are interested in why star cores get blown "normally" and why this approach preserves them, here is a brief explanation.

When you do a histogram stretch on a color image where each channel holds 0-1 values, you end up stretching all three components, R, G and B. When you stretch to your liking, the bright parts get compressed towards 1. There is no way to avoid this, and you end up with R:1, G:1, B:1 - which is white.

This is why you get white cores for all stars. In fact, the problem is bigger than that. Imagine you have a blue star, a red star and a white star (for the purpose of discussion let's go with pure red and pure blue, although stars never have these colors).

Imagine also that these three stars are equally bright. Any stretch will keep them equally bright, but if we want to keep the colors the way they are - colors don't have the same luminosity! In RGB, the white point sets the maximum luminosity; any other color that can be displayed has less luminosity than that. Here is a good example:

[attached image: chart of relative luminance of pure colors compared to white]

You can see that yellow comes close to the full luminosity of white, but pure red is at 54% and pure blue is even lower at 44% - this means the stretch needs to depend on color; it can't be done the same way for every pixel in the image.

This is what we do above. First we get pure color in terms of RGB ratios by normalizing the color information - this just means we get the purest RGB ratio that can be shown in RGB color space without altering that ratio. The largest of the three components (hence the max function) ends up at 1, no component is larger than 1 (so no clipping), and we scale them so the ratio is preserved. Mind you, this is a simple method; if you want to be precise, this is not the best way to handle color - you should not be doing it in the RGB domain.

Next we use luminance to bring out the detail. In the end we take our "pure color" information and look at how much luminance each pure color requires. For example, a pure red star will have pixels with an "intrinsic" lum level of 54%, while a pure blue star will have an "intrinsic" lum level of 44%.

When we multiply the stretched true image luminance by this "intrinsic" luminance, we are in effect adjusting the lum so that it can represent the actual true color without clipping.
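As a quick worked check of that last step (a hedged illustration using scikit-image for the L*a*b* conversions - not part of the original write-up), the CIE L* of pure red is about 53, close to the 54% figure quoted above:

```python
# Multiply-and-recompose for a single "pure red" star pixel whose core is clipped
# in the stretched luminance.
import numpy as np
from skimage.color import rgb2lab, lab2rgb

pure_red = np.array([[[1.0, 0.0, 0.0]]])    # normalized colour of the star pixel
lab = rgb2lab(pure_red)                      # L* ~ 53, a/b carry the redness
intrinsic_L = lab[..., 0] / 100.0            # "intrinsic" lum of pure red, ~0.53

stretched_lum = 1.0                          # the core is clipped in the stretched lum
lab[..., 0] = 100.0 * stretched_lum * intrinsic_L

print(lab2rgb(lab))                          # ~[1, 0, 0]: still red, not white
# A naive per-channel stretch would have pushed this pixel to [1, 1, 1] (white) instead.
```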

Hope that all makes sense.

 

 


47 minutes ago, Datalord said:

Thanks Alan, that's very tangible advice. I looked over my pre-stretched picture and the cores are above 0.9 in both lum and color. I have some narrowband data I took as a test, which might work well for the star field and cores, so I'll try that.

OK, I'm glad that you found that helpful.

One other item that you may wish to consider: given your pre-stretched values are very high (e.g. 0.9), it is likely that you're on the non-linear portion of your camera's response for these parts of the image. If you examine your unstretched RGB image and see odd-looking star colours on the very bright stars - magenta/purple is common - this is an indication that you are on the non-linear portion, and a consequence is that star colours will not be accurately represented. To correct for this, I'd suggest you experiment with the PixInsight repair script Repaired HSV Separation on the unstretched RGB image. I generally use the default values, measure the approximate size of the problem stars in pixels and insert this value into the routine.

Alan


Not a clue about PI, but other programs have DDP with sliders that can be used to set the black and white points in the luminance image before moving to P'Shop or suchlike for processing, giving headroom for further fiddling.

Dave

 


1 hour ago, vlaiv said:

Hope that all makes sense.

It really does. Now the problem is to figure out how to do this normalization in PI. The only part I can't just do in PixelMath is the "Maximum stacking method", which I have no clue where to find.


46 minutes ago, alan4908 said:

Repaired HSV Separation on the unstretched RGB image

I've been playing with that today, but grief ensues when the resulting RGB image is grey and flat and very little information about why can be teased out of it.


12 minutes ago, Datalord said:

It really does. Now the problem is to figure out how to do this normalization in PI. The only part I can't just do in PixelMath is the "Maximum stacking method", which I have no clue where to find.

There should be a maximum stacking option in regular stacking - just create a stack of the three subs (R, G and B), turn off all the "advanced features" - normalization, pixel rejection, and so on - and do a simple max stack.

Let me see if I can find docs for that:

[attached screenshot: stacking settings]

So you want maximum stacking, no normalization, no weights, and no scale estimator - essentially all other options turned off.


3 hours ago, vlaiv said:

Now comes the mixing part. Take the normalized color image and split it into L*a*b* channels. Normally one would discard the L channel, but we won't do that in this method.

Take the L from the color part and multiply it by the stretched luminance. The result is the new L that we will use. Now just recompose the final image with this new L and the a and b channels from the color part, and convert back to RGB.

Something here is beyond me.

I have all the maximum and division and combination done, which gets me to this:

[attached image: normalized colour combination]

Extracting CIE L*a*b* gets me:

[attached image: extracted L*a*b* channels]

Multiplying L by my processed Lum:

[attached image: L multiplied by the processed Lum]

ChannelCombination back gives:

[attached image: recombined RGB result]

Blown cores and this is even without any form of stretch in the color space at the time of combination. Don't I have to apply the normalized color to a combined color image before applying any Lum?


I think you have performed all the steps correctly, but there is an issue with your color data.

- the first point is that you have star saturation/clipping in your color data on a few of the brighter stars - the ones that ended up white in the finished image

- the second thing that you probably missed is background neutralization - the wipe - and a proper white balance.

For this to work, your color channels need to be white balanced already and have an even background. With this image that is a bit hard to do, since most of it contains nebulosity, so you have to "pick out" places where the background sky shows through and bring those down to an equal level - again, don't go to full 0, as you don't want negative or zero-valued pixels.

The white balance is off, as the color-only composite has a very yellow cast - that means the whole image has a yellow cast - and most stars in the image are red, in fact very red.

Maybe post your FITS stack and I'll run the procedure in another piece of software to see if I get the same results - just to rule out any issues with the procedure?


21 minutes ago, vlaiv said:

For this to work, your color channels need to be white balanced already

But, how do I do that? White balancing works on the entire colour range. I usually do PhotometricColorCalibration on the combined color image.

What I could do is to process the colors combined, then split them and go through this procedure?


3 minutes ago, Datalord said:

But, how do I do that? White balancing works on the entire colour range. I usually do PhotometricColorCalibration on the combined color image.

What I could do is to process the colors combined, then split them and go through this procedure?

Do you do photometric color calibration on the unstretched, linear color image?

If so, then yes: combine R, G and B into a color image, use the screen transfer function to see what the image looks like but keep working on linear data (I think PI can do this, right?) - wipe the background and make it black/uniform, and do the photometric color calibration. When you are done, split the color image back into R, G and B and then proceed to create the normalized values.

 


Hmm, that got me closer, but I feel like this is an approach where I skip over a lot of steps and chances to make tiny adjustments before the stretch, which simply isn't possible doing it like this. This is where I got to, still with lots of issues:

[attached image: latest attempt]

I have uploaded the combined colour file, which has been through DBE, background neutralization, colour calibration and SCNR.

RBG_ColourCalibrated.fit


Yes, that seems to be more like what one would expect.

Again, you have some seriously red stars - I'm not sure you can get stars that red in real life; even 2000 K is not going to produce such a red color. But then again, RGB is not really the proper color space for color calibration - I wonder whether PI's color calibration uses a different color space or not.

One thing that could help with the result would be not stretching the luminance as much. There is another thing that can help - variable saturation. Done as described, the method keeps maximum saturation for each pixel. This means that if the color data is noisy compared to the lum data and you stretch the lum quite a bit, there will be color noise in the faint regions and it will be quite evident.

Reducing saturation in the noisy regions can help with this. You can actually use the stretched luminance to regulate saturation in the dark regions of the image - but these are all tricks. If you want to try it: take the stretched lum and increase its brightness by adding a constant offset (not by further stretching, as that will increase noise - you don't want that; you can even blur the image to remove noise). Don't worry if you blow out the bright parts - in fact, push to 100% all the parts where you want to keep 100% saturation. Use this modified luminance and multiply it with the normalized color; the rest of the processing is the same. This will in effect reduce the "normalization" in the darker parts, so a pure-channel color that was 1:0:0 will end up as 0.5:0:0, for example.
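A rough numpy/scipy sketch of that saturation trick, assuming norm_rgb is the normalized colour image from the earlier sketch and treating the offset and blur values as nothing more than starting points to experiment with:

```python
# Brighten (and optionally blur) the stretched luminance, clip it at 1, and use it to
# scale the normalized colour so that faint/noisy regions end up less saturated.
import numpy as np
from scipy.ndimage import gaussian_filter

def saturation_weight(lum_stretched, offset=0.3, sigma=2.0):
    smoothed = gaussian_filter(lum_stretched, sigma)   # blur so we don't amplify noise
    return np.clip(smoothed + offset, 0.0, 1.0)        # 1.0 where full saturation is wanted

# norm_rgb: H x W x 3 normalized colour image from the earlier sketch
# scaled_color = norm_rgb * saturation_weight(lum_stretched)[..., None]
```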

Maybe the best thing at this point is to use the image above, which has colorful stars, to blend the missing star color into your "normally" processed image - just select the star cores and copy the data from the image above into your standard processing?


Ok, I think I found a way that manages the luminance blowout without causing too much mayhem. Essentially I truncate the blown-out centres by applying a curve to the lum values above a threshold. I'm reasonably happy with the result when merged into my colour image, so I'll move forward with this. Left is the processed Lum, right is the truncated Lum.

[attached image: processed Lum (left) vs truncated Lum (right)]
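For reference, one way to express a "truncation" like this is a soft knee above a threshold - a sketch with guessed values, not the exact curve used here:

```python
# Compress luminance values above a threshold t instead of hard-clipping them.
# t and k are illustrative starting points, not the poster's actual curve.
import numpy as np

def soft_truncate(lum, t=0.85, k=0.2):
    # Below t: unchanged. Above t: compressed towards t so star cores keep some headroom.
    return np.where(lum > t, t + (lum - t) * k, lum)
```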

@vlaiv, I think your method holds merit with the colour normalization, I just can't get it working on this particular set. Thanks for helping me move this one forward. 


On 01/06/2019 at 02:25, Datalord said:

Does anyone have any golden advice on how to get this stretched without blowing it up?

I just process the lum with all the bells and whistles. Then, before LRGB combination, I apply a curves transformation with the white point pulled down to about 0.85, but otherwise linear.

The L to be used in the LRGB combination should have similar values to the L that you can extract from the RGB image before combination. The difference between the two is sharpness, control of local contrast, noise reduction, etc.
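In other words, that last step before combination amounts to a simple linear rescale of the processed lum. A one-line sketch of the idea, assuming "white point pulled down to 0.85" means the output maximum lands at about 0.85:

```python
import numpy as np

def rescale_whitepoint(lum: np.ndarray, white: float = 0.85) -> np.ndarray:
    # Linear curve: just pulls the top of the histogram down to `white`,
    # leaving headroom so colour can still get into the brightest pixels.
    return lum * white
```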


A less complicated way is to use the "Increase Star Color" action in PS (it is included in Astronomy Tools, aka Noel Carboni's actions). You can do several rounds with it, and it brings the colour around the blown-out core into the core.



My preferred approach is to shoot the RGB frames at bin 1x1 and forget about luminance. In my view this has several advantages:

  1. Due to the reduced bandwidth of the filters, RGB images will tend to be sharper than L
  2. Any faint structures in the data are guaranteed to have matching chrominance information. This is not necessarily the case if you've gone deeper on the luminance
  3. As there is only one copy of the luminance data (derived from the RGB images) in the final stack colour saturation is easier to preserve

The one downside is that you will spend much more time collecting colour data, and the overall acquisition time will be longer. Incidentally, if you're shooting RGB at bin 1x1 and using luminance as well, then this is a mistake, as the only advantage of LRGB is being able to shoot the colours in reduced time using bin 2x2.

 

Andrew

Link to comment
Share on other sites

3 hours ago, andrewluck said:

Incidentally, if you're shooting RGB at bin 1x1 and using luminance as well, then this is a mistake, as the only advantage of LRGB is being able to shoot the colours in reduced time using bin 2x2.

I'm quite interested in this as it's a question I have asked myself in the past (but have never really been able to answer satisfactorily). Intuitively I have a feeling that it's not quite as simple as this, but I'm not entirely sure why. Can you maybe expand a bit on why you think this is true? For example, on an average imaging night of 6 hours total, why do you think an image created from 2 h R, 2 h G and 2 h B would be better than, say, an image from 3 h L and 1 h each of R, G and B? Would the 3 h of luminance not give a better result than the luminance created from the RGB? And how can we quantify the amount of colour data required to give colour to the luminance data in a good image?


If you're using LRGB then generally you're time limited. The target also matters: it works best on bright objects with a lot of detail, such as galaxies. Light pollution levels also have an impact. RGB filter sets usually have a gap in their passbands that attenuates sodium sky glow; L filters will record this light pollution, and generally you will have more work to do removing gradients from the L image. There will also be more shot noise, so you need more exposures to drive that noise level down. I should have mentioned this in my original post, as it's another advantage of RGB imaging under typical UK skies.

LRGB doesn't work well for me, as my targets are often dark nebulae and molecular clouds with extremely low levels of illumination and not much in the way of spatial information. Star colour in the field is also very important, and this is captured best with RGB. As usual, astrophotography is an art, not a science (in the context being discussed here), so use whatever works for you!

If you decide to do LRGB then most, if not all, of the spatial information is derived from the luminance frames, and most of your exposures will be dedicated to these to reduce the noise level and expose that detail. The goal in the colour channels is to drive the noise level down as quickly as possible. As no fine detail is required from these frames (they're being used to colour the luminance information), binning is the quickest, most efficient way to achieve this.

There's no requirement to balance the RGB exposure times with either method. Another approach is to assess the noise level in each channel and expose more in the channel(s) with more noise.

My last attempt at the Iris is here: http://littlebeck.org.uk/?p=1197 - albeit with a one-shot colour QHY9, so RGB only. I will have to revisit this with the mono camera.

Andrew


The topic of LRGB vs RGB is a seriously complex one, and even without the unknowns (the spectrum of the light pollution and that of the target) it gets complicated really quickly.

The fainter the signal, and the more pronounced a single component (be it R, G or B) is, the greater the difference will be in favor of the LRGB approach.

It comes down mostly to read noise, but color plays a very important role as well (the ratio of signal strengths in the R, G and B channels).

Maybe the simplest explanation of why LRGB works better can be found in an edge case (not likely to ever dominate an image, but it makes the point):

Imagine you have 3 hours each of L, R, G and B vs 4 hours each of R, G and B (12 h total imaging time in both cases). With LRGB you already have 3 h worth of luminance, but in order to process the RGB data - particularly to do a color-independent stretch - you need to create luminance from the RGB. Let's assume for simplicity that it is the sum of the R, G and B channels.

It is in fact a weighted sum; the L that best matches visual color intensity, if the color data is calibrated to the sRGB standard, is:

Y = 0.2126*R + 0.7152*G + 0.0722*B

(the Y component best matches human luminosity perception).

We will stick to the 1/3-each combination for simplicity, but the argument holds.

You might say that 4 h of "synthetic" luminance is better than 3 h of the real thing, but that might not actually be the case. Let's say you have a certain read noise and that you take 6-minute subs in each case.

The real L will contain 30 doses of read noise (3 h of 6-minute subs). The synthetic lum will contain 120 doses (4 h of 6-minute subs in each of the three channels). The faintest signal will be most affected by this, especially in parts of the image where you have single-channel signal - for example Ha nebulosity, which is picked up by the red channel only. There, creating luminance means adding in the empty G and B channels, which carry no signal yet each contribute read noise, so you are effectively only adding read noise to the synthetic luminance.
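To put rough numbers on that (a back-of-the-envelope sketch, assuming a plain sum for the synthetic luminance and an arbitrary read-noise figure):

```python
import numpy as np

read_noise = 2.0        # e- per sub (arbitrary example value)
subs_per_hour = 10      # 6-minute subs

real_L_subs = 3 * subs_per_hour          # 3 h of true luminance -> 30 doses
synthetic_subs = 4 * subs_per_hour * 3   # 4 h each of R, G and B -> 120 doses

# Read noise adds in quadrature, one dose per sub that goes into the sum.
print(np.sqrt(real_L_subs) * read_noise)     # ~11 e-
print(np.sqrt(synthetic_subs) * read_noise)  # ~22 e-, i.e. about twice as much
```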

On the other hand, when doing LRGB you don't need to use equal times - you can do something like 6:2:2:2 and get an even better result. This is because of human perception of noise: we are far more sensitive to variations in light intensity than in hue/saturation, so you can trade color noise for a smoother luminance and the image will look better. After all, that is the reason why binning 2x2 when shooting R, G and B works - it won't capture the finest detail, but that does not matter, as the brain picks up detail far more easily from the luminance data. (There are other areas where this fact is exploited - JPEG compression, for example, relies on it as well: it loses more information in the hue/saturation domain than in the luminance domain to achieve compression.)

As for RGB providing better saturation - that is solely down to processing, since you capture the same RGB data when doing LRGB (with the same camera/scope/filters). The color information will be the same; only the processing can lose color data if not done properly, but the capture process preserves it.

 

