
NGC 1333 lum.....done?


Rodd

Recommended Posts

Back to issues that I can't seem to get away from.  This image has 411 1-min Lum subs--almost 7 hours with the FSQ-106 with .6x reducer and ASI 1600.  That HAS to be enough data.  But it still looks very grainy.  I don't want to use too much noise control--I have used just about as much as I care to (more, to tell the truth).  I was hoping for a smooth mono image like an Ha mono, but it doesn't look anywhere near as smooth as Ha.  Is this because it's Lum?  Will the addition of RGB data smooth things out?  What have others found?  Alan--was your Lum this grainy?  You have a great image there.  Is there hope for mine?  So far every time I have attempted this image I get the same type of result--no matter how much data I collect.  This is the best I have managed.  The big question: more data?  Or will 4-5 hours each of RGB be enough?

[Attached image: Lum-411-final5.jpg]



Nope, not good enough :D

I mean, many people would be happy with this result, but I'm guessing that you strive for top quality, and if you are going to push your data this much in processing then I would have to say you need more data.

What sort of LP are you working from - an SQM sort of reading? Lum is very sensitive to light pollution, and the more LP you have, the more you either need an LP suppression filter (which helps somewhat, but not nearly as much as going to a really dark site) or more exposure time.

Color will add color noise to this; it will not "smooth" it out in any way. But you can be much more aggressive with noise reduction in RGB than in Lum.

Just to get a sense of what LP does, here is a quick comparison of exposure times for equal SNR (I'll just throw in some of my "defaults" for gear, but this can vary depending on pixel scale, aperture, target brightness, etc ...).

In 18mag skies you need : ~ x23.74 more exposure
In 19mag skies you need : ~ x9.69 more exposure
In 20mag skies you need : ~ x4.1 more exposure
In 21mag skies you need : ~ x1.88 more exposure

Compared to a truly dark site - mag 22. (mag 25 target, 4 hours of 1-minute subs at 200mm F/6, binned x2 in software).
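Those multipliers can be roughly reproduced with a simple shot-noise-limited sketch. Everything here is an assumption for illustration - in particular, the target flux of ~0.72x the mag-22 sky background is just a value that happens to fit the quoted numbers, not the actual calculation (which also involves read noise, pixel scale, aperture, etc.):

```python
import math

def sky_flux(mag):
    # Relative sky flux, normalized so a mag-22 sky = 1.
    # A brighter (lower-mag) sky produces more flux.
    return 10 ** (-0.4 * (mag - 22))

# Hypothetical target flux relative to the mag-22 sky background.
# ~0.72 reproduces the quoted multipliers; the real value depends on
# aperture, pixel scale, and target surface brightness.
TARGET = 0.72

def exposure_multiplier(sky_mag, dark_mag=22):
    # Shot-noise-limited model: noise variance is proportional to the
    # total (target + sky) flux, so for equal SNR the exposure ratio is
    # t1 / t2 = (S + B1) / (S + B2).
    return (TARGET + sky_flux(sky_mag)) / (TARGET + sky_flux(dark_mag))

for m in (18, 19, 20, 21):
    print(f"mag {m}: x{exposure_multiplier(m):.2f}")
```

The small remaining differences from the quoted values come from the terms this toy model ignores.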


Hi Rodd. My guess here is that the conditions weren't as good as you would have liked, as there are halos around the stars too.

One thing: I'm not sure why you expect to have a lovely smooth image in only 7 hrs, as the best imagers throw 30 hrs plus at an image in total--so about 15 hrs in lum--and at proper dark sky locations.


1 hour ago, vlaiv said:

Nope, not good enough :D [...] Color will add color noise to this; it will not "smooth" it out in any way. But you can be much more aggressive with noise reduction in RGB than in Lum. [...]

I don’t agree about color. If I add 500 red subs to this it will contain 900 subs. You know the equation for signal and noise with respect to the number of subs. Whenever I have made a super luminance out of LRGB subs it is always smoother than the lum by itself.

Rodd


1 minute ago, Rodd said:

I don’t agree about color. If I add 500 red subs to this it will contain 900 subs. You know the equation for signal and noise with respect to the number of subs. Whenever I have made a super luminance out of LRGB subs it is always smoother than the lum by itself.

Rodd

Not sure what you are trying to say, but here is a breakdown of the different options and what sort of SNR you can expect.

1. Original post stated you have total of about 7 hours of Lum data, and question was about 4-5 hours of RGB each.

Each of the R, G and B filters has a narrower spectral range than Lum, and it is more than likely that each of them will contain less signal than the L channel (the only time one can be equal is if you are imaging an emission source whose emitted wavelengths fall completely into one of the R, G or B filter ranges - that filter will have signal equal to L, while the others will have 0). On most broadband targets we can roughly say that each of R, G and B will have about a third of L (in any case, signal of R+G+B = signal of L for filters with no overlap or gap in their spectral coverage).

You are stacking R to the R stack, G to the G stack and B to the B stack - each of those stacks will have less total exposure time and less signal than L - there is no way that any of them will have a higher SNR than L.

Depending on how you do the color transfer, the resulting image will have lower or equal SNR in luminance than L mono. A straight RGB ratio will give poor results (it modulates noise from L with noise from each channel). Lab color transfer is a somewhat better option (it lowers perceived luminance and color noise because you will be working in a perceptually uniform color space and using the luminance from your L data).

2. You can stack L, R, G and B from 7 + (4-5)x3 hours of exposure much more effectively - but this is something I have never seen anyone mention doing, and I have not found a reference to it anywhere. You need a special stacking algorithm to be able to do it.

For L you will be stacking your L subs, but also R+G+B subs (take one of each, add them, and write the result as one synthetic Lum sub), into the same stack. It's best if you have equal numbers of R, G and B subs so you can match them up into synthetic L subs.

For each color you will be stacking that color's subs, but also L-(sum of the other two colors). Again, you pick one sub of each type to do the calculation and produce an additional color sub.

When you do something like this, you need to be very careful about the calibration you perform and that your filters are suitable for this sort of thing: their spectral ranges need to have no or minimal overlap, and together they need to cover the full L spectrum. You also need to match exposures with coefficients for each sub, a normalization routine that can deal with such artificial frames, and a stacking algorithm that can handle subs of different SNR properly.

You can create an artificial lum from the R, G and B stacks by adding them and then averaging that with your original L stack - but this can have either a positive or a negative effect on total SNR, depending on just how much SNR there is in each channel (and that can vary with the color of the target and the conditions each set was taken in).

Straight averaging works best only if all the averaged subs have the same SNR. When SNR varies across subs, the best way to stack them is to assign each a weight, based on SNR, that maximizes the total stack SNR. This is not that complicated to calculate provided you know the exact SNR of each sub - a bit of math: take partial derivatives, set them to zero to find the minimum (you want minimal noise in the result), then solve the system of equations (matrix math). The problem, of course, is that you don't know the exact SNR of each sub - and this is where a fancy stacking algorithm steps in: it compares each frame to the other frames and tries to estimate from that how much noise each contains.

But even doing this fancy stuff will improve your L SNR by a factor of less than x1.4 (for the case of each color being 4-5 h total) - less than you would get with another 7 h of exposure in L.
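The SNR-weighted stacking idea can be sketched in a few lines. This is a minimal illustration, not any particular software's algorithm; it assumes all frames carry the same signal and that each frame's noise is known exactly - which, as noted above, is precisely what real stacking software has to estimate:

```python
import math

def straight_stack_snr(signal, noises):
    # Plain sum stack: signal adds linearly, independent noise in quadrature.
    return signal * len(noises) / math.sqrt(sum(n ** 2 for n in noises))

def weighted_stack_snr(signal, noises):
    # Inverse-variance weights (w_i = 1/sigma_i^2) maximize the stack SNR
    # when all frames carry the same signal.
    weights = [1.0 / n ** 2 for n in noises]
    stacked_signal = signal * sum(weights)
    stacked_noise = math.sqrt(sum((w * n) ** 2 for w, n in zip(weights, noises)))
    return stacked_signal / stacked_noise

# One good frame (SNR 20, noise 1) plus one poor frame (SNR 10, noise 2):
noises = [1.0, 2.0]
print(straight_stack_snr(20, noises))   # ~17.89 -- worse than the good frame alone
print(weighted_stack_snr(20, noises))   # ~22.36 = sqrt(20^2 + 10^2)
```

With optimal weights the combined SNR is sqrt(SNR1^2 + SNR2^2 + ...), so adding a frame can never make the stack worse; with a straight sum it can.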


38 minutes ago, vlaiv said:

Not sure what you are trying to say, but here is a breakdown of the different options and what sort of SNR you can expect. [...]

I am trying to say that a stack of 500 red, 500 green, 500 blue and 500 lum will have a better SNR than 500 lum alone.  Also, if I do that, the advantage over collecting only lum is that I will be able to make an LRGB image.  Many folks make a very respectable image of NGC 1333 with a lot less than 30 hours. 411 subs with the ASI 1600 is a fair number; not often do people take much more--often a lot less. The sky does stink, I must admit. But some images come out great.


Great start of a new target, Rodd. I think you should definitely go for RGB, and create a super-L.

Most LP still comes from sodium and mercury lamps. RGB filters are usually designed to block these lines, so they will give less LP noise. If you shoot from a light polluted site, you should consider a light pollution filter in your L slot. Maybe an IDAS, which still has enough transmittance over most of the visible colour range. Another option is to not shoot L at all, and spend that time on more RGB.


2 minutes ago, Rodd said:

I am trying to say that a stack of 500 red, 500 green, 500 blue and 500 lum will have a better SNR than 500 lum alone.

You mean if you add them to a single stack? Again, I have to say that it is not necessarily so - it depends; it can actually be worse than a stack of 500 lum only.

I believe you are talking about a regular average or sum stack (those are really the same - same SNR in the end)?

Look at this example. I'm going to stack only two frames, but I want to make a certain point:

Let's start with two frames of equal SNR; that is the scenario that is talked about most often (well, for some reason nobody really talks about mismatched SNR when stacking):

[Table: frame 1 and frame 2 each with signal 20, noise 1 (SNR 20); stack: signal 40, noise √2 ≈ 1.41, SNR ≈ 28.28]

The left two columns are the two frames and the right column is their stack. Signal adds like signal - simple addition - and noise adds like noise - square root of the sum of squares. The result is what you would expect when stacking two frames: total SNR is the base SNR times the square root of the number of stacked frames - in this case 2, so square root of 2 (~1.41) multiplied by the original SNR gives ~28.284. All good so far.

But let's try two frames of mismatched SNR - say 20 and 12.5:

[Table: signal 20, noise 1 (SNR 20) and signal 20, noise 1.6 (SNR 12.5); stack: signal 40, noise ≈ 1.89, SNR ≈ 21.2]

Oh, look at that - hardly any improvement. We stacked 2 frames and the total in this case is ~21.2, much less than the 28.28 we got when both frames were of equal, good SNR.

We can go further:

[Table: signal 20, noise 1 (SNR 20) and signal 20, noise 2 (SNR 10); stack: signal 40, noise √5 ≈ 2.24, SNR ≈ 17.89]

Look at this! Stacking one good frame (SNR 20) with one poor one (SNR 10 - note that the SNR of the second frame is only half that of the first) gives a total SNR of ~17.889 - actually lower than the first frame alone!

Moral of the story: if your R, G and B frames are of lower SNR than your L frames (and they will be - I've explained why), it can happen that the total stack has a lower SNR than the Lum frames alone.

This is one of the reasons people don't shoot wide band data with the Moon up in the sky - adding such data to data collected on a moonless night can often degrade the moonless data if you use "regular" stacking methods.
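The three worked cases above can be checked numerically. This sketch assumes, as in the example, that the frames share the same signal level (20) and differ only in noise, which is how the quoted SNR values were constructed:

```python
import math

def stack_snr(signal, noises):
    # Sum stack of equal-signal frames: signals add linearly,
    # independent noises add in quadrature.
    return signal * len(noises) / math.sqrt(sum(n ** 2 for n in noises))

# Two equal frames, SNR 20 each (signal 20, noise 1):
print(stack_snr(20, [1.0, 1.0]))    # ~28.284 = 20 * sqrt(2)
# One SNR-20 frame plus one SNR-12.5 frame (noise 1.6):
print(stack_snr(20, [1.0, 1.6]))    # ~21.2
# One SNR-20 frame plus one SNR-10 frame (noise 2):
print(stack_snr(20, [1.0, 2.0]))    # ~17.889 -- below the SNR-20 frame alone
```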


27 minutes ago, vlaiv said:

You mean if you add them to a single stack? Again, I have to say that it is not necessarily so - it depends; it can actually be worse than a stack of 500 lum only. [...]

Your theory is sound. But it will take a lot of theory to make 2,000 subs have a lower SNR than 411. I would put my $ on the 2,000.

Rodd


34 minutes ago, wimvb said:

Great start of a new target, Rodd. I think you should definitely go for RGB, and create a super-L.

Most LP comes from sodium, and mercury, still. RGB filters are usually designed to block this. They will therefore give less LP noise. If you shoot from a light polluted site, you should consider a light pollution filter in your L-slot. Maybe an IDAS, which still has enough transmittance over most of the visible colour range. Another option is to not shoot L at all, and spend that time on more RGB.

I heard that an LP filter really reduces signal. I suppose as long as my RGB data is shot at the same resolution as my lum (and it is; usually my FWHM values for RGB are lower than lum), then why do I need lum? Because Olly thinks lum is the key, and critical. Maybe my sky just can’t handle the truth....err...I mean lum.


8 hours ago, Rodd said:

I heard that an LP filter really reduces signal. I suppose as long as my RGB data is shot at the same resolution as my lum (and it is; usually my FWHM values for RGB are lower than lum), then why do I need lum? Because Olly thinks lum is the key, and critical. Maybe my sky just can’t handle the truth....err...I mean lum.

Yes, it does. But the signal it is intended to reduce is that from light pollution. Imo, IDAS filters are generally good at reducing LP while maintaining colour in an image. Many UHC filters (used as LP filters) are not so good, because they block too much of the wanted signal. As Olly has said on more than one occasion: it's not what filters let through, but what they block, that's important.

Very much so, especially at Olly's dark site. But with increasing LP, truths change. There is no general recipe or best practice in AP that works everywhere regardless of conditions, other than: experiment. Fortunately this is easy. Just collect the RGB data, with, of course, as long an integration time as you are willing to spend. Then process:

1. just RGB

2. LRGB with the luminance you have collected already

3. LRGB with synthetic luminance from your RGB data

4. LRGB with superluminance from L and from your RGB data

Start by comparing the luminance images from 2, 3, and 4.

(You can of course do the same on data you already have)

This should answer the question whether luminance will add quality. LRGB processing came about when cameras were noisier than they are today. By collecting just a little colour data (often binned) and a lot of luminance data for detail, it was possible to improve image quality with limited integration time. But if you shoot at best resolution, and collect about as much luminance as colour data, the benefit of luminance may be marginal. Especially so if that luminance suffers more from light pollution than the RGB. If your RGB filters don't overlap, and have the same total bandpass as a luminance filter, then one hour of luminance will be the same as one hour each of R, G, and B, because the signals will add. But if the filters don't match with L, as near the Na/Hg spectral lines and near the Oiii line, the situation becomes totally different.

As I wrote before, RGB filters are designed to block sodium and mercury lines, which often make up most of man made light pollution. L filters will let this light through in its full g(l)ory. That's one reason to reconsider the inclusion of luminance.

RGB imaging is more forgiving on optics. With your scope, chromatic aberration should be minimal. Mono RGB imaging, unlike OSC (and L), allows refocusing between colours, reducing the effect of CA. But CA will still be present in luminance. Another reason to reconsider the use of luminance.

These are just two reasons why best practices are not general, but depend on local imaging conditions.


3 hours ago, wimvb said:

You can of course do the same on data you already have.

Ahh, but to really do it right, the comparison can't be the RGB I have versus the LRGB I have.  Really, the image without the luminance should include more RGB to compensate for the lack of Lum.  In other words, the total integration times of the images should be the same.  It would not be appropriate to compare 6 hours of RGB with 6 hours of RGB plus 20 hours of lum.

But I understand what you are saying.  I have been a fool collecting 9 hours of lum on M78 and 7 hours on NGC 1333.  I always thought that the reason I don't see much benefit from lum is that I don't shoot enough of it.  But it did seem to help my galaxies.  Then again, an equal amount of RGB may have benefited them the same.  Also, when I shot M31 the galaxy was high in the sky and the nights were very dark. (LP changes sometimes....don't know why.  People sleep?  But street lights stay on, so it shouldn't.  Yet the light domes around me certainly do fluctuate at times--not usually, but they do.)

I will say this....I don't see any benefit in SLUMs (lums made with RGB data).  I have done that a few times and do not see an advantage.  Also--I have done super lums (lums made with LRGB data), and do not see a benefit over just using the Lum data.  In fact, most times I choose to use the Lum data alone.

GIVE ME NARROW BAND ANY DAY!  

 


42 minutes ago, Rodd said:

LP changes sometime....Don't know why

One reason it changes is that when there's more moisture in the air, light is scattered more. Over a city, there can even be a temperature inversion layer and a significant increase in smog. You can easily test for moisture (and maybe use this method to assess air quality). Shine a torch straight up. If you can't see the beam, there's nothing scattering the light. But if you can see the beam, there's moisture in the air (or, in my case, ice crystals). The better you can see the beam, the worse conditions are. In the most extreme case, you have fog.


17 hours ago, Allinthehead said:

One thing i'm not sure why you expect to have a lovely smooth image in only 7 hrs

 

17 hours ago, vlaiv said:

Nope, not good enough :D

 

14 hours ago, wimvb said:

Great start of a new target

OK---I did a few things.  I eliminated many subs with high FWHM values--this image only has 305 subs. I used a touch of MURE Denoise--only a 0.6 variance--and I was more careful with noise control--I used less, actually.  I also took Vlaiv's hint and was not as aggressive with the stretch.  An improvement?  It seems a bit less heterogeneous to me (in a good way).  On to RGB, or should I go for a massive number of lum subs and try to achieve a lum mono that will do wonders for RGB data?

 

[Attached image: Lum-305b.jpg]


8 minutes ago, Rodd said:

So...not much of an improvement after all, huh?

Rodd

It's difficult to compare on my mobile phone, so I'm not even going to attempt that. But I think that, since you eventually want this to become a colour image, it may be more straightforward to make it a colour image first, then decide if more luminance is needed. The luminance, used in lrgb, may need a different stretch to work with the colour data.

Just my € 0.02


Step back a sec. This is an incredibly deep rendition in which very faint dusty structures are very easily visible. Some noise is going to be pretty well inevitable when you stretch faint stuff as hard as this. Do you actually need to stretch it quite so hard? Personally I don't find the noise intrusive but, if you do, a gentler stretch would still give remarkable faint structure. Alternatively you could make a partial stretch, sufficient to get the dust shapes into view, then pin the black sooty patches on the Curve and then stretch only above that.

In a nutshell, don't beat yourself up!

Olly


13 minutes ago, Rodd said:

So...not much of an improvement after all, huh?

Rodd

Now back at my computer: the new version is a definite improvement, at least as far as noise is concerned. Whether this is due to selecting better subs, or less aggressive stretching is difficult to say. But jpeg nr 2 is definitely better.


2 minutes ago, ollypenrice said:

Step back a sec. [...] In a nutshell, don't beat yourself up!

You know, when I first started processing this I had you in mind, knowing that I should take this exact advice (you have given it before!).  I tried to limit my stretch--not enough, I know.  I think I am trying for a result that just is not suited for this data--maybe not any lum data of this target.  Maybe a stand-alone lum image of this region is not realistic from my area.  Is it possible to get a stand-alone lum that resembles a stand-alone Ha image (not Ha in this region, but you know what I mean)?  In some cases it is--M31 or other galaxies--bright targets.  But dusty, low-dynamic-range regions?

Rodd


10 minutes ago, wimvb said:

Now back at my computer: the new version is a definite improvement, at least as far as noise is concerned. Whether this is due to selecting better subs, or less aggressive stretching is difficult to say. But jpeg nr 2 is definitely better.

It's hard to say.  I usually don't see much improvement between all the subs and the best 70%.  So maybe it's the processing.  Although the star size is tighter (I eliminated high-FWHM subs).


2 minutes ago, wimvb said:

The data is still available.

I remember.  Good idea. (Emoji with eyebrows moving up and down rapidly, like someone considering a devious plot...couldn't find that one.)

