
Posted

I have been trying to capture a nice and clean image of M51, and in my first session I caught an already quite nice image with only 1.5 hours in RGB.

On the 23rd, night 2, I managed to bring in over 5 hours! I figured I would see a big improvement sticking the two stacks together... Alas there appears to be no improvement at all 😕

Could it simply be a matter of the second night having that much worse SNR per sub, or could it be difficulty getting multiple nights to work well with each other in PixInsight in general?

When I stack night one and night two individually, they look very similar; I am struggling to tell a difference SNR-wise.

Is it just bad luck or am I realistically limited to one night per image or per colour?

Thanks

Image13 is both nights together with a quick PixInsight process (ABE, SPCC, stretch and colour boost only); the other file is, as described, only data from the second night.

Image13-2nd night only.tif Image13.tif

Posted

Difficult to measure anything objective about noise when the data is stretched, but to my eyes the image with both nights is noticeably better. Not that I know how to make those measurements objectively anyway :D.

[Attached screenshot: side-by-side crops of the two stacks]

Could be just a difference in the level of stretch of course, but a preliminary eyeball-only assessment makes me think the left image has a much better signal-to-noise ratio. The core parts of M51 are incredibly bright, so I don't think there are obvious SNR improvements to be seen there. Look at the tidal tail parts, for example: they are smoother, with significantly less RGB noise. Also, many of the Ha regions and bright blue clusters look tighter in the left image.

By the way, when you say sticking the 2 stacks together, do you mean actually just stacking the stacks rather than the data from the 2 nights? Stacking stacks will be less effective than integrating all the subs into a new stack.
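Just to illustrate that last point, here is a rough numpy sketch (all numbers made up) of why a 50/50 average of two master stacks with very different sub counts loses out to integrating every sub in one go:

```python
import numpy as np

rng = np.random.default_rng(0)
signal, noise = 100.0, 10.0                    # arbitrary per-sub signal and noise
n1, n2 = 12, 45                                # subs from night 1 and night 2

night1 = signal + noise * rng.standard_normal((n1, 256, 256))
night2 = signal + noise * rng.standard_normal((n2, 256, 256))

# Integrate every sub together: noise falls by sqrt(n1 + n2)
all_subs = np.concatenate([night1, night2]).mean(axis=0)

# "Stack the stacks": average each night first, then combine the two masters 50/50
stack_of_stacks = 0.5 * (night1.mean(axis=0) + night2.mean(axis=0))

print(f"all subs integrated: noise {all_subs.std():.2f}")
print(f"stack of two stacks: noise {stack_of_stacks.std():.2f}")
# The 50/50 combine under-weights the 45-sub night, so it ends up noisier.
```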

Posted
44 minutes ago, ONIKKINEN said:

Difficult to measure anything objective about noise when the data is stretched, but to my eyes the image with both nights is noticeably better. Not that I know how to make those measurements objectively anyway :D.

 

Could be just a difference in the level of stretch of course, but a preliminary eyeball-only assessment makes me think the left image has a much better signal-to-noise ratio. The core parts of M51 are incredibly bright, so I don't think there are obvious SNR improvements to be seen there. Look at the tidal tail parts, for example: they are smoother, with significantly less RGB noise. Also, many of the Ha regions and bright blue clusters look tighter in the left image.

By the way, when you say sticking the 2 stacks together, do you mean actually just stacking the stacks rather than the data from the 2 nights? Stacking stacks will be less effective than integrating all the subs into a new stack.

Hm, OK, maybe the processing will mask it a bit.

I just put the two nights together into a simple LRGB combination without stretching, and the second night by itself in another simple LRGB combination

I stacked the two nights combined by adding all the raw files into the WBPP script, so I avoided trying to stack two master lights (PIX refuses to stack fewer than 3 images anyhow)

Maybe this makes it a little easier, I don't really see any appreciable difference myself

both-night-raw.tif 2nd-night-only raw.tif

Posted

I think the easiest way to combine two datasets so as to reveal the SNR difference is to create a "split screen" scenario.

Both stacks need to be registered against the same sub and calibrated / integrated in the same way (so as to have the same intensity and be compatible).

Then half of either stack is copied and pasted directly over the other. This creates a "split screen" scenario for linear data and gives you the means to process both stacks in exactly the same way (whatever you do to process the image will affect both halves equally).
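In code terms it is something like this (a rough astropy sketch, file names are just placeholders):

```python
import numpy as np
from astropy.io import fits

# Placeholder file names: two stacks already registered to the same reference sub
# and integrated the same way, so their intensities are directly comparable.
stack_a = fits.getdata("stack_first_night.fits").astype(np.float64)
stack_b = fits.getdata("stack_both_nights.fits").astype(np.float64)

split = stack_b.copy()
half = split.shape[-1] // 2
split[..., :half] = stack_a[..., :half]        # paste the left half of one over the other

fits.writeto("split_screen.fits", split, overwrite=True)
# Process this single linear file however you like - both halves get identical treatment.
```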

Posted
27 minutes ago, pipnina said:

Hm, OK, maybe the processing will mask it a bit.

I just put the two nights together into a simple LRGB combination without stretching, and the second night by itself in another simple LRGB combination

I stacked the two nights combined by adding all the raw files into the WBPP script, so I avoided trying to stack two master lights (PIX refuses to stack fewer than 3 images anyhow)

Maybe this makes it a little easier, I don't really see any appreciable difference myself

both-night-raw.tif 145.62 MB 2nd-night-only raw.tif 143.32 MB

Hmm, the differences really do seem suspiciously small. If I had to guess which one had 5x the data, I'm not sure I would make the right call blind.

Your flats are overcorrecting, by the way, and that could throw a spanner into the works for normalization of the subs, which will ruin any hope of getting the best possible image, especially if conditions differed between the two nights. Without working normalization, adding more subs might not necessarily improve the stack. Maybe that has something to do with it?

It also looks like you have some pretty heavy light pollution, judging from the levels of the stacks. Was one of the nights just better for transparency? I have seen transparency shift an imaging location's Bortle rating by more than one class (from 4 to 3).
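A quick way to sanity check that (a rough sketch, the folder names are made up) is to compare the median background level of each night's subs:

```python
import glob
import numpy as np
from astropy.io import fits

# Made-up folder layout: one directory of raw subs per night
for night in ("night1_subs", "night2_subs"):
    medians = [np.median(fits.getdata(path)) for path in sorted(glob.glob(f"{night}/*.fits"))]
    print(f"{night}: median {np.median(medians):.0f} ADU, "
          f"sub-to-sub spread {np.ptp(medians):.0f} ADU")
# A large spread within a night, or a big jump between nights, is exactly what
# sub normalization has to absorb - and what wrecks it when the flats are off.
```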

Posted
1 hour ago, vlaiv said:

I think the easiest way to combine two datasets so as to reveal the SNR difference is to create a "split screen" scenario.

Both stacks need to be registered against the same sub and calibrated / integrated in the same way (so as to have the same intensity and be compatible).

Then half of either stack is copied and pasted directly over the other. This creates a "split screen" scenario for linear data and gives you the means to process both stacks in exactly the same way (whatever you do to process the image will affect both halves equally).

As in, normalise both stacked results with a linear fit, and register them so that flicking between them in an image viewer or Photoshop will give me a flickering 1:1 comparison?

It'd be worth a shot!

1 hour ago, ONIKKINEN said:

Hmm, the differences really do seem suspiciously small. If I had to guess which one had 5x the data, I'm not sure I would make the right call blind.

Your flats are overcorrecting, by the way, and that could throw a spanner into the works for normalization of the subs, which will ruin any hope of getting the best possible image, especially if conditions differed between the two nights. Without working normalization, adding more subs might not necessarily improve the stack. Maybe that has something to do with it?

It also looks like you have some pretty heavy light pollution, judging from the levels of the stacks. Was one of the nights just better for transparency? I have seen transparency shift an imaging location's Bortle rating by more than one class (from 4 to 3).

Transparency might have been lower for night 2, but I don't know about a whole Bortle class lower!

Might help here to see the subs:

The sub with the full date stamp in the file name is the second night, the sub that's just M51_Light_Blu is from the first night.

I notice, putting them at the same stretch level in KStars, that the subs from night one are definitely darker, but I am not sure how much the signal in M51 is being attenuated; it doesn't look like a 5x loss to me, but my eyes are not so keen for this sort of thing.

I found two of the darkest subs in the middle of each dataset and put them side by side at the same stretch; they look similar in terms of signal and noise, but the new dataset is definitely brighter:

[Attached screenshots: one sub from each night at the same stretch]

And looking at the stats, given my offset of 256 (which yields minimum ADUs around 40-60 on my cam), it seems that the average ADU value difference between the two nights is only about 80-90 ADU. It seems a bit suspicious that it would make such a big difference in the final product.

 

As for my overcorrecting flats... I am still experimenting with the issue. I think I need to apply darks to my main data and bias frames to my flats (flats are around 0.05 to 0.01 second exposures, so flat darks are maybe not so necessary?). Getting good, uncontaminated darks is quite hard though. I've buried my camera before and still seen light get in (I even stuck some socks over the front of it in a dark room once while trying, haha).

It is a bit odd, because my DSLR, which is much noisier and has stronger dark current, had no issue with flat calibration despite me never using darks or bias frames!
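For my own reference, the calibration chain I'm aiming for would look roughly like this (a numpy sketch, master frame names made up; in practice WBPP builds these):

```python
import numpy as np
from astropy.io import fits

# Made-up master frame names
bias  = fits.getdata("master_bias.fits").astype(np.float64)
dark  = fits.getdata("master_dark.fits").astype(np.float64)   # matched exposure/temp, still contains the offset
flat  = fits.getdata("master_flat.fits").astype(np.float64)
light = fits.getdata("light_001.fits").astype(np.float64)

flat_cal  = flat - bias                       # flats are ~0.01-0.05 s, so bias removal should be enough
flat_norm = flat_cal / np.median(flat_cal)    # a normalized flat only reshapes the image, it doesn't rescale it
calibrated = (light - dark) / flat_norm       # the dark also removes the offset from the light

fits.writeto("light_001_cal.fits", calibrated.astype(np.float32), overwrite=True)
```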

M_51_Light_2023-04-23T23-43-01_001.fits M_51_Light_Blu_001.fits

Posted
3 hours ago, pipnina said:

As in, normalise both stacked results with a linear fit, and register them so that flicking between them in an image viewer or Photoshop will give me a flickering 1:1 comparison?

It'd be worth a shot!

Transparency might have been lower for night 2, but I don't know about a whole Bortle class lower!

Might help here to see the subs:

The sub with the full date stamp in the file name is the second night, the sub that's just M51_Light_Blu is from the first night.

I notice, putting them at the same stretch level in KStars, that the subs from night one are definitely darker, but I am not sure how much the signal in M51 is being attenuated; it doesn't look like a 5x loss to me, but my eyes are not so keen for this sort of thing.

I found two of the darkest subs in the middle of each dataset and put them side by side at the same stretch; they look similar in terms of signal and noise, but the new dataset is definitely brighter:

[Attached screenshots: one sub from each night at the same stretch]

And looking at the stats, given my offset of 256 (which yields minimum ADUs around 40-60 on my cam), it seems that the average ADU value difference between the two nights is only about 80-90 ADU. It seems a bit suspicious that it would make such a big difference in the final product.

 

As for my overcorrecting flats... I am still experimenting with the issue. I think I need to apply darks to my main data and bias frames to my flats (flats are around 0.05 to 0.01 second exposures, so flat darks are maybe not so necessary?). Getting good, uncontaminated darks is quite hard though. I've buried my camera before and still seen light get in (I even stuck some socks over the front of it in a dark room once while trying, haha).

It is a bit odd, because my DSLR, which is much noisier and has stronger dark current, had no issue with flat calibration despite me never using darks or bias frames!

M_51_Light_2023-04-23T23-43-01_001.fits 49.49 MB M_51_Light_Blu_001.fits 49.49 MB

The second image (the one taken on 23/4?) looks quite a bit worse than the first one. It's not obvious from the stats: since the offset is 256, we have around 1000 ADU in the first night, which gives us roughly 250 electrons at gain 100, which has an e-/ADU rate of approx 0.25. The second night is then somewhere around 1200 ADU / 300 e-, so not an earth-shaking difference, but both are pretty high already. I'm just looking at these in Siril autostretch and there is a noticeable difference when I blink between them: the first night has an obviously brighter M51 and stronger stars than the second, and indeed, measuring the number of detected stars, I see a drop off a cliff from 640 stars to 400 in the second-night sub. Fewer stars, a darker-looking target, and a little more signal in the same sub length and filter.

Sounds like terrible transparency for the second night. It really can be that bad if there is some thin high cloud, excessive humidity, or aerosols like smoke or pollen (we have a pollen apocalypse here at the moment, for example). All of that will reduce the signal you actually want while still making it seem like you are getting usable signal, since the frames show a familiar-looking ADU count just from local lighting conditions.

On flats: you can drop flat darks if you want and use the bias, or even the dark master, as a flat dark. You can also subtract the offset by some other method; I know APP does some kind of pedestal thing for flats, and if I recall correctly WBPP in PI also has an option like this. It's important that the offset gets removed, just not very important how, given how little dark signal there will be in flats. In principle it's the same thing for your lights: you could drop darks and just subtract the offset. With 180 s lights you are getting less than a tenth of an electron of dark signal per pixel on average if you cool down to -10, so it's up to you to decide whether that's worth taking darks for.
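To put rough numbers on that (the dark current figure is an assumption, just there to show the arithmetic):

```python
import numpy as np

EXPOSURE_S   = 180
DARK_CURRENT = 0.0005      # e-/px/s, an assumed value for a cooled CMOS around -10 C

# Dark signal accumulated per light sub - the "less than a tenth of an electron" above
print(f"dark signal per sub: {DARK_CURRENT * EXPOSURE_S:.2f} e-/px")

# "Subtract offset by some other method": for short flats a constant pedestal is enough
def calibrate_flat(flat_adu, offset_adu=256):
    flat = flat_adu - offset_adu            # remove the offset; dark signal in a 0.05 s flat is negligible
    return flat / np.median(flat)           # normalize the master flat
```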

Posted
14 hours ago, ONIKKINEN said:

On flats: you can drop flat darks if you want and use the bias, or even the dark master, as a flat dark. You can also subtract the offset by some other method; I know APP does some kind of pedestal thing for flats, and if I recall correctly WBPP in PI also has an option like this. It's important that the offset gets removed, just not very important how, given how little dark signal there will be in flats. In principle it's the same thing for your lights: you could drop darks and just subtract the offset. With 180 s lights you are getting less than a tenth of an electron of dark signal per pixel on average if you cool down to -10, so it's up to you to decide whether that's worth taking darks for.

I did some looking through the settings: WBPP has an output pedestal which adds a certain ADU *post* calibration, and the separate ImageCalibration tool has both output *and* input pedestal.
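My rough mental model of where the two pedestals act (a sketch only, an assumption on my part, not necessarily PixInsight's exact internals) is something like:

```python
def calibrate(light, flat_norm, input_pedestal=0.0, output_pedestal=0.0):
    """Sketch of where the two pedestals act (assumed behaviour, not PI's exact code).

    input_pedestal:  a constant offset assumed to still be in the light frame,
                     taken out before the flat division so it can't be scaled
                     by the flat and produce overcorrected corners.
    output_pedestal: a constant added back afterwards so nothing clips below zero.
    """
    return (light - input_pedestal) / flat_norm + output_pedestal
```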

Setting the input pedestal to match my camera offset does appear to reduce the overcorrection, albeit a bit hard to see given how dark and noisy the subframe is to begin with!

[Attached screenshot: autostretched blue stacks, no pedestal (left) vs input pedestal (right)]

I stacked the blue frames that I calibrated with input pedestal and hit autostretch (right), and compared it to autostretched blue on a previous stack with no pedestal in calibration (left)

Thank you very much for telling me this exists! I have been banging my head against this flats problem for what feels like forever haha.

Posted

I can't download 300 MB of data, but O's post (the first response) shows an obviously better S/N in the left-hand image. Maybe consider cropping the images to make the file sizes more reasonable? Dark skies and poor internet tend to go together! :grin:

Olly

Posted
1 hour ago, ollypenrice said:

I can't download 300 MB of data, but O's post (the first response) shows an obviously better S/N in the left-hand image. Maybe consider cropping the images to make the file sizes more reasonable? Dark skies and poor internet tend to go together! :grin:

Olly

I just re-calibrated the data with the proper input pedestal and re-stacked twice. The first dataset is all subs from both nights thrown into WBPP; the second is only the first night's subs (so 12 subs per channel for the first-night-only image vs 45-ish for the all-data image).

I cropped them down to 1280x1024 so this is now only a 30MB total download.

I will be honest: when I look at individual channels from the stack, autostretched, I can see the difference in SNR much more easily. I guess it all threw me off because it doesn't look like a 5x-integration difference in SNR!
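Thinking about it, the expected improvement is only the square root of the extra integration anyway; a quick sanity check:

```python
import math

first_night, all_data = 12, 45                  # subs per channel, as above
print(f"expected SNR gain: ~{math.sqrt(all_data / first_night):.1f}x")
# ~1.9x - so roughly "5x the data" only buys about double the SNR,
# which is easy to underestimate by eye on a stretched image.
```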

All_data_correct_calibration_crop.tif First_night_only_correct_calibration_crop.tif

Posted

Well, I think the full set was good. Total integration time is not long by galaxy standards, especially when there are tidal tails involved, but the data were very workable.

[Attached image: Olly's processed version of the combined dataset]

Olly

Posted
3 hours ago, ollypenrice said:

Well, I think the full set was good. Total integration time is not long by galaxy standards, especially when there are tidal tails involved, but the data were very workable.

 

Olly

Wow that really shows what a competent processing wizard can do!

What denoise algorithm are you using? I have struggled to find one that doesn't make the whole image look like a JPEG artifact, even the one PixInsight includes.

Posted

Not sure what calibration you used, but as signal increases, noise in images goes through three major stages: 1) read noise, 2) photon shot noise, 3) fixed pattern noise, and finally saturation.

1 and 2 have a reduced effect with increasing signal, but 3 is proportional to the signal, and the only way to remove it is with a good flat.
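A quick illustration with made-up numbers (3 e- read noise, 1% fixed pattern):

```python
import math

read_noise = 3.0      # e-, constant per read
prnu = 0.01           # 1% pixel response non-uniformity (fixed pattern)

for signal in (10, 1_000, 100_000):            # e- per pixel
    shot = math.sqrt(signal)                   # photon shot noise
    fpn = prnu * signal                        # fixed pattern noise, proportional to signal
    print(f"signal {signal:>6} e-: read {read_noise:.1f}, shot {shot:.1f}, FPN {fpn:.1f}")
# Read noise dominates the faintest signal, shot noise the mid range, and fixed
# pattern noise the brightest parts - and only a good flat removes the last one.
```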

Not sure it's relevant.

Regards Andrew 

Posted
1 hour ago, pipnina said:

Wow that really shows what a competent processing wizard can do!

What denoise algorithm are you using? I have struggled to find one that doesn't make the whole image look like a JPEG artifact, even the one PixInsight includes.

Noise Xterminator. It is beyond good! However, I did two other noise-related things in Photoshop for the background sky.

1) Use the colour sampler to pick up just the background sky and greatly reduce the saturation. A very obvious trick, dead easy and always helpful.

2) Zoom in to 200% or more, open Curves and find the brightest of the background pixels. Put a fixing point on the curve at that point. Put several fixing points above that to keep the line pinned straight. Now grab the curve below the original point and raise it to taste. This simply brings the darker background pixels up closer to the brighter ones. Don't overdo it.  In this image, be very careful not to lift any background above the faint tidal extensions or they'll be drowned.
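In numerical terms that curves move amounts to something like this (a rough numpy sketch, pivot and strength picked by eye):

```python
import numpy as np

def lift_background(img, pivot, strength=0.7):
    """Raise pixels below `pivot` towards it, leave everything above untouched.

    Mimics the curves move above: a fixing point at the brightest background
    value, with only the part of the curve below it pulled up. Assumes the
    image is scaled 0-1 with no negative pixels.
    """
    out = img.copy()
    below = img < pivot
    out[below] = pivot * (img[below] / pivot) ** strength   # gamma < 1 lifts the darker pixels
    return out
```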

Always do NR as a layer on top so you can easily erase it where you don't want it (though Noise Xterminator is so good that this is hardly ever necessary.)

Olly

