
Is traditional long exposure imaging dead?


Rob Farmiloe

8 minutes ago, Allinthehead said:


This guy is doing great work with an ASI 1600 and short exposures.

Thousands of frames!!

 

I shot 600 frames of Orion last month, all 30s exposures, aiming over two lampposts that shine directly into the garden: 2 hours of Lum and an hour each of RGB. I don't think I could go any longer aiming in that direction due to the streetlights. My laptop crashed when PI tried to preprocess the lot; I had to switch to the newer work laptop...

 

Still, the ASI1600 does impress with short exposures.

[attached image: M42_SCNR.jpg]


16 hours ago, symmetal said:

I don't think the talk indicated that long exposure imaging is dead, but it seemed to confirm the reasoning (that I've mentioned in the past :angel7:) that there's little point in exposing beyond the point where the sub's sky background swamps the read noise. For really dark skies, even with CMOS, this still means exposures of several minutes or more, and it's only in badly light-polluted skies that exposures of around 20 to 30s are considered 'optimal'.

Alan

Hi Alan, I don't follow the logic in this. If shot noise from the sky background dominates the read noise, then taking longer exposures will improve the S/N ratio of both the background and the "target". A good thing, or am I missing something?

I know stacking can do this but less efficiently.

Regards Andrew 


57 minutes ago, tooth_dr said:

I know someone else pretty amazing with a CMOS, Richard

Ah stop it ya flirt.

 

1 hour ago, david_taurus83 said:

Thousands of frames!!

Ya. It took about 12hrs to stack all those. I tried going down the 30 second route and grew seriously tired of the calibration and stacking times, not to mention the disk space required. I'm back at manageable lengths of 120-180 seconds. Nice Orion btw.


18 minutes ago, Allinthehead said:

Ah stop it ya flirt.

 

Ya. It took about 12hrs to stack all those. I tried going down the 30 second route and grew seriously tired of the calibration and stacking times, not to mention the disk space required. I'm back at manageable lengths of 120-180 seconds. Nice Orion btw.

Thanks. I can't go much longer than 30s pointing south for lum due to LP. Pointing away, though, I've managed about 60s lum and 120s RGB. Ha at 300s, however, seems a bit underwhelming; I've had great results with 900s Ha, so I prefer longer exposures with NB.


2 hours ago, Allinthehead said:


This guy is doing great work with an ASI 1600 and short exposures.

That is 36.5hrs at F2.  

F2 @ 50s is equal to

F2.8 @ 100s

F4 @ 200s

F5.6 @ 400s

so that isn't short exposures, it's just an extremely fast scope doing effectively long exposures.
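For anyone who wants to check that arithmetic, here is a minimal sketch (assuming the same camera behind each scope, so per-pixel signal scales with 1/f² and the equivalent exposure time with f²):

```python
def equivalent_exposure(t_ref: float, f_ref: float, f_new: float) -> float:
    """Exposure at f_new matching the per-pixel signal of t_ref at f_ref."""
    return t_ref * (f_new / f_ref) ** 2

for f in (2.0, 2.8, 4.0, 5.6):
    print(f"f/{f}: {equivalent_exposure(50.0, 2.0, f):.0f} s")
# f/2.0: 50 s, f/2.8: 98 s, f/4.0: 200 s, f/5.6: 392 s
# (the round 100 s / 400 s figures above treat f/2.8 and f/5.6 as exact half stops)
```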


Picking up from the previous post....

The question for me is whether to pursue "longer exposures", "more stacks", or both.

I have been using an Atik Horizon CMOS for around a year. It is awesome when using short stacks on Hyperstar for EAA "observing". It means I need no wedge, no polar alignment routine etc., and I see stuff that I would never see through an eyepiece even if I had an enormous Dobsonian. My local light pollution is pants, and "EAA" using this high-resolution, large-sensor CMOS camera has offered me much joy.

However, my increasing conclusion is that if we exclude NV and 'video', the vast majority of EAA participants are actually frustrated astrophotographers who will probably not be satisfied unless their "observations" ultimately produce 'images' that mirror the quality of the images we generally see posted in the forums. After all, adding 'darks on the fly' and other such processing has become routine practice to gain an incremental advantage. I now find myself in that group, and my frustration is generally due to poor seeing and light pollution. My exposures have hence grown progressively longer and my stacks numerically greater, up to the limit that field rotation permits on an alt-az mount (under 30 seconds).

I have reached a stage where my 'imaging' seems to have plateaued. Hence, in the last week I have reverted to using my wedge and polar alignment, with the intent of increasing my typical subs from 20 seconds to 60 seconds or longer (but still on Hyperstar). So far, clouds have frustrated me, but I have no doubt that tripling the length of my exposures must boost things. I obviously don't know for sure, but I suspect Hyperstar on a GEM or wedge using 60 to 120 second exposures may be where I want to be. As exposures on Hyperstar at f/2 equate to 28x their length on a regular scope, maybe this is where more of us will eventually head, although traditional CCD will survive. But it does appear to me that "EAA" has moved on, with a trend towards far longer exposures than the 5s to 10s where its pioneers started. The blurring of the lines between EAA and AP probably now means it's all AP (except for NV). But I don't think that matters; it's just an inevitable progression.


4 hours ago, andrew s said:

Hi Alan, I don't follow the logic in this. If shot noise from the sky background dominates the read noise, then taking longer exposures will improve the S/N ratio of both the background and the "target". A good thing, or am I missing something?

I know stacking can do this but less efficiently.

Regards Andrew 

Hi Andrew,

I got the reasoning from the multitude of posts on CN where they can get a bit obsessive about the ASI1600 and noise. :D

The background skyglow limits the faintest detail you can get from the target. Increasing the exposure duration beyond the point where the read noise is masked by the skyglow won't yield fainter detail, and will give more overexposed stars due to the reduced dynamic range, which is the main reason for not exposing longer. The shot noise from the intended target isn't distinguishable from the shot noise and light pollution 'noise' of the skyglow, and in light-polluted skies the skyglow noise is much more dominant. In very dark skies there would be an advantage in taking longer subs, as the target shot noise contributes more to the overall noise. RGB imaging in moonlight gives very noisy images, with the overall light pollution 'noise' being dominant over any target noise. Stacking more shorter subs should give noise reduction similar to stacking fewer longer subs (same integration time) when you're skyglow limited. That's my understanding; I'm not a noise 'expert' and am willing to be proved wrong. :D

Alan
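To put rough numbers on the 'swamping' criterion Alan describes, here is a minimal sketch; the swamp factor k, the read noise and the sky rates are illustrative assumptions, not measurements:

```python
# Sub-length rule of thumb: expose until sky shot noise is k times the
# read noise, i.e. sqrt(sky_rate * t) >= k * read_noise.

def min_sub_length(read_noise_e: float, sky_rate_e_per_s: float, k: float = 3.0) -> float:
    """Shortest sub (seconds) at which sky shot noise reaches k x read noise."""
    return (k * read_noise_e) ** 2 / sky_rate_e_per_s

for label, sky_rate in (("badly light polluted", 2.0), ("dark site", 0.05)):
    print(f"{label}: {min_sub_length(1.7, sky_rate):.0f} s")
# badly light polluted: 13 s; dark site: 520 s - tens of seconds in town,
# many minutes under dark skies, as described above.
```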


5 hours ago, andrew s said:

Hi Alan, I don't follow the logic in this. If shot noise from the sky background dominates the read noise, then taking longer exposures will improve the S/N ratio of both the background and the "target". A good thing, or am I missing something?

I know stacking can do this but less efficiently.

Regards Andrew 

It's about stacking and the way noise adds. Noise sources add like linearly independent vectors (in quadrature), so a large LP shot noise will "absorb" a small amount of read noise. Read noise is the only parameter that does not depend on time - signal, shot noise, thermal noise and LP noise all scale with time, and when you stack these you get the same result as going with a single long exposure.

The only place where multiple stacked exposures differ from a single long exposure (for the same total time) is read noise. This is of course in terms of SNR. Other things also matter - like the amount of data, the need for precise tracking/guiding, the number of frames lost to unforeseen events, etc.

Once you have a source of noise that is dominant over read noise - whatever that may be: LP noise, thermal noise or target shot noise - the difference between stacked frames and a single exposure becomes very small. As it happens with AP, targets are faint and we use cooled cameras, so the only thing left to dominate read noise is LP noise - hence the whole logic. Indeed, one would improve the SNR of the result by using a single exposure over multiple exposures even when read noise is swamped, but the difference is so small that it's not worth it (in light of the other things related to multiple exposures).
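A minimal sketch of that accounting, with made-up rates: for a fixed total time, the only variance term that grows with the number of subs is the read-noise term.

```python
import math

def stack_snr(target_rate, sky_rate, read_noise, total_time, n_subs):
    """SNR of n_subs stacked subs totalling total_time seconds.
    The shot-noise terms depend only on total time; read noise is paid per sub."""
    signal = target_rate * total_time
    variance = (target_rate * total_time      # target shot noise
                + sky_rate * total_time       # LP shot noise
                + n_subs * read_noise ** 2)   # one read per sub
    return signal / math.sqrt(variance)

# Illustrative: faint target (0.1 e/s), moderate LP (2 e/s), 1.7 e read noise, 1 h.
for n in (1, 12, 120, 1200):
    print(f"{n:>5} subs: SNR {stack_snr(0.1, 2.0, 1.7, 3600, n):.2f}")
# 1: 4.14, 12: 4.13, 120: 4.05, 1200: 3.43 - nearly identical until the subs
# get so short that the read noise is no longer swamped.
```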

Back to original topic.

Whether traditional long exposure goes away will really depend on two factors. One is computing power and algorithms - and that develops really fast; even now we don't need more powerful hardware to do this, everything is already invented and affordable, we just need the software support. The second is even lower read noise sensors.

Just as a thought experiment, but not far removed from what could be expected in the near future - imagine a 0 read noise sensor. Just a "digital" photon counting device that adds no read noise.

Such a device could take exposures of any length and produce exactly the same result as an equivalent device (pixel size, QE and the rest) doing a single exposure. Yes, this means that one could take 5ms exposures, stack 2,880,000 of them, and get exactly the same result as a single 4h exposure. There would be less need for full well depth, and less need for a high ADC bit count. One might wonder, well, how about storage space? Why not do the stacking in real time - there is no need to store all those frames, as live stacking can be used to produce a number of "sub stacks" or even a single stacked image.
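As a rough illustration of the real-time idea, a live stacker only needs a running sum, so no sub ever has to be stored; the frame source and the drop rule below are stand-ins, and real software would register each frame before adding it:

```python
import numpy as np

class LiveStack:
    """Running-sum live stacker: no sub is ever written to disk."""
    def __init__(self, shape):
        self.total = np.zeros(shape, dtype=np.float64)
        self.count = 0

    def add(self, frame, keep=True):
        # keep=False drops a bad frame (satellite, wind, cable snag)
        # at zero cost - the rest of the stack is untouched.
        if keep:
            self.total += frame
            self.count += 1

    def result(self):
        return self.total / max(self.count, 1)

rng = np.random.default_rng(0)
stack = LiveStack((4, 4))                       # toy 4x4 sensor
for i in range(10_000):                         # 10,000 very short "subs"
    stack.add(rng.poisson(5.0, (4, 4)), keep=(i % 100 != 0))
print(stack.result().mean())                    # converges on the true rate, ~5.0
```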

What would be the benefit of such an approach? Well, we just showed that one would have no need to store a large number of subs (or even a small number of subs). Other benefits: no need for guiding whatsoever - even poorly performing mounts will be excellent. There would also be a way to select frames based on seeing - something people doing planetary lucky imaging already employ. Passing satellite? Just drop that half a second... Wind/cable snag? Just drop those few seconds and continue...

That is all that's needed to completely change both the way we gather images and the quality of the result.

Btw, DSO lucky imaging is something people are already doing. I've seen some extraordinary images taken with large Dobsonian telescopes and tens of thousands of short exposures. Here are a couple of threads on this topic on CN:

https://www.cloudynights.com/topic/550166-m57-with-short-exposures-10000x500ms/?hl=lucky imaging

https://www.cloudynights.com/topic/550864-ngc-40-with-short-exposures-21000x500ms/?hl=%2Bngc40+%2Bwith+%2Bshort#entry7443798

And people are writing presentations to explain this type of imaging:

http://www.cedic.at/arc/c11/dwn/CEDIC11_FilippoCiferri.pdf

Only thing "holding back" this type of imaging is read noise, and good software to process things on the fly.


@vlaiv yes I agree with your comments. Do you have a view on quantisation noise with current CMOS cameras, very short exposures, gain settings etc.? This is not an issue with true photon counting, if I remember correctly, but it was 40 yrs ago when I did it.

Regards Andrew 


2 minutes ago, andrew s said:

@vlaiv yes I agree with your comments. Do you have a view on quantisation noise with current CMOS cameras, very short exposures, gain settings etc.? This is not an issue with true photon counting, if I remember correctly, but it was 40 yrs ago when I did it.

Regards Andrew 

At first I thought it would be an issue, but after running some tests, it turns out not to matter that much. Currently, sensor designers are using read noise to dither the quantization errors, and even with quite a bit of clipping (gain above unity) there is not much impact. The noise distribution remains fairly OK and stacking works as expected. I simulated the ASI1600 at the gain 0 setting - and although there is an increase in noise due to quantization error, it still stacks OK, since read noise is 3.4e at that setting.

I personally still like to use unity gain, or higher gains that don't introduce quantization errors to the photon count (read noise does get shaped by this - but it always is, regardless of the e/ADU used) - like integer-fraction e/ADU values: 1/2, 1/3, etc.
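That kind of test can be sketched in a few lines; the numbers below are loosely modelled on the ASI1600 at gain 0 (the signal level and frame count are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames = 10_000
true_e = 7.3                   # mean electrons/pixel/sub (arbitrary)
read_noise_e = 3.4             # roughly gain 0 on the ASI1600, as quoted above
e_per_adu = 5.0                # coarse quantization step, roughly gain 0

electrons = rng.poisson(true_e, n_frames) + rng.normal(0.0, read_noise_e, n_frames)
adu = np.round(electrons / e_per_adu)          # quantize in 5 e steps
recovered = adu.mean() * e_per_adu             # stack, convert back to electrons

print(f"true mean {true_e} e, stacked mean {recovered:.2f} e")
# The read noise dithers the 5 e steps, so the deep stack recovers the true
# mean instead of snapping to a quantization level.
```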


15 hours ago, vlaiv said:

Btw, DSO lucky imaging is something people are already doing. I've seen some extraordinary images taken with large Dobsonian telescopes and tens of thousands of short exposures.

The only things "holding back" this type of imaging are read noise and the lack of good software to process things on the fly.

Yup, definite potential for resolution gains with short exposures, and they also require smaller OTAs due to pixel size.

I think another thing holding it back (at least for me personally) is the acquisition software. It is designed for CCD cameras with long exposures and slow readouts. >15s of interframe wastage is fine with 300s subs, but it is crippling for 30s subs. Acquisition software needs to catch up with the times and eliminate as much interframe wastage as possible. Funnily enough, that is something SharpCap aces!


Thank you to all for posting on this subject.

I appreciate the various technical theories behind very short exposures.

It is a process which helps cameras and mounts that are not well equipped to expose for longer frames.

I think the use of super-fast optics is a different subject, although by default they permit shorter exposures.

I have not seen evidence to date that demonstrates, in practice, that it is ready to challenge the quality of images produced by exposing for as long as possible without saturation - especially when narrowband is required.

I understand the potential benefits; however, there are negatives too, in terms of processing huge frame numbers.

The images to back up the arguments in favour just do not appear to be presenting themselves.

Yes, there are decent examples, but ones that really challenge the great work people have been doing for years are not there yet.

I have no doubt CMOS sensors will continue to improve as CCD development is ending.

Developers will work on capture apps to take full advantage of faster read speeds etc.

It reminds me of when CD took over from vinyl. CD was cheaper to produce, and it suited the distributor and user convenience.

The reality, though, was that vinyl, when produced correctly and played on well-engineered equipment, was still better.

Horses for courses; marketing and demand will map out the future products.


This subject always piques my interest, so I had a look at some M81 data I collected.

[attached image: side-by-side comparison of the two stacks]

On one side is 12x300s @ gain 0

On the other is 120x30s @ gain 76

FWHM for the 30s stack is 4.56"

FWHM for the 300s stack is 4.61"

IMO the longer subs look smoother, but resolution is very slightly higher on the shorter. The 300s subs were definitely overexposed, but probably not to the point of highlight destruction.

(Guide RMS was <0.6" with an image scale of 1.15".)

Source stacked files: https://1drv.ms/f/s!AtooZYq7lKxzgT4RKG5bqSJCgHgK

I did something similar for IC405 a while back and the shorter gain 76 subs were clearly better. 

I think this shows that you can do either; you don't need one or the other. With a CCD you need longer subs because of the read noise and gain combination?


1 hour ago, jimjam11 said:

This subject always piques my interest, so I had a look at some M81 data I collected. On one side is 12x300s @ gain 0; on the other is 120x30s @ gain 76.

I think this shows that you can do either; you don't need one or the other. With a CCD you need longer subs because of the read noise and gain combination?

With CCDs you need longer subs because of the read noise. Gain does not contribute to it on its own in any way (it is linked to the level of read noise in CMOS sensors, so it is related, but not on its own - it's the read noise that is important).

Did you use the ASI1600 for the above images? For short subs you need to use a much higher gain value. Both gain values you used are below unity, where read noise is high. Again, higher gain does not collect more signal, nor does it make SNR higher; the only thing it provides is that at higher gain settings the read noise is lower.

With the ASI1600 and short exposures, use a gain of 261 for example; that will give you read noise of ~1.3e. The gain settings you've used have much higher read noise: gain 0 has ~3.6e and gain 76 about ~2.25e.
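To see how much those read-noise figures matter for a short sub, here is a small sketch using the values quoted above; the sky rate is an illustrative assumption:

```python
import math

sky_rate = 0.5                  # e-/pixel/s - hypothetical LRGB sky background
t = 30                          # sub length in seconds
sky_noise = math.sqrt(sky_rate * t)

for gain, rn in ((0, 3.6), (76, 2.25), (261, 1.3)):
    total = math.sqrt(sky_rate * t + rn ** 2)
    print(f"gain {gain}: {100 * (total / sky_noise - 1):.0f}% extra noise per sub")
# gain 0: ~37%, gain 76: ~16%, gain 261: ~5% - with 30 s subs, only the
# high-gain setting leaves the read noise well and truly swamped.
```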


For narrowband I always use gain 200, but for LRGB, gains higher than 76 become impractical because of interframe wastage with SGP. If the interframe wastage were much lower (<2s) then DSO lucky imaging could be real fun!

I don't like using gain 0 because of the read noise; I typically use gain 76 for LRGB, which gives me 30 and 90s subs. I always follow the excellent optimum imaging tables from here:

https://www.cloudynights.com/topic/573886-sub-exposure-tables-for-asi-1600-and-maybe-qhy163/

 


19 minutes ago, jimjam11 said:

For narrowband I always use gain 200, but for LRGB, gains higher than 76 become impractical because of interframe wastage with SGP. If the interframe wastage were much lower (<2s) then DSO lucky imaging could be real fun!

I don't like using gain 0 because of the read noise; I typically use gain 76 for LRGB, which gives me 30 and 90s subs. I always follow the excellent optimum imaging tables from here:

https://www.cloudynights.com/topic/573886-sub-exposure-tables-for-asi-1600-and-maybe-qhy163/

 

What is interframe wastage? Time spent downloading the frame and the rest?

I don't really follow your logic about gains higher than 76. If you set your exposure length to, say, 60s, you'll get better results with unity gain than with a gain of 76. The only advantage gain 76 has is full well depth, and that is important only if you have clipping.


Yes, time wasted between frames. I measured it at anywhere between 6-14s in SGP, depending on settings. A 30s sub with a 14s gap between frames equates to huge wastage.
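That wastage is easy to express as a duty cycle; the sub lengths and gaps below are the figures reported in this thread:

```python
def duty_cycle(sub_s: float, gap_s: float) -> float:
    """Fraction of wall-clock time actually spent exposing."""
    return sub_s / (sub_s + gap_s)

for sub, gap in ((30, 14), (300, 14), (30, 2)):
    print(f"{sub}s subs, {gap}s gap: {duty_cycle(sub, gap):.0%} on sky")
# 30s/14s: 68%, 300s/14s: 96%, 30s/2s: 94% - the overhead only starts to
# hurt once the subs get short.
```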

 

I need to check again, but a 30s sub at gain 76 gives me an (SGP) ADU of approx 800 above bias, so I am already slightly beyond the minimum needed to swamp the read noise. I think I would see significant star clipping if I attempted 60s at gain 139, but I will try it when I next get outside.
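As a rough cross-check of that 800 ADU figure, under two assumptions (SGP reports 16-bit ADU while the ASI1600's ADC is 12-bit, and gain 76 is roughly 2 e-/ADU at the 12-bit level):

```python
adu16_above_bias = 800                  # reported SGP reading
sky_e = adu16_above_bias / 16 * 2.0     # 16-bit ADU -> 12-bit ADU -> ~100 e-
read_noise_e = 2.25                     # gain 76, as quoted earlier
swamp_level = 10 * read_noise_e ** 2    # common 'sky >= 10 x RN^2' rule: ~51 e-

print(f"sky {sky_e:.0f} e- vs swamp level {swamp_level:.0f} e-")
# ~100 e- against ~51 e-: the subs do sit a little beyond the minimum.
```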


My interframe wastage was measured with a new i7 laptop and SSD, so the delays are mainly architectural to SGP.

I started this with an old i5 laptop which had an interframe delay of 12-22s, hence switching to something new.

 


Oh, that is strange. I have an i5 laptop with an SSD and an ASI1600, and my "interframe wastage" is about 1-2s - and I was pretty upset with that, since the ASI1600 should be able to do more than 10fps on a full image download, so the frame download should take about 100ms.

This is mostly an annoyance when doing flats - I do a lot of flats/flat darks, and my panel is very strong, so each flat is less than a second (sometimes even less than 100ms). I expected to do a set of 256 flats in under 5 minutes, but it takes about 12 minutes or so (plus an additional 12 for flat darks).

Maybe you are using a USB 2.0 cable? A USB 3.0 cable and download speed set to 64 is what gives me the above download times.


On the matter of clipping - and this is not related to short exposures, but to general imaging:

Don't worry about star cores being clipped in luminance. You'll probably blow those cores out when stretching anyway (not probably - for sure: they end up at 100% in order to bring the faint stuff out). Keeping star cores unsaturated is important for the RGB part, but you don't need a whole session of frames that keep them unsaturated. You can saturate them in most frames, and then just take a dozen really short frames to capture the star cores without saturation. Then you can "mix" those in during processing to get accurate star colour in the cores.
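A minimal sketch of that 'mixing in' step; the function name and clip threshold are illustrative, and real processing would match the two stacks' flux scales first:

```python
import numpy as np

def repair_cores(long_stack, short_stack, exposure_ratio, clip_frac=0.98):
    """Replace clipped pixels in the long stack with scaled short-stack data.
    exposure_ratio = long sub length / short sub length."""
    out = long_stack.astype(np.float64).copy()
    clipped = long_stack >= clip_frac * long_stack.max()    # saturated cores
    out[clipped] = short_stack[clipped] * exposure_ratio    # rescale and patch
    return out

# e.g. repaired = repair_cores(stack_300s, stack_10s, exposure_ratio=30.0)
```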


I tried both. With a USB 2 cable I get 1-2s, normally towards 2. With USB 3 it is the same range but normally closer to 1. USB traffic is set to 80. SGP is brilliant software, but its performance is terrible in my experience. Things like dithering add wastage to every frame, because you need to set a settle time which is then applied between each frame irrespective of dithering.

With SharpCap I was getting 1fps with 1s exposures over USB 3. I should retry it and see if it is faster.


3 minutes ago, vlaiv said:

Don't worry about star cores being clipped in luminance. You can saturate them in most frames, and then just take a dozen really short frames to capture the star cores without saturation.

Good idea. I will try an hour of gain 139 data at my next opportunity and see how it compares...

