
Astrobiscuit is looking for quality astrophotographers to join the Big Amateur Telescope (BAT)


Recommended Posts

I'm really interested to get some clarity on what the aims are. It seems the goal is high-resolution, high-SNR images using small, amateur-sized telescopes.

The issues I see are:

- tracking: what exposure length keeps tracking errors below the target scale?

- seeing: can we get enough pictures sampled better than the target scale?

- resolution: by combining pictures from smaller scopes, can we get sufficient SNR to discern and resolve features approaching images from larger scopes?

- collimation: can we educate enough people to get the most out of their systems to contribute?

- optical errors: can we combine images from a huge variety of image scales, each with their specific optical aberrations, in a meaningful way that is not disproportionately compute-intensive? Do current generations of stacking programs model the image-plane aberrations for stars in each image as part of the stacking process, or does this require a new approach? Have you got a feel for that new approach?

Finally, what is meant by 'lucky imaging'? In planetary work it's all about freezing the seeing, to remove the atmosphere from the optical limitations, and stacking those few high-quality, single-source images. In low-SNR, long-exposure deep sky you must mean something else, right?

It's not lucky imaging if you are still exposing at 30 seconds per shot, because you haven't broken any of the limits outlined above.

I'm not knocking any of this; crowd-sourced imagery and pipeline processing is a great tool. I'm just trying to understand the headline aims and the underlying physics.

mike

 

 

 

Link to comment
Share on other sites

25 minutes ago, skybadger said:

I'm really interested to get clarity on what the aims are. It seems like it's to have high resolution, high SNR images using small/amateur sized telescopes. [...]
The way I see it, a couple of seconds of exposure is the threshold for lucky-type DSO imaging.

A couple of seconds is enough for the seeing to average out over that time frame and give a good indicator of the seeing-induced FWHM. Guiding works on time scales of a couple of seconds, so we can assume that most mounts will track that long without significant drift.

Lucky DSO imaging is similar to planetary-type lucky imaging only in name and in the fact that we rely on luck. With planetary-type lucky imaging we hope that the frozen PSF will not be severely distorted. With DSO lucky-type imaging we hope that the averaged seeing PSF will have a low enough FWHM.

Seeing changes from minute to minute, and constantly over the course of the evening. This means that, for example, it will be 1.6" FWHM on average, with moments at 1.2" and also moments at 2". We keep those subs that have FWHM below a certain threshold, but it is still an averaged seeing effect and far from planetary resolution.

There are algorithms that can deal with different types of optical aberrations, but that is probably outside the scope of a project such as this, as it involves extensive modelling of the telescope system (measuring optical aberrations over the imaging field and then doing dynamic PSF deconvolution on subs).
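The keep-the-best-subs selection described above boils down to a FWHM threshold filter. A toy sketch (the file names and FWHM values are made up; in practice the FWHM would come from star measurements in each calibrated sub):

```python
# Keep only subs whose measured FWHM beats a chosen threshold.
# Hypothetical measurements: (file name, FWHM in arcseconds).
subs = [
    ("sub_001.fits", 1.6),
    ("sub_002.fits", 1.2),
    ("sub_003.fits", 2.0),
    ("sub_004.fits", 1.4),
]

FWHM_THRESHOLD = 1.5  # arcseconds; tune to the night's seeing

selected = [name for name, fwhm in subs if fwhm < FWHM_THRESHOLD]
# Only the sharper subs survive the cut and go into the stack.
```

On a night averaging 1.6" this keeps only the lucky moments, at the cost of discarding most of the integration time.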

 


Really looking forward to seeing the results off the back of this, definitely an interesting concept and a great way to follow on from the other budget astro videos 👍

This hobby does seem to attract some "naysayers" though! "You can't do it without xyz kit, without perfect seeing and only on Thursdays!" 🤣 although SGL is pretty free of that compared to some other forums that shall not be named 😉

 


I think the approach is designed, at least partly, to get around the poor seeing conditions most amateur imagers encounter compared to professional telescopes located on the tops of mountains.

I totally respect the theory behind the limitations attributed to the Lucky Imaging approach to DSOs, but having  watched the Astrobiscuit video I have been sufficiently impressed with the results of Rory’s experiment to try and make a contribution.

Uploading the humongous amounts of data generated will probably be my biggest challenge (after the weather of course)😊


14 hours ago, skybadger said:

I'm really interested to get clarity on what the aims are. It seems like it's to have high resolution, high SNR images using small/amateur sized telescopes. [...]

I'm going to have to be brief (time). The heart of the theory goes like this: if you had a magical camera with zero read noise, you could take millions of microsecond exposures over the course of a minute, stack them up, and your stack would have the same SNR as a single 60 sec exposure. Obviously we don't have magical cameras yet, but we're getting close. Modern CMOS sensors manage about 1e/pix of read noise. That read noise is low enough for us to stack thousands of subs that are only a few seconds long and get good results. In the BAT team we have a guy who images at Keck. He's produced a graph showing that you start to see the benefits of lucky imaging when your exposures drop below 10 seconds, and you see a really good increase in resolution when you shoot exposures of about a second. In the future, if the next generation of CMOS cameras reduces the read noise further, we will be looking to shoot at about 1/10th sec exposures, because that's when you get even more benefit from lucky imaging. That's the theory. The reality is that most folks have issues with their setup that need to be sorted before it's sharp enough to even begin to think about lucky imaging!
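Rory's zero-read-noise thought experiment is easy to check with a simple shot-noise model (the target and sky rates below are illustrative, not from any real setup):

```python
import math

def stack_snr(signal_rate, sky_rate, read_noise, sub_length, n_subs):
    """SNR of n_subs stacked exposures of sub_length seconds each.
    Simple model: shot noise from target + sky, plus one dose of
    read noise per sub read out."""
    total_signal = signal_rate * sub_length * n_subs
    variance = ((signal_rate + sky_rate) * sub_length * n_subs
                + n_subs * read_noise ** 2)
    return total_signal / math.sqrt(variance)

# Illustrative rates: 0.5 e/s from the target, 2 e/s from the sky.
single_60s   = stack_snr(0.5, 2.0, read_noise=1.0, sub_length=60, n_subs=1)
sixty_1s     = stack_snr(0.5, 2.0, read_noise=1.0, sub_length=1,  n_subs=60)
magic_camera = stack_snr(0.5, 2.0, read_noise=0.0, sub_length=1,  n_subs=60)

# With zero read noise the 60 x 1s stack matches the single 60s sub
# exactly; with 1e read noise per sub it loses SNR to the 60 reads.
```

The gap between `sixty_1s` and `single_60s` shrinks as read noise falls, which is exactly why sub-second lucky DSO imaging only becomes viable with very low read noise sensors.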


14 hours ago, doublevodka said:

Really looking forward to seeing the results off the back of this, definitely an interesting concept [...]

SGL is a really great forum. Hats off to FLO for making the community a better place. Cloudy Nights has some agenda that I don't quite understand and is full of elitist astronomers. They banned me for posting this request for imagers, for instance...

To be fair, it's not going to be easy, and you do really need a modern CMOS camera to make it work with the kind of scopes we can afford, BUT it will work, and ultimately the naysayers will have to admit that they are wrong 🤣

 


12 hours ago, tomato said:

I think the approach is designed to at least partly get around the poor seeing conditions most amateur imagers encounter [...]

Thx, and you're totally right apart from one thing: finding quality astrophotographers is harder than finding the IT wizards who sort out how to upload the data😊

 


Totally agree about Cloudy Nights. The number of times I Google something and get linked there, only to find it's someone being ridiculed for asking a question, etc. IMHO it's a problem with most USA forums, tbh, whatever the hobby.


17 minutes ago, rorymultistorey said:

I'm going to have to be brief ....(time). The heart of the theory goes like this. [...]

Tx Rory. So the intent is to aim for one or two second exposures then. Lucky imaging it is. 


10 minutes ago, skybadger said:

Tx Rory. So the intent is to aim for one or two second exposures then. Lucky imaging it is. 

Yes or at least an HDR image, lucky imaging for the bright bits and regular imaging for the dim bits. It all depends on how many good astrophotographers we get to join up so please spread the word. Thx


39 minutes ago, rorymultistorey said:

SGL is a really great forum. Hats off to FLO for making the community a better place. [...]

Well, I wasn't going to mention the forum by name ;) but yes, that's the one, and unfortunately the ban doesn't surprise me at all; there's a bit of a "can't" culture over there, with a few exceptions.

I look forward to them being proved wrong 🤣 you'd think astronomers of all people would be a bit more open minded!

I can't help with the imaging (back garden dabbler on a budget) but IT I know a little about. Maybe look into things like https://wetransfer.com/ - free file transfers of up to 2GB at a time - otherwise you're looking at something like a custom FTP setup, which gets a bit complex for the average user.


The problem with such short exposures is the amount of light captured.

In order to properly stack data, we need at least a few stars in the image; the more stars there are, the better the alignment we can get. Poor alignment reduces the sharpness of the image.

This again requires a larger telescope, as the SNR achieved is determined by aperture at a given working resolution (another reason why oversampling is bad: it reduces star SNR, since light from the star gets spread over too many pixels).
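vlaiv's oversampling point can be illustrated with a toy model: spreading a fixed star flux over more pixels lowers the per-pixel SNR. The flux, sky, and read-noise figures below are purely illustrative:

```python
import math

def per_pixel_snr(star_flux, n_pixels, sky_per_px, read_noise):
    """Rough per-pixel SNR when star_flux electrons are spread
    evenly over n_pixels (a crude stand-in for the PSF footprint)."""
    signal = star_flux / n_pixels
    noise = math.sqrt(signal + sky_per_px + read_noise ** 2)
    return signal / noise

# Same 1000 e star, well sampled vs oversampled (4x the pixels):
well_sampled = per_pixel_snr(1000, n_pixels=9,  sky_per_px=2, read_noise=1)
oversampled  = per_pixel_snr(1000, n_pixels=36, sky_per_px=2, read_noise=1)
# Spreading the same light over more pixels cuts the per-pixel SNR,
# which hurts star detection and therefore alignment quality.
```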


I'm just a newbie at this, but it does seem that longer exposure = more photons captured, so I would have thought it would be extremely difficult to capture enough data on dim DSOs. Of course, I am looking forward to being proved wrong on this, and eagerly anticipating the next YouTube video.


On 29/06/2021 at 22:26, rorymultistorey said:

Please can you thank them from me. Terry didn't want to give me their emails which is fair enough.

No problem! I think they rather enjoyed it! 

One more question - whilst I can't help with imaging I might be able to help with IT. How much data are we talking about needing to store/upload? Or do you have that bit sorted?


Hmm, the ability to image at pace feels self-limiting on the size of image files, because you can't download them fast enough. You could do it all offline though. Say 3600 one-second images of an object, at 4 MB per file, by several hundred imagers? Each night.

I'd be looking to pipeline this through dynamic cloud services (lambdas) and auto-stack when a pipeline is quiet for longer than, say, half an hour. Use the central coordinates of the image, and return a list of all folders in operation when someone wants to know what subjects are being imaged.

Just thinking aloud though.
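The volume estimate above is easy to put numbers on (3600 one-second subs at the assumed 4 MB each; the 300-imager count is a guess for illustration):

```python
subs_per_night = 3600   # one hour of 1 s exposures
mb_per_sub = 4          # ~4 MB per file, per the estimate above
imagers = 300           # assumed number of contributors

per_imager_gb = subs_per_night * mb_per_sub / 1024
total_tb = per_imager_gb * imagers / 1024
print(f"{per_imager_gb:.1f} GB per imager, {total_tb:.1f} TB per night")
# → 14.1 GB per imager, 4.1 TB per night
```

Terabytes per clear night is why local pre-stacking before upload looks so attractive.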


5 minutes ago, skybadger said:

Hmm, the ability to image at pace feels self limiting on the size of image files [...] Just thinking aloud though.

I'd use a different approach: the more you can do locally, the better. A distributed computing approach, like a network of nodes. People allocate space on a local node for a target and put their files in the designated space, and anyone can at any time request a stack of, say, 2" FWHM from anyone else on the network.

Local nodes then examine subs, select them, do a local stack, and upload only one file: the stack. The requester then stacks the submitted data locally.

At this point, I'd like to point out a few other important things. An algorithm is needed for effective stacking of incompatible data:

- Some of the data will contain diffraction spikes and some won't, and the diffraction spikes will not be oriented the same in each data set (rotation of the reflector OTA in its rings).

- Sensor QE and filter response curves will be different. Even luminance will be different. This makes color management next to impossible unless each contributor submits color calibration data along with the regular data, or at least data that has been converted to a common color space like XYZ.

- Vastly different SNR. A simple weighted approach will not give optimum results. There is no single SNR per image; every pixel has its own SNR. The stacking algorithm needs to weigh each pixel value accordingly when stacking mismatched-SNR data.

- Different sampling rates (this is by far the easiest thing to deal with: the requester specifies FOV and sampling rate along with a threshold FWHM, and receives already plate-solved and aligned data to be combined).

The above is true for any collaboration project, regardless of whether it aims to provide high-resolution data or not.
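The per-pixel weighting vlaiv calls for is essentially inverse-variance weighting. A minimal NumPy sketch (the tiny arrays and variance maps are made up just to show the mechanics; a real stacker would estimate per-pixel variance from the data itself):

```python
import numpy as np

def weighted_stack(images, variances):
    """Inverse-variance weighted mean, computed pixel by pixel.
    images, variances: lists of same-shape 2D arrays (already aligned)."""
    images = np.stack(images)
    weights = 1.0 / np.stack(variances)   # low-variance pixels count more
    return (weights * images).sum(axis=0) / weights.sum(axis=0)

# Two tiny "subs": one clean (variance 1), one noisy (variance 4).
a = np.full((2, 2), 10.0)
b = np.full((2, 2), 14.0)
stack = weighted_stack([a, b], [np.full((2, 2), 1.0), np.full((2, 2), 4.0)])
# The cleaner sub dominates: the result sits closer to 10 than to 12.
```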


It will be quite a challenge. Looking at the list of kit already signed up, I don't think there are two setups the same.☺️

But hey, "We choose to do these things, not because they are easy, but because they are hard".


Great-sounding project Rory - good luck, and really looking forward to seeing how you all get on. Wish I could contribute, but I've only just started learning imaging and my camera's an old, noisy, slow CCD, so not really what you're looking for unfortunately! Best of luck though :)

 


9 hours ago, badhex said:

No problem! [...] How much data are we talking about needing to store/upload? Or do you have that bit sorted?

A chap called @geeks on the server is heading that up. At the moment I'm just using Google Drive because it's so user friendly, but when we properly get going I don't know what we're going to switch to.


@rorymultistorey I could potentially be of some help here. Currently working with an Esprit 120 - 840mm focal length @ f7 and a QHY 268M. Not the best scope for the job (a fast 10”+ aperture scope would be ideal), but the camera is probably as good as it gets right now. If I operate in high gain mode at maximum gain, then I am down to 1e read noise, which in my suburban skies (SQM 20) means I am swamping read noise in luminance after around 5s. My image scale is 0.92”pp, and if I can get my EQ6-R guide RMS down to 0.4” or so then I can support that image scale, which would critically sample a FWHM of 1.5-2” or thereabouts. The question is then just whether or not I can achieve that FWHM with a choice selection of 5s subs. Your video certainly suggests a meaningful resolution improvement should be possible!
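A rough sanity check on those numbers: a common rule of thumb adds the main blur sources in quadrature, converting guide RMS to FWHM with the Gaussian factor of 2.355. The 1.5" seeing figure below is an assumption, and optics and sampling terms are ignored:

```python
import math

guide_rms = 0.4                    # arcsec RMS, the EQ6-R guiding target
guide_fwhm = 2.355 * guide_rms     # Gaussian RMS -> FWHM, ~0.94"
seeing_fwhm = 1.5                  # assumed decent-night seeing, arcsec

# Blur sources roughly add in quadrature:
total_fwhm = math.sqrt(seeing_fwhm ** 2 + guide_fwhm ** 2)
# ~1.77", consistent with the 1.5-2" range a 0.92"/px scale can sample
```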

Charles


6 minutes ago, cfinn said:

If I operate in high gain mode at maximum gain, then I am down to 1e read noise, which in my suburban skies (SQM 20) means I am swamping read noise in luminance after around 5s

Are you sure about this?

A back-of-the-envelope calculation suggests that you'll be getting about 0.4e/px/s from the sky. In a 5s exposure that means about 2e of background signal, or about 1.41e of LP noise. Hardly swamping 1e of read noise.

You need about 1 minute of exposure to truly swamp read noise (5:1, LP to read).
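vlaiv's arithmetic here is straightforward to reproduce (using his assumed 0.4 e/px/s sky rate and the camera's quoted 1e read noise):

```python
import math

sky_rate = 0.4      # e/px/s, vlaiv's estimate for this setup
read_noise = 1.0    # e, the camera's quoted figure

exposure = 5.0
background = sky_rate * exposure     # ~2 e of sky signal in 5 s
lp_noise = math.sqrt(background)     # ~1.41 e of shot noise, vs 1 e read

# For a 5:1 LP-to-read ratio we need lp_noise = 5 * read_noise,
# i.e. background = 25 e, so:
needed = (5 * read_noise) ** 2 / sky_rate   # 62.5 s, about a minute
```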

 


27 minutes ago, vlaiv said:

Are you sure about this? Back of the envelope calculation suggests that you'll be getting about 0.4e/px/s from sky. [...]
Yes. Running the calculation through here, the sky background in luminance (300nm bandpass) comes out at roughly 2.6e/px/s. I have captured a screenshot so you can see the parameters I have chosen. Following the reasoning here, the optimal exposure time to swamp read noise is roughly 10 x read_noise^2 / sky_background_rate, so that is around 5s.
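cfinn's rule of thumb is easy to put in code, using his quoted 1e read noise and 2.6 e/px/s sky rate (the factor of 10 is the sqrt(10) noise criterion he cites; vlaiv prefers a factor of 25, i.e. a 5:1 noise ratio):

```python
def swamping_exposure(read_noise, sky_rate, factor=10):
    """Exposure (s) at which sky shot-noise variance reaches
    `factor` times the read-noise variance:
    t = factor * read_noise**2 / sky_rate."""
    return factor * read_noise ** 2 / sky_rate

t = swamping_exposure(read_noise=1.0, sky_rate=2.6)   # ~3.8 s, "around 5 s"

# Because t scales with read noise squared, halving read noise to
# 0.5 e cuts the required exposure by 4x, to under a second.
t_future = swamping_exposure(read_noise=0.5, sky_rate=2.6)
```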


1 hour ago, cfinn said:

Yes. Running the calculation through here the sky background in luminance (300nm bandpass) will be roughly 2.6e/px/s. [...]

Well, there seems to be some discrepancy in the calculators used.

First, don't set 90% QE, as that is the peak QE for your sensor and it won't have such QE over the whole 400-700nm range. ~70% QE is a better approximation, and maybe use even less, because you did not account for system losses. Depending on the scope you can have 12 or more glass/air surfaces: an air-spaced triplet will have 6, a reducer/flattener will have at least 4, and a UV/IR cut filter will have 2. There is also the camera cover window, so we have now counted 14 air/glass surfaces. Even with the best coatings that transmit 99.5% of light, that totals 93.22% for just the air/glass surfaces.

Second, I prefer using x5 rather than x3.1622 (square root of 10) as the factor between read noise and LP noise.

With x3.1622 you'll get an increase of ~5% in noise over a read-noise-free sub, but with x5 you'll get only a 2% increase.

Third, I ran the above calculator and compared the result to my calculator, and the above seems to give an x3.5 higher electron count for the same parameters. Not sure why that is; possibly a different source of mag 0 flux used?
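Both of vlaiv's figures above check out numerically; a quick verification (14 air/glass surfaces at 99.5% transmission each, and total noise relative to a read-noise-free sub given by sqrt(1 + (read/LP)^2)):

```python
import math

# Transmission through 14 air/glass surfaces at 99.5% each:
transmission = 0.995 ** 14   # ~0.932, i.e. ~93.2% as stated

def noise_penalty(lp_over_read):
    """Total noise relative to a read-noise-free sub, when LP noise
    is lp_over_read times the read noise (quadrature sum)."""
    return math.sqrt(1 + 1 / lp_over_read ** 2)

p_316 = noise_penalty(math.sqrt(10))   # ~1.049 -> ~5% penalty
p_5   = noise_penalty(5)               # ~1.020 -> ~2% penalty
```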


3 minutes ago, vlaiv said:

Well, there seems to be some discrepancy in calculators used. [...]

All very good caveats, thank you. I did a couple of independent checks of that calculator a while back and it all seemed to check out OK. I completely agree on the QE: I was generous on that front, and you are right to say I have ignored transmission losses. You are also right that there is a choice to make on how close you want to get to a noise-free sub; my view is that we are well into diminishing returns by +5%, but I accept some may want to get even closer than that. Even with all of that considered, I do think this proves that it is possible to swamp read noise in subs of less than 10s duration with the latest CMOS cameras, which is what makes this whole exercise quite feasible in my opinion. I'm also operating at f/7, which is hardly ideal. Furthermore, the relationship between optimal exposure time and read noise goes as read noise squared, so even small reductions in read noise as CMOS technology develops will translate into big gains, making sub-exposures of 1s or less extremely viable. This means the bigger problem in my view is the compute power necessary to process all those tens or hundreds of thousands of subs!


3 hours ago, cfinn said:

@rorymultistorey I could potentially be of some help here. Currently working with an Esprit 120 and a QHY 268M [...]

Sounds great. And you're absolutely right about having the right camera for the job. I found that in London (Bortle 8/9) with my f/6 newt I had to shoot 20-second subs to get enough stars to stack (around 20 is enough), but when I went to a Bortle 5 field outside London, 5-second subs worked fine. Your camera's pixels are bigger than mine (ASI178), and your scope is probably soaking up a similar number of photons. The best thing to do is forget about what the naysayers tell you and just try it. I have APP loaded on my laptop, so after I've taken a few subs I can quickly check that they stack. I think your Esprit will work very well.

  • Like 1
