Crowd-sourced deep field?


Synchronicity


This is a thought experiment but my knowledge is too limited to get to the end of it, hence I'm asking the multimind here 😉

If all the imagers here concentrated on the same piece of sky for one or more nights we could amass thousands, or even tens of thousands of hours of data.
Assuming we could share that data and filter it for quality, and had access to the computing power, what are the limitations which would stop someone combining that into an ultra deep field type of image?  Would there be a limit to the magnitude imagable from within the atmosphere, would distant objects be too small to resolve or what else would limit what could be achieved?

I'd be interested in everyones thoughts.

Michael


Surely the resolution of the vast proportion of the scopes used would be the limiting factor. It wouldn't matter how many scopes were used if they didn't have the capacity to see a particular far-distant object.


I think light pollution in the atmosphere will be a big limiting factor; even in dark locations the sky is not that dark (usually around mag 22), and most people live in cities.

Where I live the sky is mag 19, so I can't really go very deep.

Then there is the issue of calibrating the images, with read and thermal noise differing between sensors.

The BAT is more reasonable: it's trying to do lucky imaging for DSOs, so only short exposures are needed. It is not aiming to go deep, but to improve DSO imaging resolution to 1 arcsecond and below.

Edited by Nik271

LP is actually not a limiting factor.

It is just a source of light pollution noise, and like other types of noise, you just need enough integration time to reach the target SNR.
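As a rough sketch of why light pollution costs time rather than setting a hard floor (the flux numbers below are invented purely for illustration): in the sky-limited regime the target signal grows with t while the sky noise grows with sqrt(t), so a sky 10x brighter needs roughly 10x the integration to reach the same SNR.

```python
import math

def sky_limited_snr(target_flux, sky_flux, hours):
    """SNR when sky (light pollution) noise dominates:
    signal grows with t, sky noise grows with sqrt(t)."""
    t = hours * 3600.0  # seconds
    signal = target_flux * t
    noise = math.sqrt(sky_flux * t)
    return signal / noise

# Hypothetical fluxes in e-/s per pixel (illustration only)
dark_10h  = sky_limited_snr(0.01, 1.0, 10)    # dark site, 10 h
city_10h  = sky_limited_snr(0.01, 10.0, 10)   # 10x brighter sky, 10 h
city_100h = sky_limited_snr(0.01, 10.0, 100)  # ...but 100 h catches up
```

Under these assumptions `city_100h` equals `dark_10h` exactly, which is the "just need enough integration time" point.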

What does present a challenge is:

a) resolution

b) the lack of efficient algorithms, implemented in available software, to efficiently stack data from different sources

8 hours ago, Tiny Clanger said:

Do you know if that collaboration has actually produced any results so far, and where they can be seen?


2 minutes ago, tomato said:

Images and data are available from the link. I collaborated in the early days, but I was spending too much time trying to keep up with the lively Discord forum - SGL is enough for me. Some of the results have been quite impressive IMHO.

https://bigamateurtelescope.com/

Not sure if I can find any.

There are some images on that website - but no explanation / annotation for them?

There are several targets and a reference frame. Those pages also contain images of the targets - but I'm not sure if these are the result of the collaboration?

The download data link says it is only available to members ...


I took part in one once. Rather than image from the light-polluted Earth, we put a telescope into space. It was crowd-funded using taxes (you know where I'm going with this...)

On the original point: it might not be doable, but it would be fun to try, and the results would probably be spectacular.


You need to join the Discord site to get the real value and feel the traffic - there are examples of output, and of how they are achieved, on there.

There is a process of identifying whether you can generate images sufficiently well to enter them into the data stack for merged data.

I'm not clear myself whether the original aim - high resolution from high-frequency stacking of short (~1 sec) exposures with very low read noise cameras - has been sufficiently well demonstrated though.


3 minutes ago, skybadger said:

I'm not clear myself whether the original aim - high resolution from high-frequency stacking of short (~1 sec) exposures with very low read noise cameras - has been sufficiently well demonstrated though.

My view is that that sort of imaging is only feasible with large apertures (12"+), and it only provides a means to approach the upper seeing resolution bound - not to cross it (unlike planetary imaging, where exposures are in the millisecond range).

I'd be really surprised to see an image with an effective resolution better than 1"/px.


Something like the B.A.T could be interesting as a community challenge of sorts for SGL users. Maybe users would calibrate and stack their own data and upload the stacks here, and then we could all stack the stacks to produce our own images and share them. That would skirt around the terabytes-of-data issue while also bringing some of the benefits of very long integrations into play.

It would be fairly easy to choose a target that most users here can image at certain times of year, like M101, which is fairly large and can be imaged with a wide variety of scopes, so no data is truly lost in the end. The soft stacks could go towards a nebulosity image and the sharp stacks could provide detail for the star/galaxy disc layer.

Just an idea I had in mind, but it seems doable.


I joined the BAT, but like tomato found the Discord too discordant, so dropped out. My current setup is really not suitable unless I put an IMX571 or IMX455 camera on my ODK12, something I'm in no hurry to do.


I just analyzed an M51 image from Astrobin.

It has a sampling rate of ~0.52"/px, yet it is oversampled by a factor of x2.75 - x3 (frequency analysis suggests between 5.5px and 6px per cycle).

This means the data actually captures about 1.5"/px. It did not even get close to 1"/px.
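For reference, the arithmetic behind that ~1.5"/px figure can be sketched as follows, taking 2 px per cycle as critical (Nyquist) sampling:

```python
sampling = 0.52               # arcsec/px, as sampled
cycle_px = (5.5 + 6.0) / 2    # px per cycle, midpoint of the quoted range
oversampling = cycle_px / 2   # Nyquist-critical sampling is 2 px/cycle
effective = sampling * oversampling  # effective arcsec/px, ~1.5
```

So a x2.75 - x3 oversampling factor at 0.52"/px puts the real detail scale at roughly 1.5"/px.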


30 minutes ago, ONIKKINEN said:

Something like the B.A.T could be interesting as a community challenge of sorts for SGL users. Maybe users would calibrate and stack their own data and upload the stacks here, and then we could all stack the stacks to produce our own images and share them. [...]

Could be, but we need to be careful about several things.

First is SNR compatibility. Each stack that users provide would also need an SNR map to go with it.

Regular stacking only works when the images have the same SNR. If you combine images with different SNR, you can even "spoil" the better stack with the worse one.

Here is a simple example of how things work in this regard.

Say you have an SNR 10 image and an SNR 2 image - what SNR will their average have? Take the simple case where the signal in each is 10 and the noise is 1 and 5 respectively (10/1 = 10 and 10/5 = 2).

The average of 10 and 10 is, well, 10, so the signal stays the same. And what is the average of the noise? It is the noises added together, divided by the number of items added. The only thing to be careful about is how noise adds: not by simple addition, but as the square root of the sum of squares.

We will thus have sqrt(5^2 + 1^2) / 2 as the noise average. That is sqrt(26) / 2 = ~2.55.

The resulting stack SNR is 10 / 2.55 = ~3.92!

We started with one image of SNR 10 and ended up with a resulting SNR of 3.92!

Hold on, you might say, how does stacking work at all then? Well, let's try the same thing with two images both having SNR 10 - that is, signal 10 and noise 1.

The signal part is easy - the average will again simply be 10.

We repeat the noise part as sqrt(1^2 + 1^2) / 2 = sqrt(2) / 2 = ~0.7071.

The resulting SNR is thus 10 / 0.7071 = ~14.14.

We have improvement!
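Both cases above can be checked in a few lines (a plain unweighted average, with independent noises adding in quadrature):

```python
import math

def average_snr(signals, noises):
    """SNR of an unweighted average of images whose independent
    noises add as the square root of the sum of squares."""
    n = len(signals)
    avg_signal = sum(signals) / n
    avg_noise = math.sqrt(sum(x * x for x in noises)) / n
    return avg_signal / avg_noise

mixed = average_snr([10, 10], [1, 5])  # SNR 10 + SNR 2  -> ~3.92
equal = average_snr([10, 10], [1, 1])  # SNR 10 + SNR 10 -> ~14.14
```

The mixed pair lands below the better input; the matched pair gains the familiar sqrt(2) improvement.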

This is why we must use a weighted average, where a poor SNR image contributes much less to the final stack than a good SNR image. The problems with this, however, are:

a) no implemented algorithm uses per-pixel weights (PI, for example, uses per-sub weights - and only a uniform gray sub has the same SNR for all pixels; regular images have a different SNR for each pixel, and it can span very large values, from SNR 50+ down to below 1)

b) no implemented algorithm has good enough SNR estimation

Luckily, there is a method we can use if each submitted image comes with an SNR map (an SNR value for each pixel). To produce one, next to the signal stack we must also submit a stack made with standard deviation stacking instead of regular average stacking. This gives us the noise for each pixel.

Then we can pose the following question:

If we have the expression

sqrt((c1*n1)^2 + (c2*n2)^2 + (c3*n3)^2 + .... + (ck*nk)^2)

with c1 + c2 + c3 + .... + ck = 1,

for any given pixel with noises n1, n2, n3, ...., nk,

what are the c1, c2, c3, .... that minimize it?

We can take the first derivative, equate it with zero, and solve the resulting set of equations; in the end we have the coefficients c1, c2, .... ck to use, per pixel, to stack just that pixel. We then repeat the same thing for all the remaining pixels of the final stack.

And we have not even touched the problem of different resolutions in each image :D (nor differences in QE between sensors and filters, and the fact that the signal won't be the same because of this).


7 minutes ago, Synchronicity said:

From their website I found this on Astrobin. It seems more like what I was thinking about, in that it maybe captures more "galactic cirrus" and brings out faint distant objects in the background.

https://www.astrobin.com/7ez0pl/0/

Michael

I'm not sure how much integration time that is, but I've seen similarly deep or even deeper images done by a single imager.

I don't know how they combine subs - but see the post above: I'm not aware of any software that will perform the necessary computations when combining images with largely different SNR - and an ineffective combination can lead to a drop in SNR compared to even a single source (the result can be worse than the best submission if one is not careful).


21 minutes ago, vlaiv said:

Could be, but we need to be careful about several things.

First is SNR compatibility. Each stack that users provide would also need an SNR map to go with it. [...]
Hmm, much more complicated than I thought. Gonna need a cup of coffee to digest all this.

This would play well as a processing challenge in that regard, and those skilled enough to use only the best data, or to implement stacking that makes the best of all the data, will surely have the best image.

The resolution issue could be partially solved by categorizing the stacks into FWHM ranges - let's say 3" and better, 3"-4", 4"-5" and so on. Most will probably produce not-very-sharp data, but that depends on who participates. It also works as a processing challenge in the end.


I don't think the M51 result was one of the better BAT examples, but I was impressed with the M27, NGC891 and NGC7331 images. I wouldn't argue with the science Vlaiv has outlined, but I think the collaboration result is better than each individual's contribution, and some are qualitatively comparable to images produced from larger aperture scopes in premium locations, so to that extent the project was a success.


It's easy to throw stones until you actually engage with the team properly; they have a tier of professionals engaged in this to work out the details you describe, Vlaiv - details I haven't really tried to discover much about.
