
Group Image?



All things being equal it's a great idea and would produce a deep image quickly. Sadly, things are not equal, particularly in terms of the quality of the data generated. Just as when capturing an image on your own you would select the best subs to combine into your final image, in a collaboration where data is collected from a vastly differing range of kit and experience there would be a consequently huge range in data quality. Whoever did the post-processing would have to make harsh decisions on what to include in the final image.

ChrisH


As Chris and Dave say, equal kit would be needed, but it can be done. A good subject would be M42, the great Orion nebula, being bright and easy to image. Small fracs would be best, say an ED80 and DSLR to keep it simple, with rules set about image size etc., but that's not to say it couldn't be done as well with a Newt and CCD. It's a good subject and idea, but I think the plan needs to be worked out well. Charl.


It can certainly be done. There have been a number of good collaborations on here. One that's ongoing is between Gnomus and Barry Wilson, but there have been a number of others. Registar makes it relatively easy, but only relatively! For me the ideal target would be a faint one needing an awful lot of data.

Olly


That is an interesting idea.

"the final image" "harsh decisions" "the plan"
Sounds like a committee would be needed, and then a camel... leader chosen, and first past the post, or single transferable vote, and... oh heck!!

But if each contributed to a pool of images, then each could choose his/her own selection to add to his/her own post-processing. Perhaps a final image would then emerge by acclamation :)

Could certainly keep us all busy on rainy days :):) I'll get my hat & coat


I hope I didn't make it sound like such a project cannot work! It can, and I've contributed to one previously. I think the best method is to split the task into channels (Lum, NB, R, G, B, etc.): those with longer focal lengths and sharper optics go for the Lum/Ha data, while others concentrate on the chrominance data. That spreads the load and optimises the results. The project still requires someone with skill (and time) to stitch it all together.

ChrisH


To begin with, we have to choose a good target which suits everyone. Charl's choice sounds good (M42). What makes this possible is that we are able to download the images and files that people have taken and process them ourselves. Thank you SGL! :happy7: I don't want to jump too far ahead, though.

seb


People may not want to hear this, but PixInsight is supposedly able to align and register subs from any FL (any scope), automatically rescaling the images to the shortest-FL sub that is chosen as the master. Maybe all software can do this; I'm not sure. But I think a project like this is a wonderful idea. I have wanted to use 4-5 scopes to capture data on a target for a long time. You could get 20-24 hours of data in one night!

Rodd


With M42, the team could combine images with various fields of view, e.g. a wide-angle view in Ha to capture all the surrounding hydrogen gas clouds, then the main nebula, and finally the inner part, which is much brighter. This is the sort of thing I have been thinking of myself as a single-person project using different imaging rigs.


Here's a suggestion, bearing in mind comments elsewhere about aesthetics!

What about creating a massive mosaic by 'dropping in' SGL members' images onto a widefield view of a big chunk of the sky?

Rather than worrying about exactly matching the images, the idea would be to produce an image you could get poster-printed, with lots of smaller images very obviously aligned onto it, plus a key and numbers so you can say 'that's my one'. Rather than trying to be an infinitely detailed image, using a relatively small scale for the individual pics would enable SGL members of all levels of ability and equipment to shine. No doubt there would be competition for the best targets...

An alternative would just be a big image comprising, say, 100 images by SGL members, with a strict rule of no more than a few images per subject and no more than one or two images per person. Candidate images to be uploaded to a thread here.

If it worked well, perhaps FLO could find a print house willing to produce a run of posters at a reasonable cost. I'm sure most contributors would buy one or more, and perhaps a few cents could be added on for a charitable good cause.


I think it is doable, and I have been considering an algorithm to do this in a bit more depth. There are a couple of things that need to be addressed first.

Registering and stacking images of different resolutions is not a big deal. It can be done rather easily. I think an algorithm for choosing the target resolution can be implemented easily, so we don't have to choose either the highest or the lowest resolution, but rather the one that best fits the data collected. It would use a combined approach drawn from drizzle and binning: if the sub being stacked is at a higher resolution than the target, binning would occur; if at a lower one, a kind of drizzle. One might think of it like this: instead of treating pixels as point samples, we should treat them as surface squares that collected a certain amount of light. Then we take a target grid (also surface squares) and project it onto each sub (based on resolution and registration info). Then we simply see how much light would fall on each target square if it were in place of the original sub's grid.
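For integer scale factors on axis-aligned grids, the surface-square picture above reduces to two simple flux-conserving operations. This is only a sketch of the idea, not the general projection with sub-pixel registration offsets:

```python
import numpy as np

def bin_down(img, n):
    # Each target square covers n x n source squares: sum all the
    # light that fell on them, so total flux is conserved.
    h, w = img.shape
    img = img[:h - h % n, :w - w % n]  # drop ragged edges
    return img.reshape(h // n, n, w // n, n).sum(axis=(1, 3))

def drizzle_up(img, n):
    # Each source square's light is spread evenly over the n x n
    # finer target squares it covers; flux is again conserved.
    return np.kron(img, np.ones((n, n))) / (n * n)

img = np.arange(16, dtype=float).reshape(4, 4)
assert np.isclose(bin_down(img, 2).sum(), img.sum())    # flux preserved
assert np.isclose(drizzle_up(img, 2).sum(), img.sum())  # flux preserved
```

In the general case, where the target grid is shifted and scaled non-integrally with respect to a sub, each target square would instead accumulate light in proportion to its area of overlap with each source square.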

Calibration is a bit trickier. People shoot images under different conditions, often with unknown ADU values. The algorithm should be able to do two things: adjust the background level (tricky business: not everybody's skies are the same in terms of light pollution), and adjust the brightness of stars to match, using a kind of photometry (not everybody will have the same transparency or shoot targets through the same air mass). And this brings us to a third point that is not easily addressable: the different response curves of the equipment used, including the mirror coatings of reflectors and the sensor sensitivity over wavelength, coupled with the filters used.
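The first two adjustments (background level, then star brightness) can be sketched crudely as follows, under the assumption that one common reference star has been measured in every sub; the function name and the use of a simple median are illustrative, not a real photometric pipeline:

```python
import numpy as np

def normalize_sub(sub, star_flux, ref_flux):
    """Crudely calibrate one sub against a reference.

    1. Background: subtract the sub's median as a robust estimate of
       the sky pedestal (light pollution differs per site).
    2. Brightness: scale so a common reference star carries the same
       flux in every sub (a stand-in for proper photometry, which
       varies with transparency and air mass).
    """
    flat = sub - np.median(sub)
    return flat * (ref_flux / star_flux)

sub = np.full((3, 3), 50.0)  # sky pedestal of 50 ADU
sub[1, 1] = 150.0            # a star sitting on it, 100 ADU above sky
out = normalize_sub(sub, star_flux=100.0, ref_flux=200.0)
print(out[0, 0], out[1, 1])  # 0.0 200.0
```

A real implementation would fit the background as a 2-D surface and match many stars at once; the third problem, differing spectral response, cannot be fixed by scaling alone.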

In theory, if each image or image set is coupled with a proper response curve, then an algorithm can be implemented to produce combined images in really fine-grained wavelength slices, instead of the usual R, G, B components (which vary depending on the filters and sensor used anyway). In this way we could combine narrowband data with R, G, B and L data in the algorithm itself. This would produce a very realistic colour rendering of the target.

Other things that need more consideration:

What is the impact of stacking low- and high-SNR images together? (How much improvement will there be, or will there even be any? Can low-SNR images spoil the result?)

Would there be a rejection threshold for frames in terms of quality (SNR, fixed-pattern noise, FWHM values, or something similar)? Would we implement an algorithm to attempt deconvolution of subs that have issues with FWHM and tracking errors (elongated stars and such) before stacking? The algorithm would select the subset of frames which pass the "good enough" criteria, stack those in a first iteration, and obtain the expected star profiles; then, for each of the remaining subs, it would determine the PSF kernel to use for deconvolution, based on the expected versus actual star profile in that sub. There could even be a spatially dependent PSF kernel, so that we could correct coma, astigmatism and field curvature in these frames, if there is enough data to provide a low-noise PSF.
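The first-iteration selection described above might be gated by a simple quality filter; the statistic names and threshold values below are placeholders, not figures from this discussion, and would need tuning against the actual data pool:

```python
def passes_quality(stats, max_fwhm=3.5, min_snr=5.0):
    """Gate a sub on per-frame quality measurements.

    `stats` is assumed to hold a FWHM (arc-seconds) and an overall
    SNR estimate measured for that sub. Subs that fail go to the
    second pass (deconvolution or rejection), not into the first stack.
    """
    return stats["fwhm"] <= max_fwhm and stats["snr"] >= min_snr

subs = [
    {"fwhm": 2.8, "snr": 12.0},  # good seeing, deep sub -> first stack
    {"fwhm": 5.1, "snr": 9.0},   # bloated stars -> candidate for deconvolution
    {"fwhm": 2.5, "snr": 3.0},   # too shallow -> reject
]
first_pass = [s for s in subs if passes_quality(s)]
print(len(first_pass))  # 1
```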

Spike orientation, for data coming from reflector telescopes that have a secondary-support spider? Depending on the data distribution, this could be handled with some kind of sigma-rejection stacking (a spike present in some images at a certain orientation would be excluded, because most of the images have no spike at that position), or using a deconvolution technique (make a star profile from subs that don't have spikes, then compare it against the spiked stars to derive a "spiked" PSF that we can use to deconvolve the subs containing spikes).
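The sigma-rejection route mentioned above is the simpler of the two; here is a minimal sketch assuming the subs are already registered onto a common grid, with an artificial "spike" pixel standing in for a diffraction artefact present in only one sub:

```python
import numpy as np

def sigma_clip_stack(subs, kappa=2.5):
    """Mean-stack registered subs, rejecting per-pixel outliers.

    A spike present in only a minority of subs deviates strongly from
    the per-pixel median, so it is masked out before the mean is
    taken. kappa is the usual clipping threshold in sigmas.
    """
    cube = np.stack([np.asarray(s, dtype=float) for s in subs])
    med = np.median(cube, axis=0)
    sig = np.std(cube, axis=0)
    mask = np.abs(cube - med) <= kappa * sig
    return np.nanmean(np.where(mask, cube, np.nan), axis=0)

# Eight flat subs; one carries a bright "spike" pixel.
subs = [np.ones((4, 4)) for _ in range(8)]
subs[0][1, 1] = 100.0
stacked = sigma_clip_stack(subs)
print(stacked[1, 1])  # 1.0 -- the spike pixel was clipped out
```

With real data the clipping would be iterated, and one would check that enough spike-free subs cover each orientation for the median to be trustworthy.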

So I think it is doable if we can come up with solutions to the problems presented, then implement an adequate algorithm, and also build the support infrastructure for gathering samples (like a dedicated web service where people can submit their work to be included in these "deep fields"). I also think such a service would be super cool and would have a large impact on the amateur community if it worked well.


This was done on Stargazing Live a couple of years ago, also with the Orion Nebula, so it's certainly doable (I can't find the final image, though).

How easy it is, is another question. For me, I suspect it would be impossible, but there are some smart folk on here, so I'll keep an eye on this for sure.


6 minutes ago, Stub Mandrel said:

Or simply normalise all images, align and stack and see what happens! I bet that's what SL did!

In the general case, the results are unpredictable. Let me demonstrate with an example (an extreme case, but it will illustrate my point):

One of us puts forward images in H-alpha, someone else in OIII, and a third person in, say, H-beta. With a dedicated algorithm that considers wavelengths and sensor/filter response, such images can be stacked. With simple stacking you'll get a mess, since each set will see the other sets as noise without any real signal. Now you could say, OK, let's stack OIII data with OIII data, but as I've said this is an extreme case: what if we add full-spectrum data, and mono data taken with some UHC filters, into the mix? How do we stack such data together? Then consider someone adding OSC data taken with camera A, and then somebody else OSC data taken with camera B.

In a small-scale "deployment" it is certainly feasible for a couple of people to contribute well-characterised data to the stack, for example all using the same type of equipment, or each using the same band-pass filters.

But if you consider something that could be described as "large-scale deployment", in the sense of a dedicated online service where many hundreds of amateurs can contribute data, there has to be a well-designed algorithm that can handle all the differences in a sensible manner.

And just imagine it: a call to the community for everyone interested to capture target XY in the next month and submit their data for a joint effort. If there turned out to be, say, a hundred participants, each contributing 5 h worth of exposures on average, we might end up with 500 h worth of exposures. That would mean some serious SNR, which could even be traded for things such as super-resolution (a concept I'm also thinking about: probability-based stacking, or in simple terms an algorithm that answers the following: given all the data and the characteristics of the equipment (PSF, noise characteristics, etc.), what is the most probable original image that would produce such a dataset?), or for frequency granularity, given enough different response curves.

Anyway, this is just a vision, and maybe too broad with respect to the topic of this discussion, so please forgive me; I could not resist, since I've been thinking about this for quite some time now :D


The determining factor is whether someone is willing and able to put in all the bespoke programming that stacking that data would call for. Plus, would that exclude people taking shorter OSC images? I must admit I was thinking 'OSC', or at least 'meta-stacking' of finished images, for my mass-stacking approach.


Well, I was thinking of doing the actual programming, because that's what I do anyway when I'm not doing amateur astronomy :D, but at the moment I'm pretty much tied up at work (a couple of projects with deadlines). Anyway, such a project should certainly be open source, so that all those who have the skill and want to contribute can participate. I think it might appeal even to programmers less skilled in image processing and maths; it will certainly need a healthy dose of support infrastructure, like a website/service for uploading data and downloading results.

I was thinking of accepting almost any kind of data (mono wide and narrow band, OSC), provided that it complies with a couple of guidelines: area coverage, min and max resolution, maybe a threshold SNR, FWHM or such. People would still be able to upload their work, but an automated system would determine whether the data passes the criteria to be included in the result set.

Probably the most difficult thing would be getting people to provide metadata with their images, things like sensor/filter response. We could have an extensible library of cameras and filters, so people can simply select the camera model and any filters used, and enter the time and place of the session (to help with air-mass estimation; that might not even be necessary, as I think frames can be normalised, at least those coming from the same sensor, though I will have to think a bit more about how to normalise values coming from setups with different wavelength responses). There should also be an option to add new camera models and filters.
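A minimal sketch of what such a submission record might look like; every field name here is an assumption for illustration, not an agreed schema:

```python
from dataclasses import dataclass, field

@dataclass
class SubmissionMeta:
    camera_model: str   # picked from the shared camera library
    filters: list       # e.g. ["Ha 7nm"]; empty list for full spectrum
    site: tuple         # (latitude_deg, longitude_deg), for air-mass estimation
    start_time_utc: str # session start, ISO 8601
    # wavelength_nm -> relative response, filled from the camera/filter library
    response_curve: dict = field(default_factory=dict)

meta = SubmissionMeta("ASI1600MMC", ["Ha 7nm"], (44.8, 20.5),
                      "2018-01-15T21:30:00Z")
print(meta.camera_model)  # ASI1600MMC
```

The point of keeping the response curve in the record, rather than asking users to type it in, is that selecting a known camera and filter combination could populate it automatically from the shared library.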

I'm certainly going to do a pilot project of sorts. My plan is to complete the following setup in the next few months: an RC 8" + ASI1600MMC, with a TS80 F/6 triplet combined with an ASI185MC, in a side-by-side arrangement. Both setups give a similar FOV, the TS80 with ASI185 being a few percent wider, but really not by much. The original idea was to capture luminance with the RC and colour information with the TS80 at the same time. It will certainly help with integration time: twice the data for the same imaging time. I will also have to employ different-resolution stacking, and test the final stack at some third resolution: the RC 8" + ASI1600MMC is oversampling my seeing and guiding at 0.5"/pixel and I'll probably be targeting 1"/pixel, while the TS80 + ASI185 is at something like 1.6"/pixel.
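The quoted sampling rates follow from the usual pixel-scale formula. The focal lengths used below (about 1624 mm for the RC 8", 480 mm for the TS80 at F/6) and pixel sizes (3.8 µm for the ASI1600, 3.75 µm for the ASI185) are taken from typical spec sheets, so treat them as assumptions rather than figures from this post:

```python
def pixel_scale(pixel_um, focal_mm):
    # Arc-seconds per pixel = 206.265 * pixel size (um) / focal length (mm)
    return 206.265 * pixel_um / focal_mm

print(round(pixel_scale(3.8, 1624), 2))  # 0.48 -> the ~0.5"/pixel quoted for the RC
print(round(pixel_scale(3.75, 480), 2))  # 1.61 -> the ~1.6"/pixel for the TS80
```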

So after I complete this setup I'll probably write an "in-house" stacking solution to process this data. I already have a stacking and processing method of sorts for such data using standard programs like DSS and ImageJ, but I wanted to have a go at programming my own thing :D. Yes, one more thing: in this "in-house" solution I planned to have mosaic stacking. I've done mosaics many times, including with the RC 8" and ASI185 (really the only way to get a decent FOV with a long focal length / small sensor setup), and was always frustrated by the extra steps needed to calibrate and stitch the mosaic pieces.


Archived

This topic is now archived and is closed to further replies.
