
I have been pursuing the impossible task of astrophotography for a couple of years now and the penny has just dropped: the sheer quantity of data processed does not necessarily equal a good-quality final image; rather, it is the quality of the data that counts. To quote someone who knows more than I do about this subject, 'Put crap in, get crap out!'
As a quick example, I imaged Saturn a few days ago with a 4" refractor (planetary is not usually my thing). I captured a 5000-frame video and processed it in AutoStakkert! and RegiStax, then tried decreasing the number of frames used for stacking in each image. The processing was exactly the same for each image. Obviously a 4" scope with a shortish focal length is only going to capture a limited amount of detail on this subject, but hopefully it demonstrates the point. Maybe 1% would be perfect!

[image: FRAMES 2.jpg]
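The best-fraction selection behind this experiment can be sketched in a few lines. This is not AutoStakkert!'s actual algorithm, just a minimal illustration that uses Laplacian variance as a stand-in for its quality metric; frames are assumed to be registered 2-D NumPy arrays:

```python
import numpy as np

def sharpness(frame):
    # Variance of a discrete Laplacian response: blurred frames score low,
    # frames caught in moments of steady seeing score high
    lap = (-4.0 * frame
           + np.roll(frame, 1, axis=0) + np.roll(frame, -1, axis=0)
           + np.roll(frame, 1, axis=1) + np.roll(frame, -1, axis=1))
    return lap.var()

def stack_best(frames, keep_fraction=0.10):
    # Rank all frames by sharpness, then average only the best fraction
    order = np.argsort([sharpness(f) for f in frames])[::-1]
    n = max(1, int(len(frames) * keep_fraction))
    return np.mean([frames[i] for i in order[:n]], axis=0)
```

Sweeping `keep_fraction` from 1.0 down towards 0.01 reproduces the experiment above: SNR falls as frames are discarded, but the frames that remain are progressively sharper.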


You might try to debayer your data to get an even better result.

In any case, lucky imaging is a trade-off: you want to use only the best subs, captured when the seeing is effectively frozen and distortion is minimal, but you also want to stack as many subs as possible to get the good SNR that will help with sharpening.


Interesting discussion; here is a post I made on the same topic. It is very tempting to put all the subs in, in the hope of improving the SNR, but I'm getting increasingly selective about the data I choose to process, especially as my object data files grow ever larger with every season that passes.

 


Yes Tomato, exactly what I am finding. I recently imaged the Wizard Nebula and managed around 10 hrs of data over 3 nights. My first session was just over 3 hours' worth, so when I got to 10+ hrs I expected a dramatic improvement. After throwing out the obviously bad subs with star trails, clouds, the cat playing with cables etc., there were still a lot of subs that fell below the 'benchmark' I had set in DSS from the first batch of images. The temptation is to say 'sod it' and put everything in the mix regardless (technical term), but I have slowly begun to realise this is self-defeating in terms of final image quality.


DSO / long-exposure imaging is different.

There are techniques that allow you to utilize all the data gathered, even if some of that data suffers from more LP, worse SNR and worse FWHM.

 


I recently posted an image of M31 which had a lot of subs from my RASA 8 with poor star shapes. Despite my best processing efforts the stars detracted from the image, and the intensive processing introduced other artefacts. After receiving feedback I discarded two thirds of the RASA data and, although the image was not as deep, it was, IMHO, an improvement. What do you think?

All Data

[image attachment]

Two thirds of RASA data removed

[image attachment]

 


4 minutes ago, tomato said:

What do you think?

I think several things :D

- In order to utilize all the data, you need to work with appropriate algorithms. Just throwing all the data together is not the proper way to go about it.

- Comparing two differently processed images can't exclude the effect of the different processing parameters. The top image is clearly pushed much harder and the color management is over the top (saturated beyond repair :D ).

The only things that should be compared between two stacks are:

- resulting FWHM

- any gradients that can't be removed (nonlinear gradients at linear stage)

- achieved SNR

There is even a trade-off between the first and last points here. If you have good enough SNR you can afford to sharpen the data more, and you can end up with a sharper image than if you only keep "sharp" subs but don't achieve good enough SNR to be able to sharpen the image.

 


All valid points, and you are correct the processing is very different, but I still don’t believe a better image could be achieved with the original full set of data.

Garbage in, garbage out.


5 minutes ago, tomato said:

All valid points, and you are correct the processing is very different, but I still don’t believe a better image could be achieved with the original full set of data.

Garbage in, garbage out.

It is about how you stack that data.

You really want two things. The first is an advanced algorithm that stacks based on actual SNR, not "estimated weights".

The second is an algorithm that equalizes FWHM across subs using deconvolution. This takes subs with high FWHM (poor seeing) and improves their sharpness at the expense of SNR.

The former algorithm will deal with that SNR deterioration.
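The FWHM-equalization idea can be illustrated (this is not vlaiv's actual algorithm, just a sketch) with a plain Wiener deconvolution: model the extra seeing blur of a soft sub as a Gaussian PSF and divide it out in the frequency domain. The `snr` regularization term is what limits the noise amplification mentioned above; all names and values here are illustrative:

```python
import numpy as np

def gaussian_psf(shape, fwhm):
    # Centered Gaussian PSF with the given FWHM in pixels
    sigma = fwhm / 2.355
    y, x = np.indices(shape)
    r2 = (y - shape[0] // 2) ** 2 + (x - shape[1] // 2) ** 2
    psf = np.exp(-r2 / (2.0 * sigma ** 2))
    return psf / psf.sum()

def wiener_deconvolve(sub, psf, snr=100.0):
    # Divide out the blur in the frequency domain; the 1/snr term damps
    # frequencies where the PSF carries no signal, limiting noise blow-up
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(sub)
    F = G * np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(F))
```

This is the trade-off in miniature: a larger `snr` value sharpens more but amplifies noise, which is why the SNR-based stacking has to pick up the slack.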

 


11 minutes ago, vlaiv said:

It is about how you stack that data.

You really want two things. The first is an advanced algorithm that stacks based on actual SNR, not "estimated weights".

The second is an algorithm that equalizes FWHM across subs using deconvolution. This takes subs with high FWHM (poor seeing) and improves their sharpness at the expense of SNR.

The former algorithm will deal with that SNR deterioration.

 

Were you working on some algorithm to that effect, or have I completely made that up? 

Edit: ok, didn't see the link to the other post. Clearly you were! 😁

Do you know if anyone ever picked it up for PI?


Mmmm... interesting. It brings into question what we are actually trying to achieve when imaging. Is it an aesthetic thing, or is it about trying to be 'faithful' to what exists in the vastness of what is out there? In other words, do we go with what we see/capture, or do we invent colour, tone, scale, star size etc.? Astrophotography, at least in the end result, seems more like art than science. You only have to compare images of the same subject from different astrophotographers to see that it is more about interpretation than about reality. BTW I think they are both wonderful images in their own right. I mean, not so many years ago images like this would not have been achievable by Joe Public (no disrespect intended) 🙂


There is no simple answer to this question and any simple answer will almost certainly be wrong. The most important thing to understand in DS image processing is that you cannot expect to process all parts of the image in the same way if you want a good result. This has been called 'The Zone System,' meaning that different parts of the image have different processing requirements and should be worked on differently. It follows logically from this that different parts of the image have different requirements in terms of quality.

Take Tomato's Andromeda.

1) Stars need to be round so should be made from a stack only of subs with round stars. You have plenty of signal on stars so you can be selective.

2) Fine disk details require quality focus, tracking and sky transparency. Select your stack accordingly. It is likely to be the same as the star stack made from subs with a good FWHM, as Vlaiv says.

3) That faint, soft glow of nebulosity which is so extensive around the galaxy if you take the time to work on it. All it needs is signal. It contains no details, so who cares if it's blurred? Throw everything you've got at it. The outer glow is miles better in Tomato's full stack, though the colour balance is better on the short stack. Have the best of both worlds!

Provided you have the processing knowledge to be able to combine them, this will give the best result and use an awful lot of your data at least somewhere in the image.

Whenever I'm working on a channel or a layer I ask myself: what am I going to do with this layer? Only when I have the answer to that question do I know how to process it. Say I'm working on an Ha layer to blend with red. I can stretch the Ha's background well beyond the noise floor because, the way I add Ha to red, the Ha background will be fainter than the faint red background, so it won't appear in the final image.

Olly


Just to summarise: the smart algorithm to stack data based on @vlaiv's criteria doesn't currently exist in any of the available processing packages (DSS, PI, APP, AP, Siril etc.)? I use the default settings in APP for sub selection and weighted stacking, but there might be better ones, more suited to what needs to be achieved with the dataset.

Certainly using @ollypenrice's approach there is scope to combine the data to achieve a better balance between star shape and colour and the nebulosity surrounding the galaxy. I'll have another go.


12 minutes ago, tomato said:

Just to summarise: the smart algorithm to stack data based on @vlaiv's criteria doesn't currently exist in any of the available processing packages (DSS, PI, APP, AP, Siril etc.)? I use the default settings in APP for sub selection and weighted stacking, but there might be better ones, more suited to what needs to be achieved with the dataset.

Certainly using @ollypenrice's approach there is scope to combine the data to achieve a better balance between star shape and colour and the nebulosity surrounding the galaxy. I'll have another go.

Currently it is certainly viable to do what Olly proposed.

Make a few stacks: one with the best subs, one with a medium number of subs, and one with all the subs.

Then combine them, either using layers or with pixel math.

You can use either an SNR map or pixel intensity to drive the combination.

The SNR map is simply the ratio of the signal image to the noise image, where the signal image is the ordinary stack and the noise image is the same stack but using standard deviation as the stacking method rather than the average.

These two can be used to do the combination at the linear stage.

For example: if intensity > some value, use the good image, else use the full stack.

You can use the SNR map in the same way.

The alternative, of course, is to use layers in the standard way.
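The recipe above can be sketched directly in NumPy. This is not any particular package's implementation; function and variable names are illustrative, and the subs are assumed to be registered 2-D arrays:

```python
import numpy as np

def snr_map(subs):
    # Signal image: the ordinary average stack.
    # Noise image: the same stack but with standard deviation as the
    # stacking method, exactly as described above.
    signal = np.mean(subs, axis=0)
    noise = np.std(subs, axis=0)
    return signal / np.maximum(noise, 1e-12)

def combine(best_stack, full_stack, criterion, threshold):
    # Pixel-math combine: where the criterion (pixel intensity or the
    # SNR map) exceeds the threshold, use the 'good' stack; elsewhere
    # fall back to the deep full stack
    return np.where(criterion > threshold, best_stack, full_stack)
```

`criterion` can be `full_stack` itself (the intensity rule) or `snr_map(all_subs)` (the SNR rule); in practice a soft mask rather than a hard `np.where` would avoid visible seams at the threshold.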

 


4 hours ago, vlaiv said:

Currently it is certainly viable to do what Olly proposed.

Make a few stacks: one with the best subs, one with a medium number of subs, and one with all the subs.

Then combine them, either using layers or with pixel math.

You can use either an SNR map or pixel intensity to drive the combination.

The SNR map is simply the ratio of the signal image to the noise image, where the signal image is the ordinary stack and the noise image is the same stack but using standard deviation as the stacking method rather than the average.

These two can be used to do the combination at the linear stage.

For example: if intensity > some value, use the good image, else use the full stack.

You can use the SNR map in the same way.

The alternative, of course, is to use layers in the standard way.

 

I think what I'd do is take the best image from the short stack and adjust the colour balance of the deep stack to match it. I'd then de-star the deep stack (Starnet or StarXterminator) and adjust it so that it was very slightly less stretched than the short stack. I'd measure its brightnesses in various places other than the sought-after outer glow in order to make this adjustment. I might use a kinked curve to pull up the outer glow a bit more as well. Then I'd put the deep starless image on top of the short starry one in Photoshop's blend mode Lighten. In that way only the outer glow ought to affect the short stack below it. (Sounds OK in theory :grin::grin:)

As for trying to do it in PI, that would be like trying to play the piano wearing boxing gloves! 🤣

Olly


4 hours ago, ollypenrice said:

As for trying to do it in PI, that would be like trying to play the piano wearing boxing gloves! 🤣

Easy. Do everything you said using curves, write a simple max(image a, image b) PixelMath expression, recoil in horror at all the artifacts, make further adjustments, write a more complex PixelMath expression, recoil in horror again (repeat the previous steps as many times as required), then give up and move to a layers program 😁
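For what it's worth, Photoshop's Lighten blend mode and the `max(a, b)` PixelMath expression are the same per-pixel operation, which is easy to check in NumPy (array names are illustrative, not anyone's actual workflow):

```python
import numpy as np

def lighten(starry_short_stack, starless_deep_stack):
    # Lighten blend: keep the brighter pixel of the two layers, so only
    # the deep stack's faint outer glow rises above the short stack
    return np.maximum(starry_short_stack, starless_deep_stack)
```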


On 21/09/2022 at 14:45, ollypenrice said:

There is no simple answer to this question and any simple answer will almost certainly be wrong. The most important thing to understand in DS image processing is that you cannot expect to process all parts of the image in the same way if you want a good result. This has been called 'The Zone System,' meaning that different parts of the image have different processing requirements and should be worked on differently. It follows logically from this that different parts of the image have different requirements in terms of quality.

Ah, yes, what Vernor Vinge calls the 'Zones of thought'...


I'm pretty sure that Andromeda's core falls, like the Milky Way's, in the category of "unthinking depths", when it comes to processing. 🤔


Well guys, I have had another try. This time I used 78% of the RASA subs, after discarding the really bad star-shape subs and some frames to which APP had assigned a high quality score based on star shape and FWHM but which were badly affected by passing cloud, so had little signal in them. This was used, along with the Esprit and RC data, to generate a galaxy 'layer'. Another stack, in which only 33% of the RASA subs were retained, was used to create the stars 'layer'. These were combined in PI with PixelMath. Better than my reduced-data image higher up the thread?

[image attachment]


From an aesthetic view the first image would be great if it were desaturated, particularly the stars.

As I don't have a benchmark for how M31 is actually supposed to look, any of these versions is good to my eyes. I don't have the knowledge or experience of the folks imaging these DSOs.

As mentioned above, AP can be more of an artist's interpretation than an accurate account of the target.

 


2 hours ago, scotty1 said:

 

As mentioned above, AP can be more of an artist's interpretation than an accurate account of the target.

 

On the other hand, we might argue that the art of astrophotography lies in rendering the object with a meaningful appearance. Some facts are known: the background sky is a neutral dark grey, star colours accord with their spectroscopic classes, the H-alpha line is centred on 656 nm and OIII lies near 500 nm. These colours are identifiable. More generally, galaxy bulges are dominated by reddened Population II stars and bright spiral arms by hotter Population I stars. The natural-colour imager should be constrained by these facts. I think it's easy to overstate the claim that it's largely subjective, though it is to some extent. We might also exaggerate deliberately in order to bring features into view, but the features themselves are there in the data.

10 hours ago, tomato said:

Well guys, I have had another try. This time I used 78% of the RASA subs, after discarding the really bad star-shape subs and some frames to which APP had assigned a high quality score based on star shape and FWHM but which were badly affected by passing cloud, so had little signal in them. This was used, along with the Esprit and RC data, to generate a galaxy 'layer'. Another stack, in which only 33% of the RASA subs were retained, was used to create the stars 'layer'. These were combined in PI with PixelMath. Better than my reduced-data image higher up the thread?

[image attachment]

I think that's very good. My only hesitation lies in the outer glow as it heads into the image's bottom-left corner. In reality that glow would, in all its glory, reach right into that corner, whereas here it seems to lose confidence! (Is it being nibbled into by DBE or ABE?) It's a little stronger in your 'full data' version. However, this is a big ask, especially from anything but a very dark site.

(Paul and I are working on M31 with the RASA at the moment. This was after a quick test to see what it gave. We thought it did a beautiful job and are now carrying on with it, so I'm not surprised to see how nicely yours is working out.)

Olly


11 hours ago, tomato said:

Better than my reduced data image higher up the thread?

I like this rendition the best, but it's not without its flaws.

For some reason, people using PI seem to have strangely clipped stars in their images. I think it is a feature of some workflow everyone seems to be using.

[image attachment]

Above is an example.

Then there is that "plastic wrapper background" effect which stems from overuse of denoising:

[image attachment]

If you see your faintest stars starting to "polymerize" and form chains like melted plastic, you've overdone it.

