

Comparison of stacking in DSS, PixInsight and APP.


Recommended Posts

Being a longtime DSS user, I recently got a trial for PixInsight for postprocessing. After the initial grumblings about the horrible UI, I got to like it and the results were objectively better. Then I decided to do calibration and stacking in PixInsight...

What an absolutely terrible dog of a process that is. Slow, cumbersome, and it uses an outrageous amount of hard drive space (the 307 subs for this image bloated from 50 GB to 273 GB!).

DSS Flow:

1. Import pictures, flats etc.

2. Click stack

3. Profit

PixInsight Flow:

Steps 1 to 83: do stuff, in multiple different steps, to all sorts of files individually.

Step 84: spend a full day doing it.

Step 85: profit.
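To be fair, the 83 steps mostly boil down to three stages: calibrate, register, integrate. Here is a toy NumPy sketch of the idea (all frames, numbers and the fake satellite trail are invented purely for illustration; this is not PI's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for real subs: 10 light frames plus master calibration
# frames, with values invented purely for illustration.
lights = [rng.normal(1000, 50, (100, 100)) for _ in range(10)]
lights[3][50, :] += 10000          # a fake satellite trail in one sub
master_dark = np.full((100, 100), 10.0)
master_flat = rng.normal(1.0, 0.01, (100, 100))

# 1. Calibrate: subtract the dark, divide by the normalised flat.
calibrated = [(f - master_dark) / (master_flat / master_flat.mean()) for f in lights]

# 2. Register: align each frame against a reference (a trivial integer shift here;
# real registration does star matching and sub-pixel interpolation).
def register(frame, dx=0, dy=0):
    return np.roll(np.roll(frame, dy, axis=0), dx, axis=1)

registered = np.stack([register(f) for f in calibrated])

# 3. Integrate: a sigma-clipped mean rejects outliers such as the trail.
mu = registered.mean(axis=0)
sigma = registered.std(axis=0)
mask = np.abs(registered - mu) < 2.5 * sigma
result = np.nanmean(np.where(mask, registered, np.nan), axis=0)
```

The rejection step is part of what makes the longer route worth it: the fake trail in sub 3 gets clipped out instead of averaged into the stack.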

 

Anyhoo, if it yields a better result, I will take it. So here is the calibrated and stacked output side by side, having done an autostretch in PI. The data in question is 11 hours of 120s and 300s subs from my RASA and QHY247c. Click to enlarge. 

Left is DSS, right is PixInsight.

pixinsight_DSS_Compare_zoomed_out.thumb.png.c15914e9d962a154f5823971ae0e5629.png

pixinsight_DSS_Compare_zoomed.thumb.png.cd9dd73335ecc1c9d98f209720dd52a4.png

I think I see a difference in the noise level in favour of PixInsight, but since I haven't actually processed it fully yet, I don't know what impact it will have on the final image.

What do you think?


2 minutes ago, michael.h.f.wilkinson said:

I would try APP (Astro Pixel Processor). I find it better than DSS and very easy to use.

I tried that a while ago. Let's just say that the results were very suboptimal. I don't think it was well suited to RGB pictures.


Just now, Datalord said:

I tried that a while ago. Let's just say that the results were very suboptimal. I don't think it was well suited to RGB pictures.

Which version did you try? There have been some major improvements. This was done in APP (with some extra processing in GIMP), and the very latest version is supposedly a lot better still.

M42USM3expcropcurves3.thumb.jpg.a2c6ed3e05a7e8aec8e995b2774e2289.jpg


9 minutes ago, michael.h.f.wilkinson said:

I would try APP (Astro Pixel Processor). I find it better than DSS and very easy to use

And so it should be; it's not free. Please compare like for like, so APP with PixInsight, for example.


I also calibrate and stack my subs in APP; however, a full stack takes ages compared to DSS. The main reason I chose APP over DSS is that it normalises the background and allows weighting based on sub quality.
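For what it's worth, quality weighting amounts to an inverse-variance weighted mean, where cleaner subs count for more. A toy sketch on synthetic frames (nothing APP-specific; the noise levels are made up):

```python
import numpy as np

rng = np.random.default_rng(4)

# Three subs of the same flat scene with different (invented) noise levels
sigmas = [5.0, 10.0, 20.0]
subs = [rng.normal(100, s, (128, 128)) for s in sigmas]

# Inverse-variance weights: low-noise subs get more say in the stack
weights = np.array([1 / s**2 for s in sigmas])
weights /= weights.sum()

weighted = sum(w * s for w, s in zip(weights, subs))
plain = np.mean(subs, axis=0)

print(f"plain mean noise:    {plain.std():.2f}")
print(f"weighted mean noise: {weighted.std():.2f}")
```

With weights proportional to 1/sigma^2, the noisiest sub barely contributes, so the weighted stack ends up cleaner than a plain mean of the same three frames.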


15 minutes ago, stash_old said:

And so it should be; it's not free. Please compare like for like, so APP with PixInsight, for example.

I asked for yet another trial of APP. If I get it and I make it work, I'll post comparisons here.


Yeah, the batch processing script in PI is very easy to use. Calibrating, stacking and integrating manually in PI does take a long time, but it's meant to be very flexible, with a lot of options such as weighting, normalisation, drizzle etc. The batch processing script is more like using DSS.


I'm not sure about the noise, but the PI image looks like a better-aligned stack: faint stars are tighter in the right image (zoomed/cropped part). That could just be down to centroid accuracy (better in PI, so more precise alignment). I also don't know what sort of algorithm is used for the sub transform/alignment; depending on the type of filter, it might have an impact as well.


OK, I got a trial of APP, grabbed the latest version and set it off chewing on the data. 1h45m later it finished. Workflow-wise, it was almost as simple as DSS. As others have pointed out, PI probably would be too if I had used the batch script.

I think the APP result is smoother, but I can't tell whether it actually will be in the final image after processing:

Left: APP. Right: PI.

APP_vs_PI_zoomed_out.thumb.png.3780dbe971bd4dba3ab0cb4e8c97e348.png

APP_vs_PI.thumb.png.c2dbcbf4a57c495a6b819495d5028ea3.png


6 minutes ago, Datalord said:

OK, I got a trial of APP, grabbed the latest version and set it off chewing on the data. 1h45m later it finished. Workflow-wise, it was almost as simple as DSS. As others have pointed out, PI probably would be too if I had used the batch script.

I think the APP result is smoother, but I can't tell whether it actually will be in the final image after processing:

Left: APP. Right: PI.

APP_vs_PI_zoomed_out.thumb.png.3780dbe971bd4dba3ab0cb4e8c97e348.png

APP_vs_PI.thumb.png.c2dbcbf4a57c495a6b819495d5028ea3.png

I see there are differences between the images at the top. Did you only apply an STF to them? In the PI stacked image I can see the tidal extension much more easily, although the APP stack at the bottom looks smoother.

Also, the image scale is not 1:1 for the images at the bottom. However, it's not far off, and I can't really say which one displays more detail.


1 hour ago, moise212 said:

I see there are differences between the images at the top. Did you only apply an STF to them? In the PI stacked image I can see the tidal extension much more easily, although the APP stack at the bottom looks smoother.

Also, the image scale is not 1:1 for the images at the bottom. However, it's not far off, and I can't really say which one displays more detail.

Yes, I opened them and nuked them; only an STF. The scale is not the same - I couldn't hit the exact crop, so it is slightly off. To me it looks like the APP version has a bit more detail in the galaxy itself.


7 hours ago, Datalord said:

I think I see a difference in the noise level in favour of PixInsight, but since I haven't actually processed it fully yet, I don't know what impact it will have on the final image.

I think the stars are rounder in APP...


Finally got some time to process this. I went with the APP version. I think the colours are better than I have had with previous versions and the detail is pretty good. Can't really figure out what a good crop is on this one.

709872825_M51_APP16bit.thumb.jpg.b3b1e150a06000d8caed055b5d42e8a6.jpg


What an interesting thread. Your side by side results obtained from  different applications are fascinating, @Datalord.  Thanks for doing this.  It's a lot of work. 

I only once used PI to register and stack and, deciding life is too short, have stuck with DSS ever since - only using PI for post-stack processing. I haven't tried PI's batch processing though. Maybe I should. 


  • Datalord changed the title to Comparison of stacking in DSS, PixInsight and APP.

Interesting comparison. The APP images look very clean. IMO there may be a little more detail in the background of the PI image.

Did you apply the same STF to both images? To compare noise, it's probably better to make a preview of the same area in each linear image and use the noise evaluation tool in PI (or APP, if it has one).


11 hours ago, wimvb said:

Interesting comparison. The APP images look very clean. Imo there may be a little more detail in the background of the PI image.

Did you apply the same STF to both images? To compare noise, it's probably better to make a preview of the same area in each linear image and use the noise evaluation tool in PI (or APP, if it has one).

Ah, no, I just nuked it and looked through it.

Here's the output from the NoiseEvaluation script in PI, using roughly the same preview area in completely fresh linear:

APP

* Channel #0

σR = 8.073e-04, N = 2125832 (43.60%), J = 4

* Channel #1

σG = 7.039e-04, N = 2795901 (57.34%), J = 4

* Channel #2

σB = 9.019e-04, N = 1714804 (35.17%), J = 4

DSS

* Channel #0

σR = 7.473e-05, N = 772918 (17.55%), J = 4

* Channel #1

σG = 8.078e-05, N = 1481187 (33.62%), J = 4

* Channel #2

σB = 1.112e-04, N = 986582 (22.40%), J = 4

PI

* Channel #0

σR = 3.668e-05, N = 1109813 (27.27%), J = 4

* Channel #1

σG = 3.592e-05, N = 1458419 (35.84%), J = 4

* Channel #2

σB = 3.402e-05, N = 1839301 (45.20%), J = 4

 

I have no clue how to interpret this?


The sigma (standard deviation) is a measure of noise; R, G and B stand for the colour channels. The PixInsight image seems to have less noise than the APP image. There's also less spread between the colour channels in PI compared to APP. Anyhow, the results suggest that the PixInsight image you showed earlier was stretched harder than the APP image.
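For anyone wanting to sanity-check such sigmas outside PI: on a star-free patch of the linear image, the per-channel standard deviation is a crude stand-in for the noise estimate (PI's NoiseEvaluation actually uses a multiresolution estimator, which is where the N pixel counts and J come from). A sketch on synthetic data with invented noise levels:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic linear RGB image: flat background plus Gaussian noise per channel.
# The sigmas are made up, chosen to be roughly the scale of the PI figures above.
noise_levels = [3.7e-5, 3.6e-5, 3.4e-5]
img = np.stack([rng.normal(1e-3, s, (200, 200)) for s in noise_levels], axis=-1)

# Crude noise estimate: std of a background-only preview (no stars, no target).
preview = img[50:150, 50:150]
sigmas = preview.std(axis=(0, 1))
for ch, s in zip("RGB", sigmas):
    print(f"sigma_{ch} = {s:.3e}")
```

On real data the trick is picking a preview with no signal at all; any faint nebulosity in the patch inflates the estimate.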


10 minutes ago, wimvb said:

The sigma (standard deviation) is a measure of noise; R, G and B stand for the colour channels. The PixInsight image seems to have less noise than the APP image. There's also less spread between the colour channels in PI compared to APP. Anyhow, the results suggest that the PixInsight image you showed earlier was stretched harder than the APP image.

It also means I have to redo my process to use the PI image instead of APP... doh


40 minutes ago, Datalord said:

Ah, no, I just nuked it and looked through it.

Here's the output from the NoiseEvaluation script in PI, using roughly the same preview area in completely fresh linear:

APP

* Channel #0

σR = 8.073e-04, N = 2125832 (43.60%), J = 4

* Channel #1

σG = 7.039e-04, N = 2795901 (57.34%), J = 4

* Channel #2

σB = 9.019e-04, N = 1714804 (35.17%), J = 4

DSS

* Channel #0

σR = 7.473e-05, N = 772918 (17.55%), J = 4

* Channel #1

σG = 8.078e-05, N = 1481187 (33.62%), J = 4

* Channel #2

σB = 1.112e-04, N = 986582 (22.40%), J = 4

PI

* Channel #0

σR = 3.668e-05, N = 1109813 (27.27%), J = 4

* Channel #1

σG = 3.592e-05, N = 1458419 (35.84%), J = 4

* Channel #2

σB = 3.402e-05, N = 1839301 (45.20%), J = 4

 

I have no clue how to interpret this?

I'm afraid those numbers don't mean much.

To truly assess which software produces the best results, one would have to have the ground truth, and pay special attention to the parameters used to produce the final stack.

The only way to get the ground truth is to create a synthetic one (for example a random star field of varying magnitudes plus, say, an elliptical galaxy - the easiest to model), and then produce a set of subs where each sub gets a random level of blur within a certain interval (simulating changing conditions like seeing and guiding error - wind and such), is sampled to a certain resolution (with a shift in each sub), has sky glow introduced (one could model the LP dome and the movement of the target across the sky), and has read noise, dark current and the associated noise added.

One frame would be selected as the reference frame, and a sub with no noise contributions would be created as the reference.

Each software would then be used to create a stack based on the reference frame, and the difference between the final stack and the reference would be examined statistically to see which one produced the least noise / best result.

Since we don't (yet) have software to produce synthetic subs, the best thing you can do if you want to examine software performance is to make the stacks as close in parameters as possible.

Use the same reference frame in each software, and use the same part of the image, where there is no signal at all (no stars, no objects), to compare noise levels.

This will give you an indication of the level of background noise, but it will not give you an indication of the final SNR - for example, the alignment process, depending on the algorithm used, can introduce additional blurring: the background will be "smoother", but the SNR and resolution of the target will suffer in such a case.
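A minimal version of that synthetic-sub protocol can be sketched in NumPy. Everything here is invented (star field, sky level, noise figures), per-sub seeing blur is left out for brevity, and "registration" cheats by undoing the known dither exactly:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic ground truth: a random star field on a 128x128 canvas (values made up).
base = np.zeros((128, 128))
ys, xs = rng.integers(10, 118, size=(2, 20))
base[ys, xs] = rng.uniform(100, 1000, size=20)
# Crude PSF: smear each star over its 4 neighbours (a real test would use a
# proper Gaussian PSF and vary the blur per sub to simulate seeing).
truth = (base + np.roll(base, 1, 0) + np.roll(base, -1, 0)
         + np.roll(base, 1, 1) + np.roll(base, -1, 1)) / 5

def make_sub(offset):
    """One simulated sub: integer dither, flat sky glow, shot + read noise."""
    dithered = np.roll(truth, offset, axis=(0, 1))
    signal = dithered + 50.0                       # sky glow
    return rng.poisson(signal) + rng.normal(0, 5, signal.shape)

offsets = [tuple(o) for o in rng.integers(-2, 3, size=(30, 2))]
subs = [make_sub(o) for o in offsets]

# "Perfect" registration: we know the true offsets, so undo them exactly.
aligned = [np.roll(s, (-o[0], -o[1]), axis=(0, 1)) for s, o in zip(subs, offsets)]
stack = np.mean(aligned, axis=0) - 50.0

# Score against the noiseless truth.
rms = np.sqrt(np.mean((stack - truth) ** 2))
print(f"RMS error of stack vs ground truth: {rms:.2f}")
```

In a real comparison you would feed the same set of subs to each stacker and compute this RMS against the noiseless reference for each result.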

 


While I guess I agree with the scientific version, Vlaiv, I care more about practical application and experience. Obviously all of the above is done under my Bortle 5 skies and with all the flaws I have in my imaging train, but if it gives an indication of real-world usage, I dare say it would be useful.

To that end, it would be interesting if the developers of each of these software suites would take a given dataset, with the freedom to produce the best possible result, and post the settings and results for comparison in one thread here. And that dataset should NOT be clean; it should be representative of an "average" user, with all the dirt we put into it.


What I find more interesting than the sigma values is that the DSS image has slightly blurrier/wider stars. This indicates that the stacking algorithms of PI and APP are better (in this instance). You can always increase SNR somewhat by tweaking stacking parameters and adding image weight factors. In the end, I would rely more on visual inspection than on any numbers. To me, the PI images, while seemingly noisier, also seemed to hold ever so slightly more detail in the faint background regions. But again, this could be the result of stretching more.

Other things that need examining are the cores of bright stars, and large-scale rejection (satellite trails).
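On "you can always increase SNR somewhat": the baseline behaviour is that noise in a plain mean stack falls as 1/sqrt(N); weighting and rejection only improve on that constant. A quick check with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(3)

# N identical frames of pure Gaussian read noise, sigma = 10 (arbitrary units)
sigma, n_frames = 10.0, 25
frames = rng.normal(0, sigma, (n_frames, 256, 256))

stacked = frames.mean(axis=0)
print(f"single-frame sigma ~ {frames[0].std():.2f}")
print(f"stacked sigma      ~ {stacked.std():.2f}")  # expect ~ sigma / sqrt(25) = 2
```

So 25 subs buy a factor of 5 in background noise regardless of stacker; the differences between DSS, PI and APP live in alignment quality, rejection and weighting, not in that square-root law.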


  • 3 weeks later...

Just to add to this: I captured over an hour's worth of data last Saturday night on M3 (mainly as an exercise to try to eliminate coma by adding some spacers to my MPCC, which failed, BTW) and spent the best part of Sunday running it through the long PixInsight preprocessing. Needless to say, the results are horrific!! Out of curiosity, I ran everything through DSS last night and the result is much better. The gradient is even, and the result after ABE is also fairly even and should be easy to process out.

80-odd 60s lights at ISO 800, 15 darks, 30 flats and 30 bias, dithered hard every 5 subs.

 

First and second are the PixInsight stacks: the first stretched, the second after automatic background extraction.

 

Third and fourth are the DSS stack, drizzled: again the initial stretch and then ABE.

 

PixInsight took many hours on Sunday, where DSS took a mere half hour, if that, even applying drizzle. I've had decent results using the PixInsight method before, so I'm stumped as to what's gone wrong here!

drizzle_integration1.jpg

drizzle_integration_ABE1.jpg

Autosave001.jpg

Autosave001_ABE.jpg

