
DSS or PixInsight WBPP?



As a graphic designer, I used Photoshop for the best part of 30 years, so when I took up AP a couple of years ago it was natural that I would use it for my processing, and until now I have resisted getting involved with the ‘steep learning curve’ that many users talk of when taking on PixInsight. A few days ago, however, I took the plunge and purchased PixInsight, and thanks to some excellent YouTube tutorials and a lot of playing around I can already see why PI is the program of choice for many astrophotographers. I am now a happy convert, but… the WBPP script for stacking seems unnecessarily complicated and takes far longer than Deep Sky Stacker to achieve what appear to be similar results. Am I missing something here? Is there an advantage to using the WBPP script rather than DSS? Once the DSS TIFF is loaded into PI, there seems to be no difference from a WBPP-produced stack when processing.


Very interesting question, @Bluemoonjim. It’s also one I can’t answer, because frankly life has been too short to do a comparative test. But you have, I assume, and you find no difference? Interesting. Mind you, I suppose there’s no reason in principle why there should be.

I used to use DSS and then processed in Photoshop. I then got into PixInsight and it seemed natural to try to preprocess the PixInsight way. I did it manually for some years, which was incredibly slow, but it taught me a lot about what it’s doing and why. When WBPP was more recently enhanced to its current excellent state, it seemed so easy compared with doing it manually that I never thought to compare it with DSS.

What I like about WBPP is the way it correctly associates the relevant lights, darks and flats from a mix of files. Actually, I’ve only ever done OSC, but I can see how useful WBPP might be when dealing with a whole bunch of subs corresponding to different filters and their appropriate calibration files. I really appreciate WBPP’s flow diagrams showing how different files will be processed at each stage; they are very useful for making sure it’s all going to work correctly before hitting the start button.

I am interested to hear the thoughts of others on your question.

PS. Adam Block’s collection of free YouTube videos, the Definitive Guide to WBPP, is excellent. They’re split into different aspects, such as OSC and narrowband. Search for them on YouTube.


If I understand PI WBPP and DSS correctly, the principal difference between them is that PI performs a deep analysis of each individual frame and gives it a score, or “weight”, for several image parameters, as indicated by the PI tool’s name: Weighted Batch PreProcessing.

That deep analysis is partly the reason for the higher computer processing overheads and longer time-to-complete when stacking subs in PI compared to DSS.

Juan Conejero, author of PI, has stated many times that the single combined image output from a run of WBPP (and its previous non-weighted version, BPP) is only a preliminary image, and that going back and recombining the calibrated .xisf subs with small changes to the combination parameters may yield superior results.
Knowing which parameters to change is where the “learning curve” with PI starts to bite.

If all your subs are of uniformly high quality, with low noise levels, good star shapes etc., then I would not expect to see a huge difference between the preliminary output image from a first run through WBPP in PI and a run through DSS. If the source subs were highly variable in quality, though, especially with regard to noise, then I would expect to see more apparent variation between a DSS-processed image and a PI WBPP-processed image, with the WBPP image having a lower background noise level in the dark, low-contrast areas of the image, as the weighting process kicks in on a pixel-by-pixel basis rather than a whole-frame basis during the combination stage.
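The weighted, per-pixel combination described above can be sketched in a few lines. This is only a toy illustration with numpy (a weighted mean with MAD-based outlier rejection), not PixInsight's actual ImageIntegration algorithm, and all the demo numbers are made up:

```python
import numpy as np

def weighted_stack(frames, weights, kappa=3.0):
    """Weighted mean of registered subs with per-pixel outlier rejection.

    frames:  list of 2-D arrays (calibrated, registered subs)
    weights: one quality weight per frame (e.g. derived from noise)
    kappa:   rejection threshold in robust-sigma units
    An illustrative sketch only -- not PixInsight's actual algorithm.
    """
    stack = np.stack([np.asarray(f, float) for f in frames])  # (n, H, W)
    w = np.asarray(weights, float)[:, None, None]             # per-frame weights
    # Robust per-pixel rejection using the median and MAD, so a hot
    # pixel or satellite trail in one sub gets excluded at that position.
    med = np.median(stack, axis=0)
    mad = np.median(np.abs(stack - med), axis=0)
    sigma = 1.4826 * mad + 1e-12           # MAD -> sigma estimate
    keep = np.abs(stack - med) < kappa * sigma
    w = np.where(keep, w, 0.0)
    return (stack * w).sum(axis=0) / np.maximum(w.sum(axis=0), 1e-12)

# Tiny demo: three flat "subs", one with a simulated hot pixel at (0, 0)
f1 = np.full((2, 2), 10.0)
f2 = np.full((2, 2), 12.0)
f3 = np.full((2, 2), 11.0)
f3[0, 0] = 500.0
result = weighted_stack([f1, f2, f3], weights=[1.0, 0.8, 0.6])
```

The hot pixel is rejected at its position only, while everywhere else all three subs contribute in proportion to their weights; a straight whole-frame average would either keep the defect or throw away the entire third sub.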

Lastly, the weighting analysis scores calculated during the WBPP process are saved with the calibrated .xisf subs that a WBPP run outputs automatically. Those scores can then speed up other PI tools during post-processing, because the subs don’t need to be analysed a second time (if, for example, you are carrying out a second recombination of the calibrated .xisf subs in the ImageIntegration tool using modified integration parameters).

Don’t know why, but writing this summary I can feel @ollypenrice peering over my shoulder, trying hard not to laugh…..🫢😅🫢.

HTH.

William.


Thanks for the pointer to Adam Block’s videos; I’m checking them out.

Maybe there is a greater range of processes taking place in WBPP, and also I am only using an OSC camera. My main concern, though, is that when I fed WBPP some five and a half hours’ worth of lights plus calibration frames, it took about three hours to complete and produced an extra 125 GB (!) of files on my drive. The same frames took about 25 minutes in DSS and produced just one extra TIFF file for the final image, a considerable difference. My machine is a pretty capable latest-generation i7 with 16 GB RAM, an SSD and external storage, but if I’m going to be producing hundreds of gigabytes of intermediate files for a few hours of data every time, it now feels inadequate.
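The storage blow-up is largely down to WBPP keeping a separate 32-bit .xisf copy of every sub at each stage (calibrated, debayered, registered). A back-of-envelope sketch, using hypothetical numbers (a 24 MP OSC sensor and 120 s subs, not necessarily the actual setup described above), lands in the same ballpark as the 125 GB reported:

```python
# Rough estimate of WBPP intermediate storage. All numbers here are
# assumptions for illustration: a 24 MP OSC sensor, 120 s subs,
# 5.5 h of lights, and three 32-bit .xisf copies of each sub.
n_subs = int(5.5 * 3600 / 120)           # 165 lights
pixels = 24e6                            # sensor resolution
stage_bytes = {
    "calibrated (mono CFA, 32-bit)": pixels * 4,
    "debayered (RGB, 32-bit)":       pixels * 4 * 3,
    "registered (RGB, 32-bit)":      pixels * 4 * 3,
}
total_gb = n_subs * sum(stage_bytes.values()) / 1e9
print(f"{n_subs} subs -> roughly {total_gb:.0f} GB of intermediates")
```

The intermediate folders can be deleted once you are happy with the integrated masters, which recovers almost all of that space.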


2 hours ago, Oddsocks said:

Don’t know why, but writing this summary I can feel @ollypenrice peering over my shoulder, trying hard not to laugh…..🫢😅🫢.

HTH.

William.

Not at all! I'm always an admirer of your expertise. 

Of all the things, though, that go into the making and processing of an image, I cannot believe that much will change under the niceties of different stacking software. My office is always open, and my data available, to anyone wanting to prove me wrong. I'll even put the kettle on!

😁lly


If different applications produce more or less the same result, then surely the two remaining considerations are ease of use and how long it takes. The former is a matter of personal preference.

As for the latter, does DSS do local normalisation? That seems to be the lengthiest process in WBPP. Also drizzle - not that I use that.
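For a flavour of what local normalisation is doing, here is a crude sketch: estimate each sub's large-scale background on a coarse grid of tiles and shift it to match a reference frame, tile by tile. PixInsight's LocalNormalization is far more sophisticated (it fits smooth scale and offset surfaces), so this is only the idea, with numpy, and every number in the demo is made up:

```python
import numpy as np

def local_normalise(target, reference, grid=8):
    """Match a sub's large-scale background to a reference frame.

    Estimates a coarse background on a grid x grid set of tiles
    (median per tile) and subtracts the per-tile difference.
    A very rough stand-in for PixInsight's LocalNormalization,
    for illustration only.
    """
    h, w = target.shape
    corrected = np.asarray(target, float).copy()
    ys = np.linspace(0, h, grid + 1, dtype=int)   # tile row edges
    xs = np.linspace(0, w, grid + 1, dtype=int)   # tile column edges
    for i in range(grid):
        for j in range(grid):
            sl = (slice(ys[i], ys[i + 1]), slice(xs[j], xs[j + 1]))
            corrected[sl] += np.median(reference[sl]) - np.median(target[sl])
    return corrected

# Demo: a flat reference and a sub with a left-to-right gradient,
# standing in for a passing cloud or changing sky background.
ref = np.full((64, 64), 100.0)
sub = 100.0 + np.tile(np.linspace(0.0, 20.0, 64), (64, 1))
flat = local_normalise(sub, ref)
```

After correction, the gradient is reduced to small within-tile residuals, which is why subs normalised this way reject outliers and combine much more cleanly than subs with wildly different backgrounds.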


On 24/10/2023 at 14:20, Ouroboros said:

different applications produce more or less the same result

This seemingly casual remark is at the heart of the matter. What is "more or less" in this case? The purpose of all the bells and whistles in WBPP is to make the fullest use of all the data to create one final stacked image of the highest possible quality. "High quality" for me means low noise, efficient small- and large-scale pixel rejection, removal of uneven areas in sub-exposures caused by the occasional passing cloud (local normalisation), and so on.

Furthermore, when you understand what WBPP does, you'll find it much more transparent than DSS. It gives more control over the stacking process, but the price you pay for this is a more complex process. For example, what is the "score" in DSS based on? Why reject a fixed percentage of frames rather than use a quality threshold? (Maybe this has changed in DSS; I haven't used it for a long time, but when I did, this lack of transparency annoyed me.) In PixInsight I use image weights based on a metric of my choice (noise, FWHM, eccentricity, etc.), combined with a minimum weight, to control the image integration process. If half of my subs are bad because focus drifted or poor guiding increased eccentricity after a meridian flip, the image weights will tell me that. DSS, with a fixed percentage, might try to stack poor subs and give an inferior result.
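The threshold-versus-percentage point can be made concrete with a toy weighting scheme. The metric values and the formula below are invented for illustration (PixInsight's SubframeSelector uses its own, user-definable weighting expressions); the point is that a minimum-weight threshold drops however many subs are genuinely poor, rather than a fixed fraction:

```python
import numpy as np

def frame_weights(metrics, min_weight=0.25):
    """Turn per-sub quality metrics into stacking weights.

    metrics: list of dicts with hypothetical 'noise', 'fwhm' and
    'eccentricity' values (lower is better for all three).
    Weight is the product of inverse-normalised metrics, scaled so
    the best sub gets 1.0; subs below min_weight are dropped.
    Not PixInsight's actual SubframeSelector formula.
    """
    keys = ("noise", "fwhm", "eccentricity")
    cols = {k: np.array([m[k] for m in metrics], float) for k in keys}
    scores = np.ones(len(metrics))
    for k in keys:
        scores *= cols[k].min() / cols[k]   # 1.0 for the best sub per metric
    weights = scores / scores.max()
    keep = weights >= min_weight            # quality threshold, not a fixed %
    return weights, keep

# Hypothetical session: two decent subs and one ruined by poor guiding
subs = [
    {"noise": 10.0, "fwhm": 2.5, "eccentricity": 0.40},
    {"noise": 11.0, "fwhm": 2.8, "eccentricity": 0.45},
    {"noise": 30.0, "fwhm": 5.0, "eccentricity": 0.80},
]
weights, keep = frame_weights(subs)
```

With these numbers only the third sub falls below the threshold; had all three been good, nothing would have been rejected, which is exactly what a fixed-percentage rule cannot do.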


To be fair, for all its apparent complexity, WBPP doesn't really take that long to get a reasonable working understanding of. Like @Ouroboros, I really like the way you can simply point it at one folder and it identifies and collects all the relevant files. Running time can be long; some people have reported 11 hours plus for certain heavy jobs. Crikey, I don't think I have ever collected enough data to warrant that :) It is a great question though: exactly what difference do these different programs make? For a bit of a giggle I had my ASIAir stack some subs the other night when I was reconfiguring my setup for winter, a nothing-to-lose experiment. As with the ASIAir in general, its stacking routine is the most delightfully simple process to watch, totally intuitive, and it's fairly quick. As for the results, to be honest, I thought the masters it produced were actually very good. Certainly though, if I ever manage to pull off several hours of subs (blooming weather) I'll be sticking to PI and WBPP.

Jim 

