
Processing and stacking two separate sessions in Siril?



Hello

I have two separate sessions, same gain, exposure, temperature and framing taken a couple of weeks apart

I've taken darks, bias and flats for both sessions

Could anybody give me tips on how I manually process these in Siril ?

I assume I need to separately calibrate each session, then register and stack, but I am struggling with how exactly you do this in Siril as I usually use the scripts for auto stacking


The simplest way is to use the script and peer into the "Process" folder that the script creates in your working folder, and grab the calibrated files from there (assuming it's the default scripts you used).

You will find a bunch of files with the prefix "pp_light", which are your calibrated lights; the files are also debayered if you ran the OSC script for colour data. You can drop these into some other folder where you keep work-in-progress data, in case you want to keep adding to it later.

Then just drag and drop all the files from all the sessions into the Conversion tab window, or use the "+" icon to add them there. Name the sequence and click Convert, and it will create a new sequence from all the data you fed it. I recommend turning the Symbolic links option on: that way Siril does not actually write any new files and the sequence is imported basically instantly, instead of writing several gigabytes of files first. If you have more than 2048 files it gets a bit more complicated because of OS limitations, but I'm assuming that's not the case, so I won't get into it now.
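For reference, the Conversion-tab step has a rough command-line equivalent. This is a sketch from memory of Siril's command names, not a tested recipe; `convert` builds a sequence from every convertible file in the current folder (run `help convert` to check the options your version supports):

```
# Run from Siril's command line after gathering the pp_ files from
# both sessions into one folder (make sure filenames don't collide).
cd /path/to/combined
# create a sequence named PPConv from everything in this folder
convert PPConv
```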

The less simple way, at first, is to create your own script that does whatever you want it to, such as just calibrating the data. The laziest method is to find the premade scripts in the Siril installation folder and edit one to just calibrate the data and stop there. The files are in .ssf format, but WordPad will open and work with this format just fine; the edited file has to be saved as .ssf afterwards. Simply remove the parts I have crossed out below and the script will save non-debayered files and not register or stack them. If you want the files debayered at this stage, then don't remove that part.

[Screenshot: the premade script with the register and stack sections crossed out]
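For illustration, the tail end of a calibration-only edit might look like the sketch below. The command names follow my recollection of the default OSC preprocessing script; compare against the actual .ssf file shipped with your Siril version before using it:

```
# (bias/dark/flat master creation steps above are left unchanged)
cd lights
convert light -out=../process
cd ../process
# calibrate the lights; drop -debayer to keep the files non-debayered
calibrate light -dark=dark_stacked -flat=pp_flat_stacked -cfa -equalize_cfa -debayer
# the crossed-out register and stack lines are simply deleted,
# so the script stops after writing the pp_light files
close
```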

If you save the files non-debayered, you also have the option to use the "seqsplit_cfa" command to split-debayer the OSC images. This creates 4 mono images from each raw input sub: 1 red, 1 blue, and 2 separate green ones. The images are also effectively binned 2x this way.
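As a sketch, splitting a whole sequence of non-debayered calibrated lights should be a single command (assuming the sequence is named pp_light; check `help seqsplit_cfa` on your version):

```
# writes four mono sequences, one per CFA channel, from the raw subs
seqsplit_cfa pp_light
```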

* Edit

Realized that I did not really answer the question of how to stack manually, but it's fairly simple. After the sequence is created, you need to register the files through the Registration tab. Use Global Star Alignment with Lanczos-4 and the Interpolation clamping option enabled (make sure you have the newest version of Siril). For RGB data it's probably best to set the channel to green, as that's where the best SNR is likely to be found. After registration completes, Siril will draw a plot of various statistics in the Plot tab, where you can remove images with bad FWHM, low star count, bad roundness, etc. A handy tool, really; at least give it a quick look before stacking everything, as you will learn how good the data actually was.
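The same registration settings can also be applied from Siril's command line. This is a hedged sketch (option names from memory, and -layer=1 assumes green is channel 1 of an RGB sequence; run `help register` to confirm on your version):

```
# global star alignment on the green channel, Lanczos-4 interpolation
register PPConv -layer=1 -interp=la4
```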

For stacking you can mostly stick with the defaults (at least I think these are the defaults): Average with rejection, Normalisation set to Additive with scaling, and Winsorized sigma clipping with 3 for both the high and low range in the Pixel Rejection section. If you have a lot of dubious data from skies of varying quality, be sure to use some kind of weighting for the subs, such as number of stars or noise.

If you want slightly better pixel rejection with less actual faint signal removed, use the Generalized Extreme Studentized Deviate Test method. It consistently clips fewer pixels, so presumably clips less faint stuff, but still gets rid of all outliers. It takes much longer though, several times longer.
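For reference, roughly equivalent stack commands, again a sketch based on my reading of Siril's command syntax; verify the rejection-type letters, parameter values and weighting flags with `help stack` before relying on them:

```
# average stack, winsorized sigma clipping (3 low / 3 high),
# additive-with-scaling normalisation, weighting by star count
stack r_PPConv rej w 3 3 -norm=addscale -weight_from_nbstars -out=result

# alternative: Generalized Extreme Studentized Deviate rejection
# (the two numeric parameters have different meanings for this method)
stack r_PPConv rej g 0.3 0.05 -norm=addscale -out=result_gesdt
```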

Edited by ONIKKINEN
forgot the question...

Thanks Oskari, that's really helpful. Ended up shooting a couple of hours from the balcony last night and want to combine with some frames from a few weeks ago. That's a big help 


@ONIKKINEN

Good morning, Oskari

I modded the Script for OSC with background extraction / debayer, and ran it separately on the data from two nights, so I have two sets of calibrated lights from the two nights (52 files)

Script:

[Screenshot of the edited script]

 

I have then added these using the '+' in the 'Conversion' tab and created a new sequence 'PPConv' with Symbolic links checked but not Debayer.

At this point I hit a blank. I have the PPConv symbolic links in my working directory, and the PPConv sequence.

When I move to the 'Sequence' tab and press 'Search sequence', it locates the PPConv sequence in the working directory no problem. Do I need to export or normalise this sequence in some way to proceed further?

 

[Screenshot: Sequence tab showing the PPConv sequence]

 

 

I assume I ignore the 'Calibration' tab because the Light frames were previously calibrated

 

[Screenshot: Calibration tab]

 

When I go to the Registration tab, 'Go register' appears to be greyed out.

[Screenshot: Registration tab with 'Go register' greyed out]

Appreciate any heads-up, as I'm new to manual processing and this is a learning curve.

 

Thanks

Rich

Edited by 900SL

Your 'Go register' button is greyed out because you haven't done the 3-star selection procedure. To do that, drag-select a star (leaving room around the star in the selection) and press the Star 1 button, then move on to the next star for Star 2 and Star 3; or change the registration method (is Global Star Alignment the default?). I tend to do all mine manually rather than with scripts.


Any particular reason for removing the background on a per-sub basis rather than from the final image? I know the Siril tutorial mentions it can be done both ways, but I think it's better to leave the gradients alone until the images are stacked. Per-sub extraction should be a last resort, for when the tool can't deal with the gradients in the stacked image for some reason. From personal experience, that only happens when the flats don't match the lights in orientation/tilt and there is a severe local light-pollution gradient, such as a light shining directly down the tube causing reflections and all sorts of issues.

The new tool introduced in 1.10 and afterwards can deal with very cursed gradients; I have not seen an image it could not fix.

I am not mathematically oriented, so I can't give you an equation proving why BE before stacking is worse, but I would assume that since background extraction removes signal while the noise stays (you can't remove noise with background extraction), the subs have worse SNR going into stacking, and so a less efficient end result. Sure, the gradient will evolve over the night, but it typically won't do a full 180, and whatever direction is towards the ground in the image will always have the worst gradient, so the gradient doesn't get to be all that complex if the flats have worked out OK and the locale isn't completely ruined by some nearby lights.


Given that it's snowing again, I'll check that out, Oskari :) I'll do one stack with BE and one without and see which works better. I've generally used preprocessing with background extraction in the past.

The camera rotated slightly between sessions, which means I have to crop the combined stack a bit. I'll do a plate solve next time to check rotation.

 


Isn't it something to do with accumulation of light-pollution signal if you leave removal to the end? The pattern also becomes more complex if you leave it for later: the simpler the gradient (i.e. per sub), the more easily it can calculate the background extraction. Tbh, I haven't really noticed much difference either way.


If the input gradient is linear, which it is if the flats have worked well and there are no obvious flaws in the setup, the output gradient should be linear too. This is how I thought it went:

[Diagram: linear per-sub gradients combining into a linear stacked gradient]

The gradient could do a complete 180 on a third night and it would more or less cancel out: still a linear gradient, just in some other direction.
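The intuition above can be written out. If each calibrated sub carries the true signal plus a planar gradient and noise, averaging keeps the gradient planar, because the coefficients simply average, so a single first-degree background fit on the stack still removes it:

```latex
S_i(x,y) = T(x,y) + a_i x + b_i y + c_i + n_i(x,y)
\quad\Longrightarrow\quad
\frac{1}{N}\sum_{i=1}^{N} S_i(x,y)
  = T(x,y) + \bar{a}\,x + \bar{b}\,y + \bar{c}
    + \frac{1}{N}\sum_{i=1}^{N} n_i(x,y)
```

Even if the coefficients flip sign between nights, the mean plane is still a plane.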

Another reason I don't think it's a good idea to automate BE on a per-sub basis is that there isn't a complete picture of the signal available in the background of a single sub, so accurate placement of background samplers is impossible. True signal will be treated as background and some of it will be lost, and since the background extraction has very noisy data to base its decisions on, surely the end result will be noisier (= less accurate) too. I tested this with a fairly challenging dataset below, where there is a little bit of IFN in the background to ruin the automatic removal process: samplers placed on a single sub are guaranteed to be inaccurate. For images without IFN, any number of things can do the same, such as fainter stars or faint Ha nebulosity in nebulous regions. Images are shown in Histogram preview mode with false-colour rendering to make the issues obvious.

Without gradient removal:

[Stacked image without gradient removal]

With per-sub gradient removal:

[Stacked image with per-sub gradient removal]

To my eyes it looks like the gradient-removal process has created issues rather than solved them. Your mileage will most likely vary, but since the Siril tool is so simple to use and so effective, I really don't think it's worth trying to automate it pre-stacking.

By the way, I will admit I kind of moved the goalposts in this argument to my favour by picking this particular image, where I know automatic BE fails. Of course, there might be many images where such issues do not arise.


😁

'Another reason why i dont think its a good idea to automate BE on a per-sub basis is that there isn't a complete picture of the signal available in the background of the image and so accurate placement of background samplers is impossible. True signal will be treated as background in that case and some will be lost ...'

I agree with the rationale here. It would be useful to know how Siril does frame-by-frame BE. I assume it uses fairly loose parameters, a 1st-degree polynomial, and is preset to remove a simple gradient.
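If the per-frame BE follows Siril's batch command, it would amount to something like the line below: a degree-1 polynomial fitted and subtracted from every sub in the sequence. This is an assumption about the seqsubsky command and its output prefix; check `help seqsubsky` on your version:

```
# subtract a 1st-degree polynomial background from each sub,
# writing a new background-extracted sequence
seqsubsky pp_light 1
```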

 

I did two runs on data from the other evening, the Flaming Star again: one with per-frame BE and one without (but with BE done in Siril after stacking).

Frame-by-frame extraction on the left:

[Comparison screenshot: frame-by-frame extraction on the left]

