
Thoughts on which imaging rigs to concentrate on


Gina


Are you aware you can change the buffer sizes in the ImageIntegration process, which could improve performance?

Here's a quote from the reference documentation - the options mentioned are found under the Image Integration section of the process. I believe that the buffer size needs to be at least as large as your working image size (so over 64MB for the ZWO ASI1600MM saved in XISF format) and the stack size should be lower than your total available RAM (leaving enough RAM for your operating system, etc. to work). I have 16GB of RAM so set this to 10GB.

Quote

 

Buffer size

Size of a pixel row buffer in mebibytes (MiB). This parameter defines the size of the working buffers used to read pixel rows. There is an independent buffer per input image. A reasonably large buffer size will improve performance by minimizing disk reading operations. The default value of 16 MiB is usually quite appropriate. Decrease this parameter if you experience out-of-memory errors during integration. This may be necessary for integration of large image sets on systems with low memory resources, especially on 32-bit operating systems. The minimum value is zero, which will force ImageIntegration to use a single row of pixels per input image.

Stack size

This is the size of the working integration stack structure in MiB. In general, the larger this parameter, the better the performance, especially on multiprocessor and multicore systems. The best performance is achieved when the whole set of integrated pixels can be loaded at once in the integration stack. For this to happen, the following conditions must hold:

Buffer size (see above) must be large enough as to allow loading an input file (in 32-bit floating point format) completely in a single file reading operation.
Stack size must be larger than or equal to W×H×(12×N + 4), where W and H are the image width and height in pixels, respectively, and N is the number of integrated images. For linear fit clipping rejection, replace 4 with 8 in the above equation. Note that this may require a large amount of RAM available for relatively large image sets. As an example, the default stack size of 1024 (1 GiB) is sufficient to integrate 20 2048×2048 monochrome images optimally with the default buffer size of 16 MiB. With a stack size of 4 GiB and a buffer size of 64 MiB you could integrate 20 4K×4K monochrome images with optimum performance on a 64-bit version of PixInsight.

Use file cache

By default, ImageIntegration uses a dynamic cache of working image parameters, including pixel statistics and normalization data. This cache greatly improves performance when the same images are being integrated several times, for example to find optimal pixel rejection parameters. Disable this option if for some reason you don't want to use the cache. This will force recalculation of all statistical data required for normalization, which involves loading all integrated image files from disk. The file cache can also be persistent across PixInsight Core executions. The persistent cache and its options can be controlled with the Cache Preferences dialog.
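To put the stack-size formula above into perspective for the camera discussed in this thread, here is a rough back-of-the-envelope sketch in Python. The 4656 x 3520 frame size is the published resolution of the ASI1600; everything else comes straight from the quoted formula.

# Worked example of the stack-size condition quoted above:
#   stack size >= W * H * (12 * N + 4) bytes
# (replace 4 with 8 for linear fit clipping rejection).
# Frame dimensions assume a ZWO ASI1600MM: 4656 x 3520 pixels.

W, H = 4656, 3520

def optimal_stack_gib(n_subs, linear_fit=False):
    """Minimum stack size (GiB) to hold every integrated pixel at once."""
    per_pixel = 12 * n_subs + (8 if linear_fit else 4)
    return W * H * per_pixel / 2**30

for n in (100, 200, 300):
    print(f"{n} subs: {optimal_stack_gib(n):.1f} GiB")  # ~18.4, ~36.7, ~55.0

In other words, a fully optimal stack for hundreds of these frames is far beyond 16GB of RAM, so ImageIntegration will inevitably work through the pixels in smaller chunks - worth bearing in mind when reading the experiments that follow.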

 

 


Thanks very much Ken :) No - I hadn't read that - I ploughed through quite a lot of the documentation but I'm too old for all that maths so gave up! Thanks very much - the default values are far too small. No wonder the pixel row integration is so slow!


PI aborted when it got into the pixel integration :( I guess I've set the stack size too high - the cache must be using memory, leaving less room for the stack.

Trying again with a 5GB stack.


Torrential rain here again and my observatory underfloor sump emergency bilge pump is working well :) 25s on and 25s off with a great spout of water coming out of the 22mm hose, which I can see from my living room window. The pump is specified at 750 gallons per hour so it's clearing a LOT of water :eek:


Back to PixInsight...

Put everything back to what worked using the 300 bias subs. Fine. Then changed just the buffer size to 65MB and ran it again - it got to the pixel integration and crashed - no error message, it just disappeared from the screen. The FITS files are 32.15MB each, so maybe a 64.3MB buffer would suffice - I'll try that.
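Doubling the on-disk size is a sensible rule of thumb, since a 16-bit FITS frame roughly doubles once loaded as 32-bit floating point. A quick sketch, assuming a 4656 x 3520 ASI1600 frame (the figures land close to those above, give or take headers and MB-vs-MiB counting):

# A buffer must hold one full frame as 32-bit float for single-read loading.
pixels = 4656 * 3520
print(f"{pixels * 2 / 1e6:.1f} MB on disk (16-bit FITS, excluding headers)")  # ~32.8 MB
print(f"{pixels * 4 / 1e6:.1f} MB in memory (32-bit float)")                  # ~65.6 MB

Note, too, that the parameter in the dialog is specified in MiB, so an entry of 65 actually means about 68MB and should already cover a full frame.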


Running some tests. First, I thought I'd see what happens with fewer subs, so I selected part of the set - 90 subs instead of 300. Tried a 65MB buffer, first with a 1024MB stack and then 2048MB - both OK.


Shut down everything else except System Monitor and ran PI with different sets of bias frames - 100, 200 and 300. With 100 frames, a 65MB buffer and a 5120MB stack with the cache on, it went all the way through, not quite reaching 100% memory usage. Went on to 200 frames and that didn't even work with the default 1024MB stack - it ran out of memory. Next I'll turn off the cache and see if that makes any difference.


Turning the cache off didn't seem to make any difference. As soon as the pixel integration started, the memory usage went up from about 2GB and gradually rose to maximum, when PI shut down. Running Firefox at the same time seemed to make no difference to memory usage - certainly less than 100MB.

Conclusion - for more than 100 bias subs, changing the buffer and stack from the defaults to what would seem realistic values doesn't work. On 100 subs the speed increase from using a 65MB buffer and 5120MB stack seems minimal, which surprised me. For 300 subs or more it seems as if the default settings have to be used, though I haven't tried setting the buffer somewhere between the 16MB default and the 65MB needed to contain a frame's worth of data.

Since bias frames only need to be captured and processed once for each gain setting, the loss of processing speed from using the defaults is not really significant.
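One possible explanation for the crashes, hinted at in the documentation quoted earlier, is that there is an independent row buffer per input image, so the buffer setting gets multiplied by the sub count. A sketch of that arithmetic, assuming it scales that way:

# One row buffer PER INPUT IMAGE means total buffer memory scales with sub count.
buffer_mib = 65
for n_subs in (100, 200, 300):
    total_gib = n_subs * buffer_mib / 1024
    print(f"{n_subs} subs x {buffer_mib} MiB = {total_gib:.1f} GiB of buffers")
# ~6.3 GiB for 100 subs, ~12.7 GiB for 200, ~19.0 GiB for 300.

On a 16GB machine that would explain why 100 subs squeezed through (about 6.3GB of buffers plus a 5GB stack, fitting the observation above of memory climbing close to but not quite 100%) while 200 and 300 ran out of memory even with the default stack. If that reading is right, the 16MB default is not so unreasonable for large sets; it simply trades extra disk reads for memory.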


That's good to know. Although bias subs are only done once, I'm integrating around 200-300 lights now for some of my projects. 30s and 45s subs quickly add up. Mine ran just under 200 lights with a 65MB buffer and 10GB stack in a single integration, though it was still relatively slow, suggesting it might have been swapping to disk. I might just leave it at the default settings too.
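By the same per-image-buffer arithmetic sketched above (still an assumption about how the buffers scale):

# Just under 200 lights, 65 MiB buffer each, plus a 10 GiB stack:
total_gib = 200 * 65 / 1024 + 10
print(f"~{total_gib:.1f} GiB wanted on a 16GB machine")  # ~22.7 GiB -> likely swapping

which would be consistent with the run completing, but slowly.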


Here is a screenshot of the ImageIntegration setup that I tried with 200 bias subs and which failed during pixel integration. It might give you a clue as to why the changes in buffer and stack size didn't work as expected.

[Attached screenshot: PI Bias Intergeration 06.png]


And how many frames did you integrate?  I shall leave the file cache on as it doesn't seem to affect pixel integration.

I have 600 bias frames at a gain of 500, which is what I'm using now, and also 600 at a gain of 440, which I used for earlier lights, so I might as well use all of them, I guess. I doubt much if anything would be gained between 300 and 600 though. I'll probably try both 600 and 300 on one set of lights and see if there's any difference in the final result.
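That instinct matches the usual statistics: residual noise in a master bias falls roughly with the square root of the number of subs, so doubling from 300 to 600 only buys about a 29% further reduction. A quick illustration:

# Residual noise in the master scales roughly as 1/sqrt(N).
import math
for n in (100, 300, 600):
    print(f"{n} subs: noise down to {100 / math.sqrt(n):.1f}% of a single frame")
# 100 -> 10.0%, 300 -> 5.8%, 600 -> 4.1%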

I'm currently downloading the 600 g440 bias frames from my obsy laptop - taking about 40 minutes.


ImageIntegration of 600 bias frames for a gain of 500 completed. Here's a screenshot with an auto-stretch applied to the result, plus the settings. Now processing the same for a gain of 440.

[Attached screenshot: PI Bias Intergeration 09.png]


That looks very different to my own. Here's my master bias, based on 256 subs at a gain of 300 and an offset of 50. This is shown with the standard auto-stretch applied. I'm not sure how clear it is, but there are very faint variations, particularly vertical ones (which is where I would expect them).

[Attached screenshot: 2016-11-21 (2).png]


On 08/11/2016 at 15:19, Gina said:

Have gone through a few dates in my data and correlated that with information from pages 26-27 in this thread.

Date          Object    Filter       Exp     Gain    Temp (°C)
2016-10-10    C-Loop    Ha & OIII    60s     440     -30
2016-10-11    C-Loop    OIII         60s     440     -30
2016-10-11    C-Loop    SII          120s    440     -30
2016-10-13    Flats     OIII         1s      ?       -27
2016-10-13    NAN       Ha           30s     440     -30
2016-10-13    NAN       OIII         60s     440     -30

Now capturing darks to re-do these sets of lights properly. Currently 30s subs, and 80 of them, which I gather should be plenty.


7 minutes ago, Filroden said:

That looks very different to my own. Here's my master bias, based on 256 subs at a gain of 300 and an offset of 50. This is shown with the standard auto-stretch applied. I'm not sure how clear it is, but there are very faint variations, particularly vertical ones (which is where I would expect them).

[Attached screenshot: 2016-11-21 (2).png]

My lights are at gains of 440 and 500 - a lot higher than yours - and the offset I've been using is 10. Could you tell me the reason for using an offset of 50, please? And have you done tests at various gains - is 300 better than 440 or 500? I'm still experimenting and learning, and I'm very interested in other people's settings and why.


6 minutes ago, Gina said:

My lights are at gains of 440 and 500 - a lot higher than yours - and the offset I've been using is 10. Could you tell me the reason for using an offset of 50, please? And have you done tests at various gains - is 300 better than 440 or 500? I'm still experimenting and learning, and I'm very interested in other people's settings and why.

No reason other than it was the default given by the driver for that gain. I've since dropped my offset to zero as, at a gain of 300, my histograms are all clearing the right side, even blue. I have yet to try any other gain but have been watching your and other results at various gains to see what difference it might make. I think 300 is where the read noise starts to bottom out but still leaves more dynamic range than higher settings. I had considered testing unity at some stage but 300 seems to give me results and if it isn't broke I haven't fixed it :)
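For what it's worth, that trade-off can be sketched numerically: dynamic range is roughly the full-well capacity divided by the read noise, and raising gain divides the effective full well while read noise falls and then flattens. The figures below are assumed, illustrative values for an ASI1600-class sensor (ZWO gain is in 0.1dB steps), not measurements:

# Dynamic range in stops = log2(full_well / read_noise). Assumed values only.
import math
settings = [(0, 20000, 3.6), (139, 4000, 1.7), (300, 630, 1.3), (500, 63, 1.2)]
for gain, full_well_e, read_noise_e in settings:
    print(f"gain {gain}: ~{math.log2(full_well_e / read_noise_e):.1f} stops")
# gain 0: ~12.4, gain 139: ~11.2, gain 300: ~8.9, gain 500: ~5.7

On numbers like these, 300 keeps several more stops in hand than 440 or 500, which is the gist of the point above.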

