
PixInsight and Recommended Laptop


George Sinanis

Morning fellow SGLers,

Currently I’m using a 2021 Dell laptop (Intel i5, 16GB RAM, 512GB SSD) for integration and processing in PixInsight. While I only had the ASI533MC Pro, everything seemed fine and tasks completed relatively quickly.

However, since I got the ASI2600MC Pro and the files are significantly larger, the whole process takes much longer - it can now take 1hr+ to integrate 50 x 600s lights plus calibration frames, not to mention the StarXTerminator process, which can easily take more than 30 mins.
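To put rough numbers on that jump in file size, here is a back-of-envelope sketch (the sensor dimensions are my assumption, taken from ZWO's published specs, so treat the exact figures as approximate):

```python
# Approximate raw frame sizes for the two cameras (dimensions
# assumed from ZWO's published specs; both have 16-bit ADCs).
sensors = {
    "ASI533MC Pro": (3008, 3008),   # ~9 MP IMX533
    "ASI2600MC Pro": (6248, 4176),  # ~26 MP IMX571
}

for name, (w, h) in sensors.items():
    raw_mib = w * h * 2 / 2**20     # 16-bit data, 2 bytes per pixel
    flt_mib = w * h * 4 / 2**20     # 32-bit float, PixInsight's working format
    print(f"{name}: ~{raw_mib:.0f} MiB raw, ~{flt_mib:.0f} MiB as 32-bit float")
```

So each sub is roughly three times bigger than before, which on its own would explain a proportional slowdown - and more once the machine starts swapping.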

So, I checked the recommended specs on PI’s website, as I want to assess the option of buying a new laptop, but it’s a bit confusing since it was last updated in Nov ‘21 (if I’m not mistaken).

It has to be a laptop as “my” home office is usually occupied by the kids studying or my wife and thus, I always have to move around 🙂

I was looking at a MacBook Air with either an M1 or M2 but, if I understood correctly from the few posts I managed to find, it is not 100% compatible with PI (yet).

 

Question:

What are you using and what would you recommend here for a faster integration? Would a GPU make any difference? Would you go Windows or MacOS?


I recently purchased a Lenovo Quad Turbo Laptop with an AMD 3020e 2.6GHz processor, 32GB RAM, 500GB SSD and AMD Radeon RX Vega Graphics.  The machine was very good value on Amazon at the time and I wanted a large amount of RAM for a price that was not crazy.  I have found that this machine is much faster than my old HP i3 desktop when working on the huge files produced by my QHY268M, both in PixInsight and in Photoshop. 

However, even on this specification, the laptop still takes a very long time to work its way through my files notwithstanding the increased spec.  For example, Local Normalisation of 120 x 120sec light frames can take well over an hour and a half.

The Radeon graphics chip (essentially a GPU) is engaged by Photoshop to a limited degree on some processes.  However, my understanding is that PixInsight does not access the GPU (although, if you have an NVIDIA GPU, the NVIDIA CUDA code can be used to allow Starnet2 to access an NVIDIA GPU).

It may be that I do not have the machine's memory set up correctly and if the above sounds too slow given the specifications, I would be grateful to receive any tips for optimising the set up.


4 hours ago, George Sinanis said:

However, since I got the ASI2600MC Pro and the files are significantly larger, the whole process takes much longer - it can now take 1hr+ to integrate 50 x 600s lights plus calibration frames, not to mention the StarXTerminator process, which can easily take more than 30 mins.

Are you going to be using the full resolution image that the camera (WBPP) produces? If not then, once stacked, you can speed things up quite a bit by reducing the size of the image you're working on.

With my ASI294MC Pro images, I normally do 2x IntegerResample right after the stretch so that processes like star & noise removal are speeded up. ;)  
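For anyone curious what a 2x IntegerResample does to the workload, it is essentially block averaging. A minimal numpy sketch (my own simplified stand-in for the process's 'Average' downsample mode, not PixInsight's actual code):

```python
import numpy as np

def integer_resample(img: np.ndarray, factor: int = 2) -> np.ndarray:
    """Downsample by averaging factor x factor blocks (a simplified
    stand-in for IntegerResample's 'Average' mode)."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor        # trim to a multiple of factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

# A 2x resample quarters the pixel count, so per-pixel processes
# like star removal or noise reduction get roughly 4x less work.
frame = np.zeros((6248, 4176), dtype=np.float32)   # ~26 MP mono frame
small = integer_resample(frame, 2)
print(frame.shape, "->", small.shape)
```

Quartering the pixel count is why star and noise removal feel so much snappier after the resample.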


9 hours ago, AMcD said:

I recently purchased a Lenovo Quad Turbo Laptop with an AMD 3020e 2.6GHz processor, 32GB RAM, 500GB SSD and AMD Radeon RX Vega Graphics.  The machine was very good value on Amazon at the time and I wanted a large amount of RAM for a price that was not crazy.  I have found that this machine is much faster than my old HP i3 desktop when working on the huge files produced by my QHY268M, both in PixInsight and in Photoshop. 

However, even on this specification, the laptop still takes a very long time to work its way through my files notwithstanding the increased spec.  For example, Local Normalisation of 120 x 120sec light frames can take well over an hour and a half.

The Radeon graphics chip (essentially a GPU) is engaged by Photoshop to a limited degree on some processes.  However, my understanding is that PixInsight does not access the GPU (although, if you have an NVIDIA GPU, the NVIDIA CUDA code can be used to allow Starnet2 to access an NVIDIA GPU).

It may be that I do not have the machine's memory set up correctly and if the above sounds too slow given the specifications, I would be grateful to receive any tips for optimising the set up.

Thanks - so, you did not experience significant improvement during processing?

 

This is why I’m reluctant to invest in a better-spec laptop right now. I even read that Apple’s M1/M2 might have issues with StarXTerminator.


8 hours ago, Budgie1 said:

Are you going to be using the full resolution image that the camera (WBPP) produces? If not then, once stacked, you can speed things up quite a bit by reducing the size of the image you're working on.

With my ASI294MC Pro images, I normally do 2x IntegerResample right after the stretch so that processes like star & noise removal are speeded up. ;)  

Yes indeed - I do use the native resolution of the camera. But isn’t that the point of having a high-res camera, to capture as much detail as possible?

I might need to go down the PC route.


I have a tower PC with the following spec:

CPU: AMD Ryzen 7 3700X Eight Core (3.6GHz-4.4GHz / 36MB cache / AM4)

Motherboard: ASUS® PRIME B450-PLUS (DDR4, USB 3.1, 6Gb/s)

Memory (RAM): 32GB Corsair VENGEANCE DDR4 2400MHz (2 x 16GB)

Graphics Card: 4GB AMD RADEON™ RX 550 - HDMI, DVI - DX® 12

1st Storage Drive: 512GB PCS 2.5" SSD, SATA 6Gb/s (520MB/s read, 450MB/s write)

Stacking 230 IMX571 subs in APP with LNC enabled (3 iterations) takes around 6 hours…

I just go and do something else.😎


Just wash your car, do some gardening, watch the World Cup or make dinner while you're waiting. You'll get even more time for that when you get an ASI6200 😁

EDIT: You may even take a walk with the dog or spend some time with your partner

Edited by gorann

I must be doing something massively different: even with my 183 subs, a night's stack of 200-odd images takes around 10 minutes, and I've generally never had a stack take much longer than 15 minutes total. Then I stack the stacks if there are multiple sessions, which is very quick. It's one of the reasons I stick with DSS - it's fast.


12 hours ago, George Sinanis said:

Thanks - so, you did not experience significant improvement during processing?

It is faster than my old HP desktop but not as fast as I had expected with the new spec.  Reading the other replies, it looks like the programme is just very resource intensive. As @tomato and @gorann say, I guess it gives more time for other activities 🤓


It's very important to use previews before doing an image integration, so that you know the stacking method and its settings work. Otherwise you can waste hours on the full integration only to find satellite trails etc.

PixInsight can use as many PC resources as you can throw at it - the faster the CPU and the more memory, the better. For quick "checking" a low-spec laptop is fine; I've used an old Celeron laptop with 2GB of memory for this. However, for the integration of hundreds of large CMOS images (which can be over 100MB each) you need something with a beefier specification. It can take my i9-9900K/SSD/64GB RAM tower machine two hours at 100% CPU to crunch through an integration of 300 images from my 2600C sensor.
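A rough memory estimate shows why integrations like that are so heavy. My assumed numbers: ~26 MP frames held as 32-bit floats (PixInsight's working format); in practice the process buffers to disk rather than holding everything in RAM at once:

```python
# Back-of-envelope: pixel-data footprint of 300 x ~26 MP subs in 32-bit float.
subs = 300
pixels = 6248 * 4176                     # ~26 MP (IMX571-class sensor)
per_frame_mib = pixels * 4 / 2**20       # 4 bytes per pixel as 32-bit float
total_gib = subs * pixels * 4 / 2**30

print(f"~{per_frame_mib:.0f} MiB per frame, ~{total_gib:.0f} GiB across {subs} subs")
```

That is nearly 30 GiB of pixel data to read, normalise, reject and combine, so fast storage and plenty of RAM both pay off.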


It’s the LNC option in APP that adds the time: each frame is analysed, the calculation done, then the result applied to each channel of every frame.

Also, because of my lazy alignment when taking the subs, the final integration can be 36M pixels rather than 26M. APP on my setup integrates 62,000 pixels at a time, and on 250+ frames that's 6-8 seconds a go - over an hour just for the integration.
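Those figures are internally consistent, as a quick check shows (using the numbers quoted above: 36 M output pixels, ~62,000 pixels per pass, 6-8 s per pass):

```python
# Sanity check of the APP integration time from the figures above.
pixels = 36_000_000      # output integration size
chunk = 62_000           # pixels integrated per pass
chunks = pixels / chunk  # ~581 passes

for secs in (6, 8):
    print(f"{chunks:.0f} chunks x {secs} s = {chunks * secs / 60:.0f} min")
```

So 58-77 minutes just for the integration step, which matches the "over an hour" above.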

I don’t stack stacks - I thought there was some minor degradation in quality if you do that. So although I have interim integrations as the project progresses, after the final capture session I put all the subs in again and calibrate and stack from scratch. That’s when I can get my life back! ☺️

