
Is Pixinsight worth a new computer?



5 hours ago, old_eyes said:

We just need a supercomputer each  - that's all!


Fugaku 415 PetaFlops

When I was a young academic, the CDC Cyber 205 could do an amazing 400 MFLOPS. A modern Intel i3 manages over 300 GFLOPS - nearly 1000 times as fast. Ain't progress great?
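For anyone curious, the ratio is easy to sanity-check - a quick sketch using the approximate peak figures quoted above:

```python
# Rough back-of-envelope comparison using the figures from the thread.
cyber_205_flops = 400e6   # CDC Cyber 205: ~400 MFLOPS (vector peak)
intel_i3_flops = 300e9    # modern Intel i3: ~300 GFLOPS

speedup = intel_i3_flops / cyber_205_flops
print(f"i3 vs Cyber 205: ~{speedup:.0f}x")
```

That works out to roughly 750x - "nearly 1000 times" as said above.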

http://www.computinghistory.org.uk/userdata/images/large/51/91/product-105191.jpg

It really is 🙂 I came in at the start of the Pentium era, though had a fair bit of time on Amstrad and friends prior, but no coding. i386 was my first chip - we've come a long way!

If shipping all the image data around weren't so prohibitive for most people, I think there'd be a strong case for a render-farm-style service for PixInsight and others; the workload is bursty and not something all users run at once, so it's well suited to sharing. I did toy with the idea of using an AWS virtual desktop instance for PI (pay by the hour when you need it), but since my last PC upgrade I've not needed to consider it. And that's only because I've got a 1 Gbps upstream from home, so pushing all the data to the cloud isn't a big deal - easily done in a few minutes for most jobs. A different story on typical rural/semi-rural ADSL/VDSL/DOCSIS!
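The upload-time difference is easy to put numbers on. A quick sketch - the 20 GB dataset size is a made-up example, and the link speeds are nominal rates, not real-world throughput:

```python
# Back-of-envelope upload times for a night's worth of image data.
# Dataset size and link speeds are illustrative assumptions only.
dataset_gb = 20
dataset_bits = dataset_gb * 8 * 1e9  # decimal GB -> bits

links_mbps = {
    "1 Gbps fibre": 1000,
    "20 Mbps VDSL": 20,
    "1 Mbps ADSL upstream": 1,
}
for name, mbps in links_mbps.items():
    minutes = dataset_bits / (mbps * 1e6) / 60
    print(f"{name}: ~{minutes:.0f} min")
```

A few minutes on fibre versus most of a day on a slow ADSL upstream - which is exactly why the render-farm idea only works for some connections.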

15 minutes ago, discardedastro said:

i386 was my first chip - we've come a long way!

My first was an IBM XT with an 8088 at 4 MHz, 640 KB of RAM, a 10 MB disk (yippee!) and a 5 1/4" floppy drive - 128 KB per disk, or double that if you were lucky enough to have double-sided disks - at work. Before that it was a PET.


I couldn't see any difference between PI's and APP's calibration and stacking routines, except that I find APP easier to use.
PI has a vast range of tools, however, and some of the scripts are excellent - the PhotometricMosaic script, for example, got something decent out of my data where all other packages failed.

For AP processing I would spend the budget on PI rather than PS given that there is now Affinity Photo as a cheaper alternative to PS.

On 10/04/2021 at 17:10, Clarkey said:

I have used Startools and really like some of the functions for teasing out detail. I just struggle to get a good background as it seems to pull detail from noise.

On 10/04/2021 at 20:55, The Lazy Astronomer said:

I have the same problem with Startools, but I've seen Ivo get good backgrounds from other people's data, so I'm convinced I just need to find the right settings to get it to work well.

To both of you, please feel free to share a stacked typical night's worth of data with me (here, via the ST forums, PM, whatever works for you). Along with that, a StarTools rendition that you're not happy with and - optionally - some other image that you do like. I'd be happy to give you any pointers.

Oftentimes, things are trickier if your data is not clean. The engine assumes that the only noise in your image is shot noise (from the signal) and nothing else. That's when ST can really stretch its legs and properly prove its worth. Something close to this ideal (being shot-noise-limited) is absolutely achievable with most gear and instruments.
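For anyone unfamiliar with the shot-noise-limited ideal, it's a simple statistical property that's easy to demonstrate - a small numpy sketch (my own illustration, nothing to do with ST's internals):

```python
import numpy as np

# Photon arrivals are Poisson-distributed, so the noise (std dev) grows
# as sqrt(signal) and the signal-to-noise ratio as sqrt(photon count).
rng = np.random.default_rng(42)
for mean_photons in (100, 10_000):
    counts = rng.poisson(mean_photons, size=200_000)
    snr = counts.mean() / counts.std()
    print(f"mean {mean_photons:>6} photons -> SNR ~ {snr:6.1f} "
          f"(sqrt(mean) = {np.sqrt(mean_photons):.1f})")
```

100x the photons buys you 10x the SNR - which is why integration time matters, and why any *extra* noise source on top of this floor hurts so much.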

However, if something else is introducing non-random, correlated patterns/artefacts into your image, then you will have to work harder (whether in ST or some other software). In such cases, ST's algorithms will - just like a human - have a hard time telling detail apart from artefact, and will need help/tweaks from you with this subjective task, which is no longer in the realm of physics/mathematics.

Some things that can corrupt a background are: not dithering, bad flats, bad/old bias frames or darks, accidental application of some post-processing, accidental application of noise reduction (at higher ISOs), unwanted compression artefacts/issues (e.g. Nikon D5300, some Sony DSLRs), or even sensor pixel cross-talk.

Symptoms include blotches, mottling, wormy/stringy noise grain, multi-pixel noise grain, streaks, zipper patterns, faint circles/banding/posterization, and/or minor black clipping (due to dark outlier rejection). (I just realised the latter reads like some sort of medication prescription leaflet... 😋).

As for processing power and supercomputers, we already have 2002/2003/2004's supercomputers in every device! Weta Digital's Lord of the Rings visual FX were rendered with what is now, in floating-point operations per second (FLOPS), the power of a single 2021 entry-level gaming GPU.

While not all algorithms are suited to running on GPUs, CPU power is no longer as dominant as it once was. GPUs simply run rings around CPUs when it comes to raw number crunching, thanks to their specialised silicon. Deconvolution and some forms of noise reduction, for example, are substantially faster (3x-20x) on the GPU versus the CPU. Try StarTools' CPU vs GPU versions and you will see what I mean - decon previews can now complete in near real-time on the GPU.
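For anyone curious what is actually being accelerated, here is a minimal CPU sketch of Richardson-Lucy deconvolution in numpy (my own illustration, not StarTools' actual implementation). The work is dominated by repeated FFTs and element-wise maths - exactly the kind of load a GPU build can run on device memory for the speedups mentioned above:

```python
import numpy as np

def richardson_lucy(image, psf_kernel, iterations=30):
    """Minimal Richardson-Lucy deconvolution via FFT convolution.

    psf_kernel must be image-sized, normalised to sum to 1, with its
    centre at pixel (0, 0) (i.e. already np.fft.ifftshift-ed).
    """
    otf = np.fft.rfft2(psf_kernel)
    estimate = np.full(image.shape, image.mean())
    for _ in range(iterations):
        # Forward model: blur the current estimate with the PSF
        blurred = np.fft.irfft2(np.fft.rfft2(estimate) * otf, image.shape)
        ratio = image / np.maximum(blurred, 1e-12)
        # Correlating with the PSF = multiplying by conj(OTF) in Fourier space
        estimate = estimate * np.fft.irfft2(
            np.fft.rfft2(ratio) * np.conj(otf), image.shape)
    return estimate
```

Every iteration is a handful of full-frame FFTs, which is why moving the same loop to a GPU pays off so handsomely.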

We live in amazing times, to have something like that sitting on your desk or lap. Now that these compute capabilities have achieved mass-market penetration, hopefully you will see this come to other software like PI and APP as well.


I really don't think you need anything special to use PI.
Minimum specification is an i5 or equivalent with 8 GB RAM, with an i7 and 16 GB recommended - but an i5 is certainly fine.

Recommending one software package over another is very difficult unless you are good at using all of them, which rarely happens.
Many (including myself) use both PixInsight and Photoshop (in one version or another).

Many also say that PI has a steep learning curve or that it is difficult to grasp. Personally, I do not agree - but it is different in concept and in the way you use it, which may put some people off.

Perhaps it is that people who have used other software extensively and then move to PI are so used to one way of working that it becomes difficult to see how PI works. I used PI from the start of my imaging (as I am only a relative newbie), so I had no previous concept of how other programs worked - maybe that is why I did not have a problem with it whilst others do, or my brain is just wired differently; I have no idea.
I am the other way round in that I still cannot get a grasp of Photoshop, whilst most are very fluent in Photoshop and struggle with, or do not like, PI.
Don't get me wrong - even after 3 years of owning PI I have not used every tool in there, and often have to refer to some notes I have made, or a book, when using some tools. So I am no guru with it, but I can do most things I need to do.

Also, there is Astro Pixel Processor, which gets some great reviews and comments, so it must also do a good job - but I have never used it.

When I started, I originally used Nebulosity by Stark Labs, which was cheap and very easy to use.
Yes, it lacks a lot of the finer tools, but you can pre-process, align and do some basic stretching very quickly, and get some great images if the data is anywhere near reasonable.
So it may be worth trying, at least to start with, as you can process very quickly with it and see what is in your data before spending hours learning something more complicated.

Steve

 


Thanks to all for the multiple replies to the original post. I think I will get a better PC - but maybe not up to the specifications recommended by PI. Even some of the software I use now takes a lot of time, even on my 'quick' computer. My main issue is that, when learning or tweaking photos, having to keep repeating processes to see the differences gets time-consuming.

On 12/04/2021 at 07:57, jager945 said:

To both of you, please feel free to share a stacked typical night's worth of data with me (here, via the ST forums, PM, whatever works for you). Along with that, a StarTools rendition that you're not happy with and - optionally - some other image that you do like. I'd be happy to give you any pointers.

Thanks for the reply, Ivo. I have actually gone back to using StarTools in combination with APP and Affinity after reading some posts on the ST forum. My results have improved hugely just by altering my early processing - generally using a manual stretch after binning and cropping. ST works reasonably well with my existing hardware - it is a little slow with some of the more processor-intensive operations, but tolerable for now. From the perspective of getting detail out of the images, it exceeds the other software I have tried by a fair margin. This is an image I reprocessed in ST with a bit of tweaking in Affinity; it is probably my best to date (I have only been imaging since last summer). The colour is a bit strong, but that is down to the computer I am using for processing - the screen is awful!

Whirlpool ST AP.jpg
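For anyone wondering what "binning then stretching" looks like in practice, here is a tiny numpy sketch (my own illustration - the function names and 2x2 block size are just for the example, not APP's or ST's actual routines):

```python
import numpy as np

def bin2x2(img):
    """Software-bin by averaging 2x2 blocks: halves resolution, ~2x per-pixel SNR."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def asinh_stretch(img, strength=50.0):
    """Non-linear asinh stretch: lifts faint nebulosity, compresses bright stars."""
    lo, hi = img.min(), img.max()
    norm = (img - lo) / (hi - lo + 1e-12)   # normalise to 0..1
    return np.arcsinh(strength * norm) / np.arcsinh(strength)
```

Binning first trades resolution you often can't use anyway (seeing-limited data) for cleaner pixels, which then lets the stretch dig deeper before the noise floor shows.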

