What hard drive are you using for processing?



I decided to stick all my data processing / imaging stuff on one disk drive of my PC. It's a fairly high-end PC: Ryzen (12-core) processor, 32GB of memory, and, until yesterday, only two solid state drives (SSDs).

I had bought an 8TB (that's 8000 gigabytes, for the PC noobs) hard disk drive (HDD) with the intention of isolating imaging to help simplify things, which seemed a rational approach. I had considered speed, but didn't think it would be as impactful as it actually was!

So anyway, I had also decided to use the 8TB HDD as the temporary folder for DSS. I set a stacking operation going, and 14 hours later it finished. I was a little surprised at that time, so I decided to run a side-by-side comparison of HDD and SSD speeds on some other data.

M97 (not that it matters) was the data. I tried it again using the HDD and it was stacked in 7 hours and 40 minutes. I then simply switched the temporary folder in DSS to my SSD drive and ran the same stack, with no other changes. The time: 1 hour and 40 minutes 😲

So if you use a PC for processing (or similar upgradeable equipment), just bear this in mind for the next upgrade you plan. A word of warning that SSDs are quite expensive, but they can offer considerable speed advantages, as I can testify to.

 


The only thing that stands out is the HDD interface. There will be other factors that make mechanical drives slightly slower than SSDs, but both will be connected to the PC via SATA cables, through the SATA bus, so they will basically have the same interface transfer rate. Given the spec of your machine, I would guess the main board has NVMe ports. Do you have an NVMe drive installed? If so, try running the same experiment having pointed DSS to a folder on that drive, and see how much faster the processing is.

One other thing to mention is to check that DSS can use all processors - when stacking, the pop-up states how many are in use. It may not use all cores, but it should use all (16?) logical processors.


31 minutes ago, malc-c said:

The only thing that stands out is the HDD interface. [...] Do you have an Nvme drive installed, and if so try running the same experiment having pointed DSS to a folder on that drive and see how faster the processing is.

I don't actually have a problem with the machine; I'm just pointing out what to some might not be immediately obvious. The fast SSD is indeed an NVMe drive, and I believe that is part of the reason for the huge differential. Another part of it is spindle speed: I purposely got a 7200rpm HDD, but even so it was running at 100% the whole time it was stacking, so that is almost certainly where the logjam is.

I could happily run it overnight if need be on a larger stack requiring more disk space. The processor is a Ryzen 5 - 6 cores, but it has 12 threads, so it shows as 12 in Device Manager. Sorry for the mix-up; DSS does pick up all 12. (I can see that being upgraded soon.)

EDIT: Just looked, and Zen 3 R7 processors are £200-300 depending on the model. Tempting!

Edited by bomberbaz

My experience is that SATA SSDs are markedly faster than SATA HDDs. This holds even when each drive is plugged into exactly the same cable and SATA port.

One big difference is latency. If the sector you want to read or write is on the opposite side of the platter from the R/W head, you have to wait half a rotation period (about 4 milliseconds at 7200rpm) for anything to happen on an HDD. Latency is tens to hundreds of times smaller on an SSD.
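To put a number on that rotational wait: at 7200 rpm one revolution takes 60/7200 ≈ 8.3 ms, so the average half-revolution wait is about 4.2 ms, before any seek time is counted. A quick back-of-envelope check:

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency (half a revolution) in milliseconds."""
    seconds_per_revolution = 60.0 / rpm
    return 0.5 * seconds_per_revolution * 1000.0

print(avg_rotational_latency_ms(7200))   # ≈ 4.17 ms for a desktop HDD
print(avg_rotational_latency_ms(5400))   # ≈ 5.56 ms for a laptop HDD
```

For comparison, a flash read is typically in the tens of microseconds, which is where the large SSD advantage on small random accesses comes from.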

Over the decades a great deal of effort has gone into hiding this latency through intelligent reordering of sector layout and disk operations, and through caching (originally just in system RAM, but now also on the HDD itself), but it cannot be eliminated entirely.

 


18 minutes ago, Xilman said:

My experience is that SATA SSD are markedly faster than SATA HDD. One big difference is latency. [...]

 

Very good point. The HDD I installed does have a 256MB cache; however (and I am not sure exactly how DSS processes files), at 39MB per file and 1250 of them, that will, as you point out, only get you so far.
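The arithmetic makes the point: a 256MB on-drive cache holds only a handful of 39MB subs, while the dataset as a whole is close to 50GB. A quick sketch, using the file size and count from the post above:

```python
cache_mb = 256           # on-drive HDD cache
sub_mb = 39              # size of one sub-frame
n_subs = 1250            # number of subs in the stack

dataset_mb = n_subs * sub_mb
subs_cached = cache_mb // sub_mb

print(dataset_mb / 1024)   # ≈ 47.6 GB of subs in total
print(subs_cached)         # only 6 whole subs fit in the cache at once
```

So the cache helps smooth bursts, but it cannot hide repeated passes over a dataset nearly 200× its size.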


If you really want to speed things up - get a 256 or 512GB NVMe SSD to be your "workspace" drive.

Keep the 8TB drive for backup purposes - store your files there when not processing, but use the NVMe SSD for processing.

With mechanical disks, the bottleneck is the mechanical side of things - usually capping them at around 100 MB/s transfer speed.

With SATA SSDs, either the controller or the SATA interface is the bottleneck - usually capping them below the 600 MB/s transfer speed SATA3 is capable of.

With NVMe you have very high transfer speeds, up to 3-6 GB/s (depending on PCIe version), and very quick access times.
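To get a feel for what those interface speeds mean in practice, here is how long a single sequential pass over a 55GB set of subs would take at the ballpark speeds above. (Real stacking makes many passes and adds random I/O, so actual run times are much longer - this is only the floor set by the interface.)

```python
def one_pass_minutes(size_gb, speed_mb_per_s):
    """Minutes to stream `size_gb` once at a sustained speed in MB/s."""
    return size_gb * 1024 / speed_mb_per_s / 60

# Indicative sustained speeds, not measurements:
for name, speed in [("HDD ~100 MB/s", 100),
                    ("SATA SSD ~550 MB/s", 550),
                    ("NVMe ~3500 MB/s", 3500)]:
    print(name, round(one_pass_minutes(55, speed), 1), "min")
```

One pass works out at roughly 9.4 minutes on the HDD versus well under 2 minutes on either SSD - multiply that by the many read/write passes a stack makes and the hours-long gap in the original post is no surprise.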

I have a SATA SSD and an NVMe SSD in my machine at the moment:

Samsung SSD 830 (SATA)

and

Samsung SSD 980 PRO (NVMe)

Here are benchmarks of their performance:

[benchmark screenshot: Samsung SSD 830, SATA]

versus

[benchmark screenshot: Samsung SSD 980 PRO, NVMe]

(From top to bottom, you can roughly read these benchmark results as: optimized work with large files, unoptimized work with large files, optimized work with small files, unoptimized work with small files.)

In fact, I also have a small 2.5" HDD attached to the SATA interface - here is a benchmark of that drive (it is not very fast even by HDD standards - it is a 5400rpm laptop-type drive):

[benchmark screenshot: 2.5" 5400rpm laptop HDD]

Now, we can see that the NVMe SSD is literally from 50× up to 500× faster than the HDD, depending on the operation performed. That is an enormous performance improvement.

Today it really does not make sense to use an HDD for anything other than mass storage / backup storage - usually in the form of a NAS or SAN (RAID and all, for failure tolerance).

 

 

 


Not that I do a lot of processing these days, but having a dedicated NVMe drive as a scratch disk can make a difference for many such applications. These days most mid-range boards have two or three NVMe ports, some with full access to the PCIe lanes and others rated at SATA bus speeds. My machine is now 5, almost 6, years old, so it only had the one NVMe port, but I fitted a Samsung Evo 960 as the system drive and its read/write and throughput speeds are fast (though not as fast as the Pro version of the same drive). Granted, having it as the system drive does impede performance somewhat, as other apps and system processes access the same drive, but when I did process data (RAW files from a Canon 400D) it rendered the image in minutes rather than hours.

My benchmarks are below - 256GB Evo 960 vs 2TB WD 5400rpm HDD.

[benchmark screenshot: 256GB Evo 960 vs 2TB WD 5400rpm HDD]

 

Now, comparing the HDD results to Vlaiv's, the free space may have some bearing on the outcome. But you can also see the improved performance Samsung Pro NVMe drives have over the EVO, and how much of a difference there is between NVMe and mechanical drives. Also, given the age of my machine, newer generations of drives may use improved chipsets and flash memory compared to my first-gen drive.


3 minutes ago, malc-c said:

But you can also see the improved performance Samsung Pro Nvme drives have over the EVO,

I think that difference is mostly due to the PCIe version used by the drive and the NVMe port.

I matched mine, both at PCIe 4.0, while yours is probably PCIe 3.0 (and that alone makes a 2× difference in max transfer speed).
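The factor of two comes straight from the link speed: PCIe 3.0 runs at 8 GT/s per lane and PCIe 4.0 at 16 GT/s, both with 128b/130b encoding, so a typical 4-lane NVMe link tops out at roughly 3.9 vs 7.9 GB/s. A quick check of that arithmetic:

```python
def pcie_x4_bandwidth_gbs(gt_per_s):
    """Approximate usable bandwidth of a 4-lane PCIe link in GB/s.

    gt_per_s: transfer rate per lane (8 for PCIe 3.0, 16 for PCIe 4.0).
    128b/130b encoding carries 128 payload bits per 130 bits on the wire.
    """
    lanes = 4
    return gt_per_s * (128 / 130) * lanes / 8   # bits -> bytes

print(round(pcie_x4_bandwidth_gbs(8), 2))    # PCIe 3.0 x4: ~3.94 GB/s
print(round(pcie_x4_bandwidth_gbs(16), 2))   # PCIe 4.0 x4: ~7.88 GB/s
```

These are upper bounds on the link; protocol overhead and the drive's own controller keep real-world figures somewhat below them.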


3 hours ago, vlaiv said:

If you really want to speed things up - get 256 or 512GB NVME SSD to be your "workspace" drive.

Keep 8TB drive for backup purposes - store your files there when not processing, but use NVME SSD for processing.

 

Ahead of you on this one, vlaiv. I moved all the pictures, video and other media from the NVMe drive onto the 8TB mechanical drive last night, and that leaves me a little over 600GB free. Should cover my needs, I think.

Thing is, I have checked my motherboard specs and it turns out it has two NVMe ports, and I could have gotten a 1TB drive, which would have met my needs easily for far less than the 8TB mechanical drive: Kingston 1TB NV2 PCIe 4.0 NVMe SSD | Ebuyer.com

I think my board is PCIe 3rd generation, so a super turbo-boosted SSD (read/write over 6GB/s) would be wasted on it.

Going to speak to my PC man on Monday; I may just buy the drive for the sake of under 50 quid!


47 minutes ago, bomberbaz said:

Ahead of you on this one vlaiv, I moved all the pictures, video and other media from the storage on the Nvme drive onto the  8TB mechanical drive last night and that leaves me a little over 600GB free. Should cover my needs I think.

Do you take backups of that drive?

Serious question.
My files are backed up to a separate machine (a rather elderly Linux box) fitted with a 3-disk ZFS array. Three 2TB drives give 3.6TB of usable storage. One drive hard-failed a while back and was replaced with a 4TB unit (it cost about the same as a 2TB disk), but the RAID array sailed through the episode. The other two disks are showing faint signs of age, but neither has failed yet:

pcl@ra:~$ zpool status
  pool: backup
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
    attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
    using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-9P
  scan: scrub repaired 0B in 0 days 13:27:01 with 0 errors on Sun Feb 12 13:51:03 2023
config:

    NAME                        STATE     READ WRITE CKSUM
    backup                      ONLINE       0     0     0
      raidz1-0                  ONLINE       0     0     0
        wwn-0x50014ee2bf658d03  ONLINE       0     0     0
        wwn-0x5000c500c0fc45a7  ONLINE       0     0     3
        wwn-0x5000c500c0d65bf2  ONLINE       0     0     2

errors: No known data errors
pcl@ra:~$ df -h /var/lib/backuppc
Filesystem      Size  Used Avail Use% Mounted on
backup          3.6T  1.2T  2.4T  33% /var/lib/backuppc

I will replace them when I return to Cambridge.
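For anyone puzzling over the capacity figures in that `zpool` output: raidz1 spends one drive's worth of space on parity, so three 2TB drives leave 4TB raw, which is about 3.6 TiB once decimal-vs-binary units are accounted for - matching the `df -h` figure above (ZFS metadata overhead trims a little more). A quick sketch:

```python
def raidz1_usable_tb(n_drives, drive_tb):
    """Raw usable capacity of a raidz1 vdev: one drive's worth goes to parity."""
    return (n_drives - 1) * drive_tb

raw_tb = raidz1_usable_tb(3, 2)           # 4 TB (decimal, as sold)
tib = raw_tb * 1000**4 / 1024**4          # ≈ 3.64 TiB, before filesystem overhead
print(raw_tb, round(tib, 2))
```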


OK, so I ran my own disk-speed checks. The results are pretty much where I expected them to be, although the 8TB mechanical drive did quite a bit better than the other mechanical drives here. I am presuming the higher spin speed (7200rpm) and larger 256MB cache helped.

Top to bottom: the 1TB NVMe drive (partitioned as a 250GB C drive / 750GB E drive), the 250GB SATA SSD D drive (this has PC games on it), and the 8TB HDD.

[CrystalDiskMark screenshot: 1TB NVMe SSD]

[CrystalDiskMark screenshot: 250GB SATA SSD]

[CrystalDiskMark screenshot: 8TB HDD]


1 minute ago, Xilman said:

Do you take backups of that drive?

Serious question.
My files are backed up to a separate machine (a rather elderly Linux box) fitted with a ZFS 3-disk array. [...] I will replace them when I return to Cambridge.

I have an external hard drive. Documents, pictures etc. which I deem extremely important are backed up to that; nothing else is saved. But I do keep my eye on things, and luckily I have always spotted a failure coming before it got to a critical stage.


2 minutes ago, bomberbaz said:

I have an external hard drive. Documents, pictures etc which I deem extremely important are back up to that, nothing else is saved but I do keep my eye on things and luckily have always seen failing before it's got to a critical stage.

Here's hoping you continue to be lucky.


Because I'm running off a laptop with a single 1TB NVMe SSD which is mostly full, I've been stacking my subs off a Samsung T7 USB 3 SSD - not really slow at all, and quite close to the speed of the internal one. I need more astro storage, as I've already filled up 2TB, but getting a USB HDD seems like a backward step nowadays, and the SSDs are also physically smaller.


20 hours ago, vlaiv said:

I think that difference is mostly due to PCIe version used by the drive and NVME port.

I matched mine both at PCIe 4.0 version, while yours is probably PCIe 3.0 (and that alone makes x2 difference in max transfer speed).

HWiNFO is a tad confusing... the Asus X370-A has 2x PCI, 4x PCI Express x1, 3x PCI Express x4 and 1x PCI Express x16, but then states that the supported version of PCI Express is v3.0.

I've checked BIOS updates and there is nothing about updating the PCIe version; they're more to do with Win 11 support.

To all the other contributors: I would strongly recommend you look into a backup solution such as a NAS that has at least RAID 1. Whilst it's not a disaster-recovery option in the event of fire / flood etc., it's a lifesaver for important files. You can use free software to take a snapshot and image a whole drive, or back up files or folders. Being RAID, if one drive fails you still have access to the data whilst you get a replacement - often hot-swappable - to rebuild the RAID. Backing up files to the NAS has saved my bacon on more than one occasion.


I have a NAS with mirrored drives that I back up data to. I also have a USB drive off the back of the NAS that I use for snapshots of specific data on the NAS. Fingers crossed I need no more than that 🙂

Edit: In fact, for my astro stuff, the mini PC in the obs copies its data to the NAS in the first instance and I use that to transfer to my processing machine. My backups go to a different place on the NAS, so I end up with two copies of the source data until I do some pruning.

Edited by scotty38

3 minutes ago, scotty38 said:

I have a NAS with mirrored drives that I backup data to. I also have a USB drive off the back of the NAS that I use for snapshots of specific data on the NAS. Fingers crossed I need no more than that 🙂

Edit: In fact for my Astro stuff my mini pc in the Obs copies its data to the NAS in the first instance and I use that to transfer to my processing machine. My backups go to a different place on the NAS so in fact I end up with two copies of the source data until I do some pruning.

That's pretty much identical to my approach.
I find that data transfer directly to and from the NAS is not the fastest, but I don't find that an issue.

I use the USB SSD drive to hold whatever data I am working on at the time, and generally let it copy at its own pace - say, overnight - so there's no rush. Then, when it is all on the USB drive, I can use it on any PC I need to for the processing, and eventually save the PixInsight project and any final images back to the NAS drive.

I also like using the NINA app, which automatically transfers data over the network in the background. After a session I have the data on the drive of the PC running NINA (at the mount) and also a backup copy on both hard drives in the NAS. This background transfer doesn't interfere with imaging time, as the slow transfer happens while later images are being captured.

Seems to work a treat.

Steve


28 minutes ago, malc-c said:

I've checked BIOS updates and there is nothing about updating the PCIe version, more to do with win 11 support

I don't think you can update to a new version, as it is a hardware thing - you'd need a motherboard with a new chipset. Much like USB 2.0 vs USB 3.0 (and other versions), it is an upgrade to the protocol / frequency that is implemented in hardware.


On 24/02/2023 at 23:50, bomberbaz said:

I decided to stick all my data processing / imaging stuff on one disk drive of my PC. [...] I then simply switched the temporary folder in DSS to my SSD drive and ran the same stack, no changes. The time: 1 hour and 40 minutes 😲

 

How many and what size images were you stacking? I store all my subs on a very old SPARC-based 4-drive ReadyNAS. I point WBPP at those drives for all the subs and calibration frames. My PC is a Ryzen 5700G with 64GB of RAM, and I just stacked nearly 500 subs in an hour and a half.


7 minutes ago, Anthonyexmouth said:

How many and what size images were you stacking? I store all my subs on a very old sparc 4 drive ReadyNas . I direct WBPP to those drive for all the subs and calibration frames. My PC is a Ryzen 5700g with 64gb ram and I just stacked nearly 500 subs in and hour and a half. 

I run 15-second subs from my garden due to the conditions - very light-polluted - so even with 5 hours of data it is an awful lot of space I need, and the processing time reflects this too. On M97 I gathered 1450 subs at 39MB each (that is 55GB); that's a lot of registering, stacking, then kappa clipping - it takes forever.

If I drizzle, which I have had occasion to do, then obviously even more space is needed, and more time of course.
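Rough space arithmetic for that workload, using the sub size and count from the post above, and assuming 2× drizzle roughly quadruples the intermediate file size (each drizzled frame has 4× the pixels - an assumption about how the temporary files scale, not a DSS-documented figure):

```python
sub_mb = 39
n_subs = 1450
drizzle = 2                      # 2x drizzle -> 4x the pixels per frame

base_gb = n_subs * sub_mb / 1024
drizzled_gb = base_gb * drizzle ** 2

print(round(base_gb))        # ≈ 55 GB of raw subs
print(round(drizzled_gb))    # ≈ 221 GB of drizzled intermediates
```

At sizes like that, the temp folder's drive speed dominates the whole run - hence the HDD-vs-SSD gap in the original post.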


23 minutes ago, bomberbaz said:

I run 15 second subs from my garden due to the conditions, very light polluted, so even with 5 hours data it is an awful lot of space I need. [...]

14 hours still seems like an awfully long time, but I don't use DSS, so maybe that's the right amount of time.


I have a TrueNAS instance running with 3x 2TB WD drives.

It's using a ZFS RAID.

I have already had two instances of a single HDD failing that I easily recovered from, simply by replacing the drives one by one with new ones. (This usually gives me the opportunity to increase the array's storage size, as I replace all the drives once the first fails - the others are probably similarly worn out.)

