

Thoughts on which imaging rigs to concentrate on


Gina


Opened up my Linux Mint desktop and added the second SSD of 500GB.  Having taken the SSD carriage out of the box to add the second SSD and disconnected cables, I booted up to check that everything is still working with the current drives, including the Y-cable for the SATA power.  All is well :)

Now I'm having a little break before running the Mint USB stick, setting up the new SSD and recovering the 220GB partition that was used for the first Mint installation, now obsolete.  The newer installation has / (root) and /boot on the 250GB SSD with /home on the 800GB HD partition.  There is also a 99GB data partition on the SSD mounted as /ssdata, which is what I've been using for PI temporary files.  The next arrangement will keep / and /boot on the 250GB SSD plus the /ssdata partition, and use the whole 500GB second SSD (except for a small extra bit of unpartitioned space, as recommended for SSDs) as a nice big data partition, probably mounted as /ssdata2.
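The planned layout could be captured in /etc/fstab along these lines - a sketch only, with placeholder UUIDs (check `blkid` for the real ones); the mount points are those described above:

```
# <file system>  <mount point>  <type>  <options>          <dump>  <pass>
UUID=aaaa-...    /              ext4    defaults           0       1
UUID=bbbb-...    /boot          ext4    defaults           0       2
UUID=cccc-...    /home          ext4    defaults           0       2
UUID=dddd-...    /ssdata        ext4    defaults,noatime   0       2
UUID=eeee-...    /ssdata2       ext4    defaults,noatime   0       2
```

`noatime` on the SSD data partitions avoids an access-time write on every read, which suits drives used mainly for large temporary files.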

I may resize the OS partition (if this is possible) as I think I have allowed far more than necessary - currently about 150GB as I recall.  I have pretty much all the software I want installed now, so I can see how much space I've actually used.  What I'm not sure of at the moment is whether I shall have to reinstall the software or whether I can just add the new SSD to the present setup and recover the 220GB HD partition.


Now have the 500GB SSD partitioned, formatted and mounted, all set for use :)  I also shrank the 220GB partition to just over the space taken by the first OS (11.5GB), giving me 188.5GB which I assigned to another partition, formatted and mounted as /data.  I decided to leave that relatively small amount of space to the first OS as I didn't feel inclined to fiddle with the MBR, and thought I might cause a problem if the secondary boot option didn't have a target.  11.5GB out of 1TB didn't seem too much to lose :D


The Heart and Soul have disappeared behind cloud but it looks like the Cygnus Loop is in the clear so I think I'll go for that.  I should already have some data for that but I don't know how good it is so it won't hurt to get some more.


Took the best of tonight's Heart and Soul OIII subs plus the best of those from before, 385 altogether, and am now running BPP in PixInsight using the 500GB SSD (which gives 465GB of actual usable space).  Let's see now... with two sets of large intermediate files per sub, PI wants around 321GB to hold its temporary files, plus a bit more.  So a 500GB SSD is by no means overkill!  I'm hoping that despite rather weak data, I shall get some reasonable results.  385 60s subs represents 6h 25m of data.  Much more was actually collected - these are the reasonable frames.
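As a sanity check on scratch space, a rough estimate is subs × per-file size × intermediate copies. The helper below (`scratch_gb` is a name made up for this sketch) uses assumed figures - 64 MB per 32-bit intermediate file and two copies per sub - so the result will differ from the figure quoted above depending on your camera's resolution, bit depth and how many stages BPP keeps on disk:

```python
# Rough scratch-space estimate for a batch preprocessing run.
# `scratch_gb` is a made-up helper; all figures here are assumptions.
def scratch_gb(n_subs: int, file_mb: float, n_copies: int) -> float:
    """GB of temporary space if each sub produces n_copies
    intermediate files (e.g. calibrated + registered) of file_mb each."""
    return n_subs * file_mb * n_copies / 1024

# Example: 385 subs, assumed 64 MB per 32-bit intermediate, 2 copies.
print(f"{scratch_gb(385, 64, 2):.1f} GB")  # prints 48.1 GB
```

Plug in your own per-file size (check one of BPP's calibrated outputs) to see how quickly the total climbs.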

 


Imaging session abandoned for the night and little chance for the next few days.

PI BPP finished registering with no rejects :)  Now integrating (stacking).


After all that, there seems to be something wrong with the resultant master light - PI says it isn't there but the file system says it is.  Properties says it's 196.7 MB (196,677,632 bytes), the same as the other masters.


The result is not good - even the stacked data is very weak, and is probably showing the deficiencies of the BPP script.  Here is a super-stretched version.

light_OIII_2016-11-08.png


Looks like I need a good rethink on how to capture OIII subs for the Heart and Soul.  I shall leave it for now and concentrate on processing, and maybe capture more of the Cygnus area.  The Cassiopeia area is still well up in the NE when it gets dark and can be left for a while, whereas the Cygnus area is fast losing total nightly imaging time as it drops below my observatory roof.

I may not have this problem next year if I can get my second (mini) observatory built by then, as I'm thinking of a mini dome type with views to the west down to tree level.  The other thing I'm considering is raising my EQ8 mount on its threaded rods to give a better view.  With this new camera the biggest scope I shall be using for DSO imaging will be the Esprit 80ED Pro, which gives the optimal sampling for the pixel size - the MN190 would be greatly oversampled.  The MN190 may be assigned to planetary imaging if I decide to try that again.
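The sampling claim can be checked with the usual image-scale formula: arcsec/pixel = 206.265 × pixel size (µm) / focal length (mm). The figures below are assumptions for illustration (roughly 400 mm for the Esprit 80ED Pro, ~1000 mm for the MN190, 3.8 µm pixels); substitute the real specs, and note `image_scale` is a made-up helper:

```python
# Image scale in arcseconds per pixel from pixel size and focal length.
# `image_scale` is a made-up helper; the scope/camera specs are assumptions.
def image_scale(pixel_um: float, focal_mm: float) -> float:
    """arcsec/pixel = 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_um / focal_mm

# Assumed: 3.8 um pixels; Esprit 80ED Pro ~400 mm; MN190 ~1000 mm.
print(f"Esprit 80ED: {image_scale(3.8, 400):.2f} arcsec/px")   # ~1.96
print(f"MN190:       {image_scale(3.8, 1000):.2f} arcsec/px")  # ~0.78
```

On those assumed numbers the MN190 samples about 2.5× more finely, which is what makes it oversampled for this sensor under typical seeing.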


I'm beginning to think that my best bet is to take new batches of data rather than try to match up earlier stuff to their biases, darks and flats.  I think I've got myself into a bit of a mess - I think I've probably attacked this from the wrong end.  Should have processed data as I got it but I wasn't ready to do that and there was clear sky just waiting for image capturing.


35 minutes ago, Gina said:

I'm beginning to think that my best bet is to take new batches of data rather than try to match up earlier stuff to their biases, darks and flats.  I think I've got myself into a bit of a mess - I think I've probably attacked this from the wrong end.  Should have processed data as I got it but I wasn't ready to do that and there was clear sky just waiting for image capturing.

I know. I'm wrestling with this issue myself. I *could* calibrate and cosmetically correct each sub and save only those for later registration, deleting the original raw subs and their associated flats, but I'm nervous about discarding the raw data: should my skills improve (mainly around cosmetic correction), I could never go back and redo them. I could save the raw files too, but that doubles my storage requirements. I wonder if I'm being too cautious and should bite the bullet and store only the calibrated files, making registration and integration of additional data much easier.


The weather forecast is not good for the next few days, so maybe going through a whole lot of data and making a database or spreadsheet could be worthwhile.  But which - database, spreadsheet, or maybe just a table?


12 minutes ago, Gina said:

The weather forecast is not good for the next few days, so maybe going through a whole lot of data and making a database or spreadsheet could be worthwhile.  But which - database, spreadsheet, or maybe just a table?

Can you not handle it through file naming and structure?

I record the target, filter, time, exposure, gain, offset and temperature in each sub's filename. I can then link them to the appropriate master bias, dark and flat files which are similarly named. I store all subs in folders based on target and all calibration masters in their own folder structured by equipment.

It's still a pain loading the right files into the right boxes during calibration, and that could be simplified by saving only the calibrated images, but it's manageable without the need for additional files linking the data. If PixInsight could import files based on a script driven by a database/spreadsheet I might go down that route, but given you have to manually select files anyway, I don't see it saving time.
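A sketch of how such a naming convention could be parsed programmatically - the exact filename pattern, field order and helper names (`PATTERN`, `parse_sub`) here are made up for illustration, not the poster's actual scheme:

```python
import re

# Hypothetical filename convention (illustration only):
#   <target>_<filter>_<exposure>s_g<gain>_o<offset>_<temp>C_<timestamp>.fits
PATTERN = re.compile(
    r"(?P<target>[^_]+)_(?P<filter>[^_]+)_(?P<exposure>\d+)s"
    r"_g(?P<gain>\d+)_o(?P<offset>\d+)_(?P<temp>-?\d+)C"
    r"_(?P<timestamp>[0-9T\-]+)\.fits$"
)

def parse_sub(name):
    """Return the sub's parameters as a dict, or None if the
    name doesn't follow the convention."""
    m = PATTERN.match(name)
    return m.groupdict() if m else None

info = parse_sub("Heart_OIII_60s_g440_o21_-30C_2016-11-08T22-13.fits")
print(info["filter"], info["gain"], info["temp"])  # OIII 440 -30
```

With the parameters recoverable like this, grouping subs with their matching calibration masters becomes a simple string comparison rather than manual bookkeeping.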


Have gone through a few dates in my data and correlated them with information from pages 26-27 in this thread.

Date          Object    Filter       Exp     Gain   Temp (°C)
2016-10-10    C-Loop    Ha & OIII     60s    440    -30
2016-10-11    C-Loop    OIII          60s    440    -30
2016-10-11    C-Loop    SII          120s    440    -30
2016-10-13    Flats     OIII           1s      ?    -27
2016-10-13    NAN       Ha            30s    440    -30
2016-10-13    NAN       OIII          60s    440    -30


I would be wary of using a database - it will probably just increase the overhead and frustration (do you need any more :D ).

A simple scheme of appropriately named directories and filenames should give you enough flexibility. A database will only add value if your application can store/retrieve details directly from it, and even then it has to be kept in sync with the physical files, which will live on a separate filesystem (storing huge images inside the database is not a good idea).
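To illustrate the convention-over-database point, a lookup like the following finds the matching master purely from an agreed naming scheme - the directory layout, filename pattern and `find_master_dark` helper are all hypothetical, not a PixInsight feature:

```python
from pathlib import Path

# Hypothetical layout: calibration masters live under
# <root>/masters/darks/ and are named by the same parameters
# recorded in the sub filenames.  `find_master_dark` is a
# made-up helper for this sketch.
def find_master_dark(root, exposure_s, gain, temp_c):
    """Return the matching master dark's path, or None if absent."""
    name = f"masterDark_{exposure_s}s_g{gain}_{temp_c}C.xisf"
    path = Path(root) / "masters" / "darks" / name
    return path if path.exists() else None
```

Because the filesystem itself is the index, there is nothing to fall out of sync: if the file exists under the expected name, it is found; if not, you know a master is missing.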

 

