
teoria_del_big_bang


Posts posted by teoria_del_big_bang

  1. 22 hours ago, Aramcheck said:

    Hi @Ivor

    Nice data! I couldn't resist having a quick play. Workflow as follows:-

    • DBE
    • ImageSolver (script)
    • SpectrophotometricColorCalibration
    • BlurXterminator (to Luminance only) (alternatively do deconvolution after extracting Luminance)
    • RGBWorkingSpace (set values to 1)
    • Extract Luminance
    • rename original RGB

    with the Luminance image I then did:-

    • 3 slight stretches with HistogramTransformation, with the Midtone stretched to 25%
    • StarXterminator (to generate stars only & starless Luminance images)
    • 3 further slight stretches to the starless image
    • Applied a small gradient mask to the core of M81 (using Hartmut V. Bornemann's free GAME script); invert & a further slight stretch
    • Invert the mask & apply a CurvesTransformation to brighten the core slightly.
    • On the stars only luminance image, I did four slight stretches with HistogramTransformation, with just the Midtone stretched to 25%
    • Used PixelMath to recombine the Luminance stars / starless images using: ~((~starless)*(~stars)) and rename "Lum"
    • Create a clone of the resulting Luminance image, then apply a mask around M81 and apply HDRMultiscaleTransform and then LocalHistogramEqualization. (I create a preview of just the galaxy, make several copies and then try different parameters until I'm happy with the result & then apply those settings to the main image). Then repeat with a mask around M82... finally remove the mask and rename "Lum_hdrmt"
    • Use PixelMath to combine the "Lum_hdrmt" and "Lum" images - in this case with 0.5*Lum + 0.5*Lum_hdrmt and rename "L50"
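The PixelMath recombination above is a screen blend: in PixelMath, ~x means 1 - x, so ~((~starless)*(~stars)) adds the stars back over the starless image without clipping. A minimal NumPy sketch of that expression and of the later 0.5/0.5 blend (the array names and values are illustrative, not part of the original workflow):

```python
import numpy as np

def screen_combine(starless, stars):
    """Screen blend: 1 - (1 - starless) * (1 - stars).
    Matches the PixelMath expression ~((~starless)*(~stars))."""
    return 1.0 - (1.0 - starless) * (1.0 - stars)

def blend(lum, lum_hdrmt, w=0.5):
    """Weighted average as in the 0.5*Lum + 0.5*Lum_hdrmt step."""
    return w * lum + (1.0 - w) * lum_hdrmt

# Where stars == 0 the starless pixel passes through unchanged
starless = np.array([0.2, 0.5])
stars = np.array([0.0, 0.4])
combined = screen_combine(starless, stars)  # → [0.2, 0.7]
```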

    On the RGB image I then did:-

    • Repaired HSV Separation (script)
    • ChannelCombination using the output from the above but with the "Unrepaired_V" file & rename "RGB_Repair"
    • Clone "RGB_Repair" and apply MaskedStretch & rename "RGB_MS" (I usually also similarly clone / run ArcsinhStretch and clone / run the EZ SoftStretch script & then use PixelMath to create a blend... in this case I just chose the MaskedStretch result)
    • On RGB_MS I then ran ChannelExtraction to create HSI components. Delete the "I" image
    • use ChannelCombination with HSI selected, using the H & S images created from the RGB_MS file and the "L50" as the "I" component. Rename the result as "HSI"
    • Extract the luminance from the HSI file and using ScreenTransferFunction / HistogramTransformation create a high contrast image to use as a mask to adjust the colour saturation (rename: "Sat_mask")
    • Apply "Sat_mask" to the HSI image & use CurvesTransformation to increase saturation. (Two adjustments were made)
    • Apply SCNR with a small reduction to the Green channel (reduced to 0.9 x original)
    • Apply NoiseXterminator (if using other noise reduction techniques then I would instead do some noise reduction when still linear after deconvolution and then after recombining the stretched images)
    • To reduce the size of stars I then ran StarXterminator again on a clone & renamed the result "Starless", before running Bill Blanshan's star reduction script... and a final tweak with HistogramTransformation

    Thanks for sharing the data!
    Cheers
    Ivor

     

    Great processing, and thanks for sharing your detailed workflow; it helps more people than just the OP.

    Thanks
    Steve

  2. 42 minutes ago, Stuart1971 said:

    Probably best reported on the NINA discord, I think you would get a quick reply on there, and if it is a bug, would get fixed pretty quick…

    Although as usual with that site, you may get a few sarcastic comments first from the usual arrogant suspects….😂

    Yes, I probably will. I just hate Discord, partly for the reason you mention, and partly because I have never worked out how it is organised and can never seem to find things on it, even my own posts sometimes 🙂 

    Steve

  3. 51 minutes ago, scotty38 said:

    Bit late to this and can't offer anything on the rotation but I am curious on the use of images. I know plenty of folk do this but I never have and, I think, my method would have prevented your issues BUT I may be missing reasons as to why images are used instead 🙂

    What I do for any object once framed/rotated etc is either save that as a target or save the sequence in which it's embedded and then when I come to add to it the coordinates and rotation etc is already there and available. Is there any advantage or other reason for doing it via an image?

    No, I do not think there is any advantage. The only reason for doing it is that the image I am adding to was taken before I used NINA; otherwise, like you, I would have saved a sequence or target in NINA.

    Steve

  4. 21 minutes ago, scotty38 said:

    Fair enough but don't forget to send us a postcard 🙂

    I hear from some friends I have in China that, due to over 2 years of lockdowns, a lot of bars and restaurants have closed down, so maybe fewer places to go, but I feel it is my duty to try to keep any remaining ones open, so I will not be sitting in my hotel room all the time 🤣

    Steve

  5. You will get used to it once you give it a try.
    I guess with the external drive I would be tempted to do something along the following lines:

    Create a darks library for all exposure times you think you will use, so that you end up with a library of master darks. This way you only have to keep the masters on your main PC rather than all the dark frames.
    Same goes for a bias if using one.
    Update this every 12 months or so.
    You can use WBPP to do this, running it without any light frames or flats.
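    Conceptually, each master dark WBPP produces here is a per-pixel combination (typically a sigma-clipped average or median) of the dark frames for one exposure time. A toy NumPy sketch of such a library, with made-up shapes and exposure keys, just to illustrate the idea:

    ```python
    import numpy as np

    def make_master_dark(dark_frames):
        """Combine a stack of dark frames (n_frames, height, width)
        into one master dark by per-pixel median."""
        return np.median(dark_frames, axis=0)

    # Illustrative library keyed by exposure time in seconds
    rng = np.random.default_rng(0)
    library = {
        exposure: make_master_dark(rng.normal(0.01, 0.002, size=(20, 4, 4)))
        for exposure in (60, 120, 300)
    }
    # Only these small masters need to live on the main PC;
    # the individual dark frames can stay on the external drive.
    ```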


    Then each time you process a target:

    Copy all the lights and flats from the session, or sessions, you want to process from the external drive to the PC, if there is room on the PC.
    If not, you can process directly from the external drive, but it can take considerably longer. I would experiment and see if the time difference is extreme; if it is not too much more, it may make sense to process directly from the external drive rather than keeping copies on the main PC drive.

    Once WBPP has run its course, if processing from copies on your PC, move all unwanted files to the external hard drive, keeping just the registered and masters directories on the PC.

    Carry on with the post processing, saving the PixInsight project to the PC or the external hard drive, your choice depending on how the memory is going on the PC, so you can retrieve your project later to carry on processing, or if you have an unexpected shutdown of your PC at any time.

     

    One thing I have done on my laptop (which is quite old and not an all-singing, all-dancing model) is to use its CD / DVD drive bay. I never really used the drive any more, as all software comes online these days. You can buy a bay that holds a second hard drive and replace the CD drive with it. It cost me about £80 including the SSD and bay for an extra 1 TB of fast SSD storage, so not sure if this is something you could consider.

     

    Steve

  6. 3 minutes ago, scotty38 said:

    You're going to China and will spend your time faffing with astro images.... 🤣

    In 2 months you end up with a lot of spare time on your hands, so I thought that would be a good way to pass some of it 🙂 
    Weather will be very cold and not that much to do where i am so seemed a good idea to me 🙂 

    Steve 

  7. I guess you can delete anything other than the Registered directory.
    So long as you do not delete your original images and flats you can always rerun WBPP if you were to lose something.

    I would never delete any original images, including the flats and darks, although if you are perfectly happy the master dark and master flat are both good then you could just keep the masters.  
    The problem is you can't really just take flats again and be 100% sure they are correct for the night's imaging if done months later, or even days later if anything in the image train has moved.

    External hard drives of 1 TB and bigger are not expensive now. SSDs can be, but even a 1 TB SSD is not too expensive, and I would recommend getting one and not deleting anything until you have final images you are happy with. Even then it can be good to save what you can, as months or years later you often want to reprocess, either because you get better at it or because new tools become available, and it is very frustrating when calibration files are lost.

    It is always best to have any data you are processing on the main hard drive, as accessing a slower external drive can slow things down a lot and get frustrating. External drives are great for long-term storage, though, or even for backing up data from other drives; in case one goes down, you do not want that to be the only source of your data.

    Steve

  8. 11 minutes ago, carastro said:

    Lovely image Steve, but a bit green for my taste.

    Carole 

    Good to get some clear nights really.
    It is a bit green for me too, so I may try to address that when I reprocess. I was aiming to make it more a bluey green, teal I guess, and hoped for more gold.
    I have to go to China with work for at least 8 weeks, so I will have lots of spare time in the evenings. My plan is to reprocess a lot of my images, especially as I have improved in this area; I should be able to make many of the images much better, and it will pass the time.

    Steve

  9. 25 minutes ago, Adreneline said:

    Looking at the histogram I might be inclined to pull the black-point in a tad - you certainly have room to do so - doing so will enhance the depth in the image, but again this is a very personal thing. I use HT pretty much exclusively for taking the image non-linear. I use lots of incremental small stretches, each time resetting the black point, but not too much or aggressively. I would say I typically apply 15-20 small careful stretches to an image, always making sure I introduce minimal clipping, less than 0.001% - maybe a bit more on an RGB image.

    The depth can also be improved by careful use of LHE - I use settings of 320,1.2,0.5 - 1.2 is an absolute upper limit in my experience.

    I never use anything like HDRMT to enhance detail/structure - I find it introduces too many other artefacts you don't want!

    I prefer to use MLT with very small and decreasing bias levels on the first four layers, typically 0.20 - 0.05, with a CIE-L mask so the enhancement only applies to the brightest parts of the image.

    HTH.
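    A rough sketch of the incremental-stretch idea described above, using PixInsight's midtones transfer function MTF(m, x) = ((m - 1)x) / ((2m - 1)x - m); the midtone value and pass count below are illustrative, and the black-point reset between passes is omitted:

    ```python
    import numpy as np

    def mtf(m, x):
        """PixInsight midtones transfer function: fixes 0 -> 0 and
        1 -> 1, and maps x = m to 0.5."""
        return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

    def incremental_stretch(img, midtone=0.45, passes=10):
        """Many gentle stretches rather than one aggressive one.
        A midtone below 0.5 brightens the image on each pass."""
        out = img.copy()
        for _ in range(passes):
            out = mtf(midtone, out)
        return np.clip(out, 0.0, 1.0)

    img = np.linspace(0.0, 1.0, 5)
    stretched = incremental_stretch(img)
    ```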

    Thanks Adrian, yes that helps a lot and will take into account when I have another go.

    The more I look at the image, the more I feel a bit happier with it. I think my main issue is maybe not the final outcome but:-

    a) How long it took me to get there - that is my umpteenth attempt.
    b) If you asked me to repeat that image from scratch (without peeping at the image history in PI) I would not be able to do it.

    When it comes to the stretching stage I just seem to end up going round in circles till I get something I am happy with; I do not seem to have a clear way forward in my workflow.
    But I am not too worried, it is coming bit by bit, so I will get there 🙂 

    Steve

  10. 8 minutes ago, Neil_104 said:

    I think so - and there in lies the issue!

    If anything I think I can see Mr Krabs more than a spider? 😂

    I see where you are coming from; half the time I never really see the thing some of these nebulae are named after 🙂 

    I sort of assumed it got its name because it was hovering over the fly

    Steve

    This is another image taken this year. I think the data is good, but I am still not happy with my post-processing. I seem to have nailed the pre-processing, and even the early post-processing, until it comes to the stretch, and I just can't seem to know exactly what tool in PI to use when, and how much to stretch.

    This is around 5 hours each of  SII, Ha and OIII with just 20 minutes each RGB for the stars.
    All processing done in PI with the RGB stars screened into the starless NB image.

    It is another image I need to reprocess after some more practice.
    Any comments, good or bad, are all useful, and help is, as always, welcome.

    [attached image]

    Steve

    I too have never had any reliable results with active USB cables, especially ones from Amazon.

    Before I changed to having a fanless PC at the mount and use remote desktop the only thing that worked reliably for me to get signals back over long distances was to use USB over Ethernet, not cheap but worked like a dream.

    THIS is the one I used and can vouch for. I see now there seem to be cheaper ones at around £55 to £60, but I cannot say if they will work as well, as I have no experience of them; but I guess if you buy from Amazon you can send it back if it doesn't.

    The downside is you will need a power source at the mount, but I assume there is one for the mount itself.

    Steve

    So are you saying that if you start with your focusser racked fully in and then start adjusting it outwards, the stars get smaller and smaller as it approaches what you think is the correct focus position, but the stars only get as small as the image you have posted, and then if you continue to wind the focusser outwards the stars get bigger again?

    Exactly what method are you using to obtain focus?

    Steve

  14. 5 minutes ago, scotty38 said:

    Just to add to the very extensive reply you've had already, if it's needed and helps you can have NINA create directories by using \\ in the file naming options.

    I did make mention of this at the end of the post, but maybe not in such detail 🙂 
    I think it is covered in detail in the excellent NINA documentation.
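    For anyone reading along, an example of that pattern trick (the token names are quoted from memory, so treat this as a sketch and check the NINA documentation for the authoritative list): a backslash in the file pattern creates a subdirectory, e.g.

    ```text
    $$TARGETNAME$$\$$DATEMINUS12$$\$$FILTER$$\$$IMAGETYPE$$_$$FRAMENR$$
    ```

    which would file each frame under target, then night, then filter.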

    Steve

    Good advice from @Carbon Brush - don't rush into any purchase. I know you may want to get right into it as soon as the scope arrives, but as said, depending on what you want to do and what your expectations are, there may be better things to spend your money on for your needs.

    Tell us exactly what you are wanting to do with the scope. I am guessing just visual, but what do you want to see?

    Steve

  16. 7 hours ago, Swillis said:

    I gave guiding another go and it all worked out well. Realised I just hadn't given phd2 long enough to calibrate before starting the sequence in NINA🙃

    All looking good, and isn't it great when things go right?

    Give it a bit of time and then get into using the advanced sequencer. There you can set it to start guiding, force a calibration if you want to, and it will not continue until it has calibrated.
    In fact, I think that using the simple sequencer you can start the guiding and it waits if it does a calibration; I don't think you can force a calibration with the simple sequencer, though.

    Don't be afraid of the advanced sequencer; I was for quite a while, but now I have tried it I wish I had done so sooner.
    Look at Patriot Astro's YouTube tutorials; there are loads on NINA and several on the advanced sequencer.
    Get used to it through the day so as not to interrupt your imaging.
    I now have sequences ready well before dark and can hit start whenever I want, so long as the mount is aligned. It sets off as soon as it is dark enough, slews to the target, and plate solves so it is bang on framed up as I want it, and away it goes. It's brilliant.

    Steve

    Started a session last night that, once the sequence was going, seemed to be going well. The initial autofocus in NINA looked a bit odd, but as focus looked good I was not too concerned and carried on.
    After a few images it did another autofocus, and although it carried on, the autofocus run looked really bad and the resulting image was not properly focussed.

    When I went out to check the scope, I disconnected the autofocusser so I could turn it by hand using the focusser knob. The focusser tube just clunked in and out about 6 or 7 mm, back and forth, like there was massive backlash, and refused to actually move any further; it would not rack in or out, just this 6 or 7 mm movement back and forth. I could even move the inner tube back and forth this amount by hand without the knob moving.

    This was obviously the issue, so I shut everything down and brought the scope inside.

    I had to strip down the whole focusser, and I am still not sure exactly what was wrong. I basically stripped it down completely (everything, including all the small 2 or 3 mm ball bearings and the 3 bigger ball bearings), cleaned everything, and re-assembled, giving the ball bearings the lightest of oilings with very fine oil (literally a drop or so), and all is working fine again.
    I managed, albeit at 2:30 am, to get it back up just to check the focussing, and it autofocussed just perfectly several times.

    Now, whether any oil at all is recommended I am not sure, hence oiling very sparingly, as I guess a lot of oil would be counterproductive: the 10:1 spindle that the focusser attaches to is ultimately only friction driven to the larger spindle that drives the rack on the focusser, and so does not want oil on it. But a little oil seemed to help; otherwise it felt quite notchy.

    All working but don't feel confident this will not come back to bite me later down the line.

    In my mind this is the part that is failing.
    [attached image of the part]

    Can you get spares like this, or do I need an upgraded focusser altogether?

    Steve

    I have been using NINA now for quite a few months, love it, and am quite used to how to drive it.

    Last night's surprisingly clear sky had me rushing about making use of it.
    When I powered up, NINA had an update for me, nothing unusual in that, so I let it update.
    Now, my session was to add some data for the Heart, so I loaded an image from my last session into the framing wizard; the image I was using is below.
    [attached image]

    When I then loaded this into NINA, the image was mirrored, as below.
    [attached image]

    Not being that observant, I never noticed this at first; I continued, used the slew and rotate option (I have an automatic rotator), took an image, and the angle was nowhere near.
    [attached image]

    To start with I still did not notice the mirrored image, so I tried two or three times and also loaded some different images from the same last session.
    All the images loaded mirrored. The odd thing is it plate solved without an error (which I guess it may be capable of) and rotated the camera to the same WRONG position every time, as I did try two or three times before I noticed the issue.
    So it was wrong, but consistently wrong.

    It was then I noticed the images in the framing wizard were mirrored.
    As I wanted the framing to be as close to the original as possible, because I could not afford a big crop, I mirrored the image in PixInsight and then used that image in the framing wizard; when this was loaded it looked the correct way round, and it worked a treat.

    BUT, how odd ?????

    This morning I repeated the steps I took, to make sure I did nothing odd, and it is just the same; these screenshots are from today.
    More intriguing is that I downsampled the image in PI (the only reason being that I wanted a smaller image to use in future for the framing) and, lo and behold, the downsampled image is the correct way round, as below.
    [attached image]

    Is this an issue in NINA, or do I have some odd keyword in my FITS header that makes NINA do this? I can't see anything obvious in the header.

    Steve
