Everything posted by groberts

  1. Thanks, I appreciate the comments. I did align all the wavelengths using a single 'best' luminance sub for LRGB & Ha and was therefore surprised to have to do this a second time, but that's what the Light Vortex tutorial seems to say - or have I misunderstood? https://www.lightvortexastronomy.com/tutorial-preparing-monochrome-images-for-colour-combination-and-further-post-processing.html#Section1 I have tried again, unticking 'generate drizzle data', but with the same result. David, I take your point but I'm doing every filter to arrive at a final image: (a) repeating each stage for each filter reinforces the learning, though it does take longer, and (b) I would like to end up with a final colour image to compare with my original DSS + Photoshop image. Just to clarify the other point - is there a preferred file format I should save the stacked images in, or would FITS, TIFF or XISF all be OK at this stage? Graham
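
     As a rough cross-check of what a stacked file actually contains, the sketch below reads a hypothetical stack with astropy and writes a 32-bit TIFF copy with tifffile; the filenames are placeholders and this says nothing about which format PixInsight itself prefers, only that the data should stay 32-bit floating point whichever container is used.

     import numpy as np
     from astropy.io import fits
     import tifffile

     stack_path = "L_stack.fit"  # hypothetical example file

     with fits.open(stack_path) as hdul:
         data = hdul[0].data
         # BITPIX = -32 means the stack was saved as 32-bit floating point,
         # which preserves the full dynamic range of the integration.
         print("BITPIX:", hdul[0].header["BITPIX"])
         print("dtype:", data.dtype, "shape:", data.shape)

     # If a 32-bit TIFF copy is wanted for other software, tifffile can write
     # float32 directly without rescaling to 16-bit integers.
     tifffile.imwrite("L_stack_32bit.tif", data.astype(np.float32))
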
  2. Assisted by the Light Vortex tutorial + after some wobbles, I successfully managed to integrate (stack) a set of LRGBHa subs for the first time, which looked OK when stretched. I've now moved on to the next step of aligning the said images for subsequent combination in StarAlignment, which on processing gives the message: *** Error: Variant::ToDouble(): Invalid conversion from ByteVector type <* failed *> Any thoughts on what's wrong? I suspect the problem might be file type. On completing the aforementioned stacking, out of habit from DSS I thought I'd saved the finished stacks as TIFF files, but I now see they were FITS - obviously not paying attention. Strangely I can't see the output file type in settings, but I suspect that if it's still PI .xisf this could be a conflict? Below are the settings used and a full screen grab of the error. Graham
     StarAlignment: Global context
     Loading reference image: C:/Users/Graham/Desktop/PixInsight Image Tests/M101 March 2019/Stack2d/L Stack2d.fit
     Reading FITS image: 32-bit floating point, 1 channel(s), 9312x7040 pixels: done
     BLOB property extracted: 'Instrument:Camera:Gain', 8 bytes.
     BLOB property extracted: 'Instrument:Camera:Name', 28 bytes.
     BLOB property extracted: 'Instrument:Camera:XBinning', 4 bytes.
     BLOB property extracted: 'Instrument:Camera:YBinning', 4 bytes.
     BLOB property extracted: 'Instrument:Filter:Name', 2 bytes.
     BLOB property extracted: 'Instrument:Sensor:XPixelSize', 8 bytes.
     BLOB property extracted: 'Instrument:Sensor:YPixelSize', 8 bytes.
     BLOB property extracted: 'Instrument:Telescope:FocalLength', 8 bytes.
     BLOB property extracted: 'Instrument:Telescope:Name', 24 bytes.
     BLOB property extracted: 'Observation:Center:Dec', 8 bytes.
     BLOB property extracted: 'Observation:Center:RA', 8 bytes.
     BLOB property extracted: 'Observation:Equinox', 8 bytes.
     BLOB property extracted: 'Observation:Location:Latitude', 8 bytes.
     BLOB property extracted: 'Observation:Location:Longitude', 8 bytes.
     BLOB property extracted: 'Observation:Object:Name', 8 bytes.
     BLOB property extracted: 'Observation:Time:End', 12 bytes.
     BLOB property extracted: 'Observation:Time:Start', 12 bytes.
     63 FITS keywords extracted.
     *** Error: Variant::ToDouble(): Invalid conversion from ByteVector type <* failed *>
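
     One way to look at the reference file outside PixInsight is to dump its FITS header and flag numeric-looking values stored as strings, since the ToDouble() error suggests some property could not be converted to a number. This is only a diagnostic sketch, not a reproduction of PixInsight's own parsing; the path is the reference image named in the log above.

     from astropy.io import fits

     path = "L Stack2d.fit"  # reference image from the StarAlignment log

     header = fits.getheader(path)
     for key, value in header.items():
         if isinstance(value, str):
             # Flag string values that look numeric; a keyword such as a focal
             # length or pixel size stored as '450.0' rather than 450.0 is
             # worth a closer look.
             try:
                 float(value.strip())
                 print(f"{key:8s} = {value!r}  (numeric value stored as a string)")
             except ValueError:
                 pass
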
  3. Probably the most common question I see on the forum is what the 'best' equipment is for those who want to start (a) astronomy / observing and (b) astrophotography, + how to set it up. It took me literally ages to sort this out, mainly because there are too many answers and thus, on my part, confusion - a clearer set of dos and don'ts for those starting out would be invaluable in encouraging newcomers.
  4. Hmm, interesting - and if the numbers are valid they indicate that AA does a good job and is much easier + quicker to use. I've now tried this on my own results below: original DSS stack; PI-1 stack (without drizzle); PI_2d stack (with drizzle). The comparison seems inconclusive to me, except for the SNR with drizzle? I know this is my first PI pre-processing and I'll get quicker + a better feel for the settings, but I have found it to be very fiddly and very time consuming. The $64,000 question is - what is the easiest + best-result software for pre-processing? Clearly AA seems to be a contender. Graham
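
     For a like-for-like comparison of the three stacks, a crude signal-to-noise figure can be measured from the same two regions of each image. The sketch below is not the estimator DSS or PI uses; the file names and box coordinates are placeholders, and a drizzled stack is upsampled, so its boxes would need scaling accordingly.

     import numpy as np
     from astropy.io import fits

     def crude_snr(path, target_box, sky_box):
         """Mean signal in target_box minus the sky level, divided by the sky noise."""
         data = fits.getdata(path).astype(np.float64)
         ty0, ty1, tx0, tx1 = target_box
         sy0, sy1, sx0, sx1 = sky_box
         target = data[ty0:ty1, tx0:tx1]
         sky = data[sy0:sy1, sx0:sx1]
         return (target.mean() - sky.mean()) / sky.std()

     # Hypothetical boxes placed on the galaxy and on empty background.
     target_box = (3400, 3600, 4500, 4700)
     sky_box = (500, 700, 500, 700)

     for name in ("DSS_stack.fit", "PI-1_stack.fit", "PI_2d_stack.fit"):
         print(name, round(crude_snr(name, target_box, sky_box), 2))
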
  5. I'll certainly have a look at that when / if I emerge from PI, but I'm not sure I need any more processes at the moment - though Jon's work and insight on photography / astrophotography in general is always worth consideration. Graham
  6. I'd be interested to see how they compare + any other observations or comparisons with other pre-processing software regarding ease of use and results. An initial view of the stacks from PI looks promising, but is it worth this enormous effort - we shall see? Graham
  7. Thanks Wim, that's very helpful. As you might discern, I'm influenced by years of working with DSS - this is a whole new ballgame. I've nearly (I hope) got x5 stacks now, so at last it's almost time to move on to post processing and probably more (a lot more) questions! FYI I'm working from Light Vortex + Warren Keller's book + the new online version of Rogelio Bernal Andreo's Mastering PixInsight, all of which together are excellent and provide extensive detail, but there always seems to be something not quite clear, and first-hand + real-time help is indispensable - thank goodness for SGL. Graham
  8. Thanks everyone, I hope to have some stacks later today! Out of interest - at the end of the integration / stacking process, when there are the high/low/new images to assess: (a) have these been saved anywhere, or is that always a manual choice, and (b) have any of the underlying calibrated etc. subs used for stacking been changed by the process - the point being, if you're not happy with the result can you just change the appropriate setting there and then, e.g. weighting, and immediately re-run the otherwise same set-up again? Also - given the recommendation to use different rejection algorithms depending on the number of subs, can you do this for different filters if the number of subs is very different, and still easily combine the respective stacks generated later? Graham
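
     As a toy illustration of what the rejection step is doing, and why the best choice can differ per filter, the sketch below builds a sigma-clipped mean with astropy and numpy. It is not PixInsight's ImageIntegration, and the file lists are hypothetical; the point is simply that each filter is integrated on its own, and the resulting stacks remain independent files that can be aligned and combined later, whatever rejection each one used.

     import numpy as np
     from astropy.io import fits
     from astropy.stats import sigma_clip

     def sigma_clipped_stack(paths, sigma=3.0):
         # Load the subs into one cube: (n_frames, height, width).
         cube = np.stack([fits.getdata(p).astype(np.float32) for p in paths])
         # Reject outliers (satellite trails, cosmic ray hits) along the frame
         # axis, then average what survives. Per-pixel clipping only works well
         # when there are enough frames for the statistics to be meaningful,
         # which is why the recommended algorithm depends on the sub count.
         clipped = sigma_clip(cube, sigma=sigma, axis=0)
         return np.ma.mean(clipped, axis=0).filled(np.nan)

     red_stack = sigma_clipped_stack([f"R_{i:03d}.fit" for i in range(1, 21)])
     ha_stack = sigma_clipped_stack([f"Ha_{i:03d}.fit" for i in range(1, 9)])
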
  9. Thanks Ian for your extensive and very helpful thoughts. I'm edging slowly towards getting my first stacks in PI but it's been something of a journey, which I hope is (a) worth it and (b) gets better / quicker with practice. It's these sorts of workflow issues that can imo make a big difference. Graham
  10. Thanks Mark, that makes perfect sense but was not what various tutorials seemed to say. After something of an uphill battle I feel near the finishing line (at least for this stage) and will try again tomorrow. Graham
  11. I've been battling to learn pre-processing for over a week now but have at last calibrated, corrected, weighted etc. subs for each LRGBHa wavelength, as instructed in the Light Vortex (LV) tutorial, all saved together in a single folder. Whilst the LV tutorial is generally excellent, here and in other sources I'm unable to fathom how to handle the subs in Image Integration - I get how to process x1 wavelength but not all five as in this case. Based on my experience with DSS, at this point I would expect to stack each wavelength separately, i.e. load the L subs into Image Integration + Apply Global / run etc. to produce a stacked L image, then move on to the next wavelength and do the same again. However, various sources seem to suggest I load all the subs (LRGBHa) and run them all together. Contrary to my intuition I have actually done this, and I did produce a result, but I'm baffled as to how to get from here to five stacked LRGBHa images - or, as I suspect, should I process each wavelength one by one? Graham
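
     The bookkeeping side of this can be pictured as grouping the calibrated subs by their FILTER keyword, one list per wavelength, with each list destined for its own integration run. The sketch below is only that grouping step, done outside PixInsight; the folder name and keyword are assumptions about how the subs were saved.

     from collections import defaultdict
     from pathlib import Path
     from astropy.io import fits

     calibrated_dir = Path("Calibrated")  # hypothetical folder holding all LRGBHa subs

     subs_by_filter = defaultdict(list)
     for path in sorted(calibrated_dir.glob("*.fit*")):
         filter_name = fits.getheader(path).get("FILTER", "UNKNOWN")
         subs_by_filter[filter_name].append(path)

     for filter_name, paths in subs_by_filter.items():
         # Each of these lists would be loaded into Image Integration as a
         # separate run, producing one stacked master per filter (L, R, G, B, Ha).
         print(f"{filter_name}: {len(paths)} subs")
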
  12. Excellent presentations so far on StarGaZine, which bring this already great Forum to life. Many thanks to SGL/FLO and especially Steve & Nik - my only disappointment (as an ex drummer) was that after last week's bagpipes, there was no drumming by Nik this week 🥁! See you all again next week + thanks again. Graham
  13. Great + thanks again - so far so good! The Registration worked fine but (as advised by Light Vortex) the next Local Normalisation step is unclear. Are the files used in this process, including the reference file, the original calibrated subs or those just generated by the Star Alignment process?
  14. Thanks, I think I'm having a 'senior moment', but it was not clear in either the otherwise excellent Light Vortex tutorials or Warren Keller's book + coming from DSS that's the route I would expect.
  15. Thanks Hughsie, it certainly makes sense. Just to be clear - if I do this then obviously all the subs (LRGB) end up back where they were filed, but now aligned? Presumably, if I wanted to preserve the subs as they were for some reason, I could set a new output directory and the said now-aligned subs would instead go there for subsequent stacking? Graham
  16. Thanks, but I'm still confused. I currently have LRGB subs calibrated, corrected and weighted in PI. In the next step, Star Alignment, do I either: (A) put all the aforesaid LRGB subs into Star Alignment, choose one of the best subs as a reference and run? or (B) put in only each separate set of wavelength subs, e.g. red, run, then move on to the next wavelength and do the same again? Logic + experience with DSS suggest that all the subs are processed in Star Alignment with reference to a 'best sub', whichever wavelength that might be, i.e. Case A?
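
     Option (A) can be sketched outside PixInsight with the astroalign library: every sub, whatever its filter, is registered against the same reference frame so that all wavelengths end up sharing one geometry. The file names below are placeholders and this is an illustration of the idea, not StarAlignment itself.

     import astroalign
     from astropy.io import fits

     # One 'best' sub chosen as the common reference for every filter.
     reference = fits.getdata("L_best.fit").astype("float32")

     for path in ["R_001.fit", "G_001.fit", "B_001.fit", "Ha_001.fit"]:
         source = fits.getdata(path).astype("float32")
         # register() matches star patterns in 'source' to 'reference' and
         # returns the transformed image plus a footprint of valid pixels.
         aligned, footprint = astroalign.register(source, reference)
         fits.writeto(path.replace(".fit", "_r.fit"), aligned, overwrite=True)
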
  17. As a beginner with PI I'm working my way through the excellent Light Vortex tutorials, with reference to Warren Keller's book, and have reached the Star Alignment section, i.e. all subs have so far been calibrated, corrected etc., processed only with reference to others of the same wavelength (L-R-G-B). In order to reinforce what I'm reading I've been processing my own LRGB data, which up until now has been wavelength by wavelength (see above). Now I'm at the alignment etc. stages but it's not clear to me where I start to process the subs together to ensure they are all aligned + registered for subsequent integration; hitherto my experience is with Deep Sky Stacker, so my preconceptions of how PI might deal with these stages are somewhat biased. So, assuming there are parallels at all - at what point herein do I start to process all the subs together, except of course for stacking itself? Thanks
  18. Excellent, that's very helpful. I had of course tried Google but, as is often the case, came across a number of conflicting views. I don't understand what's happening with the driver scaling or why, but I'm happy to go with that. Many thanks David 👍
  19. I'm slowly working through using PI for the first time and have arrived at the SubFrameSelector process. I have a late-2016 ZWO 1600MM-Cool and just want to check the correct resolution input for this camera, which I believe is 12 bits. Please can someone confirm this or otherwise? Thanks
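
     One hedged way to see how the camera's 12-bit output sits inside its 16-bit FITS container is to check whether every pixel value is a multiple of 16, as it would be if the driver left-shifts the 12-bit ADU into 16 bits; the filename below is a placeholder for any raw, uncalibrated sub.

     import numpy as np
     from astropy.io import fits

     values = np.asarray(fits.getdata("L_raw_sub.fit")).ravel()

     print("maximum value:", values.max())
     # The greatest common divisor of the non-zero values is 16 for 12-bit data
     # stored left-shifted into 16 bits, and 1 if all 16 bits are genuinely used.
     print("common step:", np.gcd.reduce(values[values > 0].astype(np.int64)))
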
  20. I obviously did this with my current Lodestar x2, but that was three years ago and I'll obviously need to do some revision before adding the new camera. Other than just getting the second camera to work, my main concern is to ensure I don't confuse the two cameras, and ideally I'd therefore like to be able to name the new one differently. Thanks Wim. Graham
  21. Thanks Wim, this is what I fear. What exactly do you mean by two profiles + how would I do that?
  22. Thanks Wim, no, only one at a time, just on different interchangeable rigs depending on conditions and targets. In PHD2 there's a dropdown 'forked' menu in the set-up / connect section, which allows the addition of more than one camera, or more of the same. I just don't understand how you then differentiate between each so as not to get them mixed up. Fortunately they will automatically create and store separate calibration files, however. I'm also wondering if they'll use the same driver that's already installed. I suspect the only way to find out is to do it! Graham
  23. Many thanks Adam, but no. I subsequently started a separate thread as a result of this issue and have now purchased a Baader Varilock extender - see that thread for more info:
  24. I've just acquired a second Lodestar x2 guide camera for use with another rig and a different guidescope but operated from the same computer / PHD2 installation (not at the same time!). I can see in the manual that a second camera of the same type can be named via the "fork" button but wanted to check a few thoughts before I start to add my new camera: 1. As this is an identical camera can it be specifically named e.g. Lodestar-A or -B so as to clearly differentiate the two cameras thereafter? 2. Is it still possible to use the Connect All function using the two different set-ups + how? 3. Having installed the second camera, will each camera establish and thereafter use and identify with its own calibration files depending on which one is selected from the menu? 4. Anything else I need to be aware of / think about? Thanks, Graham