geeklee · Members · 1,129 posts
Everything posted by geeklee

  1. Thanks @Stuf1978 @teoria_del_big_bang @Mr Spock @MalcolmM @gorann Definitely, mono makes a difference. Another thing in play here for Sh2-240 is star separation and resampling/binning. Personally, I couldn't present this image at 100% as it wouldn't have shown the same depth and still look as clean as I would like. There is plenty of OIII in Sh2-240 and I'm not showing this clearly due to a decision in processing and also because I hardly had any integration time! Here's a snippet of the Ha stack from Pane 5 that contained Sh2-240 (that's about all it contained!). This is 3.7 hours. Here's the right hand image above with a gentle stretch. If you open that in a new tab for full resolution you'll see it's struggling a little. You can get a little Sh2-240 data easily but pulling that out in a way that doesn't look forced is difficult in my eyes. I also find careless noise reduction can ruin the subtleness of the structure in Sh2-240. It's hard to describe the effect I mean. 🙂 Here's a collaboration I did with @Adreneline where we spent 24 hours on Ha and 11 hours on OIII just on Sh2-240.
  2. I think most people have imaged small or large sections of this area of sky - including myself. I wanted to create a slightly larger FOV using the Samyang 135 lens - this needed six panes with the small sensor of the ASI533MM, just about fitting what I wanted. The Samyang was stopped down to a pedestrian ~F2.6 for ~20 hours of Ha and 10 hours of OIII. Astronomik MaxFR 6nm Ha and OIII filters were used. It's fantastic to see so much of this area together, to appreciate the scale of some objects and also their proximity to each other: the huge Spaghetti Nebula (Sh2-240) looming in the lower left, the vast spread of Ha near the Flaming Star and the Tadpoles, and those very faint Ha objects that have come through well here. It feels like (much) longer, but I've been shooting this for just a couple of months. I wanted to draw a line under this data set - perhaps returning later in the year for more OIII and perhaps SII (and Ha!). I left plenty of subs on the cutting room floor but also had to let through some I'd have preferred not to. There is fairly even coverage across each pane in Ha and OIII though - with Ha typically taking a 2:1 ratio. The version here is 33% of the original, which gave me more room to bring out the fainter objects while - hopefully - retaining some good detail. Do click through to open in a new tab. Captured with the ASIAIR Pro, pre-processed in APP and processed in PixInsight. Thanks for looking. Here's a smaller, annotated version.
  3. George this looks good but I'm struggling to open it and see full size. Not sure what's happened in processing but the images appear to be HUGE - probably 4x the resolution in width and height.
  4. Lovely image Colm 👍 ...and very vivid description of how you see it! 😅
  5. I don't know if DSS has the option, but APP (AstroPixelProcessor) allows you to effectively Drizzle "x1". I find a very slight bump in star sizes but a better shape (I occasionally use it) and still native resolution. You can also just resample a drizzle x2 image back to native resolution before processing. Saying that, I can't remember the last time I saw a drizzled image (i.e. x2 x3 etc) that wasn't just a higher resolution blurrier version of the original, with no extra detail I couldn't already see before. Great start with the "new" scope @OK Apricot
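The "resample a drizzle x2 image back to native resolution" idea above boils down to averaging each 2x2 block of the upsampled stack into one pixel. Here's a rough numpy sketch of that idea (APP and other tools use proper interpolation kernels; this block-average version is just an illustrative assumption of the simplest case):

```python
import numpy as np

def downsample_2x(img):
    """Average each 2x2 block into one pixel, halving width and height."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Stand-in for a drizzle x2 stack (real data would be a FITS array).
drizzled = np.arange(16, dtype=float).reshape(4, 4)
native = downsample_2x(drizzled)
print(native.shape)  # half the resolution in each axis
```

The averaging also gives a small noise benefit, which is part of why a x2 drizzle resampled back to native can look cleaner than no drizzle at all.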
  6. Yeah, there's always a price under the moon's influence isn't there - even in Ha 😐 I wasn't familiar with it either, but looking around for other images I think your suggestion of HaRGB could also be a winner. 👍
  7. This looks great in Ha, Rodd. Some very interesting structure already present, and it has good depth taking the whole image in as one.
  8. Yeah, just go for it! Saves you getting something else for now and overkill is fine, it's the opposite that's usually a problem!
  9. Thank you Wim! I had been on the NED site the evening before but concentrating on the finder chart views. Simbad can sometimes have additional reference tables with Mpc distances but this object didn't. I can't believe it was staring me in the face on the NED site 🤦‍♂️ (I missed the Redshifts tab also!) As a slight aside, I adapted the tip you provided about using the TypeCat script. I had another image with Collinder 21 included and couldn't find a solve with the default catalogues (or TypeCat - I maybe missed something here). I instead opened one of the TypeCat exported files, removed the entries and created a new tab-separated one with the requisite details (RA, DEC, Name, Diameter for annotation mark, Type). I then used this as a custom catalogue and it worked a treat. Maybe overkill and I missed the simple way, but useful to know that you can also tag things however you like if you wanted to mark up an image yourself.
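For anyone wanting to try the same, a custom catalogue of this sort is just a small tab-separated text file. A minimal Python sketch of building one with the columns described above (the header names, column order, and the coordinate/diameter values shown are illustrative placeholders, not the exact format any particular annotation tool requires - check your tool's expected layout):

```python
import csv
import io

# Hypothetical entry: values below are placeholders, not measured coordinates.
rows = [
    # RA, DEC, Name, Diameter (for the annotation mark), Type
    ("00.0000", "+00.0000", "Collinder 21", "7.0", "OpenCluster"),
]

buf = io.StringIO()
writer = csv.writer(buf, delimiter="\t")          # tab-separated output
writer.writerow(["RA", "DEC", "NAME", "DIAMETER", "TYPE"])
writer.writerows(rows)
catalogue_text = buf.getvalue()
print(catalogue_text)
```

Writing it once by hand in a text editor works just as well; the point is simply one object per line with tab-separated fields.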
  10. Are the image tags the right way around? For me, the first one is the better image. Three things stand out on the second image - the salmon/red colour, the noise reduction and, alongside it, too much processing (probably BXT and sharpening if this is the new image?). Although the first one is a little noisy, the second image has maybe gone too far the other way. The detail in the core is more subtle in the first image. In the second it's starting to look wormy/wiggly. I took the first image into PI (and it's only the JPEG) and ran some light BXT and light NXT on it. Not ideal at all at this stage, but it still had a positive effect. I'm sure there can be a middle ground image that has the best of both worlds, no doubt about it. I dare say not everyone (or anyone?) will agree, even yourself, and that's OK. It's your image. 🙂
  11. Thanks. I see he mentions that unintended colour shift on some data too - and working on a fix in AI2.
  12. Thanks @Ratlet @Adreneline @tooth_dr @Spr 👍 I picked it up second hand from a fellow SGL member a few months back. It's been great just having a play with it alongside an OSC camera for now. It's just getting the clear skies to concentrate on a target that's proving the usual challenge.
  13. This is interesting - is that in the documentation or noted online? Given that star colours seem to change when running BlurXTerminator (a patch is being worked on, I'm sure I read on AB/CN), I've been running BlurXTerminator after DynamicCrop and then running SPCC. I'll need to try the above 👍
  14. A short stack of subs recovered from a very frustrating night on the 2nd January. Haze and passing cloud on a night that was forecast, and continued to be forecast as clear! Sky-Watcher 150P-DS alongside an ASI533-MC Pro - 72 x 60s (72 mins). Captured in NINA, pre-processed in APP and processed in PixInsight. Thanks for looking. Just the one galaxy picked up in the PGC - PGC18441
  15. Yeah, I would agree. When viewed as a small in-line forum size, it's all good but there's always a trade off presenting at full resolution. I thought it looked great, I thought the smaller bright version looked good too - with the above caveat.
  16. Once you see the step in the workflow where they change, or when they change subtly, you'll hopefully be able to take easier action. How does this look to your eye?
      • A reduced SCNR green on the whole image
      • Extracted the stars with StarXTerminator
      • Inverted the stars, applied SCNR, inverted again. This seemed to retain halo colour, although limited by what was there already of course. A little more colour work is probably beneficial.
      • Destretched the stars a little - two iterations.
      • Added the stars back again.
      I can PM you more specifics if it's any use.
  17. The red in the Iris is welcome (and additional blue) but IMHO the lower brightness reds and some of the other image tweaks have pushed the image too far and look almost like colour noise. For me personally, I prefer @Jay6879's original.
  18. For me, they're quite distracting. As soon as the image opens and while trying to enjoy it, there's just these bright, glaring magenta stars everywhere pulling your view towards them. I understand what you're saying about the palette though. 🙂 I find that a tiny bit of grain seems to trick my eyes into thinking there's a little more detail. Like magenta stars, it's great we all have our own preferences 👍
  19. That looks great Adam. Your version of heavy and wild still keeps it at the limit (for me). The reflection elements throughout the image look fantastic alongside the fabulous dust (and dust colour). Only thing for another time - and doesn't detract from this image - keep an eye on the pinky/purple star flares/halos.
  20. At this resolution (as they're crops) it looks OK to me. Well done with NGC7538 - so many images have this blown out completely, but separate care is usually needed on it to preserve detail. The noise reduction looks a little more noticeable in the diffuse surrounding nebulosity of Sh2-159 but again looks OK. I guess when the whole image comes together it'll be more obvious across the frame and the faint stuff whether the NR is too strong. I'm biased towards liking a little grain in my images. What "strength" do you use for NoiseX?
  21. Absolutely, great going - I'd estimate my own at 10-15%. I'm being optimistic with that 15% so I can keep looking back at the weather with rose-tinted glasses on...
  22. You have the "Lightness" and "Lightness mask" options in HDRMT (PixInsight, I think you mean?). If I still find HDRMT too strong, I usually clone the image (drag the image identifier tab to a blank area of the Workspace) and run HDRMT on this. I'll check a couple of times with changes to layers depending on the data/subject. Then I'll either:
      • Blend this into the original with PixelMath, e.g. the simple expression 0.5*$T + 0.5*<name_of_HDRMT_clone_image>, dragging the blue triangle onto the original (which is where $T, the target, comes in). You can obviously adjust the strength of each blend.
      • Create a modified mask that targets the area/areas I want the HDRMT image to replace, then use PixelMath to replace it, e.g. <name_of_HDRMT_clone_image>, again dragging the blue triangle onto the original (where $T comes in).
      • A combination of the above.
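The maths behind those two PixelMath approaches can be sketched as plain array operations (a hedged illustration only - this is not PixInsight's engine, and the tiny arrays and names below are hypothetical stand-ins for real image data):

```python
import numpy as np

# Hypothetical miniature "images" standing in for the original and the HDRMT clone.
original = np.array([[0.2, 0.4], [0.6, 0.8]])
hdrmt_clone = np.array([[0.1, 0.3], [0.5, 0.7]])

# Approach 1: a 50/50 blend, the same arithmetic as the PixelMath
# expression 0.5*$T + 0.5*<name_of_HDRMT_clone_image>.
blended = 0.5 * original + 0.5 * hdrmt_clone

# Approach 2: mask-driven replacement. Where the mask is 1 the HDRMT
# result fully replaces the original; where it is 0 the original is
# untouched; in between, the two are mixed proportionally.
mask = np.array([[1.0, 0.0], [0.5, 0.0]])
replaced = mask * hdrmt_clone + (1.0 - mask) * original

print(blended)
print(replaced)
```

In PixInsight the mask sits on the target image rather than appearing in the expression, but the effective per-pixel arithmetic is the mask-weighted mix shown above.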
  23. Thanks for this tip @scotty38 Although I've used NINA a fair bit, I've - through weather and opportunity - still not quite "dug into it" and usually just crack on. It shows how easy the software makes a lot of things. Despite using the dragscript/sequencer in Voyager, I've still not had a play with the advanced sequencer in NINA. While I have your ear (or anyone else's!): if I frame up a mosaic and then "Add target to sequence" + "Simple sequencer", is there a way to save that in its entirety? I came a cropper once thinking I'd saved it and only had one "target" (i.e. one pane)! Also, when I have a mosaic in the simple sequencer, can I copy and paste all my sequence settings to each target (pane)? Thanks in advance
  24. @tooth_dr did well to bring out the blues - something I didn't manage. This is one other version - no better than anything else in here, just different. I resampled the data down and held off strong use of many tools as the data couldn't support them. The core of M51 is showing detail and the filaments in NGC 5195 look quite defined and natural to my eye (?) If there's one thing I've learned in AP, every person likes something different as evidenced by the comments here even! Whatever you like at the end of processing is the best image 👍