
Posts posted by eshy76

  1. 1 hour ago, Adam J said:

    I think the main thing I would do here is, instead of stacking all the images, use the extra images to enable stacking based on star sharpness; that will likely yield an increase in detail. If SNR is good, adding more will not make things much better.

    Thank you for this - what I would say is that I stacked using APP's quality weighting approach, which hopefully emphasised the better subs in the pack, but it is still a bit of an automatic approach...

    ...there is enough good data here to do something I haven't done since buying APP - use some criterion to pick subs to stack. I think I might give that a go at some point! Thanks!
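
    For anyone curious what that sub selection could look like outside APP, here is a minimal Python/numpy sketch of the two ideas in play - keeping only the sharpest subs versus weighting every sub by a quality score. The frames, FWHM values and the 1/FWHM^2 weighting are all illustrative stand-ins, not APP's actual algorithm:

    ```python
    import numpy as np

    # Hypothetical data: 40 registered subs (tiny random frames stand in for
    # real calibrated data) and a measured star FWHM per sub (lower = sharper).
    rng = np.random.default_rng(0)
    subs = rng.normal(100.0, 10.0, size=(40, 128, 128))
    fwhm = rng.uniform(2.0, 4.5, size=40)

    # Option 1: select-and-stack -- keep only the sharpest 60% of subs.
    keep = fwhm <= np.quantile(fwhm, 0.60)
    selected_stack = subs[keep].mean(axis=0)

    # Option 2: weight-and-stack -- use every sub, but let sharper subs count
    # for more. 1/FWHM^2 is one plausible weighting, not APP's actual scheme.
    weights = 1.0 / fwhm**2
    weighted_stack = np.average(subs, axis=0, weights=weights)

    print(f"selective stack kept {keep.sum()} of {len(subs)} subs")
    ```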

  2. 10 hours ago, Rodd said:

    I see the difference immediately. Both are great, but the dynamic range in the 18 hour image is greater. The darks are darker (the dust lanes). If you look carefully at the dust lanes near the core, they are much more easily seen in the 18 hour image. Yes, perhaps this is nit-picking at little details. But you know who inhabits those little spaces!

    Thank you! Yes, the more I look at them, the more I see the subtle differences. The background also looks a bit less noisy and less affected by gradients to me in addition to your observations on the dust lanes in the longer integration.

    I'm sure if I took the time to take in the image, there's a lot more in there, but it's a clear night tonight, so...!


  3. 15 minutes ago, Xilman said:

    What I am suggesting is that by processing each channel in its entirety and, perhaps, mixing them in different proportions, you will very likely enhance the visibility of some features at the expense of others.

    Thanks for the clarification - on the above part of your post: I actually prepared the red channel right at the start of the process. Masking off the stars in the red data protects them before bringing in the Ha data as a 50:50 blend, the idea being to blend only the "non-star" parts of the red channel. Not doing this and bringing the Ha data straight into the red has, in the past, led to overly red stars for me. After that I combined the R+Ha, G and B data into an RGB image in a non-linear state (with only a background extraction on each stack beforehand), after aligning the brightness of each stack.
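
    To make that masked blend concrete, here is a rough numpy sketch of the idea. The arrays and the crude threshold mask are stand-ins for illustration - a real workflow would use a proper star-mask process rather than a simple threshold:

    ```python
    import numpy as np

    # Stand-ins for the linear red and Ha stacks, normalised to [0, 1].
    rng = np.random.default_rng(1)
    red = rng.uniform(0.0, 0.2, size=(512, 512))
    ha = rng.uniform(0.0, 0.2, size=(512, 512))

    # Crude threshold "star mask" purely for illustration: 1 on star pixels,
    # 0 elsewhere.
    star_mask = (red > 0.15).astype(red.dtype)

    # 50:50 Ha blend applied only to the non-star parts of the red channel;
    # masked star pixels keep their original red values, avoiding deep red stars.
    blended_red = red * star_mask + (0.5 * red + 0.5 * ha) * (1.0 - star_mask)
    ```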

    Basically, what I am trying to say is that I definitely did not process each channel before combining - more the other way around: combining first and then processing. Of course, it's probably easier to get to the most realistic colour mix using an OSC instead of filters, but likely at the expense of detail and noise in my skies.

    I appreciate your interest in the scientific side of the images - for sure that is not my primary imaging goal at this stage of my astrophotography!

    And actually, in the images above, I have performed...star reduction....so these images are unlikely to be the best for someone interested in the actual stars themselves, alas! I do have the original stacks with no star reduction performed and, if I get some time this weekend, will try to process a version with the original stars intact and post it here.

  4. 6 hours ago, Xilman said:

    Ah, perhaps that's why the image is not as detailed as I expected. I had ascribed it to the relatively small aperture of your refractor.

    I urge you to produce another stack, this time without damaging the stellar detail.

    Thank you, but I don't understand - I masked the stars before adding the Ha to the red channel - so no Ha data touched the RGB stars...how does that damage the stellar detail, or have I missed something?

  5. On 18/09/2021 at 21:17, Notty said:

    Really nice image(s) - I'd take either or both of them and mount them on my wall with pride. TBH I seem to hit diminishing returns after 4-5 hrs anyway, as the spread of sub quality (according to DSS anyway) just widens downwards the more I take, so I end up stacking the same old 60% from that fluky clear night anyway! I'd love to get those red coals I've seen in others' images, as in your excellent image - I assume they're from the Ha input? I use Star Tools and if I use Ha as a Lum layer it just borks all my colours up!

    Thank you very much! Yes the reds come from the Ha data...without the Ha those areas are a fainter pink. My R channel is 50% Ha and 50% Red data (I added the Ha data to the Red after masking the stars).

  6. 11 hours ago, CloudMagnet said:

    There was a thread on here a few months ago comparing integration times on the Iris Nebula, and it seemed that once you reached 10 hrs, the level of improvement afterwards really tailed off and you couldn't see much difference.

    I think that once you get past about 10 hrs, you should really look at adding only the best quality data rather than just pure amounts of time. Otherwise, adding in lower quality data won't improve the final image, as your noise level will already be so low anyway.

    Thank you for that, I missed that topic - it looks like my own experience bears that out too. So for me, in a Bortle 7 sky with my f/5.9 scope, 10 hours seems to be a sweet spot for an LRGB image. I agree with your comments on the quality of the data.

  7. 4 hours ago, ollypenrice said:

    Once you have your pure background sky up to a brightness you like (which for me is about 22 in Ps for galaxies) you can pin the curve at that value and stretch just above it, restoring the curve near the top. This lets you pull out faint nebulosity without white clipping at the top. In the example below the background is pinned at its original value and the added kink, or bulge, placed in the curve affects only the fainter, outer parts of the galaxies. It leaves their cores and the stars almost unaltered.

    This method allows you to avoid unnecessary stretching of the background (which can lift it above the noise floor) and keeps the bright parts down while getting more out of the faint signal.


    It would provoke an apoplectic fit amongst the PI developers so please don't ever show it to them. :D

    Olly


    Wow, you've blown my mind with that Olly...I had a little play around this morning (before work, ha!) using a similar Curves stretch to the one you posted, but in PixInsight (you can do the same thing there with the pins)...and it looked like the Andromeda core was lying in a cloud...it looks better than it sounds!

    I also found that applying an even smaller stretch than yours, but several times iteratively instead of as one bigger stretch, seemed to be a subtler and less destructive way of bringing up the "outer rim" for my particular data.

    Definite potential with this approach - I will return to this on a quiet evening - it's the sort of tweak which needs work to be done for the day, the kids in bed, a neat desk and possibly classical music! Thank you very much!
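
    For anyone wanting to experiment with this outside the GUI, here is a hedged Python sketch of the same idea - pin the background, bulge just above it, keep the top end near-identity - using a monotonic interpolator as a stand-in for what Curves does internally. The control points are my own illustrative values, not Olly's exact curve:

    ```python
    import numpy as np
    from scipy.interpolate import PchipInterpolator

    # Background pinned at ~22/255 (roughly Olly's Ps value), a bulge just
    # above it, and near-identity at the top.
    bg = 22.0 / 255.0
    xs = [0.0, bg, bg + 0.10, 0.85, 1.0]   # input levels
    ys = [0.0, bg, bg + 0.22, 0.87, 1.0]   # output levels
    curve = PchipInterpolator(xs, ys)      # monotonic, so the curve never reverses

    def pinned_stretch(img, iterations=1):
        """Apply the pinned curve; a few gentle passes can be subtler than one big one."""
        out = np.clip(img, 0.0, 1.0)
        for _ in range(iterations):
            out = curve(out)
        return out

    # Example: a stand-in frame whose faint signal sits just above the background.
    rng = np.random.default_rng(2)
    frame = np.clip(rng.normal(bg + 0.01, 0.02, size=(256, 256)), 0.0, 1.0)
    stretched = pinned_stretch(frame, iterations=3)
    ```

    The iterations argument mirrors the several-small-stretches approach mentioned above.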

  8. 6 hours ago, ollypenrice said:

    Did you try to stretch the outer glow any harder with the extra data? You might find you could reveal a bit more of it? What I found in going after that very faint stuff with a CCD camera (and this wouldn't apply to CMOS) was that going from 15 min subs to 30 min made far more difference than adding extra 15 min subs.

    Olly


    Thanks for this Olly - on the 18 hour integration I processed it as I would normally do, which means I tried to ensure I stretched it up to the point where the data would clip if I went any further...which probably means I could be more aggressive...

    On your CCD point, it's interesting that the data I added was 8 hours of shorter (30 second), higher gain subs, though, as you say, with the noise profile of CMOS cameras that should not be detrimental.

    I'll go back in due course and see how far I can push the data - I'm curious myself about it!
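
    Olly's CCD point can be sanity-checked with a toy noise model (illustrative numbers only, not either camera's real specs): at fixed total time, longer subs only help when per-sub read noise is a significant slice of the noise budget.

    ```python
    import math

    def stack_snr(total_min, sub_min, signal_rate, sky_rate, read_noise):
        """SNR of a stack under a simple shot-noise + read-noise model.

        signal_rate and sky_rate are e-/min; read_noise is e- RMS per sub.
        """
        n_subs = total_min / sub_min
        signal = signal_rate * sub_min
        noise = math.sqrt(signal + sky_rate * sub_min + read_noise**2)
        return signal * math.sqrt(n_subs) / noise

    # Faint-glow target, 8 hours total, illustrative rates only.
    for read_noise, label in [(9.0, "CCD-like "), (1.5, "CMOS-like")]:
        s15 = stack_snr(480, 15, 0.5, 2.0, read_noise)
        s30 = stack_snr(480, 30, 0.5, 2.0, read_noise)
        print(f"{label}: SNR {s15:.1f} with 15 min subs, {s30:.1f} with 30 min subs")
    ```

    With the CCD-like read noise the 30 min subs win clearly; with the CMOS-like figure the two stacks come out nearly identical, matching Olly's caveat.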

  9. 1 hour ago, brrttpaul said:

    Hi all, I have two setups on a permanent pier: an ASI 1600 mono paired with an ED80 scope, and a Mak 127 paired with an ASI 533 OSC. I have no problem plate solving with the widefield outfit, but can't for the life of me plate solve with the Mak - I keep getting "not enough stars" even though it has detected 37 stars. I have been using ASTAP. Any suggestions? I have NINA, SharpCap and SGPro installed; ASTAP runs but doesn't solve. Any help much appreciated.

    Paul

    Hi there - that's odd, but to start the troubleshooting: have you entered the correct focal length of the Mak 127 in the capture software (which will configure ASTAP for you)?

    Also, I have no idea if 37 stars is enough for plate solving...is it worth taking a longer exposure for the plate solve to get some more stars? So e.g. 10 seconds instead of 5...again this can be found in the plate solving settings somewhere. Or try increasing the gain?
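
    The focal length matters because the solver derives its expected image scale from it, so a wrong value has it searching at entirely the wrong scale. A quick back-of-envelope check - the focal lengths and pixel sizes below are nominal figures, treat them as assumptions:

    ```python
    # Plate scale in arcsec/pixel = 206.265 * pixel size (um) / focal length (mm).
    def plate_scale(pixel_um, focal_mm):
        return 206.265 * pixel_um / focal_mm

    # Nominal figures (assumptions): ED80 ~600 mm with the ASI1600's 3.8 um
    # pixels; Mak 127 ~1500 mm with the ASI533's 3.76 um pixels.
    print(f"ED80 + ASI1600:  {plate_scale(3.8, 600):.2f} arcsec/px")    # ~1.31
    print(f"Mak127 + ASI533: {plate_scale(3.76, 1500):.2f} arcsec/px")  # ~0.52
    ```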

  10. 1 hour ago, Xilman said:


    For instance, a quick zoom-in around the field of AE And, a blue supergiant in M31, shows the variable very clearly; I would guess it was at an average brightness of about 16.7 over the period of your observations. Here is a 4x zoom on the field of AE And, which is marked with white lines, and it shows the blue colour of the star rather nicely. Marked with red is the 16.9 magnitude globular cluster Bol 356; note its distinctly yellow colour, because it is composed of much older and cooler stars than the very young and very hot AE And. Contrast with the red supergiants in the same field. The brightest stars visible are in our galaxy, but the great majority of the fainter ones are in M31.

    The image above is very much a quick-look snapshot.  I am sure you could make the M31-resident objects much more easily visible with a bit of work.

    Incidentally, the full image will likely show well over a hundred globular clusters and many thousands of stars, some of which are of great historical interest. Tracking them down should keep you busy for a few cloudy hours.

    Thank you very much for that - it reminds me to take the time to look not just at the main objects that we capture but also what is there in their vicinity! Really appreciate it!

  11. 2 hours ago, alan potts said:

    Lovely image. I am going to start ramping up my data on this target, which stands at about 4 hours now. Though I can't see them side by side, I struggle with my normally good eye for this sort of thing to see a difference worth talking about. I have 16 hours on M33 which I am also going to add to, weather permitting of course.

    Alan

    Thank you Alan - I think this could be a case of diminishing returns....10 hours was already a decent level of integration, so adding another 8 hours was less impactful. However, going from, say, 2 hours to 10 hours could be more visible.
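
    That intuition has a simple statistical basis - stacked SNR grows roughly with the square root of total integration time, so each extra hour buys less than the last. A quick check on the numbers in this thread:

    ```python
    import math

    # Stacked SNR grows roughly with the square root of total integration time.
    print(f"10h -> 18.6h: SNR x{math.sqrt(18.6 / 10.0):.2f}")  # ~1.36
    print(f" 2h -> 10h:   SNR x{math.sqrt(10.0 / 2.0):.2f}")   # ~2.24
    ```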

  12. 19 minutes ago, wornish said:

    Superb images. I too struggle to see the difference between the 10 and 18 hour versions TBH.

    I managed to go for around 2 hours last week and I am embarrassed to compare it with your 10 hour one.

    Thank you! Don't be embarrassed - we all know how the weather can impact our imaging time! I got lucky over two nights and left the rig to chug away all night on both occasions. I am finding 5 hours largely sufficient for narrowband imaging in my Bortle 7 sky, whereas it looks like 5-10 hours seems to be what I should shoot for in LRGB with my f/5.9 scope.

  13. 1 hour ago, Dazzyt66 said:

    Really REALLY great image - I'm struggling to see a major impact of the extra hours though - Looks like I've only got about 9 more hours of data to go with mine then... 😂

    Thank you! Yes when I really zoom in close I can see a little bit more definition in the dust lanes, and I feel I could push the colours more in the 18 hour version if I wanted to...but the difference seems incremental. Of course processing ability could be a factor!

  14. Hi everyone,

    You may recall my recent M31 image - I have now combined this with data from a year ago, taking total integration from 10 hours 6 mins to 18 hours 36 mins.

    Does it make a big difference? During processing, I would say the extra data made the blues in the outer galaxy easier to coax out and there is maybe more detail in the dust lanes...I'm struggling to see much difference otherwise apart from processing - maybe I'm too tired to notice at this point!

    I will post the new image below and the previous one with 10 hours integration in the next post for comparison purposes.

    Key processing difference - in the new image I've brought Ha into the red channel at a 40% weighting (vs. 50% in the 10 hour image)...I have also been less aggressive on star colour in the new image.

    Thanks for looking!

    18.5 hour integration - M31 2020 and 2021 data. Edit: Some extra details - taken in Bortle 7-8 skies, LHaRGB filters, William Optics Z73 refractor, ZWO ASI1600MM Pro camera, Rainbow Astro RST-135 mount in 2021, iOptron CEM25P in 2020.

    [Attached image: M31, combined 2020-2021 data]

  15. 1 hour ago, Andy R said:

    Lovely rendition! I captured M31 last week too with Ha LRGB filters; alas, I can't make the Ha pop out like yours, as it's only an hour's worth of Ha I guess. Very similar colour in the stars though.

    Hi - thanks! My approach was to use PixelMath in PI on the red channel after cropping and applying DBE. I masked the stars first and then weighted the Ha 50% in the red channel. So Red channel = 0.5xHa + 0.5xR (the normal red stack).

    It was definitely a deeper red than my last attempt a year ago using the same approach, so maybe the amount of Ha capture also made a difference.

    I hope this helps!

  16. 1 hour ago, gorann said:

    Really nice but I am also a bit distracted by the very red and rather big stars. I would try to only add Ha to the galaxy to highlight its Ha regions rather than the whole image and see what it looks like.

    Hi there, thank you for this - I applied Ha to the red channel while masking the stars to avoid this scenario. I think my faulty star mask may be more to blame, as I mentioned in another reply to Olly. I noticed, as I carefully tweaked the saturation level (the last step!), that some stars reddened despite being masked...thank you for the feedback!
