Everything posted by ajwightm

  1. I initially used Photoshop for processing but switched over to PixInsight a while back. Although the end result can be similar, the workflow is very different between the two tools.

     For gradient reduction in PixInsight you'd use DBE (Dynamic Background Extraction), where you place a number of background samples across the image and the tool generates a background model that is then subtracted from the original (there's a rough code sketch of the idea after these posts). If you're careful not to include any part of the galaxy when setting the samples, the background can be removed entirely without touching the galaxy at all. As others have said, it can sometimes work "too well" because it removes the background but leaves all the noise. It's always tempting to overstretch the data when you get that kind of contrast, but there's a limit to what you can bring forward without bringing too much background noise with it. The GradientXTerminator plugin for Photoshop works too, but I've found it harder to use and less effective. It has rather limited options, so you need layer masking to get the most out of it. I never really got the hang of it myself.

     For the dust lanes you need some kind of local contrast enhancement, and there are a bunch of tools for that. Generally, though, you need to make heavy use of masking so that you can enhance the higher-SNR areas of the image while leaving the low-SNR regions alone. This is true for almost every process applied to the image, not just contrast enhancement.

     You'll also need masking to taper how much you stretch the core compared to the outer edges of the galaxy (the second sketch below shows the idea). M31 is obviously far brighter at the core, so if you stretch every part of it at the same rate you'll end up with a blown-out core and still be missing detail at the edges. Boosting the edges that much brings a lot of extra noise, so it's best to do some noise reduction before stretching the image, not afterwards. This is easy to do in PixInsight but much harder in Photoshop.

     Finally, I did use Photoshop for some minor cosmetic fixes, mostly removing a dust smudge. The clone stamp tool in PixInsight sort of works for that too, but the content-aware fill in Photoshop is much better.

     There are definitely different philosophies on whether it's better to pull out as much data as humanly possible or to hold back a bit if it means the image comes out better aesthetically. Ideally you have both, but if the data is limited (like mine always is) then you have to walk that fine line. I tend to go back and forth on it myself. I'm still fairly new at this, but it seems like the real skill is in pulling out those really faint details without compromising the aesthetics.
  2. Processed in PixInsight. I haven't been able to get any of my own data to play with recently, so this was a pleasure. Great photo!
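
To make the gradient-removal step in post 1 more concrete: this is not PixInsight's actual DBE implementation, just a minimal NumPy sketch of the underlying idea, sample the sky in spots that avoid the galaxy, fit a smooth low-order surface to those samples, and subtract it. The file name, sample positions, and the load_fits helper are made up for illustration.

# Rough DBE-style gradient removal: fit a smooth surface to hand-picked
# background samples and subtract it. Not PixInsight's algorithm, just the idea.
import numpy as np

def fit_background(img, samples, order=2):
    # img: 2-D float array (a single linear channel)
    # samples: (row, col) positions chosen well away from the galaxy
    rows = np.array([r for r, _ in samples], dtype=float)
    cols = np.array([c for _, c in samples], dtype=float)
    vals = np.array([img[r, c] for r, c in samples], dtype=float)

    # Low-order 2-D polynomial terms x^i * y^j with i + j <= order
    terms = [(i, j) for i in range(order + 1)
                    for j in range(order + 1) if i + j <= order]
    A = np.column_stack([(cols ** i) * (rows ** j) for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A, vals, rcond=None)

    # Evaluate the fitted surface over the whole frame
    rr, cc = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return sum(c * (cc.astype(float) ** i) * (rr.astype(float) ** j)
               for c, (i, j) in zip(coeffs, terms))

# Hypothetical usage (load_fits and the sample positions are made up):
# img = load_fits("m31_luminance.fits")
# samples = [(40, 40), (40, 980), (720, 60), (720, 980)]
# flat = img - fit_background(img, samples) + np.median(img)

Real DBE fits a much more flexible spline surface and offers division as well as subtraction, but the sample-then-model-then-subtract structure is the same.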
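
And a similarly hedged sketch of the mask-tapered stretch described in post 1: blur the image into a luminance mask so the bright core receives a gentle stretch while the faint outskirts get a much stronger one. The asinh curve, blur radius, and strength values here are placeholders, not anyone's actual workflow settings.

# Rough sketch of a mask-tapered stretch: blur the image into a luminance
# mask, then blend a gentle stretch (core) with a hard stretch (outskirts).
import numpy as np
from scipy.ndimage import gaussian_filter

def asinh_stretch(img, strength):
    # Larger strength lifts faint signal more aggressively
    return np.arcsinh(strength * img) / np.arcsinh(strength)

def tapered_stretch(img, weak=5.0, strong=200.0, sigma=25.0):
    # img: linear data already normalised to the 0..1 range
    mask = gaussian_filter(img, sigma=sigma)
    mask = (mask - mask.min()) / (mask.max() - mask.min())  # ~1 at the core

    gentle = asinh_stretch(img, weak)    # protects the bright core
    hard = asinh_stretch(img, strong)    # digs out the faint outer regions
    return mask * gentle + (1.0 - mask) * hard

Consistent with the advice in the post, any noise reduction belongs before this step, since the strong branch of the blend is what amplifies background noise the most.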