
What can people do with this M31 data?



Hi All,

I wonder if anyone might see what they can do with this Andromeda data from last night?  The kit is an Evostar ED80 with stock ff/fr, on an HEQ5, taken with a Canon 6d.  The stack is 70 x 2min guided, dithered light frames, calibrated with 20-30 each of bias, darks and flats.  Taken from light-polluted Bristol, with a large moon not that far away (both of which contribute to the beastly gradients, I suspect!), I didn't expect the results to blow me away, but I'm curious to see what you expert image processors might be able to do with it. Dropbox link below - if anyone is tempted I'd be hugely grateful!

https://www.dropbox.com/s/g94yazgu045kwi9/M31_2min_Autosave.tif?dl=0

For context, I'd attempted a batch of 1min subs a couple of nights before, sticking to the one-third histogram 'rule', but the results were disappointing, so I risked going quite a bit over 50% on the histogram to see if I could find anything more in the data. I think it's worked, though there's a weird light band at the bottom of the frame, and I need to look at how I'm taking flats, as they don't seem to have reduced the light drop-off on the right side all that well.

My (limited) processing is simply DSS for stacking, followed by levels and curves in Photoshop. 

Any thoughts would be much appreciated, and I hope you're all keeping safe and well,

Derek

M31_2min__.jpg


I think it is already a very good image as is.

Yes, there are a few little things wrong with it - like that annoying gradient, and the burned cores, but otherwise it is very nice. I'll see what I can pull out of the data.


5 hours ago, vlaiv said:

Here is my rendition of the data.

I must confess that I processed it to match popular expectation more than what I believe it should look like.

Hope you like it.

 

Thanks for the kind words, and I most certainly do like it!  That's fabulous, thank you so much for taking the time.  You seem to have removed the gradient entirely, and found some additional faint stuff that I hadn't!  It looks a much more complete image.  May I ask what approach you took?

I've followed the discussion on the colours that our deep sky objects should have with interest.  Being significantly colourblind, I tend to rely on Photoshop to tell me when my background is reasonably neutral, and just pray that the main object isn't abnormally coloured as a result.  I feel like that's a processing step I've been putting off learning for a while, but I think I'm going to have to get into it!

Many thanks again for your efforts, Vlaiv, it's really helpful for me to see what others can do.


1 hour ago, Delboy_Hog said:

It looks a much more complete image.  May I ask what approach you took?

Here is a breakdown of what I did with the data.

I first loaded it in Gimp and did RGB separation to channels, then I saved each channel as a mono fits image. This step was needed because I think ImageJ won't properly load 32-bit TIFF (might be just a habit from an old version of ImageJ - I did not check whether recent versions load TIFF properly).

Then I load the fits files into ImageJ and crop and bin the data. Binning increases SNR and makes the image smaller in size - so it is easier to work with.
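For illustration only (this is my sketch, not vlaiv's actual ImageJ code), 2x2 software binning can be done like this in NumPy - averaging each block of four pixels roughly doubles the SNR and halves each dimension:

```python
# Minimal 2x2 software binning sketch (hypothetical illustration).
import numpy as np

def bin2x2(img):
    """Average each 2x2 block; trims odd rows/columns first."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

channel = np.random.rand(101, 99).astype(np.float32)
binned = bin2x2(channel)
print(binned.shape)  # (50, 49)
```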

Next step is background removal - I run some code that I've written for ImageJ that removes the background gradient. I do this on all three channels. Next I would do color calibration - but your data seemed ok as is, so I skipped this step. I just equalized the min and max values so Gimp won't mess up the color information when I load the channels separately (it scales data when loading from fits - and I don't want different scaling for each channel).
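The gradient-removal step can be sketched as a least-squares plane fit and subtraction (a hypothetical minimal version - the actual ImageJ code isn't shown in the thread and is likely more sophisticated, e.g. masking out stars and the galaxy before fitting):

```python
# Hedged sketch: fit a plane to the image background and subtract it.
import numpy as np

def remove_gradient(img):
    """Least-squares plane fit to all pixels, then subtract.
    Real tools exclude stars/galaxy from the fit; omitted for brevity."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    plane = (A @ coeffs).reshape(h, w)
    return img - plane + plane.mean()  # keep the overall brightness level
```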

I load all three channels into Gimp and do RGB combine.

Next I decompose that image into LAB components and discard A and B channels and work on L (I keep original RGB master as well).

I stretch the luminance (L component) using levels until I'm satisfied with how it looks.

I add a copy of that layer, denoise it, and then apply a mask to it - the inverted layer itself, with levels adjusted so that the mask applies the denoised layer only in dark areas. This smooths out the noise but prevents blurriness where the signal is strong. I flatten that and copy it.

I paste that as new layer on top of RGB image.

With the RGB image (bottom layer) - I do a levels stretch where I enter 2.4 as the middle value - this simulates the gamma stretch of the sRGB color model. I set the luminance (top layer) to layer mode Luminance. I flatten the image.
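Those two steps might look roughly like this in NumPy (my approximation, not Gimp's exact math): a gamma stretch with exponent 1/2.4, and a luminance-mode combine that rescales each pixel's RGB towards the processed luminance, which keeps the colour ratios intact:

```python
# Sketch of the gamma stretch and luminance-mode combine (illustration only).
import numpy as np

def gamma_stretch(rgb, gamma=2.4):
    """Levels middle value 2.4 is roughly out = in ** (1/2.4)."""
    return np.clip(rgb, 0, 1) ** (1.0 / gamma)

def apply_luminance(rgb, lum, eps=1e-6):
    """Rescale each pixel's RGB so its brightness matches lum.
    Uses the channel mean as a crude luminance proxy."""
    current = rgb.mean(axis=-1, keepdims=True)
    return np.clip(rgb * (lum[..., None] / (current + eps)), 0, 1)
```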

Next I do the following (and this is the part I do so that the image looks as most people would process it):

- Increase saturation to 200%

- Change temperature of the image by about 2000K to fix atmospheric reddening

- do a little bit of curves to brighten things up

And that is it.

 


On 24/01/2021 at 20:36, vlaiv said:

Here is a breakdown of what I did with the data...

Many thanks for that Vlaiv, that's really helpful to see - I have much to learn!  Very much appreciated - I'll go and have another tinker with the data and see what I can do!

Thanks again!


On 24/01/2021 at 22:29, almcl said:

I had a quick play with your data in StarTools.

It is nice data, once a couple of artefacts are cropped out.

Here's the result, quite different to Vlaiv's, see what you think?

Thanks Almcl, it is different, isn't it?  The galaxy stands out much brighter than the background sky in this version - it has a punch to it!  I'd be curious to know how you brightened the galaxy specifically and not the background - is that something you've done through curves etc, or is it layer masks and things - not sure if StarTools has those specifically?  It's a trick I need to learn for the fainter objects in the night sky, I think.

Thanks for taking the time to use the data - it fascinates me how everybody has slightly different approaches and different outcomes!


On 25/01/2021 at 16:37, Laurieast said:

I used Gradiant Exterminator on it first, and then levels and curves and then applied Photoshop Color Preserving Arcsinh Stretch 

Hi Laurieast, thanks for that - I definitely need to look up Gradient Exterminator!  And I haven't used (or even seen!) that feature in photoshop, so I'll be googling that one too, thank you! 


53 minutes ago, Delboy_Hog said:

 I'd be curious to know how you brightened the galaxy specifically and not the background, is that something you've done through curves etc, or is it layer masks and things - not sure if StarTools has those specifically?  It's a trick I need to learn for the fainter objects in the night sky, I think.

StarTools (you can download a fully functioning version here that does everything but save, to see if you like it) doesn't work in quite the same way as other post-processing software.  I used four separate modules to emphasize the galaxy: Autodev, HDR, Life and Superstructure.  ST replaces levels and curves with mathematical functions to avoid losing data such as the faint outer edges of the galaxy while avoiding star bloat. Its creator, Ivo Jager, @jager945, posts here and may give a much better explanation of the philosophy and function.


16 hours ago, Delboy_Hog said:

Hi Laurieast, thanks for that - I definitely need to look up Gradient Exterminator!  And I haven't used (or even seen!) that feature in photoshop, so I'll be googling that one too, thank you! 

I should have said GradientXterminator, here: https://www.rc-astro.com/resources/GradientXTerminator/
And the Photoshop plugin here: https://www.cloudynights.com/topic/595610-photoshop-color-preserving-arcsinh-stretch/ - it's a free plugin and works great! There is discussion there by the author about how to use it with grey layers; I did not do any of that, just did my levels and curves first, then applied an Arcsinh 10. I only found out about this a couple of days ago, and came across it when watching this video: https://youtu.be/cLUcyil3GLE
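For reference, the core idea of a colour-preserving arcsinh stretch can be sketched as follows (a minimal illustration based on the general idea; the plugin's exact parameters and maths may differ). One scale factor is computed from the luminance and applied equally to R, G and B, so faint signal is lifted without shifting the hue:

```python
# Sketch of a colour-preserving arcsinh stretch (illustration only).
import numpy as np

def arcsinh_stretch(rgb, beta=10.0, eps=1e-8):
    """beta controls stretch strength (cf. 'Arcsinh 10').
    Scale derived from luminance, applied to all channels equally."""
    lum = rgb.mean(axis=-1, keepdims=True)
    scale = np.arcsinh(beta * lum) / (np.arcsinh(beta) * (lum + eps))
    return np.clip(rgb * scale, 0, 1)
```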

 


  • 2 weeks later...

I'm not an expert, rather a beginner, but I had a go at your file ^^

StarTools 1.7456 only. 

I think the flats are not ideal, as there was an area on the right with different brightness, so I used Wipe first, then the usual workflow, and again Wipe uncalibrated. It is binned to 35% since my laptop is slow ^^;

There's pretty good information in it, although it probably needs more time to reduce the noise.

M31_2min_Autosave SGL 2 .jpg


I had a go in Pixinsight. It's always difficult to remove gradients with DBE on a DSLR image, I find, as it always seems too aggressive and leaves the background rather noisy. Still, if you got up to 4 or 5 hours with no moon around you'd have a really nice image. I'm impressed by how flat the field is on such a big sensor!

M31_HSV_ABE.jpg


I'm still getting my head around the post-capture processing and I've not done that much to it, as the only thing I've got on this machine is Siril; I've also not calibrated the screen in ages, so:

Crop

White Balance/Background Neutralisation

Auto Stretch

Background Extraction (subtract)

Move Black Point in Asinh Transformation

Green Noise Remove

Colour Saturation

Background Extraction (subtract) again

Move Black Point in Asinh again, but not by too much.
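The "Green Noise Remove" step above is an SCNR-style operation; a common average-neutral variant (my sketch, not necessarily Siril's exact implementation) simply caps the green channel at the mean of red and blue, since true deep-sky signal is rarely green:

```python
# SCNR-style (average neutral) green noise removal sketch.
import numpy as np

def remove_green_noise(rgb):
    """Cap green at the mean of red and blue per pixel."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    g_capped = np.minimum(g, (r + b) / 2.0)
    return np.stack([r, g_capped, b], axis=-1)
```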

M31_2min_E.jpg

 


Thanks all, I'm intrigued by the different approaches, software, brightness, contrasts etc.  Makes me realise just how much I've got to learn!

Will definitely try again / add more data on this (preferably moonless!), and as discussed in other threads recently, I definitely still need to master flat frames!  The 6d is a great camera, but certainly comes with some challenges!  Hoping to get to some darker skies once we're out of lockdown too!


Lovely!  So many good processes - I really like these - the faint stuff around the edge of the galaxy is there (that I didn't manage to bring out), but in a way that makes it distinct from the background sky.  That seems to be quite an art (or science?!) - my version (albeit riddled with gradients) just seems to merge indistinctly with the background sky.  That's definitely on my list of things to correct.

And some people seem to be managing to bring out the dust lanes more clearly, and get some detail of the swirl in the core too - that's barely visible in my go at this - presumably that's controlling the brightness and boosting contrast somehow?  I guess more data would help with that too.

Thanks for taking the time to play with the data.  I'm still trying to master consistently some of the steps that Vlaiv and others have kindly set out, but if anyone has any techniques they've used for galaxy processing on their own data, I'd be fascinated to hear them!


I initially used Photoshop for processing but switched over to Pixinsight a while back. Although the end result can be similar the workflow is very different between the two tools.

For gradient reduction in Pixinsight you'd use DBE (dynamic background extraction), where you specify a number of background samples across the image and the tool generates a background image that is then subtracted from the initial image. If you're careful not to include any part of the galaxy when setting the samples, then the background can be removed entirely without touching the galaxy at all. As others have said, it can sometimes work "too well" because it'll remove the background but leave all the noise. It's always tempting to overstretch the data when you get that kind of contrast, but there's a limit to what you can bring forward without bringing too much background noise with it.
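The idea behind DBE can be sketched like this (my illustration of the general technique, not PixInsight's actual algorithm): fit a smooth low-order surface through user-chosen background samples, then subtract the modelled background from the image:

```python
# Sketch of sample-based background modelling (illustration only).
import numpy as np

def background_model(shape, samples):
    """samples: list of (x, y, value) background points.
    Fits a 2nd-order 2-D polynomial through them."""
    xs = np.array([s[0] for s in samples], float)
    ys = np.array([s[1] for s in samples], float)
    vs = np.array([s[2] for s in samples], float)

    def design(x, y):
        return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

    coeffs, *_ = np.linalg.lstsq(design(xs, ys), vs, rcond=None)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    x, y = xx.ravel().astype(float), yy.ravel().astype(float)
    return (design(x, y) @ coeffs).reshape(shape)

# To flatten the field: flattened = img - background_model(img.shape, samples)
```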

The GradientXterminator plugin for Photoshop works too, but I've found it to be harder to use and less effective. It has rather limited options, so you need to use layer masking to get the most out of it. I never really got the hang of it myself.

For the dust lanes you need to use some kind of local contrast enhancement and there are a bunch of tools to do that. Generally though you need to make heavy use of masking so that you can enhance the higher SNR areas of the image while leaving the low SNR regions alone. This is true for almost all processes applied to the image (not just contrast enhancement).

You'll also need to use masking to taper the amount you stretch out the core compared to the outer edges of the galaxy. M31 is obviously way brighter at the core so if you stretch every part of it at the same rate you'll have a blown out core and will still be missing details at the edge. Boosting the edges so much will bring a lot of extra noise so it's best to do some noise reduction before stretching the image, not afterwards. This is easy to do in Pixinsight but much harder in Photoshop.
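A toy version of that masked, tapered stretch (my illustration, not a PixInsight process): blend a hard stretch for the faint outskirts with a gentle one for the bright core, weighted by a brightness-based mask, so the core isn't blown out while the edges are still lifted:

```python
# Masked stretch sketch: bright pixels get the gentle stretch,
# faint pixels get the hard one (illustration only).
import numpy as np

def masked_stretch(img, hard_gamma=4.0, soft_gamma=1.5):
    img = np.clip(img, 0, 1)
    hard = img ** (1.0 / hard_gamma)   # strong lift for faint outskirts
    soft = img ** (1.0 / soft_gamma)   # gentle lift to protect the core
    mask = img ** 0.5                  # near 1 where bright, near 0 where faint
    return mask * soft + (1 - mask) * hard
```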

Finally, I did use Photoshop for some minor cosmetic enhancements, mostly for removing a dust smudge. The clone stamp tool in Pixinsight kinda works for that too but the content aware fill in Photoshop is much better.

There's definitely different philosophies on whether it's better to pull as much data out as humanly possible or to pull back a bit if it means the image comes out better aesthetically. Ideally you have both but if the data is limited (like mine always is) then you have to walk that fine line. I tend to go back and forth on it myself. I'm still fairly new at this but it seems like the real skill comes in when pulling out those really faint details without it compromising aesthetics.


On 11/02/2021 at 17:26, ajwightm said:

 

Many thanks for that, that's hugely helpful to hear the similarities / differences with Pixinsight.  The masking is probably next on my list to learn, I think.  It's a helpful one for night-landscape shots too.

And yes, I definitely agree that there's a fine line on how far to push it - and it seems very easy to go that one stretch too far!  I think what I've learned most from this thread is that there are other tools that I'm not yet using that can get my data a bit further in the right direction before it starts to come apart - I just need to get some experience of using those tools.

The big problem I have (like everyone) is the weather.  I learn new skills, but then don't get the astro kit out for 2 months and have forgotten it all by the next time around! (Shakes fist at clouds!)

Thanks again for your thoughts on the processing!


40 minutes ago, Delboy_Hog said:

 

The big problem I have (like everyone) is the weather.  I learn new skills, but then don't get the astro kit out for 2 months and have forgotten it all by the next time around! (Shakes fist at clouds!)

 

I got my Celestron AVX mount back in September, I think I've taken it outside less than half a dozen times.

I'm now an expert at doing fake two-star polar alignments indoors, then switching it off again and pushing it back into the corner.

