
First try at M31 - Can you help me improve?



Hey guys,

I used the clear sky on Sunday to squeeze in a couple of shots of the Andromeda Galaxy. Since I tore 5 ligaments in my right foot two weeks ago, I could only set up my equipment in my garden, with a street lamp approx. 20 meters away. Nevertheless I was able to capture 22 x 10 min subs before the clouds came rolling in. Post-processing was done in PixInsight, but so far I'm not really satisfied with the results I'm getting. I'm sure the light pollution won't help matters, but I'm wondering whether, given the equipment I'm using, I should expect to see better results regardless. Any criticism and feedback is greatly appreciated. If anyone wants to try their hand at my raw image files, they can find the corresponding files here.

Andromeda6.png

Image Stats:

Lights: 16 x 10 min subs @ ISO 200

Bias:  36 @ 1/4000 sec

Flats: 73 @ 1/250 sec

Equipment Used:

  • Skywatcher Explorer 200-PDS
  • Skywatcher AZ-EQ 6 GT
  • Sony Alpha 55
  • Lacerta MGEN II (9x50 Guide Scope)
  • Hot beverage

Software Used:

  • PixInsight 1.8
  • AstraImage 5.0

Nice image, with fine detail in the dust lanes.

With your setup, you should be able to get more colour. I will have a go at your image with PI.

 

BTW, what is the Bayer matrix for your camera (RGGB, BGGR, GRBG, etc.)?

Cheers,


First of all a colour boost, just to show what colour is in the image.

- calibration of the light frames

- registration and integration of the 11 best frames

- cropping of the image to remove some artefacts

- gradient removal (DBE)

- colour calibration

- stretching (Histogram transform)

- colour saturation (curve transform)

M31_colourboost.png

Now that colour is revealed, you know that it's worthwhile to process this further to enhance the dust lanes near the core and control the stars. This will take a lot longer to do.

The steps I would suggest are:

- deconvolution to enhance detail in the lanes

- star reduction

- masked stretch to enhance faint detail, followed by histogram transformation (see the sketch at the end of this post)

- colour saturation using a lightness mask

- HDR transform to reveal detail near the core and enhance the dust lanes further

- ...

I think it's definitely worth the extra effort. The data you have is good enough, but you might want to look into the artefacts on the left-hand side of all your raw images. I couldn't remove the blue band with DBE, so I cropped it out of this image. Had it been perfectly vertical, it could have been removed by CanonBandingReduction.
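As a side note on the "stretching (histogram transform)" step in the lists above: at its core it is just a non-linear remapping of pixel values. Below is a minimal Python/NumPy sketch of the midtones transfer function commonly used for this kind of stretch; the midtones value is a made-up placeholder, not the one used on this image.

import numpy as np

def mtf(m, x):
    # Midtones transfer function: mtf(m, 0) = 0, mtf(m, m) = 0.5, mtf(m, 1) = 1.
    # Smaller m gives a stronger lift of the faint end.
    x = np.asarray(x, dtype=float)
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def stretch(img, shadows=0.0, highlights=1.0, midtones=0.02):
    # Clip to [shadows, highlights], rescale to [0, 1], then apply the MTF.
    # The default parameter values are placeholders for illustration only.
    img = np.clip((img - shadows) / (highlights - shadows), 0.0, 1.0)
    return mtf(midtones, img)

# Example: a faint, linear image gets lifted into the visible range
linear = np.random.rand(100, 100) * 0.05
stretched = stretch(linear)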


6 hours ago, wornish said:

I had a play with your raw image files in DSS and PS

Got to admit your raw data is very good.

 

What do you think of this attempt?

 

 

M31-test-process.jpg

Thank you for your effort. :) I quite like how sharp the image appears to be. Would you mind going through the post-processing steps in Photoshop for me? I have never used PS for any of my deep sky images, and I would love to see if there is anything I could incorporate into my post-processing workflow.

2 hours ago, wimvb said:

First of all a colour boost, just to show what colour is in the image. [...] I think it's definitely worth the extra effort.

Thank you so much for all the effort you put into this. I quite like the background in your image and how smooth the final result seems to be. Could you give me a quick rundown on the settings you used to stack the light frames? I tend to use the standard parameters provided by PixInsight, with the rejection algorithm set to sigma clipping. I find the whole stacking procedure somewhat frustrating, as my stacked images usually end up very noisy and grainy. I've tried lowering the ISO on my camera, using the dithering option of my autoguider, and taking longer and more subs to acquire the best raw data I possibly can. Granted, the light pollution doesn't help, but I find that the stacked images always show excessive chroma and luminance noise, which ends up looking like a muddy mess when any kind of noise reduction or sharpening algorithm is applied.

M31_colourboost7.png

Regarding the weird color artifact in the image: I think it has something to do with the way the light from the street lamp entered my telescope. I've never seen this kind of gradient in any of my images before, and my camera normally does not exhibit this type of discoloring. I guess it's a lesson in where not to set up my telescope, though. :evil6:

I will try to follow the steps you suggested once the weekend comes around and see where they take me. Hopefully I'll have something more aesthetically pleasing to show off to you guys. Again, thanks to both of you for your help and the time you put into post-processing my image files.

 

Cheers,

Patrick


For batch preprocessing, I pretty much used the standard parameters.

If you have a lot of light pollution, then this also adds to the noise:

light signal = number of photons

noise = sqrt(number of photons)

It doesn't matter where the photons come from: stars, galaxy, street lamps, sky... They all add light to the sensor, and therefore noise.

We use background extraction (DBE) to remove the effect of light pollution, but this doesn't remove the noise from light pollution. So, if you have a lot of light pollution, you not only end up with brighter images, but also more noise.
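To put rough numbers on that, here is a small worked example (the photon counts are invented for illustration, not measured from this data): the object signal is identical in both cases, but the extra sky signal from light pollution inflates the shot noise and pulls the signal-to-noise ratio down.

import math

def snr(object_photons, sky_photons):
    # Shot-noise-only SNR of an object sitting on a sky background.
    # Read noise and dark current are ignored to keep the example simple.
    return object_photons / math.sqrt(object_photons + sky_photons)

obj = 1000.0                           # photons from the galaxy in one sub (assumed)
print(snr(obj, sky_photons=100))       # dark site:       ~30
print(snr(obj, sky_photons=10000))     # next to a lamp:  ~9.5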

I use TGVDenoise to remove noise in the luminance, and MultiscaleMedianTransform on the chrominance to remove colour noise.

http://wimvberlo.blogspot.se/2016/07/noise-reduction-for-dslr-astroimages.html

I worked some more on your image yesterday evening, but no results yet.


Quote

Would you mind going through the post-processing steps in Photoshop for me?

I purchased a set of PS actions called "Annies Astro Actions", which I use most of the time; they are not expensive and save a lot of clicks.

I always begin with simple stretching and then use some of the other actions to reduce star bloating and bring out the colour information.

You can find details on them here: http://www.eprisephoto.com/astro-actions

I think I may have overdone the sharpening in my test processing, but your original data is great - thanks for sharing.

 

 


This looks very good to me. I haven't played with the raw data, but it seems to want to weight the green very heavily. I'd take care to keep that down, in the hope of finding a reddish dominance in the core with a hint of blue in the spirals. SCNR green would be my second step after running DBE.

Olly


Yes, there was a lot of green. I generally use a sequence of DBE, colour calibration, (noise reduction on luminance), sharpening, star reduction, stretching, selective colour saturation. In this sequence I use SCNR whenever green starts to dominate. Generally more than once during processing.
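For anyone wondering what SCNR actually does: one common "average neutral" formulation simply caps the green channel at the mean of red and blue. A rough NumPy sketch of that idea (the amount value and image shape are assumptions, and this is not PixInsight's exact implementation):

import numpy as np

def scnr_green(rgb, amount=1.0):
    # Cap green at the average of red and blue; amount blends the correction in.
    # rgb: float array of shape (H, W, 3) with values in [0, 1].
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    g_capped = np.minimum(g, 0.5 * (r + b))
    out = rgb.copy()
    out[..., 1] = g * (1.0 - amount) + g_capped * amount
    return out

# Example on a random image with an exaggerated green cast
img = np.clip(np.random.rand(10, 10, 3) + [0.0, 0.2, 0.0], 0, 1)
cleaned = scnr_green(img, amount=0.8)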

Wornish, I don't use PS (haven't made that investment). So I can't advise you on the workflow. But I'm sure others can.


Probably worth saying, also, that one-shot colour cameras benefit enormously from dithering - the process of moving the camera by a few pixels between sub exposures. A roughly 12-pixel dither is good for normal DSLRs. It prevents the familiar effect of mottled colour.
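As a toy illustration of why that helps (the mount/guider interface is ignored here; the offsets are just random numbers): every sub gets a different pixel offset, so a hot pixel sits on a different piece of sky in each registered frame and the stacking rejection can throw it out.

import numpy as np

rng = np.random.default_rng(1)

def dither_offsets(n_subs, scale_px=12):
    # Random x/y offsets in camera pixels to apply between sub exposures;
    # scale_px ~ 12 follows the rule of thumb above for a normal DSLR.
    return rng.uniform(-scale_px, scale_px, size=(n_subs, 2))

offsets = dither_offsets(22)
# A hot pixel fixed at (120, 345) on the sensor lands on a different sky
# position in every registered sub, so it never stacks up in one place:
hot_pixel_sky_positions = np.array([120, 345]) + offsets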

Olly


Here's my effort. I kept the colour saturation down to avoid a muddy effect. Instead I tried to enhance the dust lanes near the core.

The stars could have taken a little more colour.

M31_SGLres.png

As Olly mentioned, dithering the imaging camera 12 or more pixels between exposures results in a cleaner background after stacking.

I pretty much used the standard parameters in the batchpreprocessing script, but only stacked the best 11 frames.
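Since sigma clipping came up earlier: the integration step is basically an average that first rejects pixel values sitting too far from the rest of the stack. A simplified NumPy sketch of the idea (not PixInsight's exact algorithm; the thresholds are typical defaults rather than the ones used in this run):

import numpy as np

def sigma_clipped_stack(subs, sigma_low=4.0, sigma_high=3.0, iterations=3):
    # Average a stack of registered subs, iteratively rejecting outliers.
    # subs: array of shape (n_subs, H, W).
    data = np.asarray(subs, dtype=float)
    keep = np.ones_like(data, dtype=bool)
    for _ in range(iterations):
        clipped = np.where(keep, data, np.nan)
        mean = np.nanmean(clipped, axis=0)
        std = np.nanstd(clipped, axis=0)
        keep = (data > mean - sigma_low * std) & (data < mean + sigma_high * std)
    return np.nanmean(np.where(keep, data, np.nan), axis=0)

# Example: 11 fake subs, one of which contains a bright satellite trail;
# dithering spreads such defects around so the rejection can remove them
subs = np.random.normal(100.0, 5.0, size=(11, 64, 64))
subs[3, 30, :] += 500.0
master = sigma_clipped_stack(subs)   # the trail is rejected from the average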

Postprocessing included:

DBE

Colour Calibration

I then created a synthetic luminance channel (see the sketch after this list):

     RGB channel separation

     Image integration of the RGB channels, using median combination without pixel rejection

Stretching and colour saturation of the colour image

Blurring of the colour image by deleting 3 wavelet layers using MLT

Processing of the luminance channel:

     TGV denoise

     creation of a starmask with contours, using a clone of the image, stretched and flattened

     Star reduction using morphology transform

     masked stretch followed by histogram transform

     HDRmultiscale transform to get more detail near the core, 7 layers, 2 iterations

     LocalHistogramEqualization to enhance detail in the core further

     Some more stretching using curvetransform

LRGB recombination of the blurred colour image and the sharpened Luminance

Some more gentle colour saturation using curvetransform

sharpening the dust lanes with a starmask to protect the stars, using layers 2, 3 and 4 in MMT

Resampling of the final image
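To make the synthetic luminance step above concrete, here is a minimal NumPy sketch of median-combining the three colour channels into a single luminance frame, with no pixel rejection (array shapes and data are assumptions; this is not the ImageIntegration code itself):

import numpy as np

def synthetic_luminance(rgb):
    # Median-combine the R, G and B channels of a colour image into one
    # luminance frame, without any pixel rejection, as in the workflow above.
    # rgb: float array of shape (H, W, 3).
    return np.median(rgb, axis=-1)

# Example: the luminance is then denoised/sharpened separately and later
# recombined with the (blurred) colour image in the LRGB step
rgb = np.random.rand(64, 64, 3)
lum = synthetic_luminance(rgb)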

 


  • 3 weeks later...

I gave it another go and this is what I came up with.

stacked_DBE1.png

 

There are still some light pollution artifacts in there that I can't get rid of. Next time I will set up my telescope somewhere dark, as far away from any streetlamp as I can get.


Looking again at this I would say that the original image posted is the best, and in terms of managing the brightness range it is by far the best. In this respect it is rather remarkable. I don't remember seeing more fine detail carried so close to the core in any other image. The colour could be improved, yes. Maybe I should move next to a street light for my next assault on the core!

Olly


57 minutes ago, ollypenrice said:

Looking again at this I would say that the original image posted is the best, and in terms of managing the brightness range it is by far the best. In this respect it is rather remarkable.

Thing is, the first one has lost all the outer detail in order to get the inner detail to show; I would at least tweak its gamma up a bit or raise the black point.

Why not paste the first one as a layer on top of the last one, up the gamma a little and set it to 'luminance'? Something like this, except that, as the first one is only a crop, it doesn't cover it all:

Temp.jpg
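Roughly what that layer trick amounts to, sketched in NumPy (a crude approximation of a 'luminosity' blend; the gamma value and luma weights are assumptions for illustration, and both images would need to be registered first):

import numpy as np

def luminosity_blend(base_rgb, detail_rgb, gamma=0.8):
    # Keep the colour of base_rgb but take the brightness from detail_rgb.
    # Raising detail_rgb to a power < 1 lifts it slightly first (the
    # "up the gamma a little" step). Both inputs: (H, W, 3) floats in [0, 1].
    weights = np.array([0.2126, 0.7152, 0.0722])      # Rec. 709 luma weights
    detail_luma = np.power(detail_rgb, gamma) @ weights
    base_luma = base_rgb @ weights
    ratio = detail_luma / np.maximum(base_luma, 1e-6)
    return np.clip(base_rgb * ratio[..., None], 0.0, 1.0)

# Example with stand-in arrays: the later wide image supplies the colour,
# the first (sharper core) image supplies the luminance
wide = np.random.rand(64, 64, 3)
core = np.random.rand(64, 64, 3)
blended = luminosity_blend(wide, core)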

