
M31 : Open data



Hi.

This is the calibrated and stacked file from last week's Andromeda galaxy imaging run. It's from my terrace, so moderate LP and thus gradients. I've tried processing it and ended up with lots of retakes. :confused2: (Same story again!!) But I thought I'd invite the helpful folks here to process this data, to see what can be achieved.
Details:
Canon 1100D (unmodded), 130 PDS, HEQ5 Pro (with much better guiding!! Can't believe that yet)
About 40 x 3 min subs @ ISO 400. Darks, flats and bias.

Here's the link to stacked file: https://www.dropbox.com/s/wq66kji8m4l1ni7/M31_180s_snrweight.xisf?dl=0

Thanks for your time.

Ishan.

Image16_clone.jpg


Thanks guys.

9 hours ago, wxsatuser said:

That's not too bad, processing gets better with practice.

I hope it does. But I'm really frustrated, to be honest. It's my 4th attempt at processing this image. Every time I image, the result doesn't live up to my expectations and I have to reprocess everything. That doesn't mean it improves over time :help:. I don't know what I'm doing wrong. Maybe it's just too noisy and hot here and I'm expecting too much.

8 hours ago, alacant said:

Looks good to me. Dark frames introduce more noise on Canon (all?) DSLR cameras. Try without? A dither in between the snaps also cleans things up. So, bias and flat: yes. Dark: no. Dither: compulsory! HTH

I tried your suggestion and stacked without darks. I do dither. I want to make sure I understand this right: dithering helps remove the extremely bright and saturated pixels ("hot pixels"?). Does it play any role beyond that in reducing noise? I use PI and stack with Linear Fit Clipping. Would more aggressive rejection (decreasing the sigma values) help decrease noise, or just the opposite?

4 hours ago, Yamez said:

That is a very impressive Andromeda, congrats.

:) 


2 hours ago, Ishan Mair said:

 Linear Fit Clipping

Was it any better without the darks? Did you stack the bias with the flats too? If so, it should make a difference to both the hot pixels and the noise. Don't know PI, but under DSS I use sigma clipping with the default values. With a 760 mm focal length telescope I dither to about 15 pixels; same sensor I think: 700D. APT:PHD2 dither settings are 5:1.9. HTH.


That's a great image.

If you haven't done so already, you could try a gentle touch of deconvolution with a lightness mask, to sharpen the dust lanes.

Dithering helps to get hot pixels under control, as well as streaks in the background and large scale chroma noise (mottle).

Aggressive hot pixel filters during preprocessing also help with that. I don't use darks, but apply cosmetic correction during image calibration, and a low value for pixel rejection during image integration. This is something you would need to experiment with.
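To see why dithering makes hot pixels so easy to reject, here is a toy numpy sketch (all values made up, not PixInsight's actual integration): after registration, a hot pixel that sits at a fixed sensor position lands on a different sky position in each sub, so at any one output pixel it is an outlier that a median (or sigma-clipped) combine throws away.

# Toy illustration: a hot pixel at fixed sensor coordinates, dithered subs,
# and what a plain average versus a median combine does with it.
import numpy as np

rng = np.random.default_rng(0)
n_subs, size = 20, 64
offsets = rng.integers(-8, 9, size=(n_subs, 2))          # assumed dither offsets in pixels

aligned = []
for dx, dy in offsets:
    sub = rng.normal(100.0, 5.0, (size, size))            # flat sky + noise
    sub[30, 30] = 4000.0                                   # hot pixel, same sensor spot in every sub
    aligned.append(np.roll(sub, (dy, dx), axis=(0, 1)))    # "registration": the hot pixel moves relative to the sky

stack = np.stack(aligned)
print(stack.mean(axis=0).max())        # plain average: the hot pixel still leaves a bright blotch
print(np.median(stack, axis=0).max())  # median combine: the dithered outlier is gone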

Noise is very much dependent on your imaging conditions. More subs is always the best way to keep noise down. For a noise reduction recipe (that works for me), check this link:

http://wimvberlo.blogspot.se/2016/07/noise-reduction-for-dslr-astroimages.html
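The "more subs" point is easy to check with a few lines of numpy (toy numbers, no real data): averaging N subs drops the random background noise by roughly a factor of 1/sqrt(N).

# Toy check that stacking N subs reduces random noise by about 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(1)
single_sub_sigma = 10.0
for n in (10, 40, 160):
    subs = rng.normal(100.0, single_sub_sigma, (n, 20_000))   # n subs of a flat patch
    measured = subs.mean(axis=0).std()                        # noise left after averaging
    print(n, round(measured, 2), round(single_sub_sigma / np.sqrt(n), 2))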

Good luck,

 


5 hours ago, wimvb said:

Dithering helps to get hot pixels under control, as well as streaks in the background and large scale chroma noise (mottle)

Hot pixels and streaks: yes, definitely. But I don't think I get any reduction in mottle, though that could be bad observation skills on my part.

5 hours ago, wimvb said:

Aggressive hot pixel filters during preprocessing also help with that. I don't use darks, but apply cosmetic correction during image calibration, and a low value for pixel rejection during image integration

I apply CC. I think you mean "low value of sigma for pixel rejection". But I'm a bit confused about it. I tried decreasing the sigma values; the lower they got, the more std. dev. was reported by the PixInsight Noise Evaluation script. I thought decreasing the sigma values might reduce mottling, but I couldn't see any improvement even after I changed them from 5 to 0.5. Am I missing something here?

5 hours ago, wimvb said:

Noise is very much dependent on your imaging conditions. More subs is always the best way to keep noise down. For a noise reduction recipe (that works for me), check this link:

http://wimvberlo.blogspot.se/2016/07/noise-reduction-for-dslr-astroimages.html

That's what I've come to understand. The EXIF temperature of the raw light frames was more than 30 deg. C. BTW, I had just come across your site yesterday and had already bookmarked it. It's a nice and simple NR tutorial. And you've got a pretty M31 image too!! Thanks, Wim.

11 hours ago, alacant said:

Was it any better without the darks? Did you stack the bias with the flats too? If so, it should make a difference to both the hot pixels and the noise. Don't know PI, but under DSS I use sigma clipping with the default values. With a 760 mm focal length telescope I dither to about 15 pixels; same sensor I think: 700D. APT:PHD2 dither settings are 5:1.9. HTH

I used the PI Noise Evaluation script, but the results indicate that the noise in the (Lights+Darks+Flats+Bias) stack is less than in (Lights+Flats+Bias). Visually, I also noted that the result with darks had a slightly smoother background, so it seems LDFB works better for me than LFB. As for the camera: yes, same sensor. But what do you mean by "APT:PHD2 dither settings 5:1.9"? Are you referring to the dither scale? It would be very kind of you to post a screenshot of the dithering settings in both programs, or maybe just write down the values.
Thanks for your help.


1 hour ago, Ishan Mair said:

I think you mean "low value of sigma for pixel rejection". But I'm a bit confused about it. I tried decreasing the sigma values; the lower they got, the more std. dev. was reported by the PixInsight Noise Evaluation script. I thought decreasing the sigma values might reduce mottling, but I couldn't see any improvement even after I changed them from 5 to 0.5. Am I missing something here?

Yes, that is what I meant, sort of. If you have a (large) number of subs, each pixel position (x, y) on the sensor will have a certain value in every sub. Due to noise (Gaussian, Poisson, shot, whatever), these pixel values will vary. Ideally, for a very large number of subs, the values will show a bell-shaped distribution. The width of the bell shape is measured by sigma. (Sigma is a property of the collection of all pixels and subs, not a value one can set in pixel rejection.) But there will also be pixels that don't follow this bell shape and can be interpreted as outliers. These outliers are hot or cold pixels that need to be removed by pixel rejection schemes, such as cosmetic correction and pixel rejection during integration.

During CC and integration, you define a value (let's call it X) such that all pixels with a value higher than <average> + X * sigma (or lower than <average> - X * sigma) will be removed before doing the integration. It is this value of X that you set in pixel rejection, and the aim is to find the best value for it. If X is too large (as your value of 5 probably was), not all hot pixels will be rejected and you will still see them in the final image. On the other hand, if X is too low (0.5, say), you will start to reject valid data points. The best way to find the right value for X is to look at the rejection maps that PI creates during image integration. If you start to see star shapes or even an outline of your target, then your X value is too low.
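To make the X * sigma idea concrete, here is a toy numpy sketch with made-up numbers. It is a plain single-pass rejection, not PixInsight's Linear Fit Clipping: one pixel position is sampled across 40 subs, and one sub carries a warm pixel.

# Toy single-pass rejection: drop samples outside <average> +/- X * sigma, then average.
import numpy as np

rng = np.random.default_rng(2)
values = rng.normal(100.0, 5.0, 40)   # one pixel position across 40 subs
values[7] = 130.0                     # one sub has a warm pixel here

def reject_and_average(v, x):
    """Reject samples further than x * sigma from the mean, average the rest."""
    mean, sigma = v.mean(), v.std()
    keep = np.abs(v - mean) < x * sigma
    return v[keep].mean(), int((~keep).sum())

for x in (5.0, 3.0, 0.5):
    avg, n_rejected = reject_and_average(values, x)
    print(f"X = {x}: rejected {n_rejected} of 40, integrated value = {avg:.1f}")

# X = 5 is too lax here: the warm pixel survives and pulls the average up.
# X = 3 rejects it; X = 0.5 also throws away a large share of perfectly good samples.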

Too high a value for X will keep hot pixels in the final image, leading to an increase in noise.

Too low a value for X will lead to rejection of valid data. The final image will then be as if it were made up of fewer subs, which also leads to increased noise.

Again, instead of looking at noise evaluation levels, you should inspect the rejection maps, especially the one for high pixel value rejection. This image should show evenly distributed noise, plus maybe stacking artefacts around the edges. If it shows a pattern or the outlines of stars, you are starting to reject valid data.

The only time I use noise evaluation scripts in PI is for getting a starting point for TGVDenoise edge protection. I do all other evaluation visually on previews.

 

As for mottle: I see noise in colour images as consisting of two parts, intensity variations and colour variations. The intensity variations are also visible when extracting luminance data from a colour image. This noise is dealt with by TGVDenoise on L (in L*a*b mode).

Colour variation (chrominance mottle) needs much more brutal noise reduction. For this noise, I use MMT on up to 7 layers. Used on a neutral background, it will basically remove all colour variation, as it should. But it requires careful masking, so as not to destroy colour variation in the main object. In my case, it is also partly dealt with by dithering (15 pixels in a non-random pattern). I have never seen mottle removed by pixel rejection schemes during image integration.
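Just to illustrate the luminance/chrominance split in the two paragraphs above (this is not TGVDenoise or MMT; the smoothing below is a stand-in using scipy and scikit-image): the chroma channels (a and b in L*a*b) are smoothed much harder than lightness, which is roughly what flattens colour mottle. In real processing this would of course be applied through a mask that protects the target.

# Toy illustration: smooth chroma much harder than lightness in L*a*b space.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import color

rng = np.random.default_rng(3)
# Fake neutral background with medium-scale colour mottle.
mottle = gaussian_filter(rng.normal(0.0, 1.0, (256, 256, 3)), sigma=(4, 4, 0))
rgb = np.clip(0.2 + mottle, 0.0, 1.0)

lab = color.rgb2lab(rgb)
lab[..., 0] = gaussian_filter(lab[..., 0], 1)    # gentle smoothing of lightness (L)
lab[..., 1] = gaussian_filter(lab[..., 1], 15)   # brutal smoothing of chroma (a)
lab[..., 2] = gaussian_filter(lab[..., 2], 15)   # brutal smoothing of chroma (b)
smoothed = color.lab2rgb(lab)

print("chroma std before:", color.rgb2lab(rgb)[..., 1:].std())
print("chroma std after :", color.rgb2lab(smoothed)[..., 1:].std())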

Just as a side note, the noise reduction scheme I use is what works best for me. My Pentax DSLR has notoriously high noise, and my noise reduction is therefore quite brutal. It starts with taking as many subs as I can; the "point of diminishing returns" is something I rejected a long time ago. On the occasions that I process other people's images, I generally need much less noise reduction, sometimes only at the very end of processing.

Hope this clarifies some.


Hi. Increasing the number of iterations reduces the blotching/mottling, but you need a gaming machine or lots of patience to do it. If you have only a few frames, use median kappa-sigma. I think someone did a what-it's-called-in-DSS : what-it's-called-in-PI comparison... In DSS it looks like this. HTH.
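For anyone wondering what the iterations actually do, here is a rough numpy sketch (illustration only, not DSS's or PI's code). Each pass recomputes the mean and sigma from the surviving samples, so a milder outlier that hides behind a stronger one in the first pass can still be caught in a later pass.

# Rough sketch of iterated kappa-sigma clipping.
import numpy as np

def kappa_sigma_clip(values, kappa=2.5, iterations=3):
    """Repeatedly reject samples further than kappa * sigma from the mean."""
    v = np.asarray(values, dtype=float)
    for _ in range(iterations):
        mean, sigma = v.mean(), v.std()
        keep = np.abs(v - mean) <= kappa * sigma
        if keep.all():
            break                            # nothing left to reject
        v = v[keep]
    return v.mean()

rng = np.random.default_rng(4)
samples = rng.normal(100.0, 5.0, 30)
samples[[3, 11]] = 400.0, 250.0              # a strong and a milder outlier
print(kappa_sigma_clip(samples, iterations=1))   # the milder outlier can survive a single pass
print(kappa_sigma_clip(samples, iterations=3))   # extra passes catch it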

 

c3.JPG


3 hours ago, alacant said:

Hi. Look at the image, not the maths! HTH

 

Maths is easy!! Not sure about the other one :icon_biggrin:

 

2 hours ago, wimvb said:

Colour variation (chrominance mottle) needs much more brutal noise reduction. For this noise, I use MMT on up to 7 layers. Used on a neutral background, it will basically remove all colour variation, as it should. But it requires careful masking, so as not to destroy colour variation in the main object.

I'm on it!! Will try. Masking is a challenge for me. I tried up to 5 layers in the past, but ended up with colour rings around stars. Gave up.

 

1 hour ago, alacant said:

Hi. Increasing the number of iterations reduces the blotching/mottling, but you need a gaming machine or lots of patience to do it. If you have only a few frames, use median kappa-sigma. I think someone did a what-it's-called-in-DSS : what-it's-called-in-PI comparison... In DSS it looks like this. HTH

I don't mind the computer working overnight while I dream about the stars! I'll find out if PI can do iterations of sigma clipping.


23 hours ago, wimvb said:

The best way to find the right value for X is to look at the rejection maps that PI creates during image integration. If you start to see star shapes or even an outline of your target, then your X value is too low.

Wim, thanks for the detailed reply. I went through it carefully and tried changing those values. I see a rejection map full of red, green and blue pixels, except where the bright stars or the core of M31 are located; these bright areas of the image appear black in the rejection map. Is that what you mean by "you start to see star shapes"? So is this value too low?

 

And Alacant, thanks for the dither settings. I shamelessly copied them :icon_biggrin:.


That should still be safe; dark means that no pixels were rejected in those areas. When those areas start to become brighter, that is when you start to reject valid data. BTW, I assume that you apply an STF to your rejection maps when viewing them.


19 hours ago, wimvb said:

That should still be safe; dark means that no pixels were rejected in those areas. When those areas start to become brighter, that is when you start to reject valid data. BTW, I assume that you apply an STF to your rejection maps when viewing them.

OK, I got it. But what about the dimmer areas of the galaxy? The rejection maps don't show those, but many pixels are rejected in those areas too.


If pixels are rejected in a region, then that region should show up (i.e. not be empty black) in the rejection map. In the low-range map, pixels below the rejection level are rejected and show as bright in the map. This is counterintuitive, since the rejected pixels have a lower than average value (i.e. darker than average). In the high-range rejection map, pixels that are rejected have a higher brightness than average. This rejection map looks like noise, with possible dark areas for stars and target.

Here's an example of an image of M31 with rejection maps. Note that an aggressive rejection scheme was used to get rid of hot pixels. This resulted in some data being clipped in the low range, and some star edges in the high range. The images are crops of the actual image; an STF was applied to all of them.

Image

light_BINNING_1_PV01.jpg

High range rejection map

high_rejection_PV01.jpg

Low range rejection map

low_rejection_PV01.jpg
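To tie this to something you can poke at, here is a toy numpy sketch of how high- and low-rejection maps can be built (illustration only, not how PixInsight's ImageIntegration does it). For each pixel, the map records the fraction of subs rejected above or below the threshold, so a single hot or cold pixel shows up as a bright dot in its map, while purely random rejections give the noise-like background described above.

# Toy sketch of building high/low rejection maps from a stack of subs.
import numpy as np

rng = np.random.default_rng(5)
n_subs, size, x = 20, 64, 3.0
stack = rng.normal(100.0, 5.0, (n_subs, size, size))
stack[2, 10, 10] = 2000.0            # a hot pixel in one sub
stack[5, 40, 40] = 0.0               # a cold/dead pixel in another

mean = stack.mean(axis=0)
sigma = stack.std(axis=0)
high_rejected = stack > mean + x * sigma   # per sub, per pixel: rejected on the high side
low_rejected = stack < mean - x * sigma    # per sub, per pixel: rejected on the low side

# The "maps": fraction of subs rejected at each pixel; brighter = more rejection.
high_map = high_rejected.mean(axis=0)
low_map = low_rejected.mean(axis=0)
print(high_map[10, 10], low_map[40, 40])   # both bad pixels light up in their respective maps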

