
Reprocessed M42 Orion Nebula using PI (first go)


Chefgage


Hello all. I took some images of the Orion Nebula late last year and processed them using my usual method in GIMP. I decided I wanted to try out new software to process my images, so I obtained the free trial of PixInsight. So this is the Orion Nebula reprocessed in PixInsight. It was my first go at the software, and although it is very different to GIMP, I think I will get on well with it.

The first image is the PixInsight one and the second is the GIMP one. In the PixInsight one I seem to have brought out quite a bit of noise and some strange coloured dashes/streaks.

But overall I do think the PixInsight one is the better of the two.

[Image 1: PixInsight reprocess of M42]

[Image 2: original GIMP process of M42]


1 minute ago, Chefgage said:

Any reason why? I myself like the first one, as I think it shows more detail.

I like the idea of pushing the data only as far as it will let you. As soon as you push it beyond that point, it will start to show: noise issues will become apparent, and so will some artifacts in the image.

The second image is, in my view, much better at this - it is not pushed nearly as far as the first image. In fact it shows only traces of noise, and that is because it is a bit oversampled. If it were binned to the proper sampling rate and stretched like that, it would not show any noise (and in fact it does not show it when viewed at, say, 25% zoom).
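The binning mentioned here can be sketched as simple 2×2 average binning in NumPy (a minimal illustration of the idea, not PixInsight's actual IntegerResample process):

```python
import numpy as np

def bin2x2(img):
    """Average-bin a 2D image by a factor of 2 in each axis.

    Halves the sampling rate (doubles the "/px) and, by averaging
    4 pixels, cuts random noise by a factor of ~sqrt(4) = 2.
    Odd rows/columns are trimmed so the reshape is exact.
    """
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```

An oversampled stack binned this way typically looks noticeably cleaner at 100% zoom, which is the effect described above.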


7 minutes ago, vlaiv said:

I like the idea of pushing the data only to the point it will let you. [...] If it were binned to proper sampling rate and stretched like that - it would not show any noise.

When you say to bin it to the proper sampling rate - how would I determine that? For my camera the pixel size is 3.72 µm and the focal length of the scope is 420 mm, so using the formula I get a sampling rate of 1.83"/px.


The first one has a really uneven background, with a weird sort-of rectangular negative vignetting. Plus the brighter stars are showing big halos.

I frequently see very faint detail barely above the noise when processing my own images, but I would never try to bring it out in a finished image, as it's just too noisy.


PixInsight made you stretch the data much harder, introducing noise. If you stretched the same way in GIMP, you would also see more noise and artefacts. If I were you, I would start by restacking in PI, using cosmetic correction to remove the hot pixels and the satellite trail. Then, when stretching the image, don't apply the STF as a permanent stretch. If you want to use HistogramTransformation (levels in GIMP), set the midtones slider to 0.25, leave the black point and white point at 0 and 1, and apply this several times. Bring the black point in just below clipping once you have the histogram peak well clear of the left side. This method gives you much better control over stars and background.
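The repeated midtones stretch described here can be sketched in NumPy. The formula below is the standard PixInsight midtones transfer function (MTF); the `stretch` helper and the choice of three passes are just illustrative:

```python
import numpy as np

def mtf(x, m):
    """Midtones transfer function: maps the midtones point m to 0.5
    while keeping the black point (0) and white point (1) fixed."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def stretch(img, m=0.25, passes=3):
    """Apply the midtones-only stretch several times, as suggested above.

    Black and white points are left at 0 and 1; bringing the black
    point in just below clipping is a separate, final step.
    """
    out = np.clip(img, 0.0, 1.0)
    for _ in range(passes):
        out = mtf(out, m)
    return out
```

Each pass with m = 0.25 lifts the faint end without touching the end points, which is why the histogram peak walks away from the left side gradually rather than in one violent stretch.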

If you want to push colour and keep detail, create a synthetic luminance image by extracting L from your (linear) colour image. Use ArcsinhStretch on the colour image and the method above on the L. Then combine L with colour using ChannelCombination in Lab mode.

If you still need to, apply noise reduction (not obliteration) at the end.


7 minutes ago, Chefgage said:

When you say to bin it to the proper sampling rate - how would I determine that? For my camera the pixel size is 3.72 µm and the focal length of the scope is 420 mm, so using the formula I get a sampling rate of 1.83"/px.

The actual resolution of the image will depend on seeing conditions combined with scope aperture and mount performance. This means it will not be the same each night; in fact, it won't be the same between subs. Stacking averages out the sub-to-sub differences, so you can look at the final stack to estimate the actual resolution of the image.

Take the average FWHM of the stars in your image in arcseconds and divide it by 1.6 to get a good sampling rate for such an image.

You can also see whether an image is oversampled just by looking at it at 100% zoom. Here is part of your image:

[Image: crop of your image at 100% zoom]

The smallest stars are circles rather than dots, and the bright stars are very large. In contrast, this is what a properly sampled image usually looks like in terms of star sizes:

[Image: crop of a properly sampled image at 100% zoom]

The smallest stars are pin points, and the larger stars are roughly the size of your smallest stars.

It can also be seen somewhat in the nebulosity, as softness / lack of sharpness, although most of the nebula in this image is soft anyway:

[Image: nebulosity crop from your image]

[Image: nebulosity crop from the properly sampled image]


9 minutes ago, wimvb said:

Pixinsight made you stretch the data much harder, introducing noise. If you were to stretch the same in GIMP, you would also see more noise and artefacts. [...] If you still need to, apply noise reduction (not obliteration) at the end.

Many thanks for that. I will certainly be giving it another go. This was my first attempt with new software, so I will take your suggestions on board (again, thanks).

 


9 minutes ago, vlaiv said:

Actual resolution of the image will depend on seeing conditions combined with scope aperture and mount performance. [...] Take average FWHM of stars in your image in arc seconds and divide that with 1.6 to get good sampling rate for such image.

Thanks for that.


  • Chefgage changed the title to Reprocessed M42 Orion Nebula using PI (first go)
21 hours ago, vlaiv said:

Actual resolution of the image will depend on seeing conditions combined with scope aperture and mount performance. [...] Take average FWHM of stars in your image in arc seconds and divide that with 1.6 to get good sampling rate for such image.

Just re-reading this. When you refer to my image being oversampled, is that the raw image as captured rather than the processed image? You refer to the size of my stars in the processed image, but aren't those stars just bigger because of the processing that was done?

The reason I ask is that my limited understanding of ideal camera/scope combinations was that my camera's pixel size and my scope's focal length gave an ideal equipment combination. This was based on the formula pixel size ÷ focal length × 206.265.

I hope that makes some sense.


46 minutes ago, Chefgage said:

Just re-reading this. When you refer to my image being oversampled, is that the raw image as captured rather than the processed image? You refer to the size of my stars in the processed image, but aren't those stars just bigger because of the processing that was done?

Both, as they both have the same resolution, in both sampling rate and captured detail (unless you drizzle for some strange reason).

47 minutes ago, Chefgage said:

The reason I ask is that my limited understanding of ideal camera/scope combinations was that my camera's pixel size and my scope's focal length gave an ideal equipment combination. This was based on the formula pixel size ÷ focal length × 206.265.

Yes, that is the correct formula, and if I understand correctly you were using a camera with a 3.72 µm pixel size and 420 mm of focal length, right?

That gives a sampling rate of 1.83"/px - that is correct. This, however, does not mean that you'll be properly sampling the image; in other words, although 1.83"/px sounds fine on paper in principle, it is not always fine in practice.
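As a quick check, the arithmetic works out like this (plain Python; the variable names are mine):

```python
# Sampling rate in arcseconds per pixel:
#   "/px = pixel size (um) / focal length (mm) * 206.265
pixel_size_um = 3.72
focal_length_mm = 420.0

arcsec_per_px = pixel_size_um / focal_length_mm * 206.265
print(round(arcsec_per_px, 2))  # 1.83
```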

In order to properly sample at 1.83"/px, the stars in your image need to have a FWHM of about 2.93". If you are using a scope with 420 mm of focal length, I'm guessing it is an F/6 scope with about 70-72 mm of aperture?

Such a scope by itself has an Airy disk size of 3.67". Take average seeing and mount performance into account, and the realistic star FWHM you can get with a 70 mm scope is 3.2-3.6" - which means a sampling rate over 2"/px.

In order to fully exploit 1.83"/px you need something like 1.5" seeing and 0.8" RMS guiding; or, if your mount guides at 1" RMS, then you need the seeing to be 1" FWHM to reach that resolution.

What mount are you using and what is your guide RMS?

 


@Chefgage: don't worry too much about sampling. My pixel scale is about half yours, at 0.95"/pixel, and I get good enough images with usually 0.6-0.8" guiding RMS. In PixInsight I use the SubframeSelector script to assign a weight to my images, with higher weight for lower FWHM. (I haven't upgraded PI to the newest version yet.) Moderately oversampled images are easier to process, in my opinion. There is a slight loss in signal-to-noise ratio for smaller pixels, but you can compensate for that by increasing the integration time.
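For illustration, a FWHM-based weighting could look like this in NumPy (a hypothetical weighting choice - SubframeSelector lets you define your own weighting expression, and this is just one simple option):

```python
import numpy as np

# Measured star FWHM per subframe, in arcseconds (made-up values).
fwhm = np.array([3.1, 3.4, 2.9, 4.2])

# Give the sharpest frame (lowest FWHM) weight 1.0 and scale the
# rest down in proportion, so blurrier frames count for less.
weights = fwhm.min() / fwhm
```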

What I did see in your image is slightly elongated stars. If your images still have that, you might have a look at optimizing your guiding. 


5 minutes ago, wimvb said:

Moderately oversampled images are easier to process, imo.

Why do you think that is? What about over sampled images do you find easier to process?


It's not just one aspect of processing. Processing steps influence each other, but deconvolution and star control during post-processing, among others, are usually easier with moderate oversampling. It's also easier to get decent star masks in PI; for some reason the StarMask tool in PI is very sensitive to star shapes.


2 hours ago, vlaiv said:

Both, as they both have the same resolution (unless you drizzle for some strange reason) in both sampling rate and captured detail. [...] What mount are you using and what is your guide RMS?

Right, thank you, that makes sense. This image was taken with an ED72 scope on a Star Adventurer Pro mount (so guiding in RA only). I am averaging an RMS error of about 1.5" when guiding.

I cannot remember what the FWHM was for each image in Deep Sky Stacker, but I do seem to remember some of my other images being higher than 1".


1 minute ago, Chefgage said:

Right, thank you that makes sense. This image was taken with an ed72 scope on a star adventurer pro mount (so guiding in RA only). Guiding I am averaging about an rms error of 1.5" [...]

Ah yes, an RMS of 1.5" will cause a significant drop in resolution. In 2" seeing with a 70 mm scope, the best-case scenario is 2.7-2.75"/px.


1 hour ago, wimvb said:

It’s not just one aspect of processing. Process steps influence each other, but usually deconvolution and star control during post processing among others are easier with moderate oversampling. It’s also easier to get decent star masks in PI. For some reason  the star mask tool in PI is very sensitive to star shapes.

Have you tried capturing at the proper sampling rate, then upsampling the image in software in order to process it, and then downsampling it back to the original resolution?

That way you can have your cake and eat it too :D


2 minutes ago, vlaiv said:

Ah yes, RMS of 1.5" will cause significant drop in resolution. In 2" seeing with 70mm scope in best case scenario that will be 2.7-2.75"/px

Hi vlaiv, sorry I'm being super lazy by asking you instead of looking it up myself, but what's the calculation you are using to get to these sampling rates?


1 minute ago, The Lazy Astronomer said:

Hi vlaiv, sorry I'm being super lazy by asking you instead of looking it up myself, but what's the calculation you are using to get to these sampling rates?

Three things go into the FWHM calculation: the seeing FWHM, the FWHM of the Gaussian approximation to the Airy disk, and the guiding FWHM.

Some of these are given as sigma and some as FWHM; the conversion factor is about 2.355, or more precisely 2·sqrt(2·ln 2). Sigma × 2.355 = FWHM for a Gaussian distribution.

These three PSFs convolve to produce the final PSF (this approximation assumes perfect optics, which is not the case in reality, but the difference is rather small), which means that the resulting sigma = sqrt(sigma_airy² + sigma_seeing² + sigma_guiding²).

Sigma_guiding is just the total guiding RMS.

Sigma_seeing is the seeing FWHM / 2.355.

Sigma_airy is 0.42 · λ · F# (a linear size at the focal plane, converted to arcseconds using the focal length).

[Image: screenshot of the Gaussian approximation to the Airy disk, σ ≈ 0.42 λ N]

I use a lambda of 550 nm because it is the middle of the 400-700 nm range. By the way, the screenshot/formula above is from: https://en.wikipedia.org/wiki/Airy_disk#Approximation_using_a_Gaussian_profile

Once you put everything together you get the expected FWHM under ideal conditions, and then you can get the sampling rate from that as FWHM / 1.6.

That factor has to do with the Fourier transform of the Gaussian profile and the sampling rate at which frequency attenuation falls below 10% (since we are approximating with a Gaussian profile there is no strict cut-off point - a Gaussian never drops to zero all the way out to infinity - so we just take a sensible point beyond which the attenuation of frequencies is too great).
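Putting those pieces together, a rough calculator might look like this (a sketch under the assumptions above; the function and variable names are mine). Plugging in the 2" seeing and 1.5" RMS guiding figures mentioned earlier in the thread reproduces the 2.7-2.75"/px estimate:

```python
import math

FWHM_PER_SIGMA = 2.0 * math.sqrt(2.0 * math.log(2.0))  # ~2.355

def expected_sampling(seeing_fwhm_as, guide_rms_as, f_ratio,
                      focal_length_mm, wavelength_nm=550.0):
    """Return (expected star FWHM in arcsec, suggested "/px = FWHM / 1.6)."""
    # Gaussian approximation to the Airy disk: sigma = 0.42 * lambda * F#
    # (a linear size at the focal plane), converted to arcseconds.
    sigma_airy_as = (0.42 * wavelength_nm * 1e-9 * f_ratio
                     / (focal_length_mm * 1e-3) * 206265.0)
    sigma_seeing_as = seeing_fwhm_as / FWHM_PER_SIGMA
    sigma_guide_as = guide_rms_as  # total guide RMS is already a sigma
    # The three PSFs convolve, so their sigmas add in quadrature.
    sigma_total = math.sqrt(sigma_airy_as ** 2 + sigma_seeing_as ** 2
                            + sigma_guide_as ** 2)
    fwhm = sigma_total * FWHM_PER_SIGMA
    return fwhm, fwhm / 1.6

# 2" seeing, 1.5" RMS guiding, 70 mm f/6 scope (420 mm focal length):
fwhm, rate = expected_sampling(2.0, 1.5, 6.0, 420.0)
print(round(fwhm, 2), round(rate, 2))  # 4.36 2.73
```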

