Image Processing: Veil Nebula: Should I be getting more from the data?



Hi all, I have relatively basic image processing skills with PixInsight and am wondering whether this is why I am not getting more out of my data on the Veil Nebula, or whether there just is not a whole lot more achievable with the data. I captured 36 x 5 min lights @ Gain 200, Bin 1x1 on an ASI294MC Pro, with an H-alpha filter in front of the sensor. I live in a Bortle 7/8 area and there was a big moon that night as well. I also used 30 darks in the stack; I did capture flats, but they appeared to make the image worse, so I left them out.

Here is an attachment of the stack, performed with DSS, lights & darks only as a TIFF file: Veil_LightsDarksOnly.TIF

Here is what I got out of the data:

Int_v01.thumb.jpg.a3ab535ff3d0a2cf0076b6c00bcb73f6.jpg

Process I followed was roughly:

1. Stacking in DSS (no alignment of RGB channels)

2. Dynamic crop (in PixInsight)

3. RGB channel extraction, LinearFit on R channel, then RGB re-combination

4. Background neutralisation

5. Color calibration

6. SCNR green removal

7. Automatic background extraction

8. Histogram stretch

9. Curves process used to increase saturation a bit

The main thing I'm trying to get a grasp of here is whether my image processing or stacking workflow is missing some steps, or whether not much more can be expected from the data I have in this case.
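Step 3 above can be illustrated with a rough numpy sketch. PixInsight's LinearFit is more robust than this, but at heart it matches one channel to a reference via a per-image linear model; the function name and toy data here are made up for illustration:

```python
import numpy as np

def linear_fit(channel, reference):
    """Match `channel` to `reference` with a linear model a*x + b found by
    least squares -- a rough stand-in for what LinearFit does per channel."""
    a, b = np.polyfit(channel.ravel(), reference.ravel(), 1)
    return a * channel + b

# Toy data: a "red" channel that is a scaled, offset copy of the "green" one
rng = np.random.default_rng(0)
g = rng.random((100, 100))        # reference channel
r = 0.5 * g + 0.1                 # channel with a different scale and offset
r_fitted = linear_fit(r, g)
print(np.allclose(r_fitted, g))   # True: the exact linear relation is recovered
```

After matching the channels this way, recombining them gives a more neutral background before the colour-calibration steps.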


1 hour ago, feilimb said:

Hi all, I have relatively basic image processing skills with PixInsight and am wondering if this is why I am not getting more out of my data on the Veil Nebula, or whether there just is not a whole lot more achievable with the data ...

What telescope/lens did you use to take this image?


I think you have more to pull out. I'm rubbish at processing, but I got this in my lunch break at work (15 mins fiddling in PS6: curves / levels).

Probably overdone, as it's getting noisier than your go, and colour probably looks different on everyone's monitor.

 

Veil_LightsDarksOnly.png


59 minutes ago, knobby said:

I think you have more to pull out, I'm rubbish at processing but got this in my lunch break at work ...

Thx knobby, certainly a lot more detail in that process than I was getting!


If you used only a Ha filter with your camera, then you would benefit from a different stacking workflow.

Since this camera (like other OSC cameras) is sensitive across a wide wavelength range in each color channel, every channel will capture the Ha wavelength, but at a different quantum efficiency (blue and green at only about 5% or so). To get the smoothest possible image, you need to separate the colors, and you need a stacking algorithm capable of handling subs of quite different SNR in a single stack.

Don't expect to get a color image out of it, but you should be able to get a fairly good mono image. I did a quick stretch of the red channel only and it shows quite a bit of detail:

image.png.6e87cb9fac62dd4317625b2f0427c753.png
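One common way to implement the SNR weighting vlaiv describes is inverse-variance weighting, where each sub's weight is 1/sigma² of its estimated noise. A sketch with synthetic data (real stackers estimate noise from background regions far more carefully):

```python
import numpy as np

def weighted_stack(subs):
    """Average subs weighted by inverse noise variance (1/sigma^2), so
    lower-SNR subs contribute less to the final stack."""
    subs = np.asarray(subs, dtype=float)
    # crude per-sub noise estimate via the median absolute deviation
    sigma = np.array([1.4826 * np.median(np.abs(s - np.median(s))) for s in subs])
    w = 1.0 / sigma**2
    w /= w.sum()
    return np.tensordot(w, subs, axes=1)

# Toy subs sharing one signal but with different noise (different effective QE)
rng = np.random.default_rng(1)
signal = np.ones((50, 50))
red_sub   = signal + rng.normal(0.0, 0.05, signal.shape)  # high QE at Ha
green_sub = signal + rng.normal(0.0, 0.15, signal.shape)  # lower QE, noisier
stack = weighted_stack([red_sub, green_sub])
```

The noisier sub gets a much smaller weight, so the stack ends up cleaner than a plain average of the two would be.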


1 hour ago, Anthonyexmouth said:

I also have the 294 paired with an ED80. Not sure I see the reason or benefit of a narrowband filter. I get that a dual/tri-band filter can be useful for an OSC, but an Ha filter is surely just going to strangle the ability of the camera. You'd be better off with a good LPS filter.

Thanks, yeah I actually have an LPS filter (IDAS LPS P2), but on the night I took these images there was a big moon out, and unless I am mistaken I thought the moon would wash out the signal with an LPS filter. I would like to get some more data on the target with the LPS another time and try to figure out how I can combine it with the Ha data.


52 minutes ago, vlaiv said:

If you used only Ha filter on your camera, then you would benefit from a different stacking workflow ...

Thanks vlaiv, when you say separate the colours - do you mean, prior to stacking, to extract the RGB channels into 3 different images for each light frame, and then to stack each channel separately ... or am I misunderstanding this?


1 hour ago, feilimb said:

Thanks vlaiv, when you say separate the colours - do you mean, prior to stacking, to extract the RGB channels into 3 different images for each light frame, and then to stack each channel separately ... or am I misunderstanding this?

Yes, you have it right - split each calibrated sub into 4 smaller images, each containing one "color". You will get one red sub, two green subs and one blue sub. This should be done without any debayering; rather, it is similar to super-pixel debayering, where you get R, G and B for one pixel out of each group of 2x2 pixels (the only difference from what I've proposed is that there the two green ones are summed into a single sub instead of kept as two separate green subs).

After that you should stack all of these "color" sub-frames together into a single stack (not into different "color" stacks). The only thing you should be careful about is to use a stacking method capable of "weighing" each sub based on its SNR. Each of these sub-images from a single light has a different SNR, because the sensor has a different sensitivity at the Ha wavelength in each color, so the signal will be of different strength. For optimum results some subs should be included "more" (those with better SNR) and some subs "less" (the green and blue pixels, because they have lower SNR).

With a Ha filter you are capturing the same signal in each color - so you will always get a monochromatic image (either in shades of gray or in shades of some other color), and there is really no point in trying to assign different "colors". You can't get red color in some parts of the image and blue in other parts - you will always end up with purple in both places (or some other color that has the same ratio of R, G and B, differing only in intensity).

Hope this makes sense.
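The split vlaiv describes is just strided slicing of the undebayered mosaic. A minimal numpy sketch, assuming an RGGB Bayer layout (verify your camera's actual pattern against its driver or documentation):

```python
import numpy as np

def split_cfa(raw):
    """Split an undebayered mosaic into four half-size sub-images, one per
    CFA position. An RGGB layout is assumed -- check your camera's pattern."""
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return r, g1, g2, b

raw = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 mosaic
r, g1, g2, b = split_cfa(raw)
print(r)  # the four pixels at the "R" position of each 2x2 cell
```

Each sub-image is half the resolution of the mosaic, which is exactly what super-pixel debayering produces as well.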


5 hours ago, feilimb said:

Sorry I should have said in the first post! It was a Skywatcher ED80 DS Pro, with 0.85x focal reducer - so 510mm @ F/6.37

Currently trying to process your image with GIMP. Will post results here later. Perhaps more exposure time would help pull out more detail, but I'm still a newb at AP, so don't quote me on that. Funny enough, BTW, I own the model one step down from yours, the 72ED. The Evostar ED line has lots of potential.


43 minutes ago, Nerf_Caching said:

Currently trying to process your image with Gimp. Will post results here later ...

Nice one cheers Nerf, will be interesting to see what you can pull out.

 

1 hour ago, vlaiv said:

Yes, you have it right - split each calibrated sub into 4 smaller images, each containing one "color" ...

Thanks vlaiv for the extra detail, it does make some sense but I'm still trying to digest it all!


Thanks Nerf for having a crack there with a good chunk of detail. I have had another go at it this morning, trying to follow @vlaiv's suggestions and render the final image in monochrome. I split each sub into R, G & B and threw away all the B's, which seemed to contribute nothing. I then split my previous master dark into R, G and B (also throwing away the 'B'). Then I stacked all the R and G component subs, along with the R and G component master darks, in DSS and chose 'Entropy Weighted Average' as the stacking algorithm (although I'm not sure if this is what vlaiv was suggesting with 'weighing each sub based on its SNR').

The processing I followed was similar to that in my opening post, except without background neutralisation/colour calibration, and performing the CurvesTransformation on 'RGB/K'. I also performed a light Deconvolution. I'm pretty happy with this 2nd result in comparison to my first effort. No noise reduction was applied.

Int_v02.thumb.jpg.8253ccd716d52946f6c629a9a292b0bd.jpg


22 minutes ago, feilimb said:

I split each sub into R,G & B and I threw away all the B's which seemed to contribute nothing.

Ah yes, B seems to have very low QE at 656nm (Ha wavelength) according to this graph:

image.png.cd1adb9aa0fdc066f8e12b637690599a.png

Green is not much better either in comparison to Red, but it is at least three times better than Blue.


On 27/09/2019 at 14:03, feilimb said:

Thanks Nerf for having a crack there with a good chunk of detail. I have had another crack at it this morning, trying to follow @vlaiv suggestions and render the final image in monochrome ...

You can use this image as L in an LRGB process. Just stretch the original image using arcsinh stretch and boost colour. Then blur (mlt - 2 layers, or convolution) and apply LRGB combination. 
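The arcsinh stretch suggested here brightens faint signal strongly while compressing highlights, and because it scales all channels by the same factor per pixel's intensity, it tends to preserve star colour better than a per-channel histogram stretch. A minimal sketch; the strength parameter `k` is an illustrative knob, not a PixInsight setting:

```python
import numpy as np

def arcsinh_stretch(img, k=100.0):
    """Arcsinh stretch: near-linear for tiny values, logarithmic for large
    ones, so faint nebulosity is lifted while star cores are compressed.
    `k` controls the stretch strength (illustrative parameter)."""
    return np.arcsinh(k * img) / np.arcsinh(k)

# Linear pixel values spanning faint background to a saturated star core
img = np.array([0.001, 0.01, 0.1, 1.0])
print(np.round(arcsinh_stretch(img), 3))
```

Note how a pixel at 0.1 ends up far brighter than a linear scaling would make it, while the top of the range stays pinned at 1.0.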


On 27/09/2019 at 13:03, feilimb said:

I split each sub into R,G & B and I threw away all the B's which seemed to contribute nothing ...

I think you could simplify your initial pre-processing... Calibrate all subs as RGB and then extract the red channel, and stack only that. Even with 'weighting', the contributions of both B and G channels would be very small compared to R. 


23 minutes ago, tooth_dr said:

You can load the normal raw data into APP and select the H-alpha setting, and it will do all this automatically and produce a stack from the red channel only.

Thanks tooth, I haven't tried APP yet but I will try the trial version. In my case the G channel also had quite a bit of useful data as well as the R channel; not sure if it would allow for custom channel selection?

2 hours ago, wimvb said:

I had a go at your original data with PixInsight. The process I used enhanced reflection halos, which I haven't corrected. They can be reduced with an appropriate star mask.

Veil_LightsDarksOnly.thumb.jpg.d1421d99f4f966ec3fd807743e3fb266.jpg

Many thanks wimvb, this is the best version I've seen yet in terms of the detail pulled out. Any chance you could outline the high level steps in your process?

I had another 45 minutes of data from before a meridian flip that I incorporated last night, and did one more processing attempt with that. It was much the same as my previous effort, perhaps with slightly better SNR (borders added using the Crop process in PI):

Int_v03.thumb.jpg.0cb7435590d72c520682ebb7796d3de5.jpg


8 hours ago, wimvb said:

You can use this image as L in an LRGB process. Just stretch the original image using arcsinh stretch and boost colour. Then blur (mlt - 2 layers, or convolution) and apply LRGB combination. 

I am hoping to capture RGB this week (clear sky forecast for Tuesday) with a LP filter. This will be my first attempt at combining this 'L' channel with RGB, so I have a lot to learn. When you say stretch the 'original image', are you referring to the H-alpha capture (stack), or a stack of RGB data?


5 hours ago, feilimb said:

Any chance you could outline the high level steps in your process?

  • crop to remove a narrow black border
  • DBE using division as correction method
  • background neutralisation
  • HSV repair script
  • cloned image to create two identical linear images, A and B

image A:

  • histogram transformation in several steps
  • curves transformation to boost colour saturation
  • created a mask based on the red channel

image B:

  • arcsinh stretch to retain colours in the stars
  • histogram transformation

Then I blended image B into image A using a weak starmask (brightest area in the mask kept down to 0.25). I used PixelMath for this.

  • Colour boost with the previously created mask
  • DBE to get rid of a persistent gradient
  • resample to 50%
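The PixelMath blend described above is a standard mask-weighted mix. A sketch with synthetic data (the array names and the 0.25 mask cap follow the description; all values are illustrative):

```python
import numpy as np

# Illustrative stand-ins: A is the heavily stretched image, B the
# arcsinh-stretched one with star colour, mask a weak star mask
rng = np.random.default_rng(2)
A = rng.random((64, 64))
B = rng.random((64, 64))
mask = 0.25 * rng.random((64, 64))   # "brightest area kept down to 0.25"

# Mask-weighted blend: where the mask is brighter, take more of B
blended = A * (1 - mask) + B * mask
```

In PixInsight this corresponds to a one-line PixelMath expression of the same form; capping the mask at 0.25 means even the brightest stars keep at least 75% of image A.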

I captured about 3 hours of RGB data last Tuesday night, with an IDAS P2 LPS filter and no moon around. This was my first crack at combining RGB with the above data (H-alpha using a colour CMOS camera). I'm not fully happy with the process I used for combining, and I seem to have brought some (probably light pollution) reddish colour into the background sky (see especially the upper half of the image). Also, despite boosting the colour saturation in the RGB image, the colour in the final combination just seems a bit insipid and lacking any punch. I think I need to learn some more about masks too...

Anyhow here it is (about 3 hours of 5min subs in H-Alpha, and 3 hours of 5 min subs with a LP filter combined):

Veil_HaRGB_v2.thumb.jpg.72e2829189123c46b72713c97e5f2a80.jpg

