M42 layer mask technique from the RGB channels of a single H-alpha DSLR exposure.


tooth_dr


I didn't think you could overexpose an H-alpha capture, but you can!  I cut it short, so there were just 4 exposures stacked in DSS, 600s each.  I decided to process it anyway and noticed that the green and blue channels also contained what felt like useful practice data, with the blue channel having the 4 Trapezium stars clearly visible.  Seemed like a good opportunity to try some layer masks!  Here is the resultant image.

 

DSS stacked image

M42_stacked_image.thumb.jpg.7c9a7270603fe823ada8e101ac858d13.jpg

 

Red channel

M42_red_channel.thumb.jpg.4b000376016f03a448b13d912e3569b6.jpg

Green channel

M42_green_channel.thumb.jpg.d65eb4491f6e0a0c3919a41b43f0fb4e.jpg

Blue channel

M42_blue_channel.thumb.jpg.2b1f0faaeb303fd0e0ec08f0c21aa361.jpg

 

Layer masked image:

M42_masks.thumb.jpg.474869ceddc4c71c07d9cfd17ca166d9.jpg

 

M42 image combined with a previous colour image, using the H-alpha as luminance (just trying it out)

M42_combo.thumb.jpg.12604c9a73b12323139f128a4a56ae63.jpg

 


That's weird, but I assume it must be the debayering that's brought signal into the G+B channels? If you were to stack just the red channel, I expect you'd be able to recover the same detail from the mono stack.


41 minutes ago, Shibby said:

That's weird, but I assume it must be the debayering that's brought signal into the G+B channels? If you were to stack just the red channel, I expect you'd be able to recover the same detail from the mono stack.

I must check a subframe later. I thought maybe it was light bleeding into the adjacent pixels on the sensor.


1 hour ago, Shibby said:

That's weird, but I assume it must be the debayering that's brought signal into the G+B channels? If you were to stack just the red channel, I expect you'd be able to recover the same detail from the mono stack.

 

If you look at the response curves for a typical DSLR, both the green and blue channels have some sensitivity to Ha. This curve for my camera suggests the green should show less than the blue at 656nm. Bear in mind that a camera with the filter removed will be even more sensitive on all three channels.

I've used the super-pixel mode in DSS, which demonstrates that it isn't caused by debayering.

There seems to be variation between Canon models, and even between different published curves for the same camera.

Canon_450D_Spectral_Response.jpg


2 minutes ago, Stub Mandrel said:

 

If you look at the response curves for a typical DSLR, both the green and blue channels have some sensitivity to Ha. This curve for my camera suggests the green should show less than the blue at 656nm. Bear in mind that a camera with the filter removed will be even more sensitive on all three channels.

I've used the super-pixel mode in DSS, which demonstrates that it isn't caused by debayering.

There seems to be variation between Canon models, and even between different published curves for the same camera.

Canon_450D_Spectral_Response.jpg

Very true, there is even an optimal way of processing Ha images taken with a DSLR, in terms of resolution, rather than debayering each channel.

One needs an accurate graph like the one above to determine a coefficient for each of the R, G and B pixels, and to multiply each with its own coefficient. This way you can get a mono version in Ha without debayering.

G and B pixels will contain more noise than R pixels, but if you dither and stack, the results will be better than the debayered version, at least in terms of resolved detail.

Now that I think about it, you really don't need a graph if you are taking flats with the Ha filter - the flats should provide each pixel with an adequate scale value to get a mono image.
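To make the idea concrete, here is a minimal sketch of that per-pixel scaling, assuming an RGGB Bayer mosaic held in a 2D numpy array (e.g. the raw mosaic loaded with rawpy) and coefficients read off a response curve or measured from an Ha flat. The function name and the coefficient values are placeholders, not real camera data:

```python
import numpy as np

def bayer_to_mono_ha(mosaic, r_coef=1.0, g_coef=4.0, b_coef=3.0):
    """Scale each CFA pixel by its channel's relative Ha sensitivity, without debayering.

    Coefficients are illustrative only; derive real ones from the camera's
    response at 656nm, or from a master flat taken through the Ha filter.
    """
    mono = mosaic.astype(np.float64)
    # Assumed RGGB layout: R at (0,0), G at (0,1) and (1,0), B at (1,1) of each 2x2 cell
    mono[0::2, 0::2] *= r_coef   # red-filtered pixels
    mono[0::2, 1::2] *= g_coef   # green-filtered pixels (first row of the cell)
    mono[1::2, 0::2] *= g_coef   # green-filtered pixels (second row of the cell)
    mono[1::2, 1::2] *= b_coef   # blue-filtered pixels
    return mono
```

Dividing by a master Ha flat amounts to much the same thing, since the flat already encodes each pixel's relative response, which is the point above.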


Hi guys

All of this is no longer necessary now that we have a tool that deals specifically with this very thing. I strongly recommend giving Astro Pixel Processor a go. In fact, I would go so far as to say that if you are shooting narrowband on a DSLR, it's not a luxury, it's an essential.

It has specific debayer algorithms for every filter. No need to use Super Pixel mode, and you can still calibrate your data. I've stopped using DSS altogether now, despite the fact I never once had a problem with it. It was the fact that APP is the only piece of software out there (as far as I can tell) that actually caters for this type of imaging that sold it for me. Beforehand, I tried everything: Super Pixels, IRIS, you name it, but nothing was as good or as hassle-free as APP.

Adam, you should give the trial a go. If you need any help with it I'd be more than happy to give you a few pointers. 


Quick screen grabs with my phone this morning as I was running late:

Here is a single unstacked sub, straight out of the camera, no processing. All channels have data. There's no way you would recover it all from the red channel alone.

 

FBF79DF7-6E91-4835-A4DB-48AD0B652640.jpeg

C94A8FC0-1B16-4347-944F-30522CC8A4D4.jpeg

3C0F75FF-F8BD-4420-BBA0-80416A8E887B.jpeg

DFE432F2-5799-4017-89CB-D6BCF48DD4C9.jpeg

4F900ACA-BE76-48EB-900E-823AA6D548C9.jpeg


1 hour ago, tooth_dr said:

Here is a single unstacked sub, straight out of the camera, no processing. All channels have data. There's no way you would recover it all from the red channel alone.

But surely that's been debayered to produce the RGB image you're looking at; if so, you can't tell here how much of the G signal, for example, comes from the green-filtered pixels. I'm not saying they're not picking anything up (Neil's graph proves that they do), just that there's the possibility that most of the signal is coming from the red pixels. Honestly, though, I don't know without looking closer at the debayering algorithm.

20 hours ago, Stub Mandrel said:

If you look at the response curves for a typical DSLR both the green and blue channels have sensitivity to Ha.

It's interesting. I wonder why you'd want a blue pixel to respond at a red wavelength. Is it just poor filtering?


1 minute ago, Shibby said:

But surely that's been debayered to produce the RGB image you're looking at; if so, you can't tell here how much of the G signal, for example, comes from the green-filtered pixels. I'm not saying they're not picking anything up (Neil's graph proves that they do), just that there's the possibility that most of the signal is coming from the red pixels. Honestly, though, I don't know without looking closer at the debayering algorithm.

It's interesting. I wonder why you'd want a blue pixel to respond at a red wavelength. Is it just poor filtering?

That's all over my head; I just normally use the red channel and discard the remainder.  In this case I chose my exposure time poorly, so I was making the best of wasted imaging time.  'Tis all!


Some light does indeed leak into the neighbouring G and B pixels (due to the CFA being lower quality compared to proper filters), so Adam is right: in order to extract the maximum amount of signal you would indeed need to use all 3 channels (although heavily weighted towards the red, obviously). But don't forget, this is probably a worst-case scenario we're looking at here, as M42 is one of the brightest objects up there, plus it's been over-exposed. If you examined one of your raw subs from a different target, you'd no doubt see much less signal in the G and B channels.

Up to now, Super Pixel mode has been the best tool available for this. It combines all 4 pixels into 1, thus taking the small amount of signal, but also all the noise, from the G & B pixels. A side effect is that you end up with an image 25% of the original size, so you lose 75% of your resolution along the way (which is liveable with, given the high pixel count in today's DSLRs). Another method is extracting just the red pixels and stacking those. With this method you lose a small bit of additional signal, but you lose considerably more noise too, so overall the SNR should be better. The downsides are that you can't calibrate your subs this way (a big deal if you ask me), and you're still left with the 75% loss in resolution as well. Of course, you can always drizzle to try and recover some, but it's far from ideal.
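For anyone who hasn't seen the two approaches spelled out, here is a rough sketch of both, assuming an RGGB mosaic in a 2D numpy array. Neither function is what DSS actually does internally; they just illustrate the trade-off described above:

```python
import numpy as np

def super_pixel(mosaic):
    """Collapse each 2x2 CFA cell (R, G, G, B) into one pixel: keeps all the
    signal, but also all the G/B noise, at 1/4 of the original image area."""
    h, w = mosaic.shape
    m = mosaic[:h - h % 2, :w - w % 2].astype(np.float64)  # trim to even dimensions
    return (m[0::2, 0::2] + m[0::2, 1::2] + m[1::2, 0::2] + m[1::2, 1::2]) / 4.0

def red_pixels_only(mosaic):
    """Keep only the red-filtered pixels: drops the small G/B Ha signal,
    but drops their noise too, again at 1/4 of the original area."""
    return mosaic[0::2, 0::2].astype(np.float64)
```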

Now, I don’t know exactly what Mabula’s algorithms in APP actually do, but I do know that no resolution is lost in the process, and the subs can be fully calibrated as well, so I presume what it does is analyse each channel for signal and noise, and then uses appropriate percentages of each one when it performs the stack.
 


15 minutes ago, Shibby said:

 

It's interesting. I wonder why you'd want a blue pixel to respond at a red wavelength. Is it just a poor filtering?

Chiefly because it's how the eye and the camera discriminate colour.

If you have 'near-perfect' RGB filters, as used by RGB imagers, they have passbands with very steep cutoffs, and even a gap for the sodium line between red and green.

That means that any monochromatic light source appears as either R, G or B. Only 'black body' radiation will come out as a realistic colour, which is why RGB imagers can capture the colour of stars OK.

Take a picture of a (proper, prism-generated) spectrum through imaging R, G and B filters and the result will pretty much be three blocks of pure red, green and blue with no shading between them. All narrowband sources of light will simply be allocated to one of the three primary colours.

A DSLR will record different proportions of R, G and B for each colour of the spectrum. These get passed on to the display device, and you see the spectrum as grading from colour to colour.

 


22 minutes ago, Xiga said:

Some light does indeed leak into the neighbouring G and B pixels (due to the CFA being lower quality compared to proper filters), so Adam is right: in order to extract the maximum amount of signal you would indeed need to use all 3 channels (although heavily weighted towards the red, obviously). But don't forget, this is probably a worst-case scenario we're looking at here, as M42 is one of the brightest objects up there, plus it's been over-exposed. If you examined one of your raw subs from a different target, you'd no doubt see much less signal in the G and B channels.

Up to now, Super Pixel mode has been the best tool available for this. It combines all 4 pixels into 1, thus taking the small amount of signal, but also all the noise, from the G & B pixels. A side effect is that you end up with an image 25% of the original size, so you lose 75% of your resolution along the way (which is liveable with, given the high pixel count in today's DSLRs). Another method is extracting just the red pixels and stacking those. With this method you lose a small bit of additional signal, but you lose considerably more noise too, so overall the SNR should be better. The downsides are that you can't calibrate your subs this way (a big deal if you ask me), and you're still left with the 75% loss in resolution as well. Of course, you can always drizzle to try and recover some, but it's far from ideal.

Now, I don’t know exactly what Mabula’s algorithms in APP actually do, but I do know that no resolution is lost in the process, and the subs can be fully calibrated as well, so I presume what it does is analyse each channel for signal and noise, and then uses appropriate percentages of each one when it performs the stack.
 

 

I'm gonna give APP a blast over the next few days.  I've a few narrowband images to process, and I'm hoping for a miracle lol


12 minutes ago, tooth_dr said:

 

I'm gonna give APP a blast over the next few days.  I've a few narrowband images to process, and I'm hoping for a miracle lol

I think you'll be impressed with what you get.

Also, APP has a nice DDP auto-stretching capability. If you're not an expert in image stretching, then you can just use the one produced by APP (one less thing to have to worry about!). Now, in expert hands, a DDP stretch can be bettered, but honestly, not by a great deal in my experience. Consider it about 90% of optimum, only done in about 2% of the time! Based on the length of exposures you've been getting lately with your Ha filter, I'd say once you stack them in APP and give it a basic DDP stretch, you'll probably be amazed at what you've got.
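For anyone curious what a DDP-type stretch actually does, here is a bare-bones illustration of the general idea (a hyperbolic curve that lifts faint signal while compressing the highlights). It is not APP's implementation, and the choice of pivot is just an assumption:

```python
import numpy as np

def ddp_stretch(img, pivot=None):
    """Very simple DDP-style stretch: out = img / (img + pivot), rescaled to 0..1."""
    img = np.asarray(img, dtype=np.float64)
    img = img - img.min()
    if pivot is None:
        pivot = np.median(img)        # pivot near the sky background level
    out = img / (img + pivot + 1e-12)
    return out / max(out.max(), 1e-12)
```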

Good luck!


Here's the California Nebula. Still from my early Ha attempts with a DSLR, so not great data. Stacked using 'super pixel' mode, so it is 1/4 the normal size, with no debayering and each channel coming solely from the pixels of that colour.

First the whole image, then split into R, G and B.

The RGB image looks surprisingly 'normal' - I aligned the red, green and blue histograms but did NOT stretch them individually. I then applied a two-stage linear stretch, raising the black point and dropping the white point.
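In case it's useful, a minimal sketch of that kind of black-point/white-point stretch might look like the following; the percentile values are arbitrary (not the ones I used), and applying it twice with gentler limits gives the two-stage version:

```python
import numpy as np

def linear_stretch(img, black_pct=5.0, white_pct=99.5):
    """Raise the black point and lower the white point, then rescale to 0..1."""
    img = np.asarray(img, dtype=np.float64)
    black = np.percentile(img, black_pct)
    white = np.percentile(img, white_pct)
    return np.clip((img - black) / max(white - black, 1e-12), 0.0, 1.0)
```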

5a578b88b589c_Hatestall.thumb.jpg.fe300af075f87022d035cc05f578d47b.jpg

As you would expect, R shows the nebulosity most clearly:

5a578bb30b59b_HatestRED.thumb.jpg.03728e3b4526793561f5df1c6f42a2c8.jpg

Easy to see the nebulosity on G even if it is faint:

5a578ba0d5e6e_HatestGREEN.thumb.jpg.b0fec7498a2b940e5fdba32fbf16a036.jpg

Less obvious on B, but still present:

5a578b928eede_HatestBLUE.thumb.jpg.49551bb8739849d7caceaaf41ea84e5c.jpg

 


9 hours ago, Xiga said:

I think you'll be impressed with what you get.

Also, APP has a nice DDP auto-stretching capability. If you're not an expert in image stretching, then you can just use the one produced by APP (one less thing to have to worry about!). Now, in expert hands, a DDP stretch can be bettered, but honestly, not by a great deal in my experience. Consider it about 90% of optimum, only done in about 2% of the time! Based on the length of exposures you've been getting lately with your Ha filter, I'd say once you stack them in APP and give it a basic DDP stretch, you'll probably be amazed at what you've got.

Good luck!

 

I've spent an hour or so on APP tonight and watched the basic tutorial on how to use it, so I will need pointers for sure, if you have time to go through how to process NB with a DSLR.


Most of the default settings should be fine as they are. So go through each tab in order, from 0) up to 6).

First ensure you have chosen the correct algorithm in the first tab 0). So choose Hydrogen Alpha. 

Then load in all your lights, flats and bias files. If you use darks, just leave them out for now. Also, don't bother making a Bad Pixel Map either. It's not hard to do, but it's something you can do later.

On the Calibrate tab, just use Median with no outlier rejection for each type of calibration file (to keep things simple for now). Leave all other settings alone. Hit Calibrate and wait for them to finish. Masters should automatically be generated, and for future stacks you should use them instead to speed things up. I also save my calibrated files as well (I need to when combining data from more than one night/filter, but for now don't bother, we're just doing a basic Ha stack). Once calibration is done, you should be able to select a sub from the list below, then choose 'l-calibrated' from the dropdown box at the top of the screen, and you should then see the effects of calibration on the file in real time. I find this feature amazing, as it will easily show you whether or not calibration has worked.

On the next 3 tabs, 3), 4) and 5), don't change any settings; just click the 'Analyze Stars', 'Start Registration' and 'Normalize Lights' buttons on each.

Then on tab 6), choose the 'Quality' setting under weights. This is nearly always the best one. Again, just do a median stack with no outlier rejection for now. Don't change anything else. Now hit 'Integrate' and wait for the stack to complete.

Once done, it will appear in the file list below. Click on it to show it. If it's not already selected, click on the Auto DDP box to the right; this should show you what your data looks like. Hopefully you'll be pleasantly surprised by what you see :-)  Beyond this, you can use the 'Remove Light Pollution' tool under tab 9) to remove any gradients present. Then you can save the image with the button at the top right; you can choose to save the stretched image or the linear image. I would say just use the stretched one for the time being. When you get really good at doing curves you can always come back and use the linear file instead. Then it's into Photoshop for everything else.

The process is a little different when you want to combine images from different nights or filters. We can chat about that one later sure.

Let me know how you get on. All the best mate.

Ciaran. 


5 hours ago, Xiga said:

Most of the default settings should be fine as they are. So go through each tab in order, from 0) up to 6).

First ensure you have chosen the correct algorithm in the first tab 0). So choose Hydrogen Alpha. 

Then load in all your lights, flats and bias files. If you use darks, just leave them out for now. Also, don't bother making a Bad Pixel Map either. It's not hard to do, but it's something you can do later.

On the Calibrate tab, just use Median with no outlier rejection for each type of calibration file (to keep things simple for now). Leave all other settings alone. Hit Calibrate and wait for them to finish. Masters should automatically be generated, and for future stacks you should use them instead to speed things up. I also save my calibrated files as well (I need to when combining data from more than one night/filter, but for now don't bother, we're just doing a basic Ha stack). Once calibration is done, you should be able to select a sub from the list below, then choose 'l-calibrated' from the dropdown box at the top of the screen, and you should then see the effects of calibration on the file in real time. I find this feature amazing, as it will easily show you whether or not calibration has worked.

On the next 3 tabs, 3), 4) and 5), don't change any settings; just click the 'Analyze Stars', 'Start Registration' and 'Normalize Lights' buttons on each.

Then on tab 6), choose the 'Quality' setting under weights. This is nearly always the best one. Again, just do a median stack with no outlier rejection for now. Don't change anything else. Now hit 'Integrate' and wait for the stack to complete.

Once done, it will appear in the file list below. Click on it to show it. If it's not already selected, click on the Auto DDP box to the right; this should show you what your data looks like. Hopefully you'll be pleasantly surprised by what you see :-)  Beyond this, you can use the 'Remove Light Pollution' tool under tab 9) to remove any gradients present. Then you can save the image with the button at the top right; you can choose to save the stretched image or the linear image. I would say just use the stretched one for the time being. When you get really good at doing curves you can always come back and use the linear file instead. Then it's into Photoshop for everything else.

The process is a little different when you want to combine images from different nights or filters. We can chat about that one later sure.

Let me know how you get on. All the best mate.

Ciaran. 

Thanks Ciaran, that's super. I didn't use the H-alpha setting, so that's where I was going wrong. Can't wait to try this out later.

