
Which QHY camera for DSO?


Orion1


1 hour ago, ollypenrice said:

Your debayering algorithm will try to interpolate the missing signal and 'fill in' the blank pixels so I guess that's where some of your G and B data is coming from.

Olly

Yes, but if you are doing Ha correctly with an OSC you should be converting the subs to mono and removing the blue and green channels prior to stacking, using super pixel mode.

But in essence you are correct: there is no Ha signal in anything other than red.


47 minutes ago, Adam J said:

Yes, but if you are doing Ha correctly with an OSC you should be converting the subs to mono and removing the blue and green channels prior to stacking, using super pixel mode.

But in essence you are correct: there is no Ha signal in anything other than red.

Should I be manually converting them to mono (i.e. just using the red channel) in PS and then stacking these? Or does super pixel mode do this automatically? My last image, NGC 7000, was just the full RGB images stacked with normal settings in DSS.


37 minutes ago, tooth_dr said:

Should I be manually converting them to mono (i.e. just using the red channel) in PS and then stacking these? Or does super pixel mode do this automatically? My last image, NGC 7000, was just the full RGB images stacked with normal settings in DSS.

I use a program called ISIS to manipulate the RAW files.


1 minute ago, Adam J said:

I use a program called ISIS to manipulate the RAW files.

An unfortunate name for software. I had a look there; it talks of spectrographs, etc. What does it do specifically that can't be done in other programs?


1 minute ago, tooth_dr said:

An unfortunate name for software. I had a look there; it talks of spectrographs, etc. What does it do specifically that can't be done in other programs?

Nothing spectacular; for me it's just a free program that can manipulate Canon RAW files and remove a couple of channels. From there I save as a FITS, stack in DSS, then process in PS CS2.
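
Roughly what that workflow amounts to, as a hedged sketch in Python rather than ISIS (rawpy and astropy are my choice of tools here, not necessarily Adam's; file names are placeholders and an RGGB colour filter array is assumed):

```python
# Sketch: pull only the red Bayer samples out of a Canon CR2 and save
# them as a mono FITS ready for stacking. Assumes an RGGB mosaic.
import rawpy                      # my tool choice, standing in for ISIS
import numpy as np
from astropy.io import fits

with rawpy.imread("sub_001.cr2") as raw:             # placeholder file name
    bayer = raw.raw_image_visible.astype(np.float32)

red = bayer[0::2, 0::2]   # red sits at (0,0) of each 2x2 RGGB quad
fits.PrimaryHDU(red).writeto("sub_001_red.fits", overwrite=True)
```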


2 minutes ago, Adam J said:

Nothing spectacular; for me it's just a free program that can manipulate Canon RAW files and remove a couple of channels. From there I save as a FITS, stack in DSS, then process in PS CS2.

Thanks Adam. I might just try doing that to see if it makes any difference. I get a slight signal in my green channel, and virtually none in the blue channel. 


Just out of interest, I took a single .cr2 file (of the Monkeyhead I did recently) and split it up into the 4 individual 'RGGB' files in Iris. Did some stretching, and the red file was naturally where most of the signal was. Blue: virtually nothing, but there was a small but significant signal in the green files (can make out the Monkeyhead outline). The original was a 360s sub at ISO 800 on a full-spectrum Canon 1100d with a 2" Baader 7nm Ha filter only. Could it be that some Ha light is getting through the green part of the Bayer matrix? Or is the Canon doing something weird?
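
For reference, the four-plane RGGB split described here corresponds to plain array slices (with `bayer` loaded from the .cr2 as in the earlier sketch; RGGB order assumed, and some Canons differ):

```python
# Iris-style split of the mosaic into its four colour planes (RGGB assumed)
r  = bayer[0::2, 0::2]   # red
g1 = bayer[0::2, 1::2]   # first green
g2 = bayer[1::2, 0::2]   # second green
b  = bayer[1::2, 1::2]   # blue
```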

Confused now

Louise


12 minutes ago, Thalestris24 said:

Just out of interest, I took a single .cr2 file (of the Monkeyhead I did recently) and split it up into the 4 individual 'RGGB' files in Iris. Did some stretching, and the red file was naturally where most of the signal was. Blue: virtually nothing, but there was a small but significant signal in the green files (can make out the Monkeyhead outline). The original was a 360s sub at ISO 800 on a full-spectrum Canon 1100d with a 2" Baader 7nm Ha filter only. Could it be that some Ha light is getting through the green part of the Bayer matrix? Or is the Canon doing something weird?

Confused now

Louise

Pretty much exactly my experience. I could see the nebula in the green channel but only a handful of bright stars in the blue channel. Cross communication between pixels?


2 minutes ago, tooth_dr said:

Pretty much exactly my experience. I could see the nebula in the green channel but only a handful of bright stars in the blue channel. Cross communication between pixels?

I honestly don't know for sure! My gut feeling is that it's optical rather than electronic, since you'd expect to see signal in the blue channel also if it was electronic. Where is a Canon engineering expert when you want one? ;)

Louise


2 hours ago, Adam J said:

Yes, but if you are doing Ha correctly with an OSC you should be converting the subs to mono and removing the blue and green channels prior to stacking, using super pixel mode.

But in essence you are correct: there is no Ha signal in anything other than red.

I thought I was saying the same thing?  

I said, ' I'd have thought, then, that the imager would strip out the G and B entirely. Isn't that what people do when shooting Ha with an OSC camera? I've never done it so I don't know the process. That would mean literally 'red only' in the final image.'

Olly


5 minutes ago, ollypenrice said:

I thought I was saying the same thing?  

I said, ' I'd have thought, then, that the imager would strip out the G and B entirely. Isn't that what people do when shooting Ha with an OSC camera? I've never done it so I don't know the process. That would mean literally 'red only' in the final image.'

Olly

Olly - I'm lazy! But you're right, of course, I really should isolate the red channel and I suppose it's easy enough to do in Iris. Still, I'm perplexed as to why Ha shows up in the green channel - but it's academic really.

Louise


4 hours ago, Stub Mandrel said:

The Bayer matrix is designed to mimic the eye, so the sensitivity curves for each colour are smooth; this means any one wavelength produces a particular R:G:B ratio, which gives the great subtlety of colour we see with our eyes and expect in photos.

LRGB imaging uses filters with very 'square' edges to the curves and minimal overlap. This gives maximum sensitivity but it means, for example, that any monochromatic light will simply show up as red, green or blue.

This doesn't matter too much with 'black body' radiation: as produced by stars, for example, it has a curved profile, so the exact colour will register differently even through hard-edged filters.

Such RGB filters won't distinguish well between closely spaced colours, and not at all between, say, monochromatic deep red and orange, while a DSLR would clearly show the difference.

This is probably why RGB imagers say they find processing easier - their colour data should give very strong colours, but with less subtlety than DSLR data.

It should be possible to see more subtle distinctions in nebula colour in DSLR data while LRGB data should make bold distinctions like that between OIII and Ha starker.

Yes, but the Bayer Matrix is not just designed to 'mimic the eye.' It is designed to mimic the eye's response to predominantly reflected light passed by the atmosphere.* Now there is no point in trying to replicate this in astronomical targets because their light is mostly from emission and, when it isn't, it's from a different kind of reflection/scattering than is seen on Earth. Essentially, terrestrial and celestial colour balance don't seem, to me, to be the same animals. They are not entirely unrelated, but they don't strike me as being the same.

A simple question with regard to balancing colour filters for astrophotography: why would you shoot twice as much green as red or blue? On what scientific basis would you do so?

Olly

* Edit, an afterthought! So consider a camera designed for astronauts on a craft with a spectrally neutral set of windows. You want it to mimic their eyes, so RGGB or RGB?


4 hours ago, Adam J said:

Yes but if you are doing H-a correctly via an osc you should be converting the subs to mono and removing the blue and green channels prior to stacking using super pixel mode.  

But in essence you are correct there is no H-a signal in anything other than red.

Not so, see my example above. This is done using Bayer drizzle so the R, G and B signals are kept completely separate. The G & B are stretched the same amount as the red BUT only after aligning the histograms.


17 minutes ago, Stub Mandrel said:

Not so, see my example above. This is done using Bayer drizzle so the R, G and B signals are kept completely separate. The G & B are stretched the same amount as the red BUT only after aligning the histograms.

I don't want to be mean but the method you're proposing has produced results which are not very encouraging. OK, not much data, so let's get back to theory. What exactly is it that you are stretching in the G and B channels? My study of the passbands strongly suggests that it isn't light from the object because, in a nutshell, there isn't any. So are you not just stretching the essentially fictitious interpolation from the debayering routine? And won't this just be a mixture of meaningless mush from the G and B channels and real signal from the Red/Ha? And has the debayering routine been written to ignore, in this case, the meaningless mush from G and B? Surely not?

So far as I can see, shooting Ha through a bayer matrix records real data only on the red pixels. The rest is fairyland. :eek:

Olly


59 minutes ago, ollypenrice said:

Yes, but the Bayer Matrix is not just designed to 'mimic the eye.' It is designed to mimic the eye's response to predominantly reflected light passed by the atmosphere.* Now there is no point in trying to replicate this in astronomical targets because their light is mostly from emission and, when it isn't, it's from a different kind of reflection/scattering than is seen on Earth. Essentially, terrestrial and celestial colour balance don't seem, to me, to be the same animals. They are not entirely unrelated, but they don't strike me as being the same.

A simple question with regard to balancing colour filters for astrophotography: why would you shoot twice as much green as red or blue? On what scientific basis would you do so?

Olly

* Edit, an afterthought! So consider a camera designed for astronauts on a craft with a spectrally neutral set of windows. You want it to mimic their eyes, so RGGB or RGB?

No, a Bayer filter is designed to mimic the eye's response, which it does very well. It is irrelevant how the light gets to the eye.

For your astronauts, RGB or RGGB is irrelevant; what matters is that the R:G:B ratio is the same for both monochromatic and continuous light sources.

Example: the eye's response curve sees 'cyan' whether presented with monochromatic blue-green light or a roughly equal mix of blue and green light.

A typical Bayer filter will give the same result.

Baader astro filters will also show cyan, as they are designed to have a small overlap around 500nm.

Make the light slightly bluer or greener and the eye, and the OSC, will follow the change. The astro filters will just give pure blue or green.

 

For an extreme example, consider yellow light.

Mix red and green light and the eye, the OSC and the Baader filters will all give a yellow result.

Use monochromatic yellow sodium light, and the Baader filters won't register anything.

Use monochromatic orange light, and Baader will give red.

Use monochromatic lime-yellow, and Baader will give pure green.

 

The 'stepped' astro filters also have multiple equivalent solutions for some colours, meaning they cannot distinguish the same range of colour tones as an OSC or the eye.

This does make both eye and OSC less efficient. It's horses for courses.

 

 

 

I have already invented the ideal Bayer pattern for OSC astronomy, and I will now destroy any chance of me or anyone else patenting it.

  • One pixel has a clear filter for L.
  • One pale magenta pixel with a shallow dip in the curve, dropping from 100% at violet to about 50% pass in mid-green and rising back to 100% in deep red.
  • One pale yellow with a progressive cut from mid-green down to 50% as you reach violet.
  • One pale cyan with 100% through to green-blue, dropping to 50% in deep red.

My back-of-an-envelope attempt at integrating the response curves (which would depend on the actual realisation of this arrangement) gives over 82% transmission but still yields a unique set of CMY ratios across the three coloured pixels for every wavelength (rather higher than a standard CMYL Bayer pattern) and certainly more efficient than separate R, G and B filters.
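
A toy numeric check on that envelope figure (the curve shapes below are rough guesses at the description above, not measured data, so the exact percentage is illustrative only):

```python
# Integrate guessed L/CMY transmission curves over the visible band.
import numpy as np

wl = np.linspace(400, 700, 301)                       # wavelength, nm
L       = np.ones_like(wl)                            # clear pixel
magenta = np.interp(wl, [400, 550, 700], [1.0, 0.5, 1.0])
yellow  = np.interp(wl, [400, 550, 700], [0.5, 1.0, 1.0])
cyan    = np.interp(wl, [400, 500, 700], [1.0, 1.0, 0.5])

avg = np.mean([c.mean() for c in (L, magenta, yellow, cyan)])
print(f"average transmission ~ {avg:.0%}")   # ~86% with these guessed curves
```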

As a step in the right direction, why not have 'square' CMY filters, which would let mono imagers gather colour data twice as fast?

 


1 hour ago, ollypenrice said:

I thought I was saying the same thing?  

I said, ' I'd have thought, then, that the imager would strip out the G and B entirely. Isn't that what people do when shooting Ha with an OSC camera? I've never done it so I don't know the process. That would mean literally 'red only' in the final image.'

Olly


I think it is subtly different.

You said:

"Your debayering algorithm will try to interpolate the missing signal and 'fill in' the blank pixels so I guess that's where some of your G and B data is coming from"

 

Ideally you remove the green and blue prior to debayering so that the noise in those channels does not influence the red channel during the debayering process.

Essentially, if you strip them from each raw sub and use super pixel mode then there will be no bleed-over into the green or blue, and no noise from the green and blue will bleed into the red.

So yes, you do only want the red in the final image, but you want to remove the rest prior to debayering, not after. So it actually matters when in the process you remove the blue and green.
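
A minimal sketch of what super pixel mode does, plus the red-only variant described above (RGGB assumed; DSS's actual implementation may differ in detail):

```python
import numpy as np

def superpixel_rgb(bayer):
    # Standard super-pixel debayer: each 2x2 RGGB quad becomes one RGB
    # pixel, with no interpolation across quads.
    r = bayer[0::2, 0::2]
    g = 0.5 * (bayer[0::2, 1::2] + bayer[1::2, 0::2])  # average both greens
    b = bayer[1::2, 1::2]
    return np.dstack([r, g, b])

def superpixel_ha(bayer):
    # Red-only variant for Ha: keep just the red samples; the green and
    # blue samples never enter the stack at all.
    return bayer[0::2, 0::2]
```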

 

 


20 minutes ago, ollypenrice said:

I don't want to be mean but the method you're proposing has produced results which are not very encouraging. OK, not much data, so let's get back to theory. What exactly is it that you are stretching in the G and B channels? My study of the passbands strongly suggests that it isn't light from the object because, in a nutshell, there isn't any. So are you not just stretching the essentially fictitious interpolation from the debayering routine? And won't this just be a mixture of meaningless mush from the G and B channels and real signal from the Red/Ha? And has the debayering routine been written to ignore, in this case, the meaningless mush from G and B? Surely not?

So far as I can see, shooting Ha through a bayer matrix records real data only on the red pixels. The rest is fairyland. :eek:

Olly

Olly, THERE IS NO DEBAYERING IN THOSE IMAGES.


The blue channel has about 10% of the QE of the red channel for Ha according to the source below, so if shooting at ISO 800, the blue channel is working at about ISO 100.

The green channel probably has only about 2-3% according to this (I suspect a bit more in practice), so green is somewhere down below ISO 50.
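
A back-of-envelope version of that scaling (the relative QE values are the estimates quoted above):

```python
# Relative QE at 656 nm simply scales the effective speed of each channel.
iso = 800
relative_qe = {"red": 1.00, "blue": 0.10, "green": 0.025}   # estimates above
for channel, qe in relative_qe.items():
    print(f"{channel}: effective ISO ~ {iso * qe:.0f}")
# red: 800, blue: 80, green: 20 -- in line with the figures quoted above
```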

But they do both get a signal.

It is this 'bleed-through' that helps us judge different subtle shades. That bump in the red under blue is very important - it's why we perceive purple (red+blue) as very close to violet. Note there are a couple of tricky areas on the curve where the camera has poorer discrimination than the eye, as R:G:B ratios are duplicated (or at least approximated).

[Image: Canon 450D spectral response curves]


32 minutes ago, ollypenrice said:

I don't want to be mean but the method you're proposing has produced results which are not very encouraging. OK, not much data, so let's get back to theory. What exactly is it that you are stretching in the G and B channels? My study of the passbands strongly suggests that it isn't light from the object because, in a nutshell, there isn't any. So are you not just stretching the essentially fictitious interpolation from the debayering routine? And won't this just be a mixture of meaningless mush from the G and B channels and real signal from the Red/Ha? And has the debayering routine been written to ignore, in this case, the meaningless mush from G and B? Surely not?

So far as I can see, shooting Ha through a bayer matrix records real data only on the red pixels. The rest is fairyland. :eek:

Olly

Olly is correct: anything left in the blue and green is simply noise mixed with Ha signal from the red channel by the debayering process. It's trying to interpolate and failing, because it's simply not designed for the case where the image it's trying to debayer is monochromatic (in the sense that the data is only contained in the red channel).

If you do have any residual signal due to leakage in those channels then it's not going to be worth keeping in the processed image; it will just bring the S/N down.


1 minute ago, Adam J said:

Olly is correct: anything left in the blue and green is simply noise mixed with Ha signal from the red channel by the debayering process. It's trying to interpolate and failing, because it's simply not designed for the case where the image it's trying to debayer is monochromatic (in the sense that the data is only contained in the red channel).

See above, the examples are from a stack using bayer drizzle  which does not involve debayering.

The G and B filters in an OSC are NOT hard-cornered like astro-filters and they leak some light across the whole spectrum.


8 minutes ago, Adam J said:

That is why you remove the blue and green from each sub prior to running them through DSS and using super pixel debayering. It's the only way to fully isolate the red channel.

To quote Olly "No! No! No!".

Super pixel is an approach that reduces the resolution to a quarter and eliminates the need for debayering to produce colour images.

For Ha on an OSC you can use 'Bayer drizzle', which keeps full resolution BUT only uses (drizzles) data from green pixels to the green channel, red to red, etc.

 

http://deepskystacker.free.fr/english/technical.htm
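
Roughly the idea, as a toy sketch (whole-pixel dither shifts only; real drizzle, including DSS's, also handles sub-pixel placement and drop kernels, so this is illustrative rather than the actual algorithm):

```python
import numpy as np

def drizzle_red(bayer, out, weight, dx, dy):
    """Deposit one sub's red samples into a full-resolution red plane,
    shifted by the whole-pixel part of the dither (dx, dy). Only ~1/4 of
    output positions receive data from any one sub."""
    h, w = bayer.shape
    ys, xs = np.mgrid[0:h:2, 0:w:2]        # RGGB: red sits at (even, even)
    ys = ys.ravel() + dy
    xs = xs.ravel() + dx
    ok = (ys >= 0) & (ys < h) & (xs >= 0) & (xs < w)
    src = bayer[0::2, 0::2].ravel()        # row-major, matches ys/xs order
    np.add.at(out, (ys[ok], xs[ok]), src[ok])
    np.add.at(weight, (ys[ok], xs[ok]), 1.0)

# After all subs: red_plane = out / np.maximum(weight, 1)
```

The sparse coverage per sub (each sub fills only about a quarter of the output positions) is what drives the signal-to-noise discussion that follows.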

 


35 minutes ago, Stub Mandrel said:

To quote Olly "No! No! No!".

Super pixel is an approach that reduces the resolution to a quarter and eliminates the need for debayering to produce colour images.

For Ha on an OSC you can use 'Bayer drizzle', which keeps full resolution BUT only uses (drizzles) data from green pixels to the green channel, red to red, etc.

 

http://deepskystacker.free.fr/english/technical.htm

 

What I do know is that using Bayer drizzle produces inferior results to super pixel for me, having tried both when processing Ha.

In effect, by trying to recover resolution with that method you will, and do, take a hit to signal-to-noise ratio in the final stacked image.

What's happening is that not all subs contribute fully to all pixels in your "full resolution image". You are relying on random dither between subs to provide data for all 4 sub-pixel locations, and as such not all 4 sub-pixel locations take data from all subs. They can't, or you would just end up with 4 identical pixels, so the pixel-level noise in the image will be much higher than with super pixel mode, where all subs contribute to all pixels. It's like reducing the number of subs you have by a factor of 4. It would only work if the signal-to-noise in your individual subs was very, very good.

I suspect that this method may work better with a bright broadband target than it does with a faint narrowband target, mind you.

Ultimately you can't get something for nothing: if you gain resolution, you lose out on signal-to-noise.
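
In round numbers (a sketch of the argument, assuming perfectly random dither and ignoring read noise):

```python
# Stack SNR grows as sqrt(number of subs contributing to a pixel).
import math

n_subs = 18
snr_superpixel = math.sqrt(n_subs)        # every sub feeds every pixel
snr_drizzle = math.sqrt(n_subs / 4)       # each pixel sees ~1/4 of the subs
print(snr_superpixel / snr_drizzle)       # 2.0 -> roughly a 2x SNR penalty
```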

In terms of your chart, it does indeed show that some Ha signal will pass to the blue, but nothing significant to the green. In any case, the resultant signal-to-noise in those channels will be so poor as to render their inclusion in the final image counterproductive. Whatever method you use to accomplish it, removing those channels will produce a better Ha image.

Super pixel actually ensures the best signal-to-noise in a narrowband image, and in this instance I will take that over resolution any day.

 


55 minutes ago, Adam J said:

What I do know is that using Bayer drizzle produces inferior results to super pixel for me, having tried both when processing Ha.

In effect, by trying to recover resolution with that method you will, and do, take a hit to signal-to-noise ratio in the final stacked image.

What's happening is that not all subs contribute fully to all pixels in your "full resolution image". You are relying on random dither between subs to provide data for all 4 sub-pixel locations, and as such not all 4 sub-pixel locations take data from all subs. They can't, or you would just end up with 4 identical pixels, so the pixel-level noise in the image will be much higher than with super pixel mode, where all subs contribute to all pixels. It's like reducing the number of subs you have by a factor of 4. It would only work if the signal-to-noise in your individual subs was very, very good.

I suspect that this method may work better with a bright broadband target than it does with a faint narrowband target, mind you.

Ultimately you can't get something for nothing: if you gain resolution, you lose out on signal-to-noise.

In terms of your chart, it does indeed show that some Ha signal will pass to the blue, but nothing significant to the green. In any case, the resultant signal-to-noise in those channels will be so poor as to render their inclusion in the final image counterproductive. Whatever method you use to accomplish it, removing those channels will produce a better Ha image.

Super pixel actually ensures the best signal-to-noise in a narrowband image, and in this instance I will take that over resolution any day.

 

I suspect you are quite correct.

Bayer Drizzle is really intended for full-spectrum images where you have plenty of decent data. I think you are right that super-pixel is best, and that's what I used for finished images (on the two occasions I have attempted to use it).

My original intent was simply to show what appears in OSC green and blue through a Ha filter, not to claim it has any great merit as data, as several people said they hadn't seen such images.

They responded by saying that what I showed was just a result of debayering, which it wasn't. This is because they are used to separate RGB filters which have much narrower passbands.

 

I found a post I made in August with images created using super-pixel mode, after I had added cooling to my camera:

 


Edit, just a thought.

The graph I posted above is for an unmodified DSLR; mine is astro-modded.

https://www.bintel.com.au/product/zwo-asi224mc-c-cooled-colour/qe-asi224mc-c/

shows a graph for an OSC astro-camera where the green channel has 20% of the response of the red channel to Ha; that's only about two stops down and is likely to produce usable data with a cooled camera.
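
Quick check on that 'two stops' figure (each stop is a factor of 2 in signal):

```python
import math
print(math.log2(1 / 0.20))   # ~2.3 stops below the red channel at 20% response
```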

Found this response curve for a modded 350D on the astrosurf website:

[Image: spectral response curves for a modded Canon 350D]


12 minutes ago, Stub Mandrel said:

I suspect you are quite correct.

Bayer Drizzle is really intended for full-spectrum images where you have plenty of decent data. I think you are right that super-pixel is best, and that's what I used for finished images (on the two occasions I have attempted to use it).

My original intent was simply to show what appears in OSC green and blue through a Ha filter, not to claim it has any great merit as data, as several people said they hadn't seen such images.

They responded by saying that what I showed was just a result of debayering, which it wasn't. This is because they are used to separate RGB filters which have much narrower passbands.

I agree, but I do think that something odd is going on in those images you processed. This is what I got when I tried it just now with super pixel, without my normal process of separating the channels in ISIS.

This is the BLUE channel from a stack of 18 subs.

[Image: blue channel test, stacked subs (Soul Nebula)]

This is the red channel from a SINGLE RED sub.

[Image: red channel test, single sub (Soul Nebula)]

It's incomparable. I don't know how you ended up with so much signal in your blue and green channels in the images you posted. I would hate to think what the signal from a single blue sub would have looked like if that was the stack of 18...

This is a cooled, full-spectrum 550D.

