
Correct me if I am wrong . . .


ayoussef


So Olly posted a reply to a topic I opened a few days ago, and it made me think for a few days. It was related to the exposure time of OSC vs LRGB . . .

My understanding was that OSC will save you 77% of the exposure time you need with a mono CCD using LRGB filters. Well, it seems I was wrong, and here is my understanding of why:

Let's say you have the same sensor size and type in a mono CCD and an OSC: 1000x1000 pixels, each pixel 9 microns. When photons pass through the Bayer filter in the OSC case, the green pixels get 500,000 photons, the blue 250,000 and the red 250,000 at any one instant.
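A rough sketch of the pixel-count arithmetic behind those numbers, assuming a standard RGGB Bayer mosaic (the sensor size is just the illustrative one above, not any particular camera):

```python
# Pixel-count split for a hypothetical 1000 x 1000 sensor with an RGGB
# Bayer mosaic: each 2x2 block holds 2 green, 1 red and 1 blue photosite.
width, height = 1000, 1000
total_pixels = width * height        # 1,000,000 photosites

green_pixels = total_pixels // 2     # 500,000
red_pixels = total_pixels // 4       # 250,000
blue_pixels = total_pixels // 4      # 250,000

print(green_pixels, red_pixels, blue_pixels)
```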

Now, the final image from the OSC is built on the probability of what each pixel was supposed to receive from the two other colours that were blocked. The more you depend on that probability, the lower your SNR will be: the pixel that receives green is turned into an RGB value during processing based on the values of its neighbouring red and blue pixels, so basically 77% of the image is built on assumptions.

Now with a monochrome CCD, each pixel is definitely getting the colour we know it is supposed to get, because the filter is blocking the other wavelengths.

So if you want to get better SNR from the OSC, with the same telescope and the same sensor size, you need to shrink each pixel to 5.2 microns. By doing that, yes, you triple your chance of correctly guessing the two other colours' photons that were supposed to be collected by, say, the green pixel. The problem with that concept is that quantum mechanics interferes: the smaller the pixel, the fewer electrons its surface will hold, roughly 33% of the number in the 9 micron pixel, and according to quantum mechanics you will have higher uncertainty in the smaller pixel than in the bigger one due to the Schrödinger probability wave.
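For what it's worth, a minimal sketch of where the 5.2 micron and 33% figures seem to come from, assuming collected charge simply scales with pixel area (ignoring microlenses, fill factor and full-well limits):

```python
import math

# Three small pixels covering the area of one 9 micron pixel: the pitch
# shrinks by a factor of sqrt(3), and the signal each one collects falls
# to roughly a third, assuming charge scales with photosite area.
big_pitch = 9.0                               # microns
small_pitch = big_pitch / math.sqrt(3)        # ~5.2 microns
area_ratio = (small_pitch / big_pitch) ** 2   # ~0.33, i.e. ~33% of the charge

print(round(small_pitch, 1), round(area_ratio, 2))
```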

So in theory, even if you have roughly the same number of electrons on the surface of the two sensors, they are isolated in smaller entities.

I know there are a lot of other considerations to take into account, like dark current and read noise, but I am just speaking roughly about basic concepts.

Please correct me if my assumptions are wrong.


I'm not too sure about the numbers etc., but I will say this: from my UK site, with UK weather, a manual filter wheel and manual focusing, in the last 6-8 months I've managed about 8-10 Ha images, 1-2 bi-colour images and one LRGB. No doubt if you have all the gear and the skies to match then mono may well be quicker. I'm still yet to see the evidence under my skies with my gear.

That said, I prefer narrowband: if I only get two nights in a month and they land on a fullish moon, all hope is not lost.


I think it is more subtle and depends on the de-bayering algorithm.

The value of, say, a blue pixel for R and G isn't calculated according to the two green and one red pixel that make up its 'unit', but (typically) the four green and four red pixels that surround it.

Similarly a red pixel has four blue and four green neighbours.

A green pixel only has two red and two blue - but there are twice as many of them.

So, the maths is much less simple.
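To make the neighbour counting concrete, here is a toy sketch of plain bilinear de-bayering at a single blue photosite, assuming an RGGB mosaic (real algorithms are considerably more sophisticated, as noted below):

```python
import numpy as np

# At a blue photosite in an RGGB mosaic, the missing green value is the
# mean of the four orthogonal green neighbours and the missing red value
# is the mean of the four diagonal red neighbours; blue is measured directly.
def interpolate_at_blue(cfa, y, x):
    green = np.mean([cfa[y - 1, x], cfa[y + 1, x], cfa[y, x - 1], cfa[y, x + 1]])
    red = np.mean([cfa[y - 1, x - 1], cfa[y - 1, x + 1],
                   cfa[y + 1, x - 1], cfa[y + 1, x + 1]])
    return red, green, cfa[y, x]

# Tiny synthetic CFA frame; (1, 1) is a blue photosite in an RGGB layout.
cfa = np.arange(16, dtype=float).reshape(4, 4)
print(interpolate_at_blue(cfa, 1, 1))
```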

Add into this that the actual algorithms are 'intelligent' and aware that the RGB proportions of pixels will generally be in similar proportion to nearby areas of the image and that they also attempt to identify edges and avoid creating artefacts.

Essentially the 77% missing rule would apply if the levels for each pixel were random, but they aren't: they are highly correlated.

All this means that a good debayering algorithm can probably retrieve the 'missing' data with much greater efficiency and I guess it is progress with this aspect of image processing that explains why the latest RGB cameras are producing such outstandingly detailed images as evidenced in other threads.


There are two entirely different issues being discussed here, and my earlier post only discussed one of them, the simpler one - and in my view the more important one.

In OSC images the algorithms try to guess the missing information which is lost when a structure in, say, Ha passes unseen through the green and blue pixels before being picked up again by the red. I understand (from Craig Stark and others who know far more about this than I) that the different debayering software packages are more or less adept at this guesswork.

As a practising OSC and mono imager I never had much argument against the competence of these algorithms but a purist is bound to prefer the findings of a pure luminance layer. Personally I don't think this is the crux of the OSC-Mono debate.

The crux is this: luminance, in theory, grabs all three colours at once. I'm only talking S/N ratio here, not resolution. I find that it really grabs about 4x the signal that you get when it's split between R and G and B in OSC or RGB imaging. So during your L shoot you are working three or four times faster than you are when shooting your colour. This is why LRGB is faster. If you have a short window with a mono camera, shorten your subs, shoot LRGB, and you will beat OSC in the ratio 6 to 4 in my view.
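For anyone wanting to plug in their own figures, a minimal sketch of this kind of time-budget comparison (the 4x luminance rate and the equal time split are assumptions to be replaced with your own numbers, and the resulting ratio moves with them):

```python
# Back-of-envelope time-budget comparison, assuming a luminance sub
# collects about 4x the signal of a single colour-filtered sub and that
# the LRGB session splits its time equally across the four filters.
session_minutes = 240
l_rate, colour_rate = 4.0, 1.0                 # assumed signal units per minute

lrgb_signal = (session_minutes / 4) * l_rate + 3 * (session_minutes / 4) * colour_rate
osc_signal = session_minutes * colour_rate     # no dedicated luminance channel

print(lrgb_signal, osc_signal, round(lrgb_signal / osc_signal, 2))
```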

Olly


Quote: "I think it is more subtle and depends on the de-bayering algorithm . . ."

The debayering algorithm is a good point that I didn't think of.


Quote: "The crux is this: luminance, in theory, grabs all three colours at once . . . This is why LRGB is faster."

So if you have two cameras, a mono and an OSC, both based on the same sensor, let's say the ICX834, would you say that taking luminance with the mono and combining it with the OSC images is an efficient workflow?


Quote: "This is why LRGB is faster. If you have a short window with a mono camera, shorten your subs, shoot LRGB, and you will beat OSC in the ratio 6 to 4 in my view."

This is my whole point, Olly. If I were to shoot LRGB, LRGB, LRGB with my set-up, having a manual focuser and a manual filter wheel, I need to focus between filters (and yes, I do need to: blue is way different from the others for focus) and re-centre the FOV (I'm not sure if this is a problem specific to manual filter wheels, but it's one I have nonetheless, albeit with small adjustments). Then there's the added mither of having to babysit the mount, needing to make said adjustments every 5/10/15 or whatever minutes.

Yes, with a fully automated set-up (right down to filter wheel, focuser etc.) it most probably is quicker, but I wonder what percentage of us have that?


Quote: "So if you have two cameras, a mono and an OSC, both based on the same sensor, let's say the ICX834, would you say that taking luminance with the mono and combining it with the OSC images is an efficient workflow?"

It would speed things up, but it's not very flexible, as for half of the month the OSC camera would be redundant due to moonlight. It's quicker to have two mono cameras: shoot L/L, then R/B, then finally G/G. And when Mr Moon comes out to play, go for Ha/Ha (but it can get a bit expensive on filters!).


Quote: "If I were to shoot LRGB, LRGB, LRGB with my set-up, having a manual focuser and a manual filter wheel, I need to focus between filters . . ."

Then yes, you have a problem if you need to refocus between filters. But here's the rub: you still have this problem with OSC because you can't focus between pixels. If your blue is not fully corrected then it isn't going to be in focus on an OSC chip either and you would do better with mono where you can focus it correctly. Ideally you'd then merge colours after registering your errant blue channel in Registar which would resize it to fit the others perfectly. In reality the resize would be tiny and might make no sensible difference. 

Quote: "Would you say that taking luminance with the mono and combining it with the OSC images is an efficient workflow?"

I was in exactly this position for several years when I had both OSC and mono editions of the Atik 4000. I often captured Ha in the mono to combine with OSC. This worked well.

But there's a problem with capturing OSC and mono luminance on a dual rig, because you never get enough colour. If you want equal doses of R and G and B and L, you don't get them with an OSC-mono pairing on a dual rig. In three hours you get 3 hours of luminance but only 45 minutes blue, 45 minutes red and 90 minutes green - in terms of signal equivalence, that is. Or so it seems to me. I know a couple of people who've tried this approach and both abandoned it as inefficient. I never did L-OSC. I sometimes did LRGB plus OSC. That was nice.
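A quick sketch of the signal-equivalence arithmetic behind those figures, assuming a standard RGGB Bayer split (1/4 of the photosites red, 1/2 green, 1/4 blue):

```python
# Colour "filter-time equivalence" for an OSC camera running alongside a
# mono luminance camera on a dual rig, assuming an RGGB Bayer split.
session_hours = 3.0

red_equiv_min = session_hours * 0.25 * 60    # ~45 minutes of red signal
green_equiv_min = session_hours * 0.50 * 60  # ~90 minutes of green signal
blue_equiv_min = session_hours * 0.25 * 60   # ~45 minutes of blue signal

print(red_equiv_min, green_equiv_min, blue_equiv_min)
```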

On our present dual rig we run two mono cameras and derive the benefits Rob mentions above, plus the benefit of being able to get LRGB in equal quantities when that's what we want.  To my mind two monos beat a mono and OSC on all counts. (On a dodgy night I run one scrolling LRGB and the other scrolling BGRL, too.)

Olly


Until OSC cameras have a real LRGB rather than RGGB chip layout as standard, it's going to be a struggle. Even then it won't be as good as true LRGB, but it would be a step in the right direction.

I have seen some moves in this direction but nothing concrete.


