
Can someone explain QE to me



Hi folks, I’m after an OSC CMOS camera and don’t really understand Quantum Efficiency.

Basically, does a higher QE mean greater sensitivity and, therefore, fainter stars?

It’s just that I’m after the most sensitive OSC which I can afford (i.e. less than £1k). I’m not too bothered about the actual size of the sensor, just the sensitivity. I just can’t be doing with mono imaging any more with the UK climate.

The blurb about the ZWO ASI 533MC-Pro says how sensitive it is, but it only has a QE of 80%, whereas the cheaper ZWO ASI 585MC-Pro has a QE of 91%.

Does that mean the ZWO ASI 585MC-Pro will capture fainter objects and/or require shorter exposures? And is there an alternative camera which would suit my needs?

Many thanks!

 


QE is just one part of the equation, and the quoted figure is the peak QE, which doesn't paint the whole picture.

QE is the percentage of photons falling on the sensor (on a single pixel) that are actually detected, i.e. converted into electrons.

Say you have a pixel that is 3x3 µm square. Not all of that 3x3 µm is photosensitive area; there is some electronics around it as well. Further, depending on the wavelength of light, photons might miss the photosensitive site and simply pass through it. Other photons scatter off the material and are simply reflected, and others still are absorbed without producing an electron.

Light does pretty much what it does in everyday life: it is either reflected, absorbed or transmitted. Only some fraction of the incident photons actually manage to hit an electron and knock it into the potential well.

That is what QE is: if you have 80% QE at some wavelength, then out of 100 photons roughly 80 (or rather, 80 on average) will be converted into electrons.

Back to the beginning: the number you see quoted is the peak QE. The actual QE curve often looks like this:

[Image: relative QE curve, response normalised to 1.0 at the peak, plotted against wavelength in nm]

This is a relative QE curve, and its highest point, marked 1.0, corresponds to the actual 80% peak QE specified for the camera. This means that if you want the actual QE at, say, the Ha line, which is 656nm, you read the chart at 656nm (x axis) and multiply that value by the peak QE. So we have 80% x 0.84 = 67.2%.

The above camera therefore has 67.2% QE at the hydrogen alpha line if its peak QE is 80%.
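
As a rough illustration of reading the chart that way, here is a minimal Python sketch; the relative response values below are made up for the example, so substitute the ones read off your own camera's curve:

import numpy as np

# Hypothetical relative QE curve read off a manufacturer's chart:
# wavelength in nm, response normalised to 1.0 at the peak
wavelengths = np.array([400, 450, 500, 550, 600, 656, 700, 800])
relative_qe = np.array([0.55, 0.80, 1.00, 0.97, 0.90, 0.84, 0.75, 0.50])

peak_qe = 0.80  # quoted peak QE of the camera (80%)

def qe_at(wavelength_nm):
    """Interpolate the relative curve and scale it by the peak QE."""
    return peak_qe * np.interp(wavelength_nm, wavelengths, relative_qe)

print(f"QE at Ha (656 nm): {qe_at(656):.1%}")  # prints 67.2%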

In the end, QE is just one parameter of how sensitive a camera is. Another is the size of the pixel (but we have to take focal length into account as well, and that is where things get complicated).

If you take the same optics and attach two different cameras to it, the actual figure of merit for which camera is more sensitive when paired with said optics is:

area_of_pixel * QE_at_wavelength

In other words, if you have a camera with 3.8µm pixels and 50% QE at the Ha line, and a camera with 2.4µm pixels and 85% QE at Ha, the first will be faster when used with the same scope because:

3.8 * 3.8 * 50%  = 7.22

2.4 * 2.4 * 85% = 4.896

7.22 > 4.896

If the cameras have different QE curves and you shoot broadband targets, things get rather complex. You need to integrate the spectrum of the target multiplied by the QE curve, and then multiply that by pixel area, for both cameras. That is really out of scope for most people (no one knows the spectrum of a target until it is measured, and different features of a target often have different spectra), so people stick to peak QE, the rationale being that QE curves are in general much the same shape (just bear in mind that for specific purposes some cameras can have an edge, such as those that are very sensitive in the IR part of the spectrum).
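
For completeness, here is a minimal sketch of that broadband comparison; the target spectrum and both QE curves below are invented purely to show the mechanics:

import numpy as np

wl = np.linspace(400, 700, 301)   # wavelength grid in nm
dwl = wl[1] - wl[0]

# Hypothetical target spectrum (relative photon flux) and two QE curves
target_spectrum = np.exp(-((wl - 550) / 120) ** 2)
qe_cam_a = 0.50 * np.exp(-((wl - 530) / 150) ** 2)   # camera A, 3.8 um pixels
qe_cam_b = 0.85 * np.exp(-((wl - 560) / 150) ** 2)   # camera B, 2.4 um pixels

def relative_speed(pixel_um, qe_curve):
    """Pixel area times the spectrum-weighted QE (arbitrary units)."""
    return pixel_um ** 2 * np.sum(target_spectrum * qe_curve) * dwl

print("Camera A:", relative_speed(3.8, qe_cam_a))
print("Camera B:", relative_speed(2.4, qe_cam_b))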

Hope this helps

 


Many thanks Vlaiv. I suspected that it would be your name providing a highly informed explanation!

So, the basic message is that the ASI 533MC-Pro, although it has a lower QE than the ASI 585MC-Pro, is more sensitive because its larger pixels outweigh the QE difference, and it will capture fainter objects?


2 minutes ago, lukebl said:

So, the basic message is that the ASI 533MC-Pro, although it has a lower QE than the ASI 585MC-Pro, is more sensitive because its larger pixels outweigh the QE difference, and it will capture fainter objects?

Yes, provided it is used with the same optics and nothing special is done to the data.

If you have the choice of telescopes you can use with your cameras, and you can bin your data - then it depends.

If you want to get really faint stuff in "reasonable" amount of time, then this would be my advice:

- figure out your working resolution in arcseconds per pixel (one that will capture all the detail you are after and will not oversample)

- get the biggest aperture whose focal length, combined with your pixel size and any binning factor, gives that working resolution, and which still has enough FOV to capture the target.

(Of course, consider all the other variables: the ability to mount said largest-aperture scope, the costs involved, the quality of the optics and so on. A quick sketch of the resolution arithmetic follows below.)
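
Here is a minimal sketch of that working-resolution arithmetic; the pixel size and focal length are hypothetical examples, not a recommendation:

# Image scale: arcsec per pixel = 206.265 * pixel size (um) / focal length (mm)
def sampling_rate(pixel_um, focal_length_mm, binning=1):
    """Sky area covered by one (binned) pixel, in arcseconds per pixel."""
    return 206.265 * pixel_um * binning / focal_length_mm

# Example: 3.76 um pixels on an 800 mm focal length scope
for binning in (1, 2, 3):
    print(f"bin {binning}: {sampling_rate(3.76, 800, binning):.2f} arcsec/px")
    # prints roughly 0.97, 1.94 and 2.91 arcsec/px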

 


3 hours ago, vlaiv said:

more sensitive when paired with said optics is:

area_of_pixel * QE_at_wavelength

In other words, if you have a camera with 3.8µm pixels and 50% QE at the Ha line, and a camera with 2.4µm pixels and 85% QE at Ha, the first will be faster when used with the same scope because:

3.8 * 3.8 * 50%  = 7.22

2.4 * 2.4 * 85% = 4.896

7.22 > 4.896

Yes that's true per pixel but for the same total effective area of sensor, the one with the higher QE will detect more photons. 

Which will give the cleaner image will depend on read noise,  telegraph noise and other sensor parameters.

Regards Andrew 


9 hours ago, andrew s said:

Yes that's true per pixel but for the same total effective area of sensor, the one with the higher QE will detect more photons. 

Which will give the cleaner image will depend on read noise,  telegraph noise and other sensor parameters.

Regards Andrew 

I think I’m even more confused now!
The science of it baffles me. All I want to know is which camera is more sensitive, assuming the same optical setup, the 533MC-Pro or the 585MC-Pro?


12 hours ago, vlaiv said:

Yes, provided it is used with the same optics and nothing special is done to the data.

If you have the choice of telescopes you can use with your cameras, and you can bin your data - then it depends.

If you want to get really faint stuff in "reasonable" amount of time, then this would be my advice:

- figure out your working resolution in arc seconds per pixel (one that will capture all the detail you are after and will not over sample)

- get the biggest aperture that will give you focal length in combination with pixel size you have and any binning factor that will provide you with wanted working resolution - and will have enough FOV to capture the target.

(of course, consider all other variables like ability to mount the said largest aperture scope, costs involved, quality of optics and so on ...)

 

I'm starting to think that the only things that matter are QE, read noise per unit area and sensor size.

And this is why I don't think software binning for increased SNR is the way to go any more.

Interested in your thoughts. 

Adam


2 hours ago, lukebl said:

I think I’m even more confused now!
The science of it baffles me. All I want to know is which camera is more sensitive, assuming the same optical setup, the 533MC-Pro or the 585MC-Pro?

What Andrew is saying there is the same as what I've said, only from a different angle.

The sensitivity of a photo-detecting element is surface area * QE. This is true for a single pixel as well as for the sensor as a whole.

From that it can easily be seen that if you have two cameras with the same photo-detecting area, their relative sensitivity is determined by QE alone.

This can be further expanded like this:

- if you have two cameras with the same pixel size, then the one with the higher QE is more sensitive

- if you have two cameras with the same sensor surface, then the one with the higher QE is more sensitive in the sense of the total number of photons gathered, but in order to exploit that sensitivity you will need to handle the data in a particular way (it's not out of the box, because you have to somehow overcome the fact that the surface area is divided into pixels)

- the above two statements can be thought of as conflicting, and they are if you take them at face value, but the first is true if no special processing is done and the second can be true with special processing.

Think of the following:

5+5+5+5

4+4+4+4+4+4

Now if you ask which one is larger, you can answer in two ways: 5 is larger than 4, but 24 is larger than 20, so in one case the answer is the first and in the other case the second. This is equivalent to looking at signal "per pixel" versus "per sensor" (the sum of all pixels).
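
The same per-pixel versus whole-sensor distinction, sketched in Python with made-up numbers (two cameras with equal total sensor area but different pixel sizes and QE):

# Two hypothetical cameras sharing the same total sensor area (100 mm^2)
SENSOR_AREA_UM2 = 100e6  # 100 mm^2 expressed in square microns

def per_pixel_and_total(pixel_um, qe, photon_flux=1.0):
    """Return (signal per pixel, signal summed over the whole sensor)."""
    pixel_count = SENSOR_AREA_UM2 / pixel_um ** 2
    per_pixel = photon_flux * pixel_um ** 2 * qe
    return per_pixel, per_pixel * pixel_count

big_pixels = per_pixel_and_total(pixel_um=3.8, qe=0.50)
small_pixels = per_pixel_and_total(pixel_um=2.4, qe=0.85)

print("3.8 um / 50% QE -> per pixel %.2f, whole sensor %.0f" % big_pixels)
print("2.4 um / 85% QE -> per pixel %.2f, whole sensor %.0f" % small_pixels)
# The larger pixel wins per pixel; the higher-QE camera wins in total photons.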


1 hour ago, Adam J said:

I'm starting to think that the only things that matter are QE, read noise per unit area and sensor size.

And this is why I don't think software binning for increased SNR is the way to go any more.

With some things I agree 100% and with others 0% :D. I'll explain.

1. Yes, QE does matter, but only up to a point.

Nowadays, most cameras have very similar QE. There is really not much difference between 81% and 83%. Transparency on the night of imaging and the position of the target can have a greater impact on speed than this, as can the choice of stacking algorithm if one uses subs of different quality. My reasoning is to go with higher QE only if it fits the other criteria below.

2. Read noise is in my view completely inconsequential for long exposure stacked imaging where we control sub duration at will.

Since we can swamp the read noise with a suitable exposure length, and CMOS cameras have much lower read noise than CCDs used to have, I don't see it as a very important factor. I would not mind using a 3e read noise camera over a 1.4e read noise camera if it fits the other criteria.
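
A minimal sketch of that "swamp the read noise" idea, using a common rule of thumb (make the sky shot noise some multiple of the read noise); the factor of 5 and the sky rates below are assumptions for illustration, not measured values:

def min_sub_length(read_noise_e, sky_rate_e_per_s, swamp_factor=5.0):
    """Shortest sub (seconds) such that sky shot noise >= swamp_factor * read noise.

    Sky shot noise is sqrt(sky_rate * t), so we need
    sky_rate * t >= (swamp_factor * read_noise)^2.
    """
    return (swamp_factor * read_noise_e) ** 2 / sky_rate_e_per_s

# Hypothetical sky background of 2 e-/pixel/s (broadband, moderate light pollution)
print(min_sub_length(read_noise_e=1.4, sky_rate_e_per_s=2.0))  # ~24.5 s
print(min_sub_length(read_noise_e=3.0, sky_rate_e_per_s=2.0))  # ~112.5 s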

3. Sensor size is very important for speed, because it lets us use a larger aperture while keeping enough FOV to capture our target. I guess this is self-explanatory.

4. Binning is the key in achieving our target sampling rate with large scopes and large sensors.

Speed is ultimately surface_of_the_sky_covered_by_sampling_element * QE * losses in telescope * aperture

In the above equation there are only two parameters that can be varied to any great extent: one is aperture and the other is the sampling rate, i.e. the sky area per sampling element. The latter determines the type of image we are after: do we want a wide-field image at a low sampling rate, or do we aim to be right at the maximum detail possible for our conditions? If we aim for the latter, we don't have much freedom in this parameter. That leaves aperture. We can choose to go with a 50mm or a 300mm scope, but in order to hit our target sampling rate we would need an equally wide range of pixel sizes, which we don't have. Binning to the rescue.
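
Here is a minimal sketch of that speed relation for two setups held at the same working resolution; the apertures, transmission losses and QEs below are hypothetical:

import math

def relative_speed(sampling_arcsec, qe, transmission, aperture_mm):
    """speed ~ sky area per sampling element * QE * telescope losses * aperture area"""
    sky_area = sampling_arcsec ** 2                   # square arcsec per (binned) pixel
    aperture_area = math.pi * (aperture_mm / 2) ** 2  # mm^2
    return sky_area * qe * transmission * aperture_area

# Both setups binned so that they land on the same 1.5 arcsec/px working resolution
small_scope = relative_speed(1.5, qe=0.85, transmission=0.95, aperture_mm=80)
big_scope = relative_speed(1.5, qe=0.80, transmission=0.85, aperture_mm=250)

print(big_scope / small_scope)  # ~8x: once sampling is fixed, aperture dominates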

I simply don't like the AI side of things. We have very good conventional algorithms that do wonders as well, if applied correctly.

 


19 minutes ago, vlaiv said:

With some things I agree 100% and with others 0% :D. I'll explain.

1. Yes, QE does matter, but only up to a point.

2. Read noise is in my view completely inconsequential for long exposure stacked imaging where we control sub duration at will.

3. Sensor size is very important for speed - because it lets us use larger aperture while having enough of FOV to capture our target. I guess this is self explanatory

4. Binning is the key in achieving our target sampling rate with large scopes and large sensors.


I think it's better to say they have a similar peak QE; legacy cameras do have lower QEs, and the main difference for OSC is what the Bayer matrix is doing to that QE.

In terms of read noise: at a dark site, my understanding is that narrowband will take a significant exposure length to bury the read noise with LP alone. It may be so low even in that case that shot noise from a faint target is still the biggest factor.

My personal current struggle is whether to replace my ASI1600MM Pro with a higher-QE camera or to buy a second one and use them in a dual rig. I am leaning towards the dual rig as the used cost has come down.

All in all, to the OP I would say get the largest sensor you can afford, with the possible exception of wanting to specialise in galaxy imaging.


4 hours ago, lukebl said:

533MC-Pro or the 585MC-Pro

I haven't had the 585, but I believe the 485 is exactly the same apart from the anti-amp-glow feature. Having owned both the 533 and the 485, I'd say the 533 provides slightly better colour saturation. The main thing to decide on, if budget is not a consideration, is the aspect ratio: the 585 is 16:9, so a widescreen aspect ratio, while the 533 is square, which is the ideal format for sensor illumination. Decide on what you want to image and choose accordingly. I found the 485 lacked height, so I had to mosaic or use a shorter focal length such as camera lenses; the 533 had the height but lacked the width, as I wanted the data to mix with the other 3:2 cameras I've got. I eventually sold both, but I would have been happy with the 533 if it were my only camera. I'd also be happy with the 585 if it were my first camera, and it's useful for planetary imaging too due to its high FPS capability; some cameras that aren't designed for planetary don't like fast frame capture.

 

