
Converting SQM mag to number of photons



I have been reading this article on exposure length and am left with just one question:

In my location, LP is going to be the limiting factor (by far!). So how do I convert the 18-point-something magnitude reading I get from my SQM meter into r_p,sky (photons per pixel per second)?

Thanks.


1 hour ago, Demonperformer said:

In my location, LP is going to be the limiting factor (by far!). So how do I convert the 18-point-something magnitude reading I get from my SQM meter into r_p,sky (photons per pixel per second)?

Here ya go!

According to this, a magnitude 0 star will produce 3640 photons per second per metre² at the top of the atmosphere. Adjust the figure for different magnitudes (per the example on the page) and for atmospheric extinction depending on the sky-angle of the star in question.
 


12 minutes ago, pete_l said:

Here ya go!

According to this, a magnitude 0 star will produce 3640 photons per second per metre² at the top of the atmosphere. Adjust the figure for different magnitudes (per the example on the page) and for atmospheric extinction depending on the sky-angle of the star in question.
 

That is a good source, but 3640 photons is the wrong figure (certainly per metre squared).

This would be an approximation for the correct figure of photons per pixel:

A mag 0 star will produce around 900,000 photons per cm squared per second.

Take the following into account:

Atmospheric extinction (for the target, not the LP), aperture size in cm squared (accounting for the central obstruction if present), system throughput (for mirrors use an approximate figure of 94% or 97% per surface depending on the coating; for glass use 99-99.5%, again depending on coatings), QE of the sensor, and shooting resolution.

For example, in my location (SQM around 18.5), the photon count per pixel per second will be something like:

~1.02e per second

For the following parameters:

8" aperture with a 90mm central obstruction and 99% dielectric mirrors (TS RC 8" scope), ASI1600 camera with 50% QE, giving a resolution of around 0.48"/pixel.


Looking at it deeper, the figure of 3640 appears to be in Jy units (whatever they are - apparently a unit of energy). Each "Jy" is 1.5 * 10**7 photons.
So multiply your results by 15 million :D


1 minute ago, pete_l said:

Looking at it deeper, the figure of 3640 appears to be in Jy units (whatever they are - apparently a unit of energy). Each "Jy" is 1.5 * 10**7 photons.
So multiply your results by 15 million :D

Yes, Jy stands for Jansky - a measure of spectral flux density (in other terms, power per unit area per unit frequency) - often used in radio astronomy.

To convert from spectral flux density to photon count, you multiply it by the dlambda/lambda value for the given filter - the quoted link lists these values for the standard photometric filters. For example, for the V filter (which matches visual) it is 0.16,

so the proper conversion factor would be 3640 * 0.16 * 1.5e7 = 8.736e9 photons per metre squared per second, or 873,600 per cm squared per second (divide by 1e4, because there are 10,000 cm squared in 1 m squared).

Now, this is for the photometric V filter. It will be somewhat more for a luminance filter, and it will depend on the QE curve of the sensor and the spectral class of the star, but 900,000 is a good approximation in the general case.
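
Or, as a few lines of Python that just restate the arithmetic above:

jy_mag0 = 3640          # spectral flux density of a mag 0 star, in Jy (from the link above)
dlam_over_lam = 0.16    # dlambda/lambda for the photometric V filter
photons_per_jy = 1.5e7  # photons per second per m^2 for each Jy, per unit dlambda/lambda

per_m2 = jy_mag0 * dlam_over_lam * photons_per_jy   # ~8.736e9 photons / m^2 / s
per_cm2 = per_m2 / 1e4                              # ~873,600 photons / cm^2 / s
print(per_m2, per_cm2)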


  • 2 weeks later...
On 07/05/2018 at 09:24, vlaiv said:

That is a good source, but 3640 photons is the wrong figure (certainly per metre squared).

This would be an approximation for the correct figure of photons per pixel:

A mag 0 star will produce around 900,000 photons per cm squared per second.

Take the following into account:

Atmospheric extinction (for the target, not the LP), aperture size in cm squared (accounting for the central obstruction if present), system throughput (for mirrors use an approximate figure of 94% or 97% per surface depending on the coating; for glass use 99-99.5%, again depending on coatings), QE of the sensor, and shooting resolution.

For example, in my location (SQM around 18.5), the photon count per pixel per second will be something like:

~1.02e per second

For the following parameters:

8" aperture with a 90mm central obstruction and 99% dielectric mirrors (TS RC 8" scope), ASI1600 camera with 50% QE, giving a resolution of around 0.48"/pixel.

I have been crunching some numbers on this and have got myself into a bit of a dead-end.

I think I have followed what you are saying to the point where my 8" scope is collecting just under 9 photons per second from the sky background [18.5 mag = ~0.03583 photons per square cm per second, with a collection area of 250.54 square cm].
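
In code form, the arithmetic I used (with the 900,000 figure from above) is:

flux_cm2 = 900_000 * 10 ** (-0.4 * 18.5)   # ~0.03583 photons / cm^2 / s for a mag 18.5 source
print(flux_cm2 * 250.54)                   # ~8.98 photons / s over the 250.54 cm^2 collecting area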

Now it seems to me that those photons are spread out over the area of the entire light cone (so if the chip is only occupying half the light cone, then only 4.5 of those photons the scope is collecting every second are falling on the chip). [I am ignoring for a moment the reduction caused by "system throughput".]

So, it seems to me that I need to know, not only the size of the chip itself, but also the size of the light cone at the point where it crosses the chip - but how do I work that out? At the eyepiece this would be the exit pupil, but how does this apply to the camera?

Once I know the area of the light cone, I can work out the proportion of that light cone covered by the chip. Knowing the number of pixels on my chip, I would then have the number of photons per pixel per second caused by the sky background, which is what I need to plug into the final formula.

Thanks.


5 hours ago, Demonperformer said:

I have been crunching some numbers on this and have got myself into a bit of a dead-end.

I think I have followed what you are saying to the point where my 8" scope is collecting just under 9 photons per second from the sky background [18.5 mag = ~0.03583 photons per square cm per second, with a collection area of 250.54 square cm].

Now it seems to me that those photons are spread out over the area of the entire light cone (so if the chip is only occupying half the light cone, then only 4.5 of those photons the scope is collecting every second are falling on the chip). [I am ignoring for a moment the reduction caused by "system throughput".]

So, it seems to me that I need to know, not only the size of the chip itself, but also the size of the light cone at the point where it crosses the chip - but how do I work that out? At the eyepiece this would be the exit pupil, but how does this apply to the camera?

Once I know the area of the light cone, I can work out the proportion of that light cone covered by the chip. Knowing the number of pixels on my chip, I would then have the number of photons per pixel per second caused by the sky background, which is what I need to plug into the final formula.

Thanks.

Don't think of the light cone - think of an area of the sky that is one arc second squared in surface. All the light from that patch of the sky gets focused onto a small area on the surface of the chip. You need to figure out how many pixels it covers - the light is spread over that many pixels and gets divided equally among them.

So you figured out that you get ~9 photons per second on an 8" aperture. That seems right. I'm going to do some quick math and then compare it to measurement results.

I image at 0.5"/pixel natively, also with an 8" scope, and also under 18.5 mag skies (maybe somewhat brighter now; these are stats from a couple of years back). I calculate my system throughput at around 50% (maybe a bit less). That would give a bit less than 4e per arc second squared per second. From my resolution it follows that 4 pixels cover 1 arc second squared (0.5"/pixel - 2 pixels for 1" in X and 2 pixels for 1" in Y, 2x2 = 4), which gives a bit less than 1e per pixel per second.

My exposure length is 60s, and I measured the average background value last time I was imaging. From the calculations it turns out that the background value in 60s should be a bit less than 60e - the actual measured value varied depending on the part of the sky for each sub, but it ranged from ~40e to a bit less than 50e. This was with a light pollution suppression filter. So in my case the LPS filter makes 18 - 18.5 mag skies behave like 18.8 mag skies.
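
If you want to run that backwards - turning a measured background level into an effective sky brightness - a rough sketch would be something like this (the ~260 cm² clear area is just the 8" aperture minus the central obstruction, and the ~50% total throughput and ~48e background are approximate figures from above):

import math

def sky_mag_from_background(e_per_sub, exposure_s, area_cm2, throughput,
                            arcsec_per_pixel, mag0_flux=900_000.0):
    """Invert the calculation above: measured background (e per pixel per sub) -> mag / arcsec^2."""
    rate = e_per_sub / exposure_s                  # e / pixel / s
    per_arcsec2 = rate / arcsec_per_pixel ** 2     # e / arcsec^2 / s
    photons = per_arcsec2 / throughput             # photons / arcsec^2 / s at the aperture
    flux_cm2 = photons / area_cm2                  # photons / cm^2 / s
    return -2.5 * math.log10(flux_cm2 / mag0_flux)

# ~48e background in a 60s sub, ~260 cm^2 clear area, ~50% total throughput, 0.5"/pixel
print(sky_mag_from_background(48, 60, 260, 0.5, 0.5))   # ~18.9, same ballpark as the 18.8 figure above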

Here is a spreadsheet that I've made which includes these calculations, so you can compare them to your own.

It was made for calculating the approximate SNR on a target of a given magnitude in LP. It's far from perfect - it does not take into account calibration or the movement of the target across the sky (extinction and different levels of LP) - but it can be a good guide to what one might expect. It also does not handle narrowband / colour filters (I don't know the spectral curve of my LP, so that would be a bit tough to do).

 

SNRCalc-english.ods


1 minute ago, Demonperformer said:

OK,  I'm confused.

Where does the 'per arcsecond squared' come from when you convert 9 photons per second into 4e per arcsecond squared per second?

Thanks.

System throughput is less than 50% (meaning the QE of the sensor and the mirror reflectance) - 9 * less than 50% is less than 4.5e, and 4 is less than 4.5e :D (I did a bit of rough rounding because I could not be bothered to do the exact mirror / central obstruction / QE math, but it is more or less correct).


1 hour ago, Demonperformer said:

Also, if I put QE in here, won't it appear twice as it is also in the final equation? (Or is it supposed to be in there twice?)

No, it is not supposed to appear twice. I understood that you arrived at 9 photons per second on the 8" aperture without taking system losses into account (i.e. that figure is before the light enters the telescope).

 

1 hour ago, Demonperformer said:

I understand the numbers, but you seem to have changed the units.

Not sure what you are saying, but OK, here is the full calculation with units so we can be on the same page.

A source of magnitude 0 delivers 880,000 photons per second per centimetre squared of collecting surface. A mag 18.5 source delivers ~0.03424 photons per cm squared per second.

We'll take your figure of 250 centimetres squared as the collection area. This means that 8.56 photons fall on the telescope each second from an 18.5 magnitude source.

Now, it is important to understand that 18.5 mag per arc second squared skies act as if each little square of the sky (1 arc second x 1 arc second in size) were an ordinary 18.5 mag source.

This means that each little 1x1" square of LP sky lands 8.56 photons per second on your aperture. These photons don't disperse all over the chip, but rather get focused into a very tight spot. How do you know how many of these photons land on a single pixel?

You have what is called the sampling resolution - the ratio of angular distance in the sky to number of pixels (use software to measure the number of pixels between two stars and divide their angular separation by that, or use the usual formula based on pixel size and focal length). In the above example I was using a sampling resolution of 0.5"/pixel (note that this is a unit of length rather than surface).

If you want to see what kind of surface in arc seconds (squared) corresponds to the surface of a pixel (in units of pixel lengths squared, but since we use discrete points - single pixels - there is no need for a unit of length), we need to multiply height by width (or, because pixels are square, just square the sampling resolution). So if the linear sampling resolution is 0.5"/pixel, the "surface resolution" will be 0.5"/pixel * 0.5"/pixel = 0.25 arc seconds squared (per pixel - which is good, since we want to know how much signal there will be per pixel, so we don't need to bother with it any more).
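
In code form, using the usual pixel-size / focal-length formula (the 3.8 µm pixels and ~1600 mm focal length are just illustrative numbers close to the setups discussed above):

pixel_um, focal_mm = 3.8, 1600
sampling = 206.265 * pixel_um / focal_mm   # ~0.49 "/pixel
surface = sampling ** 2                    # ~0.24 arcsec^2 per pixel
print(sampling, surface)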

OK, so we have a "surface resolution" of 0.25 arc seconds squared (per pixel). This means that a single pixel will get 1/4 of whatever a single arc second squared patch of sky produces - it will not get that from all the 1x1" squares in the sky, because each part of the sky gets focused to precisely one place on the chip (like a star - it does not shine on the whole chip, but rather is concentrated in a few pixels). If you worry about Airy disk blur - don't: blurring a "constant" signal just gives you the same "constant" signal - there are no details / features in the LP that can be blurred.

Back to the calculations: we have seen that we get 8.56 photons from our source (1"x1"), and that those 8.56 photons land on 4 pixels - this means that 2.14 photons land on a single pixel in one second. Now throw in a QE of 50% (and you can add mirror losses if you haven't accounted for those yet), and you get ~1e per pixel per second.

Or, in a 60 second exposure - 60e. (And the rest that I've written above: having an LPS filter gets me, under these conditions, ADU levels between 40-50 with gain set to 1e/ADU, so 40e-50e - which matches our calculations really well. One could try without the LPS filter to match calculation to measurement precisely.)
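
Or the whole chain in one place (the same arithmetic as above, with slightly different rounding):

photons_cm2_mag0 = 880_000                           # photons / cm^2 / s for a mag 0 source
flux = photons_cm2_mag0 * 10 ** (-0.4 * 18.5)        # ~0.035 photons / cm^2 / s at mag 18.5
on_scope = flux * 250                                # ~8.8 photons / s per 1"x1" patch, 250 cm^2 aperture
per_pixel = on_scope * 0.5 ** 2                      # ~2.2 photons / pixel / s at 0.5"/pixel
electrons = per_pixel * 0.5                          # ~1.1 e / pixel / s at 50% QE (closer to 1 with mirror losses)
print(electrons * 60)                                # ~66 e in a 60 s sub, a bit less once mirror losses are included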


Ah, I see the source of my confusion. The 'per arc second squared' is implicit in the SQM magnitude. I'm sure I knew that but was forgetting.

I was using 900k instead of 880k, but that is not going to be a major problem at the level of accuracy I was using.

I now see that dividing my ~9 by the number of pixels that make up 1 square arc second will give me the figure I require.

Thanks.


Well, I have been crunching a few more numbers and I am absolutely staggered by the result I have obtained.

Using measured SQM readings for each of the filters, as measured here:

and the values for the quantum efficiency from the chart on this page (the grey line), with my 8" scope with reducer (f/6 - 2.37 pixels per square arcsecond) and unity gain (read noise of about 1.75e), and allowing the recommended 5% extra noise, and plugging all those figures into the final equation, I get optimal sub lengths for each of the filters of:

Ha: 283s
OIII: 143s
SII: 702s

The bit I find so staggering is the spread of figures, SII being roughly 5x as long as OIII. It may be, of course, that others are not nearly so staggered (because they know a lot more about it than I do). Of course, guiding a 700s sub is going to be a bit more tricky than guiding a 140s sub, so that may also be a factor that will need to be included in the final decision.

But still ...
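
In case it helps anyone reproduce this, here is roughly how I plugged the numbers in - note that the sky rates below are round placeholder values for illustration only, not my measured per-filter readings (the equation itself is the final one from the linked thread, quoted a couple of posts below):

def optimal_sub_s(read_noise_e, qe, sky_photons_per_pixel_s, extra_noise=0.05):
    # t = (1 / (1.05^2 - 1)) * read_noise^2 / (QE * skyglow)
    return read_noise_e ** 2 / (((1 + extra_noise) ** 2 - 1) * qe * sky_photons_per_pixel_s)

# 1.75e read noise at unity gain; placeholder sky rates in photons / pixel / s
for name, sky in [("Ha", 0.2), ("OIII", 0.4), ("SII", 0.08)]:
    print(name, round(optimal_sub_s(1.75, 0.5, sky)), "s")

Even with made-up inputs you can see how strongly the result scales with the sky rate (and QE) in each band, which is presumably where the large OIII/SII spread comes from.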


35 minutes ago, Demonperformer said:

The equation is the final one in the thread that is linked in my first post in this thread:

t = (1 / (1.05^2 - 1)) * (read noise^2 / (quantum efficiency * skyglow))

Ah, I see, not sure that I agree though ...

 


I did not go over the math, but I'll present my view (again, no math, but I can do it if you want me to); it is quite simple.

The only thing differentiating a stack of subs from one single exposure, in terms of SNR, is the read noise of the camera. If there were no read noise, it would be perfectly OK to have subs of any length. The other noise components all depend on time; read noise is the only one with a "fixed amount per sub" (target shot noise depends on time, LP shot noise depends on time, dark current noise again depends on time).

Short subs do have advantages over a single long sub, and I'm all for short subs. Gust of wind? Airplane / satellite in the frame? Poor guiding / seeing for a couple of seconds? Just discard that single sub, and if you operate on short subs you still have 99% of the data left. There are drawbacks as well: the time it takes to download a sub (much improved with CMOS), full well capacity (we don't want to saturate the signal), dithering (with longer subs, less time is spent dithering as a percentage of total imaging time), and the amount of data to store and stack (larger with short subs).

So it is a balancing act. We want to find the shortest sub length that will not have a very large impact on the final SNR (with all of the above included, so full well, guiding, and data considerations ...).

I agree with the above analysis in terms of the graphs - there is a point at which read noise starts to be the dominant component and sub duration vs number of subs starts impacting SNR heavily, and equivalently there is a point of diminishing returns where using subs five times as long (or any larger factor) will yield less than 1% SNR improvement.

So far this is pretty much in line with the linked text. Where we disagree is on there being an optimum sub duration. There is no such thing as an optimum sub duration in general. There is a decision about where the point of diminishing returns starts for particular parameters. It is not the same when imaging a very bright target vs a very faint target. For a bright target one might conclude that the last 1% of SNR is not important and arrive at an exposure length of only 15 seconds. For a very faint target it might well turn out that to get within 1% of SNR loss you need 10 minute subs. This calculation depends not only on target brightness but on "aperture at resolution", quantum efficiency of the sensor, read noise, dark noise and LP noise.
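
To make that concrete, here is a small sketch of the usual bookkeeping (not taken from the spreadsheet - the numbers are made up, with a faint target under roughly the ~1 e/s sky discussed above):

import math

def stacked_snr(target_e_s, sky_e_s, dark_e_s, read_noise_e, sub_s, total_s):
    """SNR of a stack covering total_s seconds split into subs of sub_s seconds."""
    n_subs = total_s / sub_s
    signal = target_e_s * total_s
    # shot noise terms scale with total time; read noise is the only per-sub term
    variance = signal + sky_e_s * total_s + dark_e_s * total_s + n_subs * read_noise_e ** 2
    return signal / math.sqrt(variance)

for sub in (15, 60, 240, 600):
    print(sub, round(stacked_snr(0.05, 1.0, 0.005, 1.7, sub, 4 * 3600), 2))

With these made-up numbers the SNR climbs noticeably going from 15 s subs to a minute or so and then flattens out - that flattening is the point of diminishing returns described above, and it moves around as you change the target, sky and read noise values.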

For example, people doing planetary work don't even think about LP, and they use exposures measured in milliseconds. Why? Because of target brightness - it just swamps everything else.

Maybe our two views on the subject differ because of the intended use? I was under the impression that the above calculations relate to live stacking and determining the optimum exposure in that case, without prior knowledge of the total imaging time, while my approach starts from a set imaging time and looks at how best to divide that imaging time into a number of subs.

To get an idea of how sub length vs number of subs behaves for specific conditions, just use the spreadsheet that I've attached - I made it precisely for that reason, to try to estimate SNR in relation to total imaging time and number of subs. Unfortunately it does not quite work for narrowband, but you might be able to adapt it for that use. You just need to figure out two things: the target surface brightness in its darkest parts in the band you are interested in (for example Ha), and what the photon count in that particular band would be for a 0 mag source; you already have the QE for different bands, and you have the sky values for those bands (everything else is the same).

I just had no luck trying to find narrowband magnitude data for common emission nebulae anywhere - the only thing to do is to take measurements when imaging and publish the results. For galaxies it is easy - the faintest parts usually have 22-24 mag surface brightness for large and nearby galaxies, depending on galaxy type, and it can go down to mag 28-30 for very small and distant ellipticals (well, for most targets to be imaged).


I'm afraid my (fairly old) version of Excel could not read your spreadsheet.

I really don't know enough about this subject to engage in an intelligent debate on the above. I am aware that, to the mathematical 'layman', a progression that seems to contain no flaw can lead to ridiculous results [the schoolboy favourite mathematical 'proof' that 4 equals 5, for example], and so the fact that I can see no flaw in the math used in the above-linked post does not mean there is none.

Your point about the decision you have to make about where the diminishing returns start to come into play is maybe where the 5% comes into the first half of the equation? He does say that changing that to 10% only reduces that part from 9.75 to 4.76 (a relatively big gain compared to the increase in exposure time), whereas changing it to 1% would increase it from 9.75 to 49.75 (a significant increase in exposure time for a relatively small gain).

The problem with what you are saying, from my perspective, is the absence of data that you mention. If the only way to decide what exposures to use when imaging an object is to image the object and take measurements, that is going to add work rather than reduce it. At least with the above equation I have a result that gives me a figure without this. As to the accuracy of the figure it produces, as stated, I am not qualified to judge the math used, so until someone can point to an error in the math, I'm really not capable of taking this any further. However, others who are more qualified may wish to wade in and debate this with you on that level; I am quite happy for my thread to be used for that and would follow such a debate with interest (if little understanding!).


Yes indeed, all of these methods are approximate, since they all rely either on making certain assumptions or on exact data that you don't have before imaging.

I don't think we need to focus so much on precise figures, but rather use both approaches as guidelines. If I'm after a target for which a very good sub duration would be 1 minute and 40s, and I already have a dark library built for 1 minute, I'm not going to obsess, redo my dark library and shoot exactly 1:40 subs - I'm more than happy to use 1m subs in that case. On the other hand, for a target where it pays to go close to or above 4 minutes, I'll use 4 minute exposures and build such a master dark.

One might ask: where did you get 1 and 4 minutes from? Well, this sort of calculation lets you choose a couple of exposure lengths for different scenarios based on your setup, prepare for those exposures (build darks, check that your mount / guiding is up to it, etc.) and just use them.

What is important in both approaches (or maybe a third one) is understanding why they work (and under which restrictions) - this gives you the ability to say "yes, I know why a 1 minute exposure is going to work well enough", or "I need to sort out my guiding, as 10 minute exposures are going to give me much better results".

BTW, on the matter of the spreadsheet - it was created with LibreOffice / OpenOffice, so if you want to examine it there is no need for the latest Microsoft Office suite; just download and install one of those (both are open source and free).

 

