
C11Edge vs TOA 130 2.0


Rodd


3 hours ago, ollypenrice said:

I think this may be a misleading way to think about it. If we think in terms of 'object photons' (photons from the object) then the number captured depends only on aperture, provided the object fits on the chip. When you have a FL which lets the object cover the chip you are not exploiting more object photons than would be captured by a shorter FL of the same aperture. What you are doing is putting them onto more pixels for an improved spatial resolution at lower intensity of signal. The shorter FL doesn't waste any object photons - they are all used - provided you don't saturate the bright parts.  Only if you saturate the chip can you 'waste' photons because any more cannot be counted.

Olly

Yes, but compare a 12” scope shooting a galaxy that fills the FOV. The scope collects a certain number of photons, all of which are from the galaxy. Now take the same scope at half the focal length (make it a design feature so we don’t get bogged down in the reducer myth). It collects the same number of photons, but only a percentage of them are from the galaxy.


8 hours ago, Rodd said:

Yes, but compare a 12” scope shooting a galaxy that fills the FOV. The scope collects a certain number of photons, all of which are from the galaxy. Now take the same scope at half the focal length (make it a design feature so we don’t get bogged down in the reducer myth). It collects the same number of photons, but only a percentage of them are from the galaxy.

No, it does not collect fewer photons from the galaxy.  The galaxy photons do not know the focal length of the objective into which they pass.  The same number of photons from the galaxy enter any two scopes of equal aperture, and all are focused onto the chip if the FL allows it. Conversely, however, it is the long FL instrument which 'wastes' the photons from around the galaxy.  The incoming sky photons are identical in both cases.

Olly

Edit. Maybe the key thing to remember is that all parts of the objective contribute to all parts of the image. Those high school ray diagrams with two parallel incoming beams, one top and one bottom, are not wrong but they are so incomplete as to be potentially misleading.

Edited by ollypenrice

4 hours ago, ollypenrice said:

No, it does not collect fewer photons from the galaxy.  The galaxy photons do not know the focal length of the objective into which they pass.  The same number of photons from the galaxy enter any two scopes of equal aperture, and all are focused onto the chip if the FL allows it. Conversely, however, it is the long FL instrument which 'wastes' the photons from around the galaxy.  The incoming sky photons are identical in both cases.

Olly

Edit. Maybe the key thing to remember is that all parts of the objective contribute to all parts of the image. Those high school ray diagrams with two parallel incoming beams, one top and one bottom, are not wrong but they are so incomplete as to be potentially misleading.

But it doesn’t make sense.  If one scope is imaging a galaxy that takes up 100% of the objective, it collects x photons, all from the galaxy. If the same-aperture scope has the same size objective but shoots at half the focal length, the galaxy and a globular cluster fit on the objective, so it collects x photons, but it collects them from the galaxy and the globular cluster.  If the scopes collect the same number of photons, something has to give.  The equations are not equal.  While the two objectives are the same aperture, they are curved differently, so their light cones cover different parts of the sky.  The same happens with an extender. If the extender enlarges a target so that only one feature fits on the sensor, light from an adjacent feature may enter the objective, but it isn’t recorded on the sensor, so it doesn’t count.
 

The other thing is that if a photon is emitted from a detail that is too small for the resolution of a sensor to depict, it is, in essence, a wasted photon. What good does it do?  A photon from a feature too small to be differentiated by the sensor may enter the objective, but it isn’t recorded, in the same fashion that clipped photons are not used: they add nothing to an image.  Not all photons that enter an objective are used by a sensor, even if they hit it.  This is due in part to focal length.


30 minutes ago, Rodd said:

But it doesn’t make sense.  If one scope is imaging a galaxy that takes up 100% of the objective, it collects x photons, all from the galaxy. If the same-aperture scope has the same size objective but shoots at half the focal length, the galaxy and a globular cluster fit on the objective, so it collects x photons, but it collects them from the galaxy and the globular cluster.

They both collect light from the globular and the galaxy. The long FL scope only brings the galaxy's light onto the chip whereas the short FL scope brings the galaxy's and the globular's onto the chip. And, yes, the shorter FL brings more photons onto the chip for this reason, but the number from the galaxy is identical for both.
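A quick illustrative sketch of this point (all numbers hypothetical, not the thread's actual gear): the object photon count follows aperture area alone, while focal length only sets how much of the chip the object covers.

```python
import math

# Photons collected from an object scale with aperture area only.
def collecting_area_mm2(aperture_mm):
    return math.pi * (aperture_mm / 2) ** 2

# Two hypothetical 280 mm scopes, one at 2800 mm FL (f/10), one at 1400 mm (f/5):
print(collecting_area_mm2(280) == collecting_area_mm2(280))  # True: same galaxy photons

# What differs is the galaxy's image area on the chip, ~ (focal length)^2:
print((2800 / 1400) ** 2)  # 4.0: at f/10 the galaxy covers 4x the chip area of f/5
```

Same photons, spread over four times as many pixels: higher spatial sampling, lower signal per pixel.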

 

37 minutes ago, Rodd said:

 

The other thing is that if a photon is emitted from a detail that is too small for the resolution of a sensor to depict, it is, in essence, a wasted photon. What good does it do?  A photon from a feature too small to be differentiated by the sensor may enter the objective, but it isn’t recorded, in the same fashion that clipped photons are not used: they add nothing to an image.  Not all photons that enter an objective are used by a sensor, even if they hit it.  This is due in part to focal length.

This is true in the case of parts of the image which fall comfortably above the noise floor and contain details worth distinguishing.  However, if the photons in question are coming from a very faint part of the target, one which is struggling to get above the noise floor and contains little resolvable detail, they are not wasted. This is the difference between sufficient signal and sufficient resolution. No photon from a very faint tidal tail is ever going to be wasted.

But I agree that sufficient resolution to reach the limit of the seeing is very much to be desired.

Olly

 


1 hour ago, Rodd said:

But it doesn’t make sense.  If one scope is imaging a galaxy that takes up 100% of the objective, it collects x photons, all from the galaxy. If the same-aperture scope has the same size objective but shoots at half the focal length, the galaxy and a globular cluster fit on the objective, so it collects x photons, but it collects them from the galaxy and the globular cluster.  If the scopes collect the same number of photons, something has to give.  The equations are not equal.  While the two objectives are the same aperture, they are curved differently, so their light cones cover different parts of the sky.  The same happens with an extender. If the extender enlarges a target so that only one feature fits on the sensor, light from an adjacent feature may enter the objective, but it isn’t recorded on the sensor, so it doesn’t count.
 

Not to repeat what Olly has said, but the other way to think about it: if you added a 0.5x reducer to the first scope to halve the focal length, the amount of light entering the objective hasn't changed, but you're now putting 4x as much light onto the same chip (the same amount from the galaxy, plus the extra light from the globular and surrounding space). If you binned 2x2 at the full focal length and 1x1 at the halved focal length, then the image of the galaxy from the reduced scope could be cropped to be the same as from the unreduced scope.
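The reducer/binning arithmetic above can be sketched with arbitrary numbers (nothing here is from the thread's actual equipment):

```python
# A 0.5x reducer maps the same field onto 1/4 of the chip area, so each
# pixel sees 4x the flux; the objective's photon haul is unchanged.
fl_native, fl_reduced = 2000, 1000    # hypothetical focal lengths, mm

flux_ratio = (fl_native / fl_reduced) ** 2
print(flux_ratio)  # 4.0

# Binning 2x2 at the native focal length makes a "pixel" with 4x the area,
# matching the reduced scope's per-pixel signal at the same arcsec/px sampling:
bin_factor = 2
print(bin_factor ** 2 == flux_ratio)  # True
```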

1 hour ago, Rodd said:

The other thing is that if a photon is emitted from a detail that is too small for the resolution of a sensor to depict, it is, in essence, a wasted photon. What good does it do?  A photon from a feature too small to be differentiated by the sensor may enter the objective, but it isn’t recorded, in the same fashion that clipped photons are not used: they add nothing to an image.  Not all photons that enter an objective are used by a sensor, even if they hit it.  This is due in part to focal length.

Photons don't have a "size"; any photon that hits the sensor has a chance of being recorded (based on the quantum efficiency at the given wavelength). You just wouldn't be able to tell which detail it came from; they all add together.

A galaxy is a cluster of hundreds of billions of stars, each star is a detail that is too small to resolve, but the photons all get added together by your sensor. 

Edited by SamAndrew

On 03/06/2023 at 03:49, Rodd said:

It is strange that the C11 image is 9.4 MB while the TOA image is only 4.2.  Probably because the pixel scale for the C11 image is 0.4" while the native pixel scale of the TOA image was 1.12".

Is this with the same camera? If so, the raw images and the stacked images should be the same size irrespective of focal length. The jpeg image, on the other hand, is an entirely different beast. Jpegs are always compressed to some degree; I see a size difference in jpeg if I use different stretches. Also, since the C11 has a smaller pixel scale than the TOA, each pixel would collect fewer photons if the aperture were the same.
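The quoted pixel scales are consistent with roughly 5.4 µm pixels, though the thread never states the cameras' pixel sizes, so treat that figure as an assumption. A quick sketch of the standard image-scale formula:

```python
# Image scale ("/px) = 206.265 * pixel size (um) / focal length (mm).
def image_scale(pixel_um, focal_mm):
    return 206.265 * pixel_um / focal_mm

# Assuming ~5.4 um pixels (hypothetical - not stated in the thread):
print(round(image_scale(5.4, 2800), 2))  # C11 Edge at 2800 mm -> 0.4"/px
print(round(image_scale(5.4, 1000), 2))  # TOA-130 at 1000 mm -> 1.11"/px
```

At those scales the same patch of sky covers about (1.12/0.4)² ≈ 8x as many pixels in the C11 image, which fits the larger file size even before jpeg compression differences.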


On 09/06/2023 at 02:36, Rodd said:

Well, here’s the thing.  In poor seeing, galaxies are out.  In good seeing the aperture wins.  In poor seeing, the only option is to image at a lower pixel scale. 

I think good seeing is below 1 arcsecond. In our backyards, as amateurs, we rarely have good enough seeing to take advantage of bigger aperture. You can see in our own pics that the refractor and the C11 have the same median FWHM. The only thing more aperture brings in this case is reaching the desired SNR faster (bigger mirror, more photons).

Anyway... I do prefer refractors because they are close to perfection and plug and play, but if I had < 1 arcsecond skies, I would surely get a Planewave CDK or an ASA Newt.
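For scale, the Rayleigh diffraction limit (~1.22 λ/D) of both scopes in this thread sits well below typical 2" seeing, which supports the point that the atmosphere, not the optics, usually sets the delivered FWHM. A sketch at 550 nm:

```python
import math

# Rayleigh criterion in arcseconds for a given aperture, at visible wavelengths.
def rayleigh_arcsec(aperture_mm, wavelength_nm=550):
    radians = 1.22 * wavelength_nm * 1e-9 / (aperture_mm * 1e-3)
    return math.degrees(radians) * 3600

print(round(rayleigh_arcsec(280), 2))   # C11 (280 mm)  -> ~0.49"
print(round(rayleigh_arcsec(130), 2))   # TOA-130 (130 mm) -> ~1.06"
```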


2 hours ago, dan_adi said:

I think good seeing is below 1 arcsecond. In our backyards, as amateurs, we rarely have good enough seeing to take advantage of bigger aperture. You can see in our own pics that the refractor and the C11 have the same median FWHM. The only thing more aperture brings in this case is reaching the desired SNR faster (bigger mirror, more photons).

Anyway... I do prefer refractors because they are close to perfection and plug and play, but if I had < 1 arcsecond skies, I would surely get a Planewave CDK or an ASA Newt.

Well, aperture controls detail. So when seeing is decent, which for me is under 2”, the C11 captures more detail than the TOA even if the FWHM is the same.  But the scope is only useful for small targets like galaxies. Every time I have compared the two when the FWHM was less than 2.5, the C11 has won on detail.


6 hours ago, SamAndrew said:

Not to repeat what Olly has said, but the other way to think about it: if you added a 0.5x reducer to the first scope to halve the focal length, the amount of light entering the objective hasn't changed, but you're now putting 4x as much light onto the same chip (the same amount from the galaxy, plus the extra light from the globular and surrounding space). If you binned 2x2 at the full focal length and 1x1 at the halved focal length, then the image of the galaxy from the reduced scope could be cropped to be the same as from the unreduced scope.

Photons don't have a "size"; any photon that hits the sensor has a chance of being recorded (based on the quantum efficiency at the given wavelength). You just wouldn't be able to tell which detail it came from; they all add together.

A galaxy is a cluster of hundreds of billions of stars, each star is a detail that is too small to resolve, but the photons all get added together by your sensor. 

Reducers are a different story, not the same. And binning has nothing to do with it. If you have a mirror that covers M57 only, that means the only photons in the light cone that hit the sensor come from M57. Now if you have the same size mirror but the focal length is shorter, it means the mirror is curved differently and the light cone contains photons from other targets.
 

And we may not be able to see where a photon comes from, but the sensor does. They do not mix. The photons emitted from a crater rim form the crater rim in the image, and photons reflected off a mountain peak form the mountain peak in the image.


7 hours ago, ollypenrice said:

They both collect light from the globular and the galaxy. The long FL scope only brings the galaxy's light onto the chip whereas the short FL scope brings the galaxy's and the globular's onto the chip. And, yes, the shorter FL brings more photons onto the chip for this reason, but the number from the galaxy is identical for both.

 

This is true in the case of parts of the image which fall comfortably above the noise floor and contain details worth distinguishing.  However, if the photons in question are coming from a very faint part of the target, one which is struggling to get above the noise floor and contains little resolvable detail, they are not wasted. This is the difference between sufficient signal and sufficient resolution. No photon from a very faint tidal tail is ever going to be wasted.

But I agree that sufficient resolution to reach the limit of the seeing is very much to be desired.

Olly

 

With respect to photons hitting the sensor, those are the only photons that matter. The scope may collect photons from the globular, but if they don’t land on the sensor, they are 100% useless. Also, if you have a huge mirror but a very short focal length, you could have a pixel scale that is too large to portray, say, a 100 metre lunar crater. If the pixel scale is too large to differentiate details, you end up with a bright blur: lots of signal, but no details. That is why using a Hyperstar or RASA to image a small galaxy won’t produce as good an image of that galaxy. The resolution of the system is too low to record fine structure. It’s the system as a whole that is important, because the only photons that matter are the ones that hit the sensor.

I agree about the faint tidal tail, because there is no detail to record

 

Edited by Rodd

6 hours ago, SamAndrew said:

Not to repeat what Olly has said, but the other way to think about it: if you added a 0.5x reducer to the first scope to halve the focal length, the amount of light entering the objective hasn't changed, but you're now putting 4x as much light onto the same chip (the same amount from the galaxy, plus the extra light from the globular and surrounding space). If you binned 2x2 at the full focal length and 1x1 at the halved focal length, then the image of the galaxy from the reduced scope could be cropped to be the same as from the unreduced scope.

Photons don't have a "size"; any photon that hits the sensor has a chance of being recorded (based on the quantum efficiency at the given wavelength). You just wouldn't be able to tell which detail it came from; they all add together.

A galaxy is a cluster of hundreds of billions of stars, each star is a detail that is too small to resolve, but the photons all get added together by your sensor. 

You can make out individual stars in M33 and M31.  Photons from one star form that star, not another star.  Photons from one globular cluster orbiting M104 form that cluster, not another cluster around M104.  Each photon lands precisely on the spot corresponding to where it was emitted.  Our resolution doesn’t allow us to see it, but photons have one end point.  Ha photons from a nebula in M33 don’t land on a star at the other end of the galaxy.  If they did, we would only ever image blur.

 


11 minutes ago, Rodd said:

With respect to photons hitting the sensor, those are the only photons that matter. The scope may collect photons from the globular, but if they don’t land on the sensor, they are 100% useless. Also, if you have a huge mirror but a very short focal length, you could have a pixel scale that is too large to portray, say, a 100 metre lunar crater. If the pixel scale is too large to differentiate details, you end up with a bright blur: lots of signal, but no details. That is why using a Hyperstar or RASA to image a small galaxy won’t produce as good an image of that galaxy. The resolution of the system is too low to record fine structure. It’s the system as a whole that is important, because the only photons that matter are the ones that hit the sensor.

I agree about the faint tidal tail, because there is no detail to record

 

A RASA 14 has a focal length of 790mm. With a 2600 CMOS chip this would give an image scale of 0.98"PP. The optical resolution of a 14 inch will be very unlikely to be the limiting factor and the seeing will be very unlikely to allow 0.98".  This combination would take a lot of beating.  Given the amount of signal its aperture-per-pixel ratio would give, the signal strength would allow for very aggressive processing of fine detail and pull out the faint stuff. Even the more affordable RASA 11 would give 1.25"PP with optical resolution to spare. I'd love the opportunity to try to live with that!

Olly
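The figures in this post check out against the 2600's 3.76 µm (IMX571) pixels, assuming the RASA 11's 620 mm focal length; a quick verification sketch:

```python
# Image scale ("/px) = 206.265 * pixel size (um) / focal length (mm).
def image_scale(pixel_um, focal_mm):
    return 206.265 * pixel_um / focal_mm

print(round(image_scale(3.76, 790), 2))  # RASA 14 at 790 mm -> 0.98"/px
print(round(image_scale(3.76, 620), 2))  # RASA 11 at 620 mm -> 1.25"/px
```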


You're not arguing the same point, you originally said:

9 hours ago, Rodd said:

A photon from a feature too small to be differentiated by a sensor may enter the objective, but it isn’t recorded.

Take the example of a binary star system that is too close to be separated by the optical system: does the sensor ignore the photons from those stars because it can't resolve them, or do they merge onto the same point on the chip?
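The binary-star question can be modelled with a toy 1-D PSF (purely illustrative numbers): two Gaussian star images closer than the system FWHM sum into a single, brighter peak, so no photons are ignored, they simply land in the same blur.

```python
import math

# Unit-height Gaussian PSF centred at mu with the given FWHM (arcsec).
def gaussian(x, mu, fwhm):
    sigma = fwhm / 2.3548
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

fwhm = 3.0   # hypothetical seeing-limited PSF, arcsec
sep = 1.0    # separation well below the FWHM
xs = [i * 0.1 for i in range(-60, 61)]
profile = [gaussian(x, -sep / 2, fwhm) + gaussian(x, sep / 2, fwhm) for x in xs]

# Count strict local maxima in the combined profile:
peaks = [i for i in range(1, len(xs) - 1)
         if profile[i] > profile[i - 1] and profile[i] > profile[i + 1]]
print(len(peaks))          # 1: the pair merges into a single peak
print(max(profile) > 1.0)  # True: brighter than either star alone
```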

Edited by SamAndrew

16 minutes ago, ollypenrice said:

The optical resolution of a 14 inch will be very unlikely to be the limiting factor

But it is in the case of the RASA.

Given such a fast system and the need for correction over a large field, that system is far from diffraction limited. In fact, even without seeing effects you are limited to sampling at over 2"/px due to the RMS of the spot diagram (which really just translates into the RMS blur of the optics).
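One way to see how optics blur limits sampling: blur contributions are often combined in quadrature, and a common rule of thumb samples at about FWHM/2. The component values below are purely illustrative, not measured RASA figures.

```python
import math

# Total blur approximated as the quadrature sum of its components (arcsec).
def total_fwhm(*components):
    return math.sqrt(sum(c * c for c in components))

seeing, optics, guiding = 2.0, 3.5, 0.5   # hypothetical values for a fast astrograph
fwhm = total_fwhm(seeing, optics, guiding)
print(round(fwhm / 2, 2))   # suggested sampling in "/px -> just over 2
```

With an optics term that large, the suggested sampling lands above 2"/px regardless of how fine the camera's pixels are.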


28 minutes ago, vlaiv said:

But it is in the case of the RASA.

Given such a fast system and the need for correction over a large field, that system is far from diffraction limited. In fact, even without seeing effects you are limited to sampling at over 2"/px due to the RMS of the spot diagram (which really just translates into the RMS blur of the optics).

Wide field for large targets, narrow field for small ones.  If they designed the RASA with the same spot size as a non-Hyperstar system, the Hyperstar still wouldn’t be as good for galaxies, because the focal length results in a larger pixel scale.  The Hyperstar system would have to zoom in considerably to depict a galaxy at the same scale (which is the benchmark in this example), while the non-Hyperstar system can view at less than 1:1, which will ensure higher quality.
 

My dream system is a long focal length, a fast focal ratio, and a large FOV.  Note I said system and not scope, because the sensor is critical to the equation.


44 minutes ago, SamAndrew said:

You're not arguing the same point, you originally said:

Take the example of a binary star system that is too close to be separated by the optical system; does the sensor ignore the photons from those stars because it can't resolve them? or do they merge onto the same point on the chip?

The sensor does not ignore the photons. The photons provide naked signal: meaningless signal for imaging, but not for science.  When there is sufficient signal, adding photons that represent features smaller than the resolving power does nothing but blur the image. You may have a bright M51, but it will be featureless. For very faint objects that have no detail this is OK, as ollypenrice said.  But I am referring to structures that have details bright enough to resolve.


51 minutes ago, Rodd said:

The sensor does not ignore the photons. The photons provide naked signal: meaningless signal for imaging, but not for science.

But it's not meaningless: those two binary stars will appear as a single brighter star in the image, in the same way that hundreds of billions of stars give a galaxy its brightness.

1 hour ago, Rodd said:

You may have a bright M51, but it will be featureless. For very faint objects that have no detail this is OK, as ollypenrice said.  But I am referring to structures that have details bright enough to resolve.

Everything is featureless until you have enough resolution to resolve the features. Everything is too faint until you expose for long enough to reach a detectable level of signal. You are muddling terms and arguments here.


4 hours ago, SamAndrew said:

But it's not meaningless: those two binary stars will appear as a single brighter star in the image, in the same way that hundreds of billions of stars give a galaxy its brightness.

Everything is featureless until you have enough resolution to resolve the features. Everything is too faint until you expose for long enough to reach a detectable level of signal. You are muddling terms and arguments here.

No I am not.  First of all, stars are featureless, so not a good example.  Let’s take Saturn's rings. If you want to image the rings but don’t have the resolution to do it, a brighter Saturn does you no good. For targets that are bright, there is enough signal. I took a 30 hour image of the Rosette and it was not any different from a 20 hour image. No matter how many photons I collected, I could not resolve a certain knot of dust around the leopard.  Brighter doesn’t help unless, as Olly said, it is for a very dim, featureless target.
 

Yes, everything is too dim unless you collect enough photons. But it’s not dimness we are talking about.  If you don’t have the resolution to differentiate a feature, all the brightness in the world won’t help. That is my point.  Yes, that double star will look like a brighter star with more photons.  But you don’t want a brighter star, you want a binary star.

