
vlaiv · Members · 13,263 posts · 12 days won

Everything posted by vlaiv

  1. I guess that, without getting too mathematical about this, the best way to describe how higher components create sharpness is to carefully watch that animated gif again. Note how the first few sine components have a sloped edge versus the rectangular pulse train, which has a vertical edge. The higher the frequency, the faster the sine wave goes from -1 to 1 and back again - we could say it goes up and down faster and has a steeper curve as it passes through 0. By adding higher and higher frequencies you make the sum behave the same way - look how it gets steeper and steeper. In fact, you need an infinite-frequency component, since the slope of the pulse train is infinite - so this sum goes to infinity. Yes, the sharp edge of the rectangle wave is exactly what a sharp edge in an image is - a very fast contrast change. In an instant we go from no signal to signal - no light to light in a pixel - that is a steep slope. Losing sharpness means that our slopes are no longer steep.
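The steepening-edge argument can be checked numerically. Here is a minimal sketch (my illustration, not from the original post) that sums the odd-harmonic Fourier series of a square wave and measures the slope at the edge - each added harmonic contributes 4/pi to that slope, so steepness grows without bound:

```python
import numpy as np

def square_wave_partial_sum(x, n_terms):
    """Partial Fourier series of a square wave:
    (4/pi) * sum of sin(k*x)/k over odd k = 1, 3, 5, ..."""
    total = np.zeros_like(x, dtype=float)
    for i in range(n_terms):
        k = 2 * i + 1
        total += (4 / np.pi) * np.sin(k * x) / k
    return total

# Slope at the edge (x = 0): each harmonic's derivative there is 4/pi,
# so the edge steepness is n_terms * 4/pi and diverges as n -> infinity.
eps = 1e-6
for n in (1, 4, 16, 64):
    slope = (square_wave_partial_sum(np.array([eps]), n)[0]
             - square_wave_partial_sum(np.array([-eps]), n)[0]) / (2 * eps)
    print(n, round(slope, 2))
```

The printed slope quadruples every time the number of harmonics quadruples - the finite sum can never reach the vertical edge of the true pulse train.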
  2. I'd like now to address another thing in the MTF light and give an example for it. When we talk about aperture and telescopes we get the notion that some small features can only be seen with a large aperture telescope. Is that correct, and in which cases? It is both "wrong and correct" at the same time - and I'll give you two examples. The first one is obvious. We see stars with our telescopes - a lot of them. The diameter of those stars is many orders of magnitude smaller than the resolving capability of the aperture - yet we see them. That would suggest that no matter how small, a feature can be seen in a telescope. But point a telescope at the Moon and try to see a very small crater - you won't be able to. Take a small telescope and try to see very small detail in the Jovian atmosphere at the edge of a belt. You won't be able to do it. What is up with that? Here we encounter contrast and its impact. I'm going to show you two types of features and how they are rendered by ever decreasing aperture - with one important thing: I'm not going to normalize the images - I'm going to try to make them look like images we see at the eyepiece. If things get darker, I'll leave them darker - if they get brighter, I'll leave them like that. Here is a set of images without the impact of telescope aperture. Left is a star and right is what we could call a "gap in Saturn's ring system". Let's look at the profile plot of these two features. Now we are going to look at those features through a first telescope. Note that both features are still there, but two important things happened: 1. sharpness is lost - look at the profile plots - features are getting "fatter" and "softer"; 2. contrast is lost - features are no longer so distinct - the star is fading into the background and the gap is no longer black, it is "fading into the foreground". Let's use a twice smaller aperture to see what happens next. We continue to see the same trend - things get lower contrast and less sharp.
I'll jump to an x8 smaller aperture now to really show the effect. Here we can no longer see the star, and we are virtually at the edge of seeing the gap - but it is much wider than the original and very low contrast. On the other hand, the math plots suggest that both features are still there - they are just really blurred and low contrast - we can see the very familiar Airy pattern in our star now and something similar in the gap. The conclusion is: details don't really disappear - they just lose sharpness and contrast. We see stars because their initial contrast is extraordinary - stars are very bright. But most observers know that when seeing is particularly poor they can't go as deep as on a night of good seeing - that is because seeing is again a type of blur, and when a star is blurred, its contrast is reduced. Planetary features, on the other hand, are very low contrast to begin with, and it does not take much contrast reduction to push them beyond detectability. Increasing magnification reduces brightness and therefore contrast - if you want to see detail, don't use too much magnification, as you'll further reduce contrast that is already reduced by telescope aperture.
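For anyone wanting to reproduce the trend numerically - the sketch below is my illustration, not the original simulation, and it uses a simple Gaussian blur as a stand-in for the true Airy-pattern PSF. It blurs a 1-d "star" (bright spike on a dark background) and a 1-d "gap" (dark notch in a bright ring) with ever wider kernels and prints how the contrast of both features falls together:

```python
import numpy as np

def gaussian_kernel(sigma, half_width=50):
    """Normalized 1-d Gaussian blur kernel (stand-in for a PSF)."""
    x = np.arange(-half_width, half_width + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

n = 201
star = np.zeros(n); star[n // 2] = 1.0   # bright spike on dark background
gap = np.ones(n);  gap[n // 2] = 0.0     # dark notch in a bright ring

for sigma in (1, 2, 8):                  # wider blur ~ smaller aperture
    blur = gaussian_kernel(sigma)
    star_b = np.convolve(star, blur, mode="same")
    gap_b = np.convolve(gap, blur, mode="same")
    # star contrast = peak above background (0);
    # gap contrast = depth below surroundings (1)
    print(sigma, round(star_b.max(), 3), round(1 - gap_b.min(), 3))
```

Neither feature ever reaches exactly zero - both just fade toward their surroundings, which is the "details don't disappear, they lose contrast" point above.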
  3. In another thread we touched upon the concept of telescope MTF and, instead of further discussion in that particular thread, we decided to open a new one - dedicated to understanding MTF and how it relates to resolution, detail, sharpness and contrast. These terms are often used to convey telescope performance, and for that reason I think it is important that we are all on the same page when using them. I'll kick off the discussion by saying that the above terms are somewhat synonymous when we talk about telescope performance and are all "encoded" in the MTF. Discussions such as these inevitably rely on some math, and I hope that while being formally correct we can still make the content understandable to most by means of analogy. The first step in understanding all of that is to grasp the concept of an image as a 2d function. We need to think of an image as a 2d function, but how? A 2d function is defined by the values it takes at points in a 2d plane - we take some coordinates X and Y and we get some value at that location. An image is a 2d function if we take the value to be the light intensity at some point along the height (Y coordinate) and width (X coordinate) of the image. Above we see an image and its representation as a 2d function plotted against XY. You can see that values go from 0 to 255 (in the Z direction - the direction of the function value), which is common for 8-bit computer images. Next we need to know that functions can be represented as sums of simple wave functions - sines and cosines. This is called the Fourier transform. In fact, some functions can only be represented as an infinite sum of sines and cosines, while others need only a finite number of terms. I have found a few animated gifs that explain this phenomenon through animated sequences: this one shows how a square wave function can be represented by a sum of 4 sine functions, each with higher frequency; the same with even more frequency components. Note that these are all 1d functions, but the same applies to 2d functions as well.
In the end, the Fourier transform of our function is just another way to write down our function. What does this have to do with MTF? Well, MTF also has its "analog" - another way to "write it down" - and that is the PSF, or point spread function - the image of a single star, in telescope terms. The PSF describes how our image is blurred when observed through a telescope. This involves a rather complex mathematical operation called convolution. MTF is the analog of that operation - just much simpler. It also operates on our image - only on its analog in the Fourier domain - by simple multiplication. This is what makes it much more understandable - it is simple multiplication. So here is how the MTF looks. But I have to say - that is actually not the MTF - it is only a "cross section" of the MTF. The actual MTF is a 2d function and looks like this: the above graph is made when you plot values along a single line from the center of the cone outward. There is a special class of functions that have this circular symmetry, and for them you can use the cross section to represent the whole 2d function. MTF is one of those, in some cases. What does the height of the MTF stand for? Since the MTF multiplies the 2d Fourier transform of our image, it is just a number, but since it is in the range 0-1 we can say that it represents the attenuation of a particular frequency. If the number is, say, 0.2 - then that particular frequency is left at only 20% of its original value. MTF has a highest frequency, after which the value of the function falls to 0 - that is the cutoff point, and the image at the focal plane of the telescope can never contain frequencies higher than this cutoff frequency. If we look at the shape of the MTF, we see that it tends to attenuate higher frequencies - the higher the frequency, the higher its attenuation - until at one point we reach cutoff and all frequencies above that one are completely removed (multiplied by 0). Now if we look at our animated gif above of how the rectangle is formed, you'll notice something interesting.
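The claim that PSF convolution and Fourier-domain multiplication are two ways of writing the same operation can be verified numerically. A small sketch, assuming a toy 3x3 box blur as the "PSF" and periodic boundaries (so the circular convolution theorem applies exactly):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))            # stand-in "image"
kernel = np.ones((3, 3)) / 9.0          # toy blur acting as the PSF

def circular_convolve(img, k3):
    """Direct circular convolution with a 3x3 kernel."""
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += k3[dy + 1, dx + 1] * np.roll(img, (dy, dx), axis=(0, 1))
    return out

# Embed the PSF on the full grid, centered at the origin (wrap-around)
psf = np.zeros_like(image)
for dy in (-1, 0, 1):
    for dx in (-1, 0, 1):
        psf[dy % 64, dx % 64] = kernel[dy + 1, dx + 1]

# Convolution in the spatial domain ...
blurred_spatial = circular_convolve(image, kernel)
# ... equals multiplication in the Fourier domain. The multiplier is the
# optical transfer function (FT of the PSF); the MTF is its magnitude.
blurred_fourier = np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf)).real

print(np.allclose(blurred_spatial, blurred_fourier))
```

The two paths agree to floating-point precision, which is why working with the MTF (simple multiplication) is so much more convenient than working with the PSF directly.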
Even a single sine wave approximates the rectangular pulse train to some extent. It is not a good approximation - but it is still an approximation. We need higher frequencies to make that sine wave look more like a rectangle - "to sharpen its edges". What would the reverse of that process be? Well, if we remove high frequency components, we remove the "sharpness" of the edges in the image. A telescope that "kills off frequencies faster" is a less sharp telescope. This does not mean that such a telescope won't show a feature - we will see later when "features disappear" and why - it means it will show it less sharply. We can relate to this directly by examining the star image in two telescopes - one with a small aperture and one with a large aperture. Here we are speaking strictly without the influence of the atmosphere - so imagine a night of excellent seeing. You are observing a star with two telescopes - one with a smaller aperture that "kills off frequencies faster" and one with a larger aperture that "kills off frequencies slower". The smaller telescope will show the star as a larger "blob" - known by another name, the Airy disk - than the larger aperture telescope. The smaller telescope is less sharp - its star image is more blurred. This is how MTF impacts sharpness - higher frequencies are needed for things to be sharp, and the MTF attenuates and kills off high frequencies and makes things blurry.
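For concrete numbers - the Airy disk diameter scales as 2.44 λ/D, so halving the aperture doubles the "blob". A quick calculation (assuming 510nm light, which reproduces the 2.14" figure for 120mm quoted in another post on this page):

```python
def airy_disk_diameter_arcsec(aperture_m, wavelength_m=510e-9):
    """Angular diameter of the Airy disk out to the first dark ring:
    2 * 1.22 * lambda / D radians, converted to arc seconds."""
    return 2 * 1.22 * wavelength_m / aperture_m * 206265

for d_mm in (60, 120, 240):
    print(d_mm, "mm ->", round(airy_disk_diameter_arcsec(d_mm / 1000), 2), "arcsec")
```

Doubling the aperture halves the blob diameter - exactly the "kills off frequencies slower" behaviour in spatial terms.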
  4. Indeed - I did not add my comment as a direct response to any particular claim. Sorry if it looked like that - it was more an addition to your post than a disagreement. I just wanted to point the above out for anyone reading your comment who doesn't necessarily know about frequency decomposition. That is why I focused on the features added to those diagrams and what they might imply for people just getting into all of this. When we talk about telescope resolution and whether something can or cannot be seen in a telescope, we don't really explain to people what happens, what they might expect and what that frequency limit means. I figured that people get the sense of telescope resolution in terms of "a particular feature can or cannot be seen", and then when we start adding the GRS or a 2" feature-size marker on the above graph, it can imply to them that frequencies are directly related to feature size - and in some far removed sense, they are. In one of the previous posts we touched upon definitions of terms like resolution, contrast, detail and sharpness, and how those are all related in the context of the telescope image. The MTF that you posted depicts this relationship, but I have a sense that people don't really get what it all means and how it all relates. Just as an example - we see stars in a telescope. Stars are possibly the smallest angular features that we readily see in a telescope. The angular diameters of the stars we see are tiny - microarcseconds or less (close and resolved stars have milliarcsecond sizes) - yet we see those features in telescopes that have Dawes and Rayleigh resolutions of 1-2 arc seconds. I would be happy to discuss all of this in depth with examples, but I'm not sure if we should derail this thread further or if people are interested in such a discussion (although I think most of us would benefit from understanding all of that in order to better grasp telescope performance).
  5. Hi to you and to Jaden, and welcome to SGL. We would love to help with the choice of a new telescope, but in order to do so we would need a bit more information about your and your son's observing needs and expectations. From what you've written, I'm tempted to recommend this telescope to you: https://www.teleskop-express.de/shop/product_info.php/info/p3893_Skywatcher-Skymax-102---GoTo-Maksutov-telescope-102-1300mm.html But depending on your needs - maybe there will be something better for you.
  6. It is important to understand how to read such an MTF diagram. The actual numbers and figures are correct, but plotting features like the GRS or a 2 arc second mark is misleading. This diagram shows amplitudes in the frequency spectrum of the image - or rather the attenuation of each frequency component. It has almost nothing to do with the actual size of a feature. Here is an example - the image on the left is my doodle of "a feature" on the surface of a planet; on the right is the Fourier transform of that feature image - amplitudes in the frequency domain. At the bottom is a graph equivalent to MTF - the X axis is frequency and the Y axis is the intensity of a given frequency component of the feature. As you can see, our feature has frequency components all over the spectrum, and if we attenuate some of those components we will not make the feature disappear. It does not work like that.
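To see the same point numerically - this sketch (my illustration, using a 1-d signal for simplicity) shows that a tiny feature has spectral energy all the way up to the highest frequency bin, and that zeroing out the whole top half of the spectrum still leaves the feature in place, just blurred:

```python
import numpy as np

# A tiny 1-d "feature": a 3-sample bump in a 256-sample signal
signal = np.zeros(256)
signal[127:130] = 1.0

spectrum = np.abs(np.fft.rfft(signal))
# DC amplitude and the very highest frequency bin both carry energy -
# the small feature is built from components across the whole spectrum
print(round(spectrum[0], 2), round(spectrum[-1], 2))

# Attenuating a whole band does not make the feature disappear:
filtered = np.fft.rfft(signal)
filtered[len(filtered) // 2:] = 0          # remove the top half
back = np.fft.irfft(filtered, n=256)
print(round(back.max(), 2))                # blurred bump, still there
```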
  7. How did you find calibration of that ASI183MC - given that it does not have set point cooling and suffers from amp glow?
  8. If you happen to have multiple diagonals - you can check if there is a difference between these two
  9. 1) Separate darks for separate temperature lights - create two masters and use them according to the temperature of the lights. 2) Same. Flat exposure can sometimes be rather long - if the flat panel is not very strong, or sky flats are taken, or some other method is employed that does not produce strong light. Narrowband filters usually need significantly longer flat exposures, due to the fact that they only let in a fraction of the spectrum, and in the end, cameras with mechanical shutters benefit from much longer flats to avoid a gradient from the moving shutter.
  10. I think that the reason for having two green pixels, as opposed to two blue or red, is human eye sensitivity. It also adds to resolution, as we are more sensitive to brightness than to color change, and the brightness we perceive is closely related to the shape of the green channel. The lack of mono sensors is tied to economies of scale, I believe - it is much more cost efficient to have an OSC sensor in consumer cameras, and those are the driving force behind mass production of sensors. I think there would be some benefit to daytime photography from having one G pixel replaced with L. Better color rendition is one, improved low light sensitivity is another. There has to be some rationale behind using RGGB instead of, say, RLGB. Not sure what it is, but I doubt that they "simply did not think of that"
  11. Interestingly enough, there is a single small change that sensor manufacturers could make that would negate the difference between OSC and mono. I don't know why they don't already do it - I think it would improve daytime photography as well (color rendition and low light performance). Instead of having RGGB as the bayer matrix, we could have RLGB - one of the four pixels could be without a filter.
  12. That is actually quite a decent set of settings - except for the planetary exposure time. So for points 1 and 2 (lunar and planetary), set the exposure time short enough that you can freeze the seeing. Don't pay attention to the histogram whatsoever. I often record with the histogram as low as 20-25%. The point of so-called lucky imaging is to use very short exposures - most of the time around 5-6ms, and on lunar often shorter than that, because the moon is bright enough. This is because the atmosphere is in constant motion, and if you use a longer exposure the seeing will not be frozen on a frame but will move during the exposure, creating additional motion blur that you don't want.
  13. I'm not sure that the difference is that significant. Could you walk us through the reasoning behind the 50% figure?
  14. It depends on whether the central obstruction is large enough and the band pass of the filter narrow enough. In most cases the difference will be negligible, but you are right - the central obstruction removes the "best" rays in terms of angle with respect to the filter - the ones closest to the optical axis.
  15. Uncooled astro cameras do offer a bit over DSLR-type cameras. If we examine DSLR stats, like on this website: https://www.photonstophotos.net/Charts/Sensor_Characteristics.htm we'll find that most cameras have QE in the 50-60% range. They have an anti-alias filter and aggressive UV/IR cut filters that people often end up modding. They are much heavier than an equivalent astro model. Astro cameras have sensors with QE in the 80% range, no (or a very efficient) UV/IR cut filter, and are lightweight without all the bells and whistles of DSLR cameras. Price is the major disadvantage, as you can now get a mirrorless camera for 1/2 - 1/3 of the price of an uncooled astro camera. Having said that - go with the ASI294MC Pro. Cooling itself is not as important as the ability to hold a set point temperature. That enables you to properly calibrate your data. The ASI294 simply has the best price/area ratio of the cameras mentioned. And sensor size is speed.
  16. There is a neat trick you can use to get the same result as having a scope with half the focal length. You image 4 panels to cover the target, and you image each panel for only 1/4 of the time. Say you wanted to image for 4h total - then you spend 1h on each panel. Next you stack each panel, stitch the whole image together, and bin the result x2. That will produce the same result as using a scope with half the focal length and the same F/ratio as you already have - the only difference will be the overlap between panels, which will cost you about 5-10% of the width and height. The other drawback is that it is a bit more involved to process than a single panel image.
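The stitch-and-bin step can be sketched as follows. This is a toy illustration with flat dummy panels (not a full mosaic pipeline, which would need registration and overlap blending) - the point is just that a 2x2 mosaic binned x2 lands back at single-panel pixel dimensions, matching the sampling of a half-focal-length scope:

```python
import numpy as np

def bin2x2(img):
    """Software-bin 2x2 pixel blocks (summing), halving each dimension."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

panel = np.ones((100, 100))                           # stand-in stacked panel
mosaic = np.block([[panel, panel], [panel, panel]])   # 200x200 stitched image
binned = bin2x2(mosaic)
print(binned.shape)                                   # back to one-panel size
```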
  17. R+G+B, for an interference LRGB filter set where the R, G and B filters partition the 400-700nm range (Baader is very close to this). For any other combination you can find coefficients that produce the closest value for a particular camera model and filter set.
  18. Some people had issues with calibration of 294 - but I think it was mostly down to doing "automated" capture rather than selecting good capture values (mostly flat calibration was the issue). I think there are a lot of happy 294 users. For me amp glow was never an issue with cooled cameras as it calibrates out nicely with proper dark frames.
  19. As far as filters are concerned, it is still an F/3.3 scope. There are two ways in which you can use the "speed" of a telescope: - maximum angle of converging rays at the focal plane - ratio of aperture to focal length. The second one is used in daytime photography and denotes how long you should expose to get a certain level of signal. Without getting (again) into the whole speed thing, I'll present a case where effective F/ratio can be observed. Say you have an F/5 refractor and an F/5 newtonian that you'll use with the same camera. The focal length of each scope is irrelevant here, but we could say the refractor is 100mm and the newtonian is 150mm. If you shoot a nebula with these two scopes and the same camera, for the same exposure time, the pixels covering it will have the same ADU value. This is where effective aperture comes into play. If you do that with two refractors - one 100mm and the other 150mm, both F/5 - that will happen. But if you do it with a 150mm newtonian with 30% CO and 91% reflectivity on both mirrors, it will behave as a ~F/5.77 100mm refractor and not as an F/5 100mm refractor. For that reason we can say that the effective focal ratio of that scope is ~F/5.77. However, the maximum angles of converging rays hitting the focal plane do not change. They still hit as an F/5 beam. The geometry of the system has not changed. Interference filters work by having many layers of special coatings of certain thicknesses; light going through those layers gets reflected and either reinforced or cancelled out with itself (wave interference). The thickness of these layers plays a crucial part in which wavelengths get reflected or passed. When light starts coming in at an angle, it no longer travels the shortest distance through the layers but a longer path. From the light's point of view, it looks like the layer got thicker, and this changes the wavelength that this particular layer operates on.
A bit like in this diagram: since the geometry does not change, the effect of the nominal F/ratio on filter effectiveness remains the same.
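The ~F/5.77 figure can be reproduced in a few lines. This sketch assumes the effective aperture scales with the square root of the unobstructed area fraction times total mirror throughput, and a 750mm focal length (150mm at F/5):

```python
import math

def effective_f_ratio(aperture_mm, focal_length_mm,
                      co_fraction=0.0, throughput=1.0):
    """F-ratio of the unobstructed, lossless aperture that would collect
    the same light: scale the aperture diameter by
    sqrt((1 - CO^2) * throughput) and recompute F = fl / D_eff."""
    d_eff = aperture_mm * math.sqrt((1 - co_fraction**2) * throughput)
    return focal_length_mm / d_eff

# 150mm F/5 newtonian, 30% central obstruction, two 91% mirrors:
f_eff = effective_f_ratio(150, 750, co_fraction=0.3, throughput=0.91**2)
print(round(f_eff, 2))
```

The per-pixel signal from an extended object depends only on this effective F-ratio (for a given pixel size), which is why the obstructed 150mm behaves like a slower scope photometrically while its ray geometry, and hence its filter behaviour, stays F/5.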
  20. Out of those three, I would personally pick the 294. It gives the most value for your money, as it is the largest sensor of the three and not much more expensive than the other two.
  21. They are all related to contrast. In fact, when we speak of telescope resolving power, each of these terms is contrast in a certain way.
  22. I don't quite understand the comparison between the two scopes - could you be more specific about it, in particular the point that the refractor was sharper, but did not show greater detail than the 8" dob, and that it compensated for the greater resolution of the dob with this sharpness. I'm trying to imagine the scene but failing. How can one thing be sharper while the other has greater detail and greater resolution? This could be an issue with terminology. What exactly do you mean by: a) sharpness b) detail c) resolution? You also mention that there was a hint of the razor-thin Encke gap. What magnification was this at? As far as I know, the Encke gap is only 325km wide and, when Saturn is closest to the Earth, that makes ~0.0559". The Airy disk of a 120mm aperture is 2.14", or about x40 the Encke gap when Saturn is closest to Earth. In the above marble test, for anyone wanting to try this, it would be similar to trying to observe a 16µm slit placed next to a 1cm marble at 60m distance.
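The small-angle arithmetic behind those figures, for anyone wanting to check it (assuming ~1.2 billion km for Saturn at closest approach, which reproduces the quoted 0.0559"):

```python
ARCSEC_PER_RAD = 206265

def angular_size_arcsec(size, distance):
    """Small-angle size in arc seconds (size and distance in same units)."""
    return size / distance * ARCSEC_PER_RAD

# Encke gap (~325 km) at Saturn's closest approach (~1.2e9 km):
encke = angular_size_arcsec(325, 1.2e9)
# The marble-test stand-in: a 16 micron slit at 60 m
slit = angular_size_arcsec(16e-6, 60)
print(round(encke, 4), round(slit, 4))
```

Both come out around 0.055-0.056 arc seconds, which is why the slit-next-to-a-marble setup is a fair analog for the Encke gap.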
  23. No special reason to use a 3d object - except for "authenticity". Well, there are a few drawbacks with the printed version that you might be able to work around: - the paper version will usually be printed on a white background - not sure how that will affect contrast; with an actual marble you can get a darker background easily (of course, you can print a black background, so that can be fixed) - mind the scale to get the proper size. Enough distance is needed between telescope and target for a couple of reasons - first is focusing, as telescopes are poor at close focusing; second is spherical aberration - you really need at least 60-70 meters for apertures up to 8" to avoid introducing excessive spherical aberration because the target is close. However, even at 60m, Jupiter needs to be about 1cm in size in order to be approximately the same angular size as the actual planet. Just make sure you have a detailed image scaled to that size and high printing resolution (well, most printers do at least 300dpi and I guess that is more than enough). An alternative is to display such an image on a smartphone - just use a phone with a high DPI count, like Full HD or higher-res devices with a 4-5" diagonal. That way, you can do it at night time as well.
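The required print size follows from the small-angle formula. The sketch below assumes Jupiter spans roughly 35-45 arc seconds (it varies with distance; these are my illustrative values), which lands near the ~1cm figure at 60m:

```python
ARCSEC_PER_RAD = 206265

def target_size_m(angular_arcsec, distance_m):
    """Physical size that subtends a given angle at a given distance."""
    return distance_m * angular_arcsec / ARCSEC_PER_RAD

# Jupiter at roughly 35-45 arcsec, reproduced at 60 m:
for arcsec in (35, 45):
    print(arcsec, "arcsec ->", round(target_size_m(arcsec, 60) * 100, 2), "cm")
```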
  24. The more I think about it, the more I like the marble observing idea. I always wondered whether I would be able to see more detail if the atmosphere was not interfering, and whether I would be able to use higher power. I think that marble observing would also let one experience the practical limits of their instrument. Things that are at the threshold of observability would no longer remain a mystery - did I or didn't I see that? Well - take the marble and inspect it - if the feature is there, you saw it
  25. Not sure if there is anything to discuss there - general consensus on that will be: "At least one more than I currently have"