Everything posted by alex_stars

  1. The idea starts with what we can easily observe. We want to observe a point and measure its spread, but in most cases (except in astronomy, where you have stars) that is really hard. The next best thing would be an infinitely narrow line, but that is really hard to observe too. So the idea was born (I think by Léon Foucault) that it is a lot easier to observe an edge (initially a real knife edge used as a screen against the light). Mathematically, an edge like the one in my pictures is called a step function. If the step function has a range of 0 to 1 (normalized, hence the normalization in my steps) and you differentiate it, you get a Dirac delta function, which is the perfect cross-section of an infinitely narrow line. So instead of observing (or imaging) lines, we observe edges, knowing that if we differentiate the result it is as if we had observed a line. Put differently: if we are interested in the spread of a line, we can observe the spread of an edge (which is a lot easier to do) and obtain the spread of the line by differentiating the spread of the edge.
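To make the edge-to-line idea concrete, here is a minimal numerical sketch (not the author's code; a logistic curve is assumed as a stand-in for a blurred edge): differentiating a normalized edge profile yields a line profile that peaks at the edge and integrates to one.

```python
import numpy as np

# A blurred edge (ESF) modelled as a logistic step from 0 to 1.
# The curve and its width are assumptions for illustration, not measured data.
x = np.linspace(-5, 5, 1001)
s = 0.5                                   # blur width of the hypothetical optics
esf = 1.0 / (1.0 + np.exp(-x / s))        # normalized edge profile, 0 -> 1

# Differentiate the edge: the result is the line profile (LSF).
lsf = np.gradient(esf, x)

# The derivative peaks where the edge sits and integrates to ~1,
# exactly as the cross-section of a narrow line would.
print(round(float(np.trapz(lsf, x)), 3))  # 1.0
```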
  2. Sorry to say, I am afraid it won't. Would be great though. As a remedy for clouds, however, I really suggest the ground-based observation targets (marbles, print-outs) that @vlaiv suggested. They are great fun, and they showed me how good my scope actually is. 😀
  3. Absolutely. Let's go through all the steps and see if I can clarify. Correct. Yes. What I meant with the above is that a perfectly circularly symmetric optical system is a system which is radially symmetric about the optical axis. All lenses and/or mirrors are perfectly aligned and have no variations along any given radial coordinate. Perfect optics. Spider vanes holding a secondary mirror would make such a system non-symmetric, and the PSF would also become radially non-symmetric (example here). In the radially symmetric case, however, the PSF is radially symmetric and it would not matter at which angle you cut through the 2D PSF to get a line graph. Now, how does this relate to the LSF? I see what caused the misunderstanding (which was me not being completely precise). An infinitely long, infinitely narrow line convolved with the PSF does produce the LSF. The LSF is an integration of the 2D PSF over one variable (i.e. coordinate). You can read up on the mathematical relations in Digital Image Processing by Jonathan M. Blackledge (details here). My mistake was to not include the integration part, sorry for that. (I have corrected the original post.) Now two things are important. First, the integration changes the look of the LSF in comparison with the PSF; that should be the puzzle piece you missed because I forgot to write that. Secondly, in a radially (circularly) symmetric system the LSF holds enough information to reconstruct the MTF; you don't need the PSF. A few simple reminders might help to understand the relations between the two: In a 2D system (imaging or observing plane), a point gets blurred by an optical system and becomes the PSF. As a 2D image is a large collection of points, we can convolve the image with the PSF to re-create the blurred image. Or, if we know the PSF precisely, we can revert the process and de-blur an image by de-convolution.
In a 1D system (not like our images), an infinitely long line (which is the 1D equivalent of the point in 2D) gets blurred by an optical system and becomes the LSF. The same procedures as above are valid, just in 1D. In a radially symmetric system we don't really need to care about 2D when we want to understand its quality by the measure of an MTF. However, if we want to simulate its behaviour, we do need a 2D PSF to apply convolution and such. As you can see in my initial post, if I only use the edge spread directly from the camera sensor, I sample the edge spread with only a few points, and then I need to fit a theoretical edge spread function to the sparse data. If I apply super-resolution, which I can since I have several hundred observations of that edge in my image, then I can resolve two issues I face: As the actual edge is curved, I need to align the edge observations before "stacking". To get a good alignment, I want to work on a sub-pixel scale, as the native resolution of the camera is rather coarse. To shift observations on a sub-pixel scale I need super-resolution. I use piece-wise linear interpolation between original pixels, as I don't want to add spurious "data" by interpolation, as I would with a sinc or a spline or something. Let me remind you that I interpolate the raw camera data, so a sinc would not make sense. After the piece-wise linear interpolation, the sub-pixel information comes directly from the summation over the several hundred observations of the edge. That is what I prefer when I calculate the LSF and MTF numerically from the data. I do hope that sheds some light on the method. Thanks for carefully reading over it and asking questions 👍 Do ask more if you feel like it.
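The LSF-as-integral relation described above can be checked numerically. A quick sketch, assuming a Gaussian PSF rather than any real telescope's (chosen because its marginal integral has a known closed form):

```python
import numpy as np

# Sketch (assumed Gaussian PSF): the LSF is the 2D PSF integrated over one
# coordinate, i.e. a line image is a "collapsed" point image.
n = 501
x = np.linspace(-5, 5, n)
X, Y = np.meshgrid(x, x)
sigma = 1.0
psf = np.exp(-(X**2 + Y**2) / (2 * sigma**2))
psf /= psf.sum() * (x[1] - x[0])**2           # normalize PSF to unit volume
lsf = psf.sum(axis=0) * (x[1] - x[0])         # integrate the PSF along y

# For a circularly symmetric Gaussian the result is again a Gaussian with the
# same sigma; in general the integration changes the shape (Airy PSF vs LSF).
expected = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
print(np.allclose(lsf, expected, atol=1e-5))  # True
```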
  4. So, here is my second attempt to explain what can be done with an Edge Spread Function image. First off, this method has plenty of precision for practical purposes and is used for just about any imaging device, from camera lenses, satellite-based optical systems and telescopes to even MRI machines in medicine. I will go through it step by step and hope to explain each step in sufficient detail. I have implemented these steps in a Python program to have full control of the processing. First you start with an edge target and take an image at the best possible focus, with the shortest possible distance over ground to avoid atmospheric disturbance. I took mine 170 m away across a meadow this morning at 7:50 am. It's actually helpful that the edge is not completely straight. My setup works anyway, but if you have a straight edge, make sure you tilt it with respect to the pixel array of your camera, to make use of super-resolution later on. If you just process a single pixel line (a horizontal line in the image), then you need to curve-fit the data with a known edge spread function (see my post before), but that was somehow not convincing, as I have been told. To make use of the fact that I have imaged the same edge over 600 times in the image above (all the horizontal lines in the image), one can deploy super-resolution processing. The algorithm, initially conceived by Airbus if I remember correctly, does the following for each horizontal line in the image: first, upscale the resolution by a chosen factor; I use 10x. This means that between each pair of actual pixels, 10 new pixels are placed and the data is interpolated linearly. Second, fit a sigmoid curve through the data; I use least squares as the measure of fit. The fitted sigmoid gives you an estimate of the offset of the current line from a chosen alignment coordinate in the image, at 10x sub-pixel resolution.
Finally, shift the line data into alignment with the alignment target. This results in 10x super-resolution aligned edge data, which looks like this (I flipped the data to be consistent with standard edge-detection literature). Now one can sum up the aligned line data and get a nice representation of the Edge Spread Function the optics produce. This is similar to the technique we use when we stack planetary images. Normalize the summed-up line once again and you get the ESF: now in high resolution and, because of the "stacking", with almost no noise. We know that with perfect optics this would not be a smeared-out curve but an instant step from 0 to 1 where the edge is. However, we have to be careful now. This is not what you would normally be able to recover when imaging or viewing an object. Don't forget I work at 10x the normal resolution, which is easy for a line target but a lot of computation for stacking planetary images. Now comes the important part. When I differentiate this curve, I get the next graph. I differentiate with finite differences on the real data, which increases the noise level: This is what's referred to as the Line Spread Function. It is the smearing/blurring/spreading of a perfect, infinitely thin line caused by the optics. Now we can make a comparison to the point spread function. In a 2D imaging plane, the spread a given optical device creates when transferring a perfect point source is referred to as the Point Spread Function. This also relates to the well-known Airy disk. (I am sure we are all familiar with this concept.) In a 1D imaging plane, the spread of a perfect line transferred through a given optical system is called the Line Spread Function. It is the same concept, just one dimension lower. In my case I make use of the fact that I can recover the Line Spread Function of any given optical system by imaging an edge target and then differentiating the image data.
That's just simple maths, and if people take issue with that I refer them to the literature. However, there are two things to remember: The LSF is not the same as the PSF; you would have to radially average the PSF and then integrate to get the LSF. If you assume a perfectly circularly symmetric optical system, the LSF and the PSF hold the same information with respect to the MTF. (The LSF is an integration of the PSF along one of its coordinates.) If you want to recover the 2D PSF for a given system, just image several edges with different alignments with respect to the optical plane and reconstruct the full PSF (just rotate the telescope with respect to your target 😀, or otherwise if more convenient). Now you can take a Fourier transform of the Line Spread Function and get the mysterious Modulation Transfer Function: here I plotted the theoretical MTF for my Skymax 180 at 530 nm light in green, the measured (no curve fitting) MTF of my Mak in red (however, that is at 10x super-resolution, so you won't get such a good result that often), and, for comparison, a theoretical MTF at 530 nm for a 125 mm Apo. The interesting thing is that the measured MTF is better than the theoretical one from about 0.8 onward to the right. This is due to the fact that I utilize super-resolution. However, I recover most of the MTF quite well with the measurement. Nice! Nevertheless, I interpret this graph as follows: if I don't want to utilize the tedious method of super-resolution, or have an atmosphere which hardly ever gives seeing conditions below 1 arcsec, or if I observe visually, then I am just as well off with a 125 mm APO. This is all of course for theoretically perfect optics of a given design. Collimation and optical defects are not considered at all.
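The per-line procedure above can be condensed into a sketch. This is not the author's Python program: the data is synthetic (logistic edges with random sub-pixel offsets plus noise stand in for camera rows), and the sub-pixel offset is read from the 0.5 crossing of the linearly upsampled row instead of a full least-squares sigmoid fit. It nevertheless follows the same steps: upsample 10x, estimate the offset, align, stack, differentiate, Fourier-transform.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rows, n_pix, up = 200, 64, 10
x = np.arange(n_pix)
xs = np.linspace(0, n_pix - 1, (n_pix - 1) * up + 1)      # 10x sub-pixel grid

# Synthetic camera rows: the same blurred edge at varying sub-pixel offsets.
offsets = rng.uniform(-2, 2, n_rows)                       # tilt/curvature of the edge
rows = [1 / (1 + np.exp(-(x - n_pix / 2 - d) / 1.2))
        + rng.normal(0, 0.01, n_pix) for d in offsets]

aligned = []
for row in rows:
    r = np.interp(xs, x, row)                              # piecewise-linear upsample
    cross = xs[np.argmin(np.abs(r - 0.5))]                 # sub-pixel edge position
    shift = n_pix / 2 - cross                              # move edge to the centre
    aligned.append(np.interp(xs, xs + shift, r))

esf = np.mean(aligned, axis=0)                             # "stack" the aligned rows
esf = (esf - esf.min()) / (esf.max() - esf.min())          # normalize to 0..1
lsf = np.gradient(esf, xs)                                 # differentiate -> LSF
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                                              # normalize MTF to 1 at DC
print(round(float(esf[0]), 2), round(float(esf[-1]), 2))   # 0.0 1.0
```

The stacking step is where the noise advantage comes from: averaging a few hundred aligned rows suppresses the sensor noise before the noise-amplifying finite-difference step.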
  5. Hi @vlaiv, no need to apologize; I think I misunderstood your eagerness to find the truth as offence. Apologies from my side are in order! Good, however, that we have cleared up that misunderstanding, so we can proceed in finding an answer together. I am currently working further on my algorithm and will present more as results come in.
  6. Is that so... Obviously I did use a proper algorithm, published in a paper on the subject... but you seem to take issue with what I have presented. For me this discussion has gone too far; you seem to have the urge to prove yourself to be the "wise guy" on the forum and seem to take pleasure in criticizing valid contributions of others... which is not very scientific... Some points of yours were valid, others not. I was hoping for a discussion among peers, but that does not seem to be possible. Good day to you, sir.
  7. Just for full disclosure, I do not attempt to bash any type of scope. On the contrary, I am in the position of having decided to swap my 180 mm Maksutov for a 125 mm refractor, for many reasons, but mainly for the simple fact that the few exceptional nights where I can harness the power of my Mak are just too few, and a 125 mm refractor is a far better choice for me, especially considering the whole MTF discussion above. Same here, but it seems one has to be careful on any forum. Clear Skies
  8. Hence my choice of 530 nm for the theoretical MTF. You seem to misunderstand: I never said that my CCD is not capable of resolving images taken through the Mak. What I said is that its resolution is not fine enough to resolve the ESF for piece-wise finite differencing, which is a different thing altogether. And yes, I am able to sample images properly. I wonder on what knowledge you base such a strong statement. Hmm. First off, I do not attempt to recover the PSF (point spread function) but the LSF (line spread function), which is the correct 1D equivalent of the 2D PSF. I never claimed to reconstruct a 2D PSF. Secondly, I urge you to read up on some scientific literature before you claim that a certain method is not valid based on your quick assessment. For example, pick any of:
https://www.spiedigitallibrary.org/conference-proceedings-of-spie/5251/0000/Fast-MTF-measurement-of-CMOS-imagers-using-ISO-12333-slanted/10.1117/12.513320.short (so you see it even has an ISO standard to it)
https://aapm.onlinelibrary.wiley.com/doi/abs/10.1118/1.1288682
https://aapm.onlinelibrary.wiley.com/doi/abs/10.1118/1.597264
https://www.spiedigitallibrary.org/conference-proceedings-of-spie/7109/710905/MTF-assessment-of-high-resolution-satellite-images-using-ISO-12233/10.1117/12.800055.short
https://www.osapublishing.org/abstract.cfm?uri=oe-18-4-3531
https://www.osapublishing.org/abstract.cfm?uri=oe-26-26-33625
and so many more.
  9. Yes, as my sampling is not that great, due to the CCD I have. So I fitted a sigmoid function to the data. The MTF is a Gauss curve (half of one), as the LSF is a Gaussian, which is the derivative of the sigmoid. I wish; nope, just pure sunlight 😉. Exactly my point, because this is what happens when we image. So the theoretical MTFs (mine for 530 nm) do not compare to our images. Also, we do not observe with our eyes at a single wavelength, so we always see an "average" MTF. Thanks for the reminder; I do know finite difference methods 😉. However, the data does not justify a piecewise differentiation. Give me a super high-resolution CCD and we'll get going. That is true, but the issue here is not the noise of the camera in the white area or in the black area (look at the second subplot, first row). The issue is that my CCD does not have small enough pixels to resolve the ESF better. However, as I image with that CCD, that is all I get, and it nicely demonstrates the capability of my system to actually resolve the theoretical MTF of my scope.
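The Gaussian statement above is easy to verify numerically: a sketch (sigma and units are arbitrary assumptions) showing that the Fourier transform of a Gaussian LSF is again a Gaussian, so the resulting MTF is the right half of a Gauss curve.

```python
import numpy as np

# A Gaussian LSF (the derivative of an erf-type sigmoid ESF), unit area.
x = np.linspace(-20, 20, 4001)
sigma = 1.5                                       # assumed blur width
lsf = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# MTF = magnitude of the Fourier transform, normalized to 1 at zero frequency.
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]

# Analytic FT of a unit-area Gaussian: again a Gaussian in frequency.
f = np.fft.rfftfreq(len(x), d=x[1] - x[0])
expected = np.exp(-2 * (np.pi * f * sigma)**2)
print(np.allclose(mtf, expected, atol=1e-6))      # True
```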
  10. As my contribution of a comparison to reality, I placed an "edge target" (just a piece of paper, half white, half black) at my 170 m distance test site and took some photos of it through my Skymax 180. If you look at my previous post on the "edge spread function", one can use that method to estimate the actual MTF of a scope. Here is my result, from top left to bottom right: the actual BW image of the edge target (the rain softened the paper, but that is not a problem); the edge spread data measured with my CCD (blue dots) and a data fit (red line); the reconstructed Line Spread Function; the theoretical Point Spread Function of the Skymax 180; and finally the MTF comparison. Now what do you think? Opinions welcome. Two ideas to take along: I consider the edge well focused in the "Image" subplot; at least, that is as good as I can manage with the mirror focus of the Skymax. The small overshoot of the red curve above the green curve in the MTF plot is due to the data fitting in the Edge Spread Function, so consider the two curves (green and red) to be equal until the red one falls below the green one.
  11. Now it's getting interesting, as we leave the perfect-optics world and close in on reality. Just a reminder: also consider how well you can focus your scope, and possible tube currents... and... ah yes, Baader prism or not 😄 I'll get the 🍿 and enjoy.
  12. I have never claimed that a 125 mm APO would provide perceivably better optical performance (better modulation transfer) than a 172 mm obstructed Mak. I know that you know what I mean, and I think that you wanted to make it clear to the readers that there is no advantage in optical performance of the 125 mm APO over the 172 mm Mak. I agree. However, do people really interpret MTF plots like that? I would summarize the comparison in different terms: "A 125 mm aperture unobstructed scope is equal in performance to a 172 mm aperture scope with a 58.5 mm CO down to a spatial frequency limit of about 2 arcsec. For finer resolution features the 172 mm scope is better." However, since we are close to the seeing limits people experience, one can further state: "A 125 mm aperture unobstructed scope might be preferable to the 172 mm aperture scope with a 58.5 mm CO in situations where the seeing is regularly around 1-1.5 arcsec and where the 125 mm scope has other advantageous properties, like faster cooling time, easier handling etc., besides its optical performance."
  13. I'm surprised you ask again; did you not answer that yourself in an earlier post? I assumed that people had already read through your posts and that we were all on the same page that we are talking about transfer of "features" of comparable contrast. 😉 I will correct my post.
  14. Hi all, and @vlaiv thanks for putting so much work into this thread already. I just wanted to add a slightly different perspective to the discussion. Let us for the sake of simplicity stick to one-dimensional ideas; I add the two-dimensional counterparts in brackets. Let us imagine we observe a bright light source but cover that source halfway with a black screen (this image is taken from https://en.wikipedia.org/wiki/Optical_transfer_function#/media/File:MTF_knife-edge_target.jpg). In an ideal world, with a perfect telescope, we would then observe a sharp edge with the strongest possible contrast: bright on one side of the edge and completely dark on the other. In reality we will never manage that, as the optics, no matter how good, will blur the image. So our sharp edge gets blurred. If we now plot the intensity of the pixels across that blurred edge (this image is taken from https://www.researchgate.net/figure/Computation-of-the-Modulation-Transfer-Function-using-the-knife-edge-target_fig2_237077760), we get a plot like the one on the left. This is what is called an edge spread function or ESF. If we apply the mathematical tool of differentiation (don't worry about what that actually does), we get the line spread function or LSF (in 2D this would be the well-known point spread function). The LSF tells us how much a perfectly thin line will be spread out by the optics we use. Large-aperture optics will spread out the line less than small-aperture instruments. Now we want a more comparative, overview-style measure of this spread, so we apply another mathematical tool called the Fourier transform (as @vlaiv described above). This tool allows us to plot a graph that describes how well signals of different spatial extents are transferred through our optical system, a graph we call the Modulation Transfer Function or MTF. Here you can see an example for a 180 mm Maksutov and different unobstructed APOs.
As @Captain Magenta pointed out, we can compare different optics if we keep the units on the X-axis (cycles per arcsec in my case). To the left we have the large spatial scales (low spatial frequencies). As an example, I plotted lines in the MTF graph for different sizes in arcsec and included well-known solar system objects in brackets for comparison. As we go towards the right of the graph, we get to smaller spatial scales and see how the transfer through the system decreases. An easy way to understand the MTF graph is with lines (and I know I simplify here). Imagine you observe a bright line through your scope that extends all the way across your eyepiece, with a width of 50 arcsec. That bright line would be transported essentially perfectly through all the telescopes in the MTF graph above (the MTF value is very close to 1). Let's make our bright line thinner, say 8 arcsec. First off, you see that all the compared scopes transfer that line less well than the 50 arcsec wide line; the MTF value has dropped. And you start to see differences between the scopes. However, that does not mean that the 8 arcsec wide line somehow disappears. No, it is just less well transferred through the optical system, which we often describe as being less "sharp". An interesting line width is the 2 arcsec line above, where the 125 mm aperture scope is slightly better than the 172 mm aperture scope with a central obstruction. At smaller line widths (towards the right) you see the 102 mm and 125 mm scopes steadily decreasing in their capability of transferring narrow lines. The Maksutov, however, has a region of flattened decrease, which is caused by the central obstruction. You might notice that how flat that region is depends on how large the central obstruction is (green vs magenta line). At a line width of 0.93 arcsec the 125 mm aperture scope reaches its Dawes limit, meaning that lines narrower than that do not get transferred at all through the system.
They would be so blurred that we could not perceive them well anymore (given that they have similar brightness to their surroundings and hence low contrast). The large-aperture scope (the Maksutov) continues to transfer small features reasonably well through its optical system until that scope also reaches its limit. One other thing to notice is the region around 1 cycle/arcsec on the X-axis. This is where 1 arcsec wide lines would be placed. I mention this because if we talk about the seeing being around 1 arcsec, we mean that an infinitely narrow line would be smeared out to about 1 arcsec width by atmospheric turbulence. For us observers on the ground this means that the atmosphere blurs everything on that scale and that we cannot make use of our fine telescope capabilities (the region right of that limit, where the Maksutov outperforms the 125 mm APO) on that particular night. However, this discussion does not directly relate to imaging; it is only valid for visual observation. When we image and utilize lucky-imaging techniques, we can "break through" the seeing-induced blur and harvest the capabilities of our large-aperture scope. In practical terms, that's the reason why I prefer the 5" Apo for visual and would pick up a C14 for imaging, given a large budget and a beach house in Barbados (neither of which I have!) 😉 Hope this helps a bit in understanding MTFs.
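MTF curves like the ones discussed here (clear vs obstructed aperture) can be generated from first principles: the diffraction-limited MTF is the autocorrelation of the pupil. A numerical sketch, not the code behind the plots in this thread; the obstruction ratio is the one quoted for the Mak, the grid size is arbitrary, and both curves are shown against their own normalized cutoff (the physical cutoff scales with aperture/wavelength):

```python
import numpy as np

def diffraction_mtf(obstruction=0.0, n=512):
    """1D cut of the diffraction-limited MTF of a circular pupil with a
    central obstruction (fraction of the pupil radius), computed via the
    pupil autocorrelation. Index 0 is zero frequency; the last index is
    (almost) the cutoff frequency."""
    c = np.linspace(-2, 2, n)                  # pupil radius = 1, padded grid
    X, Y = np.meshgrid(c, c)
    rho = np.hypot(X, Y)
    pupil = ((rho <= 1.0) & (rho >= obstruction)).astype(float)
    # Autocorrelation via the Fourier transform (Wiener-Khinchin theorem).
    acf = np.abs(np.fft.fftshift(np.fft.ifft2(np.abs(np.fft.fft2(pupil)) ** 2)))
    mtf = acf[n // 2, n // 2:]                 # central horizontal cut
    return mtf / mtf[0]

apo = diffraction_mtf(0.0)                     # unobstructed refractor shape
mak = diffraction_mtf(0.34)                    # ~34% CO Maksutov shape
k = int(0.15 * 512)                            # roughly 0.3 of the cutoff
print(mak[k] < apo[k])                         # True: the CO costs mid-frequency contrast
```

This reproduces the qualitative behaviour described above: the obstructed pupil loses contrast in the mid frequencies and partly recovers near the cutoff, which is the "flattened decrease" region.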
  15. @vlaiv thank you for clarifying and extending the discussion a bit further. This is much appreciated 👍 I indeed agree with you that we should not further derail this thread and should keep it to the more practical aspects 😉 As you might have seen in this thread, I set up a test-target "Jupiter" today (170 m away from my scope). At the risk of sounding strange, I do look forward to sunrise tomorrow so I can keep observing what I can see on my target without the atmosphere interfering. This is truly a good exercise for testing a scope. Weather conditions are not good these weeks at my location, so this setup is a great way to continue observing 😉
  16. Hi @vlaiv, I never said that any features would disappear in any given telescope. I merely pointed out spatial frequency limits and their respective sizes in the spatial domain (e.g. a 2 arcsec limit in space translates into a frequency limit of 0.5 cycles per arcsec). Probably the misleading term was "features"; "spatial scales of above 2 arc seconds" would have been better (I corrected that in the original post). Either way, I want to make two points in response to your post: Plotting spatial frequency cut-offs on a power spectrum does make sense; otherwise you would doubt the meaning of the Dawes or Rayleigh limit. I am sure you are familiar with such cut-off limits from signal processing. Your doodled feature obviously contains many more spatial frequencies than its largest dimension, given that it is of complex shape. Hence the rich FFT and the spread in the power spectrum response. No surprise there. However, I fail to see the connection to my post. Bringing this back to practical observing: I am sure we can agree that the GRS, even though it is about 8 arcsec in diameter, has many more higher spatial frequencies due to its internal structure (I never claimed otherwise). I am sure we can also agree that features do not disappear when talking about resolution and cut-off limits in the spatial frequency domain (as stated initially, I never claimed that either). A practical example would be the Cassini Division in Saturn's rings. If I remember correctly, it is 0.75 arcsec wide at its widest location. The Dawes limit of a 125 mm aperture is 0.93 arcsec. If the (never made!) claim held true, then you would NOT be able to see the Cassini Division in a scope with an aperture below 155 mm, which we all agree is nonsense. So in conclusion, I don't see what you mean by "It does not work like that", but I am sure you meant well. CS Alex
  17. Maybe to add to the discussion on what a 4 and 5 inch APO can show in comparison to a 7 inch Mak, here are the theoretical MTFs of the scopes. [DISCLAIMER] The spatial frequency limits plotted below are translated to actual spatial scale limits. For comparison, the diameters of known objects are listed in brackets. The horizontal resolution is now in actual units (you can check where the Dawes limit for the 125 mm APO is), so one can compare. Down to a 2 arcsec spatial frequency limit (that's about 1/4 of the GRS in spatial scale), the 125 mm APO will actually be better than the Skymax. As for the Skymax 180 mm, which is closer to 172 mm (though that does not matter much for the MTF), the actual problem is that the central obstruction is larger than expected. So on paper the Skymax is the green line, but in reality it is more like the cyan line, and hence the 125 mm APO compares well for quite a long way (resolution-wise). However, people are right: on the best of seeing nights (around 1 arcsec and below), the Skymax will be better.
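The axis numbers in plots like these come from two one-line conversions. A quick sketch (530 nm assumed, matching the plots in this thread; Dawes' formula is the usual empirical one):

```python
# Dawes' empirical resolution limit for an aperture given in mm.
def dawes_limit_arcsec(aperture_mm):
    return 116.0 / aperture_mm

# Diffraction cutoff of the MTF in cycles per arcsec (1 rad = 206265 arcsec).
def mtf_cutoff(aperture_mm, wavelength_nm=530.0):
    cycles_per_radian = (aperture_mm * 1e-3) / (wavelength_nm * 1e-9)
    return cycles_per_radian / 206265.0

print(round(dawes_limit_arcsec(125), 2))   # 0.93 arcsec, as quoted for the 125 mm APO
print(round(mtf_cutoff(172), 2))           # ~1.57 cycles/arcsec for the 172 mm Mak
```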
  18. Regarding the observation target: I just had a great session on Jupiter, which is currently, at my place, about 45 arcsec in diameter 😉 at a nice altitude, and local seeing is good 👍 As an explanation: I placed a 4 cm diameter print-out (1200 dpi) about 170 m away from my scope across the meadow, stapled to a wooden rail at the trailhead. Now that is what I call a view. However, something interesting to observe regarding dT/dt (cooling): for the first 10 mins the scope was okay, then tube currents set in, which made it really hard to focus on "Jupiter", and now the scope is useless for at least an hour (my experience). I can simulate the same focusing problems as on the real Jupiter with this setup. I can also test magnifications, exit pupils and eyepieces and, thanks @vlaiv, test the performance of the scope! I recommend it. I will go outside in an hour and see how the scope does then.
  19. Just went through the original German post again; the optimization was calculated for visual use. However, I agree: I'm gonna use my mirror diagonal (which I think is pretty good) and enjoy. Keep life simple. And if one day I can't stand the colours in my apo, then I will start thinking about the glass path again. 😉
  20. Thanks for pointing that out. I have been looking into that and into what to use on the 125 mm F/7.8. A guy on a German forum (original link in German, last post by Gerd-2) calculated the optimal glass path to be added to the 125 mm F/7.8 to get the best colour correction out of the system. Here is the short summary: if you add 65 mm of glass path to the 125 mm F/7.8, you can achieve the following Strehl values: 480 nm ... Strehl 0.84; 644 nm ... Strehl 0.85; polystrehl with 65 mm of glass in the optical path is 0.95. There you go: if that is really possible, then you have a great system at hand. Regarding the Baader prisms: the standard 1.25" prism version with T2 threads (details here) adds about 35 mm of glass path; the BBHS 1.25" prism version with T2 threads (details here) adds about 47.5 mm of glass path; the BBHS 2" prism version with ClickLock (details here) adds about 100 mm of glass path. So, given our German friend is correct, the BBHS 1.25" prism would be closest to the 65 mm that is needed. But be aware: if you plan on a bino-viewer, that would add more glass to the optical path, and then you might end up over-correcting and possibly end up worse off than you started. I personally will start with my mirror diagonal and then see how the colour representation is. Afterwards I will work from there... CS Alex
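As a trivial cross-check of the conclusion above (glass-path figures as quoted in this post; the dictionary keys are shorthand labels, not Baader's product names):

```python
# Glass-path figures quoted above, in mm; the target is the computed optimum.
target_mm = 65.0
prisms = {
    'standard 1.25" T2 prism': 35.0,
    'BBHS 1.25" T2 prism': 47.5,
    'BBHS 2" ClickLock prism': 100.0,
}
# Pick the prism whose glass path is closest to the 65 mm optimum.
best = min(prisms, key=lambda name: abs(prisms[name] - target_mm))
print(best)  # BBHS 1.25" T2 prism
```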
  21. Hi all, and once more a big "thank you" for all the contributions. My mind is set: I will get the 125 mm F/7.8 Doublet (FPL-53 & Lanthan). For me, that appears to be the best bet: a doublet to cool fast, as I mostly do visual (so no real need for a triplet), and the largest APO I can currently afford. So there we are. Ah yes, and a set of really nice marbles to place across the meadow behind the house to observe something (thanks @vlaiv). Then I am "seeing"-independent and gonna have a lot of fun 😉 I'll report back when I've got the scope. It will take until spring though, because I want to see some planets before I form an opinion on the quality of the scope. CS, Alex
  22. Great idea with the marbles @vlaiv, I definitely have to try that. I want to take the opportunity mid-discussion to thank all the contributors so far. Lots to consider. I am still between the 4 and 5 inch class, and here is what I currently consider, also reflecting on the eyesight I have: I am used to exit pupils down to about 0.6 mm from my Mak; below that, eye floaters take over so much that I cannot observe anymore without losing significant detail. That would mean for the 102 mm f/11 a magnification of about 160x (nice for Jupiter). However, for Saturn and Mars that would be too little for my taste. The 125 mm f/7.8 doublet, though, would make magnifications up to almost 200x possible, with a readily available EP (Vixen SLV 5 mm) that I like. So I tend towards the 125 mm doublet. Unfortunately the TAK FC100 is out of my budget... else I would consider a high-quality 4".
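The magnification figures above follow from the exit-pupil relation (exit pupil = aperture / magnification; magnification = focal length / eyepiece focal length). A quick sketch with the numbers from this post:

```python
# Maximum useful magnification for a given minimum tolerable exit pupil.
def max_magnification(aperture_mm, min_exit_pupil_mm=0.6):
    return aperture_mm / min_exit_pupil_mm

# Magnification and exit pupil for the 125 mm F/7.8 with a 5 mm eyepiece.
focal_mm = 125 * 7.8                      # 975 mm focal length
mag = focal_mm / 5.0                      # Vixen SLV 5 mm
exit_pupil = 125 / mag

print(round(max_magnification(102)))      # 170
print(round(mag), round(exit_pupil, 2))   # 195 0.64
```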
  23. Absolutely true. That is one of the main issues I have with the Skymax 180. Thanks @John @dweller25, I agree. After reading through all the posts here (a big thank you to all who have contributed so far), I tend towards the 5" class with the following contenders: TS-Optics 125 mm Doublet F/7.8 FPL-53 Apo; Explore Scientific Triplet ED Apo 127 mm F/7.5; Skywatcher ED 120. The TS seems to be the same as the Altair Wave Series 125 EDF F7.8 APO (info here) and the Tecnosky AP 125/975 ED (info here). And there are probably more to find. Does anybody know of any obvious differences between these re-branded scopes, or are they all from the same factory? CS, Alex
  24. Dear fellow stargazers, I am pondering the question of which refractor to buy for observing the planets (Moon included) and maybe some globular clusters as well as double stars. But the focus is on visual observation of planets. I might occasionally pop my planetary camera on the scope, but that is not my main interest. Currently I have a Skymax 180, which is nice, but the cool-down time is a challenge for me even with an insulated scope. Since I recently learned that it actually has a 34% obstruction (MTF plots here), I am on the lookout for a new scope. My current contenders are: the TS-Optics 102 mm F/11 ED refractor (details here), and the TS-Optics 125 mm Doublet F/7.8 FPL-53 Apo (details here) or similar, like the Explore Scientific Triplet ED Apo 127 mm F/7.5 (details here). I could stretch my budget to get one of these 5" refractors but won't be able to afford a luxury-class scope like the APM LZOS 130 mm F/9.2 (details here). My mount is a Celestron AVX. So I was contemplating whether the 5" is a worthwhile investment for visual planetary work, or whether the 4" F/11 is the better choice given its long focal length. This would be my first refractor, so I would appreciate any input on the matter. Clear skies, Alex
  25. I agree with @Nik271. Holding the collimation and having hopefully high-quality mirrors should do the trick for the Mak design. Just for fun I plotted the MTFs for all we know, including the 4 and 5 inch APOs. As we can see, the possibly smaller aperture (172 mm instead of 180 mm) is really no problem for image quality, except maybe brightness. The larger central obstruction (58.5 mm, i.e. 34% of the 172 mm aperture) is, however, indeed a problem, putting the 7" Mak down to the level of a 5" APO for larger features. The 6" APO already does a lot better. Mind you, we can hardly see the very fine resolution features (maybe 0.6-0.7 and upward on the X-axis in the image above) due to seeing. Had we gotten the instrument we thought we paid for, as advertised by Skywatcher (green line), we would have gotten an instrument as good as or better than a 5" APO. But we did not get that. I still like my scope; however, every time I find such strong deviations between specs and reality in astronomical equipment that I have paid a substantial amount for, I think that this is really the part that sometimes takes the fun out of this hobby. Clear skies, Alex