Everything posted by vlaiv

  1. I've seen it before - it looks like it is very common on Sony CMOS sensors. My ASI178MCC has amp glow that is a cross between the ASI1600's and the one in that image. Corners are affected like in the ASI1600, but in each corner there is a "source of rays". Here is what it looks like: And to my knowledge, my ASI178MCC calibrates ok regardless of the strange amp glow. Ok, still not following (sorry about that). Well, I do understand what you mean, I just don't fully understand the consequences of it. So you are saying there are parts of the chip that are at a different temperature than the rest of the chip, right? Do you have any idea how that temperature behaves in time?
     1. It is just a patch of sensor surface that is at a higher temperature than the rest of the chip, but stable - let's say the sensor is mostly at -20C but the corners are at -18C
     2. When there is no exposure going on, the whole sensor is at -20C, and when an exposure starts, the sensor remains at -20C but a few patches in the corners start rising in temperature:
     2a) they rise asymptotically to a certain value quickly (in a few seconds)
     2b) they rise asymptotically to a certain value slowly (thus not really reaching that value within common exposure times)
  2. Not sure how that is related to amp glow if we assume it is due to a buildup of heat in an adjacent component that is being fed into the sensor via metallic circuitry. The sensor is kept at constant temperature because all excess heat is taken away by the Peltier. If amp glow is rising faster than dark current - that means that part of the chip is at a higher temperature than the rest of the chip, and it is accumulating heat (if it were only at a higher but stable temperature we would not be in trouble, since there is time linearity at different temperatures). Some of that heat will indeed be taken away by the Peltier via the sensor, but if it is rising, it will also be in dynamic exchange with its surroundings - some of the heat will dissipate elsewhere because not all of it is being drawn away by the Peltier. Otherwise the amp glow temperature would also be stable. And if it is dissipating heat by means other than the Peltier - the speed of dissipation will depend on the temperature gradient between that component and wherever the heat goes.
  3. Regardless of Peltier cooling, if amp glow is produced by heat buildup - this means there is excess heat that is not being removed via the Peltier. Such heat will be dissipated by the regular means of heat transfer to the surroundings (conduction and convection). The efficiency of such heat removal will depend on environmental temperature. Even if we look at the camera as a closed system, the camera casing will be at a different temperature (it is used to remove heat from the system) depending on the actual surrounding temperature. In a cold environment the casing will remain relatively cold. In a hot environment the casing will become hotter, and hence everything inside the casing will be at a higher temperature, meaning less effective heat dissipation from the amp glow producing component - bigger thermal buildup in it - stronger amp glow. I guess the temperature sensor does not need to be accurately calibrated as long as it is "reproducible". Whether I'm actually shooting at -18C while it says -20C, I don't think it matters, as long as each time I tell it to go to -20C it settles at -18C.
  4. On the topic of CMOS amp glow: https://www.cloudynights.com/topic/599475-sony-imx183-mono-test-thread-asi-qhy-etc/page-3 There is some interesting discussion and data. One thing that intrigues me is the ray-like feature in the amp glow of CMOS sensors. That is not something I would expect from heat. It looks more like there is some sort of electron buildup due to electric / magnetic fields from adjacent circuitry, rather than heat. Due to the fast readout there are components in CMOS sensors that operate in the MHz range. Could this all be due to rogue EM radiation?
  5. Very good point, I have not looked into it, but it looks like I'm going to have to. There is another explanation why the ASI1600 would not calibrate properly with darks of a different exposure. It turns out there is a sort of "floating" bias with the ASI1600. Whether this is something implemented in the drivers or a feature of the sensor itself, I have no idea, but it turns out that the bias level drops with exposure length. While measuring sensor data I've seen that average levels for a 1m exposure are lower than those of a straight bias at the same settings and temperature. I was under the impression that it stabilizes at some point, but it might not be so. The only thing I've noticed is that after 30s or so the average value starts rising - as one would expect due to the buildup of dark current. If the sensor indeed changes bias level depending on exposure length (and it seems so), that would also be a reason why darks of a different exposure would fail at calibration. One needs the bias removed prior to adjusting dark current levels. On the amp glow topic - what you are saying sounds reasonable, but it would have another implication: there would be a difference in amp glow between darks of the same exposure and settings taken under different ambient temperature conditions. The equilibrium amp glow would depend on the efficiency of heat dissipation from the system - which is higher in a cold environment than in a hot one. I would expect amp glow to be weaker in darks taken on a cold night outside vs darks taken at room temperature. I'm shooting my darks indoors, and calibrate lights taken in at least 10C colder conditions with them. I've not noticed residual amp glow after calibration. I'm not saying it is not there, I just have not seen it, and that might well be because I was not paying attention.
If it turns out that there is indeed a dependence of amp glow on ambient temperature at the same set-point temperature and settings, I guess one would need a couple of sets of master darks taken at different ambient temperatures (like summer darks and winter darks, and one set for spring / autumn or something like that). I don't really know the nature of amp glow in CMOS sensors. With CCD sensors it was indeed amp glow - since CCDs have a single amp unit, and if that electronics was too close to the sensor it would cause problems - heat travels via the circuitry back towards the CCD (metal conducts electrons one way but conducts heat equally well in the opposite direction). With CMOS sensors there is an amp unit associated with each pixel, so I doubt what we see and call amp glow is really related to the amp unit(s) in CMOS.
  6. It is meant for both CCD and CMOS sensors - well, at least "well behaved" ones. It is a proper calibration method in the sense that it removes all but the light signal from the light frame and correctly applies flats (only to the light signal). I have an ASI1600 myself and that is what I use for calibration. On a side note, a well behaved sensor is one that:
     - has a predictable bias signal (read noise is random, but the bias signal is always the same for every pixel)
     - has dark signal dependent on temperature and exposure duration (it grows linearly with exposure length at fixed temperature). It can have uneven dark current (amp glow), but it needs to behave as described per pixel
     - has a linear response to light.
  7. Yes, you can skip bias frames. Not only can you skip them, they are not needed with set point cooled cameras if you use darks of matching length and temperature. Calibrate like this:
     master dark = avg(darks)
     master flat dark = avg(flat darks)
     master flat = avg(flat - master flat dark), or equivalently: master flat = avg(flat) - master flat dark
     calibrated light = (light - master dark) / master flat
     Use as many calibration frames as you can, use at least 32bit precision, and that is it.
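As an illustration, the recipe above can be sketched in a few lines of Python with NumPy (this is my own minimal sketch, not code from any particular stacking software; it assumes frames are float image arrays, and it normalizes the master flat to unit mean so dividing does not change the light's overall level):

```python
import numpy as np

def make_master(frames):
    # Average-combine a stack of frames at 32-bit float precision
    return np.mean(np.asarray(frames, dtype=np.float32), axis=0)

def calibrate(light, darks, flats, flat_darks):
    master_dark = make_master(darks)
    master_flat_dark = make_master(flat_darks)
    # master flat = avg(flat - master flat dark)
    master_flat = make_master([f - master_flat_dark for f in flats])
    # Normalize so dividing by the flat preserves the light's scale
    master_flat /= np.mean(master_flat)
    # calibrated light = (light - master dark) / master flat
    return (np.asarray(light, dtype=np.float32) - master_dark) / master_flat
```

With matched-length, matched-temperature darks, the bias is contained in the master dark and the master flat dark, which is why no separate bias frames are needed.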
  8. Do you have the exact matching lens adapter for the ASI1600? Not all ZWO cameras have the same T2 / 2" thread to sensor distance, so the lens adapter should be adjustable for flange distance (which for Canon EF / EF-S is 44 mm). If the adapter is adjustable, is it lockable (with screws, for example)? It can cause tilt if not tightened properly - well, anything can cause tilt if not tightened properly (and sometimes even if tightened properly but manufactured to the wrong specs).
  9. This looks familiar, I used to get star shapes looking like that, and the problem is twofold. First is the sensor / lens distance - you need to get it spot on for a flat field. What you see in the image above is mostly astigmatism. The right side of the image is in focus but has astigmatism - cross shape, while the left has out of focus astigmatism - elliptical shape - bad sensor / lens distance. Second is that you have some sort of tilt in the optical train - the sensor is tilted in relation to the focal plane. Right side in focus, left a bit out of focus. Ha subs still display this sort of behavior, but it is much less pronounced because star shapes in the corners, when using refractive optics, depend on the wavelength of light. So in this particular configuration Ha (red part of the spectrum) is less affected, green would be a bit more, and blue still more than that.
  10. Yes, well, probably closer would be to say 1/4 horizontally (3888 is close to 4000) and 1/2.5 vertically. But do have a look at Stu's post above to get a feel for the size of the Moon in an image - it clearly shows how much of the image the Moon would take up.
  11. So you are interested in two things (if I'm getting that right): 1. How much larger would an object be compared to a certain lens? Simple ratio of the focal length of the given lens to the 700mm of your reflector (provided we are talking about the same camera). 2. What would the actual size of the object be on your image? For this you need to know the angular size of the object and the pixel size of your camera (that is roughly sensor size divided by resolution, so in your case it would be 5.7um - 5.7 micrometers - or you can look it up online). Then you need to use the following formula: http://www.wilmslowastro.com/software/formulae.htm#ARCSEC_PIXEL to get the resolution of the image in arc seconds per pixel - in your case it is 1.68"/pixel. Divide the target size (in arc seconds) by that value and you will get the number of pixels the target will span on the image. So the Moon, which is 30 arc minutes (that is 1800 arc seconds), will be ~1070 pixels wide on the image.
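The arithmetic above is easy to script. A small Python sketch of my own, using the standard 206.265 constant that relates pixel size in microns and focal length in mm to arc seconds per pixel:

```python
def arcsec_per_pixel(pixel_um, focal_length_mm):
    # Image scale: 206.265 converts um / mm into arc seconds per pixel
    return 206.265 * pixel_um / focal_length_mm

def target_size_pixels(target_arcsec, pixel_um, focal_length_mm):
    # Size of a target of given angular size on the sensor, in pixels
    return target_arcsec / arcsec_per_pixel(pixel_um, focal_length_mm)
```

With the numbers above, `arcsec_per_pixel(5.7, 700)` comes out at ~1.68"/pixel, and the 1800 arc second Moon at roughly 1070 pixels.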
  12. In all seriousness, a telescope without an eyepiece or camera is a projection device rather than a magnifying device. As pointed out by ronin, a telescope forms an image of the object on the focal plane at a certain size. That size depends on the focal length of the telescope and the actual size and distance of the object (or in other words - the angular size of the object). Magnification would imply conversion of scale in a single unit - so we can say that the Moon is magnified x60 if its angular size is 30 degrees, given that we see the Moon without magnification at 30' (or 0.5 degrees). A telescope on its own does not convert scale in a single unit - it does unit conversion (projection) - from angular to planar / spatial, so we end up going from angular size to length. Now, an eyepiece / telescope combination does do magnification. A telescope / prime focus camera does not do magnification in that sense; it is also a projection device - with the addition of a sampling rate. Sampling can be seen as the number of pixels per mm, or the number of pixels per angular size - this is what gives the illusion of "magnification" or zoom in an image. It is not really magnification or zoom, it is just sampling resolution. The same image can look both small and big, depending on the monitor we use to display it and the distance to the observer - look at any image on a computer screen from a "normal" distance and then move 10 feet away - it will look much smaller (but it is the same image, sampled at the same resolution). So resolution is not a measure of magnification / zoom in the normal sense.
  13. So as you can see, the telescope is not magnifying at all! It is minifying - making the Moon, which is 3474 km in diameter, appear to be only 6.1mm!
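The projection view makes this a one-line calculation: the linear size at the focal plane is the focal length times the tangent of the angular size. A quick Python check (my own sketch, assuming the 700mm focal length mentioned earlier):

```python
import math

def image_size_mm(angular_size_deg, focal_length_mm):
    # Linear size of the projected image at the prime focus
    return focal_length_mm * math.tan(math.radians(angular_size_deg))
```

`image_size_mm(0.5, 700)` gives ~6.1mm for the half-degree Moon, matching the figure above.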
  14. Hm, I did a similar analysis recently - Heisenberg vs the Huygens-Fresnel principle - and it turned out there is a numerical mismatch, but I did not look at Suiter's example; I will have a look. As far as I understand, a circular aperture in telescopes (obstructed or clear) produces the Airy pattern as the PSF on the focal plane for a point source aligned with the optical axis. If we explain the phenomenon in classical terms (Huygens-Fresnel) - it is because on the aperture plane there are an infinite number of points acting as point sources for spherical waves (all in phase, because the incoming wave front hits them in phase), and the interference of those waves at the focal plane gives rise to the Airy pattern. On the topic of Airy PSF vs seeing PSF - we are indeed applying wavelets to a stacked image that has been under the influence of seeing, but the actual final image, due to lucky imaging, has 2 blur components - the Airy PSF and a Gaussian distribution of seeing influence. Each stacked frame was blurred by a particular seeing PSF, and each frame had a different seeing PSF applied to it, but we align and stack such frames, which leads us to the central limit theorem - the sum of individual random seeing PSFs will tend to a Gaussian shape. Each frame has the Airy PSF of the telescope applied to it, but that is the same every time. Now, a property of the Gaussian PSF is that it attenuates high frequencies but there is no cut-off point - all frequencies are present, and therefore can be restored with frequency restoration - the power spectrum of a Gaussian PSF is Gaussian (the FFT of a Gaussian is a Gaussian). So the cut-off that applies is still the Airy PSF cut-off, and all the seeing-attenuated frequencies can be restored given enough frames. On the matter of sampling and the Nyquist criterion in 2d - it is simple: on a square grid you need a x2 sampling rate (on a rectangular grid you need x2 sampling rate in x and x2 sampling rate in y, but on a square grid the units are the same in x and y) to record any planar wave component.
Imagine a sine wave in the x direction (the y component is constant) - it is the same as the 1d case, so we need a x2 sampling rate to record it. Also imagine a sine wave in the y direction (x is constant) - we need a x2 sampling rate in the y direction to record it - same as the 1d case. Now for a sine wave in an arbitrary direction - its projections on the x and y axes will have longer wavelengths, so it will be properly sampled with x2, because each component will have a lower frequency than a wave running purely in x or y. So 2d Nyquist is clearly the same as 1d: on a square grid we need 2 samples per shortest wavelength we want to record. Now, I did not do my analysis mathematically, but I will present what I've done and how I came to the conclusion, which I believe is supported by the above example. I generated an Airy PSF from an aperture, both clear and obstructed (25%), and made the Airy PSF 100 units (pixels in this case) in diameter. Here is a screen shot of the clear aperture PSF (normal image, stretched image to show diffraction rings, profile plot, and log profile plot to compare to the Airy profile): This is what the obstructed aperture looks like (same things shown in the same order): you can clearly see energy going into the first diffraction ring - the cause of lower contrast when observing visually. Now let's examine what the MTF graph looks like for both cases. First, clear aperture: And obstructed: If you compare those with the graph in the first post - you will see that they match. Now the question is, at what frequency / wavelength does the cut-off occur? We started with an Airy PSF with a disk diameter of 100 units. I did not do the calculation, but rather "measured" the place where the cut-off is. In this image a log scale is used, so where the central bright spot finishes, that is the cut-off point: "Measurement" tells us that the cut-off is roughly at 40.93 pixels per cycle. This means that we need our sampling to be 40.93 / 2 units, or ~20.465 units long. How big is that in terms of the Airy diameter?
The Airy diameter is 100 units, so let's divide the two to see what we get: ~4.886. So in this case we get a factor of ~x4.89 per Airy disk diameter, or ~x2.45 per radius, for proper sampling resolution. When I did my original post I also did a measurement and found that the rounded-off multiplier is x4.8 in relation to the diameter, or x2.4 to the radius. So the true value is somewhere in the range x2.4 - x2.45. We would probably need rigorous mathematical analysis to find the exact value, but I guess x2.4 is good enough as a guide value - most people will deviate from it slightly anyway, because there is only a limited number of barlows out there. So to recap:
     going x2 Airy disk radius - undersampling (you might miss out on some detail, but really not much - it is the 4pp example from the first post)
     going x2.4 Airy disk radius - seems to be the proper sampling resolution for planetary lucky imaging (if you buy into all of this)
     going x3 or x3.3 - you will be oversampling, thus losing SNR with no additional detail gain (it is the 6pp image from the first post)
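The measured factor agrees with what you get analytically: for focal ratio F and wavelength lambda, the Airy first-minimum diameter on the focal plane is 2.44 * lambda * F, while the MTF cut-off corresponds to a shortest period of lambda * F, so the Nyquist pixel is lambda * F / 2. A small Python sketch of my own, just restating those quantities:

```python
def pixels_per_airy_diameter(wavelength_m=510e-9, f_ratio=10.0):
    airy_diameter = 2 * 1.22 * wavelength_m * f_ratio  # first-minima diameter on the focal plane
    cutoff_period = wavelength_m * f_ratio             # shortest period passed by the aperture
    nyquist_pixel = cutoff_period / 2                  # two samples per shortest period
    return airy_diameter / nyquist_pixel
```

The wavelength and focal ratio cancel, so the result is 4.88 pixels per Airy diameter regardless of the inputs - squarely within the measured x2.4 - x2.45 (per radius) range.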
  15. Oops, I just realized something. The x2, x3 and x3.3 sampling figures were given in relation to the Airy disk radius, not the diameter. My example was talking about x4.8 of the Airy disk diameter. This means that the actual value comparable with those listed is x2.4 of the Airy disk radius. So yes, if you feel confident in my analysis (and I'll post some more info to back all of this up tomorrow, it's getting a bit late now), please use that resolution - it will provide you with better conditions for planetary imaging than the other "optimal resolutions" quoted. It will provide more detail than x2, and it will have higher SNR per frame than x3 and x3.3 (but the same amount of detail - the maximum possible for the given aperture).
  16. Yes, that would pretty much be it. You take the size of the Airy disk for blue (highest frequency, shortest wavelength) and find the size of the Airy disk in seconds of arc (the formula is 1.22 * lambda / aperture diameter - this gives the angle to the first minimum in radians, so you need to multiply by 2 and convert to arc seconds; both lambda and aperture are in meters). Once you have found your Airy disk diameter, divide by 4.8 - that gives the resolution you should be sampling at, and from that you calculate the FL. So for the first scope you mentioned: 2 * 1.22 * 0.000000450 / 0.3 = ~3.66 microradians = ~0.755" diameter of the Airy disk in blue (450 nm). This gives a resolution of ~0.1573"/pixel. For a camera with 3.75um pixels, the FL required for this resolution would be ~4920mm, so you are looking at an x3.25 barlow. You will not miss much by using an x3 barlow (you don't need to sample blue to the max; you can go with green instead - blue will be undersampled a bit, but it will not make too much difference). Now, this is all more theoretical than practical - that is the maximum achievable resolution with perfect optics, perfect seeing and frequency reconstruction (with wavelets or something else). As such, this should be a guide to the maximum usable resolution, above which there is simply no point in going higher. In a real use case you might not even want to go with such a high resolution - it requires exceptional seeing, and as sampling resolution increases, SNR per frame goes down - so you need plenty of good frames in the stack in order to recover SNR to levels useful for frequency reconstruction. I did all of this analysis because I was always intrigued by the fact that there are many different "optimal" resolutions out there in the "wild", and I wondered which one is correct. Now, x2 per Airy disk is somewhat related to visual use - that is how x50 per inch (or x2 per mm) for maximum useful magnification is derived. That, and the average human eye resolution of 1 minute of arc.
But the eye can't do what computers do with frequency reconstruction - we can't make our eye amplify certain frequencies - it sees what it sees. This is the main reason behind x3 and x3.3; I guess people just realized that sampling at higher resolution does indeed bring more detail after wavelets (and there is a certain article that does a similar analysis to the above, but for some unknown reason quotes x3 for 2d Nyquist, and gives a wrong interpretation of the cut-off wavelength that arises from the given Airy PSF). Central obstruction plays absolutely no part for imaging. If you look at the MTF graph, a central obstruction has the same cut-off frequency, and its only impact is to attenuate certain spatial frequencies a bit more (larger details will have a bit less contrast compared to an unobstructed scope - this is important for visual, and you often hear that refractors, having no CO, have better contrast on planets - that is the reason). With frequency reconstruction this is of no importance.
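The worked example above can be wrapped up in a short Python sketch (my own, using the same 1.22 * lambda / D formula and the x4.8 factor; 206265 converts radians to arc seconds):

```python
def required_focal_length_mm(aperture_m, pixel_um, wavelength_m=450e-9):
    # Airy disk diameter in arc seconds: 2 * 1.22 * lambda / D, converted from radians
    airy_arcsec = 2 * 1.22 * wavelength_m / aperture_m * 206265
    sampling_arcsec = airy_arcsec / 4.8           # target sampling rate per pixel
    return 206.265 * pixel_um / sampling_arcsec   # required focal length in mm
```

`required_focal_length_mm(0.3, 3.75)` returns ~4918mm for the 0.3m scope, close to the figure quoted above.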
  17. There are a few recommendations around the web regarding the optimum planetary resolution with respect to telescope aperture. I've found values of x2, x3 and x3.3 pixels per Airy disk. While playing with ImageJ, generating different Airy disk patterns (obstructed, clear aperture, different sizes) and examining the results, I've come to the following conclusion: the optimum resolution for planetary imaging, when including wavelet post processing, is x4.8 pixels per Airy disk diameter. My conclusion is based on the following: x2 sampling (as per Nyquist sampling on a square 2d grid) of the frequency that is attenuated to effectively 0 if one examines a perfect Airy disk pattern. If we look at the following graph representing attenuation depending on frequency (the well known MTF graph - which represents a 1d cross section of the power spectrum of the Airy disk pattern, for clear and obstructed apertures) (dashed line - clear aperture, solid line - obstructed aperture), we can see that frequencies very close to the cut-off are attenuated by 99% or more - down to 1% of their original strength. For visual observation that is significant because the loss of contrast at those frequencies is severe, but when one applies the frequency restoration and enhancement (effectively contrast boosting) of a wavelet algorithm - such a frequency can easily be restored by multiplication with a factor of 100 (something the eye can't do, but a computer can easily, with an appropriate algorithm). Now, in my simulation of the Airy disk influence, I've found that for a 200mm aperture with 25% central obstruction, the cut-off wavelength is ~0.5333" - meaning the optimum sampling rate would be ~0.2666"/pixel, or, in relation to the Airy disk size of ~1.28" (510nm light), that equals x4.8 per Airy disk diameter.
In order to demonstrate these effects I've made a "virtual recording" of Jupiter (original high resolution image used: http://planetary.s3.amazonaws.com/assets/images/5-jupiter/20120906_jupiter_vgr1_global_caption.png) sampled at x3pp, x4pp, x4.8pp, x5pp and x6pp (in relation to the Airy diameter) from the high resolution image convolved with the Airy disk PSF. So the images represent a perfect 25% obstructed optical system without seeing influence. This is the reference image (no convolution, sampled at 6pp): And this is the result of convolution without frequency restoration (sampled at 6pp): Next I did the different sampling resolutions, applied wavelets to each, and rescaled for comparison (scaling was done with bicubic interpolation). Here is the result - 6pp, 5pp, 4.8pp, 4pp and 3pp, in that order from left to right: It can be seen that the first 3 images look almost identical, while x4pp starts to show a little softening, and x3pp (as usually recommended for planetary) shows a distinct lack of detail compared to the others. I also made a zoom of the section around the GRS - this zoom was made using nearest neighbor scaling (so individual pixels can be seen) - to evaluate small scale differences: In the x4pp image contrast starts to suffer, and x3pp lacks both features and contrast, while the first 3 (6, 5, 4.8pp) show virtually no difference. So, there ya go - it looks like x4.8 is the way to go for planetary imaging
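For a clear circular aperture, the MTF curve referred to above has a well known closed form, MTF(v) = (2/pi) * (acos(v) - v * sqrt(1 - v^2)), with v the spatial frequency normalized to the cut-off. A small Python sketch of my own that locates where contrast falls to the 1% level that wavelets can still realistically recover:

```python
import math

def mtf_clear(nu):
    # Diffraction-limited MTF of a clear circular aperture,
    # nu = spatial frequency normalized to the cut-off (0..1)
    return (2 / math.pi) * (math.acos(nu) - nu * math.sqrt(1 - nu * nu))

# Scan for the normalized frequency where contrast drops below 1%
nu = 0.0
while mtf_clear(nu) > 0.01:
    nu += 0.0001
```

The scan lands a little above nu = 0.95, i.e. roughly the last 5% of frequencies below the cut-off are attenuated beyond the 1% level; an obstructed aperture shares the same cut-off but dips lower in the mid frequencies.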
  18. I would go with a 28mm extension / combination of extensions. You need to factor in filters and their optical path - they usually add about 0.5mm (well, I've heard a figure of 1/3 of the substrate thickness, and that is around 0.7mm for 2mm filters, but 0.5mm is good enough). Also, you might find that the specs on correct distance are not quite applicable in your specific case (it depends on so many factors, like the exact FL of the scope, which can vary between instruments, ...), so using some spacers to dial in the correct distance will probably be a good idea. Just start with a somewhat shorter distance, and tune it until you are happy with the results.
  19. For me it was well worth it. It considerably improved my peak PE, but more importantly it made the PE curve much smoother and removed the nasty spikes when guiding. It made guiding so much easier. Prior to doing the mod, sub 1" total RMS was really rare, and there was significant elongation of stars in the RA direction. Post mod - I'm able to guide in the 0.5" - 0.7" range (I even managed to get 0.42" one night for a couple of minutes - but I attribute this latest improvement to additional mods - a changed saddle plate and a Berlebach Planet tripod) and stars are almost always round (subject mostly to other influences like wind and cable snag). In general it is an easy mod, but you should take care with the tension of the belts - I had a problem with undertensioned belts - improper belt / gear meshing that gave me a nasty +/- 2", 13.6s oscillation. I also "hypertuned" the mount by taking it apart, cleaning it, and replacing the bearings with high quality ones.
  20. To answer my own question - it is down to CCDI. I did have a problem with collimation, and something loose in the optical train (don't know what it was, but I suspect one of the extension rings for the RC 8" that comes before the focuser (between the OTA and the focuser) and brings the focus in by 50mm), so I tightened everything, did a round of primary collimation (I might need a second round - I don't think I nailed it 100%, I still have a bit more curvature on one side, just by eyeballing subs), fixed the secondary collimation after, and star shapes improved considerably. I then checked it with 5 shots of NGC 6940 and CCDI, and still got results all over the place, so my conclusion is that CCDI is probably not reliable unless one uses a high SNR stack instead of individual subs, and even then only as a guide.
  21. I've recently taken a shot of M27, and noticed rather funny shaped stars in the corners. The scope in question is a TS (GSO) RC 8", and the star deformations are quite unexpected. Some are out of focus displaying the astigmatism I would expect from an RC due to field curvature, others are out of focus without apparent astigmatism (no elliptical shape - still round, with an offset center), and some display a sort of coma appearance (different corners have different aberrations). Given that I Roddier tested that scope at 0.94 Strehl, I think the optics are ok, which would leave camera tilt + collimation (one, the other, or probably both). I decided to download a trial version of CCD Inspector just to see what results it would give me. The results gave me a bit of a scare, to be honest. I ran the test on 60 calibrated frames, each a 1m exposure; guiding was spot on (OAG, almost no shift between the frames - I think the largest shift when aligning was sub pixel), I did not use dithering, my guide RMS error was in the range 0.5-0.8" (very good given the HEQ5 and seeing), and seeing was good to fair (FWHM measured between 1.9" and 3", with most of the frames around 2.2"), but the results for the curvature map differ quite significantly between the frames. I've created a mosaic of curvature maps for the first 25 frames (did not want all 60 due to size - the mosaic is large as is), and results for individual frames fall in the ranges:
     Curvature: 47.7 - 62.5"
     Tilt X: 0.2 - 0.7"
     Tilt Y: -0.1 - 0.1"
     Total tilt: 2% facing left - 34% facing right (with changing direction and angle)
     Collimation: 1.0" - 4.4"
     Here is the mosaic to visually show how the field curvature is dancing all over the place between the frames. Now the question is, what does this all mean? I would expect results to differ between the frames (I would not be worried if values jumped around by 5-10%), but this range of values just seems too much.
I don't know the robustness of the CCDI algorithms, but it is possible that the target is not suitable for analysis - a very rich Milky Way star field; it might be that better results can be obtained in sparser areas of the sky (200-300 stars instead of thousands). Or it might mean that something is seriously loose in my optical train and shifts with each guide correction. Could the camera jumping around be the cause (better), or is it one of the mirrors (probably not so good a situation)? Can anyone please help with this?
  22. Please include all precautions to account for the calving of the Larsen C ice shelf that just happened ....
  23. No worries, I also had to double check that it really is branded Evostar. I'm not hung up on the whole FPL-51 vs FPL-53 (or FCD-1 / 100, for that matter) thing, I just reckon that it is easier to properly figure a lens of higher F/ratio (since we are discussing doublets here) by using the glass with the best index / characteristics (that being FPL-53; fluorite would probably fare better, but I suspect the price would match). I simply do not know enough on the topic to even know if ED100 level of correction (or very close to it) is possible in an F/7 design using an FPL-53 / lanthanum combination.
  24. Yes, it's a tough one, isn't it? Out of the listed "down sides", the only one that actually makes any real impact is CA handling / optical quality, and there is no info on that apart from what is written on the TS website. I was hoping that someone would come along and offer their firsthand experience, but I realize now it's a slim chance, because it is a relatively new product (I have not seen it listed before, and I do scan items on their website from time to time). It certainly differs from the already existing TS ED 102 F/7 with an FPL-51 element and 2" dual speed crayford focuser (that one is the same optics as the Starwave 102 ED, possibly the Lunt 102 ED). I might not have been clear on this, but achromatic refractors are not considered as contenders here; I was thinking of the Skywatcher EVOSTAR ED100 Pro - an F/9 ED doublet with FPL-53 - and not the achromat doublet at F/10. By all accounts the ED version is virtually color free and an excellent performer. There is, as I already mentioned, an FPL-51 F/7 scope from several vendors, but from the reviews I've read on the internet, it looks like it is easily beaten by the ED100 Pro on color correction and planetary performance. I think that the TS102 F/7 with FPL-53 should be ahead of any fairly good 4" F/7 with FPL-51, but that is solely based on my previous experience with the TS80 apo (which is truly apo - I've tested it to Strehl 0.98 in red, 0.94 in green and 0.8 in blue using the Roddier test and an OSC camera - probably not the best way to conduct such testing, since I used a single focus position and a real star subject to less than ideal seeing, and my Roddier skills are dubious at best, but I got assurances that any testing error would give worse results rather than falsely present better optical quality). The question is, of course, whether my assumption is correct, and whether this glass indeed has the required optical quality to compete with the ED100 Pro.
  25. I plan to replace my ST102 F/5 with a decent all rounder for visual at some point (not in the very near future, but hopefully by the end of this year). This is my list of "requirements / wishes":
     - close to apo (ED) / apo performance, good optical quality, CA minimal to non existent for visual. ED doublet preferred to triplet for price, weight and cool down reasons.
     - 4" class (can go bigger, would not go smaller, so 100mm+)
     - relatively light weight, up to 5kg for the OTA; it will be mounted on an AZ4 (which can hold up to 6.8kg)
     - Ok focuser, does not need to be anything special; a good 2" crayford with fine focus will do.
     - Capable of up to x250 power on planets (with the right eyepieces - I plan to use a TV N-zoom 3-6mm and a 7mm TV Delite for the planetary role, and would like to avoid barlow/powermate combos if possible), giving good sharp views
     - Capable of up to 3 deg TFOV with the right eyepiece (something in the ES 34/86 or ES 30/82 class, depending on the actual scope) - I would like to fit most of M31 in the view, and other wide field targets too
     - High bang for the buck is a + of course.
     - Shortish tube - it will be a sort of travel / grab'n'go scope; it does not need to be "airline travel" size, but it needs to fit in a medium car boot with room to spare.
     Now, from all of the above, the first / best option that comes into consideration is the SW Evostar 100 ED, and so far it has been my first and only choice. But I recently stumbled onto this: http://www.teleskop-express.de/shop/product_info.php/info/p9868_TS-Optics-PhotoLine-102mm-f-7-FPL-53-Doublet-Apo-with-2-5--Focuser.html And that got me thinking: which of these two would be the better choice? Compared to the Evostar, the TS 102 F/7 brings to the table:
     - a better focuser (I have a TS80 Apo which I use for astrophoto, but would like to replace for visual duty with something more capable; it has the same 2.5" focuser and it is a good one).
     - a shorter focal length - easier to go wide field, while still being able to go to x200+ power with the mentioned planetary eyepieces
     - I think that fit & finish will be better overall (I don't have experience with the Evostar, only what I've read online, but the C&C tube rings on my TS80 and the retractable dew shield are rather good).
     - overall shorter length - easier for transport.
     - I might even use it for astrophotography if it proves capable in that regard.
     On the other side, there are a couple of things that put it behind the Evostar:
     - heavier - 4.5kg vs 3.7kg (not sure if this is a real negative; the mount will be able to carry it, and it is not much heavier to carry around).
     - more expensive (200e - about the price of a decent 2" wide field eyepiece)
     - being an F/7 doublet, there is a real concern that CA won't be handled as well as in the Evostar, although TS state on their website that it is color free (FPL-53 + lanthanum element) and they even market this scope in their Photoline brand - intended for astrophotography.
     - unknown optical performance compared to the tried and tested Evostar. I do think it should be a rather good scope optically (the TS80 really is), but there simply are no reviews confirming this - at least I did not manage to find any. How well will it behave on planets?
     A penny for your thoughts ...