Everything posted by vlaiv

  1. The CC6, Mak 150 and 6" SCT are all variants of the Cassegrain design - short-tube scopes with long focal length. The Mak and SCT have a spherical primary mirror and a corrector plate to offset it (eliminating the spherical aberration of a spherical mirror) - thus they are catadioptric telescopes that combine reflective (mirrors) and refractive (corrector plate) elements. The CC6 is a purely reflective system, while a refractor is purely refractive (no mirrors). Advantages over the Mak and SCT: being a purely reflective, open-tube design, its thermal management is much better. It has a regular focuser, while the Mak and SCT focus by shifting the primary mirror. That approach has the advantage of enabling a much greater focusing range, but the disadvantage of what is called mirror flop - the mirror is not fixed, so the image can shift when changing focusing direction and the telescope can drift out of collimation (this happens on larger scopes, not so much on smaller ones, because of the weight of the primary mirror - on a 6" going out of collimation should not be a concern). The CC6 has generous back focus to compensate for its fixed mirror. Because it has no front corrector plate, it is much less susceptible to dew, and since it has no refractive elements it can't suffer from chromatic aberration - the worst enemy of the refractor. A refractor in this aperture class - 5"-6" - is going to be very expensive compared to other offerings if you want good color correction that eliminates most of the chromatic aberration (which lowers contrast and kills detail on planets). It will also be long and heavy and will require a more expensive mount (capable of carrying a large scope).
  2. Yes, I know, but somehow things don't really add up, and I'm just trying to see why that is. The first question is why GSO would sacrifice some aperture when they don't really need to. The second question is why the reviewer would claim that the view in that scope is so much less bright than a C8 and comparable to a 6" refractor, when the explanation given above does not support such a claim. There are more details, like the fully illuminated field being 15mm in diameter. That is 7.5mm in radius, which corresponds to a much smaller size on the secondary - maybe 1 to 2mm or even less (the secondary is magnifying). This means that with an only marginally bigger secondary one would not get a stopped-down primary - at least over the same central diameter. They have a 38% CO 6" scope - why push for a 33% CO 8" scope when it would work at full aperture with maybe 35-36%? The alternative is to separate the primary and secondary more and change the curvature slightly - say F/2.9 instead of F/3 - and get full aperture with the same CO. In any case, I feel that something is not adding up and I just want to know what (even if it is something silly like "I don't want to hamper sales of the C8 too much, so I'll say that the CC8 is dimmer").
  3. I think I was mistaken in thinking that the 10 bits of high speed mode were the lower 10 bits of the 12-bit ADC, when in fact they are the higher 10 bits.
  4. So we have a 60mm secondary, a ~204mm primary and ~410mm distance between them, keeping the assumption of an F/3 primary. Let's see roughly how much stop-down we get with these specs. The primary has a 612mm FL, so at 410mm its light cone is (612 - 410) / 3 = 67.33mm wide. Then 204 : X = 67.33 : 60, so X = 60 * 204 / 67.33 = 181.8mm = 7.15". With the above, we would get an even more stopped-down primary.
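The stop-down estimate above can be sketched as a small helper. This is a rough model of my own (not GSO figures): it assumes a simple linear light cone from primary to focus and ignores baffles and off-axis illumination.

```python
def effective_aperture(primary_d, primary_fl, secondary_d, separation):
    """Estimate the effective (stopped-down) aperture for the on-axis
    point, given a secondary of a certain size placed inside the
    converging light cone. All dimensions in mm.

    Simplified geometry only: the cone narrows linearly from the
    primary diameter at the mirror to zero at the focus.
    """
    # Width of the converging cone at the secondary's position
    cone_width = primary_d * (primary_fl - separation) / primary_fl
    if secondary_d >= cone_width:
        return primary_d  # secondary catches the whole cone: no stop-down
    # Similar triangles: the part of the primary whose cone the
    # secondary can fully intercept
    return secondary_d * primary_d / cone_width

# Numbers from the post: 204 mm F/3 primary (FL 612 mm), 60 mm
# secondary at 410 mm separation -> ~181.8 mm (~7.15")
print(effective_aperture(204, 612, 60, 410))
```

With a 70mm secondary at the same separation the function returns the full 204mm, which is the point of the discussion above.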
  5. Do you have any idea what the size of the secondary mirror itself is? Btw, you can do a simple test to see if the primary is obstructed, once you have the chance to test under the stars (or indoors if you have access to an artificial star). It involves fingers and an eyepiece. You need to observe the defocused pattern of a bright star - really defocused, so that the shadow of the secondary and spider can be clearly seen, even the primary mirror clips. At this point you are actually looking at a projection of the aperture as seen by the eyepiece. Take a finger and slowly extend it over the aperture so as to block some of the incoming starlight. At some point it will be seen as a shadow at the eyepiece - similar to this image of a focuser protruding into the light path: When this happens, you measure how much of the finger (or a ruler) you needed to put over the aperture, and from the size of the aperture and the primary mirror you can figure out if the effective aperture is less than the size of the primary (is the OTA opening minus twice the finger length over the OTA opening less than or equal to the primary mirror diameter?)
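The arithmetic behind that finger test can be written down as a tiny helper. The function and the sample numbers are mine, purely for illustration; the logic is that the shadow first appears once the finger reaches the edge of the light cone, so the clear aperture is the opening minus twice the finger length.

```python
def finger_test(opening_mm, finger_mm, primary_mm):
    """Effective aperture implied by the finger test described above.

    opening_mm: diameter of the OTA opening
    finger_mm:  finger length inserted before the shadow appears
    primary_mm: primary mirror diameter
    Returns (effective aperture, True if the scope is stopped down
    below the primary diameter).
    """
    effective = opening_mm - 2 * finger_mm
    return effective, effective < primary_mm

# Hypothetical numbers: 218 mm opening, shadow appears at 18 mm of
# finger, 203 mm primary -> 182 mm effective, stopped down
print(finger_test(218, 18, 203))
```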
  6. https://www.firstlightoptics.com/stellalyra-telescopes/stellalyra-6-f12-m-crf-classical-cassegrain-telescope-ota.html + https://www.firstlightoptics.com/skywatcher-mounts/skywatcher-eq5-deluxe.html
  7. That review quoted a 7.3" primary based on a dim image. The explanation was that an undersized secondary mirror produced an effective aperture stop. We could in fact see if there is any merit to this claim if we knew a couple of things: the primary-to-secondary separation in mm (it does not have to be exact - measuring tube length and then subtracting mirror thicknesses and offsets from the OTA front would be enough) and the speed of the primary mirror. If I'm not mistaken, the primary is something like F/3 for this scope, which means it has a FL of 609mm. If the size of the secondary is 60mm (the whole obstruction with mirror support and baffle tube is 68.5mm), then the distance between the mirrors needs to be 609 - 60 * 3 = 429mm for the edge case where only the central spot on the optical axis is unvignetted and the primary is not obstructed - see diagram: The OTA is 536mm long, so there is enough room for the scope not to be stopped down, even with a 60mm secondary. The primary is probably 20mm thick, and there is another 30-40mm for the mirror cell and back of the scope - call it 60mm. The secondary is probably less than 10mm thick, its support is another 20mm, and it sits about 10mm inside the tube. That makes a total of about 100mm, and 536 - 429 = 107mm. This is of course just speculation and there is no way of knowing without actually measuring.
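The edge-case separation used above can be expressed as a one-line formula - my own sketch of the same similar-triangles geometry, for the central on-axis point only:

```python
def min_separation(primary_fl, f_number, secondary_d):
    """Smallest primary-to-secondary distance (mm) at which a secondary
    of the given size still intercepts the full on-axis light cone, so
    the scope is not stopped down for the central point. Pure geometry -
    real designs add margin for off-axis illumination."""
    return primary_fl - secondary_d * f_number

# Numbers from the post: 609 mm FL, F/3 primary, 60 mm secondary
sep = min_separation(609, 3, 60)
print(sep, "mm; leaves", 536 - sep, "mm of the 536 mm OTA for the rest")
```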
  8. You are in fact right. I just checked this with the ASI185 and SharpCap. I was under the impression that the following happens: 12-bit mode is recorded as 16 bits by padding the LSBs with zeros (4 LSBs), and 10-bit mode is recorded as 8 bits by discarding the 2 LSBs - since ZWO say on their website that the cameras operate in 12-bit / 10-bit mode:
  9. Signal yes, but not the noise. Calibration removes signal but injects back some noise. If you take a bunch of bias subs, they won't all have the same values - otherwise we would only need a single bias sub. Every bias sub has the bias signal - which is always the same - and read noise - which is always different and mostly random (in fact even FPN is random, but with a different distribution than the regular gaussian of read noise). When we stack bias subs to create a master bias, we are trying to average out this noise, and it works as any stacking does - SNR goes up by a factor of SQRT(number of subs). Since we don't really have any interesting signal here, it is the noise that is reduced by this factor - the read noise. If your camera has read noise of about 2e (modern CMOS) and you stack 16 bias subs, you end up with a master bias that has about 0.5e of noise. Now if we subtract this master bias from a regular light sub, we add back in that 0.5e, and it adds the way noise adds with other noise - in quadrature (although we are subtracting the signal, noise does not change if you change its sign - it is always "jittering" around zero, always +/- something, and it makes no difference if it is -/+ instead). There is however a difference when we subsequently stack such calibrated subs. If there is no pixel shift between subs (no alignment - perfect guiding), then the pixel at coordinates x=100, y=200, for example, had the same master bias value removed in every sub. This master bias value contains signal - that is fine, it is properly removed from the sub - but it also contains that residual noise of 0.5e. This time the subtracted value is no longer random: it has the same value for every pixel at coordinates 100, 200 in the stack. It has effectively become a constant rather than a random value, and when we stack our data it does not get smaller.
If we "stack" constant values, nothing happens - the average of 2, 2, 2, 2, ... is simply 2 no matter how many "subs" we stack. When we dither, we shuffle the target signal over the sensor, but also the "calibration files over the stack": calibration files match subs at pixel positions, but those positions get shifted during alignment if we dithered.
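A quick Monte Carlo sketch of this effect - my own toy model, with pure read noise and zero signal, a 16-sub master bias and a 64-sub light stack - shows the master-bias residual averaging out with dithering but not without:

```python
import random
import statistics

random.seed(1)
RN = 2.0          # read noise in electrons (modern CMOS)
N_BIAS = 16       # bias subs in the master
N_LIGHTS = 64     # light subs in the stack
N_PIX = 1000      # pixels sampled to measure background noise

def master_bias_error():
    # residual noise of a 16-sub master bias: RN / sqrt(16) = 0.5e
    return statistics.mean(random.gauss(0.0, RN) for _ in range(N_BIAS))

def stacked_pixel(dither):
    # average of N_LIGHTS calibrated light values (true signal is 0)
    if dither:
        # each light lands on a different physical pixel, so each sub
        # is calibrated by an independent master-bias residual
        return statistics.mean(random.gauss(0.0, RN) - master_bias_error()
                               for _ in range(N_LIGHTS))
    err = master_bias_error()   # same residual subtracted from every sub
    return statistics.mean(random.gauss(0.0, RN) - err
                           for _ in range(N_LIGHTS))

no_dither = statistics.stdev(stacked_pixel(False) for _ in range(N_PIX))
dithered = statistics.stdev(stacked_pixel(True) for _ in range(N_PIX))
print(f"stack noise without dither: {no_dither:.3f}e, with: {dithered:.3f}e")
```

Without dither, the stack noise settles near sqrt(0.25^2 + 0.5^2) ≈ 0.56e because the 0.5e master-bias residual behaves as a constant; with dither it averages down toward ~0.26e.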
  10. At least there is a simple point to all of that - do dither; it is good for the final image.
  11. This matches the 38% and 33% given by TS. It also matches the 58mm given by FLO for the 6" version, so I guess the problem is that this value ended up on the 8" version as well, instead of 68mm.
  12. I'm not sure which is true - that figure on FLO or this one from TS. If you look at the specs for the 6" model at FLO: https://www.firstlightoptics.com/stellalyra-telescopes/stellalyra-6-f12-m-crf-classical-cassegrain-telescope-ota.html it also says 58mm, and I've seen these specs mentioned on other websites as well. I suspect a copy/paste error somewhere along the information pipeline (maybe a GSO rep that sent out the specs, or similar). @johninderby has both - or at least the 8" at the moment, with the 6" version on the way. Maybe a direct measurement of the secondary obstruction could settle this?
  13. Not true. Dithering is very beneficial for lowering noise in the image, although most people don't know this. It is especially true when a small number of calibration subs is taken. To explain this, we need to look at what happens when dithering. Observe a single pixel and the part of the target that falls on it. With perfect tracking, the same piece of target always lands on the same pixel - with dithering it lands on a different pixel each time. This means that with perfect tracking, the stack of values for our part of the target comes from a single pixel in each sub, and consequently always gets calibrated with the same bias value - the bias residual is in this case a constant and does not average out beyond what its own stacking already achieved. With dithering, the target covers a different pixel each time, and the bias value calibrating a different pixel each time will have a different residual value (the residual after stacking is random) - this makes the bias noise we inject back into the image much smaller than in the example above. I know this is a poor explanation, but the point is: with perfect tracking, the bias we use to calibrate is always the same with respect to the target (not the image), can mathematically be "pulled out in front of the average", and its noise pollutes the final image after stacking. With dithering, it pollutes each sub "differently" (the same with respect to the image but differently with respect to the target), and does so prior to stacking - so when we stack, the same thing happens as with every other noise source: it gets reduced with respect to the signal (stacking improves SNR). In fact, you can check this yourself - take any set of calibrated and dithered subs and do two stacks: 1. a plain average stack after aligning the subs; 2. a plain average stack without aligning the subs. Now take an empty patch of sky (try to avoid stars) and measure the standard deviation. You will find that the standard deviation is smaller in the first case - the background noise is smaller than in the unaligned stack.
No alignment would be necessary if one had perfect tracking / guiding.
  14. Given that the OP asked about the ED80 and ED100, and that you have an ED80, an ED150 and a C9.25 as well, I would say you are in a perfect position to advise whether swapping a C8 for a refractor of half the diameter is going to give satisfactory views.
  15. Not worth it in my opinion. In most cases it is about optimizing planetary views rather than choosing a particular scope design. Do an internet search on optimizing your planetary views and you should get at least a couple of results (there's a very nice youtube video). In a nutshell, it is about minimizing the impact of local seeing conditions by careful thermal management and by choosing your observing location / direction. If you suspect that your C8 is not giving you the best image - check collimation first. There is sample-to-sample variation between scopes and some are less sharp. If you suspect your scope is like that, then maybe think about swapping it for this one: https://www.firstlightoptics.com/stellalyra-telescopes/stellalyra-8-f12-m-lrs-classical-cassegrain-telescope-ota.html It has some advantages over the C8 for planetary viewing and in general.
  16. Absolute sharpness is related to seeing conditions, your mount/guiding and the aperture used. Relative sharpness is related to your sampling rate with respect to FWHM. You say that your FWHM is about 4 (pixels I suppose, and not arc seconds?) - this makes you oversampled by quite a bit. Ideally, you want your FWHM to be about 1.6 pixels, which means you should "bin at about x2.5" to get to a proper sampling rate. This image looks blurry when viewed at full size: But this is resized down to 40% (x2.5) - look how the stars become pin point and the detail no longer looks blurred out. The key is using the proper sampling rate for what your scope and skies can deliver.
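The rule of thumb above amounts to a one-liner (the 1.6 px target FWHM is as stated in the post; the function name is mine):

```python
def resample_factor(fwhm_px, target_fwhm=1.6):
    """Downsample factor that brings a measured star FWHM (in pixels)
    to the ~1.6 px optimum mentioned above."""
    return fwhm_px / target_fwhm

# FWHM of 4 px -> resample x2.5, i.e. resize to 40% as in the post
print(resample_factor(4.0))
```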
  17. Most software indeed does this - stretches the histogram to the full range of the data - hence my example with 8-bit vs 16-bit. You do have a point here - the histogram is somewhat useful for checking for clipping, either to the left (too small an offset) or to the right (over exposure), but a stats window can do the same (min / max values). You should determine a suitable F/ratio with respect to pixel size. The ASI178 and ASI174 have very different pixel sizes - 5.86 / 2.4 = a factor of about x2.4 between them - meaning that for the same sampling rate one requires an F/ratio larger by a factor of 2.4 than the other. If you choose F/18 for the ASI178 (color version, for example), you should use ~F/43 for the ASI174 (again, color version).
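The scaling works out like this - a trivial helper of my own, with the pixel sizes from the post:

```python
def matching_f_ratio(f_ratio_a, pixel_a_um, pixel_b_um):
    """F-ratio that gives camera B the same angular sampling rate
    ("/px) that camera A has at f_ratio_a. At a fixed sampling rate,
    the required focal length - and hence F-ratio for a given
    aperture - scales linearly with pixel size."""
    return f_ratio_a * pixel_b_um / pixel_a_um

# ASI178 (2.4 um) at F/18 -> roughly F/44 for the ASI174 (5.86 um)
print(matching_f_ratio(18, 2.4, 5.86))
```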
  18. Please don't do that. The histogram is simply useless for determining exposure length. First of all, it is not an absolute measure - it is a relative one. Exactly the same signal will produce two very different looking histograms in 8-bit vs 16-bit imaging. A signal at 80% of the 8-bit range (0.8 * 256 = 204.8) is at 0.3125% of the 16-bit range. It is the same signal, but 8-bit mode would put it in your recommended range, while 16-bit mode would put it in the "very very dim - almost nothing can be seen - very poor signal" range. In reality it is the same signal and hence the same SNR. In lucky imaging, exposure length is determined by coherence length and coherence time. Even a small increase in exposure above the coherence time creates more blur but hardly any improvement in SNR. For example, if we are in the 6ms exposure range for our scope and sky conditions, going to a 10ms exposure will only improve our SNR per exposure by less than 20%, while the blur introduced by the motion of the atmosphere (the inability to freeze the seeing) will be much worse in the final image. This is because we are trying to do frequency restoration, and our frequency spectrum looks a bit like this: This shows how much each frequency is attenuated. The level of blur determines how "narrow" this curve is - blur shifts it towards the origin. Now look at the slope of the curve around the 0.05-0.1 range on the X axis - only a small shift in frequency causes the attenuation to double. Above the coherence time, blur becomes much more damaging than noise - you simply can't restore the image if you need to multiply a certain frequency by a factor of x2 because blur attenuated it twice as much, while you only improved your SNR by at most 20%. Coherence time is the ultimate exposure limit in lucky imaging - going above it damages your ability to sharpen more than it improves the SNR of the stack. Actually it does. This would be true if we had zero read noise cameras, and we don't.
I'll do some quick math to back up my claim that planets produce at most about 100e per pixel per exposure at critical sampling. This will be very important for the remainder of this argument. Mars' magnitude now is about -2.5 in the best conditions (good transparency and Mars at its highest, towards the zenith). We will use a 200mm scope (unobstructed - obstructed scopes just reduce the signal, as do mirror losses and so on). A mag 0 star produces about 880,000 photons per cm² per second at the top of the atmosphere. Mars at mag -2.5 is x10 brighter than this, so it produces 8,800,000 photons per cm² per second. A 200mm aperture has 10² * PI = ~314 cm² of surface, so it receives ~2,764,000,000 photons per second from Mars. We sample at F/11, so our focal length is 2200mm, and we use an ASI290 with 2.9um pixels - a sampling rate of ~0.27"/px. Mars is currently ~23" in diameter, so it covers an angular area of roughly 415.5 square arc seconds. A single pixel covers 0.27 x 0.27 = 0.0729 square arc seconds, which means the image of Mars sampled at 0.27"/px covers roughly 415.5 / 0.0729 = ~5700 pixels. The light captured by the aperture is spread over that many pixels, so each pixel is hit by ~484,772 photons per second. Average QE over the visible spectrum is about 60%, so we get ~290,863 electrons across the visible spectrum, or about one third per color - ~96,954 electrons. Remember, this is per second. Say we use a 6ms exposure - that is 1/166.67th of a second, so dividing by 166.67 gives the average number of electrons per pixel per exposure: ~582. So under the best conditions - greatest transparency, Mars at opposition, a very high QE camera and an unobstructed scope without any mirror losses - we get around 580e of signal at best.
Ok, so I overestimated the number by a factor of x2-x3, since I had not done this calculation for Mars before - but I have for Jupiter, where the signal per exposure is less than the above (it is both less bright and bigger in apparent size). In any case, back to the point - read noise. Read noise is a very important thing to consider in planetary imaging, as it is right on the edge of becoming a major factor. Say our system captures 256e per exposure (after mirrors, central obstruction, filter QE and transparency losses), and the read noise is about 2e. Read noise becomes an issue when it is higher than about 1/5th of the dominant noise source. With 256e of signal we have 16e of shot noise, so read noise is 1/8th of shot noise - not really a concern. But let's oversample with an x2 barlow. This splits the same signal over 4 adjacent pixels: 64e of signal per pixel and 8e of shot noise. The ratio of read noise to shot noise is no longer in the "safe zone" - it is now 1/4, higher than ~1/5.
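The whole photon-budget estimate above can be reproduced in a few lines, with the same assumed inputs as in the text (880,000 photons/cm²/s for mag 0, 200mm aperture, F/11, 2.9um pixels, 60% QE, one third of the light per colour, 6ms exposures):

```python
import math

MAG0_FLUX = 880_000              # photons / cm^2 / s for a mag 0 star
mars_flux = MAG0_FLUX * 10       # mag -2.5 is x10 brighter than mag 0
aperture_cm2 = math.pi * 10**2   # 200 mm aperture -> 10 cm radius, ~314 cm^2
photons_per_s = mars_flux * aperture_cm2

focal_len_mm = 200 * 11                      # F/11 -> 2200 mm
sampling = 206.265 * 2.9 / focal_len_mm      # "/px for 2.9 um pixels, ~0.27
mars_area = math.pi * (23 / 2) ** 2          # 23" disc -> ~415.5 sq arcsec
n_pixels = mars_area / sampling ** 2         # pixels Mars spreads over

e_per_px_s = photons_per_s / n_pixels * 0.60 / 3   # 60% QE, one colour
e_per_exposure = e_per_px_s * 0.006                # 6 ms exposure
print(round(e_per_exposure), "electrons per pixel per 6 ms exposure")
```

It lands around 580-590e per pixel per exposure, matching the figure above; the small difference comes from rounding the sampling rate to 0.27"/px in the text.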
  19. I agree with most of what you said, except for some things about gain. Gain in planetary imaging is only really used to control read noise - it does nothing to the actual signal, as it is just a multiplication factor. I find it hard to believe you get saturation at critical sampling rate on Mars. We can do the math, but I think it turns out to be something like 50-100e of signal per pixel per exposure. Cranking up the gain to get lower read noise is of course a good thing, but I don't think you can easily saturate a 12-bit ADC with high gain - you would need an e/ADU of about 0.025 to do that. In fact, we can check what sort of e/ADU you might have been using at high gain on this camera. The ASI290 has unity gain at 110 and best read noise at gain 350: That is a difference of 240 x 0.1dB, and e/ADU halves every ~60 x 0.1dB (6 dB), so we have roughly a 2^4 difference in e/ADU - gain 350 is roughly 1/16 e/ADU = 0.0625, still about 3 times higher than the e/ADU that would cause saturation (for 100e per exposure). We can also see that going from F/19 to F/11 changes the intensity by a factor of ~x2.97 = x3. The recorded signal will be x3 stronger - again, I doubt it would saturate in 12 bit. It would likely saturate in 8-bit mode on high gain, though. In other words - don't worry about saturating the exposure when using critical sampling rate, even at high gain settings. If you do saturate, make sure you are using 12-bit (16-bit) mode, and if that does not help, then yes, back off the gain a bit.
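The e/ADU estimate can be checked with a small helper. This is an idealized model of my own: gain in 0.1 dB steps, e/ADU = 1 at unity gain, halving every 6 dB - real sensors deviate slightly from this.

```python
def e_adu_at_gain(unity_gain, gain, doubling=60):
    """e/ADU at a given gain setting for ZWO-style cameras, where gain
    is set in 0.1 dB steps: e/ADU is 1 at unity gain and halves every
    6 dB (60 steps). A sketch only - check the published gain chart
    for any particular camera."""
    return 2.0 ** (-(gain - unity_gain) / doubling)

# ASI290: unity gain at 110, lowest read noise around gain 350
print(e_adu_at_gain(110, 350))   # 0.0625 = 1/16 e/ADU
```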
  20. The first thing to understand is that sampling rate is really about pixel spacing - the distance between pixel centers. It may seem strange to think of it that way, but it is the correct way, and it helps show why a colour cam has a different optimum sampling rate than a mono camera. With a mono camera, the distance between pixels is the same as the pixel size (mathematically speaking, the same value, only shifted half a pixel - pixel size is measured from the start of one pixel to the start of the next, while pixel distance is measured between centers). With colour cameras that is not the case, as the sensor has what is called a Bayer matrix - alternating colour filters on adjacent pixels - which looks like this: On the left is the Bayer matrix itself and on the right is how you should "view" each color. Take red, for example: what is the distance between red pixels? It is no longer equal to the pixel size; it is two pixel sizes, as we have red pixel, green pixel, red pixel, green pixel, and so on. The same is true for blue and also for green pixels (we even have Gr and Gb notation for the two different "grids" of green pixels). This means the distance between same-colour pixels in a Bayer matrix is twice the actual pixel size, and in turn the sampling rate is half as high. For this reason we have to use twice the focal length with a colour camera to get proper sampling (if we are talking about the critical sampling rate). For a 2.9um pixel OSC camera you need F/22.75. Since you'll be using a C11, you only need a x2.2 barlow (an x2 barlow with tuned distance to the sensor to give x2.2 magnification).
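The F/22.75 figure follows from Nyquist sampling of the diffraction cutoff; here it is as a sketch of my own (the function and the ~510 nm green-wavelength choice are my assumptions):

```python
def critical_f_ratio(pixel_um, osc=False, wavelength_nm=510):
    """Rough critical-sampling F-ratio from Nyquist: two sample
    spacings per lambda * F diffraction cutoff, i.e. F = 2 * spacing /
    lambda. For an OSC sensor, the same-colour pixel spacing is two
    pixels, which doubles the required F-ratio."""
    spacing_um = pixel_um * (2 if osc else 1)
    return 2 * spacing_um / (wavelength_nm / 1000.0)

print(critical_f_ratio(2.9))            # mono 2.9 um: ~F/11.4
print(critical_f_ratio(2.9, osc=True))  # OSC 2.9 um: ~F/22.75
```

A C11 is natively F/10, hence the x2.2 barlow figure mentioned above.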
  21. I would not recommend that - there is a chance of burning out the Peltier element. You really need to remove heat from the hot side; otherwise the sensor will remain warm and the aluminum part will get extremely hot. There will be a temperature difference, but if you don't dissipate that heat effectively it will build up and can even damage the element.
  22. Yes, please do that. I'm confident that PI will do a much better job of calibrating and stacking the subs than DSS.
  23. Can you be more specific about what you are asking? I don't really understand, or rather can't interpret, this in particular: Are you referring to the overall background of the image or the star shapes in the corners? The overall background is a combination of two things - first the stacking in DSS and second the ABE process. Stacking in DSS sometimes leads to very weird effects - like in this image: there seems to be vignetting, but if you took flats, there shouldn't be any. I had a similar issue with DSS and the red channel - it seemed to be clipped around 0, which created color casts mixed with vignetting like in your image. The second issue is the automatic background extraction thingy from PI. This process can suffer from edge effects, which creates that 4-lobed color pattern in the image - best seen when the image is small: If you are referring to the cross pattern on the stars, best seen in this crop (although it is a slight effect): That is astigmatism: field flatteners often have some astigmatism in the far corners of the image, and it can be related to spacing, or to tilt if it is not the same in every corner or one line of the cross is longer than the other.
  24. RC telescopes have a curved field. They have a decent-sized corrected field, and this is where the confusion comes from - the field is almost flat over a large portion. They don't suffer from coma, but suffer from astigmatism instead. That is a symmetrical aberration, and for that reason much better for scientific measurements. This is why RC scopes are preferred as observatory scopes - a decent-sized, almost flat field without the need for refractive elements (which means no distortions in the UV and IR parts of the spectrum) and good astrometric precision due to the symmetric primary aberration. The RC8", from what I've read, gets along very nicely with FF/FRs designed for F/7-F/8 refractors - for example the Riccardi x0.75 FF/FR, which both flattens the field and acts as a reducer. I will try my RC8" with the TSRed2 x0.79 as soon as I get the chance, to see how the two get along.