Everything posted by vlaiv

  1. With "simple" telescope designs like Newtonians and refractors, the F/ratio calculated as focal length / aperture also describes the light cone such a telescope produces: approaching focus, the beam on the principal axis has the same "shape" as the F/ratio of the scope (its width some distance before focus is in that same ratio). Does this hold true for folded designs as well, given that the primary is strongly curved and the secondary magnifies that image?

Here is an example that confuses me. I've got an RC scope; it has an 85mm central obstruction, and it is safe to say that the secondary mirror is not that big - it is baffled, and its support is larger than the mirror diameter (by at least 5-10mm). It is also safe to say that rays parallel to the optical axis will not fully illuminate the secondary mirror (otherwise any rays at a slight angle would start vignetting straight away, and there is quite a large illuminated field). The telescope is physically about 580mm long with the focuser, and the focus point is 159mm from the 2" connection of the focuser - which gives a total of about 740mm from the front of the tube. The secondary mirror is not placed right at the front of the tube but some distance inside - say 3-4cm for the support mechanism, plus 1cm for the mirror itself - so let's say 50mm in total. That leaves about 690mm of separation between the secondary mirror and the focal plane.

If we take it that about 75mm of the secondary (85mm minus support / baffling and some margin to the mirror edge) casts rays to the center of the focal plane (from all rays parallel to the optical axis), and we divide those two numbers, 690 and 75, we get F/9.2 rather than F/8. I used arbitrary assumptions here, but I tried to stay "on the safe side". If we instead assume that the whole central obstruction is fully illuminated secondary producing an F/8 beam, then the focal plane needs to sit ~680mm from the secondary mirror. Still possible, but I think the first scenario is more realistic, and that the light beam at the back of the telescope is slower than the F/ratio of the scope would suggest, because of the magnifying secondary.

Another example would be the TS 8" F/12 Cassegrain. The tube is ~620mm long and back focus is 150mm; together that gives 770mm. The central obstruction is 33% linear, which means 67mm (203mm * 33%), and at F/12 a 67mm secondary would need to focus 804mm away. That is further than tube + backfocus, not even counting that the secondary is not located at the front of the tube but a bit further in. The beam at the back of the scope clearly can't be F/12 and needs to be faster in this case.

Does anyone have any idea what is in fact happening in the case of a folded telescope design?
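To put rough numbers on both examples, here is a minimal Python sketch of the arithmetic above - assuming (and this is exactly the assumption in question) that the beam speed at the back is simply the secondary-to-focal-plane distance divided by the illuminated secondary diameter:

```python
# Beam "speed" behind a folded scope, taken as the ratio of
# secondary-to-focal-plane distance to illuminated secondary diameter.
def beam_f_ratio(secondary_to_focus_mm, illuminated_diameter_mm):
    return secondary_to_focus_mm / illuminated_diameter_mm

print(beam_f_ratio(690, 75))   # RC8 estimate above -> ~9.2, slower than F/8
print(beam_f_ratio(770, 67))   # TS Cassegrain -> ~11.5 < F/12, beam must be faster
```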
  2. Indeed, but not unusual for an astrograph. The RC is not as good a visual scope for planets, but otherwise quite good. I think they might even share some optics, or at least the manufacturing process, as the Cass has a hyperbolic secondary and parabolic primary (the primary should be easy, as it is the same process as for the fast Newtonians GSO makes), while the RC has both surfaces hyperbolic. I tested my RC at 94% Strehl or above, so their manufacturing process should be quite good. Not top quality, but not merely diffraction limited either.
  3. First light! First light! ... (I hear the crowd chanting ) I can't wait for a first light report. I have mentioned that scope in a couple of threads where people inquired about a good planetary visual / imaging scope, but I've never seen one or read a report on one. However, based on the RC that has almost the same construction, and your description - it is indeed a solid scope, and if my RC is anything to judge by, it should be very good optically as well. I'm very anxious to hear how it performs in the field.
  4. Fancy having a go at cracking that mystery? My bet would be on improper calibration, or slightly "bending the rules" of calibration because "it works that way as well". Let's see what could cause over-correction, and to do that, let's first agree on what over-correction is - I understand the term to mean that vignetting or a dust shadow becomes brighter than it should be after correction. If we look at flat correction as: true value = recorded value / flat correction, then over-correction means the true value comes out greater than it should, and we have two choices:

1. the recorded value is larger than it should be
2. the flat correction is lower than it should be

Point 2 is unlikely to happen (it might if you use incorrect flat darks - ones of longer exposure, or ones shot at a higher temperature than the flats), but there is a rather simple explanation for point 1 - you want only light signal in your recorded value, and any other signal still present will make it larger than it should be. Therefore, things like calibrating with bias only (not removing dark current) will produce over-correction.
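Here is a toy numeric sketch of point 1 - assumed numbers, just to show how leftover dark current over-corrects the vignetted area:

```python
# Bias-only calibration leaves dark current in the sub.
light = 1000.0      # true light signal at center, e-
dark = 50.0         # residual dark signal, e- (not removed)
vignetting = 0.5    # corner receives 50% of the light

center = light + dark                    # 1050 recorded at center
corner = light * vignetting + dark       # 550 recorded in corner
corrected_corner = corner / vignetting   # flat only knows about the 0.5

print(center, corrected_corner)          # 1050 vs 1100 -> corner ends up brighter
```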
  5. I agree that the 178 is a very nice little cam - I've got one, the color version. It is a very versatile camera, and I would recommend it to anyone wanting a cheap camera for multiple use cases - a bit of long exposure DSO, a bit of EEVA and some planetary imaging. It will work well in all of those roles (matched with suitable optics). I still prefer the 1600 for DSO imaging though. It's a bit more expensive, but it offers much more in terms of sensor surface.
  6. Nope Once you take an image and confirm that stars are tight and round in each corner - then you don't have to deal with tilt. It might be the case, but it might not. There are a couple of things that can cause tilt. It can be a focuser draw tube that moves left / right - has some slack in it. Even if there is no slack in the focuser, it can still tilt / bend slightly under the weight of the camera - and that will depend on how you mount your scope (you can rotate the tube so the camera can be in any position along the circle of tube rotation) and what part of the sky you are imaging. Gravity will pull on the camera, and if the pull of gravity is acting perpendicular to focuser travel, it can bend things a bit even if the focuser seems solid on inspection by hand. It might even be the case that the focuser is rock solid but the way it's attached to the tube is not squared properly - there is a bit of tilt in the focuser base. Don't worry about tilt if your focuser now feels OK by touch - wait until you see what your subs look like. If there is actual tilt, then you can decide how to deal with it (for example, it might be so small that only one corner is mildly affected - cropping that part of the image will remove the slightly distorted stars).
  7. The 206.3 number can be easily derived with a bit of trigonometry. You are right - it represents the mapping between angles on the sky and pixels on the sensor. 1"/px means that a single pixel will "cover" 1 arcsec x 1 arcsec of the sky. Here is a simple derivation of that relationship.

Distance on the sensor equals FL * tan(angle). When we want to calculate the angle from focal length and the "distance of one pixel" (pixel pitch), we write:

tan(angle) = pixel pitch / focal length (in the same units)

Now we can use the small angle approximation, which says sin(alpha) ~= tan(alpha) ~= alpha if alpha is small (and in our case it is, since it is only arcseconds in magnitude), so we have:

angle in radians = pixel pitch (in mm) / focal length (in mm)

We want the angle in arc seconds, so we must convert units. One degree = PI / 180 radians, one arc minute = PI / (180*60) radians, one arc second = PI / (180*60*60) radians. Pixel pitch in millimeters is pixel pitch in micrometers divided by 1000. After all that unit conversion we get:

angle * ( PI / (180 * 60 * 60) ) * 1000 = pixel pitch (in um) / FL (in mm)

( PI / (180 * 60 * 60) ) * 1000 = ~0.0048481368111, and the reciprocal of that (if we take it to the other side of the equation) is ~206.26481 = ~206.3

To further understand proper sampling rate, you need to understand the resolution of an imaging system. Stars are never pinpoints in the image - they are closer to Gaussian-shaped profiles. This is because a couple of factors "smear" point light. First is the aperture of the telescope - it produces an Airy pattern from a point source. Second is seeing, and third is guiding precision. All of these combine to form a star profile that is well approximated by a Gaussian shape. Based on average seeing conditions, amateur aperture sizes (from 60mm to 300mm or so) and usual mount performance, that Gaussian shape will have a FWHM of a couple of arc seconds. The ideal sampling rate for such an image - blurred with a Gaussian blur of certain FWHM - is close to FWHM / 1.6. That is why we say the ideal sampling rate is 1-3"/px, depending on conditions, because produced star FWHM is in the range of 1.6" to 5".

To see how the Gaussian profile impacts image detail, examine these images: the left one is composed of plain sine functions in the x direction, with growing wavelength in the y direction (higher frequencies at top, lower at bottom). The next two images are the first one Gaussian blurred with different blur sizes - the middle with radius 2, the right with radius 4. The larger the blur radius (the larger the star FWHM), the more frequencies get cut off (no detail in those parts of the image - just uniform gray). That means resolution depends on star FWHM: the larger it is, the less detail there is in the image. This holds for any sort of imaging. It just happens that most telescopes, skies and mounts produce blur in a certain range - hence the recommended sampling rate of 1-3"/px.
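If it helps, here is a short Python sketch of that derivation and the resulting sampling-rate formula (the 3.8um / 750mm values are just illustrative):

```python
import math

# Number of arc seconds in one radian; /1000 gives the familiar 206.3 constant
ARCSEC_PER_RADIAN = 180 * 60 * 60 / math.pi          # ~206264.8

def sampling_rate(pixel_pitch_um, focal_length_mm):
    # small-angle approximation: angle ~= pixel pitch / focal length
    return (pixel_pitch_um / 1000.0) / focal_length_mm * ARCSEC_PER_RADIAN

print(ARCSEC_PER_RADIAN / 1000)      # ~206.26481 -> the "multiply by 206.3" rule
print(sampling_rate(3.8, 750))       # ~1.045 "/px for 3.8um pixels at 750mm FL
```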
  8. Ah brilliant! You just helped me come up with a better way of doing color cast removal In any case, I wanted to point out that the methods you mentioned won't do a good job of color cast removal in every case - in fact they are wrong in approach (although they can be used to some extent). A color cast over the image is a consequence of LP signal in the image and its distribution over channels. In the formula signal = a * original + b, you actually want to remove b completely. If we have 3 channels, we have three different coefficients b - b1, b2 and b3 - of different magnitude, and their ratio gives the color of the color cast. One way of "removing" the color cast is to make them equal - that gives a neutral / gray cast (r:g:b = 1:1:1), but better still is to remove b entirely. Setting the mean / median value of the whole image to the same value across channels will not do this: both mean and median depend on both background and signal values, so the content of the image changes them - they do not depend only on the background.

How can we remove b completely on a single frame without considering other channels? So far I have used the following approach: I take the mean value of all pixels in the image and calculate the standard deviation, "remove" all pixels above mean + c * stddev, and repeat the process. This isolates "background" pixels after a couple of iterations (depending on c, which I usually set to 3, or somewhere between 2 and 3). It works very well for images that have "empty" background - like galaxies or clusters, or anything framed so that there is background in the image. It will not work so well on images where nebulosity covers most of the frame (signal everywhere, so you can't really isolate the background). Here is an example of it: left is a single channel (already wiped of background, but it does not matter - selection of background pixels works the same), and right is the background map - white where there is signal, black where there is background.

Now take all pixels that are background, and you can do several things with them. One is to calculate their mean value - that is your b. Another is to fit those pixels with linear interpolation (put a plane through them), or maybe a higher order polynomial. That is used for gradient removal; here is an example of that: the plugin run on the same stack, not yet wiped - it produces a background map and also the gradient to be removed (simply subtracting the gradient from the stack does both background and gradient removal). If the gradient is not linear, you need multiple rounds of this for best results (get gradient, remove gradient, get gradient again - a bit different this time - remove the new gradient, and repeat until the gradient is negligible in magnitude).

Now on to the "new" method that I just thought of... We need to distinguish between parts of the image where there is signal and where there is background only, so we can "normalize" on background pixels alone - by subtracting their mean / average value (or median; or setting the mean/median of those pixels to the same value across channels, but there is not much point in that if you can remove the background signal completely by subtracting it from each channel separately). How do we know what is background? Maybe stack statistics can help there. When stacking, we need to do both average-value stacking to get the signal, and noise / standard deviation stacking. The ratio of the two gives us the SNR of each pixel.

For all intents and purposes, we can consider as background any pixel with SNR low enough that you can't pull any visible signal out with stretching - an SNR of 1 or lower is definitively background. The only issue is that LP is also signal, but LP is very predictable signal - it is uniform: either constant, or a linear gradient like above, or well approximated by a low order polynomial. Other signal - true target signal - is much more complex. What is left is a "guessing game": guess the LP level, subtract it, then check whether all pixels with SNR less than 1 after the adjustment have a mean value of 0. No? Back to step one - guess the LP level again (and the next guess will be based on how far the mean of the "background" pixels is from 0 - the closer to zero we are, the smaller the adjustment of the starting LP level we need). Still some things to work out, but I think it will work. Not sure if I can make it work in the special case where the whole image is covered with signal - but there we can use the signal-to-SNR relationship (including read/dark noise) to do the estimation. Will need to think about it some more.
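A minimal numpy sketch of the iterative sigma-clipping step described above (c and the iteration count are the knobs I mentioned):

```python
import numpy as np

def background_mask(channel, c=3.0, iterations=5):
    # True = background candidate; each pass drops pixels above mean + c*stddev
    mask = np.ones(channel.shape, dtype=bool)
    for _ in range(iterations):
        bg = channel[mask]
        mask &= channel <= bg.mean() + c * bg.std()
    return mask

# usage on one linear channel (img is a 2D float array):
# b = img[background_mask(img)].mean()   # estimated additive offset b
# img = img - b                          # background removed for this channel
```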
  9. Yes indeed - you need to do proper calibration of your subs, and even then you will still need to contend with different gradients and different signal strengths, because the reference frames for your panel stacks are shot at different times (signal strength depends on target position in the sky - the higher it is, the stronger the light, because there is less atmosphere to block it). On the other hand, proper calibration is something you best adopt from the beginning if your camera supports it - and one prerequisite for that is set point cooling. The ASI1600 has that, and it calibrates well, so don't worry about that aspect. Flats will probably be the hardest part - not because they are hard to record, but because you will need a good flat source. I would recommend a flat panel (an additional cost, but in my view worth it). All other "problems" should be handled by software.

Recording a signal to a certain "fidelity" requires a certain number of samples. For example, CD sound is recorded at 44100 samples per second, because most humans hear frequencies up to about 20kHz and the number of samples needs to be twice the maximum frequency (for CD quality that is 22050Hz). A similar thing happens when you take an image. A telescope can't "magnify" the image past a certain point due to the laws of physics. The atmosphere also plays a part, and on most nights even this theoretical "magnification" is too much - the atmosphere blurs details. With long exposure AP there is another aspect - how well the mount tracks the stars (and how well your guiding corrects mount errors and other problems) - which also reduces the level of detail that can be recorded. It simply removes higher frequencies from the image (yes, there is such a thing as frequencies in an image, and it is not simply related to the size of features - it is more complicated than that, but you don't need to understand it fully for this discussion), and if you sample the image too much - meaning "zooming in" too much, or using too many pixels to cover an object (making it larger on screen) - you will simply record a bigger blurry image without additional detail. In itself there would be nothing wrong with that, were it not for noise. When you record an image with too much zoom, you spread the light over many pixels; there is not enough signal per pixel, your SNR is poor, your image looks noisier, and you might not even be able to distinguish the object from the noise if SNR is too low. That is why you don't want to oversample. The opposite of oversampling is undersampling - not zooming in enough to capture all the detail available (from scope, guiding and atmosphere on a given night) - things just look smaller on screen, but you get better SNR.

That is the story of sampling, very oversimplified. Luckily you don't need to know all the details - you need to know how to calculate the sampling rate. It is expressed in arc seconds per pixel and depends on pixel size in micrometers and focal length of the telescope in millimeters: divide the two and multiply by 206.3 to get the sampling rate in "/px. In your case that would be (3.8 / 750) * 206.3 = ~1.045"/px. For wide field imaging with short focal length scopes this figure is generally larger than 3"/px (there is no real upper limit - one is limited only by the focal length of the lens, as for milky way landscape shots). "Regular" amateur imaging is in the range 3"/px to 1.5"/px. Sampling in the range 1.5"/px to 1"/px is considered advanced (you need a good mount, steady skies and aperture capable of it), and you can easily slip into oversampling if some of that is not playing ball. Going below 1"/px for most people, with most gear, in most conditions, will lead to oversampling - you need large aperture, very good mount and guiding, and a very stable atmosphere to go below 1"/px.

Now on to binning. Binning is the process of adding adjacent pixels to form a larger pixel (usually groups of 2x2 or 3x3). It affects the signal collected (summing 4 values gives a bigger value, hence better SNR) and also reduces the number of pixels covering the target (2x2 pixels are replaced with a single value - a single pixel), i.e. it raises the sampling rate (binning x2 on a 1"/px image makes it 2"/px). It should be done if you oversample - in that case you don't lose any detail, as there was no additional detail in the image in the first place, and you recover some of the SNR lost to oversampling (you would recover it 100% if you had a sensor without read noise - or in the case of a CCD with hardware binning, where there is one read noise per binned pixel). FOV is not affected by binning, but pixel count / resolution is. So an image viewed at "screen size" (fit to screen, or posted here on the forum) will not change, but an image viewed at 1:1 (100% zoom - one image pixel to one screen pixel) will change object scale when binned. See the sketch after this post for what 2x2 software binning does.

You don't need an autofocuser - but it is handy to have. I have yet to make two of them for my two scopes - they will be DIY projects. Using a toothpick will stop the focuser from being loose and shifting left/right, but it might not solve the problem you have, because slack in the focuser also means tilt. If the focuser tube is loose, it will shift left/right and in doing so change angle, and the important thing with a focuser is that it is squared to the optical axis - that ensures the sensor lies in the focal plane and is not tilted to it. If the sensor is tilted with respect to the focal plane, some parts of it will be out of focus, and that produces poor stars (usually in the corners, since they are furthest from center and hence most out of focus). A toothpick will stop the focuser from tilting, but it may lock it in an angled position - which still leaves the sensor at an angle to the focal plane.
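Here is that 2x2 software binning as a minimal numpy sketch (sum variant; averaging instead just divides the same result by 4):

```python
import numpy as np

def bin2x2(img):
    # Sum each 2x2 block into one pixel: halves the pixel count per axis,
    # doubles the "/px sampling rate, improves per-pixel SNR by ~x2.
    h, w = img.shape
    h, w = h - h % 2, w - w % 2              # crop to even dimensions
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

img = np.random.poisson(100, (1000, 1000)).astype(float)
print(bin2x2(img).shape)                     # (500, 500)
```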
  10. Not sure that linear fit is a good tool in this context. Presuming linear fit is what I think it is (not certain, since I don't use PI - but it should be, according to the name and the way you use it), it is meant for something else. Let's see how it works, what it should be used for, and why it's not the best choice here.

Light from the target passes through the atmosphere, which causes a certain level of extinction. That level depends on the transparency of the sky and the "number of atmospheres" the light passes through (the altitude of the target). We can say the resulting signal equals a certain fraction of the original signal:

signal = a * original

There is one more component - sky glow. It adds photons to the total measured signal, so it is additive in nature, and the formula should really be written as:

signal = a * original + sky_glow

Now if you observe two different subs, taken at different times with different target position / transparency and different levels of LP / sky glow, the following holds:

signal1 = a1 * original + b1
signal2 = a2 * original + b2

We have two subs with different levels of recorded light. We want to "normalize" these frames - make them "equal" - even if we don't necessarily know the constants a1, b1, a2 and b2. A bit of math shows how to do it:

c * signal1 + d = c * (a1 * original + b1) + d = (c*a1) * original + (c*b1 + d) = a2 * original + b2 = signal2 (for c*a1 = a2 and c*b1 + d = b2)

In other words, by multiplying one sub by some constant and adding another constant, we can get the second sub. Or, treating each pixel value as carrying noise, we can write down a bunch of equations of the form p1*X + Y = p2 (where p1 are pixels of the first sub and p2 are pixels of the second) and find the X and Y that minimize the error - linear regression (linear fit).

This is an essential step in preparing subs for stacking - you want your subs to be equal in signal strength with the same level of background. That way sigma reject works properly, and you can use noise to estimate weights for each sub. It is often called equalization of frames before stacking. I developed an extension to this that equalizes signal, background and background gradient (so the sub with the least background gradient can be chosen as reference, and background gradient is minimized that way).

Now that we know what linear fit is, let's see what it requires to work properly: the signal must be the same, and the background must be "additive". If you use linear fit on two subs (or stacks) that don't have the same signal, the difference in signal will be treated as noise; the algorithm will include it in the "minimize error" part and you will not get what you expect. This can work better or worse depending on the background gradients in each channel, but also on the strength and distribution of signal in each image. If the signal is in the same place, there will be some "equalization", but if it is in different places you won't get the result you are looking for. Likewise, if a large area is covered with signal in one image but only a small area in the other, results will be poor.
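For concreteness, a small numpy sketch of that frame equalization by least squares (noise-free toy data, so the fit recovers the relation exactly):

```python
import numpy as np

def linear_fit_normalize(sub, reference):
    # Find X, Y minimizing |sub*X + Y - reference| over all pixels,
    # then map the sub onto the reference frame's scale and offset.
    X, Y = np.polyfit(sub.ravel(), reference.ravel(), deg=1)
    return sub * X + Y

original = np.random.poisson(200, (100, 100)).astype(float)
sub1 = 1.2 * original + 10         # a1 = 1.2, b1 = 10
sub2 = 0.8 * original + 30         # a2 = 0.8, b2 = 30

print(np.allclose(linear_fit_normalize(sub2, sub1), sub1))   # True
```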
  11. I'm a bit puzzled by this. The way I've done it is to take the linear mono channels and do background / gradient removal prior to any combination. I also don't like the idea of combining data prior to stretching for NB images - Ha is much stronger, and if you combine your data and then stretch, you will get poor saturation in the color mix and Ha will dominate the image. I tend to do it like this:

- calibrate / stack / whatever, and do gradient / background removal on the linear channels separately
- stretch the mono images until I get a nice level of detail in each, with brightness at about the same level between them (that largely depends on the SNR of each channel)
- combine the images and do channel mixing to get a certain palette out of it
  12. 1. Yes

2. 1.25" filters work down to F/4.8 if mounted very close. Here is a flat from my ASI1600 and 80mm F/6 with x0.79 FF/FR (effectively somewhere around F/4.75?): the vignetting in the upper part looks like a straight edge - that is the OAG prism being mounted on that side, just a bit too close, casting a shadow on the sensor (and a very small diffraction spike on stars in that part of the image, but I'm more careful with its positioning now). Corners are at about 90% (88% in the upper left) in this combination, but I was using a filter drawer, which can be mounted even closer than a filter wheel. The difference between filter sizes can be summarized as:

- bigger - more expensive
- bigger - likely a bit better optically (the same surface imperfection will be spread over more of the light cone because you can mount them further from the sensor, so in percentage terms it will be smaller; not something you will ever notice if the filters are decent quality)
- bigger - more suitable for larger sensors and faster optics, and can be mounted further away

The ASI1600 has a diagonal of about 22mm, and a 1.25" filter has a clear aperture of about 27-28mm, so it has plenty of aperture to match that sensor. The problem comes from mounting distance - here is a quick diagram describing what can happen:

3. At the time I was looking to purchase a mono camera, that was the only sensible option in that price range with set point temperature cooling. I believe it is the same now, except there are maybe 183-sensor based cameras (and other vendors with the same sensor as the 1600), but I would still go for the ASI1600 - it is the largest sensor at that price.

On other comments:

- I would not recommend the 183 over the 1600 for a 150 F/5 scope - it will give you a sampling rate you won't be able to utilize, and it has a smaller surface area. For 750mm the ASI1600 is about right (maybe a tad oversampled at 1"/px - but that will depend on your skies and mount / guide performance).

- You don't need to get a smaller scope to do wide field - you can do wide field with the 150mm F/5 as well. It is not as straightforward, but since you like challenging projects, I think you will like it. You can do mosaics - shooting multiple panels, stacking each, and stitching them together for a wider FOV. There is a small overhead in imaging time (panels need to overlap a little, and you need to center each panel in the correct place - imaging software often has a mosaic planning feature). It involves a bit more processing (and some changes to the way you process your data, unless you have it automated by software like APP). Other than that, an F/5 scope will be "as fast" as a smaller F/5 scope with the same camera, even though you take multiple panels, provided you aim for the same resolution (which means binning the wide field panels to match the resolution of the scope with shorter FL). Since you have a 750mm FL scope, binning x2 in software will produce the same sampling rate as a 375mm scope. You will need x4 panels to cover the same area, which means x4 less time per panel to match total exposure time, but binning x2 improves SNR by a factor of x2 - the same as imaging for x4 more time - so it works out the same (except for framing and the slight overlap of panels, giving a slightly smaller FOV than a shorter FL scope would).

- I don't have experience with the focuser on the 150mm Newtonian, but I can tell you that focuser tilt is a real issue, and I ended up upgrading my focuser so I could have a threaded connection - everything now stays in place.
  13. As pointed out - it depends how seriously you take the term serious I do have my view on that: a serious scope is one that provides the most opportunities to do serious stuff. I have one such scope in my "stable" and I indeed consider it serious, although it probably is not if other seriousness criteria are applied. That would be the RC8" in my case. It is a reflector, so it does not suffer chromatic aberration - suited for UV/NIR, for example. It has symmetric off-axis aberrations - suitable for astrometry. It is suitable for visual (a bit less so than other models due to the large CO) and particularly well suited to all kinds of imaging. It is EQ mounted - therefore suitable for long, precise tracking. With additional gear it is suitable for spectroscopy. The only thing it is not suited for is solar work - and with full aperture filters and additional gear it might be suitable even for that (but I'm not even going to calculate the cost).
  14. With the 80ED it is a totally different story - I would not worry about the guide result being poorly measured; it is certainly within the limits you should aim for with the ED80. It has more than x5 shorter focal length (with field flattener / reducer) than the Mak180 and hence x5 coarser sampling - with most cameras around 2"/px or higher - and you need guiding of about 1" RMS for that. I think you will manage that without problem: if your reported values are correct (focal length and pixel size entered correctly), then regardless of any lack of precision in measuring the value, we can safely say you were guiding at 1" RMS or better. It is also a lighter setup, and that can only help.
  15. I advocate 16 for practical reasons - finite precision. Having 16 subs and averaging means you divide everything by 16, which in binary representation means "shifting" by 4 binary places (just as dividing by 100 means moving the decimal point 2 places) - it does not introduce an infinite number of digits. Imagine you have 3 subs and divide by 3. In some cases you will get an infinite number of digits as the result - 1/3 = 0.3333333... Once you write that down with a finite number of places, you introduce an error. If you write it as 0.333, you are in fact introducing an error of 0.000333... Maybe I'm just overly cautious about not introducing additional error, even if it is way too small to make an impact
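A tiny illustration of that finite-precision point (Decimal shows the exact binary value a float actually stores):

```python
from decimal import Decimal

# Division by 16 is a pure binary shift - exactly representable:
print(Decimal(1.0 / 16))   # 0.0625

# Division by 3 is not - the stored value is already rounded:
print(Decimal(1.0 / 3))    # 0.333333333333333314829616256247...
```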
  16. Ok, I'll admit - I might go overboard with the number of flats I take Here is the reasoning on how many flat subs one needs - it involves a bit of math, and the results differ based on the type of sensor one is using (and settings):

We will use an "extreme" case for the calculation - that way we can be sure it will work in the majority of cases. We will "correct" extreme vignetting of about 50% - half of the light reaching the sensor. For all intents and purposes, we want the difference between the real and corrected value to be less than about 0.4% - less than 1 out of the 256 intensity levels that modern displays offer. Imagine we have 0.5 intensity at a vignetted pixel and we want to make it 1 again (attenuation of 50%). We need to divide it by ~0.5 such that the result has less than the said 0.4% error. In math terms:

0.5 / something = 1.004, or something = 0.5 / 1.004 = ~0.498008

Max error in our case is 0.5 - 0.498008 = 0.001992, so the needed SNR is around 250. That will correct about 66% of pixels to the required value. If we want a really good correction rate, we need something like 3 sigma (three times that SNR) - that gives 99.7% of pixels corrected to better than 0.4% error. In other words, our flat needs an SNR of about 750 in the 50% signal area. Let's see how much SNR there is per flat sub in the 50% vignetted area.

CCD case: let's use a common e/ADU value of 0.5 (you can redo the math for your sensor) and a histogram peak at 80%. CCDs operate at 16 bit, so max ADU will be around 64000. Converted into electrons, that is around 32000e. 80% of that is 25600, and 50% illumination is half of that - 12800e. So the 50% vignetted signal contains roughly 12800e. The SNR in that region is the square root of this value: ~113.137 per single flat. To get an SNR of ~750 we need to improve SNR by ~x6.63, so we need to stack (the square of that value) about 44 subs.

CMOS case (particularly the way I use it - ASI1600, 12 bit at unity gain): max signal is about 4000e (4096 to be precise, but let's round). An 80% histogram peak means the fully illuminated part of the flat is at 3200e, while the 50% vignetted part has 1600e. A single flat in this case has an SNR of 40. To reach an SNR of 750 we need to "boost" it by x18.75 - by stacking ~350 subs.

From these calculations we can see two important things. CMOS sensors, because of their lower bit count when used at unity gain, require a higher number of flats to reach the same SNR. We can also see that if we want minimal error in flat correction (below one part in 256, so practically indistinguishable on 8-bit displays) under severe vignetting, and we want that level of performance in 99.7% of pixels, we need to take a lot of flats.

In reality, light attenuation will not be as severe as 50% but closer to 10-20%. We can also make the other criteria less strict - say an error of 1% per pixel (most people will have difficulty telling whether a pixel should have intensity 230 instead of 231 or 232), and we can settle for fewer pixels having "perfect" correction. This brings the needed number of flats down, of course - one of the reasons people use 16 flats without much impact.
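The arithmetic above, wrapped up as a small sketch (full-well values and criteria as assumed in the post):

```python
import math

def flats_needed(full_well_e, histogram_peak=0.8, vignetting=0.5,
                 target_snr=750):
    # Single-flat SNR in the vignetted area is sqrt(electrons collected there);
    # stacking N flats improves SNR by sqrt(N).
    electrons = full_well_e * histogram_peak * vignetting
    single_snr = math.sqrt(electrons)
    return math.ceil((target_snr / single_snr) ** 2)

print(flats_needed(32000))   # CCD case above -> 44 subs
print(flats_needed(4000))    # ASI1600 at unity gain -> ~352 subs
```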
  17. Ah yes, that was one of the reasons I was happy with the SA200 - it is low profile, so it will fit filter wheels / filter drawers... Depending on the design, a "straight through" version could need only a T2 / 1.25" filter adapter - like the one often supplied with ASI cameras. It does mean disassembling it each time you want to use the SA "regularly".
  18. I was looking at lowspec and it indeed seems interesting. In all of these designs, the reflective grating can be replaced with a transmission grating if the layout is slightly altered - so the SA can be reused in a higher resolution instrument (it does require some way of easily mounting it in and removing it from the spectroscope - like a filter drawer or similar).
  19. Actually, I think the SA can be used to do that level of spectroscopy, although the setup needs to be "modified" quite a bit. The grating itself is not modified, but resolution can be improved by: 1. use of a slit, 2. putting the grating in a collimated beam. The SA200, for example, with 200 l/mm and a clear aperture of more than 20mm, can in theory produce a resolution of R4000 (about 1.5A in the visible). The problem, of course, is that we tend to use it in a converging beam, and star FWHM is a limiting factor as well. Adding collimation optics (eyepiece and lens, for example, as in the topic I linked in the above post) provides a collimated beam. The size of the beam (or rather its diameter) depends on the speed of the optics and the focal length of the eyepiece, and is the equivalent of the exit pupil in observational use - which means we can easily have something like ~6mm (maybe a bit less with slow scopes), and 200 l/mm then gives us an R1200 theoretical maximum from the grating, which is enough for classification at about 5A (if I'm doing my math correctly). A 20um slit placed at the field stop of the eyepiece, with a reduction factor of about 1/3 - 1/4, will be "pixel wide" (with a perfect lens of about 1/3 to 1/4 the focal length of the eyepiece used), so we end up with sampling as the limiting factor. A 20um slit is both hard to make (as far as I'm aware, no DIY there, so it needs to be purchased) and requires a certain focal length to make the star large enough that it is about one FWHM wide (you want most of the light from the star to pass through the slit, while keeping the slit narrow). @robin_astro - does my rambling above make any sense?
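To sanity-check those R numbers - the theoretical resolving power of a grating in first order is just the total number of illuminated lines (a sketch of the figures quoted above):

```python
def resolving_power(lines_per_mm, beam_diameter_mm, order=1):
    # R = m * N, with N the number of grating lines the beam crosses
    return order * lines_per_mm * beam_diameter_mm

for beam in (20, 6):                 # full 20mm aperture vs ~6mm collimated beam
    R = resolving_power(200, beam)   # SA200: 200 lines/mm
    print(R, 6000 / R)               # R4000 -> 1.5A, R1200 -> 5A (at 6000A)
```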
  20. Stock mounts vary quite a bit in guide performance - some are smoother than others - but on average I would expect an EQ35 class mount (which is probably closer to EQ5 than EQ3 in guide performance) to have something like 1-1.5" RMS guide error. One could do sub 1" RMS on such a mount if the mount is smooth and conditions are particularly favorable on a given night - meaning zero wind and very good seeing. The Orion mini guide scope has a 162mm focal length (if I'm not mistaken) and the ZWO 120 mono has 3.75um pixels. Together that gives 4.77"/px of guide resolution. Centroid calculations can determine star position to about 1/16 to 1/20 of a pixel (depending on SNR) - which means ~0.25" - and the RMS calculation should be precise if it is larger than about x3 of that. So if the RMS figure is larger than 0.75", it can be "trusted" (it is about right); smaller than that, I simply think you won't be able to get an accurate reading. Mind you, that is enough precision to guide such a mount at stock performance (about 1-1.5" RMS), but it does limit your imaging resolution to about 2"/px.

Another way to tell if your guiding was really that good is to examine the subs you took while guiding. What sort of star shapes did you get? Are the stars distorted in any way - not round, elongated in one direction, or egg shaped? If they are round - meaning guide error was uniform in every direction (a good thing) - the next thing to look at is the FWHM of the stars in arcseconds. That value is the true measure of achieved resolution in the image, and if the number is comparatively small (depending on guiding, scope size and seeing that night), then yes, you were indeed guiding really well. And yes, it is quite a good general rule of thumb to have guide RMS at least half of the imaging resolution or smaller (smaller is of course better; larger is worse, but it will not "ruin" the image - it will just look a bit blurrier).
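The guide-resolution arithmetic from the first paragraph, as a quick sketch:

```python
def guide_scale(pixel_um, focal_length_mm):
    # arc seconds per pixel of the guide system
    return pixel_um / focal_length_mm * 206.265

scale = guide_scale(3.75, 162)     # Orion mini guider + ZWO 120 mono
centroid = scale / 20              # ~1/16-1/20 px centroid precision
print(scale)                       # ~4.77 "/px
print(centroid, 3 * centroid)      # ~0.24" and ~0.72" - the trust threshold
```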
  21. That is a superb result, if true. I'm a bit doubtful that it is. There can be a number of reasons why this number is not correct:

- First, check that you entered the focal length of the guide scope and the pixel size of the guide camera correctly into the guide program. This is needed to properly calculate arc seconds per pixel.

- It can be a case of the guide system being unable to properly measure the guide error. What guide camera and scope are you using? You need enough resolution in the guide system to properly measure guide error. If the system lacks resolution, it will report much smoother guide errors - a bit like estimating people's heights with a 1-meter stick without cm subdivisions: everyone will be either 1m or 2m and the results will look "smooth".

In any case, as wimvb mentioned, imaging resolution plays a part as well, but if you really are guiding at 0.5" RMS (or thereabouts), that is a really excellent result for an EQ35 mount (it is an excellent result even for heavier, more expensive and more precise mounts).
  22. At the time I was purchasing, I knew almost nothing about all of this, and since that option was in stock, and it seemed better because of its higher dispersion - I purchased that one As for suitability - there is a nice spreadsheet floating around (you can find it in some threads here on SGL as well) that does a decent job of calculating expected resolution depending on the focal length of the scope, field curvature, angle of the beam (unfortunately, using the grating in a converging beam produces aberrations) and expected seeing. Maybe I have it downloaded somewhere and can attach it for you, let me see. Yes, of course, I have it in my "docs" section, so here it is: TransSpecV3.1.xls

You can play around with it and see which setup would give you the best results. Just pay attention to the fact that you will be using a color sensor, so the sampling rate will be affected; also, you can stop down your scopes to make a slower beam - that can help. The SA200 might be better for slower scopes, as resolution depends on how many lines the beam intersects (the area of the beam at the grating). A larger sensor with large pixels helps with the SA200, but you can always bin your data. SNR is not something to be overly concerned about if you are not time limited (as for transient events or real time display of the spectrum), and guiding will help, of course. I think the SA200 is also better in "advanced" configurations - like making a collimated beam with an eyepiece - grating - lens - camera configuration. There was a thread recently showing this configuration. I plan to make something similar one day - with the addition of a "half slit" sort of thing: half of the FOV clear and half blocked by a slit. The upper part would be used for plate solving / centering, and the part with the slit would provide added resolution and proper background subtraction. Btw, here is that thread:

In the first go, with just the grating and nothing fancy, I plan to use it on the RC8" with the ASI1600. I did the calculations with the above spreadsheet and I should be able to get something like R300 in favorable seeing conditions. Of course, I'll be happy with any sort of decent spectrum, but going a bit higher resolution with the SA is one of my goals.
  23. It should be a fairly easy and even automated process... Once you extract the spectrum of a star, you can compare it to the reference spectrum for each class - the one that gives the best match is your class. I think a lot of fun things can be done with just stellar class identification. One of my goals, once I embark on this journey as well (the SA200 is ready and waiting for suitable time / weather), is determining the distance to a star - maybe even a double star or small cluster. With more samples you get better "precision", but the process is fairly easy: get the spectral class, get the expected absolute magnitude from that, then do a bit of photometry to get the apparent magnitude, and from those two - the distance.
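That last step is the distance modulus; a minimal sketch with hypothetical example values (a G2V star, absolute magnitude ~ +4.8, measured at apparent magnitude 9.8):

```python
# m - M = 5*log10(d) - 5  ->  d = 10 ** ((m - M + 5) / 5), in parsecs
def distance_pc(apparent_mag, absolute_mag):
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

print(distance_pc(9.8, 4.8))   # 100.0 pc
```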
  24. Just for reference, I measured the length of the CW bar on my mount and it is ~20.2-20.3cm long fully extended. That gives you a comparison value to see if yours is fully extended as well, or maybe shorter, or something.
  25. Reasons that come to mind:

1. holding the shape - rigidity (carbon fiber might bend under its own weight, for example)
2. ability to be polished to the needed degree (ease of figuring)
3. thermal stability (but also thermal expansion - some glass types are better at this than others, with expansion factors close to zero)