Everything posted by vlaiv

  1. I'm not sure that Maks and SCTs suffer as much vignetting from the size of the rear port as people think. The beam is very slow and the focal length very long, so not much of the light gets blocked in percentage terms - and a drop of about 10% is just on the edge of being perceived as a difference (the just-noticeable difference for light is 7%, if I'm not mistaken).
  2. That is ok, just remember one thing we will need for further explanations - the maximum field stop diameter with 1.25" eyepieces is ~27mm.

A larger exit pupil can be both a good and a bad thing - it depends. It makes the object both brighter and smaller in size, up to a point. If the exit pupil is larger than the pupil of your observing eye, then you are wasting light. Look at the following image: it shows what happens when the exit pupil is larger than your pupil (in this case the observer is not dark adapted and their pupil is only 2-3mm wide, while the exit pupil from the eyepiece is 7mm wide). Disregard the "result" comment at the bottom of the image as it is actually wrong in this case. In any case, some of the light will hit the iris of your eye and fail to enter it. That light is lost. For this reason you want to keep the exit pupil of the eyepiece at or below how far your pupil dilates in the dark.

A second note regarding exit pupil size - I've mentioned that the observed object gets brighter when you use a larger exit pupil. The same thing happens with the background sky. If we were floating in outer space with no background light from the sky, the largest exit pupil would be ideal. But our atmosphere scatters some light and is not completely dark, so it becomes a matter of contrast. As we increase the exit pupil we brighten both the target and the background sky, and at some ratio of target brightness to sky brightness the contrast will be best (this also depends on the size and shape of the target). For this reason the largest exit pupil is not always the best - we need to try different exit pupils depending on our sky conditions (level of light pollution) to find the one that shows the target best against the background sky.

Ok, yes - I will explain this a bit better. A telescope and an eyepiece are really the same thing but in reverse. The telescope takes an angle and projects it onto a plane - the focal plane - while the eyepiece does the opposite: it takes a point on the focal plane and turns it into a bundle of light at a certain angle. That angle is related to the distance of the point from the optical axis (the center of the focal plane) by a simple equation that depends on focal length. The diagram explains it nicely - there is an entrance angle alpha which gets projected onto the focal plane (the little vertical arrow of height h between the two lenses) and the eyepiece turns that into an exit angle beta:

tan(alpha) = h / focal_length_of_telescope
tan(beta) = h / focal_length_of_eyepiece

This is why the magnification of a telescope and eyepiece combination is given by the ratio of their focal lengths - for small angles it is the same as the ratio beta / alpha (magnification is the change in angle).

Now imagine that the little vertical arrow in the middle can only be of a certain length. That is what the field stop / field diameter is - the diameter of the black ring in the image you see. It is usually limited by the physical dimensions of the telescope and eyepiece - or to be more precise, of the focuser tube and telescope tube. You've mentioned that your 32mm and 40mm Plossls show the same amount of sky, just at different magnifications / AFOVs and exit pupils. This is because both of those eyepieces are limited by their 27mm field stop. The 40mm Plossl shows the same patch of sky bounded by the same black ring - only smaller (less magnified).

A focal reducer helps with the following: if the telescope's "field stop" (or illuminated field, to be precise) is larger than the eyepiece field stop, it helps to "squeeze" the field produced by the telescope into the field shown by the eyepiece. Hope the above makes sense.
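Here is a minimal Python sketch of those relations (the 1000mm focal length / 100mm aperture scope is just an example for illustration, not from the discussion above):

```python
import math

def true_fov_deg(field_stop_mm, telescope_fl_mm):
    # The field stop subtends the true field at the telescope's focal plane:
    # TFOV = 2 * atan(field_stop / (2 * focal_length))
    return math.degrees(2 * math.atan(field_stop_mm / (2 * telescope_fl_mm)))

def magnification(telescope_fl_mm, eyepiece_fl_mm):
    return telescope_fl_mm / eyepiece_fl_mm

def exit_pupil_mm(aperture_mm, mag):
    return aperture_mm / mag

# Both 1.25" Plossls are capped at the ~27mm field stop, so they show
# the same true field - just at different magnifications and exit pupils.
for ep_fl in (32, 40):
    mag = magnification(1000, ep_fl)
    print(f"{ep_fl}mm: x{mag:.0f}, exit pupil {exit_pupil_mm(100, mag):.1f}mm, "
          f"TFOV {true_fov_deg(27, 1000):.2f} deg")
```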
  3. Well, this is indeed true. I've recently experimented with a 4" refractor and focal reducer to see if I'd be able to get a full view of M31 in the eyepiece. There are a couple of ways this can be achieved with said scope, and I've chosen the one that was available to me: I already had a (rather expensive) focal reducer that I knew would work well with this scope, and I also had the eyepiece I wanted to try this all out with. If that were not the case, I'd probably go the following route - get one of these: https://www.firstlightoptics.com/astro-essentials-eyepieces/astro-essentials-super-plossl-eyepiece.html - the 2" 55mm Plossl.

In any case, when trying to get a wide field of view from such an instrument, there are really only two things you need to be careful about:

1. The exit pupil must not exceed your pupil when fully dilated / dark adapted. This is usually quoted at 7mm, but people tend to lose the ability to dilate their pupils with age and it's best to actually measure it.

2. The fully corrected and illuminated field of the telescope can't be enlarged and is subject to restrictions. For example, in a 2" system the max field stop is around 47mm, while in a 1.25" system it is about 27mm. Try to "squeeze" more field into that and you will have vignetting and poor stars at the edge of the field.

In my case above, a 4" F/10 can provide a good field up to 2" size - so 47mm - and the 2" 55mm Plossl is one of the few eyepieces that has a field stop that large. That eyepiece would also give a 5.5mm exit pupil, so it is a good match (the only drawback is the 50 degree AFOV of the Plossl - for those that love their wide field EPs). On the other hand, the same scope with a x0.67 reducer will "squeeze" 47mm down to 47 * 0.67 = ~31.5mm. The eyepiece that I already have is the Explore Scientific 28mm 68 degrees. It has a field stop of 31.8mm - so again a very good match (maybe the tiniest bit of vignetting, but it was not noticeable when observing). It also produces a ~4.2mm exit pupil (28mm / 6.7), so that checks out as well.

They both achieve the same (or rather very similar) thing - one with and one without a focal reducer. This is because the scope is capable of showing that much. If I tried to use the 55mm Plossl with that reducer it would not work (at least not very nicely), as vignetting would be very pronounced. So I would not see more of the sky than the scope is capable of showing - that is physically impossible. A focal reducer just makes it easier in some cases. Similarly, when you try to match sensor size for photography with the illuminated field, a focal reducer can be used to widen the field of view if the scope is otherwise capable of rendering such an image on a larger field. The same effect can be achieved by using a larger sensor - so it is a matter of economics and convenience, what is affordable and what works well for you: a larger sensor, or a smaller sensor + reducer.

There are a few more minor things that need to be taken into consideration. Reducers often work at a prescribed distance, so you must take care of that. I for one had to remove the nose piece and eyepiece adapter from my diagonal and 3d print direct adapters for reducer and eyepiece because of the distance required by the reducer. Reducers also move the focus point inward and you might not reach focus if your focuser does not have enough inward travel.
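A rough sketch of the matching exercise above, using the numbers from this post (the 46mm field stop assumed for the 55mm Plossl is approximate - check the actual spec):

```python
def check_combo(f_ratio, reducer, ep_fl_mm, ep_field_stop_mm,
                scope_field_mm=47, max_eye_pupil_mm=7):
    eff_f_ratio = f_ratio * reducer
    usable_field = scope_field_mm * reducer   # reducer shrinks the scope's field too
    exit_pupil = ep_fl_mm / eff_f_ratio       # exit pupil = EP focal length / F-ratio
    print(f"x{reducer} reducer, {ep_fl_mm}mm EP: usable field {usable_field:.1f}mm "
          f"vs field stop {ep_field_stop_mm}mm, exit pupil {exit_pupil:.1f}mm "
          f"(keep under {max_eye_pupil_mm}mm)")

check_combo(10, 1.0, 55, 46)     # 2" 55mm Plossl, no reducer: fits, 5.5mm pupil
check_combo(10, 0.67, 28, 31.8)  # ES 28mm + x0.67: near-perfect match, ~4.2mm pupil
check_combo(10, 0.67, 55, 46)    # 55mm + reducer: field stop far exceeds usable field
```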
  4. Yes, that recommendation of 5 x pixel size is directly derived from the formula I wrote and the wiki article I linked to. Given your solar setup and the work you do with it, I did suspect that you have an academic background - that is why I just linked the article in the first post. The math for calculating the spatial cutoff frequency is straightforward. The math behind the wave nature of light producing that cutoff frequency is not as straightforward, but it's not very difficult for someone with a masters degree or higher in the sciences. It boils down to interference effects which turn out to have the same form as the Fourier transform of the aperture (there is a whole field of optics named Fourier optics because of this). This is similar to how the Fourier transform represents filters (convolution in the spatial / temporal domain is the same as multiplication in the frequency domain and vice versa - the convolution theorem). In any case, there is a clear cutoff point in the frequency domain due to the circular aperture, and there is a limit to how much the optics can resolve because of it (this is why we need to use radio telescopes in different spots around the globe to get good resolution, since radio waves have much, much longer wavelengths). Sampling finer than this limit produces no additional detail - the same data will be recorded - but using a higher F/ratio simply spreads the light (the aperture gathers only so much light per unit time since photon flux does not change) over more pixels, so signal per pixel gets lower and so does SNR (for the same exposure time).
  5. Where does it say that? I think that best results can be obtained provided you do the following:

- use a barlow and "dial in" the F/ratio of your optical system to match the pixel size you are using. The formula is easily derived from the spatial cutoff frequency above and goes like F/ratio = 2 * pixel_size / minimum_wavelength - where pixel size and minimum wavelength are in the same units (meters, nanometers, micrometers ...) and minimum wavelength is the smallest wavelength in the range you are recording. If using an OSC sensor, use stacking software that supports bayer drizzle (AS!3).

- use as short an exposure length as possible. This will be governed by the QE of the sensor and its read noise. Use the highest QE and lowest read noise sensor you can get, and don't look at the histogram to set your exposure length - just use as short as possible. Only use longer exposure lengths if seeing allows, but most times it will be around 5-6ms per exposure.
  6. Of course, but that is very much related. Spatial cutoff frequency is well defined for a perfect optical system (so a perfect telescope and absence of seeing aberrations) and sampling above it just has a negative impact on the final image. It is simply waste, as no detail above the spatial cutoff frequency can be recorded - but it lowers SNR per pixel and thus forces longer exposures. Longer exposures often go above the atmospheric coherence time, and along with seeing aberrations we also get motion blur (different seeing wavefronts end up superimposed on a single exposure).

For a 2.9um pixel, imaging in visible light - meaning 400-700nm - the following applies:

F/ratio = 1 / (lambda * cutoff_frequency)

For critical sampling the cutoff frequency equals the sensor's Nyquist frequency, 1 / (2 * pixel_size) - two samples per cycle at the cutoff - so we end up with:

F/ratio = 2 * pixel_size / lambda = 2 * 2.9um / 0.4um = F/14.5 (we use 400nm as the lower bound for most detail)

That is the highest you really need to go in order to capture all the available detail that a perfect telescope can provide in ideal conditions using this camera in the visible spectrum. You are sampling at ~F/25 (C11 is F/10 and the x2.5 telecentric amplifier gives F/25), so your exposures need to be (25/14.5)^2 = ~3 times longer to achieve the same SNR / signal level per sub. This can easily push you over the coherence time for given seeing, and you enter the region where most of the subs are not only distorted by the atmosphere but also blurred by the moving atmosphere (motion blur).
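A quick sketch of that sampling math in Python:

```python
# Cutoff frequency of a perfect circular aperture is 1/(lambda * F#);
# sampling at Nyquist (two pixels per cycle) gives F# = 2 * pixel / lambda.
def critical_f_ratio(pixel_um, wavelength_um=0.4):
    return 2 * pixel_um / wavelength_um

def exposure_penalty(actual_f_ratio, needed_f_ratio):
    # Same photon flux spread over more pixels: signal per pixel drops with
    # the square of the F-ratio, so exposure must grow by the same factor.
    return (actual_f_ratio / needed_f_ratio) ** 2

f_needed = critical_f_ratio(2.9)                    # -> F/14.5
print(f"critical F-ratio: F/{f_needed:.1f}")
print(f"exposure penalty at F/25: x{exposure_penalty(25, f_needed):.1f}")  # ~x3
```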
  7. Do read this short wiki article: https://en.wikipedia.org/wiki/Spatial_cutoff_frequency - and try to spot the problem in the above sentence
  8. Best to measure with a set of calipers. An M4 screw will have a thread diameter (not the body but the threads) a bit less than 4mm, depending on tolerance class. From your image, the right screw seems to be M5, because it is the shaft of the screw that is about 4mm, not the whole thing including threads. If you look at this diagram: the major radius differs from the minor radius by 5 * H / 8, where H is related to the pitch by H = pitch * sqrt(3) / 2. M5 has a pitch of 0.8mm, so H in this case is ~0.693mm, and hence 5 * H / 4 - which is the difference of diameters (twice the difference of radii) - is ~0.866mm, or close to a whole mm. So if you measure the shaft to be ~4mm, the actual thread diameter is ~5mm.
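The same arithmetic as a small Python sketch:

```python
import math

def minor_diameter(major_mm, pitch_mm):
    H = pitch_mm * math.sqrt(3) / 2   # fundamental triangle height
    return major_mm - 5 * H / 4       # minor = major - 2 * (5H/8)

for name, major, pitch in [("M4", 4.0, 0.7), ("M5", 5.0, 0.8)]:
    print(f"{name}: thread OD {major}mm, shaft ~{minor_diameter(major, pitch):.2f}mm")
# M5 -> shaft ~4.13mm: a screw whose shaft calipers at ~4mm across
# is an M5, not an M4.
```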
  9. That is a bit of a loose statement. How about a hexagonal or maybe octagonal sensor? In fact, the best non-circular sensor having all straight edges would be a regular N-sided polygon with N -> infinity
  10. No reason not to use it. It is an optical element - it does not care what comes after it (whether it is a DSLR or some other type of sensor). The only thing you have to worry about is the size of the corrected field, and most dedicated cameras have sensors the same size as a DSLR's or smaller (there are only a few full frame dedicated astro cameras and they cost an arm and a leg). No, it won't be much of a problem. Even if you are over sampled, it will be by a small amount.
  11. With 2um pixel size and the Mak127, you really don't need a barlow for planetary imaging. Skipping it will help keep the exposure length lower to freeze the seeing and help you make a better image. This is the size of image you can expect a 5" telescope to produce: (yes, that is your image slightly tweaked and resized to "normal" size).
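A quick check of the "no barlow needed" claim, assuming the common 127mm / 1500mm focal length Mak127 spec (an assumption - check your own scope's numbers):

```python
def critical_f_ratio(pixel_um, wavelength_um=0.4):
    return 2 * pixel_um / wavelength_um

native_f = 1500 / 127                 # ~F/11.8
needed_f = critical_f_ratio(2.0)      # F/10 for 2um pixels
print(f"native F/{native_f:.1f} vs needed F/{needed_f:.1f}")
# Native focal ratio already exceeds the critical one, so a barlow would
# only spread the light thinner and force longer exposures.
```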
  12. I recently acquired the Svbony 9-27 and had a chance to compare it against the Baader Mark III and Mark IV. The comparison was done in daylight, and while the Svbony has a narrower field of view, it provided better contrast. This was in a Mak102 and in an F/6 80mm triplet. I did the comparison to decide if I was going to get a Baader zoom to be my zoom eyepiece, but I decided not to. I also had a chance to compare the Svbony 9-27 on the moon against the ES82 11mm and a 12mm plossl, and I had the impression that it was lagging behind those two by a hair - but it was hard to tell, as seeing was changing moment to moment and it might as well have been effects of seeing that I was experiencing. In my mind it was ES82 > Plossl 12 (GSO version) > Svbony, but like I said, an exceptionally small difference in sharpness - almost imperceptible to a regular observer.
  13. Here it is: https://www.1728.org/angsize.htm
  14. @Kon By the way, the above calculation suggests that an 8" telescope should limit its video to half the duration of a 4" one (because it can resolve twice as much, and the resolved blur will be smaller) - so only about ~23 seconds before rotation blur kicks in without alignment points - yet you probably use videos 2-3 minutes long without issues, as AS!3 deals with any rotation that happens on those scales.
  15. Yes, there is an easy way to test this - do a 4 minute video for example (or even 5 minutes) and stack it using the best 10% (or whatever percentage) of frames, then simply take the same video, split it into 2 parts of 2 minutes and stack those using the same percentage of frames. The SNR difference won't be huge and you will see if there is an issue with rotation. If the two short videos produce images of the same quality, that means seeing was similar during the whole session and the 4 minute video won't be poorer due to seeing alone. If there is a difference, it will be due to rotation.
  16. Well, we can do some calculations to see what would happen, so that we get a good idea of what is limiting total time.

The circumference of Jupiter along the equator is about ~439300Km (diameter times pi). The rotation period of Jupiter is 9h 55m = 35700s. The speed of a point on the equator is thus ~12.3Km/s.

Now take the critical sampling rate of 0.4126"/px and see what time it takes for a point on the equator to move half a pixel. At present, Jupiter is 602.34 million Km away (according to google). Half a pixel is ~0.2 arc seconds, and the length that subtends 0.2 arc seconds at a distance of 602.34 million Km is ~584Km. At a speed of 12.3Km/s, that distance is covered in only ~47.5s!

The above suggests that we can have motion blur even under one minute! However, we are using Autostakkert for stacking, and one of its features is the ability to correct for the lowest order atmospheric disturbance - namely tilt. In average to very good seeing, a star profile will present a FWHM of say 1.5-2". That translates into 0.64-0.85 arc seconds of RMS displacement from the true position. We can see this when we observe slow motion recordings of the lunar surface: parts of the image "jump around" by at least half an arc second if not more, and if we "freeze" the seeing we won't get motion blur because of this - only geometric distortion. AS!3 handles geometric distortion with alignment points. This means that the software can "return" a part of the image that is displaced up to 1 arc second from its true position.

Now, if we have this feature in software, it will also "derotate" the image automatically if it moves up to say 1 arc second (or even more, depending on the size of the alignment point). So we can put 1" instead of 0.2" in the above calculation and get x5 higher duration: 47.5s x 5 = 237.5s, or ~240s, or up to 4 minutes. In fact, if AS!3 takes the middle of the recording to produce the reference frame, it can correct for rotation for +/- 4 minutes around that point.
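Here is the same estimate as a Python sketch (Jupiter's distance changes with date - 602.34 million Km is the figure used above):

```python
import math

ARCSEC_PER_RAD = 206265

def blur_time_s(diameter_km, period_s, distance_km, allowed_arcsec):
    equator_speed = math.pi * diameter_km / period_s        # km/s at the limb
    allowed_km = distance_km * allowed_arcsec / ARCSEC_PER_RAD
    return allowed_km / equator_speed

period = 9 * 3600 + 55 * 60                                 # 9h 55m
dist = 602.34e6
print(f"half-pixel (0.2\"): {blur_time_s(139_820, period, dist, 0.2):.0f}s")
print(f"AS!3 AP budget (1\"): {blur_time_s(139_820, period, dist, 1.0):.0f}s")
# -> ~47s without alignment-point help, ~237s (~4 minutes) with it.
```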
  17. The color is a bit weird, which suggests that you did not use a UV/IR cut filter with that camera? The camera only has an AR coated window (anti reflection coating) but passes the full spectrum of light, and the sensor is sensitive in the IR part of the spectrum as well. If you want proper colors out of that camera you should filter the light to the 400-700nm range - which means using a UV/IR cut filter. You don't have to use it if you don't mind a strange color cast.

Here are some tips:

- use high FPS and low exposure time. Something like 5-6ms will work well. Don't be alarmed if the video looks under exposed and the planet looks very dark - that is ok, as stacking will sort it out

- record for at least 3-4 minutes. You should get about 40000-50000 frames in total with these settings (if everything is ok - your computer is capable of recording at those speeds and you use USB 3.0). Do use ROI - there's no need to shoot at a higher resolution than say 800x600px

- stack only 5-10% of the best frames. With the above frame count, that will give you plenty of frames and a smooth image in the end
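A rough frame-budget sketch for those settings, assuming the camera and USB link can actually sustain the full frame rate at 5ms exactly:

```python
def frame_budget(exposure_ms, capture_s, keep_fraction):
    fps = 1000 / exposure_ms
    total = int(fps * capture_s)
    return total, int(total * keep_fraction)

for minutes in (3, 4):
    total, kept = frame_budget(5, minutes * 60, 0.10)
    print(f"{minutes} min @ 5ms: ~{total} frames, ~{kept} stacked at 10%")
# 3 min -> ~36000 frames / ~3600 stacked; 4 min -> ~48000 / ~4800.
```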
  18. I would say that using an arbitrary ratio of Ha signal to reproduce Hb signal is "cheating". There is no reason to suppose that: a) this ratio is constant in hydrogen gas (and indeed it is not), b) this ratio is constant even on the selected target. Look at this example: this is part of M42 taken as an OSC image - Red versus Green channel. Red will contain Ha obviously, and Green will contain Hb. There are parts of the nebula that are visible in both images at almost the same brightness - which means the Ha to Hb ratio is very close to 1:1 - while some other features are present only in Ha - which means that Hb is much, much weaker there, if present at all - and all of this in the same object. Given that Hb is a higher energy transition than Ha, something needs to excite the hydrogen gas more in order to produce this emission - and I'm guessing that there is some interesting physics behind finding the ratio of the two. For reference, here is the list of visible Balmer series transitions and their colors:
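They follow from the Rydberg formula - a quick sketch that computes them:

```python
# Visible Balmer lines: 1/lambda = R * (1/2^2 - 1/n^2) for n = 3..6.
RYDBERG = 1.0967758e7            # m^-1, for hydrogen

names = {3: "H-alpha (red)", 4: "H-beta (blue-green)",
         5: "H-gamma (violet)", 6: "H-delta (violet)"}
for n in range(3, 7):
    inv_lambda = RYDBERG * (1 / 4 - 1 / n ** 2)
    print(f"{names[n]}: {1e9 / inv_lambda:.1f} nm")
# -> ~656nm, ~486nm, ~434nm, ~410nm (vacuum wavelengths)
```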
  19. Not directly. With planetary imaging it is important to freeze the seeing, and for that reason the exposure needs to be very short - often in the 5-6ms range - even if the image looks under exposed. When you image at those exposure lengths you can ideally achieve up to 200fps (1s / 5ms = 200fps).

There is a limit to how many of those frames can be recorded, and this limit is imposed by the USB connection speed and the speed of your disk drive (SSD/NVME can easily cope with the needed speeds, so it's worth having those in your imaging rig). A USB link has limited bandwidth - it can achieve only a certain data transfer speed. Each frame you record contains some amount of data, so if you increase FPS you increase the amount of data that needs to be transferred over the USB connection. At some point the USB connection can become the bottleneck in your recording. When this happens it is beneficial to reduce ROI, as the size of each frame determines how much data it contains - smaller ROI, less data per frame, more frames per second transferred over USB. You achieve the best contrast/detail if you capture the most data, and a smaller ROI can help with that by increasing the frame count - but only for reasons of data transfer. After you hit the max data rate / max FPS allowed by your exposure length, a smaller ROI won't contribute anything. BTW, ZWO publishes the max theoretical FPS for every camera / ROI size combination and it is worth checking out. Say you work in 8bit format and you want to hit 200FPS because you are using a 5ms exposure length - then you need to drop your ROI to around 800x600.

One more note - exposure length should really be judged properly. It needs to be short enough to freeze the seeing - which really means that the distortion of the atmosphere is "static" over the short period of a single frame. If you don't do this you will have the "cumulative effect" of two or more different distortions averaged - which is just "motion blur" of different distortions, and a bad thing. On the other hand, you don't want to set your exposure any lower than that because it will hurt your final image (more noise than needed). The reality is that there is no single well defined exposure length and it is a trade off - some frames will be usable, some won't, as there will be motion blur. The longer the exposure, the more frames you'll need to discard and the fewer good ones you'll stack, so it is a fine balance finding a good exposure length. Another factor is how bad the seeing is - in average seeing you will need exposures in the 5-6ms range. In really good seeing you might afford 10ms or even 15ms. In really poor conditions you might need to go as low as 3-4ms. Btw, lunar imaging can often employ 3ms as standard because of the amount of light - this even allows narrowband filters to be used on the moon with longer exposures - but that is an "advanced topic"
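A small sketch of the bandwidth arithmetic for the 8bit / 200fps example (each 8bit mono pixel is one byte):

```python
def data_rate_mb_s(width, height, fps, bytes_per_px=1):
    return width * height * fps * bytes_per_px / 1e6

for w, h in [(1920, 1080), (800, 600), (640, 480)]:
    print(f"{w}x{h} @ 200fps: {data_rate_mb_s(w, h, 200):.0f} MB/s")
# 1920x1080 -> ~415 MB/s, at or beyond practical USB 3.0 throughput;
# 800x600 -> ~96 MB/s, comfortably within it.
```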
  20. Well, I thought that this was a well known thing. Here is the full explanation of how things behave.

The only difference between a stack of many shorter subs and a stack of few longer subs (or one very long sub), all totaling the same exposure length, is in read noise. If we had cameras with zero read noise it would be completely the same (as far as SNR goes) whether you use many or few subs. Since we don't have such cameras and every camera has some level of read noise, this creates a difference, because read noise is the only noise source that does not grow with time - it is exclusively a per exposure type of noise. Everything else grows with time, both signal and associated noise, and it does not matter if you sum them physically by using a long integration time or mathematically, which is in fact what stacking is (as far as the data goes it makes no difference if we sum or average pixel values - the average is the sum divided by a constant, and an image multiplied by a constant remains the same image - it just gets a linear stretch, which we alter again in processing anyway).

Thus, a stack of many shorter subs will always be of worse quality (have lower SNR) than a stack of few longer subs with the same total imaging time. However, the difference in SNR between the two can range from significant down to imperceptible, and it solely depends on how big the read noise is compared to some other noise source. This is because of how noises add. They don't add "normally" like numbers, but rather "in a special way" - like vectors, or to be more precise, linearly independent vectors (ones that are at 90 degrees to each other).

The image above explains what happens. If we have two noise sources a and b, where a is some noise source like thermal noise or LP noise and b is read noise, then in the first example, where a is equal or comparable in size to b, the resulting total noise c will obviously be larger than either of them (the diagonal of a square is longer than either of its sides). In the second example, read noise b is significantly smaller than the other noise source a. This results in total noise c being about the same size as the larger component a. The impact of read noise becomes insignificant.

All of this explains why people get different results when stacking different exposure lengths, and it also gives a way to calculate a sub exposure length that will not impact the stack in a visible way. CCD cameras have high read noise, and when they were popular fewer people were into astrophotography and most of them tried to do it under dark skies. A cooled camera in low light pollution (or when using narrowband filters) does not have a significant noise source to overpower read noise (thermal noise is low and LP noise is low), and a long exposure is needed for either of the two to build up enough to swamp the read noise. CMOS cameras with very low read noise, plus the increase in popularity of astrophotography which led to many more people imaging from cities where LP is high (and a steady increase in LP over the years), brought a totally different situation. Today many people use 1 minute or even 30s subs without problem, rather than 20 or 30 minutes, simply because there is a noise source (namely LP) that will quickly swamp the low read noise of a CMOS sensor.

In the end, a good factor by which LP noise should swamp read noise is in the range of x3-x5. I personally advocate x5 because it produces only a 2% difference in total noise per sub, and that can't be distinguished by the human eye.

Calculating the optimum sub length in the above sense is thus easy - one should measure the sky signal in their sub and make sure it is at least (5 * read_noise)^2, since the average background signal in electrons and its noise are related by signal = noise^2, i.e. noise = sqrt(signal).
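A sketch of that rule in Python (the 1.6e read noise is just an example value for a modern low-read-noise CMOS sensor):

```python
import math

def total_noise(sky_e, read_noise_e):
    # Shot noise of the sky adds in quadrature with read noise.
    return math.sqrt(sky_e + read_noise_e ** 2)

read_noise = 1.6                        # e-, example value
target_sky = (5 * read_noise) ** 2      # = 64 e- background per sub
shot_only = math.sqrt(target_sky)
print(f"target sky signal: {target_sky:.0f} e-")
print(f"noise penalty vs zero read noise: "
      f"{total_noise(target_sky, read_noise) / shot_only - 1:.1%}")  # ~2%
```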
  21. A common misconception is that signal must overwhelm noise. That is true for the final image, but it is really not important in a single exposure. You can have a single exposure with signal well below the noise level and still end up with a final SNR that is acceptable and shows the target. Another misconception is that if "no photons" are captured in a single exposure, the result will always be zero. I'd be happy to image something that has say 0.1 electrons of signal per exposure, even if the noise in that region is say 3e. In that case roughly 9 out of 10 exposures will indeed fail to capture even a single photon from the target, and all exposures will have signal much weaker than the noise - yet stack enough of them and it will happen (in this particular example SNR is ~0.0333 and you would need to stack (5 / 0.0333)^2 = 22500 exposures to reach SNR of 5).
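The arithmetic behind that example:

```python
import math

signal, noise = 0.1, 3.0                # e- per sub, from the example above
snr_per_sub = signal / noise            # ~0.0333
subs_needed = (5 / snr_per_sub) ** 2    # SNR grows with sqrt(N) when stacking
p_zero_photons = math.exp(-signal)      # Poisson chance a sub catches nothing
print(f"per-sub SNR {snr_per_sub:.4f}, subs for SNR 5: {subs_needed:.0f}")
print(f"{p_zero_photons:.0%} of subs catch no target photon at all")
# -> 22500 subs; ~90% of them record zero photons, yet the stack converges.
```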
  22. Yes, although it does depend on the type of CC in question. Some are really terrible in that regard - like simple two element models - with visible blurring/loss of resolution even in long exposure images where seeing mostly dominates. But in general, all of them will trade off some center-of-field sharpness for correction in the outer part of the field. There are some telescope designs that don't suffer from this - for example Mak-Newtonians are known to be excellent planetary performers (especially F/6, so somewhat slower models) while having good star definition over a larger field.
  23. A telescope needs to be at least diffraction limited for best planetary performance. Telescopes that have a built in field flattener (or have a FF/FR added) are usually not diffraction limited - which means that their Airy pattern is larger than it should be for the given aperture, or in other words they behave like smaller aperture telescopes as far as resolving capability goes. Look at the spot diagram of the Askar scope, and in particular the RMS value on axis - it is 1.617 (which is a very good value by the way, so this scope is really not that much hampered by being corrected for photography). Now, let's do some math. The diameter of the Airy disk of a diffraction limited F/7.7 scope is ~4.3um, so the radius is half that, or ~2.15um. There is a relationship between the Airy profile and the corresponding Gaussian profile that goes like this: so we are looking at about 1.46um RMS (approximation). We now have two values to compare: ~1.62 vs ~1.46. It is clear that the Askar produces a larger pattern than a diffraction limited telescope would - and is not diffraction limited. It blurs the image more (much like a smaller aperture).
  24. Yeah, I would not really take that seriously. The best reading of it is: FPS potential increased by 30% (but not necessarily actual FPS), and as for "imaging efficiency" - no comment there.

You need to match the pixel size of your planetary camera to the F/ratio of the scope of choice. I would use a simple telescope design for planetary imaging rather than a telescope meant for DSO imaging. There are some tradeoffs when you aim for a good flat field - it usually sacrifices diffraction limited performance in the center. Something like a 6" F/8 newtonian will eat that Askar for lunch on planets. Anyway:

- camera pixels must match the F/ratio of the setup (you can adjust F/ratio with a barlow lens) - pixel size * 5 = F/ratio, so for a camera with 2.9um pixels you want your F/ratio to be around F/14.5 (see the sketch after this list)

- get a camera capable of high FPS (USB 3.0 connection and preferably a computer capable of recording data at a high rate, so an SSD / NVME drive)

- high QE

- as low read noise as you can get

- for lunar and solar work mono is better; for planets color/OSC is less hassle

- for white light solar, either get a full aperture Baader solar foil filter, ND3 version (photographic), for newtonians / maks (anything with a mirror), while for a refractor get a Herschel prism. Get a Baader Solar Continuum filter as well - here the formula for F/ratio is pixel_size * 3.7 (if you use the Baader Solar Continuum filter)

- for solar Ha - that is a whole new ballgame - get a solar telescope, and here the F/ratio is again different: F/ratio = pixel_size * 3
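Those three rules of thumb all come from F/ratio = 2 * pixel / lambda with the shortest wavelength of each band - a compact sketch (the 2.9um pixel is just an example):

```python
# ~400nm for broadband visible (factor x5), 540nm Solar Continuum (x3.7),
# 656nm solar Ha (x3) - each factor is 2 / wavelength_in_um.
BAND_FACTOR = {"visible (400nm)": 5.0,
               "solar continuum (540nm)": 3.7,
               "solar Ha (656nm)": 3.0}

pixel_um = 2.9
for band, k in BAND_FACTOR.items():
    print(f"{band}: target ~F/{pixel_um * k:.1f} for {pixel_um}um pixels")
# visible -> F/14.5; continuum -> F/10.7; Ha -> F/8.7
```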
  25. No idea what is going on there. Did you have a previous AutoPEC / PEC curve loaded? Maybe it just repeated the earlier corrections?