Everything posted by vlaiv

  1. Resolving power of a telescope grows with aperture size: a large aperture resolves more than a small one. For a given pixel size, a longer focal length gives higher sampling than a shorter one (more pixels per arc second). It turns out that these two grow together for a fixed F/ratio - so if a 6" scope reaches optimum sampling at F/15 for a given pixel size, so does an 8" scope, because the difference in resolving capability between 6" and 8" is the same as the difference in sampling rate between 2250mm and 3000mm of focal length.

     Optimum sampling rate for planetary imaging is the sampling rate at which all possible information is captured. Due to the nature of light, a telescope aperture can only capture so much detail (depending on aperture size). This is identified by the frequency cut-off point: if we analyze the signal produced by the telescope at the focal plane with a Fourier transform, we find that there is a certain frequency above which all higher frequencies are zero - there is simply no signal there. The Nyquist sampling theorem states that one should sample at twice the highest frequency in a band limited signal. Putting those two together gives the optimum / critical sampling rate.

     https://en.wikipedia.org/wiki/Spatial_cutoff_frequency contains the formula used to derive the one below, where f0 is the cut-off frequency, lambda is the wavelength of light and F# is the f-number (f-ratio) of the optical system. Combining it with the Nyquist theorem - two samples (two pixels) per period of the highest frequency (1/f0) - and rearranging for F# gives:

     F# = 2 * pixel_size / lambda

     Sampling is usually measured in arc seconds, but there is a simple relationship between pixel size in microns and arc seconds for a given focal length (this shows that the sampling rate in arc seconds is not fixed and grows with aperture since the F/ratio is fixed - larger scopes resolve more in terms of arc seconds, i.e. have higher angular resolution).

     There is a relationship between mount guide RMS and FWHM given by FWHM = 2.355 * RMS. A stock HEQ5 has a guide RMS of around 1". A modded / tuned HEQ5 has a guide RMS of 0.7-0.8", and even down to 0.5" depending on the mods applied (the same goes for the EQ6). Higher tier mounts can go as low as 0.2"-0.3" RMS (Mesu 200 for example).

     Average seeing is 2" FWHM, good seeing is 1.5" FWHM, and excellent seeing is 1" FWHM or below (below 1" happens only on mountain tops / deserts - special sites). You can always check the seeing forecast for your location (this is without local influence - actual seeing might be worse due to local heat sources): https://www.meteoblue.com/en/weather/outdoorsports/seeing/nottingham_united-kingdom_2641170 You will often find that seeing is below 2" on a cloudy day / night, but when it is night time and clear it is going to be around 2" (a stable atmosphere helps with cloud formation).

     In the end, here is a rule of thumb related to aperture size: with <100mm scopes, limit yourself to widefield imaging as you'll be limited to about 2"/px; 100mm-150mm - upper sampling limit of about 1.8"/px; 150mm-200mm - 1.5"/px; above 200mm - 1.3-1.4"/px. One can attempt 1"/px only on the best of nights using a very good mount (0.5" RMS or less). In all reality, odds are you won't get an image with detail below 1"/px with 99.99% of conditions / equipment.
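For anyone who wants to plug in their own numbers, here is a minimal Python sketch of the relations above (the example values are purely illustrative, not measurements):

```python
def critical_f_ratio(pixel_size_um, wavelength_um=0.5):
    """F# at which a sensor with the given pixel size critically samples
    the telescope's cut-off frequency: F# = 2 * pixel_size / lambda."""
    return 2 * pixel_size_um / wavelength_um

def fwhm_from_rms(guide_rms_arcsec):
    """Gaussian FWHM corresponding to a given guiding RMS: FWHM = 2.355 * RMS."""
    return 2.355 * guide_rms_arcsec

def pixel_scale_arcsec(pixel_size_um, focal_length_mm):
    """Image scale in arc seconds per pixel for a given pixel size and focal length."""
    return 206.265 * pixel_size_um / focal_length_mm

print(critical_f_ratio(3.75, 0.5))    # F/15 for 3.75um pixels at 500nm
print(fwhm_from_rms(1.0))             # ~2.36" FWHM for ~1" RMS guiding
print(pixel_scale_arcsec(3.75, 750))  # ~1.03"/px
```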
  2. Math / physics: for planetary imaging, everything related to sampling can be expressed in one simple formula:

     F/ratio = pixel_size * 2 / wavelength_of_light

     where the wavelength of light is in the same units as the pixel size and represents the shortest wavelength one is interested in recording. It can be 400nm for color imaging, or perhaps 656nm for H-alpha imaging (solar Ha or lunar with a Ha filter), or maybe 540nm for white light with a Baader solar continuum filter - you get the idea.

     For deep sky imaging there are two basic formulae. The first gives the target pixel scale - about 1.6 pixels across the expected FWHM:

     sampling_rate = expected_FWHM / 1.6

     and the second:

     expected_FWHM = sqrt(seeing_fwhm^2 + tracking_fwhm^2 + airy_disk_gaussian_approximation_fwhm^2)

     Or - you don't need to calculate the expected FWHM - you can measure it from your existing subs. The only tricky part is knowing the seeing FWHM for a given location (or estimating the average / min / max from existing subs).

     Sampling coarser than this is ok - you can choose to under sample (without any ill effects) if you want to get a wide field image. Sampling finer than this is going to hurt SNR and should be avoided. Use of binning can help - it effectively increases pixel size by the bin factor (bin x2 gives a x2 larger pixel and so on).
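A minimal Python sketch of those two deep sky formulae, assuming a Gaussian approximation of the Airy disk with FWHM of roughly 1.02 * lambda / D (the numbers below are example values, not measurements):

```python
import math

def expected_fwhm(seeing_fwhm, guide_rms, aperture_mm, wavelength_nm=550):
    """Combine seeing, tracking and optics into one expected star FWHM (arcsec).

    All three contributions are treated as Gaussians and added in quadrature.
    The Airy disk is approximated by a Gaussian with FWHM ~ 1.02 * lambda / D,
    converted from radians to arc seconds - that approximation is an assumption
    of this sketch."""
    tracking_fwhm = 2.355 * guide_rms
    airy_fwhm = 1.02 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3) * 206265
    return math.sqrt(seeing_fwhm**2 + tracking_fwhm**2 + airy_fwhm**2)

def target_pixel_scale(expected_fwhm_arcsec):
    """Pixel scale giving ~1.6 pixels across the expected FWHM."""
    return expected_fwhm_arcsec / 1.6

fwhm = expected_fwhm(seeing_fwhm=2.0, guide_rms=0.7, aperture_mm=150)
print(fwhm, target_pixel_scale(fwhm))  # roughly 2.7" FWHM -> ~1.7"/px
```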
  3. No. No. I'd say that nowadays we have a much more accurate way of matching a camera to a telescope with respect to resolution, both for planetary and long exposure imaging.
  4. The only difference between, say, 1h of total exposure made of 10 minute subs (that is 6 x 10 minutes) and 3600 x 1s subs is in read noise. All other noise sources accumulate with time, and so does signal (take two 1 second subs and add them - you will get the same amount of signal as one 2 second sub). The only thing that does not add with time is read noise: it is added once per sub, regardless of sub duration. A stack of 6 subs gets 6 doses of read noise, and one consisting of 3600 subs gets 3600 doses (in fact, noise does not add like ordinary numbers - it adds like linearly independent vectors, as the square root of the sum of squares).

     This means that with a very low read noise camera one can get a good image even with very short subs. The only drawback of using short subs is the ability to properly stack them. You need enough stars to perform alignment of subs, and with very short subs we run into an SNR issue - the signal in stars is not very big compared to the noise and we have difficulty determining true star positions. Sometimes stars are so faint that software simply can't detect them. This is the reason why most lucky type DSO imaging is performed with very large aperture scopes - they gather enough light for stars to show nicely even in short exposures.

     As far as resolution increase goes - "lucky imaging" as a term makes sense for planetary imaging. With DSO it is just borrowing that term from planetary, but in reality it is far, far less effective. With planetary imaging we actually freeze the seeing, and while the image ends up being distorted, it does not suffer from motion blur (the image shimmers on the order of a dozen or so milliseconds - any longer exposure will be blurred by this motion). Lucky DSO exposures are far too long to do this, and the best we can do is simply select subs with better seeing FWHM than most. It is no different from discarding poor subs, which is regularly done in normal DSO imaging when something bad happens to a sub - like a gust of wind or a spell of very poor seeing that blows up the stars.

     The advantage of short exposures is that you can still use "part" of a long sub. Imagine that you image with subs that are 5 minutes long and in one such sub there is a gust of wind that shakes the scope enough to ruin stars and image. That shake usually lasts 10-20 seconds, but the whole 5 minutes of that sub will be discarded as sub standard. With short exposures you get to keep the 4 minutes and 40 seconds that were fine and just discard the 10 or so short subs that were actually ruined by wind. The same goes for sudden spells of worse seeing - if they are short, they will affect the whole long sub, but with the above technique we get fine grained control over what data we keep and what we throw away.
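A quick sketch of the read noise difference in Python, using a hypothetical camera with 1.5e of read noise (the numbers are only illustrative):

```python
import math

def stacked_read_noise(read_noise_e, n_subs):
    """Total read noise in a stack: one dose of read noise per sub, added in quadrature."""
    return read_noise_e * math.sqrt(n_subs)

# One hour of total exposure either way:
print(stacked_read_noise(1.5, 6))     # 6 x 10 min subs -> ~3.7e
print(stacked_read_noise(1.5, 3600))  # 3600 x 1 s subs -> ~90e
```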
  5. Whenever you bin, effective read noise is multiplied by the bin factor. Say your camera has 1e of read noise at full resolution at a certain gain: bin x2 will result in 2e of effective read noise, bin x3 will result in 3e of effective read noise, and so on. This is for CMOS sensors, where binning is performed after readout. For CCD sensors, where binning is performed as part of readout, read noise remains the same.
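A small sketch of why the factor is exactly the bin factor - bin_factor^2 pixels are summed and their read noises add in quadrature (this assumes ideal software binning):

```python
import math

def cmos_binned_read_noise(read_noise_e, bin_factor):
    """Software (post-readout) binning sums bin_factor^2 pixels, so their read
    noises add in quadrature: sqrt(bin_factor^2) = bin_factor."""
    return read_noise_e * math.sqrt(bin_factor ** 2)

def ccd_binned_read_noise(read_noise_e, bin_factor):
    """Hardware binning combines charge before readout - one read per super-pixel,
    so read noise is unchanged."""
    return read_noise_e

print(cmos_binned_read_noise(1.0, 2))  # 2.0e
print(cmos_binned_read_noise(1.0, 3))  # 3.0e
print(ccd_binned_read_noise(1.0, 2))   # 1.0e
```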
  6. I think the main issue is lack of back focus on that 50mm finder. The focal plane on it is roughly where the black tube ends. You can check that by unscrewing the simple eyepiece on the finder end - you will find that it has a cross hair where the focal plane is (so it is in focus when you look through the finder). Info online about Sky-Watcher finders says it has 180mm of focal length. Now if you measure the focuser you have on the Tasco scope, you will find that focuser + diagonal take up about that much optical path, give or take (the diagonal alone is something like 60mm). In other words - what you thought of doing probably won't work.

     What might work would be to get one of these:

     https://www.aliexpress.com/item/4000923397029.html?spm=a2g0o.store_pc_allProduct.8148356.19.58015d4a4bsY1f&pdp_npi=2%40dis!EUR!€ 44%2C60!€ 23%2C64!!!!!%40210318c916678562655543276e8a99!10000011178628951!sh

     and a 90 degree prism with a very short optical path. When you combine those two, you might get a short enough optical path to fit behind the 50mm lens and use that as an RFT.

     The alternative is simply to DIY a small scope. Get a 50mm lens from AliExpress - this might be a good option:

     https://www.aliexpress.com/item/4000933374862.html?spm=a2g0o.store_pc_groupList.8148356.23.531b2be1TQQwvq&pdp_npi=2%40dis!EUR!€ 15%2C24!€ 12%2C95!!!!!%400b0a050116678564622867697e415c!10000011286156697!sh

     That is a 50mm aperture, 360mm FL lens. You'll need some aluminum (or PVC) tubing. You can 3D print the lens cell and any adapters needed. Add that helical focuser and a diagonal, and you have a nice scope (with less CA than the finder).
  7. Someone wanted to know the impact of noise vs sampling rate. You can never perform a proper comparison with real captures as there are so many variables that can get in the way. Who is going to guarantee that you'll get the same number of usable subs in both captures, and if you use the same number of subs, who is going to guarantee that the blur will be the same? Seeing is random - it changes by the minute. In the above simulation, all the variables are kept the same except for sampling rate, and people can see, for their style of processing (deconvolution / wavelets), under given conditions (10% good frames, 200fps, 3 minute capture, excellent seeing, optimum vs x1.5 over sampling), which gives them better results. I think it is useful for many, as most people won't bother to do a comparison under the night sky - they want to get actual capturing done. This way they can test their processing against different conditions without wasting a good night of imaging on a comparison.
  8. I'd like to point out that that is not my opinion - it is a fact based on the physics of light. It's not like I've chosen that to be the limit to my liking. It is over sampling, regardless of what we consider it to be.

     Loss of SNR is always an issue. It does not matter if you have a large or small aperture: per pixel photon flux is the same at critical sampling for a small aperture as for a large aperture. This is because F/ratio is tied to pixel size, and if you increase the aperture and keep the F/ratio the same, you also increase the focal length and equally decrease light per pixel - so larger aperture but less light per pixel = constant photon flux per pixel.

     The fact that you increase gain does not mean that more photons arrive. SNR is related to the number of captured photons per exposure. If you expose for 2ms at high gain and you get the same numerical value as exposing for 5ms at lower gain, that does not mean the SNR is the same. In 2ms you'll get x2.5 fewer photons than in a 5ms exposure, regardless of the gain setting. It is the number of photons that dictates signal. Gain is only important for read noise - higher gain means less read noise.

     F/ratio is fixed for critical / optimum sampling. Why? Because, as you put it, a 1m telescope at F/7 will give you the same sampling rate as, say, 333mm at F/21 - but 1m has 3 times more resolving power than a 333mm aperture scope and can thus utilize a 3 times higher sampling rate. If you double the aperture, you can double the sampling rate, which means double the focal length for the same pixel size: 2 x aperture / 2 x focal length = aperture / focal length = constant F/ratio.
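A toy Python sketch of the constant per-pixel flux argument, in relative units rather than real photon counts (F/14.5 and 3.75um pixels are just example values):

```python
def relative_photons_per_pixel(aperture_mm, focal_length_mm, pixel_size_um):
    """Relative per-pixel photon rate: light gathered scales with aperture area,
    image scale shrinks with focal length, so per-pixel flux goes as
    (pixel_size / F_ratio)^2 - i.e. it depends only on pixel size and F/ratio."""
    f_ratio = focal_length_mm / aperture_mm
    return (pixel_size_um / f_ratio) ** 2

# Same pixel size, same F/ratio, different apertures -> identical per-pixel flux:
print(relative_photons_per_pixel(333, 333 * 14.5, 3.75))
print(relative_photons_per_pixel(1000, 1000 * 14.5, 3.75))
```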
  9. Third time's a charm, as they say. Here are pngs, as these can be opened by Registax:
  10. For some reason Registax won't open the above images properly, so here are 16bit tiffs: optimum.tif oversampled.tif
  11. Here are test simulation files that compare F/14.5 to F/21.75 on a C14 under "regular" conditions:
      - 200fps / 3 minute capture - a total of 36000 subs captured
      - stacked top 10% of subs, so 3600 subs stacked for each
      - peak signal strength for optimum sampling is 200e (for oversampling it is 2.25 times less, as light is spread over a 1.5 x 1.5 = 2.25 times larger surface)
      - Poisson noise was used to model shot noise
      - read noise was modeled with Gaussian noise of 0.8e per stacked sub
      - each image was first blurred with the telescope PSF (the same one used for both, adjusted for sampling rate)
      - each image was additionally blurred with a Gaussian blur (again adjusted for the different sampling rate, so the same in arc seconds) to simulate seeing effects

      optimum.fits oversampled.fits

      Images are normalized in intensity and saved as 32bit fits. The green channel of the above HST Jupiter image was used as the baseline (I'm hoping that the 8bit nature of the baseline image won't have much effect given all the processing that went into making the simulation, as it was all done on a 32bit version of the data).
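For reference, here is a rough Python sketch of that kind of simulation - not the exact code used to make the posted files; the x1.5 resampling of the oversampled version is only noted in comments and the PSF/seeing blur is approximated by a single Gaussian:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng()

def simulate_stack(baseline, peak_e, n_stacked, blur_sigma_px, read_noise_e=0.8):
    """Blur an ideal image, then average n_stacked noisy subs.

    Shot noise is Poisson, read noise is Gaussian, both applied per sub. In the
    real simulation the oversampled version would also be resampled x1.5 and its
    blur widths scaled accordingly, so the blur stays the same in arc seconds."""
    ideal = gaussian_filter(baseline / baseline.max() * peak_e, blur_sigma_px)
    stack = np.zeros_like(ideal)
    for _ in range(n_stacked):
        stack += rng.poisson(ideal) + rng.normal(0.0, read_noise_e, ideal.shape)
    return stack / n_stacked

# baseline = ...  # e.g. green channel of a reference Jupiter image, as a float array
# optimum     = simulate_stack(baseline, peak_e=200,        n_stacked=3600, blur_sigma_px=1.0)
# oversampled = simulate_stack(baseline, peak_e=200 / 2.25, n_stacked=3600, blur_sigma_px=1.5)
```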
  12. I don't think so. Ultimately it depends on the sharpening method used, and recently I've become aware that the simplest method might be the most effective one - and no one is using it (there is no implementation in software). That would be FFT division. In any case, what happens to the noise and signal is best viewed in the frequency domain. I'll make some graphs to try to explain what is going on.

      This is the signal in the frequency domain - or "pure image". The X axis represents frequency: closer to zero means coarser detail, further from zero means finer detail. The telescope has an MTF that looks like this:

      Seeing is very hard to model, and I honestly have no idea how I would draw the curve. I know how to draw the seeing curve without lucky imaging - that is easy, it is modeled by a Gaussian shape - but selection of best frames and use of alignment points "straighten up" things somewhat, so they change the curve. In general, for this example, we can ignore it.

      The above two combine by multiplication in the frequency domain. That means the resulting graph will be high where both are high, and low if either of the two is low. An image captured with the telescope will thus have a graph looking like this:

      Noise has a more or less uniform distribution across the frequency domain, and it looks like this:

      In the end the two add to produce a frequency response that looks a bit like this: at some point the signal just drops to the level of the noise and then it stays there.

      Sharpening is effectively "lifting" this sloped curve to be like the original - but while we are lifting it, we are also lifting the added noise. When we lift the part where noise is larger than signal, we will just get noise at those frequencies and not signal.

      Now two things play a part in how the combined image will look when we have noise - first is sampling rate and second is level of noise, or SNR. When over sampled - like in the above image - the following happens. Red is signal, blue is noise and green is combined. When you over sample, you have signal in only part of the frequency spectrum, but noise is present all over the spectrum. This means that when you "straighten up" the curve, you end up straightening a part that is pure noise - there is no signal there. Depending on the algorithm used - for example wavelets - this is the case of the "finest" slider, or the first one. If you over sample and you don't touch that first slider, you won't boost that noise, and there is not much difference between a properly sampled and an over sampled image; but if you try to sharpen up that "finest" detail that is not there, you'll just boost the noise without doing anything to the signal.

      This shows that in some cases over sampling can cause even more noise when sharpening than a properly sampled signal, as there is a high frequency component of noise that is not matched by a high frequency component of signal.

      We could actually do some nice experiments. I can prepare some data that is: a) properly sampled, b) over sampled, and I could also add different amounts of noise to each, so we can determine how each type of sharpening affects the results (we could all download the data and try wavelets or other types of sharpening).
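Since FFT division has no ready-made implementation, here is a minimal sketch of the idea in Python. The small eps regularization is my own addition, not part of the method described above - it simply keeps frequencies beyond the cut-off (where only noise remains) from being amplified to infinity:

```python
import numpy as np

def fft_division_sharpen(image, psf, eps=1e-3):
    """Deconvolve 'image' by 'psf' via division in the frequency domain.

    'psf' is assumed to be a small normalized kernel (sums to 1); it is padded
    to the image size and rolled so its centre sits at pixel (0, 0) before the FFT."""
    kernel = np.zeros_like(image, dtype=float)
    kernel[:psf.shape[0], :psf.shape[1]] = psf
    kernel = np.roll(kernel, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    H = np.fft.rfft2(kernel)   # blur transfer function (MTF with phase)
    I = np.fft.rfft2(image)
    # Regularized division: where |H| -> 0 (beyond the cut-off) only noise is left,
    # so eps prevents those frequencies from blowing up.
    return np.fft.irfft2(I * np.conj(H) / (np.abs(H) ** 2 + eps), s=image.shape)
```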
  13. Your logic is not wrong - if you want to capture all the detail that is potentially available, it is better to over sample than to under sample. I just don't see why you would aim for F/22 or F/24 if you can sample optimally at F/14.5 given your pixel size. If you use a barlow, then you can "dial in" magnification by changing the barlow to sensor distance. Maybe the best approach is to just give it a try one way and the other and choose what you feel gives you the best results. Most others use that sort of approach and don't really care that their images are over sampled, if it is easier for them to work that way.
  14. There is no detail loss due to over sampling - at least not an explicit one. You are quite right - it is far more difficult to model detail loss due to SNR loss and the inability to sharpen as much as one would like. I don't think there will actually be detail loss - only more noise at the same level of detail, as no detail is really lost if one over samples; it just can't be shown without showing the noise as well.
  15. Yes, but we are exploring what sort of detail can be recorded at a given sampling rate. We assume that both instruments can produce the level of detail that requires the given sampling rate (if not, then it is indeed over sampling).
  16. The left image is the original; the right image has been scaled down to 50% of the original (thus "losing detail") and then enlarged back to 100%. In the next pair, the left image is the original and the right has been reduced to 25% (thus losing detail) and then enlarged back to 100%. When reduced to 50% and scaled back up there is no difference, but when we reduce it to 25% (1/4 of the sampling rate) we can start to notice softening, so some detail has been lost.

      Same thing for the HST image: left at full resolution, right scaled down to 50% and scaled back up to 100% (we can see some softening at this stage - feature edges are not as sharp in the right image). Then 100% vs 25% -> 100%: the softening is more than obvious now.
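The scale-down / scale-up test is easy to reproduce at home; here is a small Pillow sketch (the file names are placeholders):

```python
from PIL import Image

def degrade(path_in, path_out, factor):
    """Downscale by 'factor' then upscale back to the original size, discarding
    any detail finer than the reduced sampling rate can hold."""
    img = Image.open(path_in)
    small = img.resize((img.width // factor, img.height // factor), Image.LANCZOS)
    small.resize(img.size, Image.LANCZOS).save(path_out)

degrade("jupiter_full.png", "jupiter_half.png", 2)     # 50% -> 100%
degrade("jupiter_full.png", "jupiter_quarter.png", 4)  # 25% -> 100%
```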
  17. Why do you think it is misleading? I just wanted to show that there is indeed over sampling. But ok, let me do another set of images that will show what you want - loss of detail due to different sampling.
  18. Very simple. Don't think of the comparison image as being an HST image - think of it as the most detail that can be recorded at a certain sampling rate. The fact that I used HST makes sure of that (as it is capable of recording at even higher sampling rates). I wanted to show what an image at a certain sampling rate looks like when all the detail / frequencies are present. It looks sharp, as we would expect. We can then compare that image with images produced by amateur equipment to see if images with amateur equipment also capture all the detail that is possible at that sampling rate. It turns out that they don't - which means that the given sampling rate is not adequate for amateur equipment. People are over sampling. That is the point of the comparison and of using the HST image (just to ensure we are not instrument limited at the given resolution).
  19. It is an HST image that I just googled as a high resolution Jupiter image. It is much larger (and still sharp) than the one I used for comparison. Here is the original link: https://stsci-opo.org/STScI-01FM5S4SWQQ9M5A15BHBMG0NW4.png
  20. I don't think you need to look at the math at all - just look at the first two images and you will clearly see the difference. Left is the level of detail captured by over sampling, right is the level of detail that is possible at that resolution. When you properly sample, you capture the level of detail that is possible at the given resolution. I'm failing to see how their experience challenges the math / physics. Is their image sharp at a higher resolution than is supported by the physics of light? No, or at least I'm failing to see it.
  21. Here is a comparison that should be clear enough. First is the part of the image by Christopher Go that you pointed to. Second is a screenshot showing what sort of image can be recorded at that scale if all the frequencies of the image are utilized. You can clearly see the level of detail present in the bottom image that is simply missing in the top image. This points to the fact that the above image by Christopher Go simply does not contain detail for the presented resolution.

      We can then go on to analyze the frequency response of the two images and see what happens when we remove high frequency components from the bottom image. Here are again two crops side by side (256 x 256 for easy FFT) and their respective frequency responses. The left frequency response is concentrated in the center - nothing to the sides - while the right one utilizes the whole frequency space for this image. We can now remove high frequency components from both images and restore the images from their FFT representations to see what happens. Now we have just the very center (low frequency) part of both images - and the left image is almost the same / unchanged, but the right one is very different. It is missing that high fidelity and now looks as blurred as the left image. (The vertical and horizontal banding is an artifact of the sharp cutoff - I just set higher frequencies to 0, but in nature there is a gradual drop towards zero, so we don't see those vertical / horizontal ripples.)

      In the end, we can clearly see where the telescope produced frequency cut-off point is in the left FFT: ImageJ reports that this is at 4.14 pixels per cycle. Nyquist sampling is at 2 pixels per cycle, so this is 4.14 / 2 = x2.07 over sampled. I can't say whether the above image by Christopher Go would be worse if sampled at the proper resolution, but what I can tell you is that it is over sampled, and that can be clearly seen both in the image softness and in the frequency domain, as demonstrated. Not only is it over sampled by a factor of x2 - it looks like, due to seeing and processing, the actual sampling rate is closer to x4 lower than what was used (the majority of the signal is concentrated in that central 1/4 circle). If we reduce it to 25% of its size and compare it to the much sharper reference at the same size, we get a comparable detail level and sharpness (except for color saturation - one could conclude that the same instrument / conditions were used for these two images).
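For those without ImageJ, here is a rough sketch of how one could estimate that cut-off from the power spectrum in Python. The noise floor threshold is a simplification of my own (real spectra fall off gradually rather than hitting a hard edge), so treat the result as an estimate:

```python
import numpy as np

def cutoff_pixels_per_cycle(img, noise_floor_factor=3.0):
    """Estimate the finest period (pixels per cycle) at which the azimuthally
    averaged spectrum still exceeds a noise floor taken from its outer radii."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))
    cy, cx = np.array(spectrum.shape) // 2
    y, x = np.indices(spectrum.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    counts = np.bincount(r.ravel())
    profile = np.bincount(r.ravel(), spectrum.ravel()) / np.maximum(counts, 1)
    noise_floor = profile[int(r.max() * 0.9):].mean()       # outer radii ~ pure noise
    above = np.where(profile > noise_floor_factor * noise_floor)[0]
    f_max = above.max()                                      # cycles per image width
    return img.shape[1] / f_max                              # pixels per cycle

# crop = ...  # 256 x 256 grayscale crop as a float array
# print(cutoff_pixels_per_cycle(crop))  # ~4.14 would mean ~x2 over sampled
```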
  22. Oversampling won't produce less detail / a worse image as far as detail is concerned. The issue with oversampling is as follows: as you spread the light more than you need to, you get worse SNR per sub and you need to increase either the number of subs or the sub length in order to compensate. The problem with either of those is that we are doing lucky imaging:

      - longer subs mean that one won't freeze the seeing, and only people with excellent seeing can afford to increase sub duration to compensate
      - more subs / longer imaging runs - again, here we are at a loss. SNR improves only with the square root of the number of stacked subs (to double the SNR you need four times as many), yet the number of usable subs grows only linearly with time (there will be a certain percentage of good subs regardless of whether you image for 4 minutes or 8 minutes).

      In any case, people with excellent skies won't feel these issues as much as the rest, but oversampling leads to poorer SNR in the final image (which in turn means less room for sharpening).

      As to why @astroavani gets better results with over sampling versus regular sampling - I can't say, but I can point out that capture is only one variable before we get to the final result. There is stacking and sharpening to consider, and it might be due to either of those two that regular sampling presents poorer results. In any case, if an image is over sampled, that is easily identified in the image: high frequency components will be missing, and there is simply no cheating the physics - one cannot capture what is not there.
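A toy illustration of the square root vs linear argument (the per-sub SNR of 1.0 and the 10% keep rate are just example values):

```python
import math

def stack_snr(snr_per_sub, capture_minutes, fps=200, keep_fraction=0.10):
    """SNR of the stack: the number of kept subs grows linearly with capture time,
    but stack SNR grows only with its square root."""
    n_kept = int(capture_minutes * 60 * fps * keep_fraction)
    return snr_per_sub * math.sqrt(n_kept)

print(stack_snr(1.0, 4))  # ~69
print(stack_snr(1.0, 8))  # ~98 - twice the time buys only ~1.41x the SNR
```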
  23. Hi and welcome to SGL. What you have is a rather decent kit. The only thing that might be worth looking to replace or check out further is the diagonal. That should be a diagonal mirror / prism, and these come in several versions. One distinction is 90 degrees vs 45 degrees (there are a few other versions as well - like 60 degrees and such - but they are far less common). 90 degrees can be either mirror or prism and is preferred for astronomical observation. 45 degrees is usually reserved for day time / terrestrial viewing - in the form of a spotter scope - and is usually a prism of lower quality (there are 45 degree prisms of astronomical quality, but these are really expensive). What confuses me is the term "dielectric", which is usually associated with mirrors rather than prisms, and there are (as far as I know) no 45 degree mirror diagonals available (I'm not even sure if it is possible to make one). Planetary views might be improved with a better diagonal if this one is not very good.

      Having said that, observing is a skill that is learned with practice. You need practice / experience for the best planetary views, but also for the best DSO views. Some things are not even associated with observing but still need experience and learning how to deal with them.

      To optimize planetary views you should:
      - wait for the telescope to thermally stabilize (depending on the size of the telescope and the temperature difference between indoors and outdoors, this can be anywhere from half an hour to a few hours). In your case, I'd wait at least half an hour before attempting high power views.
      - pay attention to your surroundings - large buildings / roads / lakes - anything that can soak up heat during the day and release it during the night can ruin your views, as it causes turbulence in the air.
      - check the forecast for the jet stream and avoid it when observing planets. In general, you need to learn to judge how good the seeing is on a particular night in order to know what sort of results you can expect. Twinkling stars and a transparent sky are not the best for planetary views; haze in the air and still air (no star twinkling) are much better, and so on.
      - don't get fully dark adapted - you'll get better planetary views if you are not fully dark adapted (unlike DSO observing).
      - be patient at the eyepiece - wait for the moment when the atmosphere calms down and the image suddenly becomes sharper.
      - good focus is a skill - don't be afraid to adjust focus often, to confirm that it is as good as can be. That scope should have a dual speed focuser - use it for fine adjustments, and do a bit of in / out focus to check where you are at. When seeing is particularly poor you'll have a tough time finding the perfect spot, and that is normal - anyone will feel the same as the image is just dancing around.
      - if you wear glasses, try observing both with and without them. For simple correction the telescope focus can compensate, but if you have astigmatism or other aberrations, then you'll get a sharper image of planets with glasses on.
      - don't push magnification too high. With that scope, keep it in the x100 - x150 range (magnification is calculated by dividing the telescope focal length by the eyepiece focal length). Sometimes the better image (although smaller) will be at lower magnification.

      With DSOs, again, it is about skill (practice) and patience:
      - get dark adapted
      - get dark adapted more
      - get dark adapted some more
      - transparent / windy nights will help you see more
      - shield yourself from any light sources even if you are in low light pollution
      - use lower to medium magnifications to start (x20-x50) - this will help you both locate objects and detect them. Use higher magnifications only if the target calls for them (usually planetary nebulae and sometimes globular clusters).

      72mm of aperture does not sound like much (compared to other scopes), but it can show you plenty, especially from a dark location.
  24. Yep - good finder properly aligned is the key (some people use eyepiece trick, or even flip mirror).
  25. I'm afraid not. I've taken on too many DIY projects (finishing the enclosure for the 3D printer and modding it) and on top of that we are finishing the top floor of the house (floors, doors, electrical appliances), so I'm all over the place at the moment and most side projects have been put on hold for the last few weeks. Hopefully I'll have more time by the end of November to get it going again.