Everything posted by vlaiv

  1. Can't happen. At least not for amateur setups and the way we observe. There is something called the isoplanatic angle: https://en.wikipedia.org/wiki/Isoplanatic_patch It is very small - something like 20 arc seconds in diameter (though it depends on conditions and equipment used). In the conditions we observe in, different parts of the sky distort differently. Every isoplanatic patch has a different deformation, and you would need a laser for each one to measure its deformation. That would be many, many lasers. The second issue is that with a physical correction like a bending mirror you can, again, correct for only one of these patches - for the others you will increase the error by correcting for a different patch. There is simply no way to correct for the atmosphere over larger distances. For planetary imaging we can do this because we do the corrections after we have gathered all the data and examined it for statistics. We also have strong signal in all the points of interest (there is no way of determining distortion in an empty patch of sky or where SNR is low). In any case - it can't be done in real time, as you need to gather the data first, and a lot of data (thousands of frames for a good planetary image).
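As a rough back-of-the-envelope sketch of why "many, many lasers" is an understatement (the ~20 arc second patch and 1 degree field of view below are example numbers only):

```python
import math

# How many isoplanatic patches fit in a typical DSO field of view?
# Example numbers only - the actual patch size depends on seeing conditions.
patch_diameter_arcsec = 20.0   # assumed isoplanatic patch diameter
fov_deg = 1.0                  # assumed field of view across

fov_arcsec = fov_deg * 3600.0
patches_across = fov_arcsec / patch_diameter_arcsec
patch_count = math.pi / 4 * patches_across ** 2  # order-of-magnitude count of circular patches

print(f"~{patches_across:.0f} patches across the field")
print(f"~{patch_count:.0f} patches, each needing its own measurement and correction")
```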
  2. Ok, so first let's explain this. There are several noise sources in the image - regardless of whether the image is observed with the eye or captured with a sensor (in fact, the sensor adds just a few more noise sources, like dark noise, which is negligible for a cooled sensor, and read noise). The two main sources of noise present in the image are target shot noise and LP background shot noise. Both of these are the same in nature and in fact you can't tell them apart. There is no way of knowing if a photon landing on your sensor or your retina is from the target or the sky background. You can only do some filtering if the target signal is very specific compared to the background sky signal (like in Ha nebulae, or emission nebulae in general) - but shot noise remains nonetheless, for every source that comes in discrete packets.

Noise is "embedded" in the signal when it reaches the aperture - it is there before amplification of any kind. Gain or any other type of amplification amplifies whatever number of photons happens to arrive, and this number of photons already contains noise because of the way light works. Say that on average, in some small period of time - like 30ms, which is movie type exposure (so we get 30fps), or the "exposure" time of our eye/brain combination (we don't differentiate images displayed at 30fps and our brain blends them into smooth motion) - we get 50 photons. This is on average. Which means that in one integration period we will have 42 photons, in the next one 57, and so on - with the average over time getting closer to 50 photons. No matter what you do to 42 photons, you can't conclude that it is 50 - 8, or that 57 is actually 50 + 7. Only with enough measurements (integration time) can you start to get an idea of the real signal - but that happens only once you have reduced the noise enough.

In any case - amplify those 42 photons by 1000 times and you amplify both signal and noise by the same amount: 42000 is equal to 50000 - 8000, so the signal has been amplified by 1000 and the noise has been amplified by 1000, but their ratio remains the same. 50 / 8 is the same as 50000 / 8000 - no change there. So amplification by gain in a camera or by a night vision device won't change the SNR of the photon signal, and you don't get any sort of "background noise under control" from applied gain. The only way to reduce background noise - being LP shot noise - is to block the LP signal itself, and this is what the filter does. It does the same regardless of your use of a night vision device. It does it the same when observing (with or without night vision) and when imaging - the filter knows nothing of the device that sits behind it and it filters the light all the same.

Now, you say - but look at what you can see with a night vision device, and you also point out images that were taken by phone or camera at the eyepiece. I'm explaining that in the following way:

1. For visual - the difference is only in the strength of the applied signal, not in noise reduction or noise removal. When we observe regular light without amplification, it is dim; we can see it but it takes effort. This is because our brain kicks in without our knowledge and filters out the noisy part of the image - the signal that is too weak not to appear noisy. In fact, some of the signal is not noticed (although cells are triggered by photons) because of this filtering, and some of the signal is denoised by the brain. We never see the noise in dark images, but we do have several sensations that are the effect of what the brain is doing behind the scenes - we might, for example, see an object "pop in and out of view". 
The longer we observe, the more the object will be present (we learn to see it - or our brain has this need to keep our belief true; it is a psychology thing and happens in other areas, like making up events when we can't remember them exactly and being totally convinced it happened that way, and so on). There is a way to trick our brain into showing photon shot noise - I once managed to do it by accident. Take a Ha filter and look thru it at a bright source while being in a dim room. I looked at a bright day scene from a darkened room thru a Ha filter and noticed that the view thru the filter looks like there is "snow" (the effect of old TVs when reception is too weak). This is because there was enough light in the scene for the brain to turn off its noise reduction - but there was not enough light coming thru the Ha filter for the SNR to be high enough for the noise not to be seen - so I saw it.

2. How can video record the amplified image, and could a normal camera without night vision do the same - take single exposures and record video? This part has to do with the SNR of the image and has nothing to do with night vision. It has to do with the "speed" of the system - which we often think of in terms of F/ratio, but is actually aperture at resolution. The eyepiece and camera lens together act as a powerful focal reducer and the resulting resolution, or sampling rate, is enormous. Add to that the fact that a 16" telescope is being used - which is massive aperture - and you get enough signal to show the object in a short exposure with poor SNR.

To explain a bit more, let's take one of those images and analyze it. It is an object observed thru 16" of aperture, in real time or near real time - but unfortunately we don't know what size of sensor was used. For the sake of argument, let's say it is a 4/3 sensor. To get that sort of FOV from a 4/3 sensor we must be operating at approximately 600mm of focal length (a FOV simulation of the ASI1600 at 600mm shows this). Since the telescope is 400mm of aperture and we have say 600mm of FL, we are effectively at F/1.5. Furthermore, the ASI1600 has a pixel size of 3.8um but has 4600px across, and the image above has maybe only 460px across - that is like having x10 the pixel size. This produces enormous "speed" and could allow very short exposures to show the nebula - but that nebula:

1. Has very low resolution compared to normal astronomical images
2. Has itself low SNR compared to normal astronomical images.

However, you can achieve the same yourself with a regular astronomy camera if you do the following: use an eyepiece / camera lens combination to get a very large effective focal reduction, use a very large aperture scope, and bin the data in real time by some crazy factor. I've shown you above that a single exposure can have very high SNR by losing resolution. Here is another example which we can compare to the above: if I take just one uncalibrated sub of the Pacman that I made with an 8" telescope - sure, it is a long exposure sub and a fraction of a second exposure can't compare to it, but neither can it compare in level of detail and SNR. So I have x4 smaller aperture and a much longer exposure, but the SNR and the level of detail are much, much bigger.

This is the closest I can get without actually doing an EEVA live stream on a large telescope with the afocal method to show that you can view nebulae in real time - provided that you use very aggressive focal reduction in the form of the afocal method (eyepiece + camera lens) and bin data on the fly to reduce the sampling rate and improve SNR.
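A minimal numpy sketch of the point about gain (the 50 photons per ~30ms figure is the example from above; everything else is made up for illustration): gain multiplies signal and shot noise alike, so per-exposure SNR is unchanged, while accumulating more exposures does improve it.

```python
import numpy as np

rng = np.random.default_rng(0)

mean_photons = 50      # average photons per ~30ms "exposure", as in the example above
n_exposures = 10_000

# Photon counts already carry shot noise when they arrive at the aperture.
counts = rng.poisson(mean_photons, n_exposures)

def snr(x):
    return x.mean() / x.std()

print("SNR of raw counts:         ", snr(counts))           # ~ sqrt(50) ~ 7
print("SNR after x1000 gain:      ", snr(counts * 1000.0))   # identical - gain scales signal and noise alike
print("SNR of 100-exposure stacks:", snr(counts.reshape(100, 100).mean(axis=1)))  # ~ x10 better
```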
  3. EEVA is already doing that for us - no need for "fancy" equipment or night vision. The issue is that people simply don't understand resolution or SNR when imaging, and their relationship. Want to see an example taken with regular equipment that rivals and bests those images? This is a single sub of M51 - 60s of integration - and it shows signs of the tidal tail and no background noise. How is that possible? Well, for starters, it is the same size as the objects at that link when imaged thru an eyepiece. So we have traded resolution for SNR - not something people are willing to do (in fact they often over sample). In any case - try the afocal method of imaging and produce very small images and you will be surprised at what can be achieved in close to real time - no night vision devices needed.
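Here is a small sketch of that resolution-for-SNR trade (toy numbers, sky-limited data assumed): summing pixels in software, i.e. binning, improves per-pixel SNR by roughly the bin factor while reducing resolution by the same factor.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sky-limited frame: faint object signal on a bright LP background (values in electrons).
obj, sky = 5.0, 200.0
frame = rng.poisson(obj + sky, size=(1024, 1024)).astype(float) - sky  # background-subtracted

def snr(img):
    return img.mean() / img.std()

def bin_sum(img, k):
    h, w = img.shape
    return img[:h // k * k, :w // k * k].reshape(h // k, k, w // k, k).sum(axis=(1, 3))

print("per-pixel SNR, unbinned:", snr(frame))              # ~ obj / sqrt(obj + sky)
print("per-pixel SNR, 4x4 bin :", snr(bin_sum(frame, 4)))  # ~ x4 better, at x4 lower resolution
```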
  4. You can't improve SNR in that way. The SNR is the same or worse for visual. With visual observation, several things happen: the eye/brain combination filters out low level light. Even if we are able to detect light sources that are a few photons strong, we never see the associated shot noise. We never see a noisy image. This is because our brain actively filters things out. If we look for prolonged periods of time, we then can "detect" the object - because all that signal is accumulated enough for the brain to let us know something is there. When you amplify light enough, you will see it, but it will start to show something you've never seen in real life - and that is shot noise. Look at all the amplified footage - you can actually see shot noise and individual light flashes. You can't accumulate more light than those 30ish milliseconds that the brain integrates for. You can't make extended light sources appear brighter even when using optics. We always see equal (or less) surface brightness of objects, no matter how big or fast our telescope is. Our sensors when imaging are already working very close to the limit of what is possible. We have very high quantum efficiencies of close to 90% (no room for improvement beyond 100%, I'm afraid). We also have rather low read noise cameras at ~1e per exposure (this can be further improved, but even with 0e read noise we would still have shot noise - which is seen in images - that we can't do anything about).
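To put a number on how little headroom is left in quantum efficiency: shot-noise-limited SNR scales with the square root of the detected photons, so even a perfect detector barely helps (quick check below).

```python
import math

# Shot-noise-limited SNR scales with sqrt(detected photons), i.e. with sqrt(QE).
qe_now, qe_perfect = 0.90, 1.00
gain = math.sqrt(qe_perfect / qe_now)
print(f"Going from {qe_now:.0%} to {qe_perfect:.0%} QE improves SNR by only x{gain:.3f} (~5%)")
```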
  5. Night vision devices can't remove the quantum nature of light and can't amplify the signal without amplifying the noise. One can certainly image the night sky with 10ms exposures with regular gear (no need for night vision), but you run into two difficulties:

1. Read noise. Read noise must be sufficiently small compared to other noise sources not to cause too many issues. This works for planetary imaging because target shot noise per exposure is often greater than read noise, since planets are bright. Even then we select very low read noise cameras for planetary imaging. I would personally wait for an affordable, close to zero read noise sensor to attempt anything similar. There are sCMOS sensors with very low read noise, but I'm not sure we have sufficiently low read noise for sub 10ms DSO imaging.

2. Stacking. In order to stack images you need to have something to align the images to. These are usually stars, but with such short exposures you don't have enough SNR even in stars (except the brightest ones) to be able to tell where they are, and besides that, seeing ensures that the star image is not in its exact position. The same thing happens with planetary imaging - we use alignment points and the average of these disturbances is used to determine the actual part of the planet under the alignment point, but for stars it is more difficult as we also have mount motion to contend with. It takes about 2 seconds or even more for the star position to stabilize (that is why we use at least 2s guide exposures and why seeing is expressed as the FWHM of a 2 second star image).

There have been attempts to do DSO lucky imaging that involve 16"+ apertures and about 500ms exposures, but it is not really lucky imaging in the planetary sense - it is just a clever way to remove the poorest of the seeing conditions, and it still does not move the resolution boundary much after stacking. Large apertures are needed in order for subs to have enough stars to register for stacking in a half second exposure.
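A small sketch of difficulty 1 (the sky flux and read noise values below are assumptions, one hour of total integration): with 10ms subs the read noise, paid once per sub, swamps the sky shot noise that a few long subs would be limited by.

```python
import math

def stack_noise(total_time_s, sub_s, sky_e_per_s, read_e):
    """Noise in a summed stack: shot noise on the sky plus read noise once per sub."""
    n_subs = total_time_s / sub_s
    return math.sqrt(sky_e_per_s * total_time_s + n_subs * read_e ** 2)

total = 3600.0   # one hour of total integration
sky = 2.0        # assumed sky background flux, e-/pixel/s
read = 1.5       # assumed read noise, e- per read

for sub in (0.01, 0.5, 60.0):
    print(f"{sub:>6}s subs -> stack noise {stack_noise(total, sub, sky, read):6.0f} e-")
```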
  6. https://www.firstlightoptics.com/zwo-cameras/zwo-asi071mc-pro-usb-30-cooled-colour-camera.html Pixel sizes have decreased in recent years, so the above is one of the few with a sensible size. Most new cameras have below 4um pixel size and some even below 3um.
  7. @bosun21 Ok, there seems to be some confusion about the lanthanum elements used and that's probably my fault. This is what I've gathered so far: 1. FPL53 is not the best match for lanthanum glass. Better matches are, for example, fluorite glass like in the Tak, and FPL51 like in that fast doublet. 2. Lanthanum glass tends to create very high dispersion in the violet / UV part of the spectrum, but can probably also absorb some light in this part (not sure about that). 3. Lanthanum coatings are used to reduce the impact of high dispersion in violet and to control the color. Quite possibly 2 and 3 are often mixed up by marketing teams, or even used together. The 125mm F/7.8 doublet is thought to have the above combination of glass and to employ coatings to reduce the issues.
  8. They are not, but lanthanum glass is a material doped with lanthanum, which exhibits the same properties - absorption in the violet part of the spectrum. Read the whole quote.
  9. Here are two quotes from said page: that one relates to an ED doublet with lanthanum glass. On the matter of the lanthanoids used in the Tak 100DZ, this is said: This all emphasizes the point that proper mating of the elements is what is important, and that there are a lot of different glass types out there.
  10. After reading the above, I don't find this strange. Most achros suffer from contrast loss because of other wavelengths, while the green part of the spectrum (around 500-540nm) is very sharp in them. I remember having a very sharp view of the Moon with a 4" F/5 fast achro and a Continuum filter. Even spherochromatism is often corrected at that wavelength.
  11. If you read the text, you will find that lanthanum glass is used because it filters out the violet part of the spectrum and reduces CA that way. There are filters that can do that, if one wishes a CA free view. To be honest, I have an F/10 achromat and I prefer the unfiltered view of Jupiter; I personally don't find the CA distracting at all. I see it, it just does not bother me that much. Maybe because I owned an F/5 achromat and I'm aware how severe and degrading CA can really be - but it's not nearly that bad with an F/10 achromat and the view is still very sharp.
  12. For those who are a bit more interested in the performance of refractor telescopes and the impact of different types of glass, this is well worth reading: https://www.telescope-optics.net/commercial_telescopes.htm Even if you don't understand what's being said on the page, you will notice that things are far from the often heard claims that: - FPL53 is the best - FPL51 is good enough on slower optics. It turns out that the selection of mating glass types, and scope parameters in general, has a huge impact on what can be achieved. I was particularly surprised that an F/6 FPL51 scope can have a higher hypothetical Strehl in green than, say, an FPL53 F/6.6 scope of the same aperture - because of the selection (or rather the availability) of a suitable mating element. There is a section dedicated to the Tak 100DZ as well - so again, worth a read.
  13. Good point - a mount with encoders should have no such issues, and yep, the above clearly showed the issue with yours.
  14. That is really down to time keeping rather than anything else, and it is a sign of poor implementation in the microcontroller. You can use two approaches. You can say: perform n ticks in the next m seconds - where ticks are stepper micro steps or whatever. Or you can say: it is now exactly this much time, I should be on tick m but my counter says I'm on n - I need to perform m-n ticks to move the mount forward. The first approach accumulates error over time while the second does not, and that small error can turn into 30-40 degrees - which is really a couple of hours of time difference - built up from a fraction of a microsecond over many, many steps taken. A sketch of the two approaches follows below. In any case - it is not down to mechanical issues, nor is it important for mount performance when imaging (especially if guided). I'm more concerned with errors that are "real time" - say how much the mount deviates peak to peak over a few minutes - as those values determine how well the mount can be guided out.
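A minimal sketch of the two approaches (illustrative Python, not mount firmware; the step rate and do_step() are placeholders):

```python
import time

STEPS_PER_SECOND = 1000.0   # assumed tracking rate in microsteps per second

def do_step():
    pass                    # placeholder for one stepper microstep

def track_relative(duration_s):
    """Approach 1: step, then wait the nominal interval.
    Loop overhead and sleep jitter are never corrected, so the error accumulates."""
    interval = 1.0 / STEPS_PER_SECOND
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        do_step()
        time.sleep(interval)

def track_absolute(duration_s):
    """Approach 2: ask the clock which step we should be on and catch up.
    Per-step timing errors do not accumulate because the target comes from absolute time."""
    start = time.monotonic()
    steps_done = 0
    while time.monotonic() - start < duration_s:
        target = int((time.monotonic() - start) * STEPS_PER_SECOND)
        for _ in range(target - steps_done):
            do_step()
        steps_done = target
        time.sleep(0.001)
```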
  15. @Adreneline The above idea of yours is actually a much better way to assess tracking / periodic error, as it gives better resolution for that sort of purpose. Let's say we have a 50cm aluminum bar that is as wide as a regular Vixen dovetail (about 44mm) and say 10mm high, or something like that. It should not flex much, if at all. At 50cm we have x12 less resolution than at 6 meters, and from the above calculation 1 arc second is 0.03mm at 6 meters, so it will be 0.03 / 12 = 0.0025mm at 50cm. That is about 1/4 of what the instrument can read - but let's say we read every second, so we have movement of 15 arc seconds, and that is 0.0025 * 15 = 0.0375mm, which should be easily visible on the gauge. Since we have 10mm of total travel, that is about 266s of tracking - not bad at all.
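A quick check of that arithmetic (same assumed numbers; exact trigonometry lands close to the rounded figures above):

```python
import math

bar_length_mm = 500.0    # 50cm bar
sidereal_rate = 15.0     # arc seconds per second of time
gauge_travel_mm = 10.0   # typical dial indicator range

mm_per_arcsec = bar_length_mm * math.tan(math.radians(1 / 3600))
mm_per_second = mm_per_arcsec * sidereal_rate
print(f"1 arcsec at the gauge : {mm_per_arcsec:.4f} mm")   # ~0.0024 mm
print(f"1 second of tracking  : {mm_per_second:.4f} mm")   # ~0.036 mm
print(f"gauge covers about {gauge_travel_mm / mm_per_second:.0f} s of tracking")
```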
  16. That is also a neat idea. How did you rig everything up? Did you use some sort of lever, and if so, how long was it and how did you ensure against flex?
  17. It just occurred to me that I don't need to use less magnification - I can just project at a closer distance.
  18. And of course - it works. Tested with a green laser pointer and a SkyWatcher 8x50 finder (or whatever its magnification is - x7, x8 or x9, or somewhere in between).
  19. I was not sure if I should post this in the mount section or here. It has to do with assessing mount performance - how well it tracks - without the need for wasting clear skies or always having doubts about whether seeing is responsible for measured mount roughness (although with multi star guiding this is now much less of an issue). I personally came up with it out of the need to test how smoothly 3d printed reduction gears work combined with a stepper motor - to get an idea of positional accuracy, backlash and any sort of periodic or non periodic error in tracking.

The initial idea was to simply strap a laser pointer on top of the axis and monitor what the laser point projected on a white surface far away from the motor is doing (maybe use millimeter grid paper or something like that, or record with a camera and analyze the footage). However, as you will soon see, this does not really fit the "in house" criteria. One degree of arc is roughly 1 meter at 57.3 meters of distance. This further means that one minute of arc is 16.667mm at that distance and, of course, one arc second is 1/60th of that - which is ~0.3mm. Not really something you can easily measure - at least when a light dot projected on paper is in play. Let alone the fact that "in house" distances need to be at least x10 smaller and everything is scaled down x10 - so movement of one arc second would be 0.03mm. That is 30 microns of movement at 6 meters.

I played with all sorts of different ideas in my head of how we could amplify everything. Maybe using mirrors to create a larger distance by bouncing the light ray several times off the mirrors - but every imperfection in the mirror surface would be amplified as well, so we would need optical flats of high quality and a way to align them properly - too complex for a simple "in house" device. If only there was a way that we could amplify light angles easily. Well, that was the question I asked myself just before the light bulb moment (sometimes it's worth just asking the right kind of question). We have, and often use (at least when weather allows), devices that are great at amplifying light angles.

So here is what I came up with (I still need to test it, but I think it will work as I expect). Shine a laser thru the front objective of a telescope with an eyepiece at the other end in straight thru configuration (or even use a prism if we want a 90 degree bend for some reason). The telescope should do what a telescope does - it should amplify the angle of the incoming ray of collimated light (it needs to be focused at infinity - but we can easily tweak the focus to get the smallest dot on the projection screen, no need to "prefocus" on stars or anything like that). Depending on the focal length of the telescope and the eyepiece used, we can get significant angle amplification. Most things will happen near the optical axis so we don't need wide angle eyepieces - in fact we want as low distortion as possible. We can change eyepieces to change the magnification of the effect so we can measure different behavior - for backlash and positional accuracy we can use x200, for example, to get down to arc second resolution, but for tracking we need arc minute resolution, as sidereal rate is ~15 arc seconds per second - so we might want enough screen to capture a few minutes of tracking, and that would amount to, say, 200s x 15"/s = 3000 arc seconds - so we need less magnification for that. 
In any case - if we have 6 meters distance to the projection screen and use x200, the angle of the laser beam won't be 1 arc second but 200 arc seconds instead, so the deflection won't be 0.03mm but x200 larger - or 6mm - and that is easily measurable with millimeter grid paper. However, at that magnification we would need 3000 x 6mm = 18 meters of screen for a few minutes of tracking - clearly not a good idea, but we can drop the magnification for that purpose to say x20 or even less, depending on what we have at our disposal (maybe even use a finder that is x7 magnification for this purpose). We can even create a setup that amplifies just x2 - x3 by combining two eyepieces - one would have the "telescope" role and the other would be a regular eyepiece. A 32mm plossl and a 12mm plossl (or a 17mm one, I'm just listing the ones I have on me) could give interesting combinations. In fact, if I pair the 9-27 zoom with the 32mm plossl, I can get a range of magnifications for this. Anyway, all that is left to do is to try it out (I might just do that now as I have a laser and a finder on the desk with me). What do you think about the concept?
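Putting the numbers above into a small helper (illustrative only, simple small-angle geometry assumed):

```python
import math

def deflection_mm(angle_arcsec, magnification, screen_distance_m):
    """Spot displacement on the projection screen when the mount rotates by angle_arcsec
    and the telescope/finder multiplies the beam angle by its magnification."""
    amplified_deg = angle_arcsec * magnification / 3600.0
    return screen_distance_m * 1000.0 * math.tan(math.radians(amplified_deg))

# Examples from the text, 6m from scope to screen:
print(f"1 arcsec, bare laser     : {deflection_mm(1, 1, 6):.3f} mm")   # ~0.03 mm - unreadable
print(f"1 arcsec thru x200 scope : {deflection_mm(1, 200, 6):.1f} mm") # ~6 mm - easy on grid paper
print(f"1 min of sidereal (900 arcsec) thru x8 finder: {deflection_mm(900, 8, 6):.0f} mm")
```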
  20. Dark filters are useful for remote setups and setups used by different people with different requirements for exposure lengths. Instead of creating every dark imaginable, or creating a set of darks and limiting exposure lengths to only those with pre-generated darks, one can use a dark filter in the filter wheel. That way anyone using the remote telescope can take darks matching their particular exposure (and other settings). However, in amateur conditions it is much more sensible to take darks with the camera off the scope - in a basement or other dark room while it is cloudy outside. That lets you take a large number of darks (to minimize noise impact).
  21. I was going to suggest an alternate name for it: polyscopy, but then I realized it sounds way too much like a very nasty medical exam
  22. This might well be true. I've seen a significant drop in LP after, say, about midnight or 1am, when most human activity drops significantly. Less traffic, fewer lights from houses.
  23. https://www.firstlightoptics.com/adapters/astro-essentials-1-25-inch-t-mount-camera-nosepiece-adapter.html
  24. Yes, AutoStakkert!3 is the norm for stacking (there is even a version 4 coming out, but it is still in early beta, so for the time being stick with v3). Most people use either SharpCap or FireCapture for capturing. Use about 3-4 minute videos (use the SER format). Capture in raw / don't debayer at the moment of capture - the software will do that in a special way when stacking. Use about 5ms exposure, even if the image seems too dim - again, after stacking you'll process the image and it will look nice. You need something like 30000-40000 frames and then you can decide how much you want in the stack - usually around 5-10%, but that will depend on seeing conditions. Use Registax 6 or AstroSurface for wavelet processing (sharpening) of the image.
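For a feel of the numbers (the frame rate below is an assumption; actual fps depends on camera, ROI and USB speed):

```python
# Rough planetary capture arithmetic for the settings suggested above.
capture_minutes = 3.5     # 3-4 minute SER video
fps = 180                 # assumed frame rate at ~5ms exposure with a small ROI
stack_fraction = 0.08     # keep roughly 5-10% of frames, depending on seeing

frames = int(capture_minutes * 60 * fps)
print(f"captured frames: {frames}")                       # lands in the 30000-40000 ballpark
print(f"stacked frames : {int(frames * stack_fraction)}")
```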