Everything posted by vlaiv

  1. I've just seen the latest episode of PBS Space Time: https://www.youtube.com/watch?v=BU8Lg_R2DL0 which "explains" how many worlds elegantly gives rise to the Born rule. My main objection to this interpretation (which I otherwise think is extremely elegant) still stands. Just to sum it up: if we have, say, 1/3 vs 2/3 probabilities for the outcome of some event (two possible outcomes) - there will be three copies of the world: one copy with the first outcome and two copies with the second outcome. My objection is that this violates Occam's razor - if we have 0.0001% vs 99.9999% probabilities instead, we would need something like a million near-identical copies of the world to explain it. That is just a bunch of unnecessary copies, don't you think? But my objection goes deeper than this. I'm certain that we can prepare a photon polarized in such a way that the probability of it passing through a polarizing filter is sqrt(2)/2, for example (see the worked equation below). See the problem? No number of worlds can produce this probability, as it is an irrational number and can't be written as a ratio of two integers - so no matter how many copies you have, you can't reproduce the Born rule. This got me thinking - maybe this is a way to experimentally test many worlds? If it turns out we can't prepare a photon (or electron, or any other setup) with an irrational number as a probability - that might be a step towards verifying many worlds (however crazy it may seem with its huge number of identical copies).
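As a quick aside on that sqrt(2)/2 example - assuming the standard single-photon form of Malus's law (not spelled out in the post), a photon polarized at angle θ to the filter axis passes with probability cos²θ, so the required preparation angle is simply:

```latex
P = \cos^2\theta = \frac{\sqrt{2}}{2}
\;\Rightarrow\;
\theta = \arccos\!\left(2^{-1/4}\right) \approx 32.8^\circ
```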
  2. Try setting manual exposure and lower it significantly - to say 30ms or thereabouts (do try different values). If you have a delay timer or a way to remotely trigger the shutter, that would be great. When you trigger the camera manually (press the button to take the image) - you introduce shake into the setup, and that should really be avoided.
  3. Yep, that is what it sounds like to me - an upgrade from the AZGti. I'm sure it is much better in that role than the AZGti is.
  4. Sorry for going off topic, but it looks like my analysis based on max slew speed is spot on
  5. How do you find that mount? I'm a bit skeptical about it, but maybe my skepticism is unfounded? Here is what I base it on: 1. Only RA is a strain wave drive. DEC is a worm - supposedly without backlash - maybe a spring- or magnetically-loaded worm gear? 2. No mention of the reduction ratio of the RA gear. I can only assume that it is not very good. This is based on the 6 degrees per second slew speed, and the fact that a strain wave drive is used in such a small package. Based on the size of the mount and, say, module 0.5 gearing (which is very small) - an 80mm diameter wheel will have 160 teeth, and that is 160:1 reduction at best in the strain wave stage - which I'm guessing is the only reduction stage (no mention of belts for RA, and anything else would still introduce backlash, while it is claimed to be backlash free). On the other hand we have a slew speed of 6 degrees per second. That is 60s for a whole rotation, or 1 RPM. Steppers usually max out at about 300 RPM - so at best there is a 300:1 reduction (look at the max slew speed for, say, the EQ6, which is around x800 sidereal, or ~3.4 degrees per second. That translates to 0.566 RPM, or if we take the 705:1 total reduction - 0.566 * 705 = ~400 RPM max stepper speed). If it is truly 300:1 or even 400:1 reduction - that is way too low. That is ~0.5"/step at 32 microsteps, and with stepper positioning error I doubt you'll get guide RMS below 0.7" or so. A rough check of these numbers is sketched below.
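A quick back-of-the-envelope check of the figures above (a sketch only; the 200 steps/rev motor, 32 microsteps and the 300-400 RPM ceiling are my assumptions, not published specs for this mount):

```python
STEPS_PER_REV = 200   # assuming a common 1.8 degree stepper
MICROSTEPS = 32

def reduction_from_slew(slew_deg_per_s, motor_rpm):
    """Total reduction implied by a max slew speed and a max motor RPM."""
    mount_rpm = slew_deg_per_s * 60.0 / 360.0
    return motor_rpm / mount_rpm

def arcsec_per_microstep(reduction):
    """Sky motion per microstep for a given total reduction."""
    return 360.0 * 3600.0 / (STEPS_PER_REV * MICROSTEPS * reduction)

# 6 deg/s slew with a stepper topping out somewhere around 300-400 RPM
for rpm in (300, 400):
    red = reduction_from_slew(6.0, rpm)
    print(f"{rpm} RPM motor -> {red:.0f}:1 reduction, "
          f"{arcsec_per_microstep(red):.2f} arcsec per microstep")

# EQ6 sanity check: ~3.4 deg/s max slew, 705:1 total reduction
print(f"EQ6 implied motor speed: {3.4 * 60.0 / 360.0 * 705:.0f} RPM")
```

With those assumptions it lands at roughly 0.5-0.7 arcsec per microstep, which is where the guiding concern in the post comes from.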
  6. How flexible is the budget? €2000 sounds like a lot, but once you start listing things - it piles up rather fast. The HEQ5 is a very decent mount to start AP with. I'd add something like the 130PDS just for cost saving, although it is questionable whether one saves all that much over a small refractor, since you need to include a coma corrector. Here is my list (tallied in the snippet below): https://www.firstlightoptics.com/reflectors/skywatcher-explorer-130p-ds-ota.html £240 https://www.firstlightoptics.com/equatorial-astronomy-mounts/skywatcher-heq5-pro-synscan.html £1040 https://www.firstlightoptics.com/guide-cameras/zwo-mini-finder-guider-asi120mm-bundle.html £205 (at the moment) https://www.firstlightoptics.com/zwo-cameras/zwo-asi-533mc-pro-usb-30-cooled-colour-camera.html £770 (at the moment) https://www.firstlightoptics.com/coma-correctors/skywatcher-coma-corrector.html £170 Total so far: £2425 We are already well over the budget and I'm pretty sure that there will be some bits and bobs needed as well. This is also without a laptop/computer to control all of that. You can certainly save some cash by getting second hand gear. Two immediate cost savers are to go for a DSLR instead of a dedicated astro camera and to get an EQ5 mount, although I would not advise trying to save money on the mount - it is very important for a good imaging experience (and even the stock HEQ5 leaves a lot to be desired).
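A trivial tally of the list above (prices as quoted at the time of the post):

```python
prices = {
    "SW Explorer 130P-DS OTA": 240,
    "SW HEQ5 Pro SynScan": 1040,
    "ZWO mini guider + ASI120MM bundle": 205,
    "ZWO ASI533MC Pro": 770,
    "SW coma corrector": 170,
}
total = sum(prices.values())
print(f"Total: £{total} (£{total - 2000} over a nominal £2000 budget)")
```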
  7. I just figured out that I can do a rough polar alignment with a green laser. In fact, as long as the laser is properly centered and one uses some sort of visual aid (like a hand-held finder or binoculars) - it can be more than a rough polar alignment. This is all for star-tracker type mounts / wide field rigs - the AZGti and the star tracker that I'm 3D printing. So my question is: I want a simple way to mount things to my AZGti without modding it too much. I'd rather not drill holes into it - maybe just a few drops of CA glue to hold some sort of mounting system in place. It needs to be low profile and unobtrusive. I've seen people mount a Picatinny rail - probably because it is readily available to purchase and not that expensive. I'd rather 3D print some sort of attachment. So far, I've come up with a T-slot type of attachment - where 3D printed nuts can do the job fairly nicely (just insert, say, an M4 nut into a 3D printed T-nut shape). Does anyone have any other idea that might be better suited for the job? Keywords are: 3D printed, easy to use, rigid enough and unobtrusive
  8. Printed focusing mechanism for my wide field rig: Above is the Nema 11 case with stepper and pulley fitted to it. This is the GT2 timing belt that I printed from TPU for testing purposes (I did not have the appropriate length on hand, but will order one - btw, a printed GT2 belt is just fine for testing in such a low torque application). Here it is all assembled. That is a Samyang 85mm T1.5 (the cine version of the F/1.4 - no click stops for the aperture but otherwise the same). Here it is from the back (that is a GX12 connector for the stepper). The controller for the focuser will be based on the RPi Pico: I just need to do the PCB and print an enclosure for it.
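Since the controller isn't built yet, here is only a minimal sketch of the kind of STEP/DIR driving the Pico would do (assuming MicroPython and a common step/dir stepper driver board; the pin numbers, timing and the move() helper are illustrative assumptions, not the actual design):

```python
# Minimal MicroPython sketch for a STEP/DIR focuser driver on an RPi Pico.
# Pin assignments and speeds are illustrative, not the real PCB layout.
from machine import Pin
import time

STEP = Pin(2, Pin.OUT)              # hypothetical STEP pin
DIR = Pin(3, Pin.OUT)               # hypothetical DIR pin
ENABLE = Pin(4, Pin.OUT, value=1)   # many drivers use active-low enable

def move(steps, step_delay_us=800):
    """Move the focuser by a signed number of steps."""
    DIR.value(1 if steps >= 0 else 0)
    ENABLE.value(0)                  # enable driver (active-low assumed)
    for _ in range(abs(steps)):
        STEP.value(1)
        time.sleep_us(5)             # short high pulse
        STEP.value(0)
        time.sleep_us(step_delay_us) # sets the step rate
    ENABLE.value(1)                  # release the motor when done

move(400)    # e.g. nudge focus outwards
move(-400)   # and back
```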
  9. M33 - here is a quick processing in Gimp: I think the data is not bad at all.
  10. Can't happen. At least not for amateur setups and the way we observe. There is something called the isoplanatic angle: https://en.wikipedia.org/wiki/Isoplanatic_patch It is very small - something like 20 arc seconds in diameter (though it depends on conditions and equipment used). In the conditions we observe under, different parts of the sky distort differently. Every isoplanatic patch has a different deformation, and you would need a laser for each one to measure its deformation. That would be many, many lasers. The second issue is that with a physical correction like a bending mirror - again, you can correct for only one of these patches; for the others you will increase the error by correcting for a different patch. There is simply no way to correct for the atmosphere over larger angular distances. For planetary imaging we can do this because we apply the corrections after we have gathered all the data and examined its statistics. We also have a strong signal at all the points of interest (there is no way of determining the distortion in an empty patch of sky or where SNR is low). In any case - it can't be done in real time, as you need to gather the data first, and a lot of it (thousands of frames for a good planetary image).
  11. Ok, so first let's explain this. There are several noise sources in an image - regardless of whether the image is observed with the eye or captured with a sensor (in fact, a sensor adds just a few more noise sources, like dark current noise, which is negligible for a cooled sensor, and read noise). The two main noise sources present in the image are target shot noise and LP (light pollution) background shot noise. Both of these are the same in nature and in fact you can't tell them apart. There is no way of knowing whether a photon landing on your sensor or your retina came from the target or from the sky background. You can only do some filtering if the target signal is very specific compared to the background sky signal (like Ha nebulae, or emission nebulae in general) - but shot noise remains nonetheless, for every source that arrives in discrete packets. Noise is "embedded" in the signal when it reaches the aperture - it is there before amplification of any kind. Gain, or any other type of amplification, amplifies whatever number of photons happens to arrive, and this number of photons already contains noise because of the way light works. Say that on average, in some small period of time - like 30ms, which is a movie-type exposure (so we get ~30fps), or the "exposure" time of our eye/brain combination (we don't differentiate images displayed at 30fps and our brain blends them into smooth motion) - we get 50 photons. This is on average. Which means that in one integration period we will get 42 photons, in the next one 57, and so on - with the average over time getting closer to 50 photons. No matter what you do to 42 photons - you can't conclude that it is 50 - 8, or that 57 is actually 50 + 7. Only with enough measurements (integration time) can you start getting an idea of what the real signal is - but that happens only once you have reduced the noise enough. In any case - amplify those 42 photons 1000 times and you amplify both signal and noise by the same amount: 42000 is equal to 50000 - 8000, so the signal has been amplified by 1000 and the noise has been amplified by 1000, but their ratio remains the same. 50 / 8 is the same as 50000 / 8000 - no change there. So amplification by gain in a camera, or by a night vision device, won't change the SNR of the photon signal, and you don't get any sort of "background noise under control" from applied gain. The only way to reduce the background noise - the LP shot noise - is to block the LP signal itself, and this is what the filter does. It does this regardless of your use of a night vision device. It does it the same when observing (with or without night vision) and when imaging - the filter knows nothing of the device that sits behind it and it filters the light all the same. Now, you say - but look at what you can see with a night vision device, and you also point out images that were taken with a phone or camera at the eyepiece. I explain that in the following way: 1. For visual - the difference is only in the applied gain, or strength of the signal - not noise reduction or noise removal. When we observe regular light without amplification - it is dim; we can see it but it takes effort. This is because our brain kicks in without our knowledge and filters out the noisy part of the image - or signal that is too weak to stand out above the noise. In fact, some of the signal goes unnoticed (although cells are triggered by photons) because of this filtering, and some of the signal is denoised by the brain. We never see the noise in dark images, but we do have several sensations that are the effect of what the brain is doing behind the scenes - we might, for example, see an object "pop in and out of view".
The longer we observe, the more the object will be present (we learn to see it - or our brain has this need to keep our belief true; it is a psychology thing and happens in other areas, like making up events we can't remember exactly and being totally convinced they happened that way, and so on). There is a way to trick our brain into showing photon shot noise - I once managed it by accident. Take a Ha filter and look through it at a bright source while being in a dim room. I looked at a bright daytime scene from a darkened room through a Ha filter and noticed that the view through the filter looks like there is "snow" (the effect of old TVs when reception is too weak). This is because there was enough light in the scene for the brain to turn off its noise reduction - but there was not enough light coming through the Ha filter for the SNR to be high enough for the noise not to be seen - so I saw it. 2. How can video record an amplified image, and could a normal camera without night vision do the same - take single short exposures and record video? This part has to do with the SNR of the image and has nothing to do with night vision. It has to do with the "speed" of the system - which we often think of in terms of F/ratio, but which is actually aperture at a given resolution. The eyepiece and camera lens together act as a powerful focal reducer and the resulting resolution, or sampling rate, is enormous. Add to that the fact that a 16" telescope is being used - which is a massive aperture - and you get enough signal to show the object in a short exposure with poor SNR. To explain a bit more - let's take one of those images and analyze it: this is an object observed through 16" of aperture. It is observed in real time or near real time - but unfortunately we don't know what size the sensor is. For the sake of argument, let's say it is a 4/3 sensor. To get that sort of FOV from a 4/3 sensor we must be operating at roughly the equivalent of an ASI1600 at 600mm. Since the telescope is 400mm in aperture, if we have say 600mm of FL we are effectively at F/1.5. Furthermore, the ASI1600 has a pixel size of 3.8um and is 4600px across, while the image above is maybe only 460px across - that is like having x10 the pixel size. This produces enormous "speed" and allows very short exposures to show the nebula - but that nebula: 1. has very low resolution compared to normal astronomical images, and 2. itself has low SNR compared to normal astronomical images. However - you can achieve the same yourself with a regular astronomy camera if you do the following: use an eyepiece / camera lens combination to get very large effective focal reduction, use a very large aperture scope, and bin the data in real time by some crazy factor. I've shown above that a single exposure can have very high SNR by losing resolution. Here is another example, which we can compare to the above: if I take just one uncalibrated sub that I made with an 8" telescope of the Pacman nebula - sure, it is a long exposure sub and can't be compared to a fraction-of-a-second exposure, but neither can the level of detail and SNR - I get the above image. So I have x4 smaller aperture and much longer exposure, but the SNR is much, much better and the level of detail is much greater. This is the closest I can get, without actually doing an EEVA live stream on a large telescope with the afocal method, to showing that you can view nebulae in real time - provided you use very aggressive focal reduction in the form of the afocal method (eyepiece + camera lens) and bin the data on the fly to reduce the sampling rate and improve SNR. A small simulation of the gain argument is sketched below.
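Here is a small simulation of the gain point (a sketch with Poisson photon counts; the photon rates are illustrative, not measured values): amplifying the recorded counts scales signal and noise equally, so SNR does not move, while blocking the LP signal before detection does raise SNR.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000                      # number of 30 ms "exposures"
target, sky = 50, 200            # mean photons per exposure (illustrative)

def snr(frames, mean_signal):
    """Per-exposure SNR: mean target signal over total measured scatter."""
    return mean_signal / frames.std()

raw = rng.poisson(target + sky, N)               # target + LP background
amplified = raw * 1000                           # night-vision style gain
filtered = rng.poisson(target + sky * 0.05, N)   # LP mostly blocked by a filter

print("raw SNR:      ", snr(raw, target))
print("amplified SNR:", snr(amplified, target * 1000))   # identical to raw
print("filtered SNR: ", snr(filtered, target))           # noticeably higher
```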
  12. EEVA is already doing that for us - no need for "fancy" equipment or night vision. The issue is that people simply don't understand resolution or SNR when imaging, and their relationship. Want to see an example taken with regular equipment that rivals and bests those images? This is a single sub of M51 - 60s of integration - and it shows signs of the tidal tail and no background noise. How is that possible? Well, for starters - it is the same size as the objects at that link when imaged through an eyepiece. So we have traded resolution for SNR - not something people are willing to do (in fact, they often oversample instead). In any case - try the afocal method of imaging and produce very small images, and you will be surprised at what can be achieved in close to real time - no night vision devices needed. The resolution-for-SNR trade is sketched below.
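To put a number on that resolution-for-SNR trade (a sketch with illustrative electron fluxes): software binning k x k pixels improves per-pixel SNR by roughly a factor of k, at the cost of k times coarser sampling.

```python
import numpy as np

rng = np.random.default_rng(1)

signal, sky = 2.0, 20.0        # electrons per pixel per sub (illustrative)
size, k = 1024, 8              # frame size and bin factor

frame = rng.poisson(signal + sky, (size, size)).astype(float)

def per_pixel_snr(img, sig):
    return sig / img.std()

# software-bin k x k by summing blocks of pixels
binned = frame.reshape(size // k, k, size // k, k).sum(axis=(1, 3))

print("unbinned SNR per pixel:", per_pixel_snr(frame, signal))
print(f"{k}x{k} binned SNR per pixel:", per_pixel_snr(binned, signal * k * k))
```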
  13. You can't improve SNR in that way. SNR is the same or worse for visual. With visual observation, several things happen: The eye/brain combination filters out low-level light. Even though we are able to detect light sources that are a few photons strong - we never see the associated shot noise. We never see a noisy image. This is because our brain actively filters things out. If we look for prolonged periods of time - we then can "detect" the object, because enough signal accumulates for the brain to let us know something is there. When you amplify light enough - you will see it, but it will start to show something you've never seen in real life - shot noise. Look at any amplified footage - you can actually see the shot noise and individual light flashes. You can't accumulate more light than those 30-ish milliseconds that the brain integrates for. You can't make extended light sources appear brighter even when using optics. We always see equal (or lower) surface brightness of objects, no matter how big or fast our telescope is. Our sensors when imaging are already working very close to the limit of what is possible. We have very high quantum efficiencies of close to 90% (no room for improvement beyond 100%, I'm afraid). We also have rather low read noise cameras of ~1e per exposure (this can be further improved, but even with 0e read noise we would still have shot noise - which is seen in images and which we can't do anything about).
  14. Night vision devices can't remove the quantum nature of light and can't amplify the signal without amplifying the noise. One can certainly image the night sky with 10ms exposures with regular gear (no need for night vision), but you run into two difficulties: 1. Read noise. Read noise must be sufficiently small compared to the other noise sources not to cause too much of an issue. This works for planetary imaging because the target shot noise per exposure is often greater than the read noise, since planets are bright - and even then we select very low read noise cameras for planetary imaging. I would personally wait for an affordable, close-to-zero read noise sensor to attempt anything similar. There are sCMOS sensors with very low read noise, but I'm not sure we have sufficiently low read noise for sub-10ms DSO imaging (see the sketch below for how read noise scales with sub length). 2. Stacking. In order to stack images - you need something to align them to. These are usually stars, but with such short exposures you don't have enough SNR even in stars (except the brightest ones) to tell where they are, and besides that, seeing ensures that the star image is not in its exact position. The same thing happens with planetary imaging - we use alignment points and the average of these disturbances is used to determine the actual part of the planet under the alignment point, but for stars it is more difficult, as we also have mount motion to contend with. It takes about 2 seconds or more for the star position to stabilize (that is why we use at least 2s guide exposures and why seeing is expressed as the FWHM of a 2 second star image). There have been attempts at DSO lucky imaging involving 16"+ apertures and about 500ms exposures, but it is not really lucky imaging in the planetary sense - it is just a clever way to remove the poorest of the seeing conditions, and it still does not move the resolution boundary much after stacking. Large apertures are needed in order for the subs to have enough stars to register for stacking in a half-second exposure.
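The read noise point can be put in numbers with the usual stacking SNR equation (a sketch; the target and sky fluxes are illustrative, not from any particular setup): for the same total integration, very short subs pay the read noise penalty many more times.

```python
import math

def stack_snr(n_subs, exp_s, target_eps, sky_eps, read_noise_e, dark_eps=0.0):
    """SNR of a stack of n_subs exposures of exp_s seconds each,
    with fluxes given in electrons per second per pixel."""
    signal = n_subs * exp_s * target_eps
    noise = math.sqrt(n_subs * (exp_s * (target_eps + sky_eps + dark_eps)
                                + read_noise_e ** 2))
    return signal / noise

total = 3600   # one hour of total integration, faint target (illustrative)
for exp in (0.01, 0.5, 60):
    n = round(total / exp)
    print(f"{exp:>6}s subs: stacked SNR = "
          f"{stack_snr(n, exp, 0.2, 1.0, 1.5):.1f}")
```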
  15. https://www.firstlightoptics.com/zwo-cameras/zwo-asi071mc-pro-usb-30-cooled-colour-camera.html Pixel sizes have decreased in recent years, so the above is one of the few cameras left with a sensibly large pixel. Most new cameras are below 4um and some even below 3um pixel size.
  16. @bosun21 Ok, there seems to be some confusion about the lanthanum elements used, and that's probably my fault. This is what I've gathered so far: 1. FPL53 is not the best match for lanthanum glass. Better matches are, for example, fluorite glass like in the Tak, and FPL51 like in that fast doublet. 2. Lanthanum glass tends to create very high dispersion in the violet / UV part of the spectrum, but can probably also absorb some light in this region (not sure about that part). 3. Lanthanum coatings are used to reduce the impact of the high dispersion in violet and to control the color. Quite possibly 2 and 3 are often mixed up by marketing teams, or even used together. The 125mm F/7.8 doublet is thought to have the above combination of glass and to employ coatings to reduce the issues:
  17. They are not, but lanthanum glass is a material doped with lanthanum, which exhibits the same properties - absorption in the violet part of the spectrum - read the whole quote
  18. Here are two quotes from said page: That is related to the ED doublet with lanthanum glass. On the matter of the lanthanoids used in the Tak 100DZ, this is said: This all emphasizes the point that proper mating of the elements is what matters, and there are a lot of different glass types out there.
  19. After reading the above, I don't find this strange. Most achros suffer contrast loss because of the other wavelengths, while the green part of the spectrum (around 500-540nm) is very sharp in them. I remember having a very sharp view of the Moon with a 4" F/5 fast achro and a Continuum filter. Even spherochromatism is often corrected at that wavelength.
  20. If you read the text, you will find that lanthanum glass is used because it filters out the violet part of the spectrum and reduces CA that way. There are filters that can do that, if one wishes for a CA-free view. To be honest, I have an F/10 achromat and I prefer the unfiltered view of Jupiter; I personally don't find the CA distracting at all. I see it, it just does not bother me that much. Maybe because I owned an F/5 achromat and I'm aware of how severe and degrading CA can really be - it's nowhere near that bad with an F/10 achromat, and the view is still very sharp.
  21. For those who are a bit more interested in the performance of refractor telescopes and the impact of different glass types - this is well worth reading: https://www.telescope-optics.net/commercial_telescopes.htm Even if you don't understand everything said on the page - you will notice that things are far from the often-heard claims: - FPL53 is the best - FPL51 is good enough on slower optics It turns out that the selection of mating glass types, and scope parameters in general, has a huge impact on what can be achieved. I was particularly surprised that an F/6 FPL51 scope can have a higher hypothetical Strehl in green than, say, an FPL53 F/6.6 scope of the same aperture - because of the selection (or rather, availability) of a suitable mating element. There is a section dedicated to the Tak 100 DZ as well - so again, worth a read.
  22. Good point - a mount with encoders should have no such issues, and yep, the above clearly showed the issue with yours.
  23. That is really down to timekeeping rather than anything else, and it is a sign of poor implementation in the microcontroller. You can use two approaches. You can say: perform n ticks in the next m seconds - where ticks are stepper microsteps or whatever. Or you can say: it is now exactly such-and-such time - I should be on tick m but my counter says I'm on tick n - so I need to perform m-n ticks to move the mount forward. The first approach accumulates error over time, while the second does not, and that small error can turn into 30-40 degrees - a couple of hours' worth of time difference - built up from a tiny fraction of a second per step over many, many steps. In any case - it is not down to mechanical issues, nor is it important for the mount's imaging performance (especially if guided). I'm more concerned with errors that are "real time" - say, how much the mount deviates peak to peak over a few minutes - as those values determine how well the mount can be guided out. The difference between the two approaches is sketched below.
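In code, the difference between the two approaches looks roughly like this (a Python-flavoured sketch; the step() function and the tracking rate are placeholders, not any real firmware):

```python
import time

STEPS_PER_SECOND = 1000.0   # placeholder tracking rate (microsteps per second)

def step(n=1):
    """Placeholder for pulsing the stepper n microsteps."""
    pass

# Approach 1: "perform n ticks in the next m seconds".
# Each sleep overshoots a little (processing time, timer jitter) and nothing
# ever measures or corrects that, so the error accumulates over hours.
def track_incremental(duration_s=10.0, interval_s=0.1):
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        step(round(STEPS_PER_SECOND * interval_s))
        time.sleep(interval_s)

# Approach 2: compare against absolute elapsed time.
# Every pass catches up to where the mount should be *right now*,
# so timing errors never accumulate.
def track_absolute(duration_s=10.0):
    t0 = time.monotonic()
    done = 0
    while time.monotonic() - t0 < duration_s:
        should_be = int((time.monotonic() - t0) * STEPS_PER_SECOND)
        if should_be > done:
            step(should_be - done)
            done = should_be
        time.sleep(0.01)
```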