Everything posted by vlaiv

  1. I don't think it is a duplicate of the star in any shape or form ("disturbed extrafocal image") - as it stays in the center when the star drifts.
  2. Depends on what you want to achieve. Most people don't have an issue with being over sampled. I personally don't like it, and to me it is better to have a sharp image when viewed at 100% zoom. If you don't mind a slightly soft appearance when viewed at 100% - you don't need to do anything in particular. If you want a sharp looking image when viewed at 100% - then you have several options. You can debayer with interpolation and then bin x2 the resulting image after stacking. This will produce a sampling rate that is adequate for the image, but it won't produce the expected SNR improvement. For the expected SNR improvement to happen - you need statistically independent data (like in a mono camera - where each frame is as is - not debayered). With debayering, you interpolate missing values and you create statistical dependence between pixels to some degree. It is a bit like stacking the same sub several times and expecting to see improvement - you won't see it, you need every sub to be a new exposure - you can't just copy one sub 10 times and stack that. Anyway - you can do it like that - bin x2 with some SNR improvement (but not the x2 SNR improvement you would expect from mono data), or you can use split debayer - which will only reduce the sampling rate, but won't perform any interpolation (see the sketch below). If you decide to use mono + filters - then binning is certainly the best option as it will work "as advertised".
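For illustration, here is a minimal split debayer sketch in Python/NumPy (my assumptions: an RGGB Bayer pattern - check your camera's actual pattern - and a made-up frame size):

```python
import numpy as np

def split_debayer(raw):
    """Split a Bayer mosaic into four half-resolution planes
    (R, G1, G2, B for an RGGB pattern) with no interpolation."""
    r  = raw[0::2, 0::2]   # red photosites
    g1 = raw[0::2, 1::2]   # first set of green photosites
    g2 = raw[1::2, 0::2]   # second set of green photosites
    b  = raw[1::2, 1::2]   # blue photosites
    return r, g1, g2, b

# Example on a fake 12-bit frame: each plane keeps only real, measured
# pixel values, so the pixels stay statistically independent.
raw = np.random.randint(0, 4096, size=(3008, 3008), dtype=np.uint16)
r, g1, g2, b = split_debayer(raw)
print(r.shape)   # (1504, 1504) - half the sampling rate, no interpolation
```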
  3. Super determinism really surfaces as soon as you try to understand entanglement in the context of special relativity. In special relativity, simultaneity is no longer a valid assumption and depends on the observer. In fact - the order of events is not guaranteed to be the same for every observer. There is no problem with that, because causality is limited by the speed of light. Two events that can be ordered in reverse - depending on reference frame - can't be causally connected, because they are far enough apart that influence can't make it across. Enter entanglement! Our general notion about it is that the measurement conducted by Alice determines the state of the system of entangled particles and "collapses" the state of Bob's particle as well. This happens "instantaneously". See the problem there? SR does not know the term "instantaneously" any more, as that depends on the observer. There is a reference frame that observes Bob making the measurement first, and his measurement causes "collapse" of the particle that is near Alice. Who actually made the measurement first? According to SR - it should not matter. This is why Einstein objected to this so much and called it spooky action at a distance that was "forbidden". A simple way out of it would be - it does not matter who made the measurement first - since Bell's inequality rules out hidden variables - we are left with A causes B AND B causes A - and that can only be if there is C that causes both A and B. Both measurements are determined by a prior event - super determinism.
  4. First, don't get me wrong - I'm not saying that the scope is poor in any way. You'll be hard pressed to find a well corrected scope that can illuminate a 43mm diameter. However, such a small aperture, coupled with usual imaging conditions - seeing FWHM 2" and a lightweight mount that usually guides around 1" RMS - has a limit on how much it can resolve, even for diffraction limited optics. In such conditions - 50mm of aperture can resolve to about 2.4"/px (~3.82" FWHM stars). This is with diffraction limited optics. When you correct over such a large field - sharpness of the optics goes down and as a result FWHM goes up. Here - for this scope, in linear units, the diameter of the Airy disk is 6.6um (see here: https://www.astropix.com/html/astrophotography/astrophotography-calculators.html#ldad). However, if you look at the spot diagram for the RedCat 51 V2 - you see that even on the optical axis, Geo Radius (which is half of the diameter) is ~7.4um - or a diameter of 14.8um. That is ~x2.3 larger. The scope acts as if the aperture is x2.3 smaller as far as "sharpness" is concerned (or about 22mm of aperture). With the above conditions - 2" FWHM seeing, 1" RMS guiding - this will produce about 6" FWHM stars, and the needed sampling rate is closer to 4"/px rather than 3"/px (you get similar results if you take the 3um RMS radius and compare it to the 0.95" RMS radius for a 51mm aperture - which at 250mm translates to ~1.15um, and again 3/1.15 = ~2.6 larger). In any case - this shows that 3"/px is over sampled for this lens in regular conditions - the same conclusion we came to by examining the MTF graph. Again - this is not the fault of the scope / lens, but rather a problem with small pixels.
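A rough back-of-envelope version of that calculation (my assumptions: the components add in quadrature, Airy disk FWHM ~ 1.02 * lambda / D, and sampling rate ~ FWHM / 1.6 as a rule of thumb):

```python
import math

def airy_fwhm(aperture_mm, wavelength_nm=500):
    """Approximate FWHM of the Airy pattern in arc seconds."""
    radians = 1.02 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3)
    return math.degrees(radians) * 3600

def star_fwhm(seeing_fwhm, guide_rms, aperture_mm):
    """Combine seeing, guiding and aperture contributions in quadrature."""
    guide_fwhm = 2.355 * guide_rms   # Gaussian RMS -> FWHM
    return math.sqrt(seeing_fwhm**2 + guide_fwhm**2
                     + airy_fwhm(aperture_mm)**2)

for aperture in (50, 22):            # true aperture vs "effective" 22mm
    fwhm = star_fwhm(2.0, 1.0, aperture)
    print(f"{aperture}mm: {fwhm:.2f}\" FWHM -> ~{fwhm / 1.6:.1f}\"/px")
# 50mm: ~3.7" FWHM -> ~2.3"/px; 22mm: ~5.7" FWHM -> ~3.6"/px,
# close to the ~3.8" / 2.4"/px and ~6" / 4"/px figures above
```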
  5. Have you seen the specs for the V2 RedCat 51? MTF below 90% for 30lpmm. That is line pairs per mm. 30 of them. Per 1000um - so one line pair is 33.33um, or a line width of 16.66um. And you think that a 3.76um pixel will be under sampled? That is a ~x4.5 smaller pixel size than you need to get below 90% contrast. The Samyang 135 F/2 has better MTF fully open over the size of the 533MC (on its MTF chart, the grey line is 30lpmm).
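The unit conversion behind those numbers, as a quick sanity check:

```python
lp_per_mm = 30                       # line pairs per mm from the MTF chart
line_pair_um = 1000 / lp_per_mm      # 33.33 um per line pair
line_width_um = line_pair_um / 2     # 16.66 um per single line
pixel_um = 3.76
print(line_width_um / pixel_um)      # ~4.4, i.e. the ~x4.5 quoted above
```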
  6. I don't see it like that. In fact - there are several examples to the contrary. 1) We now have 16bit CMOS cameras that still have adjustable gain. If the only reason for adjustable gain was to overcome bit depth - then there would be no need for it on a 16bit camera. 2) Some CCD cameras don't need 16 bits at all - they have FWC that can be fully satisfied with a lower bit count. For example the ICX825 has 24K FWC, so 15 bits is more than enough to record the electron count. The Sony ICX814/5 has only 9000, so even 13 bits would be enough for that (with a bit of a loss) - and 14 bits is more than enough. Yet, they all have 16bit ADCs. 3) There are CCD models with more than 64K full well capacity - like the KAF-09000 or KAF-16803, both with 100K+ FWC - yet they don't have 17bit ADCs to fully exploit that, but 16bit ADCs. I think that adjustable gain has more to do with electronics and how the ADC is implemented. With a CMOS sensor - since each pixel has an ADC unit associated with it, and because of the advancement of the semiconductor manufacturing process (think CPUs and such) - it is easy to make circuitry for adjustable gain.
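A quick way to check those bit counts (assuming unity gain, 1e-/ADU, so the ADC has to count every electron):

```python
import math

for sensor, fwc in [("ICX825", 24_000), ("ICX814/5", 9_000),
                    ("KAF-16803", 100_000)]:
    bits = math.ceil(math.log2(fwc))   # bits needed to count FWC electrons
    print(f"{sensor}: {fwc} e- FWC needs {bits} bits")
# ICX825: 15 bits, ICX814/5: 14 bits, KAF-16803: 17 bits
```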
  7. The ideal F/ratio for the ASI224 is about F/15 - this means a x2.5 barlow. You don't need to spend any money - your x2 barlow can operate as x2.5 or anything in between (and possibly in the range of x1.5 up to x3 or more). The magnification that a barlow lens provides depends on where it is placed with respect to the focal plane - which in this case translates into barlow - sensor distance. The larger this distance - the larger the magnification factor. All you need to do is tune this distance to get the exact magnification you want. This is easy if you have a few extension tubes - or best, a variable extension - but you can also play with how deep you insert the camera nose piece into the barlow body. Determining the optimum distance is quite easy and can be done during daytime. Aim the telescope at a distant object - like a church tower or bridge or tall building. Something that you can measure in pixels. Take an image of this object without the barlow and measure some length on it (height or bridge span or whatever). Now, insert the barlow with the camera in it, and adjust the distance between the barlow element and the sensor - measuring the same length on the object - until you get x2.5 larger size compared to the reference (shot without barlow).
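For the distance / magnification relation, a hedged sketch (thin-lens approximation; the -100mm barlow focal length is just an illustrative number, not the spec of any particular barlow):

```python
def barlow_magnification(distance_mm, barlow_fl_mm):
    """Thin-lens approximation: M = 1 + d / |f| for a negative lens
    at distance d from the focal plane (i.e. the sensor)."""
    return 1 + distance_mm / abs(barlow_fl_mm)

def distance_for(target_mag, barlow_fl_mm):
    """Invert the relation to find the required lens-sensor distance."""
    return (target_mag - 1) * abs(barlow_fl_mm)

print(barlow_magnification(100, -100))   # 2.0 - nominal x2 spacing
print(distance_for(2.5, -100))           # 150.0 mm of spacing for x2.5
```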
  8. That really depends on target and conditions. In most cases 8bit capture is sufficient. Some targets, like lunar for example (or solar), don't change at great speed, so you can capture for a longer time. Jupiter rotates, and depending on aperture size and resolved detail - you will be limited in capture duration (unless you derotate your video - but then it is a question of how derotation affects final quality). If you have a limited window to shoot in - faster FPS is better, as you want to capture as many frames as possible. On a bright target, depending on the gain setting you choose - and we often choose high gain because read noise is lower - you run a risk of over exposing. Then you have two options - lower the exposure further or use a higher bit count. The former won't really bring any benefit if you are limited by the FPS you can achieve (not much point in shooting 2ms exposures if you can't record 500fps and 5ms is the seeing limit), but the latter - using 12bit mode - will then be beneficial. Bottom line - on most targets, go with 8bit, and use the higher bit mode only if there is a real reason for it. By the way - using ROI can increase the fps that you can achieve - which is good for small targets like planets.
  9. You are probably quite right about me missing the point of your post. I don't really see what all the fuss is about when selecting suitable gain - one just needs to take a quick look at gain vs different settings and select a sensible value. Nothing to fret about really. In fact - selecting the lowest gain setting is pretty much like using a CCD sensor, so that is always an option.
  10. Actually no. With CMOS sensors and adjustable gain - you have a choice of read noise and the possibility to avoid as much quantization error as possible. Say you have a couple of sensors - one 12bit, one 14bit and one 16bit. With adjustable gain - you can make all of them work at unity gain - one electron = one ADU (or pixel value is the number of electrons). In that case - higher bit count cameras have the advantage that they can record a higher range of signal values in a single exposure. 12bit will be limited to 4095 electrons, 14bit will be limited to 16383 electrons and 16bit will be limited to 65535 electrons (hypothetically, as we don't consider offset). This might seem like a serious advantage - but it is not. There will always be stars that have a higher brightness ratio than any of these numbers - so there will always be "blown" cores of some of the stars, and that is dealt with in a different manner - by using short exposures for over exposed areas. On the other side - a camera with lower read noise will have an advantage over one with high read noise as far as bit depth goes - so that is the metric that should be looked at more than bit count. With lower read noise - one can use shorter exposures, and with shorter exposures - one can have more of them for the same total integration time, and we have seen that stacking raises total bit count - the more subs you stack, the more total bits you end up with. Say you compare the ASI1600 with 12 bits and some CCD camera with a 16bit ADC. Say that this CCD has high read noise - maybe 13.6e (I've chosen this number for ease of calculation). The ASI1600 has 1.7e of read noise. In order to swamp read noise with sky glow - you need to expose much longer with the CCD - 13.6/1.7 = 8, and since noise adds as variance, your sub needs to be x64 longer! This means that you'll have x64 fewer subs in the final stack if you expose for the same total time. 64 is 2^6, so x64 more subs is an additional 6 bits of depth, and it turns the 12bit camera into an 18bit camera versus the 16bit camera. Due to read noise - the 12bit camera produces a higher bit depth image in the end (all other things being equal). The arithmetic is sketched below.
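Here is that arithmetic spelled out (assumptions as above: sub length scales with read noise variance for the same swamp ratio, and stacking N subs adds about log2(N) bits):

```python
import math

rn_ccd, rn_cmos = 13.6, 1.7               # read noise in electrons
exposure_ratio = (rn_ccd / rn_cmos) ** 2  # noise adds as variance -> x64
extra_bits = math.log2(exposure_ratio)    # x64 more CMOS subs -> +6 bits
print(f"CCD subs must be x{exposure_ratio:.0f} longer; "
      f"12 + {extra_bits:.0f} = {12 + extra_bits:.0f} bit effective depth")
```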
  11. Very little if any advantage in higher bit count. With stacking, we certainly improve the bit depth of the image (stacking 2 subs adds 1 bit, 4 subs - 2 bits, 8 subs - 3 bits and so on). The resulting image often has many more bits of depth than the camera itself.
  12. Planetary imaging is really a game of SNR vs speed tradeoff. You need to use very short exposures in order to freeze the seeing - to be able to select the best subs. You need to keep a single exposure at about 5ms or less (regardless of what people often say about the histogram and such - that is not a good way to set exposure length). Individual subs will be very noisy at those exposure times, and you really don't want a higher F/ratio than you need, as that further lowers SNR per exposure. 150mm is enough aperture for some good planetary images - some have been captured with 130mm of aperture, and some with only 100mm.
  13. That really depends on pixel size. There is no difference in choice of F/ratio between average seeing conditions and the best possible seeing conditions. You choose F/ratio to match pixel size for "perfect" conditions. The point of the lucky imaging technique is to try to capture those short moments of "perfect" seeing, so you want the sampling rate to be matched to what the scope is capable of, without worrying about the seeing part. In any case - here is the formula: F/ratio = pixel_size * 2 / wavelength_of_light (here pixel size and wavelength of light are in the same units, and for regular imaging 500nm is often used, so the wavelength is 0.5um). For regular imaging (not narrow band filters) the above simplifies to F/ratio = pixel_size * 4. If you have a camera like the ASI178 with 2.4um pixel size, then the optimum F/ratio is 2.4 * 4 = F/9.6. For 3.75um (ASI224, ASI120 and alike) you get 3.75 * 4 = F/15. For F/20 you would need 5um pixel size, and I'm not sure if I know a camera with that pixel size to be honest (the closest that I've heard of is 5.2um).
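That formula directly in Python:

```python
def optimum_f_ratio(pixel_um, wavelength_um=0.5):
    """Critical sampling: F/ratio = 2 * pixel size / wavelength."""
    return 2 * pixel_um / wavelength_um

for pixel in (2.4, 3.75, 5.0):
    print(f"{pixel}um pixel -> F/{optimum_f_ratio(pixel):.1f}")
# 2.4um -> F/9.6, 3.75um -> F/15.0, 5.0um -> F/20.0
```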
  14. Maybe try some of the free apps that tell you where you should place polaris in your polar scope - like this one: https://apps.apple.com/us/app/polar-scope-align/id970157965
  15. There are usually two types of errors that appear when tracking. One is related to the actual tracking rate - how well the speed of the tracker matches earth's rotation. The other is related to how well you polar aligned. The first one causes drift in RA exclusively, the second one causes drift in DEC exclusively. In your image you appear to have drift in both RA and DEC - which means that you have a mixture of the two. DEC drift is handled easily - just make sure you are well polar aligned. RA drift can be caused by a number of reasons. 1) Selecting the wrong tracking rate. Mounts track at different tracking rates - Sidereal, Lunar, Solar. For these kinds of images - you need to select the Sidereal tracking rate - the other two will cause trailing. 2) The mount simply lags or leads the earth's actual motion - this is generally almost never the case. While it is 100% guaranteed to happen - the effects are only visible after long periods of time, like a whole day, so it is never a problem during a single exposure; but it can sometimes happen that something is wrong with the unit and this error is much larger than it should be. 3) There is periodic error of the mount. This is related to the mechanical build of the mount. Circles are never perfectly round and machined parts are never 100% true to their specification - there is always some tolerance. This causes some variation in tracking speed - sometimes the mount trails and sometimes it leads compared to the earth. This is called periodic error and it causes either small elongation of stars or streaks - depending on how big this error is. This error is repetitive / cyclic in nature (much like a wave / ripple - sometimes it is higher than the average surface, sometimes it is lower - but overall there is an average surface height) and there are a few ways to deal with it: - shorten your exposure length; shorter exposures will create less streaking and fewer subs will be affected - use PEC, or periodic error correction, if available for your mount (it is usually present in more expensive / computer controlled units; not sure if a star tracker such as the SA has this capability) - guide; this is the primary reason why people guide their mounts - to eliminate this error.
  16. ST120 - SkyWatcher StarTravel 120mm F/5 short achromatic refractor ...
  17. I think that Meteoblue is actually rather accurate. In fact - your case seems to confirm this rather than show that it is "way too optimistic". I'll explain. In the final FWHM of the image - seeing is only one component. There are two additional components - one is mount performance, which you mentioned, and the other is aperture size and the spot diagram of your optics. The guide RMS figures that you have are indeed low (I'd be surprised if it were otherwise with a Mesu), but those figures are in RMS, not FWHM "units". There is a conversion rate that is approximately 2.355 (for a Gaussian profile). 0.2-0.3" RMS is in fact 0.47-0.7" FWHM. The FWHM of the diffraction limited Airy disk of an 8" aperture is about 0.56" - and smaller apertures only have higher figures (4" will have 1.12" FWHM from the Airy disk). When you put those figures together - 1-2" FWHM from seeing, 0.5-0.7" FWHM from guiding and, depending on your aperture, 0.5"-1" FWHM from the Airy disk - and you account for any deviation from diffraction limited optics (most of the time, any coma corrector or field flattener and/or focal reducer reduces sharpness of the optics from diffraction limited in order to obtain a larger flat field and good correction at the edges of the field) - then you see that it is perfectly sensible and normal for total FWHM to be 2-3". In the end, I'd like to point out that Meteoblue can only offer values for atmospheric seeing - and not local seeing effects, like tube currents or local thermals - which all add up to give you even higher FWHM. In my view, if you have 2-3" FWHM - you are doing quite ok for a 1-2" seeing forecast (given all that goes into the final FWHM number).
  18. I'm sort of aiming for something very basic - partly because it is much easier to start with / build, and also because it will be less expensive and accessible to more people. The other reason is - I'm simply doing something like this for the first time; I just got my 3d printer and I've started using CAD software. I'm hoping to develop a very simple "plug / play" kind of device. Just polar align, aim your camera / smart phone and hit a single "go" button as far as tracking goes. No guiding / no slewing / no connection to a computer and such in this iteration. Hardware also needs to be cheap and easily accessible - a few types of common bearings, screws and nuts and of course a stepper + controller (probably RPI Pico) + driver. Hope to keep total cost south of $40-$50 USD. On a separate note, I'm having second thoughts on the cycloidal gear system and am instead considering a split gear compound planetary type system. The first one would still need a belt and pulley for additional reduction, while the second can easily hit 500:1 in a single straightforward assembly (not that I'll need that much for 1"/microstep - see the numbers below). The gear system can also be fully 3d printed and there is no need for small metal pins. The only concern currently is tracking accuracy - but we will see how it behaves.
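A back-of-envelope check of the tracking resolution (the motor and microstepping values here are hypothetical - a common 200-step stepper at 1/16 microstepping - not the final design):

```python
ARCSEC_PER_REV = 360 * 3600   # 1,296,000 arc seconds per full turn

def arcsec_per_microstep(motor_steps, microsteps, reduction):
    """Sky motion per microstep for a given total gear reduction."""
    return ARCSEC_PER_REV / (motor_steps * microsteps * reduction)

print(arcsec_per_microstep(200, 16, 500))   # ~0.81"/microstep at 500:1
```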
  19. But we are not going to use a simple DSLR style camera on that beast, are we? https://www.qhyccd.com/qhy6060/ A bit expensive, but let's not be picky: QHY6060 BSI Grade 0 - contact QHYCCD for price, Grade 1 - USD 125,000, Grade 2 - USD 89,000. If one has $33,000 - what is another $90,000 for a Grade 2 sensor?
  20. I think that the point of that scope is shown in this image: Yep, that is an 8.5cm rear opening. I would not be surprised if that scope has an imaging circle suitable for a medium size sensor (around 80mm diagonal). Now, that is a fast system!
  21. What is the total cost and how much time did you put in building it?
  22. It does seem very short, especially for a 0.33"/px sampling rate. If you are using longer exposures, then stick with those. I'm not sure if NINA is accurately calculating what needs to be calculated.
  23. You need to have the median background higher than a certain number that you need to calculate - based on your bias mean value and read noise. If you don't already know how to do it - then don't worry now, just capture as you normally would. For next time, when planning a session - you'll examine the subs from tonight's session and based on that you can calculate if you need longer exposures or not. There are explanations of how to do it - just search for optimum sub exposure length and you'll find plenty of info (one common way to phrase the check is sketched below).
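A sketch under my assumptions: background shot noise should exceed read noise by some factor k (commonly quoted around 3-5), so the target background is (k * read_noise)^2 electrons above the bias level; the camera numbers in the example are made up:

```python
def min_background_adu(bias_adu, read_noise_e, gain_e_per_adu, k=5):
    """Median background (ADU) at which background shot noise is
    k times the read noise: background_e >= (k * read_noise)^2."""
    background_e = (k * read_noise_e) ** 2
    return bias_adu + background_e / gain_e_per_adu

# Hypothetical camera: bias 500 ADU, 1.7e read noise, 1 e-/ADU gain
print(min_background_adu(500, 1.7, 1.0))   # ~572 ADU target median
```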
  24. You are using the ASI2600MC, right? That is a CMOS camera, and for a CMOS camera - it does not matter if you bin at capture time or later in software. In fact - it is better to do it in software, as you have more flexibility in the way you do it (for example - first split debayer and then stack + bin). You will get good final resolution for your conditions tonight (and improved SNR) - if you do it as described above. Just capture at bin 1 and you will handle everything later. The only thing to worry about is to swamp the read noise with background signal. That is it for capture.
  25. As far as we know - there is no process that would lead to something like that, and even if there were - we simply could not see anything now, or in eternity, as no light comes from beyond the event horizon. The only thing that we know so far can happen is black hole evaporation.