Everything posted by vlaiv

  1. Signal does not need to swamp the read noise at all - and when imaging the faint stuff it almost never does, at least not the signal of interest - the target signal. What matters when we talk about read noise is to swamp it with some other type of noise. Of the basic noise types present when imaging, only read noise is "per exposure" - all the others are time dependent: dark current noise, light pollution noise and target shot noise. Since the target signal is often weaker than the read noise in a single exposure, it is not our candidate. Neither is thermal noise (dark current noise). The only real candidate is light pollution noise. Noise adds like linearly independent vectors, so if one component is much bigger than the other, the result will be very close to the larger one (think of a right-angled triangle with one particularly short side - the hypotenuse comes out almost the same length as the longer side).

     The histogram is just a distraction - it has almost zero value in astronomical imaging. The main thing it can tell us is whether there is clipping, but we can see that from the stats as well, so there is no real reason to use it. When you want to calculate optimum exposure length, simply measure the background signal per exposure, derive the LP noise from it (the square root of the sky signal) and compare that with your read noise. The LP noise should be 3-5 times larger than the read noise. Any exposure longer than that just brings diminishing returns (there is only a 2% difference in SNR between using the x5 swamp factor and a single long exposure - and humans can't tell that difference in SNR by eye).

     This makes no sense - as I've shown above with the example of one sub. In that sub the read noise is many times larger than the signal in the tidal tails, yet binning works just fine. The SNR impact of read noise depends on the other noise sources, not on the signal we are trying to capture. As long as we keep it (the read noise, that is) below some fraction of some other noise source, it is irrelevant regardless of how low our target signal is. Let me ask you a question like this: say you have a fast scope with large pixels and two cameras that differ only in read noise. The first camera has 1.5e of read noise and the second has 3e. Under which circumstances will you actually see a decrease in SNR between these two setups?

     Full well size has nothing to do with bringing signal below read noise levels into view. The full well size of a camera is largely inconsequential in astrophotography.
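     To make that concrete, here's a minimal sketch of the sub-length calculation in Python (all numbers are made-up examples - measure your own sky flux and read noise):

     ```python
     # Rule-of-thumb sub length: expose until LP noise swamps read noise.
     # Example values only - measure these for your own camera and sky.
     read_noise_e = 1.7        # read noise in electrons
     sky_flux_e_per_s = 0.8    # sky background per pixel, e-/s
     swamp_factor = 5          # want LP noise 3-5x the read noise

     # LP noise = sqrt(sky_signal), so we need
     # sqrt(sky_signal) >= swamp_factor * read_noise
     required_sky_e = (swamp_factor * read_noise_e) ** 2
     min_sub_s = required_sky_e / sky_flux_e_per_s

     print(f"sky signal needed: {required_sky_e:.1f} e-/px")
     print(f"minimum sub length: {min_sub_s:.0f} s")   # ~90 s here
     ```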
  2. Actual FPS will depend on several factors, but the most important one is the size of the frame you are downloading. The others are the actual USB speed of the computer and the settings in the capture software (there is a setting called USB speed or similar which regulates how much of the USB bandwidth is hogged by the camera - raise that value to get better FPS, but lower it if you start experiencing an unresponsive camera, freezes or similar). @Dunc78 I concur it is probably the ASI224. Given certain priorities, some other model might be better suited (for example, for lunar, sensor size might be more important than max FPS, as the Moon is a rather stationary target that often needs a larger FOV; or for Ha solar - since the 656nm wavelength captured by Ha scopes often comes at a very high F/ratio, larger pixels are an important factor in order not to have to bin the data).
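     For a rough feel of how frame size caps FPS, here's a back-of-envelope sketch (the bandwidth figure is an assumed USB 3.0 ballpark, not a spec for any particular camera - sensor readout limits will lower it further):

     ```python
     # Upper bound on FPS from frame size and link bandwidth alone.
     USABLE_BANDWIDTH = 350e6   # ~350 MB/s usable USB 3.0 (assumption)

     def max_fps(width_px, height_px, bytes_per_px=1):
         """Bandwidth-limited FPS ceiling for a given ROI (8-bit raw)."""
         return USABLE_BANDWIDTH / (width_px * height_px * bytes_per_px)

     # e.g. a small planetary sensor, full frame vs a cropped ROI:
     print(f"1304x976 full frame: ~{max_fps(1304, 976):.0f} fps ceiling")
     print(f"640x480 ROI:         ~{max_fps(640, 480):.0f} fps ceiling")
     ```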
  3. It's not bad - it is just one of the weighting schemes that indeed produces an infinity. Many different weighting schemes also produce infinity, but there are some that produce -1/12. I also think that you can't have an arbitrary weighting scheme - only one that satisfies certain criteria. I'm not sure what those criteria are, but as far as I can tell, Terence Tao did research into this and probably proved that a certain class of weighting schemes are equivalent. Perhaps one criterion is that the weighting function needs to tend to 0 as one moves to the right of N faster than N approaches infinity, and similarly that it needs to tend to 1 as one moves to the left of N (again faster than N goes to infinity) - or some other requirement like that. My firm belief is that the sum 1+2+3+4+5+... off to infinity is one way of calculating a certain value - but a flawed way of doing it, as it does not converge in the classical sense. Which does not mean that the actual number does not exist. Another way of calculating the same value would be to use a different weighting function - and some of those weighting functions are not flawed and do allow you to calculate the value. The zeta function is yet another way to calculate the same value (which again works). The point is not that the infinite sum equals -1/12, but rather that there is some value equal to -1/12 and there are different ways to calculate it - one of which fails, but we know why it fails, and when we encounter this way of calculating, we know what the answer should be - regardless of whether we can actually pull off that particular calculation. This is why it works in physics - we know the answer is right; it is just the "algorithm" for calculating it that is flawed, and the above paper gives us better insight into why it's flawed and what the correct ways are to calculate such values when we stumble onto a flawed way of calculating them.
  4. I don't think it was aimed at anyone in particular - but at the general notion that often repeats: the advice is "get a large Newtonian" rather than "get a large aperture telescope of any design type that suits you best". Both will have the same speed at the same pixel scale, but other types might be, and often are, more manageable than a large Newtonian on several grounds. First, they can often be used without corrective optics - optics which often reduce the Strehl ratio of the telescope in the center (in order to correct over a larger field). Second, they can be of a compact design, which is of course easier to manage and mount. There are some drawbacks - like ease of collimation (which I think is debatable) and soundness of construction - but that is a different type of discussion: bad vs good telescope execution. Newtonians also have one major thing going for them, and that is price - they are often cheaper. There are some other smaller things in favor of folded designs - better baffling, slower to dew up, easier to produce flat fields (less chance of light leak) and so on - but again, those might be better directed at bad vs good telescope execution rather than inherent design type.
  5. Not only the aperture - it is aperture in combination with sampling rate, or "how much of the sky is covered by a single pixel". The actual formula would be: speed of the setup = aperture_area * area_of_the_sky_per_pixel. If you think about it, that is what the "speed" of a telescope does in a sense (if we keep the physical pixel size the same): "faster" telescope = shorter focal length = more sky covered by a single pixel = faster speed, because of the "area_of_the_sky" part of the above equation. However, we don't need to alter the F/ratio of the system to cover more of the sky - we can simply use a larger pixel (in the physical size sense), either by using a camera with larger pixels in microns or by binning our existing pixels.
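     A minimal sketch of that formula (arbitrary units - only ratios between setups are meaningful):

     ```python
     import math

     def setup_speed(aperture_mm, pixel_scale_arcsec):
         """'Speed' ~ aperture area x sky area covered by one pixel.
         Note that F/ratio does not even appear as an input."""
         aperture_area = math.pi * (aperture_mm / 2) ** 2
         sky_area_per_px = pixel_scale_arcsec ** 2
         return aperture_area * sky_area_per_px

     # 8" vs 4", both at 1"/px: x4 speed from aperture area alone.
     print(setup_speed(200, 1.0) / setup_speed(100, 1.0))  # 4.0
     # Same 8" scope, pixel scale doubled (e.g. bin x2): x4 speed again.
     print(setup_speed(200, 2.0) / setup_speed(200, 1.0))  # 4.0
     ```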
  6. Ah, ok. This is because an 8" F/8 telescope working at 1"/px will produce the same SNR image in the same imaging time as an 8" F/4 telescope working at 1"/px. Both gather the same amount of light and spread it over the same amount of "pixel surface" - so the resulting signal, and thus SNR, is the same. The fact that one is F/8 and one is F/4 is irrelevant - hence the speed of the scope is irrelevant once you set your working resolution. Makes sense?
  7. Yes - you would produce the same quality image (in terms of SNR) in 1/4 of the time with the Newtonian as you would with the 100mm refractor. This is because you have x4 more light gathering area with 200mm of aperture versus 100mm. The second benefit is that, given the same sky, same mount, same conditions, the 8" would produce a very slightly sharper image than the 4" - if both scopes are diffraction limited (which might not be the case if you use a coma corrector for the Newtonian or a field flattener for the refractor).
  8. No, it really does not. While it is good to have a mount that can keep the target on the sensor - I was able to do that with an EQ2 mount with a simple DC tracking motor that had potentiometer speed control (so I adjusted the tracking rate in real time to keep the planet on the sensor). Individual subs are so short that the mount simply does not have time to make an impact - it virtually stands still for the duration of ~5ms. In fact, we can calculate it roughly: if the sidereal tracking rate is 15"/s, then in 5ms the mount moves 1/200th of that, or about 0.075" - there is simply no "room" for it to make an error or any sort of jitter that would show in a single frame.
  9. I like how they use a rigorous mathematical framework to actually derive this identity in the video. The standard procedure for a divergent series, or in fact any series, is to define a partial sum up to a certain N and then find the limit as N tends to infinity. We can see this as weighted summing where the weights are 1, 1, 1, 1, ... (all the way up to the Nth position) and then 0, 0, 0, 0, ... for the rest. Now, if I got this right from the video, it was Terence Tao who showed some years ago that any distribution that starts off as 1 and then transitions to 0 around N can be used just the same as weights. When we examine different distributions, we find that the limit of the sum has the form c * N^2 - 1/12 + ... plus some other terms that vanish as N goes to infinity. Some distributions set c to 0, leaving us with 0 * N^2 - 1/12 = -1/12 no matter how big N gets (it is still -1/12 in the limit, as anything multiplied by 0 is still 0). And the final punch line is that symmetry dictates for which distributions c = 0 (think symmetry in physics - Noether's theorem - rather than symmetry of the distribution). This is why we can use this framework to normalize infinite sums in QFT (or at least there is a general impression that the two are deeply linked somehow - thus calculations work as they should although there are "infinities" involved).
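     For reference, here is my paraphrase of that statement in LaTeX (a sketch from memory of Tao's "smoothed sums" result, not a quote from the video):

     ```latex
     % For a smooth cutoff \eta with \eta(0)=1 and compact support:
     \[
       \sum_{n=1}^{\infty} n\,\eta\!\left(\tfrac{n}{N}\right)
         = C_\eta N^2 - \frac{1}{12} + O\!\left(\tfrac{1}{N}\right),
       \qquad
       C_\eta = \int_0^\infty x\,\eta(x)\,dx .
     \]
     % The divergent N^2 term depends on the cutoff; the -1/12 does not.
     % Cutoffs with C_\eta = 0 leave -1/12 as the limit.
     ```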
  10. Valid point - but you can always choose to let read noise have a bit larger impact by shortening individual subs. You might choose a swamp factor of 3 instead of 5, or something like that, and simply go with shorter subs if wind gusts are a real concern. Alternatively - everyone likes a lower read noise camera, so maybe manufacturers will keep reducing read noise further.
  11. I indeed skimmed over that because there is no real issue there. Let me explain using two points.

     First - read noise, in terms of CCD vs CMOS. CCDs used to have very large read noise - like 7-8e and sometimes even more (very few models had read noise as low as 5-6e). Modern CMOS sensors have read noise in the 1-2e range. That is at least x4 less than a CCD sensor - so one would need to expose x16 longer with the CCD to reach the same level of "swamping" with sky noise. Indeed, back in the day, exposures of 20 or more minutes were fairly common (even half an hour or longer for NB imaging).

     Now onto mounts and guiding. Most mounts have a periodic error whose period is on the order of 10 minutes or thereabouts. That is the full period, and the half period - the time the mount takes to go from peak to peak - is half that. We could argue that the "road" from peak to peak is either a) smooth - making the RA drift the same for the first two and a half minutes as for the second two and a half minutes - in which case, if you can image/guide for 2.5 minutes, you should be able to image the whole 5 minutes without issues, and by extension the whole worm cycle, as it is the same road in the other direction; or b) one of the two parts is significantly steeper - so it can't be guided - in which case you would lose every other sub to not being able to guide. If that is not the case - and you don't lose subs - then you should be able to guide the whole RA period; and if you can guide the whole RA period, what stops you from guiding two consecutive periods?

     In any case - I don't think sub duration is a very important issue. If one can't guide for 10-15 minutes, one should sort that out first before attempting close-up galaxies.
  12. For lunar and planetary (lucky type), aperture is king.
  13. Just for anyone doubting software binning - I'll give an example. Can you capture the M51 tidal tail with a 1 minute exposure with an 8" telescope from Bortle 8 light pollution? Most will agree that it is impossible, right? Ok, so this is a single 1 minute sub at native resolution: It looks like one might expect, right? Look what the same sub looks like if I bin it quite a bit: That is one sub - one minute, no stacking, no monkey business - enough SNR to show the tidal tails! However, I had to trade in a large amount of resolution to get it - the image of the galaxy is now tiny.
  14. Nope. It improves SNR by the bin factor on the recorded image - regardless of how read noise is treated. Once you have an image - no matter how it was acquired, CMOS or CCD - software binning will improve its SNR by the bin factor (if you bin x2 you get x2 improvement, bin x3 gives x3 SNR improvement, and so on). Binning is the same underlying procedure as stacking - which is in turn the same underlying procedure as longer integration. You effectively trade spatial resolution for integration time when you bin.
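     A quick numpy sketch on a synthetic frame (pure shot noise, so the SNR is easy to predict):

     ```python
     import numpy as np

     rng = np.random.default_rng(0)

     # Toy frame: uniform signal of 10 e-/px plus Poisson shot noise.
     frame = rng.poisson(10.0, size=(1024, 1024)).astype(float)

     def bin2x2(img):
         """Average 2x2 pixel blocks (software binning)."""
         h, w = img.shape
         return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

     def snr(img):
         return img.mean() / img.std()

     print(f"SNR native: {snr(frame):.2f}")         # ~ sqrt(10) ~ 3.2
     print(f"SNR bin x2: {snr(bin2x2(frame)):.2f}") # ~ 6.3, i.e. x2 better
     ```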
  15. 100% effective. The only difference between software and hardware binning is the level of read noise. With CCDs and hardware binning you have the same read noise whether you bin or not, but with CMOS sensors the "effective" read noise is increased by the bin factor: if you bin x2, read noise is x2 larger; if you bin x3, read noise is x3 larger, and so on. However, this makes no difference to the final result if you already expose long enough to swamp the read noise at bin x1. When you bin and increase the read noise, you also increase the other noise sources in the same manner, so their ratio - "by how much you swamp" the read noise with, say, sky noise - remains the same.

     The best way to bin is actually not to bin at all - the best way is to split your subs so that different pixels end up in different subs. This way you avoid any mathematical operations on pixels - you reduce the sampling rate (because you keep only every second or third pixel) but you end up with a multiple of the subs, as if you had imaged for longer. This shows that nothing is really lost - it is a pure trade-off between sampling rate and SNR. Just to clarify what I'm saying: you take one sub and split it into 4 smaller subs - the first containing odd, odd pixels (in x and y), the second odd, even, the third even, odd and the fourth even, even (a bit like bayer matrix splitting). In both axes, X and Y, you have half as many pixels, so each new sub is half the height and half the width - sampled at half the sampling rate - but you have x4 more subs to stack, which improves total SNR x2. (See the sketch below.)

     Ok, no software that I know of actually implements the above approach, although I'm sure a PI script could be written, so the next best thing is to simply bin each sub after calibration and before stacking - either average or sum will do, but take care to save each sub in 32bit floating point format to avoid losing precision (you should do this anyway when calibrating). The third option is to simply take your stack and bin it before you start processing. In principle the three are equivalent, bar some minute differences that have to do with interpolation when aligning the subs for stacking (mostly academic arguments - no practical difference).
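     Here's a minimal numpy sketch of that splitting idea (split_into_four is a hypothetical helper - as said, I don't know of any software that implements this):

     ```python
     import numpy as np

     def split_into_four(sub):
         """Split one sub into four half-resolution subs by pixel parity:
         (odd,odd), (odd,even), (even,odd), (even,even). No pixel values
         are combined - sampling rate halves, sub count quadruples."""
         return [sub[0::2, 0::2], sub[0::2, 1::2],
                 sub[1::2, 0::2], sub[1::2, 1::2]]

     # Example: 10 full-resolution subs become 40 half-resolution subs,
     # which stack to the same SNR as binning each sub x2.
     rng = np.random.default_rng(1)
     subs = [rng.poisson(10.0, size=(512, 512)).astype(float)
             for _ in range(10)]
     small = [piece for s in subs for piece in split_into_four(s)]
     print(len(small), small[0].shape)   # 40 (256, 256)
     ```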
  16. I wonder why people insist on the "speed" of the telescope as being the crucial thing when imaging galaxies? Most galaxies are very small in angular size - maybe a dozen arc minutes at most. That is about 700px or less across the image of the galaxy if one samples at the highest practical sampling rate for amateur setups, which is 1"/px. Now take any modern sensor with more than 12MP - that is 4000x3000px or more, so at 4000px/700px you have an at least x5 larger sensor than you actually need in terms of pixels. You can bin x5 and still capture the galaxy in its entirety plus some surrounding space. Btw - bin x5 will make an F/15 scope work as if it were an F/3 scope - so what is the point of going for an F/4 Newtonian when you can comfortably use a compact Cass type - be that SCT, MCT, RC or CC - and produce an excellent galaxy image? My take on this would be: get the largest aperture that you can afford and comfortably mount and use, and adjust your working resolution to the range of 1-1.2"/px for best small galaxy captures (see the sketch below for working out the bin factor).
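     A small sketch for picking the bin factor for a given setup (example camera/scope numbers only, not a gear recommendation):

     ```python
     def native_scale(pixel_um, focal_length_mm):
         """Pixel scale in arcsec/px: 206.265 * pixel size / focal length."""
         return 206.265 * pixel_um / focal_length_mm

     def bin_for_target(pixel_um, focal_length_mm, target_arcsec=1.1):
         """Bin factor that brings the setup closest to the target scale."""
         return max(1, round(target_arcsec / native_scale(pixel_um, focal_length_mm)))

     # Example: 3.76um pixels on a 2000mm focal length scope.
     print(f'native: {native_scale(3.76, 2000):.2f}"/px')  # ~0.39"/px
     print(f"bin:    x{bin_for_target(3.76, 2000)}")       # x3 -> ~1.16"/px
     ```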
  17. Indeed it is. There are a few possible explanations:
     - funny white balance settings in the capture app (set to auto but failing to do its thing properly, for example);
     - wrong order of bayer matrix elements, combined with red being stronger as per QE. Bayer matrix order is usually something like this:
       RG
       GB
       but if there is a change in software or drivers, the image can be read out backwards (bottom to top instead of top to bottom), which changes the order of the bayer matrix - it flips vertically and becomes:
       GB
       RG
       With normal "green strong QE" cameras this produces a sort of pinkish tint, because the two green pixels (which usually have the same or very similar values, as they are adjacent) turn into R and B with the same levels, and that becomes the dominant thing; but with R being stronger, this inversion can make the image green;
     - a very strong luminance cutoff filter, like the L3, being used. Since it removes the outer parts of the spectrum (the blue and red ends), it effectively removes "volume" in these colors - i.e. surface under the QE curve, which is what actually counts towards total color weight (peak QE is a good indicator only if the three components cover roughly the same wavelength ranges).
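     A toy numpy sketch of that bayer-order mix-up (synthetic pure-green scene, naive per-site averaging instead of real demosaicing):

     ```python
     import numpy as np

     # Pure green scene, mosaicked with an RGGB pattern.
     h, w = 8, 8
     scene = np.zeros((h, w, 3))
     scene[..., 1] = 1.0

     def mosaic_rggb(img):
         m = np.zeros(img.shape[:2])
         m[0::2, 0::2] = img[0::2, 0::2, 0]   # R sites
         m[0::2, 1::2] = img[0::2, 1::2, 1]   # G sites
         m[1::2, 0::2] = img[1::2, 0::2, 1]   # G sites
         m[1::2, 1::2] = img[1::2, 1::2, 2]   # B sites
         return m

     def channel_means(m, order):
         """Mean per channel under an assumed 2x2 order string (toy:
         the duplicate letter keeps only one of the two G sites)."""
         pos = {order[0]: m[0::2, 0::2], order[1]: m[0::2, 1::2],
                order[2]: m[1::2, 0::2], order[3]: m[1::2, 1::2]}
         return {c: float(pos[c].mean()) for c in "RGB" if c in pos}

     m = mosaic_rggb(scene)
     print("assumed RGGB:", channel_means(m, "RGGB"))  # green, correct
     print("assumed GBRG:", channel_means(m, "GBRG"))  # G lands in R and B
     ```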
  18. That is really not the reason why images turn out green. It has nothing to do with the number of pixels - but with relative sensitivity in different parts of the spectrum. Most sensors have their strongest QE in green, and this will result in a green cast if the image is not color corrected (often wrongly referred to as white balance). Some cameras have their peak in the red part of the spectrum (usually those very sensitive in IR) - and those produce reddish-tinted images. @LaurenceT You have "white balance" controls in your capture software which you can tweak to get a color-neutral image. Alternatively, do the color balance afterwards in software at the processing stage.
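     A minimal sketch of a simple gray-world colour balance (one of many possible approaches - not what any particular capture software does internally):

     ```python
     import numpy as np

     def gray_world_balance(img):
         """Scale each channel so its mean matches the green channel's."""
         means = img.reshape(-1, 3).mean(axis=0)
         gains = means[1] / means
         return np.clip(img * gains, 0.0, 1.0)

     # Toy image with a green cast (G twice as strong as R and B):
     rng = np.random.default_rng(2)
     img = rng.uniform(0.0, 0.5, size=(64, 64, 3))
     img[..., 1] *= 2.0

     print(img.reshape(-1, 3).mean(axis=0))                      # green-heavy
     print(gray_world_balance(img).reshape(-1, 3).mean(axis=0))  # ~neutral
     ```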
  19. I think that colour is much simpler to use because you don't need filters / filter wheels and so on, and there is very little difference, if any, in the end result (thanks to some clever software tricks). I'm not sure what your budget is, but an excellent planetary camera is not that expensive - have a look at this one: https://www.firstlightoptics.com/zwo-cameras/zwo-asi224mc-usb-3-colour-camera.html If you can get something like that second hand - even better. I would not mind using a second hand planetary camera if in good working order - as long as it is USB 3.0 and is supported with drivers at the moment (the example I gave above was captured with a QHYIIL color camera - which is only USB 2.0, and I'm not sure one can find good drivers for that model any more, so I would not get it second hand).
  20. On the surface it is just some nice mathematics, but the fact that you get finite answers when obeying symmetries just blows my mind (not pretending to fully understand how it works - I just get that it works).
  21. Yes, the greatest improvement will come from using a dedicated planetary camera. These two images were taken roughly a month apart - both with a 5" Newtonian, the first with a modified webcam (Logitech C270 with the front lens removed) and the second with a proper planetary camera (although USB 2.0). Almost all conditions, including my planetary imaging ability, were the same - the only difference was the camera model used.
  22. If you really want to give a focal reducer a try - then simply go for the cheap x0.5 1.25" reducer. I've used it with a small sensor on several occasions with different scopes and it works. Varying the spacing will vary the magnification / compression, so you can experiment. See here for some ideas of what to expect:
  23. A larger FOV on lunar is usually achieved by making a mosaic rather than using a focal reducer. Focal reducers, while giving you a larger FOV, introduce optical aberrations. These might not be as bad for deep sky imaging, where atmospheric effects are dominant, but if you plan on doing lucky type planetary imaging (and of course you should - you have the right gear for it), then you don't want to lose any sharpness. You simply image 4 separate panels (or more) and stitch them together to create a large image. I think it took 9 panels the last time I did lunar imaging with a 4" Mak and ASI178 to cover the whole lunar disk (the ASI178 is a bit larger sensor than the ASI224, but the Mak has a bit more focal length than the SCT, so they should be in the ballpark).
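     A back-of-envelope panel count, with assumed ballpark sensor and scope figures (not exact specs - real counts depend on overlap and framing):

     ```python
     import math

     moon_arcmin = 32.0
     pixel_um, width_px, height_px = 2.4, 3096, 2080  # ASI178-class (approx.)
     focal_length_mm = 1300.0                         # ~4" Mak (assumption)
     overlap = 0.2                                    # 20% panel overlap

     def fov_arcmin(n_px):
         """Field of view along n_px pixels, small-angle approximation."""
         return math.degrees(n_px * pixel_um * 1e-3 / focal_length_mm) * 60

     cols = math.ceil(moon_arcmin / (fov_arcmin(width_px) * (1 - overlap)))
     rows = math.ceil(moon_arcmin / (fov_arcmin(height_px) * (1 - overlap)))
     print(f"panel FOV: {fov_arcmin(width_px):.1f}' x {fov_arcmin(height_px):.1f}'")
     print(f"panels: {cols} x {rows} = {cols * rows}")  # ballpark of the ~9 above
     ```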
  24. I'm not known to be envious, but seeing that these forum "stickers" come with real world mugs ....