Everything posted by vlaiv

  1. Valid point - but you can always choose to let read noise have a bit larger impact by shortening individual subs. You might choose a swamp factor of 3 over 5, or something like that, and simply go with shorter subs if wind gusts are a real concern. Alternatively - everyone likes a lower read noise camera, so maybe manufacturers will keep reducing read noise further.
  2. I indeed skimmed over that because there is no real issue there. Let me explain using two points.

     First - read noise, CCD vs CMOS. CCDs used to have very large read noise - around 7-8e and sometimes even more (very few models had read noise as low as 5-6e). Modern CMOS sensors have read noise in the 1-2e range. That is at least x4 lower than a CCD sensor - so one would need to expose x16 longer with a CCD to reach the same level of "overwhelm" with sky noise. Indeed, back in the day, exposures of 20 or more minutes were fairly common (even half an hour or longer for NB imaging).

     Now onto mounts and guiding. Most mounts have periodic error with a period on the order of 10 minutes or thereabouts. That is the full period; the half period - in which the mount goes from peak to peak - is half that. We could argue that the "road" from peak to peak is either: a) smooth - making RA drift the same for the first two and a half minutes as for the second two and a half minutes - in which case, if you can image/guide for 2.5 minutes, you should be able to image the whole 5 minutes without issues, and by extension the whole worm cycle, as it is the same road in the other direction, or b) one of the two parts is significantly steeper - so it can't be guided - in which case you would lose every other sub to not being able to guide. If that is not the case - and you don't lose subs - then you should be able to guide the whole RA period - and if you can guide the whole RA period, what stops you from guiding two consecutive periods?

     In any case - I don't think that sub duration is a very important issue. If one can't guide for 10-15 minutes, one should sort that bit out first before attempting close-up galaxies.
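A minimal sketch of the exposure scaling behind the two posts above, assuming the usual "swamp the read noise" criterion (sky noise at least swamp_factor times the read noise, with sky noise = sqrt(sky flux x exposure)); the sky flux value is made up purely for illustration:

```python
# Shortest sub length that swamps read noise by a chosen factor.
# Criterion: sqrt(sky_e_per_sec * t) >= swamp_factor * read_noise_e
def min_sub_length(read_noise_e, sky_e_per_sec, swamp_factor=5.0):
    """Shortest exposure (seconds) where sky noise swamps read noise."""
    return (swamp_factor * read_noise_e) ** 2 / sky_e_per_sec

sky = 2.0  # e-/pixel/second - illustrative value only
print(min_sub_length(1.5, sky))                   # ~28 s for a ~1.5e CMOS sensor
print(min_sub_length(6.0, sky))                   # ~450 s for a ~6e CCD - x16 longer
print(min_sub_length(1.5, sky, swamp_factor=3))   # ~10 s if you accept swamp factor 3
```

Sub length scales with the square of both the read noise and the chosen swamp factor - which is why x4 lower read noise means x16 shorter subs, and why dropping the swamp factor from 5 to 3 roughly cuts the sub length to a third.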
  3. For lunar and planetary (lucky type), aperture is king.
  4. Just for anyone doubting software binning - I'll give an example. Can you capture M51's tidal tail with a 1 minute exposure through an 8" telescope from Bortle 8 light pollution? Most will agree that it is impossible, right? Ok, so this is a single 1 minute sub at native resolution: It looks like one might expect, right? Now look what the same sub looks like if I bin it quite a bit: That is one sub - one minute, no stacking, no monkey business - enough SNR to show the tidal tail! However, I had to trade in a large amount of resolution to get it - the image of the galaxy is now tiny.
  5. Nope. It improves SNR by the bin factor from the recorded image - regardless of how read noise is treated. Once you have the image - no matter how it was acquired, CMOS or CCD - software binning will improve its SNR by the bin factor (bin x2 and you get x2 improvement, bin x3 gives x3 SNR improvement and so on). Binning is the same underlying procedure as stacking - which is in turn the same underlying procedure as longer integration. You effectively trade spatial resolution for integration time when you bin.
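A quick numerical check of that claim on synthetic data (a flat signal with Gaussian noise - not a real sub), software binned 2x2 by averaging:

```python
import numpy as np

rng = np.random.default_rng(0)
signal, noise_sigma = 100.0, 10.0
sub = signal + rng.normal(0, noise_sigma, size=(1000, 1000))

# software bin 2x2 by averaging groups of 4 pixels
binned = sub.reshape(500, 2, 500, 2).mean(axis=(1, 3))

print(sub.std())      # ~10 -> SNR ~10 at native resolution
print(binned.std())   # ~5  -> SNR ~20, i.e. x2 improvement for a x2 bin
```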
  6. 100% effective. The only difference between software and hardware binning is the level of read noise. With CCDs and hardware binning you have the same read noise whether you bin or not, but with CMOS sensors the "effective" read noise is increased by the bin factor. If you bin x2 - read noise is x2 larger, if you bin x3 - read noise is x3 larger and so on. However - this makes no difference to the final result if you already expose to swamp the read noise at bin x1. When you bin and increase read noise - you also increase the other noise sources in the same manner, so their ratio - or "by how much you swamp" the read noise with, say, sky noise - remains the same.

     The best way to bin is to actually not bin at all - the best way is to split your subs so that different pixels end up in different sub-subs. This way you avoid any mathematical operations on pixels - you reduce the sampling rate (because you leave out every other or third pixel) but you end up with multiples of subs - as if you had imaged for longer. This shows that nothing is really lost - it is a pure trade off between sampling rate and SNR. Just to clarify what I'm saying - you take one sub and split it into 4 smaller subs - the first containing odd, odd pixels (in x and y), the second odd, even, the third even, odd and the fourth even, even (a bit like bayer matrix splitting). In both axes - X and Y - you have twice as few pixels, so each new sub is half the height and half the width - sampled at twice lower sampling rate - but you have x4 more subs to stack - which improves total SNR x2 (see the sketch below).

     Ok, but no software that I know of actually implements the above approach, although I'm sure a PI script could be written, so the next best thing is to simply bin each sub after calibration and before stacking - either average or sum will do, but take care to save each sub in 32bit floating point format to avoid losing precision (you should do this anyway when calibrating). The third option is to simply take your stack and bin it before you start to process it. In principle - the above three are equivalent bar some minute differences that have to do with interpolation when aligning the subs for stacking (mostly academic arguments - no practical difference).
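A minimal numpy sketch of that "split instead of bin" idea (a hypothetical helper, not an existing script): one sub becomes four half-resolution subs by taking odd/even pixels in x and y, with no pixel values combined - the four subs then go into the stack as usual.

```python
import numpy as np

def split_sub(sub: np.ndarray):
    """Split one sub into 4 half-resolution subs by pixel parity (no binning math)."""
    return [sub[0::2, 0::2],   # even row, even column
            sub[0::2, 1::2],   # even row, odd column
            sub[1::2, 0::2],   # odd row, even column
            sub[1::2, 1::2]]   # odd row, odd column

sub = np.arange(16, dtype=np.float32).reshape(4, 4)
for piece in split_sub(sub):
    print(piece.shape)   # each is (2, 2): half width and height, but x4 more subs to stack
```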
  7. I wonder why people insist on "speed" of the telescope as being the crucial thing when imaging galaxies? Most galaxies are very small in angular size - maybe a dozen arc minutes at most. That is about 700px or less across the image of the galaxy if one samples at the highest practical sampling rate for amateur setups - which is 1"/px. Now take any modern sensor that has more than 12MP - that is 4000x3000px or more, so 4000px/700px gives at least x5 more pixels than you actually need. You can bin x5 and you'll still be able to capture the galaxy in its entirety + some surrounding space. Btw - bin x5 will make an F/15 scope work as if it were an F/3 scope - so what is the point in going for an F/4 Newtonian when you can comfortably use a compact Cass type - be that SCT, MCT, RC or CC - and produce an excellent galaxy image. My take on this would be - get the largest aperture that you can afford, comfortably mount and use, and adjust your working resolution to the range of 1-1.2"/px for the best small galaxy captures.
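A small sketch of that sampling / "speed" arithmetic (the focal length and pixel size below are illustrative values, not a specific gear recommendation):

```python
def sampling_arcsec_per_px(pixel_um, focal_length_mm, bin_factor=1):
    """Image scale in arcseconds per (binned) pixel."""
    return 206.265 * pixel_um * bin_factor / focal_length_mm

def effective_f_ratio(f_ratio, bin_factor):
    """Binning x N collects the light of N x N pixels per output pixel - acts like F/N."""
    return f_ratio / bin_factor

print(sampling_arcsec_per_px(2.9, 3000))                 # ~0.2 "/px - heavily oversampled
print(sampling_arcsec_per_px(2.9, 3000, bin_factor=5))   # ~1.0 "/px - in the 1-1.2 "/px range
print(effective_f_ratio(15, 5))                          # F/15 scope behaving like F/3
```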
  8. Indeed it is. There are a few possible explanations:
     - funny white balance settings in the capture app (set to auto but failing to do its thing properly, for example).
     - wrong order of bayer matrix elements, with red being the stronger channel as per QE. Bayer matrix order is usually something like this:
       RG
       GB
       But if there is a change in software or drivers - the image can be read out backwards (bottom to top instead of top to bottom) - which changes the order of the bayer matrix - it flips it vertically so it becomes:
       GB
       RG
       With normal "green strong QE" cameras, this produces a sort of pinkish tint because the two green pixels (which usually have the same / very similar value as they are adjacent) turn into R and B with the same levels and that becomes the dominant thing, but with R being the stronger channel - this inversion can make a green image.
     - a very strong luminance cutoff filter like the L3 filter being used. Since it removes the outer parts of the spectrum (the blue and red ends) - it effectively removes "volume" in these colors, or area under the QE curve, which is what actually counts towards total color weight (peak QE is a good indicator only if the three components cover roughly the same wavelength ranges).
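A small illustration of that readout-flip effect (toy 4x4 mosaic, not any particular driver's behaviour): reversing the row order of an RGGB mosaic makes the top-left 2x2 block read GBRG, so a debayer routine that still assumes RGGB assigns the wrong colours.

```python
import numpy as np

pattern = np.array([["R", "G"],
                    ["G", "B"]])
mosaic = np.tile(pattern, (2, 2))    # 4x4 sensor with an RGGB bayer layout
flipped = mosaic[::-1, :]            # rows read out bottom-to-top

print(mosaic[:2, :2])    # [['R' 'G'] ['G' 'B']] -> RGGB
print(flipped[:2, :2])   # [['G' 'B'] ['R' 'G']] -> GBRG
```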
  9. That is really not the reason why images turn out green. It has nothing to do with the number of pixels - but with relative sensitivity in different parts of the spectrum. Most sensors have their strongest QE in green and this will result in a green cast if the image is not color corrected (often wrongly referred to as white balance). Some cameras have their peak in the red part of the spectrum (usually those very sensitive in IR) - and those produce reddish tinted images. @LaurenceT You have "white balance" controls in your capture software which you can tweak to get a color neutral image. Alternatively - do the color balance afterwards in software at the processing stage.
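A minimal sketch of doing that balance afterwards in software (the channel gains are made-up illustrative numbers - real values depend on the sensor and would normally be derived from a neutral reference in the image):

```python
import numpy as np

def white_balance(rgb, r_gain=1.6, b_gain=1.4):
    """Scale R and B up relative to the dominant G channel. rgb: (H, W, 3) floats in 0..1."""
    return np.clip(rgb * np.array([r_gain, 1.0, b_gain]), 0.0, 1.0)

# A flat grey patch that a green-strong sensor recorded with a green cast:
patch = np.full((10, 10, 3), [0.25, 0.40, 0.28])
print(white_balance(patch)[0, 0])   # channels pulled back towards neutral
```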
  10. I think that colour is much simpler to use because you don't need filters / filter wheels and so on, and there is very little difference, if any, in the end result (thanks to some clever software tricks). I'm not sure what your budget is, but an excellent planetary camera is not that expensive - have a look at this one: https://www.firstlightoptics.com/zwo-cameras/zwo-asi224mc-usb-3-colour-camera.html If you can get something like that second hand - even better. I would not mind using a second hand planetary camera if in good working order - as long as it is USB 3.0 and is supported with drivers at the moment (the example I gave above was captured with a QHYIIL color camera - which is only USB 2.0, and I'm not sure one can find good drivers for that model any more, so I would not get it second hand).
  11. On the surface - it is just some nice mathematics, but the fact that you get finite answers when obeying symmetries just blows my mind (not pretending to fully understand how it works - I just get that it works).
  12. Yes, the greatest improvement will come from using a dedicated planetary camera. These two images were taken roughly a month apart - both with a 5" newtonian, the first one with a modified web cam (Logitech C270 with the front lens removed) and the second with a proper planetary camera (although USB 2.0). Almost all conditions, including my planetary imaging ability, were the same - the only difference was the camera model used.
  13. If you really want to give a focal reducer a try - then simply go for a cheap x0.5 1.25" reducer. I've used it with a small sensor on several occasions with different scopes and it works. Varying the spacing will vary the magnification / compression, so you can experiment. See here for some ideas of what to expect:
  14. A larger FOV on lunar is usually achieved by making a mosaic rather than using a focal reducer. Focal reducers, while giving you a larger FOV, introduce optical aberrations. These might not be as bad for deep sky imaging where atmospheric effects are dominant, but if you plan on doing lucky type planetary imaging (and of course you should, and you have the right gear for it) - then you don't want to lose any sharpness. You simply image 4 separate panels (or more) that you stitch together to create a large image. I think it took 9 panels the last time I did lunar imaging with a 4" mak and ASI178 to cover the whole lunar disc (the ASI178 is a bit larger sensor than the ASI224, but the mak has a bit more focal length than the SCT, so they should be in the same ballpark).
  15. I'm not known to be envious, but seeing that these forum "stickers" come with real world mugs ....
  16. I guess that the continuity of the star trails would be the giveaway, or rather which stars made which segment of the circle. With a panoramic mosaic you need to capture separate parts and combine them - but you can't do that at the same time.
  17. For all intents and purposes - they would still collide, but in principle - it would have an effect as long as there is any sort of inhomogeneity in stellar composition. This would happen even if the stars were not spinning (any type of asymmetry would cause rotation to start). Spin itself just contributes to stronger curvature of spacetime as it represents energy, and mass and energy are equivalent.
  18. That is quite normal if done with a very wide lens - a fisheye type that captures more than 180 degrees in a single go. Maybe the simplest explanation would be using your own hands. Circle with both of your hands in the same direction (like a gym exercise) - here is a drawing of what I mean: say that you move them in the "forward" direction (like the butterfly swimming stroke) - that is exactly how stars move in the northern and southern hemispheres. They perform large circles - but in reality they circle in the same direction (because it's the Earth that is spinning). But if you look at your left hand while doing this - it will look as if it's circling clockwise, and if you turn your head to the right to observe your right hand - it will look like it's spinning counterclockwise. The above images are simply done with a lens that can "look at both hands at the same time" - meaning it has more than 180 degrees field of view.
  19. I think that it's down to two things:
     - most people that use telescopes to split doubles are familiar with the influence of seeing, and it's often omitted for that reason, but when wanting to be fully accurate in the description - it is included
     - eyesight is not that important a variable if one can change magnification. You select a magnification that allows you to easily see what the telescope is capable of. There is seldom discussion (but it does happen) of what you can split with, say, x40 power or similar. Most of the time the recommendation is to go with very high powers, even higher than one would use for planetary, for example. That removes eyesight from the equation as at those magnifications - the eye has no issues resolving things.
     With binoculars - it is a thing, since one is tied to a certain magnification - and that magnification tends to be on the very low side of things - which is not suitable for splitting doubles because of eyesight issues.
  20. Not sure if that is true. I'm sure that sky conditions play a major part when talking about visual separation of doubles as well. Sometimes talk about the theoretical resolution of the telescope comes up in the context of planetary imaging, for example. There we don't really entertain these variables as they are effectively excluded by the process of planetary imaging (lucky imaging, where we discard subs that are too distorted by the atmosphere).
  21. Ah, sorry - my bad, I pressed ' instead of " (which is just a shift away). Aperture is a variable - but binoculars resolve independently of the eyes (they produce the image regardless of whether someone is actually looking through them) so things don't really compound. If the binoculars resolve and the human eyes are able to resolve that resolved image - we have a separation; in other cases - we don't (if either the binoculars or the eyes can't do their part - or both).
  22. I think that you won't come close to the theoretical resolution of 70mm aperture for several reasons. The first is the quality of the optics, but more important is the magnification - it is too low. If we assume perfect optics, then it's down to the visual acuity of the observer. https://en.wikipedia.org/wiki/Visual_acuity There is a table on the above page that lists MAR for different grades of visual acuity - that is the important factor. It is the minimum angle of resolution and is expressed in arc minutes in said table. 20/20 vision equates to a MAR of 1 - which means that a 20/20 person needs two equal doubles at one minute of arc apparent separation to be able to just resolve them (see the gap). Since you have binoculars that provide x18 magnification - the required angle on the sky will actually be 1 minute of arc / 18 = 60 arc seconds / 18 = 3.33" separation. This is for a person with 20/20 vision and perfect optics. Binoculars are often fast achromats that suffer from spherical aberration, which will somewhat soften the view, so the actual figure will be larger, and if you have less than 20/20 vision - this will add to the separation needed. For example, 20/30 vision adds 50% to the separation, so you'll be able to resolve around 5".
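The same arithmetic as a small sketch (the values are the ones quoted above, not measurements):

```python
def min_separation_arcsec(mar_arcmin, magnification):
    """Smallest double-star separation on the sky (arcsec) a given eye can split."""
    return mar_arcmin * 60.0 / magnification

print(min_separation_arcsec(1.0, 18))   # 20/20 vision, x18 binoculars -> ~3.33"
print(min_separation_arcsec(1.5, 18))   # 20/30 vision (MAR 1.5') -> 5.0"
```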
  23. I'll probably just swap out my F/10 achromat for this: https://www.firstlightoptics.com/stellamira-telescopes/stellamira-110mm-ed-f6-refractor-telescope.html For some inexplicable reason, I'm sort of drawn to that scope. It's not color free, but it has something to it ... and also, it seems to be able to deliver very good views with that combination of glasses according to this: https://www.telescope-optics.net/commercial_telescopes.htm#error There will be some residual color, but at that level (somewhat more than a 4" F/15) - it will be less than the 4" F/10 that I already have, and I'm not particularly bothered by CA in that scope. A bit more aperture, less focal length to show wider field views, and a bit less color, with potential for very sharp views (e-line Strehl design limit of 0.997 - which is the potential to be almost perfect in that line) - what is not to like?
  24. I've since managed one observing session with this combination - and it works great. The ergonomics are not that great, especially when using the diagonal and eyepiece in the regular way and then trying to screw in all the adapters in the dark, but once properly set up - it gives great reduced images. Very nice stars to the edge, and it can fit the whole of M31 in the FOV.