Everything posted by vlaiv

  1. vlaiv

    M13 RGB

    There is an advantage with CCD cameras in using hardware binning: you keep read noise at its original level. CCD cameras have the same read noise for regular pixel readout, bin x2, bin x3, etc. Software binning multiplies read noise. If you have a camera like the ASI1600, with 1.7e of read noise (at unity gain), and you bin it x2 - it will behave like a camera with twice as large pixels, but with 3.4e read noise. If you bin x3 - it will have 5.1e read noise, and so on. With a CCD, on one side you have faster download times, smaller files and the read noise advantage; on the other side you have the benefits of software binning presented in the posts below. With CMOS it is far simpler - you only give up the smaller file size. Download times are fast as is and there is no read noise benefit, so it stands to reason to go for fully software binning versus firmware binning. With CCDs - well, it's up to you to weigh the pros and cons and decide which is better for you.
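    A minimal numpy sketch of why this is so (the 1.7e figure is the ASI1600 example from above; uncorrelated read noise adds in quadrature, so summing an NxN group in software multiplies it by N):

    ```python
    import numpy as np

    # Read noise of uncorrelated pixels adds in quadrature, so an NxN software
    # bin multiplies read noise by sqrt(N*N) = N. Hardware (CCD) binning sums
    # charge before readout, so there is a single dose of read noise per group.
    read_noise = 1.7  # electrons, e.g. ASI1600 at unity gain

    for bin_factor in (1, 2, 3):
        software_rn = read_noise * np.sqrt(bin_factor ** 2)  # = read_noise * bin_factor
        hardware_rn = read_noise                             # one readout per binned group
        print(f"bin x{bin_factor}: software {software_rn:.1f}e, hardware {hardware_rn:.1f}e")
    ```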
  2. vlaiv

    M13 RGB

    Except for the small issue of FWC (full well capacity) - for all intents and purposes, both software and hardware binning are the same as using a larger pixel. If you don't saturate individual pixels in the 2x2 group - you won't notice any difference, and if you saturate one pixel - odds are you are going to saturate the others as well. A big difference between adjacent pixels would mean detail finer than the sampling rate, i.e. under sampling; when sampling properly, most values in that 2x2 group will be very similar, as the signal changes very gradually - it is smooth (except for noise, of course). In any case - it is the same except for that FWC thing, and saturation in pixels is handled differently (by using shorter filler exposures). No camera has infinite FWC - there will always be a star strong enough to saturate your pixels, no matter how big FWC is.
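    To make the per-pixel nature of saturation concrete, here is a tiny sketch with hypothetical numbers (20ke wells, values in electrons):

    ```python
    import numpy as np

    full_well = 20_000  # hypothetical per-pixel full well capacity, in electrons

    # One pixel of the 2x2 group exceeds its own well
    group = np.array([[19_000, 21_500],
                      [18_500, 19_500]], dtype=np.float64)

    clipped = np.minimum(group, full_well)  # each pixel clips at its own FWC
    print(clipped.sum())  # 77000 - signal above 20ke/pixel is lost before binning
    print(group.sum())    # 78500 - only possible with an (imaginary) shared 80ke well
    ```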
  3. vlaiv

    M13 RGB

    There is no single pixel well for binned pixels in hardware binning. There are again 4 x 20000e wells and not a single 80000e well. If you manage to put 79000e into those 4 x 20000e wells without saturation, then you'll be able to do so in software as well. If any one of those 20000e wells saturates - it will do so in hardware binning as well. Hardware binning is the same as software binning that adds electrons together - but it adds them after the exposure and before the ADC stage - before read noise is added - so we get a single dose of read noise for the total value. In fact, it looks like hardware binning might have issues at higher bin factors that software binning does not have. Taken from here: https://www.aavso.org/does-binned-full-well-capacity-differ-among-different-vendor-camera-designs As you can see - first two pixels are summed, but each of them can saturate its own well prior to moving into the serial register. Then the serial register is read out and two successive pixels are again summed in the summing well - which has x4 capacity - meaning that the summing process won't overflow for 2x2 bin (but it might for 3x3 - unless both the serial register and the summing well are properly designed). However, individual pixel full well capacity is still a limit - you can't "overflow" electrons from one pixel into another to save them for the binning stage - if a pixel hits saturation, it will saturate prior to any binning being done.
  4. vlaiv

    M13 RGB

    I understand what you want to say - and you are right - but that happens in both hardware and software binning. When you image with hardware binning turned on - binning happens on readout. With a CCD sensor, the ADC is away from the pixels - and electrons are "marshalled" to the ADC column by column. This is where pixel binning happens. But before that there is the pixel potential well that gathers electrons - and if you saturate that one - any surplus electrons are drained away by the anti-blooming gate (ABG). Only CCD sensors without ABG keep those electrons, and they "spill over" into other pixels - this creates the funny looking saturated columns (vertical streaks) on bright stars. In any case, with an anti-blooming gate - saturated electrons are lost before binning - in both CCDs and CMOS sensors, in both hardware and software binning.
  5. vlaiv

    M13 RGB

    No, of course not - it will be exceeded only where it is normally exceeded - bright star cores or perhaps galaxy cores. It really does not matter if those are exceeded - if we use short subs to fill in those parts. You have a point there - it is down to software availability. Most people just use what the software offers them. This does not mean we should just stick to basic functionality - it means that our software should be upgraded with new features, and then people will use them. I still think that it is better - regardless of the fact that it is not readily available. I also think that a Ferrari is a better sports car than my Dacia Duster - but I still drive the Dacia Duster - a matter of availability.
  6. vlaiv

    M13 RGB

    You can still reach 16bit capacity in some cases. If you have a 14bit camera - anything over bin x2 will potentially overflow 16 bits. With a 12bit camera it is harder to do - you need to bin more than 4x4 in order to potentially overflow. I still feel that binning in software is better than binning on camera / in firmware, for a couple more reasons.

    1. You can use different bin methods - like "smart" binning, which takes into account the noise statistics of each pixel in the group, and possibly hot / dead pixels, and assigns weights. Normally each pixel is weighted at 0.25 with average bin (1/4 of the sum) - but you can tell if a pixel is noisier - CMOS cameras often have telegraph type noise, and based on pixel values and that statistic you can assign different weights. For example, say you have a group of 2x2 pixels and you know that one of those pixels exhibits telegraph type noise (you know this from the statistics of your darks); you also know that those pixels represent background (from their values compared to other values in the image) - you can then decide to lower the contribution of that noisy pixel. Similarly, if you have a dead pixel that does not record any value - you simply exclude it from the group and average the other three pixels.

    2. Regular binning produces something called pixel blur. Or rather, the fact that pixels have physical dimensions (and are not simple points) causes pixel blur. When you bin - you get the same effect as if using a larger pixel. Larger pixel - larger pixel blur. Since we align and stack our subs, we can exploit this and do a special kind of binning - split bin (sketched in code after this post). Split bin produces all the same effects of regular binning - reduction in sampling rate, predictable improvement in SNR, no pixel to pixel correlation - but it avoids the increased pixel blur. This technique is not very well known (I'm not sure if anyone besides me has thought of it, really - never heard of it other than me thinking of it and testing it out - it works) - but it can be easily implemented. It works by producing 4 smaller subs from a regular sub (for x2 bin) - or 9 smaller subs for x3 bin, and so on. Pixel size is kept the same (no increase in pixel blur) - but since we use every other (or every third) pixel, the sampling rate is reduced. We end up stacking many more subs - and the improvement in SNR is obvious.

    3. You can do fractional binning - or a combination of resampling and binning. If you are far from your target sampling rate, you can do this kind of combination. It will result in a bit lower SNR than pure binning - but higher SNR than binning to the "closest" integer bin. Say you natively have 0.6"/px - but you figure out that the optimum sampling rate is 1.5"/px. You can bin x2 to get to 1.2"/px - and that is still over sampling, or you can bin x3 to get to 1.8"/px. Those work with integer bin factors. How about binning x2.5 - to get exactly 1.5"/px? You can't do that with regular binning - but you can with various fractional binning techniques.
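    Here is a minimal numpy sketch of the split bin idea from point 2 above (array shapes are hypothetical; the small subs would then be aligned and stacked like ordinary subs):

    ```python
    import numpy as np

    def split_bin(sub, factor=2):
        """Split one calibrated sub into factor**2 smaller subs by taking every
        factor-th pixel at each possible offset. Sampling rate drops by `factor`,
        but pixel size - and therefore pixel blur - stays the same."""
        h, w = sub.shape
        h -= h % factor  # trim so dimensions divide evenly
        w -= w % factor
        return [sub[dy:h:factor, dx:w:factor]
                for dy in range(factor)
                for dx in range(factor)]

    # e.g. one 4000x3000 sub -> four 2000x1500 subs for an x2 "split bin"
    sub = np.random.normal(100.0, 5.0, (3000, 4000)).astype(np.float32)
    small_subs = split_bin(sub, factor=2)
    print(len(small_subs), small_subs[0].shape)  # 4 (1500, 2000)
    ```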
  7. vlaiv

    M13 RGB

    Don't use median if you use 32bit float (as you should). Sigma stacking (or sigma clip / kappa-sigma reject and all the other names it comes under) handles outliers with a regular average. Average produces a better value than median. For a perfect distribution with a very large number of samples, median and average tend to each other - but for real data, average is often better.

    As far as the image is concerned, if you work with 32bit values - add and average are the same, or at least equivalent. Think of it this way: if you have an image in 32bit format and you multiply it by some number (meaning you multiply every pixel value by that number), will the image change? An image represents relative intensities of pixels with respect to other pixels, and when you add a constant value or multiply by a constant value - the content of the image does not change. You can get the same image using two minute exposures and four minute exposures. You can get the same image using 1e/ADU and 0.5e/ADU gains. Both of these "multiply" the image by some value. You then further multiply pixel values in processing to get them into the wanted range for color rendition - you do linear scaling (and non linear - but I'm not talking about that part). The only difference between addition and average is that with average you divide by some value at the end. The sum of 4 numbers is a+b+c+d, while the average of those numbers is (a+b+c+d)/4. If you divide every pixel in the image (and you will, because you'll calculate the average of every pixel group) - that is the same as dividing the whole image by the same number. Take an average binned image and multiply it by a number and you'll get the sum binned image - and you can do it the other way around as well. Take a median binned image - and there is no theoretical way to convert that into either the average or the sum binned image.

    Quite the opposite. This shows that software binning is better than hardware binning - or binning in firmware. Both hardware binning and software binning "in camera" (or in camera firmware) still work with 16bit data, and there is an upper limit to how large a number you can store - it will produce clipping, as in the example you quoted. If you work with 32bit floating point numbers (and you should), well, there is also a max value there, but it is something like ≈ 3.4028235 × 10^38 and you are not likely to produce it even if you add up all the pixels in the image. When working with 32bit float - you can bin and you will neither clip nor lose precision due to rounding.

    To reiterate: average and add are identical if you use 32bit float precision; median is different. You can easily get the sum binned image from the average binned image by multiplying pixel values by the number of pixels in the bin group (4 for x2 bin), and the same is true in reverse - divide the sum binned image by that number to get the average. There is no way, even in theory, to reconstruct either the add or the average bin from the median bin, and vice versa. Average and add produce good / correct / unsaturated / not rounded up or down values when you use 32bit floating point precision. This is better than hardware bin or software bin "in firmware".
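    A quick numpy check of the add / average / median relationships described above:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    img = rng.normal(1000.0, 30.0, (4, 4)).astype(np.float32)

    # Group the image into 2x2 blocks and bin three ways
    groups = img.reshape(2, 2, 2, 2).swapaxes(1, 2)
    sum_bin = groups.sum(axis=(2, 3))
    avg_bin = groups.mean(axis=(2, 3))
    med_bin = np.median(groups, axis=(2, 3))

    # Average times the group size (4) reproduces the sum exactly;
    # the median cannot be converted to either one.
    print(np.allclose(avg_bin * 4, sum_bin))  # True - same image up to a scale factor
    print(np.allclose(med_bin * 4, sum_bin))  # False in general
    ```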
  8. vlaiv

    M13 RGB

    Add, average and median refer to the math operation used to produce a single value (single sample) from multiple adjacent pixel values - in the bin x2 case there are 4 samples / pixel values that are used (in bin x3 it's nine - a 3x3 grid of samples for each resulting sample). Add and average are virtually the same if the underlying data type is a floating point number. If the underlying data type is integer - different things happen. Add can saturate, as the resulting value is the sum of the individual values and hence higher than each - it can hit the max for that integer type. With average - it can lead to loss of precision. For example - what is the average of 2 and 3? It is obviously 2.5 - but if you need to store it as an integer value - you need to discard that decimal place and you'll store it as 2 (or 3 if you round up) - so additional error is introduced in some cases. If the numbers are floating point - then add and average are the same in principle. Add behaves like regular hardware binning - where electrons from each pixel are joined and then read out - here they are "joined" by addition. Average is just addition divided by the number of samples - and for each group of 2x2 pixels that divisor is the same - it's 4. Average binning is the same as hardware binning where you changed e/ADU by a factor of 4. As far as the content of the image is concerned, they are identical (as you can scale and set black and white point on any image in processing). Median is there just for special cases - pretty much the same as median stacking. Sometimes it can do a better job if your camera has very strange noise patterns - it will, for example, do a better job of eliminating hot pixels if you don't use darks or a hot pixel map. It is less sensitive to outliers and can sometimes be used instead of average (in fact - for a perfect Gaussian distribution both median and average give the same result; that is why median is sometimes used for bias - those should be pure Gaussian type noise).
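    A small sketch of the integer pitfalls mentioned above (values are hypothetical electron counts):

    ```python
    import numpy as np

    group = np.array([40_000, 40_000, 40_000, 40_000], dtype=np.uint16)

    # "Add" in 16bit integers: the true sum 160000 does not fit in uint16
    print(group.sum(dtype=np.uint16))   # wraps around to 28928 - clipped/corrupted
    print(group.sum(dtype=np.uint32))   # 160000 - fine with more headroom

    # "Average" in integers loses precision: the average of 2 and 3 is 2.5
    print(np.array([2, 3]).sum() // 2)  # 2 - the 0.5 is discarded

    # With 32bit float, add and average are both exact and interchangeable
    groupf = group.astype(np.float32)
    print(groupf.sum(), groupf.mean())  # 160000.0 40000.0
    ```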
  9. vlaiv

    M13 RGB

    Ok, so let's clarify things.

    First is sampling rate / resolution. Resolution is maybe not the best term to use, as it appears in so many contexts with different meanings. Sampling rate just means how many pixels / sample points cover a certain part of the sky. It is expressed in arc seconds per pixel. It can be thought of as the size of a pixel at a certain focal length - or maybe a better way to think about it is the distance between image samples (hence sampling rate). When we work with an image - samples don't have size. Physical pixels have size, but samples are dimensionless points - just "coordinates" of the places where we measure light intensity. Sampling rate is the distance between these points.

    Up sampling and down sampling simply mean a change of sampling rate. The general process is called resampling - a change of sampling rate. Up sampling means adding more sampling points - sampling at smaller intervals, while down sampling means using fewer sampling points. The result of up sampling is a lower number of arc seconds per pixel, while the result of down sampling is a higher number of arc seconds per pixel. Say the original sampling rate is 1"/px - or in other words, there is one arc second between each consecutive sample (both in X and Y direction). The up sampled image would then have 0.5"/px and the down sampled image would, for example, have 2"/px (I used x2 in this example to go both up and down - but you can use any number).

    Binning is one form of resampling. In some ways it is the same as any other type of resampling, but in other ways it is quite special. It is the same as other resampling methods in that it changes the sampling rate - in particular, it down samples by a factor equal to the bin factor. Say you bin x2 - you get a twice down sampled image. For bin x3 - you get a three times down sampled image. It is specific in that it:

    - has a very precisely defined improvement in SNR - the improvement in SNR is equal to the bin factor - bin x2 yields x2 improvement in SNR, bin x3 yields x3 improvement in SNR, etc.
    - does not introduce sample to sample correlation (this is math stuff - we can go into it, but it is not that important for this discussion)
    - increases "pixel blur". This is equivalent to saying there is no difference between hardware and software binning - the result of binning in software, where we add / average a group of samples, is the same as imaging with an equivalently larger pixel. This pixel blur is a consequence of the fact that pixels have physical size and are not point samples, while we treat the values they produce as point samples (btw - there is no point in treating them otherwise, in case you were wondering "what if ...").

    In the end - it is important to understand that a certain sampling rate can record information only up to a certain level of detail. You can't record smaller detail than what is possible at a given sampling rate. For example - you can't resolve two stars that are separated by a smaller distance than the sampling rate. This sort of stands to reason. There is well defined theory about sampling - the famous Nyquist-Shannon sampling theorem, which defines when a signal can be perfectly restored: it needs to be band limited, and you need to sample at twice the highest frequency component of that band limited signal in order to record it in such a way that it can be perfectly reconstructed. Up sampling will not decrease the level of detail - but it will not add any detail either.
    Down sampling (and binning as a form of down sampling) will lose detail that is finer than the particular sampling rate can sustain (from the above Nyquist criteria - this really means we lose frequency components higher than half the sampling rate). However - if the image is blurred and does not contain that finer detail - we can down sample it without losing anything - it can be restored to the original even from the lower sampling rate. In the example above where I down sampled the image and then up sampled it - I relied on exactly that. Up sampling will not do anything in terms of detail loss / gain, and down sampling will only lose detail - if the detail is there. If it is not - the result will be the same.

    As for other differences between other sorts of down sampling and binning: other types of resampling first interpolate the point samples with a function. Imagine a bar graph whose bars are pixels, red dots are sample points (values at the "centers" of those pixels) and a green line is the interpolation function. Once we have the green line - we can just "measure" it at any point we want - we can take a set of sampling points at the wanted distance and get a new set of samples - either more of them (up sampling) or fewer of them (down sampling). The type of curve that we "draw" through our original set of sampling points defines the type of interpolation that the resampling uses. It can be bilinear (meaning just linear in both X and Y), bicubic (cubic in X and Y) and various others like splines, or windowed functions like Lanczos - which is windowed Sinc interpolation - and so on. All these different interpolation algorithms have different properties and do different things to your data. They improve SNR by different factors, cause different levels of blur similar to pixel blur, etc. If you look at it like that, then binning x2 is the same as a half sample shift plus down sampling using bilinear interpolation.

    Hope this helps a bit in understanding samples / resampling and binning.
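    For completeness, here is the usual formula connecting pixel size, focal length and sampling rate, with a hypothetical setup plugged in:

    ```python
    # rate ("/px) = 206.265 * pixel_size (um) / focal_length (mm)
    # (206265 is the number of arc seconds in a radian)
    def sampling_rate(pixel_um, focal_mm, bin_factor=1):
        return 206.265 * pixel_um * bin_factor / focal_mm

    # Hypothetical setup: 3.8um pixels on a 1000mm focal length scope
    for b in (1, 2, 3):
        print(f'bin x{b}: {sampling_rate(3.8, 1000, b):.2f}"/px')
    # bin x1: 0.78"/px, bin x2: 1.57"/px, bin x3: 2.35"/px
    ```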
  10. vlaiv

    M13 RGB

    I understand what you are saying. I guess that is personal preference, and this image was obviously taken in rather poor seeing conditions. I rather like it when the image looks like this in the detail it presents (that is the equivalent of x4 bin for this particular image). Now there is no trace of noise in the image, the contrast is good and the image looks sharp. I would be more than happy if the image looked like that at this resolution. But it starts to go a little soft and a little noisy at this resolution (which is 50% of original - equivalent to bin x2). By the way - this is in the ballpark of what can be expected from amateur setups with larger apertures and good mounts - a sampling rate of about 1"/px-1.2"/px. Anything below that is 99.99% unrealistic for amateurs in long exposure imaging.
  11. vlaiv

    M13 RGB

    Not sure that your M13 needs to be binned by a factor of x4. It contains quite a bit of detail.
  12. vlaiv

    M13 RGB

    No - I did not increase to 400% - I first reduced the image to 25% of its original size. Then I took that small image and enlarged it back to 400% of that small image - in effect, I was back at the original size. I reduced to 1/4 and then enlarged that by x4: 1/4 * 4 = 1. Both images are at 100% as recorded. I wanted to demonstrate that the data in the image can be recorded at 1/4 of the resolution with almost no loss in detail - which means it is over sampled by a factor of almost x4. If it had been sampled properly for the detail it contains - it would have had x4 the SNR it has now (no idea if I managed to put everything in the proper tense there ).
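    A minimal Pillow sketch of that round trip (filenames are hypothetical):

    ```python
    from PIL import Image

    img = Image.open("m13_original.png")
    w, h = img.size

    small = img.resize((w // 4, h // 4), Image.LANCZOS)  # reduce to 25%
    round_trip = small.resize((w, h), Image.LANCZOS)     # enlarge that to 400%

    # Compare side by side at 100% zoom: if the round trip looks the same,
    # the image held no detail finer than 1/4 of its sampling rate,
    # i.e. it was over sampled by about x4.
    round_trip.save("m13_roundtrip.png")
    ```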
  13. vlaiv

    M13 RGB

    That is exactly what I'm talking about. Imaging so that your image looks too blurry when viewed 1:1 just means that you are wasting SNR. We might not agree on how much difference in depth there is between the base image and an image that has x2 higher SNR - but any one of us can see it on real data. Just take the data you are working on - and stack only 1/4 of the subs. Simple as that.
  14. vlaiv

    M13 RGB

    That is Dave's original image size when viewed 1:1 or 100%.
  15. vlaiv

    M13 RGB

    Fair enough. I don't know if there is much difference in print quality between you scaling the image down and the printer printing at a lower DPI setting.
  16. vlaiv

    M13 RGB

    I don't know about that. If you bin x2 in the linear stage - you recover x2 SNR. That is the equivalent of imaging an additional x3 of what you already did (for a total of x4 exposure length). If you bin x3 - that is as much as an additional x8 of time. Not doing it sounds to me like refusing to go 50 yards out of your way (really - binning is a couple of clicks) to save half a tank of gas.
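    The arithmetic behind those figures, given that SNR grows with the square root of total exposure time:

    ```python
    # An x k SNR gain from binning is worth a k**2 increase in total imaging
    # time, i.e. an additional (k**2 - 1) times what was already captured.
    for k in (2, 3):
        print(f"bin x{k}: x{k} SNR = x{k**2} total exposure (x{k**2 - 1} additional)")
    ```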
  17. vlaiv

    M13 RGB

    Now comes the million dollar question: why do you feel you need to upscale it after binning? I mean - if the detail is not there - what is the benefit of upscaling?
  18. vlaiv

    M13 RGB

    @DaveS Well, here is an interesting thing - these images can be binned almost x4 without loss of detail. The bottom is the original version you posted and the top one is that same version (as 8bit processed) reduced to only 25% size and then enlarged to 400% of that. Almost no detail is lost (tiny stars appear just a tad sharper in the bottom image). Of course - the noise grain is different (noise is high resolution - it is in fact equally present across frequencies - its grain changes when we discard high frequency components by reducing the sampling rate).
  19. I don't doubt your calculations at all. I'm quite aware how detrimental sky brightness is to capturing faint detail. I was just surprised by the brightness of the IFN being mag24.5. I would have expected more images to capture IFN at that brightness, given that many people manage to capture mag24-25 parts of other objects in their images.
  20. I find it quite interesting that the IFN is only magnitude 24.5. Are you sure of that? We regularly capture magnitude ~25 in the fainter parts of bright galaxies without too much of a problem. For example, here is M51 surface brightness: you can see the image and the magnitude reached in each part - it goes down to mag23 without much depth in the image itself, and with no mention of the tidal tail. I made a similar chart of M51: it shows that much of the tidal tail is mag25 (and similarly - where the line ends in the above image - we have the mag23 band). In any case, reaching mag24.5 looks quite doable. By the way - the above diagram of M51 was made under sqm18.5 skies with 2 hours of exposure on an F/8 telescope and an ASI1600 (with an IDAS LPS P2 filter). It did take some clever manipulation - like extensive x4 binning and morphological transforms of the image - to get it relatively smooth like that.
  21. vlaiv

    M13 RGB

    I'm almost certain that you can select 2, 3, 4, 5 ... in IntegerResample in PI. Just change the outlined factor to the wanted bin factor - 2 for x2, 3 for x3, 4 for x4, etc.
  22. vlaiv

    M13 RGB

    If you are starting a new project, I think it makes more sense to go with bin x1 for integration and later bin to the level that suits you. You might find that seeing was excellent and you only need to bin x3, or that seeing was poor and you bin x5 instead of x4.
  23. vlaiv

    M13 RGB

    If you already have bin x2 data - then keep it and bin x2 additionally in software.
  24. vlaiv

    M13 RGB

    I did this example on 8bit stretched data, so maybe there will be a very small difference - but in reality, what you see as sharpness can really be noise. Mentally we perceive a bit of noise as sharpness - because blurriness loses that "edge" of individual pixels. I'm for software binning after acquisition. The best time to bin is after calibration and before integration. The best way to bin is "split" bin. The simplest way to do it is to just integrate as is and then bin the result. In fact - that is what you can do if you have stacked linear data - just bin that. In any case - binning should be done in the linear state. No noise reduction / no processing.
  25. vlaiv

    M13 RGB

    What is the difference between these two images except for noise grain?