Everything posted by vlaiv

  1. With focal reducers, the distance between the reducer and the sensor determines the reduction factor, but it also determines how much "in/out" focus travel is needed to reach the new focal point. Take for example the CCDT67 (AP focal reducer) - it has a focal length of 305mm and is designed to operate at x0.67. When you place it as designed (101mm away from the sensor) it requires about 50mm of inward focuser travel - that is quite a bit of change between the focal point positions with and without the focal reducer. Let's see how a distance change of only 1mm between camera and focal reducer affects the focus position:
at the designed spacing: (distance x fr_focal_length) / (fr_focal_length - distance) - distance = (101 x 305) / (305 - 101) - 101 = ~50mm of inward travel
with the spacing changed by only one millimeter: (100 x 305) / (305 - 100) - 100 = ~48.78mm
The focus point changed by 1.22mm. So there you go - even a small change in focuser position produces a change of the same magnitude in the FR/sensor distance. If your focuser had been 2mm away from where it should be, you could have measured 54mm or 56mm as "optimum" spacing. Again, that would be the optimum spacing for the focal reducer only in that configuration, but field flattening is sensitive to these small changes in distance, so if we are talking about field flattening - the only way to get it right is by examining actual point sources over the surface of the sensor at different spacings. You can do that and still not waste imaging time / clear sky at night by using an artificial star - just place it far enough away. That way you can change the spacing and examine each corner by moving the scope so that the artificial star lands in each corner before taking an exposure - you can check for tilt and collimation this way as well.
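A minimal Python sketch of that arithmetic, taking the formula above at face value (CCDT67 numbers used as the example; treat it as an illustration rather than exact optics):

```python
# Focus shift introduced by a focal reducer, per the arithmetic above.
# f = focal length of the reducer (mm), d = reducer-to-sensor spacing (mm).
# Formula as used in the post: shift = d*f / (f - d) - d

def focus_shift(d, f=305.0):
    return d * f / (f - d) - d

print(round(focus_shift(101.0), 2))  # ~50.0 mm at the designed 101 mm spacing
print(round(focus_shift(100.0), 2))  # ~48.78 mm with the spacing reduced by 1 mm
```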
  2. Not sure how this works? You are saying that you flashed a torch at the front of the scope with the FF/FR mounted at the back (but no camera or eyepiece) and then used a projection screen to find the focus position and hence the expected sensor position in relation to the FF/FR? That is not a good way to do it, because the actual focal point of the scope+FF/FR combination will depend on the scope lens to FF/FR distance. You can verify this by simply moving the focuser in/out - the distance between the FF/FR and the projection screen when in focus will change - and it will not tell you anything about the optimum distance for best correction.
  3. I think that second scope is the Altair Astro branded version of these scopes: https://www.teleskop-express.de/shop/product_info.php/info/p9868_TS-Optics-PhotoLine-102mm-f-7-FPL-53-Doublet-Apo-with-2-5--Focuser.html https://www.highpointscientific.com/stellarvue-102-mm-access-f-7-super-ed-apo-refractor-telescope-sv102-access and it should be better corrected for chromatic aberration than the TS scope you listed. That TS scope shows some purple around bright objects. It might not be very important to you since you have the 127 Mak to fill the planetary/lunar role, and you want a scope that you can use for wider field observation (but the 150PDS has the same focal length and more aperture). If you want a single scope to do all things - provide wide fields like the 150PDS and render pinpoint stars, but also be able to render planets and the Moon at high magnification without color issues - then choose the second, better corrected scope. If the SW80ED is an option and you want something in a smaller package, then take a look at this one as well - it is an excellent little APO scope: https://www.teleskop-express.de/shop/product_info.php/info/p3881_TS-Optics-PHOTOLINE-80mm-f-6-FPL53-Triplet-APO---2-5--RAP-Focuser.html Or maybe something like this, if you don't want a triplet: https://www.teleskop-express.de/shop/product_info.php/info/p8637_TS-Optics-PHOTOLINE-80-mm-f-7-FPL53-Apo---2-5--Focuser.html I think it is better value than the SW80ED in your case as you get a better focuser, better tube fit and finish, and a retracting dew shield. All scopes I've listed from the TS website might be available under other brands as well - so see what is the best purchasing option for you (shipping and price).
  4. Offset is applied to all subs, so bias has it, darks have it, flats and lights have it. The point about your offset is not that it is missing, but rather that it is set too low and should be higher. It is set so low that even in a 300s sub that has it "all" - you have minimal values. That should not happen. I know that changing software is sometimes not the easiest thing to do, but like I said - don't do things automatically (in this case you weren't even aware it was being done by the software). Best to just capture all the files in the capture application and do your calibration elsewhere - in software made specifically for that - which lets you choose how to do it.
  5. Here is the best "expert" advice that I can offer: Just do it! I mean - try different spacings and select the one that works best. I really don't know how it all works - I never investigated what happens inside field flatteners / reducers - other than how to mount and use them. My first suspicion was that maybe this corrector does not provide a large enough corrected field, as the 071 sensor is large - but according to TS it should provide a 45mm diameter corrected circle and that should offer plenty of correction for the 071 even if corners at 45mm are not perfect (as it has a diagonal of only 28.4mm). I'm not even sure that one can specify FF/FR distance based solely on focal length. Focal length is related to field curvature / radius of curvature, but I think that lens design also plays a part - triplets having stronger field curvature than doublets, for example (I might be wrong about this). For this reason I believe that the TS table of distances is a rough guideline and not a general rule (maybe suitable for their doublet line of scopes, but deviating with triplets for example, or scopes using different glass elements). The best thing to do with any FF/FR is to experiment with distance unless you have a FF/FR that is matched to a particular scope - then the distance specs should be precise.
  6. Yes indeed - it follows both from energy conservation and the way we calculate actual probability from the wave function (those are related). The total probability related to the wavefunction needs to add up to one "to make sense" - as with ordinary probability, the probabilities of all events can't sum to anything other than one - "something will surely happen", but we can't have something happen with higher probability than certainty. Once we do that we get proper probabilities confirmed by experiments (there is "deep insight" into nature right there - why do probabilities in the quantum realm work exactly the way one would expect of mathematical probabilities?), and yes, that is related to why more photons go thru than reflect - if you make the probability of reflected photons minimal, and energy is conserved - all those photons must "go" thru - transmission is increased. It is again a bit "wrong" to think that more photons are somehow "forced" to go thru - when we do that we think of little balls whizzing around, but it is the wave function - or more specifically disturbances in the quantum field - that move around, and the things we think of when mentioning photons are just interactions of that field with other fields (and their wave functions / disturbances). There is still one unresolved "mystery" about all of that - how does the wave function "choose" where to interact (or what mechanism leads to interaction in one place and not another). The accepted view is that it "just happens" at random. But I think it needs to be more than that. It's a slippery slope as it quickly leads to hidden variable territory
  7. The first thing to understand in explaining that is the relation between the wavefunction and what we detect as a photon. The relationship between the two is best described as: the wavefunction carries the information about how likely we are to detect a photon at a certain place (in fact the wavefunction describing a quantum mechanical system carries this sort of information for all things that we can measure - what is the likelihood of measuring a certain value - be that energy, position, spin, ...). This is why we have an interference pattern in the first place - there are positions where the wavefunction is such that there is high probability that we will detect a photon, and places where it gives low probability that we will detect photons. If we place a detector - photons will be detected with these probabilities over time and the pattern will form. By changing the "shape" of the wave function we can to some extent "direct" where we want the "influence" to go. One way of shaping the wave function is to let it interfere with itself - it has a wave like nature so there are peaks and troughs in it (with light their distribution depends on the wavelength / frequency of the light), and if we split the wavefunction and later "combine" it, but let the two paths between the same points have different lengths - we can "adjust" how it aligns with itself. We can either have peak align with peak, or peak align with trough (one will amplify probability, the other will cancel to lower probability) - in fact, due to phase difference we can have anything in between, probability wise (from low to high). Now imagine you have a thin layer of transparent material on top of a lens. Its thickness is on the order of the wavelength of light (or a multiple of it). The wave will reflect somewhat from the first surface of this boundary layer, and it will reflect from the second surface of the boundary layer - now we have a split of the wave into two components. Depending on the thickness of that layer - one component will travel a larger distance than the other. The path that one wave traveled when passing the arrow and then reflecting off the first surface and going back to the arrow will be twice the distance from the arrow to the first surface (marked in red). The path that the other wave (it is in fact the same wave function) traveled from the arrow head to the second surface and back to the arrow will be twice the sum of that distance and the thickness of the layer. If the thickness of the layer is such that adding that distance to the path traveled by the orange wave makes it 180 degrees out of phase with the red wave (this depends on the wavelength of light), they will perfectly cancel out because peaks will be aligned to troughs. If they are not perfectly out of phase - the probability will not be 0, but it will be rather small. In fact - you can't get a 100% anti-reflective coating for polychromatic light (light containing a continuum of different wavelengths), because the layer thickness does this for precisely one wavelength / frequency (and its harmonics). If the light comes in at an angle - the distance traveled will change and you will lose the perfect out of phase alignment. You will also lose the perfect out of phase alignment if the frequency of the light is not exactly as it should be. This is why there is multi coating - layers of different thickness affect different wavelengths of light.
Multicoating just means that there are different layers applied - each one will "work" on a different wavelength, and not all wavelengths will be covered, but even if there is a small offset from perfectly out of phase - there will be a significant reduction in reflection. Btw - this is the same principle used in interference filters - layers of certain thicknesses are stacked on top of each other in such a way that they block certain wavelengths of light - doing the opposite: instead of creating destructive interference in the reflected wave, they create destructive interference in the forward going waves, lowering the probability that a photon will pass thru the filter. There are other things at play here that I don't have enough knowledge about to go into detail - like why glass reflects about 4% of light at an air/glass boundary, and whether different materials have that percentage higher or lower and such, but I'm certain that there is an explanation for that as well
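For the single-layer case described above, a small illustrative sketch of the numbers (this uses the standard quarter-wave rule; the MgF2 index of ~1.38 is just an assumed example, not something from the post):

```python
# Single-layer anti-reflective coating: the two reflections cancel best when
# the extra optical path through the layer (roughly twice its thickness times
# its index n) is half a wavelength, giving a thickness of about lambda / (4*n).

def quarter_wave_thickness_nm(wavelength_nm, n_coating=1.38):  # n ~ 1.38, e.g. MgF2 (assumed)
    return wavelength_nm / (4.0 * n_coating)

for wl in (450, 550, 650):  # blue, green, red
    print(wl, "nm ->", round(quarter_wave_thickness_nm(wl)), "nm layer")

# A layer tuned for 550 nm is slightly "off" at 450 nm and 650 nm, which is
# why multi-coating stacks several layers of different thickness.
```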
  8. No magic going on here. It can be interpreted as magic, but in reality it is not. Well, all quantum weirdness can be said to be magic - I'm referring to the part about the observer.... The video was made to imply that the act of observation has something to do with it - but it does not. At least no consciousness is involved. Let's examine the regular dual slit experiment and see what is going on. First we need to drop the notion of the particle being a little ball. It is not a "marble", or a point mass or anything like that. It is in fact a wave. Not a simple wave, but rather a complex wave. We "see" it as being a particle because it interacts once, at a localized spot. But when it moves through space, in between interactions - it is a wave (it is a wave even during the interaction in fact - we only think of it as a particle because the measurement ends up with a quantum of something). The wave part is rather straightforward, no need to explain that. What about when we "look"? Well, the act of looking brings something interesting to the table - decoherence. It is a consequence of entanglement of the particle with the environment. Maybe it is best summarized like this:
No measurement case: the state of an electron going to the double slit is a superposition of the "electron went thru slit one" state and the "electron went thru slit two" state. This superposition interferes with itself and produces an interference pattern.
Measurement case: the state will be somewhat different in this case. It will be a superposition of "electron went thru slit one, became entangled with the environment and its path was recorded" with "electron went thru the other slit, was not recorded and there was no entanglement with the environment, but using logic we conclude that we know which slit it went thru as 'it must be the other case'". This superposition will not interfere with itself because the state that is entangled with the environment in effect produces a "disturbed" wave that is no longer capable of interfering with the "regular" wave (I put quotations because this is a somewhat layman explanation of what is going on - almost conceptually right but with wrong terminology - it's easier to understand this way). As the electron becomes entangled with the environment - the properties of the electron and the environment become correlated, and the environment is rather messy - thermal chaotic motion everywhere - so the electron also gets "messy" and is no longer a "pure" wave that can easily interfere with itself.
A similar thing happens with the delayed choice quantum eraser experiment, except we have an added layer of complexity. We now have an additional entangled photon being produced after the slit - and that photon lands first on detector D0. Now it may appear that what happens afterwards (at D1-D4) determines where the entangled photon will land at D0 and that there is some "going back in time and influencing past events". What really happens is that we have a single wave function propagating along all paths, and the probability of detecting photons at certain places is altered depending on whether the complete wave function does or does not decohere to the environment. A photon hitting detector D0 will hit it somewhere - and that single hit can't be ascribed to either the interference pattern or the two-lines pattern with 100% certainty.
It has a certain probability of belonging to either one or the other distribution - this is the important thing, because we like to conclude that since correlating hits at D0 with D1/D2 gives interference patterns, and correlating hits at D0 with D3/D4 gives two lines - it must be the case that each photon was in exactly one group or the other - but this is based only on the behavior of an ensemble of particles - there is no reason to think that each photon was distinctly in one group or the other. It is the behavior of the wave function in a given setup that produces this correlation in hit position probabilities, and not photons being distinctly in one group or the other. No influence of the observer, no future events causing past ones - just a regular wavefunction behaving as it normally does. It is just our wrong conclusions that paint a picture of something magical going on here. There is something magical going on here and that is the wavefunction and the weirdness of the quantum world, so let's focus on that and stop inventing significance where there is none.
  9. This one is still bugging me. How come the SNR improvement for the x1.5 bin is x2 instead of x1.8? I created a "simulation" of what happens. I took 4 subs consisting of gaussian noise with standard deviation of 1. Then I multiplied the first sub by 4, the second and third by 2 and did not touch the last one. I added those together and divided by 9. As expected, the result of this procedure is noise with stddev of 0.555 as per measurement: To attest that the algorithm is working, here is a small 9x9 pixel image that consists of pixel values 0, 1, 2, 3, ..., 8 and the result of binning it x1.5: Just to verify that all is good, here are the values of pixels in the original and binned version: Let's do the first few pixels by hand and confirm it's working:
(4*0 + 2*3 + 2*1 + 4) / 9 = 12 / 9 = 1.3333
(2*3 + 4*6 + 2*7 + 4) / 9 = 5.33333
(2*1 + 2*5 + 4*2 + 4) / 9 = 2.66666
(8*4 + 2*5 + 2*7 + 4) / 9 = 6.66666
Everything seems to be in order - the resulting pixel values are correct. However the stddev of the image binned x1.5 is lower by a factor of x2 rather than a factor of x1.8, even if I remove correlation by splitting it. This is sooo weird
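A minimal numpy sketch of that weighted-sum check (assuming plain Gaussian noise arrays; the expected figure is sqrt(4^2 + 2^2 + 2^2 + 1^2) / 9 = 5/9 ≈ 0.556):

```python
import numpy as np

# Four independent Gaussian-noise "subs" with sigma = 1, combined with weights
# 4, 2, 2 and 1 and divided by 9. Independent noise adds in quadrature, so the
# expected standard deviation of the result is sqrt(16 + 4 + 4 + 1) / 9 = 5/9.

rng = np.random.default_rng(0)
subs = rng.standard_normal((4, 512, 512))
combined = (4 * subs[0] + 2 * subs[1] + 2 * subs[2] + subs[3]) / 9.0
print(combined.std())  # ~0.556
```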
  10. In the above example there will be some broadening of the PSF due to pixel blur - larger pixels mean larger pixel blur and hence increased FWHM of the PSF. With the split method, if you oversampled to begin with - that does not happen. I wondered why my initial calculation was wrong: it gave an SNR improvement by a factor of 1.8 but the test resulted in 2.0. Then I realized that I did not account for correlation. We can test this by doing another experiment - splitting the binned data into sub images. As the correlation for a bin factor of 1.5 happens between adjacent pixels - if I split the result of binning, it should have the proper distribution and a 1.8 improvement in standard deviation. It should also show no modification to the power spectrum. Let's do that. Here are the results for an x1.5 bin of gaussian noise with sigma 1: And the power spectrum of it: It shows a slight drop off towards the edges - high frequencies. Hm, another twist. Here is the result of splitting the binned sub: The standard deviation remains the same, but the power spectrum is now "fixed": So this removed the correlation as expected - it can be seen in the power spectrum - but the SNR improvement is still 2.0? I'm sort of confused here
  11. Here are the results, and they are quite "unexpected" to say the least. I ran gaussian noise on a 512x512 image and measured standard deviation at 0.1 intervals (starting at bin x1 and ending at bin x3). Interestingly enough - it almost forms a straight line with dips at certain points - whole numbers and 1.5 and 2.5. Probably because the level of correlation is small at those factors. This type of binning - at least how I see it - is a bit different from using interpolation, especially for bin coefficients larger than 2. Take for example bilinear interpolation - it will only "consult" the 4 neighboring pixels, depending on where the interpolated sample falls. With a factor larger than 2, fractional binning will operate on at least 9 pixels. But you are right, as implemented above, fractional binning is very much like a rescaling algorithm. I plan to implement it slightly differently though - like the split algorithm described in the first post - at least for some fractions (2 and 3 in the denominator and suitable values in the numerator). That will remove this sort of pixel to pixel correlation and implement a sort of "weighting" in the stack, because some pixel values will be present multiple times.
  12. It turns out that it was an artifact due to a wrong condition in a for loop - < instead of <= - but you are right, that sort of artifact happens when you rotate an image by a very small angle and use bilinear interpolation. It is visible only in noise if you stretch your histogram right - due to different SNR improvement: a small fraction of one pixel and a large fraction of another will give a small SNR improvement, but equal size fractions will give an SNR improvement of 1.414... So you get a "wave" in the noise distribution going over the image.
  13. Managed to find a bug - no banding is present now, and the power spectrum displays nice attenuation from center to edges. I'm now trying to hunt down another bug (don't you just love programming ) - I get an SNR improvement of x2 for binning x1.5 for some reason - which means the values are not calculated correctly (or maybe my initial calculation was wrong - I will figure out on a small 3x3 image whether the values are calculated correctly).
  14. I'm getting that sort of result, but here is what confuses me - it is more pronounced at a low fraction than at a high fraction. If I'm binning by a factor of x1.1, I'm getting quite a bit of artifacts in pure noise as a result of the frequency distribution change - but it seems to be only in "one direction". FFT confirms this. It is probably due to an error in the implementation, but I can't seem to find it (as is often the case with bugs). Here is an example: The distribution is no longer gaussian - it is narrower, and there are vertical "striations" in the image. Here is the raw power spectrum via FFT: It's obvious that higher frequencies are attenuated - but again only in a single direction - which is weird and should not happen. There is obviously a bug in the software that I need to find. On the other hand, here is bin x1.5 - the effect is much less pronounced: Both the image and the distribution look ok, but the power spectrum still shows the above "feature":
  15. Ok, here are a couple of things you should try out. First, yes, go for the ASCOM drivers. Those can also be selected in SharpCap as far as I remember. In my (outdated) version of SharpCap, these are available in the menu: SharpCap then issues a warning that you are using an ASCOM or WDM driver for a camera that SharpCap can control directly - but just click that you are ok with that. An alternative would be to use different capture software. NINA seems to be popular free imaging software that you can try out: Second would be examining your subs. I wanted you to post your subs for examination - but really just a few measurements and an overview of the FITS header information can help. You don't need to post complete files, you can just post the following:
1. Statistics of each sub - one flat, another flat, related flat darks, ...
2. Histogram of each sub
3. FITS header data
For the sub that you uploaded, here is that information: From this I can tell you straight away that you have an offset issue. You should not have a value of 16 in your sub even for bias subs, let alone subs that contain signal! Here is the histogram for example: This one looks rather good, but I can bet that darks and flat darks will be clipping to the left due to the offset issue. Here is the FITS header: So I see that you have your gain set at 300, that the temperature was -15, that you used a 5 minute exposure and so on... When comparing this information between subs one can see if there is a mismatch or whatever. Most important thing - don't do "auto" things. When you let software do something automatically for you - you lose a bit of control. In my view it is best to know exactly what the software is doing if you are going to do that. Collect your darks, your flats and your flat darks like you do lights and then you can make your own masters the way you want. It also gives you the chance to examine individual subs and see if there is something wrong with them. Only when you know that your settings work "manually" and you know what the software is doing automatically (by adjusting parameters / reading what it does in a certain procedure) - then use the automatic approach to save time.
  16. Yes, another thing - could you post the above files as single fits files for examination (no stacking, no calibration, just a single unaltered sub from the camera in fits format)? That might give us a clue about what is happening.
  17. What drivers and software are you using to capture these subs? You should be using ASCOM drivers for DSO imaging. SharpCap offers native drivers for ZWO cameras - these enable very fast frame rates and are suitable for planetary imaging but might not be the best option for long exposure.
  18. I figured that as well. Do you by any chance have an idea whether this correlation is going to affect the results of stacking? Does it change the distribution of noise in such a way that it lowers the SNR gains in stacking? I don't think that local correlation of noise is necessarily a bad thing in astro imaging. It certainly can be when you are doing measurements, but for regular photography I can't see any downsides except the possibility of an altered noise distribution.
  19. This is actually not correct, as there will be a third case where we have a distribution like 2x2, 2x2, 2x2 and 2x2 (second pixel in the second row of resulting pixels) which will have an SNR improvement of x2. I'm doing this off the top of my head, but I think that the resulting SNR improvement in this case will be 3/8 * 1.6 + 3/8 * 1.7889 + 1/4 * 2 = ~1.771, but I can't be certain that I got it right this time (could be wrong again). Maybe the best thing to do is to implement it and measure - will do that tomorrow and post results.
  20. I think it could - no wonder the numbers you got have to do with the square root of two, that is something that does not surprise me (because we work in 2d and area goes with the square). 1.414 is the square root of 2 (well, rounded down), and 0.707 is 1/sqrt(2). We do have to be careful when doing things with arbitrary fractions. There are a couple of reasons why I presented fractional binning in the way I did above. Doing an x1.5 bin is going to be beneficial in two ways:
1. It can be implemented in "split" fashion rather easily (not as easy with an arbitrary fraction - you get a very large number of subs that mostly repeat values) - the alternative is to do actual binning on each sub and increase pixel blur.
2. It is symmetric.
I'll expand on this a bit and we can then look at another example to see if there is indeed the difference that I believe exists. When we do an x1.5 bin like I explained above we get a certain symmetry. If we observe the first 2x2 group of resulting pixels we can see that:
- the first one contains one whole pixel, two halves of other pixels and a quarter of a fourth pixel
- the second one will contain the same thing - one remaining half (the other half of the orange group), one full pixel (the next original pixel, or group if you will, in the X direction), 1/4 of the blue group and again half of a pixel in the bottom row
- the same goes for the third and fourth pixels - the ingredients will be the same, just the geometric layout will be different, but the math involved for each will be the same.
Let's now try a different approach where we lose this sort of symmetry. Let's break up the original pixels into groups of 9 (3x3) and bin 4x4 of these little sub pixels. If we go in the X direction, the first resulting pixel will consist of: 3x3, 3x1, 1x3 and 1x1. But the second resulting pixel will consist of a slightly different "configuration" - it will be: 2x3, 2x3, 2x1, 2x1. We need to do the math for each of these and see if they add up to the same thing (I'm guessing they don't, but we need to check just in case):
9^2 + 3^2 + 3^2 + 1^2 = 81 + 9 + 9 + 1 = 100
6^2 + 6^2 + 2^2 + 2^2 = 36 + 36 + 4 + 4 = 80
So the first pixel will have something like 16/10 = 1.6 improvement in SNR, but the second pixel will have something like 16/sqrt(80) = ~1.7889. Now this seems a bit strange, that different pixels have different SNR improvements - but again this is due to noise correlation, and if you do this on single subs and align those subs afterwards - these SNR improvements will average out. But the point is - we can't determine the SNR improvement based on a single resulting pixel if there is asymmetry like above. In fact, in the above case we will have something like:
1.6, 1.7889, 1.6, 1.7889, ....
1.7889, 1.6, 1.7889, 1.6, ....
1.6, 1.7889, 1.6, 1.7889, ....
Every other pixel will have a different SNR improvement in a checkerboard pattern. There is about half 1.6 and half 1.7889 - and I honestly don't know how the SNR improvement will average; will it behave like regular numbers or like noise? If it behaves like noise then the average will be sqrt(1.6^2 + 1.7889^2) = 2.4, but if it behaves like normal numbers the average will be (1.6 + 1.7889) / 2 = 1.69445. It stands to reason that it will average like regular numbers, so I'm guessing that the total average SNR improvement after stacking will be 1.69445. This logic is much harder to apply when you have a bin factor like x1.414, because the number of possible combinations of areas that go into resulting pixels is infinite, and each one will have a slightly different SNR improvement - and in the end you have to average those out.
Probably the easiest way to do it would be to actually implement "regular" fractional binning (that one is easy to measure because it works on a single sub and does not need alignment and stacking afterwards) and test it on a bunch of subs with synthetic noise, measuring the standard deviation of the result to see what level of improvement there is for each fractional bin coefficient. Maybe we can derive some sort of rule out of it.
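For reference, a small sketch of the per-pixel arithmetic used above (assuming the rule that fully correlated copies add linearly while independent source pixels add in quadrature):

```python
import math

# SNR gain for one output pixel, given the sub-pixel areas that the
# contributing original pixels occupy in it: signal scales with the sum of
# areas, noise with the square root of the sum of squared areas.

def snr_gain(areas):
    return sum(areas) / math.sqrt(sum(a * a for a in areas))

print(snr_gain([9, 3, 3, 1]))  # 1.6    - the 3x3, 3x1, 1x3, 1x1 pixel above
print(snr_gain([6, 6, 2, 2]))  # ~1.789 - the 2x3, 2x3, 2x1, 2x1 pixel above
print(snr_gain([4, 2, 2, 1]))  # 1.8    - the x1.5 bin pixel from the first post
```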
  21. I'm not sure it would produce a result of 2.0. Let's do the math and see what happens. Instead of splitting each pixel into 4 smaller ones (upsampling by 2 as you would put it), let's split into 4x4 and repeat the math. This time we will be averaging 36 sub pixels instead of just 9, right? (6x6 grid instead of 3x3). So we need to divide the total noise by 36. The total noise in this case will be sqrt( (16*red)^2 + (8*green)^2 + (8*orange)^2 + (4*blue)^2 ) = sqrt( 256*noise^2 + 64*noise^2 + 64*noise^2 + 16*noise^2 ) = sqrt( 400 * noise^2 ) = 20*noise. So the total noise will be noise * 20 / 36, and we need the reciprocal of that to see the increase in SNR, so it will be 36/20 - and that is still x1.8. You gain nothing by splitting further. It is the respective "areas" of each pixel that add up to form the larger pixel that determine the SNR increase - not the number of splits. I used splitting of pixels because that allows me to use the same principle that I outlined before, which does not introduce additional pixel blur. By the way, there is an explanation for this, one that is not immediately obvious. With integer binning we do not introduce correlation between the noise of neighboring pixels. When we sum a 2x2 grid of pixels, those 4 pixels contribute to the value of one resulting pixel and no other. Each resulting pixel gets its own unique set of 4 original pixels that make it up. With fractional binning - this is not the case. Look at the image in my first post (the "color coded" one) - the resulting pixel contains the whole red group (a single pixel), half of the orange and half of the green group and only a quarter of the blue group. This means that the green group contributes to both this pixel and the next one vertically. Orange does the same but contributes to the next pixel to the right, and blue contributes to 4 adjacent resulting pixels. This introduces correlation in the noise of those resulting pixels - their noise is no longer independent. This is a concern if we are doing any sort of measurement, but it is not a concern for creating a nice image. In fact, if we use any sort of interpolation algorithm when aligning our final subs for stacking (other than plain shift and add) - which implies sub pixel alignment of frames - we are introducing both signal and noise correlation in the pixels of the aligned subs. It does not hurt the final image.
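A quick illustrative check of that "splitting finer changes nothing" arithmetic, using the same area rule as before:

```python
import math

# With a 4x4 split, one output pixel is built from areas of 16, 8, 8 and 4
# sub-pixels; signal goes with the sum, noise with the root-sum-of-squares.

areas = [16, 8, 8, 4]
print(sum(areas) / math.sqrt(sum(a * a for a in areas)))  # 36 / 20 = 1.8, same as before
```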
  22. Yes there is. Even with Poisson type noise. Consider the following example: we have a barber shop and people come in to have their hair cut. On average 2.4 people will come into the barber shop per hour. There will be hours when no one enters, and there will be hours when 4 people appear. The signal in this case is 2.4, and the measured value per hour can be 0 or 1 or 2 or 3 or 4.... If no one appears, the error for that hour will be -2.4, i.e. measured_value - true_value. If 4 people walk in during a particular hour the error will be 4 - 2.4 = 1.6. The same is true for photons - there is a certain brightness from a target expressed in number of photons per time interval - that is our signal value. The measured value will be either less or more than this. Errors can be both negative and positive. Noise, on the other hand, is the average error magnitude, or can be thought of as the likelihood that the measured value is close to the true value. It has a magnitude and that magnitude is always positive - like a vector, it has direction and size and we consider the size to be always positive. Binning reduces the error, and hence the spread of errors, or the average error magnitude - the noise. The signal is still 25 counts, but the deviation from that signal - the average error - goes down when you bin. Binning is nothing more than stacking images - the same way stacking improves signal to noise ratio, so does binning. The same thing happens when you average four subs - you in fact average four "overlapping" pixels, and with binning - you average four adjacent pixels, and provided that you are oversampling - those 4 pixels will have roughly the same signal, same as in four subs. There is no more data, and splitting the original sub into 4 subs is the same as binning - except for one thing - pixel blur. Pixel blur is a consequence of pixels having finite size - the larger the size, the more blur is introduced. Sampling should be done with infinitesimally small points, but in reality it is not. Because of this, there is a bit of blur. When you add 4 adjacent pixels, you effectively get the same result as using a larger pixel. With splitting you lower the sampling rate (larger space between sampling points) but you keep the pixel size the same. Each of these 4 subs will be slightly shifted in the alignment phase, but there will be no enlargement of the pixels and no increase in pixel blur.
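A minimal numpy sketch of the barber-shop / photon-counting picture above (Poisson arrivals with a true rate of 2.4; the figures are just illustrative):

```python
import numpy as np

# Poisson "arrivals" with true rate 2.4 per hour: individual errors have both
# signs, their RMS is about sqrt(2.4), and averaging groups of 4 measurements
# (binning / stacking) halves that spread.

rng = np.random.default_rng(1)
counts = rng.poisson(lam=2.4, size=10000)
errors = counts - 2.4
print(errors.mean())                              # ~0: errors of both signs cancel on average
print(errors.std())                               # ~1.55, i.e. sqrt(2.4) - the noise
print(counts.reshape(-1, 4).mean(axis=1).std())   # ~0.77 - averaging 4 gives x2 SNR improvement
```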
  23. I don't think this is a very good way to do things. Your noise contains signal in it to start with. Where are the negative values? Noise can't be characterized by a single number being the error from the true value. That is just error, not noise. Noise has magnitude and is related to the distribution of samples that represent true_value + error, or if you remove true_value, the distribution of the errors. Having noise of 1 does not mean that the error is 1 - it means that (depending on the type of distribution of the random variable - let's take Gaussian) 68% of errors will be in the range -1 to 1, 95% of all errors will be in the -2 to 2 range and 99.7% of all errors will be in the -3 to 3 range. For noise of magnitude 1 there is, for example, a 0.15% probability that the error will be larger than 3 - but it can happen. If you have a pixel that has a value of 17 - you can equally assign an error of -1 or +1 to it, or -2 or 2, or even -5 and 5 (it can happen, but with low probability). Your errors show bias - they are all positive values - so they don't have a proper distribution to be considered noise (although they might be random). If you want to create such a spreadsheet and do a "micro case", you need to randomly produce numbers that follow a certain distribution. But I think it is better to operate on synthetic images rather than on cells - easier to process. Here is an example. This is an image with Poisson noise distribution for a uniform light signal of value 25. Next to it is a measurement of the image - it shows that the mean value is in fact 25 and the standard deviation is 5 (which is the square root of the signal). Here is the same image binned 2x2, with measurement. Now you see that the signal remained the same but the standard deviation in fact dropped by a factor of 2. SNR has improved by a factor of x2. Here it is now binned x3. There is no question about it - binning works, and it works as it should. You should not think about errors being the noise - you need to think about distributions being the noise, having magnitude like a vector rather than being a plain old number. Here is an example: let's do a 2x2 pixel case by hand - with noise that we invented, but we will try to at least honor the distribution and the fact that errors can be both positive and negative. Let's say that the base number / true value is 25 for 4 pixels, but we don't know that; we simply have:
25, 26
23, 26
The errors in this case are:
0, 1
-2, 1
What would the average error be? A simple average would in this case be 0, and that would imply that on average there is no error in the values - but there is. This is why we don't use a simple average for errors but rather something similar to the RMS used for sine waves - root mean square. In the above example our average error will be sqrt((0^2 + 1^2 + (-2)^2 + 1^2)/4) = sqrt(6/4) = ~1.225. This means that our error is on average 1.225 as a displacement from the true value (either positive or negative). Let's average our pixels and see how much the error is then: (25 + 23 + 26 + 26) / 4 = 25. The error is reduced - now it is closer to the true value. That is to be expected: if you average some positive and some negative values the result will be closer to zero (you see why having all positive values did not produce good results for you?).
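The hand-worked 2x2 example above in a few lines of numpy, for anyone who wants to play with the numbers:

```python
import numpy as np

# True value 25, four measured pixels with errors of both signs; RMS of the
# errors is the noise estimate, and the plain average lands back on the truth.

true_value = 25
pixels = np.array([25, 26, 23, 26])
errors = pixels - true_value
print(np.sqrt(np.mean(errors ** 2)))  # ~1.225 - RMS "average error"
print(pixels.mean())                  # 25.0   - averaging pulls towards the true value
```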
  24. Yes - I thought of building one unit myself. Nothing much to it the way I envisioned it. It will consist of an old refrigerator - just the casing and insulation, no need for other components so it can be a broken unit. I'll need a regular refrigerator and a deep freezer to prepare and store the goods. The first step would be to take fruits / vegetables, spread them onto trays and get them cooled to regular refrigeration temperature - 5C or so. After that I load them into the "fast freeze" unit. That is the old fridge casing with some modifications:
- an inlet that can be connected to a high pressure tank, with a nozzle
- maybe a safety valve in case pressure builds up (but on second thought it might even create lower pressure inside if cool air starts mixing with the hot air inside and there is a drop in overall temperature in a closed constant volume - pressure will go down; not sure if that will offset the added air)
- a pressure gauge and thermometer - to monitor what is going on
After release of the compressed air, if the temperature drops enough it should start rising again. If the insulation is good - maybe it will provide a cold enough environment for deep freezing to be completed (I have no clue how long it will take - maybe 15 minutes or so?). The next step is using regular freezer bags to package the goods and storing them in the regular freezer - ready to be used in the winter months. The whole procedure will take maybe less than an hour, so I can do multiple rounds and store enough goods without much expense (except for the electricity bill for the compressor and refrigeration), but like I said - it's more about producing and storing my own food than any economic gains.
  25. Fractional binning is not widely available as a data processing technique so I'm working on algorithms and software for it, as I believe it will be a beneficial tool to both get proper sampling rate and enhance SNR. In doing so, I came across this interesting fact that I would like to share and discuss. It started with a question - if regular binning x2 improves SNR by x2 and binning x3 improves SNR by x3 and so forth, how much does fractional binning improve SNR? Let's say for argument's sake that we are going to bin x1.5 - how much SNR improvement are we going to get? It sort of stands to reason that it will be x1.5. In fact, that is not the case! I'm going to present a way of fractional binning that I'm considering and will derive the SNR improvement in the particular case of x1.5 binning - because it's easy to do so.
First I'm going to mention one way of thinking about regular binning, and a feature of binning that this approach improves upon. The binning discussed here is software binning - not hardware binning. In regular binning x2 we are in fact adding / averaging 2x2 groups of adjacent pixels. The following diagram explains it: We take the signal from a 2x2 group of pixels, add it and store it in one pixel, and we do the same for the next 2x2 group of pixels. This leads to the result being the same as if we had used a larger pixel (in fact x4 larger by area, x2 along each axis). SNR is improved, the sampling rate is halved and there is another thing that happens - we increase pixel blur because we are in effect using a "larger" pixel. There is a slight drop in detail (very slight) because of this that is not due to the lower sampling rate.
There is a way to do the same process that circumvents the issue of pixel blur. I'll create a diagram for that as well to help explain it (it is the basis for fractional binning so it's worth understanding): Ok, this graph might not be drawn most clearly - but if you follow the lines you should be able to get what is going on. I'll also explain in words: we in fact split the image into 4 sub images. We do this by taking every 2x2 group of pixels, and each pixel in that group goes into a different sub image - always in the same order. We can see the following: samples are evenly spaced in each sub image (every two pixels in the X and Y direction of the original image), and the sampling rate has changed by a factor of 2 - same as with regular binning, we have an x2 lower sampling rate. Pixel size is not altered, and values are not altered in any way - we keep the same pixel blur and don't increase it. We end up with 4 subs in place of one sub - we have x4 more data to stack, and as we know, if we stack x4 more data we will end up with an SNR increase of x2. This approach does not improve the SNR of an individual sub, but it does improve the SNR of the whole stack in the same way as bin x2 improves an individual sub - with the exception of pixel blur.
Now let's see what that has to do with fractional binning. Here is another diagram (hopefully a bit easier to understand - I'll throw in some color to help out): Don't confuse the color with a bayer matrix or anything like that - we are still talking about a mono sensor, the color just represents "grouping" of things. The black grid represents original pixels. Each color represents how we "fictionally" split each pixel - in this case each pixel is split into a 2x2 grid of smaller pixels - each having exactly the same value. I stress again this is a fictional split - we don't actually have smaller pixels or anything. If we want to do fractional binning x1.5 - we will in fact sum / average the outlined group of "fictional" pixels.
Each purple outline will be spaced at a distance 1.5x larger than the original pixel size - so we have the appropriate reduction in sampling rate. In reality the algorithm will work by splitting subs like above - again to avoid pixel blur, but for this discussion we don't need that. We need to see how much SNR improvement there will be. Let's take the case of averaging and examine what happens to the noise. The signal (by assumption) is the same across the pixels involved, so in the average it stays the same. The reduction in noise will in fact be the improvement in SNR. We will be averaging 9 "sub pixels" so the expression will be: (total noise of summed pixels) / 9. What is the total noise of the summed pixels? Noise adds like the square root of the sum of squares. But we have to be careful here. This formula works for independent noise components. While the noise in the first pixel (red group) is 100% independent of the noise in the other pixels (orange, green and blue groups), it is in fact 100% dependent within the group itself - and adds like regular numbers. It is 100% dependent because we just copy the value of that pixel four times. So the expression for the total noise will be:
sqrt( (4*red_noise)^2 + (2*orange_noise)^2 + (2*green_noise)^2 + (blue_noise)^2 ) = sqrt( 16*red_noise^2 + 4*orange_noise^2 + 4*green_noise^2 + blue_noise^2 )
Because we assume that the signal is the same over those 4 original pixels, the noise magnitude will be the same - so although red_noise, orange_noise, green_noise and blue_noise are not the same values in the vector sense - they do have the same magnitude and we can just replace each with "noise" at this stage:
sqrt( 16*noise^2 + 4*noise^2 + 4*noise^2 + noise^2 ) = sqrt( 25 * noise^2 ) = 5*noise
When we average the above sub pixels we end up with the noise being 5*noise / 9, or an SNR improvement of 9/5 = x1.8. That is a very interesting fact - we assumed that it would be x1.5 but the calculation shows that it is x1.8. Now, either I made a mistake in the calculation above or this is in fact true. Any thoughts?
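A small numpy sketch of the "split instead of bin" idea from this post (assuming a plain noise image; splitting every 2x2 group into four sub-images and stacking them is numerically the same as a 2x2 average bin, but without enlarging the pixels):

```python
import numpy as np

# Split a noise image into 4 sub-images (one pixel of every 2x2 group goes to
# each sub), then stack them. The mean of the four subs equals the 2x2 average
# bin, so the noise drops from sigma = 1 to ~0.5: an x2 SNR improvement.

rng = np.random.default_rng(2)
img = rng.standard_normal((512, 512))

subs = [img[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)]
stacked = np.mean(subs, axis=0)
print(stacked.std())  # ~0.5
```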