Posts posted by vlaiv

  1. Distance is the issue for sure.

    The prescribed distance for a flattener only really holds if the flattener is matched to that particular scope - otherwise it is a general guideline and one should try different spacings to get good results.

    The above diagram should be right in principle, but again it depends on the optical configuration of the field flattener. I don't have enough knowledge of different flattener designs (if there are indeed different configurations) to provide more detailed insight, but I suspect that both the spacing and the above diagram should be taken as guidelines rather than fact unless the manufacturer specifies them exactly for that flattener / scope combination.

    In any case, the answer is trial and error. There might even be a case where you can't find a "perfect" distance - there is always some level of aberration in the corners. This can happen if the flattener does not correct a large enough field to cover the whole sensor. The 16200 is a large sensor with a 34.6mm diagonal - certain flatteners might have difficulty correcting stars that far off axis (not saying that the TS one is such a flattener - but it can happen).

  2. 23 minutes ago, andrew s said:

    @vlaiv are these differences to be expected? Maybe a Student's t test would tell you if they are.

    Regards Andrew

    Well, I still don't understand the t test completely (will need a bit more reading), but here is a quick calculation of it with an online calculator:

    [attached image: result from the online t-test calculator]

    I just took the set of bias subs from the other camera, split it into two groups, calculated the mean ADU value of each sub and treated those numbers as the members of group 1 and group 2 for the t test.

    The result speaks for itself - it looks like these two groups of bias files (although taken in succession from the same camera) don't belong to the same distribution.

    Not sure if this is the test you had in mind, if not, can you expand on what I should do?
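
    For reference, the same procedure can be done directly in Python - a sketch using scipy (the per-sub mean values below are placeholders, not the actual measurements):

    import numpy as np
    from scipy import stats

    # Placeholder per-sub mean ADU values for the two groups of bias subs -
    # substitute the measured means from your own frames.
    group1 = np.array([363.41, 363.38, 363.44, 363.40, 363.39, 363.42, 363.37, 363.43])
    group2 = np.array([369.95, 369.98, 370.01, 369.93, 369.97, 370.02, 369.96, 369.99])

    # Welch's two-sample t-test (does not assume equal variances)
    t_stat, p_value = stats.ttest_ind(group1, group2, equal_var=False)
    print(f"t = {t_stat:.3f}, p = {p_value:.3g}")
    # A very small p-value suggests the two groups do not share the same mean.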

  3. 3 minutes ago, andrew s said:

    @vlaiv are these differences to be expected? Maybe a Student's t test would tell you if they are.

    Regards Andrew

    I thought about that, but can't figure out how to properly do calculations.

    My simplistic (although possibly not accurate) method would be: bias noise per pixel is of the order of 10e. There are about 8.3Mp on this sensor, so the average value over all of those pixels should be some value +/- 10 / sqrt(8300000) = 0.0035e.

    So 99.7% of the time, one would expect the average of a bias (the average value of all pixels in the bias) to be some value +/- 0.01e (3 sigma). This indicates that, for the most part, if read noise were the only thing at work, the average bias value should be fairly constant.

    There seems to be some other noise component related to bias in the case of the ASI1600: a similar calculation performed on my darks would give something like 0.001 sigma, so most frames should be within +/- 0.003, yet as we have seen from the table above, most values are in the range ~63.945 - ~63.971, so the actual range is about 4 times as large, or +/- 0.013.

    Having said all of that, and comparing the ASI1600 results with my rough calculation - here the bias varies by two orders of magnitude more than expected in the case of these two cameras. The other explanation is of course that my approach to estimating the variance of the mean bias value is completely wrong :D
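
    For reference, the arithmetic above in a couple of lines (assuming read noise alone drives the scatter of the frame average):

    import numpy as np

    read_noise_e = 10.0    # assumed per-pixel read noise in electrons
    n_pixels = 8_300_000   # roughly 8.3 Mp

    sigma_of_mean = read_noise_e / np.sqrt(n_pixels)
    print(sigma_of_mean)       # ~0.0035 e
    print(3 * sigma_of_mean)   # ~0.01 e - the 3 sigma band for the frame average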

     

  4. Here is what the difference looks like for camera 1 between the "normal" subs and the "higher ADU" subs:

    [attached image: difference of the two stacks, showing an obvious gradient]

    I stacked the normal subs into one stack and the higher-ADU subs into another (average stacking in both cases) and then subtracted the resulting stacks. There is an obvious gradient in the difference of the two.

    Here again is a comparison of the two groups side by side, this time using stacks instead of single subs (one stack has 13 subs, the other only 3 and is therefore noisier). The higher-value stack was "shifted" to get the same average ADU value, and the same linear stretch was again applied. I added a dark background so any edge brightening can be seen more easily:

    [attached image: the two stacks side by side on a dark background]

    What could be causing the difference? For a moment I suspected a light leak of some sort, but I doubt it would be visible in 0.001s bias subs and not in 300s darks.
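
    Roughly what was done, as a numpy sketch (file names are placeholders; astropy is assumed for loading the FITS files):

    import numpy as np
    from astropy.io import fits

    # Placeholder file lists - substitute the actual bias sub file names
    normal_files = [f"bias_normal_{i:02d}.fits" for i in range(13)]
    higher_files = [f"bias_higher_{i:02d}.fits" for i in range(3)]

    def average_stack(files):
        # Load each sub as float and average the group pixel by pixel
        return np.mean([fits.getdata(f).astype(np.float64) for f in files], axis=0)

    difference = average_stack(higher_files) - average_stack(normal_files)
    print(difference.mean(), difference.std())   # look for a residual offset / gradient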

  5. 24 minutes ago, Filroden said:

    In the three bias frames on camera 1 that show the higher average ADU, can you detect any significant variance within parts of the image or is the higher value fairly constant across the image?

    It does not look like there is a significant difference. Here is one sub with the "normal" mean ADU level and one sub with the higher ADU level, side by side. I subtracted a uniform level of 6.6 from the higher-ADU one and applied the same linear histogram stretch to both:

    [attached image: a "normal" sub and a higher-ADU sub side by side]

    The FFTs of both frames show pretty much the same thing (this is the central part of each FFT, side by side):

    [attached image: central part of each frame's FFT, side by side]
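
    Something like the following produces that central-crop FFT comparison (a sketch; sub_a and sub_b are assumed to be the two bias subs already loaded as 2D arrays):

    import numpy as np

    def central_fft_magnitude(img, size=256):
        # Magnitude spectrum with DC shifted to the centre, cropped to the central region
        spectrum = np.fft.fftshift(np.abs(np.fft.fft2(img)))
        cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
        half = size // 2
        return spectrum[cy - half:cy + half, cx - half:cx + half]

    # fft_a = central_fft_magnitude(sub_a)
    # fft_b = central_fft_magnitude(sub_b)
    # Comparing log(fft_a) and log(fft_b) side by side shows whether any
    # spatial-frequency structure differs between the two frames.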

  6. Recently I had a chance to do some measurements on bias subs from CCD sensors (two in total) and I've found something that really confused me.

    At first it seemed like an issue with the power source, but a second measurement on one of the sensors, with what could have been the power issue corrected, gave similar if not even worse results.

    I have not taken the subs myself, but I have no reason to believe there were any issues with acquisition (-20C, 0.001s exposure, no light - the regular "protocol" for bias subs). My intention was to measure read noise, so the idea was to take 16 bias subs, split them into two groups, stack, subtract and measure the standard deviation of the result (corrected for the effect of stacking). I expected this sort of bias - bias calibration to produce nice uniform noise with a mean value of 0. Neither set of data actually produced such a result; there was a residual mean value quite different from 0 in each case. This should not happen, so I inspected the mean values of the bias subs in each run and found what I can only describe as instability of the bias. Here are the results:

    Camera 1, measurement on 16 bias subs, nothing done to the data (no e/ADU conversion or anything - values in the table are measured ADU values straight from the 16-bit FITS):

    [attached table: Camera 1, mean ADU and standard deviation of 16 bias subs]

    I have marked the odd results. I would expect the bias mean value to behave roughly like the other 13 subs in this list - a slight variation around some value, in this case 363.4 or thereabouts. What happened with those three outlined subs?

    Camera 2, again the same measurement done on 16 bias subs as with the above camera (same sensor, but different vendor, hence different offset and e/ADU values - the resulting values will differ, but I expect the same to hold - the bias mean value should stay very close to some value across measurements):

    [attached table: Camera 2, mean ADU and standard deviation of 16 bias subs]

    Here we see much more scatter and a larger deviation. Mean ADU levels per sub vary from ~655 up to ~661.5. Not something that I would expect. The standard deviation varies much less than the mean ADU value, so the read noise remains pretty much the same.

    Camera 2, second take with different power supply conditions (and probably a changed gain, since the mean values are larger than in the first batch of subs from that camera while the stddev is lower):

    [attached table: Camera 2, second set of 16 bias subs with different power supply conditions]

    This one is even more interesting - a very large variation in mean ADU values, almost sorted in descending order, with a difference of about 20 ADU from the first to the last bias sub.

    The set of associated darks shows a similar thing (taken at -20C, 300s exposure, again 16 of them used):

    Camera 1:

    [attached table: Camera 1, mean ADU and standard deviation of 16 dark subs]

    This time only two subs out of 16 had a +6 ADU increase (the same thing as with the bias for this camera), while the mean value of the other subs is relatively stable (there might be a slight rising trend - probably due to cooling).

    Camera 2 (set1 and set2 - same 300s, -20C):

    [attached tables: Camera 2, mean ADU and standard deviation for two sets of 16 dark subs]

    One could say that these variations in the mean ADU level of the darks are associated with cooling (how fast it reaches the set temperature and whether there is overshoot), but I don't think they are - look at the noise values, they don't follow the same trend. It is bias related.

    I don't own a CCD camera, nor have I ever worked with one, but I think I have a fairly good understanding of what should be expected in the tables above. As a comparison, here is a similar measurement done on 16 subs from my ASI1600 at -20C, unity gain, 240s exposure:

    [attached table: ASI1600, mean ADU and standard deviation of 16 dark subs at -20C, unity gain, 240s]

    Mean values are nice and "uniform".

    Anybody have a clue what is going on with above CCD sensors and their bias?
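
    For completeness, the read noise measurement described above looks roughly like this (a sketch; file names are placeholders and the result is in ADU):

    import numpy as np
    from astropy.io import fits

    bias_files = [f"bias_{i:02d}.fits" for i in range(16)]   # placeholder names
    subs = [fits.getdata(f).astype(np.float64) for f in bias_files]

    stack_a = np.mean(subs[:8], axis=0)   # first group of 8
    stack_b = np.mean(subs[8:], axis=0)   # second group of 8

    diff = stack_a - stack_b
    print("residual mean:", diff.mean())   # should be ~0 if the bias level is stable

    # Each stack of 8 has its noise reduced by sqrt(8); the difference adds the two
    # in quadrature, so the correction back to single-sub read noise is sqrt(2/8).
    read_noise_adu = diff.std() / np.sqrt(2.0 / 8.0)
    print("read noise (ADU):", read_noise_adu)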

     

  7. 12 minutes ago, Magnum said:

    It's precisely for the reason that the dark current is so low that I don't use darks. As both Atik and Starlight Express state, using darks with these Sony chips can actually introduce noise into the image. I think Craig Stark also came to this conclusion when he first tested the 314: quote "The dark current is so low, however, that you may well be better off simply taking a large stack of bias frames once to use for bias correction and use a simple bad pixel map for hot pixel removal".

    Yes, I'm well aware that you introduce additional noise when doing calibration. You can however control how much noise you introduce. You add both dark current noise and read noise back in when you use a master dark. If you for example dither and use 64 dark subs to create the master dark, you are in fact raising both dark current noise and read noise by about 0.8%. Too much? Use more dark subs to create the master dark.

    Is there an actual reason to use dark calibration if dark current is low and uniform? Yes there is - without removing the dark signal (even a really small one, less than 0.5e per sub), you will have wrong flat calibration. Flat calibration should operate on the light signal alone. Not removing the dark current makes it operate on both - you will be "correcting" what is a uniform offset of dark current, thus creating an "imprint" of the master flat on your image.
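
    The 0.8% figure is just the master dark's residual noise added in quadrature - a minimal sketch:

    import numpy as np

    n_darks = 64
    # Calibrating with a master dark adds its residual noise in quadrature,
    # so the per-sub noise grows by a factor of sqrt(1 + 1/N).
    increase = np.sqrt(1.0 + 1.0 / n_darks) - 1.0
    print(f"{increase * 100:.2f} %")   # ~0.78 %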

  8. Why people choose not to use darks with set point cooling is beyond me :D

    Not much is lost if one uses a cloudy night to build a master dark.

    As for the original question, the 460EX has about 0.0004 e/px/s at -10C per the original specs - and that is a really low dark current. It means that in a 10 minute sub, the average dark current per pixel at -10C will be 0.24e and the associated noise will be less than 0.5e - much lower than the read noise of said camera, which is 5e.

    Sensors have a doubling temperature of about 6C, so going to -16C will only slightly improve things - a 10 minute sub will have about 0.12e of dark current and the associated noise will be ~0.35e, so not much of an improvement over 0.5e. The same holds for going warmer - at -4C the dark current will be about 0.48e and the associated noise ~0.7e - again nothing to be worried about.
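
    Those numbers follow from the quoted dark current spec and the ~6C doubling temperature - a quick sketch:

    import numpy as np

    dark_current_ref = 0.0004   # e/px/s at -10C (per the 460EX spec quoted above)
    doubling_temp_c = 6.0       # dark current roughly doubles every ~6C
    exposure_s = 600            # 10 minute sub

    for temp_c in (-16, -10, -4):
        dc = dark_current_ref * 2 ** ((temp_c + 10) / doubling_temp_c)
        signal_e = dc * exposure_s          # accumulated dark signal in electrons
        noise_e = np.sqrt(signal_e)         # shot noise of that dark signal
        print(f"{temp_c:+d}C: {signal_e:.2f} e dark signal, ~{noise_e:.2f} e noise")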

    • Like 1
  9. If you want to do proper calibration - yes you do.

    There are certain cases where you can get away with not using darks, and when one does not have set point temperature cooling it can be difficult to apply them properly, so you need to be careful.

    Not doing darks can lead to problems with flat calibration.

    You need to try different scenarios and see which one works for you. If you want to try without darks, use bias subs instead whenever darks are required (for dark calibration of lights and for creating the master flat). If you decide to use darks, then you should do dark optimization (some software has an option for this) - it is an algorithm that tries to compensate for mismatched temperature in your darks. It is not a bullet-proof technique either, and it will depend on the characteristics of your camera. Some cameras have issues with bias subs and can't use dark optimization / dark scaling.

    • Thanks 1
  10. 8 hours ago, Rich1980 said:

    Ah so like shooting bracketed photos to merge afterwards, how do you work out exposure times when working to 30 minutes etc? 

     

    I'm gonna need to buy that book for sure! 

    Exposure length depends on several factors.

    There is a limited amount of signal that the sensor can accumulate, and if you "overdo it", it will end up saturated and the actual signal in the saturated areas is lost. This happens on bright parts of the target and star cores in long exposures. For this reason you take additional short exposures to fill in the missing signal - like Olly mentioned.

    Apart from those short "filler" exposures, the best exposure length is as long as you can manage. Fewer long subs will give a better result than more shorter exposures for the same total integration time. However, at some point you reach a place where the difference is so small that it is not worth going longer - any difference that exists will be too small to be perceived. This depends on the read noise of the camera and the other noise sources. Once read noise is sufficiently small compared to the other noise sources, the difference in the final result becomes small. CCD cameras have larger read noise and thus require longer exposures. CMOS cameras have lower read noise and can use shorter exposures.

    One can work out rather easily what the best exposure length is for their setup (scope / camera / light pollution levels), but even then you don't need to use that exact exposure to get good results. Sometimes people choose to use shorter exposures for other reasons - like tracking / guiding precision, bad weather and such. If there is a wind gust or some other event that ruins a sub, less data is wasted if the sub exposure is short (it's better to discard one 2 minute sub than a 15 minute one - less time wasted).
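
    For what it's worth, a common rule of thumb (not necessarily the exact criterion meant above) is to expose long enough that sky background shot noise swamps the read noise - a rough sketch with made-up example numbers:

    read_noise_e = 1.7    # e RMS, example low-read-noise CMOS value (assumed)
    sky_flux_e_s = 0.5    # e/px/s from light pollution for a given setup (assumed)
    k = 5                 # how thoroughly to swamp the read noise (5-10 is typical)

    # Sub length such that the accumulated sky signal is k times the read noise squared
    min_exposure_s = k * read_noise_e ** 2 / sky_flux_e_s
    print(f"~{min_exposure_s:.0f} s per sub")   # ~29 s for these example numbers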

    • Like 1
  11. 7 minutes ago, Rich1980 said:

    Ah I see, and is field rotation where the framing of your shot would basically rotate due to not being aligned right?

     

    Am I right thinking the brighter objects M31, M42, etc would need less guiding as exposure times won't be as long yes? 

     

    Thanks too for the responses. 

    Yes, field rotation happens because the RA axis of the scope is not perfectly parallel to the Earth's rotation axis. Over the course of the night, although you don't touch the scope and camera, each resulting shot will be slightly rotated in relation to the first one (there will be a progression in rotation angle between subsequent frames). This is not an issue as long as the angle stays small during the course of a single exposure. Software that stacks subs can correct for any rotation between subs.

    It only becomes a problem if your polar alignment error is big and there is visible rotation of the field during a single exposure - stars in the corners will start to trail while those in the center will still be round.

    There is no such thing as less guiding :D - you either guide or you don't. But yes, you are right, the need for guiding depends on exposure length. If you are using short exposures there is less need for guiding, but that depends on the mount. Some mounts don't even need guiding for exposures up to a minute or so. Some mounts show trailing of stars after a dozen or so seconds. The better the mount, the longer the exposure it can handle without guiding.

    Mount performance is a complex topic and there are many aspects of it that are related to guiding. If you are guiding, you should not worry much about peak-to-peak periodic error as long as your mount is mechanically smooth - guiding is going to take care of that. On the other hand, a mount with small periodic error that is "rough", with position errors that change abruptly, will be hard to guide with good results.

    • Like 2
  12. It works by having a second camera / telescope (or an attachment to the primary scope called an OAG - off-axis guider) that monitors the position of a star (called the guide star).

    If that position changes by a small amount, it adjusts the pointing of the telescope to compensate.

    These corrections happen over short time periods - on the order of a few seconds.

    You still need good polar alignment and all the rest when autoguiding, as polar alignment error causes more than tracking issues - there is field rotation as well. Polar alignment does not need to be as strict when guiding, but it is beneficial to get it right.

    • Like 1
  13. 1 hour ago, andrew s said:

    If the bias were unstable then so would matching darks and flat darks be, as they include a bias component. In fact it would be unusable. When I measured my ZWO ASI 1600 the bias was perfectly stable across time and switching on and off.

    The only reason to not use scaled darks is due to the nonlinear amp glow.

    Regards Andrew 

    I came to a different conclusion about my ASI1600. The level of bias was not equal in minimal exposure (a few milliseconds) subs vs longer exposure darks. In fact I had a higher average sub/stack value in the bias than in 15 second darks. Otherwise the bias is stable across sessions - two master bias files created at different times will have the same average value and will calibrate out normally.

  14. 1 minute ago, newbie alert said:

    It was a crude method Vlaiv, admittedly, but surely it's the lens that brings the point of focus? Focus was already set from a previous clear sky many weeks before!!

    For me it verified what my spacing was.. and wasn't near where the previous owner said..10mm out

    With focal reducers, the distance between the reducer and the sensor determines the reduction factor, but it also determines the "in/out" focuser travel needed to reach the new focal point.

    Take for example the CCDT67 (AP focal reducer) - it has a focal length of 305mm and is designed to operate at x0.67. When you place it as designed (101mm away from the sensor), it requires about 50mm of inward focuser travel - that is quite a shift of the focal point between the cases with and without the focal reducer.

    Let's see how a distance change of only 1mm between the camera and the focal reducer impacts the focus position, by placing the sensor at 100mm instead:

    optimum setting (101mm spacing):

    inward shift = (distance x fr_focal_length) / (fr_focal_length - distance) - distance = (101 x 305) / (305 - 101) - 101 = 50mm

    with a change of only one millimetre (100mm spacing):

    (100 x 305) / (305 - 100) - 100 = 48.78mm

    The focus point changed by 1.22mm.
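
    The same arithmetic as a small Python function (using the simple thin-lens relations above; spacing and focal length in millimetres):

    def reducer_shift(spacing_mm, fr_focal_length_mm=305.0):
        # Inward focus shift and reduction factor for a simple thin-lens reducer model
        inward_shift = spacing_mm * fr_focal_length_mm / (fr_focal_length_mm - spacing_mm) - spacing_mm
        reduction = 1.0 - spacing_mm / fr_focal_length_mm
        return inward_shift, reduction

    for spacing in (101.0, 100.0):
        shift, factor = reducer_shift(spacing)
        print(f"{spacing:.0f} mm spacing: x{factor:.3f}, inward focus shift {shift:.2f} mm")
    # 101 mm -> x0.669, 50.00 mm;  100 mm -> x0.672, 48.78 mm (a ~1.22 mm difference)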

    So there you go - even a small change in focuser position produces a change of roughly the same magnitude in the measured FR/sensor distance.

    If your focuser had been 2mm away from where it should be, you could have measured 54mm or 56mm as the "optimum" spacing. Even then it would only be the optimum spacing for focal reduction in that configuration, but field flattening is sensitive to these small changes in distance, so if we are talking about field flattening, the only way to get it right is by examining actual point sources over the surface of the sensor at different spacings.

    You can do that and still not waste imaging time / clear sky at night by using an artificial star - just place it far enough away. That way you can change the spacing and examine each corner by moving the scope so that the artificial star lands in each corner before taking an exposure - you can check for tilt and collimation this way as well.

  15. 3 minutes ago, newbie alert said:

    I had this problem, but found out my camera was playing up in the end, magnum on here kindly fixed it for me.. but as a crude way of finding out where the focus point was I removed everything down to the reducer, used a bright torch from the front and moved a box forwards and backwards until I had a sharp focus point and measured that point to the focal reducer.. in my case it was at 55mm..

    Not sure how this works?

    You are saying that you shone a torch at the front of the scope with the FF/FR mounted at the back (but no camera or eyepiece), and then used a projection screen to find the focus position and hence the expected sensor position in relation to the FF/FR?

    That is not a good way to do it, because the actual focal point of scope + FF/FR will depend on the scope lens to FF/FR distance.

    You can verify this by simply moving the focuser in/out - the distance between the FF/FR and the projection screen when in focus will change. It will not tell you anything about the optimum distance for best correction.

  16. I think that the second scope is the Altair Astro branded version of these scopes:

    https://www.teleskop-express.de/shop/product_info.php/info/p9868_TS-Optics-PhotoLine-102mm-f-7-FPL-53-Doublet-Apo-with-2-5--Focuser.html

    https://www.highpointscientific.com/stellarvue-102-mm-access-f-7-super-ed-apo-refractor-telescope-sv102-access

    It should be better corrected for chromatic aberration than the TS scope you listed.

    The TS scope you listed shows some purple around bright objects. That might not be very important to you since you have the 127 Mak to fill the planetary/lunar role and you want a scope for wider field observation (but the 150PDS has the same focal length and more aperture).

    If you want a single scope to do it all - provide a field as wide as the 150PDS and render pinpoint stars, but also show planets and the Moon at high magnification without color issues - then choose the second, better corrected scope.

    If the SW80ED is an option and you want something in a smaller package, then take a look at this one as well - it is an excellent little APO scope:

    https://www.teleskop-express.de/shop/product_info.php/info/p3881_TS-Optics-PHOTOLINE-80mm-f-6-FPL53-Triplet-APO---2-5--RAP-Focuser.html

    Or maybe something like this, if you don't want a triplet:

    https://www.teleskop-express.de/shop/product_info.php/info/p8637_TS-Optics-PHOTOLINE-80-mm-f-7-FPL53-Apo---2-5--Focuser.html

    I think it is better value than the SW80ED in your case, as you get a better focuser, better tube fit and finish, and a retracting dew shield.

    All the scopes I've listed from the TS website might be available under other brands as well - so see what the best purchasing option is for you (shipping and price).

    • Like 1
  17. 1 hour ago, Tommohawk said:

    Re the offset - I thought the idea was that the offset should be present in the lights? Maybe I have this wrong.

    Offset is applied to all subs, so bias has it, dark has it, flats and lights have it.

    The point with your offset is not that it is missing, but rather that it is set too low and should be higher. It is set so low that even in a 300s sub that has it "all", you get minimal values. That should not happen.

    I know that changing software is sometimes not the easiest thing to do, but like I said - don't do things automatically (in this case you were not even aware it was being done by the software). It's best to just capture all the files in the capture application and do your calibration elsewhere - in software made specifically for that, which lets you choose how to do it.

    • Like 1
  18. Here is the best "expert" advice that I can offer:  Just do it! :D

    I mean - try different spacings and select the one that works best.

    I really don't know how it all works - I've never investigated what happens with field flatteners / reducers other than how to mount and use them. My first suspicion was that maybe this corrector does not provide a large enough corrected field, as the 071 sensor is large - but according to TS it should provide a 45mm diameter corrected circle, and that should offer plenty of correction for the 071 even if the corners at 45mm are not perfect (as it has a diagonal of only 28.4mm).

    I'm not even sure that one can specify FF/FR distance based solely on focal length. Focal length is related to field curvature / radius of curvature, but I think that lens design also plays a part - triplets having stronger field curvature than doublets, for example (I might be wrong about this). For this reason I believe the TS table of distances is a rough guideline and not a general rule (maybe suitable for their doublet line of scopes, but deviating for triplets, for example, or scopes using different glass elements).

    The best thing to do with any FF/FR is to experiment with the distance, unless you have a FF/FR that is matched to a particular scope - then the distance specs should be precise.

  19. 25 minutes ago, markse68 said:

    Thanks Vlav, I get the wave cancellation bit but it was the “ so it doesn’t reflect in first place” leading to increased transmission that I don’t get. But I think what you’re saying is the wave function cancels so the probability reduces of the photon appearing after the “reflection” and increased probability of it being transmitted? 🤔

    Yes indeed - it follows both from energy conservation and from the way we calculate actual probability from the wave function (the two are related).

    The total probability related to the wavefunction needs to add up to one "to make sense" - as with ordinary probability, the probabilities of all events must sum to exactly one - "something will surely happen", but nothing can happen with a probability higher than certainty :D

    Once we do that we get proper probabilities confirmed by experiments (there is a "deep insight" into nature right there - why do probabilities in the quantum realm work exactly the way one would expect mathematical probabilities to?), and yes, that is related to why more photons go thru than reflect - if you minimize the probability of reflected photons, and energy is conserved, all those photons must "go" thru - transmission is increased.

    It is again a bit "wrong" to think that more photons are somehow "forced" to go thru - when we do that we imagine little balls whizzing around, but it is the wave function - or more specifically disturbances in the quantum field - that moves around, and the things we think of when mentioning photons are just interactions of that field with other fields (and their wave functions / disturbances).

    There is still one unresolved "mystery" about all of that - how does the wave function "choose" where to interact (or what mechanism leads to interaction in one place and not another)? The accepted view is that it "just happens" at random. But I think there needs to be more to it than that. It's a slippery slope, as it quickly leads into hidden variable territory :D

     

    • Like 2
  20. 17 minutes ago, markse68 said:

    Hi Vlav, can you explain the way anti-reflective coating principle works in similar way? The way the photons don’t reflect as they would cancel if they did? I think it’s related and always had me perplexed 🤔 

    The first thing to understand is the relation between the wavefunction and what we detect as a photon. The relationship between the two is best described like this: the wavefunction carries the information on how likely it is that we will detect a photon at a certain place (in fact the wavefunction describing a quantum mechanical system carries this sort of information for all things that we can measure - what is the likelihood of measuring a certain value, be that energy, position, spin, ...). This is why we have an interference pattern in the first place - there are positions where the wavefunction gives a high probability that we will detect a photon, and places where it gives a low probability. And if we place a detector, photons will be detected with these probabilities over time and the pattern will form.

    By changing the "shape" of the wave function we can, to some extent, "direct" where we want the "influence" to go. One way of shaping the wave function is to let it interfere with itself - it has a wave-like nature, so there are peaks and troughs in it (with light their spacing depends on the wavelength / frequency of the light), and if we split the wavefunction and later "recombine" it, but let the two paths between the same two points have different lengths, we can "adjust" how it aligns with itself. We can either have peak align with peak, or peak align with trough (one will amplify the probability, the other will cancel it down to a lower probability) - in fact, by varying the phase difference we can get anything in between, probability-wise (from low to high).

    Now imagine you have a thin layer of transparent material on top of the lens. Its thickness is of the order of the wavelength of light (or a multiple of it). The wave will partly reflect from the first surface of this layer and partly from the second surface - now we have split the wave into two components. Depending on the thickness of the layer, one component will travel a larger distance than the other.

    [attached diagram: wave reflecting off the first and second surface of the coating layer]

    The path that one wave travels when passing the arrow, reflecting off the first surface and going back to the arrow will be twice the distance from the arrow to the first surface (marked in red). The path that the other wave (it is in fact the same wave function) travels from the arrow head to the second surface and back to the arrow will be twice the distance between the arrow and the first surface plus twice the thickness of the layer (strictly, twice the optical thickness, since it travels through the coating material).

    If the thickness of the layer is such that this extra path makes the orange wave 180 degrees out of phase with the red wave (which depends on the wavelength of light), they will cancel out perfectly because peaks will be aligned with troughs. If they are not perfectly out of phase, the probability will not be 0, but it will be rather small.
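
    As a concrete number - a standard single-layer quarter-wave coating; MgF2 with refractive index ~1.38 is assumed as a typical example, not taken from the post:

    wavelength_nm = 550.0   # design wavelength, roughly mid visual band (assumed)
    n_coating = 1.38        # MgF2, a common single-layer coating material (assumed)

    # Quarter-wave condition: the round trip through the layer adds half a
    # wavelength of optical path, so the two reflected waves cancel.
    thickness_nm = wavelength_nm / (4.0 * n_coating)
    print(f"layer thickness ~{thickness_nm:.0f} nm")   # ~100 nm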

    In fact, you can't get a 100% anti-reflective coating for polychromatic light (light containing a continuum of wavelengths), because the layer thickness does this for precisely one wavelength / frequency (and its harmonics). If the light comes in at an angle, the distance traveled changes and you lose the perfect out-of-phase alignment. You also lose the perfect out-of-phase alignment if the frequency of the light is not exactly what the layer was designed for.

    This is why there is multi-coating - layers of different thicknesses target different wavelengths of light. Multi-coating just means that several different layers are applied - each one "works" on a different wavelength, and not all wavelengths will be covered, but even with a small offset from perfectly out of phase there will be a significant reduction in reflection.

    Btw - this is the same principle used in interference filters - layers of certain thicknesses are stacked on top of each other in such a way that they block certain wavelengths of light, doing the opposite: instead of creating destructive interference in the reflected wave, they create destructive interference in the forward-going wave, lowering the probability that a photon will pass thru the filter.

    There are other things at play here that I don't know enough about to go into detail - like why glass reflects about 4% of light at an air/glass boundary, and whether different materials have a higher or lower percentage and so on, but I'm certain there is an explanation for that as well :D

     

    • Like 2
  21. No magic going on here :D

    It can be interpreted as magic, but in reality it is not. Well, all quantum weirdness could be called magic - I'm referring to the part about the observer ...

    The video was made to imply that the act of observation has something to do with it - but it does not. At least no consciousness is involved.

    Let's examine the regular double slit experiment and see what is going on. First we need to drop the notion of the particle being a little ball. It is not a "marble", or a point mass or anything like that. It is in fact a wave. Not a simple wave, but rather a complex one. We "see" it as a particle because it interacts once, at a localized spot. But when it moves through space, in between interactions, it is a wave (it is a wave even during an interaction in fact - we only think of it as a particle because a measurement ends up with a quantum of something).

    The wave part is rather straightforward, no need to explain that. What about when we "look"? Well, the act of looking brings something interesting to the table - decoherence. It is a consequence of entanglement of the particle with the environment. It is perhaps best summarized like this:

    No measurement case: the state of an electron going thru the double slit is a superposition of the "electron went thru slit one" state and the "electron went thru slit two" state. This superposition interferes with itself and produces the interference pattern.

    Measurement case: the state will be somewhat different here - it will be a superposition of "electron went thru slit one, became entangled with the environment and its path was recorded" with "electron went thru the other slit, was not recorded and there was no entanglement with the environment, but by logic we conclude that we know which slit it went thru, as 'it must be the other case'". This superposition will not interfere with itself because "the state that is entangled with the environment in effect produces a "disturbed" wave that is no longer capable of interfering with the "regular" wave" (I put that in quotes because it is a somewhat layman explanation of what is going on - almost conceptually right but with the wrong terminology - easier to understand this way). As the electron becomes entangled with the environment, the properties of the electron and the environment become correlated, and the environment is rather messy - thermal chaotic motion everywhere - so the electron also gets "messy" and is no longer a "pure" wave that can easily interfere with itself.

    A similar thing happens with the delayed choice quantum eraser experiment, except there is an added layer of complexity. We now have an additional entangled photon produced after the slit - and that photon lands first on detector D0. It may then appear that what happens afterwards (at D1-D4) determines where the entangled photon lands at D0, and that there is some "going back in time and influencing past events".

    What really happens is that we have a single wave function propagating along all paths, and the probability of detecting photons at certain places is altered depending on whether the complete wave function does or does not decohere to the environment.

    A photon hitting detector D0 will hit it somewhere - and that single hit can't be ascribed to either the interference pattern or the two-band pattern with 100% certainty. It has a certain probability of belonging to one distribution or the other. This is important, because we like to conclude that since correlating hits at D0 with D1/D2 gives interference patterns, and correlating hits at D0 with D3/D4 gives two bands, each photon must have belonged to exactly one group or the other - but that is based only on the behavior of an ensemble of particles; there is no reason to think each photon was distinctly in one group or the other. It is the behavior of the wave function in a given setup that produces this correlation in hit position probabilities, not photons being distinctly in one group or the other.

    No influence of the observer, no future events causing past ones - just a regular wavefunction behaving as it normally does. It is our wrong conclusions that paint a picture of something magical going on. There is something genuinely remarkable going on here - the wavefunction and the weirdness of the quantum world - so let's focus on that and stop inventing significance where there is none.

    • Like 2
  22. This one is still bugging me.

    How come the SNR improvement for the x1.5 bin is x2 instead of x1.8?

    I created "simulation" of what happens. I've taken 4 subs consisting of gaussian noise with standard deviation of 1. Then I multiplied first sub with 4, second and third with 2 and did not touch the last one. Added those together and divided with 9.

    As expected, the result of this procedure is noise with a stddev of 0.555, as measured:

    [attached image: measured standard deviation of the combined noise, ~0.555]
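
    That procedure takes only a few lines of numpy to reproduce (a sketch of the same weighted combination):

    import numpy as np

    rng = np.random.default_rng(0)
    subs = [rng.normal(0.0, 1.0, (1000, 1000)) for _ in range(4)]

    # Weighted average with weights 4, 2, 2, 1 (sum = 9), as described above
    combined = (4 * subs[0] + 2 * subs[1] + 2 * subs[2] + 1 * subs[3]) / 9.0

    print(combined.std())        # ~0.556, i.e. sqrt(4^2 + 2^2 + 2^2 + 1^2) / 9 = 5/9
    print(1.0 / combined.std())  # ~1.8 - the expected improvement for independent pixels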

    To check that the algorithm is working, here is a small 3x3 pixel image consisting of the pixel values 0, 1, 2, ..., 8 and the result of binning it x1.5:

    [attached image: the 3x3 test image and its x1.5 binned result]

    Just to verify that all is good, here are the pixel values of the original and the binned version:

    [attached images: pixel values of the original and of the x1.5 binned version]

    Let's do the first few pixels by hand and confirm it's working.

    (4*0 + 2*3 + 2*1 + 1*4) / 9 = 12 / 9 = 1.3333

    (2*3 + 4*6 + 2*7 + 1*4) / 9 = 48 / 9 = 5.3333

    (2*1 + 2*5 + 4*2 + 1*4) / 9 = 24 / 9 = 2.6667

    (4*8 + 2*5 + 2*7 + 1*4) / 9 = 60 / 9 = 6.6667
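
    For reference, the same 3x3 example as a small numpy sketch (weights exactly as in the hand calculation):

    import numpy as np

    def bin_3x3_to_2x2(img):
        # Each output pixel covers a 1.5 x 1.5 input region: one full corner pixel
        # (weight 4), two half pixels (weight 2) and a quarter of the centre pixel
        # (weight 1), normalised by the weight sum of 9.
        out = np.empty((2, 2))
        for i, r in enumerate((0, 2)):
            for j, c in enumerate((0, 2)):
                out[i, j] = (4 * img[r, c] + 2 * img[1, c] + 2 * img[r, 1] + img[1, 1]) / 9.0
        return out

    img = np.arange(9, dtype=float).reshape(3, 3)
    print(bin_3x3_to_2x2(img))   # [[1.3333 2.6667] [5.3333 6.6667]]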

    Everything seems to be in order - resulting pixel values are correct.

    However, the stddev of the image binned x1.5 is lower by a factor of x2 rather than by a factor of x1.8, even if I remove the correlation by splitting it.

    This is sooo weird :D

  23. 2 minutes ago, pete_l said:

    Agreed. It seems to me that all you do by processing correlated noise is to broaden the point spread function. Essentially adding blur. That would definitely suppress the appearance of noise, but it would alter any "data" in the image, too.
    What I would like to see is that instead of a field of just noise, to analyse how this reacts with astronomical targets in the image.

    In the above example there will be some broadening of the PSF due to pixel blur - larger pixels mean larger pixel blur and hence an increased FWHM of the PSF.

    With the split method, if you were oversampled to begin with, that does not happen.

    I wondered why my initial calculation was wrong - it gave an SNR improvement by a factor of 1.8 but the test resulted in 2.0. Then I realized that I did not account for correlation.

    We can test this by doing another experiment - splitting the binned data into sub-images. As the correlation for a bin factor of 1.5 occurs between adjacent pixels, if I split the result of the binning, it should have the proper distribution and a 1.8 improvement in standard deviation. It should also show no modification of the power spectrum. Let's do that :D
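
    Roughly, the split and the power spectrum check look like this (a sketch; `binned` is assumed to be the x1.5-binned noise image from the test above):

    import numpy as np

    def power_spectrum(img):
        # 2D power spectrum with the DC component shifted to the centre
        return np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2

    def split_and_compare(binned):
        # Keeping every second pixel in each direction drops the pairs that share
        # half / quarter input pixels - the source of the correlation.
        split = binned[::2, ::2]
        print("stddev before split:", binned.std())
        print("stddev after split: ", split.std())
        return power_spectrum(binned), power_spectrum(split)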

    Here are results for x1.5 bin of gaussian noise with sigma 1:

    [attached image: measured statistics of the x1.5 binned gaussian noise]

    And power spectrum of it:

    [attached image: power spectrum of the binned noise]

    It shows a slight drop-off towards the edges - the high frequencies.

    Hm, another twist :D

    Here is result of splitting binned sub:

    [attached image: measured statistics of the split binned sub]

    Standard deviation remains the same, but power spectrum is now "fixed":

    [attached image: power spectrum of the split binned sub]

    So this removed the correlation as expected - as can be seen in the power spectrum - but the SNR improvement is still 2.0?

    I'm sort of confused here
