Everything posted by vlaiv

  1. I just realized that I too subscribe to the Copenhagen interpretation. Well, not the original one that treats the measuring apparatus as a classical system as opposed to a quantum system, but I also believe there is "wave function collapse" - in a sense. I would not call it collapse; I would describe it as something else - a transition from a superposition of states into an ensemble of states. The difference being: a superposition of states, say 1/2 up + 1/2 down, is real - the system is in this complex dual state where the spin does not have one definite value. An ensemble, on the other hand, is just due to our lack of knowledge - the spin is either up or down, but we don't know which one, so we ascribe 1/2 probability to each. Measurement, or rather decoherence, causes this transition. In that sense - it is wave function "collapse".
  2. @andrew s I think that the third aspect is not problematic, or rather that it resolves outside of the QM domain. It is related to the fact that once a measurement is made, evolution of the QM system "starts from that point" and the proper probability distributions expect the system to be in that "prepared" state. This is the fundamental part of "wave function collapse". However, we can see that once the QM system transitions into the "ensemble" domain, it is no longer in quantum superposition and regular probability rules apply. Then we no longer need to model the problem as a QM problem; we can model it as an ordinary probability problem.
     For this reason, I would suggest we look at the following: imagine we have a regular coin and we do two tosses with it - but we do it in two different ways.
     1. We toss the coin twice in a row but don't look at the intermediate result - we let an automatic "score keeper" record the results. We are expected to calculate the probabilities of the different "total" outcomes, and we are right to say: 1/4 each for hh, ht, th and tt.
     2. We toss the coin once, look at it, record the value, and want to give a final prediction before we toss it the second time. Now we can clearly see that we can no longer predict the final outcome as being 1/4 for each of hh, ht, th and tt. If we recorded that the first toss landed heads, we need to modify our prediction to 1/2 hh and 1/2 ht.
     Decoherence deals with the measurement problem because it prevents superposition of complex entangled states once a measurement is made and reduces things to normal probabilities. Once the system decoheres it is effectively in one of the eigenstates, because it can't be a superposition of them any more. It is just that we can't tell with certainty which eigenstate it will be, and our prediction can only give us classical probabilities of what that state can be.
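To make the coin-toss comparison above concrete, here is a minimal Python sketch (my own illustration, not from the post) contrasting the marginal prediction for two tosses with the conditional prediction once the first toss is known to be heads. The 1/4 and 1/2 figures fall straight out of simple counting.

```python
from itertools import product

# All equally likely outcomes of two fair coin tosses.
outcomes = ["".join(p) for p in product("ht", repeat=2)]  # ['hh', 'ht', 'th', 'tt']

# Scenario 1: no intermediate look - every sequence has probability 1/4.
marginal = {o: 1 / len(outcomes) for o in outcomes}

# Scenario 2: first toss observed to be heads - condition on that result.
heads_first = [o for o in outcomes if o[0] == "h"]
conditional = {o: (1 / len(heads_first) if o in heads_first else 0.0) for o in outcomes}

print("no look   :", marginal)      # 1/4 each
print("saw heads :", conditional)   # 1/2 hh, 1/2 ht, 0 th, 0 tt
```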
  3. Who are "we"? The first copy, second copy, third copy or fourth copy? We can be any of them with equal probability, unless "we" are somehow special. Is there anything special about the hh copy of us?
  4. What exactly were you not pleased with? What was the DPI of your print? The minimum you should go with is 300dpi, and use 600dpi if you can. The size of your image is roughly 2500 x 2000 pixels. If you printed at 300dpi, this would make it 8.33" x 6.67", or about 21.2 cm x 17 cm. That is about an A5 sheet of paper, which is rather small, and 300dpi is the lowest resolution you should use for printing. At 600dpi you are looking at A6 or postcard size. How large did you want to print it?
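A quick sketch of the arithmetic above (my own illustration; the pixel counts and DPI values are just the ones mentioned in the post):

```python
def print_size_cm(width_px: int, height_px: int, dpi: int) -> tuple[float, float]:
    """Physical print size in centimetres for a given pixel size and print DPI."""
    inch_to_cm = 2.54
    return (width_px / dpi * inch_to_cm, height_px / dpi * inch_to_cm)

for dpi in (300, 600):
    w, h = print_size_cm(2500, 2000, dpi)
    print(f"{dpi} dpi -> {w:.1f} cm x {h:.1f} cm")
# 300 dpi -> 21.2 cm x 16.9 cm (roughly A5)
# 600 dpi -> 10.6 cm x  8.5 cm (roughly A6 / postcard)
```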
  5. It has solved the measurement problem - it just did not provide us with a mechanism for the "determining" part. I believe it to be the following, and I'll use Schrodinger's cat here because that is the reason the poor thing is sentenced to death in the first place.
     The usual reasoning goes: we have an atom that is about to decay, and it is in a superposition of two states - decayed and not decayed. There is a vial of poison connected to a detector and a hammer or whatever, and it is consequently in a superposition of broken and whole states, and eventually we come to the cat - which is in a superposition of dead and alive. Yet when we open the box, we always find it dead or alive and never in superposition.
     Decoherence tells us something different is going on here. The atom is in a superposition of decayed and not decayed states, and that is fine. Now it interacts with the detector and becomes entangled with it. This means it is not "the detector is in a superposition of triggered and not triggered states"; rather, the proper assessment of the situation is: the system is in a "superposition" of the states "atom has decayed and detector is triggered" and "atom has not decayed and detector is not triggered". We have lost the "cross" terms (there are no terms for atom decayed but detector not triggered, nor for atom not decayed and detector triggered).
     Another important thing to note is that the detector consists of an enormous number of particles - each with its own wave function - so the detector's wave function is extremely complex, with many phase terms. Once our atom becomes entangled with the detector and all that thermal noise, it goes "out of phase with itself" - or rather, the two resulting states written above go out of phase with each other. Photons interfere with themselves in the double-slit experiment because they maintain phase. Imagine instead a photon with one set of frequencies combined with another set of frequencies (not a single frequency any more, since we now have a bunch of particles in the system and the state is composed of all their states). Can such a photon interfere with itself to produce a clear interference pattern, or will it just behave uniformly randomly - like a particle?
     This is decoherence - going out of phase prevents the states "atom has decayed and detector is triggered" and "atom has not decayed and detector is not triggered" from interfering and forming a superposition. You are left with an "ensemble" sort of "superposition" - either of the two pure states with their respective probabilities. This is why we never see the cat in a superposition of dead and alive, yet we have a certain probability of finding it in each state when we open the box.
     The problem that is not solved by this - or, to my knowledge, by any other interpretation - is why one particular state is observed and not the other: what made it become reality out of the two possibilities, and when did that happen in the course of decoherence?
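A toy numerical sketch of that "going out of phase" (my own illustration, not from the post): represent the two pointer states as a 2x2 density matrix; decoherence exponentially suppresses the off-diagonal (interference) terms, leaving a classical mixture - but says nothing about which outcome gets realised.

```python
import numpy as np

# Equal superposition of the two pointer states
# |decayed, triggered> and |not decayed, not triggered>.
rho = 0.5 * np.array([[1, 1],
                      [1, 1]], dtype=float)

def decohere(rho: np.ndarray, t: float, tau: float = 1.0) -> np.ndarray:
    """Damp off-diagonal (coherence) terms by exp(-t/tau); diagonal probabilities stay put."""
    out = rho.copy()
    damp = np.exp(-t / tau)
    out[0, 1] *= damp
    out[1, 0] *= damp
    return out

for t in (0.0, 1.0, 10.0):
    print(f"t = {t:4.1f}\n{decohere(rho, t).round(3)}\n")
# For t >> tau the matrix tends to diag(0.5, 0.5): an ensemble with classical
# probabilities, and no mechanism for which of the two outcomes becomes real.
```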
  6. I was just thinking - the motivation for interpretations of QM is, among other things, "wave function collapse", or rather explaining how, after decoherence has happened, superposition is lost and we are left with an "ensemble of states" - and yet we end up in one particular state. We have an explanation of why the macroscopic world is not in superposition, yet we don't have an answer to why spin up was "selected" in this round. But do we really want that answer? Finding a mechanism for it would put us in a completely deterministic world. In order to live in a probabilistic world we simply can't have any other explanation except: it happens at random.
     I see the appeal of MWI to scientists:
     - It is deterministic - there is a formula for everything and it is 100% correct
     - It kind of explains why we think we live in a probabilistic world (if we skip the details about the Born rule)
     - It allows for free will - in fact, not only do you have the freedom to take any choice, you will take every choice!
     I think I need to get a better understanding of the dynamics of decoherence - what the evolution of states is in that process, how long it takes to transition from superposition to an ensemble of eigenstates, and whether there is anything there that could point to a "moment" of "choice", if not a "reason" for the "choice".
  7. Oh, I love this quote from that page: want to escape a very uncomfortable position - anthropic principle to the rescue!
  8. I'm not sure why we are still arguing this? Everett knew that he had an issue with the Born rule and tried to modify it. In fact, everyone has tried to do so in order to make MWI work - and I see no point in that: trying to modify a basic rule supported by all evidence so far. Even I proposed one solution - which is easily shown to be mathematically inconsistent (it expects all probabilities to be expressible as rational numbers, while QM allows irrational numbers as probabilities). On the wiki there is even a paragraph addressing this - Frequentism. To my eyes, MWI is clearly flawed and should not be taken seriously. However, that goes against the fact that it is generally accepted as plausible. But then again, we have this: where the Copenhagen interpretation carries 42% of the votes. I'm not sure how anyone can take the Copenhagen interpretation seriously after they hear about decoherence?
  9. Both cases will have the ttttttttt ad infinitum sequence in them - the question is: how likely is it? An interpretation of QM (if there is in fact a need for such a thing) should account for what QM is telling us. QM is telling us that such a sequence is possible but very improbable. In fact, we have a mathematical framework to calculate its probability. We are in an infinitesimally small number of worlds. All the other worlds simply don't contain us, and there are so many more of those worlds. The sheer number of worlds is one thing I object to with MWI - an epic Occam's razor fail.
     Ok, imagine the following scenario. We have a biased coin that lands heads a million times more often than tails. We toss that coin two times. If there is a two-way world split per coin toss and we toss the coin twice, we end up with 4 resulting copies of the universe: hh, ht, th, tt.
     How likely is it that we are any one of the four resulting copies? Well, we could be the third copy with probability 1 in 4, or 25%. How likely is it that we are the fourth copy? Exactly the same probability, 25%. There should be no bias as to which copy we end up being. Now, the probability of landing hh is extremely high and the probability of landing tt is extremely small. If we are, with 1/4 chance, the fourth copy, we just witnessed an extraordinary event - a roughly one-in-a-million-million chance happened before our eyes. We do another two consecutive coin flips. There is again a 1/4 chance that we end up being the tt copy. What is that? Two consecutive one-in-a-million-million events just happened before our eyes????? Do this ten times, or however many times you need, to see that probability as we know it would be broken in this system.
     Now, let's consider the approach where the 1,000,000:1 weighted coin produces 1,000,001 worlds - 1,000,000 heads worlds and 1 tails world. Flip the coin two times:
     hh will be in 1,000,000 * 1,000,000 worlds
     ht will be in 1,000,000 * 1 worlds
     th will be in 1 * 1,000,000 worlds
     tt will be in 1 * 1 = 1 world
     We can equally be any one copy after we finish tossing the coin. There are 1,000,002,000,001 worlds in total. The probability that we end up in one of the hh worlds is 1,000,000,000,000 / 1,000,002,000,001 and the probability that we end up in the tt world is 1 / 1,000,002,000,001. Hold on - that is exactly the probability of flipping hh or tt respectively. So if we are a randomly chosen copy, we still get the proper probabilities. And if we by any chance land tt in the first two flips, what is the probability that we land another tt in the next two flips? It's certainly not 1/4 - it is what it should be: 1 / 1,000,002,000,001.
     A two-outcome event can't always split into two worlds - that would mess up probabilities as we know them; a chain of events would not have the probability that we expect it to have based on the maths we have established. This is what I meant when I said "the resulting universe will have an unlikely long streak of subsequent electrons with spin down."
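A quick sketch of the counting argument above (my own illustration): give each branch as many worlds as its weight, count the worlds per two-flip sequence, and compare against the ordinary product probabilities.

```python
from fractions import Fraction
from itertools import product

weights = {"h": 1_000_000, "t": 1}            # 1,000,000:1 biased coin
total_per_flip = sum(weights.values())        # 1,000,001 worlds per flip

# Number of worlds containing each two-flip sequence.
worlds = {a + b: weights[a] * weights[b] for a, b in product("ht", repeat=2)}
total_worlds = sum(worlds.values())           # 1,000,002,000,001

for seq, n in worlds.items():
    share = Fraction(n, total_worlds)         # chance of being a copy in such a world
    born = Fraction(weights[seq[0]], total_per_flip) * Fraction(weights[seq[1]], total_per_flip)
    print(seq, n, share == born)              # True for every sequence
```

With a naive two-way split the same comparison fails: every sequence would be seen by 1/4 of the copies regardless of the bias.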
  10. Here is what I've found in the wiki article on MWI regarding probabilities: Given that state of affairs, I wonder how MWI ended up being a mainstream interpretation?
  11. @Synchronicity and @globular I understand that you think there are only two worlds being split off, however with such an approach there is no way to account for the probabilities, and as @andrew s mentioned, probabilities lie at the heart of this interpretation. Try making a tree diagram of split worlds for the spin up / spin down 2:1 case with only two branches and try to explain the probabilities. It simply does not work. If you make a tree diagram of this case with a 3-way split - two branches with up and one with down - then explaining the probabilities becomes trivial. The only assumption needed is that all copies exist simultaneously and we might be any one of those copies (we traced some path down the tree without bias - since there can be no bias here: why would we be "this" copy rather than "that" copy?). If you have a problem with the world splitting into more than two, then just look at an event that has a continuous probability distribution - like detecting a particle along the X axis. There is an infinite number of outcomes, and not only that - there is a probability density that must be respected. @andrew s I don't mind being wrong, however, I'm not prepared to accept "smarter people would have thought of it by now" as an explanation of where I went wrong. I'm sure a simple explanation of why my argument is flawed can be given.
  12. Size of what? Of the universe? Many worlds postulates that each split produces exact copies of the universe except for the outcome of the quantum "measurement". There is no way they can be different in size.
  13. Ok, I'm not going to go in circles. Can you explain the probability in the case of weights? In repeated experiments, why do we see about twice as many spin up electrons as spin down electrons if each time the world splits in two with some "weights"?
  14. It is not the same thing. The weight approach does not explain probabilities and requires the existence of some weights that are "external" to the worlds - a conjecture not supported by quantum mechanics. In the number-of-worlds approach you don't need any external weights and the probabilities work - with one simple assumption: there is no preference for any one copy in any way (a sort of cosmological principle - the same rules apply to all).
  15. That is true, however, weight does not account for probability while number of worlds does. Imagine you have one experiment with a 3:1 ratio and another with a 1:1 ratio - according to you, both of those create two worlds of different weights. How do you distinguish them, or rather, what distinguishes these two cases so we can say: look, here is why this one has probability 3:1 and the other has 1:1?
  16. Ok, but what is the weight then? Who keeps the score? Why then is the probability different from 1:1?
  17. What does weight mean in this context? Let's say we have a 2:1 event like the one described above. The world splits into two. One world has "weight" 2 and the other has "weight" 1. Can you tell what weight you have, and based on what? Since there are two copies of the world after the event, you can't really say that one is somehow "more likely". Once you have three copies - two with one outcome and one with the other - then yes, out of three copies of me, two will see spin up, so it is indeed more likely that I will see spin up, and that probability is indeed 2:1 given the number of worlds.
  18. Ok, here is the argument. I'll be short and to the point as I don't feel like writing too much at the moment.
     Many worlds states that when there is a "choice" in the quantum world, the worlds split and one choice happens in one world and the other in another world. This is of course very simplified. Some transitions have continuous distributions and would split into an infinite number of copies.
     Here is a question: say we have an electron with spin in some direction and a spin detector at an angle to this direction such that it produces detections at a 2:1 rate - twice as many electrons measured spin up as spin down. Does this mean that this event splits the world into two - one having the electron up and one having it down? Right away, I must say that this is not possible, since one of the resulting universes would have an unlikely long streak of subsequent electrons with spin down. The obvious answer to the above question would be that the universe splits into 3 copies - two of those will have spin up and one will have spin down. This way the probability density is maintained over all copies in the future.
     Let's then ask the same question again. Say we set the detector at such an angle to the prepared electron spin that the probability of measuring spin up is the cube root of 0.25. There is such an angle, since the formula for the probability of measuring spin up is cos^2(theta / 2). The cube root of 0.25 is less than one, so it can be the square of a number less than one, and of course cos of some angle covers the range needed (1 to 0 in this instance) - so such an angle exists. However, there is no number of universes that the universe can split into so as to maintain the ratio cube_root_of(0.25) / (1 - cube_root_of(0.25)), because this ratio is irrational, as cube_root_of(0.25) is irrational. Quantum mechanics predicts probabilities for detecting electron spin that can't be explained by splitting into some whole number of universes. Does this sound right to you?
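A small numerical sketch of the above (my own, using the post's numbers): compute the angle that gives p = 0.25^(1/3) from p = cos^2(theta/2), and note that since p is irrational, no split into m "up" worlds out of M total can give m/M = p exactly.

```python
import math
from fractions import Fraction

p = 0.25 ** (1 / 3)                      # desired spin-up probability, ~0.63
theta = 2 * math.acos(math.sqrt(p))      # from p = cos^2(theta / 2)
print(f"p = {p:.6f}, theta = {math.degrees(theta):.2f} degrees")

# Any split into m "up" worlds out of M total gives a rational probability m/M.
# Rational approximations get close, but never equal the irrational p:
for M in (3, 100, 10_000):
    m = round(p * M)
    print(M, Fraction(m, M), abs(m / M - p))   # error shrinks but never reaches 0
```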
  19. It is not supposed to happen, but it can happen and often it does. This usually has two causes:
     - Polar alignment is not spot on, so there is a mismatch between the rotation of the Earth and the rotation of the mount, which produces a slight drift over time. This drift is in the DEC direction.
     - The mount is not a perfect mechanical device and it can track slower or faster than the Earth rotates (by a few seconds over the course of a day or similar). This again produces a slight drift - this time in the RA direction.
     Your drift can be a combination of the two, and you can check which component is dominant by looking at the direction of the majority of the drift.
     That is the purpose of the guide scope. In 99% of cases it corrects for the drifts above; however, be careful - in 1% of cases (the numbers 99% and 1% are arbitrary, the emphasis is on "rarely") there is something called differential flexure. Telescopes sometimes move because they are not 100% tightly secured on the mount. You can't feel this motion by hand, and sometimes it is not even related to the scope itself - it could be, for example, a mirror inside the telescope that is moving. This motion is due to gravity - it depends on where the scope is pointing and where the centre of mass is. Two telescopes can move at different rates because of their size and the way they are attached (one scope could be fine while the other has some drift). If the guide scope moves differently from the main scope, you will again have this sort of drift - in this case due to differential flexure.
     That is normal - each frame contains some signal and some noise, and the noise comes from different sources: read noise, thermal noise, light pollution noise and even target noise (shot noise). That is the name of the game - improve SNR, or signal to noise ratio. Stacking multiple images improves this ratio. It is best to think of it as the signal staying the same while the noise is reduced. SNR improves as the square root of the number of stacked frames.
     Yes and no. There will always be some noise in the image - it can't be completely eliminated and there is no such thing as a non-noisy background. That was the yes. Any image can be made to look noise free. That was the no, and I'll expand. It is about the signal to noise ratio. There is some signal in the image and there is some noise in the image. If you make the signal stronger than the noise, that is good and that is what we are after - a higher signal to noise ratio. This enables you to stretch the data in such a way as to show the signal while not showing the noise. Most problems in processing come from the fact that one "knows" there is signal in the image - after all, you imaged M31, it is there, you can see it - and one tries to bring out that signal. If the noise is not low enough compared to that signal, then in bringing out the signal you will bring out the noise as well. It is a great skill to stretch the data only as far as it will let you - to show the signal without bringing out the noise.
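A minimal numpy sketch of the square-root rule mentioned above (my own illustration, with made-up signal and noise levels): average N noisy frames of a constant signal and watch the measured SNR grow roughly as sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(0)
signal, noise_sigma = 100.0, 20.0        # arbitrary per-frame signal level and noise

def stacked_snr(n_frames: int, n_pixels: int = 100_000) -> float:
    """SNR of the average of n_frames frames, each being signal + Gaussian noise."""
    frames = signal + rng.normal(0.0, noise_sigma, size=(n_frames, n_pixels))
    stack = frames.mean(axis=0)
    return stack.mean() / stack.std()

for n in (1, 4, 16, 64):
    print(f"{n:3d} frames: SNR ~ {stacked_snr(n):.1f}  (expected ~ {signal / noise_sigma * np.sqrt(n):.1f})")
```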
  20. The deringing feature is related to the frequency restoration part. In the process of restoring the frequencies, you can easily get ringing - which is a "real" thing, but we don't like to see it in images. The Airy disk "rings" - here is an image of an Airy disk produced by a green laser: The same mechanism that produces the rings around the Airy disk is responsible for the ringing that we sometimes get. However, the thing in your image is not ringing. Here is what ringing looks like: If you are using AS!3 to stack your image, try using small alignment points, like 25px or smaller. This is what your Mars should look like under alignment points:
  21. I think this is a stacking issue - try changing the alignment point size and see if it helps.
  22. That one should be a better version of the ASI290, with an additional benefit - IR performance. With an IR-pass filter (820nm and above), this camera behaves almost like a mono sensor - all 3 elements of the bayer matrix have more or less the same response curve. It has lower read noise, which is excellent. The only drawback is that we don't yet have QE figures for this camera. According to ZWO, it is supposed to be higher than the ASI290, but I'm somewhat skeptical of that:
  23. I agree, but bayer drizzle seems to have been removed from the latest AS!3 incarnation for some reason? There are additional issues with bayer drizzle - the fact that you have sparse data means the resampling needs to be done very carefully in order to work properly; if one gets that wrong in the implementation, it will negate the benefits of drizzle.
  24. Yes, because with an OSC camera the sampling rate and the presentation rate should not be the same, since there is a bayer matrix. You got better results with the ASI224 closer to F/15 than at F/30 because you should sample at F/30 but work with the data at F/15. I know this is somewhat confusing to most people, but the reason for it is the bayer matrix.
     As far as pixel size is concerned, the ASI224 has 3.75um pixels and for those pixels the proper F/ratio for critical sampling is ~F/15. However, blue, red and each component of green are not sampled at every pixel but at every other pixel (look at blue and red in the image; the green can be viewed as "two separate" greens, again with the same spacing). This means that for an OSC sensor, the actual sampling rate is half of what the pixel size would suggest. For this reason you need to sample at F/30 and then reduce the size of the image by a factor of two - or preferably use another method to achieve the same thing: split debayer or super pixel debayer (the first is better than the second). That way you end up with F/15 data although you used F/30 optics.
     The same goes for the ASI178 - pixel size suggests F/9.4 is needed, but if you use the OSC version, you need to sample at twice that and then use an appropriate debayer method to bring it back to F/9.4. What you did was to sample at "F/9.4", but with each colour sampled only at every other pixel, and then use interpolation to fill in the missing data.
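A small helper illustrating the numbers above (my own sketch; it uses the common Nyquist rule of thumb for diffraction-limited planetary imaging, F ≈ 2 × pixel size / wavelength, with ~510nm green light assumed):

```python
def critical_f_ratio(pixel_um: float, wavelength_um: float = 0.51, osc: bool = False) -> float:
    """F/ratio needed to critically sample with a given pixel size.

    For an OSC sensor each colour is sampled only at every other pixel,
    so the effective sampling pixel is twice as large and the F/ratio doubles.
    """
    effective_pixel = pixel_um * (2 if osc else 1)
    return 2 * effective_pixel / wavelength_um

print(critical_f_ratio(3.75))            # ~14.7 -> "F/15" for a mono 3.75um sensor
print(critical_f_ratio(3.75, osc=True))  # ~29.4 -> "F/30" for the OSC version
print(critical_f_ratio(2.4))             # ~9.4  -> ASI178 pixel size
print(critical_f_ratio(2.4, osc=True))   # ~18.8 -> OSC ASI178
```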
  25. Not sure if this is the right place to put this topic since it does not involve any astro images. I just wanted to share a quick test of the newly arrived Samyang 85mm T1.5. This is the same optics as the Samyang 85mm F/1.4, but the aperture ring does not have click stops - the lens is designed with cinema applications in mind (click stops on the aperture ring can be heard when recording video, and a smooth aperture is an advantage there). Since most of these fast lenses are usually used stopped down for AP, I figured it is better to have a smoothly varying aperture instead of preset stops - that way I can set the lens to F/2.2 if F/2 is not sharp enough, with no need to stop it down all the way to F/2.8 where the next click stop might be.
     It is a chunky piece of glass - it feels rather solid in the hand. I was surprised by the smoothness of both the focusing and aperture rings - I expected a bit more resistance. Because of this, I worry slightly that it could shift on its own.
     There is of course the question of sharpness. This is by no means a very scientific test, but it does show the level of sharpness the lens has in the center of the field. I'll be using it with a 9mm-diagonal sensor (ASI178) to begin with, so I'm not overly worried about corner sharpness at this point. The ASI178mmc has rather small pixels at 2.4um - that should be kept in mind when evaluating lens performance. Methodology - well, I used the international standard measure for such things: we'll be examining fine print contrast:
     This is a screen capture of a 200% zoom preview in SharpCap (preview with debayer turned on). The first row is the lens stopped to F/4 for reference. There is not much difference at F/2.8. At F/2.0 we start to see some chromatic aberration creeping in. This increases at around F/1.7 (no way to tell the exact f-number - I just turned the aperture ring somewhere between F/2 and F/1.4) and the letters start to get a little soft. At F/1.4 the softness is even more apparent and there is noticeable contrast loss.
     I actually believe this lens to be very sharp for AP use - at least the way I intend to use it. I'll either use it as an 85mm lens with a 4.8um-pixel sensor at a resolution of 1500 x 1000 (the original is 3096 x 2080 with a bayer matrix), or as a 40mm lens with the same sensor - which can of course be achieved with a 2x2 mosaic and the data further binned 2x2. To put this into perspective, here are the above letters scaled back to their proper size in either of these two scenarios:
     85mm, super pixel debayer: or if we see it as a ~40mm lens (actually 42.5mm, but I subtracted a bit because of mosaic overlap):
     I think I'll have no issues using this lens at F/1.4 when going for a FOV similar to a 40mm lens, but of course the true test will come under the stars (once these clouds go away, but you know, new gear - clouds ... ). Here is the FOV of this lens used in 40mm "mode" (2x2 mosaic) with the ASI178: And here it is as a standard 85mm with the ASI178mmc:
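To round off the numbers in that post, here is a rough pixel-scale and field-of-view calculation for the two modes (my own sketch; the sensor dimensions come from the quoted 3096 x 2080 pixel count and 2.4um pixels, and the rest is just the usual small-angle formula):

```python
import math

def pixel_scale_arcsec(pixel_um: float, focal_mm: float) -> float:
    """Arcseconds per pixel: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_um / focal_mm

def fov_deg(n_pixels: int, pixel_um: float, focal_mm: float) -> float:
    """Field of view along one axis, in degrees (small-angle approximation)."""
    return math.degrees((n_pixels * pixel_um * 1e-3) / focal_mm)

# 85mm mode, super pixel debayer: ~1548 x 1040 at an effective 4.8um pixel.
print(pixel_scale_arcsec(4.8, 85))                        # ~11.6 "/px
print(fov_deg(1548, 4.8, 85), fov_deg(1040, 4.8, 85))     # ~5.0 x 3.4 degrees

# "40mm" mode: a 2x2 mosaic roughly doubles the field in each direction.
print(2 * fov_deg(1548, 4.8, 85), 2 * fov_deg(1040, 4.8, 85))  # ~10 x 6.7 degrees, before overlap
```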