Everything posted by vlaiv

  1. I have the opposite opinion. I think they are elements of reality - regardless of the fact that we can't measure wave functions themselves.
  2. Let's take the double slit experiment then - how do you calculate the interference pattern on the screen? We know what this diagram means. You take a little clock with a hand going around at some "speed", you send it along a certain path, and when it hits something you again let it fly off in every direction - there are some rules, like the clock hand flipping when it hits something, and so on... Now, instead of tracking all possible paths one by one and adding up probabilities, track them at the same time: take each path, and whenever the hand is at 12 o'clock, mark that position. Join all those points. Know what you get? A wave front of a wave function propagating in space (see the sketch below). The wave I'm describing is not a classical wave - and you should not look at it as the EM wave you mentioned, a bunch of photons - it is a wave in the above sense: all path integrals represented as a single entity rather than a bunch of separate paths.
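Here is a minimal Python sketch of that "many clocks at once" picture for just two slits - each allowed path contributes a little rotating arrow (a phase) and the intensity is the squared length of their sum. The slit separation, wavelength and screen distance are made-up illustrative numbers, not anything from the post:

```python
import numpy as np

# Minimal sketch of the "little clock" (phase arrow) picture for two slits.
# Each path from source to screen contributes a unit arrow rotated by
# 2*pi * (path length) / wavelength; intensity is the squared length of the sum.
wavelength = 500e-9          # 500 nm, illustrative
slit_sep   = 50e-6           # 50 micron slit separation, illustrative
screen_d   = 1.0             # 1 m slit-to-screen distance, illustrative

x = np.linspace(-0.02, 0.02, 2001)            # positions on the screen (m)
slits = np.array([-slit_sep / 2, slit_sep / 2])

amplitude = np.zeros_like(x, dtype=complex)
for s in slits:                                # one "clock" per allowed path
    path_len = np.sqrt(screen_d**2 + (x - s)**2)
    amplitude += np.exp(2j * np.pi * path_len / wavelength)

intensity = np.abs(amplitude)**2               # bright/dark fringes appear here
print(intensity[:5])
```

With more paths (more slits, or paths sampled across each slit opening) the same sum-of-arrows approach gives the full pattern - this is just the smallest possible version of it.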
  3. The classical EM field approach prevents us from understanding some of the phenomena - people get the sense that a wave in the field is the thing carrying momentum and energy. Well, that was all about entanglement, or rather decoherence. How else are we supposed to explain the case with a "single" photon and two detectors - the wave reaches both detectors, but once there is a detection on one of them, energy is removed from the whole wave function - and that wave function indeed changes. It happens at the "speed of entanglement".
In fact, let's do the thought experiment like this: there is a hydrogen atom, an electron drops to the ground state and a photon is emitted. We have our rock in a pond - a spherical shell starts propagating from that atom. Put two detectors on opposite sides of this atom, far away - one closer to the atom by a few miles. The wave function "arrives" at the first detector and a photon is detected - it is removed from the quantum field. Does the wave function reach the other detector in the same state? It can't. As we know, it represents the probability of detecting a photon - but there is no longer any probability (I know the one-photon example is flawed, as we assume one and only one photon) for the second detector to fire - hence the wave function must have evolved accordingly. It indeed evolves so, by the mechanism of decoherence and entanglement - as the wave function becomes entangled with the first detector it changes, and we know (or rather think) that entanglement is instantaneous (but it can't carry information, so all is good with GR).
I made this example because without it people keep going back to a little bullet photon traveling through space. With this example and the above explanation they can see that it is the wave function that is traveling - thus it travels along every possible path at the same time, and the interference is clear. It also explains that photons really exist only at two instants: once when energy is given off into the field and the "ripple starts", and once when the photon is absorbed - decoherence happens and the "ripple ends". In all other cases they are "smeared" or "interwoven" into this thing we call the wave function. I know that the notion of a wave function gets us back to plain old QM - but I'm talking more about ripples in the EM field than anything else.
  4. Ok - that is a really tough question. What is light made of, and what are photons in fact? We often think of light as either a wave or a stream of bullet-like particles. Neither is in fact true. The best way to describe the phenomenon is QED - and its language is mathematics. If we start to "visualize" things or try to find an analogy in everyday life, we are bound to get it wrong. However, we don't necessarily need to be right here - we can still try to describe it in layman's terms and perhaps wrong analogies.
Let's start with waves, or rather the wave function. Imagine a pond and a rock thrown in - waves start to spread out from that location. Would light be that ripple? What is the photon in this case? That ripple is actually a wave function. It is a complex-valued function - something that ripples in space in the electromagnetic quantum field. Depending on whether you subscribe to the notion of the wave function being an element of reality or not, you'll favor some interpretations of QM over others. See here to be confused further: https://phys.org/news/2012-04-quantum-function-reality.html
In any case, we must look at that wave function, real or not (not in the sense that it is just a mathematical tool used to calculate things rather than something that actually "waves around"), and see how it behaves in order to understand photons and light. Imagine now having two simple cameras trying to detect "photons" from this wave - one on each side. What happens when the wave hits either camera? One of two things: either nothing, or one of the cameras detects a photon and the whole wave changes shape instantaneously. The wave can hit a camera and do nothing. That is important to realize. Also important: if we assume there is only one photon in this wave disturbance (here we are making a mistake - I'll explain later why), then once that photon "hits" something - either camera A or camera B - the whole ripple changes at the same time. Does not make sense? Well - spooky action at a distance, the faster-than-light thing; you are not the only one to be baffled by this.
Now let's look at why we can't really say there is one photon in that ripple (or rather in our poor analogy for it). Here is one function - it is wave-like. Imagine that function represents a "particle". A particle with exact speed (momentum) would have precisely known frequency / wavelength. In order to have exactly known wavelength and frequency, you need a cyclic function - a sine/cosine thing that repeats to infinity. The above function is not like that. Our pond ripple is not like that. Do you know the precise location? Well, we could say that the "center" of the function is its position - but what if the ripple is not symmetric? In fact the "particle" is anywhere along that oscillation. The more exactly you know the wavelength, the more the function needs to repeat and the longer it gets - the position becomes more spread out and you know less about where the particle is. Now imagine you only have one "up/down" change - it will be very localized.
Here is another image: on the left is what we are talking about - a single ripple, a few ripples, more ripples - the further you go, the closer you get to a sine wave going off to infinity. At the top we have a very well defined location, but we don't really have a wavelength defined - where do we start measuring it? With a few ripples it is a bit easier to say the wavelength is probably this much, but suddenly we start to have a problem with location, and so on... This is the uncertainty principle explained in terms of the wave function (see the sketch below).
A single photon has a well defined wavelength - now take the above ripple. Can you decompose it into single well defined wavelengths? Not really, because each one would need to go off to infinity - you would need an infinite number of such well defined wavelengths. For this reason we can't really tell how many photons are in a wave function ripple. Like Andrew said above, the photon number operator is not well defined. Once energy transfer takes place and a camera detects a photon, it is removed from the wave function, which then changes shape. That interaction is a very complex thing, as it involves a bunch of entanglements with the environment and so on...
We did not mention one more very important feature of the wave function, and that is interference with itself. We often say light interferes, or a photon interferes with itself - in reality it is the wave function that interferes with itself. It defines the probability that energy transfer will take place. How is this for starting to define what light is?
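To see the "more ripples, better defined wavelength, worse defined position" trade-off in numbers, here is a small Python sketch. It builds Gaussian wave packets of a few different widths (illustrative values) and measures the spread in position against the spread in spatial frequency; their product stays around the same floor:

```python
import numpy as np

# Sketch: a localized "ripple" has no single well-defined wavelength.
# Build Gaussian wave packets of different widths and look at the spread
# of spatial frequencies (wavelengths) each one contains.
N = 4096
x = np.linspace(-50, 50, N)
dx = x[1] - x[0]
k = np.fft.fftshift(np.fft.fftfreq(N, d=dx)) * 2 * np.pi   # spatial frequency axis

for width in (0.5, 2.0, 8.0):                   # narrow ripple ... long wave train
    psi = np.exp(-x**2 / (2 * width**2)) * np.exp(1j * 5 * x)   # carrier at k = 5
    spectrum = np.abs(np.fft.fftshift(np.fft.fft(psi)))**2
    prob_x = np.abs(psi)**2 / np.sum(np.abs(psi)**2)
    prob_k = spectrum / np.sum(spectrum)
    sigma_x = np.sqrt(np.sum(prob_x * x**2) - np.sum(prob_x * x)**2)
    sigma_k = np.sqrt(np.sum(prob_k * k**2) - np.sum(prob_k * k)**2)
    print(f"width={width}:  sigma_x={sigma_x:.2f}  sigma_k={sigma_k:.2f}  product={sigma_x*sigma_k:.2f}")
```

The narrow packet has a well defined position and a broad spread of wavelengths; the long wave train has the opposite - which is the uncertainty principle stated in wave-function terms.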
  5. Here is some starter info: http://math.ucr.edu/home/baez/physics/Quantum/see_a_photon.html - maybe the most important paragraph is the first one. As for daytime exhaustion of rhodopsin, take a look here: https://en.wikipedia.org/wiki/Night_vision - the "Biological night vision" section is the interesting part. An interesting fact: keep your liver clean if you want good night adaptation. High levels of light during the day cause depletion of these chemicals, and it takes a day or two for things to get back to normal.
  6. I think the threshold to trigger a sensation is something like 7 photons. The eye/brain is very interesting in how it processes the image. It is well capable of seeing very low photon levels, and we know that at those levels there is a lot of noise - but we never see that noise. This is because our brain has some sort of noise filter. I only once managed to see the noise associated with light, and that was "cheating" - I took an Ha filter (a Baader 7nm one) and stood in a relatively dim room while outside it was a bright summer day with plenty of sunshine. I held the filter in front of my eye (not too close) and looked outside. It was a noisy image! I guess a couple of factors combined to trick the brain into shutting off its denoising. It was bright outside - enough for the pupils to contract. It was rather dim in the room itself (compared to outside) - I guess that prevented too much stray light from interfering. It was not very dark, so I guess the brain was still in daylight "mode" - expecting plenty of light.
  7. Ah, ok - the exit pupil is the imaginary circle that all parallel rays from a single star pass through. There is a special "exit pupil" - or rather the same thing but at a special place. Imagine that each set of parallel rays forms a circle. This circle is not at any particular place - it can be anywhere along those parallel rays. However, if you take all the sets of parallel rays, they all intersect in one place (or at least the majority of them do) - this is the exit pupil we talk about when we talk about eye relief, and the thing the eye hovers over.
Here we have another diagram with the exit pupil and eye relief marked. Note that the exit pupil sits where all the parallel ray bundles intersect - the blue rays are from one star and converge at one point on the focal plane, the red from another star and the green from a third. All three sets of rays exit parallel and have the same "diameter" - the blue rays pass through an imaginary circle of a certain diameter, and the green and red do as well. They all cross at the same place - where the three imaginary circles coincide to form the single circle that all parallel rays come through. This is the exit pupil, and its distance from the eyepiece is the eye relief. A small numeric sketch follows below.
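A minimal numeric sketch of the pupils involved, assuming an example 200 mm f/5 scope with a 25 mm eyepiece (made-up example values):

```python
# Sketch: where the exit-pupil numbers come from.
# Assumed example values: a 200 mm f/5 telescope with a 25 mm eyepiece.
aperture_mm = 200.0
scope_fl_mm = 1000.0
eyepiece_fl_mm = 25.0

magnification = scope_fl_mm / eyepiece_fl_mm        # ratio of focal lengths
exit_pupil_mm = aperture_mm / magnification         # the circle all parallel bundles pass through

print(f"Magnification: {magnification:.0f}x")
print(f"Exit pupil:   {exit_pupil_mm:.1f} mm")
```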
  8. Yes - you can use an eyepiece as a magnifying glass - unscrew the barrel, put it close to your smartphone and enjoy the large pixels. Or you can do the following to turn the "diagram" around - take binoculars and look through the front: things will look smaller and more distant rather than larger and closer.
  9. In a sense, yes - for the same "grid" (like a camera sensor with pixels), longer focal length increases resolution in terms of arc seconds per pixel. We have to be careful when using the word resolution, as it has so many meanings.
Final piece of the puzzle. First there is this diagram, which stands for: converging rays come to one point at the focal plane, but continue to diverge if nothing stops them. Simple as that. It is important to note that the angle of convergence is the same as the angle of divergence for a ray - which is just saying that light rays are straight lines.
Now we take a small telescope and run things in "reverse". In this diagram arrows are put on the rays, the objective is marked as objective and the eyepiece as eyepiece. In reality the diagram can be read in reverse - it can go from right to left and everything remains the same. If we remove the labels and change the arrow directions, it is still a valid diagram. An eyepiece is just a small telescope, or rather a telescope with a short focal length where the light "runs in reverse" - or rather the light does what it always does: moves in a straight line. Just as the rays arrive parallel at the entrance pupil, they leave parallel at the exit pupil. The difference is that the entrance pupil is larger because the focal length of the front scope is larger, and we call it the aperture. The exit pupil is smaller because the focal length of the second "telescope" is smaller. The rays diverge at the same angle - they just don't have enough room to spread as much, since the focal length is shorter, that is all. In fact aperture:FL is the same for both sides - aperture1:FL1 = aperture2:FL2, where the left side is one telescope and the right side is the other "telescope" (the eyepiece, whose "aperture" is the exit pupil).
The only thing we did not see in the above diagram is magnification. We have seen how parallel rays become parallel rays again on exit, and how the pupil decreases (or increases, if we swap the telescopes around - it depends on the focal lengths). The last piece of the puzzle has to do with focal lengths, angles and the distance from the optical axis that we talked about. We said the distance of a point on the focal plane depends on 1) the angle of the parallel rays and 2) the focal length. If a point on the focal plane is the same for two scopes, and one scope has a shorter focal length than the other, then that scope must have larger angles. In fact, the scope with the larger focal length has the smaller angles. It is this angle amplification that is the actual magnification of the image that a telescope + eyepiece (or two telescopes, or two lenses) provides. That is why we see a larger image - because to our eye it is as if there were no telescope, only parallel rays coming in at larger angles - and they would come in at larger angles if the thing were indeed larger, so we see it as enlarged. Makes sense? (A numeric sketch of the angle amplification follows below.)
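Here is a small Python sketch of that angle amplification, under the simple thin-lens picture used above: the image height on the shared focal plane is h = f * tan(angle), so running the light back out through a shorter focal length gives a proportionally larger angle. The focal lengths and the input angle are illustrative:

```python
import math

# Sketch of the angle-amplification idea: a star some small angle off-axis
# lands at height h = f * tan(angle) on the shared focal plane. Running the
# light "backwards" through a short-focal-length lens that shares that focal
# plane gives an exit angle tan(out) = h / f_short, so angles are amplified
# by roughly f_long / f_short. Illustrative numbers only.
f_long  = 1000.0    # mm, the telescope objective
f_short = 25.0      # mm, the eyepiece
angle_in_deg = 0.25 # a star a quarter of a degree off-axis

h = f_long * math.tan(math.radians(angle_in_deg))       # height on the focal plane
angle_out_deg = math.degrees(math.atan(h / f_short))    # angle leaving the eyepiece

print(f"Image height on focal plane: {h:.2f} mm")
print(f"Incoming angle {angle_in_deg} deg -> outgoing angle {angle_out_deg:.1f} deg")
print(f"Angle ratio ~ {angle_out_deg / angle_in_deg:.1f}  (f_long/f_short = {f_long/f_short:.0f})")
```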
  10. An important point for further understanding: the distance of a star's image (the focal point for that particular star) from the center depends on the angle, but it also depends on the focal length of the telescope. Short focal length telescopes are "wide field" and long focal length telescopes are "narrow field" because of this. If we have a star at the same angle and two telescopes, one with 500mm FL and the other with 1000mm FL, the second telescope will form the image of the star at twice the distance from the center of the frame compared to the first (see the sketch below). Note - this is not magnification (although it looks like it), but it is related to magnification - this is why it is easier for longer FL telescopes to magnify more.
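A quick check of the "twice the distance" claim using h = f * tan(angle), with an illustrative half-degree off-axis angle:

```python
import math

# Sketch of the claim above: for a star at the same off-axis angle, the image
# distance from the frame centre scales with focal length (h = f * tan(angle)).
angle_deg = 0.5                      # star half a degree off-axis, illustrative
for fl_mm in (500.0, 1000.0):
    h = fl_mm * math.tan(math.radians(angle_deg))
    print(f"FL {fl_mm:.0f} mm -> star image {h:.2f} mm from centre")
```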
  11. Yes, yes, yes. Yes - that is what we sometimes call the focal point - but we also call any point of interest on the focal plane a focal point. The principal focal point, shall we say. When we talk about a lens, the focus point / focal point of that lens is the principal focal point - the place where rays parallel to the principal optical axis converge. If we are talking about a focused star that is off axis and want to refer to the place on the focal plane where all those rays converge, we will say it's the focal point of that star.
  12. What you have here is an aperture obstruction, or aperture mask. It just makes less light reach the objective / mirror. Here is an example: some of the parallel rays land outside the tube - they miss the telescope. Some land on the "inside" of the tube but don't reach the mirror. Some enter the tube and hit the main mirror. Some hit the tube on the outside. We don't know and don't care about all those parallel rays that did not make it - the mirror will collect all the photons that did make it and converge them into a star image. What you see here is a form of vignetting - the image gets a bit fainter the further from the principal axis you go, because fewer photons make it in at larger angles.
  13. Yes, in a simple design the curvature of the focal plane has to do with focal length (and hence to some degree F/ratio - but aperture is not that important). The focal plane is curved because rays that converge further from the center of the focal plane appear to converge "closer". This is in fact not true - they converge at the "same distance" - but since that distance is measured at an angle, it looks shorter. Look at the central rays, both blue and red - the two that go through the exact center of the lens. The points where the rays converge are at the same distance from the center of the lens, and these two rays traveled the same distance to their respective converging points. This is field curvature, and it depends on the focal length of the lens (see the sketch below).
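A small sketch of that geometric picture: if every off-axis bundle focuses at the same distance f from the lens centre, the focus surface is a sphere of radius f, and the gap to a flat sensor placed at distance f (the sag) is roughly f * (1 - cos(angle)). The focal lengths and field angles below are illustrative:

```python
import math

# Sketch of the geometric picture above: focus points all at distance f from
# the lens centre form a sphere of radius f; the gap between that sphere and a
# flat sensor at distance f (the field-curvature "sag") is f * (1 - cos(theta)).
for fl_mm in (400.0, 1000.0):
    for angle_deg in (0.5, 1.0, 2.0):
        sag_mm = fl_mm * (1 - math.cos(math.radians(angle_deg)))
        print(f"FL {fl_mm:4.0f} mm, field angle {angle_deg:3.1f} deg -> sag {sag_mm*1000:6.1f} microns")
```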
  14. No. FOV depends on focal length, or how much the light rays are bent after they arrive. If you block a portion of the rays while they are still parallel, it is just like using a smaller aperture.
  15. Great. The next thing to realize, and here the Moon will be a great help, is that different points at infinity arrive at different angles. Do this thought experiment: take a ruler and point it at the center of the Moon. When you look at one edge of the Moon, it is at a slight angle to that ruler. When you look at the other edge, it is at a slight angle again, but to the other side. The angle at which parallel rays arrive at the aperture is related to where in the sky the point of origin is. If the scope is aimed directly at a star, the parallel rays arrive at 90 degrees to the aperture. If the scope is not aimed directly at a star, this happens: the rays arrive at an angle to the front of the lens, and they converge not directly behind the lens but a bit "lower" - still on the focal plane, but some distance from the center. This is why an image forms at the focal plane of the telescope - the star in the center of the FOV is the one the scope is aiming at, while the star at the edge is at an angle to the telescope tube.
  16. Quite possible. When you switch the side of the pier, the DEC worm rotates 180 degrees. Backlash depends on the shape of the worm and how "close" the worm gear is. Newer mounts use spring-loaded or magnetically loaded worms to address this.
  17. I should have added above that this holds for any object / point that we see as a point - that we don't resolve - like a star or a point on the Moon's surface.
  18. Here is how to best understand it: the further away the object is, irrespective of the relative sizes of the object and the aperture, the smaller the angle between the two lines that connect the object to opposite sides of the aperture. Here the left edge of the triangles is the aperture and the right vertex is the object. The aperture being small relative to the distance to the object means that the angle at the vertex is small. When you have a very small angle between two lines, they are effectively parallel to you (and here I mean a very, very small angle - like a couple of light years versus 20cm of aperture - although we don't need to go that far: this holds for the Moon as well, even though it is only 384,000 km away). Makes sense? (See the numbers below.)
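Putting numbers on "effectively parallel" - the angle subtended at the object by the two edges of a 20 cm aperture, for the Moon and for a nearby star (the distances are the usual round figures):

```python
import math

# Sketch of the "effectively parallel" claim: the angle subtended at a distant
# object by the two edges of a 20 cm aperture. Distances are round figures.
aperture_m = 0.20
for name, dist_m in (("Moon", 384_000e3), ("nearby star (~4.2 ly)", 4.2 * 9.461e15)):
    angle_rad = aperture_m / dist_m                    # small-angle approximation
    angle_arcsec = math.degrees(angle_rad) * 3600
    print(f"{name}: {angle_arcsec:.3e} arcsec between the two edge rays")
```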
  19. Want to go bit by bit and see where you get stuck? From a point that is very distant (like really distant - for our purposes this can be a star), the incoming "rays" of light are parallel. Do you understand why this is?
  20. Here it is - this image explains it all. If you have a star / point at some angle alpha to the optical axis, the following happens:
- all rays from that point are parallel before they reach the aperture - at the same angle
- after the objective they start to converge and finally meet at a focal point - all light from the original star falls onto a single point on the focal plane. This is why the star is in focus on a camera sensor (provided it is focused well), and it also means the field stop won't remove any light - it only limits the angle that can be seen, since a bigger angle means the point on the focal plane is further from the center
- then the rays start to diverge (they just happily go on their own way: having come to a point, they now continue to spread)
- the eyepiece catches those rays and makes them parallel again. A few things to note: the angle is now different - that is magnification. All parallel rays occupy a certain "circle" - that was the aperture earlier and is now the exit pupil. The ratio of the angles, and the ratio of the sizes of these pupils, is the magnification
- the eye is the same thing as a telescope - it is a device that again focuses parallel rays
Thus the field stop can't act as an aperture stop, because all the rays from the "aperture" have been squeezed into a single point on the focal plane. (A small sketch of how the field stop sets the true field of view follows below.)
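A small sketch of that last point about the field stop: it sets the true field of view as roughly the field stop diameter divided by the telescope focal length (in radians). The focal length and field stop diameter below are just example values:

```python
import math

# Sketch of "the field stop only limits the angle": the widest sky angle that
# fits through a field stop of diameter d in a scope of focal length F is
# roughly TFOV = d / F (in radians). Example values are illustrative.
scope_fl_mm = 1000.0
field_stop_mm = 27.0          # e.g. roughly the field stop of a 32 mm Plossl

tfov_deg = math.degrees(field_stop_mm / scope_fl_mm)
print(f"True field of view ~ {tfov_deg:.2f} deg")
```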
  21. I'll be interested in this thread too, so I'm replying partly to follow it and partly to say that I'm certain a single photon does not contain full information about an object. It would be a good start to define what "full information about an object" even means - and whether there is such a thing.
  22. The other graph, with my added red line, is why planetary imagers don't worry about CO, and the fact that the human eye/brain can't sharpen is why visual observers prefer a small CO. There is no information being added there. There is only information removal - and that happens at the point where the graph line hits 0.
The graph reads as follows: for a given frequency (X axis), an image observed through a telescope and decomposed into frequencies (Fourier transform) will have that particular frequency attenuated by a certain amount. It's as if someone, selectively per frequency, put in some sort of ND filter and removed a percentage of the light at that frequency. What you end up with is slightly less intense light at that frequency. In fact, the height of the graph shows what percentage of the light remains.
As long as some light remains, and as long as you know this curve (or can guess it), you can do the inverse on the recorded image - take that frequency and multiply it by the inverse of the number from the graph. If a frequency was halved, multiply it by 2 and you get the same intensity as before. The only place you can't do that is where something was multiplied by 0. That is information removal, because nothing can get the original back: X * 0 = 0 => X = 0/0, but division by zero is undefined, so X can be anything - the information about the value of X is lost forever.
In the classical picture you can restore the image fully. In the quantum picture you can only restore the image up to a point, since there is uncertainty involved, and that uncertainty is noise. When the signal is very low, SNR is very low, and amplifying that signal won't increase the SNR - it stays the same. For that reason the red curve I drew is not a rectangle - it is still a bit curved - you can't beat the noise. The more you stack and the better your SNR, the more sharpening you can perform - but there is always a limit, or rather a two-part limit: one is information loss - you can't restore information that has been lost to multiplication by 0 - and the other is that you can't restore signal with poor SNR (attenuated so much that the noise is bigger than the signal and SNR is < 1 - in fact SNR needs to be > 3 before you can start recognizing stuff). Makes sense? (A small frequency-domain sketch of this follows below.)
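Here is a rough 1-D Python sketch of that argument: attenuate frequencies with an MTF-like curve, add a little noise, then multiply by the inverse of the curve. The curve shape, noise level and cutoff are all made-up illustrative values:

```python
import numpy as np

# Sketch of the argument above in 1-D: attenuate each frequency by a known
# "MTF"-like curve, add noise, then multiply by the inverse of the curve.
# Frequencies that were only attenuated come back; where the curve is ~0 the
# inverse blows up the noise instead of restoring signal. Values illustrative.
rng = np.random.default_rng(0)
n = 512
freq = np.fft.rfftfreq(n)                       # 0 ... 0.5 cycles/sample
signal = rng.normal(size=n)                     # stand-in "image" row

mtf = np.clip(1.0 - freq / 0.4, 0.0, None)      # falls to 0 at freq = 0.4 (the cutoff)
blurred = np.fft.irfft(np.fft.rfft(signal) * mtf, n)
noisy = blurred + 0.01 * rng.normal(size=n)     # stand-in for read/shot noise

inverse = np.where(mtf > 0.05, 1.0 / np.maximum(mtf, 1e-6), 0.0)   # don't divide by ~0
restored = np.fft.irfft(np.fft.rfft(noisy) * inverse, n)

err = np.sqrt(np.mean((restored - signal)**2))
print(f"RMS error after restoration: {err:.3f} (nonzero: noise + frequencies lost at the cutoff)")
```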
  23. There are two things I would like to address here. Mr Peach is right that astrophotography can resolve more detail on planets - but not for the reasons listed.
First, let's address what resolve means. In astronomy: "separate or distinguish between (closely adjacent objects)". Resolving the Encke Division is not really resolving it at all. You are resolving two pieces of ring A, not resolving the division. Similarly with resolving two stars. I'm not talking here about light or lack of light - that is not the point. You can still resolve two dark features - but you need to distinguish between two dark features. If there were two divisions next to each other and you recorded two divisions, then you resolved the divisions. Recording a contrast drop from the Encke division is not resolving, in the same sense that seeing a single star instead of two is not resolving that star. Maybe there are 50 stars in there? Maybe 10,000. How can we tell? We can't, because we did not resolve them. Once you resolve a pair of stars, you can tell there are at least two stars there. Maybe there are more - but we did not resolve those. In that sense, telescopes don't actually resolve the Encke division - but they do record it, in the same sense that you'll see a star if you observe a double star but don't resolve it.
Now onto the other part: photography will record / resolve more than the human eye can with the same telescope. This is due to two things. The first is "frame rate" and the atmosphere. We watch movies at 30fps and can't tell it is a series of single frames in succession - this tells you that our eye/brain "exposure" time is at least 1/30s. In fact, some people can see faster than this - there is an anecdote that someone saw the M1/Crab pulsar pulsate in a large telescope. The person was a pilot and could tell the difference from atmospheric influence. The Crab pulsar pulsates with a 33.5ms period, which means that light and dark each last for half of that. Some people can see flicker at 30fps. In any case, exposures for planetary photography are often 5-6ms. That is much faster, and it is used to freeze the seeing. In other words, the human eye sees more atmospheric motion blur than the camera because of its "exposure" length.
The second thing is that images of planets are processed - contrast is enhanced and sharpening is performed. Detail is really about contrast. For that reason resolving power is defined with two high-contrast features: black sky and very bright stars. Sharpening can sharpen the telescope optics as well. That is something the human eye/brain can't do (efficiently? I'm sure there is some sharpening involved - but not the way we think of it; the brain does all sorts of funny things to the image that we see). Here is another screenshot from Mr Peach's website that talks about sharpness of optics. This is the MTF of a telescope (with different levels of spherical aberration in this case). What it does not show is what image processing, or sharpening in particular, tends to do. The graph shows how much detail loses contrast. Once the line reaches zero, no more detail can be seen, as all contrast has been lost. If you compare the obstructed telescope diagram with the unobstructed one, you'll see that the more central obstruction you add, the bigger the "dip" in this curve and the more contrast is lost. This is why we say a clear aperture gives the best contrast in the image. Now back to sharpening: sharpening simply straightens this curve and restores even more of the contrast and detail that the scope delivers than the human eye, which can't sharpen, gets to see.
How much you can restore this curve depends on how much noise there is in the image, because as you raise the curve, you raise the noise as well. Important point: once the curve reaches 0, there is no straightening it up - zero just means zero: information lost, with no way to recover it by sharpening (or in math terms, any number times 0 is 0 - you can't guess the original number that was multiplied by zero if your result is zero; it could be any number). (A sketch of the obstructed vs unobstructed MTF follows below.)
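For the obstructed vs unobstructed MTF curves mentioned above, here is a small numerical sketch - the MTF can be computed as the normalised autocorrelation of the telescope pupil, so a clear pupil and one with an (illustrative) 33% central obstruction can be compared directly:

```python
import numpy as np

# Sketch: compute an MTF curve as the normalised autocorrelation of the pupil.
# A clear circular pupil is compared with one carrying a 33% (by diameter)
# central obstruction. Grid size is illustrative.
N = 256
y, x = np.mgrid[-N//2:N//2, -N//2:N//2]
r = np.hypot(x, y)
R = N // 4                                   # pupil radius in grid units

def mtf_curve(obstruction_ratio):
    pupil = (r <= R) & (r >= R * obstruction_ratio)
    # Autocorrelation via the Wiener-Khinchin relation, then centre and normalise.
    otf = np.fft.fftshift(np.abs(np.fft.ifft2(np.abs(np.fft.fft2(pupil))**2)))
    mtf = otf / otf.max()
    return mtf[N//2, N//2:]                  # radial cut from centre outward

clear = mtf_curve(0.0)
obstructed = mtf_curve(0.33)
mid = len(clear) // 4                        # a mid frequency, 1/4 of the cutoff
print(f"MTF at a mid frequency: clear {clear[mid]:.2f}, 33% obstructed {obstructed[mid]:.2f}")
```

The obstructed curve shows the mid-frequency dip discussed above; sharpening amounts to dividing the recorded frequencies by such a curve, which works everywhere except where the curve has fallen to zero.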
  24. Very interesting point about resolution. I have often wondered about that in another context - the secondary mirror on folded-design scopes like a Mak: how does it impact resolution? For the human eye, we can do the math. In fact, I've seen the math done and it matches experience well. The resolution of the human eye is about 1 arc minute. If you take the size of the photoreceptor cells and their density, and if you work out the Airy disk for a regular 5mm eye pupil, you get the same number - resolution should be about 1 arc minute. This means that we have roughly diffraction-limited eyesight. (A rough check of those numbers is sketched below.)
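A rough check of those numbers, using typical assumed values (550 nm light, a 5 mm pupil, about 2.5 micron foveal cone pitch and about 17 mm effective eye focal length - none of these are measurements):

```python
import math

# Rough check of the eye-resolution numbers; wavelength, pupil size, cone pitch
# and eye focal length are assumed typical values, not measurements.
wavelength = 550e-9
pupil = 5e-3
airy_diameter_rad = 2.44 * wavelength / pupil          # full Airy disk diameter
print(f"Airy disk diameter for a 5 mm pupil: {math.degrees(airy_diameter_rad) * 60:.2f} arcmin")

cone_pitch = 2.5e-6
eye_fl = 17e-3
# Two receptors per resolved line pair (Nyquist) -> smallest resolvable angle:
receptor_limit_rad = 2 * cone_pitch / eye_fl
print(f"Receptor-spacing limit: {math.degrees(receptor_limit_rad) * 60:.2f} arcmin")
```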
  25. Mirrors, and gentle use of a delicate blower, at their finest.