Everything posted by vlaiv

  1. Hi Andrew and welcome to SGL. If you are interested in planetary imaging, the Celestron NexStar 6SE is a very decent option (for visual astronomy as well). Stacking software is something you can either purchase or download for free. In fact, the software most used for planetary imaging is free (some packages require purchasing a license, but there are free/open-source alternatives). Most of the software you'll use accepts donations as well - so you can donate some money to keep it evolving and improving. Planetary imaging works by taking "a movie" - or rather a fast succession of stills - and then stacking those, or some of those "subs". You stack the ones that are not terribly ruined by atmospheric seeing. Software automates this for you and selects the best N% of frames (you tell it how many subframes you want to stack) - see the sketch below this post. Then there is the sharpening and post-processing stage. You can use a DSLR to record a movie, but a better option is to purchase a dedicated astronomy camera for that. Not terribly expensive. The 10MP Celestron NexImage "eyepiece" is one such camera, but I would recommend you purchase something like this instead: https://www.firstlightoptics.com/zwo-cameras/zwo-asi224mc-usb-3-colour-camera.html Or, if you want a bit more resolution / a larger sensor (but a bit more expensive): https://www.firstlightoptics.com/zwo-cameras/zwo-asi-385mc-usb-30-colour-camera.html You'll want a barlow lens (not strictly necessary - it only helps you reach the maximum "zoom" you can use) - don't go crazy on magnification and get a x3 or x5 - a x2 will be enough with your scope. Better yet, get a telecentric lens like this one: https://www.firstlightoptics.com/explore-scientific-eyepieces/explore-scientific-2x-3x-5x-barlow-focal-extender-125.html A T-ring is an adapter to attach a DSLR to your telescope. It converts from the camera's lens mount to one of the standard threads in astro applications - there are a few popular ones, like T2 (so a T-ring is actually a T2 ring), M48, and 2" and 1.25" (the latter using an M28 thread). T2 is M42 if I'm not mistaken, but with a fine pitch (regular M42 has a 1mm pitch while T2 has 0.75mm) - but take these as an intro to what exists and not hard facts on mm sizes (I could be wrong - worth checking elsewhere). A T-ring will be needed only if you plan to image with a DSLR. Btw - it's worth doing for lunar if you want whole-moon shots instead of close-ups - you can do that with regular images (not video): just shoot a bunch of them (a couple of dozen) and stack those. Sometimes people don't bother with stacking and just shoot a couple of frames and pick the sharpest one. Hope this helps, and I'm sure other members will contribute with their views and advice.
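     A minimal sketch of that "best N%" selection step, assuming the video is already loaded as a NumPy array of frames (real tools like AutoStakkert! also align frames before stacking, which is omitted here):

        import numpy as np

        def sharpness(frame):
            # Variance of the gradient magnitude - a simple sharpness proxy;
            # frames blurred by bad seeing score lower.
            gy, gx = np.gradient(frame.astype(float))
            return np.var(np.hypot(gx, gy))

        def stack_best(frames, keep_percent=25):
            scores = np.array([sharpness(f) for f in frames])
            n_keep = max(1, int(len(frames) * keep_percent / 100))
            best = np.argsort(scores)[-n_keep:]   # indices of the sharpest frames
            return frames[best].mean(axis=0)      # average = stacked result

        # e.g. stack the best 25% of 1000 frames of 480x640 video
        frames = np.random.rand(1000, 480, 640)   # stand-in for a real capture
        result = stack_best(frames, keep_percent=25)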
  2. With barlows you want to be at an exact distance to the sensor (not so with telecentric amplifiers). Barlow magnification depends on the barlow-to-sensor distance. The required focus change also depends on this - so if you overdo it, you won't be able to reach proper focus. Usually you don't need a field flattener, as a x2 barlow will only show the inner half of the field (or a quarter by surface) - and the field is pretty flat there. The proper configuration is scope/focuser - barlow - camera, where barlow - camera has a certain distance to achieve the wanted magnification (around what is "prescribed" for that barlow). In order to know the exact distance you need to know the focal length of the barlow - but you can treat the camera as an eyepiece and adjust as you see fit. So screw the nosepiece onto the camera and slide that into the barlow - that is a good starting position.
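     For an idealized thin negative (barlow) lens the relation is M = 1 + d/f, where d is the lens-to-sensor distance and f is the magnitude of the barlow's focal length. A quick sketch with an assumed 100mm barlow element:

        # Magnification of an ideal thin Barlow vs. lens-to-sensor distance.
        def barlow_magnification(d_mm, f_barlow_mm):
            return 1 + d_mm / f_barlow_mm

        f = 100.0                        # assumed 100 mm Barlow focal length
        for d in (50, 100, 150):         # sensor distances in mm
            print(d, "mm ->", barlow_magnification(d, f), "x")
        # 50 mm -> 1.5x, 100 mm -> 2.0x, 150 mm -> 2.5x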
  3. Except you won't see it. The problem, like I said, is wording and interpretation. The secondary shadow appears in the image at the exit pupil - the problem is, when you look at the exit pupil the way we usually think of looking at the exit pupil, you won't see the secondary - you'll see the moon, for example. Take a piece of tracing paper and put it where the exit pupil is - it won't show the moon - it will show the shape of the primary with the secondary shadow. I'm aware of that - the problem is that I don't think of it as an "image".
  4. Same here. Although I know what you mean - when you say image, I instantly think in terms of focused light (probably due to too much time spent contemplating imaging rather than observing).
  5. Btw - when I say wave function I actually mean quantum state.
  6. I hold the opposite opinion. I think they are elements of reality - regardless of the fact that we can't measure wave functions themselves.
  7. Let's take that dual slit experiment then - how do you calculate the interference pattern on the screen? We know what this diagram means. You take a little clock with a hand going around the dial at some "speed", and you send it along a certain path; when it hits something you again let it fly all over the place - there are some rules, like when it hits something the clock hand flips, and so on... Now instead of tracking all possible paths one by one and adding probabilities - track them at the same time: take each path and, when the hand is at 12 o'clock, mark that position. Join all those points. Know what you get? A wavefront of a wave function propagating in space. The wave I'm describing is not a classical wave - and you should not look at it as the EM wave you mentioned, a bunch of photons - it is a wave in the above sense: all path integrals represented as a single entity rather than a bunch of separate paths.
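     The "little clocks" sum is easy to play with in code. A toy sketch for two slits - add a unit phasor exp(i*2*pi*r/lambda) for the path through each slit and square the magnitude (the geometry numbers are illustrative assumptions, not from the post):

        import numpy as np

        wavelength = 500e-9      # 500 nm light
        slit_sep = 50e-6         # 50 um slit separation
        L = 1.0                  # 1 m slit-to-screen distance

        x = np.linspace(-0.02, 0.02, 1000)        # positions on the screen
        r1 = np.hypot(L, x - slit_sep / 2)        # path length via slit 1
        r2 = np.hypot(L, x + slit_sep / 2)        # path length via slit 2
        # one "clock" per path; the hand angle is 2*pi*r/lambda
        amplitude = (np.exp(2j * np.pi * r1 / wavelength)
                     + np.exp(2j * np.pi * r2 / wavelength))
        intensity = np.abs(amplitude) ** 2        # classic two-slit fringes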
  8. The classical EM field approach prevents us from understanding some of the phenomena, and people get the sense that a wave in the field is the thing carrying momentum and energy. Well - that was all about entanglement, or rather decoherence. How else are we supposed to explain the case of a "single" photon and two detectors - the wave reaches both detectors, but once there is a detection on one of them, energy is removed from the whole wave function - and that wave function indeed changes. It happens at the speed of entanglement. In fact - let's do that thought experiment like this: there is a hydrogen atom, an electron jumps down to the ground state and a photon is emitted. We have our rock in a pond - it is a spherical shell that starts propagating from that atom. Put two detectors on opposite sides of this atom, far away - one closer to the atom by a few miles. The wave function "arrives" at the first detector and a photon is detected - it is removed from the quantum field - does the wave function reach the other detector in the same state? It can't. As we know, it represents the probability of detection of a photon - but there is no probability left (I know the one-photon example is flawed, as we assume one and only one photon) for the second detector to fire - hence the wave function must have evolved so. It indeed evolves so by the mechanism of decoherence and entanglement - as the wave function is entangled with the first detector it changes, and we know (or rather think) that entanglement is instantaneous (but it can't carry information, so all is good with GR). I made this example because without it people keep going back to a little bullet of a photon traveling through space. With this example and the above explanation they can see that it is the wave function that is traveling - thus it travels along every possible path at the same time, and interference is clear. It also explains that photons really exist only at two instants - once when energy is given off into the field ("ripple starts") and once when the photon is absorbed - decoherence happens and the "ripple ends". In all other cases they are "smeared" or "interwoven" into this thing we call the wave function. I know that the notion of a wave function gets us back to plain old QM - but I'm talking more about ripples in the EM field than anything else.
  9. Ok - that is a really tough question. What is light made of, and what are photons in fact? We often think of light as either a wave or a stream of some sort of bullet-like particles. Neither is in fact true. The best way to describe the phenomenon is QED - and its language is mathematics. If we start to "visualize" things or try to find an analogy in everyday life, we are bound to get it wrong. However - we don't necessarily need to be right here - we can still try to describe it in layman's terms and perhaps wrong analogies. Let's start with waves, or rather the wave function. Imagine a pond with a rock thrown in - waves start to spread out from that location. Would light be that ripple? What is the photon in this case? That ripple is actually a wave function. It is a complex-valued function - something that ripples in space in the electromagnetic quantum field. Depending on whether you subscribe to the notion of the wave function being an element of reality or not, you'll be in favor of some interpretations of QM versus others. See here to be confused further: https://phys.org/news/2012-04-quantum-function-reality.html In any case - we must look at that wave function, real or not (not in the sense that it is just a mathematical tool used to calculate things rather than something that actually "waves around"), and see how it behaves in order to understand photons and light. Imagine now having two simple cameras that are trying to detect "photons" from this wave - one on each side. What happens when the wave hits either camera? Well, one of two things can happen - either nothing, or one of the cameras detects a photon and the whole wave changes shape instantaneously. The wave can hit a camera and do nothing. That is an important thing to realize. Also important to realize: if we assume that only one photon is in this wave disturbance (here we are making a mistake - I'll explain later why), then once that photon "hits" something - either camera A or camera B - the whole ripple changes at the same time. Does not make sense? Well - spooky action at a distance, the faster-than-light thing; you are not the only one to be baffled by this. Now let's look at why we can't really say there is one photon in that ripple (or rather in our poor analogy for it). Here is one function - it is wave-like. Imagine that function represents a "particle". A particle with exact speed (momentum) would have a precisely known frequency / wavelength. In order to have an exactly known wavelength and frequency, you need a cyclic function - a sine/cosine thing - that repeats to infinity. The above function is not like that. Our pond ripple is not like that. Do you know the precise location? Well, we could say that the "center" of the function is the position - but what if the ripple is not symmetric - in fact the "particle" is anywhere along that oscillation. The more exactly you know the wavelength, the more this function needs to repeat and the longer it gets - the more the position is spread out - the less you know where the particle is. Now imagine you only have one "up/down" change - it will be very localized. Here is another image: on the left is what we are talking about - a single ripple, a few ripples, more ripples - the further you go, the closer you get to a sine wave going off to infinity. At the top we have a very well defined location, but we don't really have a wavelength defined - where do we start measuring it? When we have a few ripples it is a bit easier to say the wavelength is probably this much, but suddenly we start to have a problem with location, and so on... This is the uncertainty principle explained in terms of the wave function.
A single photon has a well-defined wavelength - now take the above ripple. Can you decompose it into a single well-defined wavelength? You can't really, because it would need to go off to infinity - you would need an infinite number of such well-defined wavelengths. For this reason we can't really tell how many photons are in a wave function ripple. Like Andrew said above - the photon number operator is not well defined. Once energy transfer takes place - a camera detects a photon - it is removed from the above wave function, which then changes shape. That interaction is a very complex thing, as it involves a bunch of entanglements with the environment and so on... We did not mention one more very important feature of the wave function, and that is interference with itself. We often say light interferes, or a photon interferes with itself - in reality it is the wave function that interferes with itself. It defines the probability that energy transfer will take place. How is this for starting to define what light is?
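     The ripple-count vs. wavelength trade-off above is easy to demonstrate numerically - take a wave packet, vary how many cycles fit under its envelope, and look at the spread of its Fourier spectrum (all numbers here are arbitrary, purely for illustration):

        import numpy as np

        t = np.linspace(-50, 50, 4096)
        for n_cycles in (2, 10, 50):
            width = n_cycles / 2.0                     # envelope grows with cycle count
            packet = np.cos(2 * np.pi * t) * np.exp(-(t / width) ** 2)
            spectrum = np.abs(np.fft.rfft(packet)) ** 2
            freq = np.fft.rfftfreq(t.size, d=t[1] - t[0])
            mean = np.sum(freq * spectrum) / np.sum(spectrum)
            spread = np.sqrt(np.sum((freq - mean) ** 2 * spectrum) / np.sum(spectrum))
            # few cycles -> well-located packet, broad spectrum; many cycles -> the reverse
            print(n_cycles, "cycles -> frequency spread", round(spread, 4))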
  10. Here is some starter info: http://math.ucr.edu/home/baez/physics/Quantum/see_a_photon.html Maybe the most important paragraph is the first one. As for daytime exhaustion of rhodopsin, take a look here: https://en.wikipedia.org/wiki/Night_vision - the "Biological night vision" section. Interesting fact - keep your liver clean if you want good dark adaptation. High levels of light during the day cause depletion of these chemicals, and it takes a day or two for things to get back to normal.
  11. I think the threshold to trigger a sensation is something like 7 photons. The eye/brain is very interesting in how it processes the image. It is well capable of seeing very low photon levels, and we know that at these levels there is a high level of noise - but we never see that noise. This is because our brain has some sort of noise filter. I only once managed to see the noise associated with light, and that was "cheating" - I took a Ha filter (a Baader 7nm one) and sat in a relatively dim room while outside was a bright summer day with plenty of sunshine. I held the filter in front of my eye (not too close) and looked outside. It was a noisy image! I guess a couple of factors contributed to tricking the brain into shutting off its denoising. It was bright outside - enough for the pupils to contract. It was rather dim in the room itself (compared to outside) - I guess that prevented too much stray light from interfering. It was not very dark, so I guess the brain was still in daylight "mode" - expecting plenty of light.
  12. Ah, ok - the exit pupil is the imaginary circle that all parallel rays from a single star pass through. There is a special "exit pupil" - or rather the same thing but at a special place - imagine that each set of parallel rays forms a circle. This circle is not at any particular place - it can be anywhere along those parallel rays. However - if you take all the sets of parallel rays, they all intersect in one place (or at least the majority of them do) - this is the exit pupil we talk about when we talk about eye relief, the thing that hovers. Here we have another diagram, with the exit pupil and eye relief marked. Note that the place of the exit pupil is where all the parallel ray sets intersect - the blue rays are from one star - they converge in one point on the focal plane; the red rays are from another star and the green from a third. All three sets of parallel rays exit - and have the same "diameter", or the same circle between them - the blue rays pass through an imaginary circle of a certain diameter, and the green and red do as well. They all cross at the same place - and this is where all three imaginary circles coincide, forming the single circle all parallel rays come through - this is the exit pupil, and its distance is the eye relief.
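     In the thin-lens picture, the exit pupil is simply the image of the objective (the aperture) formed by the eyepiece, so both its position and size follow from the lens equation 1/v - 1/u = 1/f. A minimal sketch, assuming ideal thin lenses (real multi-element eyepieces will differ):

        # Exit pupil position behind the eyepiece and its diameter.
        def exit_pupil(f_obj_mm, f_ep_mm, aperture_mm):
            u = -(f_obj_mm + f_ep_mm)              # objective as seen by the eyepiece
            v = 1 / (1 / f_ep_mm + 1 / u)          # image of the objective = exit pupil
            diameter = aperture_mm * f_ep_mm / f_obj_mm   # = aperture / magnification
            return v, diameter

        pos, dia = exit_pupil(f_obj_mm=1000, f_ep_mm=20, aperture_mm=200)
        print(round(pos, 2), "mm behind the eyepiece,", dia, "mm wide")  # ~20.4, 4.0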
  13. Yes - you can use an eyepiece as a magnifying glass - screw off the barrel, put it close to your smartphone and enjoy the large pixels. Or you can do the following to run the "diagram" in reverse - take binoculars and look in the front: things will look smaller and more distant rather than larger and closer.
  14. In a sense, yes - for the same "grid" (like a camera sensor with pixels), a longer focal length will increase resolution in terms of arc seconds per pixel (see the sketch below). We have to be careful when using the word resolution, as it has so many meanings. Final piece of the puzzle. First there is this: converging rays will come to one point at the focal plane but will continue to diverge if nothing stops them. Simple as that. It is important to note that the angle of convergence is the same as the angle of divergence for a ray - the same thing as saying light rays are straight lines. Now we take a small telescope and run things in "reverse". In this diagram arrows are put on the rays, the objective is marked as objective and the eyepiece is marked as eyepiece. In reality, this diagram can be read in reverse - it can go from right to left and things would remain the same. If we remove the labels and change the arrow directions, it will still be a valid diagram. An eyepiece is just a small telescope, or rather a telescope with short FL where light is "running in reverse" - or rather light does what it always does - moves in a straight line. The same way rays arrive parallel at the entrance pupil, they leave parallel at the exit pupil. The difference is that the entrance pupil is larger, because the focal length of the front scope is larger, and we call it the aperture. The exit pupil is smaller because the focal length of the second "telescope" is smaller. Rays diverge at the same angle - they just don't have enough room to spread as much, since the focal length is shorter - that is all. In fact aperture:FL = aperture:FL, where the left side is one telescope and the right side is the other "telescope" (the eyepiece). The only thing we did not see in the above diagram is magnification. We have seen how parallel rays become parallel rays again at exit, and how the pupil decreases (or increases if we swap the telescopes around - it's up to the focal lengths). The last piece of the puzzle has to do with focal lengths, angles, and the distance from the optical axis that we talked about. We said the distance of a point on the focal plane depends on 1) the angle of the parallel rays and 2) the focal length. If a point on the focal plane is the same for two scopes, and one scope has a smaller focal length than the other, then the two scopes will have different angles - in fact, the scope with the larger focal length has the smaller angles. It is this angle amplification that is the actual magnification that a telescope + eyepiece (or two telescopes, or two lenses) provides. That is why we see a larger image - because to our eye it is as if there is no telescope, only parallel rays coming in at larger angles - and they would come at larger angles if the thing were indeed larger - so we see it as enlarged. Makes sense?
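     The "arc seconds per pixel" sense of resolution from the first sentence is a one-liner: scale = 206.265 x pixel size (um) / focal length (mm). Camera and scope values below are illustrative assumptions:

        # Image scale in arcseconds per pixel (206265 arcsec per radian).
        def image_scale(pixel_um, focal_mm):
            return 206.265 * pixel_um / focal_mm

        print(image_scale(3.75, 500))    # 3.75 um pixels at 500 mm -> ~1.55 "/px
        print(image_scale(3.75, 1000))   # same camera at 1000 mm   -> ~0.77 "/px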
  15. An important point for further understanding: the distance of a star's image (the focal point for that particular star) from the center of the frame depends on the angle, but it also depends on the focal length of the telescope. Short focal length telescopes are "wide field" and long focal length telescopes are "narrow field" because of this. If we have a star at the same angle and two telescopes, one with 500mm FL and the other with 1000mm FL, the second telescope will form the image of the star at twice the distance from the center of the frame compared to the first. Note - this is not magnification (although it looks like it), but it is related to magnification - this is why it is easier for longer FL telescopes to magnify more.
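     In formula form the offset is d = f * tan(angle); a quick check of the 500mm vs 1000mm comparison above:

        import math

        # Distance of a star image from the frame centre, d = f * tan(angle).
        def star_offset_mm(focal_mm, angle_deg):
            return focal_mm * math.tan(math.radians(angle_deg))

        print(star_offset_mm(500, 1.0))    # ~8.73 mm from centre
        print(star_offset_mm(1000, 1.0))   # ~17.45 mm - twice the distance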
  16. Yes, yes, yes. Yes - that is what we sometimes call the focal point - but we also call any point of interest on the focal plane a focal point. The principal focal point, shall we say. When we talk about a lens, the focus point / focal point of that lens is the principal focal point - the place where rays parallel to the principal optical axis converge. If we are talking about a focused star that is off axis and want to refer to the place on the focal plane where all those rays converge, we say it's the focal point of that star.
  17. What you have here is an aperture obstruction, or aperture mask. It just makes less light reach the objective / mirror. Here is an example: some of the parallel rays land outside the tube - they miss the telescope. Some of the parallel rays land on the "inside" of the tube but don't reach the mirror. Some enter the tube and hit the main mirror. Some hit the tube on the outside. We don't know and don't care about all those parallel rays that did not make it - the mirror will collect all the photons that did make it and converge them into a star image. What you see here is a form of vignetting - the image is a bit fainter the further away from the principal axis you get, because fewer photons made it at a larger angle.
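     To first order the light loss from a mask is just the ratio of collecting areas (ignoring any central obstruction); the values here are assumptions for illustration:

        # Fraction of light passed by an aperture mask, by area ratio.
        def throughput(masked_mm, full_mm):
            return (masked_mm / full_mm) ** 2

        print(throughput(150, 200))   # 150 mm mask on a 200 mm scope -> 0.5625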
  18. Yes, in a simple design the curvature of the focal plane has to do with focal length (and hence to some degree F/ratio - but aperture is not that important). The focal plane is curved because rays that converge further away from the center of the focal plane converge "closer". This is in fact not true - they converge at the "same distance" - but since that distance is measured at an angle, it looks shorter. Look at the central rays - both blue and red - the two that go through the exact center of the lens. The points where the rays converge are at the same distance from the center of the lens, and these two rays traveled the same distance to their respective converging points. This is field curvature, and it depends on the focal length of the lens.
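     Put differently: in this simple picture the focal "plane" is really a sphere of radius roughly equal to the focal length, so the defocus at field height h is the sagitta of that sphere, approximately h^2 / (2f). Illustrative numbers:

        # Defocus (sagitta of the curved focal surface) at a given field height.
        def sag_mm(focal_mm, field_height_mm):
            return field_height_mm ** 2 / (2 * focal_mm)

        print(sag_mm(500, 10))    # 0.1 mm defocus 10 mm off centre
        print(sag_mm(1000, 10))   # 0.05 mm - longer focal length, flatter field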
  19. No. FOV depends on focal length, or how much the light rays are bent after they arrive. If you block a portion of the rays while they are still parallel, it is the same as using a smaller aperture.
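     A related quick formula - the true field of view set by an eyepiece's field stop is approximately field stop diameter / focal length (in radians); note it depends on focal length, not aperture. The field stop size below is an assumed example:

        import math

        # True field of view from field stop diameter and telescope focal length.
        def true_fov_deg(field_stop_mm, focal_mm):
            return math.degrees(field_stop_mm / focal_mm)

        print(true_fov_deg(27, 650))    # 27 mm stop at 650 mm  -> ~2.38 deg
        print(true_fov_deg(27, 1300))   # same stop at 1300 mm  -> ~1.19 deg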
  20. Great. The next thing to realize - and here the moon will be a great help - is that different points at infinity arrive at different angles. Do this thought experiment: take a ruler and point it at the center of the moon. When you look at one edge of the moon, it is at a slight angle to that ruler. When you look at the other edge of the moon, it is at a slight angle again, but to the other side. The angle at which parallel rays arrive at the aperture is related to where in the sky the point of origin is. If the scope is aimed directly at a star, the parallel rays will arrive at 90 degrees to the aperture. If the scope is not aimed directly at a star, this will happen: rays will arrive at an angle to the front of the lens, but they will also converge not directly behind the lens - a bit "lower" - still on the focal plane but at some distance from the center. This is why an image forms at the focal plane of a telescope - the star at the center of the FOV is the one the scope is aiming at, while a star at the edge is at an angle to the telescope tube.
  21. Quite possible. When you switch pier side, the DEC worm rotates 180 degrees. Backlash depends on the shape of the worm and how "close" the worm mesh is. Newer mounts use spring-loaded or magnetically loaded worms to address this.
  22. I should have added above - that holds for any object / point that we see as a point - that we don't resolve - like a star, or a point on the moon's surface.
  23. Here is how to best understand it: the further away the object is - irrespective of the relative sizes of the object and the aperture - the smaller the angle between the two lines that connect the object to opposite sides of the aperture. Here the left edge of the triangles is the aperture and the right vertex is the object. The aperture being small relative to the distance to the object means the angle at the vertex is small. When you have a very small angle between two lines, they are effectively parallel to you (and here I mean a very, very small angle - a couple of light years vs 20cm of aperture small - although we don't need to go that far: this holds for the moon as well, even though it is only 384,000 km away; see the quick calculation below). Makes sense?
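     How small is that angle for the moon? A one-liner check:

        import math

        # Angle subtended at the Moon by a 20 cm aperture.
        aperture_m = 0.2
        moon_distance_m = 384_000_000
        angle_rad = 2 * math.atan(aperture_m / 2 / moon_distance_m)
        print(math.degrees(angle_rad) * 3600)   # ~0.0001 arcsecond - effectively parallel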
  24. Want to go bit by bit and see where you get stuck? From a point that is very distant (like really distant - for our purposes this can be a star), the incoming "rays" of light are parallel. Do you understand why this is?
  25. Here it is - this image explains it all. If you have a star / point at some angle alpha to the optical axis, the following will happen:
     - all rays from that point will be parallel before they reach the aperture - at the same angle
     - after the objective they will start to converge, and they finally converge at the focal point - all light from the original star falls onto a single point on the focal plane - this is why the star is in focus on the camera sensor (provided it is focused well), and it also means that the field stop won't remove any light - it only limits the angle that can be seen, as a bigger angle means the point on the focal plane is further away from the center
     - then the rays start to diverge (they just happily go on their own way - since they came to a point, they now continue to spread) - the eyepiece catches those rays and makes them parallel again. A few things to note: the angle is now different - that is magnification. All the parallel rays occupy a certain "circle" - that was the aperture earlier and now it is the exit pupil. The ratio of the angles, and the ratio of the sizes of these pupils, is the magnification (see the sketch below).
     - the eye is the same thing as a telescope - it is a device that again focuses parallel rays. Thus the field stop can't act as an aperture stop, because all rays from the "aperture" have been squeezed into a single point on the focal plane.
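     A quick numerical check of those magnification ratios, with illustrative numbers (a 200mm f/5 scope and a 10mm eyepiece; the star angle is an assumption):

        import math

        f_scope, f_ep, aperture = 1000.0, 10.0, 200.0
        mag = f_scope / f_ep                            # 100x
        exit_pupil = aperture / mag                     # 2 mm

        angle_in = 0.05                                 # star 0.05 deg off axis (assumed)
        h = f_scope * math.tan(math.radians(angle_in))  # height on the focal plane
        angle_out = math.degrees(math.atan(h / f_ep))   # angle leaving the eyepiece

        # ratio of angles ~ ratio of focal lengths ~ aperture / exit pupil
        print(mag, exit_pupil, round(angle_out / angle_in, 1))   # 100.0 2.0 99.7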