Everything posted by vlaiv

  1. Create the master dark with sigma clip stacking instead of a simple average. There is a minimal chance that a cosmic ray will hit two frames in exactly the same place, so you can set the rejection threshold fairly low. In principle it should reject only one pixel, maybe two, given such a low probability (see the sketch below).
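A minimal sketch of what sigma clip stacking does, assuming the dark frames are available as equal-sized numpy arrays (the function name and kappa value are illustrative, not any particular tool's API):

```python
import numpy as np

def sigma_clip_stack(frames, kappa=3.0, iterations=2):
    """Average frames, rejecting per-pixel outliers (e.g. cosmic ray hits)
    that deviate from the stack mean by more than kappa standard deviations."""
    stack = np.stack(frames).astype(np.float64)   # shape: (N, height, width)
    keep = np.ones(stack.shape, dtype=bool)       # True = pixel participates
    for _ in range(iterations):
        data = np.where(keep, stack, np.nan)
        mean = np.nanmean(data, axis=0)
        std = np.nanstd(data, axis=0)
        keep = np.abs(stack - mean) <= kappa * std
    return np.nanmean(np.where(keep, stack, np.nan), axis=0)

# usage: master_dark = sigma_clip_stack(dark_frames, kappa=2.5)
```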
  2. While we are at it - I might be missing something, but we now accept decoherence as the explanation for the measurement problem, except it is not a full explanation as far as I can tell. It nicely explains how "complex" superposition reduces to "ensemble" superposition (terms in quotes are just my mental labels, I don't know the proper terms), but it still does not explain how a single value is picked out of the "ensemble", nor why the system continues to evolve from that particular measured value. We still seem to lack the "ensemble" superposition -> single measured value transition, and an explanation for it.
  3. The person in the spaceship will experience a shorter travel time. In our reference frame the star is 4 ly from Earth. For someone traveling fast, the distance is less due to length contraction; the Earth/star system is moving in the opposite direction at the same speed, so it will take less time for the star to arrive to that observer in their reference frame (see the worked numbers below).
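To put numbers on that (an illustrative velocity, not one from the original post): at $v = 0.8c$ the Lorentz factor is

$$\gamma = \frac{1}{\sqrt{1 - v^2/c^2}} = \frac{1}{\sqrt{1 - 0.64}} \approx 1.67$$

so the 4 ly distance contracts to $4/\gamma = 2.4$ ly in the traveler's frame, and the trip takes $2.4/0.8 = 3$ years for them, versus $4/0.8 = 5$ years as measured from Earth.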
  4. Well, neither. When we talk about entanglement and relativity, we think of the problem phrased as "nothing moves faster than the speed of light". Then we naively assume that measurement at A instantaneously "collapses" the wavefunction at B, or "causes" the wavefunction collapse at B. So it might seem that there is something traveling faster than the speed of light - the information that A has been measured "causes" the state of the wavefunction at B. It looks like some sort of information traveled faster than the speed of light, but that is not the case. It has been shown that you can't transfer any sort of information / do communication via entanglement. It is also wrong to assume wavefunction collapse due to measurement and impute anything "causal" to the measurements of A and B. Measurement of A does not cause the result of measurement at B. They are indeed correlated, and when you throw time into the mix, and particularly a different order of events, you run into a "paradox" if you think in causal terms. My concern is that the correlation, together with the different order of events, points to something that we discard at the moment. I'm still trying to wrap my head around whether it indeed means determinism of sorts, or whether there is "a way out" - a different explanation.
It depends on the reference system it is measured in. If I'm on Earth and I expect to get a message from you that you arrived at the nearest star, I'd be silly to expect it sooner than 8 years from your departure.
  5. Don't do bias frames with CMOS sensors - most have unusable bias frames for one reason or another. You don't need bias frames for proper calibration if you match the temperature and exposure length of the corresponding frames (lights and darks, flats and flat darks). I think it should work well. Not sure what you are asking - they are easy to see in a sub, especially if you have multiple subs - just blink them and you will see if there are any. You will recognize one by:
- it looks like a hot pixel, but it is not a single pixel - it is multiple adjacent pixels, most of which are saturated
- it does not look like a star (no Gaussian profile)
- if it is "round", the impact was close to perpendicular to the sensor surface; if it is a streak, the impact was closer to parallel to the sensor surface
Depending on the number of subs you are stacking, you can also see them in the final stack if you don't use sigma clip stacking, but it usually takes stretching the data to notice them (much like hot pixels). Although the signal saturates in a single frame, it is only one of the frames, and the more frames you stack, the less contribution those saturated pixels will have (see the estimate below). Stack enough frames and you won't be able to distinguish it from the noise.
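To make that last point concrete (a back-of-envelope estimate, not from the original post): if a cosmic-ray pixel saturates at value $S$ in one frame out of $N$, while the true background there is $\bar{b}$, a plain average gives

$$\frac{S + (N-1)\,\bar{b}}{N} = \bar{b} + \frac{S - \bar{b}}{N}$$

so the excess over the background falls off as $1/N$; with enough frames it drops below the noise floor, and sigma clipping removes it entirely.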
  6. I still don't get how the causality associated with quantum entanglement differs from "naive causality". We still naively believe that measurement at A causes the correlated value of measurement B. As shown above, A can cause B and at "the same time" (not real time, figure of speech) B can cause A. This is actually deeper than the paradox would suggest. As the A and B measurements each interact with the environment, they leak information into the environment via decoherence, or become entangled with the environment on some level - so which one becomes entangled first? There is no first one - A is first and B is first, depending on your reference frame. This entanglement propagates in the environment, but again, which one is "first"? If we accept that both are first, we need to accept that on some level reality can depend on the reference system (beyond the things we already know depend on the reference system, like length, time and so on), or that we live in a predetermined universe.
  7. I'm not sure you understand both Schrodinger's cat and entanglement. Schrodinger's cat has more to do with the measurement "paradox" - since resolved. The currently accepted theory is that of decoherence. The idea behind the cat is the question of why we measure / see / observe only a definite state of a system, while QM tells us that the system exists and evolves in a superposition of states. Why can't we see that superposition - the dead cat / alive cat sort of thing. The answer to that is in decoherence - once a quantum system starts interacting with the environment, it "loses" the phase that enables superposition. The phase correlation "leaks" into the environment, and you are left with states that are not in superposition. We still have the problem of measurement - how a particular state is "selected" - and the currently accepted answer is: randomly, with a certain probability. Entanglement, on the other hand, is a different sort of effect - it is much like two QM systems forming one large QM system that behaves as one entity, regardless of how far apart the constituents of that system are. There is a bonus to this - the larger system behaves with certain correlations permitted by the laws of physics - usually a conservation law.
  8. I would argue that we assign the term "causally connected" to these events due to our experience - local forward time flow. We say that something is causally connected if there is a certain order of events and there is information transfer between the events in our frame of reference. In our mind we equate that with the mundane statement: A caused B. Forget physics for a moment and just look at logical systems. A => B & B => A (A implies B and B implies A) leads to A <=> B (A is equivalent to B). I would say that the above logical expression is quite like saying: A caused B but B also caused A - both A and B were bound to happen.
  9. Not sure there is a straightforward answer to that. In a sense, yes. Newtonian determinism implies that there is only one system trajectory allowed by the laws of physics. Quantum uncertainty is telling us that there is no single trajectory of a system allowed by physical laws. This is not because the laws of physics are incomplete (can't predict the trajectory), but rather because nature acts as if all trajectories are possible, yet one is taken. In the exact same setup, the laws of physics would again predict all possible trajectories and assign them "probability", and again the system would follow one of the "allowed" trajectories, but you can't know in advance which one. In that sense there is no Newtonian determinism. The laws of physics are complete in the sense that they describe the possible trajectories and assign "probability" to them, and show that you can't, even in principle, know in advance which trajectory the system will follow out of the possible ones. Here I use the term trajectory in the Newtonian sense. It is in fact the case that the system always follows one trajectory (which we can view as a sum of classical trajectories), but the measurement is the thing you can't give an exact answer for - it is probabilistic in nature. If we follow up on that, we need to understand what sort of probability we are talking about - or rather, which interpretation of probability. It is closest to the ensemble interpretation of probability - do the experiment enough times, and the frequency of a certain measurement, as the number of measurements approaches infinity, will approach said probability. There is another interpretation of probability (well, there are more than just two, but let's focus on this one for a second) - the propensity interpretation: the system has the property of being inclined to act more one way than another. The above thought experiment with the entangled pair and different reference frames goes against this interpretation of probability. It is more in line with a "lack of knowledge" (even in principle) of how the system will behave. We have just enough knowledge not to know the exact particular measurement, but to characterize it to a certain degree - the likelihood of occurrence over repeated experiments (not to be confused with hidden variables).
  10. Yes, I'm aware of all of that, but I don't see how it is connected to causality. You can't use it to establish a causal connection, in the same way you can't use it to send information (causality and information transfer are the same thing). On the other hand, that brings us to a very interesting paradox that has not been resolved, at least not to my knowledge. Take the same setup: an entangled pair produced and separated, then measured by Alice and Bob. Nothing strange so far, but given enough separation between the measurement events, there exists a reference frame in which the A measurement is performed first, and there also exists a reference frame in which B is performed first. Now that is a paradox - decoherence at A could cause the correlated measurement at B, but in another reference frame, decoherence at B would cause the correlated measurement at A. The only solution to this is that A and B were predetermined, even if in principle there was no information about the outcome (no hidden variables) - which would mean that we live in a "deterministic" world. Although we have probabilities of events, that probability arises because we don't have knowledge of what is to happen, and not because there is no true determinism.
  11. I would add the following: Get as many darks as possible. Don't settle for 20 - I have sometimes used more than x10 that number. Remember, each dark carries both dark current noise and bias noise in it. Stacking 20 of them will reduce that noise by about x4.5 (square root of 20). You end up "injecting" that noise back into each sub when doing calibration. If you don't dither between exposures, this can translate into injecting that much noise into the final image (if all lights are perfectly aligned). Don't be afraid to get as many darks as possible - it will make a difference (see the numbers below). Depending on your environment, you might consider using a different stacking algorithm than average. Average is good if you have a "clean" environment, but if there is any chance of cosmic rays, or radiation, or anything like that, then use sigma clip stacking instead. This is especially important if you use long exposure subs and take many darks - at some point you could have a cosmic ray hit or a radiation hit. It happened to me more than once; I had one set of darks with such an "event" on almost every frame at least once - it was due to wood ash near the camera. The camera was placed next to a fireplace (not burning) and there was residual ash that emits a bit of radiation. I don't have such issues, or they happen very rarely, when I take my darks in the basement. A master dark is well worth making - and not only one master dark. In fact, do several master darks: at different temperatures (summer darks / winter darks) as well as for different exposures (if you do short exposure "fill-ins" to capture star color that would otherwise saturate in a regular exposure - this might be needed for very bright objects sometimes as well).
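For a rough sense of the numbers (standard square-root-of-N noise scaling; the frame counts are illustrative): stacking $N$ darks reduces the random noise in the master dark by a factor of $\sqrt{N}$,

$$\sigma_{\text{master}} = \frac{\sigma_{\text{single dark}}}{\sqrt{N}}, \qquad \sqrt{20} \approx 4.5, \quad \sqrt{200} \approx 14.1$$

so going from 20 darks to 200 cuts the calibration noise injected into each sub by a further factor of about 3.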
  12. You will be using a DSLR camera. When using DSLR / OSC (one shot color) cameras, you don't need additional RGB filters, and the workflow is just slightly different in that case. OSC sensors have something called a Bayer matrix. This means that each adjacent group of 4 pixels (2x2 arrangement) has 1 "red" pixel, 1 "blue" pixel, and two "green" pixels. Each of these pixels has its own little integrated filter - either R, G or B. A raw image taken with such a camera is "mono" - meaning it has only one value per pixel (and not 3 values like RGB). To turn such an image into an RGB image you need to do something called debayering / demosaicing. Software handles that for you, but you need to tell it that you are working with Bayer matrix RAW images rather than regular mono images. You also need to know the Bayer matrix order (I think it is written in the raw files for DSLRs, and you can get the specs from the camera vendor for astro OSC cameras). It will be something like R-G-G-B or similar (left to right, top then bottom row in the 2x2 segment). When working with a DSLR camera, this would be your workflow:
1. Because your camera is not cooled, get some bias frames at the beginning. Bias frames are just the readout signal from the camera sensor - that means minimum exposure time and no light, so cover the scope and set the minimum exposure length. Bias frames need the same ISO and other settings as your lights.
2. Get some dark frames before you start. Dark frames represent the camera response to a long exposure without any light - just the signal accumulated due to the heat of the sensor. They contain the bias signal within (that is why we take bias separately, so we can remove it later if it suits our purpose). Dark frames must be at the same settings as the lights and of the same duration, with the scope covered - meaning no light is present.
3. Do a set of exposures on your object (same settings as the darks, same exposure duration, but remember to uncover the scope).
4. Do another set of darks at the end. In principle you don't need to do this, but since you don't have a set point cooling camera, and there will be a temperature drop during the night, darks at the end of the session will be a better match for light subs taken close to the end of the session. Most people don't bother with this and only take darks at the beginning or at the end of a session (some don't bother with darks at all, but more on that later).
5. Take flats and flat darks. Flats are taken by various means - some people use dedicated flat boxes, some wait till dawn, point the scope at the sky and cover it with a white T-shirt or some other cloth to make the field illumination even. Some use a laptop screen held against the telescope. The point with flats is to provide some sort of flat/uniform illumination inside the telescope. You want your exposure to be such that there is no clipping in the histogram and the histogram peak is somewhere between 2/3 and 3/4 (you are aiming for strong signal, but no clipping, in the linear region of the sensor). Flats are usually very short exposures (sometimes as low as a dozen ms, sometimes up to 1-2 seconds, depending on the camera and the strength of the flat source). Flat darks are subs taken with the exact same settings as flats except the light is removed - you take them with your scope covered. They are used to calibrate the flat frames.
6. After you have all the frames, fire up the stacking program, tell it about all the frames you have, select suitable options - like telling it that it is working with an OSC Bayer matrix and such - and it will produce a high dynamic range stack of your images, usually in either 16 or 32 bit (a minimal sketch of the underlying calibration arithmetic follows below). If you load this result it will be a color image, but you won't see almost anything in it except a few stars. This is because the image is of such high dynamic range, and the target so dim, that it hides in the shadows. The next step is to "develop" your image - that is, do the processing in your favorite image manipulation software. You load this image, and the first thing you do is a histogram stretch to bring the signal hiding in the shadows into view. This is a delicate operation because there will be noise as well. You want to hit the sweet spot between showing enough of the signal while not showing too much of the noise. Next you can do some color balancing, saturation, denoising, sharpening - whatever suits your fancy.
Now just a few more pointers. There are a couple of ways to do proper calibration. All other calibrations will not be "proper", but that does not mean they can't work in particular cases. The main problem you will face is the fact that you don't have set point cooling on your DSLR, so sensor temperature will change with ambient temperature (as the imaging session lasts into the night, the ambient temperature will fall - this means check your focus from time to time, and it also means the sensor temperature will change). If you don't have matching darks, you can't do proper calibration - and matching includes the temperature at which both lights and darks were taken. There is an algorithmic way around this, called dark frame optimization / scaling: the algorithm tries to match darks to lights by scaling them. This works fine in some cases, but for it to work you need bias frames. The bias signal does not depend on temperature and needs to be removed from the darks before the algorithm tries to scale them. Some sensors don't have usable bias and you can't use this algorithm with them; I think most DSLRs have stable bias, so you can use it. Because of this issue with darks and temperature, some people choose not to use darks at all - they just use bias files to remove the bias and count on the fact that dark current is uniform across the sensor (sensors with amp glow have non-uniform dark current, and amp glow will show in this case). You can also skip flats - you run the risk of dust bunnies or vignetting showing in your image. You can skip flat darks, but in that case, as well as when not using dark frames, you run the risk of flats over- or under-correcting. How much of these problems will show, if any, depends on the particular camera and settings, so you will have to experiment, or take advice from someone who uses a DSLR camera (I only use dedicated astro cameras, so I can't give precise advice for DSLRs). Hope this helps and answers your questions (and sorry if I just added confusion).
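To make the calibration arithmetic concrete, here is a minimal sketch, assuming the frames are already loaded as lists of float numpy arrays (all names are illustrative - real stacking software does this, plus debayering and alignment, for you):

```python
import numpy as np

# Master calibration frames: plain averages here; sigma clip also works (post 1)
master_dark = np.mean(darks, axis=0)            # matched to lights (temp, exposure)
master_flat_dark = np.mean(flat_darks, axis=0)  # matched to flats

# Calibrate the flat and normalize it so dividing by it preserves signal level
master_flat = np.mean(flats, axis=0) - master_flat_dark
master_flat /= np.mean(master_flat)

# Calibrate each light: the matched dark removes both dark current and bias;
# dividing by the flat corrects vignetting and dust shadows
calibrated_lights = [(light - master_dark) / master_flat for light in lights]
```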
  13. We call it the speed of light for historical reasons. The proper term would be the speed of light in vacuum, or the modern day description - the speed of causality. The true speed of light varies with the medium it is traveling through. Yes, mass varies depending on the reference system in which it is measured. This is true even for "massless" particles like photons. Granted, they don't have "mass" in the classical sense, but they do have momentum, and that momentum depends on frequency / wavelength (see the relations below). One observes a different wavelength depending on the reference system, so "mass" changes in that case as well. Length changes with the reference system, and so does time. Everything changes with the reference system except the speed of causality.
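For reference, the standard relations behind that (not spelled out in the original post): a photon's momentum is

$$p = \frac{h}{\lambda} = \frac{h\nu}{c} = \frac{E}{c}$$

and since the observed wavelength $\lambda$ is Doppler-shifted between reference frames, the momentum (and the associated effective "mass" $E/c^2$) is frame-dependent too.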
  14. I quite like the idea of RC scopes being used for imaging - I have an 8" one and I think it is great. These scopes have a bunch of strengths when it comes to imaging - being all-mirror systems, they don't suffer any sort of CA and can work in the IR part of the spectrum (and UV for that matter, so no UV/IR cut filter is needed unless you want it for some reason). They have a pretty decently corrected field. The drawbacks could be considered: diffraction spikes for those that don't like them in images (I actually do like them - they remind me of "professional" images that scientists do - at least those that were available in the books and magazines of my youth), somewhat harder collimation, and a longer focal length / slower scope. However, I don't like the idea of dividing scopes into fast and slow camps, at least not as far as imaging is concerned. I like to think of aperture at a given resolution as the defining term for "speed" of acquisition. These RC scopes provide quite a bit of aperture for imaging, starting at 6" and going up. Imaging resolution can be controlled either by focal length or by pixel size. These scopes are very well suited to cameras with larger pixels, or modern cameras with small pixels that you can bin to obtain the wanted resolution. Software binning is now very much feasible with low read noise CMOS sensors. All of this means that the "slowness" of these scopes is in fact not an issue if you approach it the right way. They are capable of quite decently sized fields with a suitable corrector / reducer-flattener, and in general I see them as a very good choice for galaxies, galaxy clusters and smaller nebulae.
  15. From what I gathered, there are two recommendations out there. I have one of them and was not pleased on the first go, but I have since upgraded the focuser on my RC, so I now have a threaded connection and that might make a difference. The options are the CCDT67 by Astro-Physics (I have the CCD47 by TS - which looks like the same item rebranded, or maybe a knockoff of the AP item) and the Riccardi x0.75 FF/FR - which is supposed to work really well, both reducing the FL and flattening the field.
  16. Your best bet is to find the same scope second hand / as scrap, with a broken primary or damaged tube, and salvage the corrector.
  17. Quick fiddle around and I got this (not the method I recommended)
  18. I would try the following - provided that both are shot at the same ISO and aperture, and the only difference is exposure length: take the earthshine image and scale it down by the ratio of the two exposures (for example, if the long exposure is 800ms and the short one is 100ms - divide the signal by 8). Then combine the two frames using "max" values. After that, use a histogram stretch to bring the earthshine into view. A sketch of this follows below.
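A minimal sketch of that combine, assuming both frames are linear (unstretched) float numpy arrays; the array names and exposure values are illustrative:

```python
import numpy as np

long_exp, short_exp = 0.800, 0.100      # exposure lengths in seconds
ratio = long_exp / short_exp            # = 8 in this example

# Scale the long (earthshine) exposure down to the short exposure's level,
# then take the per-pixel maximum so each region keeps its unclipped data
earthshine_scaled = earthshine / ratio
combined = np.maximum(earthshine_scaled, crescent)
# finally, histogram-stretch `combined` to bring the earthshine into view
```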
  19. Although you have a very large high-def sensor (in planetary terms), do be careful which scope you use it on. On some scopes you might be limited to a smaller ROI in the center because of aberrations of the scope. An SCT, for example, will have coma away from the center, and that will impact the sharpness of the image. In order to utilize such a "large" sensor for high res work, you need a scope that will provide 1 inch of corrected field without much aberration. For example, take the spot diagram of a C8 (attached in the original post): the Moon is half a degree across, so you can expect top-row levels of coma at the edges of the Moon (a quarter of a degree away from center). If you expect very good sharpness with a C6 or C8 and that sensor, you will need to work with a few more mosaic panels than the simple sensor angular coverage for the given focal length suggests. You can still do full sensor shots and then decide where to crop by visually inspecting the level of blur, but that way you run a risk of not properly gauging panel spacing. Another option is to simply use an ROI in the center and have smaller angular coverage in a single panel (that has the bonus of faster download time / higher fps achieved). For a high res mosaic, I think a much better option would be an MCT type of scope, or possibly a classical high F/ratio Cassegrain if you can get hold of one from your local club. If you are planning a dedicated lunar scope, you might want to have a look at this model: https://www.teleskop-express.de/shop/product_info.php/info/p10748_TS-Optics-6--f-12-Cassegrain-telescope-154-1848-mm-OTA.html There are 8" and 10" versions, but they are rather expensive. Also, importing to Canada could pile up expenses, but this sort of scope is often branded under different brands (like the RC model being sold by GSO, TS, Altair Astro and even iOptron, judging by the B&H offer), so other vendors might include this model in their lineup at some point as well. That might be a good option. On the matter of setup time for any of those mounts, I'm afraid I can't be much help - I have not used any of them.
  20. You don't get as much undersampling as you might think. Actually, with an 80mm scope, I don't recommend going below say 1.5"/px - 2"/px because there is no point really. There is quite a bit of difference in optimal sampling rates for DSO AP vs planetary AP. With lucky imaging you are used to going after critical resolution - the maximum provided by the aperture. You discard lesser frames and in the end you have enough SNR to do some sort of frequency restoration process - be that wavelets or deconvolution or whatever. With DSO AP things are quite different - there is a different metric associated with optimum sampling rate, and it is the FWHM of stars in your subs. Scope aperture comes into that equation, but there are other contributing factors - seeing, guiding/tracking precision and pixel blur (that one is important to some extent in planetary AP as well). We are used to thinking about seeing in terms of FWHM, but most of the time, the FWHM that we measure, even in a short exposure like 2s where guiding/tracking error is minimized, still depends on aperture as well as seeing conditions. You will get larger FWHM with a smaller scope for the same atmospheric seeing. Actual seeing (the instrument-independent one) is actually better in terms of FWHM than people usually suspect - it is often in the range 1-2". Local seeing can be quite different from that if there are local thermals, as you undoubtedly know (scope thermals, local heat sources like concrete / asphalt pavements and roads, houses, even bodies of water). These have a cumulative effect in long exposure. In any case, here are some "tips" for sampling rate when doing DSO imaging (see also the sketch after this list):
- On most nights with an 80ED you will have about 3"-4" FWHM, not because of seeing, but because of all things combined. A different scope will have a smaller FWHM than that (for example, an 8" reflector).
- Let's say the optimum sampling rate is FWHM / 1.6 (things are not as clear cut here as in planetary, because we use a Gaussian approximation, and the Gaussian shape is not band limited, so we can't find a critical frequency with such an approximation - but we can find a sensible cut-off point). For example, for 3" FWHM you would need to sample at 1.875"/px, while for 4" that would be about 2.5"/px.
- Due to pixel blur you can in practice go x2-3 coarser than that value, meaning you can go up to 4-6"/px and you won't get aliasing artifacts from undersampling. Blocky stars are sort of a myth. You can get single pixel stars if you undersample greatly, but it is up to the interpolation algorithm, when you "zoom in" (and in reality you should not zoom in past 1:1 - meaning one image pixel per one device pixel), how that star will look. Nearest neighbor interpolation will indeed make it look like a square, but other interpolation algorithms will not (they might have other "artifacts"). After all - just look at images done with a simple lens of short focal length (like a Samyang 135 or even a 50mm lens) - no blocky stars, although the aperture is comparable to 80mm (the Samyang 135 is an F/2 lens, so the aperture is 67.5mm - very close to an 80mm scope).
- A DSLR or any other OSC sensor will in reality sample at twice the "base" pixel size - due to the Bayer matrix, every other pixel "counts" as a sampling point for a particular color, so samples are spaced not one pixel width apart but double that. However, pixel blur will be that of a single pixel width.
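A quick sketch of that rule-of-thumb arithmetic (the 1.6 divisor and the x2-3 slack are the heuristics from the post; the FWHM values are just examples):

```python
# Rule-of-thumb DSO sampling rate from measured star FWHM
def optimal_sampling(fwhm_arcsec, divisor=1.6):
    """Sensible sampling rate in arcsec/px for a given star FWHM."""
    return fwhm_arcsec / divisor

for fwhm in (3.0, 4.0):
    opt = optimal_sampling(fwhm)
    # pixel blur lets you go roughly 2-3x coarser before aliasing shows
    print(f'FWHM {fwhm}" -> ~{opt:.2f}"/px optimal, '
          f'usable up to ~{2 * opt:.1f}-{3 * opt:.1f}"/px')
```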
  21. Very nice image! I have another contender to throw into the mix: how about the AZEQ5 by Skywatcher? It should be in a similar price category to the two mentioned above, but it has a certain advantage over both of them - it can operate in both EQ and ALT-AZ mode. Alt-Az mode is sufficient for most planetary imaging demands, but there is actually one thing that EQ does better, and it looks like it might be beneficial to your work, from what you mentioned you are interested in. As you know, Alt-Az suffers from field rotation, but on the time scale of a single frame, or even a full planetary imaging run, it is simply not a concern. It can however be a concern if you plan to do, for example, a full Moon disk image at high resolution. This would involve a multiple panel mosaic approach - you can have as many as 25-30 panels for your mosaic in some cases (or even more, depending on sensor size and focal length), and if we budget about 5 minutes for each panel (both imaging run and positioning, plus some focus tweaking if there is focus drift due to temperature change), that would mean a total session time of two hours or more. Sometimes you will have an even longer session for special events - like eclipses, or maybe trying to do an animation of a planet, or whatever. In such cases it is better to have an EQ mount because there will be much less field rotation, if any (depending on how good your polar alignment is). With mosaics, it simplifies creation and minimizes the chances of "holes" in the mosaic, because each panel will be oriented the same. With AZ mode you can have significant rotation between the first and last panel if enough time passes, also depending on the target's position in the sky. Then there is EQ mode if you ever decide you have enough time and want to try some other types of imaging, so it's kind of "future proof" in that regard (although the AZEQ5 is a beginner type mount, it will suffice for casual DSO imaging / wide fields, and in general for anything except high resolution / demanding work).
  22. Maybe you should use a different approach, given your choice of sensors? Rather than thinking in terms of maximum resolution on a given night for a particular target, you could decide on the target based on the conditions of any given night. The ASI120mm is not really suitable for large targets unless you want to do a mosaic. A DSLR, on the other hand, is very good for wider fields and larger targets. Less than optimal seeing - go for something large enough that it won't suffer much from missing the finest detail. On a good night, choose to do high resolution on smaller targets - like PNs or small galaxies.
  23. I see from your signature that you have the ES 4.7mm 82. Any idea how that one compares with the 4mm SW in terms of sharpness?
  24. I'm not sure you would want those SW UWA EPs. I've got one of those, at least I think so - it came in a rather dodgy box labeled "WA Plossl"; the EP says UWA, while the box says SWA (it is 58 degrees, 7mm - the FL marking on the EP has worn off). I was rather pleased with that eyepiece when I first got it, because I simply did not know better. It is not as sharp (a 6mm Ortho is noticeably sharper, and so is an 11mm ES82 with a barlow - and without, for that matter), and it has annoying ghosting on bright targets like Jupiter. Otherwise it is ok - eye positioning is easy and there is enough eye relief. The performance of that EP is what kept me from getting these, although they are supposed to be as good as the originals and the best of the TMB clones: https://www.teleskop-express.de/shop/product_info.php/info/p154_TS-Optics-7-mm-Planetary-HR---1-25--Eyepiece--58---fully-multi-coated.html