Everything posted by vlaiv

  1. That is what QM says (or rather something very similar), but that is exactly where the problem lies. Imagine a quantum system that is like a coin - it is flipped and can land in either of two states, heads or tails. These are the eigenstates of the system. For this example we will assign slightly different weights to them - 1/3 and 2/3 instead of the usual 1/2 and 1/2.

     Once the coin is "thrown" (prepared in superposition) and entangles with the environment, the world "branches" into two - a Heads world and a Tails world. When we throw the coin again, the world branches again, and so on. After three flips we have 8 branches (2 to the power of three), i.e. 8 copies of us doing this experiment. We can label those worlds as: HHH, HHT, HTH, HTT, THH, THT, TTH, TTT.

     Some of our copies will be very "unlucky" and get nothing but heads, some will get nothing but tails - but the problem is that we are all going to measure the probability as 1/2 and 1/2, not 1/3 and 2/3. There is simply no explanation why branching into two distinct states would result in us seeing anything other than simple 1/2 - 1/2 statistics, yet there are many systems that don't have simple 1/2 - 1/2 probability. Prepare an electron with spin along some axis and measure it along an axis tilted by 30° - you'll get either spin up or spin down, but with different probabilities. Branching does not explain that. We can't end up in spin up "on average two times out of three", as there is no branching into three branches for two distinct eigenstates. That is not what QM says - it says a*|up> + b*|down>; there is no third branch, we only have a combination of two eigenstates.
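A toy branch-counting sketch (Python; purely illustrative, not a physics simulation) of the argument above: if every branch is counted equally, the overwhelming majority of the 2^N copies record roughly 50% heads, no matter what weights (1/3, 2/3) were assigned to the two eigenstates.

```python
# Enumerate all branches after n_flips and count how often each
# heads-fraction occurs - naive branch counting clusters around 1/2
# and cannot reproduce 1/3 : 2/3 statistics without extra assumptions.
from itertools import product
from collections import Counter

n_flips = 10
freqs = Counter()
for branch in product("HT", repeat=n_flips):
    freqs[branch.count("H") / n_flips] += 1

for f in sorted(freqs):
    print(f"{f:.1f} heads fraction: {freqs[f]} of {2**n_flips} branches")
```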
  2. For me it is really simple reasoning - you get more sensor for your money with the ASI294. Amp glow calibrates out. Sensor area = speed (when paired with adequate optics).
  3. When doing mosaics, it is usually easiest to work with the camera oriented in RA-DEC or DEC-RA (what we would call landscape or portrait). In your example you calculated coordinates for portrait mode, but the camera was at a random angle.

     In order to align it properly there is a simple trick if you don't plate solve. With plate solving it is simple: plate solving in the field will give you the camera angle, and you rotate the camera until you get 0° or 90°. If you don't have plate solving set up, aim your scope at a bright star, start an exposure and slew the scope in RA with your handset (or computer, if you control it via computer). The star will make a streak on your exposure. If the streak is horizontal you have landscape orientation; if it is vertical you have portrait orientation. If it's neither, turn your camera a bit and check again until you reach horizontal or vertical, depending on what you need for best framing.
  4. Here is a decent source of info on different camera models. Not sure how accurate all the info is, but there is a lot of it: https://www.photonstophotos.net/ In particular, since you have experience with the 1100D, here is a comparison of the two: the M100 has a significant boost in QE - 56% vs 35%. Green and orange values represent read noise, although I'm not entirely sure how to interpret those as the website splits them into two - pre / post ADC. Pre ADC noise is pretty much the same, 2.7e vs 2.8e, but post ADC noise is very different, 3.9DN vs 10DN. However, I would think that the post ADC noise level depends on the e/ADU gain factor, so I'm not sure how to interpret those. FWC is about the same at around 30K. Blue and pink bars are Unity ISO and ISO-invariant ISO. The first one is just unity gain; the second one - well, I have no idea, and I'm pretty sure it is useless as a metric. There are other factors to consider, like how useful the camera will be in the field. I was looking into the Canon mirrorless range and am waiting for a model that can shoot tethered over a wireless connection (the way it does over USB). That is a very interesting idea for a no-cables EEVA setup based on the AzGTI - which is wireless itself.
  5. You should probably crop the original flats to twice the required resolution - 4144 x 2820 - and then bin that x2 (or simply crop the binned version to the exact size). "Firmware" binning (CMOS sensors don't really bin in hardware due to the sensor electronics) should really be the same as software binning, but that depends on how it is implemented in the camera software. Maybe ASI decided to discard one row for some reason when binning in firmware, so try to replicate that. A minimal sketch of the crop + bin step is below.
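A minimal sketch of that crop + 2x2 software bin in Python/numpy; the crop offsets and the averaging-vs-summing choice are assumptions, so match whatever your camera firmware actually does:

```python
import numpy as np

def crop_center(img, height, width):
    """Crop an image to (height, width) around its center."""
    y0 = (img.shape[0] - height) // 2
    x0 = (img.shape[1] - width) // 2
    return img[y0:y0 + height, x0:x0 + width]

def bin2x2(img):
    """Software 2x2 bin by averaging (use .sum(...) instead if the firmware sums)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

flat = np.random.rand(2822, 4144)                     # stand-in for the full-size flat (rows, cols)
flat_binned = bin2x2(crop_center(flat, 2820, 4144))   # -> 1410 x 2072, matching the firmware-binned size
```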
  6. To be honest, I find it difficult to compare those two graphs. They use different wavelengths - both in terms of plotted color and in terms of wavelength. The triplet graph only uses three colors - Hb at 486nm, Ha at 656nm and some unspecified green at 588nm (which is technically not even green any more but rather orange) - while the 125mm graph uses five colors, and again the plot colors are not matched to the wavelengths: yellow at 620nm, purple at 680nm and red at 546nm. In both cases, highly misleading. I think both of those graphs are more marketing than meaningful data.

     What I know is that a 100mm FPL-53 doublet needs to be around F/9 in order to be color free - that is the SkyWatcher 100ED for example. As you move down in aperture you can make it faster without too many color issues - that is why the SW 80ED is F/7.5. Going up to 125mm at F/7.8 is pushing it - that is too fast for an ED doublet, for imaging anyway; it might be very good for visual. On the other hand, there is very little difference between a triplet based on FPL-51 glass and one based on FPL-53. In any case, I still believe that the 115mm is better corrected than the 125mm, and yes, for visual one might not even notice any difference. I think they are both great scopes for the money, and given my preferences I'd still go for the 115mm.
  7. I was rather surprised to see how much CA there is in the 125's little brother - the 102 SD doublet (Altair Astro branded 102 ED-R).
  8. I highly doubt that a 125mm F/7.8 ED doublet will have as good color correction as a triplet lens. In reality, for visual, both will offer a virtually color free image (the "virtually" applies more to the 125mm doublet, as the triplet really is color free), but for imaging applications the triplet will simply pull ahead.
  9. Can't contribute much in terms of actual experience - I can only say that out of the three, I only really considered the TS 115 as a possible purchase.
  10. Prime focus is when you place the sensor directly at the imaging plane (no eyepiece or camera lens used). Alternatives for imaging are:

     - eyepiece projection, where the eyepiece acts as a projection lens; it can act either as a reducer or as an amplifier depending on the distance between the eyepiece and the sensor
     - the afocal method, which also involves an eyepiece but uses a camera lens as well; it too can act as a reducer or an amplifier depending on the focal lengths of the eyepiece and lens used (lens > eyepiece gives amplification, lens < eyepiece gives reduction)

     A quick sketch of the resulting effective focal lengths for both setups is below.
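A back-of-the-envelope sketch (Python) of those two cases, using the common thin-lens approximations: afocal effective focal length = scope FL x (lens FL / eyepiece FL), and eyepiece-projection amplification = (eyepiece-to-sensor distance / eyepiece FL) - 1. The numbers plugged in are made up for illustration:

```python
def afocal_efl(scope_fl, eyepiece_fl, lens_fl):
    """Effective focal length of the afocal (scope + eyepiece + camera lens) setup."""
    return scope_fl * (lens_fl / eyepiece_fl)

def projection_amplification(eyepiece_fl, eyepiece_to_sensor):
    """Amplification factor for eyepiece projection (thin-lens approximation)."""
    return eyepiece_to_sensor / eyepiece_fl - 1

# Hypothetical example: 1000mm scope, 25mm eyepiece
print(afocal_efl(1000, 25, 50))           # 50mm lens -> 2000mm (amplification)
print(afocal_efl(1000, 25, 12))           # 12mm lens ->  480mm (reduction)
print(projection_amplification(25, 100))  # sensor 100mm behind the eyepiece -> 3x
```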
  11. 1. Yes (I haven't used it like that, but there is no reason why it needs to be on the guide scope). The only caveat is that it can have a rather small field of view on a long focal length instrument and sometimes that can prevent plate solving (it depends on how many stars you get in the FOV and how many stars there are in the plate solving database). 2. I use mine with the T2 thread so I never worried about scratching the body of the camera. There is also a 1.25" nose piece included that screws onto the T2 thread - it is a replaceable part so you don't have to worry too much about screw markings.

     Yes, it is a very versatile camera - it can do all sorts of things, however it has drawbacks in some uses. For deep sky the sensor is a bit small; for all-sky it is a bit large. I use my ASI185 with a small CS-mount lens that has a very wide FOV, but when I attach that lens to the 178 it does not cover the whole sensor. Btw, I have the cooled version and yes, it is very good for planetary / lunar / solar use and it is decent for deep sky. It can also be used as a guide camera. (A quick FOV estimate for the plate-solving point is sketched below.)
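A rough FOV check (Python) for the plate-solving point above. The sensor dimensions are the nominal IMX178 values (roughly 7.4 x 5.0 mm); the focal lengths are just examples:

```python
import math

def fov_degrees(sensor_mm, focal_length_mm):
    """Angular field of view along one sensor dimension."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_length_mm)))

# IMX178 sensor is roughly 7.4 x 5.0 mm; try a short guide scope vs long focal lengths
for fl in (240, 1000, 2000):
    print(f"{fl}mm: {fov_degrees(7.4, fl):.2f} x {fov_degrees(5.0, fl):.2f} deg")
```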
  12. PI has integer resample which is software binning - simply bin your flats x2 in software and you should be fine.
  13. You are right about the first part - it is enough to subtract bias to remove most of the 'correlated' noise, to use that term. Dithering is beneficial regardless of whether you use darks or not, as it "shuffles" around the uncorrelated bit from the calibration frames (be that darks, bias or flats).

     When I say that not using darks will mess up flats, I'm not thinking about flat darks - but not using those does the same thing (to a lesser extent). Imagine the following scenario: we observe two pixels - one vignetted, receiving 70% of the light, and one in the center of the image receiving 100%. Both receive the same light signal of 100ADU. There is a dark current signal of, say, 5ADU. Bias is already removed. Now we have an image that reads 75ADU and 105ADU (that is with dark current and vignetting). We have two options: remove the dark current signal (dark calibration), or leave it.

     Let's leave it and apply proper flat calibration. Flat calibration in this case means dividing the first pixel by 0.7 and the second pixel by 1.0 (matching the 70% and 100% light throughput). After calibration we should get a "flat" result - both pixels should have the same value: 75 / 0.7 = ~107.143, 105 / 1 = 105. We have a slight over-correction of our vignetting - we did not get equal values, because we did not remove the dark current signal.

     Let's now try with dark calibration: (75 - 5) / 0.7 = 70 / 0.7 = 100 and (105 - 5) / 1.0 = 100 / 1.0 = 100. Now we get perfect correction. That is why we need to do dark calibration: if the dark current is significant in comparison to the recorded signal and we are doing flat calibration, the flat calibration will fail if we don't remove the corresponding dark current signal. The same happens if we don't use flat darks, but flat darks are usually very short (because flats are short) and can be substituted with bias in most cases - unless you want to be 100% correct in the way you calibrate your data. (The same arithmetic is written out as a small sketch below.)
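The same two-pixel example as code (Python/numpy), just to make the over-correction visible; all the values are the ones from the post:

```python
import numpy as np

light = np.array([100.0, 100.0])   # true light signal, ADU
flat = np.array([0.7, 1.0])        # vignetting: 70% and 100% throughput
dark_current = 5.0                 # ADU, same for both pixels

recorded = light * flat + dark_current    # bias already removed -> [75, 105]

print(recorded / flat)                    # no dark calibration   -> [107.14, 105.0]
print((recorded - dark_current) / flat)   # with dark calibration -> [100.0, 100.0]
```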
  14. Darks don't serve to remove noise from the image (noise being the random component of the signal) - they serve to remove fixed signal from the image. Dithering will turn part of the fixed signal into random signal, but the dark current level will not be removed by dithering. That makes the flat calibration wrong. Flat calibration tries to correct the attenuation of light coming through the objective. If you have some residual signal in the image that is not light and you apply flats, you'll "correct" that signal too - a signal that does not need correction - so you'll over-correct it, and you'll have issues with your flat calibration not working. That is the main reason why people should use darks - otherwise bias really is enough to remove almost all of the fixed signal / pattern present in the image. If the dark current of a particular sensor is not strong, the issue with flat calibration might be very slight and might not even show - in that case, well, don't bother with darks, just use bias (if it is stable, of course).
  15. I think that most DSLRs have stable bias. I have only noticed issues with bias on non-cooled dedicated astro cameras and even some cooled ones. For example, my ASI1600 has a useless bias (although it is a cooled camera). It has a higher average ADU value than its darks - which should not happen, as dark is bias signal + dark current signal while bias is just bias signal, and dark current signal can't be negative. This has something to do with the different way exposures are timed - anything less than 1s is timed on the sensor and anything above 1s is timed on the computer. In other cases sensors have auto-calibrating bias - each time the camera is powered on it adjusts its own bias level automatically - but the problem is that it is different each time.

     In order for the algorithm to properly scale darks, it needs the dark signal only, with bias removed, and for that you need good bias files. Here is a simple test that you can do to see if your darks are properly scalable. Take the camera and shoot:

     - some bias subs (you don't need a lot of them - something like 8 of each is fine)
     - some dark subs of a certain duration - say 30s darks
     - some dark subs of double that duration - say 60s darks

     Stack each set into its respective master using a simple average, and then using pixel math create the following image: 2*(30s_dark - bias) - (60s_dark - bias). The resulting image should have an average ADU value of 0 and no discernible patterns - it should be pure noise. If you don't know how to do all of that, I'll do it for you if you wish - just shoot the files and send them over. (A short sketch of the test is below.)

     Actual dark optimization is very simple to do, provided it is implemented in the software - like in DSS: you just check the appropriate checkbox and DSS will do it for you. You only need to have both bias and darks for your calibration.
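A sketch of that scalability test in Python/numpy; the file names and the FITS loading are assumptions, so substitute your own stacking tool if you prefer:

```python
import numpy as np
from astropy.io import fits

def mean_stack(paths):
    """Average a list of FITS subs into a master frame."""
    return np.mean([fits.getdata(p).astype(np.float64) for p in paths], axis=0)

bias = mean_stack([f"bias_{i}.fits" for i in range(8)])       # hypothetical file names
dark30 = mean_stack([f"dark30s_{i}.fits" for i in range(8)])
dark60 = mean_stack([f"dark60s_{i}.fits" for i in range(8)])

test = 2 * (dark30 - bias) - (dark60 - bias)

# If the darks scale properly, this should average ~0 with no visible pattern
print("mean:", test.mean(), "std:", test.std())
```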
  16. That really depends. If you don't have set-point cooling and don't use advanced algorithms, the best course of action is to do half and half: shoot half of your darks before the lights and half after. That will sort of average out the dark / temperature difference. If your camera has stable bias, then you can use advanced algorithms like dark optimization. That algorithm will try to scale the darks automatically to compensate for the temperature difference. In that case it does not make much difference when you shoot your darks, and you can do it prior to the session. In fact, you can keep sets of master darks taken at different temperatures and use the one closest to the shooting temperature. This will save you time, since you don't have to take darks every session.
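A simplified illustration (Python/numpy) of the "scale the dark" idea: estimate how much of the dark-current pattern is present in a light frame with a least-squares fit. This is only a sketch of the concept; real tools such as DSS use their own criteria internally:

```python
import numpy as np

def dark_scale_factor(light, master_dark, master_bias):
    """Least-squares estimate of how much (dark - bias) pattern is in the light frame."""
    d = (master_dark - master_bias).ravel()
    l = (light - master_bias).ravel()
    d = d - d.mean()
    return float(np.dot(l - l.mean(), d) / np.dot(d, d))

# usage (all arrays are 2D frames as floats):
# k = dark_scale_factor(light, master_dark, master_bias)
# calibrated = light - master_bias - k * (master_dark - master_bias)
```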
  17. The best course of action would be to get a decent 6" dob for your budget. I'm tempted to say "forget photography" - but no, let's not do that. If you have a DSLR or even a compact camera and a bit of DIY skill, you can get yourself started in AP on a very low budget. There is something called a barn door tracker that is easy to make (a couple of pieces of plywood, a hinge, screws and a DC motor). It will track the sky well enough to start you off in long exposure imaging with camera and lens. Start there - learn the stacking and processing part - and if you find that you've been bitten by the AP bug, then look into further spending. https://nightskypix.com/how-to-build-a-barn-door-tracker/ https://www.instructables.com/Build-a-Motorized-Barn-Door-Tracker/ https://www.diyphotography.net/how-to-make-a-30-diy-star-tracker-for-astrophotography/
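If you want to size one yourself, the classic single-arm ("tangent") barn door geometry comes down to one number: the hinge-to-screw distance that makes a given screw pitch and motor speed match the sidereal rate. A small sketch (Python), valid only for the small-angle start of the drive:

```python
import math

SIDEREAL_DAY_S = 86164.1                 # seconds
OMEGA = 2 * math.pi / SIDEREAL_DAY_S     # Earth's rotation rate, rad/s

def hinge_distance_mm(screw_pitch_mm, screw_rpm):
    """Hinge-to-screw distance so the door opens at sidereal rate (small angles)."""
    drive_speed = screw_pitch_mm * screw_rpm / 60.0   # mm/s the nut climbs the rod
    return drive_speed / OMEGA

# Common design choice: M6 threaded rod (1.0 mm pitch) turned at 1 RPM
print(hinge_distance_mm(1.0, 1.0))   # ~228.6 mm
```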
  18. Because a barlow lens does not change the geometry of the mirror - it simply enlarges the image at the focal plane. The geometry of the mirror is what creates the aberration, and while we see the light cone as being F/10, it was still produced by an F/5 mirror, not an F/10 one. The barlow changes the light beam only after it has been bent by the F/5 mirror, so the aberration has already been "imprinted" in it. Similarly, if we take a fast F/5 achromat and add a barlow, we won't make the chromatic aberration equivalent to that of an F/10 achromat of the same aperture - we'll have the same level of CA, only magnified at the focal plane.

     You don't need to stop right at the edge of the diffraction limited field, but what happens depending on where you stop is a fairly complex topic. Coma is an asymmetrical aberration, and that is problematic when we try to do frequency restoration at the end (sharpen the image). When we do lucky imaging we end up with an image that is somewhat blurry after stacking and we need to sharpen it up. The better the SNR we achieve and the less blurry the starting image, the better the sharpening results. We are actually able to "sharpen up" even a diffraction limited image.

     Here is a graph that represents the inherent "sharpness" (MTF) of a telescope: the black line is an ideal, perfect, unobstructed aperture - red looks like about a 25% central obstruction or maybe 1/4 wave of spherical aberration. An ideal image with perfect resolution for a given sampling rate would be a constant line at Y=1 - it would not fall towards 0. When we sharpen an image we take the above graph and try to "straighten" it - raise the line back towards a constant Y=1. Sharpening is, for example, able to correct for spherical aberration of the telescope. This is why people make excellent planetary images with SCT telescopes that have a rather large central obstruction - over 30%. In fact, I took this image of Jupiter with a rather fast newtonian that has a spherical mirror - a 130mm F/6.9 spherical mirror: it is a rather sharp and detailed image for such a scope, isn't it? That is because sharpening corrects even the blur caused by spherical aberration.

     With coma, however, things are different - look at this image: next to being blurred, the coma blur is also "smeared" in one direction, and if you correct the blur there will still be some "smear" left. In the MTF diagrams above, some aberrations have two lines on their graph and some have a single line. Coma, astigmatism and seeing have two lines - meaning they are asymmetrical aberrations. Seeing here is just a single moment of seeing; in a stacked image it turns into a more or less symmetrical aberration, simply because we stack many subs, each with a different seeing PSF, and it averages out to a "round" shape.

     Back to why limiting yourself to the diffraction limited field is important:

     - coma is an asymmetric aberration - sharpening will not fix it properly; you'll be able to deblur it somewhat, but some level of "smear" will remain
     - coma depends on distance from the optical axis - a single panel will have different levels of coma blur in the center and at the edge (unless you keep it constrained to the diffraction limited field, which in theory has the same, minimal, level of coma blur). We don't yet have selective sharpening algorithms that apply different levels of sharpening in the center and at the edge, which means you'll either over-sharpen the center or under-sharpen the edge for the same amount of sharpening.

     The result in any case is an image that does not have uniform sharpness, with zones of blurriness where panels join. This can happen with mosaics even if you have perfect optics, since seeing conditions can change between panels and some panels end up sharper than others - but there is nothing we can do about that, and it is transient in nature, meaning next time it might not happen.

     In the end it is good to understand how fast coma grows with field height, and we can use a spot diagram to represent that. Look at part B) of the image above, top right. It shows newtonians of different F/ratios at the center, at 2.1mm and at 3mm off axis. The circle represents the Airy disk, and in theory we want to keep all the dots inside the Airy disk. That will slightly enlarge the Airy disk, but not by much (the Airy disk is the interference pattern that forms when all rays come to a single point; the spots are different rays hitting in different places. They have a minimal phase shift due to not hitting the same place, which will somewhat change the interference pattern as well). In any case, at 2.1mm off axis at F/5 we have a spot diagram slightly larger than the Airy disk, but at 3mm it is twice as large - the aberration grows rapidly. In fact, in the top left part of the image you can see the Airy disk with its first ring and the corresponding spot diagram. An F/5 newtonian at 2.1mm off axis has the same spot diagram as an F/8.15 newtonian at 6mm off axis, and the Airy pattern looks like a classic case of miscollimation - and you know how soft the image is when there is a bit of miscollimation.

     Bottom line - I would try not to exceed the diffraction limited field if at all possible, but you can get a larger sensor, simply use ROI, and experiment. Take larger and smaller panels, see what sort of difference you are getting, then settle for the best compromise between panel size and sharpness. HTH
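To put rough numbers on how fast coma grows, here is a small sketch (Python) using the commonly quoted approximations: tangential coma blur length ~ 3h / (16 F^2) and Airy disk diameter ~ 2.44 lambda F. Different authors use different "diffraction limited" criteria (sagittal vs tangential coma, RMS spot size), so treat these as order-of-magnitude figures only:

```python
WAVELENGTH_MM = 550e-6   # 550 nm in mm

def tangential_coma_mm(off_axis_mm, f_ratio):
    """Approximate length of the comatic blur at the focal plane of a paraboloid."""
    return 3 * off_axis_mm / (16 * f_ratio ** 2)

def airy_diameter_mm(f_ratio, wavelength_mm=WAVELENGTH_MM):
    """Approximate Airy disk diameter (2.44 * lambda * F)."""
    return 2.44 * wavelength_mm * f_ratio

for h in (0.0, 2.1, 3.0, 6.0):                    # off-axis distances in mm
    coma = tangential_coma_mm(h, 5.0) * 1000      # microns, F/5 newtonian
    airy = airy_diameter_mm(5.0) * 1000           # microns
    print(f"h = {h} mm: coma ~ {coma:.1f} um vs Airy ~ {airy:.1f} um")
```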
  19. I never really liked the many worlds interpretation - or should I rather say, what I thought it was all about - until I watched this video (possibly "again" - I can't really tell if I've watched it more than once): https://www.youtube.com/watch?v=5hVmeOCJjOU In any case, the last time I saw it, two very important sentences stuck with me, and I finally properly understood the many worlds interpretation. I'm going to paraphrase them:

     - Everett did not add many worlds to quantum mechanics - he simply acknowledged that they are there
     - If we can accept that an electron can be in a superposition of two states, why can't we accept that the universal wavefunction (the wavefunction of the universe) can also be in a superposition of many states?

     I must say that the terms "Many Worlds" and "Branching" are very unfortunately chosen (as are many terms related to QM), because they make it much harder to grasp what is actually going on. In any case, an interesting fact arises from the MW interpretation: our physical reality is nothing but a particular vibration in the universal wavefunction. Yes, I'm quite aware how "new age" that sounds. Other worlds are similarly just different vibrations in the same universal wavefunction that are no longer in phase with "our world" - and in that sense, yes, they can be thought of as really separate worlds (no longer able to interfere with ours) - but we are all still part of the same universe / the same universal wavefunction. Some of my original objections to MW are gone now that I understand it better; however, my principal objection still remains, and it is not related to the MW interpretation any more but rather to our understanding of probability and how it relates to the "weights" of the eigenstates in the wavefunction.
  20. M16 Ha - Very nice! Clickything is nice as well
  21. Hi and welcome to SGL. None of the mentioned goto kits will work on your mount. They are built specifically for the EQ3 or EQ5 mounts by SkyWatcher, or more precisely these mounts: https://www.firstlightoptics.com/skywatcher-mounts/skywatcher-eq3-2-deluxe.html or https://www.firstlightoptics.com/skywatcher-mounts/skywatcher-eq5-deluxe.html I understand that EQ3 and EQ5 sound like "generic" equatorial mount classes, but they are in fact actual mount models produced by SkyWatcher. There are similar mounts made by other manufacturers (often made in the same factories in China), like the Celestron CG4 for example https://www.celestron.com/products/omni-cg-4-mount-and-tripod which is the same as the SkyWatcher EQ3.

     If you can, find a goto conversion kit built for your particular mount. If not, you have a couple of options:

     1. Replace your mount with another mount that has goto capability
     2. See if you can "adapt" an existing goto kit to your mount - that would involve measuring worm diameters, counting teeth and such
     3. There are a few DIY projects for motorizing mounts - like AstroEQ and similar - where with a bit of DIY skill, a couple of stepper motors and some electronics you can motorize the mount yourself.
  22. CLS is a rather crude type of LPS filter. It is not as aggressive as a UHC type filter, but it does throw off the color balance quite a bit and loses important parts of the spectrum. Whether it is going to work depends on what type of target you'll be imaging and on the composition of your local light pollution. For emission type targets it will be better than an OIII type filter. It will give you good color on the targets themselves, but star colors will suffer - you can't get proper star color with a CLS type filter. You can circumvent this by taking two sets of subs, one filtered and one unfiltered, and then using the star color from the unfiltered set and everything else from the filtered set. If you are going to image anything else - galaxies, clusters or reflection nebulae - it is likely that using the CLS filter will hurt more than it helps, unless you have a very specific type of light pollution.

     I would recommend that you try both - no filter and the CLS filter - but use the CLS filter only on emission type nebulae and Ha regions. This time of year (and a bit later, at the beginning of autumn) a lot of interesting Ha regions are available for imaging, so take advantage of that and just experiment. Extreme UHC type filters and duo / tri / quad band filters work ok - again, for specific types of targets. However, your light pollution is not so severe that you absolutely must use some sort of filtering for imaging.
  23. Hi and welcome to SGL. Yes, the filter is the problem. That filter is a narrowband filter that passes only a single wavelength (a narrow band) of light. It is good for visual use, or when you image with a mono camera and other narrowband filters to produce a false color (say HST palette) narrowband image. You have chosen two targets that emit in OIII, so that is good and you have some signal; however, since this is basically a single wavelength of light, don't expect any color in it - it is a monochromatic image by nature, even if you capture it with an OSC sensor. The filter is good for combating light pollution if you are happy with monochromatic images of only the OIII signal. However, if you want to go that route, maybe an Ha filter would be a better option, as the H-alpha signal is generally much stronger and more targets shine in Ha.

     Alternatively, if you want to do regular color photography, try without a filter first. Under a Bortle 5 sky an LP filter can sometimes hurt more than it helps. If you know the type of light pollution you have - mostly yellow street lighting, meaning high pressure sodium lights - then yes, get an LPS filter; but if you mostly have broadband, LED type light pollution, just shoot unfiltered, or get an Astronomik L3 filter if you notice that your scope gives you a bit of bloat in the blue part of the spectrum (slight chromatic aberration).
  24. What is actually your concern about resolution? If you can afford and mount a large telescope, I'd say go for it, regardless of the fact that you'll be seeing limited. Just make sure you pick a sensible working resolution for your conditions. With such a large scope I'd target somewhere around 1.2-1.4"/px. That will be a good sampling rate most of the time - there will be a few nights a year where it is too coarse and a few nights a year where it is too fine (provided your mount is good enough). Even if you can't exploit the resolution, you can exploit the light gathering: a large aperture gathers more light and will offer faster imaging at a given resolution.
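A quick way to check which pixel size / focal length combination lands in that 1.2-1.4"/px range (Python; the camera values in the example are arbitrary):

```python
def pixel_scale_arcsec(pixel_um, focal_length_mm, binning=1):
    """Image scale in arcseconds per pixel: 206.265 * pixel size / focal length."""
    return 206.265 * pixel_um * binning / focal_length_mm

# e.g. a 3.76um pixel camera on a 2000mm scope, unbinned and binned 3x3
print(pixel_scale_arcsec(3.76, 2000))      # ~0.39 "/px - heavily oversampled
print(pixel_scale_arcsec(3.76, 2000, 3))   # ~1.16 "/px - close to the target range
```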
  25. I think it sort of shows what I was referring to. With a long focal length eyepiece - the 40mm (Meade) - the edge performance captured with the phone is better than that captured with the DSLR + lens, because the DSLR + lens does not stop the aperture down and does not cut away part of the wavefront. For example: taken with the phone, versus taken with the Zuiko, or with the Sigma below. Even the 21mm F/3.5 lens shows some improvement because it acts as a slight aperture mask (but very slight - exit pupil of 6.66mm versus entrance pupil of 6mm). If this is due to stopping down by the phone lens aperture, then we should not see a difference with the 9mm eyepiece, because it produces a very small exit pupil that fits within both the phone entrance pupil and the lens entrance pupil. Here is the phone vs the Sigma:

     Here is a diagram that explains what happens: a wavefront hits the telescope, and in ideal circumstances telescope + eyepiece would replicate that wavefront at the exit pupil. In real life, however, both telescope and eyepiece add their own disturbances to the resulting wavefront (it is a wavefront because the emerging rays at the exit pupil are again collimated - parallel, as if coming from an infinite distance). The question is: what happens to that wavefront if we record it with a phone camera lens versus a large DSLR lens? The phone lens is small compared to the size of the wavefront (if the exit pupil is sufficiently large) and will "see" only a smaller, often flatter, portion of the wavefront. The DSLR lens, on the other hand, has a much larger aperture and will not stop the wavefront down - it will see the whole of it, and the bent wavefront will do its thing and create a blurrier image (or rather, the image as it really is).

     In other words, using a phone to record the eyepiece image will overestimate the quality of the edge of the field in long focal length eyepieces, but will record short FL eyepieces just fine (as long as you center the phone properly on the exit pupil - you don't have these centering issues with the DSLR lens as it is much larger and picks up the rays easily). Using a lens also eliminates most of the lens aberrations: you shoot the edge of the field with the central part of the lens, where the lens is sharper (the lens acts as if stopped down to an aperture equal to the exit pupil of the eyepiece), and any lens sharpness issues are minimized because you are using a lens with a longer FL than the eyepiece (if you are using such a lens) - it will grossly oversample, but you can always resample the image down to a reasonable size / proper sampling. Any lens blur will then be sub-pixel in size once you sample back down and won't affect the quality of the final image.
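The pupil sizes that decide which device stops the beam down are easy to check (Python). The scope focal ratio of roughly F/6 is an assumption chosen to reproduce the 6.66mm exit pupil quoted above; the 21mm F/3.5 figure is the one from the post:

```python
def exit_pupil_mm(eyepiece_fl_mm, telescope_f_ratio):
    """Exit pupil of telescope + eyepiece = eyepiece focal length / focal ratio."""
    return eyepiece_fl_mm / telescope_f_ratio

def entrance_pupil_mm(lens_fl_mm, lens_f_number):
    """Entrance pupil (clear aperture) of a camera lens = focal length / f-number."""
    return lens_fl_mm / lens_f_number

print(exit_pupil_mm(40, 6.0))        # 40mm eyepiece on ~F/6 scope -> ~6.7mm exit pupil
print(exit_pupil_mm(9, 6.0))         # 9mm eyepiece                -> 1.5mm exit pupil
print(entrance_pupil_mm(21, 3.5))    # 21mm F/3.5 lens             -> 6.0mm entrance pupil
# If the recording lens' entrance pupil is smaller than the exit pupil, it acts
# as an aperture mask and hides part of the aberrated wavefront.
```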