Everything posted by vlaiv

  1. I guess that a casual user of a Maksutov simply would not notice this under regular use. Most would describe it as "a bit fiddly to find proper focus" - but in reality that is going back and forth with the focusing mechanism until two things happen at the same time: proper focus is reached and the mirror is properly aligned. There would also be noticeable image shift when changing focusing direction. If there is something else going on and there is an actual collimation issue on one side of focus (bent mechanism) - then it really depends on the use case. It might be that at proper focus everything is in collimation and one would notice nothing, but with Maksutovs it depends on the accessories used - a different diagonal, eyepiece, barlow or binoviewer all change the focus position, and at some point the mirror might sit in the trouble area. The image will be slightly softer there (but judging by the level of miscollimation - only slightly). Maybe there is nothing to worry about - just run the test again at less defocus both ways and the rings might be concentric in that focus zone. Also - keep the star on the optical axis when doing these sorts of tests.
  2. I'll try to explain it better. Sometimes mirror flop depends on the last direction of travel with moving-mirror focusing. If you've ever heard that there is image shift when changing focus direction in Mak and SCT scopes - it is this same thing. The mirror gets very slightly tilted when the focus direction changes. I'm going to exaggerate the effect in the diagram just to make it easy to understand: When you "push" the primary towards the secondary (upper part of the diagram), it can tilt slightly one way with respect to the optical axis, and when you "pull" the primary away from the secondary - it will tilt the opposite way. If you want to see whether this is what is causing the slight collimation issue with the in/out focus image, do two things - first "push" the mirror in normally, but when you "pull" the mirror out - follow with a slight reversal of direction so you end with a "push", giving the mirror the same tilt in both positions - like this: When defocusing in one direction - just defocus, but when defocusing in the opposite direction - at the end - reverse the direction of turning the focuser knob by half a turn, so you end up going the same way as you did the first time. You can do the same with in focus - reverse there and do the outward motion as is. If one of these ends up with concentric rings on both sides of focus and the other with a collimation issue on both sides of focus - then it is mirror flop. If you get the same thing as before - regardless of any direction changes - then it is a "bent focusing lead screw" - as in this diagram: Hope this clears things up.
  3. This problem is related to the moving mirror. It is either due to the direction in which you end your focusing, or maybe due to a bent rail - which makes the primary slightly tilted and hence very slightly out of collimation at that focus position. First try to see if it is down to focusing direction - end your focusing in the same direction for both the in and out focus images (in one case - defocus more and then reverse direction to reach the wanted defocus level, but in the other direction use only one direction of defocus - does this make sense?).
  4. Yep, more data is needed - also think about adding flats to your workflow - uneven background is starting to show ...
  5. Is 4" the largest you are willing to go? If visual only, I would say that this scope: https://www.teleskop-express.de/shop/product_info.php/info/p10133_TS-Optics-Doublet-SD-APO-125mm-f-7-8---FPL-53---Lanthan-objective.html could be a good option in your budget range. It will sit nicely on a SkyTee II (non tracking) or the already mentioned AZEQ5 (tracking / goto).
  6. The Atik 428ex is a bit of a small sensor for that scope - have you considered replacing it with something bigger?
  7. Well, no. ISO is just a multiplier - it can't magically increase the amount of light that you gather. Upping ISO is like increasing brightness in processing. Signal to noise remains the same at both low and high ISO. In other words - if we neglect read noise (and sometimes it makes quite a bit of difference - but that is when you have a cooled camera, very dark skies or narrowband filters - in essence, when you remove other noise sources) it is total integration time that counts, regardless of sub duration or ISO setting. A modded camera makes no difference to this.
  8. Do you mean that you can use shorter subs and the same total imaging time, or that there is some magic involved and that at ISO 6400 you can make the same quality image in, say, 15 minutes as at ISO 400 in an hour?
  9. Depends. Both are right, but the differences might not be what you expect. ISO is just a multiplication factor - meaning that it does not change SNR, the signal to noise ratio (multiply the top and bottom of a fraction by the same number and the fraction does not change). In that sense - once you are out of the region where quantization effects matter - there is no difference between ISO 200 and ISO 800, except in full well capacity and one more thing. What is this other thing? It is read noise. With CMOS sensors - some of the noise is injected before A/D conversion and some of it after. This means that part of the read noise is subject to "amplification" by ISO, and so total read noise is actually smaller at higher ISO values. This is best seen if you look at read noise vs gain graphs for astro cameras (not sure if anyone has made such a graph for DSLRs): So higher ISO offers somewhat lower read noise because of that. Now, read noise is combated with exposure length (depending on your light pollution) - so it is not that important - unless you are forced to use short exposures. Conclusion? If you use short exposures - then take care of read noise and use high ISO, but if not - consider ISO to be of no importance and "fixed" for your purposes.
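As a toy numeric illustration of the "ISO is just a multiplication factor" point - a sketch with made-up numbers (the function and values are mine, not any particular camera's):

```python
import math

def snr(photons, read_noise_e, iso_gain=1.0):
    """Toy SNR model in electrons: shot noise = sqrt(photons).
    The ISO gain multiplies signal and noise alike, so it
    cancels out of the ratio."""
    signal = photons * iso_gain
    noise = math.sqrt(photons + read_noise_e ** 2) * iso_gain
    return signal / noise

# Same light, different "ISO" - identical SNR:
print(snr(1000, read_noise_e=3.0, iso_gain=1.0))
print(snr(1000, read_noise_e=3.0, iso_gain=8.0))

# What does change is read noise: if high gain lowers it, SNR improves a bit
print(snr(1000, read_noise_e=1.5, iso_gain=8.0))
```

The gain cancels exactly in the first two cases; only the lower read noise in the third case moves the needle, and even then only slightly once shot noise dominates.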
  10. In order to see if you can use dark subs and bias subs - you need to check several things. First check whether the camera bias is stable and the darks repeatable. Sometimes sensor/camera manufacturers build in an internal self-calibration that runs each time you power on the camera. This leads to a different offset level each time - which is a bad thing for our use case. You need to check the following:
- Take a set of bias subs, power off the camera, power it back on and take another set. Stack both sets and subtract them. The resulting image should have a 0 average ADU value and ideally be pure noise (no patterns). In reality, you'll most likely see horizontal bands in the resulting image, and its FFT will have two distinct dots.
- Next, take a set of darks at a certain temperature (room temperature is fine - the important thing is that ambient temperature does not change), power the camera off and on, and take another set of darks at the same temperature but twice the exposure length. Stack the first set, stack the second set, remove bias from both (you can use the bias subs from the previous point), then multiply the first image by 2 and subtract it from the second. Again, you should get an image with 0 mean ADU value and no visible patterns (in the image or in its FFT). This means that darks and bias are working as they should - at a given temperature, dark current depends linearly on time.
Next you want to know your dark current doubling temperature. Dark current depends on temperature exponentially: increase the temperature by a certain constant and the dark current doubles. This constant is around 6°C for most sensors. You need to establish it because it will let you manually scale your master darks. There are algorithms that try to scale darks for you automatically, but I think it is better to measure things and apply the correct factors yourself. For this measurement you need two sets of darks at different temperatures. Stack both, remove bias and see how much the mean ADU changed for that temperature change - then calculate the doubling temperature (if you want to be really precise, take at least 5-6 readings at different temperatures, plot the results and do curve fitting). For example, here is one such curve for a set-point cooled camera: Note the Y scale doubling with each mark and the line being almost straight above -10°C (it has a small kink at very low temperatures). Once you have all that, you can shoot your master dark library - choose a temperature and duration, say 3 minutes at 20°C. Now suppose you took 2 minute lights on a particular evening at 14°C (and your dark doubling temperature is about 6°C). How do you apply your dark? First take your master dark (with bias removed, of course) and divide it by 180s to get dark current per second at 20°C. Next divide it by 2 - because you want to get to 1s at 14°C - and finally multiply it by 120s to match the duration of your light. If you don't want to go through all that trouble - just make sure your bias is stable by doing point number 1 (take two sets of darks with different exposure lengths plus one bias set and try dark-dark calibration) and later rely on dark scaling to do its thing. That is an algorithm that tries to figure out the proper scaling factor to match darks to lights. (Dark optimization is the automatic thing, but there is also a check box to the right - dark multiplication factor - that one is for the manual multiplication factor we calculated in the example above - use one or the other depending on your approach.)
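The worked example above (180s master dark at 20°C applied to 120s lights at 14°C with a 6°C doubling temperature) collapses to a single scale factor; a minimal sketch, with the function name my own:

```python
def dark_scale(dark_exp_s, dark_temp_c, light_exp_s, light_temp_c,
               doubling_c=6.0):
    """Factor to multiply a bias-subtracted master dark by so it
    matches a light frame's exposure length and temperature."""
    time_factor = light_exp_s / dark_exp_s
    temp_factor = 2.0 ** ((light_temp_c - dark_temp_c) / doubling_c)
    return time_factor * temp_factor

# 180 s dark at 20 C applied to a 120 s light at 14 C:
# (120/180) * 2**(-6/6) = (2/3) * 0.5 = 1/3
print(dark_scale(180, 20, 120, 14))
```

That 1/3 is exactly the divide-by-180, divide-by-2, multiply-by-120 sequence described above, done in one step.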
  11. Dark calibration is not about removing noise - it is about removing signal: signal that should not be there and that can mess up your flat calibration and such. I think it is a good thing to use darks - even with cameras where you can't control the sensor temperature. Before you do that, and to make your life easier, you could do a couple of tests:
- see if your camera has a temperature sensor and if you can get it to record readings
- if not - think about getting an external temperature sensor to record ambient temperature
- in a controlled ambient temperature (it does not need to be cold - just stable enough, say not more than 0.5-1°C of change during testing) - test if you can achieve stable and repeatable darks and if your bias is working properly
- change the ambient temperature (again, keep it stable if you can) and take another set of darks. See if dark scaling is feasible and maybe determine your dark doubling temperature
If all of the above checks out and you find your dark doubling temperature - then you only need one duration of dark subs at one temperature to do dark calibration.
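If you measure mean bias-subtracted dark levels at two temperatures, the doubling temperature follows directly from the exponential relation; a sketch with made-up ADU values:

```python
import math

def doubling_temperature(temp1_c, dark1, temp2_c, dark2):
    """Dark current doubles every D degrees, so
    dark2 / dark1 = 2 ** ((temp2 - temp1) / D); solve for D."""
    return (temp2_c - temp1_c) / math.log2(dark2 / dark1)

# Example: mean dark level quadruples over a 12 degC rise
# -> it doubles every 6 degC
print(doubling_temperature(10, 50, 22, 200))
```

With more than two readings you would fit a line to log2(dark level) vs temperature instead, as suggested above, which averages out measurement noise.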
  12. I've seen a few people produce such "landscape" captures of the moon - and I personally like the effect - but I never thought about sensor real estate utilization that way. I guess that once people start thinking in "mosaic" mode - they don't really think of capturing the moon in a single go - you can always capture a few panels and stitch them together to create a larger image.
  13. You could fit more than two full moons inside the inner circle (two by diameter, or four if you want to cover the area) - this means that the distance between Polaris and the NCP is slightly larger than a full moon - about 1/3 larger than the moon itself.
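The arithmetic behind that, with round figures of my own (both numbers vary: Polaris' distance from the pole slowly shrinks with precession, and the Moon's apparent diameter changes along its orbit):

```python
polaris_to_ncp_arcmin = 40.0   # rough current separation of Polaris from the NCP
moon_diameter_arcmin = 30.0    # full moon is roughly half a degree across

# separation / moon size: about 1.33, i.e. roughly 1/3 larger than the moon
print(polaris_to_ncp_arcmin / moon_diameter_arcmin)
```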
  14. Funny - I don't crave Tak now - I want to learn to do metal casting instead What is the melting point of aluminum?
  15. Yes, give that a try. Btw, the ASI290 should have an ASCOM driver, and that one should let you bin as well (maybe even with a selection between x2 and x3) - but PHD2 has noise reduction and you can select 2x2 bin or 3x3 median there.
  16. It indeed looks like quite a large prism - I'd say at least 8-9mm in diameter. At F/8 you would need to be at least 60mm away for it to stop the beam down, so you are ok at 45mm. My prism is also 8mm on a side - so there is a benchmark for you: you don't need a sensor larger than about 8-9mm diagonal as it will vignette. You want a mono sensor with high QE and low read noise. Pixel size is not as important since you can bin your pixels. The ASI290 seems the obvious choice - with 2.9µm pixels you can bin even 3x3. That should give you high sensitivity while still being precise enough (1.1"/px guide resolution - more than enough).
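The 1.1"/px figure comes from the standard plate-scale formula; a sketch assuming an 8" F/8 scope (1624mm focal length) - swap in your own focal length:

```python
def arcsec_per_pixel(pixel_um, focal_length_mm):
    """Plate scale: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_um / focal_length_mm

# ASI290 pixels are 2.9 um; binned 3x3 they behave like 8.7 um pixels.
print(arcsec_per_pixel(2.9 * 3, 1624))  # about 1.1 "/px
```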
  17. What is the diameter of your pick-off prism and what is the distance to the guide camera? This is important to minimize vignetting - but it also means that a larger sensor might not be fully used. I have an RC8" as well and use an ASI185mc for guiding. I have not had any issues finding stars with it so far. That is an 8.6mm diagonal sensor. Look at the vignetting caused by my OAG: In the current configuration, I don't think I would need a larger sensor than this, and even this one is not fully illuminated. Distance is important as well - although at F/8 you have a rather tight beam, if you don't pay attention to spacing you might be stopping down your pick-off prism and running the guide camera at an effectively lower aperture. Another trick is to use ASCOM drivers with 16bit data format and exposures of 2-3s.
  18. I was also at one point thinking of mentioning a UHC filter - but I wonder what the real effect behind that is. It might have more to do with color adaptation than with seeing actual color. This can be checked fairly easily, though. Our eye/brain system adapts to different illumination to preserve expected color appearance. In broad daylight we see paper as white, but at sunset, or inside our houses under artificial illumination, we also see that paper as white. The actual color of the paper is the color of the light illuminating it - but we pick up on the general "scene" tone and our brain automatically adjusts perception. When we look through the eyepiece - most stars that we see are fairly white-ish - no particular color to them unless we are observing particular stars. The sky is also fairly dark / gray, maybe sometimes bluish. Our brain adapts to a certain "neutral illumination mode" - and we see grey nebulosity because there is not enough light to trigger our color response. Throw in a UHC filter - and everything turns greenish / teal / blue - all the stars take on that sort of color and the sky goes dark but now has that "tint". I think that we still see a grey nebula, but our brain now compensates for the "general scene illumination" and turns grey into something else - even though we are not actually seeing color? This could be tested (maybe) - try using a UHC filter on a galaxy: will it turn greenish too - and should it, with even less light reaching the eye?
  19. No, don't worry about it - such drift is so slow that it can't affect individual subs and is easily corrected with guiding. In fact - if you don't guide - it is good to have such slow drift as it provides a "natural dither" between your subs. On very rare occasions it might produce so-called walking noise (though I'm still not 100% sure of the relation between the two).
  20. Drift due to polar alignment error is always in DEC. The drift that you are seeing is due to the imbalance you created with an east-heavy scope. Did you tighten the clutches well? The imbalance needs to be small - just enough to keep a little tension on the RA drive train. This sort of drift can also come from the worm gear itself - it is the largest drive component and spins one revolution per day. Often in a session we don't get to spin it more than 1/4 of a revolution. There might be a side of it that is somewhat distorted from a perfect circle - and on that side it will run slower or faster (actually it needs to do both, so that the average rotation speed stays the same). Assuming one images for 4h each session - the above scenario would repeat every 6 imaging sessions (that sort of total drift).
  21. Hopefully someone will come along and recommend a good alternative. I can name 3 - two that I have heard people using, and a third that I use myself but would not recommend, as it is too complicated for regular use. Siril and Iris are the two often mentioned (I think I've installed both but never used either for stacking). I use ImageJ / Fiji - but that is too scientific for everyday use, as you need to do calibration manually and choose a bunch of parameters when registering images. You also need to do manual stacking - so not really user friendly for astronomy.
  22. DSS usually freaks out like that when it can't find alignment stars or does a poor job of matching them up. Change the parameters in the hope that it will properly solve the alignment - maybe increase the threshold, or reduce it - and see if either helps.
  23. In any case - every version of the EQ5 has a 144 tooth worm wheel - which means that the basic period is 10 minutes (1440 minutes in a day divided by 144) regardless of the motors used. This in turn means that whatever drive system is connected to the worm - it needs to run at 0.1rpm on its output shaft. Imagine for a second that a stepper motor is attached directly to the worm without any reduction. Steppers have 200 steps per revolution, so a full step would be triggered every 3 seconds. Even with microstepping - it would be visible - this is why there needs to be a reduction stage: the less microstepping, the more reduction. Simple motors don't have enough torque to operate on many microsteps - so 16 microsteps is common with them. Notice that the RA motor has an output shaft that is parallel but not central with respect to the housing: This is because the stepper itself has reduction built in.
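The numbers in that post, spelled out (144 teeth and 200 steps per revolution are from the post; the 16x microstep count is the common value mentioned there, not universal):

```python
minutes_per_day = 1440
worm_period_min = minutes_per_day / 144    # 10 min per worm revolution
worm_rpm = 1 / worm_period_min             # 0.1 rpm at the worm shaft

# direct-drive stepper with 200 full steps per revolution:
seconds_per_full_step = worm_period_min * 60 / 200   # 3 s between full steps
seconds_per_microstep = seconds_per_full_step / 16   # 0.1875 s at 16x

print(worm_period_min, worm_rpm, seconds_per_full_step, seconds_per_microstep)
```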