Everything posted by vlaiv

  1. I don't like it because:
     1. It uses bilinear interpolation when aligning frames.
     2. I have no idea what sort of algorithm is used for frame normalization.
  2. One of the reasons to use a small sensor is coma. The diffraction-limited field on an F/5 scope is rather small - a few millimeters in diameter. A coma corrector reduces coma but introduces other aberrations that are usually not seen in long exposure imaging because seeing masks them (sometimes they are noticed - like spherical aberration producing softer stars in simple two-lens coma corrector designs). For this reason, planetary imaging, which requires the best sharpness the optics can deliver, is best done without a CC, in the central region of the field that is diffraction limited.

It's not only the camera that is important for getting a good sharp image - there is a host of factors. I'll list a few, since you are already familiar with planetary stacking techniques (like using AS!3 for stacking and Registax wavelets for sharpening):

- Use very short exposures, 5ms or less.
- Use higher gain settings to limit read noise, but do be careful not to over expose (reduce exposure further if parts of the image are over exposed).
- Capture plenty of frames at high fps - like 20,000-40,000 per panel - and use the 5-10% top quality ones in the stack.

Don't use a reducer, for the same reason as the CC above - it will introduce optical aberrations. Use just the central part of the field. Since you have an F/5 scope, that is 125/90 = ~1.4mm radius (F-ratio cubed over 90), or 2.8mm diameter, so you want a 2.8mm diagonal for your ROI. Your camera has a 6.45mm diagonal, so if you don't use a barlow you need to use less than half of the sensor - a 640x480px ROI - to get the best sharpness.

The alternative is to use a barlow - and many imagers use one to reach the critical sampling rate (capturing all the detail their scope allows for). Your camera has 2.9um pixels and the critical sampling F/ratio is x4 that value (the actual formula is pixel_size * 2 / wavelength, but we use a 500nm wavelength - 0.5 in microns, since pixels are in micrometers - and pixel_size * 2 / 0.5 reduces to pixel_size * 4). You need to be at F/11.6. Your scope is F/5, so you need either an x2 or x2.5 barlow (you can change the magnification of a barlow by changing its distance to the sensor, so you can dial in the needed ~x2.3 with either of those). This has the additional benefit of letting you use the full sensor instead of an ROI, because 2.8 x 2.3 = 6.44mm (almost exactly the 6.45mm diagonal). These numbers are sketched out below.
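If it helps, here is the arithmetic above as a minimal Python sketch (the constants are the values assumed in the post; the exact barlow factor works out to ~2.32):

```python
# Values from the post: 2.9 um pixels, F/5 scope, 6.45 mm sensor diagonal,
# 2.8 mm diffraction-limited (DL) field diagonal.
PIXEL_SIZE_UM = 2.9
WAVELENGTH_UM = 0.5    # ~500 nm green light, expressed in micrometers
SCOPE_F_RATIO = 5.0
DL_FIELD_MM = 2.8

# Critical sampling: f_ratio = 2 * pixel_size / wavelength (=> x4 at 500 nm)
critical_f_ratio = 2 * PIXEL_SIZE_UM / WAVELENGTH_UM   # 11.6
barlow_needed = critical_f_ratio / SCOPE_F_RATIO       # ~2.32

# The barlow magnifies the usable diffraction-limited field as well
print(f"critical F/ratio:   {critical_f_ratio:.1f}")
print(f"barlow needed:      x{barlow_needed:.2f}")
print(f"magnified DL field: {DL_FIELD_MM * barlow_needed:.2f} mm vs 6.45 mm sensor diagonal")
```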
  3. You can use FitsWork to bulk convert CR2 raws to fits if you need (some stacking software might only work with fits).
  4. @Stefan73 As far as the camera goes - I guide with a color camera, an ASI185MC (less sensitive), on a 1.6m focal length F/8 system and have no issues, so I don't think camera sensitivity will be any issue with an OAG. Just take care that the body of the camera won't hit either the focuser or the DSLR body - there are slim camera options available.

Do also pay attention to the overall optical length of the setup. The OAG9 is 9mm of optical path if you use the 2" connection, but if you use M48 it is 11.4mm long (the above is from the TS website).

What sort of connection does your coma corrector have? Which model is it? If it has a T2 connection, you also need to account for an M48/T2 adapter on the telescope side. Something like this: https://www.teleskop-express.de/shop/product_info.php/info/p9307_TS-Optics-adapter-from-M48---2--filter-thread-to-T2-male.html

That will cause a problem for the coma corrector, as you will have: 44mm + 11.4mm + 2.2mm = 57.6mm. That is more than the prescribed 55mm and you might get less than optimal correction in the corners. Depending on the type of CC you have, you might make better or worse combinations, like these:

Skywatcher x0.95 CC - has an M48 thread, so you don't need the T2 adapter. The working distance is 55mm and the total optical path will be 44mm + 11.4mm = 55.4mm - a very small mismatch, and correction will probably be as good as at 55mm.

Baader MPCC - it has both T2 and M48. With T2 it requires 55mm, but with M48 (when the T2 adapter is removed) it requires 58mm of optical path. 44mm + 11.4mm = 55.4mm, so you'll need a ~2.5mm M48 extension / spacer ring to get to the correct 58mm distance (or 57.9mm to be precise - but that is the same as 58mm). A quick sketch of this bookkeeping follows.
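Here is the same back-focus bookkeeping as a minimal Python sketch (the optical_path helper is just for illustration; the distances are the ones quoted above):

```python
DSLR_FLANGE_MM = 44.0        # Canon EOS flange focal distance
OAG9_M48_MM = 11.4           # TS OAG9 optical length with M48 connection
M48_T2_ADAPTER_MM = 2.2      # M48-to-T2 adapter on the telescope side

def optical_path(*elements_mm: float) -> float:
    """Total optical path from corrector to sensor."""
    return sum(elements_mm)

# CC with T2 connection, prescribed 55 mm working distance:
total = optical_path(DSLR_FLANGE_MM, OAG9_M48_MM, M48_T2_ADAPTER_MM)
print(f"T2 CC: {total:.1f} mm vs 55 mm -> {total - 55:+.1f} mm mismatch")   # +2.6 mm

# Baader MPCC via M48 (58 mm required): how much spacer is needed?
total = optical_path(DSLR_FLANGE_MM, OAG9_M48_MM)
print(f"MPCC M48: {58 - total:.1f} mm extension ring needed")               # ~2.6 mm
```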
  5. I don't use an OAG with a DSLR on the 130PDS. I use it with the ASI1600, which has a very short back focus requirement of only 6.5mm, and I fit a filter drawer between it and the OAG. The OP wanted the regular 9mm OAG from TS that he could reuse with other scopes and cameras - the EOS OAG is not usable with other cameras. Your initial post confused things a bit, since you were advocating an OAG without specifying which version. Yes, the EOS OAG will work - but the question remains whether the OP really wants that, or something that can be reused later with other gear.
  6. How do you get it to work with a coma corrector that needs 55mm of distance and a DSLR with a standard T2 ring?
  7. Still the same. It only impacts the min mo parameter in the sense that its value changes, because the value is pixel related and not in arcseconds. 0.1 of an 8.6um pixel is 0.86 microns, and that would be something like 0.23 of a 3.75um pixel.
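Expressed as a tiny Python helper (hypothetical function name, just for illustration):

```python
# min-mo is given in pixels, so the same physical on-sensor movement
# maps to a different pixel value when the pixel size changes.
def rescale_min_mo(min_mo_px: float, old_pixel_um: float, new_pixel_um: float) -> float:
    """Keep the same physical movement (in microns) across pixel sizes."""
    return min_mo_px * old_pixel_um / new_pixel_um

print(rescale_min_mo(0.1, 8.6, 3.75))   # ~0.23 px
```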
  8. That part confuses me in EQMOD as well. Why the limit, and why the default value of 50ms? It literally says: Being a computer programmer, I know that we can measure time with much greater precision. In fact, even very simple controllers can accurately measure microsecond intervals, let alone personal computers with multi-gigahertz clocks that can measure down to nanoseconds.
  9. Although it seems like a good idea to go for an OAG (and it is - I use one), it will be problematic for that particular setup.

An OAG on a fast system is best placed as near the sensor as possible. The minimum distance you can place it at with a DSLR is 55mm (or maybe slightly less if you get some sort of very low profile T2 adapter - the usual one is 11mm of optical path, and there is 44mm of flange focal distance). You are already 55mm away from the sensor, and on an F/5 scope with an 8mm pick-off prism you will start losing light in the center at 40mm away from the sensor.

Then there is the issue of the coma corrector and focuser travel. Most coma correctors are built so that they need 55mm of distance to the sensor - they simply screw into the T2 adapter. If you put an OAG in between, you won't have the proper working distance. You can use a coma corrector with a longer working distance, but then you risk running into an issue with the focuser: you need a very low profile focuser to be able to rack it all the way in to accommodate the long working distance + OAG + T2 adapter + 44mm of DSLR flange distance.

For this reason, there are T2 adapters for DSLRs with an integrated OAG: https://www.teleskop-express.de/shop/product_info.php/info/p2722_TS-Optics-Off-Axis-Guider-for-Canon-EOS-cameras---replaces-the-T-ring.html

You lose a bit of flexibility in OAG positioning, as this one is non-rotating (at least I think so) - though even with a regular OAG you wouldn't be able to rotate it freely, as the guide camera might hit the body of the DSLR. The unit above is specially designed for DSLRs, so it won't be reusable with other cameras.

Using a guide scope is a much simpler option for the setup you have in mind.
  10. Yes, aggressiveness is the percentage of the calculated pulse that is used - with lower aggressiveness, a shorter pulse than calculated is issued. I would lower both: set min pulse to 20ms and use x0.25.

There is nothing wrong with using a slower guide rate. People are afraid that corrections won't be made fast enough - but we are talking milliseconds here, against guide cycles of 3-4s. For example, say something weird happens and you need to correct 1" of error at once. How long will it take? At x0.25 guide speed - a quarter of sidereal, so about 3.75"/s - the correction will last about 300ms, or 1/3 of a second. That is still pretty fast, even for a large correction.

Lowering the guide speed also helps with stabilizing the mount. A sudden large change in speed introduces jerk (change in acceleration), and this impacts things if your mount is not rigid (it may feel rigid under hand, but we are talking micro movements in the gear train) - leading to oscillations if there is a sudden jerk on the system. Think of it in terms of driving a car: is it better to brake slowly or slam on the brakes? If you slam on the brakes, everyone in the car gets tugged forward, and if you slam on the accelerator pedal, everyone gets pulled back into their seats. The same thing happens with the scope if there is a drastic change in speed.

You need to match three things in your guiding setup:
- guide speed
- min pulse duration
- the min mo parameter

Set the guide speed to as low a value as possible for smooth corrections without introducing too much vibration in the mount, but not so low that it takes seconds to recover from a serious error - x0.25 is a good value. Lower the min pulse duration to, say, 20ms. From those two, calculate the minimal correction your mount will make: at x0.25 the speed is 3.75"/s, and over 20ms (1/50th of a second) that gives a movement of 0.075" - a very fine movement of the mount.

Now you have to adjust the min mo parameter to match this value. Min mo is the minimum movement in camera pixels - in other words, if a needed correction is larger than "so and so pixels", a correction will be issued; otherwise nothing happens. The Guiding Assistant advised you that high frequency motion is around 0.04px, so I would set min mo a bit higher than that - say 0.08. This will ignore ~95% (two standard deviations) of the high frequency motion due to seeing, and will still issue corrections for errors larger than about 0.4" (4.76"/px * 0.08 = 0.38"). These numbers are worked through in the sketch below.
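Here is a minimal Python sketch of the guide speed / min pulse / min mo matching described above (values are the ones assumed in this thread: x0.25 rate, 20ms pulse, 4.76"/px image scale, 0.04px seeing motion):

```python
SIDEREAL_RATE = 15.04                    # arcsec per second
guide_rate = 0.25 * SIDEREAL_RATE        # x0.25 sidereal ~ 3.76 "/s
min_pulse_s = 0.020                      # 20 ms minimum pulse

# Smallest correction the mount can physically make:
min_correction = guide_rate * min_pulse_s            # ~0.075"
print(f"min correction: {min_correction:.3f} arcsec")

# min-mo: ~2x the measured high-frequency (seeing) motion in pixels
image_scale = 4.76                       # arcsec per pixel
seeing_motion_px = 0.04
min_mo_px = 2 * seeing_motion_px                     # 0.08 px
print(f"min mo: {min_mo_px:.2f} px = {min_mo_px * image_scale:.2f} arcsec")

# Worst case: time to correct a sudden 1" error at this guide rate
print(f"1 arcsec correction takes {1.0 / guide_rate:.2f} s")   # ~0.27 s
```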
  11. There is another problem with having a high guide speed - the minimum pulse duration parameter. EQMod sets this at 50ms or something like that. It is an archaic setting from the time when computers had issues measuring time to higher precision than that, so it was set at 50ms to avoid measurement imprecision. What it means is that however small a pulse needs to be issued, it will be overridden to 50ms. If you have the guide speed set to x0.9 sidereal, that is about 13.5"/s. 50ms is 1/20 of a second, so the minimum mount correction is 13.5 / 20 = ~0.68". My recommendation to everyone using EQMod is to lower the guide speed significantly - to x0.25 of sidereal - or to change those 50ms to a lower setting, like 20ms or less.
  12. The system seems to be quite capable of measuring motions down to 0.02-0.04px, which is less than 0.2". How can it then be responsible for 1" RMS guide performance? Maybe post a guide log so any actual issues can be seen?
  13. +1 on mechanical issues - change of guide setup won't help much.
  14. For capture use SharpCap. I usually take a full set of calibration videos - PIPP will do the "video calibration" for you.

Use an ROI - figure out what the diffraction-limited field of your scope is, and then use an ROI smaller than that region. For best resolution, go for critical sampling, which is f-ratio = 4 x pixel size (actually pixel_size x 2 / wavelength, but with 500nm - that is 0.5 in microns - it reduces to x4 in the end). With the ASI294 you'll need to barlow to ~F/18.5.

Use high gain and very short exposure times - about 5ms or less. Take video with at least a couple of tens of thousands of frames and keep the few top percent when stacking. Use AS!3 for stacking.

If the FOV is smaller than the whole Moon, create mosaics from multiple panels. Shoot each panel the same way. Use Microsoft ICE (no longer available, but you should be able to find a download link somewhere), iMerge or similar software for stitching.

In the end, use Registax6 wavelets for sharpening. There are a bunch of tutorials on YouTube that show the workflow - so check them out.
  15. It seems that in this particular case it was the lens that was tilted. As far as I gathered from the discussion, the lens was not opened up but was aligned with the optical axis as a whole - the cell seems to be adjustable for tilt.
  16. Yes, those are correct. Depending on what you need that for, you might want to include various other losses in the system. If, for example, you want to see what sort of refractor will provide the same light gathering as the 6" CC, it would go like this:

You calculated the area of the CC to be 15029mm2, but did not account for mirror reflectivity - according to FLO that is 95% for both the primary and secondary mirror. The effective aperture area is then 15029 * 0.95 * 0.95 = ~13563.67mm2.

A doublet refractor has four air/glass surfaces (two lenses, each with a front and back surface), and if they are properly coated, each transmits light at ~99.7% efficiency. We now do things in reverse:

square_root( ( 13563.67 / (0.997 * 0.997 * 0.997 * 0.997) ) / PI ) = sqrt( 13727.7 / PI ) = sqrt( 4369.3 ) = 66.1mm radius

So the diameter is 132.2mm - a 130mm refractor will have roughly the same light gathering as the 6" CC. A short sketch of this calculation is below.

Do pay attention that light gathering is not the same as resolved detail. The above does not mean that a 130mm refractor will show the same planetary image as a 6" CC - other things come into play when calculating which one will show more detail.
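Here is the same calculation as a minimal Python sketch (values as assumed above: 15029mm2 CC area, 95% per mirror, 99.7% per air/glass surface):

```python
import math

cc_area_mm2 = 15029.0                 # CC aperture area (obstruction accounted for)
mirror_reflectivity = 0.95            # per mirror, primary and secondary
effective_area = cc_area_mm2 * mirror_reflectivity ** 2      # ~13563.7 mm^2

surface_transmission = 0.997          # per coated air/glass surface
n_surfaces = 4                        # doublet: 2 lenses x 2 surfaces
required_area = effective_area / surface_transmission ** n_surfaces  # ~13727.7 mm^2

radius_mm = math.sqrt(required_area / math.pi)               # ~66.1 mm
print(f"equivalent refractor aperture: {2 * radius_mm:.1f} mm")  # ~132.2 mm
```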
  17. Yes, software binning. It is the only thing to do at this stage, since you have already captured the data - and at this stage it works the same regardless of the source of the data (CCD or CMOS). Since you binned your RGB data in hardware already, I doubt you'll gain anything from binning it again, but you'll need to do it to match the lum data. Take all your data while it is still in the linear stage, before you start any processing, bin each stack (lum, R, G, B), and then process the data as you normally would. I use ImageJ for binning - it is as simple as: load the FITS, do Image / Transform / Bin (choose x2 and average), and afterwards save the image as FITS again. The sketch below shows the same operation in code.
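For reference, here is a minimal numpy sketch of the same x2 average binning (assuming a 2D image array; this is an illustration, not ImageJ's actual implementation):

```python
import numpy as np

def bin2x2(img: np.ndarray) -> np.ndarray:
    """x2 software binning with averaging (crops odd edge rows/cols first)."""
    h, w = img.shape
    h, w = h - h % 2, w - w % 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Averaging 4 pixels into 1 roughly doubles SNR on uncorrelated noise:
noisy = np.random.normal(100.0, 10.0, size=(1024, 1024))
print(noisy.std(), bin2x2(noisy).std())   # ~10 vs ~5
```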
  18. This is a bit of data manipulation in ImageJ (binning, background removal) and then simple stretch in Gimp.
  19. I only tried that target once and indeed - it's not easy from light pollution. Maybe try binning your data to improve SNR? Looking at it at 100%, the stars are not quite as pinpoint as they could be, so maybe there is a bit of room to trade (missing) resolution for SNR?
  20. Interesting idea. I think it is a long term project - maybe over a whole year, as I doubt many will get a whole month of clear weather. In any case, you'd need a bunch of lunar images, each one a day older than the previous. Then you compose the final image from parts that have the same angle of illumination - you take "a slice" of the lunar surface from each day for the final image. Something similar is done in macro photography to achieve large depth of field with fast optics (which inherently have shallow depth of field) - it is called focus stacking. Focus is shifted between shots and only the part of the image that is in focus is used from each shot.
  21. Maybe the simplest explanation would be to show it? This is at 100% zoom level. The stars look lovely, but the nebulosity is at an obviously much lower resolution than the stars. It almost looks like a watercolour painting rather than an actual image with the detail suggested by the size of the stars. That is the sign of heavy denoising that softens up the image way too much.
  22. Very nice image. My one objection is the level of denoising used. The stars are nice and tight - I'm guessing you used StarNet++ to separate the two - but the background nebulosity is too soft and does not match the stars.
  23. No, it will be something like this:

This is a "relay" arrangement. The first lens takes the image at its focal plane and projects it to infinity, creating a collimated beam. The second lens operates as a "telescope", taking the beam of light coming from "infinity" and projecting it onto its focal plane. Object and image are both at the focal planes of the respective lenses, and any amplification of the image depends on the ratio of the focal lengths of the lenses - it can act both to magnify or compress the image (a tiny sketch of this is below the post).

Here are some fun projects you can make with binocular parts:

- A 3D printed telescope / finder scope / wide field instrument.

- A virtual / electronic telescope. This is a rather interesting project, as such telescopes otherwise cost thousands of pounds and you'll be making "half of it" very cheaply. The setup consists of:
1. collimating lens
2. objective lens
3. eyepiece
4. mobile phone
The schematic is a bit like the relay lens setup above, and goes like this: You can then display an astronomical object on your phone screen and "observe" it at the eyepiece, like when using a proper telescope. If you hook such a device up to an EEVA-style telescope that records things in real time - you get an electronic telescope.

- You can create a home cinema projector with a lens and a smart phone as well (works best in a very dark room with a small wall image).
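The relay relation is simple enough to jot down as a sketch (a thin-lens idealization, just to illustrate the focal-length ratio rule above):

```python
def relay_magnification(f_first_mm: float, f_second_mm: float) -> float:
    """Object at the focus of the first lens, image at the focus of the
    second: magnification is the ratio of the focal lengths."""
    return f_second_mm / f_first_mm

print(relay_magnification(100, 200))   # x2.0 - magnifies
print(relay_magnification(200, 100))   # x0.5 - compresses
```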