Everything posted by vlaiv

  1. In that case, you'll need a few bits. The MPCC III has both M48 and T2 threads. Working distance from the M48 thread is 58mm (per the TS website info). Nikon F mount flange distance is 46.5mm, and this Nikon F mount M48 adapter from TS has an optical length of 8.5mm: https://www.teleskop-express.de/shop/product_info.php/info/p3629_TS-Optics-T-Ring-from-M48-Thread-to-Nikon-F-Mount.html So you have 46.5 + 8.5 = 55mm, which leaves you 3mm short of the 58mm M48 working distance - this means you'll need a 3mm M48 extension: https://www.teleskop-express.de/shop/product_info.php/info/p6435_TS-Optics-3-mm-Extension-with-M48---2--Filter-Thread-and-2--Diameter.html You can try another Nikon F adapter - like this one: https://www.teleskop-express.de/shop/product_info.php/info/p195_TS-Optics-Optics-T2-Adaptor-for-NIKON-Cameras.html but unfortunately it does not say how much optical path it uses up. You could also use this one: https://www.firstlightoptics.com/adapters/ts-ultra-short-t2-adapter-for-nikon-dslr-1mm-length.html but I would avoid putting anything shiny in the optical train, and since it is only 1mm long you'd need a very specific extension of 7.5mm (not sure if anyone makes extensions that are not a whole number of mm long). You can always use distancing rings though - like a 7mm extension plus a 0.5mm distancing ring - but that adapter is shiny and can cause reflections.
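The spacing arithmetic above can be sketched in a few lines of Python (the values are taken from the TS listings quoted above; treat them as illustrative):

```python
# Back-focus spacing check: working distance minus the optical path
# already consumed by the camera flange and the adapter.
MPCC_WORKING_DISTANCE_M48 = 58.0   # mm from the M48 thread, per TS
NIKON_F_FLANGE = 46.5              # mm, Nikon F flange focal distance
ADAPTER_OPTICAL_LENGTH = 8.5       # mm, TS M48 -> Nikon F adapter

used = NIKON_F_FLANGE + ADAPTER_OPTICAL_LENGTH
extension_needed = MPCC_WORKING_DISTANCE_M48 - used
print(f"Path used: {used} mm, extension needed: {extension_needed} mm")
# -> Path used: 55.0 mm, extension needed: 3.0 mm
```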
  2. Bit depth is rather inconsequential for planetary imaging, as most imaging happens at 8 bits anyway, and you'll be hard pressed to reach saturation at planetary exposure speeds even at 10 bits if you decide to go with the higher bit depth. As far as pixel size is concerned, here is the F-ratio / pixel size relationship for the critical sampling case: F_ratio = pixel_size * 2 / wavelength, where pixel size and wavelength are in the same units (meters, millimeters, nanometers - the choice is yours). Take wavelength to be either the exact wavelength - like 656nm for Ha solar, or when using a Ha filter for lunar, for example - or ~500-520nm in the general case, for full spectrum / color imaging. From the above equation, we have the following F-ratios for given pixel sizes: 3µm gives F/11.5, 1.4µm gives F/5.4, 2.9µm gives F/11.1. Two additional bits give only x4 more levels (2^2 = 4).
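The critical-sampling relation above can be put in a short Python sketch (the 520nm default is my assumption, matching the general-case figure quoted above):

```python
def critical_f_ratio(pixel_size_um, wavelength_nm=520.0):
    """Critical sampling: F_ratio = 2 * pixel_size / wavelength,
    with both lengths converted to the same unit (nanometers here)."""
    return 2.0 * pixel_size_um * 1000.0 / wavelength_nm

# Pixel sizes from the post above:
for px in (3.0, 1.4, 2.9):
    print(f"{px} um pixel -> F/{critical_f_ratio(px):.2f}")
```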
  3. Oh yeah, you need a slim one instead:
  4. @Tony1967 Can you post linear images - straight from DSS without any modification, from both scopes?
  5. Yes, you can't do much about seeing, as it "averages out" on ~2s scales and "freezes" on 5-6ms scales. Planetary astrophotographers use exposures shorter than, say, 10ms in order to capture the image while seeing is frozen (the distortion does not change) and then select the subs with the least amount of distortion. How much seeing will impact your final image depends on your working resolution and the performance of the rest of your system. If you use a small aperture scope or lens, go for wide field at low resolution, and use a mount that is not very precise (your tracking / guiding is already in the 1.5-2" RMS range) - then seeing won't be an issue most of the time. But if you want to work at medium to high resolution, then poor seeing will be an issue. If working with LRGB, you can capture color on nights of poor seeing and work with R and G, as these are less affected by seeing than, say, blue.
  6. It actually does not need to be, but it helps if the color is relatively neutral (grey is). A very strong primary color will effectively reduce light in the rest of the spectrum and you'll get a lower histogram for, say, blue and green than red if you use red cloth. This is in case you are using an OSC camera, of course. The second problem is that it will impact the color cast of the image if each flat channel is not normalized separately (something one should do anyway - and it should be part of the calibration software). Other than that, I'd say it does not matter much.
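As a sketch of the per-channel normalization idea, assuming a master flat already debayered into an (H, W, 3) NumPy array (the function name and array layout are my assumptions, not any particular software's API):

```python
import numpy as np

def normalize_flat_per_channel(flat):
    """Divide each color channel of a master flat by its own mean, so a
    color cast in the flat light source doesn't tint the calibrated image.
    `flat` is assumed to be a debayered array of shape (H, W, 3)."""
    out = flat.astype(np.float64).copy()
    for c in range(out.shape[2]):
        out[..., c] /= out[..., c].mean()
    return out
```

After this step a red-tinted flat and a neutral flat describe the same vignetting map, which is why the cloth color mostly stops mattering.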
  7. I'm not really sure about this. USB3.0 by specification needs to deliver higher amperage than USB2.0. Maybe some "power allocation" happens when a device is plugged into a USB3.0 versus a USB2.0 connector, and that leaves less power for the rest of the system? It's not a power saving mode - but there is something called throttling: https://www.martinrowan.co.uk/2019/09/raspberry-pi-4-cases-temperature-and-cpu-throttling-under-load/ The RPi4 will lower its CPU speed if it overheats. Maybe it can lower CPU speed in response to low power detection as well?
  8. I'm not really sure you have that right. A CC does not provide back focus - it "removes" it, sort of. Before you start, maybe have a look here: I'm sure someone will come along to advise on the actual back focus of the 130PDS, and the rest can be found online (check the Nikon flange focal distance on wiki: https://en.wikipedia.org/wiki/Flange_focal_distance). The CC and T-ring usually have their optical path requirements / properties listed, so that won't be much of a problem to check.
  9. It looks like the Xiaomi 20,000mAh is indeed a very good power bank. It's not very expensive either - around €50 at our local Mi store. Just be careful - there are two (or more) models available. The one that says 50W (at least on our local site) has 3A at 12V (same as in the review).
  10. Actually reading that graph is easy; the problem is interpreting it. Each line on such a graph is one wavelength of light - usually the Fraunhofer lines are used: https://en.wikipedia.org/wiki/Fraunhofer_lines (the linked wiki article gives the letter designations - e.g. the e line is 546nm and the C line is 656nm, the H-alpha transition). Once you pick a line, the vertical axis represents distance from the center of the lens. In the image you linked, above the graph it says pupil radius of 65mm - so that is for a scope of 130mm diameter. The Y value represents a point on the lens that is Y millimeters away from the center of the lens, towards its edge. The graph assumes the lens is spherically symmetric and just shows what happens along one radius of it. The X axis on the graph represents the change in focal length for that particular ray. Say we want to interpret these two points on the green (around 520nm) line on the above graph. It really means this: a 520nm ray coming in 6.5mm away from the central optical axis will fall a bit short of the true focus position (X tells us how much shorter, in mm), and a ray coming in at 58.5mm (65mm minus one 1/10 mark, which is 6.5mm) will focus a bit further away than the true focus point. If we examine any given line on such a graph, how much it deviates from a straight line represents the level of spherical aberration at that wavelength (spherochromatism). Its position left or right tells us how much defocus there will be for that wavelength. The graph you linked shows that most of the residual color comes from spherochromatism rather than chromatic aberration, as all lines are in fact very close to 0 but are bent. As a contrast - look at this graph: it shows an F/10 4" achromat, with two different plots. The larger plot shows defocus for each wavelength (on the right of the Y axis there is a wavelength scale, and certain points are marked with their Fraunhofer line letter designations).
That plot assumes 0 spherochromatism - or simply does not show it - but it shows all wavelengths of light in a single continuous curve, each point on the curve having its own defocus / focal shift. The small plot in the top left corner is the kind of graph we are talking about here. Most lines are fairly straight and vertical - small change in X, if any - so there is a very small amount of spherochromatism, but they are separated along the X axis, clearly showing residual spectrum that comes from defocus. Hope this explains how to read the above graph - curved lines = spherochromatism; lines away from X=0 = secondary spectrum due to defocus (what we normally think of as residual color). The problem is that one can't easily tell the level of it from the graph, because it depends on the focal length of the scope in relation to the focal shift. Slower scopes have a larger critical focus zone, so the same defocus might not have the same implications for a slow and a fast scope. What a star will look like at a particular wavelength also depends on focus position. Graphs are made so that green is "in focus" - but sometimes best focus is in a different place, and once we change focus, things change, because the waves align differently and the interference pattern is different. Remember, it is about waves rather than rays; in the end, the image is formed as waves "traveling" along these lines arrive out of phase with one another and interfere constructively or destructively depending on the phase shift.
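To make the "same defocus, different implications for slow and fast scopes" point concrete, the standard peak-to-valley defocus relation W = Δ / (8 · λ · F²) can be sketched in Python (this is the textbook relation, not something read off any particular graph above; the 546nm default is the e line):

```python
def defocus_waves_pv(focal_shift_mm, f_ratio, wavelength_nm=546.0):
    """Peak-to-valley wavefront error, in waves, caused by a longitudinal
    focal shift: W = delta / (8 * lambda * F^2)."""
    wavelength_mm = wavelength_nm * 1e-6
    return focal_shift_mm / (8.0 * wavelength_mm * f_ratio ** 2)

# The same 0.1mm focal shift is roughly 0.64 waves at F/6
# but only roughly 0.23 waves at F/10:
print(defocus_waves_pv(0.1, 6))
print(defocus_waves_pv(0.1, 10))
```

The F² in the denominator is exactly the "larger critical focus zone" of slower scopes mentioned above.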
  11. Flat darks also don't depend on filter or focus position - they are the same as regular darks - they only need to match the flats in gain / offset / temperature / exposure length. Filter and focus position are only important for the calibration files that measure light throughput, and those are the flats. All other calibration files measure something else: darks (regular and flat) measure dark current during the exposure, and do so without any light present (you need a covered scope, or ideally the camera off the scope), and bias measures the offset characteristics of the camera - again with no light present.
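The matching rules above can be summarized as a standard calibration sketch (names and array shapes are my assumptions; any real stacking software does more than this):

```python
import numpy as np

def calibrate(light, dark, flat, flat_dark):
    """Standard calibration sketch: each dark matches its own exposure
    (light <-> dark, flat <-> flat dark); only the flat needs to share
    the light frame's filter and focus position."""
    master_flat = flat.astype(np.float64) - flat_dark
    master_flat /= master_flat.mean()          # normalized throughput map
    return (light.astype(np.float64) - dark) / master_flat
```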
  12. That looks very good, but it really depends on other things not shown in that graph - like what the X scale in this image is, and what the focal length of the telescope is. I'm guessing it is the TSA120?
  13. Sometimes it is - sometimes it's not. You can measure it though, rather easily. Take a piece of thin paper (like tracing paper) next time there is a bright moon outside and point the scope at the moon while placing the paper over the back of the scope (put it over the focuser opening - don't use an eyepiece, diagonal or anything else, just the back port of the scope). Focus until you have a clear image of the moon on the paper (it will be small). Measure how far you needed to rack out the focuser to get that. Mind you - don't do it on a close object, like during the day (and never with the Sun! - never point a scope at the Sun unless you have proper equipment), because the focus point depends on how close the object is. We determine true focus for objects that are effectively at infinity (meaning just really far away, like astronomical objects are).
  14. I think you are right - you can't just scale the lens, as that would keep the focal length the same; but if you apply the "black box" approach above, you'll see that a larger lens actually bends light less at the same distance from the optical axis - or the same amount, if we measure distance from the optical axis in relative units of focal length or 1/focal length (F-ratio), or simply scaled to 0-1. In any case - there is no additional dispersion due to this for the same F-ratio.
  15. Actually, having the same radii of curvature front and back would create a lens of the same FL - so a longer FL lens is actually less curved than a short FL one. I'm not sure what the dependence on thickness is when designing a larger lens - whether it really needs to be that much thicker as its diameter grows (for the same F-ratio).
  16. I'm not entirely sure about that, but yes - if you keep the front and back radii of curvature the same and make the lens "taller", then in the middle it has to be thicker.
  17. It is about angles and the refractive indices of glass and air. Say that light travels further through the glass because the glass is thicker, and the colors are more "separated" because of this - well, it turns out that after the longer journey they hit a part of the lens that is less curved, at a shallower angle, and that precisely offsets the difference in "spread" that happened. For lens calculation, it is the curves of the front and back surfaces that matter (the distance between them matters as well). Another way to think about it: instead of a lens, let's have a "black box" that bends the light - we don't really know how it happens. The top one is 100mm F/6 - so 600mm of FL - while the bottom one is 60mm F/6 - so 360mm of FL. We now observe a ray 10mm off the optical axis going into the first black box and the second black box - which one bends it harder?
  18. https://en.wikipedia.org/wiki/Snell's_law No mention of lens thickness - angles depend only on the indices of refraction of glass and air. In order to get the same F-number, a larger lens actually bends light less than a smaller lens at the same distance from the lens center. You can see that if you stop a 100mm F/6 scope down to 60mm - it turns into a 60mm F/10 scope, so the edge ray bends only at F/10, not as fast as F/6 like with a 60mm F/6 scope. It is all down to the wave nature of light. It has nothing to do with the brightness of the image, or the size of the Airy disk. Look at this diagram: it characterizes both spherochromatism and residual spectrum. If you just scale up the scope - increase the aperture but leave the F-ratio the same - then said graph will "grow" as well. Its shape won't change - it will be the same - but the X and Y axes will grow proportionally. For any of those lines, the difference on the X axis between Y0 and Ymax represents the "lag" of the light wave. That out-of-phase wave will either constructively or destructively interfere with itself, producing the pattern at said wavelength / frequency of light. If you change the X axis, then obviously the number of waves between any two points on any line changes. With perfectly straight lines - no spherochromatism - scaling up won't introduce spherochromatism, but if there is some spherochromatism in the design, making the scope larger will worsen it. With perfectly straight lines, scaling up will only make the defocus in waves larger - a larger purple halo.
  19. Can happen. I was able to guide at 0.38" RMS with an Heq5 at DEC around 60°, and the mount is heavily modded, but my guide resolution is ~1"/px - so I'm fairly confident in the numbers.
  20. a) It's true. It's not horrible (although it looks scary), but it is definitely there. b) Yes, all do - it is a feature of the sensor in long exposures. c) Yes. Depending on the type of camera you have, there are two approaches. If you have a set-point cooled camera, simple dark subtraction - with darks matching in temperature, exposure, gain and offset - will fix the issue. If you don't have a set-point cooled camera, then you can utilize a special calibration algorithm called dark scaling / dark optimization. It is a rather involved technique and not guaranteed to work properly - it depends on the data and how usable the bias is (there are some tricks to get the bias signal even if the bias is not stable/usable).
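One common least-squares formulation of dark scaling looks like the sketch below. This is my illustration of the general idea, not the algorithm any particular software uses, and real implementations are considerably more careful (hot pixel rejection, noise estimation, etc.):

```python
import numpy as np

def optimized_dark_subtract(light, dark, bias):
    """Dark scaling sketch: remove bias from both frames, then find the
    scale factor k minimizing the residual after subtracting k * (dark - bias).
    Least-squares solution: k = cov(light, dark) / var(dark)."""
    L = light.astype(np.float64) - bias
    D = dark.astype(np.float64) - bias
    Dc = D - D.mean()
    k = ((L - L.mean()) * Dc).sum() / (Dc ** 2).sum()
    return L - k * D, k
```

This is why a usable bias matters for the technique: without it, the fixed offset gets scaled along with the thermal signal and the fit goes wrong.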
  21. What is your guiding resolution and what is your DEC?
  22. I'm tempted to say that journalistic ethics is all but dead and that 99% of texts out there are crafted with "a purpose" (usually just financial gain) - in one form or another.
  23. I think the easiest way to explain it is like this: both a 60mm and a 100mm F/6 scope have the same geometry - same light bending, same everything - they are just "scaled" versions, as if you took the 60mm and scaled it by a 10/6 factor. The lens diameter scales, the focal length scales - but angles don't scale (think of a triangle - you can scale the sides but the angles remain the same). Why would the up-scaled version then behave differently from the "base" model? Because of the way light works. Almost all phenomena that we observe in telescopes (aberrations, diffraction effects, ...) come from the wave nature of light. When we scale a telescope we also scale its "errors" - but the wavelength of light remains the same. We don't scale the wavelength of light along with the telescope. If the 60mm telescope had one wave of defocus at a particular wavelength and we enlarge everything except that wavelength - suddenly we have 1.6667 waves of defocus for it - a larger defocus in waves. Since chromatic error is defocus of all wavelengths but two (and some spherochromatism as well, but let's not complicate things) - a larger geometrical error translates into a larger wave error, since the "size of a wave" remains the same. If light behaved like geometric rays, then it would make no difference - larger focal length means a larger image at the focal plane, but we use a longer FL eyepiece to get the same magnification and the image is the same when magnified the same. However, light does not fully behave like that - geometric rays are only an approximation, and it actually behaves by interference of those waves - and we can't scale those. Makes sense?
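The scaling argument above reduces to one line of arithmetic - geometric errors scale with the aperture while the wavelength does not (a sketch of the reasoning, using the 60mm/100mm figures from the post):

```python
def scaled_defocus_waves(waves_small, d_small_mm, d_large_mm):
    """If a lens design has `waves_small` of defocus at aperture d_small,
    the scaled-up design (same F-ratio) has proportionally more waves,
    because the geometric error grows but the wavelength stays fixed."""
    return waves_small * d_large_mm / d_small_mm

print(scaled_defocus_waves(1.0, 60, 100))  # 1 wave at 60mm -> ~1.67 waves at 100mm
```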