Everything posted by vlaiv

  1. I just noticed something and I'm infuriated. My observatory is in the middle of the build. Crude works have been completed. It has an observing deck (concrete slab) and a pier made out of reinforced concrete - those two are decoupled (about an inch of space between them). The pier has its own foundations. I did not inspect it previously, but since work around the house is (hopefully) coming to an end - I went outside to see what had been done today and decided to check out the obsy as well - climb onto the observing platform (it is about a meter off the ground) and see the skyline / horizon. I also then inspected the pier and mounting plate. It has been exposed to the elements for a few days and rust has started to show up, and I wanted to check how to best protect it - and then I realized: the pier is loose! I have no idea what has happened - but I can move it a couple of centimeters left and right with one hand. This thing should be rock solid and it's bending like a piece of rubber. Darkness was starting to set in so I could not really identify the problem, but I think that the concrete pouring did not go well - or some sort of crack formed next to the foundation. Now I'm wondering what would be the best course of action in attempting to make it stable. My initial gut feeling says - break and remove that one and redo the concrete pour? Will that work? How do I make sure it is solid and does not budge? (I'm really confused about how this happened in the first place, so I'm worrying in advance that it will happen again.) Does anyone have any ideas how to best proceed on this one?
  2. It does not work quite like that. You need to be looking at two different graphs. The first graph is similar to this one (but will be different depending on scope type): on one axis you have the wavelength of light, and the graph shows focus shift versus wavelength. For example - if you have 550nm in focus (e line) - then the h line will be about 4 mm defocused. The graph above is for a 4" f/10 Fraunhofer type doublet. The next graph that you want to look at is the filter response curve - and then compare those two graphs. For example - in the above image - Wratten #8 cuts all light below about 460nm and starts to pass a significant amount of light only after 500nm or even a bit more. Getting back to the first graph - this happens: you have effectively removed the outlined part of the spectrum - the most defocused bits really - and what is left is either in focus or similarly defocused as the r line slightly above 700nm.
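A rough sketch of that reasoning in code, for anyone who wants to play with the numbers. Only the ~4 mm h-line figure above is from the actual focus-shift curve; the other defocus values are made-up placeholders that just mimic the shape of a typical achromat curve:

```python
# Illustrative only: approximate focus shift (mm) per spectral line for an
# achromat with the e line (546 nm) in focus. Only the ~4 mm value for the
# h line comes from the text above; the rest are placeholder numbers.
defocus_mm = {
    405: 4.0,   # h line - strongly defocused
    436: 2.0,   # g line (placeholder)
    486: 0.7,   # F line (placeholder)
    546: 0.0,   # e line - in focus
    656: 0.3,   # C line (placeholder)
    707: 0.6,   # r line (placeholder)
}

cutoff_nm = 500  # Wratten #8 passes little light below roughly this wavelength

worst_unfiltered = max(defocus_mm.values())
worst_filtered = max(v for wl, v in defocus_mm.items() if wl >= cutoff_nm)

print(f"worst defocus, no filter:   {worst_unfiltered} mm")
print(f"worst defocus, with filter: {worst_filtered} mm")
```

The point is simply that the filter removes the wavelengths with the largest focus shift, so the remaining light is all close to focus.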
  3. I used #8 and it works very well - except for the overall yellow tone to the image. When imaging - it can be somewhat corrected with proper color calibration, but for visual it is noticeable. Here is a test that I conducted with a 102mm F/5 achromat: each row represents the same image (one combination of factors) - in three different levels of stretch - high, medium, low. From top to bottom we have a certain aperture (I used aperture masks) and the next row is the same aperture with a Wratten #8 filter. First is full aperture, then 80mm, 66mm and finally 50mm (I think) - but the point is - look at the photographic impact of Wratten #8 in removing purple fringing (here it appears blue, as the camera sees it as blue rather than purple). Even at full aperture the filter reduces the halo significantly (and remember - this is a 4" F/5 achromat). I've heard that the 495nm long pass by Baader produces even better results.
  4. I think it can be adjusted much like most mounts out there. Have a look at this thread - it contains a doc with steps to strip down the mount - that will come in handy when you adjust the worm / worm gear engagement:
  5. I guess that most people don't have the gear to fully utilize a CCD sensor like the KAF-8300. It has very high read noise and therefore requires long exposure times - 10-20 minutes long. People have issues when needing to guide for 20 minutes per sub. It is simply much easier to switch to CMOS that you can use with, say, 2-3 minute exposures - or x5-6 shorter ones (the KAF-8300 has about 9e of read noise, while modern CMOS often has less than 2e). You don't have to go for an APS-C sized sensor. You can go for, say, a 294MM type camera. That will be both cheaper and allow you to use your filters, and will provide higher QE (and shorter subs). There is also convenience when operating a CMOS vs a CCD camera that is in line with a modern lifestyle. Who has the time and patience to wait for a multi-second sub download when they can have multiple-FPS live previews? I'm not saying that you should switch - if the 383L+ camera is working for you - then sure - CMOS will work a bit better, but the question is - can you justify the cost.
  6. Depends on what you see as subtle / significant. There is a simple way to compare the two - QE. All other things are pretty much a "tie" in the right hands, but you can't optimize for QE - it is either there or not. I'll go by new prices - and find a comparable camera on the CMOS side. At FLO - the 383L+ is around ~£2000. For the same amount of money you can get the Altair Astro 26M APS-C. You get a larger sensor - with QE above 90% while the KAF-8300 peaks below 60% - that is a 50% improvement in QE - and consequently - you would get the same image in 2/3 of the time - judging by just that one parameter. Not to mention that you get a larger sensor - APS-C sized vs the 4/3 one of the KAF-8300 - and sensor size is speed when paired with an appropriate telescope. If you captured the above image in less than 3 hours - would it be a subtle or significant improvement to get the same image in 2 hours or less?
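A quick back-of-the-envelope check of that 2/3 figure, since exposure time scales inversely with QE (all else being equal):

```python
# Required exposure time to collect the same number of photo-electrons
# scales inversely with quantum efficiency (judging by QE alone).
qe_kaf8300 = 0.60   # KAF-8300 peaks just under 60%
qe_cmos    = 0.90   # a modern BSI CMOS sensor peaks above 90%

time_ratio = qe_kaf8300 / qe_cmos
print(f"Same signal collected in {time_ratio:.2f} of the time")  # ~0.67, i.e. 2/3
```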
  7. I don't think that will work particularly well. I have a similar x0.5 focal reducer and the effect it has on 1.25" eyepieces that are already close to the max field stop for that format is vignetting (and it also messes up eye relief - which can be an issue as well - blackouts and things like that). If you want to observe M31 - you really want a 2" eyepiece, but even then - 1200mm FL is just too much to observe it whole. Use this tool to get an idea of what you can see in your scope with a particular eyepiece: https://astronomy.tools/calculators/field_of_view/ Even with one of the best budget wide field 2" eyepieces - the Aero ED 40mm - you won't be able to fit M31 in the FOV with that scope. To completely fit M31 in the FOV - you want a scope with about 700mm or less focal length (maybe 750 - 800mm but that is pushing it - below 700mm you will get nicer framing). For example - the Skywatcher Star Travel 120 (famous ST120) - a wide field scope - with the 35mm version of the Aero ED eyepiece will give you very nice framing (here you are approaching binocular magnifications - like x17; btw, binoculars are a very good way to observe M31 as it is very large). Just one more note on the original topic of that focal reducer - it has a clear aperture smaller than the max field stop of a 1.25" eyepiece, and the field will be "squeezed" (reduced) into an even smaller circle - that is why vignetting occurs with it. It is otherwise ok for shorter focal length eyepieces like 17mm and below - when you try to make them work as wider field eyepieces (but it is often simpler to just get a wider field EP - like that 25mm BST).
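A small sketch of the framing math behind this, using true FOV ≈ apparent FOV / magnification. The ~68° AFOV for the Aero ED line and M31's ~3° major axis are my assumed figures here, not from the post:

```python
# True field of view estimate: TFOV ~= AFOV / magnification,
# where magnification = scope FL / eyepiece FL.
def true_fov(scope_fl_mm, eyepiece_fl_mm, afov_deg):
    magnification = scope_fl_mm / eyepiece_fl_mm
    return afov_deg / magnification, magnification

m31_long_axis_deg = 3.0  # roughly 3 degrees along the major axis (assumed)

for label, scope_fl, ep_fl, afov in [
    ("1200 mm scope + 40 mm Aero ED", 1200, 40, 68),
    ("ST120 (600 mm) + 35 mm Aero ED", 600, 35, 68),
]:
    fov, mag = true_fov(scope_fl, ep_fl, afov)
    fits = "fits" if fov > m31_long_axis_deg else "does not fit"
    print(f"{label}: x{mag:.0f}, ~{fov:.1f} deg -> M31 {fits}")
```

With these numbers the 1200mm scope gives roughly a 2.3° field (too tight), while the ST120 at ~x17 gives about 4° and frames the whole galaxy.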
  8. The scope will be diffraction limited while camera lenses are not. This simply means that it is possible to get far sharper images of some targets with a telescope than with a lens. The telescope will gather more light as well - which can be translated into "speed" - but will have a narrower field of view than all of your lenses. Even with the reducer, the 72ED will have 360mm of FL - while your "longest" lens has 300mm of FL. This means that all errors in focusing / guiding and so on will be more visible with the telescope. Whether you will be able to exploit the sharpness of the telescope depends on your whole setup. If your mount does not track particularly well, or you don't pay attention to focusing very precisely - then you won't have a very sharp image. Just to give you an idea of the sharpness difference: this is your M31 taken with a lens. That is as good as a good lens gets - I have a similar image taken with a Samyang 85mm wide open (stopping it down would probably fix some of the issues). But here is what a 380mm FL scope (80mm F/6 scope with reducer) can produce. In fact - it will go into much more detail and remain sharp - the above is just scaled to match the size of the lens-made images.
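To put rough numbers on the sharpness comparison, here is a sketch of the diffraction limit (Rayleigh criterion) for the 72 mm aperture and the pixel scale at the focal lengths mentioned above. The 4.3 µm pixel size is purely an assumption for illustration:

```python
import math

def rayleigh_limit_arcsec(aperture_mm, wavelength_nm=550):
    # Rayleigh criterion: theta = 1.22 * lambda / D, converted to arcseconds
    theta_rad = 1.22 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3)
    return math.degrees(theta_rad) * 3600

def pixel_scale_arcsec(pixel_um, focal_length_mm):
    # Standard sampling formula: 206.265 * pixel size (um) / focal length (mm)
    return 206.265 * pixel_um / focal_length_mm

pixel_um = 4.3  # assumed pixel size, illustration only

print(f"72ED diffraction limit:  {rayleigh_limit_arcsec(72):.2f} arcsec")
print(f"72ED + reducer (360 mm): {pixel_scale_arcsec(pixel_um, 360):.2f} arcsec/px")
print(f"300 mm lens:             {pixel_scale_arcsec(pixel_um, 300):.2f} arcsec/px")
print(f"85 mm lens:              {pixel_scale_arcsec(pixel_um, 85):.2f} arcsec/px")
```

The scope samples the sky finer than the lenses while still being able to reach its diffraction limit, which is why focusing and guiding errors show up more readily with it.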
  9. This sort of balances fine. I can't really tell, as my AZ-GTi is a bit stiff in the RA axis, but it is much better than without it. I need to open it up and adjust the tension on the bearing so it moves more freely when the clutch is disengaged. Now I just need to go and find the actual weight that I'll use (the one above has a 30mm center bore and is needed for exercise, so I need to get another for astronomy purposes - possibly with a smaller center bore).
  10. That is a good temporary solution, I think. I do have a 1.25Kg weight which I can clamp with washers I already have. It takes some fiddling to get the position right - but once done - I can mark the bolt position for when I need the weight for other stuff. I just hope the weight won't interfere with the tripod or anything, since it is quite flat and large in diameter. Will need to test that.
  11. That was an option too - using a DIN 931 type bolt. The problem is with imports again. Shipping and import duties cost much more than such a single bolt. The longest one that I have access to here is less than 200mm long (180mm I think). That is not long enough. I would be happy with a 300mm version like you have in the image there - but that I would need to order from abroad.
  12. Yes. Here is a rather crude example. Say you have a 0-1 range and you have "fixed precision" of 100 units in that range. I'll be working with decimal numbers to make it easy to understand and calculate. Your image consists of three pixels: one bright, one medium and one very faint. Bright will be 0.851, medium will be 0.5, very faint will be 0.001. That is the "floating point representation" - you don't need to worry about precision. If you write that out in our fixed precision - look what happens: 0.851 will be written as 0.85 (only two decimal places allowed) - here you lose very little of the value, the error is (0.851 - 0.85)/0.851 = ~0.12%. 0.5 will be written as 0.5 - here, due to the number itself, you don't lose anything by rounding. 0.001 will become 0 - here you have lost all the information. That is the problem with the fixed point format - you lose information in very faint areas. But what will happen if we apply a stretch first? The stretch that we apply can be represented by a power law - raising to a certain power. Here we will use a power of 0.1 (equivalent to gamma of 10). 0.851 ^ 0.1 = ~0.984 = 0.98 when we round it to two digits. 0.5 ^ 0.1 = ~0.933033 = 0.93 when we round it to two digits. 0.001 ^ 0.1 = ~0.5012 = 0.5 when we round it to two digits. Now we did not lose any of the pixel values completely. Nothing was rounded to 0; we still have values that are very close to those in full precision. In fact - if we "inverse" those fixed point numbers - we will get very close to the original values: 0.98 to the power of 10 is ~0.817073, 0.93 to the power of 10 is ~0.484, 0.5 to the power of 10 is 0.0009765625. As you can see - the error is larger where we had a large signal to begin with and very small where the signal was very small. You can view all of that in another way: linear data in fixed point format uses equal precision for both high and low values in the image - it introduces the same level of rounding error everywhere. Stretching and then using the fixed point format uses high precision for faint parts and low precision for bright parts - so the rounding error is small on faint stuff and high on bright stuff - which is better from an SNR perspective, as faint parts already suffer from poor SNR and don't need more (rounding) noise added.
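The same example written out as a few lines of Python, so the numbers can be checked directly (rounding to two decimals plays the role of the fixed point format, gamma 10 is the stretch):

```python
pixels = [0.851, 0.5, 0.001]          # bright, medium, very faint
gamma = 10                            # stretch: raise to the power 1/gamma

def to_fixed(x, digits=2):
    return round(x, digits)           # crude "fixed point" format: 100 levels in 0-1

print("linear data stored in fixed point:")
for p in pixels:
    print(f"  {p} -> {to_fixed(p)}")          # the faint 0.001 is rounded to 0

print("stretched first, stored in fixed point, then un-stretched:")
for p in pixels:
    stretched = to_fixed(p ** (1 / gamma))    # e.g. 0.001 -> ~0.50
    recovered = stretched ** gamma            # undo the stretch
    print(f"  {p} -> {stretched} -> back to {recovered:.6f}")
```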
  13. I really like that idea of using 32bit floating point precision. It is simply enough precision for all amateur astronomical uses. Although the camera is, for example, 12 bit or 14 bit - we end up stacking a lot of subs. Each new sub we stack "adds" to bit depth. If you stack for example 128 subs - you'll add 7 bits of depth to the image. Even if you took those with a 12bit camera - you are already at 19 bits of fixed point precision (I'm emphasizing fixed point here). That means stacking in 16bit is simply silly and should not be done - you should convert to 32bit floating point as soon as you start your processing workflow, even before calibration. Although 32bit floating point has only 23 bits of precision in the mantissa, it is much more precise than the 23 bit fixed point equivalent. This is because of the "floating point" part. With the fixed point format - you have a fixed ratio of the brightest and darkest pixel in the image. For example, in 16bit - the brightest value is 65535 while the darkest (non zero) value is 1. So that is a ratio of 65535:1. With floating point - every pixel has its own 23 bits of precision - regardless of how bright or dark it is. That means huge dynamic range without loss of precision. For small values - say values of 100 or so (in electron count) - you'll get enough precision even if you stack an enormous number of subs - say you stack 4096 subs and you use a 14bit camera. With fixed point numbers - that would require 12 bits (from the 4096 = 2^12 subs) + 14 bits = 26 bits of precision. With floating point numbers - you have 23 bits of precision - but that is more than you need. In a single sub the SNR is 10 (shot noise on a 100e signal is sqrt(100) = 10). In a stack of 4096 subs, the SNR improvement is x64, so the overall SNR when you finish will be 640. Noise is 640 times smaller than the signal. The signal is 100, so the noise is ~0.15. You only need about 10 bits of mantissa to write 100 +/- 0.15 within noise limits. You can't do that with fixed point numbers - as fixed point numbers need bits to maintain dynamic range - they describe both the value of a pixel and also how large it is compared to all other pixels in the image. With floating point - you get the exponent to do that and the mantissa is just for the precision of that single pixel. Anyway, back to your question. Doing processing in 16 bit is a real concern and I would not do it. At least - not directly like that. I discourage people from saving stacks in 16bit format and then using PS or other image manipulation software to stretch that image. That does not make sense. However, with StarNet++ - you can use the 16 bit format, because StarNet++ expects stretched data. If you take the 32bit format and stretch it - you will take all those faint low SNR areas and make the numbers big enough to be comparable to other numbers in the image. You will compress the dynamic range - and by compressing the dynamic range - you'll again be in the zone that the 16bit format handles ok. The only issue with this approach is if you need a linear starless image for whatever reason. You could do a mathematical stretch - like gamma with a known number - then perform star removal and then undo the stretch. However, your precision will suffer somewhat and the resulting linear data will have some quantization noise injected into it (because you rounded the numbers when you used the fixed point 16bit format). Makes sense?
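The arithmetic from that paragraph, written out as a quick sketch:

```python
import math

# Bits of depth "added" by stacking N subs (fixed point view of a summed stack)
def added_bits(n_subs):
    return math.log2(n_subs)

print(added_bits(128))          # 7 bits -> a 12-bit camera ends up at 19 bits
print(14 + added_bits(4096))    # 26 bits needed in fixed point for the 4096-sub case

# SNR bookkeeping for the 4096-sub, 100e-signal example
signal = 100                                # electrons per sub
snr_single = signal / math.sqrt(signal)     # shot-noise limited: 10
snr_stack = snr_single * math.sqrt(4096)    # x64 improvement -> 640
noise_in_stack = signal / snr_stack         # ~0.156 e
bits_needed = math.log2(signal / noise_in_stack)  # ~9.3 -> about 10 bits of mantissa
print(snr_single, snr_stack, round(noise_in_stack, 3), round(bits_needed, 1))
```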
  14. Just a quick question - what would be better then, aluminum or brass? I have a feeling that brass would be better (easier) as far as machining goes. I'm just wondering about the mechanical side - will it bend in use? (It should not, I mean it is a 12mm thick shaft and the weight is not going to be substantial.)
  15. That is actually a great idea. I was thinking of the best way to do it and I only came up with drilling a hole in the opposite end / tapping it and then using a screw and washer. This is a much better idea - I can thread both ends and see which one is more square with the shaft - and use that on the mount side and put a nut on the other.
  16. That was part of the question. I saw a YouTube video where a guy hand-cuts cold rolled steel with a larger diameter and a coarser thread - 3 inches long. There is a bit of hard work involved - but nothing that can't be done. My reasoning was that both aluminum and brass would be easier to work with than cold rolled steel. My thread will be much shorter as well - maybe 10-12 turns of the die.
  17. I'm actually thinking of just adding a counterweight bar to my setup to replace the current one - one that is capable of holding a bit more weight - like regular mount weights (those for EQ1/EQ2). This is the one I have so far: it was a simple DIY solution, enough to balance out a DSLR + lens. I now have a more complex / bulky setup that I want to use for imaging, and this small counterweight is simply not heavy enough to balance out the scope. I think I still have the piece of threaded M12 rod that I used, and I could make a longer CW bar, but that was a temporary solution anyway and it's time to move on to the "real deal". I have a couple of options. I could purchase the Star Adventurer counterweight kit - but that is M8 and I'd need to get an M8/M12 adapter - which I would need to order from abroad. With current postage prices - that is just silly. It would cost me like 40e or similar to get that and I'd have to wait some time. Another option is just to order a shaft with an M12 thread - but same thing - it would end up costing silly money. The solution is to purchase a 12mm metal rod (either aluminum or brass - that is why I asked which is better) and cut the thread myself - that way the whole thing would cost me like 40 - 45 euro total and I'd have some additional tools for later use (die / die handle - I might even go for a set of taps + dies).
  18. Yes, I think that half an inch will be enough - maybe even less. I don't think it will be too much of a problem to do it by hand.
  19. I already made one out of threaded rod and some washers - but it seems that such an approach provides enough weight for a DSLR but not for a more intricate setup involving side by side lens + guider. Since I can't get a 12mm CW bar with an M12 thread (I can order EQ1/EQ2 weights and those should fit fine on such a rod) - I decided to fabricate one myself. The question is - what material? I'll be hand cutting the thread with a die and I have a choice of two materials: brass and aluminum. The brass is Ms58, while for aluminum I can choose between AlCuMgPb D5 Dural and AlMgSi1. Which one to go for? I'll be using a single weight of 1.8Kg and limiting the length of the CW bar to, say, 35-40cm.
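For the earlier "will it bend" question, a rough static sag estimate under worst-case assumptions (the full 1.8 kg hanging at the very tip of a 40 cm bar held horizontally; Young's modulus values are textbook figures for the two materials):

```python
import math

# Cantilever with a point load at the free end: deflection = F * L^3 / (3 * E * I)
# Second moment of area for a solid round rod: I = pi * d^4 / 64
def tip_deflection_mm(mass_kg, length_m, diameter_m, youngs_modulus_pa):
    force = mass_kg * 9.81
    second_moment = math.pi * diameter_m ** 4 / 64
    return force * length_m ** 3 / (3 * youngs_modulus_pa * second_moment) * 1000

for material, e_pa in [("aluminium", 69e9), ("brass", 100e9)]:
    sag = tip_deflection_mm(1.8, 0.40, 0.012, e_pa)
    print(f"{material}: ~{sag:.1f} mm sag with 1.8 kg at the end of a 40 cm bar")
```

Since deflection scales with the cube of the distance, sliding the weight closer to the mount reduces the sag quickly, and a static sag of this kind does not affect the balancing function of the bar.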
  20. You can obtain a larger image - but without fine detail. It's a bit like taking any image and enlarging it. No additional detail will be there in the image. Many people do that and don't realize that they are not capturing additional detail. I can show you what I mean by taking the above image that I shot at 1"/px and posted in this thread (M51). On that particular night the sky was poor and that image actually contains information for about 2"/px, although I processed it at 1"/px. I'll make two comparison images for you: the first is an animated gif consisting of two frames - one is my image of M51 taken at 1"/px - and the second is the Hubble M51 image rotated and scaled to match my image. You can clearly see how much detail is lost in my version. Sure - the galaxy is large - but it is blurred and without detail. The actual level of detail in my image is closer to 2"/px on that evening. Here - look what happens if I do the same comparison, but this time I resample my image to make it smaller and be at 2"/px: now the difference is very small - only a few bright stars are enlarged and a bit of contrast is missing. Otherwise, the detail is mostly there to the same level. That is the atmosphere for you. That is why professional observatories are built on mountain tops / in deserts or on islands in the middle of the ocean - to get the air as steady as can be. That is also the reason why they developed adaptive optics systems. No, it won't help. What can help somewhat is: 1. Lucky imaging approach. With low read noise cameras there is starting to be less difference between a stack of long and a stack of short exposures adding up to the same total exposure time. With a zero read noise camera - it would not make a difference if you took 36000 x 0.1s exposures and stacked them, or 10 x 360s exposures - the result would be the same (see the sketch below). Shooting short exposures lets you filter out subs where seeing is poor and keep only the best frames. This can help lessen the effect of the seeing. 2. Using a very large telescope. This helps in two ways. First - you can afford to image in the diffraction limited part of the field (without a corrector) and you minimize the effect of aperture on resolution (aperture size adds into the total blur - the larger the aperture, the less blur it adds). Second - it lets you accumulate much more signal - or better SNR in a given imaging time (which is important for the third part). 3. You can actually recover some of the lost detail by sharpening the image. Planetary imagers do that all the time - the only trick is to have a high SNR image to start with. As you sharpen, you actually amplify high frequencies, and among those high frequencies there is noise as well. Amplify too much and noise will become larger than signal at those frequencies and you will get a noisy image. High SNR helps there, as it lets you sharpen more aggressively.
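A sketch of the read-noise point from item 1. The target and sky rates are made-up numbers, only the read-noise behaviour matters:

```python
import math

def stack_snr(n_subs, sub_exposure_s, read_noise_e,
              target_rate_eps=1.0, sky_rate_eps=5.0):
    # Per-sub noise: shot noise from target + sky, plus read noise, in quadrature.
    signal = target_rate_eps * sub_exposure_s
    noise = math.sqrt(signal + sky_rate_eps * sub_exposure_s + read_noise_e ** 2)
    # Stacking N subs: signal adds linearly, noise adds in quadrature.
    return (n_subs * signal) / (math.sqrt(n_subs) * noise)

for rn in (0.0, 2.0, 9.0):
    short = stack_snr(36000, 0.1, rn)
    long = stack_snr(10, 360, rn)
    print(f"read noise {rn} e: 36000 x 0.1s -> SNR {short:.1f}, 10 x 360s -> SNR {long:.1f}")
```

With zero read noise the two stacks come out identical; as read noise grows, the very short subs fall behind, which is why lucky imaging only became practical with low read noise cameras.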
  21. Another thing that you'll need is for your phone (if Android) to support the so-called Camera2 API. That allows RAW files to be shot instead of using lossy compression like JPEG (compression loses detail and prevents you from being able to stack your images effectively).
  22. Ok, saw that. Not sure why that is, but then it probably does not work yet.
  23. No problem whatsoever - we are all discussing similar things. Btw, if there is an INDI driver for the DSLR - INDIGO can use it and possibly expose it to ASCOM clients via the ALPACA agent. Maybe give it a go and see if it works?
  24. Not necessarily. The ratio of image scale in the prime focus vs the afocal method depends on the ratio of focal lengths of the eyepiece and the camera lens. In this case we have a smart phone camera and lens. We need the phone model to determine the parameters of the eyepiece, as there are a couple of things we need to adjust properly. I'll randomly select a phone model to give an example of how the calculations work. Say we go for the Galaxy S8 (random pick). Here we have the pixel size, sensor size, F/ratio of the lens and its equivalent focal length for 35mm film. A 1/2.55" sensor has a crop factor (here the term is actually useful) of ~6.1 - which means that the actual focal length of the phone camera lens is 26mm / 6.1 = ~4.25mm. That is the first parameter that we need - the FL of the camera lens. Next is the entrance pupil (which we must match with the exit pupil of the scope + eyepiece combination) - we have an F/1.4 lens and it has 4.25mm of FL - so our entrance pupil is 3mm - we must not have an exit pupil larger than 3mm. Which eyepiece is best will therefore depend on the scope. Say it is a 6" SCT telescope in question. It is an F/10 scope, so the highest focal length that can be used will be ~30mm (anything above that will create too large an exit pupil and waste light). Using this calculator: https://www.pointsinfocus.com/tools/depth-of-field-and-equivalent-lens-calculator/#{"c":[{"f":13,"av":"8","fl":50,"d":3048,"cm":"0"}],"m":0} we can see how big a FOV the sensor captures in terms of degrees: it is about 80°. We need a 30mm 80° AFOV eyepiece and the phone for optimum performance (so we cover the whole sensor and don't waste light with the exit pupil). From the above parameters, we can calculate the pixel scale and any needed binning, since the pixels are rather small.
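The same calculation chain in a few lines, using the Galaxy S8 figures quoted above (the 43.3 mm full-frame diagonal is the standard value used to turn a 35 mm-equivalent focal length into a diagonal FOV):

```python
import math

# Galaxy S8 rear camera figures quoted above
equiv_fl_35mm = 26.0     # mm, 35 mm-equivalent focal length
crop_factor = 6.1        # for a 1/2.55" sensor
f_ratio_lens = 1.4
scope_f_ratio = 10       # 6" SCT at F/10

actual_fl = equiv_fl_35mm / crop_factor          # ~4.25 mm
entrance_pupil = actual_fl / f_ratio_lens        # ~3 mm
max_eyepiece_fl = entrance_pupil * scope_f_ratio # ~30 mm (exit pupil = EP FL / f-ratio)

# Diagonal FOV of the phone camera from its 35 mm-equivalent focal length
diag_fov = 2 * math.degrees(math.atan(43.3 / 2 / equiv_fl_35mm))

print(f"actual FL:       {actual_fl:.2f} mm")
print(f"entrance pupil:  {entrance_pupil:.2f} mm")
print(f"max eyepiece FL: {max_eyepiece_fl:.0f} mm")
print(f"camera diag FOV: {diag_fov:.0f} deg")   # ~80 deg -> want an ~80 deg AFOV eyepiece
```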