Everything posted by vlaiv

  1. I have no idea, but maybe we can get some info here: https://planewave.com/product/l-600-direct-drive-mount/ Direct drive? Zero backlash / zero periodic error - no need for guiding with a good enough model?
  2. Well, we don't have data on that from the above video nor from the spec sheet. Yes, satellites, the ISS and so on ... For some reason I don't like the idea of encoders + guiding. It seems like a waste of encoders - and I always think the two might "fight" if the software is not properly implemented. What is to be trusted - the guide command or the encoder correction? In an ideal world, the guide command, but as we have seen, that depends on the accuracy of the measurement, and seeing can disturb it somewhat. But if we had perfect guide corrections - then we would not need the encoder at all, right? So it must be the encoder that has "the upper hand" - but why guide then? And off I go in circles ...
  3. I really like my GSO 32mm plossl, but I'm not sure it stands out against other 32mm plossl offerings. A 32mm plossl is the maximum in the 1.25" format without losing field of view. It has a field stop of about 27mm. Longer focal length eyepieces in 1.25" are bound to have a smaller FOV - and in principle show you the same amount of sky (same field stop) - only a bit more zoomed out in a bit narrower field of view (like the 40mm plossl). There is really not much difference between 28mm and 30mm or even 32mm - and I'm not entirely sure where you found a 28mm plossl at all? Usually the next step down is either 26mm or 25mm (probably the same FL - just labeled differently).
  4. By the way - I would not get a 10 Micron mount for myself (although it might look like I'm a fan from the above discussion). It is indeed pricey in my opinion - and I would not have use for it. I like friction mounts better, and I'm actually eyeballing this one: https://www.geminitelescope.com/efric-friction-drive-mount-german-equatorial/
  5. 0.2-0.3" RMS of what? Tracking precision or guiding precision? You compare it to the Mesu - but the two are very different in their use case - at least the way I see it. The 10 Micron is meant to be used without guiding. The Mesu 200 absolutely needs guiding to be used. If you want to compare the specs that we do have on the two (we don't actually have guide precision for the 10 Micron mount) - then compare this: vs The 10 Micron is at least 2x more accurate in tracking (if we assume that P2P error is representative of the smoothness of the mount) and can maintain that level of performance without needing to autoguide (they also state less than 0.7" RMS without guiding with an accurate model). On top of that, the 10 Micron mount has several other things that the Mesu lacks:
     1. It can track really fast (servo motors - specs say up to about 10 degrees per second)
     2. It is probably not as sensitive to balance as the Mesu is. Friction drives work best when you have a very well balanced setup
  6. Seeing can't influence the model - but it can influence the measurement of the current position. In order to measure how well the mount is performing we need to measure its motion against Earth's rotation, right? In order to do that we assume the star is stationary in the FOV. Seeing will cause the star to "bounce" around even with a perfect mount (there is a tilt component of the wavefront aberration that causes a shift in apparent star position due to seeing). No, or at least minimally. Both of these depend on exposures longer than a few seconds. In theory, the tilt component of the wavefront aberration should average out over a few seconds. Seeing is measured as the 2-second-exposure star FWHM - which suggests that, for the most part, after two seconds the star profile is Gaussian in shape - due to averaging of different wavefront aberrations (central limit theorem). In reality I've found that sometimes exposures up to 8s help to stabilize seeing. I usually guide at 4s exposures because of this. There is a tradeoff - you need a good mount (mechanically) to be able to guide with long exposures - the longer the guide exposure, the more actual tracking error builds up and the larger the correction needed - guide RMS rises; but if the guide exposure is too short - then RMS rises due to seeing-introduced measurement error. Bottom line - seeing is only important in the sense that it obscures the true tracking / guiding error, and there are tricks one can perform to minimize that:
     1. longer guide exposure
     2. multi-star guiding
  7. You want as good guiding as possible regardless of the seeing. The two compound to produce the resulting blur - lowering either component is beneficial (it does not behave linearly, and in principle one can swamp the other if they are significantly different in magnitude - but tracking/guiding is reported as RMS error while seeing is reported as FWHM - two different measures. RMS is roughly FWHM / 2.355 for a Gaussian-type distribution). Ok, so you need to understand the difference between mount tracking performance, mount guiding performance and tracking/guiding measurement precision. What you see in the video is mount tracking performance, as there are no guide corrections made. We see three numbers here: RA has 1.98" RMS, DEC has 0.44" RMS, and the total is 2.03" RMS. But we have to note several things:
     1. Mount tracking is flawed - probably because of a poor model. RA has a significant drift / downward trend on the graph. This means that the mount is not tracking at the correct rate. My guess is that the model is somehow flawed (probably because of that "moving mirror compensation" mentioned in the video). We don't have an accurate RA RMS until this linear trend is removed / the model is fixed.
     2. There is quite a bit of seeing-introduced error. If we look at the DEC stats / graph, one thing is obvious: polar alignment is very good and there is virtually no drift in DEC - which in turn means that the DEC variation in position is mostly due to seeing - we have, in this case, an error in position measurement introduced by seeing for the given exposure time (longer exposure times reduce this error). Actually, error in DEC can be due to several things - shake of the setup due to wind, mechanical shake from the mount tracking - or seeing. Given that this is a 10 Micron mount, I think we can exclude the first two. It is mounted on a pier and it is a heavy duty mount - wind influence is unlikely. It is also a very good mount mechanically - so I don't believe that motor tracking in RA would cause shake in DEC. This leaves us with seeing. If you look at the two lines - green is just a bit rougher than red (excluding the downward drift) - this tells me that most of the error in RA is also due to seeing - but the mount is not perfectly smooth. It will have some roughness, and I don't think that this roughness can be guided out, as it is too fine (the encoders are already doing the work).
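The FWHM-to-RMS relation above (RMS ≈ FWHM / 2.355) and the way independent blur sources combine can be sketched in a few lines of Python (a minimal sketch; the 2" / 0.5" figures are illustrative numbers, not values from the post):

```python
import math

def fwhm_to_rms(fwhm):
    """Convert a Gaussian FWHM to its RMS (sigma): FWHM = 2*sqrt(2*ln 2)*sigma ≈ 2.355*sigma."""
    return fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))

def total_blur_rms(seeing_fwhm, guide_rms):
    """Independent blur sources add in quadrature once both are expressed as RMS."""
    return math.hypot(fwhm_to_rms(seeing_fwhm), guide_rms)

# Illustrative: 2" seeing FWHM combined with 0.5" RMS guiding
print(total_blur_rms(2.0, 0.5))
```

This also shows the "swamping" behavior mentioned above: because the terms add in quadrature, the larger one dominates the total when the two differ significantly in magnitude.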
  8. Not only that - there is also the matter of quantization error / noise - which does not behave like regular noise. With an e/ADU value of about 4 - there are two bits of rounding error. Say that you have a 15e pixel value - it will be recorded as 3 ADU - and when you convert it back to electrons it will be 3 ADU * 4 e/ADU = 12e instead of 15e. Any value in electrons will be divided by 4 and rounded down to an integer (or to the nearest - the effect is the same: 3 out of 4 values will have an error). This quantization error looks like a "ladder" - or steps. 12e, 13e, 14e and 15e are all recorded as 3 ADU, i.e. rounded to 12e. High read noise is an attempt to mask this - it is called noise shaping: https://en.wikipedia.org/wiki/Noise_shaping It does help - but when you stack many images that have the same noise pattern - read noise goes down, but this type of "predictable" noise is not reduced as well as truly random noise and starts surfacing. It is most visible in areas of the image where there is no signal (shot noise also helps to mask it).
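The "ladder" effect described above is easy to reproduce; here is a minimal numpy sketch of quantization at 4 e-/ADU with floor rounding, as in the example:

```python
import numpy as np

GAIN = 4.0  # e-/ADU, as in the example above

def record(electrons):
    """Simulate readout: quantize electron counts to ADU (floor), then convert back."""
    adu = np.floor(electrons / GAIN)
    return adu * GAIN

# 12e, 13e, 14e and 15e all land on the same 3 ADU step, i.e. read back as 12e
signal = np.array([12.0, 13.0, 14.0, 15.0])
print(record(signal))
```

Because the error pattern repeats identically across frames, stacking averages it down much more slowly than truly random noise, which is why it eventually surfaces in signal-free areas.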
  9. It is much simpler than I thought it would be. It won't help with FF/FR spacing and the issues related to that, though.
  10. I went to Aladin Lite, found M100 and turned on labels - there is a control on the right to select the catalogue. Then select the target and get info:
  11. It is a galaxy. In the SIMBAD database it has the following identifier: SDSS J122218.25+154759.2 (from the Sloan Digital Sky Survey). It is about 320 Mly away (z = 0.0233758).
  12. Always, or does it depend on the quality of the prism? As far as I can tell, there aren't many "affordable" models out there. I would personally consider the Baader T2 model (or maybe the TS T2 model) as a "starting" point. I guess those are bound to be finely polished, but what about cheaper ones?
  13. Yes, like @Stuart1971 pointed out - technically it is not hardware binning - but rather "on camera" / "on chip" or "firmware" binning, as opposed to software binning - which is performed after capture. If we want to be pedantic about it - both are software-type binning - but one is performed by the camera firmware and the other we perform at our leisure after capture. Hardware binning is not a mathematical operation like software binning - it is rather "putting together electrons from multiple pixels prior to readout". It can only be performed with CCD-type sensors, because of the hardware architecture. CMOS sensors have A/D conversion at each pixel and no way of combining electrons from multiple pixels. CCDs have one A/D converter (per row, or per sensor - depends on the type) and electrons are "marshaled" to the A/D converter from each pixel. It is during this time that they can be "poured" into the same well and combined to perform hardware addition - or binning.
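For reference, the software binning described here is a simple post-capture averaging operation; a minimal numpy sketch of 2x2 average binning:

```python
import numpy as np

def software_bin2x2(img):
    """Average each 2x2 block of pixels (software binning, done after readout)."""
    h, w = img.shape
    h -= h % 2  # trim odd edges so the image splits
    w -= w % 2  # cleanly into 2x2 blocks
    blocks = img[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

img = np.arange(16.0).reshape(4, 4)
print(software_bin2x2(img))
```

Averaging in floating point like this keeps the extra precision that binning produces, unlike firmware binning that must round back to the camera's native bit depth.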
  14. Yes, replace it for night time observation (but do keep it around, as it is handy for daytime spotting at lower magnifications). With a 90mm Mak it really does not matter whether you get a mirror or prism diagonal at 90 degrees. Prisms usually work better in slower systems than in faster ones - but since a Maksutov is already a slow system, a prism diagonal will work fine. A mirror, as long as it is decent quality, works equally well in both slow and fast systems. Affordable prism: https://www.firstlightoptics.com/diagonals/celestron-90-prism-diagonal.html Affordable mirror: https://www.firstlightoptics.com/diagonals/stellamira-1-25-90-di-electric-diagonal.html (there are cheaper mirrors - but they are probably less reflective and not as good: 91% vs 99%).
  15. I'm not really sure which one does it - but it is the same thing as sigma reject when stacking - with enough pixels one can determine the statistics of the sample, and anything that falls outside those statistics can be rejected. I'm sure this could easily be implemented as a PI script for example (a pixel math thing). I've done it as an ImageJ plugin. Not sure which EEVA software does it, though.
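A minimal sketch of the sigma-reject idea for a stack of frames (using the MAD as a robust sigma estimate is an implementation choice here, not something from the post):

```python
import numpy as np

def sigma_reject_mean(stack, kappa=3.0):
    """Mean-stack frames, rejecting per-pixel values far from the median.
    The MAD-based sigma estimate keeps outliers from inflating the threshold."""
    stack = np.asarray(stack, dtype=float)
    med = np.median(stack, axis=0)
    sigma = 1.4826 * np.median(np.abs(stack - med), axis=0)
    # values beyond kappa*sigma from the median are excluded from the mean
    clipped = np.where(np.abs(stack - med) <= kappa * sigma, stack, np.nan)
    return np.nanmean(clipped, axis=0)
```

For example, a pixel that reads ~1000 in one frame while the other frames agree on ~1 gets rejected, and the output at that pixel is the mean of the remaining frames.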
  16. Because of limited bandwidth for data download, and also because you can choose a binning method that suits you. As far as the data itself - in principle there is no difference between software and hardware (or rather firmware) binning with modern CMOS sensors. You should in principle get the same result - but in practice it is not so, because of other limiting factors - namely the data format. Say that your camera is 12-bit. Binning by its nature increases the bit depth of the data. Regardless of the binning method - average and sum are the same (except for the division factor - but that is constant over the whole image and does not matter as far as SNR goes). Add four 12-bit numbers and you'll get a 14-bit number. Average four 12-bit numbers and you'll get a 12-bit number with 2 bits after the decimal point. In either case you need more bits to accurately represent the number that you get from binning. However - firmware binning still produces a 12-bit number. It loses the precision that you gained with binning. With software binning you download the original 12 bits - and can then do all the operations in floating point, without any loss in precision (although binning still produces a "round" number of bits - there are operations that are not like that and are therefore better represented in floating point format). You can also choose to do "clever"-er binning. This would greatly improve results in your case, for example. Say that you have telegraph noise like in your example. Btw - that is probably telegraph noise that you are seeing - it looks like hot pixels but is not consistent and pops up in different places, much like Morse code (hence the name telegraph noise - dash dot dot dash dot ....). You can choose to bin 2x2 pixels and take the average of all four - or you can choose to skip one pixel if you determine it is anomalous, and just average the remaining 3 that you think have the proper value (that can be determined by some sort of threshold - like: if one of the pixels is much larger than the average of the four, odds are that it is hot and should not be taken into account).
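The "clever"-er binning described above - dropping an anomalous pixel from each 2x2 block before averaging - could look something like this (a sketch; the 2x-the-block-average threshold is an arbitrary illustration, and it assumes calibrated, non-negative pixel values):

```python
import numpy as np

def clever_bin2x2(img, threshold=2.0):
    """2x2 software binning that skips anomalous pixels: any pixel more than
    `threshold` times its block's average (e.g. a telegraph-noise spike)
    is dropped and the remaining pixels are averaged instead.
    Assumes calibrated, non-negative pixel values."""
    h, w = img.shape
    h -= h % 2
    w -= w % 2
    # group pixels into (row_block, col_block, 4) blocks
    blocks = (img[:h, :w]
              .reshape(h // 2, 2, w // 2, 2)
              .transpose(0, 2, 1, 3)
              .reshape(h // 2, w // 2, 4))
    avg = blocks.mean(axis=-1, keepdims=True)
    good = blocks <= threshold * avg   # flag pixels that look sane
    counts = good.sum(axis=-1)
    # average only the "good" pixels (for non-negative data, at most one
    # pixel per block can exceed 2x the block average, so counts >= 3)
    return (blocks * good).sum(axis=-1) / counts
```

So a block of (10, 100, 10, 10) bins to 10 rather than 32.5, which is exactly the improvement the post describes for frames with telegraph noise.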
  17. If you don't get them using 1x1 binning - then there is no reason to get them when using software binning at any level (2x2, 3x3, ...). Software binning is the preferred way of binning with modern CMOS sensors, btw.
  18. I think the distinction between clothes and trees catching fire (the former at a shorter distance) has to do with the profile / cross section presented to the blast, rather than the type of material or how easy it is to set on fire by normal means. Trees are taller and have a larger cross section, and will thus collect enough energy for ignition at a larger distance.
  19. Anyone else find the linked site to have therapeutic value? Hmm, I'm annoyed by XY - let's fling a medium sized meteorite at their house and see what happens
  20. @OK Apricot I took a look at the 2023 guide log (the one with OAG), and here are a couple of points that might make your guiding better:
     1. There seems to be a mechanical issue with the DEC axis - probably quite a bit of backlash, which can be tuned out mechanically. Look at the graph above - once DEC drifts away (red line) it takes many corrections to bring it back - and then it overshoots, and it takes many corrections again in the other direction to bring it back. There are numerous sections like that in the guide graph. The mount is not responding properly to corrections.
     2. RA guiding is too aggressive, or some parameter is improperly set: now focus on the blue line instead of the red line. This time the mount responds very fast - but overshoots almost all the time. The graph is zig-zagging above/below the baseline a lot.
     I would check the following parameters: RA Guide Speed = 13.5 a-s/s, Dec Guide Speed = 13.5 a-s/s. That is way too high. Sidereal is 15"/s and you are guiding at 13.5"/s - so that is 90% of sidereal. Mounts that are not mechanically "tight" don't like fast corrections - a lot of mass is being pushed / pulled quickly, and that can lead to a lot of overshoot. Try using a very conservative guide speed of say 0.25x sidereal - or about 3.75"/s on both axes.
     Next thing is MinMo. This parameter is impacted by focal length, as it is set in pixels rather than arc seconds. 0.1 px MinMo translates to 0.15". If you have seeing issues - try raising this value. You don't want to react to every little shift in guide star position - those are most likely due to seeing. Try putting that at say 0.2 - 0.25 px on both axes.
     Third is aggressiveness. You are guiding at 100% in DEC (which is ok at the moment, as you have a mechanical issue - but eventually leads to overshoot) - and 80% in RA. Maybe bring that down a bit once you fix DEC - to say 60%, if you guide on a short cycle of 2s.
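The arithmetic behind those numbers can be sanity-checked with a few lines (the 1.5"/px image scale is an assumption used only to reproduce the 0.1 px → 0.15" figure from the post):

```python
SIDEREAL = 15.0  # arc-seconds per second, as used in the post

def guide_speed(fraction_of_sidereal):
    """Guide speed in "/s for a given fraction of the sidereal rate."""
    return fraction_of_sidereal * SIDEREAL

def minmo_arcsec(minmo_px, arcsec_per_px):
    """MinMo is set in pixels; convert to arc-seconds for a given image scale."""
    return minmo_px * arcsec_per_px

print(guide_speed(0.9))        # the 13.5 "/s setting from the log
print(guide_speed(0.25))       # the conservative suggestion
print(minmo_arcsec(0.1, 1.5))  # 0.1 px at an assumed 1.5 "/px scale
```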
  21. It looks like it was down to tolerances and thread form. Once I made my own model - where I could tweak the tolerance and which had the proper thread form - I can now thread 1.25" filters without too much trouble. I still get some resistance on the first few goes - but once the threads are additionally formed, it works fine after that.
  22. Scope -> adapter -> field_flattener -> extension_tube -> camera. That is why it's called the train
  23. Many people forget to update the guide focal length in settings when switching to an OAG (leaving the guide scope focal length in place). This can lead to substantial error in RMS values (like doubling or tripling of the RMS). Check that you have the guide focal length correct. Other than that - @ONIKKINEN is right - an OAG does have higher precision in measuring star position - and that results in a more jagged graph (a guide scope with lower resolution "smooths" the graph).
  24. I had issues with belt mod on my HEQ5 until I really tightened belts, then problems went away. Quite possible that issue was due to belt tension.
  25. I have access to some fairly cheap aluminum tubing - they cut it to size and charge by the kg. Maybe you do too? I plan on using that. Some primer paint for aluminum, then matte black for the inside and shiny black for the outside. A few holes so you can attach the lens cell and a 3D printed focuser.