Everything posted by vlaiv

  1. This has me worried somewhat. I see myself owning a Mesu 200 in some (distant) future, and it looks like it's a bit fussy with regards to balancing. How perfect does balance need to be for it to work properly?
  2. In principle you can, and you should for the Mesu, as there is no point in clearing backlash on a mount that should not have any. I'm not sure it is going to solve your issue, though - it will just skip clearing DEC backlash and go straight into DEC calibration, where it will again complain about not detecting any motion in DEC (if everything else stays the same). Could be that a software upgrade will solve the issue.
  3. I think that this is a crucial piece of information. PHD2 will clear backlash until it sees the guide star move on a guide command. It clears backlash in DEC before doing DEC calibration (RA calibration went ok). So it tries to clear backlash, does that a great number of times, then gives up and tries going North - and then it complains about not enough movement in the DEC axis. This simply means one of the following:
     a) for some reason guide commands in the DEC axis are not working properly
     b) the DEC guide rate is set so low that it cannot detect motion of the star even if guide commands are working properly (again, very unlikely that 110 moves would not move the guide star, even if a single step was very small)
     c) it guides normally but the friction drive is slipping in DEC (also very unlikely, because it slews normally, and the drive is more likely to slip under slew torque than on a gentle guide command)
  4. I'm going to expand a bit on why absolute encoders are necessary on an Alt-Az type mount if you want accurate tracking - it is not the same as with GEM mounts, where they merely increase tracking precision; with Alt-Az they are needed for accurate tracking at all. Maybe a diagram will explain it better - I'm going to exaggerate the curvature of things so you can see more easily what is going on. Imagine the scope can't precisely determine its Az position - there is a certain error. The scope is just a tad past the meridian, but "thinks" it is pre-meridian. Software will tell the scope that Alt should increase a bit, but in reality the needed Alt position decreases a bit - tracking will create an error in Alt because it is going "the wrong way", and guiding will fight this instead of only correcting the things it should correct.

A similar thing applies to guiding as well - it needs to know the direction of the axes, where Alt is pointing and where Az is pointing, in order to give proper corrections. If the orientation of the scope's Alt/Az differs from that of the guide system, wrong corrections will be given and guiding will not work at its optimum - it is what happens on GEM mounts when you have a wrong guider calibration. The difference between a GEM guide system and an Alt/Az one is that a GEM guide system in principle needs calibration only once (although people do it every time they change target / part of the sky - good practice, because of cone error and the fact that the RA/DEC axes are not always perfectly at 90 degrees to each other). With Alt/Az you need a constantly changing calibration, and the software needs to track this and adjust, but in order to do so accurately it must know the exact pointing of the scope. This is why you need absolute encoders. I'm not sure what precision such encoders need, but we might be able to calculate drift rates depending on guide command cycle length and position in the sky.
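To make the meridian example concrete, here is a minimal sketch (mine, not from the post - plain spherical trigonometry, with a hypothetical target at dec +30 and observer at latitude +45) showing that the Alt rate a tracking computer would command flips sign at the meridian, so even a small error in the assumed hour-angle/Az position drives Alt the wrong way:

```python
import math

def alt_rate_arcsec_per_s(dec_deg, lat_deg, ha_deg):
    """Rate of change of altitude for a sidereally moving target, in arcsec/s."""
    dec, lat, ha = map(math.radians, (dec_deg, lat_deg, ha_deg))
    alt = math.asin(math.sin(dec) * math.sin(lat) +
                    math.cos(dec) * math.cos(lat) * math.cos(ha))
    # differentiate sin(alt) = sin(dec)sin(lat) + cos(dec)cos(lat)cos(HA) w.r.t. HA
    dalt_dha = -math.cos(dec) * math.cos(lat) * math.sin(ha) / math.cos(alt)
    return dalt_dha * 15.041      # hour angle advances ~15.041 arcsec per second

print(alt_rate_arcsec_per_s(30, 45, -0.5))  # ~ +0.31 "/s - what the scope commands if it "thinks" it is pre-meridian
print(alt_rate_arcsec_per_s(30, 45, +0.5))  # ~ -0.31 "/s - what is actually needed just past the meridian
```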
  5. Yes, the only problem with this approach is that you assign a single weight per sub, and that is not the optimum solution. The level of noise consists of multiple components - one of them being shot noise, whose level depends on signal. Imagine the following scenario: one frame is lower in signal by 10% due to poor transparency, and another frame has a bit more light pollution. Once you equalize the frames, you end up with the same amount of noise in the background where there is no signal (only LP noise and read / thermal noise), but because the signal was lower by 10% and you had to scale it up to equalize it, you scaled the shot noise as well (and the LP noise and read noise - but since LP was higher in the first image, these turn out equal where there is no signal). Now you have a situation where the background weights should be 1/2 : 1/2 because the level of background noise is equal, but in the signal area the weights are different - something like 1/2.1 : 1/1.905 (or whatever the proper ratio is to get a total of 1). In the general case you want to assign weights based on the signal level in a particular area, and also in patches of the image (because LP can be a gradient and contribute different levels of noise to different parts of the image) - and this is not really something that is easy to do. I've developed signal-based weights estimation and it works really well. You can assign "bins" - like 16 or 24 different signal level bins - and all the pixels mapped to each bin get their own weight and participate in their own "stack zone".
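A minimal sketch of that idea (my interpretation - the bin boundaries, names and the inverse-variance rule are assumptions, not the author's actual code):

```python
import numpy as np

def binned_weights(subs, n_bins=16):
    """subs: list of calibrated, intensity-equalized, aligned 2D arrays.
    Returns per-sub weights for each of n_bins signal-level bins."""
    reference = np.median(np.stack(subs), axis=0)             # rough signal estimate
    edges = np.quantile(reference, np.linspace(0, 1, n_bins + 1))
    bin_idx = np.clip(np.digitize(reference, edges) - 1, 0, n_bins - 1)
    weights = np.zeros((len(subs), n_bins))
    for i, sub in enumerate(subs):
        for b in range(n_bins):
            noise = np.std((sub - reference)[bin_idx == b])   # noise at this signal level
            weights[i, b] = 1.0 / noise**2                    # inverse-variance weight
    return weights / weights.sum(axis=0, keepdims=True)       # weights per bin sum to 1
```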
  6. Yes, you can, if you know the quality of each sub in comparison to all other subs. Transparency changes from night to night (and even during the course of one night), and although you know the number of subs in each separate stack and their relative weights within one night (for example 1/3, 1/3, 1/3 - because they are of the same quality), you have no way of knowing how the quality of subs on one night compares to subs on a different night. Weighting works by comparing all the subs in a stack and assigning weight based on relative quality within that stack, so if you want proper weights with respect to all of the data - put everything in one stack. There is also the matter of the other algorithms we use, like sigma clipping and such - read the other posts to get an idea of how it works, but in principle, the more subs you have, the better the statistics of the data and the more precision in statistical methods like sigma clip.
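A small sketch (assumed scheme, mine) of the "one basket" point - a global inverse-variance weight automatically ranks a poor-transparency night against a good one, which separate per-night stacks cannot do:

```python
import numpy as np

def stack_weighted(subs):
    """subs: list of aligned 2D arrays from all nights. Inverse-variance weighted mean."""
    noise = np.array([np.std(s) for s in subs])  # crude per-sub noise estimate
    w = 1.0 / noise**2
    w /= w.sum()                                 # weights now reflect ALL subs, all nights
    return np.tensordot(w, np.stack(subs), axes=1), w
```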
  7. Well, at least we agree on the minimum accuracy needed, that being 1/10 of an arc second. But from what I can tell, you have a very high gear ratio after that. I'm not sure if you were recommended 9000:1 because of motor torque - it needs a lot of reduction for a small motor to move such a large scope. But let's run the precision calculation again with this new info. So the motor has about 10:1 gear reduction (and it had better be smooth, since the encoder is pre-reduction from what I understand in your quote). 500 x 4 x 10 x 60 x 60 x 2.5 = 180,000,000 (and that is quite a lot) - that is 555.555 ticks per arc second (if I'm not mistaken). As far as precision goes that is exceptional, but the more precision you add, the lower your slew rate. Do you have any idea what the max RPM is on this motor? Found it - it says here: http://siderealtechnology.com/MotorSpecs.html "Maximum RPM with SiTech ServoII controller (24 volt Supply Before Reduction): 10,000". That is 166.66 revolutions per second. One revolution is 1.6666 arcminutes, so that's 277.777 arcminutes per second, or 4.63 degrees per second. Ok, that is actually quite fine (if my calculation is correct).

As far as I can tell, the Alt drive calculations are quite ok. What I'm worried about next is the smoothness of the drive components (any little bump is going to be amplified significantly at those reduction ratios), and the lack of absolute encoders. We should discuss how much exact positional information impacts tracking and guiding rates in alt-az. This is also very important, as you can't expect 0.1-0.2" RMS tracking/guiding error if the drive itself makes a wrong move because it does not know exactly where it is pointing.
  8. Could you go into a bit more detail regarding the precision of the motors? 9000:1 is the total reduction of motor spin, right? Let's say you need to move through 90 degrees in altitude, and you want at least 0.1" precision in altitude position (to be able to guide properly - possibly you will want more precision). This gives you 100 revs per degree, or 1.6666 revs per arc minute, or 0.027777 motor revolutions per arc second, so we are looking at roughly 0.0028 revolutions of the motor per 0.1" step - call it 0.002 to leave some margin. That is something like 0.72 degree precision on the motor shaft, or 1/500 accuracy. I guess that should be doable with a 10 bit encoder on the shaft if we are talking about servo motors.
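A quick sketch of that arithmetic (following the post's reading of the numbers, i.e. 9000 motor revolutions spread over the 90 degree altitude travel):

```python
revs_total = 9000                         # total motor revs over the travel (post's reading)
travel_deg = 90                           # altitude travel
revs_per_deg = revs_total / travel_deg    # 100 revs per degree
revs_per_arcsec = revs_per_deg / 3600     # ~0.02778 revs per arc second
revs_per_step = revs_per_arcsec * 0.1     # 0.1" step -> ~0.00278 revs
margin_step = 0.002                       # rounded down for margin, as in the post
print(360 * margin_step)                  # ~0.72 degrees of precision on the motor shaft
print(1 / margin_step)                    # 1/500 accuracy on the shaft
```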
  9. I think this is a very nice setup, and I would really love the idea of a "half slit" integrated with it, or a "quarter slit" at least - that way one can still have 3/4 of the field usable for multiple star spectra at once, plate solving and such, and use the slit to get a better spectrum, or at least one that is easier to calibrate and remove background glow from. Out of interest, 40mm + 25mm reduces things by about 40/25 = x1.6 - am I right about that? I was thinking about a similar setup for EEVA - a 32mm EP + something like a 10-12mm cheap Chinese "megapixel" lens for CCTV (actually a bit larger than the standard 1/3" ones - I've found a 1/1.8" one for the reasonable price of $25 on AliExpress).
  10. Having thought about this, I'm worried. If you are going to go for something like 0.79"/px, and aim to properly sample at this resolution, you will be wanting something like 0.1-0.2" RMS error in tracking/guiding. I'm not so much worried about the drive (although that is also a major concern); I'm worried about the smoothness of the Alt-Az mechanism. It needs to be extremely rigid, yet so smooth in motion that it does not have even the slightest stiction / jerk anywhere along the arc of motion - all of that holding something like 200kg+. I believe this calls for exceptional machining precision and knowledge of materials. You would not want your mechanics seizing due to temperature change or becoming jerky, and there is also the potential for slack forming in hotter conditions - at these scales, with such large parts, it can easily happen. Things can even get out of shape if the load is not spread evenly.

Another thing to consider is that you are going to need custom software written for this. No guiding software, as far as I'm aware, guides in alt-az mode. The software needs to know the exact pointing of the scope to properly calculate the needed shift for a given guide command. The same goes for tracking software - it needs to know precisely where the scope is pointing at any given time. That means either full precision absolute encoders on both axes ($$$) or some sort of split configuration with calibration - meaning lower resolution encoders on both axes plus lower resolution encoders on the drive shafts. With a RA/DEC system it is fairly easy to determine the tracking rate and the needed "resolution" of the motor (provided you are using steppers) to keep things within certain limits. For Alt-Az this is not such an easy calculation, as the rotation rate changes with respect to where the scope is pointing. You will also be giving up the best position in the sky for imaging - near the zenith - as alt-az has trouble tracking properly in this region of the sky. Just some things to consider if high resolution imaging is one of your goals.
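For illustration, a small sketch (mine, not from the post) of the zenith problem using the standard field rotation rate formula - the rate grows as 1/cos(alt), and the required azimuth drive rate blows up the same way:

```python
import math

SIDEREAL = 15.041  # arcsec/s

def field_rotation_rate(lat_deg, az_deg, alt_deg):
    """Field rotation rate in arcsec/s for an alt-az mount (standard formula)."""
    lat, az, alt = map(math.radians, (lat_deg, az_deg, alt_deg))
    return SIDEREAL * math.cos(lat) * math.cos(az) / math.cos(alt)

for alt in (45, 70, 85, 89):
    print(alt, field_rotation_rate(45, 10, alt))
# At lat 45, az 10 deg: ~15 "/s at alt 45, rising to ~600 "/s at alt 89 -
# hence the imaging "dead zone" near the zenith.
```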
  11. Yes, if it produces a single image in the end, then you are doing it right - groups in DSS are "calibration" groups rather than separate stacks.
  12. What did you use to stack your data? I presume PI? A noticeable improvement when you have same-sized batches usually comes from a difference in conditions on one particular night compared to the others. Let's say that a single night out of three had poor transparency. Subs from that night will differ somewhat between each other, but their weights will be similar - close to 1/7 each. The same goes for the other nights. When you combine the stacks from all three nights, you end up with weights close to 1/21 for each sub. If one night was poorer than the others, this is not optimal, since there can be a significant difference in quality between subs from different nights; if you instead put them all in one stack, each will be given an appropriate weight, and you can end up with 1/18 for good subs and something like 1/25 for poor subs - poor subs contribute less to the final result this way, and good subs more. When stacking the per-night stacks, because subs within one night are close in quality to each other, each ends up contributing about the same to the final result - and that is not what you want if you have subs of different quality.

That is part of the explanation. The other part relates to the use of sigma clip stacking: the more subs you have, the better sigma clip works. Let's say sigma clip "decides" to reject one or two pixels. In a stack of 7 subs this leaves you with 5 subs stacked - a 40% difference! In a stack of 21 subs you are left with 19 subs - maybe even 18 if sigma clip rejects three values - but this time it is only a ~16% "loss".

It really boils down to your stacking algorithm. If you have an equal number of subs per group - like in your case 3x7 - and you don't use any advanced stacking method, just a simple average, the result will be the same. On the other hand, if you have mismatched groups - like 8 subs on the first night, 7 on the second, 6 on the third - this straight average will give worse results than the advanced methods when stacking stacks. In general, the best approach is to stack the individual subs.
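For reference, a minimal sigma-clip stack sketch (illustrative only - not DSS's or PI's actual implementation): reject pixels more than k sigma from the per-pixel median, then average what survives; the more subs in the stack, the less each rejection costs.

```python
import numpy as np

def sigma_clip_stack(subs, k=2.5):
    """subs: aligned 2D arrays of equal shape. Returns the clipped mean image."""
    cube = np.stack(subs).astype(np.float64)   # shape (N, H, W)
    med = np.median(cube, axis=0)
    std = np.std(cube, axis=0)
    mask = np.abs(cube - med) <= k * std       # True = keep this pixel value
    return np.sum(cube * mask, axis=0) / np.maximum(mask.sum(axis=0), 1)
```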
  13. Not sure exactly how DSS works. I think the groups are calibration groups and it produces a single image at the end? If so, then continue using that approach. If you get three separate images as a result (provided that you used 3 groups), then use one group (though I'm not sure about calibration - can you add different calibration files to one group?)
  14. Not the best way to do it. Ideally you want all your subs stacked into a single stack. You can create a "mini stack" out of already stacked data, but chances are it's not going to be optimally stacked, especially if you captured a different number of subs on different evenings. Imagine you have 3 subs on the first evening and 5 subs on the second - you create a stack for each night, with weights of roughly 1/3, 1/3, 1/3 in the first and 1/5 x5 in the second. Now you combine those two, and you end up with 1/6 for each first-night sub and 1/10 for each second-night sub (each original weight divided by two) - so subs from the first night are weighted more in the result than subs from the second evening, where in reality you want the weights to be close to 8x 1/8. PI has weighting as an option, and it works better if you put all your subs "in the same basket" rather than create separate averages and then weigh those against each other. If for some reason you don't have the original subs, only the final stacks (still linear, without any processing), then yes - do it like that: create a "mini stack" out of the 2 or 3 stacks from each night. The result might not be optimal, but it will still improve things.
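A quick check of how much this costs (my numbers, using the example above and assuming all 8 subs have equal noise, sigma = 1):

```python
import math

flat = [1/8] * 8                                    # ideal: all subs weighted equally
nested = [1/6] * 3 + [1/10] * 5                     # the stack-of-stacks weights above
noise = lambda w: math.sqrt(sum(x * x for x in w))  # noise of a weighted average
print(noise(flat), noise(nested))                   # 0.3536 vs 0.3651, ~3% worse SNR
```

A few percent is small here, but the penalty grows with the imbalance between nights (with 1 sub vs 7 subs the nested weights give ~0.53 vs the ideal ~0.35).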
  15. I'm sort of struggling to see the justification for such a project on the science side. I understand the appeal of such a large aperture for visual, so that is a legitimate requirement, but on the science side, especially if we are looking at DSO imaging, I would look at other options. I don't know what the expected budget for this project is, but such a large aperture is unlikely to give you a significant advantage over a smaller aperture in terms of resolution unless you construct a very precise tracking platform. It is a large telescope and it needs to track very well to have an edge over, for example, a 16" RC on a good mount. There is of course the huge light grasp of such an aperture, but let's do a comparison against an alternative that can be used for "quick" acquisition of DSO images - like small galaxies for photometric / astrometric measurements, for example. You expect to be tracking for at least a couple of minutes, so I presume you will be doing multiple exposures and stacking anyway, and not working with fast transient phenomena that would benefit from short exposures and "concentrated" light gathering.

Here is a quick calculation: a 16" RC with x0.75 reducer / field flattener will provide you with a very large corrected field at ~2400mm FL. It costs about 6K? Put that on a Mesu 200 mount (another 5-6K), attach a suitable camera / filter wheel / whatever you like, and repeat 4 times. That is about 50K of investment, and you will have the light-gathering area of 800mm of aperture in total, 2400mm FL and very good multi-hour tracking / guiding. I'm not sure there will be any significant resolution advantage of 32" over 16" aperture, given even very good seeing and tracking (there will be some, but not sure exactly how much - we just had a discussion on seeing impact vs aperture size in another thread, and concluded that the topic is out of reach for our level of understanding as is). The best you can hope to achieve, in my estimate, is about a 0.75"/px practical sampling rate (FWHM of about 1.2").

If you can get acceptable tracking and field correction at F/3 with some CC in this custom solution for less money, then yes, it's worth it. It's worthwhile anyway if you like the challenge of custom making and see it as an open source project to be repeated by others, and of course to be used as an awesome visual scope. As for me, and my usage of such a system - well, I would use it the way I use a smaller scope, to do whatever comes to mind, with the addition of crazy imaging speed - or rather, large SNR for a given imaging time. I do have a couple of ideas for processing algorithms that require higher SNR than usually achievable, and would like the opportunity to test those, so such a scope would be good for that purpose.
  16. Btw, this is something that Christian Buil also noted in his report on the feasibility of CMOS sensors for spectrographic purposes (he shows a screenshot of it in the report). He ascribes it to linearity error at small signals, but I in fact suspect it is due to the bias not being proper. Source: http://www.astrosurf.com/buil/CMOSvsCCD/index.html
  17. I forgot to expand on internal vs external clock. I've read somewhere that for exposures shorter than 1 second, either the driver or the camera firmware controls the exposure length, and above 1s it is up to the application using the ASCOM driver to start/stop the exposure. It might be that a different bias is applied in these two modes, and using a sub-1s bias with over-1s exposures leads to the problems described above.
  18. I've found two types of bias issues with CMOS sensors - both would prevent dark scaling as it's usually done. However, if you want to do dark scaling, I think there is a way to do it with the ASI1600, and we can devise a simple procedure to test this. The first problem is the one @Adam J described - internal calibration and a difference in bias between power cycles. The ASI1600 does not suffer from this problem; it has another quirk that makes regular dark scaling problematic. It may have to do with the fact that there are sort of two regimes of operation - "internal clock" and "external clock" - at least that is what I've read, and it might not be related to this issue at all. When I did my measurements of the ASI1600, I noticed that a bias sub has a higher average pixel value than a dark sub of relatively short exposure - like 20-30s. This simply cannot happen if the bias is proper.

If you take a set of bias files and later another set of bias files, stack each and subtract the stacks, you will get what you expect - average 0 and noise (stddev) as expected. The same will happen with darks of the same exposure (and same temperature, of course). Where you can see the problem is if you try to match two different sets of darks. Take bias subs for a master bias, then a set of shorter darks and a set of longer darks - for simplicity, 30s and 1 minute. Take the short darks stack, subtract the bias and multiply by 2 (or whichever ratio of exposure lengths you chose); take the long darks stack and remove the bias; subtract the two, and you should get 0 average + the expected noise. If this works, then you can scale darks - but I suspect it will not, at least it did not when I did my measurements. Things might have changed in the meantime with drivers or whatever (there was a change in how offset is applied - fixed / not fixed / what the default is), so it's best to run the above test to see if you can effectively scale darks.

There is another way of doing dark scaling that does not involve shooting actual bias subs, but rather extracting the common bias from sets of darks. To test this, you would need three sets of darks - let's say 1m, 2m and 3m:
2m - 1m = X (1 minute's worth of dark current only, since bias cancels out)
1m - X = bias
Now test whether 3m - bias = 3*X; if it holds, you can scale your darks using just two different exposures and some math (2m and 1m are enough to extract the "unity" dark current and the common bias).
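The test described above, as a sketch (function and variable names are mine): check whether the long dark minus bias equals the short dark minus bias scaled by the exposure ratio.

```python
import numpy as np

def master(frames):
    """Average a list of 2D frames into a master."""
    return np.mean(np.stack(frames).astype(np.float64), axis=0)

def dark_scaling_residual(bias_subs, short_subs, long_subs, ratio):
    """Mean residual should be ~0 (plus noise) if darks scale properly."""
    bias = master(bias_subs)
    short_dc = master(short_subs) - bias   # dark current of the short exposure
    long_dc = master(long_subs) - bias     # dark current of the long exposure
    residual = long_dc - ratio * short_dc
    return residual.mean(), residual.std()

# e.g. dark_scaling_residual(bias_set, darks_30s, darks_60s, ratio=2)
```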
  19. Not sure if it is worth the extra to you, but there is a difference between the two scopes - the focuser. The WO has a 2.5" R&P unit with 10:1 reduction, which I suspect will be a rather good focuser. The SW has a regular 2" Crayford unit, with micro focusing as well. Although the focuser alone can't account for such a price difference between the units (an aftermarket 2" Crayford and a 2.5" R&P differ by about 100 pounds or so), there is the price of the "label" as well. I'm not sure WO scopes are worth the premium, neither from first-hand experience nor from casual hearsay on the internet.
  20. What sort of science are you looking at with this scope?
  21. Well, I just did some calculations above. I agree that noise goes down as the square root of the number of subs stacked, but in general it is not how low you get your read noise, it's how that read noise compares to the rest. If it is not significant to start with (there are other noise sources high enough), then you won't make much difference; but if it is comparable to some other noise source, then you will make a difference. A much larger difference is obtained by dithering than by going with a large number of darks - but I still advocate a large number of darks because you won't be losing anything: no imaging time is lost, and you can shoot them during the daytime and over multiple days.
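A quick illustration (my numbers) of that significance point - halving read noise barely matters once another noise source dominates:

```python
import math

def total_noise(read, other):
    """Independent noise sources add in quadrature."""
    return math.sqrt(read**2 + other**2)

print(total_noise(5, 20), total_noise(2.5, 20))  # 20.6 -> 20.2, ~2% improvement
print(total_noise(5, 5),  total_noise(2.5, 5))   # 7.07 -> 5.59, ~21% improvement
```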
  22. Here is the simple reasoning behind the number of darks. You can either consider read noise alone, if dark current is low, or do the combined calculation. I'll do the combined calculation in two extreme cases: a) perfect dither (no subs have aligned pixels), and b) the perfect opposite - every pixel is stacked against the same pixels (here "pixel" means an X,Y position on the sensor rather than on the sky).

Let's take the Atik 460 as an example: read noise is 5e and dark current is 0.0004e/s/px at -10C, with a reasonable sub length of 15 minutes. Dark current in this case will be 0.36e per exposure - low enough not to impact flat calibration much. The associated noise will be 0.6e - again not that significant, but let's include it anyway. Total noise per sub will be 5.034e (so really not much different from read noise alone). But here is the important thing: stacking 20 darks lowers this value by sqrt(20), to 1.126e, and with each calibration you "inject" that 1.126e of noise back into a sub. This means that in case a) each sub will contain not 5.034e of noise (read+dark) but 5.158e - a slight increase, but not too terrible. In case b) you add 1.126e of noise to the final stack instead (because these values are constant per pixel from sub to sub, they don't add as random noise, but are "pulled out in front of the parentheses"). Imagine you did 4h worth of imaging - 16 subs of 15 minutes - so your read+dark noise in the final stack is reduced from 5.034e down to 1.2585e, but when we add the 1.126e to that, we end up with 1.6887e of noise. That is as if we stacked a bit shy of 9 subs, as far as read+dark noise is concerned.

Let's be a bit more aggressive with the number of darks and see what the difference is - take 100 instead. Now the master dark has 0.5034e of noise instead of 1.126e (less than half), and in case a) a single sub will have 5.059e instead of 5.034e - almost no increase this time; in case b) 1.2585e + 0.5034e of noise = 1.3554e, or as if we stacked 13.8 frames instead of 16 as far as read+dark noise is concerned. Much better. So yes - dither, and use lots of calibration subs. I use as many as 256 each of darks, flats and flat darks, because I use shorter exposure times.
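The same arithmetic, spelled out as a short script (values from the Atik 460 example above):

```python
import math

read, dark_rate, exp_s, n_subs = 5.0, 0.0004, 900, 16   # 15 min subs, 4 h total
dark_noise = math.sqrt(dark_rate * exp_s)               # 0.6e dark current shot noise
per_sub = math.sqrt(read**2 + dark_noise**2)            # 5.034e read+dark per sub

for n_darks in (20, 100):
    master = per_sub / math.sqrt(n_darks)               # noise injected by the master dark
    dithered = math.sqrt(per_sub**2 + master**2)        # case a) perfect dither, per sub
    stacked = math.sqrt((per_sub / math.sqrt(n_subs))**2 + master**2)  # case b) final stack
    effective = (per_sub / stacked)**2                  # equivalent clean-sub count
    print(n_darks, round(dithered, 3), round(stacked, 4), round(effective, 1))
# 20 darks  -> ~5.16 per sub (a), ~1.689 in stack (b), ~8.9 effective subs
# 100 darks -> ~5.06 per sub (a), ~1.356 in stack (b), ~13.8 effective subs
```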
  23. May I chip in? "Wrong" calibration can work in some instances, but not in all, and therefore I think we should always advocate proper calibration. I have nothing against using "wrong" calibration if it works for someone, provided they understand why it is working for them. The minute it stops working (with some particular data), if they don't know why, there will be trouble. Once one understands why it works in certain circumstances, it is fine to use it, accepting the approximation being made. Using bias as dark in calibration can work if you have low dark current, exposures short enough that the dark current does not build up to a significant level, and dark current that is uniform across the sensor. If it builds up too much, it will mess up your flat calibration. If it is not uniform, you will introduce a sort of "noise" that is predictable in nature (unwanted signal) that you could have removed with proper calibration. Yes, calibration increases random noise, but there are ways to deal with this, and often the benefit of removing predictable unwanted signal outweighs a bit more random noise. Take a lot of calibration frames, and dither, and you will minimize the impact of the noise added in the calibration step.
  24. Worth checking out is a "honeycomb" cast blank - it can save quite a bit of weight while still maintaining figure. It is pretty much the same as a regular blank, with the difference that the casting is done "over" a honeycomb pattern, so one side of the blank is flat and used for figuring, while the other is sort of "hollow", with a support structure that gives it enough rigidity to hold its shape.
  25. I've heard of this, and I tend to agree that aperture size combined with the same seeing conditions will give different results on the PSF. It might be the case that a larger aperture fares worse under most circumstances - I just have no clue. We can maybe even argue that this effect is not linearly related to aperture size, and that there is "the worst" aperture for a given seeing. I base this on the following reasoning: if we look at a distorted wavefront, it will have different "curves" along a segment, and what matters is how much of that segment falls on the aperture. For a small aperture we will in principle get "tilt" - the star will "jump" around its position. A medium-sized aperture will see the largest distortion compared to the size of the segment it covers, while the largest aperture will in effect see the distortion as if it were on one piece of the aperture. If we have a very, very large aperture, the wavefront distortions will resemble the rough surface of a mirror more than a large-scale distortion - the ripples will be small compared to the segment of wavefront being observed. This of course applies to an "instant" in time - a "frozen" wavefront distortion. When we let it accumulate, on a small scope we will have a jumping star that "smears" over a long exposure, on an average aperture we will have a smeared star averaged out, while on a very large aperture we will have "light scatter" averaged out - quite possibly the result will be the same, or maybe the large aperture wins in the case of a long exposure (given, of course, that tracking is perfect). Like I said - I have no idea. What I do know is that large "professional" telescopes (1m and above) produce data that is much sharper than amateur setups. Whether this is a consequence of their location and seeing conditions only, or related to what I've described above, I can't really tell, as we don't have a "control" - a large scope at a less than perfect location under average seeing to give us a comparison.