Everything posted by vlaiv

  1. And that is unfortunately an incomplete explanation. F/ratio is not a measure of the speed of the system, as it omits one more component that can change: pixel size. (There is a small worked example after this list.)
  2. Try switching to full steps after some time - or to a lower number of usteps as you speed things up during a slew. That should help, shouldn't it?

     If you have issues with microcontroller speed and the ability to turn outputs on and off - do consider the RPi Pico. I don't have much experience with either Arduino or Pico, but it took me about half an hour to get everything connected on a breadboard (I'm using a DRV8825 with a NEMA 17 and a Pi Pico) and to write my first piece of code to move the stepper (a minimal sketch of that sort follows after this list). Most Arduinos have a 16 MHz clock, while the Pico runs at 133 MHz - so there is a considerable speed increase just from that.

     Btw, I'm building my thing with 3D printed parts, and in the process I built a very high reduction gearbox based on a split ring compound planetary gear mechanism. The first iteration of the design was too big for a star tracker and probably overkill, so I instead opted for a pure belt reduction system - 3 stages, 5:1, 5:1 and 10:1 - which gives 250:1, or 0.81"/ustep resolution. More than enough for a low cost 3D printed star tracker. I still plan to explore how well that planetary thing behaves, and I'll probably design a 3D printed EQ5 class mount around it at some point (it won't be completely 3D printed - it will use some aluminium tubing for the frame and axes).
  3. What type of belts are you planning on using? Regular GT2 belts are probably going to be a bit too weak for larger loads close to the "exit" stage. If you use something more robust, then you'll be looking at a rather large pulley. Say you use GT5 - that is 5 mm per tooth. With 160 teeth you get 5 x 160 = 800 mm of circumference - that is a 10" wheel just there, and you'll be needing two of them. I know size is not an issue for you, but I'm just wondering if that is a bit too much?
  4. 0.5"/ustep is really rather coarse - it is comparable to AZGti mount and about double that of EQ5 and quarter of HEQ5/EQ6 mounts EQ5 has 0.287642"/ustep and HEQ5/EQ6 have 0.143617"/ustep All of those mounts have ~700:1 reduction (704:1 and 705:1) and difference is that later two use 64 usteps rather than 32. With regular 200 degree motor and 32 usteps to get ~0.14"/ustep one would need ~1400:1 reduction. Now, I misunderstood your aims, I thought that you were aiming for <0.1" guide RMS, but you were talking about <0.1" per ustep, right? Again, if you want that sort of tracking resolution - you need more than 2000:1 with regular 200 degree motors and 32 usteps. Even with those sort of reductions - you should be able to slew your mount with reasonable speed if that is your concern. Regular nema 17 steppers can achieve almost a thousand RPM, but let's say we limit it to 300 RPM (most will be able to do that). 300 RPM is 50 revolutions per second or 20ms per revolution. 20000us / 6400usteps = 3.125us/ustep You should pulse every 3.125us - maybe that is too high for Arduino based board - but why not use RPI Pico? It has much higher clock and dual core, so it is much more powerful. I can easily get 20us resolution with micropython on it (I'm developing small 3d printed star tracker) - which is more than enough for sidereal. In C - I'm sure I could go sub microsecond in resolution with ease, but like you said - there is no need for micro stepping - once sufficient speed is reached - one can switch to quarter, half or even full steps for fast slews. With 300 RPM, slew speed will be in above configuration, if we assume 0.1"/ustep tracking resolution - ~8.9 degrees per second?
  5. Ok, so here is an idea - how you might be able to get sufficient resolution (I'm not claiming you'll hit 0.1" guide RMS) and minimize PE and backlash at the same time. You'll need a multi stage belt reduction - like 3 stages of 5:1 or similar. Get a stepper motor with mechanical reduction in, say, the 10:1 range and get a 16 bit absolute encoder. The stepper has 200 steps per turn, and with 32 microsteps that is 6400 steps per turn. With the 10:1 reduction it is 64000 steps per turn. A 16 bit absolute encoder will deal with PE and backlash to a precision of 65536 parts in a circle (so you can even go with 256 microsteps and have an accurate position for every 7-8 steps as measured by the encoder). Total reduction with 3 belt stages of 5:1 will be 5 x 5 x 5 x 10 = 1250:1 (so you might even consider using higher belt reduction ratios).

     If the first pulley in the belt system has 18 teeth, that is 1/18th of a turn for the fastest component of periodic error in the belt system. With 1250:1 reduction a single "precise" step (or measurement of the encoder) will be 0.162"/step, so a whole turn of the gearbox output / encoder will be ~10617", and 1/18th of that is ~590". Sidereal is 15.041"/s, so that is a 39 s period. Still too fast to be honest, but better than say a belt modded HEQ5, which has an 11 s period per belt tooth (if I remember correctly) - and that is a nightmare to guide out if it starts to "behave".

     Anyway - consider using encoders on the motor / gearbox side for the first few stages, to avoid issues of fast PE and backlash. It looks like backlash with those mechanical reduction systems can be significant. I've found quotes of "less than 1 degree" of backlash - and that is huge. That is 6400/360 = ~18 usteps of backlash, and if you have 0.333"/step that is 5.92 arc seconds of backlash. At sidereal that takes ~0.4 s to take up (and at lower guide speeds, even more). There is a small backlash calculation sketch after this list.
  6. Have you done the calculations on that reduction? I'm not sure you'll be able to achieve such fine guide resolution with such coarse stepper resolution. With 32 microsteps, the ratio needed to achieve 1"/step is 202.5:1 - you are looking to boost that by about x3 (with 625:1), which will only bring the step size down to about 0.33"/step. I'm not sure how this impacts RA precision, since there is mass on the mount and inertia does its thing to smooth out steps, but I'm sure you won't get <0.1" total RMS with this approach because of DEC. In theory DEC should not move if PA is perfect, but it does move for a host of reasons - PA is not 100% precise, there are influences like wind and mechanical things, and after all, seeing will cause apparent DEC motion at some point. All of this means that guiding will from time to time issue a correction in DEC - and it can only move in 0.33" steps. Any change in position in DEC will instantly raise DEC RMS error above 0.1", and total RMS can only be larger than that. If you want any shot at guiding in the 0.2-0.3" range, you'll need step resolution below 0.1"/step - which means reduction of at least 2000:1. Take a look at mounts that are capable of being guided in the 0.2-0.3" RMS range - like the Mesu 200. It has around 3000:1 reduction if I'm not mistaken.

     The second thing that concerns me about your design and that low guiding target is the high ratio mechanical gearing right next to the stepper motor. With mechanical gearing you need to trade off some things. It will introduce some level of backlash and it will have some amount of periodic error. Depending on where that gearing "sits" in the complete reduction train, you can trade off backlash for smooth PE and vice versa. If the gearing is close to the motor, you reduce the amount of backlash - it is the same mechanical backlash, but it won't be "amplified" by stages before it (as it would be if a belt stage came before the mechanical reduction stage) - but this also speeds up the periodic error, and fast periodic error is tough to guide out. Putting mechanical gearing "further down the line" makes PE much slower and easier to guide out, but of course increases the amount of backlash for the same gear set.

     Have you considered friction gearing instead? Mounts that guide well with small errors use that.
  7. What is your designed total reduction ratio in RA and DEC, and what stepper resolution ("/step) are you aiming for?
  8. Depends what you mean by "in an unexplained way". The currently most widely accepted cosmological model - Lambda CDM - explains the expansion of the universe quite nicely, and not as inverse gravity, but within the framework of general relativity as vacuum energy - which has negative pressure and thus causes space to stretch when it is devoid of matter and energy (here energy means any form of energy above the vacuum zero point). We know from quantum mechanics that the vacuum is not empty and that it must indeed contain some amount of zero point energy in the form of random quantum fluctuations (there is, however, a discrepancy between calculations of said energy by different means).

     Dark matter is not the cause of the accelerated expansion of the universe - quite the opposite: it is one form of matter that, together with luminous matter, counteracts the above negative pressure. Dark energy is the name for the above vacuum energy that is causing the accelerated expansion of the universe.

     The Lambda CDM model is one of the reasons why we believe dark matter is present. It fits the observations quite well - except that the amount of matter we can detect via EM radiation, or stuff that shines (which is mostly stars and the large black holes in the centers of galaxies that we observe via release of energy), is only about 1/5 of the mass needed to fit observational data on the structure of the universe and its rate of expansion. But this is not the only "proof" of dark matter. There are other phenomena that are easily explained by additional matter which does not "shine" - like rotation curves of galaxies and gravitational lensing. You can see a nice list of observations that point towards dark matter here: https://en.wikipedia.org/wiki/Dark_matter#Observational_evidence
  9. We do know how to manipulate gravity. Mass/energy density is responsible for the curvature of spacetime. We can concentrate a large amount of energy in a tiny space and change the local spacetime curvature there. I'm not sure that the existence of "anti gravity" engines would be proof that "we finally know how it generates force". It's a bit like saying we don't understand energy because we have not made a perpetual motion machine. Nowhere in the universe have we observed an "inverse gravity" phenomenon - and there is no reason to believe it is possible - much like creating energy out of nothing.
  10. Yes, that is the main problem - but neither of the two is capable of extracting a faint galaxy from background noise if SNR is below a certain threshold. You can try it yourself - just take one of your images that contains a faint background galaxy that still shows in the stack. Take a single sub and see if you can spot that faint galaxy in it. If you can't - that is a prime candidate to see whether either of the two AI tools can pull it out of the noise - just run them on that single sub and look to see if that galaxy appears.

      There is no feasible way to do this. Just think about it: if your SNR is below a certain threshold - say for the sake of argument we set the threshold SNR to 1 - this means that the signal is below the average value of the noise. If we have some sort of Gaussian noise, about 84% of values will lie below the +1 sigma level (the "noise value" is just the standard deviation; ~68% of values fall within +/- one standard deviation, so 68% plus half of the remaining 32% lie below +1 standard deviation) - but so will the signal. Given a pixel value, how can you tell whether it is noise or signal if the majority of both signal and noise values behave the same - have the same value? (There is a small simulation of this after this list.)
  11. Because you can't calculate the mass of anything - you can only measure it somehow, directly or indirectly. How do you propose one measures the mass of such a sphere? How do you propose one measures the mass of everything in such a sphere that is not dark matter? The last part is trivial, as outlined above - we subtract two numbers.
  12. It might not be obvious from my previous answer why this is so, so I'll try to explain it more directly. Imagine we have 4 types of pixels in the image after the AI has examined it:

      1. Signal, identified as signal by the AI
      2. Signal, but misidentified and labeled as noise by the AI
      3. Noise, properly identified as noise by the AI
      4. Noise, but identified as signal by the AI

      You then propose that pixels of types 1 and 4 are left as is - or rather "copied" to new subs from existing subs - while pixels of types 2 and 3 are replaced by pure noise (at background level) in the artificial subs, right? Let's examine what happens to each type of pixel in the final stack (original + artificial subs). By type:

      1. Nothing happens here - we have the same SNR for signal pixels, as using copies of subs won't affect final SNR
      2. Here we have a drop in SNR, as we mix subs containing signal with subs of pure noise
      3. Here we have an "improvement" in the noise - but SNR is still 0, since there is no signal in these pixels. The result is much like using an averaging filter on these pixels, or even replacing all of them with a constant value at what we determine to be the background level
      4. Nothing happens here - same as type 1 - except these pixels remain noise with SNR 0, and we don't even make them constant

      As you can see, out of the 4 types of pixels, the only one to see anything we might call improvement is pure background noise - but we don't need stacking for that. We can simply identify those pixels and set them to the same value and get a very smooth background - but we can do that with denoising anyway, and the results don't look particularly good.
  13. No, it would not work. If you can identify "signal" - then you don't need to stack, you are done. You create an image consisting of pure signal and you have an image with infinite SNR. The problem is that you can't completely identify signal, or even "pixels containing signal". For pixels with high SNR it is fairly easy to say that they contain signal - their value will be obviously larger than the average background value. With other pixels you simply won't know whether they contain just noise or some useful signal as well. The thing is, every pixel in the image will contain some signal, but probably most of it is unwanted signal in the form of light pollution. To detect the true signal that is superimposed on that background sky glow, you need the noise to be low enough that the variation between the two is mostly signal and not mostly noise. In any case - if SNR is low enough, there is no AI that will be able to distinguish the signal, as it won't itself be distinguishable from the noise.
  14. Even a small amount of light can mess up your calibration, so it's a good idea to cover the viewfinder. There is a special rubber cap on the strap of your camera that serves this purpose.
  15. Redo the stacking, but without darks. Your flat calibration has failed as well for some reason. Are you using the viewfinder cover on your DSLR when shooting long exposures?
  16. Ah, ok. Here is what you need to understand: there is no "fixing" the effects of bad seeing. The best you can hope for is to sharpen back some of the softness and blurriness produced by poor seeing or poor mount performance (or the two combined). How much you can sharpen depends on how good your signal is. In theory, one should be able to sharpen all the way to the telescope's aperture limit - given perfect signal and no noise. The problem is that we have noise (and often quite a lot of it), and when we sharpen, we only make that noise worse.

      Binning does not directly address poor seeing as such. With poor seeing you get a situation where you have too fine sampling (too many pixels) for the level of detail that is available, because seeing has blurred things. This is called over sampling, and over sampling is bad because you lose SNR and gain nothing, as there is no extra detail. Binning addresses this SNR loss by making the sampling rate more adequate for the level of detail that you've captured.

      The amount of blur in the image is directly related to the star profile in the image. It is in fact the PSF (point spread function) of the blur, as a star is a point like object, and the star profile shows how light from a single point spreads into a "blob" of light. This is why we can use the FWHM of the star profile to assess how much blur there is in the image, and why we have sampling rate = FWHM / 1.6 (there is some complex math behind that expression). Even when you sample at that rate, you won't get a perfectly sharp image. There is still room for sharpening, and in order for that sharpening to work you need good SNR. That is why I say I'd rather bin x4 than x3 - even if it seems like under sampling in the above case - the better SNR will allow you to sharpen up the image more in processing.
  17. Binning x3 won't ruin your color data if you bin at a later stage. If you bin at capture time, then it depends on how binning is implemented in firmware - and yes, it can create mono data and lose color, so it is best not to do it at capture time. When imaging, don't select bin x3. Image like you normally would - bin x1, or "normal". Then, after you calibrate your data, debayer it, stack your subs and get the final output - before you start processing - perform a x3 software bin on that data (a small sketch of such software binning follows after this list).

      Depending on the software that you use, binning will be called different things. In PixInsight, for example, it is called IntegerResample - and you select the average option. If your software does not have an option to bin the resulting data, and you don't want to pay for software that does, use ImageJ. It is Java based (will work with any OS), open source software for scientific image manipulation. It can load FITS / TIFF files and perform binning (Image / Transform / Bin - again, choose the average method and x3 as the bin factor).
  18. With CMOS sensors like the one in the ASI294MC you don't have to decide right away. In fact, it is better to capture at full resolution even if you plan on binning later. In any case, you can decide on the bin factor after you have gathered the data. The simplest way is to integrate the image and then bin at the linear stage, before you start processing. Do note that you will gain almost nothing if you bin x2 data from an OSC that has been debayered with interpolation methods (rather than super pixel), so it is best to bin x3 or higher.

      A good sampling rate can be inferred from the data itself. Measure the average FWHM in your resulting linear stack and aim for a value close to FWHM / 1.6. My personal preference is to sample slightly coarser rather than finer than this value if you can't hit it exactly. Here is an example: if you bin x3 you will have an effective 0.47"/px * 3 = 1.41"/px, and if you bin x4 you will have an effective 1.88"/px. Now imagine that your average FWHM is around 2.6" - this means that you are best sampled at around 2.6" / 1.6 = 1.625"/px. Which of the two would you choose? Well, if you have a large amount of data and good SNR and you can sharpen your data extensively, then it makes sense to go for bin x3 and 1.41"/px - but in all other cases I'd say go for bin x4 and 1.88"/px (there is a small helper sketch for this after the list).
  19. Still having trouble understanding, I'm afraid. I've been thinking about the same / a similar mechanism, and here is what is troubling me: in the above diagram, the ends of the shaft that are used for adjustment don't follow a circle centered on the center of the disk. There should be two constrained degrees of freedom for that cylinder, not just one. You have a pivot point to let it rotate, but I think that you should also have a "vertical" component - the distance from the center of the circle.

      Maybe I should draw another similar diagram - but this time, just to make what I'm saying obvious, I'll reverse what is spinning. In this case, let the adjustment shaft / knobs be stationary and let the central disk rotate instead (opposite from your case, where the rectangular body rotates and the circular disc is stationary). The diagram would look like this:

      The above diagram is exaggerated to make a point. Imagine that the adjustment knobs are fixed to a threaded rod and there is a nut attached to the rotating disk. When we turn the knobs, since the threaded rod is stationary, it is the nut that will move left / right, and given that it is attached via an arm to the central disk, the disk will rotate. But we need both a pivot point and some sort of "telescopic" mechanism in that arm for the thing to work properly. How is the above different from your arrangement?
  20. Could you provide more detail on this? How does it operate? As far as I can tell, it's not the usual "push against central pin" configuration.
  21. You can practice on the Galilean moons first? They range from 1.8" down to 1" when Jupiter is closest to us, and the smallest apparent diameter of Europa when Jupiter is furthest (0.665") is about the size of Ceres when it is close (Ceres ranges from 0.34" to 0.854").
  22. 1) Yes
      2) Yes
      3) No - calibrate before debayering. Calibration works properly without debayering first if all files are "raw"
      4) That really depends on what software you want to use - most software already has debayering built in; DSS, Siril and PI all have debayering as part of their processing pipeline
      5) I prefer to use .fits for everything, as it is meant to be the standard for astronomy images, but tiff can be used as well. Just make sure you use 32 bit floating point for everything except capture, where 16 bit unsigned int is fine
  23. If you want to use high gain to reduce read noise for narrowband and still avoid issues with quantization, here is a neat trick that you can use with ZWO cameras (others might have something similar). Gain on ZWO cameras is measured in 0.1 dB units. This is handy if we know something about the dB scale - like the fact that it is calculated as 20 * log10(ratio). For a ratio of 2 we thus have 20 * log10(2) = ~6.02 dB = ~60.2 in units of 0.1 dB. So for every ~60.2 increase in gain (you can round it to 60) the e/ADU value halves. You can also calculate the gains for other integer ratios.

      These are the gain values one can use to minimize quantization error, as the e/ADU value will be the reciprocal of a whole number - say if your e/ADU is 1/3, or 0.3333, then 10 e / 0.3333 = 30 ADU and, inversely, 30 * 0.3333 = 10 e (you won't get exactly 1/3 e/ADU - but it will be close enough that quantization error shows only on very large signal values, and there we simply don't care as SNR is good enough as is). Using the above calculation, gain 210 is a good place to be to minimize quantization errors (there is a small calculation sketch after this list).
  24. I just examined the attached FITS, and I'd direct my attention to the field flattener. There is a minimal amount of tilt and I bet it is related to the flattener and not focuser / sensor tilt. How is the flattener attached to the rest of the system? Are there any spacers introduced to dial in the distance, and if so, what kind and where?
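
For post 1 above - a minimal worked example of why f/ratio alone does not determine imaging speed. The figure of merit used here (light collected per pixel per unit time, proportional to aperture area times the square of the pixel scale) is one common way to compare setups; the two example systems are made up for illustration.

    import math

    def speed(aperture_mm, focal_mm, pixel_um):
        # Light collected per pixel per unit time ~ aperture area * (pixel scale)^2
        pixel_scale = 206.265 * pixel_um / focal_mm     # arcsec per pixel
        area = math.pi * (aperture_mm / 2) ** 2         # collecting area in mm^2
        return area * pixel_scale ** 2

    # Two made-up systems, both 100 mm f/5, differing only in pixel size:
    print(speed(100, 500, 7.5) / speed(100, 500, 2.5))  # -> 9.0, i.e. 9x "faster" at the same f/ratio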
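
For post 2 above - a minimal MicroPython sketch of the kind of first "does it move" test I mean for driving a DRV8825 from a Pi Pico at sidereal rate. The pin numbers are assumptions about wiring, and the 250:1 reduction and 32-microstep setting are the example figures from that post; there is no acceleration or direction-change handling, just a blocking pulse loop.

    from machine import Pin
    import time

    STEP = Pin(2, Pin.OUT)      # DRV8825 STEP pin (assumed wiring)
    DIR = Pin(3, Pin.OUT)       # DRV8825 DIR pin (assumed wiring)

    USTEPS_PER_REV = 200 * 32   # 200 full steps, 32 microsteps
    REDUCTION = 250             # 5:1 * 5:1 * 10:1 belt stages
    ARCSEC_PER_USTEP = 1296000 / (USTEPS_PER_REV * REDUCTION)    # ~0.81 arcsec per ustep
    SIDEREAL = 15.0411          # arcsec per second
    PERIOD_US = int(1000000 * ARCSEC_PER_USTEP / SIDEREAL)       # ~53850 us between pulses

    DIR.value(1)                # tracking direction
    while True:                 # blocking loop - fine for a first test
        STEP.value(1)
        time.sleep_us(5)        # DRV8825 wants a >1.9 us high pulse
        STEP.value(0)
        time.sleep_us(PERIOD_US - 5)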
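
For post 4 above - a small calculation sketch for the microstep pulse interval and the resulting slew rate at a given motor speed. The 300 RPM, 32 microsteps and 0.1"/ustep figures come from that post; the rest is just arithmetic.

    USTEPS_PER_REV = 200 * 32   # 1.8 degree stepper, 32 microsteps

    def pulse_interval_us(rpm):
        # Time between microstep pulses at a given motor speed
        return 1e6 / (rpm / 60 * USTEPS_PER_REV)

    def slew_deg_per_sec(rpm, arcsec_per_ustep):
        # Sky motion when microstepping continuously at a given motor speed
        return rpm / 60 * USTEPS_PER_REV * arcsec_per_ustep / 3600

    print(pulse_interval_us(300))        # 31.25 us between pulses at 300 RPM
    print(slew_deg_per_sec(300, 0.1))    # ~0.89 deg/s at 0.1"/ustep resolution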
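
For post 5 above - a sketch of the backlash arithmetic from that post: gearbox backlash quoted in degrees, treated as referred to the motor shaft as the post does, converted to arc seconds on the sky and to the time needed to take it up at sidereal rate. The 1 degree and 0.333"/step values are the example numbers from the discussion.

    SIDEREAL = 15.0411          # arcsec per second
    USTEPS_PER_REV = 200 * 32

    def backlash_on_sky(backlash_deg, arcsec_per_ustep):
        # Backlash in degrees at the motor stage -> microsteps -> arcsec on the sky
        backlash_usteps = backlash_deg * USTEPS_PER_REV / 360
        return backlash_usteps * arcsec_per_ustep

    sky = backlash_on_sky(1.0, 0.333)    # ~5.9" on the sky for 1 degree of backlash
    print(sky, sky / SIDEREAL)           # and ~0.4 s to take up at sidereal rate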
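
For post 10 above - a small numpy simulation of why a pixel with SNR below 1 cannot be reliably labelled as signal or noise: the two value distributions overlap so much that even the best possible threshold barely beats a coin flip. The particular numbers (a signal of 0.5 sigma, a simple threshold classifier) are arbitrary illustration choices.

    import numpy as np

    rng = np.random.default_rng(0)
    sigma = 1.0
    signal = 0.5 * sigma                              # SNR = 0.5, below the threshold of 1

    background = rng.normal(0.0, sigma, 100000)       # pixels containing only noise
    with_signal = rng.normal(signal, sigma, 100000)   # pixels containing faint signal plus noise

    # Best possible single-pixel classifier: threshold halfway between the two means
    threshold = signal / 2
    accuracy = (np.mean(background < threshold) + np.mean(with_signal > threshold)) / 2
    print(accuracy)                                   # ~0.60 - barely better than guessing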
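
For post 17 above - a minimal numpy sketch of x3 software binning by averaging, the same operation as PixInsight's IntegerResample (average) or ImageJ's Image / Transform / Bin. Loading via astropy.io.fits and the file names are assumptions; any 2D float array works.

    import numpy as np
    from astropy.io import fits

    def bin_average(img, factor=3):
        # Average-bin a 2D image by an integer factor; edges are cropped to a multiple of factor
        h = (img.shape[0] // factor) * factor
        w = (img.shape[1] // factor) * factor
        return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

    data = fits.getdata("stacked_linear.fits").astype(np.float32)   # hypothetical file name
    fits.writeto("stacked_linear_bin3.fits", bin_average(data, 3), overwrite=True)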
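
For post 18 above - a tiny helper that applies the FWHM / 1.6 rule of thumb from that post to pick a bin factor, with the stated preference for erring on the coarser side. The 0.47"/px native scale and 2.6" FWHM are the example values from the post.

    def pick_bin(native_scale, fwhm_arcsec, max_bin=6):
        # Smallest integer bin factor whose resulting scale is not finer than FWHM / 1.6,
        # i.e. err on the coarser side when the target falls between two bin factors
        target = fwhm_arcsec / 1.6
        for b in range(1, max_bin + 1):
            if native_scale * b >= target:
                return b
        return max_bin

    print(pick_bin(0.47, 2.6))   # -> 4 (1.88"/px); bin x3 gives 1.41"/px, finer than the 1.625"/px target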
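
For post 23 above - a small sketch of the 0.1 dB gain arithmetic: given the e/ADU at gain 0 (read from your own camera's published gain chart; the 5.0 below is just a placeholder), it computes e/ADU at any gain setting and the gain settings where e/ADU lands on 1, 1/2, 1/3, 1/4, where quantization error is minimized.

    import math

    def e_per_adu(gain_setting, e_per_adu_at_gain0):
        # ZWO gain is in 0.1 dB steps and dB = 20*log10(ratio), so the ratio is 10**(gain/200)
        return e_per_adu_at_gain0 / 10 ** (gain_setting / 200)

    def gain_for(target_e_per_adu, e_per_adu_at_gain0):
        # Gain setting that gives the requested e/ADU (1, 1/2, 1/3, ...)
        return 200 * math.log10(e_per_adu_at_gain0 / target_e_per_adu)

    E0 = 5.0   # placeholder: e/ADU at gain 0 for your particular camera
    for n in (1, 2, 3, 4):
        print(n, round(gain_for(1 / n, E0)))   # gains where e/ADU lands on 1, 1/2, 1/3, 1/4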