Everything posted by vlaiv

  1. It works for any mirror - if you tilt it, the reflected rays change direction. In the image above, incident rays 1, 2 and 3 are all the same - they don't change - but depending on the tilt of the mirror, the reflected rays go different ways (they all obey the same rule: the angle of incidence equals the angle of reflection). The only difference between a concave and a flat mirror is that a concave mirror converges the rays - in the case of a telescope, to a single point - but the position of that point depends on the tilt of the primary mirror. When you change the tilt of the primary, the star's position in the image changes. Simple as that.
     Thing is - if you have one corner where stars are particularly defocused, this is due to primary mirror tilt, and the correct way to tilt the primary back is so that the star image moves towards the center. If you've ever collimated a reflector of any kind, you might have noticed that adjusting the tilt of any component causes the star image to shift at the eyepiece (or sensor). Same here - the star moving from the bad corner towards the center is simply an indicator that you are adjusting the tilt in the right direction.
  2. The thread where I question the accuracy of astronomy.tools contains the mathematical details, but it boils down to this:
     - when your hands shake, you get a blurry picture. Same with the mount - no mount tracks perfectly, so there is a bit of blur that comes from that. Guide RMS is the measure of how accurate tracking is, and it translates into the blur caused by inaccurate tracking
     - seeing creates blur in the image - again, there is a measure of that (seeing FWHM)
     - aperture size limits how much can be resolved - it too creates blur. There is also a known way to calculate it - planetary imagers do it all the time, as there is no point going above a certain F/ratio for a given pixel size
     Mathematics describes how those three blurs combine into a single blur (convolution). If we know all those parameters, we can calculate the expected level of blur / FWHM and consequently the sampling rate needed to capture it properly (see the sketch below). In the above estimate I used 60mm of aperture, 1.5" RMS guide error for the AZ-GTI and seeing in the 1.5-2" FWHM range (all expected conditions).
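     A minimal sketch of that estimate in Python - assuming each blur source is roughly Gaussian (so under convolution the FWHMs add in quadrature), approximating the aperture blur by the Airy disk FWHM, and using the common FWHM/1.6 rule of thumb for sampling rate. The structure is the assumption here; the thread has the exact derivation:

```python
import math

def combined_fwhm(aperture_mm, guide_rms_arcsec, seeing_fwhm_arcsec,
                  wavelength_nm=550):
    """Total blur FWHM in arcsec, treating each blur source as a
    Gaussian so that convolution adds the FWHMs in quadrature."""
    # aperture blur: Airy disk FWHM ~ 1.02 * lambda / D, in arcsec
    airy_rad = 1.02 * wavelength_nm * 1e-9 / (aperture_mm * 1e-3)
    airy_arcsec = math.degrees(airy_rad) * 3600
    # mount blur: guide RMS is a sigma, so FWHM = 2.355 * sigma
    mount_fwhm = 2.355 * guide_rms_arcsec
    return math.sqrt(airy_arcsec**2 + mount_fwhm**2 + seeing_fwhm_arcsec**2)

fwhm = combined_fwhm(60, 1.5, 2.0)   # 60mm scope, AZ-GTI, 2" seeing
print(f"expected star FWHM : {fwhm:.2f}\"")          # ~4.5"
print(f"sampling rate      : {fwhm / 1.6:.2f}\"/px")  # ~2.8"/px
```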
  3. It doubles the number of pixels used, but those pixels won't contain new / valuable information. It is really like visual astronomy: you can always add a barlow to a scope / eyepiece combination, but if you already have a short focal length eyepiece, you will just get a larger, darker image without additional detail. This is the same thing - you already have very small pixels compared to what the scope can deliver (it's like using a 2mm eyepiece) and you want to add a barlow on top of that.
  4. Dark noise does not depend on sub length. Let me show that to you.
     First thing to note is that stacking by average and stacking by addition give the same SNR improvement. That one is rather simple, as average is defined as the sum of subs divided by the number of subs - in other words, average is just the sum of subs divided by a constant. Signal to noise ratio does not change when you multiply or divide by a constant, as both components are scaled, so their ratio stays the same. It is easier to work with sums - simpler to show my point (hence the above, just in case you wonder why I chose sum instead of average - they are the same thing).
     Say you have 1e/s/px dark current, and say we compare two subs of 100 seconds each against one long exposure of 200 seconds. In the first case we are summing two subs, each of which has 100s x 1e/s/px = 100e of dark current. Dark current noise is SQRT(dark_current), so each of them has 10e of dark current noise. When we add the two subs, their dark current noises add like noise does - square root of sum of squares: sqrt(10^2 + 10^2) = sqrt(100 + 100) = sqrt(200) = 10*sqrt(2). What about the one long sub? Well, that is easy: 200s x 1e/s/px = 200e of dark current, and dark current noise is the square root of that, so SQRT(200) = 10*SQRT(2). Same thing. (There is a quick numpy check of this arithmetic below.)
     After stacking you end up with the same amount of dark current and dark current noise no matter how long your subs are (provided you have the same total imaging time). For dark current and its noise it does not matter if you do 100 x 50s or 10 x 500s - you end up with the same amount of both. What will be different is the per-sub amount of noise - but you know what will also be different - the per-sub amount of signal (light signal - the important one) - so the SNR of the final stack does not change. The only thing that changes final SNR is the amount of read noise. This is because all other noise sources depend on time, and only read noise depends on the number of subs (each sub gets one dose - in the 100x50s vs 10x500s case, the first gets 100 doses of read noise while the second gets only 10).
     No - there is only one thing: thermal or dark current and its associated noise. It is called shot noise because it behaves as shot noise - it is the same thing in its nature. Electrons and photons are both particles and act the same in this regard - they never come in exact numbers; there is randomness associated with how many of them arrive / are detected. This process is called a Poisson process and is described by the Poisson distribution.
     That really depends. Some of it is due to electronics around the sensor (with CCDs) that either raise temperature or in other ways "infuse" electrons into that part of the sensor. Or it can simply be some sort of electron leak from circuitry on the sensor (CMOS) that acts like dark current (builds up with exposure time). Note that dark current and amp glow are not noise - they are signal that is removed. Because of the nature of that signal, there is always some "uncertainty" in how much of it has built up over time, and that uncertainty is what noise is. We never calibrate out noise - we calibrate out signal. That is the purpose of calibration: to remove unwanted signal.
     Dark noise increases with sub length - but so does signal strength. Signal increases linearly (all signals - our target signal, LP signal, dark current signal) while the associated noise increases like the square root of that. If signal goes 1, 2, 3, 4, 5, 6 ..., noise goes like 1, 1.41, 1.73, 2, .... It rises slower. That is why we get higher SNR with longer exposure. (Blue is square root and green is linear - green grows past blue as time goes on, moving to the right.)
     With lowering temperature we get into the domain of diminishing returns. That is true. If there is a noise source larger than dark current noise, then yes, the impact of dark current noise will be minimal - but it will not depend on sub duration. Why? Because LP signal grows the same way as dark current signal, hence LP noise grows the same way as dark current noise - their ratio is the same in a 20s exposure, a 60s exposure, a 120s exposure. There is no "swamping" of one by the other at a certain sub length - they remain in the same ratio regardless of sub length.
     I agree that in strong LP it makes very little difference whether you cool down to -20C or say -10C - but the benefit is "infinite". If we define benefit as improvement / cost, and improvement is positive while cost is 0, we get positive/0 = +infinity. When we double sub length it costs us something - there is a greater chance that a 10 minute sub goes to waste than a 5 minute sub - so for read noise there is a point where we want to stop increasing sub length, as the cost gets bigger than the improvement. This does not happen for cooling. Is it harder to cool to -20C than to -10C? It is just entering a number in an app - set this particular temperature on the sensor. As long as you can reach it given deltaT, it makes no difference in terms of cost whether you cool to -20C, -15C or -10C - so whatever the improvement is, it is worth it.
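     If you'd rather check the arithmetic than take it on faith, here is a quick numpy simulation of the 2 x 100s vs 1 x 200s case above (assuming an idealized 1e/s/px dark current and nothing else):

```python
import numpy as np

rng = np.random.default_rng(0)
dark_rate = 1.0      # e-/s/px dark current
n_px = 1_000_000     # simulate many pixels for stable statistics

# two 100s subs, stacked by addition
two_short = rng.poisson(dark_rate * 100, n_px) + rng.poisson(dark_rate * 100, n_px)
# one 200s sub
one_long = rng.poisson(dark_rate * 200, n_px)

print(two_short.std())   # ~14.14 = 10 * sqrt(2)
print(one_long.std())    # ~14.14 - same dark current noise either way
```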
  5. I don't think you should reason like that. Let me explain.
     All noise sources except read noise grow with exposure time, and they all grow in the same way. If we exclude read noise - for example, if we have a perfect sensor with 0 read noise - then it makes no difference whether we stack short subs or take one long exposure. The resulting SNR is the same. For dark current noise it does not matter if you use 2 minute or 10 minute exposures - the total noise will be the same in the end and depends only on the level of dark current (it will be the square root of the total accumulated dark current). The fact that you get dark current noise up to the level of read noise does not affect the result with respect to dark current noise - it affects the result with respect to read noise.
     Maybe I should put it like this:
     - we can change the level of read noise by selecting a different gain
     - we can change the impact of read noise by selecting sub exposure length
     - we can change the level of dark current noise by selecting temperature
     Each of these choices is a tradeoff - and the first two are not independent choices, but the third is.
     The gain / read noise choice goes like this: we increase gain and thus decrease read noise - but we also decrease the saturation point of our sensor / full well capacity. So this is a tradeoff. Once we have chosen our gain, we can choose a sub length such that we can comfortably guide for that long without trailing or out of shape stars - and the impact of read noise is minimal. Thing is - the impact of read noise depends on the other noise sources (but not the other way around), and once we reach a certain threshold we enter the domain of diminishing returns. For example, doubling exposure length from 30s to 60s can have a much more significant effect on total noise than doubling from 4 minutes to 8 minutes. In both cases we double the sub length, but SNR won't improve equally (see the sketch below).
     The third choice is independent. No matter what we choose in the first two cases, the total amount of dark current noise will be the same. It depends neither on the selected gain nor on the camera's read noise. The only parameter to change here is temperature - that is what determines the level of dark current noise. Here we also have a tradeoff and an area of diminishing returns. However, we don't have a "gradual tradeoff" but rather a "hard limit" - so in that sense it is much easier to decide. The tradeoffs are:
     - can you reach the set temperature (cooling has a deltaT it can achieve, and the achieved temperature depends on ambient - we can't cool lower than Ambient - deltaT)
     - how much power you are willing to spend (not important in 99% of cases, unless you have a limited power supply and want to maximize imaging time versus power consumption - but I'd say: get a bigger battery)
     So we really only have the issue of reaching the set point temperature, and I would say: simply go for the lowest temperature you can manage - it costs you nothing (except electrical power). On the other hand, if you can't reach -20C, find comfort in the fact that dark current also has a "domain of diminishing returns": if you can only manage -15C, it won't add much more noise to the image than -20C, because the dependence on temperature is exponential, and at low temperatures the curve is not steep.
     In any case, choosing temperature based on already selected exposure length and read noise is flawed, as dark current does not depend on those (we choose sub length based on LP, since the impact of read noise on the final result depends on the ratio of its magnitude to the largest noise source's magnitude - but dark current is the same regardless of read noise and sub length; it depends only on temperature). Makes sense?
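     To put numbers on the read noise part, here is a small sketch. The rates are made up for illustration (0.1 e/s/px of sky, 0.01 e/s/px of dark current, 1.6e read noise); the point is that only the read noise term depends on how a fixed total time is split into subs:

```python
import math

def stack_noise(sub_len_s, total_s, read_noise_e=1.6,
                sky_rate=0.1, dark_rate=0.01):
    """Noise of the summed stack in electrons for a fixed total time.
    Shot-type noises depend only on total time; read noise gets one
    dose per sub."""
    n_subs = total_s / sub_len_s
    shot_var = (sky_rate + dark_rate) * total_s   # time-dependent part
    return math.sqrt(shot_var + n_subs * read_noise_e**2)

for sub in (30, 60, 240, 480):
    print(f"{sub:3d}s subs -> {stack_noise(sub, 3600):.1f} e-")
# 30s -> 26.5, 60s -> 23.4, 240s -> 20.8, 480s -> 20.4:
# doubling 30s->60s helps far more than doubling 240s->480s
```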
  6. Nope, it did not pop up. If a mention is not properly marked (with that gray "badge" surrounding the nick), it won't notify. Give me a few minutes to read the original post and I'll respond.
  7. Binning does help - it is just a matter of understanding how it all fits together. I'll try to summarize it in a few simple / straightforward points (which we can expand on if you feel the need):
     - there is a limit to the real resolution achievable in an image - it depends on aperture size, seeing and mount performance, and is expressed in arc seconds per pixel
     - any object has a finite size - a certain number of arc seconds that it covers / spans
     - from the two points above it follows that any object you record will have a certain size in pixels
     For example, with a 60mm scope and AZ-GTI in regular conditions I would expect 2.5"/px - 3"/px to be a realistic resolution for the image. M57 is about 1'40" at its longest extent, which translates into 100". At 2.5"/px, that is 40px (quick arithmetic below). That is the realistic size of that object that you can fully resolve with a 60mm scope.
     Now, you can use a barlow to make it larger - but that is the same as taking those 40px and enlarging the image so the object is, say, 100px across. In fact, you can do all sorts of manipulations - bin your data, use a barlow, drizzle, enlarge / crop - whatever you like - but none of that changes the maximum resolution of the image, which is equivalent to about 40px for this object. The only way to get a more detailed image of the object is a larger aperture scope, a better mount and better seeing. Even then, you can't expect miracles. Most amateur setups can't go beyond 1"/px (and even that is a very tall order in 99% of cases). That does not mean you can't image at something like 0.5"/px - you can, and many people do - it just means you won't capture any more detail than if you imaged at say 1.5"/px, or in the best circumstances up to 1"/px.
     To put things into perspective, this is what you can hope to achieve with say an 8" scope, steady skies and a good mount: [image]. That is it. Most images of M57 that you'll see taken by amateur astronomers will 1) be less detailed and 2) probably be larger than that (much blurrier and thus oversampled). For reference, here is my M57 from quite some time ago (taken with a 5" scope, and probably a far cry from a proper image in terms of processing): [image]. Like I said - larger and less detail. On the other hand, if you shoot with a 60mm scope and x2 barlow, this is what you can expect to get in ideal circumstances: [image], while the actual image at that resolution (fully resolved) looks like this: [image].
     I hope the above makes sense. Bottom line - you can make your target look larger in a number of ways - adding a barlow, drizzling, enlarging in software - but none of that will bring in detail, as you are ultimately limited by the possible resolution of your setup + skies.
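     The arithmetic spelled out (the 2.5"/px figure is the realistic-conditions estimate from above):

```python
# M57's long axis: 1'40" = 100 arc seconds
extent_arcsec = 100
sampling = 2.5                   # "/px, realistic for 60mm + AZ-GTI

print(extent_arcsec / sampling)  # 40 px - the fully resolved size
```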
  8. There is only so much resolution you can achieve with a 60mm scope, and you are already oversampling with 2.4um pixel size (probably even when using super pixel mode). No benefit whatsoever in using a barlow lens - the effect will be the same as taking a regular image without the barlow and enlarging it by a factor of x2 (or whatever barlow you were planning on using).
  9. @Pitch Black Skies Just noticed that you guide with 0.5s guide exposures. That is very fast and will often result in chasing the seeing. Maybe try say 2-4s next time, just to see what sort of guiding you get?
  10. A DIY version would be much, much cheaper. I already started a thread to discuss this approach - a telescope for stacking + display and optics to recreate the visual experience. I tested the idea with my mobile phone, a simple lens and an eyepiece - it works fine if configured properly. A camera like the ASI183mc paired with an appropriate telescope (it does not need to be high end - just matched in resolution so it can show different objects / scales) can serve as a potent stacking platform.
  11. If you go too high in the sampling rate of your guide system, you'll introduce too much noise. Upping the sampling rate while keeping the same aperture leads to lower SNR for the same integration time, so you either need to increase guide exposure or bin. I'd say the upper bound is determined by your mount: you don't need more than a x3-4 ratio between your mount's best guide RMS and the guide precision. Guide precision is something like 1/16 - 1/20 of a pixel (let's use 1/16 as the conservative figure). This all means that if you have, for example, a Mesu 200 and expect to guide down to 0.3" RMS regularly, then your guide precision can be 0.1" - no need for more than that. Pixel size in this case can be x16 that, so 1.6"/px (see the sketch below). You don't really need to go lower to have precise enough guiding (you can - I'm guiding at 1.0"/px because I use an OAG at 1600mm, which gives me ~0.5"/px natively, and I bin that x2). 2.9um at 600mm FL will also be around 1"/px - so you don't really need to do anything. It will be fine as it is, even for very precise mounts. You don't need to bin either.
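     A back-of-envelope version of that reasoning, assuming centroiding resolves ~1/16 of a pixel and a ~x3 margin between guide precision and the mount's best RMS:

```python
def max_guide_pixel_scale(best_rms_arcsec, margin=3, centroid_frac=1/16):
    """Coarsest guide pixel scale ("/px) that still measures star
    position well below the mount's best guiding RMS."""
    precision_needed = best_rms_arcsec / margin   # e.g. 0.3" / 3 = 0.1"
    return precision_needed / centroid_frac       # 0.1" * 16 = 1.6"/px

print(max_guide_pixel_scale(0.3))   # 1.6"/px for a Mesu-class 0.3" RMS mount
```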
  12. As long as you remember this about space:
  13. Just as a contrast (and because you decided to switch to PI after gimp), here is the gimp / imagej version:
  14. In that case, I'll just go over the results and both methods without really worrying whether the starting assumptions are true or not.
     Gain: 0.333 e-/ADU
     Bias mean value: 486.9 ADU
     Bias stddev: ~4.9 ADU
     Read noise: 1.61 e- ---> 1.61 / 0.333 = 4.83 ADU
     The background level of the light sub is measured at 565 ADU. You want to swamp read noise by a factor of x3 with LP noise.
     486.9 ADU x 0.33 e/ADU = ~160.7e
     565 ADU x 0.33 e/ADU = ~186.45e
     The background LP value is 186.45 - 160.7 = 25.75e, and LP noise is the square root of that: 5.07e. You are swamping read noise by a factor of 5.07 / 1.61 = ~x3.15.
     The error in your calculation comes from working in ADU units rather than electrons - these relationships hold in electron units, not ADU units. The problem is that there is a square in the calculation, so the conversion factor gets squared as well (it ends up applied one more time than it should be): (4.83 * 3)^2 is actually (1.61 * 3 / 0.33)^2, which in electron units means (4.83 * 3)^2 * 0.33^2 and not (4.83 * 3)^2 * 0.33. If you first convert to electrons, everything checks out, as I showed above (and in the sketch below).
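     The same calculation in a few lines of Python, done in electrons from the start (using the rounded 0.33 e/ADU figure, as above):

```python
gain = 0.33            # e-/ADU (rounded, as above)
bias_mean_adu = 486.9
sky_mean_adu = 565.0
read_noise_e = 1.61

lp_signal_e = (sky_mean_adu - bias_mean_adu) * gain   # ~25.8 e-
lp_noise_e = lp_signal_e ** 0.5                       # Poisson: noise = sqrt(signal)
print(f"LP noise     : {lp_noise_e:.2f} e-")              # ~5.07 e-
print(f"swamp factor : {lp_noise_e / read_noise_e:.2f}")  # ~x3.15
```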
  15. I'm having a bit of difficulty getting stats for your camera. Can you help out? You say the gain is 0.333 e/ADU, yet I see gain set at 56 in the fits header, and I'm unable to identify the readout mode or the actual gain / e/ADU used for the images. The fits header says it is mode 1 - so it should be the blue line, right? Then e/ADU is about 0.44 at gain 56 (the gain setting is from the fits header, so hopefully the software wrote down the exact values used). The next issue is that the QHY graph shows, for read mode 1 and gain 56, a read noise of about ~3.4e. The standard deviation of the bias sub is indeed 4.89 ADU - and if we multiply it by 0.44 e/ADU we get at most 2.15e of read noise.
     The problem is - the stddev of a bias sub is not the same as read noise. To measure read noise from bias subs you must first remove the bias signal; any non-constant signal left in will inflate the stddev measurement, giving a value higher than the true read noise. The simplest method is to take two bias subs, subtract one from the other, measure the stddev and divide that value by sqrt(2) (~1.414). Whatever the case - if we take 2.15e or less, we run into yet another problem: at a numeric gain of 56, only read mode 0 has read noise of 2.15e or less (although not much less). As you can see, I'm rather confused about what the actual numbers are in this case and don't even know where to begin.
     The problem is that you did not remove the bias signal and dark current signal (or their respective noises). If you want to do that sort of calculation, here is how to perform it (sketch of step 1 below):
     1. Measure read noise by taking two bias subs, subtracting them, measuring the stddev and dividing it by sqrt(2)
     2. Take your 60 second master dark, subtract the master bias and measure the average value. The square root of that is your dark current noise
     3. Take a calibrated sub, select an empty patch of background, measure the stddev and calculate sqrt(sub_noise^2 - dark_noise^2 - read_noise^2) to get an approximate value for the LP noise, which you can then compare with read noise
     I say approximate because we did not account for the noise added back by calibration. Using noise to measure the swamp factor is very inconvenient, because of all the noise addition that happens (a sub contains one dose of read noise, one dose of dark current noise and one dose of LP noise, all mixed together before you even start calibration - and calibration subs then add more). When measuring signal instead, the mean takes care of the noise (it's like stacking), so you only need to know a) the read noise and b) the mean signal level of an empty patch of the calibrated sub (and if there are some stars, you can simply take the median and it will ignore them).
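     A sketch of step 1 with numpy + astropy (the file names are placeholders; multiply the result by your e-/ADU gain to express it in electrons):

```python
import numpy as np
from astropy.io import fits

# two bias frames - placeholder file names
b1 = fits.getdata("bias_1.fits").astype(np.float64)
b2 = fits.getdata("bias_2.fits").astype(np.float64)

# subtracting removes the common bias signal; the two read noise
# doses add in quadrature, so divide the stddev by sqrt(2)
read_noise_adu = np.std(b1 - b2) / np.sqrt(2)
print(f"read noise: {read_noise_adu:.2f} ADU")
```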
  16. Just saw the animation that @Craney linked above - it really shot off like a projectile at the end!
  17. Good point. If the motor bracket, wedge and belt transmission parts are designed with care, they can be reused across multiple iterations of the reduction gear. Btw, do you think a 3d printed wedge would be usable for small loads (or maybe a "hybrid" version with both plastic parts and metal bits, like bolts and nuts for adjustment)?
  18. For 3d printed stuff I like the cycloidal drive best. Depending on the design, it has the most points of contact, which means the least backlash (although it is back-drivable). I'm hoping to make one myself at some point.
  19. Indeed - mind not only the spacing but also how close you put the prism to the main sensor. The faster the system, the closer it needs to be to minimize vignetting. A simple calculation (sketch below) shows that, for example, an 8mm prism on a fast system of say F/5 needs to be closer than 40mm if you want to avoid the prism size stopping down the guide assembly. Even at 40mm (in the F/5 case) there will be only one point that is 100% illuminated. An OAG really needs to be close to the camera for best performance (and it works better with slower scopes in that regard).
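     The simple calculation, for the record - assuming an on-axis point fed by a converging light cone of the given focal ratio, whose diameter at distance L from focus is L / F-ratio:

```python
def max_prism_distance_mm(prism_size_mm, f_ratio):
    """Distance from focus beyond which the prism is smaller than the
    converging light cone and starts stopping it down."""
    return prism_size_mm * f_ratio

print(max_prism_distance_mm(8, 5))   # 40 mm for an 8mm prism at F/5
```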
  20. It won't show if the light leak is strong enough compared to the amp glow (and it does appear to be, as there should also be noise visible in the 300s -1C exposure - but there is none showing; it is too faint compared to the light leak).
  21. It actually makes more sense to use a smaller sensor rather than a larger one for OAG. This is because the prism is too small to illuminate a large sensor. I use an ASI185mc - it is somewhat larger than the ASI120 (8.6mm vs 6mm diagonal) - and it gets vignetted on my OAG:
  22. Then I have to correct myself - I wrote that this is not being done by available software, but it seems that it is, in the case of that script. It takes a different approach than what I implemented for myself, but it is a valid approach to equalizing gradients.