Everything posted by vlaiv

  1. Here is a simple thing you can try: swap the RA and DEC gears. They should be identical, but in case there is some defect on the RA one, you should see a difference with the replacement.
  2. A bit late to the party, but here are my views (and the supporting math). I think there are at least two ways of mixing long and short exposures. I fully support both, but I think both are lacking proper software support.

     The first approach uses "filler" subs. We take several very short subs and use their data to replace the over-exposed / saturated parts of the long exposures. Since we are working with very strong signal here (otherwise it would not saturate pixels), there is little concern over SNR. My view is that this should be done at the linear stage, in a particular way: stack the short subs, then register that stack against each of the long subs and replace the saturated pixels, and then work with the long subs as we normally would. I'm not aware of any software that does this. We can replace saturated pixels in the final stack instead - but that has two drawbacks:

     1. We don't know exactly which pixels are over-exposed, because alignment of the subs prior to stacking uses interpolation, which changes pixel values (resamples the image) - so we would have to replace all pixels above some threshold, like 90% of max signal.
     2. Resampling algorithms work best when the data is "natural" - i.e. not clipped / saturated - so aligning subs that contain clipped signal is not an optimum solution.

     The second approach is straightforward mixing of different exposures in a single stack. This one is tricky, as there is no optimum solution in practice. There is an optimum solution in principle - but we can never find it, for lack of information. No currently implemented algorithm tackles this problem properly in my view, with the possible exception of "Entropy Weighted Average (High Dynamic Range)" used in DSS - but I haven't read the paper on that algorithm, so I'm not sure what it does and whether it's any good for this particular case (there is a short note about it on the DSS technical info page).

     Here is what we need to know in order to see the extent of the problem:

     1. Subs of different exposures will have different SNR for the same signal level. This is fairly easy to see - both with our own eyes and with math. This is why, as @ONIKKINEN pointed out, we normalize our subs (we make them have the same signal level, regardless of the noise present in them).
     2. There is no such thing as one SNR for an entire sub. SNR is a per-pixel metric rather than a per-image one. This is easily seen by taking the SNR of some part of a nebula and the SNR of the background. The nebula has some signal, so S>0; noise is always present, so S/N>0. The background has no signal, so S=0 and S/N is zero regardless of the noise level (zero divided by any non-zero number is still zero). Those are two different SNR values. Change the signal (spiral arm versus core - one is fainter, the other brighter) and the SNR changes again. In the end, pretty much every pixel has its own unique SNR.
     3. A straight average of two values with different SNR produces a sub-optimal result.

     Imagine we have two samples (you can extend this to subs, but subs have many more samples - for simplicity let's look at just two) - one with SNR 5 and the other with SNR 4. We normalized our subs so they have the same signal - say 10e. The first then has 10e/2e = SNR 5 (2e of noise for 10e of signal) and the second has 10e/2.5e = SNR 4 (2.5e of noise for 10e of signal). Let's calculate the SNR of a simple average.

     For the signal we get (unsurprisingly): (10 + 10) / 2 = 10e
     For the noise we get: sqrt(2^2 + 2.5^2) / 2 = sqrt(4 + 6.25) / 2 = sqrt(10.25) / 2 = ~1.6
     So the final SNR is 10 / ~1.6 = ~6.25

     But is this the best solution? We can calculate the best solution with a bit of math; we just need to think about the problem. Since the signal is the same in both samples, its weighted average (regardless of the averaging coefficients) will be that same value. So the best SNR is the one that minimizes the averaged noise. To average two samples with weights, the weights need to add up to 1: if one weight is p, with p in (0,1), the other is simply 1-p.

     The noise expression is then: sqrt( (2*p)^2 + (2.5*(1-p))^2 ), and we want the minimum of that expression. Let's tidy it up a bit:
     sqrt(4*p^2 + (2.5 - 2.5*p)^2) = sqrt(4*p^2 + 6.25 - 12.5*p + 6.25*p^2) = sqrt(10.25*p^2 - 12.5*p + 6.25)

     How do we minimize it? We find the first derivative and see where it is equal to zero. The first derivative is (20.5*p - 12.5) / (2 * sqrt(10.25*p^2 - 12.5*p + 6.25)). For that expression to be zero, 20.5*p - 12.5 must be zero (the square root in the denominator is never zero here, and we can't divide by zero anyway). From that, p = 12.5 / 20.5 = 0.6097..., or roughly 0.61. The other coefficient is then 1 - 0.61 = 0.39.

     We can again calculate the SNR with these coefficients:
     signal: 10 * 0.61 + 10 * 0.39 = 10 (no surprise there)
     noise: sqrt((2 * 0.61)^2 + (2.5 * 0.39)^2) = sqrt(1.4884 + 0.950625) = ~1.56174
     Total SNR is then 10 / 1.56174 = ~6.4

     By choosing the right coefficients we raised the resulting SNR from 6.25 to 6.4 in this example. The bigger the difference in SNR between the samples we stack, the bigger the improvement over the regular average (the one with 1/number-of-samples coefficients) when we use optimum weights.

     So what is the problem? The problem is that all of this was per pixel. Each pixel in each sub has its own SNR, so we can't use a single weight for the whole image - which is the usual approach in stacking. On top of that, we don't know the pixel SNRs, even approximately, prior to stacking. Even after stacking we only have an estimate of their SNR, not the true value, simply because we don't have true pixel values - only ones polluted by noise. The more subs we stack, the closer to the true values we get - but we never get 100% there. The closest thing to an optimal solution of this problem that I've seen is an algorithm I developed myself (using in part analysis similar to the above); I've presented that algorithm, together with results, here on SGL.
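     Here is a minimal numpy sketch of the two-sample example above. It reproduces the 6.25 vs ~6.4 figures and shows that the optimum coefficients are just inverse-variance weights - the signal and noise values are the ones from the example, nothing more:

         import numpy as np

         signal = 10.0                     # same normalized signal in both samples (e)
         sigma = np.array([2.0, 2.5])      # per-sample noise (e) -> SNR 5 and SNR 4

         # Simple average: equal weights.
         w = np.array([0.5, 0.5])
         print(signal / np.sqrt(np.sum((w * sigma) ** 2)))   # ~6.25

         # Optimum: weights proportional to 1/sigma^2, normalized to sum to 1.
         w = (1 / sigma**2) / np.sum(1 / sigma**2)           # ~[0.61, 0.39]
         print(signal / np.sqrt(np.sum((w * sigma) ** 2)))   # ~6.40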
  3. The good thing about theories (good ones at least) is that they are testable.
  4. @ollypenrice It should be fairly easy to establish how much noise a master dark adds to the image. If you still have your dark subs, do the following (see the sketch after this list):
     - split them into two equal groups (they must be equal, so if you have an odd number of frames to begin with, discard one to get an even number divisible by two)
     - stack each group using the simple average method (or sum - it does not matter)
     - subtract the resulting images - first from second or the other way around, it really does not matter
     - convert into electron count using e/ADU for the selected gain
     - measure the standard deviation
     This gives you the noise level of the complete stack - from that you can calculate the per-sub noise level. I would be surprised if you got a significantly different result than read noise + dark current noise combined (both of which are low to begin with).
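     A sketch of that measurement in Python with numpy and astropy - the file pattern and the e/ADU value are placeholders for whatever your data actually uses:

         import glob
         import numpy as np
         from astropy.io import fits

         E_PER_ADU = 0.25    # hypothetical gain for your camera/setting - check the gain chart

         subs = [fits.getdata(f).astype(np.float64)
                 for f in sorted(glob.glob("darks/*.fits"))]
         n = len(subs) - (len(subs) % 2)     # discard one sub if the count is odd
         half = n // 2

         stack_a = np.mean(subs[:half], axis=0)
         stack_b = np.mean(subs[half:n], axis=0)

         # Subtraction cancels the fixed dark signal; only noise remains.
         diff_e = (stack_a - stack_b) * E_PER_ADU
         stack_noise = np.std(diff_e) / np.sqrt(2)      # noise of one half-stack
         per_sub_noise = stack_noise * np.sqrt(half)    # noise of a single dark sub
         print(stack_noise, per_sub_noise)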
  5. It is very hard to determine whether there is a slight light leak when taking darks. With a flashlight and the scope there is a significant increase between subs and that is easy to detect, but with darks - if you did not shine a torch on purpose - any light leak will be much smaller and more or less constant. Since there is no lens, the whole sensor is illuminated, so there won't be any gradient to go by. It won't look much different from regular dark current.

     What we can do to try to detect a light leak in darks is:
     1. Examine whether all darks have the same (or nearly the same) mean value. Significant variation can point to a light leak.
     2. Examine the "dark current" levels to see if they match expectations. A significant discrepancy between expected and measured dark current levels would make a light leak suspect. Here we actually measure noise rather than the signal itself, as there is no simple way to isolate the signal with this camera - we can't remove the bias signal easily, as bias frames are not reliable on it.

     The mean values of the 180s subs are pretty consistent. The measured noise in a 180s sub is 2.686e. This consists of read noise at gain 75 plus dark current noise at -15C. Read noise at gain 75, according to the ZWO graph, is about 2.2e. Dark current, according to ZWO, is about 0.009e/s/px at -15C. Let's see if these two add up to what we measured (see the sketch below). In 180s we will have about 0.009 x 180s = 1.62e of dark current signal. The associated noise is the square root of that, so ~1.2728e. Adding 2.2e and 1.2728e of noise gives sqrt(2.2^2 + 1.2728^2) = sqrt(4.84 + 1.62) = sqrt(6.46) = ~2.542e.

     In theory we should have around 2.542e, and we measured 2.686e. There is a bit more noise than expected, and this could mean a slight light leak - but it could also mean we did not read the graphs properly (it is very hard to read graphs like that, especially the temperature graph, which is logarithmic rather than linear). I've also found that ZWO sometimes underestimates noise in their graphs - measured values are always a bit larger than the graphs suggest - so this might be the case here as well. In the end, from these measurements we can conclude that if there is any light leak in the darks, it is very small in magnitude, and you should revisit the darks only if you eliminate the telescope light leak for lights and flats and still have calibration issues.
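     The expected-noise arithmetic, as a few lines of Python - the read noise and dark current figures are the approximate values read off ZWO's published graphs, not measurements:

         import numpy as np

         read_noise = 2.2         # e, ZWO graph at gain 75 (approximate)
         dark_current = 0.009     # e/s/px at -15C (approximate)
         exposure = 180.0         # s

         dark_signal = dark_current * exposure         # 1.62 e
         dark_noise = np.sqrt(dark_signal)             # shot noise of dark current, ~1.27 e
         expected = np.hypot(read_noise, dark_noise)   # add in quadrature, ~2.54 e
         print(expected)                               # compare with the measured 2.686 e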
  6. I can't really tell from the masters themselves. Maybe I would be able to tell from individual dark subs. Here is a comparison between your 60s master and mine (mine was created by stacking 64 exposures - the offset is a bit different, but other than that they should be comparable): left is yours and right is mine. Both have been binned and stretched (linearly) to show amp glow, and I would say they are comparable. Mine was taken at -20C; I'm not sure what temperature you used, so that might be the reason for a bit less noise / a smoother-looking dark. Unfortunately, I can't say from comparison alone whether there is a slight light leak or not. We need numbers for that. Take a look at ImageJ - it is free / open source and made for scientific analysis of microscopy images, but it works rather well for astronomy applications (there is even AstroImageJ, a fork dedicated to astronomy - but you don't need that, as it is geared towards astrometry / photometry rather than simple operations). You can simply open two images and use an image calculator (there are at least three different kinds - each will do simple subtraction with ease, and some, like Image Expression Parser / Image Expression Parser (Macro), can do complex math with multiple images - I used the plain Process / Image Calculator for the above, plus the Analyze / Measure menu option for statistics on the images).
  7. These are only 10s darks, right? The mean ADU value varies from 787 to 811 - that is 24 ADU in just 10s. As a comparison, I took 4 random dark subs from my ASI1600 taken at 240s exposure: there the difference between means is in the second decimal place (down to noise). In any case, here is what a light leak looks like. I took those 4 subs and subtracted the one with the lowest mean value from all of them. The first one, in the top left corner, is obviously black, as it is all zeros (subtracted from itself) - but the rest are neither zero nor pure noise. They contain uneven signal, which is the additional light leak.
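     A quick way to repeat this check on your own darks, sketched in Python (the file pattern is a placeholder):

         import glob
         import numpy as np
         from astropy.io import fits

         darks = [fits.getdata(f).astype(np.float64)
                  for f in sorted(glob.glob("darks_10s/*.fits"))]

         means = np.array([d.mean() for d in darks])
         reference = darks[int(np.argmin(means))]    # the sub with the lowest mean

         # With no leak, each residual is zero-mean noise with no structure;
         # a leak shows up as uneven leftover signal.
         for d in darks:
             residual = d - reference
             print(residual.mean(), residual.std())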
  8. It is definitely plausible, but far from conclusive. We won't be able to tell for certain until we figure out what dark energy really is. Our currently widely accepted LCDM model assumes dark energy is constant - and that fits the data. There is a "competing" theory of dark energy - that it is some sort of scalar field / force that depends on the distribution of kinetic and potential energy throughout the universe. The problem with both is that we just don't know yet. We have different types of measurements, we fit our models to those measurements, and when we have a good fit, we have higher confidence that our model might be correct. It is like having a curve and fitting different mathematical equations to it. Some curves can be fitted by a variety of functions with small error - in the example plot, apart from a constant function (green), the others seem to match all the way up to x=1.5. If we only have data for the 0-1.5 range - and there is some noise - we could say that any of those lines "fits" our data (although they are different functions - a sine, or polynomials of order 3, 5, 7, ...). Similarly, both LCDM and quintessence fit the observational data well. There is currently no way of telling which model is correct. Most people prefer the LCDM model as it is simpler. Both have dark energy as a repulsive force (with quintessence it actually depends on the configuration of the universe, but it is repulsive now and has been for most of history), so it is not a completely different theory - it is just that one component is modeled slightly differently. The thing is, for past data they both agree, but about the future they disagree completely - one ends up in a big rip and the other in a big crunch. Until we understand more about what dark energy really is, we are free to choose our favorite model.
  9. Mounts with stepper motors know their position by counting steps. If this information can be recorded between power cycles, then in principle there is no need for parking. There are two places this information can be recorded - the hand controller or the mount itself. Parking just ensures that you start each session from the zero position, so there is no need to remember the actual position where the mount was turned off. In any case, EQMod does not have this capability, and if the mount does not remember it in firmware (not in the hand controller but in the mount itself), then you will lose sync. Encoders solve all of this nicely, as you don't need to remember anything - you always know where you are pointing. But yes, you are right - it can be implemented without encoders as long as the information "stays with the mount" and is not kept in an external device (here the hand controller counts as an external device, since you can use the mount without it and swap it between mounts).
  10. Can you post linear / unaltered versions of those files you posted stretched above? You can best verify any light leak by subtracting the relevant subs while the data is still linear.
  11. It works with the HEQ5 if you have EQMod, which records and stores the PEC curve for you. Because of this, and the lack of encoders, you must park the HEQ5 after each session before you shut down EQMod. If you don't park, you won't lose the PEC curve, but it will go out of sync (be unusable, and do more damage to mount performance than turning it off). In principle you can sync it back (there is an option for that), but in practice that never seems to work well, as it requires too much eyeballing precision.
  12. Physics and light itself won't behave differently depending on what you choose to image.
  13. I can do this fairly easily - but I don't think you'll appreciate it. In order to truly show that higher gain does not produce a noisier image, we need to conduct the experiment in a controlled environment, to remove any other variable that might change the results. This is very easy to do - anyone with a camera can do it. Just point the camera + lens at a flat wall and try to keep the ambient light as uniform as possible (no changing light sources like a TV or computer screen nearby - no flicker of any kind). Alternatively, a flat panel will work nicely (though not everyone has one). Take two images at a lower gain level and two at a higher one, keeping all other parameters the same (offset, exposure length, ...). Convert all subs from ADU to electron count (multiply by e/ADU for the selected gain and remove any bit offset), then subtract each pair (first from second in the first group, and first from second in the second group). Measure the standard deviation of the resulting subs: the one with the higher standard deviation has more noise. As you can see, the experiment is very simple and straightforward (a sketch follows below). My only concern is that you won't accept its results, because, as you put it, 'theory is one thing, but "hands on experience" is something completely different' (something I strongly disagree with), and it might be hard for you to relate the above experiment to actual solar imaging. I don't think anyone could produce a sufficiently controlled experiment with actual solar images (it would need two scopes with perfect, or at least identical, figures, two cameras with the same read noise and response - all manufacturing defects the same, and so on ...).
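      Here is how the number-crunching part might look in Python - the e/ADU values, the offset and the file names are hypothetical and should be replaced with the figures for your camera at the gains you actually used:

          import numpy as np
          from astropy.io import fits

          E_PER_ADU_LOW, E_PER_ADU_HIGH = 0.25, 0.06   # hypothetical gain chart values
          OFFSET_ADU = 64                              # hypothetical pedestal/offset

          def pair_noise_e(file_a, file_b, e_per_adu):
              a = (fits.getdata(file_a).astype(np.float64) - OFFSET_ADU) * e_per_adu
              b = (fits.getdata(file_b).astype(np.float64) - OFFSET_ADU) * e_per_adu
              # Subtracting the pair cancels the identical signal; noise remains.
              return np.std(a - b) / np.sqrt(2)

          print("low gain :", pair_noise_e("low_1.fits", "low_2.fits", E_PER_ADU_LOW))
          print("high gain:", pair_noise_e("high_1.fits", "high_2.fits", E_PER_ADU_HIGH))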
  14. How about daytime focusing on something very far away, or perhaps using the Moon? Using the same equation from above (1/f = 1/f1 + 1/f2) you can see how much back focus you need depending on distance. Say you have something very far away - maybe 10km. That should not move the focus point too much:

      focus_position = fl * distance / (distance - fl)

      Say FL is 2 meters and the distance is 10000m; the focus position will then be 2 * 10000 / 9998 = 2.0004m, or just 0.4mm away from where it is supposed to be. You can also go the other way around and see what the effective FL would be if you put the focus position where you want it (thus missing by 0.4mm) - just solve for FL given the distance and focus position.
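      The same relation as a tiny Python helper, with the 10km example worked in millimetres:

          def focus_position(fl_mm, distance_mm):
              # Thin-lens relation 1/fl = 1/distance + 1/focus, solved for focus.
              return fl_mm * distance_mm / (distance_mm - fl_mm)

          print(focus_position(2000.0, 10_000_000.0))   # ~2000.4 mm: 0.4 mm beyond infinity focus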
  15. That is exactly one of the things I wanted to point out. Sure, the person in question knows that they can swamp read noise in 15ms exposures at high altitude at a site with good seeing. I just wonder if that person would ever recommend to someone starting out in lunar AP that they must use 15ms because it produces excellent results for them? Anecdotal evidence can sometimes be misleading. In average seeing, even with 8" of aperture, you can't really hope for a coherence time longer than, say, 5-6ms, let alone 15ms. Just imagine for a second that the person reading the advice is interested in Ha solar with an internal etalon, with a scope operating at F/30, using an ASI290. That is oversampling by about x3.4. Now we have x3 less exposure time, and each pixel is 3.4 x 3.4 = 11.56 times smaller by surface than it needs to be, which makes for roughly x35 lower signal in total. Do you still think the signal will significantly swamp the read noise? There is no harm in my advice - only benefit. While this benefit might be marginal for some setups, it is always correct, and for some people it will make a difference. It will be as full as an imaging section dedicated to images taken with equipment and processed with software backed by science and theory, not so easily dismissed by the people using them.
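      The arithmetic behind that x35 figure, assuming the usual critical-sampling rule of thumb (optimal F-ratio ~ 2 x pixel size / wavelength); the numbers below are for the ASI290's 2.9um pixels at Ha:

          critical_f = 2 * 2.9 / 0.656         # ~8.8 - critical F-ratio at 656 nm
          oversampling = 30 / critical_f       # ~3.4 at F/30
          exposure_factor = 15 / 5             # 15 ms advice vs ~5 ms coherence time
          print(exposure_factor * oversampling ** 2)   # ~35x lower signal per pixel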
  16. I would not use the term "optimal" for a single variable, but yes - everything else being the same, gain 80 will produce a stack with worse SNR than a higher gain that has less read noise.
  17. It is not a matter of opinion - higher gain settings, for the same exposure length and the same number of stacked subs, will produce a less noisy image. This is what the math tells us. The only difference between the two stacks - one at higher gain and one at gain 80, given the same exposure length and, of course, the same camera under the same conditions - is the amount of read noise. All other noise sources will be the same and the signal will be the same. The gain 80 stack will have more read noise than the higher gain stack while everything else stays equal - hence the gain 80 stack will have lower SNR.
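      A toy calculation that makes this concrete - the read noise values are hypothetical stand-ins for "gain 80" vs "higher gain", and dark current is ignored for simplicity:

          import numpy as np

          signal = 100.0                  # e per sub, identical in both stacks
          shot_noise = np.sqrt(signal)    # 10 e, also identical
          n_subs = 100

          for read_noise in (3.5, 1.5):   # hypothetical: gain 80 vs higher gain
              per_sub_noise = np.hypot(shot_noise, read_noise)
              stack_snr = np.sqrt(n_subs) * signal / per_sub_noise
              print(read_noise, stack_snr)   # lower read noise -> higher stack SNR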
  18. Hi and welcome to SGL. The usual procedure when making mosaics is to stack each panel and then join them together. For ICE try this link: https://archive.org/download/ice-2.0.3-for-64-bit-windows There are several other programs that do it as well - iMerge for example, and ImageJ also has stitching plugins.
  19. Does this hold for increased gain while keeping exposure length the same?
  20. It will work in this arrangement: the setup you are describing - a telescope with eyepiece pointed at an optical flat, with the eyepiece assembly shown enlarged in the bottom part of the image. You need to fashion a custom eyepiece for that to work: bring the light source of the artificial star to the actual focal point of the eyepiece - at the field stop. If you take a strand of standard optical fibre to act as your artificial star, you need to route it inside the eyepiece and have it end at the field stop, pointing out towards the telescope. This way, when you look through the eyepiece and you are in focus, the artificial star will be in focus as well.

      If you want a DIY solution just for testing, and we are not discussing a novel device, there is a much simpler option for you to try. You will need another scope, and by the looks of your signature you have at least one for this purpose (even a simple finder will do). Take your second scope, put an eyepiece in, focus at infinity, take the eyepiece out and put the artificial star at the focal plane of that telescope. The objective will then project a parallel beam of light aimed at "infinity" - on the other side, place your RC (it does not matter if the projecting scope is smaller - just make their optical axes parallel; you can aim at one half, say the bottom, of the larger scope's aperture) and it will receive the light as if coming from infinity (the light will be collimated). Then you insert an eyepiece and focus the artificial star to a point. That will be your focus position.

      The third option is to use "close" focus and calculate the offset to the infinity focus position, if you know the focal length of your instrument. Say your RC has a FL of 2000mm and you put the artificial star 20m away. Then:

      1/2000 = 1/20000 + 1/focus_position
      1/focus_position = 1/2000 - 1/20000 = 9/20000
      focus_position = 20000/9 = 2222.22mm

      Your actual (infinity) focus will be 222.22mm inward from where you focused - you focus at 2222.22mm, but your FL is 2000mm, so you are 2222.22 - 2000 = 222.22mm out.
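      For convenience, the close-focus arithmetic as a Python snippet (FL and distance in millimetres):

          def focus_position(fl_mm, distance_mm):
              # Thin-lens relation 1/fl = 1/distance + 1/focus, solved for focus.
              return fl_mm * distance_mm / (distance_mm - fl_mm)

          fp = focus_position(2000.0, 20_000.0)
          print(fp, fp - 2000.0)    # ~2222.22 mm, i.e. ~222.22 mm outside infinity focus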
  21. It's not my theory - it is the well-established theory of how light and sensors work, and the well-established theory of lucky imaging. I'm not going to try to convince you that what I'm saying is right; after all, I'm just repeating well-established facts. If you wish, we can run a simple experiment that will show that increased gain does not mean increased noise - the actual measured noise will in fact be somewhat lower due to the decrease in read noise (shot noise does not depend on gain settings). On the other hand, if you don't want to change anything in your workflow, then of course don't - as the above image shows, it's producing excellent results. There is only one slight issue with recommending your workflow to people: it is based on anecdotal evidence - it works for you, under your circumstances. Theory works in all instances (where it is applicable), and it will work for everyone if properly applied. If it does not, then it is flawed and should be replaced with a more accurate theory. So far we have not seen evidence that the theory is flawed.
  22. Excellent image! Things regarding noise levels are quite straightforward, although @neil phillips and I have already had a discussion along those lines and did not reach the same conclusion. Gain affects read noise in modern CMOS cameras in a known way - usually there is a graph published for each camera, and one can easily measure one's camera for read noise at different gain settings. The result: read noise goes down with higher gain. Gain does not directly affect any other type of noise. This is where Neil and I disagreed before, and Freddie seems to share Neil's opinion - but that opinion is not based on theory (nor experiment, for that matter). Higher gain won't produce noisier results.

      What can happen is that higher gain produces noisier results if one exposes for histogram rather than for time. That is how higher gain can indirectly affect noise levels - and it is exactly what I always caution people not to do: don't expose for histogram. The primary factor in lucky / planetary imaging is to expose just enough to freeze the seeing. This is usually 5-6ms or less, depending on the actual seeing at that moment. It is also related to aperture size and the Fried parameter, and to the Greenwood frequency: https://en.wikipedia.org/wiki/Greenwood_frequency Coherence time is the inverse of this frequency (t = 1/f_G). Freezing the seeing means just that - a time period in which the distortion is not changing. If we integrate over a longer period while the atmospheric distortion changes, we capture both the distortion and the motion blur of that change. We want to avoid the latter and select frames with minimal distortion.

      Once we have selected our exposure time, the rule is rather simple: increase gain until you are close to saturation (you want to avoid saturation). Different types of planetary imaging mean different target brightness - solar Ha and lunar are probably the brightest, and there you don't have to raise gain much, as you have plenty of signal to start with and chances are you will saturate quickly if you up the gain. I'm not sure what the usual signal levels are with Ha (it is a very restrictive filter, after all), but for planets, which are much dimmer, one can go very high with gain on short exposures without saturation.

      Keep in mind that it is bad to lower the exposure time just to be able to raise the gain. This is counterproductive, as at those signal levels shot noise is much higher than read noise. It only makes sense to raise the gain in order to lower read noise once your exposure time is fixed. With fixed time you have fixed shot noise, since you have fixed signal levels - so the only thing left to improve is read noise. Also note that lowering the exposure time below the coherence time will not yield sharper results - the atmosphere is already frozen. If the coherence time is 5ms and you expose for 2ms, you will get two frames with the same distortion. Overall, you will have an equal percentage of good/bad frames. You will have more frames, but the total integration will be the same - and as far as SNR goes, the best SNR for a given total integration time comes from the fewest subs (longer subs, each of which already has a higher SNR).
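      To illustrate that last point, here is a toy comparison of 5ms vs 2ms subs at a fixed total integration time - the flux and read noise numbers are made up, but the trend holds for any non-zero read noise:

          import numpy as np

          total_time = 60.0      # s of total integration
          flux = 200.0           # e/s from the target (hypothetical)
          read_noise = 1.5       # e per frame (hypothetical)

          for exp in (0.005, 0.002):               # 5 ms vs 2 ms subs
              n = int(total_time / exp)
              signal = flux * exp                  # e per sub
              per_sub_noise = np.sqrt(signal + read_noise**2)
              stack_snr = np.sqrt(n) * signal / per_sub_noise
              print(f"{exp*1000:.0f} ms subs: n={n}, stack SNR={stack_snr:.1f}")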
  23. They are there - but discolored, because the whole image has a strong blue cast. In your image I marked some features: Ha regions with arrows, and prominent features circled (one star and one star field). The same features are marked on the reference image. Now compare the color cast in your image to the reference: it's as if you boosted blue too far - the galaxy cores are no longer yellow and there is no red to speak of.
  24. That really does not matter. With the belt mod, the motor shaft still has a 9-tooth pulley on it - each of those teeth moves one place every ~13.6s, so if belt meshing is an issue on the motor pulley, you will see it every 13.6 seconds (I had that, due to poor belt tension). One whole revolution of the motor shaft takes ~122.4 seconds - so if the motor pulley is out of shape, you will see that period. The worm gear still has 47 teeth and turns once every ~639.2s, so if that one is out of shape (or the worm itself), you will see that period. Harmonics of those periods can still be present. The only thing removed is the transfer gear, which is replaced by the belt.
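      The periods follow directly from the tooth counts quoted above - a quick sanity check using those same figures:

          worm_period = 639.2                 # s per worm revolution
          worm_pulley_teeth = 47
          motor_pulley_teeth = 9

          tooth_period = worm_period / worm_pulley_teeth    # ~13.6 s per belt tooth
          motor_period = tooth_period * motor_pulley_teeth  # ~122.4 s per motor shaft turn
          print(tooth_period, motor_period)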
  25. If it is not an exact period, it is unlikely to be related to mechanical periods. What is your shooting location like? Do you have something like a row of houses in the distance that your mount "traverses" as it tracks? Could it be local seeing effects? Which direction - east/west or south?