Everything posted by vlaiv

  1. That is the meaning of anisotropy. Isotropic space means that it behaves equally irrespective of the chosen direction. Anisotropic space means that it behaves differently depending on the chosen direction. I don't think that anisotropic space implies a global reference frame, but it does define a preferred direction, and as such any reference frame would have a preferred direction. We don't need to specify a reason for it behaving differently - in much the same way we can't specify the reason why it behaves the same in every direction. It is just a fact. The whole problem is that, given our current state of understanding, we have accepted the cosmological principle - which comprises isotropy and homogeneity (on large scales) - and it is built into our formulas. This thread has been about pointing out that if an anisotropy exists that affects the speed of light, we would not be able to tell from experiments.
  2. In the shore example it is quite obvious, since we can see the flow of the water. Suppose that the EM quantum field is "flowing" in some arbitrary direction in space. We can't see the actual quantum field, except for vibrations in it that we identify as photons. Would we be able to tell anything at all? Maybe the flowing analogy is not that good - maybe there is an additional field that photons couple to, and it is anisotropic, so photons couple more or less to it depending on direction. Again, no way to tell. It really does not make a difference for the original question - whichever way the anisotropy is pointing, we won't be able to tell, since we can't measure a single leg of the journey.
  3. Anisotropy makes sure of that - the photon does not need to "know" anything. Imagine you are swimming in a flowing river with a blindfold over your eyes. To you it does not really "feel" any different whether you are swimming upstream or downstream. For someone on the shore it is rather obvious that you are swimming faster downstream than upstream.
  4. Could it be that you accidentally mixed calibration subs? At -10C you are more likely to end up with some hot pixels than at -20C, and if you shot your lights at -10C and calibrated with -20C darks by mistake, those hot pixels could be left in the image.
  5. If you stack your data prior to debayering, it will effectively destroy the color information and turn your recording into (low quality) mono / luminance data. This happens because stacking does alignment and pixels of different colors end up in the same place - so you stack red against green against blue and effectively average them out, which creates gray. When you debayer after that you'll just get a gray image where the B-V information is constant.
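     A quick way to see why, as a minimal numpy sketch (the mosaic values and the one-pixel alignment shift are made up purely for illustration):

     ```python
     import numpy as np

     # A synthetic RGGB Bayer mosaic: red photosites hold 100, green/blue hold 0.
     mosaic = np.zeros((4, 4))
     mosaic[0::2, 0::2] = 100

     # Alignment between dithered subs shifts the mosaic by ~1 pixel, so red
     # photosites land on green/blue positions of the other sub.
     shifted = np.roll(mosaic, 1, axis=1)

     # Averaging (stacking) before debayering mixes the colors together:
     stacked = (mosaic + shifted) / 2
     print(stacked[0])     # [50. 50. 50. 50.] - the color information is gone
     ```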
  6. Dithering works for hot pixel removal only if you use sigma rejection in your integration step (see the sketch below). 10 subs seems a bit low for reliable sigma rejection.
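     To make the idea concrete, a minimal numpy sketch of kappa-sigma rejection during integration - not the exact algorithm any particular stacker uses, and the pixel values are made up:

     ```python
     import numpy as np

     def sigma_clip_stack(subs, kappa=3.0):
         """Average subs per pixel, rejecting samples far from the per-pixel median."""
         stack = np.array(subs, dtype=float)       # shape: (n_subs, height, width)
         center = np.median(stack, axis=0)         # robust per-pixel estimate
         sigma = stack.std(axis=0)
         keep = np.abs(stack - center) <= kappa * sigma
         return np.where(keep, stack, 0).sum(axis=0) / keep.sum(axis=0)

     # With dithering, a hot pixel lands on a different sky position in each sub,
     # so at any given output pixel only one sub is an outlier and gets rejected.
     subs = [np.full((2, 2), 10.0) for _ in range(10)]
     subs[0][0, 0] = 1000.0                        # the "hot pixel" in one sub
     print(sigma_clip_stack(subs)[0, 0])           # 10.0 - outlier rejected
     ```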
  7. Another cool video: https://www.youtube.com/watch?v=ZIOn6w7gwWU I can't help following the suggested videos on topics that interest me.
  8. What sort of precision do you have now? Tracking precision is really down to the mechanical side of things - precision machining and such. Maybe you could try making your own worm gear? I've seen a YouTube video showing how it can be done fairly easily with a thread tap and a drill. Let me see if I can find that for you. https://www.youtube.com/watch?v=19jKlq8Ofd4
  9. Should be easily done in code, no? I'm guessing that your Arduino code loops, tracks time and issues a step command at the appropriate moments. Therefore you must already have a flag that says sidereal / fast forward / fast backward in order to control the steps. From there it is rather easy with one if and one counter: if not sidereal, then on each step increase the counter, and when the counter hits some number (you can calculate how many steps you want the LED to be on/off for), toggle the state of the LED. One more piece of code: when changing speed, if the new speed is sidereal, reset the counter and set the LED to on. Something like the sketch below.
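     A rough sketch of that logic (written in Python just to show the flow - the names sidereal, BLINK_STEPS, on_step and on_speed_change are hypothetical, and the real thing would live in the Arduino loop):

     ```python
     # Hypothetical state - in a real Arduino sketch these would be globals.
     sidereal = True          # current speed flag
     led_on = True            # LED state (steady on while tracking)
     step_counter = 0
     BLINK_STEPS = 200        # steps per LED toggle while slewing (made-up number)

     def on_step():
         """Call this every time a step command is issued."""
         global step_counter, led_on
         if not sidereal:                     # only blink while fast slewing
             step_counter += 1
             if step_counter >= BLINK_STEPS:
                 step_counter = 0
                 led_on = not led_on          # toggle the LED

     def on_speed_change(new_sidereal):
         """Call this when switching between sidereal and fast slew."""
         global sidereal, step_counter, led_on
         sidereal = new_sidereal
         if sidereal:                         # back to tracking: steady light
             step_counter = 0
             led_on = True
     ```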
  10. Check out this paper for example: https://www.radioeng.cz/fulltexts/2004/04_04_27_34.pdf
  11. If you want to do proper calibration, one way to do it is to use darks, flats and flat darks (the basic arithmetic is sketched below). Bias is useful only if you want to scale the dark signal for some reason (and some CMOS sensors don't really allow for that).
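     A minimal sketch of that calibration arithmetic, assuming the darks match the lights in exposure and temperature (the arrays stand in for stacks of real frames):

     ```python
     import numpy as np

     def master(frames):
         """Simple average of a stack of frames."""
         return np.mean(frames, axis=0)

     def calibrate(light, darks, flats, flat_darks):
         master_dark = master(darks)                      # same exposure/temp as lights
         master_flat = master(flats) - master(flat_darks) # remove dark signal from flats
         master_flat /= np.mean(master_flat)              # normalize flat to ~1.0
         return (light - master_dark) / master_flat
     ```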
  12. No need for correction. The only difference between point sampling and relatively squarish pixels (they are not really perfect squares in real life) is that the pixels convolve the original function with the shape of the pixel (for perfect squares that would be a 2D square), which acts as a low pass filter but does not change the cut-off frequency. A perfect square pixel, for example, attenuates the signal to roughly 64% (2/π) of its original amplitude at the cut-off frequency if the pixel is matched as the above calculations say. Here is the FFT of a perfect square pixel - it represents the low pass filter acting on the original signal. I roughly marked the cut-off frequency of the signal due to diffraction at the scope aperture and the area where the signal is contained. As you can see, this low pass filter does not change the cut-off frequency - it just attenuates the frequencies below it a bit more.
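     That attenuation figure can be checked with a one-liner - the MTF of a square pixel of width p is |sinc(f·p)|, and with matched sampling the cut-off sits at the Nyquist frequency f = 1/(2p):

     ```python
     import numpy as np

     # numpy's sinc is the normalized one, sin(pi*x)/(pi*x); at f*p = 0.5 (Nyquist)
     attenuation_at_cutoff = np.sinc(0.5)
     print(attenuation_at_cutoff)      # 0.6366... = 2/pi, i.e. ~64% of the original
     ```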
  13. Yes. Actually it will depend on the IR cut filter - but most cameras have a sloped IR cut filter rather than a sharp-edged one, which provides better color balance. This image sums it up nicely: it shows the IR cut filter reducing Ha sensitivity to about 20% of what it would otherwise be for that particular sensor. If you can, try the DCRaw software. It is a command line utility and can extract completely raw sensor data, which is what you need. Mind you, such raw data will look very strange - all green or similar - as no color correction has been applied to it. The thing with color correction is that it uses matrix multiplication, so the final red contains all three raw components - raw blue, raw green and raw red. The problem with that is that the green and blue raw components really only contain noise and no signal, so you end up injecting that noise into the Ha signal for no good reason if you let the software treat the data like a regular photo and do its color processing. In any case, use DCRaw to get proper raw data from your camera and then compare it to what you get in Affinity Photo if you tell it to leave the image as raw data. If you get the same thing - great. If not, I suggest you use software that can extract the real sensor raw data from your images for Ha.
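     To illustrate the matrix multiplication point - the matrix below is made up, not any real camera's color matrix:

     ```python
     import numpy as np

     # First row: the corrected red channel mixes all three raw channels, so noise
     # from raw green/blue (which carry no Ha signal) leaks into the Ha data.
     M = np.array([[ 1.6, -0.4, -0.2],
                   [-0.3,  1.5, -0.2],
                   [ 0.1, -0.6,  1.5]])
     raw = np.array([120.0, 3.0, 2.0])   # Ha through a color camera: only raw red has signal
     print(M @ raw)                       # corrected "red" now carries that extra noise
     ```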
  14. Yes, indeed, this is correct. I mentioned x10 based on values that I just guessed. The fact that you have sky background values from your subs makes it possible to get exact numbers instead of guessing. Getting the planetary sampling rate is straightforward. We just need the frequency cut-off point for the diffraction limited case, which is given here: https://en.wikipedia.org/wiki/Spatial_cutoff_frequency We need to sample at twice that frequency (Nyquist). If we substitute the pixel size in there, it is straightforward: F# = pixel_size * 2 / wavelength (the two is due to Nyquist - two pixels per highest frequency). For 2.4µm pixels and, say, visual light at 550nm = 0.55µm we get F# = 2.4 * 2 / 0.55 = ~8.73. F/12 corresponds to 400nm light, as 2.4 * 2 / 0.4 = 12, so yes, if you want the maximum possible detail you should use F/12, but in practice one should not go that high since blue suffers atmospheric effects the worst and the detail is probably not going to be there anyway.
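     The same calculation as a tiny helper, reproducing the numbers above:

     ```python
     def critical_f_ratio(pixel_um, wavelength_um):
         """F-ratio at which the diffraction cut-off equals Nyquist for this pixel size."""
         return 2 * pixel_um / wavelength_um

     print(critical_f_ratio(2.4, 0.55))   # ~8.73 for 550nm (green) light
     print(critical_f_ratio(2.4, 0.40))   # 12.0 for 400nm (blue) light
     ```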
  15. Nyquist for 2.4µm pixels and the Ha wavelength is F/7.32, but you can't use that, as you are not in outer space - there is the atmosphere and there is mount tracking performance. Measure the FWHM in your subs in arc seconds and divide it by 1.6 - that will give you the optimal sampling rate.
  16. Not sure why you bring thermal noise into this. Dark current should be negligible at -20C, which I assume you are cooling down to. According to ZWO published data, dark current at -15°C is 0.00298 e/px/s. In a one minute exposure that amounts to 0.1788e of dark signal, or ~0.4228e of dark current noise - about x4 less than the read noise. You would need subs of at least 16 minutes for dark current noise to match read noise, so you can't use dark current noise to swamp read noise. You are right that ADU counts are divided by the e/ADU value, so if you want to get from an ADU count to electrons you need to do the inverse - multiply by e/ADU. Here is where you are wrong: that would be correct if you used unity gain with an e/ADU value of one, but you used a gain setting of 300, which has ~0.1259 e/ADU (inspect your FITS header, the exact value should be reported). From there the math is straightforward (worked numbers below). If you have a 4100 ADU count - and these are shifted by 4 bits because we are using a 12bit ADC - the actual value is ~256 ADU, and when we multiply that by e/ADU we get ~32.26e as the sky level in a single exposure. The associated Poisson noise is the square root of that, or ~5.68e. That is about x3.44 times the read noise at gain 300. That is actually not that bad, but I'd still go with a x5 factor - or twice as long exposures. Having a 0.54 e/px/s sky signal equates to about mag 16.3 - that is a seriously poor sky, probably SQM 18 + moon.
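     The worked numbers from this post in one place (the ~1.65e read noise figure at gain 300 comes from the related ASI183 posts below):

     ```python
     import math

     e_per_adu  = 0.1259                  # ~e/ADU at gain 300 (check the FITS header)
     read_noise = 1.65                    # e, approximate read noise at gain 300
     dark_rate  = 0.00298                 # e/px/s, ZWO figure at -15C

     dark_signal = dark_rate * 60         # 0.1788 e in a 60s sub
     dark_noise  = math.sqrt(dark_signal) # ~0.42 e, roughly 4x below read noise

     sky_adu_16bit = 4100                 # measured background in the calibrated sub
     sky_adu_12bit = sky_adu_16bit / 16   # undo the 4-bit shift of the 12bit ADC
     sky_e     = sky_adu_12bit * e_per_adu   # ~32.3 e of sky signal per sub
     sky_noise = math.sqrt(sky_e)            # ~5.68 e
     print(sky_noise / read_noise)           # ~3.44 - the advice is to aim for ~5x
     ```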
  17. Did you actually measure the sky background in your subs? It should be quite easy to do - take a calibrated sub, measure the background ADU, divide by 16 and then multiply by the e/ADU value for gain 300. If you are to swamp the read noise, you should get a value of 68e or higher. We can also go the other way - gain 300 should be ~0.1259 e/ADU, so 68e is about 540 ADU, and that x16 (as the ASI183 has a 12bit ADC) gives ~8642 ADU. Is the background level in your calibrated sub around 8642?
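     The same check in code, assuming ~1.65e read noise at gain 300 and the usual "sky noise at least 5x read noise" criterion (which is where the 68e figure comes from):

     ```python
     read_noise = 1.65                     # e at gain 300 (approximate)
     e_per_adu  = 0.1259                   # e/ADU at gain 300

     target_sky_e = (5 * read_noise) ** 2  # ~68e: sky noise then equals 5x read noise
     adu_12bit = target_sky_e / e_per_adu  # ~540 ADU at the 12bit ADC
     adu_16bit = adu_12bit * 16            # ~8650 in the 16bit file (post rounds to 8642)
     print(round(target_sky_e), round(adu_16bit))
     ```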
  18. I think that you want to bump that up by a factor of x10. If you can, fix the link between the mount and the computer. Your sampling rate is 1.29"/px, which is quite fine (oversampled for an 80mm scope), and although each pixel only receives ~1.65e (gain 300) of read noise, you are nowhere near swamping it with background sky noise in narrowband. I ran some calculations and these are the stats that I've got. I used a 7nm filter, 65% QE at 656nm and roughly SQM 19.2 (from the light pollution info map for your location - not sure if I have correct info there). These are SNR values on a very faint target (around mag24 / arc second squared) after two hours of total exposure, using 60s, 300s, 600s and 900s subs. The transition between 600s and 900s shows minimal improvement (about 2-3%), but there is an improvement in SNR of 55% between 60s and 600s. That is equivalent to stacking an additional 140% of subs (the square root of 2.4 is ~1.55). Do bin x2 all of your data, since 1.29"/px is simply too fine for an 80mm scope. You can use the tri-band filter as luminance if you want to further improve things: use tri-band + Ha, OIII, SII as you would L + R, G, B - most of the time spent on tri-band and some time spent getting the color. You can also spend a minimal amount of time on RGB star colors - that makes a nice addition. But I would say the primary thing is exposure length.
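     A sketch of the kind of comparison behind those numbers - the standard stacked-SNR formula, with the target and sky electron rates below being made-up placeholders rather than the actual rates used in the post:

     ```python
     import math

     read_noise  = 1.65       # e per sub at gain 300
     target_rate = 0.002      # e/px/s from the faint target (assumed placeholder)
     sky_rate    = 0.02       # e/px/s sky background through the NB filter (assumed)
     total_time  = 2 * 3600   # two hours of total exposure

     for sub_len in (60, 300, 600, 900):
         n_subs = total_time / sub_len
         signal = target_rate * total_time
         noise  = math.sqrt(signal + sky_rate * total_time + n_subs * read_noise**2)
         print(sub_len, round(signal / noise, 3))
     # Longer subs reduce the total read noise contribution; the gain flattens out
     # once sky noise dominates (here the 600s -> 900s step changes SNR very little).
     ```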
  19. You don't have to break the bank to get halfway there. Get one of these (in the 35mm version - the other two are not available): https://www.firstlightoptics.com/ovl-eyepieces/aero-ed-swa-2-eyepieces.html In an F/6 scope maybe the outer 10% will suffer somewhat, so you won't get that "sharp to the edge" performance - but if you concentrate on the target in the center of the field and use the outer part just for framing / context, well, you don't see that sharply with peripheral vision anyway. Other than that, the ES68 is a very nice performer. I have the 28mm version and use it in my 8" F/6 - it is comfortable, sharp and an excellent performer.
  20. What scope were you using, what gain settings, and what was your exposure length? The ASI183 has very small pixels, which means you need quite a bit longer exposures in narrowband than is customary for CMOS sensors.
  21. From a quick search, Affinity Photo does not seem to have pixel binning. You can use a Ha filter with a color sensor, but you need to be aware of a few things. It will have only 1/4 of the sensitivity, as only the red pixels of the CFA (Bayer matrix) are sensitive at Ha wavelengths. The proper way to utilize Ha would be to shoot with that filter and, after stacking, just extract the red channel and throw away blue and green (see the sketch below). Also, make sure you use RAW data that has not been color processed in any way - camera and software do color balancing and you don't want any of that. DCRaw can extract pure raw data from raw files; that is what you want for a Ha image. Once you have that, you can bin the image like a regular image - and in fact you should, to bring it to the same resolution as the rest so you can properly align them. An alternative would be to go with a UHC filter instead, which is a cheap version of a quad band filter. It will pass Ha, Hb, OIII and SII wavelengths and a bit more. It is not as effective as regular narrowband filters, but it does help with light pollution on emission type targets. Do consider upgrading the mount and adding guiding.
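     For illustration, one related route is to pull the red photosites straight out of the raw Bayer mosaic (this assumes an RGGB layout - check your camera's actual CFA pattern):

     ```python
     import numpy as np

     def extract_red(mosaic):
         """Keep only the red photosites of an RGGB mosaic (half resolution per axis)."""
         return mosaic[0::2, 0::2]

     raw = np.arange(16, dtype=float).reshape(4, 4)   # stand-in for raw sensor data
     red = extract_red(raw)
     print(red.shape)                                 # (2, 2)
     ```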
  22. Get a CEM60 if you can find one. I have an HEQ5 and my upgrade path was going to be CEM60 / CEM120 depending on the budget. The CEM60 is no longer made and has been replaced by the CEM70, which is much more expensive, and I don't see the justification for that. My personal belief is that iOptron realized the CEM60 was too good a deal in performance/$, so they decided to "upgrade" it - and of course increase their profits that way. From what I've read, experience with the CEM60 has been really positive. For me, the CEM120 is still an upgrade option. The HEQ5 is an excellent mount once it is properly tuned / upgraded / modded. However, having done all the mods to mine myself (belt mod, changed bearings, changed saddle plate, upgraded tripod), I now have a feeling it is somehow too fragile. I constantly worry that something will throw it out of perfect tune and it will stop performing the way it is performing now. Tuning it to perfect running order seems more luck than skill, to be honest, and part of me is scared about whether I'll be able to do it again. I managed to get my mount down to an incredible 0.36" RMS guiding error on a very quiet night (which is funny, because I maintained that it can't go below about 0.5" RMS, as the stepper precision is 0.17" per step with 64 micro steps - so it can't have much holding power to keep the scope centered in DEC against the wind, for example). Keep in mind that you'll feel most comfortable mounting less than 2/3 of the specified load capacity on a given mount. In one configuration I put about 11kg on the HEQ5; I did load it with more weight, but it did not perform well. Similarly, the CEM40 is rated at 18kg, so for imaging consider 12-13kg to be the upper limit on it. The CEM60 will carry up to 27kg according to the specs, so you can really put 18-20kg on it for imaging. Btw, I think the CEM40 is a similarly good product, but I have not read much about people's experiences with it.
  23. What a nice collection of space marbles! Do you have any idea why this happened? Maybe out of focus + spherical aberration because of mirror spacing + low down on the horizon, so atmospheric dispersion?
  24. Yes, binning is effectively like having larger pixels. You take all the electrons captured by 4 adjacent pixels - a group of 2x2 pixels - and sum them. The resolution / sampling rate goes down by a factor of 2 but SNR improves by a factor of 2. It is a trade-off between resolution and image quality - and in fact, if you are oversampled (pixels too small for your scope, mount and skies), you don't lose any detail. CCD sensors can bin in hardware and both CMOS and CCD sensors can bin in software. The difference is in how much read noise ends up in the image: with CCDs and hardware binning the summing is done in silicon and only one read noise is applied, while with software binning the summing is done after read-out and each pixel gets its own read noise, so the overall read noise rises by a factor of 2 (or rather by the bin factor - if you bin 3x3 you get x3 read noise in the resulting large pixels). CMOS sensors have much lower read noise than CCDs, so that is not a big deal: an average CMOS has about 2e of read noise while a CCD has about 8e, so you can easily bin x2, x3 or even x4 and still have read noise smaller than or equal to an average CCD. For binning you can use any software that is capable of doing it - PI has it as IntegerResample, StarTools has an option to bin data in the processing stage after stacking, ImageJ can bin subs before you stack them while they are still raw, and so on (see the small sketch below). Sampling rate is calculated as: sampling rate ("/px) = pixel size (µm) * 206.265 / focal length (mm), and you can access a handy calculator for that here: http://www.wilmslowastro.com/software/formulae.htm#ARCSEC_PIXEL So your 80ED with 4.5µm pixels is sampling at 1.82"/px, which is a rather good general purpose resolution. What do your images look like when viewed at 100% zoom level (1:1 - one image pixel to one screen pixel)? If you still have nice round stars then your mount is doing a good job.
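     A minimal numpy sketch of 2x2 software binning by summing, just to show the mechanics (stacking software does this for you):

     ```python
     import numpy as np

     def bin2x2(img):
         """Sum each 2x2 block: 4x the signal, 2x the shot noise, so ~2x the SNR."""
         h, w = img.shape
         return img[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).sum(axis=(1, 3))

     img = np.random.poisson(100, size=(4, 6)).astype(float)  # fake photon counts
     print(bin2x2(img).shape)          # (2, 3) - half the sampling rate in each axis
     ```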
  25. Both sensor size and pixel size are very important considerations - as long as you match them to the scope. For example, the ED80 will not cover a full frame sensor and stars in the corners will not look good, as you no doubt already know. I think it is best to think in the following terms when deciding what sort of camera + scope combination to use. First determine what your working resolution / sampling rate will be. For example, if you want to go wide field, choose something in the range 2.5"/px to 3"/px. If you want medium to wide images, choose 2"/px. For a general / medium working resolution, that would be 1.5"/px. You should not work at finer sampling rates unless you have a very good mount, a larger scope (6" or larger) and can guide very well. The next step is to see how you can achieve such a resolution and what your field of view will be. Here sensor size and pixel size play a major part, along with the choice of telescope. Larger sensors have an advantage as they will be faster for a given sampling rate - you can use a larger telescope and bin the pixels to make them larger, and a larger telescope means more aperture, so more photons gathered at the wanted sampling rate. Pixel size gives you the "grain" in which you can change resolution. Large pixels mean that you'll have to use them as they are, or possibly bin x2. Small pixels give you more flexibility, as you can use them natively or bin x2, x3, x4, etc. For example, if you have 6.45µm pixels then you have that, or possibly 12.9µm if you bin x2. If on the other hand you have something like 2.9µm pixels, then you have 2.9µm, 5.8µm, 8.7µm and 11.6µm - native, bin x2, bin x3, bin x4. This means flexibility in choosing the imaging scope. The QHY8L has 7.8µm pixels and with the ED80 + FF/FR it will sample at 3.15"/px - so you are in wide field mode and there is no way to change that. But the same camera on a scope with 2000mm of focal length will offer 1.61"/px, 2.41"/px and 3.22"/px - all three options, depending on how you choose to bin (quick check below) - except that the FOV will be much smaller than with the ED80. In the end it is a balance and it will depend on what you choose as your working resolution. Do bear in mind that the EQ3 mount is inadequate for anything but very low resolution / wide field work. As for active cooling - cooling is important, but more important is set point temperature, as it enables you to do proper calibration with precise calibration files. That is much more important than the actual "sub zero temperature". So yes, go for set point cooling if you can - just keep in mind that dedicated astronomy cameras with cooled sensors can be quite expensive. If you like the QHY8L, then have a look at these: https://www.firstlightoptics.com/zwo-cameras/zwo-asi071mc-pro-usb-30-cooled-colour-camera.html or https://www.firstlightoptics.com/zwo-cameras/zwo-asi-2600mc-pro-usb-30-cooled-colour-camera.html Just a question - why don't you continue working with that full frame mirrorless, look into binning, and maybe put the funds towards a better mount instead? This looks like a very decent replacement mount: https://www.firstlightoptics.com/ioptron-mounts/ioptron-cem26-center-balanced-equatorial-goto-mount.html
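     A quick check of those QHY8L numbers with the usual sampling-rate formula:

     ```python
     def sampling_rate(pixel_um, focal_mm):
         """Arcseconds per pixel for a given pixel size and focal length."""
         return 206.265 * pixel_um / focal_mm

     # QHY8L (7.8um pixels) on a 2000mm focal length scope, at various bin factors:
     for bin_factor in (1, 2, 3, 4):
         print(bin_factor, round(sampling_rate(7.8 * bin_factor, 2000), 2))
     # 1 -> 0.8"/px, 2 -> 1.61"/px, 3 -> 2.41"/px, 4 -> 3.22"/px
     ```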