Everything posted by vlaiv

  1. Neither is very important, as you can handle either with the way you image. Read noise only matters when determining the minimum exposure length you should go for, and that also depends on your light pollution - or, in the case of an uncooled camera, on dark current at ambient temperature. Pixel size can be (somewhat) changed by binning. Smaller pixels have a certain flexibility in this regard, as they offer more options when binning. With a 3.75um pixel, the next step is 7.5um and the one after that is 11.25um. But something like 2.4um will have more steps in the same range: 2.4, 4.8, 7.2, 9.6, 12 (native, bin x2, bin x3, bin x4 and so on). However, in order to exploit that, you really need to know how to treat your data and do proper software binning.
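To make the binning arithmetic above concrete, here is a small Python sketch (illustrative only - the function name is mine) listing the effective pixel sizes each native size can reach:

```python
# Effective pixel sizes reachable by software binning a given native pixel.
def binned_pixel_sizes(native_um, max_bin=5):
    """Effective pixel size (um) for bin x1 .. x(max_bin)."""
    return [round(native_um * b, 2) for b in range(1, max_bin + 1)]

# 3.75um pixels take coarse steps; 2.4um pixels offer more options in the same range:
print(binned_pixel_sizes(3.75))  # [3.75, 7.5, 11.25, 15.0, 18.75]
print(binned_pixel_sizes(2.4))   # [2.4, 4.8, 7.2, 9.6, 12.0]
```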
  2. I don't think so, but that is my opinion. Since you are starting out, I would advise going for 1.8"/px resolution. With a focal length of 1000mm, that translates into a ~8.72um pixel size. That means you should get a 4.3um pixel camera and bin it x2, or maybe a 2.9um camera and bin it x3. Now the ASI585 might seem to fit the bill with its 2.9um pixel size, but have you checked the field of view of that camera at 1000mm of focal length? First, since it is 3840 x 2160, with bin x3 you'll actually get a 1280 x 720 image. Second, and probably more important, you'll get a very small field of view - only 0.64 x 0.36 degrees. Ideally, when starting out you want something in the middle ground, like 2 x 1.5 degrees or similar.
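The numbers in the post follow from simple plate-scale arithmetic; a hedged Python sketch (function names are mine):

```python
import math

ARCSEC_PER_RAD = 206265  # arcseconds in one radian

def pixel_size_for_resolution(arcsec_per_px, focal_length_mm):
    """Pixel size in microns that gives the target sampling rate."""
    return arcsec_per_px * focal_length_mm * 1000 / ARCSEC_PER_RAD

def fov_degrees(n_px, pixel_um, focal_length_mm):
    """Field of view along one sensor axis, in degrees."""
    sensor_mm = n_px * pixel_um / 1000
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_length_mm)))

# 1.8"/px at 1000mm focal length -> ~8.73um pixels
print(round(pixel_size_for_resolution(1.8, 1000), 2))
# ASI585 (3840 x 2160, 2.9um pixels) at 1000mm -> ~0.64 x 0.36 degree FOV
print(round(fov_degrees(3840, 2.9, 1000), 2), round(fov_degrees(2160, 2.9, 1000), 2))
```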
  3. Check whether NINA or other long exposure capture software is capable of using Sony cameras. I think APT certainly supports long exposures (at least with Canon DSLRs). Also look for an ASCOM driver for the Sony camera - then you can use it with any application supporting ASCOM cameras. SharpCap is usually not the best choice for long exposure astrophotography; planetary imaging and live stacking are where it is at its best.
  4. https://www.teleskop-express.de/shop/product_info.php/language/de/info/p15858_.html Not translated into English yet. There is also an official ASI version, but it won't be available from TS for the next 12 days: https://www.teleskop-express.de/shop/product_info.php/language/en/info/p15856_ZWO-Farb-Astrokamera-ASI533MC-ungekuehlt--Chip-D--16-mm---3-76--m-Pixel.html
  5. Yes - that's why I thought it could be the problem. Here is a little diagram showing what is happening: if the obstruction is in the region where the light beam is collimated (where the aperture sits), then it will affect the whole field, as it will block some of the light coming in at any angle. That is represented by the upper blue line in the diagram - it intersects beams at both angles. Once the beam starts to converge after the lens, a blockage that causes diffraction can affect only part of the field, depending on where the object blocking the light sits. In the image above, the blue line intersects only the light rays converging onto the right part of the FOV - so only stars in the right part of the FOV will be affected, while stars in the left part won't experience this, as their light is not "touched" by the object.
  6. @inFINNity Deck This nicely describes what I tried to explain in PM. In this particular instance there is a linear dependence between features in the two domains - frequency and spatial - and when you choose some feature in the spatial domain, you will find that it gives you some constant in which to express F/ratio. It does not, however, mean that the actual cut-off frequency depends on that feature - only that they are correlated and that there is a linear conversion constant between the values. I think it is best to always refer to the true cause of the band-limited signal: the aperture, and the fact that it imposes a hard cut-off frequency. That is what we can use as part of the criteria for the Nyquist sampling theorem.
  7. If you want to see how a lens performs, check out its MTF diagram and pay attention to the lp/mm part (line pairs per mm - convert that to pixel size: two pixels per line pair, one pixel per line and one per gap). You'll be surprised what sort of pixel sizes match a lens. I have the Samyang 85mm T1.5 (same as the F/1.4, only the "cinema" version without click stops on the aperture ring). Above is the MTF wide open. The red line is 10 lp/mm while the grey is 30 lp/mm. 30 lp/mm corresponds to 16.666um pixels (1000um in a mm / 30 = 33.333um per line pair, half of that per pixel, so 16.666um), and the MTF is already at 70% and dropping with distance from the center. Here is what stars look like (artificial star) at the center and at the corner of a 4/3 sensor (ASI1600): top to bottom is F/1.4, F/2, F/2.8 and F/4; left is center, right is edge. In the first row the center star is not showing well because the exposure was not long enough - but there is a massive halo from chromatic aberration. This is an actual star field at F/1.4 in the R, G and B channels - you can see how much bloat there is in red compared to green, and blue has a halo around the bright star. Same image in color, reduced to 50% size (pixel size 4.8um, as this was taken with the ASI178): the strange star color can easily be seen, as well as the halo around the bright star. While the center of the field can be tidied up by stopping the aperture down to, say, F/3.2 or maybe F/4, the astigmatism + coma in the outer field can't (they also show at F/4 only 11mm away from the center, given that the diagonal of the ASI1600 is only 22mm). But we digress. I think the M63 Riccardi will be an excellent FF/FR for that scope (and probably most other refractors).
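The lp/mm-to-pixel conversion used above is just one line; a sketch (function name is mine):

```python
def lpmm_to_pixel_um(lpmm):
    """Pixel size (um) that matches a given line-pairs-per-mm figure:
    one line pair spans two pixels (one for the line, one for the gap)."""
    return 1000 / (2 * lpmm)

print(lpmm_to_pixel_um(30))  # ~16.67um - the grey 30 lp/mm curve
print(lpmm_to_pixel_um(10))  # 50.0um - the red 10 lp/mm curve
```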
  8. It won't work. I'm not sure which one the M68 version of the x0.75 Riccardi is, but why don't you go with the M63 / smaller of the two? I know there are M82 and M63 versions. The M82 can illuminate full frame, but the M63 can't - it can only do a 42mm diagonal, which makes it suitable for APS-C sized sensors (full correction and illumination over 30mm). The question is: why do you want a "faster" scope? Maybe you should aim for a specific resolution with that scope rather than "speed"? By the way - there are no optics at F/1.4 that are sharp, at least not sharp in telescope terms. I'm yet to see a lens that is diffraction limited, let alone one with a high Strehl ratio.
  9. That is an excellent example of a thermal problem between you and Jupiter. Some sort of object giving off heat - like a chimney or similar - was right between the scope and the target.
  10. Yes, very nice analysis. By the way, there is a regular FFT in ImageJ that has a very nice feature. It does not offer the flexibility of FFTJ, but it does provide a way to very quickly measure the frequencies in question. It is located under the Process / FFT / FFT menu item. Once executed (and it works even for RGB images), it creates a sort of hybrid image - one that displays the log of intensity in the frequency domain (but keeps the phase as well, so it can perform the inverse FFT when you finish modifying it). It is displayed in 8-bit mode and very stretched. When you hover the cursor over parts of the image, it gives you info on the frequency. Here is the FFT of a crop of one of the images used in this discussion (I think it is the Tak image), and I marked with a pointer where the cursor was when I took the screenshot (screenshots won't capture the cursor). Above we can see that the frequency, or rather the wavelength, is 5.17 pixels per cycle at that point. One more note - this FFT function "cheats" a bit: it scales the image to the nearest larger power of two and then performs the FFT, so it is very fast.
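A rough numpy analogue of that ImageJ readout (this is my sketch, not ImageJ's code): compute the centered log-magnitude spectrum, then convert a bin's distance from the center into pixels per cycle.

```python
import numpy as np

def log_spectrum(img):
    """Centered log-magnitude spectrum, similar to ImageJ's FFT display."""
    return np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))

def pixels_per_cycle(img_size, bins_from_center):
    """Wavelength at a spectrum point sitting 'bins_from_center' bins away."""
    return img_size / bins_from_center

# Sanity check with a pure sinusoid of known wavelength (5.12 px/cycle):
n = 512
x = np.arange(n)
img = np.sin(2 * np.pi * x / 5.12)[None, :] * np.ones((n, 1))
row = log_spectrum(img)[n // 2]            # horizontal frequency axis
peak = np.argmax(row[n // 2 + 1:]) + 1     # strongest non-DC bin offset
print(pixels_per_cycle(n, peak))           # recovers ~5.12 px/cycle
```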
  11. That is very strange - there should be no change with cropping. The response in the frequency domain does not change based on the size of the image.
  12. @inFINNity Deck Here is an interesting experiment, if you are up for it. Generate random noise and print it on a piece of paper. Place that piece of paper at some distance (at least 30-40 meters, to avoid too many issues with close focus) and use a barlow whose distance to the sensor can be adjusted (you can also print a scale on the same paper as a reference for how much magnification each position gives). Take a narrow-band filter like OIII, which sits at 500nm, and then use F/ratios of pixel size x3, x4, x5 and x6, for example, to record that noise patch. Use a longer exposure to get good SNR (don't bother with stacking - if everything is OK, there should be no seeing effects in such a close-range setup). This way we should get a very clear cut-off point for the aperture in the frequency domain.
  13. Yes, I would expect so. One of the difficulties of using the above frequency-spectrum approach for determining whether an image was properly sampled is the SNR level in the image and how much the image was sharpened. We are trying to estimate the telescope MTF based on the result it produces, but there are several unknowns in that process:
     1) the actual frequency response of the target (luckily, here we know what the target is and we are sure that higher frequencies are present - smaller detail is readily visible in Hubble images, so we know that we have not reached cloud "smoothness" level yet; in fact it is far away);
     2) the MTF of the telescope;
     3) seeing-induced blur after stacking (while we use lucky imaging to minimize the impact of the seeing, there is still some residual seeing left, which is averaged out in stacking);
     4) noise due to sampling;
     5) sharpening effects.
     In the end, the profile in the frequency domain might look like the graph above. The black line is the MTF of the telescope, the orange line is what is restored in the image, but there is also an orange jagged horizontal line that represents noise in the image. Noise is usually equally distributed in the frequency domain, and at some point it will cross the signal line. When this happens, we have no way of determining where our signal line actually finishes. The best case scenario is this (high SNR): the noise is low compared to the signal, and although we are wrong, we are only wrong by "a few pixels" - we have a good estimate of where the signal finishes in the frequency domain. In the image above, the vertical line would represent our sampling rate. Left of where the MTF hits zero would be under-sampling, right of that point would be over-sampling, and right at that point would be proper sampling. In the frequency domain image, proper sampling (high SNR) would look like this: the faintest part would just touch the edges of the image. In your original image above, it is really hard to tell where signal ends and noise begins. From just looking at the image, it could be the inner circle, but to my eye there is still some gradient between the two circles, and it seems the circle extends beyond the image (which would indicate under-sampling). However, due to the SNR, I can't really be certain. Maybe the circle extends even further, and that faint glow we see in the corners is still signal? By the way, the circular signature in the images above comes from the aperture and the Airy disk. When we enlarge an image in software, the signature is a bit different, as the transform is done in the digital domain. I think I'm detecting a hint of it in your enlarged image: here the outline tends to be more square-shaped than round. You can check this by simply making a noise image and enlarging it by a factor of 2. Top is the small image's FFT and the original image, and bottom is the enlarged version (Cubic O-MOMS interpolation) and its FFT (FFT to the left). This square operates on both the noise and the original signal, so the rest of the image at high frequencies will be noise-free and darker, as shown in the FFT of your enlarged image.
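The square FFT signature of digital enlargement is easy to reproduce. Below is a sketch in plain numpy using separable linear interpolation rather than Cubic O-MOMS (an assumption on my part - the qualitative effect, dark corners bounded by a square, is the same for both kernels):

```python
import numpy as np

def upsample2_linear(img):
    """Enlarge a square 2-D array by factor 2 with separable linear interpolation."""
    n = img.shape[0]
    x_old = np.arange(n)
    x_new = np.arange(2 * n) / 2
    rows = np.array([np.interp(x_new, x_old, r) for r in img])
    return np.array([np.interp(x_new, x_old, c) for c in rows.T]).T

def corner_vs_center_energy(img):
    """Mean spectral power in a corner patch relative to the central region."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    n = img.shape[0]
    center = spec[n // 4 : 3 * n // 4, n // 4 : 3 * n // 4].mean()
    corner = spec[: n // 8, : n // 8].mean()
    return corner / center

rng = np.random.default_rng(0)
small = rng.standard_normal((128, 128))
big = upsample2_linear(small)

print(corner_vs_center_energy(small))  # white noise: spectrum roughly flat
print(corner_vs_center_energy(big))    # enlarged: corners far darker than center
```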
  14. Here is another interesting exercise we can do - we have two 14" scopes in that image. Given that the Jupiter image is roughly the same size in both (we can't be certain of the date each image was taken, or whether Jupiter's apparent size in the sky was the same), we can compare whether both of them are over-sampled by an equal amount. Do the maximally stretched circles match in size? Here it is - two C14s compared in the frequency domain:
  15. I worked on all three images except Chris's, and I think we have the same result except for the level of stretch. I'll now do the same on Chris's version and post the results. Here are the steps:
     1. Load the image in ImageJ
     2. Make a 512x512 selection centered on Chris's image (512x512 is just for the FFT - much quicker computation than an arbitrary ROI, and there won't be a difference in the spectrum as far as intensity goes, just phase)
     3. Image type -> RGB stack
     4. Select the green channel in the stack and separate it as a single image
     5. Image / Type, convert to 32-bit
     6. FFTJ / single precision (there is really no difference for this application) / spectrum log centered at image center
     7. Manual stretch
     If I don't stretch it completely, here is what I get: pretty much the same as what you got. But if I stretch it to the limits, here is what I get: now, you are right - the image is not over-sampled by a factor of x2; it is over-sampled by a factor of, say, 2.5-3, and it looks like it was either heavily affected by seeing or not sharpened properly. To my eyes, this is the domain where there is some information: however, as you move from the center, there is a point where the MTF of the recording drops significantly. If we want to represent this as an MTF graph, it would look like this - which is consistent with poor seeing superimposed on the telescope optics.
  16. Actually, here is one explanation I was previously not aware of for why someone might over-sample: use of an ADC. Apparently it does correct for dispersion but adds optical aberrations, and the aberrations are smaller at higher F/ratios.
  17. Could it be that: a) people associate a larger image with more quality (selfish gene - wanting bigger / better / more, or comparing oneself with Hubble images); b) if person A is making such a large image, why can't I? (competitiveness); c) the general public's expectations and trying to get a likable image? I'm not saying any of them is doing it consciously - I just know that I have found myself, on several occasions, struggling between what is expected and "what is right", and that tells me the struggle is real.
  18. If you don't want to mess with a barlow - then sure ASI678.
  19. Do you know what type of aberrations the ADC introduces? Regardless, there will be a critical sampling rate for the combination of aperture and whatever aberrations the ADC introduces at a given F/ratio. Critical sampling is related to the point where the MTF graph hits 0. For example, here we have a comparison of the effects of central obstruction. Obstructions introduce a "sag" in the MTF graph - but they don't change critical sampling; all the graphs end up at the same point. How much the MTF sags compared to a perfect aperture represents how hard you need to sharpen to restore a proper image. Some aberrations just make the MTF sag more, while others can put it to zero "earlier" than it would otherwise hit it.
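Since critical sampling is tied to the MTF cutoff of the aperture (1/(lambda * N) for a circular aperture, obstructed or not), the matching pixel size follows directly. A sketch, assuming green light unless stated otherwise:

```python
def critical_pixel_um(f_ratio, wavelength_nm=550):
    """Largest pixel (um) that still critically samples a perfect aperture.
    The MTF cutoff is at 1/(lambda * N) cycles per unit length; Nyquist
    needs two pixels per cycle, so pixel = lambda * N / 2."""
    return wavelength_nm / 1000 * f_ratio / 2

print(critical_pixel_um(10))    # 2.75um at F/10
print(critical_pixel_um(11.6))  # ~3.19um
```

Inverting this gives F/ratio ~ 3.6 x pixel size at 550nm - the same ballpark as the "pixel size x4" planetary rule discussed elsewhere in this thread.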
  20. Just to add - the above is the highest power that is needed. Then there is the lowest power that is usable - it is related to the size of the exit pupil and how much one's pupil dilates in the dark. Medium power sits smack in the middle between those two.
  21. Here is another rule to try out. Maximum magnification (except for double star observing - people seem to like seeing the disks and not just detecting separation) is the one that equates the angular resolution of your vision to the Airy disk radius. Here is a nice table: https://en.wikipedia.org/wiki/Visual_acuity#Expression If you have 20/20 vision, your resolution is 1 minute of arc (MAR column - minimum angle of resolution). For a 200mm telescope, the Airy disk radius is 0.64". 60 seconds of arc / 0.64" = ~x94. We can round that to x100. If you have 20/20 vision, the max power that you need is x100 (you may want a more magnified image, but you won't need it to see all that can be seen). If you have 20/10 vision, then you only need x50; but if your vision is 20/40, then you need x200. How much power you need depends on how good your eyes are. People with worse visual acuity need more magnification (and often don't mind higher magnifications as much - to them x500 does not look as blurred as it does to someone with sharp vision).
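The rule can be written down directly. A sketch (function name is mine; the exact Airy radius depends slightly on the wavelength assumed, which is why the post's 0.64" figure differs a little from the 550nm value below):

```python
def max_useful_magnification(aperture_mm, acuity_arcmin=1.0, wavelength_nm=550):
    """Magnification at which the Airy disk radius matches the eye's
    resolution limit (1 arcmin MAR for 20/20 vision)."""
    # Airy disk radius: 1.22 * lambda / D, converted from radians to arcseconds
    airy_arcsec = 1.22 * wavelength_nm * 1e-9 / (aperture_mm / 1000) * 206265
    return acuity_arcmin * 60 / airy_arcsec

print(round(max_useful_magnification(200)))                     # ~x87 for 20/20 vision
print(round(max_useful_magnification(200, acuity_arcmin=2.0)))  # ~x173 for 20/40 vision
```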
  22. You can't use a (flawed) calculator for deep sky imaging as a guide for planetary imaging. Lucky-type planetary imaging is a completely different activity. In planetary imaging you hope to get around the effects of seeing, while with long exposure deep sky imaging there is no getting around it, and you need to account for its effect on the resolution of the image. With planetary-type imaging, you go by aperture alone - how much the telescope can resolve without the impact of the atmosphere (and hope for the best - or rather, you image and hope there will be moments of good seeing that the stacking software can use). The best guideline for you in terms of F/ratio for planetary is pixel size x 4. That will give you the best results. That is F/11.6. Your scope is F/10, so you might even skip the barlow entirely (with SCTs, focal length depends on primary-to-secondary separation, so if you move the camera backward you could even reach F/11 or above). The alternative is to get a barlow element that screws into the nosepiece of the camera, plus a short 1.25" extension, in the hope of getting the barlow to work at around x1.2 (the closer you place it to the sensor, the less magnification it gives).
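The F/ratio arithmetic from the post, as a small sketch (function names are mine; 2.9um is an assumed pixel size, chosen to reproduce the F/11.6 figure):

```python
def planetary_f_ratio(pixel_um):
    """Rule of thumb from the post: optimal planetary F/ratio ~ 4 x pixel size (um)."""
    return 4 * pixel_um

def barlow_magnification(native_f, target_f):
    """Barlow magnification needed to go from the native to the target F/ratio."""
    return target_f / native_f

pixel = 2.9  # um - assumed camera pixel size
target = planetary_f_ratio(pixel)
print(target)                                      # 11.6
print(round(barlow_magnification(10, target), 2))  # 1.16 from an F/10 SCT
```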
  23. I guess with a bit of effort one can be found used for 250? The above was one of the first Google results for a used A6000.
  24. Actually, something like a Sony mirrorless camera with an APS-C sensor can be faster than those "entry level" CCD cameras (at least those up to ~10mm in diagonal, like the Atik 314+ and similar). I'm not saying it will be true for a novice, but someone who knows what they are doing can take advantage of the real estate of an APS-C sized sensor and turn that into speed. Think about it: a diagonal that is almost x3 larger means a scope with an x3 longer focal length will cover the same FOV, but an x3 larger scope at the same F/ratio has an x3 larger aperture and hence accumulates x9 as much light over the same FOV. You can't beat that with good QE and thermal noise control.
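The speed argument is plain arithmetic; a sketch (the x3 diagonal and ~10mm-vs-APS-C figures come from the post above):

```python
def relative_speed(diagonal_ratio):
    """For the same FOV and F/ratio, light gathered scales as the square of
    the sensor-diagonal ratio (the scope's aperture scales linearly with it)."""
    return diagonal_ratio ** 2

print(relative_speed(3))        # x3 diagonal -> x9 the light, as in the post
print(relative_speed(28 / 10))  # APS-C (~28mm diagonal) vs a ~10mm CCD diagonal
```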