Everything posted by vlaiv

  1. Yep, a small sensor will exclude all problems with vignetting. Maybe some sort of Newton's rings? Those happen when two optical elements are almost touching - one highly curved and the other flat, for example. The only place in your optical train where I can see this happening is the filter at the end of the coma corrector. The coma corrector probably has a highly curved "field lens" (the lens facing the telescope), and it sits very close to the filter thread? This is the only image that I found online that shows the CC from that angle - and I can't really tell, due to the good AR coatings: Maybe try moving the filter between the CC and the camera instead of it being on the far side of the CC.
  2. So, it's vignetting like this? Could you post your master flat and the result of calibration? Sometimes there is residual vignetting in the image if you don't do proper calibration or the calibration fails for some reason (like mismatched temperature or a light leak). Again - this is not related to the fact that the CC protrudes into the light path - but it does have to do with the model of coma corrector. Some coma correctors are optimized for a certain sensor size, and if you use them on larger sensors you get vignetting and poor edge correction. What camera do you use and what is the model of the coma corrector (is it the Aplanatic F4 model? It says it will cover APS-C but does not say much about vignetting)?
  3. I'm fairly positive that rings won't be produced by the CC protruding into the light path. Rings can be reflection issues if they appear around bright stars (those are in fact the same stars - just out of focus). They can also be due to dust particles somewhere in the optical train. What sort of rings are we talking about here?
  4. The image is a teensy-weensy bit noisy, but here is the result: this is the extracted green channel (FitsWork and VNG debayering) with a 192x512 pixel sample taken across the edge. I took the derivative of the edge profile and got this: the LSF is there, but so is quite a bit of noise.
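     A minimal sketch of that differentiation step in Python / NumPy (the file name and orientation are assumptions - the crop is taken to be the 192x512 green-channel sample with the edge running vertically; the final FFT step is the usual follow-on to get a relative MTF):

         import numpy as np
         from astropy.io import fits  # assumes astropy is available; any FITS reader works

         # Load the green-channel crop containing the edge
         # (file name is a placeholder; the edge is assumed to run vertically)
         crop = fits.getdata("edge_green_crop.fits").astype(float)

         # Average along the edge direction to beat down noise -> edge spread function (ESF)
         esf = crop.mean(axis=0)

         # Differentiate the ESF to get the line spread function (LSF)
         lsf = np.gradient(esf)

         # Magnitude of the Fourier transform of the LSF gives a relative MTF
         mtf = np.abs(np.fft.rfft(lsf))
         mtf /= mtf[0]  # normalize to 1 at zero frequency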
  5. Forgot to say - if you can also include the following, that would be really helpful: 1. something of known size at the same distance, so we can determine the pixel scale used - maybe a ruler of sorts, or just place something whose dimensions you have measured at the foot of that pole; 2. a precise measurement of the distance to the target (about 125m is "okayish", but 127.63m is better). The problem with Maksutov telescopes is that they focus by shifting the primary mirror, so their focal length depends on the primary-to-secondary distance. There can be a significant difference between declared and actual focal length, depending on focus distance and where in the optical train the camera sits (extension tubes and where the optimal focus position is with respect to that). Without the above we can produce a "relative" curve, but with that info we can produce an absolute curve and compare it to the theoretical curve for that scope.
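     For illustration, a small sketch of how those two measurements would be turned into pixel scale and effective focal length (all numbers here are placeholders, not measurements):

         import math

         # Placeholder inputs - replace with measured values
         object_size_m  = 0.30     # physical size of the reference object (m)
         distance_m     = 127.63   # measured distance to the target (m)
         object_size_px = 250.0    # size of the object on the sensor (pixels)
         pixel_size_um  = 4.1      # camera pixel size (micrometers)

         # Angular size of the reference object in arcseconds (small-angle approximation)
         angle_arcsec = math.degrees(object_size_m / distance_m) * 3600.0

         # Pixel scale and the effective focal length it implies
         pixel_scale = angle_arcsec / object_size_px              # arcsec per pixel
         focal_length_mm = 206.265 * pixel_size_um / pixel_scale  # mm

         print(f"{pixel_scale:.2f} arcsec/px, effective FL ~ {focal_length_mm:.0f} mm")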
  6. I said no because in the first sentence you said that we only have matching FOV and 10X vs X aperture size. If only those two conditions are met, then the answer is no - there is no guarantee that the same SNR will be reached in x10 less time with 10X aperture vs X aperture. In fact, the odds are that it will not. I answered yes to the second part, where you started questioning pixel size and quantum efficiency and all the rest - that yes refers to the fact that all conditions must be met for the SNR to be the same in x10 less time. Only then can you say that 10x aperture will reach the same SNR in x10 less time. By the way, I did not specify it, but it's worth mentioning - X and 10X are aperture area, not diameter.
     No cancelling out, and you don't need a larger pixel size. Here is what happens - the telescope aperture gathers light and the pixel size divides that light. Say you have a target - how many photons the telescope collects depends on aperture size. Larger aperture - more photons collected in the same amount of time. The number of photons is proportional to aperture area. All photons from the target get divided among the pixels covering the target. If you have smaller pixels, more of them are needed to cover the target; if you have larger pixels, fewer are needed. All the light from the target is spread over those pixels covering it - and no photons end up elsewhere (some of them actually do end up elsewhere, but we model that by other means - reflectivity of mirrors / transmission of glass, scatter and such). Say your target gives off 10000 photons per time period. If it is covered by large pixels, only 100 of them are enough to cover the target - we end up with 100 photons per pixel (10000/100). If we use smaller pixels and this time need 400 of them to cover the target, we get only 25 photons per pixel, as 10000/400 = 25. Smaller pixels mean less signal and poorer SNR.
     Well depth really does not come into this. It somewhat dictates single exposure length, not total imaging time, and only if you want to avoid saturation - but in reality you'll base your exposure length on read noise, not full well capacity. Any saturated parts you can always "fill in" with a handful of short exposures at the end of the session.
     You not only have to scale the sensor to match FOV - you also need to "scale pixels" to match resolution. With a larger sensor you can always get larger pixels by binning them. If you want to compare two systems very basically, use "aperture at resolution", or "aperture at resolution times QE" if you want to include the QE of the sensor (when it is significantly different). Just be careful that both aperture and resolution need to be squared (both need to be area rather than length, like diameter or pixel size) if you want to compare it to time (have a linear dependence).
     Here is a quick example. How does 100mm at 2"/px and 60% QE compare to 150mm at 1.6"/px and 80% QE in terms of "speed"?
     100^2 * 2^2 * 0.6 = 10000 * 4 * 0.6 = 24000
     150^2 * 1.6^2 * 0.8 = 22500 * 2.56 * 0.8 = 46080
     The 150mm setup is x1.92 (46080/24000) faster than the 100mm setup - or 150mm in 1h will have the same signal as 100mm in 1.92h if they image on the same night under the same conditions.
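     The same worked example as a tiny Python helper (just the rule of thumb above, with both aperture and pixel scale squared):

         def relative_speed(aperture_mm, arcsec_per_px, qe):
             """'Aperture at resolution times QE' figure - aperture and pixel
             scale are squared so the result scales linearly with time."""
             return aperture_mm ** 2 * arcsec_per_px ** 2 * qe

         setup_a = relative_speed(100, 2.0, 0.6)  # 24000
         setup_b = relative_speed(150, 1.6, 0.8)  # 46080

         print(setup_b / setup_a)  # ~1.92 - the 150mm setup is about x1.92 "faster"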
  7. I think we can certainly give it a go as is. The EOS 7D is going to slightly undersample, as it has a ~4.1µm pixel size - but it will be close enough. I can work with the raw image and the edge being straight up; Alex is probably going to want a slanted edge for his software. It will be interesting to compare the results.
  8. Then the thing is rather easy - extract plane number 2 and plane number 4. This will give you two images with half the pixels in height and width - and you treat those two images as two separate observations. If you've taken, for example, 16 images, you'll actually have 32 observations rather than 16 - just remember that each pair was recorded at the same time (if time is important in your further analysis).
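     A minimal sketch of that plane extraction with NumPy slicing (assuming an RGGB Bayer layout, so the two green fields sit at offsets (0,1) and (1,0); which planes count as "2" and "4" depends on the tool, so adjust the offsets to your pattern):

         import numpy as np

         def split_green_planes(raw):
             """Split the two green fields of a Bayer mosaic into separate images.

             `raw` is the undebayered sensor frame as a 2D array. For an RGGB
             layout the greens are at (even row, odd column) and (odd row, even
             column); each returned image has half the width and half the height.
             """
             green_1 = raw[0::2, 1::2]
             green_2 = raw[1::2, 0::2]
             return green_1, green_2

         # Example: 16 raw frames become 32 green "observations"
         # greens = [plane for frame in raw_frames for plane in split_green_planes(frame)]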
  9. Look at my next post: https://stargazerslounge.com/topic/371424-mtf-of-a-telescope/?do=findComment&comment=4040601 I think that the Roddier method calculates things in nanometers from the sensor pixel size and other data in millimeters - like aperture size and focal length - and only uses the wavelength of light to give answers in "waves". For example, 100nm at a 400nm wavelength is equal to 1/4 of a wave, because 400nm / 4 = 100nm, but the same 100nm is 1/6.56 wave at the Ha wavelength of 656nm, because 656nm / 6.56 = 100nm. For that reason, changing the wavelength does not alter the absolute wavefront OPD (optical path difference - the deviation of the wavefront from ideal, in nanometers, or how much the wavefront lags or advances with respect to the ideal wavefront), but it does change things expressed in waves, and the Strehl ratio, which also depends on wavelength.
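     The conversion in that example, as a two-line helper (nothing WinRoddier-specific, just OPD in nanometers divided by wavelength):

         def opd_in_waves(opd_nm, wavelength_nm):
             """Express an absolute wavefront error (nm) as a fraction of a wave."""
             return opd_nm / wavelength_nm

         print(opd_in_waves(100, 400))  # 0.25   -> 1/4 wave at 400 nm
         print(opd_in_waves(100, 656))  # ~0.152 -> ~1/6.56 wave at Ha (656 nm)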
  10. I think that sensor MTF can be made insignificant if we oversample. That might not be an option when working at prime focus, but it can be arranged when using the afocal method. Then we just need to figure out how much the lens used for imaging is messing things up. Here is what a perfect square pixel's MTF looks like when we sample at the optimum sampling rate: So yes, it does have a fall-off of about 0.7 at the maximum frequency (possibly exactly 1/square root of 2). Indeed, that should be taken into account, even when we are at the optimum sampling rate.
  11. Yes, that is correct - but when you use tools that extract the green channel, they mostly do the following: reconstruct the R, G and B channels using some sort of interpolation, and then you get to save the G channel from that. If you want to get only raw green data - and get two sub-frames from each image - you can easily do that, but it is a bit more involved. Take any software that will convert camera raw data to FITS format while still preserving the Bayer matrix (DCRaw will do that from the command line; FitsWork will do it from a GUI and supports batch processing) and I'll provide you with a plugin for ImageJ that splits the fields for you, so you can extract only green-channel data.
  12. Both methods are the same. If you end up with the same number of pixels in your green channel as the sensor size would suggest (like if you have a 6000x4000 px sensor and you end up with a 6000x4000 green channel), you did not lose data - you "added" data. The missing green values have been interpolated from the green component of the Bayer matrix. For photometry purposes that is just fine, as interpolation should preserve average pixel values (in its simplest form, missing values are filled in by linear interpolation, which is just the average of two neighbouring existing values - so the average of the lot does not change).
  13. Now that I think about it - maybe the above behaviour with wavelength is not a bug and is quite fine. The defocus pattern is an image on the sensor, and sensor pixel size is in units of length. We have all the data needed to reconstruct the wavefront in terms of length (µm, nm), and we only need the wavelength of light to translate that into relative units of "waves"? So the question is - have I made an error in converting to length units? I don't think that is the case, for two reasons:
      1. In the first test we had two independent aberrations - spherical and astigmatism. If the problem were with wavelength, the ratio of measured to generated errors would be constant. However, we have 29.815nm vs 25.156nm, a ratio of ~1.1852, and 12.758nm vs 4.925nm, a ratio of ~2.59.
      2. There is a little utility that comes with WinRoddier to help you calculate how big you need to make the defocused star in order to get the wanted defocus in waves: for the given test parameters it estimates a defocus pattern with a diameter of ~141px on screen. The roughly measured out-focus pattern is ~140px (well, looking at the image I would say more like 138px - the measurement line is a bit outside the pattern in the top right corner). WinRoddier measures roughly the same (note that it reports a radius, so the diameter is twice the value).
  14. No. Yes, all of those things will have an impact. First comes pixel size. A 10x aperture scope will collect 10x the light from the same FOV - but how will all that light be divided into chunks? A smaller number of chunks means a higher value per chunk. If the pixels are the same, then yes, the 10x aperture scope will have 10x the light fall on each pixel. Next comes QE. Say the first camera has 60% QE while the second has 80% QE - that is a one-third increase in QE - not insignificant. Given that all parameters are equal, then yes, 10x aperture will have x10 higher signal than x aperture for the same exposure time. The problem is that parameters are never equal. Even night-to-night differences can be significant in terms of achieved SNR. Transparency can change by as much as 0.5 mags or more between nights. That is about a 50% increase / reduction in signal strength from the target! Yep, almost the same as the difference between 4h of imaging and 6h of imaging. Not to mention LP. One magnitude of difference in SQM can be roughly x2.5 in imaging time! Yes, what you can achieve in 1h under an SQM 20 sky will need 2.5h of exposure under SQM 19 and 6.25h under SQM 18.
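      A quick sketch of that sky-brightness scaling (assuming the usual one magnitude ~ factor 2.512 in flux; the real scaling also depends on target, filters and how background-limited you are):

          def exposure_scale(sqm_dark, sqm_bright):
              """Rough factor by which total exposure time grows when the sky
              brightens from sqm_dark to sqm_bright (background-limited case)."""
              return 2.512 ** (sqm_dark - sqm_bright)

          print(exposure_scale(20, 19))  # ~2.5 - SQM 19 needs ~2.5x the time of SQM 20
          print(exposure_scale(20, 18))  # ~6.3 - SQM 18 needs ~6.3x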
  15. Ok, I have now performed a simpler test and the results are the same - wrong. I think that I also found a bug in WinRoddier. The simulation is as before, except that I did not make a difference in defocus - both in and out focus are exactly at 22 P-V waves. I also did not add an astigmatism term, but simply left things at 1/4 PV primary spherical.
      Just as a check, I generated the same thing in Aberrator 3.0 - 80mm f/6 with 0.25 PV spherical - and generated the wavefront and MTF. Here is the comparison between my wavefront and MTF and those of Aberrator (the colour scheme is a bit different - I used the "physics" LUT in ImageJ, which seems to be lacking violet a bit), but the numerical values are the same - both have 0.25 P-V, so you can't really go wrong there. Here are the MTFs: again, the same. Here are the test in/out focus patterns: And here is the result from WinRoddier: again, the PV is 105nm, or 1/4.743 instead of 1/4 - WinRoddier has once more overestimated the quality of the wavefront. There is again a strange defocus term, but I guess that is just a quirk of the method and should not be taken into account. Again, the other terms are not completely 0.
      For a moment I thought that maybe I had entered the wavelength wrong - but no, it was 500nm all the time. In fact, I decided to change it to 600nm to see what happens: and the strangest thing happened - the results stayed the same! Only the errors expressed in waves and the Strehl ratio changed???
  16. This paper discusses sensor MTF. Do you know what sort of differences we might expect when trying to evaluate optics MTF instead?
  17. Yes, I know that, but I'm asking if we need to measure it at all, given that it is a "derived" quantity. We have a definition of time and we have a definition of the speed of light. Currently, distance is a derived quantity - one meter is the distance traveled by light in vacuum in 1/299792458 of a second. I'm wondering if the fact that we can't measure the speed of light in one direction only now translates into a problem of measuring distance in either direction? Maybe it does not translate - but could we similarly postulate that distance is not the same in both directions, with no way to test it?
  18. Don't we know the speed of light exactly? I mean, we might not know how long a second or a meter is, but we do know the speed of light in vacuum to be exactly 299792458 m/s - by definition. How can we measure whether a stick is 1 meter long in both directions?
  19. That is a good point. I had not thought about it, but yes, it could be that the in/out focus difference was the issue. I deliberately made a small difference in defocus - I used 22.1 waves of defocus in one direction and 22.3 waves in the other - to simulate the fact that one can't repeat the exact same defocus in both the in and out directions (unless using a motorized focuser?). Maybe that was the problem? I'm going to do another test, this time using a simple wavefront (just 1/4 PV spherical) and exact in/out focus.
      In the meantime, I'm going to copy the results that I posted to Gerry in a private discussion. Simulation parameters: I used 1/5 PV primary spherical and -1/8 PV oblique astigmatism to generate the wavefront. 80mm F/6 scope, 3.75µm camera, star at infinity, 500nm wavelength, 22 PV waves of defocus (in fact I used 22.1 and -22.3 waves to simulate the fact that it is hard to do exactly equal in and out defocus - even when measuring the defocus pattern size, that is something like a 2-3px difference in diameter). Actual test images used: Results:
      1/5 PV spherical is equivalent to ~1/16.77 RMS. For a 500nm wavelength that gives a spherical OPD coefficient of 29.815nm (the test reported ~25.156nm). 1/8 PV oblique astigmatism in RMS is ~1/39.192, which for 500nm translates to 12.758nm (the test reported 4.925nm). The WinRoddier implementation of the Roddier test has clearly overestimated the quality of the wavefront. I'm also surprised by the variation in the other Zernike terms - I did not add any noise in this simulation.
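      The expected coefficients above follow from the PV-to-RMS ratios implied by the quoted numbers (~3.354 for primary spherical, ~4.899 for oblique astigmatism); a minimal Python sketch of that conversion, reproducing the figures used here:

          import math

          # PV-to-RMS ratios over the unit pupil for these (normalised) Zernike terms,
          # consistent with the ~1/16.77 and ~1/39.192 RMS figures quoted above
          PV_TO_RMS = {
              "primary_spherical":   1.5 * math.sqrt(5),  # ~3.354
              "oblique_astigmatism": 2.0 * math.sqrt(6),  # ~4.899
          }

          def expected_opd_nm(pv_waves, term, wavelength_nm=500.0):
              """Turn a PV aberration (in waves) into the expected RMS OPD in nm."""
              rms_waves = pv_waves / PV_TO_RMS[term]
              return rms_waves * wavelength_nm

          print(expected_opd_nm(1 / 5, "primary_spherical"))    # ~29.8 nm (test gave ~25.2 nm)
          print(expected_opd_nm(1 / 8, "oblique_astigmatism"))  # ~12.8 nm (test gave ~4.9 nm)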
  20. Could well be. I'm going to detail the exact procedure used, and hopefully someone will point out the error if there is one. What does baffle me is that I get consistent results with the analytical approach - my MTF looks exactly the same, the Airy pattern looks the same, and the edge method above works flawlessly when done the way the analytical math suggests.
  21. I think I'm a bit confused now with all of this - not the theory, but the fact that we all got different results. To add to that, in conversation with @jetstream I learned that some people have expressed concern that WinRoddier might be giving wrong results - so I did a test on it, and indeed I got different values than those I used to generate the wavefront. If I had not cross-checked my data with Aberrator, I would surely have concluded that I'm at fault - but the wavefront I generate is the same as Aberrator 3.0's, and so is the MTF. A bunch of other things check out as well (like the expected star defocus size and such). I'll run the tests once more with a very simple wavefront and post my findings here so we can discuss those as well. I'm hoping that we will eventually have a simple test that amateurs can perform, that is reliable enough and produces MTF or Strehl for their scopes.
  22. It did work. On a quiet night I managed to guide at about 1" RMS. This was before I modded my HEQ5. The fact that I needed 15kg of counterweights made me feel really uneasy. The mount should be good for up to 15kg, I think, and the OTA alone here is 11kg. Add a guide scope, rings and all that, and I was really conscious of the weight of the setup all the time, always watching as if something would give. I would say that setup was at the upper limit even as far as visual goes. I'm guessing that a 10" is even heavier - by at least a few kg. That is really going to push the mount past its payload capacity.
  23. Nope. Been there, done that with an 8" dob OTA and, nope - don't do it. Yes, that is an imaging setup with 3x5kg counterweights. What do you think wind does to a setup like that?
  24. I agree with everything except this part. That is the part that I stressed should not be extrapolated from daytime photography and the F/speed of a lens. It is true when the system is operating in the target-light-dominated regime - which means the target is much brighter than all the other things that hurt the image. In astrophotography that is almost never the case. There are things between two nights that can easily turn the tables on F/ratio alone. For example, you are imaging a very faint target and on the first night you have very transparent skies. On the second night you lose that great transparency - then it might even be the case that 10 hours at F/4.7 will not match 10 hours from the first night at F/5.9. If all things are equal and the only thing that has changed is the F/ratio, well, in that case the improvement will be even larger than the F/ratio alone suggests. This is because of the read noise. You probably won't be able to see it - but yes, theoretically the results will be even better than the F/ratio alone suggests. What you can conclude is that imaging at F/4.7 will give you more FOV and will also bring in a bit more SNR for the same imaging time than imaging at F/5.9. The exact numbers vary from situation to situation and from target to target, so that is not something that can be easily stated.