Everything posted by vlaiv

  1. I'm interpreting this as sarcasm - you can't correct for multiple wavefront deformations at the same time. Adaptive optics works perfectly only for a single point in the FOV; planet-sized objects will have very different wavefront aberrations from one edge to the other.
  2. It does not work like that. You are correct that adaptive optics compensates for wavefront error - but that does not happen at the pixel level; that is impossible. It happens at the level of the optics - hence the name adaptive optics: the optics adapts to the wavefront and counters it so that the combined error is zero, i.e. the optics cancels the wavefront error created by the atmosphere. This can only work up to a point, because we don't have infinite control over optical surfaces - there is a finite number of actuators bending the mirror into the wanted shape. And you are right - that happens deterministically, but only after we have measured the atmospheric wavefront deformation. So it is not predictive - it is reactive, much like guiding: we react after we have detected the error. A toy sketch of that loop is below.
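Here is a minimal sketch of that measure-then-correct loop in Python/numpy. Everything is illustrative (the actuator count, loop gain, and the random "atmosphere" are made up) - a real AO controller runs against a wavefront sensor at kHz rates - but it shows why the correction is reactive:

```python
import numpy as np

rng = np.random.default_rng(0)
n_actuators = 97                         # hypothetical, finite actuator count
dm_shape = np.zeros(n_actuators)         # deformable mirror commands
atmosphere = rng.normal(0.0, 1.0, n_actuators)

for step in range(10):
    atmosphere += rng.normal(0.0, 0.05, n_actuators)  # seeing slowly evolves
    residual = atmosphere + dm_shape     # what the sensor measures *now*
    # React to the measured error - by the time the mirror moves, the
    # atmosphere has already changed a little, hence "reactive".
    dm_shape -= 0.7 * residual           # loop gain < 1 for stability
    print(f"step {step}: residual RMS = {residual.std():.3f}")
```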
  3. Yes, I've seen some tables - but I think it depends on the particular night and location, and that any prediction holds only in a very narrow window.

     Active / adaptive optics works similarly to guiding: it tracks wavefront deformation on either a star with the primary or secondary optics (active - telescope or guide scope) or a projected laser beam (adaptive). The issue with both is that they are useful over a very narrow field of view - much less than a planet's size, for example. Take a look at this video: https://www.youtube.com/watch?v=cazY2gKqQrw You will see that the planet is "boiling", or wobbling. See those deformed edges? That is because the tilt component of the seeing wavefront error is different from point to point (a tilt PSF moves a star or point position away from its true location). This shows that every single point in the image is affected by a different wavefront aberration.

     The same holds for the optical aberrations of the system. Some of them depend on distance from the optical axis / field angle - like coma in Newtonians with a parabolic mirror: the optics might be perfect, Strehl 1.0, but move away from the optical axis and you'll get increasing coma.

     In the end, it is worth noting that there is a temporal relationship between seeing aberrations at different points in the image. "Seeing moves" across the image - in multiple layers - because it is generated by multiple layers of the atmosphere, and air in the atmosphere generally moves. If we want to simulate the effects of seeing, we need to account for all of those things:
     1. the fact that it is random
     2. the fact that every point in the image of the planet is affected by a different wavefront aberration
     3. the fact that "seeing moves" in multiple layers
     It is rather simple to generate some sort of seeing aberration. For example, it is enough to create a random wavefront (a sketch of that is below): I just create a random wavefront and it results in this: The question is, how likely is it that the above seeing aberration will be encountered on a particular night at any given moment?
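For anyone who wants to play with this, a minimal sketch of the "create a random wavefront" step in Python/numpy (all parameters illustrative): white noise smoothed into a phase screen over a circular aperture, with the PSF obtained as the squared magnitude of the FT of the pupil.

```python
import numpy as np

N = 256
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
r = np.hypot(x, y)
aperture = (r < N // 4).astype(float)            # clear circular aperture

# Random phase screen: white noise low-pass filtered in the Fourier domain
# so the wavefront varies on plausible spatial scales (purely illustrative).
rng = np.random.default_rng(1)
noise = rng.normal(size=(N, N))
lowpass = np.exp(-(r / 8.0) ** 2)
phase = np.fft.ifft2(np.fft.fft2(noise) * np.fft.ifftshift(lowpass)).real
phase *= 2.0 / phase.std()                       # ~2 rad RMS wavefront error

pupil = aperture * np.exp(1j * phase)
psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2
psf /= psf.sum()      # seeing-aberrated PSF, ready to convolve with an image
```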
  4. I've actually seen people do it. Of course, you can't print every single bit, but if you get the shafts, bearings and motors - you can 3D print the rest. Here is an example: Just do a Google search and you'll find various designs.
  5. Nice one. The title fooled me - I was under the impression that you had 3D printed a whole mount for mobile astrophotography, like a 3D printed star tracker.
  6. It does make sense for planetary imaging as well. If you want the best results - you should do calibration frames for planetary imaging too. Calibration is always the same - you want to remove any signal that is due to the characteristics of the sensor and telescope, and leave only the light signal. As a minimum set of working calibration frames I suggest you use:
     - darks
     - flats
     - flat darks
     No, PIPP handles the calibration for you. You shoot those the same way you shoot your planetary video - as a video - except that:
     - darks need to be of the same exposure length and settings as your lights (don't forget to cover the scope for these - they must be dark, without any light)
     - flats need to hit about 75% of the histogram - so adjust your exposure length accordingly
     - flat darks need to match your flats in settings and exposure length, and also be taken in darkness (covered scope)
     The arithmetic that calibration performs is sketched below.
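For reference, a minimal sketch of the math that calibration performs (Python/numpy; the master frames are assumed to be averages of the corresponding video frames - PIPP does that part for you):

```python
import numpy as np

def calibrate(light, master_dark, master_flat, master_flat_dark):
    # Subtract the sensor's own signal from the light, then divide by the
    # normalized flat to remove vignetting and dust shadows.
    flat = master_flat - master_flat_dark
    flat = flat / flat.mean()           # normalize so division keeps levels
    return (light - master_dark) / flat

# Synthetic 8x8 stand-ins for real master frames:
rng = np.random.default_rng(0)
light = rng.uniform(100.0, 200.0, (8, 8))
master_dark = np.full((8, 8), 10.0)
master_flat = rng.uniform(30000.0, 33000.0, (8, 8))
master_flat_dark = np.full((8, 8), 500.0)
print(calibrate(light, master_dark, master_flat, master_flat_dark).mean())
```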
  7. Well, I'm planning a test on my SkyMax 102 - to both measure and verify this method with a straight edge. Not sure when I'll have both the time and the means to do it (hopefully soon). For anyone wanting to try this method at home - I can walk them through the ImageJ usage, or even do it for them if they post their results here. The recording part is straightforward: take your scope, find a high-contrast, very straight edge at some distance (say a minimum of x20 the focal length of your scope). Make sure the edge itself is not blurred but very sharp and very straight, and capture some images of it. If you can, use a dedicated camera; if not, maybe even a mobile phone at the eyepiece could work? I had not even thought of that - but it would be interesting to see the MTF of the complete combination - scope + eyepiece. One just needs a phone adapter to snap the image. Use a high-power eyepiece (it does not need to be wide field) to get a good sampling rate, so that we don't lose the high-frequency portion of the MTF to undersampling. Take another shot of anything that we can use to calculate the sampling rate - maybe shoot a ruler, and also measure the distance to the target (as precisely as you can). Ok, so here is the complete idea:
     - Take a straight, sharp edge target (it can be just printed on a piece of paper). It is important that the edge is high contrast and of uniform color (black / white), that it is very sharp with no blurring of its own, and that it is straight.
     - Measure the distance to the target as best you can.
     - Take an image of the target with a high-power eyepiece and your mobile phone (use an adapter, as any shake will void the results) - also make sure your phone has good focus.
     - Take an additional image in the same configuration (same distance, same eyepiece, etc ...) of something of known length - maybe a good ruler or similar - so we can calculate the sampling rate in the given configuration (see the sketch below).
     From that data we can derive the actual MTF of your scope + eyepiece and compare it to the theoretical scope MTF.
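As an illustration, here is how the sampling rate would fall out of the ruler shot, with made-up numbers (all three inputs are assumptions for the example):

```python
import math

ruler_mm = 100.0          # length of ruler segment in the image (assumed)
distance_mm = 20000.0     # measured distance to the target (assumed)
span_px = 1500.0          # how many pixels that segment covers (assumed)

angle_rad = ruler_mm / distance_mm              # small-angle approximation
arcsec_per_px = math.degrees(angle_rad) * 3600.0 / span_px
print(f"sampling rate: {arcsec_per_px:.3f} arcsec/px")   # ~0.688 "/px here
```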
  8. I would not quite put it like that. Sometimes two errors cancel each other out and you get perfect optics. If you have spherically under-corrected optics, you can insert a corrector that over-corrects, and if the two are matched - you'll get perfect correction (think catadioptric telescopes - spherical mirror + corrector). There is no guarantee that the two match: if the phase shift due to seeing matches the optics - you'll get a better image, but if it's the opposite - you'll get a worse image (in a cumulative way). Since seeing is random and ever changing - I would say that the errors, at least in the wavefront, add like any sort of noise - "in quadrature" (sketched below). Not sure how that translates to perceived image quality.
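A minimal illustration of adding in quadrature, with made-up RMS figures (the assumption being that the two error sources are uncorrelated):

```python
import math

seeing_rms = 0.15    # waves RMS, assumed
optics_rms = 0.05    # waves RMS, assumed
total_rms = math.hypot(seeing_rms, optics_rms)   # sqrt(a^2 + b^2)
print(f"combined: {total_rms:.3f} waves RMS")    # ~0.158, seeing dominates
```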
  9. Neither? Both are quite heavy and large for the HEQ5. The 250PDS in particular - the mount simply can't handle that much weight (almost 15kg without imaging accessories), let alone be precise enough for astrophotography. The 200PDS at 9kg (easily pushed up to 11kg with imaging accessories) will be right at the limit of what the HEQ5 can handle - and any wind is just going to make a sail out of such a large tube. I mounted an 8" F/6 tube on an HEQ5 and while doable - you really don't want to do it. The 150PDS is going to be a very nice match for the HEQ5 mount and an enjoyable Newtonian to image with - maybe think about getting that one.
  10. You are not going to see a constellation in the polar scope - just a star, a bright one. Find the constellation with the naked eye before you look through the polar scope, to verify you are aiming in the general direction. Setting the latitude and aligning to compass north is only going to be roughly accurate - it can deviate by a degree or two, enough to move Polaris outside of the polar scope's view - but it should provide you with good rough pointing.
  11. Well, this last one is pretty similar except for two things: yours is slightly noisier (which is strange since we used the same data) and it "overshoots" critical sampling, while mine goes right up to critical sampling. Btw, the distance between the mirrors in a Maksutov system alters the effective focal length (so it might be operating at a slightly lower FL) and can also introduce spherical aberration. Not sure where the focal point of the Mak180 is placed (optimized for a 1.25" or 2" diagonal), but since this is close focusing - the mirrors will be further apart than usual? I was thinking of doing the same with my SkyMax 102: use a Ha filter and the ASI1600 (with 3.8µm pixel size) and shoot both a straight edge and a Roddier test with an artificial star (at the same distance, so I get comparable results even if they contain some SA due to close focus).
  12. @alex_stars I wonder why we keep getting different results? Although, we did get the same curve from synthetic data, right?
  13. With that image, here is what I got for the MTF: I did it the same way I've done it every time so far:
     - take the image, convert to 32-bit (in this case from 8-bit)
     - run a differentiation filter
     - crop to the important bit
     Then I run a 2D FFT on that and get this as a result: According to theory, at 600nm and 3.75µm pixel size, the optimum sampling rate is F/12.5 (provided that you used the ASI224 from your signature, and that you used all pixels at 600nm - not just the red channel; the arithmetic is sketched below). Your scope at F/15 is properly sampled, and it shows in the MTF.
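A minimal check of the F/12.5 figure quoted above: the cutoff spatial frequency of an aperture is 1/(lambda * F), so Nyquist sampling with pixel size p requires F = 2 * p / lambda.

```python
wavelength_um = 0.6      # 600 nm
pixel_um = 3.75          # ASI224 pixel size
critical_f = 2 * pixel_um / wavelength_um
print(f"critical F-ratio: F/{critical_f:.1f}")   # -> F/12.5
```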
  14. This topic should be of interest: It shows that with a bit of care, the AzGTI can guide in the 1" - 1.5" RMS range without much issue. A general rule of thumb is that the guide RMS should be at most half of the sampling resolution - which gives us a baseline for imaging resolution on this mount: I'd say that 3"/px or above is doable. I'm still talking about using the SkyMax 102. The Canon 1200D has a ~4.3µm pixel size, so the base resolution is 0.68"/px. Using super pixel mode (and one should), that is doubled to 1.36"/px. Doubling that again by binning x2 will not be enough in my opinion, as it ends up at 2.72"/px, but x3 binning will be just right at 4.08"/px (the arithmetic is sketched below). This means the image will be only 864 x 576 in the end, but mosaics are always an option.
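The resolution arithmetic above, spelled out - assuming the SkyMax 102 at its nominal 1300 mm focal length and the Canon 1200D's 5184 x 3456 sensor:

```python
focal_mm = 1300.0
pixel_um = 4.3
base = 206.265 * pixel_um / focal_mm        # arcsec per pixel
super_pixel = base * 2                      # debayer via super pixel mode
binned_x3 = super_pixel * 3                 # software bin x3 on top of that
print(f"{base:.2f} -> {super_pixel:.2f} -> {binned_x3:.2f} arcsec/px")
print(f"final size: {5184 // 6} x {3456 // 6} px")   # -> 864 x 576
```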
  15. Excellent post. I think this will be helpful to many people.
  16. It looks like I was right about undersampling as well. I'm not sure that I fully understand this graph - I was expecting a different kind of cutoff - but here is the result: The graph was created in the following way: I took an edge, convolved it with a PSF that is just properly sampled, differentiated it, then binned the result x2 (which makes it undersampled) and did an FFT. The graph no longer hits 0. I was not expecting it to keep its shape (the fact that it has two sides is just due to the line used for the profile - I did not bother to start at the center, I just measured across the whole image). A simplified version of this experiment is sketched below. If this is so, then I don't think we have a viable method for amateurs to use - scopes that are F/6, for example, would need ~1µm pixel size to sample properly without a Barlow - and there are no such cameras.
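A simplified version of the experiment, in Python/numpy. A Gaussian stands in for the real, just-properly-sampled LSF; binning x2 undersamples it, and the spectrum no longer falls to (near) zero at the new Nyquist frequency:

```python
import numpy as np

N = 512
x = np.arange(N) - N // 2
lsf = np.exp(-(x / 2.5) ** 2)                  # stand-in LSF

binned = lsf.reshape(N // 2, 2).sum(axis=1)    # x2 binning -> undersampled

mtf_full = np.abs(np.fft.rfft(lsf))
mtf_binned = np.abs(np.fft.rfft(binned))
# Value at each Nyquist frequency (last rfft bin), relative to DC:
print(mtf_full[-1] / mtf_full[0])              # essentially zero
print(mtf_binned[-1] / mtf_binned[0])          # clearly non-zero (~2%)
```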
  17. @alex_stars It looks like I'm right about the scaling when the edge is tilted at an angle. Here is what I've done: I generated two edges, one vertical, one diagonal, convolved both with a PSF generated from a clear aperture, and differentiated both in the X direction. Then I took a crop of each, 1px tall and wide enough (around 400px each), and performed an FFT on both. I scaled the result and printed both on the same graph - this is what I got: Now, if I'm right - the cutoff frequency between these two graphs should differ by ~1.41 (square root of two). We have one at about 127 and the other at about 90. 127/90 = 1.411 - that is actually a very good match for numbers eyeballed off the chart. This means that a curved edge that is averaged is going to be a problem - it will average MTFs with different scaling factors. I think the end result is just an "averagely" scaled MTF, since the FT is linear (the FT of an average of functions is the average of the FTs of the functions) - in any case, this makes the cutoff frequency inaccurate if the line is at an angle. A minimal check of the sqrt(2) factor is sketched below. In the above example where you got a different cutoff frequency - did your software tilt the edge?
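A minimal numerical check of the sqrt(2) factor (Python, using scipy.ndimage for the blur; the Gaussian is a stand-in for the real PSF): blur a vertical edge and a 45-degree edge identically, read a horizontal profile through each, and compare the widths of the differentiated profiles.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

N = 512
y, x = np.mgrid[0:N, 0:N]
vertical = (x > N // 2).astype(float)
diagonal = (x - y > 0).astype(float)          # edge tilted 45 degrees

blur_v = gaussian_filter(vertical, 5.0)
blur_d = gaussian_filter(diagonal, 5.0)

# Horizontal cross sections through the middle, differentiated in x:
lsf_v = np.diff(blur_v[N // 2])
lsf_d = np.diff(blur_d[N // 2])

def rms_width(lsf):
    # LSF width measured as RMS about its centroid
    pos = np.arange(lsf.size)
    c = np.sum(pos * lsf) / np.sum(lsf)
    return np.sqrt(np.sum((pos - c) ** 2 * lsf) / np.sum(lsf))

print(rms_width(lsf_d) / rms_width(lsf_v))    # ~1.41, i.e. sqrt(2)
```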
  18. If you don't mind - I would rather tackle this as a team than as opposing sides. I think this method is a valid one - it is based on solid math - and I'm very interested in assessing its usability. You have working software, and please understand that I have reservations about its accuracy for a few reasons:
     1. Your method deviates from the straightforward math that describes this method.
     2. I don't have proof for some of the claims, so I need to understand those claims more deeply (like sampling at an angle and doing a 1D FFT and such) and, if possible, find proof of them.
     3. The graph that you presented as a result of measurements does not quite fit the theory - it is "above" the theoretical maximum in two places.
     Now this raises my eyebrow - why should we need to do that, and to what extent is this method going to produce such results - how accurate is it?

     ImageJ is a Java-based free software tool for scientific image manipulation. I use Fiji - a distribution loaded with assorted plugins, like FFTJ for Fast Fourier Transforms, ... If you like, I can detail every single step I take so it can be reproduced by anyone (I sort of assume that it is reproducible when I take screenshots of the steps, but I'm not sure if I'm detailed enough). I don't have paid access to it, and I don't intend to pay just for the sake of this discussion. I have had a look at the freely accessible one and quoted from it.

     First off - let me tell you that I've found mathematical justification for using a 1D FFT in this case - but it is limited in scope. This is an example of what I mean about you being helpful: it would have been so much easier if you had said "look Vlad, check out the 2D FT of separable functions" - I would have done that, and it would have saved us time. So yes, in the case of a separable function - which means a perfectly vertical straight edge in this case (it can't have bends or be at an angle) - we can use a 1D FFT to produce a cross section of the MTF, since the 2D FT of a separable product is the product of the 1D FTs, and for a vertical straight edge we have f(x) * constant, so we end up with FT(f(x)) * delta(v) - which is just the 1D FT of f(x). I don't know if there are other cases where this is applicable as well - feel free to point me in the right direction.

     On further thought - could the scaling in the frequency domain have something to do with the angled edge? Here is why I think so - but we'll need to confirm it: sampling the LSF at an angle produces stretching in the "time domain", so the FT will be shrunken in the frequency domain.

     I've been thinking of another thing; let me point out something from your data graph: the measured line does not end at 0. This might be a consequence of noise or some error in the data - but it could also be a consequence of undersampling. At one point I remembered that you mentioned the image used is a color image, and I figured it was made with an OSC sensor. Now, an OSC sensor has a Bayer matrix and as such samples at half the frequency of a mono sensor (at least for red and blue - green is probably somewhere in between, depending on the debayering used). If the above is correct, then this method is not going to be as useful to the amateur community, as it would require very long focal lengths or very small pixels - which generally means use of a Barlow element, but then we are not measuring the scope, we are measuring scope + Barlow.
     I know that part of the super resolution you were mentioning is meant to tackle this issue, and that having the edge tilted has to do with "drizzling" - however, I don't really think it works, unless I see some sort of proof for it (a math proof or even a simulation). If you don't mind - I would like us to explore these things. I will, on my part, do some simulations and prepare data. If it is not too much trouble, maybe you could run that data through your software so we can see the result? A quick numerical check of the separability property mentioned above is sketched below.
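The separability property is easy to verify numerically: for an image f(x, y) = g(x) * h(y), the 2D FT factors into the product of the 1D FTs. Random vectors stand in for g and h here:

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.normal(size=64)
h = rng.normal(size=64)
image = np.outer(h, g)                   # rows vary with h, columns with g

ft_2d = np.fft.fft2(image)
ft_sep = np.outer(np.fft.fft(h), np.fft.fft(g))
print(np.allclose(ft_2d, ft_sep))        # True
```

For the vertical straight edge, h(y) is constant, so its FT is a discrete delta at zero frequency - which is exactly why the whole 2D MTF collapses onto a single row in that case.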
  19. The AzGTI will not hold the 130PDS properly for imaging, let alone a 200mm scope, and the EQ3 is not a decent mount for imaging either. If you want a different setup - look into at least an EQ5-class mount as a starting point and a scope like the 130PDS or 150PDS - forget the 200mm scope for now.
  20. I really want to get to the bottom of this and confirm that doing things the way you do - a 1D derivative and 1D FFT - produces the correct result. But I'm having difficulty doing so, and honestly - you are not helping me much with that. For example: I did the same thing you did - I took the last image that I posted here, took a readout of one horizontal line and converted that to numbers. I then entered those numbers into this: https://scistatcalc.blogspot.com/2013/12/fft-calculator.html I got two different results based on how many samples I chose - in both cases I tried to keep the line in the middle of the set. If we can't get consistent results on the same (perfect) data using a method - how reliable is that method? With the method as given by the underlying math, even if I take different cut-outs I still get the same result (the pixels only need to be properly converted into frequency units - sketched below): By the way - the regular method is quite resilient to noise. Here it is polluted with Gaussian noise so as to produce an SNR of 100 (easily accomplished in a single exposure with a sensor having a full well capacity of about 15k):
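A minimal sketch of the "convert pixels into frequency units" point (Python/numpy; a Gaussian stands in for the LSF): two different cut-out lengths put the same physical frequency at different FFT bin indices, but the curves agree once the axis is expressed in cycles per pixel.

```python
import numpy as np

x = np.arange(1024) - 512
lsf = np.exp(-(x / 6.0) ** 2)                # well contained in both cut-outs

short = lsf[512 - 128:512 + 128]             # 256-sample cut-out
long_ = lsf[512 - 256:512 + 256]             # 512-sample cut-out
mtf_s = np.abs(np.fft.rfft(short))
mtf_s /= mtf_s[0]                            # normalize to DC
mtf_l = np.abs(np.fft.rfft(long_))
mtf_l /= mtf_l[0]

# Bin k of the short FFT and bin 2k of the long FFT sit at the same
# frequency (k/256 cycles per pixel) - and the values match:
k = 26
print(np.fft.rfftfreq(256)[k], mtf_s[k])
print(np.fft.rfftfreq(512)[2 * k], mtf_l[2 * k])
```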
  21. It should go up to 256px - or rather, up to a frequency of 0.5 cycles per pixel.
  22. But is it accurate? I have a proposition for you - run your software on the following image: edge_transfer_function.fits Then we can compare it to a known PSF/MTF and see what sort of result your software produces.
  23. I think that if the PSF is asymmetric then so is the MTF. This means: coma, astigmatism, pinched optics, tube currents and so on. You can see more details here: https://www.telescope-optics.net/mtf.htm (scroll down to the bottom of the page - you'll see a large table with graphics showing aberration types / wavefront, PSF and MTF; where the MTF has multiple graphs, they are for different orientations). A central obstruction makes a symmetrical MTF, but if your plan is to do planetary - maybe look into a Classical Cassegrain instead? CFF Telescopes makes CCs with very long focal lengths - like F/20 and F/24 - but they are extremely expensive. Maybe something could be found on the second-hand market instead?
  24. 8-bit data is going to introduce a lot of quantization noise into the samples, and in general the samples are really noisy - which can be seen as alternating black and white lines. Here is the result of processing. It does not look like much, but I took the image you posted earlier in this thread as the exact image you took - again, it is only 8 bit. Then I took a 512x512 region that has a fairly straight edge. I applied a differential kernel and got this: and finally I took an FFT of that and got this: This is the zoomed central region. Since the edge is not perfectly vertical but angled - we have angles in our MTF cross section - but we can still extract information from it: Now, I don't believe the little "belly" that starts around 25 is genuine - I believe it is a "secondary spectrum", so to speak - pollution from other lines at an angle - but the first part is fairly strong - very good for only 8-bit data and a curved edge. This shows that you can't read data off the image until you finish the complete procedure - it is best to leave the image as is and perform the operations on it. The processing chain is sketched below.
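For completeness, a minimal sketch of that processing chain in Python/numpy - not the exact ImageJ steps, just the same math; `img` is assumed to be the edge image already loaded as an array (e.g. via astropy or imageio):

```python
import numpy as np

def edge_to_mtf_slice(img):
    """Edge image -> horizontal cross section of the 2D MTF."""
    data = img.astype(np.float64)          # convert 8-bit data to float first
    crop = data[:512, :512]                # region containing a straight edge
    lsf_img = np.diff(crop, axis=1)        # differentiation in x -> LSF image
    mtf2d = np.abs(np.fft.fftshift(np.fft.fft2(lsf_img)))
    return mtf2d[mtf2d.shape[0] // 2]      # central row = MTF along x
```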
  25. Well, there is actually a large number of them readily available that will satisfy the needs of amateurs - it is just a matter of whether they have been implemented in software. For example - free software like Deep Sky Stacker uses linear interpolation, while PixInsight supports a large number of algorithms - here is the page from its reference documentation on interpolation algorithms: https://pixinsight.com/doc/docs/InterpolationAlgorithms/InterpolationAlgorithms.html As far as I know APP also has advanced interpolation algorithms. Not sure how planetary stacking fares in this respect - I don't know what AS!3 uses, for example. For reference, the simplest of these schemes is sketched below.
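A minimal sketch of bilinear interpolation - one simple form of the linear interpolation mentioned above: the value at a fractional position is the area-weighted mix of the four surrounding pixels.

```python
import numpy as np

def bilinear(img, x, y):
    # Weighted average of the four pixels around the fractional (x, y).
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0]
            + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0]
            + dx * dy * img[y0 + 1, x0 + 1])

img = np.arange(16, dtype=float).reshape(4, 4)
print(bilinear(img, 1.5, 2.5))    # halfway between four pixels -> 11.5
```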