inFINNity Deck

Members
  • Posts

    500
  • Joined

  • Last visited

Everything posted by inFINNity Deck

  1. Hi Bluesilver, yes, it is quite confusing, and it is a pity that I am in the middle of an imaging project, so I do not want to open mine to check it out for you. I went back to where I found that diagram: https://manuals.plus/sky-watcher/esprit-150-ed-super-apo-3-element-ed-apochromatic-refractors-manual#axzz7pVHdakQL As you can see it is a messy site. What I missed is that the diagram is of the reducer/flattener instead of the flattener... So we have to ignore that drawing. That means we are left with the images and diagrams of other flatteners, and those seem to be "scope ( ( camera". Now, I do also have an Esprit 80ED with flattener on my rig (which is not being used at the moment), so I opened that one and it confirms "scope ( ( camera". But it would be good to compare what you assemble from your parts to that last flattener photo I posted. Nicolàs
  2. I have checked several drawings of other manufacturers' flatteners, and the first and last element should be convex in the same direction: In the last of the above images a faint reflection on the uppermost surface can be seen, indicating that that surface is indeed nearly flat. But the best thing to do is to mount the lenses and check whether the reflections of and view through the lenses match the images in my first reply. Nicolàs
  3. According to this image, the last lens before the camera is convex towards the camera: [PS: this is a diagram of the reducer/flattener, it comes from https://manuals.plus/sky-watcher/esprit-150-ed-super-apo-3-element-ed-apochromatic-refractors-manual#axzz7pVHdakQL] And the other side appears to be convex towards the scope: Here you can see what it should look like when everything is mounted. If you mount one of the lenses the wrong way around you may be able to see that it is different: I do have a flattener on my Esprit 150ED, but that is currently mounted with camera and all, so I will keep it that way... 😉 Nicolàs
  4. Nice images Nigella! I think it is good for Beermaker and other planetary starters to know that most quality images we get presented are the result of many failures. It takes quite a few attempts before a good image is taken. A few tips: - Always use an ADC (unless imaging near zenith); - Always check collimation on a nearby star (unless imaging with a refractor); - Always focus (some do it visually on the planet itself, but I learned the hard way that using a Bahtinov mask on a nearby star or planetary moon is easier and works better); - Always take multiple runs (seeing can change in a matter of minutes); - Do not expect every imaging session to result in a good image. Here is how I started four years ago (C11, 2x PowerMate, ZWO ASI174MM): And this is now, still using the same equipment: Still, I have plenty of recent examples that look nothing like this, and that is what seeing does to your project... Nicolàs
  5. Part II of that page is of particular interest, as it explains in great detail the aberrations you can expect (it is linked to at the bottom of Part I). Nicolàs
  6. ADCs are very effective and a must-have for obtaining good planetary images, even if imaging is done using a monochrome camera. The filters (whether they are embedded as a Bayer pattern or sit in a filter wheel) have a bandwidth of about 100nm per colour. Suppose we image a planet at 40 degrees altitude: the light coming in through a blue filter (400-500nm) then disperses by about 0.7 arc-seconds, green light (500-600nm) disperses by about 0.4", and red by about 0.2". The PDS150 at f/15, combined with the GPCAM3, will image at about 0.3"/pixel, so in blue the refraction-induced smearing is more than 2 pixels, in green more than 1 pixel, and in red almost 1 pixel. Using an ADC this can be reduced to virtually zero. When we do nothing at all in the processing, the smearing from blue towards red is at least 1.3 arc-seconds or 4 pixels (actually more like 1.5 arc-seconds or 5 pixels, as blue starts around 380-390nm). When the object is lower the refraction increases significantly; at 20 degrees altitude the total smearing is more than 3 arc-seconds (some 10-11 pixels). Nicolàs
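The arithmetic in the post above can be sketched in a few lines of Python. The per-band dispersion values at 40 degrees altitude and the 0.3"/pixel image scale are taken straight from the post, not derived from first principles here:

```python
# Refraction-induced smearing in pixels per colour band, using the
# dispersion values quoted in the post for a target at 40 degrees
# altitude and the ~0.3"/pixel scale of the PDS150 + GPCAM3 setup.
dispersion_arcsec = {
    "blue (400-500nm)": 0.7,
    "green (500-600nm)": 0.4,
    "red (600-700nm)": 0.2,
}
image_scale = 0.3  # arc-seconds per pixel

for band, d in dispersion_arcsec.items():
    print(f'{band}: {d}" = {d / image_scale:.1f} px')

# Total blue-to-red smear when no ADC is used:
total = sum(dispersion_arcsec.values())
print(f'total: {total:.1f}" = {total / image_scale:.1f} px')
```

The total comes out at 1.3" or just over 4 pixels, matching the "at least 4 pixels" figure in the post.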
  7. Hi Michael, if I draw straight lines in both images along the vanes of the spider, it indeed seems that the hub is drawn towards the focuser: It appears to me that the upper-right vane is more off than the lower-left one as if they are not in line with each other on the hub. The other two vanes look fine to me. Nicolàs
  8. Hi Beermaker, maybe it is good to read a bit more about the limitations of planetary imaging. Some threads on this forum discuss it in part, but I have written two articles about it on a Dutch forum (they should translate properly when opened in Chrome). In short: the optimal focal ratio is around 3.7 x [pixel size], but that does not include the effect of seeing. Seeing ignored, for your GPCAM3 it will be 3.75 x 3.7 = f/13.9. So although under perfect conditions a 3x Barlow should be more than enough, you will not achieve optimal sampling due to seeing. Seeing lowers the optimal focal ratio. Your scope is diffraction limited (in green light) at a seeing of about 0.7". If the best seeing (expressed in arc-seconds) is worse than that, the factor 3.7 no longer gives the optimal focal ratio and should be divided by roughly 1.6 x [seeing] ^ 0.74. So suppose your best seeing was about 1.5": the optimal focal ratio would be 3.75 x 3.7 / (1.6 x 1.5^0.74) = f/6.4! Not that I recommend going that far, as there is always a chance that the best seeing was better than the limit for your scope. It is merely to show that a 5x Barlow will not produce any better image, while it comes at a cost: longer exposure times (= more smearing) and/or higher gain (= more noise). If you want a larger image, it is best to image the planet at a maximum focal ratio of 3.7 x [pixel size], stack the result and then resize it using your favourite photo-processing software. A good example is this post (again in Dutch, but again Chrome translates): https://www.starry-night.nl/forums/topic/mars-op-26-december-2022/ It shows Mars imaged with two cameras (ZWO ASI174MM and ASI290MM) in the same night, the former camera having pixels twice the size of the latter. Both were recorded at f/20. The larger-pixel camera nevertheless shows the same detail as the smaller-pixel camera, simply because the latter was oversampling by a factor of 2 (seeing not included). 
HTH Nicolàs
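The rule of thumb in the post above can be put into a small helper. The factor 3.7 per micron of pixel size and the 1.6 x [seeing]^0.74 correction are the post's own numbers; the function name is mine:

```python
# Optimal focal ratio for planetary imaging per the rule of thumb above:
# f = 3.7 * pixel size [micron], divided by roughly 1.6 * seeing^0.74
# once the seeing (in arc-seconds) exceeds the scope's diffraction limit.
def optimal_f_ratio(pixel_size_um, seeing_arcsec=None):
    f = 3.7 * pixel_size_um
    if seeing_arcsec is not None:
        f /= 1.6 * seeing_arcsec ** 0.74
    return f

# GPCAM3 (3.75 micron pixels) under perfect conditions: ~f/13.9
print(optimal_f_ratio(3.75))
# Same camera at a best seeing of 1.5": ~f/6.4, as in the post
print(optimal_f_ratio(3.75, 1.5))
```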
  9. Hi Geof, the images on the right are the ASI174MM images, so the ones that were not oversampled during capture (though they are oversampled now, because they were resized 200%). I do not say it is wrong to oversample, but it is unnecessary and it comes at a cost: longer exposure times (or higher gain), whereas correct sampling allows shorter exposure times (or lower gain). Nicolàs
  10. Indeed, a twisted spider could cause this pattern as well. What I meant is shown here below. When the spider is not properly mounted, you may end up with the following situation (the image at the centre is generated from the left one using ImageJ; the one at the right is an actual astro-image showing this issue): Here the vertical vanes of the spider are still properly aligned, but when these get out of line, the issue will show up along all four spikes. In the above example the diffraction pattern shows two spikes in the vertical direction, but when the angle between the two opposite vanes gets closer to 180 degrees, they will merge into one diverging spike. Nicolàs
  11. There is a fair chance that it is caused by a deformed spider (i.e. the vanes are misaligned, so that they no longer form straight lines across the aperture). Normally this only occurs in one direction, but theoretically it could happen in both. You can check this with a straight ruler. HTH Nicolàs
  12. The original discussion David refers to is here: I joined that discussion, learned a lot from David (I still use his disc) and others, tweaked the RC collimation method slightly and described it here (open it in Chrome to get it translated): https://www.starry-night.nl/stap-voor-stap-collimatie-van-een-rc/ In the meantime I have collimated four RCs, both tube and truss models, of 8" and 10" aperture. Some have been back several times, not because collimation was off, but for further adjustments to their focal lengths or mechanics (after which collimation was required). All perform as they should. One of the scopes was altered to have a larger back-focus distance, so that may have been the case with your scope as well. Most important is to have the focuser disconnected from the primary mirror, but achieving that may require quite a bit of machining. Another point is to adjust the mirrors using only two of the three screws; that way you avoid the mirror distance changing over the course of several collimations. Best is to paste a small sticker (piece of tape) over the screw that serves as reference. If you want to get the focal length right, it is of key importance to know the design value for your scope. The Ronchi test can be of assistance, but it is not very sensitive, so the focal length may end up off by quite a few millimetres (but then, if the Ronchi test does not detect aberration, it will not affect imaging). Nicolàs
  13. Over the past few days I have been editing Part 2 of my Dutch article on the optimum focal ratio. I have explained that the difference between mono and colour cameras only affects the stack size when we want to achieve the same S/N ratio, and that the effective pixel size no longer affects the resolution (i.e. the correction factor of 2 is no longer required when stacking). I added a section in which I show how seeing affects oversampling. As the original is in Dutch, I thought it good to provide an English version here (but Chrome does translate the above link pretty well). First we need to realise how seeing determines whether we are diffraction-limited or seeing-limited: I have done the calculations for red, green and blue. For green I used the same wavelength (574nm) as in all the simulations in the article Van der Werf and I wrote about the influence of seeing on detail level. The central value for, for instance, the ZWO green filter is slightly lower, around 540nm, which would position the green graph right in between the red and blue in the above figure. So a scope with 180mm aperture (e.g. a 180mm f/15 Maksutov) would be seeing-limited in green light when seeing goes above 0.8 arc-seconds. An SCT with an aperture of 279mm (C11) would be seeing-limited above a seeing of 0.5 arc-seconds. I determined how seeing affects oversampling by running the animations I made for the article on the influence of seeing on detail level through ImageJ. The resulting frequency spectra were used to measure the oversampling manually: On average, for these two scopes, seeing affects oversampling by approximately 1.6 x [seeing] ^ 0.74. On an ideal evening, seeing can come close to the scope's critical seeing level (i.e. close to being diffraction limited). 
At a best seeing of 0.6 arc-seconds the oversampling factor will only be about 1.1, at 0.7 arc-seconds about 1.2, so the ideal f-number would need to be adjusted by some ten to twenty percent (slightly more with the SCT than with the Maksutov). Last night I used my C11 EdgeHD to image Mars with my default planetary camera, a ZWO ASI174MM (5.9 micron pixel size), and with a recently acquired ASI290MM (2.9 micron pixel size; it was first light for this camera). So theoretically the cameras would respectively require a focal ratio of 3.7 x 5.9 = 21.83 and 3.7 x 2.9 = 10.73. If we assume that we would need to correct the focal ratios by about 20% due to seeing, these focal ratios would become 17.5 and 8.6 (or 19.6 and 9.7 when we correct by 10%). The C11 is f/10 and I added a TeleVue 2x PowerMate, by which it became f/20. So according to the theory the ASI290MM images would be oversampled by a factor of 2, while the ASI174MM would sample correctly. The following luminance images of Mars are from around 7pm UTC: A few hours later the gaps in the cloud cover became large enough for complete LRGB runs: In both cases the image at the far right was made with the ASI174MM and resized 200% in post-processing to match the ASI290MM images. The rest of the processing was completely identical. Stacking was done based on a 50% quality level in AutoStakkert!3 (with a maximum of 50% stack length). Sharpening was done during stacking with a 30% raw blend. No further deconvolution or sharpening has been applied. The only other correction (equally applied to all images) was a bit of additional saturation and correction for green. In the monochrome images we see details in the 290MM image that we do not see as clearly in the 174MM image, and vice versa. The same holds for the colour images. As expected, the 290MM images were oversampled by a factor of 2 according to ImageJ (the ASI174MM images were only slightly oversampled, indicating that seeing was affecting imaging). Nicolàs
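The critical-seeing figures quoted in the post above can be reproduced approximately if we take the Rayleigh criterion (1.22 λ/D) as the resolution limit; the article may use a slightly different criterion, so treat this as a sketch under that assumption:

```python
import math

def critical_seeing_arcsec(aperture_mm, wavelength_nm=540.0):
    """Seeing (in arc-seconds) below which a scope of this aperture is
    diffraction limited, assuming the Rayleigh criterion 1.22*lambda/D
    as the resolution limit (an assumption on my part)."""
    theta_rad = 1.22 * wavelength_nm * 1e-9 / (aperture_mm * 1e-3)
    return math.degrees(theta_rad) * 3600.0

# 180mm Maksutov: ~0.8" in green, as quoted in the post
print(critical_seeing_arcsec(180))
# C11 (279.4mm): ~0.5" in green, as quoted in the post
print(critical_seeing_arcsec(279.4))
```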
  14. That should indeed be oversampled, sorry for the confusion... Nicolàs
  15. Indeed the black border tells us it is oversampled (the spectrum 'begins' in the centre). The white-noise example shows that when oversampling by a factor of two, the black border is about 50% of the frequency spectrum. So, the other way around, it thus tells us how far the image is oversampled. Here is what seeing does (from my Dutch forum article, based on the English arXiv article): The sunspot images were generated using software I wrote for the purpose and show how they deteriorate as a result of increasing seeing. From left to right the seeing is 0", 1" and 2". The simulation was done for a C11 and ASI174MM. Sadly the base image was already oversampled, something we found out slightly too late. Still, the effect of increasing seeing clearly shows, as the central blob in the frequency spectrum contracts almost proportionally. The one on the far right is quite similar to the Jupiter image of 20 December. Nicolàs
  16. FFT stands for Fast Fourier Transform, which takes the data from the spatial domain (the image) to the frequency domain (the frequency spectra below the images, the scatter-plot-like panels, which show which frequencies are present in the image). The frequency spectra are scaled to the same size as the original images. If the original image is sampled exactly at the diffraction limit, the corresponding frequency spectrum will show data all over it. The below image from my article was generated using a white-noise image that was sampled optimally (far left). The frequency spectrum below it is a very similar noisy image that is completely filled with data, which indicates that it is at least not undersampled. The image in the centre is that same white-noise image but with a black border around it. The frequency spectrum of that is again completely filled, as the original image is properly sampled. At the far right I have taken the original image and resized it 200%. The frequency spectrum of that shows a black border, indicating that the 200% resized image is oversampled. The smaller image at the lower left, the one with the red circles, shows that the input image is not 100% white noise, as within the red circles the response is not random. These two areas are also visible in the other two spectra. It should look a bit smoother, as some averaging is of course done by reducing it to 25%. When I started planetary imaging I tried visual focusing, but never got good results other than by chance. Now I use a Bahtinov mask on a nearby star or, in the case of Jupiter, on one of the moons. During focusing I raise the exposure time to 100-500ms to get a more averaged view of the diffraction pattern, and I zoom the live image to about 200%. A low exposure time makes the diffraction pattern change too much, making it very difficult to analyse properly. Nicolàs
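The ImageJ inspection described above can be mimicked with NumPy: generate optimally sampled white noise, resize it 200% (here via sinc interpolation, i.e. zero-padding the spectrum), and measure what fraction of the frequency spectrum carries signal. The `spectrum_fill` helper and its -40 dB threshold are my own stand-ins for the visual check, not anything from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

def spectrum_fill(img, threshold_db=-40):
    """Fraction of the centred FFT magnitude spectrum that lies above a
    threshold relative to the peak - a rough numerical stand-in for
    eyeballing the black border in ImageJ."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    spec /= spec.max()
    return np.mean(20 * np.log10(spec + 1e-12) > threshold_db)

def fft_resize_2x(img):
    """200% resize by sinc interpolation: zero-pad the centred spectrum."""
    n, m = img.shape
    spec = np.fft.fftshift(np.fft.fft2(img))
    padded = np.zeros((2 * n, 2 * m), dtype=complex)
    padded[n // 2:n // 2 + n, m // 2:m // 2 + m] = spec
    return np.real(np.fft.ifft2(np.fft.ifftshift(padded))) * 4

noise = rng.standard_normal((256, 256))   # optimally sampled white noise
resized = fft_resize_2x(noise)            # 200% resized -> oversampled

print(spectrum_fill(noise))    # close to 1: spectrum filled edge to edge
print(spectrum_fill(resized))  # ~0.25: black border, i.e. 2x oversampled
```

The resized image fills only the central quarter of its spectrum, which is exactly the 50% black border per axis described in the previous post.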
  17. Basically what we are showing is that your image is oversampled. That means that you could use a lower f-ratio and then blow up the image to get the size you want. That in turn has the advantage that the exposure times can be much shorter, which in turn may result in more detail as the capturing will be less affected by seeing. Nicolàs
  18. Here is the analysis of the 20 December image. I have split the image into RGB, then processed each channel using ImageJ: I have not resized the images, only shown them here at 50% to make them fit the screen. Clearly the FFT shows that the images are well oversampled. No detail is lost when reducing it to 25% and resizing it 400% again: Nicolàs
  19. Hi Geoff, here is an example of how sampling works out at different f-ratios. The image that I used was the luminance recording of Jupiter on 9 November 2022 at around 20:51 UTC. Here is the final processed image: The recording was done at f/20 using a ZWO ASI174MM, TeleVue 2x PowerMate and C11 EdgeHD. The camera would require 5.9 x 3.7 = f/21.8. In the following image I took the luminance channel and ran it through ImageJ (left set). Then I resized it to 50% and again resized it 200% (middle set), which is roughly the same as if I had imaged the planet at f/10 and then resized it 200%. Indeed it looks a bit softer than the original, but after a little bit of sharpening (right-hand set) we get an image that is visually not far off from the original. The frequency spectrum shows that resizing the image caused a little loss of data in the higher frequencies (i.e. the smaller details; the corners are black now), but most of the information was preserved. Sharpening appeared to add some detail, but that is no real detail, mainly magnified noise. That hardly any detail was lost is mainly due to seeing, as that severely affects image quality. Preferably we would image under diffraction-limited circumstances, but at this aperture (11" or 279.4mm) that would require a seeing of below 0.4 arc-seconds, which is very rare in the moderate climate here in the Netherlands and the UK: If you want to know more about how seeing affects detail, you may want to read the following article (this one written in English; you may wish to skip the first six pages): https://arxiv.org/abs/2208.07244 Nicolàs
  20. Otherwise install Chrome; it does the job as well, and pretty well at that. Nicolàs
  21. Hi Vladimir, thanks for chiming in. That is why I poured it into an article... 🙂 but maybe should have done that in English.... 🤔 Will do so one day on my own website. That confirms my reasoning, so I will add that to my article. Nicolàs
  22. That of course depends on whether a true Barlow or a telecentric system (like a TeleVue PowerMate) is used. The former indeed changes the effective focal length when the distance between Barlow and camera is changed. In the latter the camera distance hardly affects the magnification. See: https://www.baader-planetarium.com/en/blog/the-benefits-of-telecentric-systems/ Also see: https://www.televue.com/engine/TV3b_page.asp?id=53&Tab=_app Testing a system is always recommended. Even better is to finish that test with a check in ImageJ, as described in my article, to see whether or not the image is oversampled. If it is, it is better to reduce the f-ratio, as that will result in shorter exposure times... Nicolàs
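The distance dependence of a true Barlow follows from the standard thin-lens result M = 1 + d/F, where d is the Barlow-to-sensor distance and F the magnitude of the Barlow's focal length; a telecentric system keeps M nearly constant instead. A minimal sketch, using a hypothetical 2x Barlow with F = 100mm:

```python
def barlow_magnification(d_mm, f_barlow_mm):
    """Thin-lens amplification of a true Barlow: M = 1 + d/F.
    d_mm: Barlow-lens-to-sensor distance; f_barlow_mm: |focal length|
    of the Barlow. (Hypothetical numbers below, for illustration only.)"""
    return 1 + d_mm / f_barlow_mm

# At its nominal spacing the Barlow gives its rated 2x...
print(barlow_magnification(100, 100))  # 2.0
# ...but pushing the camera 50mm further out raises it to 2.5x.
print(barlow_magnification(150, 100))  # 2.5
```

This is why extension tubes behind a true Barlow change the f-ratio, while behind a telecentric system they (to first order) do not.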
  23. This is one thing I am still making my mind up about. For a single image this idea is valid, but what happens when we stack? When we stack images, we may assume that the planet will not be at a stationary position on the chip due to seeing and tracking errors. Stacking software like AutoStakkert!, which I use, determines the centre of the planet and based on that displaces the data (I presume by an integer number of pixels). So imagine that frame 2 has a planetary centre that is 1 pixel (or any odd number of pixels) to the left or right and 0 pixels in the up/down direction: the stack will cause two adjacent pixels to be filled with data. Likewise frame 3 could have an odd number of pixels offset up/down as well, and thus will fill adjacent pixels in that direction. If this is how stacking colour images works, then it would mean that colour cameras follow the same f-ratio rule as mono cameras. The only difference would then be that one would need to stack 4 times as much data (for R/B; 2 times as much for G) to get the same signal-to-noise ratio. Hopefully Vladimir can chime in on this... Nicolàs
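The presumed mechanism can be illustrated with a toy simulation: a Bayer red-channel sample mask, shifted by random integer offsets frame to frame (standing in for seeing and tracking drift), eventually deposits real (non-interpolated) data on every aligned pixel. Everything here (mask size, drift range, frame count) is made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
size = 8

# Red-channel sample positions of an RGGB Bayer pattern:
# real red data exists only at even rows and even columns.
bayer_red = np.zeros((size, size), dtype=bool)
bayer_red[::2, ::2] = True

coverage = np.zeros((size, size), dtype=int)
for _ in range(50):  # 50 frames with random integer drift
    dy, dx = rng.integers(-2, 3, size=2)
    # Align the frame back by an integer shift, as the stacker is
    # presumed to do, and count which aligned pixels got real red data.
    shifted = np.roll(np.roll(bayer_red, dy, axis=0), dx, axis=1)
    coverage += shifted

# Fraction of aligned pixels that received real red data at least once:
print((coverage > 0).mean())
```

With enough dithered frames the fraction approaches 1, i.e. every output pixel eventually sees real red samples, which is the condition under which the mono f-ratio rule would carry over to colour cameras.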
  24. Hi Geoff, under ideal circumstances, and based on the spatial cut-off frequency, the optimal f-ratio is 3.7 times the pixel size [in micron, and based on a monochrome camera]. It has been discussed here a few times by myself and @vlaiv, and recently I have written a second article on a Dutch forum about the subject (with many thanks to Vladimir for the discussions we had on- and off-forum). The nice thing is that oversampling can be verified using ImageJ (Fiji), which is also explained in that article (again, kudos to Vladimir). If you open the article in Chrome it will be translated to English (it has a link to the first article, which explains the topic using the Rayleigh, Dawes and Sparrow criteria and Nyquist sampling). Mind you that, when using a colour camera, the effective pixel size is twice the actual pixel size (the rest is interpolated), so for a colour camera the f-ratio equals 2 x 3.7 = 7.4 times the pixel size (theoretically that is for the R and B channels, whilst it is SQRT(2) x 3.7 x pixel size for G). In order to further test the matter I have recently bought myself a ZWO ASI290MM, which I will use in combination with my C11 EdgeHD @ f/20. The camera requires 3.7 x 2.9 = f/10.73, so f/20 is almost a factor of two oversampled. At the same time I can compare it to a ZWO ASI174MM, which requires f/21.83, so that should be slightly undersampled (I have only one C11, so the comparison may be affected by changes in seeing). So although I do not expect any wonders from the ASI290MM, the outcome may be of interest to planetary imagers. Next step would be testing it against the ASI290MC, which I also have. The article: https://www.starry-night.nl/vergroting-onder-de-loep-deel-2-het-optimale-f-getal-nader-beschouwd/ Nicolàs
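The f-ratio rules in the post above condense to a one-line helper. The factors come straight from the post (3.7 for mono, 2 x 3.7 for the R/B channels of a Bayer colour camera, sqrt(2) x 3.7 for G); the function name and channel labels are mine:

```python
import math

def required_f_ratio(pixel_um, channel="mono"):
    """Required focal ratio per the rule quoted above: 3.7 x pixel size
    [micron] for a mono camera; for a Bayer colour camera the effective
    pixel is larger, so 2 x 3.7 for R/B and sqrt(2) x 3.7 for G."""
    factor = {"mono": 3.7, "RB": 2 * 3.7, "G": math.sqrt(2) * 3.7}
    return factor[channel] * pixel_um

print(required_f_ratio(2.9))        # ASI290MM: f/10.73
print(required_f_ratio(5.9))        # ASI174MM: f/21.83
print(required_f_ratio(2.9, "RB"))  # ASI290MC red/blue: ~f/21.5
```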