Everything posted by vlaiv

  1. It considerably lowers SNR without any real benefit. Since the aperture does not change, the setup gathers the same number of photons per unit time but spreads them over more pixels. Each pixel then receives fewer photons per unit time, so the system is slower and reaches a lower SNR in the set imaging time - again with no benefit in terms of detail, since we are over sampling (a quick numeric sketch follows this post). With the small pixels of modern CMOS cameras it is very easy to over sample.
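A minimal numeric sketch of that point, with made-up numbers and only shot noise considered (no read noise or sky model):

    photon_rate_per_arcsec2 = 5.0   # assumed e-/s per square arcsecond from the target
    exposure_s = 300.0              # assumed sub length in seconds

    for pixel_scale in (2.0, 1.0):  # "/px: 1.0 is sampled twice as finely as 2.0
        photons = photon_rate_per_arcsec2 * pixel_scale**2 * exposure_s
        snr = photons ** 0.5        # shot-noise-only SNR = sqrt(signal)
        print(pixel_scale, photons, round(snr, 1))

Halving the pixel scale quarters the photons per pixel and halves the per-pixel SNR, so matching the same SNR takes roughly four times the total imaging time.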
  2. I worry about that all the time. Do you have any idea what sort of working resolution you want to achieve?
  3. I will look into that at the first opportunity. My computer died on me, so I now have a new machine with a brand new install of Windows 11 (that was an adventure to install and get going), and I'm missing all the software needed to analyze the data. Hopefully I'll install it all in the next few days.
  4. There is almost no difference between the two. With a CCD camera there truly is no difference. With CMOS cameras and software binning there is only a minor difference in read noise. Read noise is larger for binned pixels than for regular pixels, but with stacking that is not really an issue, as we adjust exposure length to overcome read noise. We could say that a small drawback of using small pixels and binning is the need for longer individual exposures in the stack. Many people have heard that CMOS sensors enable short exposures, and that is true, but when you over sample that advantage slowly diminishes. If you over sample by a factor of two, for example (with the intention of binning x2 later), your single sub needs to be about x4 longer. If optimal sampling would allow 1 minute exposures, at twice over sampled you need 4 minute exposures to overcome read noise.

If you over sample, you don't get any additional definition. There is nothing to be recorded above the optimum sampling frequency. The image simply won't look any sharper or more detailed than if you had sampled properly. In fact, that is one of the tests for over sampling: if you can reduce your final image to a smaller size by resampling, then bring it back to the original size, and it does not change, then you over sampled (see the sketch after this post). Images usually lose detail when you downsize and then upsize them back, unless they were over sampled to begin with and the detail was never there.

If you are not interested in wide field and you want to maximize the detail you capture, you should aim for optimum sampling. That is what this tool should be for: if you know what you want, it should tell you which combination of camera and scope will provide it. The workflow would be like this - you set your goal for the level of detail you want to achieve, say 1.5"/px, then you estimate the average seeing at your location and note down your mount performance (the usual guide RMS you get). You then have all the information to try different camera / scope combinations and find the one that is the closest match for your target resolution. The tool will also be useful if you set an unrealistic goal like sampling at 1"/px. It is very hard to achieve that sort of resolution in anything but the best conditions with a moderately large aperture (8"+). It will show you that there is little to be done if your seeing is regularly 2"-3" or your mount performance is 1" RMS guided.

It needs to be included, as it adds to the overall blur of the image and reduces detail (in technical jargon, it acts as a low pass filter and removes the high frequencies we have been mentioning above). If the image is blurred, it does not make sense to use very small pixels / high imaging resolution, as there is simply no detail to be captured. That is what we mean by over sampling: using pixels that are too small for the level of blur in your image. We can never know what sort of skies and mount performance we will have on any given night, but we can estimate from our usual conditions. If the mount never guides better than, say, 0.7" RMS, there is no point in expecting it to suddenly start guiding at 0.3" and reduce its blur in the final image.

If you are doing EEVA, you might say that mount error is effectively 0" RMS. With very short exposures the mount simply does not have time to drift / be corrected / drift / be corrected over many cycles within one exposure. In an exposure of a few seconds the mount error will be minimal, and you can treat it as effectively zero. This in turn means that FWHM estimates are based on aperture and seeing only. You can check whether there is a mount contribution by comparing FWHM values of a final stack of, say, 2s exposures versus the exposures you usually use for EEVA, like 20s or 30s. The only other difference that I see is the use of guiding, and it can be handled as above or, as you suggest, with a tick box: if that box is ticked, mount error is set to 0 and you can't choose a mount / mount error in further calculations. Other than that, maybe insist on optimum sampling and give a stronger warning when over sampling? That is for EEVA practitioners to advise - what is more important, getting the image as fast as possible or getting the best definition of that image (the difference is very small for slight under sampling, but it can be significant in the amount of time needed to observe the target).
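Here is a minimal sketch of that downsize-and-restore check, assuming a 2D numpy image and using scipy's zoom for the resampling (the function name and the factor of 2 are just illustrative):

    import numpy as np
    from scipy.ndimage import zoom

    def oversampling_check(img, factor=2):
        # Shrink, then enlarge back, and see how much the image actually changed.
        small = zoom(img, 1.0 / factor, order=3)
        back = zoom(small, float(factor), order=3)
        h = min(img.shape[0], back.shape[0])
        w = min(img.shape[1], back.shape[1])
        residual = img[:h, :w] - back[:h, :w]
        return np.std(residual) / np.std(img)

    # img = ...  # your stacked, linear image as a 2D float array
    # print(oversampling_check(img))   # a ratio near the noise level suggests over sampling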
  5. That scope is excellent. The optics are color free and the scope is very well corrected. It really does not matter that it is not FPL-53 - it is a triplet. FPL-53 vs FPL-51 would be significant in a fast ED doublet. Given that this is a triplet, the correction is more than fine, in fact better than FPL-53 doublets. Have a look at this review by Ed Ting; there is a part on AP as well:
  6. In that range, the 130PDS is the scope to go for for astrophotography. If you have a friend with a 3D printer, maybe you could print some replacement parts / add-ons to make the Heritage a more stable imaging platform? A member of my local forum 3D printed a replacement focuser for his Astromaster 130 Newtonian and takes images with a DSLR on it. https://www.thingiverse.com/thing:2552565 Maybe you could do something similar with your scope if you have access to a 3D printer?
  7. Yes you can, but the results won't be as good as in full darkness. The Moon in the sky has a similar effect, and so does light pollution. You need much more total exposure than you would otherwise need under dark skies.

There are different ways of doing this: 1. using brighter stars for orientation; 2. using some sort of plate solving - this is really the same as above, except that you let the computer examine the stars (so it can work even with faint ones) and tell you where the scope is pointing. Same as above - by looking at star patterns in a test exposure. You look at planetarium software, note the star pattern around the target, and then try to recognize it in the test exposure. Or you can let the computer do that for you.

Photographing planets uses a very different approach. DSLRs are not the best option for capturing planets; there are dedicated planetary cameras for that. You record a movie, or a sequence of very short exposures that try to freeze the seeing, and software later stacks those exposures. Which Barlow to use depends on camera pixel size and telescope F/ratio; there is a formula for determining the F/ratio needed for the best image (most detail / largest image) - see the sketch after this post.

By looking at declination drift. If you get star elongation in the direction of declination, you need to improve your polar alignment. You can also use software to calculate how good your polar alignment is; SharpCap has this option.

Yes and no. You can get a second motor and you can guide your mount, but the computer can't "look" through the DSLR while it is imaging. You need a separate small camera and a way to attach it, either to the finder or to the main telescope. This is called guiding and yes, most people doing astrophotography do it at some point. It is optional. The benefit is that guiding lets you correct for poor polar alignment, and with both axes powered a computer can turn your mount into a goto and automate centering of your target. For simple astrophotography you don't need it: if your polar alignment is good, once you've located your target it should not move in declination, so there is no need to move the mount in that axis.

UHC and LPS filters are different types of filter. UHC is good for emission-type targets only, while LPS filters can be used on any target. UHC is a good filter to have, as long as you use it for what it is intended for (emission-type nebulae). I've only used the IDAS LPS P2 and it is a good LPS filter, but I'm afraid it is a bit over your budget. Not sure what to recommend inside your budget.

Yes - you can use only the central portion of your field of view and the stars will be fine there. You lose field of view that way, but what is left has good definition. You can judge for yourself how much of the field to keep, depending on how much distortion is acceptable to you (it is progressive and rises with distance from the center).

The focuser on the Heritage will not hold a DSLR properly and will tilt, which will cause issues with the image.

Mount payload is usually given without counterweights, meaning gear only. If an EQ5 has, say, a 10kg payload, that means it can carry 10kg of gear plus 10kg of counterweights.

Yes you can, and that is the purpose of setting circles, but in practice it almost never works except for very wide fields. Here is a video explaining why:
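For reference, the commonly used version of that F/ratio rule is critical sampling of the diffraction cutoff; this is my own sketch of it, with assumed example numbers rather than anything from the post above:

    wavelength_um = 0.55        # assumed green light
    pixel_um = 2.9              # assumed camera pixel size in microns

    f_ratio_needed = 2.0 * pixel_um / wavelength_um
    print(round(f_ratio_needed, 1))   # ~10.5, i.e. roughly pixel size x 3.6 at 550nm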
  8. I don't think the point of this thread is to agree with me, at least that is not why I started it, but I would be happy if everyone agreed on the correctness of the theoretical framework - at least those bothered enough to follow through and try to understand what is being said. I would be very thankful if you actually put some effort into convincing me otherwise if I'm wrong about something. It does not require much - just a pointer to a mathematical explanation of why I'm wrong (I'm very capable of following mathematical arguments) and/or an example. When you mistakenly thought that pixels are squares, I did put some effort into showing you that this is not the case: pixels are point samples and the squareness arises from the use of nearest neighbor interpolation. If you still have doubts about that, I can offer yet another example to confirm pixels as point samples. That is why I asked: it's not about me liking your objections. The question is - do they have merit? The one about square pixels does not. I also appreciate your other objections - like band limited signal and any artifacts arising from possible higher frequencies.

The telescope aperture is an ideal low pass filter. It effectively kills off every frequency above the critical frequency. They are not just attenuated - they are completely removed. The MTF of an ideal aperture looks like this:

Seeing and guiding provide another strong low pass filter. Over a long exposure these produce a Gaussian blur (central limit theorem, https://en.wikipedia.org/wiki/Central_limit_theorem ). The Fourier transform of a Gaussian is a Gaussian, so these also act as a low pass filter with Gaussian shape in the frequency domain: https://en.wikipedia.org/wiki/Gaussian_filter

All of these restrict possible high frequencies and act as a strong low pass filter. We have a means of determining the approximate cut off frequency by simply examining the FWHM of stars in the image. They are well approximated by a Gaussian shape, and since the combined PSF convolves the image and stars are point sources, the star profile is the PSF profile. For this reason we never see aliasing artifacts in astronomical images. There are plenty of under sampled images created by amateur astronomers - just look at wide field shots at 3.5"/px or coarser. Some of those images end up on APOD. We can even demonstrate that under sampling does not create visual artifacts in the final image, even in high frequency scenarios like globular clusters. Here is an example: this is M13 sampled at 8"/px - that is very under sampled - yet the image looks fine. There is visually nothing wrong with it; it does not show any artifacts.

I don't think you properly understand what I'm saying. I'm not trying to convince anyone to purchase a large pixel camera - no more than the Astronomy.tools CCD suitability tool is already doing. What I'm saying is that we need to change that tool to:

1) Correctly determine over / under / correct sampling for the set of parameters the user enters - like seeing, scope aperture and mount performance. The tool already does this, but in a limited capacity which sometimes leads to wrong results.

2) Advise users on any ill effects of their choice. Over sampling has the big problem of making the system much slower than it needs to be unless the data is binned. No advantage comes from over sampling in long exposure imaging - no additional detail will be captured. Under sampling won't capture all the available detail, but besides that it won't cause any other problems. The only problem that can arise from under sampling is aliasing artifacts. Those, however, don't appear in astronomical images because of the strong low pass filter that seeing + guiding + scope aperture represent. Higher frequencies are killed off, and those that are not are attenuated below the noise floor (noise has a uniform distribution in the frequency domain while the signal is attenuated by the Gaussian and MTF curves) - see the sketch after this post.

Some people want a wide field of view and there is nothing wrong with being under sampled in that case. They are willing to let go of that last 10% of detail/sharpness/contrast for a wide field image. You can't have both - at least not yet. We don't have good large sensor astronomy cameras with pixels below 2µm. When those cameras are made, the tool will still be viable: people wanting wide field will be advised to bin by a certain factor, given their small pixel size, for best results.
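A small sketch of that attenuation argument, treating the combined PSF as a pure Gaussian (an approximation, and the FWHM and pixel scales are assumed examples):

    import numpy as np

    def gaussian_mtf(fwhm_arcsec, freq_cyc_per_arcsec):
        # Frequency response of a Gaussian PSF with the given FWHM.
        sigma = fwhm_arcsec / 2.3548
        return np.exp(-2.0 * np.pi**2 * sigma**2 * freq_cyc_per_arcsec**2)

    fwhm = 3.5                        # assumed star FWHM in arcseconds
    for scale in (0.8, 2.2, 3.5):     # assumed pixel scales in "/px
        nyquist = 1.0 / (2.0 * scale) # highest frequency the sampling can represent
        print(scale, round(gaussian_mtf(fwhm, nyquist), 3))

At roughly FWHM/1.6 (2.2"/px here) the response at the Nyquist frequency is already down to about 10%, and at finer sampling essentially nothing of the signal is left at those frequencies to alias.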
  9. We are in total agreement here. That was my position from the start: over sampling is bad, and under sampling can be used for a wider field without any issues. The tool needs to reflect this and should correctly calculate over / under / correct sampling. Currently it is wrong, as it puts optimum sampling higher than it really is - which causes most people to over sample while thinking they are in the safe zone.
  10. In this case we did not forget about it - it has been modeled. We are not looking at the diffraction limit of the telescope; instead we calculate the approximate expected FWHM given certain parameters: scope size, seeing and mount tracking. That is one of the objections to the original tool, which uses seeing alone and does not account for mount tracking performance or scope aperture. All three add up in long exposure astrophotography to create the resulting star profile FWHM, which determines the actual detail in the image. This FWHM is then used to calculate the optimum sampling rate and give a recommendation based on that. There can be three cases:

1) Over sampling. I advocate that in this case binning should be presented as an option and the drawbacks of over sampling in long exposure astrophotography explained - namely lower SNR without extra detail if one does not bin to recover SNR.

2) Proper sampling. This should get an A-OK, maybe with a note that binning would improve speed with a very slight reduction in detail.

3) Under sampling. This should get an A-OK with a note that some detail/sharpness is lost, but the system will be fast.

I completely agree. This is also the problem: most novice astrophotographers looking at the Astronomy.tools CCD suitability tool are not aware of how binning should be used or what resolution they should be working at given the FWHM in their images. They probably don't even know how to measure FWHM. This is something they have to be told, and the above tool should tell them: "look, you are sampling at say 0.8"/px - but you'll have 4" FWHM stars with your little scope. You should really be sampling at 4"/1.6 = 2.5"/px - bin your images x3 to get to 2.4"/px, which is close enough to your sampling target" (a small sketch of that calculation follows this post).
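The advice in that last quote as a tiny calculation - the FWHM/1.6 rule and the example numbers come from the post, the rounding choice is mine:

    fwhm = 4.0        # measured star FWHM in arcseconds (example from the post)
    current = 0.8     # current sampling in "/px

    target = fwhm / 1.6                           # optimum sampling for that FWHM
    bin_factor = max(1, round(target / current))  # nearest integer bin that gets close
    print(target, bin_factor, current * bin_factor)   # 2.5"/px, bin x3, 2.4"/px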
  11. I think that most of us in fact own diffraction limited telescopes - most of us have telescopes with a Strehl ratio above 0.8. In fact, if a telescope is not diffraction limited, most of us will call it a lemon and get another telescope, as high power views would be noticeably blurred. This thread deals precisely with that aspect, among other things: a formula is given to approximate the resulting FWHM of the star profile in an exposure given seeing, telescope aperture and guiding performance.

I will now address some of the things you have quoted from their paper. You can't have an optical system with a limited aperture size that is not bandwidth limited. Bandwidth limited has a different meaning from diffraction limited, although they have some things in common. Bandwidth limited means there is a limit to what can be resolved with a given lens. Some lenses are sharper and some are less sharp, but each has a limit of sharpness - a limit on the bandwidth of information it can record. There is no such thing as a lens that is band unlimited; that would equate to an 80mm telescope able to resolve rocks on Ganymede from Earth's orbit, which simply can't exist. Diffraction limited means that the lens is basically as good as it can be and that the blur is not due to imperfections in the glass but to the laws of physics - the wave nature of light causes the blur. More specifically, diffraction limited is said of apertures that have a Strehl ratio of 0.8 or higher.

Every system that is band limited (so any lens / aperture, as they are band limited by nature) has its own Nyquist rate. By definition, the Nyquist rate is twice the highest frequency component of the signal, and any band limited signal has a maximum frequency. Only band unlimited signals lack a maximum frequency, as their spectra extend to infinity. They have the MTFs of their lenses - they don't have to assume anything; it is enough to have the MTF and you can easily read off the required sampling for that lens. For example: the teal dashed MTF ends at 310 cycles per mm. That means the wavelength of the maximum frequency component is 1000µm / 310 ≈ 3.2µm, and the optimum pixel size for that lens is half that, ~1.6µm. The orange dashed MTF ends at 420 cycles per mm, so the finest cycle is 1000µm / 420 ≈ 2.4µm, and the optimum pixel size for that lens is half that, 1.2µm. (A small sketch of this conversion follows this post.)
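The same conversion in code form, for the two cut-off values read off the graph:

    for cutoff_cycles_per_mm in (310, 420):                # MTF cut-offs from the graph
        finest_cycle_um = 1000.0 / cutoff_cycles_per_mm    # size of the finest resolvable cycle
        pixel_um = finest_cycle_um / 2.0                   # Nyquist: two pixels per cycle
        print(cutoff_cycles_per_mm, round(pixel_um, 2))    # ~1.61um and ~1.19um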
  12. With a 6" scope, everything that can be recorded from a 1.2" disk fits into about 3.5 px across. Overall, about 10 px will cover the disk (so really a 3x3 matrix). Anything more than that is over sampling. (A rough check of those numbers follows this post.)
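A rough check of those numbers, assuming ~500nm light (an assumption on my part, the post does not state a wavelength):

    import math

    aperture_mm = 150.0       # 6" scope
    wavelength_nm = 500.0     # assumed wavelength
    disk_arcsec = 1.2

    critical_sampling = (wavelength_nm * 1e-9 / (2.0 * aperture_mm * 1e-3)) * 206265  # "/px
    px_across = disk_arcsec / critical_sampling
    px_covering = math.pi * (px_across / 2.0) ** 2
    print(round(critical_sampling, 2), round(px_across, 1), round(px_covering))  # ~0.34, ~3.5, ~10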
  13. Actually, in that particular example I learned something new thanks to you - the slanted edge method.
  14. I can, and most of the time I do cite sources that you can verify for the things I "claim". I always try to make my examples repeatable by others. If you are ever in doubt about how I did something, please ask and I'll walk you through it step by step.
  15. Do you have any counter arguments to the arguments I made about the validity of the claims in that paper? Is their phone really over sampling, as they claim?
  16. Not sure about real time, but here is a realistic Ganymede texture map: https://www.deviantart.com/askaniy/art/Ganymede-Texture-Map-11K-808732114 Maybe you can map features to it? More resources: https://astrogeology.usgs.gov/search/map/Ganymede/Voyager-Galileo/Ganymede_Voyager_GalileoSSI_Global_ClrMosaic_1435m
  17. Well, this part is important - that is how you measure e/ADU. There is a relationship between noise and electron counts: the shot noise is the square root of the electron count. This, however, does not hold for ADUs. Say the gain is 0.25 e/ADU and you receive 100e per pixel - then the standard deviation of such a frame will be 10e (this is a simplification with no read noise or dark current and so on, but these can be accounted for in a real measurement). At 0.25 e/ADU you will record 400 ADU, but the standard deviation will be 40 ADU (10 / 0.25) instead of the sqrt(400) = 20 expected for pure Poisson counts. The signal in ADU scales linearly with 1/gain, but the variance scales with its square, so the ratio of mean to variance gives back e/ADU: 400 / 40^2 = 0.25 (equivalently, the measured 40 ADU is twice the expected 20, and 1/2^2 = 0.25). From the measured standard deviation and the measured average ADU you can therefore get e/ADU (a small sketch of this follows this post). This is best done over several different exposure lengths, and one must account for read noise and dark current (they need to be subtracted properly), but that is the principle. I was just wondering whether SharpCap did an actual measurement or relied on the driver-reported value.
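A sketch of that measurement, using the two-flat variant of the photon transfer method (flat_a and flat_b are placeholder names for two matching flat frames; read noise and dark current are ignored here, as noted above they must be handled in a real measurement):

    import numpy as np

    def estimate_e_per_adu(flat_a, flat_b):
        mean_adu = 0.5 * (flat_a.mean() + flat_b.mean())
        # Differencing two identical flats removes fixed pattern; the variance of the
        # difference is twice the per-frame shot-noise variance.
        var_adu = np.var(flat_a.astype(float) - flat_b.astype(float)) / 2.0
        return mean_adu / var_adu     # e/ADU = mean / variance for shot-noise-limited data

    # With the numbers from the post: mean 400 ADU, standard deviation 40 ADU
    print(400.0 / 40.0**2)    # 0.25 e/ADU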
  18. I'm not sure what is lacking in my explanations that you still maintain your position, so I'll give it one more try with different articles from Wikipedia explaining the concepts: https://en.wikipedia.org/wiki/Multivariate_interpolation and https://en.wikipedia.org/wiki/Image_scaling, as well as the dedicated page at https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation
  19. Could you be more constructive than that and say exactly what you disagree with? Is this your final position, regardless of the things I've pointed out above?
  20. How does SharpCap measure the e/ADU displayed in the table?
  21. Look, this is going a bit in circles now. Do you have any objections to any of the three points that I wrote? @Martin Meredith You seem to be under the impression that under sampling is a bad thing, but you seem to agree with the rest of it? Do you still think that the squares are the actual pixels, or have you accepted that the squareness comes down to the choice of interpolation?

Here is one more simple explanation that should be easy for most to follow. Let's look at only two adjacent pixels, and make it a 1D case (2D is just the same) - like sound samples, or pixels on the number line. The pixel at position 0 has a value of 10 and the pixel at position 1 has a value of 11. That is all we know. The question is: what is the value of this function at positions 0.25, 0.5 and 0.75? We don't actually have a number for any of those positions; we just have two numbers, one at position 0 and one at position 1.

We can say - well, that is easy - let's round the position and take the number from the position that we do have:
round(0.25) = 0, so 0.25 gets the value at position 0, which is 10
round(0.5) - we can choose to round up or down - let's round down to 0, so 0.5 also gets 10
round(0.75) = 1, so 0.75 gets the value at position 1, which is 11

But hold on - wouldn't it be better to draw a line between the two points we have and read off the height of that line instead? In that case position 0.25 gets the value 10.25, position 0.5 gets 10.5 and position 0.75 gets 10.75.

I just described two different ways of "filling in the blanks" - that is, interpolation (a small code version of this example follows this post). Here is an image from Wikipedia showing three basic interpolation types - nearest neighbor, linear and cubic: as you can see, we only have the dots, and the rest depends on how we fill the gaps. Among other things, the Nyquist sampling theorem specifies how to interpolate in order to perfectly restore the original data / function that we sampled: https://en.wikipedia.org/wiki/Whittaker–Shannon_interpolation_formula Why isn't the Sinc function simply used for interpolation? Because Sinc extends off to infinity on both sides (or, in the case of images, in both width and height) and it is not practical to do so. The next best thing is Lanczos, which is a windowed Sinc function.
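The same two-pixel example written out as code, nothing beyond what is described above:

    samples = {0: 10.0, 1: 11.0}      # the two known pixels from the example

    def nearest(x):
        # Round the position and take the sample we have there
        # (Python rounds 0.5 to the even neighbour, i.e. down to 0 here).
        return samples[int(round(x))]

    def linear(x):
        # Read the height of the straight line drawn between the two samples.
        return samples[0] + (samples[1] - samples[0]) * x

    for x in (0.25, 0.5, 0.75):
        print(x, nearest(x), linear(x))
    # nearest: 10, 10, 11   linear: 10.25, 10.5, 10.75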
  22. Well, maybe it is published somewhere, but I doubt it would pass peer review with such statements. They even contradict themselves in the text - for example, look at the graph: they mark lines that represent sampling for different pixel sizes, including the 1.4µm pixel, being the rightmost. A 1.4µm pixel corresponds to a 2.8µm cycle, which is 1000µm / 2.8µm = ~357 cycles per millimeter.

a) They have marked half of that on the graph, about 180 cycles per millimeter - clearly using half instead of twice the frequency for some odd reason.

b) The lenses discussed have a much higher MTF cutoff frequency than that - over 600 cycles/mm, probably reaching the 757 cycles/mm that is the theoretical maximum for an F/2.4 lens.

How can that phone then be over sampling? Yet that is the premise of the paper and also its conclusion. This is the start of their conclusion: I agree with the first part - if you control the "/px of your setup then yes, sensor size and the choice of optics needed to reach the wanted "/px dictate the speed of the setup. But then it goes on to say that smartphone optics have higher than "Nyquist information", whatever that means. This simply can't be true or mean anything sensible. The amount of information is dictated by aperture size. This follows from the theory of light and diffraction and is well established. The Nyquist sampling theorem is also well established - it dates back to the 1950s. If it were somehow flawed, we have had 70-odd years to disprove it (it is really a mathematical proof, so there is nothing to disprove unless we turn math upside down), yet much of modern telecommunications and signal processing relies on it.
  23. By the way, the paper you linked is not a scientific paper - it is not peer reviewed nor published in any journal, and funnily enough, the authors all have e-mail addresses at Nokia.
  24. That paper discusses something completely different and is questionable in its methodology and conclusions. As an example, they claim that there is a mobile phone that is over sampling. In order to critically sample an F/2.4 lens, one needs a 0.66µm pixel: if we put F/2.4 and 550nm into the equation we get 1 / (0.00055 * 2.4) = 757.576 cycles per millimeter, and you need twice that many pixels per millimeter to sample it properly (Nyquist), so optimal sampling is 1 / (2 * 757.6) ≈ 0.66µm (see the sketch after this post). The pixels in this "over sampling" camera are at least twice as large as they should be for optimal sampling, let alone over sampling.

What they have actually shown in that paper is the following: if you sample an image at a higher frequency than another image and then resample it down using one of the standard resampling algorithms (which have cut-off filters built into them), you will get fewer aliasing artifacts - not because of the sampling, but because of the cut-off filter that is an essential part of the interpolation algorithm used for image reconstruction. That is their conclusion: both images are made the same way, only the Nokia one is taken at the original sampling and then resampled down to 5Mpix - and it suffers from less moiré because of that (but is not completely free of it, as moiré is an aliasing artifact that you can't avoid if your signal contains a lot of strong high frequency components).
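That calculation in code form (same formula and numbers as above):

    wavelength_mm = 0.00055    # 550nm expressed in millimetres
    f_ratio = 2.4

    cutoff_cycles_per_mm = 1.0 / (wavelength_mm * f_ratio)       # ~757.6 cycles/mm
    critical_pixel_um = 1000.0 / (2.0 * cutoff_cycles_per_mm)    # two samples per finest cycle
    print(round(cutoff_cycles_per_mm, 1), round(critical_pixel_um, 2))   # 757.6, 0.66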
  25. You don't care because of loss of SNR? Binning will have the same effect as using larger pixels. If you can bin to optimum sampling then fine, but if you bin to the same level of under sampling, things will be the same as regular under sampling. You can't recover SNR and still be over sampled (although there is no need to be - but again, if you don't agree with that, there is only so much I can do about it).

None of them in the above image is wrong. It should look like this:

Everything else is some sort of interpolating algorithm, and nearest neighbor is the most "artifact-inducing" one. If you really want to know what sort of artifacts each interpolation method induces, I suggest you do some tests. In fact, I'll do them later today and post here. It is rather simple: we make a function, we sample it, we reconstruct it using different interpolation methods and we compare to the original (subtract the two and look at the residual). A minimal version of that test is sketched below.
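Here is a quick sketch of that test, with an assumed sine (sampled below its Nyquist limit) standing in for the test function:

    import numpy as np
    from scipy.interpolate import interp1d

    x_fine = np.linspace(0.0, 10.0, 1001)
    truth = np.sin(2.0 * np.pi * 0.2 * x_fine)     # assumed band-limited test signal

    x_samp = np.arange(0.0, 10.01, 1.0)            # one sample per unit
    samples = np.sin(2.0 * np.pi * 0.2 * x_samp)

    for kind in ("nearest", "linear", "cubic"):
        rebuilt = interp1d(x_samp, samples, kind=kind)(x_fine)
        print(kind, round(float(np.std(rebuilt - truth)), 4))
    # Nearest neighbour leaves the largest residual here; cubic gets closest to the original.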