Everything posted by vlaiv

  1. Very complex topic. I'll just outline some of the complexities of it and how it fits together without going too much into detail - because I don't know that detail, and to be honest, I'm not sure we have good models that cover all cases.

     Let's first assume that seeing effects don't depend on aperture size and can simply be modeled with a Gaussian distribution. This is not a bad first-order approximation. In each instant of time the actual seeing-induced aberration is random, but if you average over time then, due to the central limit theorem, the distribution will tend to Gaussian. If you take a star profile in a long exposure and fit a Gaussian to it - you'll get a very good fit. Then there is the second part of the resolution equation - how good your tracking / guiding is. Again these are random displacements from where the mount should be pointing and again, over time, they add up to a Gaussian distribution. The third part of the resolution equation is aperture size - larger telescopes simply resolve more. There is a Gaussian approximation to the Airy disk profile. The fourth part of the resolution equation is quite complex and relates to images rather than what can be captured - it has to do with pixel size - so called pixel blur (the fact that pixels are not point sampling devices but rather have surface), so we'll skip that.

     We have three Gaussian distributions (approximations) that convolve together to produce the final blur in the image. We simply add their standard deviations in quadrature and get the standard deviation of the "total" Gaussian blur. If it were only that - larger scopes would always have the edge over smaller scopes in what they can record resolution-wise, because their Airy disks are simply smaller - they resolve more. But although the difference in resolving capability between, say, a 16" and an 8" scope is a factor of two - when we add guiding/tracking and seeing - it will not be twice any more, but maybe 10% or something like that (variations add in quadrature - like noise).

     Now we return to the beginning and examine what happens with actual scope aperture in a given seeing. Seeing is a complex thing and it is best treated as wavefront perturbation. For that you need to understand a bit about Zernike polynomials, which describe wavefront errors in general (or rather describe curved surfaces as a sum of different basis shapes). Telescope aberrations that we talk about are: tilt, piston, coma, spherical, .... Above you can see the mathematical form of each polynomial and its classical name. This diagram, on the other hand, shows phase difference: here a color spectrum is used to denote different phase error (wavefront lagging or leading). These polynomials make up an orthonormal basis - similarly to 2D or 3D coordinates - you can't use the X coordinate to describe height or depth - only width, so X, Y and Z are linearly independent. So are the above polynomials for phase difference - you can decompose any phase error into component "vectors", i.e. the above polynomials.

     What does this have to do with seeing and telescope size? This diagram will explain: here we have a 1D representation of an aberrated wavefront - it should be just a straight line for a perfect front. We also have three "apertures" - a very small one, a medium one and a large one. They "receive" different portions of the wavefront. The large aperture has it all - 3 peaks, 2 troughs - it will need a lot of those polynomials to get a good approximation of this wavefront. The medium aperture just has one valley - it looks like defocus only - maybe some spherical (Z0,2 and Z0,4). The smallest aperture has an essentially flat wavefront - almost without any aberrations.

     Now it is important to realize that the wavefront error does not scale with aperture size - peaks and troughs don't become smaller as the aperture increases - they are measured relative to the wavelength (the famous 1/4 wave or 1/6 wave). We also need to realize that the wavefront deformation changes on a time scale of milliseconds. In order to understand the effects of seeing on a certain aperture - we need to know how the wavefront changes from instant to instant - limited by the aperture - and then "integrate" the effect decomposed into Zernike polynomials and transform that via a Fourier transform into the PSF of seeing over that particular aperture. It is a really complex thing and the worst part is that we don't know how the atmosphere is changing, and I'm not sure we have good models for different types of seeing. I would be happy if someone came up with the following: for the major Zernike polynomials - average value and standard deviation of the coefficients - over some aperture surface. That is all you need to model (with a computer of course) the difference in seeing effects between small and large apertures. But I'm not sure there is such research. Maybe there is - we do have adaptive optics on large instruments - it has to rely in some part on this theory.

     And now for the grand finale - how much can we sharpen to recover blurred detail? It actually works - sharpening is not just making the image prettier - it is about recovering true detail. It turns out that sharpening is the same thing as amplifying certain frequencies of the image (think Fourier analysis of a signal) - pretty much like using old audio equalizers, when you boost certain frequencies. Blurring is attenuation of high frequency components. Amplifying those components back restores the image. The problem is that we can't separate signal and noise, and when we amplify certain frequencies of the signal - we amplify those frequencies of the noise as well - we make the result noisier. In order to be able to do frequency restoration (a fancy name for sharpening) - you need good SNR, and large telescopes have an advantage here - they can get higher SNR in the same amount of time (if paired with a suitable camera / matched for resolution and all). Hope this answers your question?
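     To make the "add in quadrature" point concrete, here is a small Python sketch; the seeing, guiding and Airy FWHM numbers are just assumed for illustration:

     import math

     # Approximate each blur source with a Gaussian and combine them by adding
     # standard deviations in quadrature (FWHM = 2.355 * sigma for a Gaussian).
     seeing_fwhm  = 2.0   # arcsec, assumed long-exposure seeing
     guiding_rms  = 0.5   # arcsec RMS, assumed guide error (already a sigma)
     airy_fwhm_8  = 0.57  # arcsec, ~Airy FWHM of a 200 mm aperture at 550 nm
     airy_fwhm_16 = 0.29  # arcsec, ~Airy FWHM of a 400 mm aperture at 550 nm

     def total_fwhm(seeing_fwhm, guiding_rms, airy_fwhm):
         sigma_seeing = seeing_fwhm / 2.355
         sigma_guide  = guiding_rms
         sigma_airy   = airy_fwhm / 2.355
         sigma_total  = math.sqrt(sigma_seeing**2 + sigma_guide**2 + sigma_airy**2)
         return 2.355 * sigma_total

     print(total_fwhm(seeing_fwhm, guiding_rms, airy_fwhm_8))   # ~2.39 arcsec
     print(total_fwhm(seeing_fwhm, guiding_rms, airy_fwhm_16))  # ~2.34 arcsec

     Even though the 16" resolves twice as finely as the 8" on its own, the combined blur differs by only a couple of percent under these assumed conditions.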
  2. I'm not quite clear on that, but I think it is not the speed of the telescopes but rather the design and focal length. From what I understood when I was researching the topic (nothing serious, just informative), refractors have field curvature with a radius somewhere around 1/3 of their focal length - regardless of their aperture.
  3. Most coatings bring that down to less than one percent. Here is a graph for one coating type: it also varies with angle - so slower scopes have an advantage there. I think it is safe to say that we have something like 99.5%, or if you want to be conservative 99.2%, per air/glass surface. For a coma corrector with three elements (six air/glass surfaces) that is a total loss of something like 5%. I don't think it is substantial in the above approximation. Just using 406 mm instead of 400 mm for the aperture (a true 16") will change the amount of light by about the same in the other direction - about 3% more light.
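     The arithmetic behind those two estimates, as a quick Python check (the 99.2% per-surface figure is taken from above):

     # Transmission through a 3-element (6 air/glass surface) coma corrector,
     # assuming ~99.2% transmission per coated surface.
     per_surface = 0.992
     surfaces = 6
     print(per_surface ** surfaces)   # ~0.953 -> roughly 5% loss

     # Light-gathering difference between a true 16" (406 mm) and a 400 mm aperture:
     print((406 / 400) ** 2)          # ~1.03 -> about 3% more light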
  4. I guess that should be taken into account by the QE of the sensor, as you can't measure sensor efficiency without the micro lenses? Above, the black line is a measurement of absolute QE by Christian Buil. I doubt that he removed the sensor cover glass or the microlenses from the pixels. Peak QE is ~60% and I would say that average QE is around 50% in the 400-700 nm range? You'll see that I put 50% as QE in the above calculations, so I guess it has been accounted for?
  5. Yes, it's just an approximation. We did not take into account atmospheric extinction, nor do we have exact light pollution info. We did not account for calibration frames either. As an approximation - I would say it's pretty good. Whether the actual SNR in that region is 2 or 3 - it does not matter. It is detectable and indeed - it shows in the image. Signal at mag 23.5 would have SNR of 1 or less and that is just at the noise level - hence it would not be seen in the image - and indeed it does not show. I think it only takes switching from 1 s exposures to something more meaningful - like 2-3 minutes - when capturing deep faint stuff rather than going after resolution, to see a significant improvement in the outer regions. Going with 10x200 seconds would make a mag 24 signal be captured at SNR 2.2 - that would start to show the outer tails (although at the threshold of detection).
  6. In addition to the excellent answer by CraigT82 - not even all fast achros can be used for narrowband imaging. Well, you can, but you might not get as good images as you are hoping for. Fast achromats suffer from spherochromatism. That is spherical aberration that depends on the wavelength of light. In other words - an achromat is corrected for spherical aberration at only one wavelength. Usually that wavelength is green, at around 550 nm for a good achromat - since these are mostly visual instruments. Unfortunately, the only narrow(ish) filter in this wavelength range is the Baader Solar Continuum, and it is useful for imaging the Sun in white light and maybe the Moon. If you want to image for example Ha - you then actually need your scope to be corrected at the 656 nm wavelength. More skilled people do this themselves - by changing the distance between the two lenses of the achromat. This changes the spherical aberration of the system and you can introduce enough spherical to counteract the spherochromatism at the wanted wavelength and correct for it. Complex topic. Here is an example: http://interferometrie.blogspot.com/ - check out the comparison between three fast achromats there.
  7. That is about right. In fact, if you want to be precise, you should take into account whether it is a doublet or triplet and count each air/glass surface (4 for a doublet, 6 for a triplet - air spaced of course) and put something like 99.7% for each. But 0.99 x2 is a good approximation. Those numbers are there because my RC 8" has dielectric mirrors that allegedly have 99% reflectivity.
  8. @dan_adi Let's do a validation of the spreadsheet on the above image of M51 by Emil. I would say that the zone marked by the arrow is barely detectable. He used a 16" F/5 scope and an ASI1600. His image is scaled down 50% - so we can use x2 binning in our calculations (actual binning will yield a slightly better result than scaling down, but they are rather close). From this diagram we can see that this particular part has a magnitude of about 22-23 (the core is at mag 17 and each color is one magnitude step). Let's approximate it with mag 22.5 in our example. I used 400 mm aperture, 26% central obstruction and enhanced coated mirrors (0.94), 2000 mm FL (F/5). I doubled the pixel size and read noise (CMOS binned). I put target brightness at 22.5 and sky brightness at about 20.5 (we don't have info on this, but we can take a guess). SNR is about 3.3 - which is right there in the range to be detectable. No nice features rendered, and you need denoising to make it look smooth, but it will show in the image like this: (I just cropped that part at 1:1 zoom). I guess that is a pretty good calculation?
  9. You say you have a 1400 mm focal length scope? When you bin a camera 2x2 - you need to increase read noise x2 for CMOS cameras and leave it the same for CCD cameras. You also want your sampling rate to be coarser, at about 1"/px. You can sample finer than that - but that requires a large scope. Let's for example take an ASI1600 binned x2. Same as above, 1 s exposures, 2 h total. Total SNR of the stack = ~0.12052 for a mag 26 target. SNR 5 is needed to have the thing comfortably displayed after processing. SNR 2-3 is for detection. If you don't mind having noisy, only hinted-at outer parts of a galaxy, then you can go for SNR 3 instead. The key here is sampling rate and keeping read noise low. If you get a good x0.5 focal reducer, then you don't need to bin and read noise stays at 1.7e. SNR after stacking is then ~0.2184.
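     Here is a small Python sketch of those two numbers - the x2 read-noise penalty for software-binned CMOS and the change in sampling rate. The 3.8 um pixel size and 1.7e read noise are assumed ASI1600-like values:

     import math

     read_noise = 1.7        # e-, assumed per-pixel read noise
     pixel_size = 3.8        # um, assumed pixel size
     focal_length = 1400.0   # mm, as stated above

     def sampling_rate(pixel_um, fl_mm):
         # arcsec per pixel = 206.265 * pixel size [um] / focal length [mm]
         return 206.265 * pixel_um / fl_mm

     # CMOS "binning" sums four pixels after readout, so their read noises
     # add in quadrature: sqrt(4) = 2x the single-pixel value.
     binned_read_noise = math.sqrt(4) * read_noise

     print(sampling_rate(pixel_size, focal_length))       # ~0.56"/px unbinned
     print(sampling_rate(pixel_size * 2, focal_length))   # ~1.12"/px binned 2x2
     print(binned_read_noise)                             # 3.4 e-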
  10. Not sure if you understood me properly. I was simply saying that if the OP does not want to guide - a mount with encoders will provide better results than one without encoders. If the OP wants to guide (and that is the only sensible option to me), then I simply said that encoders are too expensive and not needed in that case. I might be wrong though. Do you find the EC version of the mount to perform better when you guide?
  11. Excellent capture. Maybe it would be interesting to try planetary-type wavelet sharpening on that core / jet? It has good SNR on its own. A quick test on the 8-bit JPEG that you posted gives very nice results:
  12. You should really check mag 26 for example - what a two-hour stack of 1 s subs brings with a 10e read noise camera. Then switch to a 1.5e modern CMOS camera and check the same thing. I think it is modern low read noise cameras that enable the DSO lucky imaging approach - as the only difference in total SNR between a stack and a single exposure lasting as long as the total time of the stack - is in read noise. If you by any chance had a camera with 0e read noise - you could in theory do "planetary type" imaging of galaxies - meaning exposures in milliseconds. There would be other issues - like the amount of data, or problems with aligning frames, as you need enough signal from stars in a single exposure to align subs properly - but exposure length would not be an issue. With an appropriate camera it is very much possible. By appropriate I mean - large enough pixels to get a decent sampling rate like 1"/px and very low read noise.
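     A minimal Python model of that comparison; the per-pixel electron rates below are pure assumptions for illustration - only the read noise values matter for the point being made:

     import math

     def stack_snr(target_e_s, sky_e_s, dark_e_s, read_noise, sub_s, n_subs):
         # Stack SNR from shot noise of target + sky + dark, plus read noise per sub
         total_s = sub_s * n_subs
         signal = target_e_s * total_s
         shot = (target_e_s + sky_e_s + dark_e_s) * total_s
         read = n_subs * read_noise ** 2
         return signal / math.sqrt(shot + read)

     target = 0.001   # e/px/s, assumed faint target
     sky    = 2.0     # e/px/s, assumed light pollution
     dark   = 0.01    # e/px/s, assumed dark current

     # 7200 x 1 s subs (2 h total):
     print(stack_snr(target, sky, dark, 10.0, 1, 7200))  # ~0.008, high read noise
     print(stack_snr(target, sky, dark, 1.5,  1, 7200))  # ~0.04, low read noise CMOS
     print(stack_snr(target, sky, dark, 0.0,  1, 7200))  # ~0.06, hypothetical 0e camera

     With 1 s subs the 10e camera is completely read-noise limited, while the 1.5e camera gets within striking distance of the zero-read-noise ideal.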
  13. I agree, 600 sec is about right for a 10e read noise camera. For single sub duration - you did not have to go through all that trouble. It depends on the other noise sources compared to read noise at your given resolution. What target has a brightness of 18 mag/arcsec²? That is very bright stuff - as bright as a Bortle 8-9 sky. Actual surface brightness of objects is hard to find. Usually you get average surface brightness, and for a good image you want the faintest parts of your target to have SNR of about 5. On an average galaxy, the faintest parts are mag 28-29 or thereabouts, and most galaxies have higher average surface brightness. For example: Stellarium lists M51 as being mag 21.45. But that is the average magnitude of the object. For the actual profile, look here: the faintest parts are at about mag 27.
  14. That is correct - I just used the above image as an illustration of the Fourier transform of a signal.
  15. Not an easy answer, I'm afraid. Nevertheless, let's take it step by step. First we establish some base "rules" for SNR calculation. We model the process with four types of noise and one type of signal - the target signal we are interested in. The noise types are:
      - read noise - given in electrons for a particular camera model - added per exposure
      - thermal noise - given as dark current at a certain temperature in e/px/s (electrons per pixel per second). We actually want the thermal noise, and it is modeled as Poisson noise related to the thermal signal / dark current. The magnitude of that noise is the square root of the dark current signal (which depends on exposure length)
      - light pollution noise - here we need to know sky brightness in magnitudes per arc second squared. Again a Poisson process - the associated noise is the square root of the signal accumulated in the exposure
      - target noise - same as light pollution noise - but this time we take target brightness in magnitudes per arc second squared - and use the square root

      Read noise and thermal noise are straightforward - we have those from camera specs (or we can measure them ourselves). Sky brightness can be measured or read off from websites like lightpollution.info. Be aware that this info depends on the conditions of the particular night - so we use an approximate value in our calculations. We need to know our sampling rate in arc seconds per pixel. We need to know atmospheric extinction. Telescope aperture and reflectivity of mirrors / transmission of glass elements as well.

      We start from the fact that a mag 0 source produces 880,000 photons per second per 1 cm squared at the top of the atmosphere. We adjust target magnitude by atmospheric extinction. We find the clear aperture of our telescope - aperture surface minus central obstruction surface, times reflectivity / transmission of each glass/mirror component. For example, a Newtonian will have (aperture_radius^2 - co_radius^2) * pi * 0.94 * 0.94. For enhanced mirrors reflectivity is about 0.94, for standard it is 0.91, StarBright 0.97 and so on... We also need the quantum efficiency of the sensor. From these we can calculate signal per pixel per exposure - from both target and sky. That gives us noise from target and sky (square root); we then have read noise and dark current noise to add. Noise adds in quadrature - like linearly independent vectors (square root of sum of squares). If you want to be pedantic - add in calibration frames and their noise, then "stack" the wanted number of subs and calculate total SNR. Or rather - put everything in a spreadsheet and solve for how much time / how many exposures you need for the target SNR. You'll find an example of such a spreadsheet in the attachment here. It does not include calibration frame noise. SNRCalc-english.ods
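      Here is a minimal Python sketch of that recipe; it follows the steps above but leaves out calibration frames, and every parameter value in the example call is an assumption rather than a measurement:

      import math

      PHOTONS_MAG0 = 880_000   # photons / s / cm^2 at the top of the atmosphere, for mag 0

      def flux_e_per_px_s(mag_per_arcsec2, extinction_mag, aperture_mm, co_mm,
                          transmission, qe, sampling_arcsec_px):
          # Photon flux for the extinction-adjusted surface brightness
          photons = PHOTONS_MAG0 * 10 ** (-0.4 * (mag_per_arcsec2 + extinction_mag))
          # Clear aperture area in cm^2: (r_ap^2 - r_co^2) * pi, times optics transmission
          area_cm2 = math.pi * ((aperture_mm / 20) ** 2 - (co_mm / 20) ** 2) * transmission
          # Surface brightness is per arcsec^2, so scale by pixel area on the sky
          pixel_area = sampling_arcsec_px ** 2
          return photons * area_cm2 * qe * pixel_area

      def stack_snr(target_mag, sky_mag, extinction, aperture_mm, co_mm, transmission,
                    qe, sampling, read_noise, dark_e_px_s, sub_s, n_subs):
          t = flux_e_per_px_s(target_mag, extinction, aperture_mm, co_mm,
                              transmission, qe, sampling)
          s = flux_e_per_px_s(sky_mag, 0.0, aperture_mm, co_mm,
                              transmission, qe, sampling)
          signal = t * sub_s * n_subs
          noise = math.sqrt((t + s + dark_e_px_s) * sub_s * n_subs
                            + n_subs * read_noise ** 2)
          return signal / noise

      # Example call with assumed parameters (16"-class Newtonian, binned CMOS camera):
      print(stack_snr(target_mag=22.5, sky_mag=20.5, extinction=0.3,
                      aperture_mm=400, co_mm=104, transmission=0.94 * 0.94,
                      qe=0.5, sampling=1.6, read_noise=3.4, dark_e_px_s=0.01,
                      sub_s=60, n_subs=120))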
  16. It has to do with the 2D signal that represents the image. Any signal can be decomposed into a sum of sine/cosine waves - Fourier analysis. With sound, we are talking about cycles per second. With an image, we are talking about cycles per unit length - spatial frequency. It is related to the resolution of the image, but not in the simple sense of high frequency = small detail, low frequency = large detail. There is such a relationship to some degree, but frequencies represent contrast more than actual detail / features. This image can be used to understand it: in order to form a pulse-train-shaped function - you need infinitely many frequencies (because it has very sharp, vertical edges), but if you limit yourself to finite frequencies, depending on how "fine grained" you want to be - you can either have just a plain sine wave - edges will be very smooth and contrast will vary gradually, or you can add more frequencies (middle ones) - and you will get the green line - closer to what we want but still smooth, or add even higher frequencies and get the blue line - again a closer approximation with sharper detail. Now imagine that the above pulse train is actually pixel intensities in an image - like a checkerboard pattern or something like that. Using only low frequencies will make it smooth - and the image will look blurred. Add the high frequencies back and you'll get "sharper edges" - or the image will be sharper.
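      A small Python/numpy sketch of that pulse train example - building a square wave from an increasing number of sine harmonics (the harmonic counts are arbitrary):

      import numpy as np

      x = np.linspace(0, 2 * np.pi, 1000)

      def partial_square(x, n_harmonics):
          # Fourier series of a square wave uses only odd harmonics: sin((2k+1)x)/(2k+1)
          y = np.zeros_like(x)
          for k in range(n_harmonics):
              n = 2 * k + 1
              y += np.sin(n * x) / n
          return 4 / np.pi * y

      coarse = partial_square(x, 1)    # plain sine - very smooth "edges"
      medium = partial_square(x, 5)    # closer to the pulse train, still rounded
      fine   = partial_square(x, 50)   # sharp edges start to appear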
  17. In deep sky imaging (long exposure), yes, seeing blurs each star into a sort of Gaussian profile. Seeing is measured by that - the FWHM of the star profile in a long exposure (usually just a 2 second exposure is enough to characterize seeing in terms of FWHM). One of the important parts of DSO imaging is to choose a proper sampling rate to record all the detail there is in the image - and that depends on seeing, scope aperture and tracking / guiding precision combined. If your sampling rate is good - the image will not look blurred. If you oversample - it will look blurred when viewed at 100% zoom.

      Frequency restoration is a real thing - it shows real details. There is an upper limit to how much detail can be restored and it depends on the aperture of the telescope used. You may have seen graphs like this one: left is the profile of a star blurred by the optics alone - the so-called Airy function. Right is a graph that shows attenuation (in this particular case for one obstructed aperture - red line - and one clear aperture - gray line). This graph represents attenuation by frequency / detail. You can see that the vertical axis runs from 0 to 1. The line represents how much of its value each frequency keeps. At some point - the line reaches 0. This means that all frequencies above that one are effectively killed off - set to 0. You can't restore something that has been set to 0. You can restore something that has been attenuated to 20% of its value - just divide it by 0.2 and you'll get the original value. But if you try to divide by 0 - you'll get infinity, so it is impossible to restore frequencies above that threshold frequency - and the threshold depends on aperture size.

      The atmosphere acts in a similar way to the aperture - except it is random. We can approximate the impact of the atmosphere with a Gaussian type blur (or Moffat in some cases). A Gaussian type blur never reaches 0, not even at infinitely high frequencies. However, this is only an approximation and we always have the upper limit imposed by the telescope aperture.

      You might notice something here - we are "boosting" particular frequencies by dividing by a number smaller than 1. This is the same as multiplying by a number larger than one - or simply making particular frequency values larger. That is the essence of frequency restoration - like when you have an audio equalizer and you amplify some frequency range. Maybe an even better comparison would be - you increase the volume. Increasing the volume is fine if you have nice music, but if you have noisy music - increasing the volume will increase the noise as well. This is why the SNR part is important - when you sharpen an image - you increase certain frequency components and in the process you increase the noise at those frequencies. If you want that noise not to be visible in the final image - you need good enough SNR so that the boosted noise is still small. This is possible in planetary imaging because planets are relatively strong sources and we stack thousands of subs - which means we further increase SNR by factors of 30-50 and sometimes even more (imagine stacking 10,000 subs in good seeing - that will boost SNR by a factor of x100). In DSO imaging we stack hundreds of subs, and a single sub is noisy, so overall SNR is not great. In fact, we don't have much SNR to spare and are happy if SNR is good enough to render the image properly. For this reason, sharpening is seldom used on DSO images - and only in the brightest parts where SNR is good.
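      To make the "divide the attenuated frequencies" idea concrete, here is a toy 1-D Python sketch. It uses a generic Wiener-style division rather than any particular sharpening tool, and the feature size, blur width and noise level are all assumed:

      import numpy as np

      rng = np.random.default_rng(0)
      n = 256
      signal = np.zeros(n)
      signal[100:130] = 1.0                              # a "sharp" feature

      # Gaussian blur kernel and its transfer function
      x = np.arange(n) - n // 2
      psf = np.exp(-0.5 * (x / 3.0) ** 2)
      psf /= psf.sum()
      otf = np.fft.fft(np.fft.ifftshift(psf))

      blurred = np.real(np.fft.ifft(np.fft.fft(signal) * otf))
      noisy = blurred + rng.normal(0, 0.01, n)           # add a little noise

      # Boost attenuated frequencies; eps keeps us from dividing by values
      # near zero - those frequencies are effectively lost and can't come back.
      eps = 1e-3
      restored = np.real(np.fft.ifft(np.fft.fft(noisy) * np.conj(otf) /
                                     (np.abs(otf) ** 2 + eps)))
      # 'restored' has sharper edges than 'noisy', but the noise is amplified too -
      # which is why good SNR is needed before sharpening.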
  18. When you stack, say, 1000 frames of a moving Jupiter - you get a relatively blurry mess. Then comes the magic of wavelet sharpening or deconvolution, which exploits the high signal to noise ratio of such a stack and lets you do something called frequency restoration. Without it, you'll still have a blurry thing. With it, you'll get an image better than is possible with the naked eye (or rather with an eyepiece). This is because the optics of the telescope blur the image somewhat and frequency restoration is able to sharpen up even that. Complicated topic.

      In any case - you are now referring to seeing, but that seeing is on scales of less than a dozen milliseconds (shimmers are rather fast). I'm guessing that your exposures are at least a couple of seconds long, if not longer. Seeing averages out in that time. Really poor seeing needs a bit more - like 8-10 seconds to average out, but it does. If your exposures are in the seconds range or longer - you are seeing periodic error and not atmospheric seeing. And it is not noise. Noise is related to the uncertainty of a pixel's value - regardless of where that pixel "falls". Distortion by seeing (or rather just one component of distortion - namely tilt) moves a pixel out of its position, and you can compensate for that by calculating the average pixel position (not value) over many frames. That is what planetary stacking software does - it "returns" pixels to their proper place by distorting back every frame. However, tilt is only one component of the aberrations that seeing brings. If your exposure is not fast enough to freeze the tilt part of seeing - you'll get motion blur. Other aberrations from seeing just manifest as regular blur and since it is random - it just blurs more or less (when you view it) - there are even moments when the image is relatively clear. The point of lucky imaging is to capture these "clear" moments - without too much blur, or when only tilt is present. It is quite a complex topic. We can discuss it at length if you are interested. Noise is uncertainty in pixel value - average many of the same pixels (just make sure you place them in the right spot - both planetary and deep sky stacking software do that, just differently) and you reduce the noise.
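      As a rough illustration of the "return the pixel to its proper place" idea, here is a hypothetical align_and_stack helper in Python that estimates each frame's global shift (the tilt term) by phase correlation and averages after undoing it. Real planetary stackers work per alignment point; this sketch only handles a single shift per frame:

      import numpy as np

      def align_and_stack(frames):
          # frames: list of 2D numpy arrays of the same shape
          ref = frames[0]
          F_ref = np.fft.fft2(ref)
          stack = np.zeros_like(ref, dtype=float)
          for frame in frames:
              F = np.fft.fft2(frame)
              cross = F_ref * np.conj(F)
              corr = np.real(np.fft.ifft2(cross / (np.abs(cross) + 1e-12)))
              dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
              # Wrap shifts into the +/- half-size range, then undo them
              dy = dy - corr.shape[0] if dy > corr.shape[0] // 2 else dy
              dx = dx - corr.shape[1] if dx > corr.shape[1] // 2 else dx
              stack += np.roll(frame, (dy, dx), axis=(0, 1))
          return stack / len(frames)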
  19. The first thing to understand is that you can clip the black point even if your background is not black. This term has more to do with processing of the data than with an actually black background. It happens more often in conjunction with a black background because of the way it happens in processing (hence the name). There is a reason why you should not have a fully black background. Having a slightly brighter background more closely resembles what one sees through a telescope when dark adapted. It is only when not fully adapted, or when looking at a high contrast target, that we see a pitch black background in our scopes. It also has to do with a very interesting phenomenon called Eigengrau. Here is the wiki article on it: https://en.wikipedia.org/wiki/Eigengrau In short - in the absence of light many people describe the sensation / perception as not quite black - more grayish. This image sums it up nicely: Therefore we expect not to see completely black when there is no light. Black is just a contrast thing - it is stuff that we can't see because there is something too bright in the scene (think of night time and car headlights pointing at you - everything else will be pure black in that case). For the above reasons - your background should not be fully black. How much above black level? That sort of depends on your taste and also on the feel you want to impart to the image.
  20. As explained - not noise. It is probably not the atmosphere either. That will depend on your sampling resolution and seeing values. To me this sounds like periodic error more than anything. It will move stars from frame to frame but overall keep the field of view in the same place (or with maybe a very small drift from the first to the last sub).
  21. Could you check something for me - it might help a lot. Can you open up each R, G and B flat and check the max value in PI? Even better would be if you could select the central, brightest part - like a small circular selection in the center of each flat - and measure the mean value?
  22. Just an update on all of this. I eventually implemented everything and tried two different ASI1600 QE curves - one published by ZWO and one produced by Christian Buil (not really sure which one is correct - they are quite a bit different). I also tried with atmosphere compensation and without - and simply could not get proper calibration. It is most likely due to the flats. In order for this to work - flats must be normalized to 1 (the flat for each color must have values such that the brightest part of the flat has a value of 1). If this is not done, the color values are scaled by unknown constants (the max ADU of each flat). I'll keep trying to get proper calibration this way, but I'll need some way of verifying the results - probably by shooting a couple of stars of known spectral class / spectrum (like Vega and similar) and comparing that with synthetic results to at least determine which is the proper QE curve.
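      For reference, one simple way to do that normalization in Python (the normalize_flat name and the percentile choice are mine, not from any particular package):

      import numpy as np

      def normalize_flat(master_flat):
          # Scale the master flat so its brightest (central) region is 1.0,
          # removing the unknown scale factor (flat exposure length, panel
          # brightness) from each color channel. A high percentile is used
          # instead of the absolute max to avoid hot pixels.
          peak = np.percentile(master_flat, 99.9)
          return master_flat / peak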
  23. Ok, I officially give up on trying to color calibrate this data. I tried numerous options - like the two different QE curves for the ASI1600 that I have - one published by ZWO and one that Christian Buil produced with his spectroscope (btw these two are quite different, which I find highly puzzling). I also tried with atmosphere compensation and without. In every case, the image was quite a bit away from what I would expect from proper calibration. I even tried matching the colors of individual stars against measured values - and then it hit me. I don't have information on the spectrum of the flat panel used nor on the exposure lengths used. You don't need that information if you normalize the master flats (scale them so that the max intensity is 1), but no one really does this unless they want to do some sort of photometry. This introduces another level of color imbalance into the image, as flat frame intensities depend on both the spectrum of the flat light source and the exposure length used.
  24. I personally prefer the second image from your post above.