vlaiv

Everything posted by vlaiv

  1. IFN is all about SNR. You should consider binning the data that you already have - say even x4 - in order to keep noise down. In the end a good denoising routine will help smooth things.
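The binning suggestion can be sketched in a few lines of numpy. This is an illustrative sketch, not the poster's actual routine - the function name and the synthetic test image are mine:

```python
import numpy as np

def bin_image(img, factor=4):
    """Average-bin a 2D image by an integer factor.

    Binning x4 averages 16 pixels into one, which reduces per-pixel
    noise by a factor of 4 (sqrt(16)) for uncorrelated noise, at the
    cost of resolution.
    """
    h, w = img.shape
    # Crop so dimensions divide evenly by the bin factor
    img = img[:h - h % factor, :w - w % factor]
    return img.reshape(img.shape[0] // factor, factor,
                       img.shape[1] // factor, factor).mean(axis=(1, 3))

# Pure-noise frame: the binned version has ~1/4 the standard deviation
noisy = np.random.default_rng(0).normal(0.0, 1.0, (1024, 1024))
binned = bin_image(noisy, 4)
print(noisy.std(), binned.std())
```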
  2. In my view it's a good thing and is to be expected. When you sample at 2.5"/px there will be nights of poor seeing when you are slightly oversampled - like with the first image - but there will be nights of average seeing as well (and there should be more of these) where you'll be spot on with your sampling rate. There will be nights of better seeing too (again, not as often) where you'll be undersampling a bit. I think 2.5"/px is an OK sampling rate for you.

     That part of the tool is OK - the one that tells you the sampling rate based on pixel size. The formula is simple:

     sampling rate = 206.3 * pixel_size / focal_length

     where pixel size is in µm and focal length is in mm. You can derive that formula with a bit of trigonometry.

     In the end, you are wondering if 2"/px will be a good sampling rate? I believe it will be, as long as you have decent guiding (RMS below 1") and a scope with an aperture of 80mm or more.

     Ok, so I was right then - it seemed oversampled by a larger factor than x2, and indeed it is. If your average FWHM is 4.5" then the optimum sampling rate is ~2.8"/px. That shows it was a night of poor seeing (as does the fact that your FWHM is in the 3"-6" range - very variable; seeing must have changed quite a bit. I suspect local thermals: it is winter, and as you track the object across the sky it moves above different houses - heating & chimneys can cause issues).

     Ok, so I'll go briefly over that. I have written on this topic several times already - you can also search SGL for more info. Here are the statements from Astronomy.Tools explaining why it does the calculations it does.

     The Nyquist sampling theorem (or Shannon-Nyquist sampling theorem) states that for a band limited signal, perfect reconstruction can be achieved if one samples at double the maximum frequency of that band limited signal. That part is sort of ok. The problematic part is equating seeing FWHM to sampling rate. That is simply not what Nyquist's theorem is saying. Maximum frequency is a feature of the frequency domain, while FWHM is defined in the spatial domain and differs between curves. If we assume some type of curve - like a Gaussian - then we must calculate / explain the relationship of FWHM to the frequency domain.

     Further on: again, this is very wrong. Stars won't be squares because of the way we sample them. Even if we grossly oversample, they still won't be squares. That is a misunderstanding of the sampling process. The results of sampling are not little squares (nor other shapes if pixels are of a different shape) - the results of sampling are points, dimensionless points (like true mathematical points). What "shape" objects will have depends on the restoration procedure. We often "see" square pixels in our images not because they are really square, but because a certain restoration algorithm is used - namely nearest neighbour resampling / interpolation. If we don't use that algorithm, we get different results. Nyquist states that Sinc (sin(x) / x) is the perfect restoration kernel. That function goes off to infinity, and it must, because a band limited signal is cyclic in nature (and images are not). For this reason we use an approximation to the Sinc kernel, often the Lanczos kernel (which is windowed Sinc). In any case, square stars are not an artifact of sampling and square pixels - they are an artifact of the restoration process, the way we restore the image from the sampled points, and we have complete control over that in the choice of algorithm. It is separate from the imaging part. And of course, if your premise is false, it is no wonder if the conclusion / solution is too.

     There you go, in a nutshell, what I object to. First is equating seeing to star FWHM: seeing is just part of the story; aperture and guiding performance also impact final FWHM, among other things. Second, FWHM is not a measure of frequency content (although it is related for a Gaussian profile), so we can't just take 1/2 or 1/3 of FWHM to be our pixel size. Nyquist clearly states that we need x2 the maximum frequency, and no, we don't need x3 to make stars "rounder" - that is just a misunderstanding of the sampling process.

     In the end, a bit of theory (this will be very brief). I arrived at the 1.6 number by using several relationships. First I approximated the star profile with a Gaussian. There is a known relationship between FWHM and the sigma of a Gaussian. Then I used the fact that the Fourier transform of a Gaussian is a Gaussian (the Fourier transform "moves" the function into the frequency domain). There I selected the cutoff frequency as the place where it falls off to less than 10% - this is because the Gaussian approximation is just an approximation; it continues to infinity and hence has no clear cutoff frequency, so we use one that makes sense in terms of SNR and the level of detail in the image. For this reason I often put quotes around "optimum" sampling rate - there really is no hard cutoff point, and changing that percentage will change the result, so there is really a range of optimum values depending on your criteria.

     If you do the math, it turns out that the sampling rate should be FWHM / ~1.6. That is later confirmed by simulations, and also by the technique I demonstrated above: take any image that is oversampled and calculate the "optimum" sampling rate. Then downsize the image to the optimum rate and upsize it back again - there will be no visual difference. But if you downsize it more, you will start to notice a difference in the image - detail will start to be lost.

     You can find articles online that explain every step of what I said above - look up the Nyquist sampling theorem, look up the Gaussian function and its properties (the relationship of sigma and FWHM, and the Fourier transform of a Gaussian) and you'll be able to derive the same from those.
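The two formulas above - image scale from pixel size and focal length, and "optimum" rate from FWHM - are easy to wrap as helpers. This is a sketch; the function names and the example pixel size / focal length are made up for illustration:

```python
def sampling_rate(pixel_size_um, focal_length_mm):
    """Image scale in arcsec/pixel: 206.3 * pixel size / focal length."""
    return 206.3 * pixel_size_um / focal_length_mm

def optimum_sampling(fwhm_arcsec):
    """'Optimum' sampling rate from star FWHM, using the ~1.6 divisor."""
    return fwhm_arcsec / 1.6

# Hypothetical setup: 4.63 um pixels on a 383 mm focal length -> ~2.49 "/px
print(round(sampling_rate(4.63, 383), 2))
# A night with 4.5" FWHM suggests ~2.81 "/px, the "~2.8" quoted above
print(round(optimum_sampling(4.5), 2))
```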
  3. Is that in pixels or arc seconds? If it is in pixels - convert to arc seconds. Be careful: if you use a stack and you've drizzled, then you've changed the pixel scale of the image. In order to calculate the optimum sampling rate / pixel scale you need to divide the FWHM in arc seconds by 1.6. If you have FWHM in pixels, then it needs to be 1.6px if you are sampling at the optimum rate. If your FWHM in pixels is between 3 and 6, that means you are oversampling by a factor of x2 - x4 (or rather, if it were between 3.2 and 6.4 this would hold exactly). I would not use that tool for judging optimum sampling rate as I believe it is flawed.
  4. It tells you that you are very slightly oversampling in the case of that image. If your working resolution is 2.5"/px and you have FWHM of 1.936px, then your FWHM in arc seconds is 1.936px * 2.5"/px = 4.84". Once you know that number, you can calculate the "optimum" sampling rate as that number / 1.6. In your case the sampling rate should be 4.84 / 1.6 = 3.025"/px. This is for this image alone. 3"/px is quite high and can be a consequence of particularly poor seeing on the night of capture. Check a couple more images to get an idea of what your working resolution should be. Btw, I think that at 2.5"/px, in most cases, you should not worry about oversampling - that resolution is good. Another thing to add: if on a particular night seeing is very good or very poor, you will be under or over sampling. That is something we can't control and it should not worry you much. You should aim for your "average" FWHM to match your working resolution.
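The arithmetic above, spelled out with the values taken directly from the post:

```python
# FWHM measured in pixels, converted to arcseconds at the working
# resolution, then divided by 1.6 to get the "optimum" rate.

fwhm_px = 1.936          # measured FWHM in pixels
scale = 2.5              # working resolution in "/px
fwhm_arcsec = fwhm_px * scale
optimum = fwhm_arcsec / 1.6

print(fwhm_arcsec)   # 4.84"
print(optimum)       # 3.025 "/px
```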
  5. That is the correct answer, but besides that you were also oversampled, since I could reduce the original to 25% of its size - or 2"/px - without loss of detail (probably even a bit more). Check the FWHM values in your subs so far (or even stacks, but make sure they are linear). From that you can see what sort of resolution you should be aiming for - FWHM / 1.6 should be your guideline. Judging by this image, anything below 2"/px is oversampling.
  6. Now, the interesting thing is that you drizzled that image for some reason, right? The actual resolution of the image is not compatible with double the imaging resolution - it doesn't even reach half of the imaging resolution. One of these two images has been scaled to 25% of its original size and then scaled up again to 100% - but can you tell which one?
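The scale-down/scale-up test used here can be sketched with Pillow. This is an illustrative sketch under my own assumptions - a synthetic smooth image stands in for a real sub; a small roundtrip error suggests the image was oversampled:

```python
import numpy as np
from PIL import Image

def roundtrip(img, fraction=0.25):
    """Scale an image down by `fraction` and back up with Lanczos.

    If the result is visually identical to the original, the image
    carried no detail finer than the reduced scale, i.e. it was
    oversampled by at least 1/fraction.
    """
    w, h = img.size
    small = img.resize((int(w * fraction), int(h * fraction)), Image.LANCZOS)
    return small.resize((w, h), Image.LANCZOS)

# Smooth synthetic "oversampled" data survives the roundtrip almost unchanged
x = np.linspace(0, 2 * np.pi, 256)
smooth = ((np.sin(x)[None, :] * np.sin(x)[:, None] + 1) * 127).astype(np.uint8)
img = Image.fromarray(smooth)
diff = np.abs(np.asarray(roundtrip(img), float) - smooth.astype(float))
print(diff.mean())  # small mean error -> no real detail lost
```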
  7. That will work. A CC that looks / works like this one is what you need: https://www.teleskop-express.de/shop/product_info.php/info/p6706_TS-Optics-NEWTON-Coma-Corrector-1-0x-TSGPU-Superflat---4-element---2--connection.html It has a 2" body that goes all the way into the focuser tube. That CC even pushes the focal point 20mm outwards.
  8. It's not easy to find a coma corrector that covers a whole full frame sensor. Most of these are not 2" but larger - like 2.5" or 3" - and require a suitable focuser. In any case, look for a coma corrector that can be "sunken" into the focuser tube. If you can reach focus with a Canon DSLR, you won't have any issues with the ASI294mc - all that might be needed is a simple extension (depends on your setup, but the ASI294 has its sensor much closer than a DSLR).
  9. Yes, that is in degrees. The sky turns 360° in about 24 hours, so 1 degree represents about 4 minutes of time.
  10. That can easily happen with any system that can be easily collimated - like mirrored systems. Collimation aligns the optical axes of the mirrors, but both mirrors can end up slightly tilted with respect to the focuser or baffle system. If one has focuser tilt, there is a tilt plate on the camera or a separate tilt system to compensate - and in the end everything is square except the baffle system, which results in vignetting that is not completely symmetric.
  11. This is an excellent piece of advice for determining if flats are impacted by the rotator. @teoria_del_big_bang It really depends on the scope. We could say that there are two main components to flats - one is dust shadows and the other is vignetting. Depending on where the rotator sits in the optical train, the dust responsible for the shadows might rotate with the camera, so its shadows won't change position. In scopes that are symmetric, if the sensor is placed dead centre on the optical axis, vignetting will be symmetric as well and rotation won't matter. Refractors are prime candidates for this, but do be careful - the camera, or rather the sensor, can be offset. For example, look at the latest ASI485 - the sensor offset is very clear with that camera: they did their best to centre it (even if the silicon is offset), but I don't think it's centred properly. Usually this does not matter for imaging, but it can be a problem with flats. Another thing that can create asymmetric flats is the type of scope. Newtonian scopes, and especially fast Newtonians, have asymmetric flats (there is some secondary-offset effect that I don't fully understand - I never bothered to properly grasp it since I never owned a fast Newtonian).
  12. Yes, those should be ADU values, and it looks like you have the same value of 257 for both bias and dark (with small variation due to noise - that is ok). It looks like your camera behaves the same - hence bias only, and dither.
  13. It is always better to reduce it. It might mean slightly longer subs if you cut some of it down, but total SNR will always improve if you don't cut signal but do cut down LP.
  14. Just be careful - narrowband is a completely different thing. With NB you seriously cut down any LP that you have, and what remains can't swamp read noise easily. NB does benefit from lower read noise and longer subs. Here, combine the two: choose the gain setting with 1.5e read noise and still do 10 minute subs for best results (CCD cameras with high read noise often used 20-30 minute NB subs for this reason).
  15. Again, that is really a question of how much read noise won't cause you any problems. Here, look at this example: 3.5e of read noise versus 1.5e of read noise sounds like a lot, right? But imagine that you have LP flux per pixel of, say, 10e per minute, and you shoot 1 minute subs with 1.5e read noise while you shoot 5 minute subs with 3.5e read noise. You image for one hour total. What will be the total noise in each case?

     In one minute there will be 10e of LP background, and LP noise will be the square root of that - ~3.162e. The two noises combined give sqrt(1.5 * 1.5 + 3.162 * 3.162) = sqrt(2.25 + 10) = 3.5e. So the first combination gives 3.5e per sub, and with 60 subs stacked we end up with 3.5 * sqrt(60) = 27.11e of total noise.

     Let's do the other. The background will be 50e (10e x 5 minutes), so LP noise will be sqrt(50), and with read noise of 3.5e the total per sub will be sqrt(50 + 3.5 * 3.5) = sqrt(62.25) ≈ 7.89e. With 12 subs, the total will be sqrt(12) * 7.89 = 27.33e of total noise.

     Although 3.5e of read noise looks like much more than 1.5e, in reality, if LP is strong enough and sub duration is chosen accordingly, the difference is minimal: 27.11e vs 27.33e.
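The comparison above can be packaged as a small function - same quadrature arithmetic, same numbers as in the post; only the function and parameter names are mine:

```python
import math

def stack_noise(lp_rate_e_per_min, sub_minutes, read_noise_e, total_minutes=60):
    """Total stacked noise for a fixed total imaging time.

    Per-sub noise is shot noise of the LP background combined in
    quadrature with read noise; stacking N subs grows noise as sqrt(N).
    """
    n_subs = total_minutes / sub_minutes
    lp_noise = math.sqrt(lp_rate_e_per_min * sub_minutes)
    per_sub = math.sqrt(lp_noise ** 2 + read_noise_e ** 2)
    return math.sqrt(n_subs) * per_sub

# The two cases from the post: 10e/min LP, one hour total
print(round(stack_noise(10, 1, 1.5), 2))  # 27.11e
print(round(stack_noise(10, 5, 3.5), 2))  # 27.33e
```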
  16. Ok, here is the image with the background removed (no more green cast): @Stuart1971 Have you tried working on the 32bit floating point image from APP? You lose quite a bit by going with 16bit.
  17. You say that you were using 5 minute subs with your old camera? That camera has twice the pixel size (x4 by surface) and read noise of ~12e. If you used mode 0 and gain 0, you had read noise of about 7.5e. Going by your old camera, with those settings on the new camera you should have used about 6 minute subs; 3 minute subs with gain 0 mode #1, and 1 minute subs with gain 75 mode #1.
  18. I can't really answer that question properly - I can only guess. You can measure your background levels and then figure it out. What CCD camera did you have, and what mode and gain setting did you use for this image?
  19. Ok, as far as sub duration goes: fewer longer subs that add up to the same total imaging time as many short ones will always produce a better result. Having said that, there is a point past which there is no visible difference, as extending sub duration brings diminishing returns. So yes, a single 4h sub is the best - it will give you the best result. The improvement will be measurable, but will it be seen by eye? If you stack 4 x 1h, you won't see a difference. If you stack 20 x 12 minutes, you probably won't see a difference either. In fact, you can go down to a certain sub duration, like 48 x 5 minutes, where you still won't be able to see the difference in the finished image. But at some point there will be a difference. The trick is to select a sub duration where you still don't see a difference, but subs are short enough that you have enough of them to work with and you don't lose too much data if you drop a sub or something happens. The only way to figure out that sub duration for your setup and LP is to measure background LP levels.
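The diminishing-returns argument can be illustrated numerically. The LP rate and read noise below are made-up illustrative values, not from the post; the point is only the shape of the curve - noise drops quickly as subs get longer, then flattens:

```python
import math

def total_noise(sub_minutes, lp_rate=5.0, read_noise=1.5, total_minutes=240):
    """Total stack noise for a fixed 4 hours, split into subs of given length."""
    n_subs = total_minutes / sub_minutes
    # Per-sub: LP shot noise and read noise combined in quadrature
    per_sub = math.sqrt(lp_rate * sub_minutes + read_noise ** 2)
    return math.sqrt(n_subs) * per_sub

for sub in (1, 5, 12, 60, 240):
    print(sub, round(total_noise(sub), 2))
```

With these numbers the single 240 minute sub wins, but everything from ~12 minute subs up is within a fraction of an electron of it.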
  20. You should adjust your sub length according to the background level - you want the background / LP level to swamp read noise. Once you do that, all you have to do when making an image is put enough time into it: do 2-4h of imaging, and if you are not happy, do more. There is a very wide range of times needed for different targets - some are dim and some are bright. Interstellar dust is very dim, and you really need to put in the time to make it show.
  21. Greenish background? I did not perform a background wipe. Maybe I should do it and see what I get?
  22. Is 2 minutes what you can comfortably do with your mount, and going longer would be an issue? In that case, go with gain 75 again, mode #1 (blue line). That will give you e/ADU of 0.25 and read noise of ~1.55e. You'll effectively be working with 14bit data and your FWC will be 16K, but that is ok.
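A hedged reading of the 14bit / 16K remark - my interpretation, not stated in the post: with 0.25 e/ADU, a 16K-electron full well spans the full 16-bit ADU range, while the number of distinguishable electron levels is 2^14, hence "effectively 14bit":

```python
import math

e_per_adu = 0.25
full_well_e = 16384            # the "16K" FWC from the post

# ADU range needed to cover the full well at this gain
adu_range = full_well_e / e_per_adu
print(adu_range)               # 65536.0 ADU, i.e. a 16-bit range

# Information content is limited by the electron count, not the ADU count
print(math.log2(full_well_e))  # 14.0 "electron bits"
```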
  23. Yes, that is what was posted above. Here is my workflow (it would give better results with linear data saved as 32bit floating point):
     - Loaded the image in Gimp and split channels into R, G and B
     - Saved each as a fits file
     - Loaded them in ImageJ, converted to 32bit floating point, binned each x4 and saved
     - Loaded them in Gimp, did RGB compose to get an RGB image again from the individual channels, and stretched with levels
     - Created a copy of the image as a layer and did wavelet denoising. Added a mask to that layer to apply the denoised version only in the dark parts of the image (otherwise it blurs too much of the parts where signal is already strong and does not need denoising)
     - Saved as Jpeg
  24. Offset is easy. You determine it once you have your gain selected, and you figure it out using a bias: you want none of your bias pixels to be at 0, so as long as at least one is equal to zero, raise your offset.

     Selecting gain:
     - You want a gain setting that gives you low enough read noise. How low? That depends on your shooting conditions. Fast optics + strong LP favour shorter subs, and one can get away with higher read noise. Here is a general rule of thumb: read noise higher than 7-8e - think 10 minute or longer subs; read noise around 4e - think 5-6 minute subs; read noise around 1.5e - think ~2 minute subs (or even 1 minute if you have strong LP like Bortle 7-8).
     - The next factor is full well at the given gain. This is really not critical, as you can take filler / short subs at the end in case you have stars that saturate (and you almost always will).
     - I like an e/ADU value that is easy to work with and does not introduce much rounding error. Say 1 e/ADU, or unity, is my first choice if available. Next are 2 e/ADU or 0.5 e/ADU, and then 0.25 e/ADU. Going with 3 or 4 e/ADU usually means high read noise, because the sensor designers need to mask that much rounding error with some other noise source and don't really bother to optimize read noise at those settings.

     There you go - not much science to it. Just pick a value and work with it.
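The offset check described above ("no bias pixel at 0") is easy to automate. A sketch with a synthetic bias frame - in practice you would load your actual bias frame into the numpy array; the function name is mine:

```python
import numpy as np

def offset_ok(bias):
    """True if no bias pixel is clipped at 0 ADU."""
    return int(bias.min()) > 0

# Synthetic bias frame standing in for a real one: mean ~50 ADU, sigma ~8
rng = np.random.default_rng(1)
bias = np.clip(rng.normal(50, 8, (100, 100)).round(), 0, 65535)

if offset_ok(bias):
    print("offset is fine, min ADU =", int(bias.min()))
else:
    print("raise the offset")
```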