Everything posted by vlaiv

  1. Here is an interesting thing: these are both F/14 versions, but the one on the left is sharper than the one on the right, because I just ran a bit of sharpening on it. Compared to the resized oversampled image it now looks sharper, doesn't it (left: sharpened F/14 image, right: downsampled oversampled image). I guess a lot comes down to how the image was processed. One thing is sure - oversampling won't make the image (at the focal plane) sharper, as the laws of physics don't allow for that. Any perceived sharpness is due to differences in processing, and as we have just seen, a simple tweak can make a properly sampled image look sharper.
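For anyone curious what "a bit of sharpening" can look like in practice, here is a minimal sketch of a generic unsharp mask. The post does not say which tool or settings were actually used, so the filter, radius and amount below are purely illustrative.

```python
# Generic unsharp mask - illustrative only, not the exact processing used above.
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, radius=1.5, amount=0.7):
    """Add back a fraction of (image - blurred image) to boost fine detail."""
    img = img.astype(np.float64)
    blurred = gaussian_filter(img, sigma=radius)
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

# example on a synthetic frame scaled to [0, 1]
frame = np.random.default_rng(0).random((256, 256))
sharpened = unsharp_mask(frame)
```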
  2. Really? I'm not seeing it. Here is a comparison at ~F/14 scale, and here is a comparison at ~F/26 scale. In both cases ~F/14 is on the right side. And to tell you the truth, I'm finding ~F/14 to be maybe just a tiny bit sharper on some features, while F/26 might be said to be sharper on others. Here we see that F/14 is sharper / has better separation and more contrast on the feature that I marked. ~F/26 looks just a tiny bit sharper on this feature.
  3. Excellent images. They faithfully show many features (to the resolving capability of the scope, of course). I managed to find a similar high resolution image for comparison - and yes, everything is where you expect it to be, nothing is a processing artifact:
  4. I doubt it. Outgoing rays are collimated and you should see the same image regardless of the distance. We can't see the shadow of the secondary if the exit pupil is small enough (we can't even if it is larger, but we can see other issues related to this). This could be from a smudge on the eye lens of the eyepiece - an accidental touch that left some gunk on it? That will cause some light scatter and softness.
  5. Yep. You can use this map to get a rough idea of what the SQM is like at your observing location: https://www.lightpollutionmap.info/#zoom=5.74&lat=53.6966&lon=-3.2837&layers=B0FFFFFFFTFFFFFFFFFFF Here is a handy conversion table between Bortle / SQM and other types of sky brightness estimates:
  6. I just did a google image search on "gpu coma corrector pal gyulai lens arrangement" and that image came up. However, when I click on that link, it takes me to a page that does not appear to contain the image. It is this page: https://en.lacerta-optics.com/KomakorrF4_Aplanatic-Super-coma-corrector-for-f-4-and-f-5 Correction - it does have that image, it is only "hidden" under the "Photos and Support" tab (third tab, after Description and Recommended). It is even labeled as the GPU lens sequence.
  7. I've found this image online - hope it helps a bit: That is the Aplanatic CC / GPU CC by Pal Gyulai (although this one is labeled as Lacerta).
  8. For mono it works normally, but for OSC it will work only if you first do a split debayer. Regular debayering followed by split binning won't give you the result you are hoping for, as there is pixel to pixel correlation in the interpolated values (they are no longer truly random - not true measurements). If you treat OSC as already having half the pixel count / half the sampling rate, then yes, it will work normally after you extract the color data as mono subs.
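As a rough illustration of the split debayer idea (an RGGB pattern is assumed here - the offsets would differ for other layouts), extracting each CFA position as its own half-resolution sub looks something like this:

```python
# Sketch of split debayering a raw OSC frame: no interpolation, each CFA
# position simply becomes its own half-resolution sub. RGGB layout assumed.
import numpy as np

def split_cfa(raw):
    r  = raw[0::2, 0::2]   # red photosites
    g1 = raw[0::2, 1::2]   # first green photosite
    g2 = raw[1::2, 0::2]   # second green photosite
    b  = raw[1::2, 1::2]   # blue photosites
    return r, g1, g2, b

raw = np.random.default_rng(1).integers(0, 65535, size=(3000, 4000), dtype=np.uint16)
r, g1, g2, b = split_cfa(raw)   # four 1500 x 2000 mono subs, true measurements only
```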
  9. This is really not an easy topic and there is a lot of misinformation and misunderstanding "floating" around. I'll try to give you a sensible answer - or at least a partial one - as it is always better to go in depth on a particular parameter to fully understand the implications. There will be a lot of ifs and buts included - such is the nature of the beast.

First off - dynamic range is completely meaningless for astrophotography and I have no idea why so many people get hung up on it. Probably because it is a somewhat important metric in daytime photography, where some shots need to be taken in the shortest time possible. In AP we can expose for much longer and we stack many subs. We end up with a much, much higher "dynamic range" image than a single exposure suggests - in fact, we can control what dynamic range we end up with by selecting the number of exposures we stack. Each quadrupling of the number of exposures leads to a doubling of dynamic range.

Ok, back to the original question - sensor properties and how they compare. Well, it depends on the use case. I'll list some of them and note what is better depending on circumstances, along with the "impact" - how important that difference is.

Read noise - lower is always better.
- In modern CMOS sensors doing RGB / OSC long exposure imaging - impact minimal. With selection of proper exposure length, the impact is effectively removed.
- In narrowband imaging - impact rises to medium / high.
- In lucky type planetary imaging - impact rises to very high.
- If mount performance is questionable and one is doing EEA with short exposures, or lucky type DSO imaging - impact is high.
- If one is considering a CCD type sensor - impact is high (mostly because CCD sensors have much larger read noise than CMOS sensors and some of the above factors will kick in - like the need for half hour exposures for NB imaging - can the mount do that nicely? and so on).

Quantum efficiency - higher is better - impact is low to medium. No special conditions, higher QE is always better. The only problem is that there is no single QE number but rather a QE curve, and even if a certain camera has higher max / peak QE, another camera can be more suitable due to having higher QE at a certain wavelength or range of wavelengths. For example, for IR imaging, QE beyond 700nm is of course more important than peak QE. (Just a note - the choice of imaging night or imaging scope can have more impact than the difference in QE between two camera models - something that is often overlooked.)

Number of bits - inconsequential (again, this is mitigated by the number of subs stacked / selection of exposure length). Might be important for fringe applications - like single exposure photometry or the like.

Sensor size - major impact (depending on application). For planetary it has minimal impact and small sensors are often preferred. For lunar / solar it has medium to high impact - if one wants to do full disk captures or minimize the number of panels when doing mosaics - but there is a caveat: the scope needs to support the sensor size - it needs to be corrected over the whole sensor for the sensor size to be useful. For planetary the scope needs to be diffraction limited or better over the sensor, and for DSO the stars should be nice, round and tight over the sensor. For DSO it has major impact as it dictates speed of capture when paired with appropriate optics. A larger sensor allows one to capture the same FOV with a larger aperture scope - which turns into speed when working at a set resolution (larger aperture captures more light).
I can't really overstate how important this can be. Imagine pairing a 10mm diagonal sensor with a 4" scope and pairing a 20mm diagonal sensor with an 8" scope. Both scopes are the same design and F/ratio and are supported by the mount, for simplicity's sake (of course, all of these factors need to be taken into account). They will cover the same FOV and, with a choice of binning, will operate at the same resolution / sampling rate (arc seconds per pixel). The 8" scope will be x4 faster - or it will produce the same image in 1/4 of the time. Just to compare that with QE - say we have a state of the art sensor with QE of 90% versus a very basic sensor with QE of 50% - that is only an 80% improvement. The above is a 300% improvement in speed. To reiterate - sensor size has major impact if the scope can provide a corrected field over the whole sensor, the sensor can be paired with appropriate optics, and of course that size is actually utilized (not so in planetary) - then it will provide a massive speed boost.

Pixel size - low to medium impact. As long as one knows how to bin data if needed and selects a sensible working resolution, pixel size is not very important.

Dark current / how low the cooling system can go - if there is set point temperature control, impact is minimal; otherwise it depends on how well the dark current behaves. Amp glow is a major issue for non cooled cameras / cameras that don't have set point cooling (like passively / air cooled ones).

Next are things that don't really show in stats:
- ability to calibrate the sensor properly - high impact (thermal stability, linearity, issues with amp glow and so on)
- microlens effects or other artifacts - medium / high impact (but personal preference and depends on use case)
- any quirks that the sensor might have - again depends on use case and whether that will pose an issue.
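To put the speed vs QE comparison above into plain numbers (illustrative values, as quoted in the post):

```python
# Illustrative arithmetic for the sensor size vs QE argument above.
aperture_small, aperture_large = 4.0, 8.0             # inches
speed_gain = (aperture_large / aperture_small) ** 2   # light grasp ~ aperture area
print(f"8\" vs 4\" at same FOV and sampling rate: x{speed_gain:.0f} (+300%)")

qe_gain = 0.90 / 0.50                                  # 90% QE sensor vs 50% QE sensor
print(f"90% vs 50% QE: x{qe_gain:.1f} (+80%)")
```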
  10. I did some experiments with non integer binning and I have come to the following conclusion: just bin to the nearest whole number and then up-sample the subs before integration using some quality resampling method. If you want to bin x1.5 - bin x2 and then up-sample by a factor of x1.3333. Resolution won't take a large hit, and bin x2 will improve SNR by a factor of x2 - so you'll get the wanted resolution and still get a good SNR improvement. (An alternative is to bin and then drizzle integrate - but I think the result will be more or less the same, except up-sampling is much faster - it can even be part of the registration process, as alignment of images uses resampling anyway.)
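A rough sketch of that bin-then-up-sample approach - the post doesn't prescribe a particular resampling tool, so cubic spline interpolation via scipy stands in here for the "quality resampling method":

```python
# Bin x2 in software, then up-sample by x1.3333 for an effective x1.5 bin.
import numpy as np
from scipy.ndimage import zoom

def bin2(img):
    """Average 2x2 pixel blocks (software bin x2, ~x2 SNR improvement)."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    x = img[:h, :w].astype(np.float64)
    return (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4.0

sub = np.random.default_rng(2).random((3000, 4000))
binned = bin2(sub)                             # 1500 x 2000
upsampled = zoom(binned, 4.0 / 3.0, order=3)   # back up by x1.3333 -> ~2000 x 2667
```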
  11. Here we are talking about split binning on mono data, so naming won't really matter as all subs will still be mono. The only drawback of the split_cfa approach is that it is limited to bin x2 (or bin x4 if bin x2 is repeated on the output of the first bin x2). It can't do a bin x3 split.
  12. It is recommended that you refocus at the start of each session - but also on any significant temperature change. If the temperature drops more than 1-2C, you should refocus (the actual threshold depends on the length of the tube, the tube material and the F/ratio, i.e. the critical focus zone). When it gets colder, the metal in the tube contracts enough to throw off your focus (or rather, the focus position stays the same but the scope shortens; the reverse happens when it gets hotter - but it never gets hotter during the night). You can either monitor FWHM values in your subs or watch the temperature (or, if you feel it getting colder, it might be worth refocusing) and refocus as needed.
  13. Indeed - seeing plays a major part, and depending on the skies one might find that they need a lower sampling rate more often due to poor seeing. Interestingly enough, slightly better results are obtained with NB filters. NB is a bit less sensitive to seeing, especially at longer wavelengths - like Ha.
  14. @ollypenrice Look at the "split cfa" type command in the software of your choice: https://free-astro.org/index.php/Siril:Commands#split_cfa It is intended to debayer OSC data - or rather to extract the color information into separate R, G and B subs instead of creating a combined image with interpolation - but in essence it does a bin x2 type operation on mono data, splitting it up into 4 subs. That is immediately available for testing purposes.
  15. I'm sure it can be easily done. I think there is a script for Siril that does something similar - split debayer. I'm sure the same thing can be done for regular subs, and that it can also be done in any software that has scripting support - like PI or ImageJ. I've written a plugin for ImageJ that is capable of doing just that (among other things).
  16. Software binning is as effective as hardware binning if one accepts that the sensor has a bit more read noise. If one exposes properly for read noise at native resolution, then nothing else needs to be done.

Say that we are swamping the read noise with sky background noise at 5:1 and that we have read noise of 2e. This means that the sky background signal is (2e * 5)^2 = 10^2 = 100e. When we bin x2 the following happens - we effectively increase read noise by the bin factor, so we now have 4e of read noise - and that is still swamped at 5:1. Adding 4 background pixels together will produce 400e of signal, which will have sqrt(400e) = 20e of sky background noise - still x5 larger than the 4e of read noise. As long as one is properly exposing for native resolution, all is good as far as read noise goes. In all other aspects, software binning acts like hardware binning.

In fact, the best way to bin the data, in my view, is "not to bin at all" - but to do something else that also explains how binning actually works, even when it is software binning. Imagine we are stacking 100 subs at, say, 4000x3000px and that we want to bin x2. Instead of binning, we do something similar to split debayering. We take each sub and split it into 4 subs without changing a single pixel. We take the odd, odd (in vertical and horizontal) pixels of the original sub and make the first "sub sub". Then we take the odd, even pixels (again in vertical and horizontal) and create the second "sub sub", and so on - even, odd and even, even.

The resulting sub subs have 2000x1500 pixels. Each pixel in them is spaced at "twice the distance" of the original, so the resolution is only half of the original. We haven't changed a single pixel value in this process, so there is no change in read noise or anything else - but now we have 400 subs to stack instead of 100. The resulting stack will have x2 higher SNR and twice lower sampling rate. If we do it like that, we avoid the whole summing-pixels-together thing and just have twice lower resolution and the regular stacking that we are used to - just with x4 more subs for x2 better SNR. (As a bonus, this method gives ever so slightly sharper results as it aligns each of those subs with sub pixel accuracy, and that is why I think it is the best way to do it.)
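A minimal sketch of that split approach - the pixel values are untouched, each sub just becomes four half-resolution subs which are then registered and stacked as usual:

```python
# Split each sub into four "sub subs" by taking every other pixel - nothing is
# summed or changed, we just end up with x4 more subs at half the sampling rate.
import numpy as np

def split_bin2(sub):
    return [sub[0::2, 0::2],   # odd, odd
            sub[0::2, 1::2],   # odd, even
            sub[1::2, 0::2],   # even, odd
            sub[1::2, 1::2]]   # even, even

subs = [np.random.default_rng(i).random((3000, 4000)) for i in range(100)]
split_subs = [s for sub in subs for s in split_bin2(sub)]
# 400 subs of 1500 x 2000 - stacking these gives ~x2 SNR at half the resolution
```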
  17. While I can't comment on the usefulness of the points you listed, I'm just going to point out that large aperture in this case simply means nothing for signal. Per pixel signal will be equal in a 3" scope and a 14" scope if one uses critical sampling for both - or the same F/ratio for both. We can see that in a simple example - if we double the aperture and keep the F/ratio the same, we will double the focal length. With double the aperture we quadruple the light gathering, so x4 photons are collected. With doubling of the focal length we double the sampling rate, and again reduce the pixel's sky coverage to a quarter (1/2 squared, as it is again a surface). We end up with x4 more photons spread over x4 more pixels (as each pixel now covers 1/4 of the sky). Signal per pixel remains the same. On the other hand, using x2 longer focal length than needed cuts the signal by x4 and thus per sub SNR falls by half (or more - depending on how large the read noise is compared to the shot noise).
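The argument above can be checked with a few lines of arithmetic - a sketch assuming a fixed physical pixel size, with arbitrary example apertures and units:

```python
# Doubling aperture at a fixed F/ratio: x4 photons spread over x4 more pixels.
def per_pixel_signal(aperture_mm, f_ratio, pixel_um=3.75):
    focal_length = aperture_mm * f_ratio
    photons = aperture_mm ** 2                        # light gathered ~ aperture area
    sky_per_pixel = (pixel_um / focal_length) ** 2    # pixel's sky coverage ~ 1 / FL^2
    return photons * sky_per_pixel                    # arbitrary units

print(per_pixel_signal(100, 6) / per_pixel_signal(200, 6))   # 1.0 - identical per pixel
```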
  18. Yes, you are right - what Nyquist is saying is that sampling at less than x2 will lead to aliasing artifacts in the reconstructed signal, and that you need x2 or more in order to perfectly reconstruct the signal. I already pointed that out - you can certainly sample at a higher rate than the optimum / critical sampling rate and you won't lose any detail / you will capture the detail just as well as at the optimum sampling rate. There is however another part of the story, and that is the SNR part. We need good total SNR, but we also need good per sub SNR. The former we need to get a good resulting image, but the latter is needed for the software to be able to do its thing properly. The software needs to do two primary things - first is to undo the distortion created by the tilt component of the seeing across the planet's face - that is what alignment points are for. The second is to identify quality frames. For both of these things per sub SNR needs to be high enough, otherwise alignment might not be correct as it would be performed on noise rather than on features, and of course noise can be mistaken for detail and an otherwise blurred sub might be accepted (AS!3 has a special option to handle this, called noise robustness, that you can increase if your subs are too noisy). In any case, sampling higher than x2 will lead to decreased per sub and consequently overall SNR, but will not otherwise contribute to captured detail - all detail is already available at x2. This naturally leads to the question - why do it then?
  19. It would be really nice if it actually worked - but unfortunately it does not.
  20. Actually, in planetary lucky imaging we choose to ignore the seeing effects when choosing the sampling rate. There is very good rationale behind this and it works well.

Most of the seeing blur comes from motion blur - change in the PSF over time. When we say we have to use short exposures to freeze the seeing, that is what we are exploiting - the fact that on short time scales the PSF is fixed. When we integrate for longer, we record a superposition of different PSFs, and that acts as motion blur, creating more blur than there might initially be. One of the reasons we see much more detail in images versus when observing is because of that. Our eyes and brain integrate the image for much longer than the typical exposure for lucky imaging. We look at something like 30fps (so 33ms integration), while most of the time the coherence time is 5-6ms.

The second important bit is that we end up choosing subs where the dominant component of the wavefront error is tilt rather than higher order terms of the Zernike polynomial. What this really means is that the software chooses subs that are only geometrically distorted rather than optically. (Active optics systems that exist for amateur imaging also deal with this first order component - tilt - unlike adaptive optics, which tries to restore the wavefront with higher precision.) If you look at this recording of the moon, you will notice this effect: With an appropriate choice of exposure length (and selection) we mostly get geometric distortion. This is dealt with in software by using alignment points, and the software is able to "bend" the image back into proper shape.

In the end we do have some impact of the atmosphere - but here we come to the final piece of the puzzle. There is a very big distinction between how the atmosphere affects the image (especially after stacking) and how the aperture affects it. With aperture we have a clear cut off point due to the nature of light, the airy disk and its representation in the frequency domain. This can be seen on MTF graphs for a telescope, which look like this: The MTF graph shows the "telescope filter response" for the image, and this filter drops to zero at some point. This means that all frequencies above the critical one are effectively killed off - multiplied by zero (anything multiplied by zero will simply be zero - no way of restoring it). Seeing induced blur behaves differently - it is much more like a Gaussian shape (in fact, mathematically, given enough subs stacked under the same seeing conditions it will produce exactly a Gaussian shape in the limit), and the Gaussian curve has the following property: it never reaches exactly 0. This means that while seeing attenuates higher frequency components, it never completely removes them.

What this really means is that, given enough SNR, seeing influence can always be reversed, but aperture influence can never be reversed. When we sharpen, we are reversing the effects of these low pass filters, and that is the second reason why we get more detailed images than we can ever see at the eyepiece - our brain can't sharpen up the image, while computers can. Sharpening is not making up detail - it is actually restoring detail that has been attenuated in the frequency domain, and the better the sharpening algorithm, the more accurate the restoration. However, we can only restore up to where the aperture effects perform their cut off. For this reason we always use aperture related critical sampling. Seeing will sometimes prevent us from sharpening properly, but sometimes the recording will be restored to the max detail allowed by the aperture.
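To illustrate the cut-off vs attenuation distinction numerically, here is a small sketch comparing the diffraction limited MTF of a clear circular aperture (which reaches exactly zero at the cut-off frequency) with a Gaussian, seeing-like response; the Gaussian width is arbitrary and only for illustration:

```python
# Aperture MTF reaches exactly 0 at the cut-off; a Gaussian only tends towards 0.
import numpy as np

nu = np.linspace(0.0, 1.2, 13)            # spatial frequency in units of the cut-off
x = np.clip(nu, 0.0, 1.0)
mtf_aperture = (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x * x))
mtf_aperture[nu >= 1.0] = 0.0             # nothing survives past the cut-off

mtf_seeing = np.exp(-(nu / 0.4) ** 2)     # Gaussian-like: small, but never exactly 0

for f, a, s in zip(nu, mtf_aperture, mtf_seeing):
    print(f"{f:4.2f}   aperture: {a:5.3f}   seeing-like: {s:8.6f}")
```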
  21. I have a StarAnalyzer SA200 and I plan to make a somewhat higher resolution spectrograph using it as the diffraction grating - possibly even building a Lowspec or some other already designed 3d printed spectrograph. I will probably at some point use it as a spectroheliograph to image the sun - if I get a large enough ERF for it. I also want to image the Crab pulsar with it to detect its pulses using very short exposures. At some point I'm going to show how a 1600mm telescope can be used as a wonderful wide field imaging instrument.
  22. 8" F/6 Dob - primary visual instrument 8" F/8 RC - primary "scientific" instrument (imaging and other things) 80mm F/6 APO - wide field imaging scope 4" Maksutov - "grab'n'go" / lunar scope, but I have other uses planed for it. I want to explore it as cheap EEA scope (although many say it won't work - I have few ideas). 4" F/10 Achromat - this one is probably the least defined in terms of usage. At first, my idea was to explore it as "all around" scope which includes DSO imaging and planetary imaging, wide field and high power observing and so on. Unfortunately, I haven't done much with it so far due to lack of time (and at this point interest).
  23. Well, my point was that you misjudged the size of the planets when comparing them to an object at that distance - and yes, they are small at the eyepiece, but not that small. That is a common thing - people are rather poor at judging the size of objects without comparison. It is much easier for us to judge the size of something when it's next to something else that we can use as a reference (and there is no reference at the eyepiece, unless one is using some sort of astrometric eyepiece with a scale). For that reason the full moon looks much larger when it is near the horizon (where we can use terrestrial objects for comparison) than when it is high in the sky. Another interesting thing is that the full moon has the same apparent diameter as Jupiter viewed at x40 magnification. We can see detail on the moon when it is full and we just look at it with the naked eye - but almost everyone would say that Jupiter is not worth looking at with such small magnification and that no features could be seen.
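The x40 figure is easy to sanity-check with rough values - the full Moon is about 31 arc minutes across and Jupiter roughly 45 arc seconds (both vary somewhat):

```python
# Rough check: at what magnification does Jupiter match the naked-eye full Moon?
moon_arcsec = 31 * 60        # ~1860"
jupiter_arcsec = 45          # ~45"
print(round(moon_arcsec / jupiter_arcsec))   # ~41, i.e. roughly x40
```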
  24. Don't be so harsh on Synta. 60% of my scopes are made by Synta.
  25. Are you sure about that? With a 150mm scope one can go to x150 without any issues. A 45" Jupiter will span 6750" when magnified x150 - that is 112.5' or 1.875 degrees. One degree is 1 meter at 57 meters - or roughly 35cm at 20 meters. 1.875 degrees will be ~65cm at 20 meters. Much larger than a golf ball.
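Here is the same arithmetic spelled out (plain trigonometry, using the 45" figure from above):

```python
# Apparent size of a 45" Jupiter magnified x150, projected at 20 meters.
import math

magnified_deg = 45 * 150 / 3600                   # 6750" = 112.5' = 1.875 degrees
width_at_20m = 2 * 20 * math.tan(math.radians(magnified_deg / 2))
print(magnified_deg, round(width_at_20m, 2))      # 1.875 deg, ~0.65 m
```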