
Everything posted by vlaiv

  1. We need to distinguish two types of hot pixels here - or possibly even more cases (probably 4 in total). First is a saturated hot pixel and second a non-saturated hot pixel. Either of the two can occur in dark subs and in light subs. To further complicate things - since we create the master dark from multiple dark subs - any combination of hot pixels can exist in those subs, some saturated and some not.

A hot pixel is just that - a pixel that behaves as if not cooled. Its value can be very large compared to other pixels because of accumulated dark current. It is also "noisy" - dark current is a Poisson process and has thermal "shot" noise associated with it.

What values can a calibrated hot pixel produce? That depends on the type of hot pixel. The easiest case is two saturated hot pixels - one in the light and one in the master dark (meaning the pixel was saturated in all dark subs, or in most of them if some sort of sigma rejection was used). In this case we are left with a "hole" - a pixel with value 0. A saturated pixel can have only one value - the maximum the sensor can record - and subtracting the same value leaves 0. "Smart" dark calibration should recognize this case and replace such a pixel with the average of the surrounding pixels. Even after regular dark calibration - simple subtraction - you can still recover from this case by using cosmetic correction that removes "dead" pixels (it will be seen as a dead pixel because it has a value of 0).

Other cases are rather unpredictable. A smart calibration algorithm will recognize a pixel that is hot enough to reach saturation in some of the subs - it can saturate in the light sub (or other light subs, if the algorithm is smart enough to examine light subs for such defects as well) or in some of the dark subs, so we have a reference point - and will replace it with the mean of the surrounding pixels without trying to calibrate it.

The worst case is a hot pixel that does not saturate. In principle such a pixel calibrates properly, but due to its very high value it also carries a very high error, and it can still look hot after calibration. Take for example a hot pixel with a value of 40000 (out of 64000) - that is really 40000 +/- 200 (the error, or noise, is the square root of the value). You can calibrate that pixel, but the +/- 200e of noise remains, and if the pixel sits on the background it will look brighter or darker than the surrounding pixels.

In the end, CMOS sensors suffer from FPN / telegraph-type noise. This can also produce what seems to be a hot pixel - and it is indeed "hot", but not in the usual sense. It captures more electrons, not because of heat and thermal motion, but because of imperfections in the silicon and current leakage. Such pixels often come in pairs, and the "leakage" is between the two - sometimes into one and sometimes into the other. Here is what this looks like on my ASI1600 camera: this is a stddev stack of 64 dark subs. Higher pixel brightness means that particular pixel has a higher standard deviation - in other words, its values are noisier than the rest. You can clearly see that some pixels are noisier than others, that they often come in pairs, and that the pairs lie in a diagonal direction. If I animate that part of the darks, you will see how such pixels behave and why it is called telegraph-type noise - and also why such pixels can be mistaken for hot pixels although they are not.

The solution to all of this: take many subs of each kind (both light and calibration subs) and dither.
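Just to illustrate the cosmetic-correction idea - a minimal sketch in Python (numpy/scipy assumed; the 3x3 median and the zero threshold are arbitrary illustration choices, not any particular program's implementation):

```python
import numpy as np
from scipy import ndimage

def fill_holes(img, hole_value=0.0):
    """Replace 'hole' pixels (saturated hot pixel minus saturated
    master-dark pixel leaves 0) with the local 3x3 median."""
    holes = (img == hole_value)                 # mask of suspect pixels
    local_med = ndimage.median_filter(img, size=3)
    out = img.copy()
    out[holes] = local_med[holes]               # patch only the holes
    return out

# toy example: flat background with one "hole" left after dark subtraction
frame = np.full((5, 5), 100.0)
frame[2, 2] = 0.0
print(fill_holes(frame)[2, 2])                  # -> 100.0
```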
  2. What you are suggesting is related to geometric distortion of the image, not field flattening. It is part of the stacking routine when one works with a short focal length instrument - so it is easily possible, and it is usually performed by the software without user intervention.

The celestial sphere is - well, a sphere, and we are trying to map part of it onto a flat surface - a 2D image. If the focal length is long, there is almost no problem because the distortion is very small. When the focal length is short, we get large distortion due to the type of projection used by the lens. Take for example an all-sky camera lens, which is a sort of fish-eye lens. It produces images like these: that is a very distorted image, and if you tried to match a feature like a triangle to the actual triangle in the sky, you would see it very distorted (changed angles). Maybe a better example is this: no, those walls are not bent - they are straight walls; it is just the projection, when the image is placed onto the 2D surface, that shows them as bent. Btw, observe the central part of the image - it is almost undistorted. That is the same as using long focal length - the longer the focal length of the instrument, the less distortion.

Field curvature is something else. Coma also. These are optical aberrations related to a single point rather than to the geometry of the image. This is why we can tell from a star's shape whether it has been affected by coma or field curvature (defocus, really) - but we could not tell the above geometric distortion from a single star image, as it would still be a single point (maybe displaced, but still just a point). Field curvature is actually defocus that depends on distance from the optical axis. Coma is a bit different, but both are blur rather than geometric distortion. Here is an example of field curvature: it is not about the slight curving of the straight lines (that, again, is geometric distortion) - it is about the blurring of the lines further from the center, as if out of focus. In fact, that is what field curvature is - out-of-focus outer parts of the image. It happens because the surface of best focus is not flat like the imaging sensor but curved: either your center is in focus (more often) and the edges are out of focus, or the center is out of focus and the edges are in focus - but they can't both be in focus at the same time.

Btw, look what happens when you try to deconvolve a noisy image vs the original noise-free image. Here is a base sharp image that we are going to use in this example. Here is the blur kernel and the convolved (blurred) image. The blur is just the PSF - which would be a coma-type blur in the coma case, or a simple round disk in the defocus case (each can be calculated from the aperture image and Zernike polynomials with a Fourier transform). Now let's look at the result of deconvolution. The algorithm used here is naive inverse filtering (which is just division in the frequency domain - look up the Fourier transform and the convolution / multiplication theorem). Pretty good result - if we know the blur kernel / PSF, we can get a fairly sharp image back from the blurred version. But look what happens if I add some noise into the mix. Here I added signal-dependent Poisson noise and additive Gaussian noise - simulating shot noise from the target + read noise from the camera (we could also add LP noise and thermal noise, but it really does not matter - this is enough for the example). Here is the restoration by naive inverse filtering: it doesn't look anywhere near what we were hoping for, right?

Luckily there are much better / more advanced algorithms that deal with noise in better ways - for example Lucy-Richardson deconvolution (often used in astronomy applications). Much better, but still not nice and sharp like the noise-free example from the beginning. There are even better algorithms, like regularized LR deconvolution (LR with total variation regularization). Keep in mind that these are synthetic examples and I used a constant blur kernel. With the above approach one needs to use a varying kernel, and real examples will fare worse. It can be done with high enough SNR or very specific algorithms and approaches, but in reality it is far, far simpler to purchase a suitable coma corrector or field flattener and use that instead.
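To make the naive inverse filtering concrete - a small self-contained sketch (numpy only; the Gaussian PSF and noise level are made-up values for illustration). Blurring is multiplication by the kernel's spectrum in the frequency domain, so naive restoration is division by it - which explodes wherever the spectrum is tiny and the image contains noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic sharp image and a Gaussian blur kernel standing in for the PSF
img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0
y, x = np.mgrid[-32:32, -32:32]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2)); psf /= psf.sum()

otf = np.fft.fft2(np.fft.ifftshift(psf))        # kernel spectrum
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * otf))

def inverse_filter(g, eps=1e-6):                # naive division in freq. domain
    return np.real(np.fft.ifft2(np.fft.fft2(g) / (otf + eps)))

clean = inverse_filter(blurred)                           # near-perfect recovery
noisy = blurred + rng.normal(0, 0.01, img.shape)          # add "read noise"
bad = inverse_filter(noisy)                               # noise blown up

print(np.abs(clean - img).max())   # small residual
print(np.abs(bad - img).max())     # enormous - restoration is ruined
```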
  3. It is possible, but you won't like the results. The problem is that the math is there for pure signal, but our images are never pure signal - SNR (signal to noise ratio) is something we deal with on a regular basis.

Mathematically correcting for defocus, coma or other aberrations that can be represented by Zernike polynomials comes down to a process called deconvolution (with a varying PSF). One part of the problem is that the PSF (point spread function) is not known unless an exact model of the optics is established. It can be approximated by examining stars in the image (which are in fact the PSF of the system, since stars are point sources) and by mathematical approximation (coma depends on distance from the center, the tail always points away from the center, all that stuff). The other, more important problem is that noise is random and therefore not subject to the PSF in the classical sense. It is either related to light intensity (shot noise) or not (read noise and dark current noise), and we can't include it in the "restoration process" - yet it is embedded in the image and can't be separated; otherwise we would have perfect noise-free images. When we try to restore the original image by deconvolution, the noise - which was not convolved in the first place - undergoes the reverse operation, and that just makes things much worse.

If you want to see this in action, take a blurry image that has a bit of noise in it and sharpen it. You will see that sharpening really brings the noise up. The same thing happens when you do deconvolution (which is just a fancy word for the sort of sharpening we are talking about) - the noise will be blown up and will become non-random (and rather ugly looking). If one has high enough SNR in the image, then coma correction and field flattening in software is actually feasible. People, when processing images, use deconvolution to sharpen the parts of the image where the signal is strong, and it works.
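For completeness, here is a minimal Richardson-Lucy sketch using scikit-image (assuming a known, made-up Gaussian PSF; real PSFs vary across the field, which is exactly what makes software coma correction / field flattening hard):

```python
import numpy as np
from scipy.signal import fftconvolve
from skimage.restoration import richardson_lucy

rng = np.random.default_rng(1)

img = np.zeros((64, 64))
img[30, 30] = 1.0; img[20, 44] = 0.5            # two point-like "stars"

y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / (2 * 1.5**2)); psf /= psf.sum()

blurred = fftconvolve(img, psf, mode="same")
noisy = np.clip(blurred + rng.normal(0, 1e-3, img.shape), 0, None)

# handles noise far better than naive inversion, but still amplifies it
# if run for too many iterations
deconv = richardson_lucy(noisy, psf, num_iter=30)
```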
  4. Stellarium will show a very nice deep-exposure image that does not correspond to what can be seen with a telescope. You can get a sense of what it will look like if you adjust the light pollution setting. I live under Bortle 8 light pollution, and when I enter that value in settings, the view of M31 looks much more realistic - but still not like in a telescope. In a telescope it won't look so "flat" - it will have a quite visible core and faint to invisible outer spiral arms.
  5. Standard deviation will "capture" any sort of signal - be that a random signal such as noise or a non-random signal. Only a uniform "DC offset" type of image will have a stddev of 0.

If you want to compare noise between two images - a calibrated one and a non-calibrated one - then do the following: select a patch of sky where there is nothing, not even a single star. That can be tricky on a single sub, since you don't really know if there is a very faint star in there. The best approach is to create a stack aligned to the sub you are inspecting and make the selection on the actual stack - the stack has better SNR, so you can spot an empty part of the sky much more easily. Save the selection, apply it to both the calibrated and the uncalibrated sub, and run statistics on the selection. Even a background gradient will skew your stddev figures.

On a single sub, calibrating with a master dark will increase the noise (very slightly - the more dark subs in the master, the smaller the increase), but it can still decrease the stddev even on a completely empty patch of sky with no gradient. This is because dark calibration removes the dark / offset signal, which is not uniform across pixels - and being a signal, it affects the stddev as well.
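A quick synthetic demonstration of that last point (numpy; the numbers are made up): the per-pixel dark signal is not uniform, so removing it lowers the spatial stddev even though the subtraction itself adds a little noise.

```python
import numpy as np

rng = np.random.default_rng(2)
shape = (200, 300)

dark_pattern = rng.gamma(2.0, 5.0, shape)            # fixed per-pixel dark signal
raw = 500 + dark_pattern + rng.normal(0, 3, shape)   # offset + dark + read noise
master = 500 + dark_pattern + rng.normal(0, 0.4, shape)  # low-noise master dark

sel = np.s_[50:100, 80:160]                          # "star-free" selection
print("raw stddev:       ", raw[sel].std())          # dominated by dark pattern
print("calibrated stddev:", (raw - master)[sel].std())  # lower after calibration
```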
  6. Lagrangian point. See here for details: https://en.wikipedia.org/wiki/Lagrangian_point
  7. I use this OAG: https://www.teleskop-express.de/shop/product_info.php/language/en/info/p8319_TS-Optics-Off-Axis-Guider-TSOAG16---stabil---Baulaenge-16-mm.html It's even less expensive than a guide scope. It adds only 16mm to the optical path - or rather, uses up only 16mm of optical path. The RC8" has a very generous back focus distance - 20cm or so. I use a rather long focuser and another 5cm M90 extension before the focuser. I have the OAG, spacers and a filter slider as well as a rotator.

As for cool-down time - I honestly have no idea. It is an open design and it should not dew up, but I actually did have it dew up once (a very humid night) - a slow build-up on the secondary mirror. I keep it in the basement, so it is close to ambient temperature, and by the time I get everything sorted out (camera focus, everything connected, plate solving + alignment point, etc.) it is ready to image. I never used it as a visual scope (I did look through it once - Jupiter was the target, but the image was rather pale / lacking contrast at such high magnification, about x250 or so - I believe a 6 or 7mm eyepiece at 1624mm FL - due to the large secondary obstruction).

I don't think there will be a problem with half a kilo of weight on the scope. I hang the following off mine: two cameras, one cooled, one not - 410g + 120g, so 0.5kg in cameras alone - plus extension tubes, filter drawer, OAG and rotator, probably close to 1kg of gear. The replacement focuser is 1.2kg, so there is about 2+kg hanging off the back end of the scope. There is usually no problem there, except that I need to add a 1kg weight to the front of the scope to balance it properly. This scope does not have a moving mirror like SCT scopes, so you don't have to worry about tilt as much.
  8. I think the Altair Wave 80 is the same as the TS 80mm APO? I use the TS x0.79 reducer / flattener with it and the ASI1600 for wide field. https://www.teleskop-express.de/shop/product_info.php/info/p5965_TS-Optics-REFRACTOR-0-79x-2--ED-Reducer-Corrector-fuer-APO-und-ED.html On my "upgrade" list is this one - I think it will also work, but you'll have to dial in the spacing yourself: https://www.teleskop-express.de/shop/product_info.php/info/p11122_Riccardi-0-75x-APO-Reducer-and-Flattener-with-M63x1-Thread.html
  9. I guess the CEM25p is a good mount. I've not seen or operated one, but from what I've heard of iOptron mounts it should be ok. It is an EQ5 class mount, but probably much better in performance than the SkyWatcher EQ5.

As for the OTA - I think it is a very good instrument. I own one - the TS RC (GSO rebranded; I guess the iOptron model is as well, because they look identical) - and have paired it with an ASI1600 camera. A few pointers on the scope: I replaced the stock focuser with a better 2.5" one because the stock focuser (2" monorail) does not have a threaded connection. I use an OAG for guiding rather than a guide scope. Don't expect to get decent results with such a scope unguided - it is a very long focal length scope, 1600mm FL, and the FOV will be small. The corrected, flat field is rather limited - less than an APS-C sized chip; I think I can start to see field curvature in the far corners of the ASI1600's 22mm diagonal. With a suitable corrector / flattener (maybe even a reducing one) you should be able to cover an APS-C sized field. Again, I believe a full frame sensor would be wasted on this scope, as I don't think it will be illuminated or corrected past APS-C size (about 30mm - so almost 1/3 of the diagonal wasted). Some people have issues with collimation of this scope - I found it rather easy to align properly. The RC8" could be considered a jack of all trades, but it is really a master of astrophotography, with planetary observing and imaging lagging behind due to the large central obstruction.

My advice would be to go for a better mount to carry that scope - something in the EQ6 class, like this one: https://www.firstlightoptics.com/ioptron-mounts/ioptron-cem40-center-balanced-equatorial-goto-mount.html
  10. Not only that - there would be the added bonus of smaller atmospheric impact! There are a few problems that prevent very precise parallax measurements from Earth - atmosphere, precision of tracking and so on. All of these "smear" the star image and add uncertainty to the true star position (although we use centroid-type algorithms). Maybe the best solution would be a space telescope, or a pair of space telescopes, in orbit around the Sun at some distance - maybe exploiting the Lagrange points of some of the outer planets?
  11. There are a few issues that you should consider:
1. Mismatch in resolution
2. Mismatch in sampling rate
3. Mismatch in SNR
If your software is capable of dealing with all of those and you are prepared to go with the "lowest common denominator", then you should be fine with that combo, cropping away. Let me quickly explain each (see the sketch after this post for the sampling arithmetic).

1. Mismatch in resolution. Let's assume both scopes are of good optical quality and close to perfect aperture for our purposes. With the same guiding and the same seeing, the 80mm scope will have an advantage in resolved detail over the 51mm scope - quite a difference, almost double, because with small apertures guiding and seeing have very small impact and most of the scope's resolving power comes down to Airy disk size. The 51mm scope will have almost double the Airy disk size of the 80mm. If you accept that the images will have the resolution of the smaller scope, then you can combine the data (data from the small scope will simply be more blurred, and that will affect the total stack).

2. Mismatch in sampling rate. Although you are close here, there will be some difference. The ASI1600 + 480mm FL gives you 1.63"/px, while the ASI183 + 250mm FL gives 1.98"/px. These are not matched, and your stacking software needs to account for that. Also, you'll have to go with the lower sampling rate of the smaller setup - 1.98"/px. There is an interesting point here - why not put a FF/FR on the Altair 80mm to get closer to 2"/px? That will also match the FOV more closely.

3. Mismatch in SNR. This one is easy: if you match resolutions as above, using the FF/FR on the 80mm scope, you'll have the same sampling rate but one scope will have almost half the aperture of the other - 80mm vs 51mm. In the same exposure time, the smaller aperture captures fewer photons, so you end up with considerably different SNR per sub. Regular stacking works because it assumes all subs have the same SNR (there are algorithms that can partly compensate for different exposure lengths with the same setup). PixInsight has per-frame weights. Neither of the two is good enough for seriously mismatched SNR. The PixInsight approach may seem to solve the problem, but it is far from it: there is no single SNR for an image - every pixel in the image has a different SNR, so we can't fix things with a single constant / weight per sub. There is a good algorithm for dealing with this, but no software has implemented it yet.

Btw, look at the FOV matching with the ASI1600 + 80mm F/6 and an x0.8 FF/FR - the sampling rate is also better matched. Moral of the story: if you want a dual rig, it is best to choose identical rigs, as that gives you the fewest issues to solve.
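The sampling rates above come from the usual formula: arcsec/px = 206.265 x pixel size (um) / focal length (mm). A quick sketch:

```python
def sampling_rate(pixel_um: float, fl_mm: float) -> float:
    """Image scale in arcseconds per pixel."""
    return 206.265 * pixel_um / fl_mm

print(sampling_rate(3.8, 480))        # ASI1600 + 480mm       -> ~1.63 "/px
print(sampling_rate(2.4, 250))        # ASI183  + 250mm       -> ~1.98 "/px
print(sampling_rate(3.8, 480 * 0.8))  # ASI1600 + x0.8 FF/FR  -> ~2.04 "/px
```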
  12. You really need to manage your expectations of the astrophotography you are going to achieve with that scope / mount. That setup will make a very nice lunar / white light solar (with an appropriate full-aperture filter - check out Baader solar foil) / planetary imaging scope if you purchase a modern dedicated astro CMOS camera. Planetary / lucky imaging uses very short exposures, so an AltAz mount - or a mount that is not tracking well enough - simply does not matter there.

I would also advocate going for a dedicated, cooled astro camera for DSO AP - but I highly doubt you will have an easy time doing either short or long exposures on a wedge. Probably the best thing to do is EEVA - which is very similar to regular astrophotography except it is a real-time / live stacking approach (check the EEVA section here on SGL for ideas). Again, subs are kept reasonably short - a few seconds - and the image is created in real time on the computer screen. It helps in light-polluted areas as a substitute for regular observing, but it can also be a form of astrophotography, since you can save the live-stacked image for further processing.

In any case, a dedicated astro camera, if it has set-point cooling, lets you properly calibrate your subs - bias / dark current removal and flats application. That is an advantage over a DSLR. Another advantage is that you can see in real time on the computer screen what you are capturing (for focus or framing) and you can shoot at high frame rates without any distortion for planetary imaging (a DSLR can shoot movies, but it uses compression, which creates artifacts). Another reason to go for a dedicated astro camera: it is very unlikely that a 6" SCT will illuminate, and have a fully corrected circle, the size of a full frame DSLR sensor. It will be APS-C size at best (a bit shy of 30mm), so a full frame DSLR would be effectively "wasted" on that scope (mind you - not many scopes have such a large usable field).
  13. Bayer drizzle will actually work if implemented properly, unlike regular drizzle. That is because in Bayer drizzle one does not shrink pixels down, but rather exploits the fact that the pixels are already "shrunken down" compared to the sampling rate. It is the shrinking step in regular drizzle that is questionable (in my view), as it needs quite precise dither offsets to be effective. This is something I would expect a comparison of the two to show.

I don't think the methodology is wrong, however, and synthetic data is representative of what will actually happen. I think whether differences show is down to the level of undersampling. For example, 4.04"/px is very close to the theoretical "ideal" sampling rate for a FWHM of 5.8" - which is 5.8 / 1.6 = 3.625"/px. We could say we have undersampling of (4.04 - 3.625) / 3.625 ≈ 0.11, or about 11%, in this case. In the first case we had a FWHM of 2.34", which corresponds to 1.4625"/px, while the image was sampled at 3.448"/px - undersampled by ~135.76%. I would say it is the first case that should matter more - and it shows that drizzle is not bad. I have not done SNR measurements, and I wonder - how strongly dithered are the subs?

In the drizzle-integrated image I also measured FWHM at around 3.1 - 3.2px, which corresponds to 0.86"/px * 3.2px = 2.752" FWHM - very close to the 2.34" FWHM of the original image. The Lanczos-upscaled images fare very similarly - a slightly higher FWHM at around 3.4px, i.e. 2.924" FWHM - both values far below the FWHM that corresponds to 3.448"/px, ~5.52".
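The "ideal" sampling figures above use the rule of thumb sampling ≈ FWHM / 1.6; checking the arithmetic:

```python
def ideal_sampling(fwhm_arcsec: float) -> float:
    """Rule-of-thumb optimal image scale ("/px) for a given star FWHM."""
    return fwhm_arcsec / 1.6

for fwhm, actual in [(5.8, 4.04), (2.34, 3.448)]:
    ideal = ideal_sampling(fwhm)
    print(f"FWHM {fwhm}\": ideal {ideal:.4f}\"/px, sampled {actual}\"/px "
          f"-> undersampled by {(actual - ideal) / ideal:.0%}")
# FWHM 5.8":  ideal 3.6250"/px, sampled 4.04"/px  -> undersampled by 11%
# FWHM 2.34": ideal 1.4625"/px, sampled 3.448"/px -> undersampled by 136%
```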
  14. This is very interesting, and it shows that I could have been wrong either in dismissing drizzle or in the proposed comparison method. I will need to look into it a bit more. Star shapes are indeed poorer on the upscaled subs, but that is a consequence of the ringing that can happen when we upscale very undersampled data. I imagine a different upscaling algorithm would deal with those artifacts. Maybe B-spline interpolation could be used instead of Lanczos? Could you do another test with bicubic B-spline for the upsample?
  15. I've heard good things about this one: https://www.teleskop-express.de/shop/product_info.php/info/p3041_TS-Optics-PHOTOLINE-115-mm-f-7-Triplet-Apo---2-5--RAP-focuser.html Also, maybe consider the Quark Combo instead of the regular Quark if you intend to do imaging alongside visual. With the Quark Combo you'll be able to do close-up shots / viewing but also full disk viewing. The trick is to use both aperture masks and telecentric barlows. You can get x2 and x3 telecentrics from ES - they are supposed to be good - and a simple aperture mask can make the scope F/20 or F/30.

You can get full disk viewing with up to about 1800mm of focal length, which means you need something like a 400-450mm FL scope with the regular Quark for full disk viewing. The above scope is 800mm FL, so you would not get the full disk with a regular Quark; but take the x2 telecentric and make an 80mm aperture mask - 1600mm FL at 80mm aperture gives an F/20 system and full disk viewing at something like x120 magnification with ease. Want to get in very close? Put in the x3 telecentric and you are at ~F/21 with the full 115mm of aperture - x200 magnification should be doable without too much trouble.

BTW, put the Riccardi FF/FR on the above scope and you'll have a 115mm aperture, F/5.2, 600mm FL wide-field instrument for imaging.
  16. A semi-APO filter is a nice addition. I have an F/10 4" achromat and use the Baader Contrast Booster as a minus-violet filter. It does work. Out-of-focus blue and violet are much easier to spot in images than by eye - which is good for visual but bad for astrophotography. This is because a camera sensor is more sensitive to the blue part of the spectrum than the human eye; the human eye is most sensitive in the green part of the spectrum.

As for accessories, I don't think you should worry about them now. Part of the fun (for me at least) is discovering what you need, and the anticipation and thrill when you get a new piece of kit and wait for a chance to test it out. Here is something to consider, though it will depend on your interests / taste: a 2" diagonal mirror, if you don't already have one, and a wide-field eyepiece. A 6" refractor is a very good scope for wide-field viewing of the Milky Way and large open clusters.

If you are serious about astrophotography, consider getting an autoguiding kit at some point - that means a guide scope (you can turn your finder scope into a guide scope, or add a separate guide scope instead of the finder) and a planetary camera. Planetary cameras can be a lot of fun: you can take better images of the Moon and the planets with them, and even do some deep sky imaging - that is how I started in AP.

The scope you have is not well suited to AP because it is an achromatic design (a bit better corrected than a regular achromat, but as you have seen, bright objects will show that blue / purple halo around them). Luckily there are things you can do about it:
- You can use a semi-APO filter or even a regular #8 Wratten yellow filter. A very good filter for this is also the 495 long-pass filter (from Baader - it is a deep yellow filter). The problem with yellow filters is that they skew your color balance, but that can be corrected in the processing phase.
- Another thing you can add is an aperture mask. That is something you usually can't purchase, but you can make one, even out of cardboard - and with today's 3D printing it is very easy to print one to suit your needs. An aperture mask is just a mask with a smaller aperture than the original aperture of the telescope. The chromatic blur of an achromat depends on clear aperture size, so reducing that size removes some of the chromatic blur.

Don't think your scope can't produce good images - it is just a bit more complicated, and the appropriate technique needs to be used. Here is an example of what an F/5 achromat (a simple 2-lens design with a lot of CA) can do with a planetary-type camera. Of course, to get better images you will need a DSLR camera and an adapter (T2 ring) for it. Canon cameras are probably the easiest to work with for astrophotography because of software support and the ability to shoot raw images (without any in-camera processing).
  17. Hi and welcome to SGL. You should hold off on a field flattener / focal reducer for your scope until you start doing AP - you might not need one at all. The scope you have is actually a 4-lens design, which means it could have a flat field as is. According to the TS website, this is in fact a modified Petzval design and it has a flat field: source: https://www.teleskop-express.de/shop/product_info.php/language/en/info/p7786_Bresser-4852760---152-mm-Refraktor--f-760-mm--OTA.html The bigger problem will be chromatic aberration, but you won't know until you start imaging with the scope - again, the 4-lens design is meant to reduce the chromatic aberration usually associated with refractors.
  18. This one is easy. Although not quite coherent: if you follow the graph from foot to nautical mile, you conclude that
nautical mile = 10 cables = 10 x (100 fathoms) = 10 x 100 x (2 yards) = 10 x 100 x 2 x (3 feet) = 6000 feet
yet going directly, 1 nautical mile = 6080 feet. We have 80 feet (11520 poppy seeds) missing from our calculation - or about one shackle.
  19. While we are on the subject, can anyone explain this graph:
  20. According to the wiki (I had to look it up, since this is the first time I've seen that English and Imperial units are different things), American units are a further evolution of English units. Therefore - not two steps behind, but two steps behind and one to the side https://en.wikipedia.org/wiki/Comparison_of_the_imperial_and_US_customary_measurement_systems
  21. I regularly bin my color guide camera with OAG since "base" sampling rate is somewhere around 0.48"/px and I don't need that much resolution for my guide system.
  22. Yes, the other way around. It really depends on the mount used. Higher quality, better performing mounts don't need corrections as frequently. Many people use 1-2s exposures for the guide cycle. That is often too fast, as seeing influence can be quite big on those time scales. Better mounts tolerate a 4-8s guide cycle, while top-level mounts can go to tens of seconds per correction - the mount simply stays on target that long.

Guide exposure depends on several factors. One is seeing - you need a long enough exposure to average out seeing effects. Another is the quality of polar alignment. There are tools that will calculate the DEC drift rate from the polar alignment error; in most cases this rate is something like 1-2 arc seconds per minute or less. From that you can calculate the guide exposure length - just choose the maximum DEC offset you will tolerate over a single guide exposure. Similarly, another factor, acting in RA, is periodic error. This can be as much as 30 arc seconds peak-to-peak over one worm cycle, or as low as a few arc seconds. Depending on how smooth it is (a pure sine wave is probably the smoothest form you can have - but it is rarely so), you can again calculate the max drift rate in RA, and from that and your maximum tolerated correction, the guide exposure.

Ideally you want longer guide exposures, as that means seeing effects are reduced - provided your polar alignment is good enough and your periodic error is low and smooth enough to permit them. Sometime in the future it may even happen that imaging exposures become shorter than guide exposures (less read noise, and better mounts requiring corrections less often) - but that is easily handled by "summing" (stacking) multiple short imaging exposures to form a single longer guide exposure, thus still being able to both guide and image with a single camera.
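As a back-of-the-envelope sketch of the DEC part of that calculation (the worst-case drift rate is just the polar alignment error times the sidereal rotation rate; the tolerance numbers are made up):

```python
import math

SIDEREAL_DAY_S = 86164.1

def dec_drift_arcsec_per_min(pa_error_arcmin: float) -> float:
    """Worst-case DEC drift for a given polar alignment error."""
    pa_err_arcsec = pa_error_arcmin * 60
    return pa_err_arcsec * (2 * math.pi / SIDEREAL_DAY_S) * 60

def max_guide_exposure(pa_error_arcmin: float, tol_arcsec: float) -> float:
    """Longest guide exposure (s) that keeps DEC drift below tol_arcsec."""
    rate_per_s = dec_drift_arcsec_per_min(pa_error_arcmin) / 60
    return tol_arcsec / rate_per_s

print(dec_drift_arcsec_per_min(5))    # 5' PA error -> ~1.3 "/min
print(max_guide_exposure(5, 0.25))    # stay within 0.25" -> ~11 s
```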
  23. More affordable scopes mean more people can use larger aperture scopes and hence higher magnification? Knowledge on optimization of high power viewing is now readily available online?
  24. It is actually related to the read noise of the sensor. With modern low read noise sensors, we are approaching the moment when no separate guiding system will be required. As has been pointed out, at the moment there is a discrepancy between imaging exposures and guiding exposures - a couple of orders of magnitude. Imaging exposures tend to be hundreds of seconds, while guiding exposures are seconds. Since the difference in image quality between long and short exposures depends only on read noise (or more precisely, its relation to the other noise sources), we still need to keep our imaging exposures at least a few minutes long. With the advent of very low read noise sensors this time will shrink, and at some point the exposure lengths will match - then you'll be able to guide on the imaging exposures. In fact, something like that is already partially possible in what is called EEVA - short-exposure live stacking where exposure lengths are tens of seconds. I don't think any software is capable of it yet, but in practice EEVA software would benefit from such guiding, as it would be able to dither and improve results.
  25. Some sort of statistical analysis should be considered here. Besides the variables already mentioned, the observer plays a role as well. Some people tolerate a slightly soft but larger image, as it lets them see detail more easily; others prefer detail to be small, at the edge of resolving, but the overall image to be sharp. I see this in imaging also - there is not much difference across a x2 range of sampling resolution in terms of what can be seen in the image and perceived sharpness. That is x2 in "magnification". As for myself, I think I went through different phases. Earlier it was about magnification / image size - it allowed me to see better - but as time went on I found I prefer lower magnification and a perceptually sharper image. Btw, the actual magnification that lets you see all there is to be seen is quite low - for an 8" aperture it is less than x100. Everything above that just magnifies the image to make it easier to view, without revealing additional detail.