Everything posted by vlaiv

  1. If you already have an NII filter - why would you want to extract it from Ha? You have the means to record it as is. I suppose the original post suggested that you can get NII data by using 5nm Ha and 3nm Ha filters. A 5nm Ha filter is wide enough to capture both Ha and NII, as their wavelengths are 656.461nm and 658.527nm: 656.5 +/- 2.5nm => 654nm to 659nm, which includes 658.5nm, while a 3nm Ha filter will not do that: 656.5 +/- 1.5nm => 655nm to 658nm, which does not include 658.5nm. The 5nm image will be Ha signal + NII signal, while the 3nm image will be only Ha signal, and therefore 5nm image - 3nm image = Ha signal + NII signal - Ha signal = NII signal. I would not worry much about the red component of LP - it will be an offset or maybe a very slight gradient, and that can be wiped out from the image.
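A minimal sketch of that subtraction, assuming two registered, calibrated linear stacks (the file names and the scaling factor k are hypothetical; in practice the two filters have different peak transmissions, so k has to be estimated from the data):

```python
import numpy as np
from astropy.io import fits  # assuming FITS stacks; any loader will do

# Hypothetical file names - two registered, calibrated linear stacks
ha_5nm = fits.getdata("stack_Ha_5nm.fits").astype(np.float64)  # Ha + NII
ha_3nm = fits.getdata("stack_Ha_3nm.fits").astype(np.float64)  # Ha only

# Both filters pass Ha, so subtracting the (scaled) 3nm stack leaves roughly the NII signal.
k = 1.0  # placeholder - assumes equal Ha throughput for both filters
nii = ha_5nm - k * ha_3nm
fits.writeto("stack_NII_estimate.fits", nii, overwrite=True)
```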
  2. I think that the USB connection is abstracted away too much from software (a good thing for software developers). It just acts as a serial connection to the software, and the best thing you can do is send / receive over it. The controllers handle any issues with noise / transfer speed / possible errors and correction mechanisms...
  3. I don't really see why these filters would need to be used with an OSC camera. Usage with mono is allowed and, if I have anything to say about it - encouraged. We use LRGB as an imaging model - why not a Multiband + Ha/OIII/SII model?
  4. I agree - but does that mean that what is captured in the image is not part of reality, or does it show that human vision is lacking in some ways? Would you call this image Photoshopped? No human being will ever be able to see something like this - because it was recorded in UV light.
  5. I resent this term being used to describe the treatment of astronomical images, as it implies changing the contents of the image in a way that would dramatically impact the truthfulness of the subject being displayed. Many imagers go to great lengths to preserve the documentary value of astrophotographs and try not to alter the image in any way other than to show what has been captured by the camera sensor. The distinction between visual appearance and a photograph of an astronomical object comes from the difference in sensitivity between the human eye and a camera sensor, and also the difference in linearity. A camera sensor is mostly linear while human vision is logarithmic in nature (that is why we have the magnitude system - what we perceive as a linear brightness decrease corresponds to a log function). The process of digitally manipulating captured data has the purpose of transforming the linear response of the camera sensor into the log space of human vision. It also "compresses" the dynamic range of the target into the smaller range that can be shown by a single image, either on paper or on a computer screen (remember, you can easily observe two stars with a 5 magnitude or greater difference at the eyepiece, and that is a x100 intensity difference between them - try showing that nicely on a computer screen with a total of 256 different intensity levels, let alone on a printed photograph).
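As a rough sketch of that linear-to-log transform, assuming a calibrated linear image normalized to 0..1 in a numpy array (the black point and scale parameters are made up for illustration, not any particular program's stretch):

```python
import numpy as np

def log_stretch(linear, black_point=0.001, scale=1000.0):
    """Map linear sensor data (0..1) into an 8-bit display range, roughly
    mimicking the logarithmic response of the eye. Parameters are illustrative."""
    clipped = np.clip(linear - black_point, 0.0, None)
    stretched = np.log1p(scale * clipped) / np.log1p(scale)
    return (stretched * 255).astype(np.uint8)

# Two "stars" 100x apart in linear intensity end up only ~4x apart on screen
print(log_stretch(np.array([0.005, 0.5])))  # -> roughly [ 59, 229]
```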
  6. Since the article is from 2017 - it might be some time before it hits the market
  7. In my experience, guiding issues are again best solved "in hardware". What you are describing sounds more like an issue with periodic error than guide scope sag / guiding issues (well, it can possibly be solved with guide settings, but sometimes periodic error is fast enough that guiding simply can't deal with it efficiently). You can check if it is indeed periodic error by aligning your frame with RA/DEC - make the RA direction horizontal, for example, and DEC vertical. This is easily done if you start an exposure and slew the mount in RA - you should get a horizontal line from a bright star. If it is at an angle - rotate the camera a bit until it is horizontal. Now just image as usual and later check the direction of elongation of the stars.
But you are right, for the sake of argument and for academic reasons - let's discuss what can be done in software. What you are describing here is known as the PSF - or point spread function - the way a single point (or pixel) spreads its value onto adjacent pixels. You have made a very simple case where the PSF is just a single line L pixels long with uniform distribution. In reality the PSF will be a bit distorted. Deconvolution does precisely what you have been trying to do. As long as you know the PSF - the description of how a single point spreads - you can reverse the process (to an extent - depending on how much noise there is).
I played with this idea a long time ago when my HEQ5 mount had quite a bit of periodic error (I have since added the belt mod and tuned my mount, and I no longer have these issues). Here is an example of what I was able to achieve and a description of how I went about it. By the way, tracking errors are a much better candidate for fixing with deconvolution than coma or field curvature - because the PSF is constant across the image - the whole image was subject to the same spread of pixel values.
Here is the base image: You can see that stars are elongated into ellipses. I shot this image at fairly high resolution and as a consequence even the small PE that I was not able to guide out resulted in star elongation. If you check, you will see that the elongation is in the RA direction (compare to Stellarium for example).
The first step is to estimate the blur kernel. I did the following: convolution is a bit like multiplication - if you convolve (blur) A with B, you get the same result as if you convolved B with A (this is a bit of a strange concept - blurring the PSF with your whole image - but it works). This helps us find the blur kernel. I took a couple of stars from the image and deconvolved those with a "perfect looking star". I did this for each channel. These are the resulting blur PSFs - 5 of them. I then took their average value to be the exact PSF. Testing it back on a single star: we managed to remove some of the elliptical shape of the star.
Here is what the final "product" looks like after fixing the elongation issue: Of course, the process of deconvolution raised the noise floor and it was much harder to process this image (hence the black clipping and less background detail) - but the result can be seen - stars do look rounder and detail is better in the central part of the nebula.
If you are a PixInsight user - you might want to check out the dynamic PSF extraction tool (I think it is called something like that). I don't use PI so I can't be certain, but I believe it will provide you with the first step - the average star shape over the image. From that you can deconvolve it with a perfect star shape to get the blur kernel, and then in turn use that blur kernel to deconvolve the whole image.
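A minimal sketch of that kernel-estimation step, assuming star cut-outs have already been extracted, background-subtracted and normalized to 0..1. The Gaussian "perfect star" and all parameters are illustrative, not the exact procedure used above:

```python
import numpy as np
from skimage import restoration

def gaussian_star(size=32, sigma=1.5):
    # Synthetic "perfect looking star" - a small normalized Gaussian
    y, x = np.mgrid[:size, :size] - size // 2
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return g / g.sum()

def estimate_kernel(star_patch, sigma=1.5, iters=50):
    """Deconvolve a real (elongated) star with a perfect star to recover the
    blur kernel - exploiting the fact that convolution is commutative.
    star_patch: square, background-subtracted cut-out normalized to 0..1."""
    perfect = gaussian_star(star_patch.shape[0], sigma)
    kernel = restoration.richardson_lucy(star_patch, perfect, iters)
    return kernel / kernel.sum()

# Average kernels from several stars, then deconvolve the whole linear image:
# kernel = np.mean([estimate_kernel(p) for p in star_patches], axis=0)
# restored = restoration.richardson_lucy(image, kernel, 30)
```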
  8. What software did you use? Do you have an image of the blurred star for me to try to deconvolve?
  9. Deconvolution (sharpening) is a very difficult process, while geometric distortion correction is a fairly easy one. I'll explain why that is in a minute. These two are handled by different algorithms of course, and geometric distortion correction is readily available in software. For example, in Gimp there is a lens distortion plugin: That can easily reverse geometric distortion made by a lens / telescope.
The difference between these two operations can be explained like this. Imagine you have a very large table and some glasses of water on the table. Geometric distortion is just moving the glasses of water around - changing their position. It is rather easy to correct for that - just move the glasses back where they were (you need a way of figuring out how they were moved - but that is easy - you know the positions of the stars, or you can take an image of a grid like the one above to see how the lens distorts the image). Blurring due to coma or field curvature does something different - it takes a bit of water from some glasses and transfers it to other glasses - it mixes the water in the glasses - and now you need to "unmix" it back. That is a much more difficult problem than geometric distortion - even if you take test shots. Btw - moving the glasses is moving pixel values around, and mixing the water is changing pixel values in exactly the same way - you take some of the value of one pixel, spread it around and add it to the values of other pixels - and you do that for every pixel in the image - that is what blurring does.
Deconvolution is not that common in software, and if it is implemented - it is usually implemented for a constant kernel - which is useful for regular blur - like defocus blur, seeing blur or maybe motion blur - where all the pixels in the image are affected in the same way. Coma blur and field curvature blur add another level of complexity - they change how "the water is spread around" depending on which pixel we are talking about. In the center of the image there is almost no coma and no field curvature (in fact there is none) and the amount of it grows as we move away from the optical axis. It is very difficult to model and calculate how to get the image back under these changing circumstances.
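To illustrate the "moving the glasses back" part - a minimal sketch, assuming a simple radial distortion model with a made-up coefficient (real lens profiles, like the ones the Gimp plugin uses, are more elaborate):

```python
import numpy as np
from scipy import ndimage

def undistort(image, k1=-0.05):
    """Undo simple radial (barrel/pincushion) distortion on a grayscale image
    by remapping pixel coordinates - values are only moved, never mixed.
    k1 is an illustrative distortion coefficient."""
    h, w = image.shape
    y, x = np.mgrid[0:h, 0:w].astype(np.float64)
    cy, cx = (h - 1) / 2, (w - 1) / 2
    # normalized squared radius from the optical axis
    r2 = ((x - cx) / cx) ** 2 + ((y - cy) / cy) ** 2
    # where each output pixel should be sampled from in the distorted image
    src_x = cx + (x - cx) * (1 + k1 * r2)
    src_y = cy + (y - cy) * (1 + k1 * r2)
    return ndimage.map_coordinates(image, [src_y, src_x], order=1, mode="nearest")
```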
  10. We need to distinguish two types of hot pixels here - or possibly even more cases (probably 4 in total). The first is a saturated hot pixel and the second a non-saturated hot pixel. Either of the two can happen in darks and in light subs. To further complicate things - since we are creating a master dark from multiple dark subs - any combination of hot pixels can exist in those subs - some saturated and some non-saturated.
A hot pixel is just that - a pixel that behaves as if it is not cooled. This means that its value can be very big compared to other pixels because of accumulation of dark current. It is also "noisy". We know that dark current is a Poisson process and has thermal "shot" noise associated with it.
What values can a calibrated hot pixel produce? That depends on the type of hot pixel. The easiest case is two saturated hot pixels - one in the light and one in the master dark (which means it was a saturated hot pixel in all dark subs, or most dark subs if some sort of sigma rejection was used). In this case we are left with a "hole" - a pixel that has 0 value. This is because a saturated pixel can have just one value - the max value that the sensor can record. If you subtract the same value you are left with 0. "Smart" dark calibration should be able to recognize this case and replace such a pixel with the average of surrounding pixels. Even after regular dark calibration - simple subtraction - you can still "recover" from this case if you use cosmetic correction that removes "dead" pixels (it will be seen as a dead pixel because it will have a value of 0).
The other cases are rather unpredictable. A smart calibration algorithm will recognize a pixel that is hot enough to have saturation value in some of the subs. It will replace it with the mean value of surrounding pixels without trying to calibrate it. It can saturate in the light sub (or other light subs, if the algorithm is smart enough to examine light subs for such defects as well) or some of the dark subs, so we will have a reference point. The worst case is a hot pixel that does not saturate. In principle, such a pixel will calibrate properly - but due to its very high value, it will also have a very high error - and it can still look hot after calibration. Take for example a case where a hot pixel has a value of 40000 (out of ~64000) - this is in fact 40000 +/- 200 (the error or noise is the square root of the value). You can calibrate that pixel, but it will have that +/- 200 still remaining. If it is on the background - it will look brighter or darker than the surrounding pixels.
In the end - CMOS sensors suffer from FPN / telegraph type noise. This can also produce what seems to be a hot pixel - and it indeed is "hot" but not in the regular sense. It also has more electrons captured, but not because of heat and thermal motion - rather because of imperfections in the silicon and current leakage. Such pixels often come in pairs and the "leakage" is between the two pixels - sometimes leaking into one and sometimes into the other. Here is what this looks like on my ASI1600 camera: This is a stddev stack of 64 dark subs. Higher pixel brightness means that a particular pixel has a higher standard deviation - or in other words its values are noisier than the rest. You can clearly see that some pixels are noisier than others, that they often come in pairs and that those pairs lie in a diagonal direction. If I animate that part of the darks like this: You will see how such pixels behave and why it is called telegraph type noise. You can also see why such pixels can be mistaken for hot pixels although they are not.
Solution to this is - take many subs of each kind (both light and calibration subs) and dither.
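A minimal sketch of the cosmetic-correction idea mentioned above - replacing outliers (including the zero-valued "holes" left when a saturated hot pixel is subtracted from a saturated hot pixel) with the median of their neighbours. The sigma threshold is arbitrary:

```python
import numpy as np
from scipy import ndimage

def cosmetic_correction(calibrated, sigma=5.0):
    """Replace pixels that deviate strongly from their local median
    (hot pixels, dead pixels, calibration 'holes') in a calibrated frame."""
    med = ndimage.median_filter(calibrated, size=3)
    resid = calibrated - med
    bad = np.abs(resid) > sigma * np.std(resid)
    out = calibrated.copy()
    out[bad] = med[bad]
    return out
```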
  11. What you are suggesting is related to geometric distortion of the image and not field flattening. It is part of the stacking routine when one works with a short focal length instrument - so it is easily possible and it is usually performed by software without user intervention.
The celestial sphere is - well, a sphere, and we are trying to map part of the sphere onto a flat surface - an image that is 2d. If the focal length is long - there is almost no problem because the distortion is very small. When the focal length is short - we get large distortion due to the type of projection used by the lens. Take for example an all sky camera type lens, which is a sort of fish eye lens. It produces images like these: That is a very distorted image, and if you tried to, for example, match a feature like a triangle to the actual triangle in the sky - you would see it very distorted (changed angles). Maybe a better example is this: No, those walls are not bent - they are straight walls, it is just the projection onto the 2d surface of the image that shows them as bent. Btw, observe the central part of the image - it is almost undistorted. That is the same as using a long focal length - the longer the focal length of the instrument, the less the distortion.
Field curvature is something else. Coma also. These are optical aberrations that are related to a single point rather than to the geometry of the image. This is why we can tell from the star shape if it has been affected by coma or field curvature (defocus really) - but we could not tell the above geometric distortion from a single star image - as it would still be a single point (maybe displaced, but still just a point). Field curvature is actually defocus that depends on distance from the optical axis. Coma is a bit different, but both are blur rather than geometric distortion. Here is an example of field curvature: It is not related to the slight curving of the straight lines (that again is geometric distortion) - it is related to the blurring of the lines further away from the center - as if they were out of focus. In fact - that is what field curvature is - out of focus outer parts of the image - it happens because the surface of best focus is not flat like the imaging sensor but rather curved like this: Either your center is in focus (more often) and the edges are out of focus, or the center is out of focus and the edges are in focus - but they can't both be in focus at the same time.
Btw, look what happens when you try to deconvolve a noisy image vs the original noise-free image. Here is the base sharp image that we are going to use in this example: Here is the blur kernel and the convolved (blurred) image: The blur is just a PSF - which would be a coma kind of blur in the coma case, or a simple round disk in the defocus case (each can be calculated from the aperture image and Zernike polynomials with a Fourier transform). Now let's look at the result of deconvolution: The algorithm used here is naive inverse filtering (which is just division in the frequency domain - look up the Fourier transform and the convolution / multiplication relationship). Pretty good result - if we know the blur kernel / PSF, we can get a pretty good sharp image back from the blurred version. But look what happens if I add some noise into the mix: Here I added signal-dependent Poisson noise and additive Gaussian noise - simulating shot noise from the target + read noise from the camera (we could play around and add LP noise and thermal noise - but it really does not matter - this will be enough for the example). Here is the restoration by naive inverse filtering: Doesn't look anywhere near what we were hoping for, right?
Luckily there are much better / more advanced algorithms that can deal with noise in better ways - for example Lucy-Richardson deconvolution (often used in astronomy applications): Much better, but still not nice and sharp like the noise-free example from the beginning. There are even better algorithms, like regularized LR deconvolution (LR with total variation regularization): Keep in mind that these are synthetic examples and I have used a constant blur kernel. With the above approach one needs to use a changing kernel, and real-life examples will be worse. It can be done with high enough SNR or very specific algorithms and approaches, but in reality it is far, far simpler to purchase a suitable coma corrector or field flattener and use that instead.
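For reference, naive inverse filtering really is only a few lines - a minimal sketch, assuming the PSF is centred and the same shape as the image (eps is an arbitrary fudge to avoid division by near-zero frequencies):

```python
import numpy as np

def inverse_filter(blurred, psf, eps=1e-3):
    """Naive inverse filtering: division in the frequency domain.
    Works well on noise-free data; with noise, the output is dominated
    by amplified noise wherever the PSF spectrum is small."""
    H = np.fft.fft2(np.fft.ifftshift(psf))   # PSF spectrum, centred at origin
    F = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(F / (H + eps)))

# blurred       = convolve(sharp, psf)         -> inverse_filter recovers sharp
# blurred_noisy = blurred + shot + read noise  -> result is mostly amplified noise
```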
  12. It is possible, but you won't like the results. The problem is that the math is there for pure signal, but our images are never pure signal, and in fact SNR (signal to noise ratio) is something that we deal with on a regular basis. Mathematically correcting for defocus or coma or other aberrations that can be represented by Zernike polynomials comes down to a process called deconvolution (with a changing PSF). One part of the problem is that the PSF (point spread function) is not known unless an exact model of the optics is established. It can be approximated by examining stars in the image (which are in fact the PSF of the system - since stars are point sources) and by mathematical approximation (coma depends on distance from the center, the tail always points away from the center, all that stuff), but the other, more important problem is that noise is random and therefore not subject to the PSF in the classical sense. It is related to light intensity (shot noise) or not (read noise and dark current noise) and we can't include it in the "restoration process" - but it is embedded in the image and can't be separated - otherwise we would have perfect noise-free images. When we try to restore the original image by deconvolution - the noise, which was not convolved in the first place, undergoes the reverse operation - and that just makes things much worse. If you want to see that in action - take a blurry image that has a bit of noise in it and then sharpen it. You will see that sharpening really brings the noise up. The same thing happens when you do deconvolution (which is just a fancy word for the sort of sharpening that we are talking about) - the noise will be blown up and will become non-random (and rather ugly looking). If one has high enough SNR in the image - then coma correction and field flattening in software is actually feasible. People, when processing images, use deconvolution to do sharpening on the parts of the image where the signal is strong, and it works.
  13. Stellarium will show a very nice deep exposure image that does not correspond to what can be seen with a telescope. You can get a sense of what it will look like if you adjust the light pollution setting. I live in Bortle 8 light pollution, and when I enter that value in the settings: The view of M31 looks much more realistic - but still not like in a telescope: In a telescope it won't look so "flat" - it will have a rather visible core and very hard to see (or invisible) outer spiral arms.
  14. Standard deviation will "capture" any sort of signal - be that a random signal such as noise or a non-random signal. Only a uniform "DC offset" type of image will have a stddev of 0. If you want to evaluate noise between two images - a calibrated one and a non-calibrated one - then do the following: select a patch of the sky where there is nothing - not even a single star. That might be tricky if you use a single sub, since you don't really know if there is a very faint star in there. The best approach is to create a stack aligned to the sub you are inspecting and make the selection on the actual stack - because the stack will have better SNR and you should be able to spot an empty part of the sky much more easily. Save the selection to be applied to the calibrated and non-calibrated sub. Apply the selection and do statistics on the selection. Even having a background gradient will skew your stddev calculations. In a single sub, calibrating with a master dark will increase noise (very slightly - depending on the number of dark subs in the master: more subs, less noise increase), but it can decrease stddev even if you have a completely empty patch of the sky and no gradient. This is because dark calibration removes the dark / offset signal, which is not perfectly uniform from pixel to pixel (hot pixels, fixed pattern). That signal is still a signal - and it impacts stddev as well.
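A minimal sketch of that measurement, assuming the star-free selection has been saved as a boolean mask and both subs are loaded as numpy arrays (names are hypothetical):

```python
import numpy as np

def background_noise(sub, mask):
    """Stddev inside a star-free, gradient-free selection - a rough noise estimate.
    `mask` is a boolean array marking the empty patch chosen on the stack."""
    return np.std(sub[mask].astype(np.float64))

# Compare the same selection on the uncalibrated and calibrated sub:
# print(background_noise(raw_sub, mask), background_noise(calibrated_sub, mask))
```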
  15. Lagrangian point. See here for details: https://en.wikipedia.org/wiki/Lagrangian_point
  16. I use this OAG: https://www.teleskop-express.de/shop/product_info.php/language/en/info/p8319_TS-Optics-Off-Axis-Guider-TSOAG16---stabil---Baulaenge-16-mm.html It's even less expensive than a guide scope. It adds only 16mm to the optical path, or rather uses up only 16mm of optical path. The RC8" has a very generous back focus distance - 20cm or so. I use a rather long focuser and another 5cm M90 extension before the focuser. I have the OAG, spacers and a filter slider as well as a rotator. As for cool down time - I honestly have no idea. It is an open design and it should not dew up, but I actually had it dew up once (a very humid night) - a slow build up on the secondary mirror. I keep it in the basement, so it is close to ambient temperature, and by the time I get everything sorted out (camera focus, everything connected, plate solving + alignment point, etc.) it is ready to image. I never used it as a visual scope (I did look through it once - Jupiter was the target, but the image was rather pale / lacking contrast at such high magnification - about x250 or so - I believe it was a 6 or 7mm eyepiece at 1624mm FL - due to the large secondary obstruction). I don't think there will be a problem with half a kilo of weight on the scope. I hang the following off it: two cameras, one cooled, one not - 410g + 120g, so about 0.5kg just in cameras - plus extension tubes, filter drawer, OAG and rotator - probably close to 1kg of gear. The replacement focuser is 1.2kg, so there is about 2+kg hanging off the back end of the scope - there is usually no problem there - except that I need to add a 1kg weight on the front side of the scope to balance it properly. This scope does not have a moving mirror like SCT scopes, so you don't have to worry about tilt as much.
  17. I think that the Altair Wave 80 is the same as the TS 80mm APO? I use the TS x0.79 reducer / flattener with it and an ASI1600 for wide field. https://www.teleskop-express.de/shop/product_info.php/info/p5965_TS-Optics-REFRACTOR-0-79x-2--ED-Reducer-Corrector-fuer-APO-und-ED.html On my "upgrade" list is this one - I think it will also work, but you'll have to dial in the spacing yourself: https://www.teleskop-express.de/shop/product_info.php/info/p11122_Riccardi-0-75x-APO-Reducer-and-Flattener-with-M63x1-Thread.html
  18. I guess the CEM25p is a good mount. I've not seen or operated one, but from what I've heard of iOptron mounts - it should be OK. It is an EQ5 class mount, but probably much better in performance than the SkyWatcher EQ5. As for the OTA - I think it is a very good instrument - I own one - the TS RC (GSO rebranded - I guess the iOptron model is as well, because they look identical) and have paired it with an ASI1600 camera. A few pointers on the scope: I replaced the stock focuser with a better 2.5" one because the stock focuser (2" monorail) does not have a threaded connection. I use an OAG for guiding with it rather than a guide scope. Don't think that you will be able to get decent results with such a scope unguided. It is a very long focal length scope - 1600mm FL - and the FOV will be small. The corrected and flat field is rather limited - less than an APS-C sized chip - I think I can start to see field curvature in the far corners of the 22mm diagonal of the ASI1600. With a suitable corrector / flattener (maybe even a reducing one) - you should be able to cover an APS-C sized field. Again, I believe that a full size sensor is going to be wasted on this scope, as I don't think it will be illuminated or corrected past APS-C size (or about 30mm, so almost 1/3 of the diagonal wasted). Some people have issues with collimation of this scope - I found it rather easy to align properly. The RC8" can also be considered a jack of all trades, but it actually is a master of astrophotography, with planetary observing and imaging lagging quite a bit behind due to the large central obstruction. My advice would be to go for a better mount to carry that scope. Something in the EQ6 class - like this one: https://www.firstlightoptics.com/ioptron-mounts/ioptron-cem40-center-balanced-equatorial-goto-mount.html
  19. Not only that - there would be the added bonus of smaller atmospheric impact! There are a few problems that prevent us from making very precise parallax measurements on Earth - the atmosphere, precision of tracking and so on... All of those "smear" the star image and add uncertainty to the true star position (although we use centroid type algorithms). Maybe the best solution would be a space telescope - or a pair of space telescopes - in orbit around the Sun at some distance, maybe exploiting the Lagrangian points of some of the outer planets?
  20. There are a few issues that you should consider: 1. Mismatch in resolution 2. Mismatch in sampling rate 3. Mismatch in SNR. If your software is capable of dealing with all of those and you are prepared to go with the "lowest common denominator", then you should be fine with that combo and cropping away. Let me just quickly explain what those above are.
1. Mismatch in resolution. Let's assume that both scopes are of good optical quality and close to perfect aperture for our purposes. With the same guiding and the same seeing, the 80mm scope will have an advantage in resolved detail over the 51mm scope. Quite a bit of a difference, almost double, because with small apertures guiding and seeing have very small impact and most of the scope's resolving power is down to Airy disk size. The 51mm scope will have an almost double Airy disk size compared to the 80mm. If you accept that the images will have the resolution of the smaller scope, then you can combine the data (data from the small scope will simply be more blurred and that will impact the total stack).
2. Although you are close here in sampling rate, there will be some difference. The ASI1600 + 480mm FL will give you 1.63"/px while the ASI183 + 250mm FL will give you 1.98"/px. These are not matched and your stacking software needs to account for that. Also - you'll have to go with the lower sampling rate of the smaller setup - 1.98"/px. There is an interesting point here - why not put a FF/FR on the Altair 80mm to get closer to 2"/px - that will also match the FOV more closely?
3. Mismatch in SNR - well, this one is easy. If you match resolutions above by using a FF/FR on the 80mm scope - you'll have the same sampling rate but one scope will have almost half the aperture of the other - 80mm vs 51mm. At the same time - fewer photons captured by the smaller aperture - and you end up with considerably different SNR per sub. Regular stacking works because it assumes that all subs have the same SNR (there are algorithms that can sort of compensate for different exposure lengths with the same setup). PixInsight has per-frame weights. Neither of the two is good enough for seriously mismatched SNR. The PixInsight approach can seem to solve the problem - but it's far from it. There is no single SNR for an image - in fact every pixel in the image has a different SNR, and therefore we can't adjust things with only a single constant / weight per sub. There is a good algorithm for dealing with this - but no software has implemented it yet.
Btw, look at the FOV matching with ASI1600 + 80mm F/6 and x0.8 FF/FR. The sampling rate is also better matched: Moral of the story - if you want to go with a dual rig, it is best to choose identical rigs, as that will give you the fewest issues to solve.
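The sampling rates quoted above come from the usual formula 206.265 × pixel size (µm) / focal length (mm) - a quick sketch, with 3.8µm / 2.4µm as the assumed pixel sizes for the ASI1600 / ASI183:

```python
# Sampling rate in arcseconds per pixel from pixel size and focal length
def arcsec_per_px(pixel_um, focal_mm):
    return 206.265 * pixel_um / focal_mm

print(arcsec_per_px(3.8, 480))        # ASI1600 + 480mm          -> ~1.63 "/px
print(arcsec_per_px(2.4, 250))        # ASI183  + 250mm          -> ~1.98 "/px
print(arcsec_per_px(3.8, 480 * 0.8))  # ASI1600 + x0.8 FF/FR     -> ~2.04 "/px
```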
  21. You really need to manage your expectations of the astrophotography you are going to achieve with that scope / mount. That setup will make a very nice lunar / white light solar (with an appropriate full aperture filter - check out Baader solar foil) / planetary imaging scope if you purchase a modern dedicated astro CMOS camera. Planetary / lucky imaging utilizes very short exposures, and an AltAz mount, or a mount that is not tracking well enough, simply does not matter there. I would also advocate going for a dedicated astro camera with cooling for DSO AP - but I highly doubt that you will have an easy time doing either short exposures or long exposures with a wedge. Probably the best thing to do is EEVA - which is very similar to regular astrophotography except it is a real time / live stacking approach (check the EEVA section here on SGL for ideas). Again, subs are kept reasonably short - a few seconds - and the image is created in real time on the computer screen. It helps in light polluted areas as a substitute for regular observing, but it can also be a sort of astrophotography, as you can save the live stacked image for further processing. In any case - a dedicated astro camera, if it has set point cooling, enables you to do proper calibration of your subs - bias / dark current removal and flats application. That is an advantage over a DSLR. Another advantage is that you can see in real time on the computer screen what you are capturing (for focus purposes or framing) and you can do high frame rate capture without any distortion for planetary imaging (a DSLR can shoot movies but it uses compression, which creates certain artifacts). Another reason why you should go for a dedicated astro camera is that it is very unlikely that a 6" SCT will illuminate and have a fully corrected circle the size of a full frame DSLR. It will be APS-C size at best (a bit shy of 30mm), so a full frame DSLR will be effectively "wasted" on that scope (mind you - not many scopes have such a large usable field).
  22. Bayer drizzle will actually work if implemented properly, unlike regular drizzle. That is because in Bayer drizzle one does not shrink down pixels, but rather exploits the fact that pixels are already "shrunken down" compared to the sampling rate. It is this shrinking step in regular drizzle that is questionable (in my view), as it needs quite precise dither offsets to be effective. This is something that I would expect in a comparison of the two. I don't think that the methodology is wrong, however, and synthetic data is also representative of what will actually happen. I think whether differences show is down to the level of undersampling. For example, 4.04"/px is very close to the theoretical "ideal" sampling rate if FWHM is 5.8" - and that is 5.8 / 1.6 = 3.625. We could say that we have undersampling by (4.04 - 3.625) / 3.625 ≈ 0.11, or about 11%, in this case. In the first case we had a FWHM of 2.34", which corresponds to 1.4625"/px, while the image was sampled at 3.448"/px - undersampled by ~135.76%. I would say that it is the first case that should matter more - and it shows that drizzle is not bad. I have not done SNR measurements and wonder - how much were the subs dithered? In the drizzle integrated image, I also measured FWHM to be around 3.1 - 3.2px, which corresponds to 0.86"/px * 3.2px = 2.752" FWHM - very close to the 2.34" FWHM of the original image. The Lanczos upscaled images fare very similarly - a bit higher FWHM at around 3.4px, being 2.924" FWHM - both values far below the FWHM that corresponds to 3.448"/px - ~5.52".
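A quick sketch of the rule-of-thumb arithmetic used above (ideal sampling rate ≈ FWHM / 1.6):

```python
# Ideal sampling rate and degree of undersampling from measured FWHM
def ideal_sampling(fwhm_arcsec):
    return fwhm_arcsec / 1.6

def undersampling_pct(actual_arcsec_per_px, fwhm_arcsec):
    ideal = ideal_sampling(fwhm_arcsec)
    return 100.0 * (actual_arcsec_per_px - ideal) / ideal

print(undersampling_pct(4.04, 5.8))    # ~11%  - mildly undersampled
print(undersampling_pct(3.448, 2.34))  # ~136% - heavily undersampled
```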
  23. This is very interesting, and it shows that I could have been wrong either in dismissing drizzle or with the proposed comparison method. I will need to look into it a bit more. Star shapes are indeed poorer on the upscaled subs, but that is a consequence of ringing that can happen when we upscale very undersampled data. I guess a different upscaling algorithm would deal with that. Maybe B-spline interpolation could be used instead of Lanczos? Could you do another test with bicubic B-spline for the upsample?
  24. I've heard good things about this one: https://www.teleskop-express.de/shop/product_info.php/info/p3041_TS-Optics-PHOTOLINE-115-mm-f-7-Triplet-Apo---2-5--RAP-focuser.html Also, maybe consider use of the Quark Combo instead of the regular Quark if you intend to do imaging alongside visual. With the Quark Combo you'll be able to do close up shots / viewing but also full disk viewing. The trick is to use both aperture masks and telecentric barlows. You can get x2 and x3 telecentrics from ES - they are supposed to be good, and a simple aperture mask can make the scope F/20 or F/30. You can get full disk viewing with up to 1800mm of focal length. This means that you need something like a 400-450mm FL scope with the regular Quark to get full disk viewing. The above scope is 800mm FL, so you would not be able to get the full disk with the regular Quark, but take a x2 telecentric lens and make an 80mm aperture mask: 1600mm FL and 80mm aperture gives you an F/20 system and full disk viewing with something like x120 magnification with ease. Want to get in very close? Put in the x3 telecentric and you are at F/21 with 115mm of aperture. x200 magnification should be doable without too much trouble. BTW, put the Riccardi FF/FR on the above scope and you'll have a 115mm aperture, F/5.2, 600mm FL wide field instrument for imaging.
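The focal ratio arithmetic above, as a quick sketch (it just reproduces the numbers quoted in the post):

```python
# Effective f-ratio with a telecentric barlow and an aperture mask
def f_ratio(focal_mm, telecentric, aperture_mm):
    return focal_mm * telecentric / aperture_mm

print(f_ratio(800, 2, 80))   # ~F/20 - full disk with an 80mm mask and x2 telecentric
print(f_ratio(800, 3, 115))  # ~F/21 - close-ups at the full 115mm aperture
print(800 * 0.75 / 115)      # ~F/5.2 with the Riccardi x0.75 reducer for imaging
```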
  25. A semi-APO filter is a nice addition. I have an F/10 4" achromat and use a Baader Contrast Booster as a minus violet filter. It does work. Out of focus blue and violet are much easier to spot in images than by eye - which is good for visual but bad for astrophotography. This is because a camera sensor is more sensitive to the blue part of the spectrum than the human eye is. The human eye is most sensitive in the green part of the spectrum. As for accessories, I don't think that you should worry about them now. Part of the fun (for me at least) is discovering what you need, the anticipation and the thrill when you get a new piece of kit and wait for a chance to test it out. Here is something that you should consider, but like I said, it will depend on your interests / taste. I would consider a 2" diagonal mirror, if you don't already have one, and a wide field eyepiece. A 6" refractor is a very good scope for wide field viewing of the Milky Way and large open clusters. If you are serious about astrophotography - consider getting an autoguiding kit at some point. That means a guide scope (you can turn your finder scope into a guide scope or add a separate guide scope instead of the finder scope) and a planetary camera. Planetary cameras can be a lot of fun. You can take better images of the Moon and the planets with them and even do some deep sky imaging - that is how I started in AP. The scope you have is not well suited for AP because it is an achromatic design (it is a bit better corrected than a regular achromat, but as you have seen - bright objects will have that blue / purple halo around them). Luckily there is something that you can do about it.
- You can use a semi-APO filter or even a regular #8 Wratten yellow filter. A very good filter for this is also the 495 long pass filter (from Baader - it is a deep yellow filter). The problem with yellow filters is that they will skew your color balance, but that can be corrected in the processing phase.
- Another thing that you can add is an aperture mask. That is something that you can't purchase, but you can make one, even out of cardboard. With today's 3D printing - it is very easy to print one to suit your needs. An aperture mask is just a mask with a smaller aperture than the original aperture of the telescope. The chromatic blur of an achromat telescope depends on the clear aperture size. Reducing this size removes some of the chromatic blur.
Don't think that your scope won't be able to produce good images - it is just a bit more complicated and the appropriate technique needs to be used. Here is an example of what an F/5 achromat (a simple 2 lens design with a lot of CA) can do with a planetary type camera: Of course, to get better images you will need a DSLR camera and an adapter (T2 ring) for that camera. Canon cameras are probably the easiest to work with for astrophotography because of software support and being able to shoot raw images (without any in-camera processing).