Everything posted by vlaiv

  1. I highly doubt that a x0.65 reducer will be able to properly illuminate an APS-C sized sensor. APS-C has a diagonal of about 28mm, and using it with x0.65 reduction is like using a 28mm / 0.65 = 43mm sensor without reduction. That is almost full frame, and not many scopes can properly illuminate a full frame sensor with decent star shapes. I would rather consider something like this: https://www.teleskop-express.de/shop/product_info.php/info/p10095_TS-Optics-PhotoLine-60-mm-f-6-FPL53-Apo---2--R-P-Focuser---RED-Line.html plus a 0.8x FF/FR like this one: https://www.teleskop-express.de/shop/product_info.php/info/p5965_TS-Optics-REFRACTOR-0-79x-2--ED-Reducer-Corrector-fuer-APO-und-ED.html
  2. Well, you can then look at 60mm scopes with field flatteners / reducers, or you can do mosaics instead. For example, the little RedCat has only 250mm of focal length: https://www.firstlightoptics.com/william-optics/william-optics-redcat-51-v1-5-apo-f49.html Or you can do mosaics. Even the simple 130PDS will act as a 325mm FL scope if you: - image 4 panels for 1/4 of the time each - bin 2x2 each of those panels to recover the SNR lost because each panel was imaged for only 1/4 of the time. This also preserves the total pixel count of your final image. It is a bit more involved, but it is cheap and it lets you go even wider - to 217mm (3x3 panels and x3 bin) or 162mm (4x4 panels and x4 bin). A quick worked calculation is sketched below.
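Here is a minimal sketch of that arithmetic, assuming the 130PDS's native 650mm focal length (the panel counts and bin factors are the ones from the post above):

    # Effective focal length of an N x N mosaic where each panel is binned N x N.
    # Assumes a 130PDS with 650 mm native focal length, as in the post.
    native_fl_mm = 650

    for n in (2, 3, 4):                      # mosaic is n x n panels, binned n x n
        effective_fl = native_fl_mm / n      # field widens n times, pixel scale coarsens n times
        panels = n * n
        print(f"{n}x{n} mosaic, bin {n}x{n}: ~{effective_fl:.0f} mm equivalent, "
              f"{panels} panels at 1/{panels} of the total time each")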
  3. Well, you can always go for something tried and tested, like this one: https://www.teleskop-express.de/shop/product_info.php/info/p3881_TS-Optics-PHOTOLINE-80mm-f-6-FPL53-Triplet-APO---2-5--RAP-Focuser.html Get the Riccardi FF/FR for it: https://www.teleskop-express.de/shop/product_info.php/info/p11122_Riccardi-0-75x-APO-Reducer-and-Flattener-with-M63x1-Thread.html and you'll get an F/4.5 80mm imaging scope. I've heard of only one sample being out of collimation, and generally most people report it to be a very nice scope. I also own one of these 80mm triplets and it is indeed a very nice imaging scope.
  4. Out of the three offered, I would certainly go for the first one if I were in the market for a 72mm refractor - but I would first check whether there are people already using it and what their experience is like.
  5. As far as I can tell, it is highly dependent on the type of filters and other optical elements, the speed of the system and the distances involved. Some people have micro lensing effects, others don't. I always advise people to try changing distances in the optical train / rearranging things when they notice the effect. Of course you can, but it requires a bit of "creative thinking". Take the 130PDS, or 150PDS, or 150 F/4 - all three can act as excellent wide field instruments with an effective focal length of about 300mm, 200mm or 150mm - whichever you want. However, acquisition of the data and processing is a bit more involved. You need to do mosaics and bin your data. Mosaics will get you the wide field, while binning will keep the imaging time the same. If you need to do, say, a 2x2 mosaic, you'll spend only 1/4 of the time on each panel, so your SNR will suffer, but you can bin each panel 2x2 and that will restore the SNR, and you'll still get the same pixel count as if you had used a shorter focal length instrument. Say you are using a camera that has ~4000x3000px. You'll bin each panel 2x2 and each panel will end up being 2000x1500px, but when you stitch those panels together, you'll again end up with 2x2000 = 4000 and 2x1500 = 3000, so a 4000x3000 image. The only "loss" is the overlap needed to make stitching easy - but that can be as low as 5-10% of the image size. Want a shorter focal length? Just make a 3x3 mosaic and bin each panel x3. There is a small sketch of this pixel-count bookkeeping below.
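A minimal numpy sketch of that bookkeeping, assuming a 2x2 mosaic, 2x2 software binning and ignoring the panel overlap:

    import numpy as np

    def bin2x2(img):
        """Average 2x2 blocks of pixels - simple software binning."""
        h, w = img.shape
        return img[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).mean(axis=(1, 3))

    # Four stand-in panels straight off a ~4000x3000 px camera
    panels = [np.random.normal(100, 10, (3000, 4000)) for _ in range(4)]

    binned = [bin2x2(p) for p in panels]              # each becomes 1500x2000
    top    = np.hstack([binned[0], binned[1]])        # stitch left/right
    bottom = np.hstack([binned[2], binned[3]])
    mosaic = np.vstack([top, bottom])                 # stitch top/bottom

    print(binned[0].shape)   # (1500, 2000)
    print(mosaic.shape)      # (3000, 4000) - same pixel count as a single un-binned frame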
  6. I've found a rather interesting way to assess the effects of interpolation on image resolution. Whenever we take a bunch of subs and stack them to produce an image, we use interpolation as part of the stacking process - frame alignment. Interpolation is also used in the debayering process for OSC frames if one wants to "preserve full resolution" of the sensor. Each of these interpolation algorithms affects our data, but it would be good to understand just how much and in which way. I touched upon this subject in a recent thread while discussing debayering / OSC resolution with @Xiga, so I figured I'd start a new topic dealing with it and share some insights.
Let's start by noticing an interesting property of the Fourier Transform of an image - when we translate an image, the FT amplitude is not affected, only the phase information. We can check this very simply by taking an image, shifting it by one (or a few) pixels, and looking at the respective FFT amplitude images (we can subtract or divide them to see if there are really any differences). We start with a simple image and its translated version (the version above is translated by 1, 1 pixels in the X and Y directions). These are the FT amplitudes of both images - and already they look very much alike. We can now divide the two frames and see what we get. And indeed - we get a constant image with value 1 (dividing two equal numbers gives 1).
How can we use this? We can generate random noise and see what happens when we shift it by some non-integer value. Will we get the same flat result, or will there be some change? Here I did just that - created a patch of Gaussian noise with mean 0 and sigma 1, and then translated that image by 0.5px, 0.5px using a linear interpolation algorithm. We can instantly see that there is some change - the noise appears somewhat smoothed out and is no longer fine grained like in the starting image. If you've ever wondered why DSS produces images with this coarse grained background noise - well, this is the reason: the linear interpolation used by DSS produces it. Let's use the above method to assess what exactly happened. Well, look at that: the two images are no longer equal - this can easily be seen. How different are they? Let's divide one by the other. This looks like some sort of filter, and indeed it is - a low pass filter. Maybe it will be easier to see if I plot a profile rather than a surface plot - so I'll take a linear cross section and plot it as a graph. Ok, so we can now see that as frequencies increase, so does attenuation. Linear interpolation blurs out detail when applied!
Will other interpolation algorithms do that as well? Let's try the next one up in complexity - cubic interpolation. Cubic interpolation gives the following filter. It looks nicer than linear interpolation, so what is the profile like? Here is a graph showing linear (black line) and cubic (red line) interpolation. Cubic interpolation clearly impacts the data less than linear interpolation. Let's check a few more algorithms. Here I added cubic spline (green) and quintic spline (blue) - as interpolation algorithms get more advanced, the impact on the data lessens. Another interesting thing we can look at is the impact of interpolation depending on how much the image is shifted from the original, in fractions of a pixel. Since whole-pixel shifts show no blurring, and a half-pixel shift does - how does it change in between? Here we have a comparison of 0.1px (black), 0.2px (red), 0.3px (orange), 0.4px (yellow) and 0.5px (green) shifts - each done with cubic spline.
It is clear that the amount of shift from the original position dictates the amount of blur in the image. It is therefore best to have subs that are integer pixel shifts from each other. Sadly, as amateurs we don't really have this option with popular guiding software - there is no way to force dithers to land at specific offsets with respect to the imaging camera. However, we can clearly see that using advanced interpolation algorithms is beneficial. In the end, let's look at one more graph: this one shows the response of a single 0.5px shift using linear interpolation (black) compared to two consecutive 0.5px shifts using linear interpolation. Each time you shift an image, you introduce another "round" of blurring, and things compound! For this reason, it is best if your registration software lets you frame the target properly when registering frames, so you don't have to do an additional rotation and shift when cropping the finished stack! The whole experiment can be reproduced with a few lines of code - see the sketch below.
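For anyone who wants to reproduce this, here is a minimal sketch using numpy and scipy. The spline order maps roughly onto the algorithms discussed (order=1 linear, order=3 cubic spline, order=5 quintic spline); the exact kernels in your stacking software may differ:

    import numpy as np
    from scipy.ndimage import shift

    rng = np.random.default_rng(0)
    img = rng.normal(0.0, 1.0, (512, 512))     # patch of Gaussian noise, mean 0, sigma 1

    def interpolation_filter(image, dx=0.5, dy=0.5, order=1):
        """Ratio of FFT amplitudes after/before a sub-pixel shift.

        For an ideal (lossless) shift this ratio is 1 everywhere; any dip at
        high frequencies is the low pass filter introduced by the interpolation
        used to perform the shift.
        """
        shifted = shift(image, (dy, dx), order=order, mode='wrap')
        amp_original = np.abs(np.fft.fftshift(np.fft.fft2(image)))
        amp_shifted = np.abs(np.fft.fftshift(np.fft.fft2(shifted)))
        return amp_shifted / amp_original

    for order, name in [(1, 'linear'), (3, 'cubic spline'), (5, 'quintic spline')]:
        filt = interpolation_filter(img, order=order)
        # crude summary: mean attenuation in the outer (high frequency) part of the spectrum
        h, w = filt.shape
        outer = filt[np.hypot(*np.mgrid[-h//2:h//2, -w//2:w//2]) > h // 3]
        print(f"{name:14s} mean high-frequency response: {outer.mean():.2f}")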
  7. This is only true for the human wavelength response - not for imaging sensors. Dave above gave a good explanation. Your filters are parfocal, but the telescope you are using has a focus shift that depends on the wavelength of light. Lum will be in focus but will have a higher FWHM and could show a purple halo around bright stars (the purple part of the spectrum is often defocused the most). The further a wavelength is from the optimum, the more it will be defocused. This is why the L3 works - it cuts the biggest offenders, the wavelengths at the far ends of the spectrum.
  8. For imaging? Design - go for a triplet. Optical quality is really not that important when discussing imaging scopes; what is important is color correction. Often, for imaging purposes (not the same for visual), a triplet with "lower quality glass" will give better color correction than a high quality ED doublet. There are a lot of imaging scopes that are not diffraction limited and would pass as rather poor optically. Optical quality only matters for planetary imaging, where you want your telescope to be sharp; for long exposure the atmosphere dominates, and if the telescope is not perfectly sharp, no one will be able to tell at the resolutions we use for long exposure. Take for example this scope (and the line of similar scopes from Altair Astro or StellarVue): https://www.teleskop-express.de/shop/product_info.php/info/p9868_TS-Optics-Doublet-SD-Apo-102-mm-f-7---FPL53---Lanthan-Objective.html TS even markets this as part of their PhotoLine series of scopes. I've seen a couple of images made with this scope - and it has a very pronounced blue halo around stars. By the specs it should be an excellent scope - FPL53 and Lanthanum glass. People who use it as a visual instrument are very happy - no trace of CA - but for imaging purposes it shows CA. Now take this scope: https://www.teleskop-express.de/shop/product_info.php/info/p3041_TS-Optics-PHOTOLINE-115-mm-f-7-Triplet-Apo---2-5--RAP-focuser.html It has FPL51 glass, but it is a triplet. It is a proper imaging scope with no CA issues.
  9. It's not going to produce the best results in that role. The problem is the size of the central obstruction. It adds a sort of blur that needs to be sharpened out (I wrote a bit about it above) - and the larger the central obstruction, the more sharpening needs to be done. In visual use we see that as contrast loss compared to an unobstructed aperture. Whenever you need to sharpen, you need high SNR - the more you sharpen, the more SNR you need to start with, because sharpening restores high frequencies but also boosts the high frequency components of the noise, so noise gets amplified as well. If you have poor SNR, the sharpened version will just look too noisy. Here is what my 8" RC can achieve in a planetary role: To be honest, that is not the level of detail I would expect from 8" of aperture. Maybe it was just that particular night; I also oversampled quite a bit. Here is a comparison with a 5" scope: although the image is smaller (better matched resolution), I don't think there is much more detail in the image above than in this one.
  10. Have you tried altering the distance between the filters and the sensor? As far as I know, the latest ASI294MM does not have issues with micro lensing - but I might be wrong on that one.
  11. I think that it really depends on what sort of resolution the image supports. We image at high resolution, but the reality is that most of our images are suitable for lower than 1.5"/px - around 2"/px or so. Next, there is the choice of interpolation method used when aligning images. Then there is the matter of how you process your images and what your final SNR is. If you have good enough SNR, you can sharpen back some of the blur created by this process. Proper sharpening is not "making things up" - it is restoring sharpness that was at one point there. In planetary imaging it is done all the time and real detail is "sharpened up from the blur". For that reason, I like to call it a frequency restoration process (as opposed to a low pass filter, which is a high frequency attenuation process). My personal preference would be to do a bayer split and then treat the images like mono + filter. To my eye that is the least "destructive" approach. Bayer drizzle is also ok - but the question is, how do you interpolate your data when registering it? The original drizzle algorithm uses something we could call surface sampling, and if there is no rotation between the frames, the math is the same as bilinear interpolation - which, as we saw above, is quite nasty. When there is rotation between frames, things are even worse. I know how I would adapt advanced interpolation to Bayer drizzle - but the question is, has it been implemented like that, or in a similar fashion, in the software already available? You know what you can try? You can measure things and see what sort of difference there is between methods. Take the data set you have and make stacks using different registering techniques. Then take each of those masters and measure the R, G and B FWHM of a chosen star, using for example AstroImageJ. The method with the smallest FWHM produced the sharpest result. That way you can see whether Bayer drizzle actually adds back some resolution or is in fact blurring things further by using bilinear interpolation (as a comparison sample you should split the RGB as one method and use advanced resampling like Lanczos4 or 5 or a higher B-spline like quintic). A rough sketch of such an FWHM measurement is below.
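If you prefer to script the comparison rather than use AstroImageJ, here is a minimal sketch of the idea - fit a Gaussian to the same star crop in each master and compare the FWHM. The file names and star coordinates are placeholders:

    import numpy as np
    from astropy.io import fits
    from scipy.optimize import curve_fit

    def gaussian2d(coords, amp, x0, y0, sigma, offset):
        x, y = coords
        return amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2)) + offset

    def star_fwhm(path, x, y, box=15):
        """FWHM (in pixels) of the star near (x, y) in the FITS image at 'path'."""
        data = fits.getdata(path).astype(float)
        crop = data[y - box:y + box, x - box:x + box]
        yy, xx = np.mgrid[0:crop.shape[0], 0:crop.shape[1]]
        p0 = (crop.max() - np.median(crop), box, box, 2.0, np.median(crop))
        popt, _ = curve_fit(gaussian2d, (xx.ravel(), yy.ravel()), crop.ravel(), p0=p0)
        return 2.355 * abs(popt[3])          # FWHM = 2*sqrt(2*ln2) * sigma

    # Hypothetical masters produced with different registration methods
    for name in ("split_rgb_lanczos.fits", "bayer_drizzle.fits", "bilinear.fits"):
        print(name, f"FWHM ~ {star_fwhm(name, x=1024, y=768):.2f} px")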
  12. There are several things said there that I potentially disagree with. I'm also against super pixel mode, but for a different reason. I advocate splitting the Bayer matrix components into separate images and working with them like mono images. That is the cleanest way to go about it. Each OSC sub will produce 1 red, 1 blue and 2 green subs, each 1/4 the size of the original (there is a small sketch of such a split at the end of this post). They will indeed have twice as coarse a sampling rate as the pixel size alone would suggest. This step does not alter the data in any way - everything is preserved as is. Super pixel mode is not like that. When using super pixel mode, two changes happen: - the green data is averaged and only one sample is created out of the two green samples - there is a 1/4 pixel shift between channels for red and blue, and green is transformed in an even stranger way - one green part of the bayer matrix is shifted 1/4 in one direction, the other green part is shifted 1/4 in the opposite direction, and then they are averaged. In my mind that is a mess. Splitting the data is the cleanest way to go about it. After you have split your data into separate "fields" and treat them as regular mono+filter subs, you can decide how you want to register and stack them. I'm also rather against drizzle in general, as I strongly believe it is a misused feature. There is nothing wrong with being undersampled when doing wide field - there is no such thing as "blocky" stars, only an inadequate upsampling algorithm when zooming in - and you really have to be undersampled to capture a wide image in a single go if you don't want to make a mosaic. Using drizzle on anything except undersampled and properly dithered data is just not going to work properly (dithering by the proper amount is another big issue with the drizzle algorithm). To me, choosing a proper resampling algorithm at registration is much more important than the above. People use DSS to stack their images, but it only uses bilinear interpolation, and bilinear, bicubic and similar interpolations are real resolution killers. Here is an experiment you can do to understand how interpolation affects the data: Create an image with a patch of pure noise in it. Now make a copy of that image and shift it by half a pixel in both the x and y directions using some interpolation algorithm. Now, if you do Fourier analysis of these two images, the frequency spectra (amplitudes) should be the same, as shifting by any amount only alters the phase part of the FFT and not the amplitudes. You can test this by shifting by a whole number of pixels (then you don't need to use interpolation) - you'll get the same FFTs from both the original and the shifted image. If you take the FFTs of both images and divide them, you'll get the shape of the filter that has been applied by the interpolation. Here we can instantly see that something is wrong - on the left is the FFT of the original image and on the right is the FFT of the image shifted by 0.5px using bilinear interpolation. If we divide the two images, we get: Look at that - a perfect low pass filter. Bilinear interpolation just kills off high frequencies (detail in the image). That is the reason you get that very grainy noise in DSS stacks. We need to use much more sophisticated interpolation algorithms - or we need to control our dithers so that each image is shifted by exactly an integer number of pixels compared to the other images. We don't do the latter, nor do we have the means to do it (though in theory it should not be that hard if one could connect PHD2 with the imaging software and tell it how much to move so that the image is always shifted by exactly an integer number of pixels).
Like I said, this is a very involved and very technical discussion, so sorry for derailing the original thread on telescope choice - but it is good to know that even our choice of processing workflow has a rather large impact - maybe even bigger than the choice of working resolution.
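A minimal sketch of the CFA split mentioned above, assuming an RGGB Bayer pattern (adjust the row/column offsets for GBRG/GRBG/BGGR):

    import numpy as np

    def split_bayer_rggb(raw):
        """Split an RGGB CFA frame into R, G1, G2 and B sub-images.

        No interpolation is involved - each output is simply every second pixel
        of the raw frame, so it is 1/4 of the original size and has twice as
        coarse a sampling rate, exactly as described above.
        """
        r  = raw[0::2, 0::2]
        g1 = raw[0::2, 1::2]
        g2 = raw[1::2, 0::2]
        b  = raw[1::2, 1::2]
        return r, g1, g2, b

    raw = np.random.randint(0, 65535, (3000, 4000), dtype=np.uint16)  # stand-in for a raw OSC sub
    r, g1, g2, b = split_bayer_rggb(raw)
    print(r.shape, g1.shape, g2.shape, b.shape)   # four 1500x2000 "mono" subs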
  13. In theory, Bayer drizzle, when executed properly, will provide 100% of the resolution available from the pixel size alone - the same as a mono camera. The problem is the way Bayer drizzle is implemented, and if we go down that route we really open up a can of worms. Due to the way we image and process our images, even mono is not sampling at the rate the pixel size suggests - or rather, our final image is not at that sampling rate. The main culprit for this is the interpolation algorithm used when aligning images. If you want a perfect Bayer drizzle that recovers 100% of the resolution, you want to dither by integer pixel offsets, so that the final registration of subs is a simple "shift and add" and you don't need to use interpolation at all. That is, by the way, also the best way to produce an image in terms of resolution - using any interpolation algorithm reduces detail further... The good thing is that it is really hard to tell the difference between, say, 1" and 2", or 1.5" and 3", visually in astronomical images. This is because of the way blur in astronomical images works. It is much easier to spot in planetary images, where the limiting factor is the aperture of the telescope and extensive sharpening / frequency restoration is performed. Here is a screenshot of one of my images (btw, taken in a red zone with an F/8 scope, about 2h total integration time), presented here at ~1"/px. One copy was sampled down to 2"/px and then resampled back up to 1"/px. Can you tell which one? Actually, you should be able to tell by the noise grain - the copy that was downsampled to 2"/px and then upsampled back to 1"/px has slightly larger grain and is just a bit smoother. This is because the detail in the image is not good enough for 1"/px - it is more suited to 1.5"/px - 2"/px sampling (seeing was not the best on that particular night). Btw, the left image was sampled down and up again and the right one is the original. Now, if I do the same with 2"/px and 4"/px, you'll probably see the difference, but again it won't be striking: here both have been reduced to 2"/px, but the left one was further reduced to 4"/px and then upsampled back to 2"/px. The first thing to notice is that 2"/px is a much better suited resolution for this level of detail - the image looks detailed and sharp. Second, now you can clearly see the blurring due to sampling at the lower rate - look at the detail in the bridge - it is clearly sharper in the right image - but the difference is not that obvious; you would not be able to tell unless looking at the images side by side. You can run this down/up-sample test on your own stacks - see the sketch below.
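A minimal sketch of that self-test, assuming a FITS stack and using scipy's cubic-spline zoom for the down/up resampling (the file name is a placeholder):

    import numpy as np
    from astropy.io import fits
    from scipy.ndimage import zoom

    img = fits.getdata("my_stack.fits").astype(float)   # placeholder file name

    # Downsample by 2 (e.g. 1"/px -> 2"/px), then upsample back to the original grid.
    down = zoom(img, 0.5, order=3)
    back = zoom(down, (img.shape[0] / down.shape[0], img.shape[1] / down.shape[1]), order=3)

    # If this residual is only noise, the finer sampling was wasted on the data;
    # if real structure shows up, the image genuinely supports the finer sampling.
    residual = img - back
    print("residual std vs image std:", residual.std(), img.std())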
  14. Polar alignment is really not that crucial when you are guiding. Even when not guiding, I feel that people overestimate the importance of polar alignment relative to periodic error (with any half decent polar alignment, periodic error is likely to cause larger drift than the polar alignment error on all but the highest quality drives). On the other hand, guiding RMS is something you might want to work on. Could it be that the 2" RMS is with the 8" Newtonian and wind? You really want to get total RMS below 1" at least if you want to work at high resolution. Yes and no. The F/ratio of a telescope is not really indicative of imaging speed. We could go into a lengthy explanation of that, but it boils down to: aperture at resolution. Once you have your working resolution set, the larger aperture wins - F/ratio does not play a part there (it sort of plays a part while determining the working resolution / sampling rate). In other words - if you compare a super duper fast 80mm scope at F/3.2 and a 6" scope at F/9, both working at the same sampling rate (arc seconds per pixel), the 6" simply wins as it has more aperture (see the sketch below). The trick is that some sampling rates are hard to reach with certain types of scope. You can't get high resolution with a short focal length scope, and conversely it is hard to get a wide field image (low resolution) with a long focal length scope. On the other hand - yes, you will benefit from longer exposures for another reason: read noise. Your exposures need to be long enough for some other noise source to overcome the read noise. With a long focal length, light pollution is also sampled at a high rate and the sky becomes darker (think of how much darker the sky looks when using a high power eyepiece - same thing). This means that you need longer exposures for the light pollution noise to overcome the read noise. The effect is smaller with DSLR type cameras as they are not cooled and thermal noise can sometimes swamp read noise before LP does. In any case, yes, expose for longer individual subs and expose for as much total time as you can afford per target - as that reduces overall noise regardless of how "fast/slow" your system is.
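Here is a minimal sketch of the "aperture at resolution" argument - the light collected per pixel scales with aperture area times the patch of sky each pixel covers, so at equal sampling rate only the aperture differs. The 80mm F/3.2 vs 6" F/9 figures are the ones from the post; the 1.5"/px sampling rate is just an assumed example:

    import math

    def signal_per_pixel(aperture_mm, sampling_arcsec_per_px):
        """Relative photon flux per pixel ~ aperture area * sky area covered by one pixel."""
        aperture_area = math.pi * (aperture_mm / 2) ** 2
        pixel_sky_area = sampling_arcsec_per_px ** 2
        return aperture_area * pixel_sky_area

    sampling = 1.5                                   # same "/px for both setups
    fast_80  = signal_per_pixel(80, sampling)        # 80 mm F/3.2
    slow_150 = signal_per_pixel(150, sampling)       # 6" (150 mm) F/9

    print(f'6" collects x{slow_150 / fast_80:.1f} more light per pixel')   # ~x3.5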
  15. Ok, so here is a quick breakdown of things: - pixels are not little squares, contrary to popular belief - they are just points without dimensions (they look like squares when you zoom in too much and the algorithm used to interpolate produces squares - this is the simplest algorithm, called nearest neighbor sampling) - in that sense, no image will ever look pixelated if a different interpolation algorithm is used - there is a limited amount of detail that telescope + mount + sky can deliver in long exposure photography, and this is an order of magnitude less detail than scopes are capable of delivering (in the majority of cases) when there is no impact of tracking errors and seeing (atmosphere) - if you are after the most "zoomed in" image you can get, the above is the limit - you won't get more detail by zooming in further once you are at the limit imposed by telescope + tracking + seeing - "zoom" can be defined as the sampling rate or working resolution - it is expressed in arc seconds per pixel and represents how much of the sky's surface is recorded by a single pixel. Detail within that pixel can't be resolved, since all of it is recorded as a single point. This depends on pixel size and focal length. Your camera has 4.3µm pixels and the 200P, for example, has 1000mm of focal length. http://www.wilmslowastro.com/software/formulae.htm#ARCSEC_PIXEL (if you are working with a color camera and not mono + filters, you need to treat your working resolution as twice that value - it has to do with how the R, G and B pixels are spread over the sensor - they are effectively spaced 2 pixels apart for each color; see the sketch below). So you were working at 1.77"/px with your 200P. That is "higher medium" resolution (the naming is really arbitrary). Let's say that over 3-4"/px is wide field / very low resolution, 3"/px-2"/px is low resolution, 2"/px-1.5"/px is medium resolution and <1.5"/px is high resolution. Most amateur setups and sky conditions simply don't allow for detail better than about 1"/px with long exposure imaging (there are techniques that allow for higher resolutions, but those are lucky DSO imaging with large apertures and such). In order to achieve high resolution, all three must be satisfied - larger aperture, excellent tracking and steady skies. In reality, with an HEQ5 and smaller aperture scopes, I would say stick to about 1.3"/px - in fact, if we enter the focal length of the scope that I linked, you'll get just that. Another rule of thumb is that your total guide RMS needs to be about half or less of your working resolution. This means that if you want to go for smaller targets, you need to guide at 0.65" RMS or less. How is your guiding? I would not try 1.3"/px without at least 6" of aperture, as the telescope aperture goes into the equation as well. Another point - since you are using a color camera, use super pixel mode in DSS - that will produce the proper sampling rate for your calibrated images and stack. Going to a higher resolution is just a waste of resolution and a waste of SNR - pixels that are too small are "less sensitive" pixels. There is nothing wrong with going low resolution; in fact, when doing wide field you simply can't avoid it - but you won't capture all the detail available (and that is ok for wide field). True image resolution is when you view it at 100% zoom - one image pixel to one screen pixel - aim for your images to look good like that.
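The sampling rate formula behind those numbers, as a minimal sketch (206.265 is the number of arc seconds per radian divided by 1000, which lets µm and mm be mixed):

    def sampling_rate(pixel_size_um, focal_length_mm, osc=False):
        """Arc seconds per pixel; for an OSC camera each colour is sampled every 2nd pixel."""
        rate = 206.265 * pixel_size_um / focal_length_mm
        return rate * 2 if osc else rate

    # 4.3 um pixels on a 200P (1000 mm focal length), colour camera:
    print(f'{sampling_rate(4.3, 1000, osc=True):.2f}"/px')   # ~1.77"/px, as quoted above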
  16. That is only a viable option if you think you are undersampling. My guess is that you are not undersampling. What is your intended working resolution?
  17. https://www.firstlightoptics.com/stellalyra-telescopes/stellalyra-6-f9-m-crf-ritchey-chrtien-telescope-ota.html
  18. That is not an easy question to answer, as so many things are involved. It really depends on the way you use it, and it's not a straight apples to apples comparison. I would say that the 294MC Pro is significantly more sensitive. On paper that does not make much sense, since the ASI224 is very sensitive as is - but don't look at raw numbers when comparing two cameras - look at how you are going to use them. For example, look at this: this is the ASI224 paired with a Samyang 135mm lens vs the ASI294 paired with a 500mm FL telescope. You get the same field of view, and you can bin the ASI294 to get the same resolution as the ASI224. But you'll be using the ASI224 with 48mm of aperture while you can use the ASI294 with 100mm of aperture (you can find a 100mm F/5 telescope or a combination of aperture and reducer/flattener). Now we have the same FOV and the same resolution - but the ASI294 is using x4 more light collecting surface, hence it is x4 faster, regardless of the fact that the Samyang is at F/2.8. However, you can't mount the ASI294 + 4" scope on an AzGti mount, and the camera + scope will cost something like x4 as much as the ASI224 + Samyang lens (you really need a 100mm triplet to get good performance, and a field flattener / reducer can cost as much as the Samyang lens alone). That would be an apples / apples comparison - which is still not comparing the same things - one setup is vastly more expensive, not taking into account that you also need a x3-x4 more expensive mount to be able to image with it - and portability is not going to be nearly as good. If we on the other hand compare apples to oranges, then you'll get this: you'll get a vastly larger field of view with the ASI294 and Samyang 135, and more sensitivity because the ASI294 has larger pixels - but that also means less resolution. Less resolution is not necessarily a bad thing with this lens, as 3.75µm pixels are still too small for it (the lens operates best at 30 lines per mm without loss of sharpness, which is equivalent to a 15µm pixel size - x4 smaller pixels are going to produce a softer image). I still don't really understand how walking noise forms. I do understand that it is related to read noise / the noise floor and that it forms when there is a uniform drift in one direction - which often happens when not guiding - but I wonder if it has something to do with the way the data is calibrated / stacked? What stacking software are you using? Yes, you mentioned - DSS. Walking noise exclusively happens in DSS as far as I'm aware, and for that reason I believe it could be related to issues with calibration or registration. Could you maybe post the 3s subs - the whole set that you used to produce the first image with walking noise - so I can try to process them in ImageJ and see whether walking noise shows up there as well? I'm suspecting bilinear interpolation as the primary culprit, and also 16bit precision when doing calibration. Dynamic range is really important when doing single exposure photography. It really does not matter much when stacking. Stacking even 2 subs increases the dynamic range by an additional 1 bit. If you stack, say, 128 subs, you'll be adding 7 bits of dynamic range to the base sub. Having a camera with deep wells helps a bit - but does not solve the problem. Say you have a camera with 14 bits of dynamic range. That is a ratio of about 16000:1, or about 10 magnitudes of difference. Can you see a 16 mag star in your image? Well, a magnitude 6 star is going to saturate in a single exposure - no way around it - even with 14 bits of dynamic range. Well - there is a way around it - combine exposure lengths like I mentioned.
For this reason, dynamic range is really not an important quantity for imaging. What is important is read noise, as it determines how long your regular exposure should be (and the rule here is simple - make the read noise small in comparison to any other noise source - often that is the LP noise). Full well capacity again is not as important - if you have saturation, you'll need to take shorter / filler exposures. These can be needed even if you have very deep wells. In fact, even faint targets like galaxies can have more than 10 magnitudes of difference in brightness - cores are often at mag 17-18 while faint outer regions can be mag 28 - mag 30. It is not just stars that can saturate. Even in your example we have the M42 core saturating - and that can happen on any camera. A quick worked example of the dynamic range / magnitude arithmetic is below.
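A quick sketch of the arithmetic used above - converting a dynamic range in bits to stellar magnitudes, and the extra bits gained by stacking, using the 1 bit per doubling of sub count figure from the post:

    import math

    def dynamic_range_in_magnitudes(bits):
        """Brightness ratio of 2**bits expressed in stellar magnitudes."""
        return 2.5 * math.log10(2 ** bits)

    def extra_bits_from_stacking(n_subs):
        """Extra bits of dynamic range from stacking, per the rule quoted above."""
        return math.log2(n_subs)

    print(f"14-bit camera: ~{dynamic_range_in_magnitudes(14):.1f} magnitudes")   # ~10.5 mag
    print(f"128 stacked subs: +{extra_bits_from_stacking(128):.0f} bits")        # +7 bits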
  19. Nice and sharp images. Photos tend to exaggerate CA, since sensors are more sensitive at those wavelengths than the human eye - but can you give a comparison with the subjective impression you had at the eyepiece? Is it less than or equal to what is in the images?
  20. I hope you appreciate what you got there in a couple of minutes - it takes hours to make something like that with different equipment. In any case, why not experiment with setting gain to 0? Yes, it will increase read noise considerably, but it will also increase the FWC to 19K, which should allow at least x4 the exposure time. Another option is to combine very short exposures with long exposures to fill in the over exposed areas, like we discussed. Stars seem much better controlled now as far as bloating goes, so at least that is fixed.
  21. The ED80 with reducer has 510mm of FL, while the Esprit 100 has 550mm of focal length. Field of view for a given sensor size depends on focal length. If you want to go wider than the ED80, look into getting something that has ~300-350mm of focal length. For example, here is a custom scope of 320mm FL with the ASI294 vs the ED80+reducer and ASI294: You can look at the SW72ED with reducer at 375mm, or maybe something like this: https://www.firstlightoptics.com/sharpstar-telescopes/sharpstar-61edph-ii-f5-5-triplet-ed-apo-telescope.html or maybe https://www.firstlightoptics.com/william-optics/william-optics-zenithstar-61-ii-apo.html In the end, you might want to go even wider with something like the Samyang 135mm F/2.
  22. Just your general impressions of both handling and view. What you like and what you dislike in regular use.
  23. Yes, you need to use a UV/IR cut filter with refractive optics and this camera - otherwise you'll have significant CA and bloating of stars. This camera has only an AR coated window but is very sensitive in the IR part of the spectrum, as it was originally meant to serve as a surveillance camera (or rather sensor - not a camera itself). It was purposely left with only an AR window to enable astronomy use at IR wavelengths - like methane bands for planetary imaging. You can use a standard luminance filter, or even an LPS filter if you have one - that will help with light pollution. Not only will it be sharper, it will also further reduce any residual chromatic aberration. These lenses are sharp and fast as far as lenses go, but they are not up to telescope sharpness standards. This means they have some residual chromatic aberration when wide open. Look at this image of M31 I took with a Samyang 85 F/1.4 stopped down to F/2.0: look at that bright star - it still shows a blue halo. Other stars show a red halo. Here is what the different color channels look like next to each other for that image: that is R, G and B. Green is sharpest and looks good. Red is bloated and blue really has that halo issue. In the end, I took an artificial star and tested different filters and apertures to find a good balance. Here is the comparison, without any filtration I believe: F/1.4, F/2.0, F/2.8 and F/4. You can see that at F/2.8 things start to look good and there is not much to be gained by going to F/4.
  24. I did not know that - but it makes perfect sense; there is no reason why imaging software could not issue a mount command between exposures and provide dither functionality.
  25. I think I now get why you want to change the camera - and again, you don't need to. Did you add a UV/IR cut filter between the camera and the lens? Your stars are indeed bloated, but that is due to the IR and UV parts of the spectrum, which the lens is not corrected for. By the way - this is a really sensitive camera and that lens is really good. I mean - look at this: that is a single 3 second exposure. You can already see the Running Man starting to appear and almost the whole "loop" of M42. Similarly, look at this 10 second exposure of M31: the core is saturated because I did a quick linear stretch, but at 10 seconds the dust lanes already start to show and M110 is there ... The stars do look bloated, and that is because of the issue above. Another thing to consider - this Samyang lens is good, but you want to use it at F/2.8 if you shoot with an OSC camera. It will be sharper that way. F/2 is fine for narrowband data and similar, but with small pixels and an OSC camera, F/2.8 is better / sharper.