Everything posted by vlaiv

  1. That really depends on a comparison of the two sensors and their characteristics. What DSLR do you currently have? The specifications you want to look at are:
     - read noise of both sensors (lower is better)
     - dark current levels (again, lower is better)
     - amp glow (absence is better - this one is particularly nasty if you don't have set point cooling; there is sort of a way to deal with it, but it might or might not work for a particular sensor - it is called dark frame optimization)
     - QE of each sensor (higher is better)
     In the end you have to see which one will be better matched in terms of resolution once you bin (you will need to bin on an 8" SCT since it has quite a bit of focal length). You would ideally want to target a sampling rate in the range of 1.2-1.5"/px. With the current DSLR and its resolution of 0.38"/px that will be super pixel mode (OSC sensors have twice lower sampling rate than mono sensors due to the bayer matrix) + x2 binning, which will give you 1.52"/px. With the ASI294 you will have 0.48"/px, so after super pixel mode that will be 0.96"/px - that is ok only if you have rather good guiding and good skies (like 0.5" RMS guiding and good seeing) and your optics are sharp (EdgeHD). If you go for the ASI294 you will want to use a reducer for the SCT.
     My personal opinion is that the ASI294 without cooling would not justify the upgrade unless you are planning to use it for EEVA or similar alongside imaging. If you want a true upgrade then consider extending the budget to the cooled version. There are a few additional benefits of an astro camera vs a DSLR - weight being one, and external powering (USB connection in the case of the non cooled model) rather than an internal battery (which may help with thermal issues). The drawback is of course the need for a laptop to run an astro camera, along with cabling (at least a USB lead in the case of the non cooled model, and a power cord with the cooled one).
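For illustration, here is a minimal sketch of where figures like 1.52"/px come from. It assumes roughly 2030 mm of focal length for the 8" SCT and 3.75 µm / 4.63 µm pixels for the DSLR / ASI294 - substitute the values for your own gear:

```python
# A minimal sketch of the sampling-rate arithmetic quoted above. Assumptions:
# ~2030 mm focal length for the 8" SCT and 3.75 um / 4.63 um pixels for the
# DSLR / ASI294 - substitute the values for your own gear.

def pixel_scale(pixel_um: float, focal_length_mm: float) -> float:
    """Sampling rate in arcseconds per pixel."""
    return 206.265 * pixel_um / focal_length_mm

FL = 2030.0  # mm, assumed focal length of the 8" SCT

for name, pix in [("DSLR", 3.75), ("ASI294", 4.63)]:
    native = pixel_scale(pix, FL)
    super_pixel = native * 2       # OSC super pixel mode halves the sampling rate
    binned = super_pixel * 2       # additional x2 software bin
    print(f"{name}: native {native:.2f}\"/px, super pixel {super_pixel:.2f}\"/px, "
          f"+ x2 bin {binned:.2f}\"/px")
```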
  2. Short version - yes and yes. It of course depends on your local seeing conditions, but even if those are not perfect, a larger aperture will allow for better detail (provided that you can get close to the max performance of the scope you are using - meaning matching focal length to pixel size, using suitable exposure lengths and, of course, a good processing workflow).
  3. Just for those interested, I made an image / diagram or graph (not sure what to call it) that displays the approximate surface brightness of M51 in magnitudes - it might be useful for anyone trying to figure out the exposure time needed to achieve good SNR. The data is luminance (LPS P2 filter), roughly calibrated to give magnitudes (it might be off by a bit - I was not overly scientific about it). It was calibrated on a single mag 12.9 star, with a median filter used to smooth things out. Here it is: Each color is a single "step", so the cores are at around mag 17 and the background is at mag 27 or darker. Again, this is a rough guide.
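For anyone wanting to try the same on their own data, here is a rough sketch of that kind of calibration: turning a background-subtracted, linear luminance stack into a surface-brightness map in mag/arcsec² from one reference star of known magnitude. The names and numbers are illustrative only, not the exact procedure used here:

```python
# Rough surface-brightness calibration sketch: one reference star of known
# magnitude sets the zero point, then linear pixel values (ADU) are converted
# into magnitudes per square arcsecond. Values below are illustrative.

import numpy as np

def surface_brightness_map(image, star_flux_adu, star_mag, pixel_scale_arcsec):
    """Convert a background-subtracted linear image (ADU) into mag/arcsec^2."""
    zero_point = star_mag + 2.5 * np.log10(star_flux_adu)   # from the reference star
    pixel_area = pixel_scale_arcsec ** 2                     # arcsec^2 covered by one pixel
    flux = np.clip(image, 1e-9, None)                        # avoid log of zero / negative sky
    return zero_point - 2.5 * np.log10(flux / pixel_area)

# e.g. a star summing to 150000 ADU known to be mag 12.9, data at 2.0 "/px:
# sb = surface_brightness_map(stacked_luminance, 150000.0, 12.9, 2.0)
```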
  4. Well, you have quite a selection to choose from. I would personally go for the M/N, but the 115mm APO is also an option for wide field. You would need 9 panels to cover M31, for example. It would seem that taking 9 panels takes up too much time compared to a single panel, but in fact you will get almost the same SNR in the same total time as you would using a smaller scope that covers the whole FOV in a single go (provided that the smaller scope is also F/5.25). I'll explain why in a minute.
     First thing to understand is sampling rate. I've seen that you expressed concerns about going to 2.29"/px. The fact is - when you are after a wide field, a low sampling rate is really the only sensible option (unless you have very specific optics - fast and sharp; only in that case can you go for high resolution wide field). Take for example the scope you were looking at - 73mm aperture. It will have an airy disk size of 3.52 arc seconds - the aperture alone is not enough to resolve fine detail - add atmosphere and guiding and you can't really sample below 2"/px. I mean, you can, but there will be no point. Another way to look at it: you want something like at least 3-4 degrees of FOV. That is 4*60*60 = 14400 arc seconds of FOV in width. Most cameras don't have that many pixels in width. The ASI071 is a 4944 x 3284 camera, meaning you have only about 5000 pixels in width. Divide the two and you get the resolution it can achieve on a wide field covering 4 degrees - 14400/5000 = 2.88"/px. So even that camera can't sample any finer if you are after a wide field (not to mention the fact that OSC cameras in reality sample at twice lower rate than mono). Don't be afraid of blocky stars - that sort of thing does not happen, and with proper processing you will get a nice image even if you sample at very low resolution.
     Now a bit about the speed of taking panels vs a single FOV, using the M31 9-panel example above. In order to shoot 9 panels you can only spend 1/9 of the time on each panel. That means x9 fewer subs for each panel than you would be able to take when doing a single FOV with a small scope. This also means that SNR per panel will be x3 lower than the single FOV if you use the same scope - but you will not be using the same scope. Imagine that you are using a small scope capable of covering the same FOV in a single go - it needs to have 3 times shorter focal length to do that. So it will be a 333mm FL scope. We said that we need to match the F/ratio of the two scopes, so you are looking at an F/5.25 333mm scope. What sort of aperture will it have? It will be 333/5.25 = ~63.5mm. Let's compare the light gathering surfaces of the two scopes - the first is 190mm and the second is 63.5mm, and their respective surfaces are in the ratio 190^2 : 63.5^2 = ~9. So the large scope gathers 9 times more light, which means it will have x3 better SNR - and that cancels out the reduced time spent on each panel - you get roughly the same SNR per panel as you would for the whole FOV. You end up with the same result doing a mosaic with the larger scope in one night as you would with a small scope of the same F/ratio covering the whole FOV in one night (see the short sketch at the end of this post).
     There are some challenges when doing mosaic imaging - you need to point your scope at a particular place and account for a small overlap to be able to stitch your mosaic in the end (capture software like SGP offers a mosaic assistant, and EQMOD also has a small utility program to help you make mosaics).
You need to be able to stitch your mosaic properly - APP can do that automatically I believe, and I'm not sure about PI, but there are other options out there as well (even free - there is a plugin for ImageJ). You might have more issues with gradients if shooting in strong LP, because their orientation might not match between panels - but that can be dealt with as well. Unless you really want a small scope, you don't need one to get wide FOV shots - you already have the equipment for that, you just need to adopt a certain workflow.
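A short sketch of the mosaic arithmetic, assuming the 190 mm F/5.25 scope (1000 mm focal length) shooting 9 panels vs a hypothetical ~63.5 mm scope of the same F-ratio covering the whole field in one go, with equal total imaging time:

```python
# Mosaic vs single-FOV sketch: collected signal scales with aperture area and
# time spent; in the sky-limited case SNR scales roughly with the square root
# of that signal. Numbers follow the example in the post above.

import math

total_time = 9.0                     # hours, arbitrary
panels = 9

big_aperture = 190.0                 # mm
small_aperture = 1000.0 / 3 / 5.25   # ~63.5 mm: one third of the focal length at the same F-ratio

signal_big_panel = big_aperture ** 2 * (total_time / panels)
signal_small_fov = small_aperture ** 2 * total_time

print("SNR ratio, one mosaic panel vs small-scope full FOV: "
      f"{math.sqrt(signal_big_panel / signal_small_fov):.2f}")   # ~1.0
```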
  5. Out of interest, the focuser draw tube is said to be 1.6" - but there is a thread at the end of it that is a bit larger - could it be a T2 thread? (1.6" = ~40.6mm - add a mm and a bit and you have the 42mm of T2.) Maybe you could use an FF/FR that has a T2 connection instead?
  6. Astro cameras use the same sensors as DSLR cameras - either CMOS or CCD (of course the actual sensors will differ a bit depending on camera model, but they are in principle the same). It is the other features that distinguish astro cameras from DSLRs. Lack of filters - a DSLR has an IR/UV cut filter that needs to be removed/replaced to get the most out of it (so called astro modding of a DSLR). Astro cameras also don't have an anti aliasing filter on them - some DSLRs do. The most significant feature is set point cooling (not all astro cameras have that), which enables precise calibration to be carried out on your data. Cooling as such is not as important for calibration - it does help with thermal noise, but the ability to always have the sensor at a certain temperature is the key to good calibration, and that is the main difference.
  7. @alan potts What scopes do you already have to image with? A wider FOV is easily achieved by doing mosaics, so you don't really need to spend money on a new scope if you have one that you are pleased with but which gives a narrower field of view than you would like. It is just a matter of proper acquisition and processing of such data, and although people think that doing mosaics is a slower process than going with a wider field scope - it is not necessarily so. If you already have a fast scope (fast as in having a fast F/ratio), then doing mosaics is going to be only marginally "slower" than using a same F/ratio scope capable of a wider field with the same sensor (the difference being only the overlap needed to properly align and stitch the mosaic image).
  8. Background and nebula rendition from the DSS version and color saturation from the PI version - that would be a good combo, I believe.
  9. Ah, I see, lack of software support. Maybe the easiest way to do it (although not the best - I think it's better to bin individual subs after calibration and prior to stacking) would be to download ImageJ (free / open source, written in Java so it's available for multiple platforms) and, once you finish stacking the image in DSS, save it as 32bit fits format (Gimp works with the fits format). Then open it in ImageJ and use the Image / Transform / Bin menu options. Select the average method and x2 or x3 (depending on how much you want to bin) - after that save as fits and proceed to process it in Gimp (or Photoshop).
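For anyone who prefers to script it, a small sketch of the equivalent of ImageJ's Image / Transform / Bin with the "average" method, assuming the stacked image has been loaded into a 2D numpy array (e.g. via astropy.io.fits):

```python
# Software (average) binning of a 2D image by an integer factor.
import numpy as np

def bin_average(data: np.ndarray, factor: int) -> np.ndarray:
    """Average-bin a 2D image; any remainder rows/columns are cropped."""
    h, w = data.shape
    h, w = h - h % factor, w - w % factor
    trimmed = data[:h, :w]
    return trimmed.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# x2 bin:  binned = bin_average(image, 2)
# x3 bin:  binned = bin_average(image, 3)
```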
  10. It is indeed possible to do it, but I don't think that anyone has done it. It would involve deconvolution (which already exists in various implementations) with a deconvolution kernel that depends on position in the image. Although in principle one knows the level of coma in a newtonian scope with a parabolic primary, in practice things are not as easy. The level of coma depends on collimation of the scope and position of the sensor. If the sensor is slightly shifted with respect to the optical axis (not tilted, but shifted - so that the optical axis does not go through the exact center of the sensor), aberrations will not be symmetric with respect to the sensor. One can account for that by examining stars in the image and determining the true optical axis / sensor intersection. One can also generate a coma blur PSF for a given distance from the optical axis (for a perfectly collimated scope), so yes, it can be done.
     The downside is that it is an inherently probabilistic process because the data you have suffers from noise - you are trying to guess rather than precisely calculate, because you don't have exact numbers to start with, but rather values polluted by noise. Another difficulty is that it is better to do it on a stack of data rather than a single sub (better SNR), but a stack of subs will have different levels of coma if you dither or otherwise have less than perfect alignment - like slow field rotation / drift / whatever - because that changes the distance of pixels from the optical axis. The result will be a corrected image but with lower SNR - very similar to what you get when sharpening: a sharper but noisier image.
     That applies to any sort of optical aberration - as long as you have a proper mathematical description of it (like astigmatism depending on distance from the optical axis, or even field curvature) it can be done with deconvolution - at the expense of SNR. It is much easier to correct it with optical elements (and less costly), whether that is a coma corrector, field flattener or whatever ... Just to add - the above method will deal both with star coma blur and with coma blur of extended features, because it operates on a precise mathematical definition of the coma blur rather than an approximation by neural networks such as StarNet++.
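Just to make the idea concrete, here is a very rough sketch of one way to attack it: deconvolve the image in annular zones, each with a PSF appropriate to that zone's distance from the optical axis. The per-radius PSF model is left as a user-supplied placeholder, and the function names are illustrative - this shows the shape of the approach, not a tuned algorithm:

```python
# Zone-wise deconvolution sketch with a radius-dependent PSF.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=20):
    """Basic Richardson-Lucy deconvolution (amplifies noise, as noted above)."""
    estimate = np.full(image.shape, image.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.clip(blurred, 1e-9, None)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

def deconvolve_by_zone(image, center, zone_edges, psf_for_radius):
    """Deconvolve annular zones around the optical axis with radius-dependent PSFs."""
    yy, xx = np.indices(image.shape)
    radius = np.hypot(yy - center[0], xx - center[1])
    result = image.astype(float).copy()
    for r_in, r_out in zip(zone_edges[:-1], zone_edges[1:]):
        mask = (radius >= r_in) & (radius < r_out)
        zone_psf = psf_for_radius(0.5 * (r_in + r_out))   # coma model vs distance goes here
        result[mask] = richardson_lucy(image, zone_psf)[mask]
    return result
```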
  11. Don't use on-chip binning - use software binning instead - no saturation issues, and it gives you more control without losing anything (except for a larger sub size off the camera). In fact you can decide whether to bin at the pre processing stage, so you can try processing the image above with x2 and x3 software bin and see if you like the results better.
  12. I don't think that a single piece of software can do it all. There are a bunch of alternatives out there, as seen from the previous posters (everyone has their own favorite piece of software for a particular purpose). I can give you a decent freeware / open source list that you can try out:
     - SharpCap for planetary capture
     - PIPP for manipulating planetary videos (calibration, pre processing, etc)
     - AutoStakkert for stacking planetary images
     - Registax for wavelet sharpening of planetary images
     - NINA for long exposure capturing (not sure if it handles DSLRs though)
     - DeepSkyStacker for calibration and stacking of long exposure images
     - Gimp for processing (of both planetary and long exposure images) at the end
  13. Nice image. I would not rush into that. Although the 533MC is a nice sensor, I don't think it can replace the 183, since the latter is mono. From what I can see, you used the ST102 as the imaging scope, right? Maybe first fiddle around with it and the gear that you already have. Software binning will produce better SNR, as you are slightly oversampling because you are using a short focal length achromat (colors are not all in focus and that produces blur). Using a yellow filter will also help a lot with star bloat, as it will remove most of the violet part of the spectrum.
  14. Depending on the size of the sensor, you might want to factor in a field flattener as well - and if looking for a wide field setup, go with one that is a reducer as well.
  15. I think that you want something like this: https://www.teleskop-express.de/shop/product_info.php/info/p10419_Skywatcher-Teleskop-Evostar-72mm-f-6-ED-Apochromatic-refractor.html or perhaps a bit better version (not optically, but mechanically, as it has a sliding dew shield and a 2.5" R&P focuser): https://www.teleskop-express.de/shop/product_info.php/info/p8866_TS-Optics-Photoline-72mm-f-6-FPL53-und-Lanthanum-Apo---2-5--R-P-focuser.html
  16. Although many will warn against such viewing, I'm certain that most will agree that some observing is better than no observing, so by all means - if that is the only way, go for it. I think it is just a matter of managing expectations and making the best of the circumstances - like in your case, keeping the window open if that is needed to bring the room close to ambient temperature, and similar. It would be good if you could give us a comparison - how much degradation there is compared to being outside (in the same / similar conditions).
  17. Both 9 and 8 are considered white zone, 7-6 is red, etc. In fact there is a neat "conversion" diagram that one can use to switch between different designations (like the Bortle scale, color scale, NELM, SQM and such). In any case, Bortle 5 ranges from about SQM 19 to 20, so that gives a good indication of observable surface brightness. Add 8.89 to the magnitude per arc minute squared (listed in Stellarium, for example) to get magnitude per arc second squared, and if that number is above 20 you will be able to see some of that particular DSO - but that will depend on its position in the sky and the transparency on a given night. It will also depend on the distribution of light, as magnitudes per arc minute squared are usually an average magnitude for the object - galaxies with a prominent core will be easier to spot, as the core will have higher brightness than the average suggests. Back to the original question - you should be able to see quite a bit, just adjust your expectations of what it should look like. In mag 18.5 skies (that is the border between red and white zone, so the Bortle 7-8 border) with a 100mm scope I was able to observe a number of Messier objects - mostly globular and open clusters, but some nebulae as well and a couple of galaxies. I'm just above the red/white border and I can see M31 with averted vision on a night of good transparency, so from a red zone it should not be a problem most of the time. With an 8" scope I once observed its dust lanes from the same location - mind you, it was a night of very good transparency and M31 was near zenith.
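If you want to see where the 8.89 comes from, it is just 2.5*log10(3600) - there are 3600 square arc seconds in a square arc minute. A tiny helper for the conversion, with an example value thrown in for illustration:

```python
# Convert Stellarium-style surface brightness (mag per arcmin^2) into
# mag per arcsec^2 so it can be compared with SQM readings.

import math

def per_arcmin2_to_per_arcsec2(sb_mag_per_arcmin2: float) -> float:
    return sb_mag_per_arcmin2 + 2.5 * math.log10(3600.0)   # adds ~8.89 mag

print(per_arcmin2_to_per_arcsec2(13.0))   # 13.0 mag/arcmin^2 -> ~21.89 mag/arcsec^2
```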
  18. If you value speed of capture - just keep the mono camera; it will be faster than the alternative OSC (faster in terms of reaching a set signal to noise ratio - less imaging time for the same result). On the other hand, if you prefer "ease of operation" - then switch to OSC. However, do keep in mind that OSC + Triband filter is not going to produce results as good as mono + narrowband on nebulae.
  19. I would like to point out something here that might not be obvious (for me at least - it took some time to realize it): a larger sensor is a faster sensor.
     Most of the recent talk about speed of a setup has been in terms of pixel scale, and I'm to a large extent guilty of promoting that view. It is in fact a matter of aperture at a given resolution. However, most of the time that discussion neglects one important thing - resolution is not set in stone. It is, for the most part, if we follow the traditional processing workflow - but then again, why should we, if there is an alternative that allows for more flexibility?
     Before I explain what I meant by the above, we need to assert one observation - when doing software binning of pixels, the only difference between a camera with smaller pixels and one with larger pixels (everything else being equal) is the amount of read noise. There is a rather nice way to control the impact of read noise on the final result - exposure length. Longer exposures come with their challenges (like guiding, probability of wasted data, etc ...), but in principle one can control the impact of read noise.
     Back to the story of the faster sensor. If we have a larger sensor - we can use a larger scope (both aperture and focal length) to cover the same FOV. If we control our pixel scale - for example through fractional binning or similar techniques - we can image the same FOV at the same resolution (in terms of "/px) using the larger sensor on the larger scope. The larger scope will gather more light, so the resulting SNR will be better with the larger sensor. A larger sensor is faster (it does however mean certain processing techniques, being aware of read noise management, and pairing it up with a suitable telescope for the target FOV while matching resolution via suitable binning). In that sense the 294 is faster than both the 183 and the 533 (at a small price premium).
     Just to add - I'm in general not overly concerned with amp glow in CMOS sensors - from what I've seen, it calibrates out, and the level of noise injected is comparable to the read noise level, which is rather low on CMOS sensors anyway.
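A small sketch of the argument in numbers: same target FOV, same F-ratio, and the final pixel scale equalized by binning - the larger sensor pairs with a longer focal length and therefore a larger aperture, so it collects more light for the same patch of sky. The sensor figures below are only illustrative (roughly 183-like and 294-like):

```python
# "Larger sensor is faster" sketch at matched FOV and F-ratio.
import math

cameras = {
    "smaller sensor (183-like)": dict(width_mm=13.2, pixel_um=2.4),
    "larger sensor (294-like)":  dict(width_mm=19.1, pixel_um=4.63),
}

fov_deg = 2.0     # desired field width
f_ratio = 5.0     # both scopes assumed to share this

for name, cam in cameras.items():
    # focal length so that the sensor width spans the desired FOV
    fl_mm = cam["width_mm"] / (2 * math.tan(math.radians(fov_deg / 2)))
    aperture_mm = fl_mm / f_ratio
    scale = 206.265 * cam["pixel_um"] / fl_mm
    print(f"{name}: FL {fl_mm:.0f} mm, aperture {aperture_mm:.0f} mm, "
          f"native {scale:.2f}\"/px, relative light grasp {(aperture_mm ** 2):.0f}")
```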
  20. That paper pretty much describes what I intended to do. There are some differences though. They deal with daytime photography, so the illuminant needs to be taken into account. With AP we don't need to worry about that aspect - we can treat all light as coming from a light source with a precisely defined spectrum (to some extent even reflection nebulae; although they don't generate their own light, there is simply no way to illuminate them with another type of light). We also have a way of standardizing the transform matrix between sensors. There are a couple of ways to do it, but the most obvious is to derive the transform matrix from a range of black body objects in a certain temperature range (star colors). We can also include some "standard" narrow band lines like Ha/Hb, OIII, SII and such in the set used to compute the transform matrix. That should give us a "standard color card" for color calibration.
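One possible way to derive such a matrix, sketched below: compute each sensor's RGB response to a set of reference spectra (black bodies over a range of temperatures plus a few emission lines), then solve for the 3x3 matrix that maps one set of responses onto the other in a least squares sense. The response arrays here are placeholders, not real sensor data:

```python
# Least-squares fit of a 3x3 color transform matrix between two sets of responses.
import numpy as np

raw_rgb = np.random.rand(50, 3)      # N x 3 responses of the camera being calibrated (placeholder)
target_rgb = np.random.rand(50, 3)   # N x 3 responses in the reference color space (placeholder)

# Solve raw_rgb @ M ~= target_rgb for the 3x3 transform matrix M
M, *_ = np.linalg.lstsq(raw_rgb, target_rgb, rcond=None)

calibrated = raw_rgb @ M             # apply the transform to camera data
```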
  21. Hi and welcome to SGL. Planetary imaging is rather different from both daytime photography and night time DSO astrophotography (whether wide field - like the Milky Way shots that you enjoy - or closer up). A DSLR type camera is not ideal for planetary imaging, and yes, you want long focal length optics to be able to show any sort of detail on planets. You will need at least a meter of focal length to do that (people often use barlows with shorter focal length scopes to get enough focal length). You don't need a particularly large mount to satisfy your needs. A small compact scope will be good for planetary / lunar. Take a look at Maksutov telescopes or SCTs. In smaller apertures, up to 5", these are lightweight and can be mounted on really simple mounts (EQ3/EQ5 class mounts and even compact travel mounts like the AZGti). For planetary imaging, unlike DSO imaging, you don't need a very precise mount either, since you'll be using very short exposures and stacking. For Milky Way shots, you can just use a camera + lens combination attached to this tracking mount. Besides the above lightweight mount, you will also need a planetary type camera - something like the ASI224 or similar.
  22. Indeed. You can think of P-V as the difference between the lowest point on a map (sea level) and the highest mountain peak. It tells you the range of values on the wavefront, but it does not tell you anything about what the relief is (a single mountain surrounded by ocean, or something else). RMS is akin to "average terrain level" - in the same sense that guide RMS is the average error of guiding (it is not the average displacement, as that would be 0 because any error in +RA would cancel any error in -RA, and similarly in DEC). Strehl is a very good indicator of optical performance as it deals with energy, and that is what we see. In any case, P-V and RMS of 0 and Strehl of 1 represent a perfect optic - a flat wavefront.
     1/6 wave means exactly that in P-V - the max difference between two points on the wavefront is less than one sixth of a wavelength. It roughly corresponds to 0.93 Strehl: if low order spherical aberration were the only one present and it had a magnitude of 1/6 wave, then Strehl would be 0.93. Similar relations hold for other values. In some cases aberrations can cancel out a bit - look at the first report. P-V for that scope is 1/5.6 but Strehl is 0.97. If the scope had P-V of 1/5.6 and the only aberration present was low order spherical, Strehl would be worse than this. The relation of P-V to RMS is similar. In the above two diagrams you have the same P-V error but different RMS errors, as the contribution of aberrations is different.
     Mind you, if you get the above report for a newtonian mirror - that is not the whole story, as you need to account for the secondary as well. Secondary mirrors, as well as diagonals, usually have 1/10 wave or better surfaces, but since things compound, that error can actually be beneficial or can make things worse - depending on the respective surfaces. The same goes for the lenses in a refractor. Aberrations that are not symmetric can somewhat cancel out between elements, and rotating lens elements can affect the final wavefront. That is the reason why lens elements are marked for orientation. If you rotate them, you can actually make things worse, as aberrations that canceled out start "reinforcing" instead.
     If you are interested in doing such a report yourself, there is a way to do it. It requires a planetary type camera or guide camera and a piece of software. The process is as follows: you take images of a defocused star (both in and out of focus), stack them in a certain way and import them into the WinRoddier software for Roddier analysis. The software does its magic and you get something like this: The above is a test of my RC8 scope. I'll probably repeat it at some point, just to verify the results and also make sure collimation is spot on this time (not sure about the last measurement). I'm currently fiddling around with Zernike polynomials and trying to figure out how I would implement the above software myself (I already have some ideas). Both Aberrator and WinRoddier might be a bit outdated, and I think the astro community could do with the same (or enhanced) functionality.
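The link between RMS wavefront error and Strehl that such reports rely on is (approximately) the Maréchal relation, Strehl ≈ exp(-(2πσ)²) with the RMS σ in waves. The P-V to RMS ratio depends on which aberration dominates - which is exactly why two optics with the same P-V can have different Strehl - so the factor of ~3.35 in the sketch below applies to low order spherical aberration only:

```python
# Marechal approximation sketch: Strehl from RMS wavefront error.
import math

def strehl_from_rms(rms_waves: float) -> float:
    """Marechal approximation - reasonable for small aberrations."""
    return math.exp(-(2 * math.pi * rms_waves) ** 2)

pv = 1 / 4                  # waves, P-V
rms = pv / 3.35             # P-V to RMS factor for low order spherical aberration
print(f"1/4 wave P-V spherical: RMS ~ {rms:.3f} waves, Strehl ~ {strehl_from_rms(rms):.2f}")
# -> ~0.80, the classic "diffraction limited" benchmark
```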
  23. This is debatable, and it depends on the technique used to assess the wavefront. If an interferometer is used then yes, the report should include an image, but an interferometer is not always needed to determine the above. As pointed out above - probably the single most interesting piece of data on such a report is the Strehl figure at a particular wavelength. I'll just do a brief explanation of what is going on, for all interested (and partly because I'm actually rather involved with this topic at the moment):
     All tests are done on the optical axis, so this sort of report only gives you an idea of how the scope will perform on axis - it tells you nothing about off axis aberrations (which also depend on telescope design). The principal idea is that the wavefront from a star (a point source - a single point along the optical axis) arrives at the telescope aperture perfectly flat - uniform. After the telescope does its thing and focuses the light to a point, any optical "defects" of the particular telescope act as if someone "twisted" that wavefront. The colorful diagram above titled "Wave front" displays a 3D image of the wavefront phase (you can think of it as how much the rest of the wavefront lags behind the point that first reaches the aperture of the scope). This does not show the surface of your mirror or lens or whatever! It relates to the light wavefront. A perfectly flat wavefront will create the best image that the telescope aperture is capable of delivering. Any bending/ripple in that wavefront causes image degradation. That is all you need to characterize the performance of the telescope (on axis) - the wavefront. It defines the PSF (the image of a star with perfectly still atmosphere), the MTF (the diagram that shows how higher frequencies get attenuated) and basically all other info on that report.
     Any sort of deformation of the wavefront can be characterized by a set of functions (their sum) - called Zernike polynomials. Many of those polynomials correspond to a certain optical aberration or other effect in the telescope. For example, tilt and defocus (removed in this report because they are not inherent in the optics) represent the angle of incident light (either a star that is not on the optical axis or a tilted focuser / sensor) and actual defocus (focuser distance from the ideal focus position). Then there are astigmatism, coma, spherical aberration, etc ... All of these are represented by a particular Zernike polynomial. It's a bit like decomposing a vector into unit vectors (3X + 7Y + 1Z sort of thing) - the wavefront decomposes into these Zernike polynomials in the same way. The aberrations that are not symmetric (tilt, coma, astigmatism) have both an angle and an intensity, while symmetric ones have only an intensity (defocus, spherical, ...). For further info - check the wiki page: https://en.wikipedia.org/wiki/Zernike_polynomials
     These figures are reported on the left. If one had the complete decomposition into Zernike terms, one could reconstruct the wavefront of that particular telescope, but you need more and more terms to get a more precise description of the wavefront - in these reports only the first few terms are recorded, so it is only a coarse representation of the recorded wavefront.
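To make the decomposition idea concrete, here is a toy sketch: reconstruct a wavefront over the unit pupil from a handful of Zernike-like terms (defocus, astigmatism, coma, spherical), each scaled by a coefficient such as those listed on a report. Normalization conventions differ between packages, so treat this as schematic, with made-up coefficients:

```python
# Sum a few (unnormalized) Zernike terms over the unit pupil and report RMS.
import numpy as np

n = 256
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
rho, theta = np.hypot(x, y), np.arctan2(y, x)
pupil = rho <= 1.0

terms = {
    "defocus":       lambda r, t: 2 * r**2 - 1,
    "astigmatism":   lambda r, t: r**2 * np.cos(2 * t),
    "coma x":        lambda r, t: (3 * r**3 - 2 * r) * np.cos(t),
    "spherical":     lambda r, t: 6 * r**4 - 6 * r**2 + 1,
}
coeffs = {"defocus": 0.0, "astigmatism": 0.02, "coma x": 0.01, "spherical": 0.03}  # waves, illustrative

wavefront = np.zeros_like(rho)
for name, z in terms.items():
    wavefront += coeffs[name] * z(rho, theta)
wavefront[~pupil] = np.nan

print(f"RMS over the pupil ~ {np.nanstd(wavefront):.4f} waves")
```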
  24. There is more to an objective lens than just color correction and field curvature - how it behaves off axis, for example. With an all-spherical lens design you have the following parameters (I could be wrong on this, so happy to be corrected): radius of curvature of each surface, thickness of each element, spacing between elements and refractive index of each element. You play with those to get the characteristics of your scope - color correction, correction of aberrations on/off axis - but there is also one more point - tolerance. You need to know how much the manufacturing process can deviate from the ideal figure or thickness or spacing and still be within the optical requirements that you set as a designer. Different combinations of the above parameters will have different levels of "resilience" to errors. I suspect that as the manufacturing process is being automated, optical designers need to consider what sort of errors they can expect from the machines doing the figuring, as well as tolerances on glass purity, and account for those in their design process. On top of all of that - you need to be cost effective and have a market for your product. Btw, I believe that we are pretty much at the stage where a computer can do it all - I just wonder if the software has been written for it - you input available types of glass, expected manufacturing errors, parameters of the scope (possibly a few more specs like cost and time to grind/polish) and hit the "optimize" button - and the software gives you the optimal solution. If it says to use two ED types of glass - who are we to argue?
  25. Probably the best option would be to go with the ASI385. It has a larger pixel size (same as the ASI224 - better for small DSOs) and very low read noise. It is also a larger sensor than the 224 - again better for small DSOs. You don't need a reducer / field flattener on such a small sensor, but if you use one you will have a larger FOV, which is a good thing for DSO imaging. Worth checking out is the Astronomy Tools FOV calculator with your specific combination of scope / sensor.
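A minimal version of the field-of-view / sampling calculation that sites like Astronomy Tools perform, handy for checking a scope + camera pairing. The sensor figures below are examples only, not verified specifications:

```python
# FOV and pixel scale for a given sensor + focal length.
import math

def fov_deg(sensor_mm: float, focal_length_mm: float) -> float:
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_length_mm)))

def pixel_scale(pixel_um: float, focal_length_mm: float) -> float:
    return 206.265 * pixel_um / focal_length_mm

fl = 650.0                                        # mm, example scope
width_mm, height_mm, pixel_um = 7.3, 4.1, 3.75    # example small-sensor camera

print(f"FOV: {fov_deg(width_mm, fl):.2f} x {fov_deg(height_mm, fl):.2f} deg, "
      f"sampling {pixel_scale(pixel_um, fl):.2f}\"/px")
```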