Everything posted by vlaiv

  1. Not sure what is going on exactly, but here is the "theory" in a nutshell - it might help you figure out what is going on. There are three different configurations you can use to image with your scope:
- prime focus (just the camera sensor at the focal plane of the telescope)
- EP projection - an eyepiece between the telescope and the sensor
- afocal imaging - both an eyepiece and a camera lens between the telescope and the sensor.
The first one I presume you understand. The third one is like using the telescope for visual observation, with the difference that the camera lens acts like the eye lens and the camera sensor acts like the retina. In that configuration the beam exiting the eyepiece is collimated (parallel) and it is the eye/camera lens that does the focusing. That gives the regular EP focus position. If you are using EP projection, the eyepiece acts as a simple lens and you no longer want the light to exit as parallel rays, as that would give a blurry image on the sensor - you want the EP to focus the light onto the sensor. This can be achieved in two ways: the EP can act as a focal reducer, or it can act as a "regular" re-imaging lens. Here are simple diagrams of light rays to help you understand: the upper diagram shows the EP acting as a focal reducer, while the lower diagram shows the EP acting as a re-imaging lens. If we take the regular EP focus position as the "baseline", the two cases can be summarized as:
- For the focal reducer case, you need the sensor closer to the EP than the focal length of the eyepiece (so with a 32mm EP, for example, the sensor must be less than 32mm away, or "inside" its focal point). This configuration also moves the focus position inwards with respect to the baseline and it acts as a regular focal reducer - reducing the size of the image. The reduction depends on the sensor-EP distance.
- For regular EP projection (the bottom diagram), you want the sensor further from the EP than its focal length. This configuration moves the focus point further away from the telescope (outward focuser position compared to baseline) and gives different magnifications depending on where you put the sensor. If you put the sensor at twice the focal length you get 1:1 - no change in scale - and it also means you will need one focal length of outward focuser travel.
Depending on what you want to achieve - you want the second scenario; the first one is rather difficult and in general you don't have enough inward travel for it. You can use an online calculator for the distance and focus travel needed, like this one: http://www.wilmslowastro.com/software/formulae.htm#EPP It will give approximate results (good enough for orientation). For example, using a 25mm eyepiece on your scope and placing the sensor 80mm from it will give a 2200mm effective focal length. You can also use the lens formula to calculate the outward focus needed: 1/object + 1/image = 1/focal_length, so 1/object = 1/focal_length - 1/image = 1/25 - 1/80 = 0.0275. So the object distance is 36.4mm, and since the regular FL of the eyepiece is 25mm, the difference is 11.4mm - that is how much outward focus you need in the above case. Hope this helps.
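Just to make the numbers easy to play with, here is a minimal Python sketch of the same arithmetic (the 1000mm scope focal length is only an assumption chosen so that the 2200mm example above comes out):

```python
# Minimal sketch of the eyepiece-projection arithmetic from the post above.
# All values are the example numbers; the 1000 mm scope focal length is an
# assumption that reproduces the quoted 2200 mm effective focal length.

def ep_projection(scope_fl_mm, ep_fl_mm, ep_to_sensor_mm):
    # Projection magnification: m = (d - f) / f
    m = (ep_to_sensor_mm - ep_fl_mm) / ep_fl_mm
    effective_fl = scope_fl_mm * m
    # Thin-lens formula: 1/object + 1/image = 1/f  ->  object distance
    object_dist = 1.0 / (1.0 / ep_fl_mm - 1.0 / ep_to_sensor_mm)
    outward_travel = object_dist - ep_fl_mm   # relative to baseline EP focus
    return effective_fl, outward_travel

print(ep_projection(1000, 25, 80))   # -> (2200.0, ~11.4 mm outward travel)
```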
  2. Not sure which tutorial to recommend, but I can make a quick list of things you should try out when developing your own workflow for planetary imaging. Which model of QHY5-II do you have (there are L, M and P versions if I recall correctly - they differ in the sensor used)?
- Use a x2.5 to x3 barlow with that scope and camera.
- Use an IR/UV cut filter with that scope (or maybe a narrowband filter for the moon).
- Capture video using SharpCap or FireCapture in 12bit mode, using ROI to frame the planet (unless doing lunar).
- Make sure your laptop has a fast enough HDD (best if it is an SSD with enough speed). Use a USB3.0 port if your camera is USB3.0 (but I think the QHY5-II is only USB2.0, right?).
- Use the SER file format to capture your movie (not AVI).
- Keep exposure length short - about 5-10ms - and capture as many subs as you can for about 3-4 minutes if doing planets; for the moon you can go longer than that. Use higher gain settings. If you start saturating (very unlikely unless you are imaging the moon) - drop the exposure length.
- Capture at least a dark movie as well (same settings as the regular movie, except you cover your scope to block all light). If you have a flat panel, capture flat and flat dark movies as well. Aim for at least 200-300 subs in the dark, flat and flat dark movies.
- Use PIPP to calibrate and prepare your movie - basic calibration, stabilize the planet, etc. - and again export as SER.
- Use AutoStakkert! 3.0 (or whichever version is latest now) to stack your movie - save as at least a 16bit image (32bit if possible).
- Use Registax 6 to do wavelet sharpening and do the final touch-up in Gimp or PS or whatever image manipulation software you prefer.
  3. For the ASI1600 and the histogram, you really need to examine the resulting 16bit FITS to see if there is something wrong with it. It will also depend on the gain and offset settings that you used. It would be best not to mess too much with those and leave gain at unity (139) and offset around 50-60 (I personally use 64). The 12bit range that the ASI1600 operates on is not quite as large as the 14 or 16 bit of some DSLRs and most CCD cameras, but it is still quite a large range. You can't expect the histogram to look the way you are used to in daytime photography or when you open a regular image in Photoshop. That does not mean it is bad. You also need to understand that a sub from the ASI1600 is going to look rather dark if it is not scaled in intensity or stretched. That is quite normal and does not mean the camera is bad or malfunctioning. For reference, here is a single sub from my ASI1600, its histogram and what was really captured, for comparison: There seems to be almost nothing in that sub (it's a single calibrated Ha sub, 4 minutes long). The histogram also looks very poor - almost "flattened" to the left: But in reality that histogram is fine; if we "zoom in" on the important section of it, you will see that it looks rather good: It has a nice bell shape with the right side being a bit more extended and "thicker" - meaning there is some signal in the image. Don't be confused by the negative sign on the left of this histogram - it is in fact a calibrated sub, so the dark frame has been subtracted. The resulting signal in the image when properly stretched is this: As you see, there is plenty of detail in a single sub, although it will not show without stretching. Here is one sub from the same set, but this one is still in 16bit mode and not calibrated: You can see that the image looks noisier and there is amp glow to the side (all of which calibrates out), and the histogram is "bar like" - that is because the image is still in 16bit mode and uncalibrated (unlike the one above, which is 32bit and calibrated). The moral of this is - don't judge camera output and quality by what your capture application is showing unless you know what to look for. You need to examine what subs from your camera look like when you stretch and calibrate them to see whether something is wrong or whether they show enough detail, because the display in the capture application can be misleading.
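If you want to check your own subs outside the capture application, something along these lines will do a quick stretch (assuming Python with numpy, astropy and matplotlib installed; the file name is just a placeholder):

```python
# Quick check of what a "dark looking" sub really contains: load the FITS and
# apply a strong stretch before judging it.
import numpy as np
from astropy.io import fits
import matplotlib.pyplot as plt

data = fits.getdata("single_sub.fits").astype(np.float64)

# Percentile-based linear stretch followed by asinh to lift faint signal
lo, hi = np.percentile(data, (1, 99.9))
stretched = np.arcsinh((np.clip(data, lo, hi) - lo) / (hi - lo) * 100) / np.arcsinh(100)

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].imshow(stretched, cmap="gray")
axes[0].set_title("stretched sub")
axes[1].hist(data.ravel(), bins=256)   # histogram of the raw pixel values
axes[1].set_title("histogram")
plt.show()
```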
  4. The screen capture suggests that you are using the ASI1600 in 8bit mode, and that is not going to produce good results. Switch to 16bit mode and examine the image stretched to see the captured detail.
  5. Really remarkable images (both the scaled down version and full size). Hope you don't mind me noticing the following, but in my view it is what is robbing your work of perfection - if it were not for these small details, I would really consider the above images perfection in AP. You are slightly oversampled (at 100% the large image does not render stars as pin points; there is a softness to them, which means slight oversampling. There is also the issue of "fat" diffraction spikes, which means the atmosphere was not playing ball and there is no detail to justify such resolution). Going with a lower sampling rate will further improve SNR - which is great already; this is probably the best rendition of this galaxy that I've seen (not talking about M31, but this little fella): This is the first time I've clearly seen the bar in this galaxy and how it's twisted. Going with a lower pixel scale would give you additional SNR and a smoother background while everything would still be visible in the image. The second thing is obviously the edge correction of your setup. It is a fast newtonian astrograph and sure, it's going to suffer some edge softness on larger sensors, but in this case it rather hurts the mosaic because the overlaps can be easily seen, like this: Also, not sure what software you used, but the stitching is not quite perfect, as this part shows: And the third one is obviously the blown cores of M31 / M32. I'm aware that I might be considered too harsh with my comments since you produced some splendid images, but I do really think that the above things can be easily rectified (add a few filler exposures for the cores, be careful about the stitching / blending part and bin your data in software) and then you will be even closer to perfection in your work.
  6. I'm sure you can pull it out a bit better, and flat correction will not hurt the image either.
  7. On the other hand, for the price of those four SVBony eyepieces you can almost get 4 BST Starguiders, which are known to work very well. FLO offers a 15% discount on 4 or more EPs purchased, and the stock price is £47 for a single EP, so if you purchase 4 of them (5, 8, 12, 15, 18 and 25mm focal lengths are available, each with 16mm eye relief and 60 degrees of AFOV, and they will work fine in an F/6 scope) it will cost you only 20 quid more than the offer you linked to.
  8. To get a decent amount of nebulosity around M45 you need to expose for at least a couple of hours total, so 20-40 subs of 30 seconds to one minute (well under an hour of total integration) is just not going to be enough. The second important thing is processing, of course - you need to gain more skill in processing in order to render faint nebulosity properly. For example, I'm certain that the first image in the thread - that of M45 - can reveal much more nebulosity than it now shows. A quick manipulation of the attached JPEG gets this: As you can see, there is nebulosity there even after conversion to 8bit and JPEG. I'm sure that in 32bit format it can be rendered much better.
  9. Btw, that line is branded differently; have a look at these examples: https://www.rothervalleyoptics.co.uk/rvo-68-wa-eyepieces-125.html (this seems to be an exact match to the SVBony ones you found on eBay, with a matching price) https://www.teleskop-express.de/shop/product_info.php/info/p4923_TS-Optics-Ultra-Wide-Angle-Eyepiece-6-mm-1-25----66--field-of-view.html https://www.telescope.com/6mm-Orion-Expanse-Telescope-Eyepiece/p/8920.uts https://agenaastro.com/agena-6mm-enhanced-wide-angle-ewa-eyepiece.html And I'm sure the list does not end there ...
  10. I have no idea what they are like as I've not used them, but they do look conspicuously like the known "gold line" range of eyepieces. Maybe have a read about that line of EPs to get an idea of their performance, although SVBony states 17mm eye relief for the whole range - the gold line EPs differ in ER from 14.8 to 15mm, and the gold line is quoted at 66 degrees vs 68 for SVBony. The EP sizes and range of magnifications seem to match those of the gold line.
  11. Found similar graphs myself, but I have no idea how to read them, or rather what the meaning of the DN units is (or, for that matter, log2(DN), although I suspect that is the number of bits needed for the DN value - just the log base 2 of the number).
  12. Unfortunately I can't seem to find read noise values for the 80D expressed in electrons at different ISO settings (another advantage of astro cameras - you get those specs, or you can measure them), but regardless of that I would use longer exposures since you are vastly oversampling with the 8" SCT. There is a hint online of it being "ISO-less", so we can assume the read noise is pretty low. It also means that you can use something like the ISO200-ISO400 range to get more full well capacity. So the first recommendation is: go as long as you can in exposure length - at least a couple of minutes.
The second recommendation would be to try proper calibration, regardless of the fact that you don't have set point cooling. For your next session, gather all the calibration subs to try it out (you can still skip some steps and do a different calibration by just omitting certain files from your workflow):
- Darks at a temperature close to the one you worked at during the night - maybe take 10-20 darks before you start your lights and another 10-20 darks after you finish, or do it on a cloudy night when the temperature is close to the one you shot your light subs at. Try to get as many dark subs as possible (at least 20-30, but more if you can).
- Do a set of bias subs - again, gather as many as you can.
- Do a set of flats (you need to do this on the night of imaging if you don't have an obsy and need to disassemble your rig at the end) and
- do a set of matching flat darks.
I don't know what software you are using, but do a regular average for bias, flats and flat darks, and use sigma reject stacking for darks. Also use dark optimization (there is a checkbox in DSS if you are using that for stacking). If you find any artifacts like vertical/horizontal streaks or similar in the background of your final image - that means dark optimization failed for your sensor - then try what most people do with DSLRs: use bias instead of darks (and flat darks).
The next thing to do is use super pixel mode when debayering. Again, that is not the best way to do things (the best way would be very complicated in terms of software support), so we will settle for second best. Super pixel mode just means that the R, G and B channel images are made in such a way that the 4 adjacent pixels in each bayer block result in a single pixel in each channel. It uses the one R pixel from each 2x2 bayer block for the R channel image, the one B pixel for the B channel image, and it averages the 2 green pixels for the G channel image. The resulting R, G and B images have half the resolution of your sensor - in this case 3144 x 2028 instead of 6288 x 4056. It also means these R, G and B images are no longer sampled at 0.38"/px but at 0.76"/px (which is the actual sampling rate of a color sensor). In DSS there is an option for this in the RAW settings.
Now stack your image and save the result as a FITS file. The next step is to bin that image x2 to get your sampling rate to 1.52"/px. For that you will need the ImageJ software (it is free and written in Java, so it runs on almost any OS). Open your FITS file (each channel, or if it is a multi-channel image it will open as a stack) and run the Image/Transform/Bin menu command. Select 2x2 and the average method. Do this on each channel, or once on the whole stack. After that you can save the resulting image as FITS again (or, if it was opened as a stack, use Save As -> Image Sequence, select the FITS format and other options, and it will write individual channel images that you can combine back in the photo editing app of your choice).
In case you are using PixInsight, all the above options are also available to you (at least I'm sure they are - I don't use it personally). Btw, the resulting image will be halved in height and width once more after the bin, so the final resolution will be 1572 x 1014 (or rather close to 1500 x 1000 if you account for slight cropping due to dither between frames). Yes, almost forgot - do dither between each sub; that will improve your final SNR quite a bit. (See the sketch below for what super pixel extraction and x2 software binning amount to numerically.)
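For anyone curious what super pixel extraction and x2 average binning actually do to the numbers, here is a rough numpy sketch (just an illustration with a random array standing in for an RGGB raw frame - DSS / ImageJ do the real work):

```python
# Rough numpy sketch of "super pixel" debayering plus x2 software binning.
# Array dimensions match the 80D example above.
import numpy as np

raw = np.random.rand(4056, 6288)          # stand-in for an RGGB raw frame

# Super pixel extraction: one pixel per 2x2 bayer block and channel
R = raw[0::2, 0::2]
G = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0
B = raw[1::2, 1::2]
print(R.shape)                            # (2028, 3144) - 0.76"/px in the example

# x2 software binning (average) of one channel
def bin2x2(img):
    h, w = img.shape
    return img[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).mean(axis=(1, 3))

print(bin2x2(R).shape)                    # (1014, 1572) - 1.52"/px in the example
```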
  13. That really depends on a comparison of the two sensors and their characteristics. What DSLR do you currently have? The specifications you want to look at are:
- read noise of both sensors (lower is better)
- dark current levels (again, lower is better)
- amp glow (absence is better - this one is particularly nasty if you don't have set point cooling; there is sort of a way to deal with it, but it might or might not work for a particular sensor - it is called dark frame optimization)
- QE of each sensor (higher is better).
In the end you have to see which one is going to be better matched in terms of resolution once you bin (you will need to bin on an 8" SCT since it has quite a bit of focal length). You would ideally want to target a sampling rate in the range of 1.2-1.5"/px. With the current DSLR and a resolution of 0.38"/px that means super pixel mode (OSC sensors have half the sampling rate of mono sensors due to the bayer matrix) plus x2 binning, which gives 1.52"/px. With the ASI294 you will have 0.48"/px, so after super pixel mode that will be 0.96"/px - that is ok only if you have rather good guiding and good skies (like 0.5" RMS guiding and good seeing) and your optics are sharp (EdgeHD). If you go for the ASI294 you will want to use a reducer on the SCT. My personal opinion is that the ASI294 without cooling would not justify the upgrade unless you are planning to use it for EEVA or similar alongside imaging. If you want a true upgrade then consider extending the budget to the cooled version. There are a few additional benefits of an astro camera vs a DSLR - weight being one, and external powering (USB connection in the case of the non-cooled model) rather than an internal battery (which may help with thermal issues). The drawback is of course the need for a laptop to use an astro camera, along with cabling (at least a USB lead for the non-cooled model, and a power cord with the cooled one).
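For reference, the "/px figures above come from the usual plate scale formula; a tiny Python sketch follows (the pixel sizes and the 2032mm SCT focal length are assumed example values, not measured):

```python
# Plate scale: scale ["/px] = 206.265 * pixel_size [um] / focal_length [mm].
# Pixel sizes (~3.7 um for the 80D, 4.63 um for the ASI294) and the 2032 mm
# focal length of an 8" SCT are assumptions for illustration.

def plate_scale(pixel_um: float, focal_mm: float) -> float:
    return 206.265 * pixel_um / focal_mm

print(round(plate_scale(3.7, 2032), 2))    # ~0.38 "/px (mono-equivalent)
print(round(plate_scale(4.63, 2032), 2))   # ~0.47 "/px
# OSC / super pixel mode effectively doubles these; x2 binning doubles again.
```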
  14. Short version - yes, yes. It of course depends on your local seeing conditions, but even if those are not perfect, a larger aperture will allow for better detail (provided that you can get close to the max performance of the scope you are using - meaning matching focal length to pixel size, using suitable exposure lengths and, of course, a good processing workflow).
  15. Just for those interested, I made an image / diagram / graph (not sure what to call it) that displays the approximate surface brightness of M51 in magnitudes - it might be useful for anyone trying to figure out the exposure time needed to achieve good SNR. The data is luminance (LPS P2 filter), roughly calibrated to give magnitudes (it might be off by a bit - I was not overly scientific about it). It was calibrated on a single mag 12.9 star, with a median filter used to smooth things out. Here it is: Each color is a single "step", so the cores are at mag 17 or brighter and the background is at mag 27 or darker. Again, this is a rough guide.
  16. Well, you have quite a selection to choose from. I would personally go for the M/N, but the 115mm APO is also an option for wide field. You would need 9 panels to cover M31, for example. It would seem that taking 9 panels takes up too much time compared to a single panel, but in fact you will get almost the same SNR in the same time as using a smaller scope that covers the whole FOV in a single go (provided that smaller scope is also F/5.25). I'll explain why in a minute.
The first thing to understand is sampling rate. I've seen that you expressed concerns about going to 2.29"/px. The fact is - when you are after a wide field, that is really the only sensible option: go for a low sampling rate (unless you have very specific optics - fast and sharp - and only then can you do high resolution wide field). Take for example the scope you were looking at - 73mm aperture. It will have an airy disk of about 3.52 arc seconds - the aperture alone is not enough to resolve fine detail; add atmosphere and guiding and you can't really sample below 2"/px. I mean, you can, but there is no point. Another way to look at it: you want something like at least 3-4 degrees of FOV. That is 4*60*60 = 14400 arc seconds of FOV in width. Most cameras don't have that many pixels across. The ASI071 is a 4944 x 3284 camera, meaning you have only about 5000 pixels in width. Divide the two and you get the resolution it can achieve on a wide field covering 4 degrees: 14400/5000 = 2.88"/px. So even that camera can't sample finer if you are after a wide field (not to mention that OSC cameras in reality sample at half the rate of mono). Don't be afraid of blocky stars - that sort of thing does not happen, and with proper processing you will have a nice image even if you sample at very low resolution.
Now a bit about the speed of taking panels vs a single FOV. Take the above M31 and 9 panels example. In order to shoot 9 panels you will spend 1/9 of the time on each panel. That means x9 fewer subs per panel than you would get doing a single FOV with a small scope, which in turn means the SNR per panel will be x3 lower - if you use the same scope. But you will not be using the same scope. Imagine the small scope that covers the same FOV in a single go - it needs a 3 times shorter focal length to do that, so it will be a 333mm FL scope. We said we need to match the F/ratio of the two scopes, so you are looking at an F/5.25 333mm scope. What aperture will it have? 333/5.25 = ~63.5mm. Let's compare the light gathering surface of the two scopes - 190mm vs 63.5mm, with their respective surfaces in the ratio 190^2 : 63.5^2 = ~9. So the large scope gathers 9 times more light, which means it will have x3 better SNR - and that cancels with the reduced time spent on each panel. You get roughly the same SNR per panel as you would for the whole FOV, so you end up with the same result doing a mosaic with the larger scope in one night as you would with a small scope of the same F/ratio covering the same FOV in one night. (A worked version of this arithmetic is sketched below.)
There are some challenges when doing mosaic imaging - you need to point your scope at a particular place and allow a small overlap to be able to stitch the mosaic in the end (capture software like SGP offers a mosaic assistant, and EQMOD also has a small utility program to help you make mosaics). You need to be able to stitch your mosaic properly - APP can do that automatically I believe, not sure about PI, but there are other options out there as well (even free - there is a plugin for ImageJ). You might have more issues with gradients if shooting in strong LP because their orientation might not match between panels - but that can be dealt with as well. Unless you really want a small scope, you don't need one to get wide FOV shots - you already have the equipment for that; you just need to adopt a certain workflow.
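Here is the same back-of-the-envelope calculation written out in Python, assuming a 190mm F/5.25 scope of about 1000mm focal length and a hypothetical small scope of the same F/ratio covering the full FOV in one panel:

```python
# Mosaic-vs-single-FOV argument in numbers. Signal scales with aperture area,
# shot noise with its square root, so relative SNR ~ sqrt(area * time).
import math

panels = 9
big_aperture = 190.0                       # mm
small_fl = 1000.0 / 3                      # same FOV in one panel -> 1/3 the FL
small_aperture = small_fl / 5.25           # ~63.5 mm at the same F/ratio

area_ratio = (big_aperture / small_aperture) ** 2      # ~9x more light
time_per_panel = 1.0 / panels                           # fraction of the night

snr_big_per_panel = math.sqrt(area_ratio * time_per_panel)
snr_small_full_fov = math.sqrt(1.0 * 1.0)

print(round(area_ratio), round(snr_big_per_panel, 2), snr_small_full_fov)
# -> 9, ~1.0, 1.0 : each mosaic panel ends up with roughly the same SNR
```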
  17. Out of interest, the focuser draw tube is said to be 1.6" - but there is a thread at the end of it that is a bit larger - could it be a T2 thread? (1.6" = 40.6mm - add a mm and a bit and you have the 42mm that is T2). Maybe you could use some FF/FR that has a T2 connection instead?
  18. Astro cameras use the same sensors as DSLR cameras - either CMOS or CCD (of course the actual sensors will differ a bit depending on the camera model, but they are in principle the same). It is the other features that distinguish astro cameras from DSLRs. Lack of filters: a DSLR has an IR/UV cut filter that needs to be removed or replaced to get the most out of it (so called astro modding). Astro cameras also don't have an anti aliasing filter on them - some DSLRs do. The most significant feature is set point cooling (not all astro cameras have it), which enables precise calibration to be carried out on your data. Cooling as such is not as important for calibration - it does help with thermal noise, but the ability to always have the sensor at a certain temperature is the key to good calibration, so that is the main difference.
  19. @alan potts What scopes do you already have to image with? A wider FOV is easily achieved by doing mosaics, so you don't really need to spend money on a new scope if you have one that you are pleased with but which gives a narrower field of view than you would like. It is just a matter of proper acquisition and processing of such data, and although people think that doing mosaics is a slower process than going with a wider field scope - it is not necessarily so. If you already have a fast scope (fast as in having a fast F/ratio), then doing mosaics is going to be only marginally "slower" than using a scope of the same F/ratio capable of a wider field with the same sensor (the difference being only the overlap needed to properly align and stitch the mosaic image).
  20. Background and nebula rendition from the DSS version and color saturation from the PI version - that would be a good combo, I believe.
  21. Ah, I see - lack of software support. Maybe the easiest way to do it (although not the best - I think it's better to bin individual subs after calibration and prior to stacking) would be to download the ImageJ software (free / open source, written in Java, so it's available for multiple platforms) and, once you finish stacking the image in DSS, save it as 32bit FITS format (Gimp works with the FITS format). Then open it in ImageJ and use the Image / Transform / Bin menu options. Select the average method and x2 or x3 (depending on how much you want to bin) - after that save as FITS and proceed to process it in Gimp (or Photoshop).
  22. It is indeed possible to do it, but I don't think anyone has done it. It would involve deconvolution (already implemented in existing tools) with a deconvolution kernel that depends on position in the image. Although in principle one knows the level of coma in a newtonian scope with a parabolic primary, in practice things are not that easy. The level of coma depends on the collimation of the scope and the position of the sensor. If the sensor is slightly shifted with respect to the optical axis (not tilted, but shifted - so that the optical axis does not go through the exact center of the sensor), the aberrations will not be symmetric with respect to the sensor. One can account for that by examining stars in the image and determining the true optical axis / sensor intersection. One can also generate the coma blur PSF for a given distance from the optical axis (for a perfectly collimated scope), so yes, it can be done. The downside is that it is an inherently probabilistic process because the data you have suffers from noise - you are estimating rather than precisely calculating, because you don't have exact numbers to start with but rather values polluted by noise. Another difficulty is that it is better to do it on a stack of data rather than a single sub (better SNR), but a stack of subs will have different levels of coma if you dither or otherwise have less than perfect alignment - like slow field rotation / drift / whatever - because that changes the distance of pixels from the optical axis. The result will be a corrected image but with lower SNR - very similar to what you get when sharpening: a sharper but noisier image. This applies to any sort of optical aberration - as long as you have a proper mathematical description of it (like astigmatism depending on distance from the optical axis, or even field curvature) it can be done with deconvolution, at the expense of SNR. It is much easier to correct it with optical elements (less costly), whether that is a coma corrector, a field flattener or whatever ... Just to add - the above method would deal with coma blur both on stars and on extended features, because it operates on a precise mathematical definition of the coma blur rather than an approximation from neural networks such as StarNet++.
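To make the idea concrete, here is a toy Python sketch of position-dependent deconvolution - a simple radial-zone approach with a made-up elongated Gaussian standing in for the real coma PSF. It is not a working coma corrector, just an illustration of the principle; it assumes scikit-image is installed and the image is a luminance frame normalised to 0-1:

```python
# Toy sketch: deconvolve with a kernel that changes with distance from the
# optical axis, using a few discrete radial zones. The "coma" PSF here is only
# an elongated Gaussian that grows with radius - an assumption for illustration.
import numpy as np
from skimage.restoration import richardson_lucy

def toy_coma_psf(size, elongation):
    """Elongated Gaussian as a stand-in for a real coma PSF."""
    y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    psf = np.exp(-(x**2 / (2.0 * elongation**2) + y**2 / 2.0))
    return psf / psf.sum()

def zoned_deconv(img, centre, n_zones=4, iters=15):
    """Deconvolve each radial zone with a PSF matched to its radius."""
    yy, xx = np.indices(img.shape)
    r = np.hypot(yy - centre[0], xx - centre[1])
    edges = np.linspace(0.0, r.max() + 1e-6, n_zones + 1)
    out = np.zeros_like(img)
    for i in range(n_zones):
        elong = 1.0 + 1.5 * i              # blur grows towards the edge
        deconv = richardson_lucy(img, toy_coma_psf(15, elong), iters)
        mask = (r >= edges[i]) & (r < edges[i + 1])
        out[mask] = deconv[mask]
    return out

# usage: corrected = zoned_deconv(stacked_luminance, centre=(height/2, width/2))
```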
  23. Don't use on-chip binning - use software binning instead: no saturation issues, and it gives you more control without losing anything (except for a larger sub size off the camera). In fact, you can decide whether to bin at the pre-processing stage, so you can try processing the image above with x2 and x3 bin in software and see if you like the results better.