Everything posted by vlaiv

  1. Central obstruction is, by convention, stated as a percentage of aperture diameter rather than surface area. It matters far more for visual use at high power than for anything else (it affects the modulation transfer function). In the case of the RASA, a purely photographic instrument, it is a rather unimportant parameter. It can be helpful to know the central obstruction when doing some calculations, but as you pointed out, even a moderate change in central obstruction by diameter turns into a rather small change in effective aperture. Just for comparison, weather conditions on a given night will have a larger impact on light gathering than the central obstruction: a transparency difference of only 0.1 AOD (for example between 0.2 and 0.3) equals a loss of about 10% of the light, which is roughly what a 31% central obstruction (by diameter) costs compared to an unobstructed scope - see the quick calculation below. Whether the F/stop changes depends on how you use it. The F/ratio itself is not changed, so whenever you use F/ratio as such (to relate focal length and aperture, to define the beam angle, and so on) it stays F/2. F/stop, on the other hand, has rather limited use with telescopes - it sort of represents the "speed" of the instrument, but it has no real meaning until you pair the scope with a particular sensor, and it is usually counterproductive to think about the speed of a telescope on its own in terms of time needed to reach a certain SNR. Btw, the same logic you applied to the central obstruction to get F/2.2 instead of F/2 can be applied to other aspects of the instrument. In the case of the RASA you should also account for the transmission of the corrector plate and the corrector in front of the sensor (about 99.5% per air/glass surface - two for the corrector plate and about six for the corrector if it is a 3-element piece) and the reflectivity of the primary mirror, which is about 96-97%.
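To make the figures above concrete, here is a minimal sketch of the arithmetic, assuming the obstruction is given as a fraction of the diameter and that the 0.1 AOD difference acts at roughly airmass 1 (both are my assumptions for illustration):

```python
import math

obstruction = 0.31                   # central obstruction, fraction of aperture diameter
area_blocked = obstruction ** 2      # fraction of collecting area lost
print(f"31% obstruction blocks {area_blocked:.1%} of the light")   # ~9.6%

delta_aod = 0.1                      # transparency difference, e.g. AOD 0.2 vs 0.3
loss_aod = 1 - math.exp(-delta_aod)  # extra extinction at ~airmass 1
print(f"0.1 AOD of extra extinction loses {loss_aod:.1%}")         # ~9.5%

# Effective clear-aperture diameter of the obstructed scope
effective = math.sqrt(1 - area_blocked)
print(f"Effective aperture: {effective:.2f} x full diameter")      # ~0.95
```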
  2. Not sure what you are asking (either in the original post or in this last one). If you want to know what can be improved in your image / workflow - please post details and relevant .fits files. Maybe I've missed it, but I have no idea what camera you used to get the above image, for example (is it a DSLR or a dedicated astro camera? Is it a cooled or non-cooled model, etc ...). If you are wondering about the dark "frame" on the right and bottom of this last image - my suspicion is that it is related to dark calibration, but I can't be sure until we examine the master dark (and get details about cooling and such).
  3. Having images would really help, I think. It's hard to imagine what could be wrong, because you mention that the DSLR image is ok while the 385 image is not, and behaves like there are optical aberrations in the system (both refractor and Mak) - field curvature or something else. The 385 sensor is much smaller than the DSLR sensor, so it should be much less sensitive to aberrations that are inherent to particular optics (the image is best in the center of a properly collimated scope). The only thing I can think of is that you are not used to seeing stars / image detail at that scale, and that you have the same level of aberrations (probably due to tilt, defocus or similar) on both cameras, but the 385, being much smaller, shows them larger (think crop factor here, although I don't really like that term - see the rough field-of-view comparison below). Whatever it is, I think we can sort it out, but like I said - the best thing would be to record an image of a star field / star (if you don't have any saved) and post it here so we get an idea of what we are dealing with.
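Just to illustrate the scale difference, a minimal sketch comparing the field covered by the two sensors - the sensor dimensions and the 1000 mm focal length are placeholder assumptions on my part, not figures from your setup:

```python
import math

def fov_deg(sensor_mm, focal_length_mm):
    """Field of view in degrees along one sensor dimension."""
    return 2 * math.degrees(math.atan(sensor_mm / (2 * focal_length_mm)))

focal_length = 1000.0   # mm, placeholder

for name, w, h in [("385-class sensor", 7.3, 4.1), ("APS-C DSLR", 22.3, 14.9)]:
    print(f"{name:16s} {fov_deg(w, focal_length):.2f} x {fov_deg(h, focal_length):.2f} deg")
# The small sensor frames only roughly the central third of the DSLR field in
# each direction, but you view it at the same display size, so flaws look bigger.
```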
  4. No, you should leave it off. There are cases when you want to use that lens - like when using the camera as an all-sky camera (without a telescope, to capture meteors or weather information), or when doing some fancy afocal setups - combining eyepiece and camera lens into an imaging system. But for general imaging - either planetary or DSO - you should leave it off.
  5. Would you consider EPs that have slightly different FL? How about 11mm - not much of a change in FL. If that is ok, then go for ES82 11mm because you like sharp and contrasty.
  6. Posting the resulting images can help determine the cause. Also, a slightly better description of each setup, or possibly a picture of it, would be good.
  7. In principle that article is correct, but I think it is worth mentioning that there are cases where our visual system behaves a bit differently - threshold cases. The article assumes that our vision is "linear", or rather that it behaves consistently (not linear in the true sense, as magnitudes are mentioned and our perception is logarithmic in nature), which is not really true for low light scenarios. It states that contrast cannot be changed by changing aperture / magnification, as both target brightness and LP are governed by aperture (exit pupil), but we know for a fact that every target (and observer) has a sweet spot "magnification" - in principle, in heavier LP you want to keep the exit pupil around 3-4mm, while under dark skies that figure is more in the 2-3mm range. It is true that the physical contrast remains the same, but our perception of that contrast changes because our visual response is non-linear.
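For reference, exit pupil is just aperture divided by magnification (equivalently, eyepiece focal length divided by focal ratio); a minimal sketch with placeholder scope figures, not ones from the article:

```python
aperture_mm = 200.0        # placeholder: 8" scope
focal_length_mm = 1200.0   # placeholder: F/6

for ep_fl in (32, 25, 17, 10, 6):                 # eyepiece focal lengths, mm
    magnification = focal_length_mm / ep_fl
    exit_pupil = aperture_mm / magnification      # same as ep_fl / focal ratio
    print(f"{ep_fl:2d} mm EP -> x{magnification:3.0f}, exit pupil {exit_pupil:.1f} mm")
```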
  8. Above is "already binned" although I was probably not clear or forgot to mention it explicitly. If you need three panels in width to cover target (because focal length of larger scope is three times that of smaller scope) then to get same resulting image in terms of pixel count and sampling rate (same FOV, same number of pixels per width and height and there fore same sampling rate - that is what we would consider same image) you need to bin x3 which raises SNR x3. In my above example I mentioned that 1/9 of exposure time is compensated by x9 light gathering surface - but that is only if one makes sure that sampling rate is the same. If you on the other hand do not compensate for sampling rate/resolution then "standard rule" applies - two scopes of same F/ratio will have same "speed". Small scope will gather whole field and larger scope with gather only 1/9 of the field in same time so there is clear benefit of using smaller scope for doing wide field as with large scope and mosaic - you will end up with very large image in terms of mega pixels - but it is going to take x9 more time to get same SNR (or you will have x3 lower SNR in same time - something you can recover by binning x3 and equating sampling rate).
  9. Although I have light eyes (bluish / gray) and am sensitive to bright light (at least that is my perception, it might be a subjective thing), I don't like using ND filters on the Moon. I do have a somewhat related question though: how about polarizing filters? I'm under the impression that a polarizing filter would cut some of the light (50% or so) but would cut scatter/glare more, making the image "cleaner", especially on reflectors (both from the secondary support and mirror scatter in general). Has anyone tried this and noticed a difference?
  10. I think you need to re-stack your data, but pay attention to removing the offending frames. It looks like there were some thin clouds at some point that messed up the image; here is the green channel (cleaned of gradients and stretched a bit): It clearly shows stacking artifacts and high level clouds. The stacking artifacts are there because of the high level clouds (lines on alignment seams - these are due to uneven background as the clouds shift between subs). The nebulosity is there, but it is about the same brightness as the cloud reflection from LP, or maybe even less, and that prevents you from showing it. Hopefully you will manage to find enough cloud-free subs to stack and the results should be better then.
  11. Sorry about that, I sometimes forget that people might not be acquainted with the meaning of certain abbreviations - I'll expand on them. SharpCap and FireCapture are the two pieces of software most commonly used to record planetary video for lucky imaging. Both are free (you can get the pro version of SharpCap if you wish to support the product, but at this stage I don't think it's necessary - the free version will do the job just fine).
Planetary cameras work in two different modes - 8bit mode and 12bit mode - which just specifies the number of bits used to record the data. You can record video in 8bit format and that has its advantages (less data and faster capture), but you need to adjust the gain setting accordingly for each camera model to exploit this mode properly. If you don't set gain properly you can get data truncation, and that is a bad thing. 12bit mode will be a bit slower, but it should work regardless of the gain - no truncation. This is an option in the capture software - you will see different capture modes offered - go with RAW16 (which is actually 12 bit rather than 16 because of the way this particular camera works).
ROI stands for Region Of Interest. With lucky imaging it is all about the speed of recording frames. If you for example select a 5ms exposure length, in theory you should be able to record 200 frames each second (200fps), as each one takes 5ms (1000ms / 5ms = 200 frames per second). There are certain technical limitations to how much data you can record, namely the speed of the camera/computer connection (a USB connection has a certain transfer speed - USB 2.0 is slower than USB 3.0, which is why USB 3.0 is recommended, but both camera and laptop have to support it; in your case you will be limited to USB 2.0 as your camera model operates on that standard) and the speed of the hard drive that stores the data. Back to region of interest: instead of writing each frame as a complete image, which contains a lot of pixels, you can select a small region of the sensor to be read out and recorded. That means less data to transfer and less data to store. Planets are small and usually cover only a very small central region of the sensor - a couple of hundred pixels across - and most of the image is just black background that you don't need; it's a waste to record, transfer and store that data. For that reason you can select a smaller output image size - just the central region, large enough to contain the complete planet - something like 320x200 or 640x480 instead of the full 1280x1024. If you look at the specs for the QHY5II series of cameras on this page: https://www.qhyccd.com/index.php?m=content&c=index&a=show&catid=133&id=8&cut=1 you will see that there is quite a bit of difference in achieved frame rate between full frame and 640x480 ROI, with the latter being much faster (see the rough data-rate sketch below). The only time you don't want to "shrink" the imaging area with ROI is when you are shooting the Moon - simply because it is large enough to fill the field of view and will often be larger than the sensor can cover (in that case you can do mosaics if you want to shoot the whole lunar disk - shooting separate parts of the Moon and then piecing them together into one final large image).
In any case - the number of captured frames is really important for lucky imaging, because you will end up throwing away most of them since they will be distorted by seeing. The more frames you have, the more chance you will have enough good frames to stack, and the better your image will be. The last part of the equation is the speed at which your laptop can record the video - it can also be a bottleneck. This is why I mentioned replacing your standard hard drive with a solid state drive - these are much faster storage devices. You might not need to do it, but if you want the best possible results, at some point you will want to move to a laptop with an SSD (along with some other upgrades that I'll mention at the end). SER vs AVI - that is just the format for storing the movie. The SER file format lets you record at a higher bit count (the above-mentioned 12bit, or in any case more than 8 bits, since the M model seems to work in 10bit format unlike other models of the QHY5II line - but all the same, go with the highest number of bits available) and it is a simpler file format to handle - a sort of standard for planetary imaging. In the end I would like to say that the 72ED is probably the worst scope for this purpose (sorry about that - I do believe it is a fine scope, just not well suited to planetary work). With planetary imaging it is aperture that matters, as resolved detail is related to aperture size, so your planets will be tiny with that scope. It is quite ok to start planetary imaging with such a scope to get the hang of it and learn the capture and processing parts, but if you are really interested in planetary imaging you will want a bigger scope soon. It does not need to be an expensive scope - something like an F/8 150mm newtonian is a really nice and cheap planetary imaging scope. With planetary imaging, unlike long exposure DSO imaging, you don't need a very stable mount. As long as it can carry the scope and track with decent precision, it will do. The exposures involved are so short that there is simply no way there will be any blur due to the mount not tracking properly. For example, I did this image of Jupiter on an EQ2 mount with a simple DC RA tracking motor (one where you need to set the proper speed with a potentiometer) and a 130mm newtonian scope: (it was also taken with a QHY5II - but the L model, and it was a color camera)
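As a rough illustration of why ROI and storage speed matter, a minimal sketch of the data rates involved - the numbers assume 8-bit mono frames and a 5 ms exposure, and are not QHY specifications:

```python
def data_rate_mb_s(width, height, bytes_per_px, fps):
    """Sustained data rate in MB/s for a given frame size and frame rate."""
    return width * height * bytes_per_px * fps / 1e6

exposure_ms = 5.0
max_fps = 1000.0 / exposure_ms                    # 200 fps if nothing else limits you

print(data_rate_mb_s(1280, 1024, 1, max_fps))     # full frame: ~262 MB/s
print(data_rate_mb_s(640, 480, 1, max_fps))       # 640x480 ROI: ~61 MB/s

# USB 2.0 manages roughly 35-40 MB/s in practice, so the link (and the hard
# drive) caps the real frame rate - a smaller ROI raises that cap considerably.
```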
  12. Not sure what is going on exactly, but here is the "theory" in a nutshell - it might help you figure it out. There are three different configurations you can use to image with your scope:
- prime focus (just the camera sensor at the focal plane of the telescope)
- EP projection (an eyepiece between sensor and telescope)
- afocal imaging (both an eyepiece and a camera lens between telescope and sensor).
The first one I presume you understand. The third one is like using the telescope for visual observation, with the difference that the camera lens acts like the eye lens and the camera sensor acts like the retina. In that configuration the beam exiting the eyepiece is collimated (parallel) and it is the eye/camera lens that does the focusing. That gives the regular EP focus position. If you are using EP projection, the eyepiece acts as a simple lens and you no longer want the light to exit as parallel rays, as that would give a blurry image on the sensor - you want the EP to focus the light onto the sensor. This can be achieved in two different ways: the EP can act as a focal reducer, or it can act as a "regular" re-imaging lens. Here are simple ray diagrams to help you understand: the upper diagram shows the EP acting as a focal reducer, while the lower diagram shows the EP acting as a re-imaging lens. If we take the regular EP focus position as the "baseline", the two cases can be summarized as:
- For the focal reducer case, you need the sensor to be closer than the focal length of the eyepiece (so if you use a 32mm EP, the sensor needs to be less than 32mm away from it, or "inside" its focal point). This configuration also moves the focus position inwards with respect to the baseline - it acts as a regular focal reducer, reducing the size of the image, with the reduction depending on the sensor-EP distance.
- For regular EP projection (the bottom diagram), you want the sensor to be further from the EP than the EP's focal length. This configuration moves the focus point further away from the telescope (outward focuser travel compared to baseline) and gives different magnifications depending on where you put the sensor. If you put the sensor at twice the focal length you get 1:1 - no change in scale - and it also means you need one focal length of outward focuser travel.
Depending on what you want to achieve, you probably want the second scenario; the first one is rather difficult and in general you don't have enough inward travel for it. You can use an online calculator for the distances and focus travel needed - like this one: http://www.wilmslowastro.com/software/formulae.htm#EPP It will give approximate results (good enough for orientation). For example, using a 25mm eyepiece on your scope and placing the sensor 80mm from it gives a 2200mm effective focal length. You can also use the lens formula to calculate the outward focus needed: 1/object + 1/image = 1/focal_length, so 1/object = 1/focal_length - 1/image = 1/25 - 1/80 = 0.0275. So the object distance is 36.4mm, and since the regular FL of the eyepiece is 25mm, the difference is 11.4mm - that is how much outward focus you need in the above case (a short worked version of this is below). Hope this helps.
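A minimal worked version of that calculation - the 1000 mm native focal length is my assumption just to reproduce the 2200 mm figure, so substitute your own scope's focal length:

```python
# Eyepiece projection via the thin-lens formula: 1/object + 1/image = 1/f_ep
ep_fl = 25.0          # eyepiece focal length, mm
image_dist = 80.0     # eyepiece-to-sensor distance, mm
scope_fl = 1000.0     # assumed native focal length of the scope, mm

object_dist = 1.0 / (1.0 / ep_fl - 1.0 / image_dist)   # ~36.4 mm
magnification = image_dist / object_dist                # ~2.2x
effective_fl = scope_fl * magnification                 # ~2200 mm
extra_out_travel = object_dist - ep_fl                  # ~11.4 mm outward focus needed

print(f"object distance {object_dist:.1f} mm, projection magnification x{magnification:.2f}")
print(f"effective focal length {effective_fl:.0f} mm, extra outward travel {extra_out_travel:.1f} mm")
```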
  13. Not sure which tutorial to recommend, but I can make a quick list of things to try out while developing your own workflow for planetary imaging. Which model of QHY5-II do you have (there are L, M and P if I recall correctly - they differ in the sensor used)?
- Use a x2.5 to x3 barlow with that scope and camera.
- Use an IR/UV cut filter with that scope (or maybe a narrowband filter for the Moon).
- Capture video using SharpCap or FireCapture in 12bit mode, using ROI to frame the planet (unless doing lunar).
- Make sure your laptop has a fast enough HDD (best if it is an SSD with enough speed). Use a USB 3.0 port if your camera is USB 3.0 (but I think the QHY5-II is only USB 2.0, right?).
- Use the SER file format to capture your movie (not AVI).
- Keep exposure length short - about 5-10ms - and capture as many subs as you can for about 3-4 minutes if doing planets; for the Moon you can go longer than that.
- Use higher gain settings. If you start saturating (very unlikely unless you are imaging the Moon) - drop the exposure length.
- Capture at least a dark movie as well (same settings as the regular movie except you cover your scope to block all light). If you have a flat panel, capture flat and flat dark movies too. Aim for at least 200-300 subs in the dark, flat and flat dark movies.
- Use PIPP to calibrate and prepare your movie - basic calibration, stabilize the planet, etc. - and again export as SER.
- Use AutoStakkert! 3.0 (or whichever version is latest now) to stack your movie - save as at least a 16bit image (32bit if possible).
- Use Registax 6 to do wavelet sharpening, and do the final touch-up in Gimp or PS or whatever image manipulation software you prefer.
  14. For the ASI1600 and its histogram, you really need to examine the resulting 16bit fits to see if there is something wrong with it. It will also depend on the gain and offset settings that you used. It is best not to mess with those too much - leave gain at unity (139) and offset around 50-60 (I personally use 64). The 12bit range that the ASI1600 operates in is not as large as the 14 or 16 bits of some DSLRs and most CCD cameras, but it is still quite a large range. You can't expect the histogram to look the way you are used to in daytime photography or when you open a regular image in Photoshop; that does not mean it is bad. You also need to understand that a sub from the ASI1600 is going to look rather dark if it is not scaled in intensity or stretched. That is quite normal and does not mean the camera is bad or malfunctioning. For reference, here is a single sub from my ASI1600, its histogram, and what was really captured, for comparison: There seems to be almost nothing in that sub (it's a single calibrated Ha sub, 4 minutes long). The histogram also looks very poor - almost "flattened" to the left: But in reality that histogram is fine; if we "zoom in" on the important section of it, you will see that it looks rather good: It has a nice bell shape, with the right side a bit more extended and "thicker" - meaning there is some signal in the image. Don't be confused by the negative sign on the left of this histogram - it is a calibrated sub, so the dark frame has been subtracted. The signal in the image, when properly stretched, is this: As you see, there is plenty of detail in a single sub, although it will not show without stretching. Here is one sub from the same set, but this one is still in 16bit mode and not calibrated: You can see that the image looks noisier and there is amp glow to the side (all of which calibrates out), and the histogram is "bar like" - that is because the image is still in 16bit mode and uncalibrated (unlike the one above, which is in 32bit mode and calibrated). The moral of this is: don't judge camera output and quality by what your capture application shows unless you know what to look for. You need to examine what subs from your camera look like when you stretch and calibrate them to see if there is something wrong or if they show enough detail - the display in the capture application can be misleading (a quick way to do that check is sketched below).
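If you want to do that check yourself outside the capture application, here is a minimal sketch that loads a sub, applies a screen stretch and shows a zoomed histogram - it assumes numpy, matplotlib and astropy are installed, and "sub.fits" is a placeholder filename:

```python
import numpy as np
import matplotlib.pyplot as plt
from astropy.io import fits

data = fits.getdata("sub.fits").astype(np.float32)     # placeholder file name

# Simple asinh screen stretch, for inspection only (not for final processing)
lo, hi = np.percentile(data, [1.0, 99.9])
stretched = np.arcsinh((np.clip(data, lo, hi) - lo) / (hi - lo) * 100) / np.arcsinh(100)

fig, (ax_img, ax_hist) = plt.subplots(1, 2, figsize=(10, 4))
ax_img.imshow(stretched, cmap="gray", origin="lower")
ax_img.set_title("stretched sub")
ax_hist.hist(data.ravel(), bins=256, range=(lo, hi))    # "zoomed in" histogram
ax_hist.set_title("histogram (zoomed)")
plt.show()
```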
  15. The screen capture suggests that you are using the ASI1600 in 8bit mode, and that is not going to produce good results. Switch to 16bit mode and examine the image stretched to see the captured detail.
  16. Really remarkable images (both the scaled-down version and the full size). Hope you don't mind me pointing out the following, but in my view these are the things robbing your work of perfection - if it were not for these small details, I would consider the above images perfection in AP. You are slightly oversampled: at 100% the large image does not render stars as pin points, there is a softness to them, which means slight oversampling. There is also the issue of "fat" diffraction spikes, which means the atmosphere was not playing ball and there is no detail to justify such resolution. Going with a lower sampling rate would improve the SNR further - which is great already; this is probably the best rendition of this galaxy that I've seen (not talking about M31, but this little fella): This is the first time I've clearly seen the bar in this galaxy and how it's twisted. Going with a lower pixel scale would give you additional SNR and a smoother background while everything would still be visible in the image. The second thing is the edge correction of your setup. It is a fast newtonian astrograph and is bound to suffer some edge softness on larger sensors, but in this case it hurts the mosaic because the overlaps can easily be seen, like this: Also, not sure what software you used, but the stitching is not quite perfect, as this part shows: And the third is the blown cores of M31 / M32. I'm aware that I might be considered too harsh with my comments since you have produced some splendid images, but I do think the above things can be easily rectified (add a few filler exposures for the cores, be careful about the stitching / blending, and bin your data in software) and then you will be closer to perfection in your work.
  17. I'm sure that you can pull it out a bit better and also - flat correction will not hurt the image either
  18. On the other hand, for the price of those four SVBony eyepieces you can almost get 4 BST Starguiders, which are known to work very well. FLO offers a 15% discount on 4 or more EPs purchased, and the regular price is £47 per EP, so if you purchase 4 of them (5, 8, 12, 15, 18 and 25mm focal lengths are available, each with 16mm eye relief and 60 degrees AFOV, and they will work fine in an F/6 scope) it will cost you only 20 quid more than the offer you linked to.
  19. To get a decent amount of nebulosity around M45 you need to expose for at least a couple of hours in total, so 20-40 subs of 30 seconds to one minute is just not going to be enough. The second important thing is processing, of course - you need to gain more skill in processing to be able to render faint nebulosity properly. For example, I'm certain that the first image in the thread - that of M45 - can reveal much more nebulosity than it now shows. A quick manipulation of the attached JPEG gets this: As you can see, there is nebulosity there even after conversion to 8bit and jpeg. I'm sure that in 32bit format it can be rendered much better.
  20. Btw, that line is branded differently by different sellers - have a look at these examples: https://www.rothervalleyoptics.co.uk/rvo-68-wa-eyepieces-125.html (this seems to be an exact match to the SVBony ones you found on eBay, with a matching price) https://www.teleskop-express.de/shop/product_info.php/info/p4923_TS-Optics-Ultra-Wide-Angle-Eyepiece-6-mm-1-25----66--field-of-view.html https://www.telescope.com/6mm-Orion-Expanse-Telescope-Eyepiece/p/8920.uts https://agenaastro.com/agena-6mm-enhanced-wide-angle-ewa-eyepiece.html And I'm sure the list does not end there ...
  21. I have no idea what they are like as I've not used them, but they do look conspicuously like the well-known "gold line" range of eyepieces. Maybe have a read about that line of EPs to get an idea of their performance. SVBony states 17mm eye relief for the whole range, whereas the gold line EPs vary in ER from 14.8 to 15mm, and the gold line is quoted at 66 degrees vs 68 for the SVBony, but the EP sizes and range of magnifications seem to match those of the gold line.
  22. I found similar graphs myself, but have no idea how to read them, or rather what the meaning of the DN units is (or, for that matter, log2(DN), though I suspect that is the number of bits needed to represent the DN value - just a log base 2 of the number).
  23. Unfortunately I can't seem to find read noise values for the 80D expressed in electrons at different ISO settings (another advantage of astro cameras - you get those specs, or you can measure them), but regardless of that I would use longer exposures, since you are vastly oversampling with the 8" SCT. There are hints online of it being "ISO-less", so we can assume the read noise is pretty low. It also means you can use something like the ISO 200-400 range to get more full well capacity. So the first recommendation is: go as long as you can in exposure length - at least a couple of minutes. The second recommendation is to try proper calibration, regardless of the fact that you don't have set-point cooling. For your next session, gather all the calibration subs to try it out (you can still skip some steps and do a different calibration by just omitting certain files from your workflow):
- Darks at a temperature close to the one you worked at during the night - maybe take 10-20 darks before you start your lights and another 10-20 after you finish, or do it on a cloudy night when the temperature is close to that of your light subs. Try to get as many dark subs as possible (at least 20-30, more if you can).
- A set of bias subs - again, gather as many as you can.
- A set of flats (you need to take these on the night of imaging if you don't have an obsy and need to disassemble your rig at the end), and
- a set of matching flat darks.
I don't know what software you are using, but do a regular average for bias, flats and flat darks, and use sigma reject stacking for the darks. Also use dark optimization (there is a checkbox in DSS if you are using that for stacking). If you find artifacts like vertical / horizontal streaks or similar in the background of your final image, that means dark optimization failed for your sensor - then try what most people do with DSLRs: use bias instead of darks (and flat darks). The next thing to do is use super pixel mode when debayering. Again, that is not the best way to do things (the best way would be very complicated in terms of software support), so we settle for second best. Super pixel mode just means the R, G and B channel images are made such that each 2x2 block of the bayer matrix results in a single pixel in each channel: it uses the one R pixel from the 2x2 block for the R channel image, the one B pixel for the B channel image, and it averages the two green pixels for the G channel image. The resulting R, G and B images have half the resolution of your sensor - in this case 3144 x 2028 instead of 6288 x 4056. It also means these R, G and B images are no longer sampled at 0.38"/px but at 0.76"/px (which is the actual sampling rate of a color sensor). In DSS there is again an option for this in the RAW settings. Now stack your image and save the result as a fits file. The next step is to bin that image x2 to get your sampling rate to 1.52"/px. For that you will need ImageJ (it is free and written in Java, so it runs on almost any OS). Open your fits file (for each channel, or if it is a multi-channel image it will open as a stack) and run the Image/Transform/Bin menu command. Select 2x2 and the average method. Do this on each channel, or once on the whole stack. After that you can save the resulting image as fits again (or, in case it was opened as a stack, use Save As -> Image Sequence, select the fits format and other options, and it will write individual channel images that you can combine back in the photo editing app of your choice).
In case you are using PixInsight, all of the above options are also available to you (at least I'm fairly sure they are - I don't use it personally), and if you ever want to script it yourself, a small sketch of the super pixel split and the extra bin is included below. Btw, the resulting image will be halved in height and width once more after the bin, so the final resolution will be 1572 x 1014 (or rather close to 1500 x 1000 if you account for slight cropping due to dither between frames). Yes, almost forgot - do dither between subs; that will improve your final SNR quite a bit.
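A minimal numpy sketch of those two steps - the super pixel split and the 2x2 software bin - assuming an RGGB bayer pattern (you would need to confirm the actual pattern for your camera):

```python
import numpy as np

def superpixel_debayer(raw):
    """Split an RGGB bayer frame into R, G, B images at half resolution."""
    r  = raw[0::2, 0::2].astype(np.float32)
    g1 = raw[0::2, 1::2].astype(np.float32)
    g2 = raw[1::2, 0::2].astype(np.float32)
    b  = raw[1::2, 1::2].astype(np.float32)
    return r, (g1 + g2) / 2.0, b            # one R, averaged G, one B per 2x2 block

def bin2x2(img):
    """Average 2x2 blocks of pixels (software binning, 'average' method)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Example with a dummy 6288 x 4056 frame: channels come out at 3144 x 2028,
# and after the extra bin at 1572 x 1014 - matching the figures above.
raw = np.random.randint(0, 2**14, size=(4056, 6288)).astype(np.uint16)
r, g, b = superpixel_debayer(raw)
print(r.shape, bin2x2(r).shape)   # (2028, 3144) (1014, 1572)
```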