Everything posted by vlaiv

  1. Hi and welcome to SGL. Since you are doing a degree in the arts, I'm wondering what sort of noise you need (do you need to be very scientific about it, and what qualities are you looking for in the noise?). It would be far easier to use software that can generate different types of noise for you. The most obvious difference between real data and "synthetic" data is that one comes from a true random process, while the other comes from a pseudo random number generator (good enough for solid scientific work, so I presume not too poor for an arts project either). A sketch of the software route is below.
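For illustration, here is a minimal numpy sketch of the software route - a few common noise flavours (the parameters are arbitrary choices, not anything from the thread):

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # pseudo-random: fully reproducible from the seed

# 512x512 grayscale "noise images" of a few common flavours
gaussian = rng.normal(loc=0.5, scale=0.1, size=(512, 512))   # read-noise-like
poisson  = rng.poisson(lam=20, size=(512, 512)) / 40.0       # photon-shot-like
uniform  = rng.uniform(0.0, 1.0, size=(512, 512))            # flat white noise
```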
  2. Not really that tight - anything more than a bit tighter than just holding it in place can distort a mirror, for example. Remember, optical surface precision is measured in fractions of a wavelength of light - on the order of a hundred or so nanometers. You simply can't perceive bending on that scale by eye - and most items flex that much when you apply even slight pressure to them. Clips holding mirrors and lenses in position are in the perfect place to cause pinching if too much force is applied (again, too much force means anything more than what is barely needed to hold the optic in place). Btw, pinched optics can be a consequence of different thermal expansion coefficients. If two different materials are used to hold the lens in the cell at proper spacing and there is a significant temperature drop compared to the temperature when the scope was put together - the materials will contract by different amounts, and that can be enough pressure to twist the optics out of shape. A worked example follows.
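To put numbers on the thermal argument, here is a quick back-of-envelope sketch (the materials, spacer length and temperature drop are all assumed for illustration):

```python
# Differential thermal contraction of two materials holding a lens.
alpha_alu   = 23e-6   # 1/K, typical coefficient for aluminium (assumed)
alpha_steel = 12e-6   # 1/K, typical coefficient for steel (assumed)
length      = 0.05    # m, assumed 50 mm of cell/spacer
delta_t     = 20.0    # K, assumed temperature drop after assembly

mismatch = (alpha_alu - alpha_steel) * length * delta_t
print(f"differential contraction: {mismatch * 1e9:.0f} nm")  # ~11000 nm

# Compare with ~100 nm (a fraction of the wavelength of light):
# the mismatch dwarfs the precision required of the optical surface.
```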
  3. Sorry I did not get back to you sooner - somehow I missed the notification that you replied. Here are my findings so far: - The above subs are not going to be of much use because they were shot in 8-bit format. You need to shoot in 16-bit format - so choose RAW16 as the data format when taking subs. - From the FITS headers I see that you used different software to capture your subs. Darks and bias have the comment "COMMENT Generated by INDI", while light subs have this: "PROGRAM = 'Siril v0.9.11' / Software that created this HDU". The Siril data does not include the gain setting (darks and bias show that the gain used was 145). Calibration files must use exactly the same settings as the light subs - same gain, same offset. There might be an issue with offset, but I can't tell for sure because you used 8-bit data format. There is strong histogram clipping to the left in your bias/dark subs - this can be due to offset issues but also due to the 8-bit format. What offset value did you use (driver settings)? The bottom/right part of the image is definitely due to stacking artifacts - no subs contain anything that could be the cause of that, and there was enough drift over the light subs to cause that much stacking artifact - you should crop it out. Another tip: your tracking is rather poor. These are 20s subs, right? Look at the star shapes (I took one star, aligned it and made an animated gif): almost every frame has some level of distortion. Maybe try to improve the tracking / rigidity of your mount. A quick way to compare header settings between subs is sketched below.
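If you want to check these things yourself, here is a small astropy sketch. Note that keyword names like GAIN and OFFSET depend on the capture software/driver (the file names here are placeholders too), so treat them as assumptions and look at what your headers actually contain:

```python
from astropy.io import fits

# Compare the acquisition settings recorded in a light and a dark sub.
for path in ("light_001.fits", "dark_001.fits"):  # placeholder file names
    header = fits.getheader(path)
    print(path,
          "BITPIX:", header.get("BITPIX"),
          "GAIN:",   header.get("GAIN", "missing"),
          "OFFSET:", header.get("OFFSET", "missing"))
```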
  4. It looks like ZWO decided to go somewhere "in between" my two suggested regimes for their gain 0 setting, according to the specs on the ZWO website - and this is the chart for their ASI2600: if you look at the read noise line for ZWO and the one above for QHY - the blue line (mode #1) - you will see the similarity: both start around 3e (the ZWO one a bit more) and then fall off below 1.5e (around 1.45 or something like that). This means that ZWO is using mode #1 for their camera and they don't let the user change it to other modes (which is fine if you ask me, as mode #1 seems to be the best for AP applications). At gain 0, ZWO states that their e/ADU is 0.77 - but that is just their lowest gain setting. This corresponds to a gain setting of about 19 on the QHY model - just look at the graph for gain and the blue line - 0.77 e/ADU is at about gain 19. Now look at the FW part of the QHY graph - again the blue line at gain 19: same sensor - same values. It is just that ZWO opted to put gain 0 at an e/ADU value of 0.77, and at that point the sensor has a FW of about 50K - hence ZWO's claim that their model has a FW of about 50K. That figure follows directly from the ADC, as shown below.
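As a sanity check on that 50K figure, here is the arithmetic, assuming saturation is set by the 16-bit ADC rather than by the pixel itself:

```python
# Full well in electrons = e/ADU times the ADC range, if the ADC clips first.
e_per_adu = 0.77
adc_levels = 2 ** 16            # 65536 levels for a 16-bit ADC
full_well = e_per_adu * adc_levels
print(f"{full_well:.0f} e-")    # ~50463 e-, i.e. the quoted ~50K
```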
  5. I think that the above are decent results. For comparison with other models, I would consider the blue line to be the interesting one - that is mode #1 (not sure if there will be modes with other vendors - that looks like a QHY thing). It can operate in two distinct regimes, and selection of regime should be based on sky LP levels and how good your guiding / tracking is (exposure duration is the key). "Low gain" regime (for mode #1) at gain 0: ~60000e FW, ~3e read noise, 0.925 e/ADU (that is quite a bit of full well capacity and very decent read noise for such capacity - dynamic range is about 14.3 stops in this mode). "High gain" regime (again for mode #1) at gain 75: FW is ~16384e, read noise is 1.466e and e/ADU is 0.252 (or about 1/4 - which means that 14 out of 16 bits are used in practice). With these settings we have something like 13.45 stops of dynamic range (not bad at all). I think we can conclude that with these parameters we have an equal or better option with this sensor than other current OSC offerings. Size is of course important - and APS-C size is very nice. What remains to be seen is how well it calibrates. The dynamic range figures come from a simple ratio, sketched below.
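For anyone wondering where 14.3 and 13.45 come from, dynamic range in stops is just the base-2 log of full well over read noise:

```python
import math

def dynamic_range_stops(full_well_e, read_noise_e):
    """Dynamic range in stops (powers of two)."""
    return math.log2(full_well_e / read_noise_e)

print(f"{dynamic_range_stops(60000, 3.0):.2f}")    # ~14.29 (low gain regime)
print(f"{dynamic_range_stops(16384, 1.466):.2f}")  # ~13.45 (high gain regime)
```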
  6. Any sort of light leak on the scope needs to be dealt with regardless of whether darks are taken off the scope or not - because it will impact lights and flats as well (to some extent; it depends how strong the light leak is).
  7. Not really following your reasoning - a small non-cooled CMOS camera for DSO imaging on a small scope. Will you use that camera for anything else - like planetary imaging or EEVA? Why did you narrow the list down to the 178 and 224? Any particular reason? You have a 600D that has been modified, and if you add a scope instead of a camera - you will get pretty much the same thing but a quite a bit "faster" setup. For example: virtually the same FOV (just a tad wider) with RC6" + 600D vs ED80 + ASI178, the difference being of course 150mm aperture diameter vs 80mm aperture (almost a x4 light gathering increase). In any case, if you are set on either the 178 or the 224 - my vote goes to the 178, simply because it has a larger surface (the 224 would be my choice for a planetary role due to low read noise, and the 385 would be my choice for EEVA / planetary / DSO - due to size, sensitivity and read noise - however, maybe wait for the 533 camera that is about to hit the market, to see what sort of price the non-cooled version will have).
  8. There is some amp glow in your subs, but I don't think that all of the top left corner is due to amp glow (could be wrong though - we need a proper master dark to establish that). You have an image of what the dark should look like above, and here is another example (my master dark, heavily stretched): the amp glow pattern in the ASI1600 goes like this - two sections on the right side (top/bottom) and, somewhat weaker, a rather "undefined" shape following the edges on the left side. Here is your light frame heavily stretched to show its "features": I marked what could be a light leak, but it could also be some sort of gradient from LP. The dark sub matches the position of the bright spot, which reinforces the probability of it being a light leak, but it does not match the "shape" fully enough to rule out other explanations.
  9. Ah yes, I get it now - twice the diameter being related to two times the focal length (simple triangle similarity) rather than the magnification of the barlow itself. For anyone interested, here is why it works (provided that there is no vignetting in the barlow element - but even small vignetting will not hurt):
  10. Not sure how that works - can you explain why it works, and whether it works only for a x2 barlow, for example (how about a x2.5 barlow and the like)?
  11. You are welcome. There is a mathematical way to determine barlow magnification once you know the focal length of the barlow, but you are right, you can do it via trial and error all the same. Just shoot something that has a feature of known size (like the disk of a particular planet at a particular time, or a crater on the Moon) and then measure its size in pixels in your resulting image - the ratio of the real angular size of the feature to the measured pixel count will give you roughly the sampling rate. For the mathematical way, use this: magnification = 1 + distance / focal_length, where you need to know the focal length of the barlow lens. Better barlows usually have that info published. There is quite a bit to learn before you get to your award-winning image of the Moon, but I do encourage you to just start recording. Here are some tips for planetary imaging in general: and of course other threads offer good advice as well, so look up optimizing planetary viewing/imaging (not all of it is related to the gear used - ambient conditions have quite an impact as well) and how to acquire and process planetary images. Both approaches are sketched below.
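Here are both methods in a few lines of Python. The formula is the one from the post; the measurement numbers (Jupiter's disk size and its width in pixels) are made-up illustrations:

```python
def barlow_magnification(distance_mm, barlow_fl_mm):
    """M = 1 + d/f, with f the (positive) focal length of the barlow element."""
    return 1.0 + distance_mm / barlow_fl_mm

print(barlow_magnification(100.0, 100.0))  # 2.0: a "x2" barlow at its design spacing

# Empirical route: a feature of known angular size measured in pixels.
feature_arcsec = 45.0   # e.g. Jupiter's disk diameter at capture time (assumed)
feature_pixels = 600.0  # measured disk width in the image (assumed)
print(f'{feature_arcsec / feature_pixels:.3f} "/px')  # rough sampling rate
```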
  12. For best image quality you want to avoid using eyepiece projection and go for prime focus, as explained above. There are a number of ways you can achieve different "magnification", but first let's discuss why that is the wrong term in this context.

Magnification is the term that we use for visual applications - it describes how much something is magnified in the sense of: what would it look like to the naked eye if it were X times larger or closer. Technically it is a ratio of angular sizes. With imaging, things are different - we no longer have two angular sizes to compare (angular size with the naked eye versus angular size with telescope and eyepiece); we now have a different process - mapping angular size to pixels (or sampling points). That is called sampling resolution.

Given an image at a certain sampling resolution, you can still make things in the image appear small or large by displaying them at a different scale - like this: above is the same image (it is a Voyager 1 image of Jupiter in high resolution, so credit to NASA for that one) displayed at two different scales. What is the "magnification" of that image? Now do an "experiment" - stand really close to the monitor and observe these two images, then walk 3-4 meters away and look again, and compare the "magnifications" (they will look less magnified from 3-4 meters away, although we did nothing to them). The above was written just to show that "magnification" is a meaningless term in imaging - it belongs to visual use and should not be used when imaging. The proper term for imaging is pixel scale, or sampling resolution, and in astrophotography it is expressed in arc seconds per pixel ("/px for short).

Ok, now that we know what we are working with - let's see the proper answer to the question "how might one change magnification". It is really about two things - changing pixel scale and changing FOV. First you need to understand that there is something called the native sampling rate for a given camera and telescope. It depends on the telescope focal length and the size of the pixels on the camera chip. Native FOV depends on the telescope focal length and the size of the sensor. Since you can't physically change the number of pixels a sensor has, native sampling rate and native FOV are related in the same way that pixel size, sensor size and number of pixels are related (sensor size = number of pixels x pixel size). The native sampling rate is your "baseline" - the basic "magnification" that we can modify via different methods to obtain other "magnifications". It is calculated as 206.3 * pixel_size / focal_length.

I would like to mention one more important thing - the critical sampling rate. Due to the physics of light there is only so much detail that a telescope of a given aperture will show, and if you sample too finely (too high a sampling rate) you will just be "wasting pixels", simply because there is no finer detail to be recorded. In reality oversampling has both benefits and drawbacks, but that is another discussion. Once you match your sampling rate to the level of detail that the aperture can provide under ideal circumstances (it is not guaranteed that you will actually record that level of detail - it depends on the atmosphere and the quality of the optics) - we say you are at the critical sampling rate. There is simply no benefit in detail capture from going to "higher magnification" - or rather, a higher sampling rate.
Here is a guide formula for critical sampling - you want your focal length to be equal to or less than pixel * aperture * 3.857281 (this last number is 510nm, 2.4 and 1.22 combined into a single constant to make things easy). For example - if your camera has 3.75um pixels and you have 150mm of aperture (not sure if your Mak is 150mm, but let's say it is) - the max focal length is 2170mm - that is ~F/14.5. In fact you will find that the F/ratio for critical sampling depends only on the pixel size of the camera. (Both formulas are sketched in code below.)

How to change sampling resolution - to finally answer your question on magnification:
- Use a barlow lens. You can change the magnification of a barlow by changing the distance of the barlow element to the sensor. The more distance you add, the larger the magnification (finer sampling rate).
- Use binning - this process joins a few adjacent pixels into one "larger pixel". It will not change the FOV but will change the sampling rate.
- Use mosaics - shooting multiple panels and stitching them together. This technique is useful for a larger FOV - something you will want for the Moon, for example.

Most planetary imagers opt to sample at the critical sampling rate and make mosaics for lunar and solar work - the only two planetary targets that are not tiny (please make sure you have proper filters when trying solar imaging!).
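Both formulas from the post, in runnable form:

```python
def native_sampling_rate(pixel_um, focal_length_mm):
    """Arcseconds per pixel: 206.3 * pixel_size / focal_length."""
    return 206.3 * pixel_um / focal_length_mm

def max_focal_length(pixel_um, aperture_mm):
    """Longest focal length (mm) before oversampling: pixel * aperture * 3.857281."""
    return pixel_um * aperture_mm * 3.857281

fl = max_focal_length(3.75, 150)         # the example from the post
print(f"{fl:.0f} mm, F/{fl / 150:.1f}")  # ~2170 mm, ~F/14.5
print(f'{native_sampling_rate(3.75, fl):.3f} "/px')
```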
  13. There are a couple of issues here. One might be a light leak. It looks like the dark is suffering from it, and it looks like the flat dark is as well (there is a negative imprint of that pattern on the master flat): however, that is not the main issue with the darks - the master dark is wrongly created. If you followed some tutorial, there is a good chance that you missed a step or confused steps between the flat and dark parts. The range of values in the master dark is 0.1 - 0.2 ADU, and that simply cannot be right for a proper master - it looks like it has been scaled the way a master flat is when created. The same seems to be the case with the flat darks - such a small bright patch could not make an imprint on properly exposed flats, yet your master flat shows it clearly. The master flat is also scaled, and again - not scaled properly - it is in the range 0.28 - 0.64, and it should be scaled so that the max is around 1.0 (in principle it does not matter what range the flat is in, but if you are going to scale it, one would expect the scaling to be done to unity range - brightest part at 1.0, or 100% light). My recommendation would be to first redo everything that you already have, to make sure that you did not mess up the processing workflow. Start by creating the master dark - create it and post it here together with a single dark sub for inspection. A minimal version of that step is sketched below.
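For reference, here is a minimal master dark sketch, assuming your dark subs sit in a darks/ folder - the key point being that no scaling is applied:

```python
import numpy as np
from astropy.io import fits
from glob import glob

# Minimal master dark: median-combine raw dark subs with NO scaling applied.
# A proper master dark stays in the same ADU range as the individual subs.
darks = np.stack([fits.getdata(f).astype(np.float32)
                  for f in sorted(glob("darks/*.fits"))])  # assumed folder layout
master_dark = np.median(darks, axis=0)
fits.writeto("master_dark.fits", master_dark, overwrite=True)
```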
  14. Ah ok, yes, what you see is indeed very severe aberration, but only because you are using it wrong. Well, not wrong per se, but in this case you don't want to use it like that. You want to lose most of the bits between the focuser and the camera. In fact you don't want anything between focuser and camera, except maybe an adapter to attach the camera to the focuser. If I'm not mistaken, your focuser should have a male T2 thread on it and the camera a female T2 thread - so simply screwing the camera onto the focuser tube will be enough. The next option is to use the 1.25" camera nose piece shown in one of the images, by screwing it into the camera and then putting it in the focuser in place of a 1.25" accessory. This option is a bit better than the direct connection above because it lets you rotate the camera to align the FOV the way you want. Bottom line - you want your camera at prime focus - either straight or with a barlow (depending on scope and application) - and not in eyepiece projection mode. You might want to try the eyepiece projection thing at some point, but do it via the afocal method (using both the eyepiece and the small lens that came with your camera) - that way you could try EEVA on the Mak, for example - but that is another story.
  15. Central obstruction is, by convention, stated as a percentage of aperture diameter rather than surface area. It matters much more for visual telescope use at high power than for anything else (modulation transfer function). In the case of the RASA, a photographic instrument - it is a rather unimportant parameter. It might be helpful to know the central obstruction when doing some calculations, but as you pointed out, even a moderate change in central obstruction by diameter turns into a rather small change in effective aperture. Just for comparison, weather conditions on a given night will have a larger impact on light gathering than the central obstruction. A transparency difference of only 0.1 AOD (for example, between 0.2 and 0.3) equals a loss of 10% of the light - that is how much light loss a 31% central obstruction (by diameter) costs compared to an unobstructed scope. Whether the "F/stop" changes also depends on how you use it. The F/ratio is not changed, so whenever you use F/ratio to do something (like relating focal length to aperture, or defining the angle of the light cone, or whatever) - it stays F/2. F/stop, on the other hand, has rather limited use with telescopes - it sort of represents the "speed" of the instrument, but it really has no meaning until you pair the scope with a particular sensor. It is usually counterproductive to think about the speed of a telescope alone in terms of time needed to reach a certain SNR. Btw, the same logic that you applied to the central obstruction to get F/2.2 instead of F/2 can be applied to other aspects of the instrument. In the case of the RASA - you should account for the transmission of the corrector plate and of the corrector in front of the sensor (about 99.5% for each air/glass surface - two for the corrector plate, and about six for the corrector if it is a 3-element piece) and the reflectivity of the primary mirror, which is about 96-97%. A rough budget along those lines is sketched below.
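Putting those figures together (the 200mm aperture is assumed just to have concrete numbers; the coating figures are the ones quoted above):

```python
import math

# Rough throughput budget for a RASA-like F/2 system (illustrative figures).
d = 200.0                 # mm, assumed aperture for the example
co = 0.31                 # central obstruction as a fraction of diameter
t_surface = 0.995         # transmission per air/glass surface
n_surfaces = 2 + 6        # corrector plate (2) + 3-element corrector (6)
r_mirror = 0.965          # primary mirror reflectivity

throughput = (1 - co**2) * t_surface**n_surfaces * r_mirror
d_eff = d * math.sqrt(throughput)
print(f"throughput: {throughput:.1%}, effective aperture: {d_eff:.0f} mm")
print(f"effective F/{2.0 * d / d_eff:.2f}")  # ~F/2.2, matching the figure above
```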
  16. Not sure what you are asking (either in the original post or in this last one). If you want to know what can be improved in your image / workflow - please post details and the relevant .fits files. Maybe I've missed it, but I have no idea what camera you used to get the above image, for example (is it a DSLR or a dedicated astro camera? Is it a cooled or non-cooled model, etc.?). If you are wondering about the dark "frame" on the right and bottom of this last image - my suspicion is that it is related to dark calibration, but I can't be sure until we examine the master dark (and get details about cooling and such).
  17. Having images would really help, I think. It's hard to imagine what could be wrong, because you mention that the DSLR image is ok while the 385 image is not, and it behaves as if there are optical aberrations in the system (both refractor and Mak) - field curvature or something else. The 385 sensor is much smaller than the DSLR sensor and should be much less sensitive to aberrations that are inherent to particular optics (the image is best in the center of a properly collimated scope). The only thing I can think of is that you are not used to seeing stars / image detail at that scale, and that you have the same level of aberration (probably due to tilt, defocus or similar) with both cameras, but the 385, being much smaller, shows it larger (think crop factor here, although I don't really like to use that term). Whatever it is, I think we can sort it out, but like I said - the best thing would be to record an image of a star field / star (if you don't have any saved) and post it here so we get an idea of what we are dealing with.
  18. No, you should leave it off. There are cases when you want to use that lens - like when using the camera as an all-sky camera (without a telescope, to capture meteors or weather information), or when doing some fancy afocal things - combining an eyepiece and the camera lens into an imaging system. But for general imaging - either planetary or DSO - you should leave it off.
  19. Would you consider EPs with a slightly different FL? How about 11mm - not much of a change in FL. If that is ok, then go for the ES82 11mm, because you like sharp and contrasty.
  20. Posting the resulting images could help determine the cause. Also - a somewhat better description of each setup, or possibly a picture of it, would be good.
  21. In principle that article is correct, but I think it is worth mentioning that there are cases where our visual system behaves a bit differently - threshold cases. The article assumes one thing - that our vision is "linear", or rather that it behaves consistently (not linear in the true sense - magnitudes are mentioned, and our perception is logarithmic in nature) - which is not really true in low light scenarios. It states that contrast cannot be changed by changing aperture / magnification, as both target brightness and LP are governed by aperture (exit pupil), but we know for a fact that every target (and every observer) has a sweet spot "magnification" - in principle, under heavier LP you want to keep the exit pupil around 3-4mm, while under dark skies that figure is more in the 2-3mm range. It is true that the physical contrast remains the same, but our perception of that contrast changes because our visual response is non-linear.
  22. The above is "already binned", although I was probably not clear or forgot to mention it explicitly. If you need three panels in width to cover the target (because the focal length of the larger scope is three times that of the smaller scope), then to get the same resulting image in terms of pixel count and sampling rate (same FOV, same number of pixels per width and height and therefore same sampling rate - that is what we would consider the same image) you need to bin x3, which raises SNR x3. In my example above I mentioned that 1/9 of the exposure time is compensated by x9 light gathering surface - but that holds only if one makes sure the sampling rate is the same. If on the other hand you do not compensate for sampling rate/resolution, then the "standard rule" applies - two scopes of the same F/ratio have the same "speed". The small scope will capture the whole field while the larger scope captures only 1/9 of the field in the same time, so there is a clear benefit to using the smaller scope for wide field. With the large scope and a mosaic you will end up with a very large image in terms of megapixels - but it will take x9 more time to reach the same SNR (or you will get x3 lower SNR in the same time - something you can recover by binning x3 and equating the sampling rate, as the sketch below shows).
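Here is the "bin x3 raises SNR x3" claim demonstrated on synthetic data - averaging 3x3 = 9 pixels cuts random noise by sqrt(9) = 3:

```python
import numpy as np

def bin_image(img, factor):
    """Average-bin a 2D image by an integer factor (software binning)."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor  # crop to a multiple of the factor
    return img[:h, :w].reshape(h // factor, factor,
                               w // factor, factor).mean(axis=(1, 3))

# Flat frame with pure Gaussian noise of sigma = 9.
noisy = np.random.default_rng(0).normal(100.0, 9.0, size=(300, 300))
print(noisy.std(), bin_image(noisy, 3).std())  # ~9 vs ~3: noise down x3, SNR up x3
```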
  23. Although I have light eyes (bluish / gray) and am sensitive to bright light (at least that is my perception - it might be a subjective thing), I don't like using ND filters on the Moon. I do have a somewhat related question: how about polarizing filters? I'm under the impression that a polarizing filter would cut some of the light (like 50% or so) but would cut scatter/glare more, making the image "cleaner", especially on reflectors (both from the secondary support and from mirror scatter in general). Has anyone tried this and noticed a difference?
  24. I think you need to re-stack your data, but pay attention to removing the offending frames. It looks like there were some thin clouds at some point that messed up the image; here is the green channel (cleaned of gradients and stretched a bit): it clearly shows stacking artifacts and high-level clouds. The stacking artifacts are there because of the high-level clouds (lines on alignment seams - these are due to uneven background as the clouds shift between subs). The nebulosity is there, but it is about the same brightness as the cloud reflection from LP, or maybe even less, and that prevents you from showing it. Hopefully you will manage to find enough cloud-free subs to stack, and the results should be better then.
  25. Sorry about that - I sometimes forget that people might not be acquainted with the meaning of certain abbreviations. I'll expand on that.

SharpCap and FireCapture are the two pieces of software most commonly used to record planetary video for the purpose of lucky imaging. Both are free (you can get the pro version of SharpCap if you wish to support the product, but at this stage I don't think it's necessary - the free version will do the job just fine).

Planetary cameras work in two different modes - one is 8-bit mode and the other is 12-bit mode - that just specifies the number of bits used to record the data. You can record video in 8-bit format, and that has its own advantages (less data and faster capture), but you need to adjust the gain setting accordingly for each camera model to exploit this mode properly. If you don't set the gain properly you can get data truncation, and that is a bad thing. 12-bit mode will be a bit slower, but it should work regardless of the gain - there should be no truncation of the kind I mentioned. This is an option in the capture software - you will see different capture modes offered - go with RAW16 (which is actually 12 bits rather than 16, because of the way this particular camera works).

ROI stands for Region Of Interest. With lucky imaging it is all about the speed of recording frames. If you for example select a 5ms exposure length - in theory you should be able to record 200 frames each second (200fps), as each one takes 5ms (1000ms / 5ms = 200 frames per second). There are certain technical limitations on how much data you can record, namely the speed of the camera/computer connection (a USB connection, and a USB port can only transfer data at a certain speed - version 2.0 of the USB standard is slower than USB 3.0; this is why USB 3.0 is recommended, but both camera and laptop have to support it - in your case you will be limited to USB 2.0, as your camera model operates on that standard) and the speed of the hard drive that stores the data.

Back to region of interest - instead of writing each frame as a complete image, which contains a lot of pixels, you can select a small region of the sensor to be read out and recorded. This means less data to transfer and less data to store. Planets are small and usually cover only a very small central region of the sensor - something like a couple of hundred pixels across - and most of the image is in fact just black background that you don't need; it's a waste to record, transfer and store that data. For that reason you can select a smaller output image size - just the central region, large enough to contain the complete planet - something like 320x200 or 640x480 instead of the full 1280x1024 image size. If you look at the specs for the QHY5II series of cameras on this page: https://www.qhyccd.com/index.php?m=content&c=index&a=show&catid=133&id=8&cut=1 you will see that there is quite a bit of difference in achieved frame rate between full frame and 640x480 ROI, the latter being much faster. The only time you don't want to "shrink" the imaging area with ROI is when shooting images of the Moon - simply because it is large enough that it will usually fill the field of view, and often be larger than the sensor can cover (in that case you can do mosaics if you want to shoot the whole lunar disk - shooting separate parts of the Moon and then piecing them together into a final large image). The bandwidth arithmetic behind all this is sketched below.
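To see why ROI matters so much, here is the raw data-rate arithmetic (the ~35 MB/s figure for practical USB 2.0 throughput is an approximation):

```python
# Uncompressed data rate = width * height * bytes_per_pixel * fps.
def data_rate_mb_s(width, height, bytes_per_px, fps):
    return width * height * bytes_per_px * fps / 1e6

full = data_rate_mb_s(1280, 1024, 2, 200)  # full frame, 16-bit container, 200 fps
roi  = data_rate_mb_s(640, 480, 2, 200)    # 640x480 ROI at the same frame rate
print(f"full frame: {full:.0f} MB/s, ROI: {roi:.0f} MB/s")
# ~524 MB/s vs ~123 MB/s: neither fits through USB 2.0 (roughly 35 MB/s in
# practice), so the achievable fps drops - but far less so with a small ROI.
```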
In any case - the number of captured frames is really important for lucky imaging, because you will end up throwing away most of them, since they will be distorted by seeing. The more frames you have, the better the chance that you will have enough good frames to stack, and the better your image will be. The last part of the equation is the speed at which your laptop can record the video - it can also be a bottleneck. This is why I mentioned replacing your standard hard drive with a solid state drive - these are much faster storage devices. You might not need to do it, but if you want the best possible results, at some point you will want to move to a laptop with an SSD (along with some other upgrades that I'll mention at the end).

SER vs AVI - that is just the format for storing the movie. The SER file format lets you record at a higher bit count (the above-mentioned 12 bits, or anything more than 8 bits - the M model seems to work in 10-bit format unlike other models of the QHY5II line, but all the same, go with the highest number of bits available) and is a simpler file format to handle - a sort of standard for planetary imaging.

In the end I would like to say that the 72ED is probably the worst scope for this purpose (sorry about that - I do believe it is a fine scope, just not well suited to this purpose). With planetary imaging it is aperture that matters, as resolved detail is related to aperture size. Your planets will be tiny with that scope. It is quite ok to start planetary imaging with such a scope to get the hang of it and learn the capture and processing parts, but if you are really interested in planetary imaging you will want a bigger scope soon. It does not need to be an expensive one - something like an F/8 150mm newtonian is going to be a really nice and cheap planetary imaging scope. With planetary imaging, unlike long exposure DSO imaging, you don't need a very stable mount. As long as it can carry the scope and track with decent precision, it will do. The exposures involved are so short that there is simply no way there will be any blur due to the mount not tracking properly. For example, I took this image of Jupiter on an EQ2 mount with a simple DC RA tracking motor (the kind where you set the proper speed with a potentiometer) and a 130mm newtonian scope: (it was also taken with a QHY5II - but the L model, and a color camera)