Everything posted by vlaiv

  1. I'm rather surprised that they have different specs. Same sensor - pretty much the same specs. There could be tiny differences in some of these, and I'll explain.

First - read noise. There is no single read noise value because gain is variable. QHY lists read noise in the 1e-3.7e range (source: https://www.qhyccd.com/qhy600m-c/ ), ZWO lists it at 1.2-3.5e (source: https://astronomy-imaging-camera.com/product/asi6200mm-pro-mono ). Read noise with CMOS sensors consists of two parts - pre-amp noise and post-amp noise. For this reason, read noise depends on the gain applied, and it usually follows a c + 1/x curve. We can easily see this if we look at what happens to noise in the conversion between electrons and ADUs when reading out. Let X be pre-amp noise and Y be post-amp noise:

read noise in ADU = X * gain + Y
read noise in electrons = (X * gain + Y) / gain = X + Y / gain

The higher the gain - the lower the read noise, and vice versa (1/x curve). IMX455 has multiple readout modes. The QHY model lets you select which mode you want to use, while ZWO combines them into one (it chooses the mode based on your gain selection). In any case - here are the read noise graphs for each (QHY and ZWO). These actually look pretty much the same - it looks like ZWO chose the #1 (blue line) mode. The crossover is at a different place (ZWO uses a dB scale for the gain, so roughly every ~60 units of gain, e/ADU halves). In reality - both will have ~1.5e read noise.

Sony does not provide absolute QE, but rather gives a relative QE chart. Each vendor then estimates peak absolute sensitivity with respect to that chart and their calibration source. A few percent of estimation error is quite possible.

Frame rate depends mostly on USB 3.0 speed and the number of pixels that you are going to transfer. QHY says it is 2.5 FPS, ZWO lists it at 3.19 FPS. Max speed of a USB 3.0 port is 4.8 Gbit/s and 9576 x 6388 x 16 = 978743808 bits. That in theory gives a max of about 4.9 FPS (the two divided), however the USB protocol spends only some percentage of that on data transfer - and actual speed also depends on the speed of the rest of the hardware, so 2.5-3.2 FPS is realistic.

Overall - I don't think there is any real difference between the two cameras as far as performance is concerned.

I'm rather skeptical of that result. I don't see enough in there to be able to draw a conclusion on it, but the two key pieces of information that are missing are the e/ADU measurement for each ISO value checked for read noise, and the read noise measurement methodology. I've never seen read noise that low. The best planetary camera in use today has read noise of ~0.75e (small sensor - IMX224). In any case, if you want to compare the SNR of two cameras, here is how it should work:

1. take some surface and set the number of photons emitted from a unit area of that surface per unit time (flux)
2. calculate how much surface one pixel covers given lens focal length, binning and all of that
3. multiply with average QE - or if you know the source spectrum and the spectral response of the camera - piecewise multiply the two and integrate over the spectrum (you can also use peak QE as a first approximation)
4. calculate any read noise due to binning - read noise rises with bin factor for CMOS sensors (bin x2 raises read noise x2, bin x3 raises read noise x3 and so on)

SNR = signal / sqrt( signal + total_read_noise^2 )

In your example, if we calculate per pixel SNR for the same size pixel and the same lens, and the per pixel signal is 5e, then the respective SNRs will be:

a7S III SNR = (5 * 0.65) / sqrt( (5 * 0.65) + 0.54^2 ) = ~1.727
QHY SNR = (5 * 0.87) / sqrt( (5 * 0.87) + 1.1^2 ) = ~1.845

(A small code sketch of this calculation follows this post.) Just to explain the above formula - total noise is the square root of the sum of squares of all individual noise sources. In the above calculation, we used only shot noise and read noise. We assumed that the exposure is short enough that dark current is effectively zero and that there is no background illumination (no light pollution). The second assumption might not be a valid one - you'll need to check that. In astronomy, even an SQM 22 sky is much brighter than targets and must be taken into account - fainter parts of brighter galaxies go down to SQM 27-28 for example. Shot noise is equal to the square root of the signal.

Out of interest, do you have a single such frame in full resolution in a raw format - say FITS? Did you calibrate that frame? It might be worth checking whether you need to do bias removal and whether it will improve SNR.
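Here is a minimal sketch of that per-pixel SNR comparison in Python, using the values quoted above (5e incident signal, plus the QE and read noise figures for each camera):

```python
# Per-pixel SNR with shot noise + read noise only; dark current and sky background ignored.
import math

def per_pixel_snr(photons: float, qe: float, read_noise_e: float) -> float:
    signal = photons * qe                      # electrons actually detected
    return signal / math.sqrt(signal + read_noise_e ** 2)

print(per_pixel_snr(5, 0.65, 0.54))   # a7S III  -> ~1.73
print(per_pixel_snr(5, 0.87, 1.10))   # QHY      -> ~1.85
```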
  2. I had a look at those, and some of the misconceptions that we touched upon here seem to be prevailing in those discussions as well. I'm not going to go and dissect each of those threads, but if there is something in particular that needs to be further discussed or explained, I'll be happy to do it. All we have said so far can really be condensed into a few points:

1. seeing is not the only thing determining the resulting star FWHM in the image - size of aperture, guiding and seeing all play a part, and we have a defined relationship between those and the resulting FWHM
2. for a given FWHM of stars in the image (effectively a blur PSF) there is a simple relationship between it and the "optimum" sampling rate (I can explain why I put optimum in quotation marks) - see the small sketch after this post
3. over sampling - very bad; under sampling - not bad at all, and no, stars will not be square because of it

By the way, I already presented images in this thread that were 4"/px for example, and those did not look bad. In fact - whenever you look at a large image on the screen that is scaled to fit - it is under sampled - yet it looks fine. Here is an example: this is a crop from an M13 image - scaled to fit the screen - shown here at 28% of 2"/px resolution - which is effectively 7.14"/px. The same image viewed at 100% (again a crop). And I'll now use a feature of IrfanView - the software that I'm using to view this image and make screenshots - and zoom in to 300%. Yes - stars look much softer in the zoomed image - but there is no "pixelation". The moral of this is - images just should not be looked at past 100% zoom, and they will look fine no matter what sampling rate we used. If one used 3"/px for the image and expects to see detail at a 300% zoom level (effectively 1"/px) - well, they will be disappointed.
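As a small sketch of point 2, here is the FWHM-based sampling rule referenced later in this thread (the FWHM / 1.6 rule of thumb), together with the standard pixel-scale formula; the example numbers are just illustrations:

```python
def optimum_sampling(fwhm_arcsec: float) -> float:
    """'Optimum' sampling rate in arcsec/pixel for a given measured star FWHM
    (the FWHM / 1.6 rule of thumb discussed in this thread)."""
    return fwhm_arcsec / 1.6

def pixel_scale(pixel_um: float, focal_length_mm: float) -> float:
    """Standard pixel-scale formula: 206.265 * pixel size [um] / focal length [mm]."""
    return 206.265 * pixel_um / focal_length_mm

print(optimum_sampling(3.2))      # 2.0 arcsec/px is enough for 3.2" FWHM stars
print(pixel_scale(3.76, 400))     # ~1.94 arcsec/px for 3.76 um pixels at 400 mm focal length
```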
  3. The two are essentially the same. If you don't want to over sample and you already have small pixels - then it is worth binning. You can bin in software after acquisition - the only thing to remember is to consider read noise appropriately. The effect of read noise depends on background levels, and if you over sample - the same thing happens with the background as happens with the regular signal (they are both light signal) - it gets spread over more pixels. This in effect means that if you over sample by say a factor of x2 with a CMOS sensor because of small pixels - you'll need to make each single exposure x4 longer than you would need with the same aperture and focal length but with larger pixels of the same read noise. The good thing about all of this is - if you have the correct exposure dialed in for the pixel size as is - you don't have to change anything to bin your data later in software - read noise levels vs exposure duration will be correct for those cases as well - regardless of whether you bin x2 or x3. I think that binning in software with small pixels is the best of both worlds - you can get a good sampling rate on a night of good seeing - bin x2, and on a night of not so good seeing - bin x3, so there is some flexibility there (a short binning sketch follows this post).

Although fractional binning exists - it is not real binning and does not behave like true binning. True binning does not introduce pixel to pixel correlation - so no additional blur is created. It also has a precisely defined SNR improvement (if noise is truly random) - the SNR improvement is equal to the bin factor, so bin x2 will improve SNR by x2, bin x3 will improve SNR by x3, and so on. Fractional binning does not do that - it introduces pixel to pixel correlation and is no longer really predictable (maybe it is - just not as simple as above, and it probably requires rigorous mathematical analysis to confirm the relationship). Even a small fraction above the bin factor can mess things up - for example fractional binning x2.1 might improve SNR by a factor of say x2.5. We might say - well, that is a good thing - but blurring the image also increases SNR - some of that SNR improvement is due to blurring caused by pixel to pixel correlation - and that is not something you want. I'm yet to analyze what is the best way to go about all of this, but so far my gut feeling is - if you say want to bin x2.5 - I'd actually do it in one of two ways (one simple and one very advanced):

1. Bin x3 and then, when doing alignment of the frames, upscale each frame so that the resulting "zoom" is the same as if you had binned x2.5 - make the sampling rate adequate for the detail. This will lose a bit of sharpness but will boost SNR more and will allow you to sharpen your image a bit without bringing up too much noise - so the net result will be as if you did x2.5 binning (sort of).
2. Since the subs that go into the stack don't have the same FWHM - some have larger and some have smaller FWHM - we set a target FWHM prior to stacking - one that corresponds to a certain bin factor. All subs that have a larger FWHM will be deconvolved to reduce their FWHM down to the target FWHM. All subs that have a lower FWHM will be convolved with a Gaussian to hit the target FWHM. This will create subs with very different SNR - but there is an algorithm that can deal with this situation - and it will create a close to optimum stack of such frames. Now, this is something I'm working on - so not really available in software yet - I just wanted to share it as an alternative option.

Hope this answers your question. To recap - it does not matter if you image with small pixels as long as you are aware of that, select your sub duration appropriately and then bin your data to recover the lost SNR while the data is still linear - that is the same as originally imaging at the correct sampling rate.
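Here is a minimal software-binning sketch of the "true" binning described above (assumes a 2D numpy array of linear, calibrated data; the cropping step is my own choice for handling dimensions not divisible by the bin factor):

```python
import numpy as np

def bin_image(data: np.ndarray, factor: int) -> np.ndarray:
    """Sum factor x factor blocks of pixels (true binning, no pixel-to-pixel correlation).
    For purely random noise this improves per-pixel SNR by `factor`."""
    h, w = data.shape
    h, w = h - h % factor, w - w % factor              # crop to a multiple of the bin factor
    return (data[:h, :w]
            .reshape(h // factor, factor, w // factor, factor)
            .sum(axis=(1, 3)))

# Example: bin x2 quadruples the signal per output pixel while random noise only
# doubles (it adds in quadrature), so SNR improves by x2, as stated above.
binned = bin_image(np.random.rand(100, 100), 2)
print(binned.shape)    # (50, 50)
```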
  4. M31 - very good image.
  5. No contest really. Tracking mount all the way. Whatever the lens sharpness - at those resolutions it is the lack of tracking that blurs the image the most.
  6. It probably encodes the image in 16 bit after the stretch if it is still linear, and then converts to 32 bit before "unstretching" it. One of the good sides of applying gamma is the handling of 16 bit data and the error it introduces. Gamma is essentially a power law, which means that produced values are stretched at the low end and squeezed at the high end (compared to linear data). If you stretch values at the lower end - and you round the values - you get less rounding error at the low end, which matters because those values' signal, and hence SNR, is already low. Overall it does not help much - but it does help a bit, and every little bit counts. (A small numeric sketch follows this post.)
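As a small illustration of that point (my own example, not the tool's actual code), gamma-encoding before 16-bit quantization shrinks the relative rounding error for faint values compared with quantizing the linear data directly:

```python
import numpy as np

gamma = 2.2
values = np.array([0.0001, 0.001, 0.01, 0.1])     # faint linear values, normalized to 0..1

linear_16 = np.round(values * 65535) / 65535                                   # quantize linearly
gamma_16 = (np.round(values ** (1 / gamma) * 65535) / 65535) ** gamma          # encode, quantize, decode

for v, l, g in zip(values, linear_16, gamma_16):
    print(f"{v:.4f}  linear error {abs(l - v) / v:.2%}  gamma error {abs(g - v) / v:.2%}")
```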
  7. Using a smaller-pixel camera:

1. In case you are under sampled to start with - it allows you to capture additional detail. Now I have to explain what sort of additional detail this is - it is not like things will disappear if you use some level of under sampling. For example, if we use x2 larger pixel size than optimum - everything will still be in the image; the only difference that might happen is that two close stars - that you think are two close stars - might be a bit harder to tell apart as two close stars. It will be a bit harder to differentiate. We can all test this. We can take an image that is properly sampled - by examining the FWHM/1.6 rule - and then we can bin that data and enlarge the image back to the starting size and compare it to the original. I'm going to make such a comparison here to show you the level of difference. This image at this scale is 100% zoom of what is close to optimum sampling. I now resize it to a smaller size: it now has half the number of pixels in both the x and y coordinates (I again cropped just the galaxy as the interesting feature and show it here at 100%). This is scaled up to the original size. Some of the detail has been lost due to using a lower sampling rate - but it's not that things are missing - the effect is a bit different. Things are blurrier - less precisely defined. Stars are a bit larger / softer and detail is not as well defined / sharp as in the original image. Some local contrast has been lost - compare the two images for the bridge detail for example. So that is the effect of under sampling in astronomical images - almost exactly the same as using a smaller aperture or shooting in poorer seeing. Detail is lost - it shows how the two are related - but the level of detail loss is very subtle even if we go with 4"/px instead of 2"/px. This is one of the reasons almost no one sees a difference in sharpness between mono and OSC sensors. OSC sensors in reality sample at half the rate of the same sensor in mono, but the difference is so subtle that it can barely be seen. Under sampling is not bad.

2. Using smaller pixels when you are properly sampled or already over sampled. You will gain nothing - even if we account for pixel blur, in the over sampled scenario you really gain nothing. You lose plenty by using smaller pixels. You lose SNR without getting anything in return. The simplest way to explain this is the following: say you have an object that covers 100x100 pixels and you choose pixels that are half the size of the original pixels. With the new camera the object will cover 200x200 pixels - or x4 more pixels in total (10000 vs 40000). You did not change the aperture by switching to smaller pixels - it gathers the same amount of light as before. The object did not change its luminosity - it still gives off the same amount of light. What did change is how that light is distributed - it is now divided among 40000 pixels instead of 10000 pixels. In this new setup - each pixel is getting only 1/4 of the light compared to the initial setup. You sliced your signal to 1/4 of the original and thus need x4 the total imaging time to get the same signal and SNR as before (a small worked sketch follows this post). If you can make an image in 4h / one night when properly sampled - you'll need to image for 2-3 full nights to get 16h for the same SNR image with x2 smaller pixels. This is why over sampling is very bad - especially for a beginner. Experienced imagers could possibly afford a multi-night session and many, many hours spent on one target to compensate for small pixels - but in reality, why do that?
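A back-of-the-envelope sketch of that arithmetic: halving the pixel size spreads the same light over x4 as many pixels, so per-pixel signal drops to 1/4 and the total integration must grow x4 to reach the same per-pixel SNR (read noise ignored here):

```python
def relative_exposure_time(pixel_shrink_factor: float) -> float:
    """Factor by which total imaging time must grow when pixel size shrinks by
    `pixel_shrink_factor` on each side (e.g. 2 means pixels half as large)."""
    return pixel_shrink_factor ** 2

print(relative_exposure_time(2))   # 4  -> a 4 h image becomes a 16 h project
```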
  8. Now, that is an interesting assertion. Let's see what the numbers say, shall we? The ICX285 is 9.0 x 6.7 mm, or 60.3 mm2. An APS-C sized sensor (not even full frame like the 5D) has about 330 mm2 - that is x5.5 more surface area. If we take say a DSLR like the Canon 750D - it has 6000x4000 pixels, so it can be binned at least x4 and still have more pixels than the ICX285. If we put both sensors on the same type of scope - say an F/4.5 apo (F/6 reduced to F/4.5) - then for the same FOV the DSLR will use a scope with x5.5 more aperture area, and if we bin the DSLR to the same resolution - it will have larger pixels than the ICX285: 3.75 * 4 = 15µm pixel size. Even if the ICX285 has x2-x3 better QE, and even if we account for dark current noise - there is still a massive signal advantage if we equate working resolution / FOV / pixel size by using different (but same type) optics. I would not so boldly state that the Atik 314L will beat a DSLR on any particular target if we set the working resolution and make the choice of optics accordingly.
  9. To clarify the above for participants without a programming background: the Agg, ps, pdf and svg backends can hand the image off without interpolating it, and ps, pdf and svg are vector formats, unlike raster images (which contain pixels). Raster images can't be displayed without some sort of interpolation, and "none" / "default" is almost always nearest neighbor interpolation, as it is the simplest to implement.
  10. You just showed two images with the same interpolation - nearest neighbor - and that is why they look the same. (source: https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.imshow.html ) A small demonstration follows this post.
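A minimal sketch (assumes matplotlib and numpy are installed; the tiny Gaussian "star" is my own test data) showing that the blocky look comes from the viewer's interpolation choice, not from the pixels themselves:

```python
import numpy as np
import matplotlib.pyplot as plt

x = (np.arange(9) - 4) ** 2
star = np.exp(-(x[:, None] + x[None, :]) / 4.0)    # tiny Gaussian "star", 9x9 point samples

fig, axes = plt.subplots(1, 2)
axes[0].imshow(star, interpolation='nearest')      # blocky: each sample drawn as a square
axes[1].imshow(star, interpolation='lanczos')      # smooth: same samples, better reconstruction
plt.show()
```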
  11. I can't answer that question with a simple yes or no. The indication / only consequence of under sampling is aliasing. The effects of aliasing depend on the signal and the way it was sampled, as well as any subsequent processing of the signal (we can discuss all of that in detail). With astrophotography we have a good understanding of what sort of aliasing effects we might get, and also of what the subsequent processing (stacking with sub-pixel alignment) will do to the data. The usage of physical pixels as point sampling devices is also well understood - we know what sort of, and how much, "pixel blur" there will be. We can calculate the MTF of this blur (not related to under sampling - it happens in all cases - under, over or correctly sampled). However - without an exact definition of the term "blockiness" - I can't answer your question correctly. If you simply mean - as in the examples so far - that "visible" pixels - like "pixelated" stars - are a sign of under sampling - then no - that is only an effect of the interpolation used. Remember, pixels are point samples - they don't have size / shape and are not square - they are just numbers at coordinates.
  12. Az is a much better option, as with an EQ and a newtonian the eyepiece and finder scope end up in very awkward positions, but do consult FLO on whether the Explorer 150 and AZ4 make a good combination. I have one concern - will you be able to point the scope straight up? Long tubes hit the mount and can't fully reach 90°. There is also potential for hitting the legs with the OTA (but that is mostly with long tubes).
  13. According to my understanding - the purpose of this thread should be to iron out the technical side of things. The tool itself can remain pretty much unchanged from the end user perspective - maybe the only addition would be selection of the mount / expected guide performance. That is just another drop-down that is easily used by people. On the other hand - we are trying to offer genuine advice to people - advice that is correct within the limitations of the model (a diffraction limited scope and the mount performance, for example, are assumed but might not be correct). I insisted on the thread so that other people can have their say in the matter. Since I'm the main driver behind the change - I feel more comfortable if what I say is reviewed by others and any possible error is pointed out, so it can be corrected if it is indeed an error. I know that many people don't have enough knowledge to spot a problem straight away - but that is why we have this thread - if anyone is interested in making this tool better and in ensuring the correctness of the math behind it - then I'm more than happy to answer any questions, point to online sources, or participate in correcting any errors spotted. Another point of this thread is to explain things to those that are interested. I'm much happier if people have an understanding of how something works - rather than it being just a magic box - even if the majority opts to use it as a convenient tool without deeper understanding - as long as they have confidence that the theory behind it is correct.
  14. Indeed. It is also quite expensive! €570 for a manual mount that is EQ5 class. https://tavcso.hu/en/productgroup/mech_eqal-55
  15. I think that the IMX455 should be able to provide you with that frame rate when using binning. I'm not entirely sure and that is something that will need to be checked. I have found this - which points to that possibility: http://www.touptek.com/product/showproduct.php?lang=en&id=298 It looks like it is just a matter of controller / firmware for these sensors. As long as the data is binned before being sent over the USB connection - there should be an FPS improvement. Also - according to this document from Baader - it looks like bin x2 and bin x3 are available for the IMX411 as well: https://www.baader-planetarium.com/en/downloads/dl/file/id/1656/product/4620/technical_data_in_comparsion_between_the_qhy_600_models_and_the_zwo_asi_6200_mm_pro.pdf Confirmed - here is the ASI6200 manual: https://astronomy-imaging-camera.com/manuals/ASI6200_Manual_EN_v1.4.pdf You can bin x3 in hardware and then additionally bin x2 in software, for example. That will give you 1596 x 1064, which you can adjust for 1080p. Just confirm that hardware bin x3 will give you the expected FPS with someone who has the camera and a USB 3.0 port.
  16. The mount is not available separately for some reason. The idea is great. In principle any EQ mount operating at the north pole is effectively an Alt/Az mount (the RA axis points straight up and becomes azimuth, while DEC is then turned into altitude). People have modded the EQ3 and EQ5 to be able to do this: https://www.cloudynights.com/topic/67539-astroview-modified-to-alt-az-mount/ https://www.iceinspace.com.au/forum/showthread.php?t=130920 There are a few drawbacks - like the scope sometimes hitting the tripod legs - but that can be countered by using a column to raise the mount head with respect to the tripod. Another lightweight mount from Skywatcher was able to do that - the AZ-EQ Avant. Also not available separately, and as far as I can tell - no longer available (it shared some parts with the StarQuest). See for example this video: https://www.youtube.com/watch?v=wkVergOxt_E The AZ GTI is an alt-az mount that can operate as EQ if put on a wedge. The AZEQ5 and AZEQ6 are Skywatcher heavy duty goto mounts that are capable of being used in both AZ and EQ configurations. Then there is this thing, called the EQ-AL55, which should be a dual EQ / AZ mount from SkyWatcher - but that is something I've not seen yet. It is supposed to be available from some suppliers.
  17. Low light performance is not something that is intrinsic to a camera - it is a combination of factors and the way you utilize the camera itself. If you fix your field of view and opt for say 1080p over 4K as it will have better low light performance - then it is a matter of having the most aperture at those settings. That combination will have the best low light performance. For example - using 24 FPS over 30 FPS will provide a significant benefit as each exposure will be 25% longer (25% more signal just by choosing FPS). 1080p versus 4K will be even more crucial. 4K resolution has x4 more pixels than 1080p - that means the same FOV is spread over x4 more pixels, or the signal is spread over x4 pixels - each pixel gets only 1/4 of the light. An important question is how this full frame sensor records video - in particular, when you select 1080p video, is the data binned internally or not? I guess it is compressed - which is not ideal; compression artifacts can cause issues with low level signal.

If you opt for any of the astronomy cameras, then here is what I would advise for best low light performance:
- use binning so that you get 1080p output. For example the ASI294mm (still cheaper than those Sony cameras) can bin x2 to get a 9.26µm pixel size and still provide you with 1080p type resolution
- use raw format. This is going to be another problem for you to solve: 1920 * 1080 * 24 * 60 * 60 * 4 = 716636160000 bytes = 699840000 KB = 683437.5 MB = ~667.5 GB. You'll need about 670 GB of data storage for 4h of video material (well, not that bad - an m.2 SSD with 1TB would be enough to store a single session; a small storage sketch follows this post)
- get the largest aperture lens that will provide you with the wanted FOV. For example, if you opt for the ASI294mm - get the fastest 25mm lens that you can afford.

I'm not sure what sort of budget you have for this - but there are other, more serious sensors out there. If you get an IMX455 sensor - you'll have an effective pixel size of 15µm and still be able to get 1080p - but that is going to cost as much as 2-3 of those consumer Sony cameras.
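A small helper for the storage estimate (my own parameterization, not from the post - plug in whatever bit depth the camera's raw format actually delivers):

```python
def raw_video_gb(width: int, height: int, fps: float, hours: float, bytes_per_px: float) -> float:
    """Uncompressed raw video size in GiB."""
    return width * height * fps * hours * 3600 * bytes_per_px / 1024 ** 3

print(raw_video_gb(1920, 1080, 24, 4, 1))   # ~667 GiB for 4 h of 8-bit raw at 24 FPS
print(raw_video_gb(1920, 1080, 24, 4, 2))   # ~1335 GiB if the raw format is 16-bit
```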
  18. I did a brief search and the reported figures vary greatly, although there is not much data to go by. I've seen below 1" RMS reported and over 3" RMS reported. I'm sort of skeptical of 1" RMS performance from an EQ3 class mount, given that it does not even have proper ball bearings on the DEC shaft. Motor resolution is about 0.28"/step as well - so that is somewhat limiting. The next problem is the 480s worm period - which is rather quick. That leaves 240s for P2P, and if the periodic error is worse than say an HEQ5 (being 30-35") - maybe say 45" - then we will have 0.1875"/s on average. It takes only a few seconds for the mount to drift 0.5" - and sometimes even faster than that (periodic drift is never uniform); see the short sketch after this post. My vote goes for 1.5-2" RMS on average when tracking near the meridian.
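A back-of-the-envelope check of those drift figures (the 45" peak-to-peak value is the assumption made above, with half the worm period available for the peak-to-peak swing):

```python
p2p_error_arcsec = 45.0      # assumed periodic error, peak to peak
half_period_s = 240.0        # half of the 480 s worm period

avg_drift = p2p_error_arcsec / half_period_s   # arcsec per second, averaged
print(avg_drift)                               # 0.1875 "/s
print(0.5 / avg_drift)                         # ~2.7 s to drift 0.5" on average
```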
  19. Especially with regard to "under sampling and blocky stars" and how good the results from the current version of astronomy tools are.
  20. I'll disagree on all three counts. All modern cameras have small pixels. A cheap large-pixel camera available second hand is an anecdotal event rather than a regular thing. There is no software that will make a mediocre mount guide like a dream - otherwise we would all be using it. I would like to see good (astronomical) quality short and fast optics at affordable prices.
  21. One way it can be handled is to do a linear stretch and save the image as 16 bit (not sure if the new version still requires 16 bit images). Another would be to do a "reversible" stretch before creating the starless image. For the linear stretch - just set the white level somewhere just before the target starts to saturate. Stars will saturate because of this - but that does not matter as they will be removed. A reversible non-linear stretch would consist of three steps:

step 1 - same as above - lower the white point almost down to saturation of the target
step 2 - apply gamma of known power
step 3 - bring up the black point just enough so you avoid clipping on the left

In each step, note down what you used. To get linear data again - do the inverse steps in reverse order:
- bring down the black point by the same amount
- do inverse gamma (with the reciprocal of the power)
- move the white point to its original position

Now you can subtract such data from the original image to have a stars-only image, and you can also start stretching it again as you please. (A minimal code sketch of this follows below.)
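A minimal sketch of one reading of that reversible stretch (my own function names and example values; the white point, gamma and black point offset are illustrations, not prescriptions):

```python
import numpy as np

def forward_stretch(img, white, gamma, black):
    x = np.clip(img / white, 0, 1)        # step 1: linear stretch, lowered white point
    x = x ** (1 / gamma)                  # step 2: gamma of known power
    return x * (1 - black) + black        # step 3: bring up the black point

def inverse_stretch(img, white, gamma, black):
    x = (img - black) / (1 - black)       # undo the black point shift
    x = np.clip(x, 0, 1) ** gamma         # undo gamma (reciprocal power)
    return x * white                      # restore the original white point

# Data that was not clipped on the way forward comes back unchanged:
lin = np.random.rand(4, 4) * 0.5
restored = inverse_stretch(forward_stretch(lin, 0.6, 2.2, 0.05), 0.6, 2.2, 0.05)
assert np.allclose(lin, restored)
```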
  22. It will always be a faster setup than a small scope for certain targets.
  23. No. Pixels that come out of the sensor are point samples - they don't have size, they only have position and value. Take almost any image online - can you say what the size of those pixels is? No - because it is not important. If it were an important metric - it would be embedded in the image. What we are used to seeing in software are little squares - not because pixels are squares (by the way - most sensors don't actually have square pixels, but rounded ones) but because the "default" interpolation algorithm is nearest neighbor - because it is by far the simplest to implement: take the coordinates and round them to the nearest integer - and there you go. Nothing complex needs to be done and it is fast. For this reason most software has this interpolation as the default setting - but it does not have to be that way. In any case - the squares are a consequence of this and not of square pixels on the sensor - which, by the way, look like this: or like this: or this: If the image on the screen reflected the sensor pixel shape - why don't they look like this? I still maintain that you only need to sample at FWHM / 1.6 - and that will cover a star with a handful of sample points - and you will still be able to reconstruct it. This is not something that I made up - this is something that the math is telling us - proven theorems (and examples - like the ones I showed you above).
  24. The Bresser website lists that same mount at 13 kg max payload ... https://www.bresser.de/en/Astronomy/Accessories/Mounts/EXPLORE-SCIENTIFIC-EXOS-2-PMC-Eight-GOTO-Mount.html