Everything posted by vlaiv

  1. Yes, there is an easy way to test this - record a 4 minute video for example (or even 5 minutes) and stack it using the best 10% (or whatever percentage) of frames, then take the same video, split it into two 2-minute parts and stack those using the same percentage of frames. The SNR difference won't be huge, and you will see whether rotation is an issue. If the two short videos produce images of the same quality, that means seeing was similar during the whole session and the 4-minute video won't be poorer due to seeing alone. If there is a difference, it will be due to rotation.
  2. Well, we can do some calculations to see what would happen, so that we get a good idea of what limits total time. The circumference of Jupiter along the equator is about ~439,300 km (diameter times pi). The rotation period of Jupiter is 9h 55m = 35,700 s, so a point on the equator moves at ~12.3 km/s. The critical sampling rate here is 0.4126"/px - let's see how long it takes a point on the equator to move half a pixel. At present, Jupiter is 602.34 million km away (according to Google). Half a pixel is ~0.2 arc seconds. What length subtends an angle of 0.2 arc seconds at a distance of 602.34 million km? About 584 km. At 12.3 km/s, that distance is covered in only ~47.5 s!
The above suggests that we can have motion blur even under one minute! However, we are using AutoStakkert! for stacking, and one of its features is the ability to correct for the lowest order atmospheric disturbance - namely tilt. In average to very good seeing, a star profile will present a FWHM of, say, 1.5-2". That translates into 0.64-0.85 arc seconds of RMS displacement from the true position. We can see that when we observe this slow-motion recording of the lunar surface: parts of the image "jump around" by at least half an arc second if not more, and if we "freeze" the seeing we won't get motion blur because of this - only geometric distortion. AS!3 handles geometric distortion with alignment points. This means that the software can "return" a part of the image that is displaced up to 1 arc second from its true position.
Now, if we have this feature in the software, it will also "derotate" the image automatically if it moves up to, say, 1 arc second (or even more, depending on the size of the alignment point). So we can put 1" instead of 0.2" in the above calculation and get a x5 longer duration: 47.5 s x 5 = 237.5 s, or ~240 s, or up to 4 minutes. In fact, if AS!3 takes the middle of the recording to produce the reference frame, it can correct for rotation for +/- 4 minutes around that point.
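As a quick sanity check of the numbers in the post above, here is a minimal Python sketch. It only reproduces the arithmetic quoted there (approximate diameter, distance and sampling rate); none of the values are precise ephemeris data.

```python
import math

# Rough check of the motion-blur estimate: all values are the approximate
# figures quoted in the post, not precise ephemeris data.
diameter_km = 139_820            # equatorial diameter of Jupiter, approx.
rotation_s = 9 * 3600 + 55 * 60  # 9h 55m rotation period = 35,700 s
distance_km = 602.34e6           # Earth-Jupiter distance quoted in the post
tolerance_arcsec = 0.2           # "half a pixel" at 0.4126"/px, rounded as in the post

equator_speed = diameter_km * math.pi / rotation_s              # ~12.3 km/s
tolerance_km = distance_km * math.radians(tolerance_arcsec / 3600)
blur_time_s = tolerance_km / equator_speed

print(f"equatorial speed  : {equator_speed:.1f} km/s")
print(f"half-pixel span   : {tolerance_km:.0f} km")
print(f"time to blur      : {blur_time_s:.0f} s")                       # ~47-48 s
print(f"with 1\" tolerance : {blur_time_s / tolerance_arcsec:.0f} s")    # ~240 s
```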
  3. The color is a bit weird, which suggests that you did not use a UV/IR cut filter with that camera? The camera only has an AR coated window (anti-reflection coating) but passes the full spectrum of light, and the sensor is sensitive in the IR part of the spectrum as well. If you want proper colors out of that camera, you should filter the light to the 400-700nm range - which means using a UV/IR cut filter. You don't have to use it if you don't mind a strange color cast. Here are some tips:
- use high FPS and low exposure time. Something like 5-6ms will work well. Don't be alarmed if the video looks under exposed and the planet looks very dark in the video - that is ok, as stacking will sort it out
- record for at least 3-4 minutes. You should get about 40,000-50,000 frames in total with these settings (if everything is ok - your computer is capable of recording at those speeds and you use USB 3.0). Do use ROI - no need to shoot at higher resolution than, say, 800x600px
- stack only 5-10% of the best frames. With the above frame count, that will give you plenty of frames and a smooth image in the end.
  4. I would say that using an arbitrary ratio of Ha signal to reproduce the Hb signal is "cheating". There is no reason to suppose that: a) this ratio is constant in hydrogen gas (and indeed it is not) b) this ratio is constant even on the selected target. Look at this example: this is part of M42 taken as an OSC image - Red versus Green channel. Red will contain Ha obviously, and Green will contain Hb. There are parts of the nebula that are visible in both images at almost the same brightness - which means that the Ha to Hb ratio is very close to 1:1 - while some other features are present only in Ha - which means that Hb is much, much weaker there, if present at all - and all of this in the same object. Given that Hb is a higher energy transition than Ha - something needs to excite the hydrogen gas more in order to produce this emission - and I'm guessing that there is some interesting physics behind finding the ratio of the two. For reference, the visible Balmer series transitions and their colors are: H-alpha 656.3nm (red), H-beta 486.1nm (blue-green), H-gamma 434.0nm (blue-violet) and H-delta 410.2nm (violet).
  5. Not directly. With planetary imaging it is important to freeze the seeing, and for that reason exposures need to be very short - often in the 5-6ms range - even if the image looks under exposed. When you image at those exposure lengths, you can ideally expect to achieve up to 200fps (1s / 5ms = 200fps). There is a limit to how many of those frames can be recorded, and this limit is imposed by the USB connection speed and the speed of your disk drive (SSD/NVMe can easily cope with the needed speeds, so it's worth having those in your imaging rig). The USB link has limited bandwidth - it can only achieve a certain data transfer speed. Each frame you record contains some amount of data, so if you increase FPS you increase the amount of data that needs to be transferred over the USB connection. At some point the USB connection can become the bottleneck in your recording. When this happens it is beneficial to reduce the ROI, since the size of each frame determines how much data it contains - smaller ROI, less data per frame, more frames per second transferred over USB.
You achieve the best contrast/detail if you capture the most data, and ROI can help with that - so in that sense reducing it helps - but only for reasons of data transfer. Once you hit the max data rate / max FPS allowed by your exposure length, a smaller ROI won't contribute anything. BTW, ZWO publishes the max theoretical FPS for every camera / ROI size combination and it is worth checking out. Say you work in 8-bit format and you want to hit 200FPS because you are using a 5ms exposure length - then you need to drop your ROI to around 800x600.
One more note - exposure length should really be judged properly. It needs to be short enough to freeze the seeing - which really means that the distortion of the atmosphere is "static" over the short period of a single frame. If you don't do this, you will have the "cumulative effect" of two or more different distortions averaged - which is just "motion blur" of different distortions and is a bad thing. On the other hand, you don't want to set your exposure any lower than that, because it will hurt your final image (more noise than needed). The reality is that there is no single well defined exposure length; it is a trade-off - some frames will be usable, some won't, as there will be motion blur. The longer the exposure, the more frames you'll need to discard and the fewer good ones you'll stack, so it is a fine balance to find a good exposure length. Another factor is how bad the seeing is - in average seeing you will need exposures in the 5-6ms range. In really good seeing you might afford to set the exposure to 10ms or even 15ms. In really poor conditions you might need to go as low as 3-4ms. Btw, lunar imaging can often employ 3ms as standard because of the amount of light - this even allows narrowband filters to be used on the Moon with longer exposures - but that is an "advanced topic".
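To see why ROI matters for data transfer, here is a minimal sketch of the uncompressed data-rate arithmetic. The resolutions and bit depth are example values only, not the specs of any particular camera.

```python
# Rough data-rate check for high-speed planetary capture.
def data_rate_mb_s(width: int, height: int, fps: float, bits_per_px: int = 8) -> float:
    """Uncompressed video data rate in MB/s for a given ROI and frame rate."""
    bytes_per_frame = width * height * bits_per_px / 8
    return bytes_per_frame * fps / 1e6

# 5 ms exposures -> up to 200 fps
full_frame = data_rate_mb_s(1920, 1080, 200)   # ~415 MB/s - at or beyond practical USB 3.0 throughput
small_roi  = data_rate_mb_s(800, 600, 200)     # ~96 MB/s - comfortably within it

print(f"1920x1080 @ 200 fps: {full_frame:.0f} MB/s")
print(f" 800x600  @ 200 fps: {small_roi:.0f} MB/s")
```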
  6. Well, I thought that this was a well known thing. Here is a full explanation of how things behave. The only difference between a stack of many shorter subs and a stack of a few longer subs (or one very long sub), all totaling the same exposure time, is in read noise. If we had cameras with zero read noise, it would be completely the same (as far as SNR goes) whether you use many or few subs. Since we don't have such cameras, and every camera has some level of read noise, this creates a difference, because read noise is the only noise source that does not grow with time - it is exclusively a per-exposure type of noise. Everything else grows with time, both signal and associated noise, and it does not matter whether you sum them physically by using a long integration time or mathematically, which is in fact what stacking is (as far as the data goes, it makes no difference whether we sum or average pixel values - the average is the sum divided by a constant, and an image multiplied by a constant remains the same image; it just gets a linear stretch, which we alter again in processing anyway).
Thus, a stack of many shorter subs will always be of worse quality (have lower SNR) than a stack of a few longer subs with the same total imaging time. However, the difference in SNR between the two can range from significant down to imperceptible, and it solely depends on how big the read noise is compared to some other noise source. This is because of how noises add. They don't add "normally" like numbers, but rather "in a special way" - like vectors, or to be more precise, linearly independent vectors (ones that are at 90 degrees to each other). The image above explains what happens. If we have two noise sources, a and b, where a is some noise source like thermal noise or LP noise and b is read noise, then in the first example, if a is equal or comparable in size to b, the resulting total noise c will obviously be larger than either of them (a or b) - the diagonal of a square is longer than either of its sides. In the second example, read noise b is significantly smaller than the other noise source a. This results in total noise c being about the same size as the larger component a. The impact of read noise becomes insignificant.
All of this explains why people get different results when stacking different exposure lengths, and it also gives a way to calculate a sub exposure length that will not impact the stack in a visible way. CCD cameras have high read noise, and when they were popular, fewer people were into astrophotography and most of them tried to do it under dark skies. A cooled camera in low light pollution (or when using narrowband filters) does not have a significant noise source to overpower read noise (thermal noise is low and LP noise is low), and a long exposure is needed for either of the two to build up enough to swamp read noise. CMOS cameras with very low read noise, together with the increase in popularity of astrophotography that led many more people to image from cities where LP is high (and the steady increase in LP over the years), brought a totally different situation. Today many people use 1 minute or even 30s subs without problem, rather than 20 or 30 minutes, simply because there is a noise source (namely LP) that will quickly swamp the low read noise of a CMOS sensor. In the end, a good factor by which LP noise should swamp read noise is in the range of x3-x5. I personally advocate x5 because it produces only a 2% difference in total noise per sub, and that can't be distinguished by the human eye.
Calculating the optimum sub length in the above sense is thus easy - measure the sky signal in your sub and make sure it is at least (5 * read_noise)^2: the average background signal in the sub, in electrons, needs to be the square of 5 times the read noise, since a signal level and its shot noise are related by signal = noise^2, or noise = sqrt(signal).
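A minimal sketch of that criterion in Python. The read noise and sky flux values are illustrative only - in practice you would measure the sky flux from your own calibrated subs and take the read noise from your camera's spec sheet.

```python
# "Swamp the read noise" criterion: background >= (swamp_factor * read_noise)^2
def min_sky_signal_e(read_noise_e: float, swamp_factor: float = 5.0) -> float:
    """Background signal per sub (electrons) at which sky shot noise is
    swamp_factor times the read noise."""
    return (swamp_factor * read_noise_e) ** 2

def min_sub_length_s(read_noise_e: float, sky_flux_e_per_s: float,
                     swamp_factor: float = 5.0) -> float:
    """Shortest sub exposure that reaches that background level."""
    return min_sky_signal_e(read_noise_e, swamp_factor) / sky_flux_e_per_s

read_noise = 1.7   # e- RMS, illustrative low-read-noise CMOS value
sky_flux = 2.0     # e-/px/s, illustrative - measure this from your own subs

print(f"target background : {min_sky_signal_e(read_noise):.0f} e-")            # ~72 e-
print(f"minimum sub length: {min_sub_length_s(read_noise, sky_flux):.0f} s")   # ~36 s
```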
  7. A common misconception is that the signal must overwhelm the noise. That is true for the final image - but it is really not important in a single exposure. You can have a single exposure with signal well below the noise level and still end up with a final SNR that is acceptable and shows the target. Another misconception is that if "no photons" are captured in a single exposure, the result will always be zero. I'd be happy to image something that has, say, 0.1 electrons of signal per exposure - even if the noise in that region is, say, 3e. In that case roughly 9 out of 10 exposures will indeed fail to capture even a single photon from the target, and every exposure will have signal much weaker than noise - yet stack enough of them and it will happen (in this particular example the SNR per sub is ~0.0333, and you would need to stack (5 / 0.0333)^2 = 22500 exposures to reach an SNR of 5).
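The arithmetic behind that example, as a short sketch: the SNR of a stack grows as the square root of the number of subs, so even an SNR of ~0.033 per sub eventually reaches SNR 5.

```python
import math

# Stacking arithmetic from the post above (example values from the post).
signal_per_sub = 0.1   # e- from the target per exposure
noise_per_sub = 3.0    # e- total noise in that region

snr_single = signal_per_sub / noise_per_sub
subs_needed = (5 / snr_single) ** 2   # subs required to reach SNR = 5

print(f"SNR per sub : {snr_single:.4f}")                               # ~0.0333
print(f"subs needed : {subs_needed:.0f}")                              # 22500
print(f"stack SNR   : {snr_single * math.sqrt(subs_needed):.1f}")      # 5.0
```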
  8. Yes. It does depend on the type of CC in question. Some are really terrible in that regard - like simple two-element models. There is visible blurring/loss of resolution even in long exposure images where seeing mostly dominates, but in general all of them will trade off some center-of-field sharpness for correction in the outer part of the field. There are some telescope designs that don't suffer from this - for example Mak-Newtonians are known to be excellent planetary performers (especially the F/6, somewhat slower models) while having good star definition over a larger field.
  9. A telescope needs to be at least diffraction limited for best planetary performance. Telescopes that have a built-in field flattener, or have a FF/FR added, are usually not diffraction limited - which means that their Airy pattern is larger than it should be for the given aperture, or that they behave like smaller aperture telescopes as far as resolving power goes. Look at the spot diagram of the Askar scope, and in particular the RMS value on axis: it is 1.617 (which is a very good value by the way, so this scope is really not that much hampered by being corrected for photography). Now, let's do some math. The diameter of the Airy disk of a diffraction limited F/7.7 scope is ~4.3um, so the radius is half that, or ~2.15um. There is a relationship between the Airy profile and the corresponding Gaussian profile, and applying it we are looking at about 1.46um of RMS (approximation). We now have two values that we can compare: ~1.62 vs ~1.46. It is clear that the Askar produces a larger pattern than a diffraction limited telescope would - and is not diffraction limited. It blurs the image more (much like a smaller aperture would).
  10. Yeah, I would not really take that seriously. The best reading of it is: FPS potential increased by 30% (but not necessarily actual FPS), and as for "imaging efficiency" - no comment there. You need to match the pixel size of your planetary camera to the F/ratio of the scope of choice. I would use a simple telescope design for planetary imaging rather than a telescope meant for DSO imaging. There are some trade-offs when you aim for a good flat field - it usually sacrifices diffraction limited performance in the center. Something like a 6" F/8 Newtonian will eat that Askar for lunch on planets. Anyway (there is a small calculator sketch after this list):
- camera pixels must match the F/ratio of the setup (you can adjust F/ratio with a barlow lens) - pixel size * 5 = F/ratio, so for a camera with 2.9um pixels you want your F/ratio to be around F/14.5
- get a camera capable of high FPS (USB 3.0 connection and preferably a computer capable of recording data at a high rate, so an SSD / NVMe drive)
- high QE
- as low read noise as you can get
- for lunar and solar work mono is better; for planets, color/OSC is less hassle
- for white light solar - either get a full aperture Baader solar foil filter, ND3 version (photographic), for Newtonians / Maks (anything with a mirror), or get a Herschel prism for a refractor. Get a Baader Solar Continuum filter as well - here the formula for F/ratio is pixel_size * 3.7 (if you use the Baader Solar Continuum filter)
- for solar Ha - that is a whole new ballgame - get a solar telescope, and here the F/ratio is again different: F/ratio = pixel_size * 3
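Here is the promised sketch of the pixel-size / F-ratio matching rules from the list above. The multipliers (x5 broadband, x3.7 with the Solar Continuum filter, x3 for solar Ha) are the ones quoted in the post; the example scope and pixel size are illustrative.

```python
# Match camera pixel size to working F/ratio for planetary / solar imaging.
def target_f_ratio(pixel_um: float, mode: str = "broadband") -> float:
    factors = {"broadband": 5.0, "solar_continuum": 3.7, "solar_ha": 3.0}
    return pixel_um * factors[mode]

def barlow_needed(native_f_ratio: float, pixel_um: float, mode: str = "broadband") -> float:
    """Rough barlow magnification required to reach the target F/ratio."""
    return target_f_ratio(pixel_um, mode) / native_f_ratio

# Example: 2.9 um pixels on a 6" F/8 Newtonian
print(target_f_ratio(2.9))          # 14.5
print(barlow_needed(8.0, 2.9))      # ~1.8x barlow
```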
  11. No idea what is going on there. Did you have a previous autopec / PEC curve loaded? Maybe it just repeated earlier corrections?
  12. PECPrep wants to know how much the mount trails or leads the sidereal rate (what the actual error is), and it computes that based on guide star position versus start position. It can do this from a variety of logs, and all of them need to do the same thing - have the guide star be displaced from its original position. If you enable guide output, you always return the guide star to (near) its original position. The calculations then need to account for this, and you need to sum up thousands of small corrections. In an ideal world these two approaches would yield the same result, but there is a significant difference between them, and that difference is in how error behaves. In each step there is some error: each time the guide star is measured there is some error in its position, and the mount itself won't react 100% perfectly to a correction, so the guide correction and the guide pulse won't completely match. When you add 1000 corrections with this sort of error, the error accumulates; but when you simply track the star, each measurement has some small error that does not accumulate over time. For this reason it is much, much better to simply record how the mount performs than to try to correct it, assume your correction is perfect, and then calculate the result (and do so a thousand times per recording). Autopec needs to do this because it has no idea where the star is - it relies on corrections (however imperfect) to calculate where it thinks the guide star is. Autopec is less reliable than measuring PE on your own and then calculating the PEC curve, but it is the only way it can be automated while the mount tracks and guides.
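A toy illustration of the accumulation point above: reconstructing position by summing many imperfect corrections behaves like a random walk whose error grows roughly as the square root of the number of steps, while a direct position measurement only carries the error of that one sample. The 0.1" per-step error is an arbitrary illustrative figure.

```python
import math
import random

random.seed(42)
steps = 1000
step_error_arcsec = 0.1   # illustrative per-step measurement/response error

# Position reconstructed by summing corrections: errors accumulate
accumulated = 0.0
for _ in range(steps):
    accumulated += random.gauss(0, step_error_arcsec)

# Position measured directly: only the last sample's error matters
direct = random.gauss(0, step_error_arcsec)

print(f"error after summing {steps} corrections: {abs(accumulated):.2f} arcsec "
      f"(expected ~{step_error_arcsec * math.sqrt(steps):.1f} arcsec RMS)")
print(f"error of a direct measurement:           {abs(direct):.2f} arcsec")
```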
  13. Your mount does not have a Vixen type clamp, but you could possibly fit one to it. For example this: https://www.firstlightoptics.com/dovetails-saddles-clamps/astro-essentials-mini-vixen-style-dovetail-clamp.html If you can unscrew the current telescope attachment piece (it looks like it is held by a single bolt in the middle), then you could possibly use the same bolt to attach a Vixen clamp in its place. There are also bigger versions that are sturdier but also more expensive: https://www.firstlightoptics.com/dovetails-saddles-clamps/william-optics-90mm-saddle-plate-for-vixen-style-dovetail-rails.html
  14. I'm totally Lionel in this regard.
  15. Do be careful with such assertions. You have a lot of sky covered in such a mosaic, and there must be some level of distortion present when you project a large part of a sphere onto a flat plane. One of the projections used has the consequence of enlarging objects that are close to the edge relative to those in the center. This is well known from maps of the world versus a globe as far as the size of landmasses goes (for example, Svalbard looks larger than Madagascar on Google Maps, but in reality it is only about 1/3 of the size - ~500km versus ~1500km).
  16. I think it is rather good. I can't say whether it's worth the money or not, to be honest. There are other Baader filters that I think are worth the money - for example the Baader Solar Continuum filter. That one is expensive, but it is a rather interesting piece of kit - it works for white light solar, but it also works for lunar imaging to minimize seeing effects. It can also help with chromatic aberration (although it gives an extremely green view - but if you can look past that, the view is razor sharp). It can be used for telescope testing, as it passes a very narrow range of wavelengths where the eye is most sensitive, and so on - so you see, it is a versatile piece of kit and for that reason I think it's worth the money (for anyone wishing to do the above). On the other hand, the Baader Contrast Booster is, well, just a contrast booster, and it tames a bit of chromatic aberration. It's not a wonder filter. It does not remove it all - I still see a bluish halo on very bright stars with it. The cast it imparts on the image is rather subtle; yes, it is there, but after a bit, when the eye/brain adapts, the view looks normal. I guess whether it is worth the money will depend on your financial situation. It's not a clear-cut case like some bits of kit that are definitively worth it (or not) no matter how much extra cash you have lying around. Here, if you can afford it, then it's worth it, but if you can't, then I would not sweat too much about it.
  17. I can't really remember, as it was quite some time ago. Currently I have a 102/1000 achromat (as far as achromats go) and I need neither an aperture mask nor a yellow filter with it for the most part. Jupiter shows some CA, but I use a Baader Contrast Booster to tame it. There is a chart for achromats: you take the F/ratio and the aperture size (in inches), and their ratio gives you the level of chromatic aberration. If you want to see how the scope will perform without a filter, just calculate this CA index. For example, say you have a 120/600 scope and you want to stop it down to a CA index of 3 (for comparison, my F/10 achromat has a CA index of 2.5, as it is a 4 inch scope at F/10, so 10/4 = 2.5 - and yes, it is indeed in filterable range, with only very minor CA). Then you would need roughly a 70mm mask. With that you have 2.75" of aperture and 600/70 = F/8.6, and their ratio is 3.11 (CA index). That will give you pretty much an ED experience. You can certainly get a similar experience with a CA index of 2.5 and the Baader Contrast Booster, so an aperture mask of 80mm (CA index of ~2.4).
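A small sketch of the CA index used above - focal ratio divided by aperture in inches. The calls below reproduce the examples from the post.

```python
# Achromat "CA index": focal ratio / aperture in inches (higher = less CA).
MM_PER_INCH = 25.4

def ca_index(aperture_mm: float, focal_length_mm: float) -> float:
    f_ratio = focal_length_mm / aperture_mm
    aperture_in = aperture_mm / MM_PER_INCH
    return f_ratio / aperture_in

print(round(ca_index(102, 1000), 2))  # ~2.44 - the 4" F/10 achromat (post rounds to 10/4 = 2.5)
print(round(ca_index(70, 600), 2))    # ~3.11 - 120/600 stopped down to 70mm
print(round(ca_index(80, 600), 2))    # ~2.38 - 80mm mask, CA index ~2.4
```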
  18. I'm not sure the above rendition is correct in terms of size. The Moon is half a degree across, and Andromeda is 3 degrees at full extent, so you can fit about 6 full Moons across Andromeda from end to end. It looks like the Moon is a bit small in your composite image. Here, look at the size comparison from Stellarium (crude composite :D) - the Moon just about fits between the M31 and M110 cores. In your image you can fit the Moon twice between the cores, so the Moon is about half the size it should be.
  19. I understand, but it is not really a sales pitch here - it is a very old convention for labeling different sensor sizes. If you visit that wiki page I linked earlier (here it is again: https://en.wikipedia.org/wiki/Image_sensor_format), you will see that most sensor sizes are in fact labeled as a fraction of that strange "inch", which is about 16mm and not the 25.4mm you would expect. I do get your concern, and at one point I also wondered why they are labeled so strangely (although I never trusted that labeling and always checked the diagonal in mm - probably because I'm used to metric and never think in inches). The fact that Sony replaced the term 1" with Type 1 is also rather telling. I guess many people noticed this discrepancy and the "Type" nomenclature was introduced to remedy it somewhat.
  20. I don't think they are advertising that as a one inch sensor - just using the normal convention (which is admittedly very weird and counter-intuitive). If it were a case of consumer deceit (like using the same packaging for different amounts of food, for example), then companies like Sony would raise much more concern than ZWO - Sony sells way, way more sensors, and they also label them "inappropriately" in the above sense. Well, I stand corrected - Sony obviously decided to avoid all the confusion and started labeling them as Type rather than inch. I was not aware of this change; I seem to remember them also using the inch convention in their documents, but now it seems to be "Type":
  21. I don't know if it can be clearer than this: (taken from the ZWO website page on the ASI533MC Pro). You also have all the other necessary information regarding the light gathering potential of the sensor:
  22. ZWO publishes correct data. I think your confusion arises from the "1 inch" part - which is a remnant of old times and does not actually measure 1 inch in diagonal. https://www.imaging-resource.com/news/2022/07/28/dealing-with-the-confusing-and-misleading-1-inch-type-image-sensor This means that when you see something like a 4/3 or 1/2" sensor size, you can't really convert it using real inches; rather, take one "inch" to be about 16mm in this context. A sensor that is 1/2" will have about an 8mm diagonal. (See this page: https://en.wikipedia.org/wiki/Image_sensor_format)
  23. FWHM / 1.6 does not address the display part of things at all - because it does not need to. It is concerned only with the image acquisition part and answers the question: what should the sampling rate for image acquisition be in order to capture all the data there is? The viewing part has been made difficult in part by ever increasing screen resolutions - which is really a marketing trick more than anything else. Let's do some math to understand the effective limit of display resolution versus what is actually manufactured and sold. Most people have a visual acuity of 1 minute of arc: https://en.wikipedia.org/wiki/Visual_acuity (see the table and the MAR column - "minimum angle of resolution"). Very few people have sharper vision than that. That should really be the size of a pixel when viewed from a given distance, as it represents this gap: (you can see that the letters used to determine visual acuity are made out of square bits of equal size, and you need to be able to resolve a black or white bit of that size in order to recognize the letter - so the minimum size of that is one pixel, either white or black).
Now, let's put that into perspective for viewing a computer screen versus a mobile screen. Let's assume that we use a computer screen from a distance of at least 50cm. At 50cm, 1 arc minute corresponds to 0.14544mm. If we turn that into a DPI/PPI value it is 25.4 / 0.14544 = ~174. You don't need a computer screen with more than ~174 DPI, as most humans can't resolve pixels that small - in fact, most computer screens are around 96 dpi, so the pixels are not even that small, and we still don't see pixelation easily. Phone screens are a different matter - their resolutions are ever increasing, but we have no need for them. If we apply the same logic and say that we use smart phones 25cm away from our eyes, we come to an upper limit of about 300-350 dpi. If you do a Google search for smart phones with the highest PPI, you will find that the top 100 of them have higher PPI than that - they range from 400-600 - which is just nonsense; the human eye can't resolve anything that small. Or it could, but you would need to keep the phone 10cm away from your eyes, and even newborns might have issues focusing that close (in fact, I think that perhaps babies can focus at 10cm, but that ability goes away quite soon after birth). Ok, so computer screens are fine resolution-wise, but mobile phones are not - they have smaller pixels than needed.
Further, to answer your question about viewing - you need to say what type of display style you are using in your app. Does your app just scale the photo to the whole screen of the device, to part of the screen, or does it perhaps use some sort of zoom and pan feature? These are all different scenarios, and the size of the image presented to you will vary and will depend on the actual pixel count of the image versus the pixel count of the display device. I always tend to look at images at 100% zoom level - which means that one pixel of the image is mapped to one pixel of the display device. Most people don't really think about that and view the image as presented by the software. But in either case, you as the viewer have control over how you are going to view the image, and you can select whatever best suits your viewing device. You don't have control over how the image was captured - so it is best to do that in an optimal way as far as the amount of detail is concerned (or with some other property optimized - like sensor real estate in the case of wide field images).
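The acuity arithmetic above, as a short sketch: the pixel pitch (and resulting PPI) at which a 1-arc-minute feature spans exactly one pixel for a given viewing distance. Distances are the example values from the post.

```python
import math

# Maximum useful pixel density for a given viewing distance and visual acuity.
def max_useful_ppi(viewing_distance_mm: float, acuity_arcmin: float = 1.0) -> float:
    pixel_mm = viewing_distance_mm * math.radians(acuity_arcmin / 60)
    return 25.4 / pixel_mm

print(round(max_useful_ppi(500)))   # ~175 ppi for a monitor at 50 cm
print(round(max_useful_ppi(250)))   # ~349 ppi for a phone at 25 cm
print(round(max_useful_ppi(100)))   # ~873 ppi only if held 10 cm from the eye
```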
Don't really know why, but here, look at this: left is your larger version and right is your smaller version (already stretched, in 8-bit) that I took and gave a simple touch up in Gimp - resized to the same size and applied some sharpening. Now the difference is not so great, right? Yes, the left image still looks a bit better - but I was working on already stretched data saved in 8-bit jpeg format. I don't mind people over sampling if they choose to do so - but you can't beat the laws of physics. If you want to over sample because it somehow makes it easier to apply processing algorithms and produce a better image - then do that - just be aware of what you are giving up (SNR) and what you won't be able to achieve (showing detail that simply is not there).
  24. Here is another interesting bit - look what happens if I reduce the sampling rate of both images equally, by 50%: now they start to look more alike, right? This means that the information in them starts to be the same (the reference image lost some of its information due to the lower sampling, and the image from the video did not have it to begin with, so they are now closer in information content). This shows that the image in the video was at least x2 over sampled.
  25. Both are over sampled, and that can easily be seen. The difference in sharpness between the two images does not come from pixel scale - it comes from the sharpness of the optics. The RASA is simply not a diffraction limited system. The Quattro might also be of lower quality than diffraction limited, but that depends on the coma corrector used. As is, the Quattro is diffraction limited in the center of the field (the coma free zone in the center - which is rather small for such a fast system). When you add a coma corrector, things regarding coma improve (obviously), but the CC can introduce other aberrations (often spherical) and lower the sharpness of the optics. In any case, the difference between the two systems is not down to pixel scale - it is down to the sharpness of the optics. Here is an assessment of the RASA 11 system and its sharpness: (this is taken from the RASA white paper: https://celestron-site-support-files.s3.amazonaws.com/support_files/RASA_White_Paper_2020_Web.pdf , appendix B). For 280mm of aperture, the size of the Airy disk is ~1", or ~3um (for 550nm), while the RMS of that pattern is about 2/3 of that (source: https://en.wikipedia.org/wiki/Airy_disk), so the RMS of a diffraction limited 11" aperture should be about 2um. So the RASA 11 produces a star image twice as large as a diffraction limited scope would, even without the influence of mount and atmosphere (and the above is given for a perfect telescope with zero manufacturing aberrations, not production units that are not quite as good as the model). By the way, there is a simple way to see what a properly sampled image looks like - just take an HST image of the same object at the scale that you are looking at. Such an image will contain all the information that can be contained at the given sampling rate - that is the upper limit - and if your image looks anywhere close to that, then you sampled properly and did not over sample. Look at this (although this is not an HST image, it is still not over sampled at the given resolution, so you can see the difference): left is the sharper image of NGC7331 from the video and right is an example of how sharp the image would be if properly sampled (not over sampled at this scale). I think the difference is obvious.
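A small sketch reproducing the Airy-disk figures quoted above for a 280mm F/2.2 system (the RASA 11): angular diameter 2.44*lambda/D, linear diameter 2.44*lambda*F#, with the RMS taken as roughly 2/3 of the diameter, as in the post.

```python
import math

wavelength_m = 550e-9   # 550 nm, as in the post
aperture_m = 0.280      # 280 mm aperture
f_ratio = 2.2           # RASA 11 focal ratio

angular_diameter_arcsec = math.degrees(2.44 * wavelength_m / aperture_m) * 3600
linear_diameter_um = 2.44 * wavelength_m * f_ratio * 1e6
rms_um = linear_diameter_um * 2 / 3

print(f"Airy disk: {angular_diameter_arcsec:.2f} arcsec, {linear_diameter_um:.1f} um")  # ~0.99", ~3.0 um
print(f"RMS of pattern: {rms_um:.1f} um")                                               # ~2.0 um
```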