Everything posted by vlaiv

  1. PECPrep wants to know how much the mount trails or leads the sidereal rate (what the actual error is), and it computes that from the guide star position versus its starting position. It can do this from a variety of logs, and all of them need the same thing - the guide star being displaced from its original position. If you enable guide output, you always return the guide star to (near) its original position. The calculation then needs to account for this, and you have to sum up thousands of small corrections. In an ideal world the two approaches would yield the same result, but there is a significant difference between them, and that difference is in how the error behaves. In each step there is some error: each time the guide star is measured there is some error in its position, and the mount itself won't react 100% perfectly to a correction, so the guide correction and the guide pulse won't completely match. When you add 1000 corrections with this sort of error, the error accumulates; but when you simply track the star, each measurement has some small error and these don't accumulate over time. For this reason it is much, much better to simply record how the mount performs rather than try to correct it, assume your correction is perfect, and then calculate the result (and do so a thousand times per recording). AutoPEC needs to do this because it has no idea where the star is - it relies on the corrections (however imperfect they are) to calculate where it thinks the guide star is. AutoPEC is less reliable than measuring PE on your own and then calculating the PEC curve, but it is the only way it can be automated while the mount tracks and guides. (A minimal simulation of this effect is sketched below.)
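To illustrate why summed corrections drift while direct measurements do not, here is a minimal sketch with made-up numbers (not PECPrep's actual algorithm) comparing the two error behaviours:

import numpy as np

# Compare directly measured guide star position against a position
# reconstructed by summing thousands of imperfect corrections.
rng = np.random.default_rng(0)
n_steps = 1000          # number of guide cycles in the recording (assumed)
sigma = 0.1             # per-step measurement / correction error, arcsec (assumed)

# Direct tracking: every sample has an independent error of ~sigma.
direct_errors = rng.normal(0.0, sigma, n_steps)

# Guided + reconstructed: each step's error is carried forward into the
# running sum of corrections, so errors accumulate like a random walk.
reconstructed_errors = np.cumsum(rng.normal(0.0, sigma, n_steps))

print(f"direct measurement error (RMS):   {np.std(direct_errors):.2f}")
print(f"summed-correction error (RMS):    {np.std(reconstructed_errors):.2f}")
# The accumulated error grows roughly as sigma * sqrt(n_steps), which is why
# recording the unguided PE gives a cleaner PEC curve.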
  2. Your mount does not have a Vixen-type clamp, but you could possibly fit one to it. For example this: https://www.firstlightoptics.com/dovetails-saddles-clamps/astro-essentials-mini-vixen-style-dovetail-clamp.html If you can unscrew the current telescope attachment piece (it looks like it is held by a single bolt in the middle), then you could possibly use the same bolt to attach a Vixen clamp instead. There are also bigger versions that are sturdier but also more expensive: https://www.firstlightoptics.com/dovetails-saddles-clamps/william-optics-90mm-saddle-plate-for-vixen-style-dovetail-rails.html
  3. I'm totally Lionel in this regard.
  4. Do be careful with such assertions. You have a lot of sky covered in such a mosaic, and there must be some level of distortion present when you project a large part of a sphere onto a flat plane. One of the projections used has the consequence of enlarging objects near the edge compared to those in the center. This is well known from the map of the world versus the globe in terms of landmass sizes (for example, Svalbard looks larger than Madagascar on Google Maps, but in reality it is only about 1/3 of the size - ~1500km vs ~500km). (The scale effect is sketched below.)
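As a rough illustration of the enlargement, here is a small sketch assuming a Mercator-style projection (which Google Maps uses); the latitudes are approximate:

import math

# Linear scale in a Mercator projection grows as 1/cos(latitude), so
# high-latitude features are drawn much larger than equatorial ones.
def mercator_scale(latitude_deg: float) -> float:
    return 1.0 / math.cos(math.radians(latitude_deg))

print(f"scale at Madagascar (~ -19 deg): {mercator_scale(-19):.2f}x")
print(f"scale at Svalbard   (~ +78 deg): {mercator_scale(78):.2f}x")
# ~1x vs ~5x linear scale - enough to make Svalbard look larger than
# Madagascar even though it is only about a third of the length.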
  5. I think it is rather good. I can't say whether it's worth the money or not, to be honest. There are other Baader filters that I think are worth the money - for example the Baader Solar Continuum filter. That one is expensive, but it is a rather interesting piece of kit - it works for white light solar, but it also works for lunar imaging to minimize seeing effects. It can also help with chromatic aberration (although it gives an extremely green view; if you can look past that, the view is razor sharp). It can be used for telescope testing as it passes a very narrow range of wavelengths where the eye is most sensitive, and so on - so you see, it is a versatile piece of kit and for that reason I think it's worth the money (for anyone wishing to do the above). On the other hand, the Baader Contrast Booster is, well, just a contrast booster, and it tames a bit of chromatic aberration. It's not a wonder filter. It does not remove it all. I still see a bluish halo on very bright stars with it. The cast it imparts on the image is rather subtle - yes, it is there, but after a bit, when the eye/brain adapts, the view looks normal. I guess whether it's worth the money will depend on your financial situation. It's not a clear-cut case like some bits that are definitively worth it or not worth it no matter how much extra cash you have lying around. Here, if you can afford it, then it's worth it, but if you can't, then I would not sweat too much about it.
  6. I can't really remember, as it was quite some time ago. Currently I have a 102/1000 achromat (as far as achromats go) and I need neither an aperture mask nor a yellow filter with it for the most part. Jupiter shows some CA, but I use a Baader Contrast Booster to tame it. There is a chart for achromats: you take the F/ratio and the aperture size in inches, and their ratio gives you the level of chromatic aberration. If you want to see how the scope will perform without a filter, just calculate this CA index. For example, if you have a 120/600 scope and you want to stop it down to, say, a CA index of 3 (just as a comparison, my F/10 achromat has a CA index of 2.5, as it is a 4-inch scope at F/10, so 10/4 = 2.5, and yes, it is indeed in the filterable range - but with very minor CA), then you would need roughly a 70mm mask. With that you have 2.75" of aperture and 600/70 = F/8.6, and their ratio is 3.11 (CA index). That will give you pretty much an ED experience. You can certainly get a similar experience with a CA index of 2.5 and a Baader Contrast Booster, so an aperture mask of 80mm (CA index of ~2.4). (A small worked sketch of this calculation follows.)
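Here is a small sketch of the CA index arithmetic described above (CA index = focal ratio divided by aperture in inches), using the 120/600 example from the post:

MM_PER_INCH = 25.4

def ca_index(aperture_mm: float, focal_length_mm: float) -> float:
    # CA index = focal ratio / aperture in inches
    focal_ratio = focal_length_mm / aperture_mm
    aperture_inches = aperture_mm / MM_PER_INCH
    return focal_ratio / aperture_inches

# 120/600 achromat, stopped down with different aperture masks:
for mask_mm in (120, 80, 70):
    print(f"{mask_mm}mm mask -> CA index {ca_index(mask_mm, 600):.2f}")
# 120mm -> ~1.1 (strong CA), 80mm -> ~2.4, 70mm -> ~3.1 (near-ED view)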
  7. I'm not sure the above rendition is correct in terms of size. The Moon is half a degree, and Andromeda spans 3 degrees at full extent, so you can fit about 6 full Moons across Andromeda from end to end. It looks like the Moon is a bit small in your composite image. Here, look at the size comparison from Stellarium (a crude composite :D) - the Moon just about fits between the M31 and M110 cores. In your image you can fit 2 Moons between the cores, so the Moon is about half the size it should be.
  8. I understand, but it is not really a sales pitch here - it is a very old convention for labelling different sensor sizes. If you visit that wiki page I linked earlier (here it is again: https://en.wikipedia.org/wiki/Image_sensor_format), you will see that most sensor sizes are in fact labeled as a fraction of that strange one inch, which is 16mm and not the 25.4mm you would expect. I do get your concern, and at one point I also wondered why they are labeled so strangely (although I never trusted that labeling and always checked the diagonal in mm - probably because I'm used to metric and never think in inches). The fact that Sony replaced the term 1" with Type 1 is also rather telling. I guess many people noticed this discrepancy and the "Type" nomenclature was introduced to remedy it somewhat.
  9. I don't think they are advertising that as a one inch sensor - they are just using the normal convention (which is admittedly very weird and counter-intuitive). If it were a case of consumer deceit (like using the same packaging for different amounts of food, for example), then companies like Sony would raise much more concern than ZWO. Sony sells way, way more sensors, and they also label them inappropriately in the above sense. Well, I stand corrected - Sony obviously decided to avoid all the confusion and started labeling them as Type rather than inch. I was not aware of this change; I seem to remember them also using the inch convention in their documents, but now it seems to be "Type":
  10. I don't know if it can be clearer than this: (taken from the ZWO website page on the ASI533MC Pro) You also have all the other necessary information regarding the light gathering potential of the sensor:
  11. ZWO publishes correct data. I think your confusion arises from the "1 inch" part, which is a remnant of old times and does not actually measure 1 inch across the diagonal. https://www.imaging-resource.com/news/2022/07/28/dealing-with-the-confusing-and-misleading-1-inch-type-image-sensor This means that when you see something like a 4/3 or 1/2" sensor size, you can't really calculate it in inches - rather, treat one inch as about 16mm in this context. A sensor that is 1/2" will have about an 8mm diagonal (see this page: https://en.wikipedia.org/wiki/Image_sensor_format). (A tiny conversion sketch follows.)
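A tiny sketch of this rule of thumb (an approximation based on treating one "type inch" as roughly 16mm of diagonal, as described above; real diagonals vary a little by sensor):

TYPE_INCH_MM = 16.0

def approx_diagonal_mm(type_fraction: float) -> float:
    # "1 inch" sensor types correspond to roughly a 16mm diagonal, not 25.4mm
    return type_fraction * TYPE_INCH_MM

for label, fraction in (('1"', 1.0), ('1/2"', 0.5), ("4/3", 4 / 3)):
    print(f"{label:>4} type -> ~{approx_diagonal_mm(fraction):.1f} mm diagonal")
# "1 inch" -> ~16mm, 1/2" -> ~8mm, 4/3 -> ~21mm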
  12. FWHM / 1.6 does not address the display part of things at all, because it does not need to. It is concerned only with the image acquisition part and answers the question: what should the sampling rate for image acquisition be in order to capture all the data there is? The viewing part has been made difficult partly by ever increasing screen resolutions, which are really a marketing trick more than anything else. Let's do some math to understand what the effective limit of display resolution is versus what is actually manufactured and sold. Most people have a visual acuity of 1 minute of arc. https://en.wikipedia.org/wiki/Visual_acuity (see the table and the MAR column - "minimum angle of resolution"). Very few people have sharper vision than that. That should really be the size of one pixel when viewed from a given distance, as it represents this gap: (you can see that the letters used to determine visual acuity are made out of square bits of equal size, and you need to be able to resolve a black and a white bit of that size in order to recognize the letter - so the minimum size of that is one pixel, either white or black). Now, let's put that into perspective when viewing a computer screen and a mobile screen. Let's assume that we use a computer screen from a distance of at least 50cm. At 50cm, 1 arc minute corresponds to 0.14544mm. If we turn that into a DPI/PPI value, it comes out to 25.4 / 0.14544 = ~175. You don't need a computer screen with more than ~175 DPI, as most humans can't resolve pixels that small - in fact, most computer screens are 96 DPI or thereabouts, not even pixels that small, and we still don't easily see pixelation. Phone screens are a different matter - their resolutions keep increasing, but we have no need for them. If we apply the same logic as above and say that we use smartphones 25cm away from our eyes, we come to an upper limit of about 300-350 DPI. If you do a Google search for smartphones with the highest PPI, you will find that the top 100 of them exceed that, ranging from 400-600 PPI - which is just nonsense; the human eye can't resolve detail that small. Or it could, but you would need to keep the phone 10cm away from your eyes, and even newborns might have issues focusing that close (in fact, I think babies can perhaps focus at 10cm, but that ability goes away quite soon after birth). OK, so computer screens are fine resolution-wise, but mobile phones are not - they have smaller pixels than needed. Further, to answer your question about viewing - you need to say what type of display style you are using in your app. Does your app scale the photo to the whole screen of the device, to part of the screen, or does it perhaps use some sort of zoom and pan feature? These are all different scenarios, and the size of the image presented to you will vary and will depend on the actual pixel count of the image versus the pixel count of the display device. I always tend to look at images at 100% zoom level, which means that one pixel of the image is mapped to one pixel of the display device. Most people don't really think about that and view the image as presented by the software. But in either case, you as a viewer have control over how you are going to view the image and can select the way that best suits you and your viewing device. You don't have control over how the image was captured - so it is best to do that in the optimal way as far as the amount of detail is concerned (or with some other property optimized, like sensor real estate in the case of wide field images). (There is a small sketch of the DPI arithmetic after this post.)
Don't really know why, but here, look at this: left is your larger version and right is your smaller version (in 8-bit, already stretched) that I took and gave a simple touch-up in Gimp - resized to the same size with some sharpening. Now the difference is not so great, right? Yes, the left image still looks a bit better, but I was working on already stretched data saved in 8-bit JPEG format. I don't mind people over sampling if they choose to do so - but you can't beat the laws of physics. If you want to over sample because it somehow makes it easier to apply processing algorithms and produce a better image, then do that - just be aware of what you are giving up (SNR) and what you won't be able to achieve (showing detail that is simply not there).
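Here is a small sketch of the viewing-distance arithmetic from the post above: the finest pixel pitch a typical eye (1 arcminute acuity) can resolve at a given distance, expressed as DPI/PPI:

import math

def max_useful_dpi(viewing_distance_mm: float, acuity_arcmin: float = 1.0) -> float:
    # Pixel pitch subtending one arcminute at the given distance, converted to DPI.
    pixel_pitch_mm = math.tan(math.radians(acuity_arcmin / 60.0)) * viewing_distance_mm
    return 25.4 / pixel_pitch_mm

print(f"computer screen at 50cm: ~{max_useful_dpi(500):.0f} DPI")   # ~175
print(f"phone screen at 25cm:    ~{max_useful_dpi(250):.0f} DPI")   # ~350
# Anything much above these figures is detail the eye cannot resolve
# at that viewing distance.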
  13. Here is another interesting bit - look what happens if I reduce the sampling rate of both images equally by 50%: now they start looking more alike, right? This means that the information in them starts to be the same (the reference image lost some of its information due to the lower sampling, and the image from the video did not have it to begin with, so they are now closer in information content). This shows that the image in the video was at least x2 over sampled.
  14. Both are over sampled, and that can easily be seen. The difference in sharpness between the two images does not come from pixel scale - it comes from the sharpness of the optics. The RASA is simply not a diffraction limited system. The Quattro might also be of lower quality than diffraction limited, but that depends on the coma corrector used. As is, the Quattro is diffraction limited in the center of the field (the coma-free zone in the center, which is rather small for such a fast system). When you add a coma corrector, things regarding coma improve (obviously), but the CC can introduce other aberrations (often spherical) and lower the sharpness of the optics. In any case, the difference between the two systems is not down to pixel scale - it is down to the sharpness of the optics. Here is an assessment of the RASA 11 system and its sharpness: (this is taken from the RASA white paper: https://celestron-site-support-files.s3.amazonaws.com/support_files/RASA_White_Paper_2020_Web.pdf , appendix B). For 280mm of aperture, the size of the Airy disk is ~1", or ~3um (for 550nm), while the RMS of that pattern is about 2/3 of that (source: https://en.wikipedia.org/wiki/Airy_disk), so the RMS of a diffraction limited 11" aperture should be about 2um. So the RASA 11 produces a star image twice as large as a diffraction limited scope would, without any influence from the mount and atmosphere (and the above is given for a perfect telescope with zero manufacturing aberrations, not production units, which are not quite as good as the model). By the way, there is a simple way to see what a properly sampled image looks like - just take an HST image of the same object at the scale you are looking at. Such an image will contain all the information that can be contained at the given sampling rate - that is the upper limit - and if your image looks anywhere close to that, then you sampled properly and did not over sample. Look at this (although this is not an HST image, it is still not over sampled at the given resolution, so you can see the difference): left is the sharper image of NGC7331 from the video and right is an example of how sharp the image would be if you sampled it properly (not over sampled at this scale). I think the difference is obvious. (The Airy disk numbers are sketched below.)
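A small sketch of the Airy disk numbers quoted above, assuming a perfect unobstructed aperture and the RASA 11 figures (280mm aperture, ~620mm focal length, green light at 550nm):

import math

wavelength_nm = 550.0
aperture_mm = 280.0
focal_length_mm = 620.0

# Angular diameter of the Airy disk (first minimum to first minimum).
angular_diameter_rad = 2.44 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3)
angular_diameter_arcsec = math.degrees(angular_diameter_rad) * 3600.0

# Linear diameter at the focal plane follows from the focal ratio.
focal_ratio = focal_length_mm / aperture_mm
linear_diameter_um = 2.44 * (wavelength_nm * 1e-3) * focal_ratio

print(f'Airy disk diameter: ~{angular_diameter_arcsec:.2f}"  (~{linear_diameter_um:.1f} um)')
# ~1" or ~3um, with an RMS spot of roughly two thirds of that (~2um).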
  15. I think I would rather go a little under sampled than a little over sampled. Usually an x2 difference in sampling really doesn't make that big a difference in the level of detail - it is noticeable, but barely so (provided that one is over and the other under sampled; it is much more obvious if both are under sampled). To me, the SNR gain is simply a better deal than having a larger image. Captured detail does not automatically translate into a sharp image - it is only the potential for a sharp image. Even a properly sampled (and even an under sampled) image must be sharpened a bit (to be closer to the true image undistorted by the optical elements), and how much you can sharpen also depends on how good your SNR is. I'd rather have a slightly under sampled image with higher SNR that I can sharpen a bit more than the potential for all the detail without being able to sharpen as much as I want because of noise issues.
  16. Well, it depends. I think the best way to think about it is this: imagine the telescope / mount system working without a digital camera - with analog photographic film. It will perform the way it does regardless of what is put at the focal plane. The telescope, the sky, and physics in general do not know what is at the focal plane, nor do they care. The system will produce an image at the focal plane with a certain level of detail. This image, again, does not depend on the sampling rate, on pixel size, or anything like that - you don't even need a camera with pixels; you can use film. FWHM is then a measure of this ability of the telescope / mount system (together with the atmosphere) that characterizes how sharp the resulting image is. Again, it does not depend on the pixels / camera used (this is not entirely true - pixels, being area sampling devices, do impact FWHM to a small degree, but that is a very complex topic). The FWHM of the signal will be what it is at the focal plane. Once we have established that there is some image at the focal plane, that it is what it is, and that the fact we are using pixels won't change that image, we can then address what pixels do and their ability to record the image. It is quite simple (I'm again simplifying things for the purpose of this explanation, but the effects I'm neglecting are rather small and would unnecessarily complicate the explanation): too large - right size - too small. Your pixels can be too large, just right, or too small. If pixels are too large, then you are under sampling. If pixels are just right ("Goldilocks pixels"), then you are sampling well. If pixels are too small, then you are over sampling. Under sampling is not a bad thing. It just means that you might not capture all the detail there is in the image (and by detail, think of detecting two close stars as two stars rather than an oval blob where you are not certain what it is - one or more features; that is the meaning of "to resolve", the root of the resolution thing). Optimum sampling means that you'll be able to resolve all there is to be resolved in the already formed image, and you will use the largest possible pixels to do so. Over sampling means that you will again be able to record / resolve all there is to be resolved in the image that is already formed at the focal plane, but you will do it with smaller pixels than needed. This hurts your SNR, as smaller pixels simply mean that you split the light over more "buckets" than you need to, and each "bucket" gets less light for that reason. Less light = lower signal = lower SNR / noisier image. In the above sense, pixel size is the ability to record detail; FWHM, however, is independent of that - FWHM is an intrinsic property of the image. You can measure it from recorded data, and if you are over sampled or correctly sampled, you will measure the correct FWHM (with the small caveat of pixel blur - again, a technical, complex detail). If you are under sampled, then you will start losing detail and your FWHM measurement will be off by a small amount (in fact, the amount of error depends on how much you under sample, and in usual cases it's not that big of a deal). Again, in that sense, measuring FWHM from the image you've recorded does tell you how sharp the image at the focal plane was (regardless of what we used to record it).
Once you measure FWHM, you have an idea of what the Goldilocks pixels are in the above sense: take the FWHM, divide that value by 1.6, and this gives you the sampling rate you should be aiming for (take that into account together with your focal length and/or any focal reducers to get the wanted pixel size, then bin accordingly, or replace the camera if that makes more sense, or, as a last option, don't bother - if you can live with the SNR loss due to over sampling). (A small sketch of this calculation follows.)
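Here is a small sketch of the FWHM / 1.6 rule described above, together with the pixel size it implies for a given focal length (the FWHM and focal length below are illustrative assumptions, not recommendations):

measured_fwhm_arcsec = 3.2        # typical long-exposure FWHM (assumed)
focal_length_mm = 1000.0          # telescope focal length (assumed)

target_sampling = measured_fwhm_arcsec / 1.6              # arcsec/px to aim for
ideal_pixel_um = target_sampling * focal_length_mm / 206.3

print(f'target sampling rate: {target_sampling:.2f}"/px')
print(f"ideal pixel size at {focal_length_mm:.0f}mm: ~{ideal_pixel_um:.1f} um")
# e.g. 3.2" FWHM -> 2"/px, which at 1000mm means ~9.7um pixels - so a
# 3.76um-pixel camera would want roughly 2x2 or 3x3 binning.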
  17. With said telescope - yes, but not with all telescopes. How did you get that figure? I'm guessing you took the 1000mm focal length of your scope and the 3.76um pixel size and calculated 206.3 * 3.76 / 1000 = ~0.78"/px, right? Now if you take a camera that has, say, a 2.4um pixel size, you will get a different sampling rate - but how is this related to what the telescope can resolve? You did nothing to the telescope itself; you just used a different camera. The telescope remained the same and thus can't have its potential resolution changed. If you want to know what your telescope can resolve in outer space, with no need for tracking and without the impact of the atmosphere (and under the assumption that your optics are diffraction limited), then you can use the planetary sampling formula, which says that your F/ratio should be at most x5 the pixel size. For your telescope that is F/7.7, so the ideal pixel size would be 7.7 / 5 = 1.54um, and the corresponding sampling rate would be 206.3 * 1.54 / 1000 = 0.3177 = ~0.32"/px. That is the maximum potential resolution of your telescope alone (and that is for blue light at 400nm; for, say, Ha at 656nm it is going to be different - about x1.5 lower). In any case, the optics have the potential to deliver a certain resolution of image. Think of that as an analog image. In order to properly digitize it, you need to sample at certain intervals (use a certain pixel size). Using too fine a sampling rate (too small pixels) is a waste of SNR, as you don't need that fine a pixel scale to record the image as is, and using smaller pixels just makes each of them receive a lower signal (the same amount of light is spread over more pixels, so each gets less light - less signal, lower SNR). (The arithmetic is sketched below.)
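A sketch of the two calculations above: the sampling rate a given pixel size produces, and the planetary "F/ratio = 5 x pixel size" limit for the scope on its own (the 1000mm focal length and F/7.7 are taken from the post):

def sampling_rate_arcsec_per_px(pixel_um: float, focal_length_mm: float) -> float:
    return 206.3 * pixel_um / focal_length_mm

focal_length_mm = 1000.0
focal_ratio = 7.7

print(f'3.76um pixels: {sampling_rate_arcsec_per_px(3.76, focal_length_mm):.2f}"/px')
print(f'2.40um pixels: {sampling_rate_arcsec_per_px(2.40, focal_length_mm):.2f}"/px')

# Critical (planetary) sampling: ideal pixel size is F/ratio divided by 5.
ideal_pixel_um = focal_ratio / 5.0
print(f'ideal pixel for critical sampling: {ideal_pixel_um:.2f} um '
      f'-> {sampling_rate_arcsec_per_px(ideal_pixel_um, focal_length_mm):.2f}"/px')
# ~1.54um, ~0.32"/px - the finest detail the aperture alone can deliver.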
  18. Yes. I'd add the following to make things clearer: - the sampling rate (rather than potential resolution) is what you get when you pair a certain focal length with a certain pixel size (or pixel spacing, to be even more correct). - the potential resolution of the system depends on aperture size, optical figure (diffraction limited optics or not - spot diagram RMS), mount tracking performance (guide RMS), and seeing conditions. In most cases we "calculate" for diffraction limited optics, although in some cases one should really account for the spot diagram RMS if it is too large. - you want to achieve a good match between the two above - the first is easy to calculate, and the second is easy to measure. Don't just settle for one image / one session - measure across sessions to get a feel for the average FWHM you will get from your system, as each night will be different. I've also found discrepancies between FWHM measurements in software - different software reports different figures for some reason. I tend to trust ImageJ/AstroImageJ for this measurement. In ImageJ you can't measure FWHM directly like in AstroImageJ (which has a nice shortcut for that - just alt+click on the star of interest), but you can plot the profile of a star and then fit a Gaussian shape to that profile to calculate the FWHM. Both methods give very accurate answers on simulated Gaussian profiles and agree on results (for stars that are round, using a horizontal profile in ImageJ). To reiterate - the arc seconds per pixel that you get for a certain focal length and a certain pixel size is not directly related to potential resolution. Rather, think of it as the millimeter scale on your caliper. The machining precision of the caliper (how precisely it can physically measure) - that is the potential resolution; that is telescope aperture + seeing + mount. It does you no good to have a very fine micrometer scale if your caliper is loose and you can't physically measure precisely enough. (A small profile-fitting sketch follows.)
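Here is a small sketch (not ImageJ/AstroImageJ itself, and using a simulated profile) of the "fit a Gaussian to a star profile" approach described above:

import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, center, sigma, offset):
    return amplitude * np.exp(-0.5 * ((x - center) / sigma) ** 2) + offset

# Simulated horizontal profile across a star: sigma = 2 px -> FWHM ~4.7 px.
x = np.arange(0, 31, dtype=float)
profile = gaussian(x, 1000.0, 15.0, 2.0, 50.0)
profile += np.random.default_rng(1).normal(0.0, 5.0, x.size)   # background noise

popt, _ = curve_fit(gaussian, x, profile, p0=[profile.max(), 15.0, 3.0, 0.0])
fwhm_px = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])       # FWHM = 2.355 * sigma

print(f"fitted FWHM: {fwhm_px:.2f} px")
# Multiply by your sampling rate ("/px) to get the FWHM in arcseconds.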
  19. Sometimes I think that my worst astronomy purchase was buying into astronomy
  20. I think this both makes a point and shows how we misunderstood each other. To reiterate "the law": "If you can't take your stack, do a basic white point / gamma 2.2 / black point stretch, and get a nice looking image - you are doing it wrong." Since you don't like big crude stars from small amateur optics - "you are doing it wrong" - take big amateur optics. And a second thing - I did not say that an image can't be made nicer, better looking, or more attractive with extensive processing. My point was exactly as expressed: if you do a very basic stretch and you have no obvious objections to the image, then you are doing it right. Sure, you can pull out more with a stronger stretch, but that is not the point of the above "law". The point of the above "law" was to point to an obvious flaw in the data gathering / processing step. In this particular case you don't like the star shapes, which does indicate that the optics you were using are not without flaws. I don't know what was used, but given that you are into fast optics / going deep at the moment, I suspect one of two: the Samyang 135mm F/2 or the RASA 8. Neither of these systems is diffraction limited, so no wonder you don't like the stars if it was taken with one of them. Now again, I'm not saying that you should not use such systems if you want to accomplish something in particular - but I do think the above still applies. Just a basic stretch will reveal issues that would otherwise be masked by "special processing".
  21. Well, now that you put it that way and we need to discuss aesthetics, it all goes down the drain. My point with this "first law" was that one should really hone the capture / gathering step as well as the data reduction part - calibration and stacking - rather than rely ever more on sophisticated algorithms to produce a nice looking image. If your data looks nice after a basic 3 step stretch - meaning no need for sharpening / noise reduction / star removal / AI assisted routines / star rounding or reduction or whatever - and by nice I mean without any artifacts, showing at least some level of the target / nebulosity (whatever was captured), with relatively tight round stars and not too much noise, then it will be easy to touch up into a great image without excessive use of tools. Here is an example. This is by no means a great image - but look at it for a moment - and this really is just a 3 step stretch: set the white point as low as possible without starting to clip the signal, do a 2.2-2.4 gamma stretch, and move the black point up as needed. (A minimal sketch of such a stretch is below.) It is a mono-only image - is there anything obviously wrong with it, or does it look OK / nice (not great, not showing all there is to show - just plain nice)?
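A minimal sketch of the basic 3 step stretch described above (white point, gamma, black point), applied to a linear stack normalized to the 0..1 range; the example data and chosen values are purely illustrative:

import numpy as np

def basic_stretch(stack: np.ndarray, white: float, gamma: float = 2.2,
                  black: float = 0.0) -> np.ndarray:
    data = np.clip(stack / white, 0.0, 1.0)      # set white point
    data = data ** (1.0 / gamma)                 # gamma stretch
    data = (data - black) / (1.0 - black)        # raise black point
    return np.clip(data, 0.0, 1.0)

# Pick the white point just below where clipping starts, then raise the
# black point until the background sits where you want it.
stack = np.random.default_rng(0).random((100, 100)) * 0.2
stretched = basic_stretch(stack, white=0.25, gamma=2.2, black=0.05)
print(stretched.min(), stretched.max())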
  22. I'm feeling cheeky tonight. Here it goes: "If you can't take your stack, do a basic white point / gamma 2.2 / black point stretch, and get a nice looking image - you are doing it wrong." Aaaaand discuss!
  23. Maybe put it into numbers to help you out? There are several things that contribute to the signal level in a single sub: 1. Sub duration - if you want good SNR in a single sub, you need to expose for a longer time. However, in the context of stacking this becomes a somewhat moot point, since it is the total exposure time that dictates the final SNR. What is important, however, is to swamp the read noise with the background noise level of a single exposure (look up how to determine single exposure length here on SGL - there are numerous threads). 2. Aperture size. A large aperture gathers more light than a small aperture in the same amount of time - more signal, better SNR. 3. Pixel scale. A pixel covers part of the sky - the larger the part of the sky a pixel covers, the more signal it will record (strictly speaking this is true only for extended light sources, but those are the ones we are primarily interested in; no one complains that the stars are too faint - mostly it is nebulosity / galaxies that are faint). 4. Quantum efficiency of your system. This includes any losses in the optical train, any filters used, and finally the QE of your camera. There are several constraints on the above - for example, it does not make sense to talk about an aperture increase if you don't want to swap out the scope. There is only a limited sense in which we can talk about pixel scale: it generally depends on the physical pixel size in microns and the focal length of the telescope. You can't change the first, and you can only slightly alter the second - you can add a focal reducer that will change the pixel scale. There is also a way to "alter" pixel size or the physical pixel dimensions, and it is called binning. It is not a complicated topic per se, but it gets complicated when you account for OSC cameras and so on. You can alter the QE of your system by changing or modifying your camera (since it is not modded). (A rough relative-signal sketch follows.)
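As a rough sketch (my own simplification, not a formula from the post) of how the listed factors combine for an extended object: the signal collected per pixel scales with aperture area, the sky area each pixel covers, exposure time, and overall QE / transmission:

def relative_pixel_signal(aperture_mm: float, pixel_scale_arcsec: float,
                          exposure_s: float, qe: float) -> float:
    # Relative (unitless) signal per pixel from an extended source.
    return (aperture_mm ** 2) * (pixel_scale_arcsec ** 2) * exposure_s * qe

setup_a = relative_pixel_signal(aperture_mm=130, pixel_scale_arcsec=1.0,
                                exposure_s=120, qe=0.8)
setup_b = relative_pixel_signal(aperture_mm=130, pixel_scale_arcsec=2.0,
                                exposure_s=120, qe=0.8)   # e.g. 2x2 binned
print(f'2"/px collects {setup_b / setup_a:.0f}x the signal per pixel of 1"/px')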
  24. Here are some tips that might be helpful. 1. Watch out for noise. In fact, SNR is different in different parts of the image, and you can't sharpen the whole image the same way if you want to get the most out of your sharpening. The best approach I've found is as follows (Gimp, but it can be adapted to other software): - create a copy of the image - sharpen that copy until you are satisfied with how the bright parts look. This will inevitably create a lot of noise in the background. - add a layer mask to this sharpened layer. Copy the original image into the layer mask. Invert the mask as necessary - bright parts should show the sharpened layer and dark parts should show the original. Stretch the mask more than you did the image. With the opacity slider for the whole layer, control how much of it is blended with the original image. - when happy, flatten the image. The above approach can work for denoising as well. The only difference is that you want the inverted mask - you want to show the denoised version in the dark areas where SNR is poor. Signs that you've over sharpened: 1. There is a dark ring around bright stars. 2. You've reduced stars to bright single pixels. 3. All stars in the image are at peak brightness (there is no natural variation of brightness among stars). 4. Noise, obviously. 5. You've started to create posterization artifacts. (A small scripted equivalent of the masked blend is sketched below.)
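A small scripted equivalent (an approximation of the Gimp layer-mask workflow above, not an exact reproduction): blend a sharpened copy back into the original using the stretched image itself as the mask, so bright high-SNR areas get sharpened while the faint background stays untouched:

import numpy as np
from scipy.ndimage import gaussian_filter

def masked_sharpen(image: np.ndarray, amount: float = 1.5,
                   blur_sigma: float = 2.0, mask_gamma: float = 0.5) -> np.ndarray:
    blurred = gaussian_filter(image, blur_sigma)
    sharpened = image + amount * (image - blurred)    # simple unsharp mask
    mask = np.clip(image, 0.0, 1.0) ** mask_gamma     # stretch mask toward bright areas
    return np.clip(mask * sharpened + (1.0 - mask) * image, 0.0, 1.0)

# Example on fake 0..1 data; use (1 - mask) instead to apply the same
# trick for denoising the dark background.
image = np.random.default_rng(0).random((64, 64)) * 0.1
result = masked_sharpen(image)
print(result.shape, result.max())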