Posts posted by vlaiv

  1. 13 minutes ago, AstroMuni said:

    Could you explain the reasons for this please? I can understand that a 6" would do a better job than a 5", and all that glass wouldn't help either, but is there something I am missing?

    A telescope needs to be at least diffraction limited for best planetary performance.

    Telescopes that have a built-in field flattener, or have a FF/FR added, are usually not diffraction limited - which means that their Airy pattern is larger than it should be for the given aperture, or put another way, they behave as smaller-aperture telescopes as far as resolving capability goes.

    Look at the spot diagram of the Askar scope, and in particular the RMS value on axis:

    [spot diagram of the Askar 130PHQ]

    It is 1.617µm (which is a very good value by the way, so this scope is really not that much hampered by being corrected for photography).

    Now, let's do some math:

    The diameter of the Airy disk of a diffraction limited F/7.7 scope is ~4.3µm:

    [Airy disk size formula]

    The radius is therefore half that - or ~2.15µm.

    There is a relationship between the Airy profile and the corresponding Gaussian profile that goes like this:

    [Airy-to-Gaussian profile relationship]

    So we are looking at about 1.46µm RMS (as an approximation) - and we now have two values that we can compare:

    ~1.62 vs ~1.46

    It is clear that the Askar produces a larger Airy pattern than a diffraction limited telescope would - and is not diffraction limited. It blurs the image more (much like a smaller aperture would). The arithmetic is sketched below.
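    A minimal sketch of that arithmetic in Python. Note that the Airy-radius-to-Gaussian-RMS factor of ~1.47 is inferred from the numbers quoted above (2.15 / 1.46), since the original formula images are not preserved - treat it as an assumption:

    ```python
    # Sketch of the comparison above. The ~1.47 conversion factor is inferred
    # from the quoted numbers (2.15 / 1.46), not taken from a reference.
    airy_diameter_um = 4.3                   # diffraction limited F/7.7 scope
    airy_radius_um = airy_diameter_um / 2    # ~2.15um
    gaussian_rms_um = airy_radius_um / 1.47  # ~1.46um equivalent Gaussian RMS

    spot_rms_um = 1.617  # on-axis RMS from the Askar spot diagram

    print(f"diffraction limited RMS ~{gaussian_rms_um:.2f}um vs Askar {spot_rms_um}um")
    # larger measured RMS -> not diffraction limited
    ```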

     

  2. 5 minutes ago, Lee_P said:

    "Imaging efficiency at short exposures improves by 26%. FPS on video mode improves by 30%."

    Yeah, I would not really consider that seriously. At best it could read: FPS potential increased by 30% (but not necessarily actual FPS), and as for imaging efficiency - no comment there :D

    You need to match the pixel size of your planetary camera to the F/ratio of the scope of choice. I would use a simple telescope design for planetary imaging rather than a telescope meant for DSO imaging. There are some tradeoffs when you aim for a good flat field - it usually sacrifices diffraction limited performance in the center.

    Something like a 6" F/8 Newtonian will eat that Askar for lunch on planets.

    Anyway:

    - camera pixels must match the F/ratio of the setup (you can adjust the F/ratio with a Barlow lens) - pixel size × 5 = F/ratio, so for, say, a camera with 2.9µm pixels you want your F/ratio to be around F/14.5

    - get a camera capable of high FPS (USB 3.0 connection, and preferably a computer capable of recording data at a high rate, so an SSD / NVMe drive)

    - high QE

    - as low read noise as you can get

    - for lunar and solar work mono is better; for planets, color/OSC is less hassle

    - for white light solar - either get a full-aperture Baader solar foil filter, ND3 version (photographic), for Newtonians / Maks (anything with a mirror), while for a refractor get a Herschel prism. Get a Baader Solar Continuum filter as well - here the formula for F/ratio is pixel_size × 3.7 (if you use the Baader Solar Continuum filter)

    - for solar Ha - that is a whole new ballgame - get a solar telescope, and here the F/ratio is again different: F/ratio = pixel_size × 3 (all three rules are sketched in code below)
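    A small helper capturing the three F/ratio rules of thumb above (the multipliers are exactly as quoted; function and parameter names are just illustrative):

    ```python
    # Target F/ratio = pixel size (um) x multiplier, per the list above:
    # 5 for general planetary, 3.7 with a Baader Solar Continuum filter,
    # 3 for solar Ha.
    MULTIPLIERS = {"planetary": 5.0, "solar_continuum": 3.7, "solar_ha": 3.0}

    def target_f_ratio(pixel_size_um: float, mode: str = "planetary") -> float:
        return pixel_size_um * MULTIPLIERS[mode]

    def barlow_needed(native_f_ratio: float, pixel_size_um: float,
                      mode: str = "planetary") -> float:
        # Extra magnification (e.g. from a Barlow) needed to reach the target.
        return target_f_ratio(pixel_size_um, mode) / native_f_ratio

    print(target_f_ratio(2.9))      # 14.5 - matches the example above
    print(barlow_needed(7.7, 2.9))  # ~1.88 - roughly a 2x Barlow on an F/7.7 scope
    ```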

     

  3. 53 minutes ago, mgutierrez said:

    Do you know how autopec calculates these values, and why they are != 0 even though the curve is flat?

    No idea what is going on there.

    Did you have a previous autopec / PEC curve loaded? Maybe it just repeated earlier corrections?

  4. 7 minutes ago, mgutierrez said:

    Looking for info about PECPrep I found this thread. There is only one thing I don't fully get. Why do we need to disable guide output from PHD2? If we enable the output, corrections are also written to the log file, and hence the error could be computed afterwards, no? Why is PECPrep not able to do it, but autopec via EQMOD can?

    PECPrep wants to know how much the mount trails or leads the sidereal rate (what the actual error is), and it computes that from the guide star position versus its start position.

    It can do this from a variety of logs, and all of them need to do the same thing - have the guide star be displaced from its original position.

    If you enable guide output - you always return the guide star to (near) its original position. Calculations then need to account for this, and you need to sum up thousands of small corrections.

    In an ideal world these two approaches would yield the same result - but there is a significant difference between them, and that difference is how error behaves. In each step there is some error: each time the guide star is measured there is some error in its position, and the mount itself won't react 100% perfectly to a correction, so the guide correction and the guide pulse won't completely match.

    When you add 1000 corrections with this sort of error - the error accumulates, but when you simply track the star, each measurement has some small error that does not accumulate over time (see the toy simulation below). For this reason it is much, much better to simply record how the mount performs, rather than to correct it, assume your correction is perfect, and then calculate the result (and do so a thousand times per recording).

    Autopec needs to do this because it has no idea where the star is - it relies on the corrections (however imperfect they are) to calculate where it thinks the guide star is. Autopec is less reliable than measuring PE on your own and then calculating the PEC curve, but it is the only way this can be automated while the mount tracks and guides.
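    A toy simulation of that difference (just a sketch of the statistics, not PECPrep's or autopec's actual algorithm):

    ```python
    # Toy comparison: a running sum of imperfect corrections behaves like a
    # random walk, while independent position measurements do not accumulate.
    import numpy as np

    rng = np.random.default_rng(0)
    n_steps = 1000
    noise = 0.1  # arbitrary per-step error, e.g. arcsec

    # Summing corrections: each step's error is baked into the running total,
    # which drifts like a random walk (~ noise * sqrt(n) by the end).
    accumulated = np.cumsum(rng.normal(0.0, noise, n_steps))

    # Direct measurement: every sample carries the same small independent error.
    independent = rng.normal(0.0, noise, n_steps)

    print(f"final drift of summed corrections: {accumulated[-1]:+.2f}")
    print(f"typical direct measurement error:  {independent.std():.2f}")  # ~0.1
    ```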

  5. 5 minutes ago, stephec said:

    I don't know anything about quick release though? 

    Your mount does not have a Vixen-type clamp, but you could possibly fit one to it.

    For example this:

    https://www.firstlightoptics.com/dovetails-saddles-clamps/astro-essentials-mini-vixen-style-dovetail-clamp.html
     

    If you can unscrew the current telescope attachment piece (it looks like it is held by a single bolt in the middle), then you could possibly use the same bolt to attach a Vixen clamp instead.

    There are also bigger versions that are sturdier but also more expensive:

    https://www.firstlightoptics.com/dovetails-saddles-clamps/william-optics-90mm-saddle-plate-for-vixen-style-dovetail-rails.html

  6. 49 minutes ago, ollypenrice said:

    I do like finding out unexpected relationships between well-known objects and also seeing their relative sizes. The North America Nebula, for instance, is smaller than I thought.

    Do be careful with such assertions.

    You have a lot of sky covered in such a mosaic, and there must be some level of distortion present when you project a large part of a sphere onto a flat plane.

    One of the projections used has the consequence of enlarging objects that are close to the edge versus those in the center.

    This is well known from world maps vs the globe when it comes to the size of landmasses (for example, Svalbard looks larger than Madagascar on Google Maps, but in reality it is only about a third of the size - ~500km long vs Madagascar's ~1500km).
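    For the web map case specifically, the Mercator projection stretches features by roughly 1/cos(latitude). A quick illustration (the latitudes here are approximate):

    ```python
    # Mercator scale factor is ~1/cos(latitude): features near the poles are
    # stretched far more than those near the equator. Latitudes approximate.
    import math

    def mercator_scale(lat_deg: float) -> float:
        return 1 / math.cos(math.radians(lat_deg))

    print(f"Svalbard (~78N):   x{mercator_scale(78):.1f}")   # ~x4.8 linear stretch
    print(f"Madagascar (~19S): x{mercator_scale(19):.2f}")   # ~x1.06
    ```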

  7. Just now, Mark2022 said:

    By the way, how would you rate this filter? I was going to buy one some time back, but I have read so much about the #8 being able to tame CA, and I was never sure or confident that the Baader would really do as I expected, so £10 for a yellow #8 rather than 10x that for the Baader made sense.

    I think it is rather good. I can't say if it's worth the money or not, to be honest. There are other Baader filters that I think are worth the money - for example the Baader Solar Continuum filter. That one is expensive, but it is a rather interesting piece of kit - it works for white light solar, but it also works for lunar imaging to minimize seeing effects.

    It can also help with chromatic aberration (although it gives an extremely green view - but if you can look past that, the view is razor sharp). It can be used for telescope testing, as it passes a very narrow range of wavelengths where the eye is most sensitive, and so on - so you see, it is a versatile piece of kit, and for that reason I think it's worth the money (for anyone wishing to do the above).

    On the other hand, the Baader Contrast Booster is, well - just a contrast booster, and it tames a bit of chromatic aberration. It's not a wonder filter. It does not remove it all. I still see a bluish halo on very bright stars with it. The cast that it imparts on the image is rather subtle - yes, it is there, but after a bit, when the eye/brain adapts, the view looks normal.

    I guess whether it is worth the money will depend on your financial situation. It's not a clear-cut case - unlike some bits that are definitively worth it (or not) no matter how much extra cash you have lying around. Here - if you can afford it, then it's worth it, but if you can't, then I would not sweat too much about it.

     

  8. 12 hours ago, Mark2022 said:

    That #8 filter does a fine job Vlaiv (I just recently bought one myself but haven't had a chance to use it on my ST120). Even with no mask it has a good effect, but I think stopping down 20% aperture (i.e. to 80mm in your example) gives an excellent result. Not quite ED quality but nearing it, would you say?

    I can't really remember, as it was quite some time ago.

    Currently I have a 102/1000 achromat, and (as far as achromats go) I don't need either an aperture mask or a yellow filter with it for the most part. Jupiter shows some CA, but I use the Baader Contrast Booster to tame it.

    There is a chart for achromats:

    [chromatic aberration chart for achromats]

    You take the F/ratio and the aperture size, and their ratio gives you the level of chromatic aberration. If you want to see how the scope will perform without a filter - just calculate this CA index (there is a small helper below).

    For example, say you have a 120/600 scope and you want to stop it down to a CA index of 3 (just as a comparison, my F/10 achromat has a CA index of 2.5, as it is a 4 inch scope at F/10, so 10/4 = 2.5 - and yes, it is indeed in the filterable range, but with very minor CA).

    Then you would need roughly a 70mm mask. With that you have 2.75" of aperture and 600/70 = F/8.6, and their ratio is 3.11 (the CA index).

    That will give you pretty much an ED experience. You can certainly get a similar experience with a CA index of 2.5 and the Baader Contrast Booster, so an aperture mask of 80mm (CA index of ~2.4).
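    The CA index calculation from the example above, as a tiny helper (the function name is mine):

    ```python
    # CA index = focal ratio / aperture in inches - higher means less visible
    # chromatic aberration in an achromat.
    MM_PER_INCH = 25.4

    def ca_index(aperture_mm: float, focal_length_mm: float) -> float:
        f_ratio = focal_length_mm / aperture_mm
        return f_ratio / (aperture_mm / MM_PER_INCH)

    print(f"{ca_index(120, 600):.2f}")   # native 120/600: ~1.06 - heavy CA
    print(f"{ca_index(70, 600):.2f}")    # 70mm mask: ~3.11 - near-ED view
    print(f"{ca_index(80, 600):.2f}")    # 80mm mask: ~2.38 - good with filter
    print(f"{ca_index(102, 1000):.2f}")  # 102/1000 achromat: ~2.44
    ```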

  9. I'm not sure the above rendition is correct in terms of size.

    The Moon is half a degree, and Andromeda spans 3 degrees in full extent, so you can fit about 6 full Moons across Andromeda from end to end.

    It looks like the Moon is a bit small in your composite image?

    Here - look at the size comparison from Stellarium (crude composite :D)

    [Stellarium size comparison: the Moon next to M31 and M110]

    The Moon just about fits between the M31 and M110 cores.

    In your image you can fit the Moon twice between the cores - so the Moon is about half the size it should be.

  10. 11 minutes ago, Albir phil said:

    The reason I brought this issue up is because I bought a ZWO 533, because they sold it as a 1 inch sensor. When it came - hello, it's 16mm. No problem with the camera, just the sales pitch 🤔

    I understand, but it is not really a sales pitch here - it is a very old convention for labeling different sensor sizes.

    If you visit that wiki page I linked earlier (here it is again: https://en.wikipedia.org/wiki/Image_sensor_format), you will see that most sensor sizes are in fact labeled as a fraction of that strange one inch, which is 16mm and not 25.4mm as you would expect.

    I do get your concern, and at one point I also wondered why they are labeled so strangely (although I never trusted that labeling and always checked the diagonal in mm - probably because I'm used to metric and never think in inches).

    The fact that Sony replaced the term 1" with Type 1 is also rather telling. I guess many people noticed this discrepancy, and the "Type" nomenclature was introduced to remedy it somewhat.

  11. 2 minutes ago, Albir phil said:

    Yes, I get that. My point is: don't advertise it as having a 1 inch sensor when it is clear by looking at it that this is physically not the case. If I buy an 80mm aperture scope, I get a scope which is 80mm, not less.

    I don't think they are advertising it as a one inch sensor - they are just using the normal convention (which is admittedly very weird and counterintuitive).

    If it were a case of consumer deceit (like using the same packaging for different amounts of food, for example), then companies like Sony would raise much more concern than ZWO.

    Sony sells way, way more sensors - and they also label them "inappropriately" in the above sense.

    Well, I stand corrected - Sony obviously decided to avoid all the confusion and started labeling them as Type rather than inch :D

    I was not aware of this change - I seem to remember them also using the inch convention in their documents, but now it seems to be "Type":

    [Sony documentation showing "Type" sensor labeling]

     

  12. 28 minutes ago, Albir phil said:

    Perhaps we should be informed of the actual aperture of the sensor, as it is advertised with telescopes - then we know what we are getting for our money 🤔

    I don't know if it can be clearer than this:

    [sensor dimensions from the ZWO product page]

    (taken from the ZWO website page on the ASI533MC Pro)

    You also have all the other necessary information regarding the light gathering potential of the sensor:

    [sensor performance figures from the same page]

  13. 48 minutes ago, Albir phil said:

    Much has been said regarding the 533 sensor size, but does anyone know the actual size of the sensor area that records photons when imaging - not the size advertised by ZWO? 🤔

    ZWO publishes correct data.

    I think that your confusion arises from the "1 inch" part - which is a remnant of old times and does not actually measure 1 inch diagonally.

    https://www.imaging-resource.com/news/2022/07/28/dealing-with-the-confusing-and-misleading-1-inch-type-image-sensor

    Quote

    The history of the "1-inch sensor"

    The odd naming convention goes back to the dimensions of a hypothetical glass tube that could surround the 1-inch sensor. Live broadcasting cameras in the 1950s used cathode-ray tubes (CRT) to project an image line after line. The glass tube that surrounded a signal plate had a 1-inch diameter, although the photosensitive area of the tube was only about 0.63" in diameter – or around 16mm. The typical diagonal of a modern 1-inch type sensor is, you guessed it, 16mm (15.9mm, to be precise).

     

    After all this time, and despite using wildly different technology than CRT broadcasting cameras in the 1950s, the 1-inch nomenclature has remained. The photosensitive area in question wasn't an inch in diameter back then, and it still isn't now. Modern 1-inch sensors refer to a hypothetical CRT tube that would be an inch in diameter to theoretically fit around a 1-inch type image sensor.

    This means that when you see something like a 4/3 or 1/2" sensor size - you can't really calculate those in inches; rather, take one inch to be about 16mm in this context.

    A sensor that is 1/2" will have about an 8mm diagonal.

    [table of sensor format names and their diagonals]

    (see this page: https://en.wikipedia.org/wiki/Image_sensor_format)

  14. 6 hours ago, ONIKKINEN said:

    @vlaiv I have had this one question for a while, and this thread seems to be the place for it. How does the FWHM / 1.6 rule take into account different monitor resolutions and observer preferences? Surely there is a subjective portion to this theory as well, since there is no such thing as a typical monitor or typical observer, and both of those things will greatly affect how the image is seen and appreciated.

    FWHM / 1.6 does not address the display part of things at all - because it does not need to.

    It is concerned only with the image acquisition part, and it answers the question: what should the sampling rate for image acquisition be in order to capture all the data there is?

    The viewing part has been made difficult partly by ever increasing screen resolutions - which is really a marketing trick more than anything else.

    Let's do some math to understand the effective limit of display resolution versus what is actually manufactured and sold.

    Most people have visual acuity of 1 minute of arc. https://en.wikipedia.org/wiki/Visual_acuity

    (see table and MAR column - "minimum angle of resolution")

    Very few people have sharper vision than that. That should really be the size of a pixel when viewed from a distance, as it represents this gap:

    [eye chart letter built from equal-sized square elements]

    (you can see that the letters used to determine visual acuity are made out of square bits of equal size - and you need to be able to resolve black and white bits of that size in order to recognize the letter, so the minimum feature is one such bit, either white or black, i.e. one pixel).

    Now, let's put that into perspective for viewing a computer screen and a mobile screen.

    Let's assume that we use a computer screen from a distance of at least 50cm. At 50cm, 1 arc minute is represented by 0.14544mm.

    If we turn that into a DPI/PPI value, it is 25.4 / 0.14544 = ~174 (sketched in code below).

    You don't need a computer screen with more than ~174 DPI, as most humans can't resolve pixels that small. In fact, most computer screens are about 96 DPI - not even that small - and we still don't easily see pixelation.

    Phone screens are a different matter - their resolutions keep increasing, but we have no need for them. If we apply the same logic as above and say that we use smartphones 25cm away from our eyes, we come to an upper limit of about 300-350 DPI.

    If you do a Google search for smartphones with the highest PPI, you will find that the top 100 of them exceed that - they range from 400-600 PPI - which is just nonsense. The human eye can't resolve that small - or it could, but you would need to keep the phone 10cm away from your eyes, and even newborns might have issues focusing that close (in fact, I think that perhaps babies can focus at 10cm, but that ability goes away quite soon after birth).
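    The geometry above, as a quick sketch (assuming 1 arc minute visual acuity):

    ```python
    # Max useful pixel density for 1 arcminute acuity at a given viewing
    # distance. Pure geometry: pixel pitch = distance * tan(1 arcmin).
    import math

    MM_PER_INCH = 25.4

    def max_useful_ppi(viewing_distance_mm: float, acuity_arcmin: float = 1.0) -> float:
        pixel_pitch_mm = viewing_distance_mm * math.tan(math.radians(acuity_arcmin / 60))
        return MM_PER_INCH / pixel_pitch_mm

    print(f"{max_useful_ppi(500):.0f} PPI")  # desktop at 50cm -> ~175
    print(f"{max_useful_ppi(250):.0f} PPI")  # phone at 25cm   -> ~349
    ```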

    OK, so computer screens are fine resolution-wise, but mobile phones are not - they have smaller pixels than needed.

    Further, to answer your question about viewing - you need to say what type of display style your app uses. Does it scale the photo to the whole screen of the device, to part of the screen, or does it use some sort of zoom and pan feature?

    These are all different scenarios, and the size of the image presented to you will vary, depending on the actual pixel count of the image versus the pixel count of the display device.

    I always tend to look at images at 100% zoom level - which means that one pixel of the image is mapped to one pixel of the display device. Most people don't really think about that, and view the image as presented by the software. But in either case - you as a viewer have control over how you view the image, and you can pick whatever best suits you, depending on your viewing device.

    You don't have control over how the image was captured - so it is best to capture in an optimal way as far as the amount of detail is concerned (or with some other property optimized - like sensor real estate in the case of wide field images).

    6 hours ago, ONIKKINEN said:

    To my eyes the second one looks much sharper. For whatever reason I had an easier time working on the higher resolution image and could sharpen it much further - why do you think that is? Is it that with properly sampled data one has very little leeway in how exactly to apply the sharpening, and so can easily do it wrong?

    I don't really know why, but here - look at this:

    [side by side: your larger version vs your smaller version, resized and sharpened]

    Left is your larger version, and right is your smaller version (in 8 bit, already stretched) that I took and gave a simple touch-up in Gimp - resized to the same size with some sharpening.

    Now the difference is not so great, right? Yes, the left image still looks a bit better - but I was working on already stretched data saved in 8 bit jpeg format.

    I don't mind people over sampling if they choose to do so - but you can't beat the laws of physics. If you want to over sample because it somehow makes it easier to apply processing algorithms and make a better image - then do that. Just be aware of what you are giving up (SNR) and what you won't be able to achieve (showing detail that is simply not there).

  15. Here is another interesting bit - look what happens if I reduce the sampling rate of both images equally, by 50%:

    [both images downsampled by 50%]

    Now they start looking more alike, right? This means that the information in them is becoming the same (the reference image lost some information due to the lower sampling, and the image from the video did not have it to begin with, so they are now closer in information content).

    This shows that the image in the video was at least 2x over sampled.

  16. 23 minutes ago, Lee_P said:

    @vlaiv I'd be interested to get your thoughts on this video that just popped up in my feed. It seems that the author is comparing pixel scales and suggesting that the finer pixel scale is better, but there's no mention of matching working resolution to pixel scale 🥴

    Both are over sampled, and that can easily be seen.

    The difference in sharpness between the two images does not come from pixel scale - it comes from the sharpness of the optics.

    The RASA is simply not a diffraction limited system. The Quattro might also be of lower quality than diffraction limited, but that depends on the coma corrector used. As is, the Quattro is diffraction limited in the center of the field (the coma-free zone in the center - which is rather small for such a fast system). When you add a coma corrector, things improve regarding coma (obviously), but the CC can introduce other aberrations (often spherical) and lower the sharpness of the optics.

    In any case - the difference between the two systems is not down to pixel scale; it is down to the sharpness of the optics.

    Here is an assessment of the RASA 11 system and its sharpness:

    [RASA 11 spot size assessment]

    (this is taken from the RASA white paper: https://celestron-site-support-files.s3.amazonaws.com/support_files/RASA_White_Paper_2020_Web.pdf , appendix B)

    For 280mm of aperture, the size of the Airy disk is ~1", or ~3µm (for 550nm), while the RMS of that pattern is about 2/3 of that:

    [Airy pattern RMS relationship]

    (source: https://en.wikipedia.org/wiki/Airy_disk)

    So the RMS of a diffraction limited 11" aperture should be about 2µm (these numbers are checked in the sketch below).

    So the RASA 11 produces a star image twice as large as a diffraction limited scope would - even without any influence of mount and atmosphere.

    (and the above is given for a perfect telescope with zero manufacturing aberrations - production units are not quite as good as the model).
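    A quick check of those numbers with the standard first-minimum formula (angular Airy radius = 1.22 λ/D); the 620mm focal length for the RASA 11 is my assumption:

    ```python
    # Airy disk size for a 280mm aperture at 550nm. The 620mm focal length
    # (RASA 11 at ~F/2.2) is an assumption on my part, not from the post.
    ARCSEC_PER_RAD = 206265

    aperture_mm = 280
    wavelength_mm = 550e-6   # 550nm
    focal_length_mm = 620

    airy_radius_rad = 1.22 * wavelength_mm / aperture_mm
    diameter_arcsec = 2 * airy_radius_rad * ARCSEC_PER_RAD
    diameter_um = 2 * airy_radius_rad * focal_length_mm * 1000

    print(f"Airy disk: {diameter_arcsec:.2f} arcsec / {diameter_um:.1f}um")  # ~1" / ~3um
    print(f"diffraction limited RMS: ~{diameter_um * 2 / 3:.1f}um")          # ~2um
    ```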

    By the way, there is a simple way to see what a properly sampled image looks like - just take an HST image of the same object at the scale you are looking at. Such an image will contain all the information that can be contained at the given sampling rate - that is the upper limit - and if your image looks anywhere close to that, then you sampled properly and did not over sample.

    Look at this (although this is not an HST image, it is still not over sampled at the given resolution, so you can see the difference):

    [left: sharper NGC7331 image from the video; right: properly sampled example]

    Left is the sharper image of NGC7331 from the video, and right is an example of how sharp an image can be when properly sampled (not over sampled at this scale). I think the difference is obvious.

  17. Just now, Lee_P said:

    But if I bin the data, then the sampling rate is 2.59"/px, so they're a bit undersampled. In this case, to bin or not to bin?

    I think that I would rather go a little under sampled than a little over sampled.

    Usually a 2x difference in sampling really does not make that big a difference in the level of detail - it is noticeable, but barely so (provided that one is over and the other under sampled - it is much more obvious if both are under sampled). To me, the SNR gain is simply a better deal than having a larger image.

    Captured detail does not automatically translate into a sharp image - it is only the potential for a sharp image. Even a properly sampled (and even an under sampled) image must be sharpened a bit (to be closer to the true image, undistorted by the optical elements), and how much you can sharpen also depends on how good your SNR is.

    I'd rather have a slightly under sampled image with higher SNR that I can sharpen a bit more, than the potential for all the detail without being able to sharpen as much as I want because of noise issues.

  18. 6 minutes ago, Lee_P said:

    Is it accurate to say that sampling rate is a measure of a telescope / camera combination's ability to record detail? And then an image's FWHM indicates how much resolution you've actually recorded? You want the two to match, so to do that you divide FWHM by 1.6, then if necessary bin your data so the sampling rate is close?

    Well, it depends.

    I think the best way to think about it is this: imagine the telescope / mount system working without a digital camera - with analog photo film. It will perform the way it does regardless of what is put at the focal plane. The telescope, the sky, and physics in general do not know what is at the focal plane, nor do they care. The system will produce an image at the focal plane with a certain level of detail. This image does not depend on sampling rate, pixel size, or anything like that - you don't even need a camera with pixels; you can use film.

    FWHM is then a measure of this ability of the telescope / mount system (together with the atmosphere) that characterizes how sharp the resulting image is. Again, it does not depend on the pixels / camera used (this is not entirely true - pixels, being area sampling devices, do impact FWHM to a small degree, but that is a very complex topic): the FWHM of the signal will be what it is at the focal plane.

    Having established that there is some image at the focal plane - it is what it is, and the fact that we are using pixels won't change it - we can then address what pixels do and their ability to record that image. It is quite simple (I'm again simplifying for the purpose of this explanation, but the effects I'm neglecting are rather small and would unnecessarily complicate things). It goes like this:

    too large - right size - too small

    Your pixels can be too large, just right, or too small.

    If pixels are too large - then you are under sampling.

    If pixels are just right ("Goldilocks pixels") - then you are sampling well.

    If pixels are too small - then you are over sampling.

    Under sampling is not a bad thing. It just means that you might not capture all the detail there is in the image (and by detail, think of detecting two close stars as two stars rather than as an oval blob where you are not certain what you are seeing - one feature or more; that is the meaning of "to resolve", the root of the whole resolution business).

    Optimum sampling means that you'll be able to resolve all there is to be resolved in the already formed image - while using the largest possible pixels to do so.

    Over sampling means that you will again be able to record / resolve all there is to be resolved in the image already formed at the focal plane - but you will do so with smaller pixels than needed. This hurts your SNR, as smaller pixels simply mean that you split the light over more "buckets" than you need to, and each "bucket" gets less light for that reason. Less light = lower signal = lower SNR / noisier image.

    In the above sense, pixel size is the ability to record detail; FWHM, however, is independent of that - FWHM is an intrinsic property of the image. You can measure it from recorded data - and if you are over sampled or correctly sampled, you will measure the correct FWHM (with the small caveat of pixel blur - again, a technical, complex detail). If you are under sampled, you will start losing detail, and your FWHM measurement will be off by a small amount (in fact, the size of the error depends on how much you under sample, and in usual cases it's not that big a deal). In that sense, measuring the FWHM of the image you've recorded does tell you how sharp the image at the focal plane was (regardless of what was used to record it).

    Once you measure FWHM - you then have an idea what the Goldilocks pixels are in the above sense: take the FWHM, divide that value by 1.6, and this gives you the sampling rate you should be aiming for (combine that with your focal length and/or any focal reducers to get the wanted pixel size - then bin accordingly, or replace the camera if that makes more sense - or, as a last option, don't bother :D if you can live with the SNR loss due to over sampling). The steps are put together in the sketch below.
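    Those steps in a small sketch (206.3 is the usual arcsec-per-pixel conversion constant; the function names are mine):

    ```python
    # FWHM/1.6 rule from above: measured FWHM (arcsec) -> target sampling rate,
    # then the bin factor that gets you close to it.
    def target_sampling(fwhm_arcsec: float) -> float:
        return fwhm_arcsec / 1.6  # "/px you should be aiming for

    def current_sampling(pixel_um: float, focal_length_mm: float) -> float:
        return 206.3 * pixel_um / focal_length_mm

    def suggested_bin(fwhm_arcsec: float, pixel_um: float, focal_length_mm: float) -> int:
        native = current_sampling(pixel_um, focal_length_mm)
        return max(1, round(target_sampling(fwhm_arcsec) / native))

    # Worked example with numbers that come up elsewhere in this thread:
    # 3.76um pixels at 1000mm, measured FWHM 2.94" -> target 1.84"/px -> bin 2
    print(f"{current_sampling(3.76, 1000):.2f}\"/px native")  # ~0.78
    print(f"{target_sampling(2.94):.2f}\"/px target")         # ~1.84
    print(suggested_bin(2.94, 3.76, 1000))                    # 2
    ```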

     

  19. 21 minutes ago, Lee_P said:

    I'm having trouble getting my head around this bit. When I said "potential resolution", I meant if everything were perfect -- my telescope was transported into space, tracking was spot-on, and the optics were flawless. Would I then be able to achieve 0.78"/px?

    With said telescope - yes, but not with all telescopes.

    How did you get that figure? I'm guessing that you took the 1000mm focal length of your scope and the 3.76µm pixel size, and calculated 206.3 × 3.76 / 1000 = 0.78"/px, right?

    Now, if you take a camera that has, say, 2.4µm pixels - you will get a different sampling rate, but how is that related to what the telescope can resolve? You did nothing to the telescope itself; you just used a different camera. The telescope remained the same, and thus its potential resolution can't have changed.

    If you want to know what your telescope could resolve in outer space, with no need for tracking and without the impact of the atmosphere (and under the assumption that your optics are diffraction limited) - then you can use the planetary sampling formula, which says that the F/ratio should be at most 5x the pixel size. For your telescope, that is F/7.7 - so the ideal pixel size would be 7.7 / 5 = 1.54µm, and the corresponding sampling rate would be 206.3 × 1.54 / 1000 = 0.3177 = ~0.32"/px.

    That is the maximum potential resolution of your telescope alone (and that is for blue light at 400nm - for, say, Ha at 656nm it is going to be about 1.6x lower).
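    The same computation as a snippet (the 5x pixel rule is for ~400nm light, as stated above):

    ```python
    # Planetary / critical sampling from above: ideal pixel = F-ratio / 5,
    # then convert to "/px with 206.3 * pixel size / focal length.
    def ideal_pixel_um(f_ratio: float) -> float:
        return f_ratio / 5

    def sampling_arcsec_per_px(pixel_um: float, focal_length_mm: float) -> float:
        return 206.3 * pixel_um / focal_length_mm

    pixel = ideal_pixel_um(7.7)  # 1.54um for an F/7.7 scope
    print(f"{sampling_arcsec_per_px(pixel, 1000):.2f}\"/px")  # ~0.32"/px max potential
    ```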

    In any case - the optics have the potential to deliver a certain resolution of image. Think of that as an analog image. In order to properly digitize it, you need to sample at certain intervals (use a certain pixel size). Using too fine a sampling rate (too small pixels) is a waste of SNR, as you don't need that fine a pixel scale to record the image as it is - smaller pixels just each receive a lower signal (the same amount of light is spread over more pixels, so each gets less light: less signal, lower SNR).

     

     

  20. 33 minutes ago, Lee_P said:

    I've just come across this post and wonder if I could ask for some clarification. Is my understanding here correct:

    I'm using an Askar 130PHQ and a 2600MC camera. This gives the potential resolution of my system as 0.78"/px. However, atmospheric conditions and mount inaccuracies mean that in reality the resolution is lower. To calculate what would be optimal for my equipment and sky conditions, I can take the FWHM of an image fresh from integration and divide by 1.6. (2.94 / 1.6 = 1.84"/px). So, 0.78"/px is definitely oversampled. If I bin2, then the working resolution is 1.56"/px, which is close to 1.84"/px. And as a sense check, it fits the general rule of thumb that between 1 and 2 "/px is usually good working resolution. 

    The same idea, using old Askar FRA400 and 2600MC data:

    Potential resolution of 1.93"/px. FWHM of 2.24/1.6 = 1.39"/px. No need to bin.

     

    Yes.

    I'd add the following to make things clearer:

    - sampling rate (rather than potential resolution) is what you have when you pair a certain focal length with a certain pixel size (or pixel spacing, to be even more correct).

    - potential resolution of the system depends on aperture size, optical figure (diffraction limited optics or not - spot diagram RMS), mount tracking performance (guide RMS), and seeing conditions. In most cases we "calculate" for diffraction limited optics, although in some cases one should really account for the spot diagram RMS if it is too large.

    - you want to achieve a good match between the two above - the first is easy to calculate, and the second is easy to measure. Don't just settle for one image / one session - measure across sessions to get a feel for the average FWHM you will get from your system, as each night will be different. I've also found discrepancies between FWHM measurements in software - different software reports different figures for some reason. I tend to trust ImageJ/AstroImageJ for this measurement.

    In ImageJ you can't measure FWHM directly like in AstroImageJ (which has a nice shortcut for that - just alt+click on the star of interest), but you can plot the profile of a star and then fit a Gaussian shape to that profile to calculate the FWHM. Both methods give very accurate answers on simulated Gaussian profiles and agree on results (for stars that are round, using a horizontal profile in ImageJ).

    To reiterate - the arc seconds per pixel that you get for a certain focal length and a certain pixel size is not directly related to potential resolution. Rather, think of it as the millimeter scale on your caliper. The machining precision of the caliper (how precisely it can physically measure) - that is potential resolution; that is telescope aperture + seeing + mount. It does you no good to have a very fine micrometer scale if your caliper is loose and you can't physically measure precisely enough.

     

  21. 57 minutes ago, ollypenrice said:

    - Big crude stars as produced by small amateur optics.

    - Vast amounts of information contained in the data not rendered visible.  (Faint nebulosity from emission and reflection.)

    I think this both makes a point and shows how we misunderstood each other.

    To reiterate "the law": "If you can't take your stack, do a basic white point / gamma 2.2 / black point stretch, and get a nice looking image - you are doing it wrong."

    Since you don't like big crude stars from small amateur optics - "you are doing it wrong" :D - take big amateur optics :D

    And second - I did not say that an image can't be made nicer, better looking, or more attractive with extensive processing. My point was exactly as expressed: if you do a very basic stretch and have no obvious objections to the image, then you are doing it right.

    Sure, you can pull out more with a stronger stretch, but that is not the point of the above "law". The point of the "law" is to expose obvious flaws in the data gathering / processing steps. In this particular case you don't like the star shapes, which does indicate that the optics you were using are not without flaws.

    I don't know what was used, but given that you are into fast optics / going deep at the moment, I suspect one of two: the Samyang 135mm F/2 or the RASA 8. Neither of these systems is diffraction limited, so no wonder you don't like the stars if it was taken with one of them.

    Again, I'm not saying that you should not use such systems if you want to accomplish something in particular - but I do think the above still applies. Just a basic stretch will reveal issues that would otherwise be masked by "special processing".

  22. 1 hour ago, Elp said:

    Quantify nice.

    Well, now that you put it that way and we need to discuss aesthetics, then it all goes down the drain :D

    My point with this "first law" was that one should really hone the capture / gathering step, as well as the data reduction part - calibration and stacking - rather than increasingly rely on sophisticated algorithms to produce a nice looking image.

    If your data looks nice when you do a basic 3 step stretch - meaning no need for sharpening / noise reduction / star removal / AI-assisted routines / star rounding or reduction or whatever - and by nice I mean without any artifacts, showing at least some level of the target / nebulosity (whatever was captured), with relatively tight round stars and not too much noise - then it will be easy to touch up into a great image without excessive use of tools.

    Here is an example. This is by no means a great image - but look at it for a moment. This is really just a 3 step stretch: set the white point as low as possible without starting to clip the signal, do a 2.2-2.4 gamma stretch, and move the black point up as needed (sketched in code below):

    [M13, mono, basic 3 step stretch]

    It is a mono-only image. Is there anything obviously wrong with it, or does it look OK / nice (not great, not showing all there is to show - just plain nice)?
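    A minimal sketch of that 3 step stretch on a linear stack (numpy; the threshold values are placeholders you would pick per image):

    ```python
    # Basic 3 step stretch described above: white point, gamma, black point.
    # Threshold values are placeholders - in practice you pick them per image.
    import numpy as np

    def three_step_stretch(img: np.ndarray, white: float, black: float,
                           gamma: float = 2.2) -> np.ndarray:
        out = np.clip(img / white, 0, 1)   # 1. white point: as low as possible without clipping signal
        out = out ** (1 / gamma)           # 2. gamma stretch (2.2-2.4)
        out = np.clip((out - black) / (1 - black), 0, 1)  # 3. raise the black point as needed
        return out

    # e.g. on a linear stack normalized to [0, 1]:
    # stretched = three_step_stretch(stack, white=0.8, black=0.05)
    ```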

     
