
Astronomy Tools CCD Suitability


vlaiv


2 minutes ago, Martin Meredith said:

And this was a good example for the Lodestar (left)

Can you now provide the same images at the original recorded resolution, not enlarged by you using nearest neighbor resampling?


7 minutes ago, Martin Meredith said:

nobody is saying beautiful stellar discs => optimum sampling, but I think its fair to say that ugly blocky discs are indicative of sub-optimal sampling.

I commented on that earlier - that is simply not true. Ugly blocky discs are an artifact of the resampling method used, not of over or under sampling.

Here - look at this example. I have slightly under sampled stars here (according to FWHM/1.6):

Screenshot_2.png.e22b91d32c51374f82b6966713bc8b07.png

this is the original 100% zoom / crop

I will now enlarge this image to, say, 500% using nearest neighbor:

Screenshot_3.jpg.7bc1e1d09c8748bf015ddc9d7ae2cfe2.jpg

uh, that's ugly - must be under sampling... or is it?

Here is Lanczos resampling to 500% of that same data:

 

Screenshot_2.jpg.559e5ce0bc9820fbc513517115ea55c6.jpg

Whoa, what is this? Nice stellar discs?

"Pixellation" / "blockiness" is not a feature of under sampling. It is a feature of the resampling method used when you zoom in on such an image. Look at one of my first posts in this thread and the single pixel example. Even a single pixel will look like a little ball/disk if you use a proper resampling method.
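That claim is easy to check in a few lines of numpy - a minimal 1-D sketch (not the actual viewer code): a single bright sample becomes a hard-edged block under nearest neighbor, and a smooth little bump under a Lanczos kernel.

```python
import numpy as np

def lanczos(x, a=3):
    """Lanczos kernel: sinc(x) * sinc(x/a) for |x| < a, else 0."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

def upscale(samples, factor, kernel=lanczos):
    """Resample a 1-D signal to `factor` times as many points."""
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    coords = np.arange(n * factor) / factor           # positions in input units
    weights = kernel(coords[:, None] - np.arange(n)[None, :])
    return weights @ samples

# A single bright "pixel" in an otherwise dark row
row = np.zeros(9)
row[4] = 1.0

blocky = np.repeat(row, 5)    # nearest neighbor: a hard-edged block
smooth = upscale(row, 5)      # Lanczos: a smooth bump peaking at the sample

print(len(np.unique(blocky)))               # 2 - only "off" and "on" survive
print(len(np.unique(np.round(smooth, 6))))  # many distinct values - a smooth profile
```

The data is identical in both cases; only the interpolation differs, which is exactly the point being made.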


8 minutes ago, vlaiv said:

Can you now provide the same images at the original recorded resolution, not enlarged by you using nearest neighbor resampling?

sure, tomorrow.

But what you're seeing is exactly what I see with the tools I use -- no tricks here. Are you saying I should be (a) not enlarging them and instead squinting at the screen during my EEVA session; or (b) use a different tool with different interpolative functions? Or something else? I can only use what I have. Like I say, it's practical experience, which must surely count for something.


8 minutes ago, Martin Meredith said:

(b) use a different tool with different interpolative functions?

If you are going to enlarge any image to 500% of its original size and you don't want to see pixelation - use a proper interpolation function.

The above is not astronomy related - it works on any image. If I take a nice image online - like the head of this parrot:

bird.png.2bf3a318510b849a706c4a25c9e6d355.png

And enlarge it to 500% using nearest neighbor - it won't look nice:

image.png.863b9c1d23ddaf773dd5f225bd980ba6.png

You'll see all the "pixels", but if I enlarge it with good interpolation technique:

image.png.d65803ef72670e786aa27d2afefee07e.png

Pixels won't be seen anymore and the image will be smooth (although without extra detail, as it is now over sampled by x5 due to resizing it to 500%).


Yes, I saw your earlier panel of stars and to me they look like smoothed squares or diamonds and really not very disc-like, and some contain ringing artefacts. Lanczos is known to produce dark pixel artefacts https://www.astropixelprocessor.com/community/tutorials-workflows/interpolation-artifacts/ (I've also seen this quite a lot).

The point surely is that one can avoid this by increased sampling in the first place.

 


1 minute ago, Martin Meredith said:

The point surely is that one can avoid this by increased sampling in the first place.

At the expense of SNR, and without any additional detail captured.

However - proper sampling is not meant for looking at the image at 500% - most people don't even bother to look at an image at 100% zoom level.

Proper sampling ensures one can capture all the detail available when viewing the image at 100% zoom level.

Ringing artifacts are maybe frowned upon - but they are the nature of the beast. Telescopes produce ringing artifacts - the Airy pattern is a ringing artifact around the Airy disk. Increase magnification beyond the resolving power and you'll see ringing artifacts - Airy rings.

Whenever you have a band limited signal - it means it is composed of a finite range of sine waves with different amplitudes and phases. If you want such a signal to drop to exactly 0 and vanish - you need an infinite number of sine waves, eventually mutually cancelling. That is not what telescopes produce and that is not what sampling is all about. The aperture produces a band limited signal, and the atmosphere further attenuates the high frequencies of that signal.

Anyway - there are other interpolation algorithms that deal with ringing - if you really want that.

By the way - here is another example of optimum sampling (as you mentioned that a single pixel does not produce an accurate circle - it will not, as it is not optimally sampled):

This is a gaussian curve sampled at FWHM / 1.6 - the minimum needed for a perfect disk (enlarged to 2000%, or x20, using nearest neighbor):

image.png.020a7b4ba1680183a09262e52848062d.png

It has just 9 pixels covering it - 3x3 - but that is enough to properly sample it, and if I enlarge it to 2000% using Quintic B-Spline - here is what I get:

image.png.89d98142ed3e79b865fa59928900ac46.png

It has captured the underlying function completely. Yet compare the number of pixels in that "star" with the number of pixels you have in the stars you claim to be heavily under sampled:

image.png.03676e9d824864741c5d57eee7dfad0a.png

If I take your "under sampled" image, bin it 3x3 (as it seems you enlarged it to 300%) and then enlarge it using some nice resampling - the situation does not look that bad (mind you - this was done on 8-bit data that is no longer linear):

image.png.3d2c44abf127d0fc0f7d5c3e423a6678.png

Stars actually seem tighter and definitely go deeper in that image.
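The FWHM/1.6 claim can be checked numerically. Here is a minimal 1-D sketch (numpy only, using ideal sinc reconstruction instead of Quintic B-Spline): sample a unit Gaussian at an interval of FWHM/1.6 and compare the reconstruction with the true curve. The residual is a few percent of the peak - small, though not exactly zero, since a Gaussian is not strictly band limited.

```python
import numpy as np

sigma = 1.0
fwhm = 2.3548 * sigma          # FWHM of a Gaussian = 2.3548 * sigma
step = fwhm / 1.6              # sampling interval per the FWHM/1.6 rule

# Point-sample the Gaussian on a wide grid (wide enough to tame edge effects)
xs = np.arange(-15, 16) * step
samples = np.exp(-xs**2 / (2 * sigma**2))

# Whittaker-Shannon (ideal sinc) reconstruction on a fine grid near the core
t = np.linspace(-3 * sigma, 3 * sigma, 301)
recon = np.sum(samples[None, :] * np.sinc((t[:, None] - xs[None, :]) / step),
               axis=1)
truth = np.exp(-t**2 / (2 * sigma**2))

err = np.max(np.abs(recon - truth))   # worst-case deviation from the true curve
```

With these numbers `err` comes out at only a few percent of the unit peak, which is the quantitative version of "a 3x3-pixel star is enough to recover the profile".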

 


9 hours ago, vlaiv said:

Seems that many people rely on this tool, and given that it's hosted / maintained by @FLO

I think it would be wise to revisit the validity of the information presented by it.

I've been involved several times in discussions where people refer to the above tool and either accept the very flawed advice it offers, or question an otherwise sound setup because it is not in the "green zone" according to the tool.

There are also several statements made on that page that are simply false and should be corrected.

 

Oh yeah, loads of things are wrong with that tool; I really don't like it.

1) It seems to suggest that anything outside of 1-2 arcseconds per pixel is a bad idea, and that's just not the case.

2) Vastly overestimates average seeing conditions.

3) Takes no account of the fact that spot size vs pixel size determines how well sampled a star is, not arcseconds per pixel. I work at 4.2 arcseconds per pixel on a regular basis and don't experience square stars.

4) Takes no account of aperture size in the calculation, and so often leads to the 183MM Pro's 2.4um pixels being seen as the most suitable sensor for wide field refractors, when in fact resolution would be diffraction limited to the Dawes limit in these cases.

5) Makes no attempt to balance resolution against the signal-to-noise advantages of using larger pixel cameras. Or indeed acknowledge that smaller pixels have the same effect as slower optics, and vice versa.

6) Doesn't really try to ask what the user is trying to image; there are different requirements for faint emission nebula imaging vs galaxy imaging.

7) Gives absolutely no consideration to the effect of an OSC Bayer matrix vs a mono camera on effective resolution.

Could probably go on.
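Point 4 can be illustrated with a quick back-of-envelope calculation (the scope and camera numbers below are hypothetical examples, not values taken from the tool):

```python
def dawes_limit_arcsec(aperture_mm):
    """Dawes resolution limit for a given aperture: 116 / D(mm) arcsec."""
    return 116.0 / aperture_mm

def pixel_scale_arcsec(pixel_um, focal_mm):
    """Image scale in arcsec per pixel: 206.265 * pixel(um) / focal(mm)."""
    return 206.265 * pixel_um / focal_mm

# Example: a small 72mm f/6 refractor with a 2.4um pixel camera
aperture_mm, focal_mm, pixel_um = 72.0, 432.0, 2.4

print(dawes_limit_arcsec(aperture_mm))         # ~1.61" - what the aperture resolves
print(pixel_scale_arcsec(pixel_um, focal_mm))  # ~1.15"/px - finer than the optics
```

With these example numbers the pixels sample finer than the aperture can resolve, so the optics, not the pixels, set the resolution - which a pixel-scale-only tool cannot show.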

Just don't use it, guys. It's misleading.

Oh, and while I am at it, I am not sure about the maths behind the filter size calculator either.

I would write my own online tool but I have three kids and a job so if I did stuff like that I would have no time left for the hobby itself.

Adam


10 hours ago, Adam J said:

7) Gives absolutely no consideration to the effect of an OSC Bayer matrix vs a mono camera on effective resolution.

Yes, that is a particularly hard one.

It can be handled in many different ways, and the best ones are rarely supported in software.

I think the simplest approach is to leave things as they are and slowly introduce new concepts as software support arrives. For the time being - treat color cameras as equal to mono with respect to resolution.


12 hours ago, vlaiv said:

[...]

Stars actually seem tighter and definitely go deeper in that image.

 

Still fundamentally square though

 


The stars.

 

I also wanted to pick up on something else you said towards the head of this thread. You said

"Star won't be appear blocky or angular if it is under sampled. This is myth and is consequence of interpolation / resampling algorithm used"

You appear to be saying that the raw data (captured by the sensor array) contains nice round stars that some brutalist interpolation/resampling algorithm has turned into blocks. But you can't possibly mean this.

What you have gone on to demonstrate essentially goes in the opposite direction: by using a 'nice' interpolation algorithm we can turn the blocky stars we started out with into somewhat rounder stars. BTW nobody is going to disagree with this, though whether anyone would want to do it when the alternative is to sample properly in the first place is another matter.

The fundamental question is why are they blocky in the first place? And the answer is that they are undersampled.

 

 

 


4 hours ago, Martin Meredith said:

The fundamental question is why are they blocky in the first place? And the answer is that they are undersampled.

No.

Pixels that come out of the sensor are point samples - they don't have a size; they only have a position and a value.

Take almost any image online - can you say what the size of its pixels is? No - because it is not important. If it were an important metric - it would be embedded in the image.

What we are used to seeing in software are little squares - not because pixels are squares (by the way - most sensors don't actually have square pixels, but rounded ones), but because the "default" interpolation algorithm is nearest neighbor - by far the simplest to implement: take the coordinates and round them to the nearest integer - and there you go. Nothing complex needs to be done and it is fast.

For this reason most software has this interpolation as the default setting - but it does not have to be that way.
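That "round the coordinates" description really is the whole algorithm - a toy sketch:

```python
def nearest_neighbour(img, y, x):
    """Sample a 2-D array at fractional coordinates by rounding them
    to the nearest integer - the entire nearest neighbor algorithm."""
    return img[int(round(y))][int(round(x))]

img = [[10, 20],
       [30, 40]]

print(nearest_neighbour(img, 0.4, 1.3))  # 20 - (0.4, 1.3) rounds to (0, 1)
```

Every output pixel just copies whichever input sample is closest, which is why enlarged images come out as solid blocks.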

In any case - the squares are a consequence of this and not of square pixels on the sensor - which, by the way, look like this:

image.png.f2b13223701860864efea06d12607524.png

or like this:

image.png.6b99b70521c6a9fb148a55134754850c.png

or this:

image.png.fd47b71f945875cf59afc2a2a9a2db2c.png

If the image on the screen reflected sensor pixel shape - why don't stars look like this?

I still maintain that you only need to sample at FWHM / 1.6 - that will cover a star with a handful of sample points - and you will still be able to reconstruct it.

This is not something that I made up - this is what the math is telling us - proven theorems (and examples - like the ones I showed you above).


4 hours ago, Martin Meredith said:

Still fundamentally square though

 

 

I don't do AP yet, or at least the last time I did we were still using film. What I used to do is build test systems for imager development, mainly IR, but the testing included the human visual interfaces - characterising what a human would see with the system in terms of thermal and spatial resolution. One project involved designing a test rig to scan a spot of light less than half the size* of a CCD pixel across a sensor, to characterise individual sensors for selection for a satellite package.

No, they are not fundamentally square; that is an artifact of the system. If the pixels were circular or hexagonal, the output could still look square on a screen if that is how the processing chooses to represent it. Unless the light reaching the sensor is a small fraction of the size of a pixel, some light will reach surrounding pixels, and that can be used to display discs on an output that has more pixels than the light reached on the sensor, if discs are what they are assumed to be. What you cannot do is extract more detail than you have information for: if two overlapping stars are imaged onto a single pixel of the sensor, you might get an oval blob on the display, or the processing might just turn it into a disc, but you are not going to get overlapping discs. What Vlaiv is trying to explain is that if the smallest artifact you want to see is the disc of a star, then sampling at 1.6 times the resolution or star size will get you that. You can oversample, but then you start to trade off noise and exposure time, because each pixel receives fewer photons.

I was trying to think of an analogy. Let's say you are trying to measure rainfall: if you have a 10cm diameter funnel going into a 1cm diameter measuring tube, you will get a measurable reading over a wide range of fall rates. If you instead have an array of 100 1cm tubes, some of the droplets will fall between the tubes or hit the lip, with some going into the tube and some running down the outside. At lower rates of fall you will be getting different amounts in each small tube, and measuring it accurately can become difficult.

 * The spot of light was at less than 10% illumination at a diameter equivalent to half of the pixel size.

 


Not having years of experience or any qualifications in this fascinating hobby, I find online resources more than welcome. So thank you to FLO for providing what I consider to be a valuable and simple to use resource for a complete numpty. If you do change the tool, please keep it as simple as it currently is.

And copied from the CCD calculator page.... "At Astronomy Tools we want to make useful information available to all. If you can see a way we can improve any of our calculators, or would like us to build a new one, please contact us."

 


While a lot of these discussions are interesting, I feel they are straying away from the subject of updating the tools. There is the danger of making things too complicated for people to understand. 
 

As with the previous poster, if I, also a numpty, can understand it, then its purpose is served. 


14 minutes ago, Mr Spock said:

While a lot of these discussions are interesting, I feel they are straying away from the subject of updating the tools. There is the danger of making things too complicated for people to understand. 
 

As with the previous poster, if I, also a numpty, can understand it, then its purpose is served. 

As I understand it, the purpose of this thread is to sort out the technical side of things.

The tool itself can remain pretty much unchanged from the end user's perspective - maybe the only addition would be a selection for the mount / expected guiding performance.

That is just another drop-down that is easily used by people.

On the other hand - we are trying to offer genuine advice to people - advice that is correct within the limitations of the model (a diffraction limited scope and assumed mount performance, for example, might not hold in practice).

I insisted on this thread so that other people can have their say in the matter. Since I'm the main driver behind the change - I feel more comfortable if what I say is reviewed by others and any possible error pointed out, so it can be corrected if it is indeed an error.

I know that many people don't have enough knowledge to spot a problem straight away - but that is why we have this thread - if anyone is interested in making this tool better and in ensuring the correctness of the maths behind it - then I'm more than happy to answer any questions, point to online sources, or participate in correcting any errors spotted.

Another point of this thread is to explain things to those who are interested. I'm much happier if people understand how something works - rather than it being a magic box - even if the majority opts to use it as a convenient tool without deeper understanding - as long as they have confidence that the theory behind it is correct.


As a beginner in this hobby I have found this tool easy to use and the results easy to understand. For what it's worth, from my perspective I think:

1) The tool MUST continue to be easy to use and the results easy to understand for a beginner.

2) If the accuracy of the tool can be improved given the theoretical and practical knowledge that is self-evident in this thread, then it will be an excellent tool indeed.

3) Any limiting assumptions (e.g. diffraction limiting, mount performance, etc.) must be called out.

I think this is what @vlaiv is driving at by starting this thread.

Also, whilst I understand the maths, I don't yet completely get the concepts behind it, but that is something I will work on, and having it played out in this thread (including the broader discussion) is extremely helpful.

Very happy to be involved in any user testing if @FLO decides to make changes.

Neil


Meanwhile, here's a real example of what I'm talking about.

I selected this image because it was a night of excellent seeing (as recorded at the time of capture), and it is when the seeing is very good to excellent that I see square stars with the Lodestar.

497753386_WBL72302Feb22_16_44_55.png.616b5048a6fd5c5b5033e762c23b1636.png

 

[edit]: removed incorrect interpolation = None example. 

 


46 minutes ago, Martin Meredith said:

So a simple yes/no Q for Vlaiv: do you agree with this statement: blockiness is *never* an indication of undersampling

I can't answer that question with a simple yes or no.

The indication / only consequence of under sampling is aliasing. The effects of aliasing depend on the signal and on the way it was sampled, as well as on any subsequent processing of the signal (we can discuss all of that in detail).

With astrophotography we have a good understanding of what sort of aliasing effects we might get, and also of what subsequent processing (stacking with sub-pixel alignment) will do to the data.

The use of physical pixels as point sampling devices is also well understood - we know what sort of "pixel blur" there will be, and how much. We can calculate the MTF of this blur (it is not related to under sampling - it happens in all cases: under, over or correctly sampled).

However - without an exact definition of the term "blockiness" - I can't answer your question correctly. If you simply mean - as in the examples so far - that "visible" pixels, i.e. "pixelated" stars, are a sign of under sampling - then no - that is only an effect of the interpolation used. Remember, pixels are point samples - they don't have a size or shape and are not square - they are just a number at coordinates.
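For the curious, the MTF of the pixel blur mentioned above has a simple closed form for an ideal square pixel aperture - a sketch (the 2"/px figure is just an example value):

```python
import numpy as np

def pixel_mtf(freq, pixel_size):
    """MTF of an ideal square pixel aperture.

    freq in cycles per arcsec, pixel_size in arcsec;
    numpy's sinc(x) = sin(pi*x) / (pi*x), so the first zero
    falls at freq = 1 / pixel_size.
    """
    return np.abs(np.sinc(freq * pixel_size))

p = 2.0  # an example 2"/px scale

print(pixel_mtf(0.0, p))             # 1.0 - no attenuation at zero frequency
print(round(pixel_mtf(0.25, p), 3))  # ~0.637 - contrast left at Nyquist, f = 1/(2p)
```

This attenuation happens regardless of whether the image is under, over or correctly sampled, which is the point being made above.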

 

 


To clarify the above for participants without a programming background:

Formats such as ps, pdf and svg are vector formats; they differ from raster images (which contain pixels) and can be drawn without interpolation.

Raster images can't be drawn without some sort of interpolation, and "none" / "default" is almost always nearest neighbor interpolation, as it is the simplest to implement.


Yes, good catch. I've edited the post to remove that example for now.

Here's a quiz. These are 8 images taken on different nights during the last couple of years with the exact same kit (800mm scope plus Lodestar). I typically make an impressionist record of the seeing most nights I'm out. So these 8 examples represent 2 each of what I recorded as 'excellent', 'good', 'ok', and 'poor'. Can you match them up?

1098143906_Screenshot2022-02-02at17_48_15.thumb.png.3c0a5a6563b6c5ee52526d3d000afe40.png

The above images are a reasonable real-world selection of what I see in front of me. All were produced with the same interpolation algorithm.

Let's assume Vlaiv is correct and I am never undersampled with this setup and I can therefore recover all the information by using e.g. Lanczos interpolation, regardless of how excellent the seeing is. Let's say I go ahead and apply that form of interpolation to everything I look at. My worry with doing that, as I mentioned earlier, is that interpolation methods are not free of artefacts, esp. the more sophisticated ones (the fact that the function offers me a choice of 19 interpolation methods tells its own story here). Would it not be 'purer' and certainly simpler to use a small pixel (*) camera? Again, speaking from experience using the exact same 'bad' interpolation methods with the ASI 290, they do a much better job with the small pixel camera, which is hardly surprising, to me at least. 

Martin

(*) Vlaiv, a couple of posts up you say pixels don't have a size and shape; I'm not sure what you're talking about, as both display pixels and sensor pixels do have a size and shape. Sure, they can only represent one value, but that isn't what I'm referring to when I talk about small pixels, and I guess you know that.

 

 

 

 

 


1 hour ago, Martin Meredith said:

Would it not be 'purer' and certainly simpler to use a small pixel (*) camera?

Using a smaller pixel camera:

1. In case you are under sampled to start with - it allows you to capture additional detail. Now I have to explain what sort of additional detail this is - it is not as if things will disappear if you use some level of under sampling. For example, if we use x2 larger pixels than optimum - everything will still be in the image; the only difference is that two close stars - that you think are two close stars - might be a bit harder to tell apart. It will be a bit harder to differentiate them.

We can all test this. We can take an image that is properly sampled - by checking the FWHM/1.6 rule - then bin that data, enlarge the image back to the starting size, and compare it to the original. I'm going to make such a comparison here to show you the level of difference.

This image at this scale is a 100% zoom of what is close to optimum sampling:

image.png.a1a9a1a55f0af7f3229d92c7e0efa884.png

I now resize it to a smaller size:

image.png.a06aa3ddf77376d4728f6c407b2fc25d.png

It now has half the number of pixels in both the x and y coordinates (I again cropped just the galaxy as the interesting feature and show it here at 100%).

image.png.3ccb438ec6bb758d81705bf9e9ef026b.png

This is scaled back up to the original size. Some of the detail has been lost due to the lower sampling - but it's not that things are missing - the effect is a bit different. Things are blurrier - less precisely defined. Stars are a bit larger / softer, and detail is not as well defined / sharp as in the original image. Some local contrast has been lost - compare the two images for the bridge detail, for example:

image.png.596e35588a7c7df9577d7f5d37c27bca.png

image.png.6b355635d9e36bcc53844e4e666ab037.png

So that is the effect of under sampling in astronomical images - almost exactly the same as using a smaller aperture or shooting in poorer seeing. Detail is lost - which shows how the two are related - but the level of detail loss is very subtle even if we go with 4"/px instead of 2"/px.

This is one of the reasons almost no one sees a difference in sharpness between mono and OSC sensors. OSC sensors in reality sample at half the rate of the same sensor in mono, but the difference is so subtle that it can barely be seen.

Under sampling is not bad.
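The bin-and-compare experiment above is easy to reproduce; a minimal numpy sketch of the two operations involved (average binning, then nearest neighbor enlargement back to the original size):

```python
import numpy as np

def bin2x2(img):
    """Average-bin a 2-D image by 2 in each axis (software binning)."""
    h, w = img.shape
    img = img[:h // 2 * 2, :w // 2 * 2]           # trim any odd edge rows/cols
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def enlarge_nn(img, factor):
    """Blow the binned image back up with nearest neighbor (pixel replication)."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

a = np.arange(16, dtype=float).reshape(4, 4)
b = bin2x2(a)                   # 2x2 result; each value is the mean of a 2x2 block

print(b[0, 0])                  # 2.5 = mean of [[0, 1], [4, 5]]
print(enlarge_nn(b, 2).shape)   # (4, 4) - same size as the original, coarser detail
```

Binning trades resolution for SNR; comparing `a` with `enlarge_nn(b, 2)` shows exactly the kind of softening described above.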

2. Using smaller pixels when you are properly sampled or already over sampled:

You will gain nothing - even if we account for pixel blur, in the over sampled scenario you really gain nothing. You lose plenty by using smaller pixels - you lose SNR without getting anything in return.

The simplest way to explain this is as follows:

Say you have an object that covers 100x100 pixels and you choose pixels that are half the size of the original pixels. With the new camera the object will cover 200x200 pixels - or x4 more pixels in total (10000 vs 40000). You did not change the aperture by switching to smaller pixels - it gathers the same amount of light as before. The object did not change its luminosity - it still gives off the same amount of light.

What did change is how that light is distributed - it is now divided among 40000 pixels instead of 10000. In this new setup each pixel is getting only 1/4 of the light compared to the initial setup. You sliced your signal to 1/4 of the original and thus need x4 the total imaging time to get the same signal and SNR as before.

If you can make an image in 4h / one night when properly sampled - you'll need to image for 2-3 full nights, 16h in total, to get the same SNR image with x2 smaller pixels.

This is why over sampling is very bad - especially for a beginner. Experienced imagers could possibly afford multi-night sessions and many, many hours spent on one target to compensate for small pixels - but in reality, why do that?
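The x4 arithmetic above follows directly from the pixel-area scaling; a one-function sketch (assuming the shot-noise-limited case described in the post):

```python
def time_multiplier(pixel_ratio):
    """Exposure-time multiplier needed to keep the same per-pixel SNR when
    the pixel's linear size is multiplied by pixel_ratio.

    Signal per pixel scales with pixel area (pixel_ratio squared), and
    shot-noise-limited SNR goes as sqrt(signal * time), so time must
    scale with 1 / pixel_ratio**2.
    """
    return 1.0 / pixel_ratio**2

print(time_multiplier(0.5))      # 4.0 - half-size pixels need x4 the time
print(4 * time_multiplier(0.5))  # 16.0 - i.e. 16h instead of 4h, as in the post
```

Read noise and sky background shift the exact numbers, but the area scaling dominates for faint targets.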

 

 

