
Jupiter sampling C14 F11 vs F18 - 11 Nov 2022


geoflewis


There seems to be some misunderstanding about how seeing works and how we correct for it.

I'll try to explain the different aspects of seeing, how they arise, what sort of impact they have on the image, and how we correct for them at different stages of processing.

The first thing to understand is that every point in the image is affected by seeing in a different way. There is no single PSF as in a long-exposure image, where more or less the whole image is blurred with the same PSF - the average of the seeing over the long exposure.

Each point in a planetary image is affected by a different PSF.

The following diagram shows how those PSFs are formed:

[Diagram: three aperture-wide "pillars" of atmosphere, one per image point, with distorted wavefronts]

Each point of the image sits at a slightly different angle to the optical axis. In the image above I exaggerated the angles and drew 3 "pillars" representing 3 points in the image. Each of those pillars is as wide as our aperture - so let's say 30cm across - and as tall as the atmosphere - 10-15km or more (as far up as the air is dense enough to cause significant wavefront distortion - I'm not sure exactly how high that is).

So we have one of these skinny "tubes" for every pixel in our image. For very close points / pixels, these tubes partly overlap. This means we won't have radically different wavefront aberrations for nearby points - the aberration sort of "morphs" between close points - but points further apart will have different wavefront aberrations.

The wavefront arrives at our atmosphere as a straight line (red line in the above image), but due to the different layers / densities of air it becomes distorted. The effect is very similar to a telescope mirror that is not a perfect curve but has some roughness to it.

From left to right I drew 3 different levels of roughness in the red wavefronts.

Once this wavefront is focused to a single point and constructive / destructive interference does its thing, we get the PSF at that point. For a perfect wavefront the PSF is an Airy disk.

[Image: Airy disk - the PSF of a perfect wavefront]

For a slightly deformed wavefront we get a slightly deformed Airy disk as the PSF:

[Image: slightly deformed Airy disk]

As the wavefront deforms further, so does the Airy disk:

[Image: more strongly deformed Airy disk]

For a very deformed wavefront we get a complete mess:

[Image: heavily distorted PSF / speckle pattern]
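For anyone who wants to play with this, here is a minimal numpy sketch of how a PSF comes out of a distorted wavefront (Fourier optics: the PSF is the squared magnitude of the Fourier transform of the pupil function). The smoothing kernel and RMS values here are purely illustrative assumptions, not real atmospheric statistics:

```python
import numpy as np

N = 256                                   # grid size (pixels)
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2) <= 1.0              # circular aperture of unit radius

rng = np.random.default_rng(0)

def psf_from_phase(rms_waves):
    """PSF for a pupil with smoothed random phase errors of given RMS (waves)."""
    # smooth white noise to get a spatially correlated phase screen
    # (a crude stand-in for real atmospheric turbulence statistics)
    noise = rng.normal(size=(N, N))
    kernel = np.exp(-(X**2 + Y**2) / 0.02)
    screen = np.fft.ifft2(np.fft.fft2(noise)
                          * np.fft.fft2(np.fft.ifftshift(kernel))).real
    screen -= screen[pupil].mean()
    screen *= rms_waves / screen[pupil].std()   # scale to the requested RMS
    field = pupil * np.exp(1j * 2 * np.pi * screen)  # waves -> radians
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    return psf / psf.sum()

airy = psf_from_phase(0.0)   # perfect wavefront -> Airy pattern
mild = psf_from_phase(0.1)   # slightly deformed Airy disk
mess = psf_from_phase(1.0)   # "complete mess" speckle pattern
```

Increasing the RMS from 0 to ~1 wave reproduces the progression in the images above, from clean Airy disk to speckle.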

Now, everything we've talked about so far is the "stationary" case.

We have shown how each point (not the whole image, but each point of the image) is distorted differently. When we blur an image, we distort each point with the same PSF - we don't use a different one for each pixel. With seeing, each point is distorted with a different PSF.

We can't correct for this yet. There is no sharpening that will fix the above.

The first thing we can correct is geometric distortion. Just as a vector can be written as X, Y and Z axis components, a wavefront can be decomposed into a sum of standard curves. These curves are called Zernike polynomials (defined over the unit circle, but that's not important here).

What is important is that there are "zero" and "first" orders of such polynomials, just as with regular polynomials. The zero order represents a "DC offset" and is not important for us, but the first order - which is the slope for regular polynomials - is here the wavefront tilt.

[Diagram: wavefront profile (red) with its overall tilt (orange)]

In the image above, the red line represents the profile of our wavefront (which is actually a 2D surface, but we look at just a cross section here for simplicity). The orange line is the general slope of this wavefront - the tilt.

This tilt is responsible for moving our PSF away from its true center - for displacing it.
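As a rough illustration (not something any stacking software literally does), tilt can be estimated by least-squares fitting a plane to the wavefront. Here is a small sketch with a made-up wavefront; the coefficients are arbitrary assumptions:

```python
import numpy as np

N = 128
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2) <= 1.0

rng = np.random.default_rng(1)
# hypothetical wavefront: a known tilt plus random roughness (in waves)
wavefront = 0.3 * X + 0.1 * Y + 0.05 * rng.normal(size=(N, N))

# least-squares fit of piston + tilt over the pupil
A = np.column_stack([np.ones(pupil.sum()), X[pupil], Y[pupil]])
piston, tilt_x, tilt_y = np.linalg.lstsq(A, wavefront[pupil], rcond=None)[0]
print(tilt_x, tilt_y)   # recovers roughly 0.3 and 0.1

# the fitted tilt displaces the PSF as a whole (about one lambda/D per
# wave of tilt across the pupil); the residual is what deforms the Airy disk
residual = wavefront[pupil] - A @ np.array([piston, tilt_x, tilt_y])
```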

Each point in our image will be somewhat displaced from its true position. Nearby points will be displaced similarly (but not identically) because of the overlap of those wavefront funnels mentioned at the start (the same pieces of atmosphere affect them).

For the stationary case, this creates geometric distortion.

A regular grid under such distortion would look a bit like this:

[Image: regular grid warped by tilt-induced geometric distortion]

Remember, this is only the tilt component of the wavefront aberration. The other components distort the Airy disk to a greater or lesser degree. Tilt only determines how much the Airy disk is displaced from its true position (which is why stars jump around in poor seeing rather than sitting still).

This sort of seeing error is corrected by alignment points in stacking software.
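The basic idea behind an alignment point can be sketched with plain FFT cross-correlation - measure the local displacement of a patch against a reference and shift it back. Real stackers like AS!3 use considerably more refined methods, so treat this purely as an illustration:

```python
import numpy as np

def patch_shift(ref, img):
    """Estimate (dy, dx) displacement of `img` relative to `ref`
    via FFT cross-correlation of a small patch."""
    r = ref - ref.mean()
    i = img - img.mean()
    corr = np.fft.ifft2(np.fft.fft2(i) * np.conj(np.fft.fft2(r))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrap-around peaks to negative shifts
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return dy, dx

# usage sketch: undo the measured local displacement before stacking, e.g.
#   dy, dx = patch_shift(ref_patch, warped_patch)
#   aligned = np.roll(warped_patch, (-dy, -dx), axis=(0, 1))
```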

However, for that to work, the image must be "stationary" when it is recorded - and we haven't yet discussed how it changes with time.

If the atmosphere were completely still / frozen in time, with all its higher and lower density regions, we would have just one frame. It would be affected by seeing - distorted, and blurred differently in different parts (remember, every point in the image has its own PSF - some are nice or not-so-nice Airy disks, others are complete distortions of the Airy disk).

The atmosphere changes with time, and we need to understand how, in order to get to the other types of blur.

[Diagram: atmospheric layers moving across the "funnel" of one image point]

Here we have a diagram of how the atmosphere changes with time. The vertical black lines represent our funnel - one point in the image.

The different colored rectangles represent different layers of the atmosphere. Some may have a smooth gradient, others can contain turbulent air.

[Image: laminar vs turbulent air flow]

Here we have an image showing laminar and turbulent air flow. Laminar flow is the smooth, gradient type, while turbulent flow is full of eddies and swirls.

Each of these layers can move at some speed, left or right, with respect to our funnel.

When a turbulent layer moves at some speed, so that the eddies and swirls currently in our funnel change significantly, our PSF changes significantly too.

Say we have a wind speed of 4 m/s and a 30cm aperture: it takes 0.3m / 4m/s = 0.075s, or 75ms, to completely "change" the contents of our funnel. If we only want the contents of the funnel to change by 10%, that takes 1/10th of that, or 7.5ms.

[Diagram: funnel at start (blue) and 75ms later (red) - the disturbance has moved one full aperture width]

In 75ms the swirls and disturbances move by one whole funnel width (blue is the funnel at the start, red at the end of this interval), so the disturbance inside the funnel will be completely different - the PSF will change 100%.

But in 7.5ms, this happens instead:

[Diagram: funnel after 7.5ms - the contents have shifted only about 10%]

The contents of the funnel move very little, and almost the same deformation remains inside it. The PSF changes only slightly.

Now, how the PSF changes with time depends on the speed at which a layer is moving and on how distorted / turbulent that layer is. If a layer is mostly laminar, without disturbances, then it doesn't matter whether it moves fast or slow - it won't make much difference, and its contribution will mostly be a good Airy disk. We call this steady seeing (even if some layers are moving at speed).

This usually happens when air moves over large, flat terrain without many heat sources to cause turbulent swirls. Deserts and oceans are prime candidates for this type of laminar motion (hint: Australia?).

The problem is turbulent air moving at speed. This is what defines the already-mentioned coherence time.

Why is the jet stream so detrimental to seeing? First, because it is turbulent, and second, because it can reach speeds of up to 150km/h. That is 35-40m/s, or about x10 faster than what we just discussed. For a 300mm telescope this means the whole content of the funnel changes in about 7.5ms - and if you want minimal change you need 10% of that, i.e. less than 1ms of exposure, to freeze it.

So that is the second type of blur that comes from seeing - one created by our exposure length and the changing PSF. It is a motion-blur type, and we need to avoid it by using an exposure short enough for our conditions.
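As a back-of-envelope helper, this is just the crossing-time rule of thumb from above in code form (not the formal coherence-time definition used in adaptive optics, which involves the Fried parameter):

```python
def funnel_crossing_ms(aperture_m, wind_ms):
    """Time (ms) for a turbulent layer to cross the whole aperture 'funnel'."""
    return aperture_m / wind_ms * 1000.0

def max_exposure_ms(aperture_m, wind_ms, change_fraction=0.1):
    """Exposure keeping funnel contents roughly fixed (default: <=10% change)."""
    return change_fraction * funnel_crossing_ms(aperture_m, wind_ms)

print(max_exposure_ms(0.3, 4))    # ~7.5 ms for a gentle 4 m/s layer
print(max_exposure_ms(0.3, 40))   # ~0.75 ms under a ~40 m/s jet stream
```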

Some sites have laminar air flow through much of the atmosphere and good seeing; there you don't have to worry as much about coherence time, as the PSF is mostly good and barely changing. But if there is at least one turbulent layer in the atmosphere, it is very important to figure out its speed and the resulting coherence time given our aperture size.

OK, so we now need to consider the third type of blur.

We have said that every part of the image is affected differently - each point has a different PSF. Not only that, but each point's PSF changes from frame to frame. We could say the PSF changes in each of three dimensions - the two image dimensions X and Y, and also the time dimension T.

Over very short distances in each of these dimensions it changes only slightly. Freezing the seeing means selecting an exposure short enough that the PSF is not changing much in the T direction.

Anyway, since each of our captured frames has a different PSF at each point, when we stack those frames (after accounting for the tilt component / geometric distortion), each point ends up affected by the average of its PSFs over time. This bit is important.

Just a moment ago we said we want to freeze the frame and avoid averaging PSFs - but now we average them. So what's the catch?

There are two parts to this: first, correcting for tilt / geometric distortion, and second, frame selection. In fact AS!3 is so clever that it can detect which parts of a single frame are sharp and which are not (remember - each point has a different PSF) and use only the sharp parts of frames in the stack.

[Screenshot: AS!3 quality estimator settings - global vs local (per alignment point)]

You can select whether the quality estimator works on the whole frame (global) or around each alignment point (the default and better option, as it allows parts of a sub to be used).
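To illustrate why local scoring helps, here is a crude per-region sharpness metric (gradient energy). AS!3's actual estimator is its own and certainly more robust - the patch size and metric here are just assumptions for the sketch:

```python
import numpy as np

def local_sharpness(frame, centers, box=64):
    """Crude per-alignment-point quality score: mean gradient energy
    in a box around each alignment point. Ranking these scores across
    frames lets sharp *parts* of otherwise mediocre frames be stacked."""
    h = box // 2
    scores = []
    for cy, cx in centers:
        patch = frame[cy - h:cy + h, cx - h:cx + h].astype(float)
        gy, gx = np.gradient(patch)
        scores.append((gy**2 + gx**2).mean())
    return scores
```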

In any case, once we remove these two major sources of blur - the motion blur of geometric distortion and the strong blur of heavy PSF distortion - we are left with only lightly distorted Airy disks at each point. When these average out, we get a nice Gaussian type of blur.

This is due to a mathematical principle called the central limit theorem, which states:

Quote

In probability theory, the central limit theorem (CLT) establishes that, in many situations, when independent random variables are summed up, their properly normalized sum tends toward a normal distribution even if the original variables themselves are not normally distributed.

https://en.wikipedia.org/wiki/Central_limit_theorem

Or in other words: even if individual PSFs are random in nature, their average tends toward a nice Gaussian shape - and we know how to sharpen up Gaussian-type blur (or other blurs that are nice and symmetric in nature, like the blur from the aperture, and ultimately their combination - which is what affects our planetary image in the end).
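This is easy to demonstrate numerically. The sketch below averages a few hundred PSFs with small random defocus / astigmatism errors (tilt is left out, since alignment has already removed it); the coefficient scales are arbitrary illustration values:

```python
import numpy as np

N = 128
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2) <= 1.0
rng = np.random.default_rng(2)

def lightly_distorted_psf():
    # small random defocus + astigmatism terms (radians); tilt omitted
    # because alignment / geometric-distortion correction removes it
    phase = (rng.normal(scale=0.5) * (2 * (X**2 + Y**2) - 1)
             + rng.normal(scale=0.5) * (X**2 - Y**2)
             + rng.normal(scale=0.5) * (2 * X * Y))
    field = pupil * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    return psf / psf.sum()

stack = sum(lightly_distorted_psf() for _ in range(500)) / 500
# each individual PSF is randomly lopsided; 'stack' comes out smooth and
# nearly rotationally symmetric - the Gaussian-ish blur sharpening can undo
```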


On 12/11/2022 at 16:12, Magnum said:

On the downside, the f25 image took much longer to stack, takes up much more disc space, and was dimmer so I had to use higher gain - yet the final image still looks smoother to me.

The downside of the huge file sizes and processing overhead is something that's putting me off the higher sampling rate, perhaps other than in excellent seeing, if we ever get that...🙄

