Jupiter - 30 October 2022


geoflewis

I wonder if oversampling helps by making the finest detail captured physically larger in the image than the pixel scale shot noise.

If the smallest details the image contains are larger (occupy more pixels) than the noise, then does that make it easier to sharpen the detail without sharpening that noise? 
Conversely, if the smallest details occupy only two pixels (as with proper sampling), does it become more difficult to sharpen that detail without also sharpening the pixel-scale noise, since the two are much closer in size?
 

 


1 minute ago, CraigT82 said:

I wonder if oversampling helps by making the finest detail captured physically larger in the image than the pixel scale shot noise.

If the smallest details the image contains are larger (occupy more pixels) than the noise, then does that make it easier to sharpen the detail without sharpening that noise? 
Conversely, if the smallest details occupy only two pixels (as with proper sampling), does it become more difficult to sharpen that detail without also sharpening the pixel-scale noise, since the two are much closer in size?
 

 

I don't think so.

Ultimately it depends on the sharpening method used, and recently I've come to think that the simplest method might be the most effective one - and no one is using it (there is no implementation in software).

That would be FFT division.

In any case - what happens to the noise and signal is best viewed in frequency domain.
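Since no software implements it, here is only a sketch of what such frequency-domain division could look like in numpy. The small `eps` floor is a regularisation I have added so the division doesn't blow up where the PSF spectrum is near zero; the PSF is assumed known and centred at the origin:

```python
import numpy as np

def fft_division_sharpen(image, psf, eps=1e-3):
    """Deconvolve by direct division in the frequency domain.

    image, psf: 2D float arrays of the same shape, psf centred at [0, 0]
    (wrap-around convention). eps: regularisation floor - where the PSF
    spectrum is near zero, plain division would amplify noise without bound.
    """
    img_f = np.fft.fft2(image)
    psf_f = np.fft.fft2(psf)
    # Regularised division (Wiener-like): H* X / (|H|^2 + eps)
    restored_f = img_f * np.conj(psf_f) / (np.abs(psf_f) ** 2 + eps)
    return np.real(np.fft.ifft2(restored_f))
```

With noise-free data and a well-behaved PSF this restores the blurred image almost exactly; on real data, `eps` trades noise amplification against sharpness.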

I'll make some graphs to try to explain what is going on.

[graph: signal spectrum of the "pure image"]

This is the signal in the frequency domain - the "pure image". The X axis represents frequency: closer to zero means coarser detail, further from zero means finer detail.

The telescope has an MTF that looks like this:

[graph: telescope MTF]

Seeing is very hard to model, and I honestly have no idea how I would draw the curve. I know how to draw the seeing curve without lucky imaging - that is easy, it is modeled by a Gaussian shape - but selection of best frames and use of alignment points "straighten up" things somewhat, so they change the curve. In general, for this example, we can ignore it.

The two above combine by multiplication in the frequency domain. That means the resulting graph will be high where both are high, and low if either of the two is low. The image captured with the telescope will thus have a graph looking like this:

[graph: image spectrum after multiplication by the MTF]

Noise has a more or less uniform distribution across the frequency domain, and it looks like this:

[graph: noise spectrum - roughly flat]

In the end, the two add to produce a frequency response that looks a bit like this:

[graph: combined signal + noise spectrum]

At some point the signal just drops to the level of the noise, and then it stays there.

Sharpening is effectively "lifting" this sloped curve back to the original:

[graph: original signal spectrum]

but while we lift it, we also lift the added noise.

When we lift the part where noise is larger than signal - we will just boost noise at those frequencies, not signal.

Two things determine how the combined image will look when we have noise: the first is the sampling rate, and the second is the level of noise, or SNR.

[graph: oversampled case - red: signal, blue: noise, green: combined]

When oversampled - like in the above image - the following happens. Red is signal, blue is noise and green is combined. When you oversample, you have signal in only part of the frequency spectrum - but noise is present all over the spectrum. This means that when you "straighten up" the curve, you end up straightening a part that is pure noise - there is no signal there.

Depending on the algorithm used - for example wavelets - this is the case of the "finest" slider, or the first one. If you oversample and don't touch that first slider, you won't boost that noise, and there is not much difference between a properly sampled and an oversampled image. But if you try to sharpen up that "finest" detail that is not there - you'll just boost the noise without doing anything to the signal.

This shows that in some cases oversampling can cause even more noise when sharpening than properly sampled data, as there is a high-frequency component of the noise that is not matched by any high-frequency component of the signal.
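A quick 1-D numpy sketch of this effect, with hypothetical numbers: a band-limited "oversampled" signal plus white noise, where boosting the signal-free top half of the spectrum leaves the signal untouched but multiplies the noise living there:

```python
import numpy as np

def boost_band(x, f_lo, gain):
    """Multiply all spectral components at or above f_lo (cycles/sample) by gain."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x))
    spec[freqs >= f_lo] *= gain
    return np.fft.irfft(spec, len(x))

rng = np.random.default_rng(0)
n = 4096
t = np.arange(n)
# "Oversampled" signal: all real detail lives below 0.25 cycles/sample
# (exact FFT bins chosen to avoid spectral leakage).
signal = np.sin(2 * np.pi * 200 / n * t) + 0.5 * np.sin(2 * np.pi * 800 / n * t)
noisy = signal + rng.normal(0.0, 0.05, n)

# "Sharpening" the signal-free top half of the spectrum leaves the signal
# untouched but multiplies the noise that lives there.
boosted = boost_band(noisy, 0.25, 4.0)
print(np.std(noisy - signal), np.std(boosted - signal))  # noise grows roughly 3x
```

The same boost applied to the noise-free signal returns it unchanged - there is simply nothing in that band to sharpen.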

We could actually do some nice experiments.

I can prepare some data that is:

a) properly sampled

b) over sampled

and I could also add different amounts of noise to each so we can determine what type of sharpening affects results in which way (we could all download data and try wavelets or other types of sharpening).

 


35 minutes ago, CraigT82 said:

I wonder if oversampling helps by making the finest detail captured physically larger in the image than the pixel scale shot noise.

If the smallest details the image contains are larger (occupy more pixels) than the noise, then does that make it easier to sharpen the detail without sharpening that noise? 
Conversely, if the smallest details occupy only two pixels (as with proper sampling), does it become more difficult to sharpen that detail without also sharpening the pixel-scale noise, since the two are much closer in size?

Probably that kind of noise is easier to reduce in the first place. Then sharpening may have a better effect. Depending on the software used, both of these processes may be done in the same step.

It is also easy to simulate - take some Juno probe or Damian Peach images, scale them as you want, blur them, add as much random noise as needed, and then process.

Edited by drjolo

53 minutes ago, geoflewis said:

Thanks Vlaiv, I think you may have hit the nail on the head. It may just be that it is easier to dial in the focus with a bigger 'oversampled' image on screen. I certainly found this to be true when Mars was last close to us in 2020. I was operating at ~F22 for Mars in 2020, but only F12 for Jupiter and Saturn last year, though of course Jupiter and Saturn were much lower down and imaging them at all was very challenging. So far this year I have imaged all three at F12, but I am going to experiment with different amplification to see what differences I get, especially with Mars' smaller diameter this year, but up at ~60° elevation.

Thanks for all your advice on this complex (to me anyway) topic.

I agree with you Geoff, imaging at the supposed optical focal ratio often results in far too small an image, which makes it more difficult to focus on the laptop screen, and to set alignment points in Registax or AutoStakkert.

John 


1 minute ago, johnturley said:

I agree with you Geoff, imaging at the supposed optical focal ratio often results in far too small an image, which makes it more difficult to focus on the laptop screen, and to set alignment points in Registax or AutoStakkert.

John 

Thanks John, this has been a very enlightening discussion for me.


Here are test simulation files that compare F/14.5 to F/21.75 on C14 under "regular" conditions:

- 200fps / 3 minute capture - total of 36000 subs captured

- stacked top 10% subs, so 3600 subs stacked for each

- Peak signal strength for optimum sampling is 200e (for oversampling it is 2.25 times less, as the light is spread over a 1.5 x 1.5 = 2.25 times larger surface).

- Poisson noise was used to model shot noise

- Read noise was modeled with Gaussian noise of 0.8e per stacked sub.

- Each image was first blurred with telescope PSF (same used for both, adjusted by sampling rate)

- Each image was additionally blurred with Gaussian blur (again adjusted for different sampling rate - so same in arc seconds) to simulate seeing effects.

optimum.fits

oversampled.fits

Images are normalized in intensity and saved as 32-bit FITS. The green channel from the above HST Jupiter image was used as baseline (I'm hoping the 8-bit nature of the baseline image won't have much effect, given that all the processing that went into the simulation was done on a 32-bit version of the data).
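For anyone who wants to build data like this themselves, here is a much-simplified sketch of the recipe - my own stand-in code, not the actual script: a single Gaussian blur represents telescope PSF + seeing, and square images are assumed.

```python
import numpy as np

def simulate_stack(truth, peak_e, n_subs, read_noise_e, blur_sigma_px, rng):
    """Blur a 'perfect' square image, then average n_subs noisy exposures.

    truth: 2D float array scaled 0..1. peak_e: peak signal in electrons per
    sub (e.g. 200 for optimum sampling, 200 / 2.25 for the x1.5 oversampled case).
    """
    # Gaussian blur applied in the frequency domain (stand-in for PSF + seeing).
    f = np.fft.fftfreq(truth.shape[0])
    fx, fy = np.meshgrid(f, f, indexing="ij")
    mtf = np.exp(-2 * (np.pi * blur_sigma_px) ** 2 * (fx ** 2 + fy ** 2))
    blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * mtf))
    expected = np.clip(blurred, 0, None) * peak_e

    stack = np.zeros_like(truth)
    for _ in range(n_subs):
        # Poisson shot noise on the signal + Gaussian read noise per sub.
        stack += rng.poisson(expected) + rng.normal(0, read_noise_e, truth.shape)
    return stack / n_subs

rng = np.random.default_rng(7)
truth = np.zeros((64, 64))
truth[20:44, 20:44] = 1.0                      # toy "planet"
optimum = simulate_stack(truth, 200.0, 100, 0.8, 1.5, rng)
```

Frame selection and seeing variation are deliberately left out; the point is only to show where the Poisson and Gaussian noise terms enter relative to the blur.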


1 hour ago, vlaiv said:


I just don't see why you would aim for F/22 or F/24 if you can sample optimally at F/14.5 given your pixel size. If you use a barlow - then you can "dial in" magnification by changing the barlow-to-sensor distance.

Maybe the best approach is to just try it one way and the other and choose what you feel gives you the best results. Most others use that sort of approach and don't really care that their images are oversampled, if it is easier for them to work that way.

 

Well, you have asked the same question as I have, but my conclusion is that the top imagers have already tried it both ways and all seem to have settled on what you consider to be oversampling. If they could get as good results using the lower sampling that your calculations suggest, then why aren't they doing that? It's not like they would choose to oversample for no benefit, is it?

I myself have tried using lower sampling then upscaling in post, but my side-by-side results against the higher-sampled images are worlds apart in terms of detail.

Also, loss of SNR from oversampling is a non-issue if using a large aperture. With my 12" scope I can still get 200fps at f/24 on Jupiter and never have to go over gain 275 with the 462C sensor, and the limit is the speed of the camera at that ROI, not the shutter speed. So even if that's oversampled, I'm still capturing more data than I can handle; 90-second captures are 18,000 frames each. With Mars at the same scale and a smaller ROI I'm getting 350fps and 31,500 frames in 90 seconds.

If I go down to f/15 I can max out the camera using a smaller ROI box, but then the images don't look as good to me anyway, so it seems pointless.

If using a smaller aperture scope then it would be an issue to get a decent SNR, but if I had a 6" scope I wouldn't be trying to image at the same scale anyway.

Also, we have to remember we can't use the f-ratio number universally in these conversations; whenever I state the f-number I'm using, it's only relevant for other 12" scopes. If I ever get a 1 metre scope then f/7 would be all I need to get the same sampling with the same camera, or if I buy myself a 1µm-pixel camera then my current scope would be very oversampled at its native f/10; it's all relative. The only number we should really be quoting is the pixel scale of the system, and I like to be at around 0.1"/pixel.

Lee

 

 

 


27 minutes ago, vlaiv said:

Here are test simulation files that compare F/14.5 to F/21.75 on C14 under "regular" conditions:

- 200fps / 3 minute capture - total of 36000 subs captured

- stacked top 10% subs, so 3600 subs stacked for each

- Peak signal strength for optimum sampling is 200e (for oversampling it is 2.25 times less, as the light is spread over a 1.5 x 1.5 = 2.25 times larger surface).

- Poisson noise was used to model shot noise

- Read noise was modeled with Gaussian noise of 0.8e per stacked sub.

- Each image was first blurred with telescope PSF (same used for both, adjusted by sampling rate)

- Each image was additionally blurred with Gaussian blur (again adjusted for different sampling rate - so same in arc seconds) to simulate seeing effects.

optimum.fits

oversampled.fits

Images are normalized in intensity and saved as 32-bit FITS. The green channel from the above HST Jupiter image was used as baseline (I'm hoping the 8-bit nature of the baseline image won't have much effect, given that all the processing that went into the simulation was done on a 32-bit version of the data).

I don't understand why you are doing simulations, why not do it with real captures? 


1 minute ago, Magnum said:

and all seem to have settled on what you consider to be oversampling

I'd like to point out that that is not my opinion - it is a fact based on the physics of light. It's not as if I chose the limit to my liking.

It is oversampling - regardless of what we consider it to be.

2 minutes ago, Magnum said:

Also loss of SNR from oversampling is a non issue if using a large aperture, with my 12" scope I can still get 200fps at f24 on Jupiter and never have to go over gain 275 with the 462C sensor and the limit is the speed of the camera at that ROI not the shutter speed.

Loss of SNR is always an issue. It does not matter if you have a large or small aperture.

Per-pixel photon flux is the same for critical sampling with a small aperture as with a large one. This is because the F/ratio is tied to pixel size: if you increase the aperture and keep the F/ratio the same, you also increase the focal length and equally decrease the light per pixel - so larger aperture but less light per pixel = constant photon flux per pixel.

The fact that you increase gain does not mean that more photons arrive. SNR is determined by the number of captured photons per exposure.

If you expose for 2ms at high gain and you get the same numerical value as exposing for 5ms at lower gain - that does not mean the SNR is the same.

In 2ms you'll collect x2.5 fewer photons than in a 5ms exposure - regardless of the gain setting. It is the number of photons that dictates the signal. Gain is only important for read noise - higher gain means less read noise.
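The arithmetic here is just shot-noise statistics: the signal is N photons, the shot noise is √N, so SNR = N/√N = √N, and gain appears nowhere. A tiny sketch with hypothetical numbers:

```python
import math

def shot_noise_snr(photons_per_ms, exposure_ms):
    """Shot-noise-limited SNR: signal N over noise sqrt(N), i.e. sqrt(N)."""
    n = photons_per_ms * exposure_ms
    return n / math.sqrt(n)

# Hypothetical pixel receiving 100 photons/ms - gain appears nowhere:
snr_2ms = shot_noise_snr(100, 2)          # sqrt(200) ~ 14.1
snr_5ms = shot_noise_snr(100, 5)          # sqrt(500) ~ 22.4
print(snr_5ms / snr_2ms)                  # sqrt(2.5) ~ 1.58
```

So a 5ms exposure beats a 2ms exposure by √2.5 in per-sub SNR, whatever the gain setting.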

7 minutes ago, Magnum said:

Also we have to remember we can't use the f ratio number universally in these conversations, whenever I state the f number im using, it's only relevant for other 12" scopes.  If I ever get 1 metre scope then f7 would be all I need to get the same sampling with the same camera, or if I buy myself a 1um pixel camera then my current scope would be very over sampled at its native f10, its all relative. The only number we should really be quoting is the pixel scale of the system, and I like to be at around 0.1"/pixel. 

F/ratio is fixed for critical / optimum sampling.

Why?

Because, as you put it, a 1m telescope at F/7 will give you the same sampling rate as, say, 333mm at F/21 - but the 1m has 3 times the resolving power of the 333mm aperture scope and can thus utilize a 3 times higher sampling rate.

If you double the aperture, you can double the sampling rate - which means doubling the focal length for the same pixel size:

2 x aperture / 2 x focal length = aperture / focal length = constant F/ratio
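Putting numbers on that: the diffraction cut-off frequency is 1/(λF), and Nyquist needs two pixels per cycle, so the critical F-ratio is 2 × pixel size / λ - aperture is nowhere in the formula. Assuming the F/14.5 figure in this thread comes from 2.9 µm pixels evaluated at 400 nm:

```python
def critical_f_ratio(pixel_um, wavelength_um):
    """F-ratio at which the pixel pitch Nyquist-samples the diffraction
    cut-off 1/(lambda * F): require pixel <= lambda * F / 2."""
    return 2.0 * pixel_um / wavelength_um

print(critical_f_ratio(2.9, 0.4))   # ~14.5 for a 2.9 um (ASI462-class) pixel
# Doubling the aperture doubles both resolving power and the usable focal
# length, so the critical F-ratio stays the same for any scope.
```

Other wavelength choices shift the number (e.g. evaluating in green light gives a somewhat higher F-ratio), which is worth keeping in mind when comparing figures.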


6 minutes ago, Magnum said:

I don't understand why you are doing simulations, why not do it with real captures? 

Someone wanted to know the impact of noise vs sampling rate.

You can never perform a proper comparison with real captures, as there are so many variables that can get in the way.

Who is going to guarantee that you'll get the same number of usable subs in both captures? And if you use the same number of subs, who is going to guarantee that the blur will be the same? Seeing is random. It changes by the minute.

In the above simulation all the variables are kept the same except the sampling rate, and people can see, for their style of processing (deconvolution / wavelets), under the given conditions (10% good frames, 200fps, 3-minute capture, excellent seeing, optimum vs x1.5 oversampling), which gives them better results.

I think it is useful for many, as most people won't bother doing the comparison under the night sky - they want to get actual capturing done. This way they can test their processing against different conditions without wasting a good night of imaging on a comparison.


On 07/11/2022 at 16:40, vlaiv said:

 

F/ratio is fixed for critical / optimum sampling.

Why?

Because, as you put it, a 1m telescope at F/7 will give you the same sampling rate as, say, 333mm at F/21 - but the 1m has 3 times the resolving power of the 333mm aperture scope and can thus utilize a 3 times higher sampling rate.

If you double the aperture, you can double the sampling rate - which means doubling the focal length for the same pixel size:

2 x aperture / 2 x focal length = aperture / focal length = constant F/ratio

Sorry, I didn't type that out very well - I understand how aperture affects sampling.

I didn't mean sampling for the scope, I meant oversampled for the seeing. Even with lucky imaging there is a practical seeing limit (somewhere between 0.06 and 0.1"/pixel). I know you previously said that seeing doesn't affect optimal sampling, but in practice I think the seeing does limit how finely we can sample, no matter how short the exposures.

I may be wrong on this - I'm just talking from my experience.

all the best 

Lee

 


3 minutes ago, Magnum said:

Sorry, I didn't type that out very well - I understand how aperture affects sampling.

I didn't mean sampling for the scope, I meant oversampled for the seeing. Even with lucky imaging there is a practical seeing limit (somewhere between 0.06 and 0.1"/pixel). I know you previously said that seeing doesn't affect optimal sampling, but in practice I think the seeing does limit how finely we can sample, no matter how short the exposures.

I may be wrong on this - I'm just talking from my experience.

all the best 

Lee

 

Actually, in planetary lucky imaging we choose to ignore seeing effects when choosing the sampling rate.

There is very good rationale behind this and it works well.

Most of the seeing blur comes from motion blur - change in the PSF over time. When we say we have to use short exposures to freeze the seeing, that is what we are exploiting - the fact that on short time scales the PSF is fixed. When we integrate for longer, we record a superposition of different PSFs - and that acts as motion blur, creating more blur than there might initially be.

That is one of the reasons we see much more detail in images than when observing. Our eyes and brain integrate the image for much longer than the typical exposure for lucky imaging. We look at something like 30fps (so 33ms integration), while most of the time the coherence time is 5-6ms.

The second important bit is that we end up choosing subs where the dominant component of the wavefront error is tilt rather than the higher-order terms of the Zernike polynomial. What this really means is that the software chooses subs that are only geometrically distorted rather than optically. (Active optics systems that exist for amateur imaging also deal with this first-order component - tilt - unlike adaptive optics, which tries to restore the wavefront with higher precision.)

If you look at this recording of the Moon, you will notice this effect:

[animation: Seeing_Moon.gif - seeing-induced geometric distortion of the Moon]

With an appropriate choice of exposure length (and frame selection) we mostly get geometric distortion. This is dealt with in software by using alignment points - the software is able to "bend" the image back into its proper shape.
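As an illustration, the tilt-only correction amounts to estimating a local shift and rolling it back. Here is a toy integer-pixel version using cross-correlation - my own sketch, not literally what Registax or AutoStakkert do:

```python
import numpy as np

def estimate_shift(ref, patch):
    """Return the (row, col) shift that, applied to `patch` with np.roll,
    best re-aligns it with `ref` (integer pixels, cross-correlation peak)."""
    corr = np.real(np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(patch))))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Convert wrap-around peak coordinates to signed shifts.
    return [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]

rng = np.random.default_rng(3)
ref = rng.random((32, 32))
patch = np.roll(ref, (3, -5), axis=(0, 1))    # seeing "tilt" = pure local shift
shift = estimate_shift(ref, patch)
aligned = np.roll(patch, shift, axis=(0, 1))  # geometric distortion undone
```

Real stacking software does this per alignment point with sub-pixel interpolation, but the principle - measure the tilt, undo it - is the same.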

In the end we do have some impact from the atmosphere - but here we come to the final piece of the puzzle.

There is a very big distinction between how the atmosphere affects the image (especially after stacking) and how the aperture affects it.

With aperture we have a clear cut-off point due to the nature of light - the Airy disk and its representation in the frequency domain. This can be seen on the MTF graph for a telescope, which looks like this:

[graph: telescope MTF with its cut-off]

The MTF graph shows the "telescope filter response" for the image, and this filter drops to zero at some point. This means that all the frequencies above the critical one are effectively killed off - multiplied by zero (and anything multiplied by zero is simply zero - there is no way of restoring it).

Seeing-induced blur behaves differently - it is much more like a Gaussian shape (in fact, mathematically, stacking enough subs under the same seeing conditions will produce exactly a Gaussian shape in the limit), and the Gaussian curve has the following property:

[graph: Gaussian curve]

It never reaches exactly 0. This means that while seeing attenuates the higher frequency components, it never completely removes them.

What this really means is that, given enough SNR, the influence of seeing can always be reversed, but the influence of aperture can never be.
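To make the distinction concrete: the ideal circular-aperture MTF is (2/π)(arccos ν − ν√(1 − ν²)) up to the cut-off and exactly zero beyond, while a Gaussian seeing-style response only decays. A small sketch (the Gaussian width is an arbitrary illustrative value):

```python
import numpy as np

def diffraction_mtf(nu):
    """MTF of an ideal circular aperture; nu = frequency / cut-off frequency.
    Identically zero for nu >= 1 - that information is gone for good."""
    nu = np.minimum(np.asarray(nu, dtype=float), 1.0)
    return (2 / np.pi) * (np.arccos(nu) - nu * np.sqrt(1 - nu ** 2))

def gaussian_mtf(nu, sigma=0.3):
    """Gaussian stand-in for seeing blur - attenuates but never reaches zero."""
    return np.exp(-np.asarray(nu, dtype=float) ** 2 / (2 * sigma ** 2))

print(diffraction_mtf(1.5))   # 0.0 - nothing left to restore
print(gaussian_mtf(1.5))      # tiny but positive - recoverable given enough SNR
```

Dividing out `gaussian_mtf` is always possible in principle (SNR permitting); dividing out `diffraction_mtf` beyond the cut-off would be division by zero.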

When we sharpen, we are reversing the effects of these low-pass filters, and that is the second reason why we get more detailed images than we can ever see at the eyepiece - our brain can't sharpen the image, while computers can. Sharpening is not making up detail - it is restoring detail that has been attenuated in the frequency domain, and the better the sharpening algorithm, the more accurate the restoration. However, we can only restore up to the point where the aperture performs its cut-off.

For this reason we always use the aperture-related critical sampling. Seeing will sometimes prevent us from sharpening all the way, but sometimes the recording can be restored to the maximum detail allowed by the aperture.


On 11/07/2022 at 00:04, astroavani said:

A beautiful result, friend geoflewis!
Interestingly, I only get good results using the C14 at f/22 with the PM 2X. I have tried many times on nights of bad seeing to use the native f/ratio, but the results were not good.
I think it must be related to sampling and to the ASI 290, which is very close to the 462 you use. As it has a very small pixel, it seems that a larger image makes the final result more pleasing.

I didn't think my simple comment would yield such an interesting discussion.
First of all I want to thank Vlaiv, Geof and others who participated in this post as this is of great interest to me.
I always try to juggle physics with practice and try to understand what can go wrong when it doesn't.
As some have said, I just find that processing oversampled images to a good result is easier than with correctly sampled images. I can't explain why; it could be a deficiency of mine - that I don't know how to process at f/11 or f/14 correctly.
The impression I've always had is that images at f/22 are smoother, so they're much easier to work with.
I'm a bit of a purist astrophotographer: I don't do derotations, I don't crop moons to process them separately, and I never go over 90 seconds of capture to avoid rotation smearing. So I actually use a very simple but effective processing line that gives me good results, and although all the theory says that if I capture at f/14 with the C14 I should get good results, practice has not yet been able to show me that.

Edited by astroavani

3 hours ago, astroavani said:

I didn't think my simple comment would yield such an interesting discussion.
First of all I want to thank Vlaiv, Geof and others who participated in this post as this is of great interest to me.
I always try to juggle physics with practice and try to understand what can go wrong when it doesn't.
As some have said, I just find that processing oversampled images to a good result is easier than with correctly sampled images. I can't explain why; it could be a deficiency of mine - that I don't know how to process at f/11 or f/14 correctly.
The impression I've always had is that images at f/22 are smoother, so they're much easier to work with.
I'm a bit of a purist astrophotographer: I don't do derotations, I don't crop moons to process them separately, and I never go over 90 seconds of capture to avoid rotation smearing. So I actually use a very simple but effective processing line that gives me good results, and although all the theory says that if I capture at f/14 with the C14 I should get good results, practice has not yet been able to show me that.

Hi Avani, thanks for your observations and contribution to this topic, which seem to align with what most of the top planetary imagers do. I also don't know why the field experience of so many experts doesn't support the mathematical theory. There must be something about imaging through Earth's atmosphere (or something else) that changes the maths in a way that I do not understand and certainly can't explain.

