
In theory, would a TS Optics Quad 100 with a x2.5 Powermate give as good an image as a C8 Edge HD for planetary?



1 hour ago, Peter Drew said:

No, in theory or in practice, all other things being equal. For planetary observation or photography it could be argued that the Edge version is not strictly needed. 😀

The Celestron White Paper shows spot diagrams for various SCT's and the Edge was the sharpest on axis. This is also my experience having owned a couple. 


I think the Edge would have, well, the edge. For planetary you need resolution to pull out the detail. Seeing is a secondary consideration because you use fast frame-rate imaging and only stack the frames from moments when the seeing is steady. You basically need aperture for resolution, though I've heard the benefit tails off after around 12-14", and I'm not sure why that is.


19 minutes ago, Lockie said:

The Celestron White Paper shows spot diagrams for various SCT's and the Edge was the sharpest on axis. This is also my experience having owned a couple. 

It wasn't my "argument"; I've seen this repeated several times "elsewhere". I have a number of SCTs but have not yet been fortunate enough to own or try an Edge version. 😀


1 hour ago, Lockie said:

The Celestron White Paper shows spot diagrams for various SCT's and the Edge was the sharpest on axis. This is also my experience having owned a couple. 

I would have thought that in theory it shouldn't make any difference, but in practice I wondered whether the Edge HD optics tend to be figured to a higher degree of accuracy than the standard SCT optics.

When I purchased my CPC 9.25, Rother Valley Optics advised me that if my main interest was in viewing planets visually, then it wasn't worth paying the extra for the Edge HD version.

John


2 hours ago, Lockie said:

The Celestron White Paper shows spot diagrams for various SCT's and the Edge was the sharpest on axis. This is also my experience having owned a couple. 

I'm going to make some bold statements below :D

Although the Edge is sharper on axis than the regular C8, for imaging it is not the crucial thing. It does help, and it is better to have perfect optics, but I stress again: it is not crucial.

It is much more important for visual. Why is that? Well, any symmetric aberration that happens on axis (I don't mean coma or astigmatism, which can be due to poor collimation or design features off axis) will alter the Airy pattern in a predictable way: it lowers the Strehl ratio. The same thing happens with, for example, a central obstruction. There is a calculation showing that an SCT's central obstruction is roughly equivalent to 1/4 wave of spherical aberration on an unobstructed scope, or close to it.

Why is it not crucial? For the same reason we can obtain sharp images despite a central obstruction. I borrowed this nice graph from a CN post on a similar topic (discussing central obstruction):

[Image: MTF curves for different apertures and central obstructions]

It shows the MTF, or modulation transfer function, for different apertures and central obstructions; a similar graph can be plotted for various aberrations and a less-than-tight spot diagram.

What does this graph represent? On the X axis we have "detail", or more precisely spatial frequency: when an image is represented in the frequency domain by the Fourier transform, it is a composite (sum) of sine waves with different frequencies and phases. The X axis represents these frequencies, with lower frequencies (longer wavelengths, "larger detail") to the left and higher frequencies (shorter wavelengths, "finer detail") to the right.

The Y axis represents attenuation: by how much any given frequency is attenuated. A value of 0.6 on the Y axis means that a particular frequency is attenuated to 60% of its level in a fully resolved image. The line shows that as we go higher in frequency (finer and finer detail) there is more attenuation, until we reach 0; no frequency beyond that point is visible, as we have reached the limit of the aperture.

Attenuation can be viewed as contrast in visual use: the higher the line at a particular place, the greater the contrast at that frequency and the easier it is to see features of "that size" (it is not strictly true to equate frequency and feature size, but it is close enough for a rough understanding).

This graph explains why an unobstructed aperture gives the best contrast for visual, and why a larger scope provides finer detail than a smaller aperture even when somewhat obstructed.
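To make the MTF idea concrete, here is a minimal numerical sketch (assuming NumPy is available; all function and variable names are mine, not from any telescope software) that computes the MTF of a circular aperture as the autocorrelation of its pupil, with and without a central obstruction:

```python
import numpy as np

def mtf(obstruction_ratio, n=256, aperture_frac=0.25):
    """MTF of a circular aperture; obstruction_ratio = CO diameter / aperture diameter."""
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    r = np.hypot(x, y)
    radius = n * aperture_frac
    pupil = ((r <= radius) & (r >= radius * obstruction_ratio)).astype(float)
    # The OTF is the autocorrelation of the pupil; compute it via FFTs
    otf = np.abs(np.fft.ifft2(np.abs(np.fft.fft2(pupil)) ** 2))
    otf = np.fft.fftshift(otf)
    return otf / otf.max()

clear = mtf(0.0)
obstructed = mtf(0.33)          # roughly a C8-like central obstruction
center = clear.shape[0] // 2
mid = center + int(0.15 * clear.shape[0])   # a mid spatial frequency
print(clear[center, mid], obstructed[center, mid])
```

At mid spatial frequencies the obstructed aperture shows lower contrast, which is exactly the dip the graph above illustrates; the cutoff itself depends only on the aperture diameter.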

When it comes to imaging, something else happens: there is a processing stage, and in particular sharpening. What does sharpening do? It "restores" attenuated frequencies. It is a bit like having an equalizer control (like on old stereos):

[Image: graphic equalizer sliders on a stereo]

and the higher we push a certain frequency, the more it is amplified. We want to amplify all those attenuated frequencies to restore them to their original amplitude of 1.

This is in essence what wavelets in Registax, for example, do (we could discuss those further, but to a first approximation that is what happens):

[Image: Registax wavelets panel with layer sliders]

The bars here are the levers of the "equalizer": the more you push one to the left, the more you amplify that frequency range. The first layer is the finest frequencies (rightmost in the MTF graph above), and subsequent layers go down to coarser frequencies. The number next to each slider is the amplification factor; in this particular case the highest frequencies are amplified by x13.4, meaning the person doing the wavelets judged that those frequencies were at roughly 7.5% of their original value (1/13.4), so they need to be multiplied by 13.4 to get back to 1.

If you have high enough SNR you can in principle restore all frequencies back to their original value - a fully resolved image, regardless of the shape of the MTF. This also means that the more the MTF deviates from a clear aperture (Strehl 1), the higher the SNR needed to do the restoration. This is why it is better to have a sharper scope (and a smaller CO) - but in principle it is not essential.

The main problem is of course noise. Noise is also distributed (randomly) over different frequencies, and the more you boost a certain frequency of the signal, the more you boost that frequency component of the noise, making it larger and more obvious. SNR helps there: if you have high enough SNR, you can boost the signal to the needed level while the noise remains low enough not to show.
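As a hedged 1-D illustration of this SNR point (my own toy example, not from any imaging package): blur a signal with a Gaussian MTF that attenuates but never quite reaches zero, add a tiny amount of noise, and dividing the spectrum by the MTF brings the original back almost exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 512
signal = np.zeros(n)
signal[200:260] = 1.0                        # a "feature" with sharp edges (fine detail)

freqs = np.fft.fftfreq(n)
mtf = np.exp(-(freqs / 0.3) ** 2)            # Gaussian MTF: attenuates, never exactly 0

blurred = np.fft.ifft(np.fft.fft(signal) * mtf).real
noisy = blurred + rng.normal(0.0, 1e-6, n)   # very high SNR

# restoration: boost every frequency back by 1/MTF
restored = np.fft.ifft(np.fft.fft(noisy) / mtf).real

print(np.abs(blurred - signal).max())        # blur error is large
print(np.abs(restored - signal).max())       # restoration error is just amplified noise
```

With lower SNR (try sigma 0.01 instead of 1e-6) the amplified noise swamps the restored detail, which is the practical limit described above.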

Here is the bold statement:

In principle, given high enough SNR, we could fully resolve any astro image, even a long-exposure DSO image influenced by seeing, up to the resolution provided by the aperture. This follows from the above explanation of frequency restoration and the fact that the PSF in the case of seeing is a Gaussian distribution whose MTF is also Gaussian, which never falls to 0 even at infinity. The limiting factor can thus only be the aperture of the scope. In real life we struggle to get decent SNR, let alone enough of it to do complete frequency restoration.


57 minutes ago, vlaiv said:

This graph explains why an unobstructed aperture gives the best contrast for visual, and why a larger scope provides finer detail than a smaller aperture even when somewhat obstructed.

When it comes to imaging, something else happens - there is a processing stage

Great post Vlaiv. I understand the first, visual part and I'm intrigued to learn your ideas in the second: the idea of restoring frequencies lost to CO, collimation errors, optical quality etc. in processing. I would be very interested in seeing images that bear this out.

An easy one might be to image slightly de-collimated and then restore the frequencies lost across the whole spectrum. Thoughts?


9 minutes ago, jetstream said:

Great post Vlaiv. I understand the first, visual part and I'm intrigued to learn your ideas in the second: the idea of restoring frequencies lost to CO, collimation errors, optical quality etc. in processing. I would be very interested in seeing images that bear this out.

An easy one might be to image slightly de-collimated and then restore the frequencies lost across the whole spectrum. Thoughts?

A lot of work was done on this when the fault with the HST was found, before a hardware fix was available. Google "HST image restoration" for some examples.

Various forms of deconvolution were tried.

Regards Andrew 


19 minutes ago, andrew s said:

A lot of work was done on this when the fault with the HST was found, before a hardware fix was available. Google "HST image restoration" for some examples.

Various forms of deconvolution were tried.

Regards Andrew 

Thanks Andrew.


Actually, if true, this points to using the largest aperture you can for imaging, since processing can help with the effects of CO etc.

Not to say that smaller apertures don't produce fine images - they sure can. I see why Astroavani and Kokathaman use the scopes they do - Avani has been helpful to me in understanding imaging and I have a long way to go... I am going to try a C8 down the road, I think, for lunar. For me, I want to start with as many things stacked in my favour as possible - i.e. accurate collimation, good optics, and understanding the effects of IR (frequencies) vs seeing...

To the OP - a very good thread you've started, sorry to sway the convo a bit!


Don't forget that other issues come into play: with increased aperture come cost, increased weight, longer focal length etc. There is always a balance to be struck.

Image restoration is in practice a compromise, with artifacts appearing if it's pushed too far on realistic images.

Do all you can to get good images - collimation, cool-down, high SNR etc. Then enhance if need be.

Regards  Andrew 


21 minutes ago, jetstream said:

Great post Vlaiv. I understand the first, visual part and I'm intrigued to learn your ideas in the second: the idea of restoring frequencies lost to CO, collimation errors, optical quality etc. in processing. I would be very interested in seeing images that bear this out.

An easy one might be to image slightly de-collimated and then restore the frequencies lost across the whole spectrum. Thoughts?

Collimation errors are somewhat different - I think I excluded them in my post:

1 hour ago, vlaiv said:

Well any sort of symmetric aberration which happens on axis (I don't mean coma or astigmatism that can be due to poor collimation or design features off axis) will alter the Airy pattern in a predictable way

I think the aberration needs to be symmetric - I might be wrong about that.

As for frequency restoration, we do it regularly when we sharpen an image using wavelets. Although wavelets seem somewhat "magical", they operate on the principle given above.

I can briefly explain the principle of operation of wavelet sharpening and give you an example of how to do it by hand. I will then expand on it a bit and present a better approach based on second-generation wavelets and the Lanczos kernel. It is a bit mathematically involved but I'll try to keep it simple and understandable.

Let's for now just look at the MTF diagram above and think about what it means - it has frequencies on the horizontal axis. We can think of an image as a function in 2D space, where pixel intensity is the value of the function at the coordinates given by X and Y. By Fourier analysis, such a function can be represented as a sum of sine waves, each with a different frequency (and phase and amplitude). The important word in the previous sentence is "sum".

I'll now briefly skip to an analogy with numbers and digits. Imagine you have a number like 314 - it has 3 digits: one representing hundreds, the second multiples of ten, and the last ones. Such a number is in fact 3*100 + 1*10 + 4*1. Now let's think about how we can get each of those digits by some math - isolate it from the rest of the number. Here is what I propose: we find a way to replace the digit in any one place with 0. Then all we need to do is subtract such a number from the original number.

If we want hundreds we do 314 - 014 = 300. If we want the digit in the second place (tens) we do the same: 314 - 304 = 010, and of course for ones: 314 - 310 = 4.
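The digit trick above can be written out as a few lines of code (a toy sketch; the helper name is my own):

```python
def zero_digit(number, place):
    """Return number with the digit at 10**place replaced by 0."""
    digit = (number // 10 ** place) % 10
    return number - digit * 10 ** place

n = 314
hundreds = n - zero_digit(n, 2)   # 314 - 014 = 300
tens = n - zero_digit(n, 1)       # 314 - 304 = 10
ones = n - zero_digit(n, 0)       # 314 - 310 = 4
print(hundreds, tens, ones)       # 300 10 4
```

Note that the isolated pieces sum back to the original number, just as isolated frequency bands sum back to the original image.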

If we want to "isolate" a particular frequency and do something with it, we need to take the original function, produce a function that has a value of 0 for that particular frequency, and subtract it from the original. The result will be only that frequency.

How do we do that - "kill off" a certain frequency? Well, it is called a band filter - we need to filter out that particular frequency. With images it can be done with convolution - an operation in the spatial domain that is equivalent to multiplication in the frequency domain. In our example with numbers we would do 314 - (011 & 314) = 314 - 014 = 300, where & stands for "multiply each digit by the corresponding digit" (multiplying frequencies).

Here comes the funny part: blurring an image is convolution. If we blur with a Gaussian kernel (so-called Gaussian blur) we will be "killing off" high frequencies. Take the image we want to sharpen, make a copy, blur it some more and subtract the two - what is left is only the high-frequency components of the original image. Now we take that high-frequency image, multiply it by some factor to restore the frequencies, and add it back to the blurred copy, and we get a sharpened original.

I know it sounds funny - blur the image more in order to sharpen it - but it is just like the numbers above: to isolate a particular digit we need a version of the number with 0 in place of the wanted digit. Blurring is just that - reducing the value of a "particular digit".
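The whole "blur more to sharpen" trick can be sketched in 1-D (assuming NumPy; a crude box-filter blur stands in for Gaussian blur, and the boost factor of 5 is arbitrary):

```python
import numpy as np

def box_blur(signal, width):
    """Crude blur: three passes of a box filter approximate a Gaussian."""
    kernel = np.ones(width) / width
    out = signal.astype(float)
    for _ in range(3):
        out = np.convolve(out, kernel, mode="same")
    return out

base = np.zeros(256)
base[100:140] = 1.0                 # a sharp "feature"
seen = box_blur(base, 5)            # the "planet" as delivered by the scope
more = box_blur(seen, 5)            # blur it a bit more
band = seen - more                  # high-frequency band isolated by subtraction
sharpened = more + 5.0 * band       # boost the band and add it back
```

The edges of `sharpened` are steeper than those of `seen`, which is exactly the sharpening effect described - at the cost of some overshoot if the boost is pushed too far.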

Do this with multiple blurred images, each with a progressively larger blur, and you will be able to isolate different frequency bands that you can then "boost" by different amounts.

This is in essence what wavelets do. Let's do an example to see how it works in practice.

First we take some image like this:

[Image: test image with lettering, unblurred]

then we apply a Gaussian blur - this will be our "planet" that we will attempt to sharpen up a bit:

[Image: Gaussian-blurred copy, the baseline "planet"]

The next step is to blur it a bit more, like this:

[Image: the same image blurred a bit more]

Then we subtract the more-blurred version from the base blurred version, and we get this:

[Image: difference of the two blurred versions, stretched]

You can already see that the difference, containing the higher frequencies, is easier to read than either blurred version - it is however low contrast (stretched here; in reality it would be a rather "smooth" gray with barely visible lettering).

We can now multiply that image by some number to "enhance" it - for example 5 - and add it to the "more blurred" version to get a somewhat sharper version than our blurred baseline:

[Image: sharpened result after adding the boosted band back]

Side by side like this it is hard to see the improvement, so I'll create a blinking gif to make the difference easier to see - here it is, blurred baseline vs the slightly sharpened version:

[Animation: blink comparison of the blurred baseline vs the sharpened version]

We only did one "band" of frequencies and it already shows an improvement in sharpness.
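The multi-band version - progressively larger blurs defining the bands, each with its own "slider" - can be sketched like this (assuming NumPy; the sigmas and boost factors are arbitrary illustrative values, not Registax's):

```python
import numpy as np

def gaussian_blur1d(x, sigma):
    """1-D Gaussian blur via an explicit kernel (simple and self-contained)."""
    radius = int(4 * sigma) + 1
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t ** 2 / (2 * sigma ** 2))
    return np.convolve(x, k / k.sum(), mode="same")

signal = np.zeros(512)
signal[200:280] = 1.0
image = gaussian_blur1d(signal, 6.0)            # the "seeing-blurred" data

# decompose into bands with progressively larger blurs
sigmas = [1.0, 2.0, 4.0, 8.0]
layers, residual = [], image
for s in sigmas:
    smoother = gaussian_blur1d(residual, s)
    layers.append(residual - smoother)          # band between two blur scales
    residual = smoother

boosts = [8.0, 4.0, 2.0, 1.5]                   # one "slider" per layer, finest first
sharpened = residual + sum(b * l for b, l in zip(boosts, layers))
```

Summing the layers with all boosts set to 1 reconstructs `image` exactly; boosting the finer layers by more than 1 is the "equalizer" push toward the fully resolved signal.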

I would like to add that Gaussian blur is not the best blur for this. The Fourier transform of a Gaussian is a Gaussian, which means that when we blur with a Gaussian we are multiplying in the frequency domain by a Gaussian. We won't have a proper filter in the frequency domain; rather, some frequencies will be attenuated a bit and some more, but none will be exactly 0.

Compare two shapes:

[Image: Gaussian filter curve (blue) vs ideal cutoff filter (black)]

What we want is a blur shaped like the black line, which gives a cutoff filter, but with Gaussian blur we get the blue line - and that is why Gaussian blur does not give us optimum results. A cutoff filter is very hard to implement as a blur - it is in principle the sinc function, or sin(x)/x, as can be seen on this graph:

[Image: graph of the sinc function, sin(x)/x]

but the sinc function has values all the way to infinity and our image is limited in size, so we can't use it for blurring. There is, however, another function - a windowed sinc function, and in particular the Lanczos kernel, which is a sinc function windowed by another sinc function. We can, for example, compare a truncated sinc blur (limited by the finite size of the image) with a Lanczos blur; their respective filters are:

[Image: frequency responses of the truncated sinc (zig-zag) and Lanczos (yellow) filters]

We see that the yellow line looks much more like a cutoff filter than the Gaussian. The zig-zag pattern of the sinc is because the sinc is limited to the size of the image - otherwise it would tend to a perfect cutoff for an infinite sinc function, like this:

[Image: partial sums of the sinc filter approaching a perfect cutoff]

The more frequencies we include, the closer it gets to a perfect cutoff - the first term is just a sine, but as we increase the "range" of the sinc it tends to a perfect cutoff as the sinc tends to infinity in spatial extent.
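To see the difference between the Gaussian and a windowed-sinc (Lanczos) filter numerically, one can compare their frequency responses (a sketch assuming NumPy; the kernel sizes and cutoff frequency are my own choices for illustration):

```python
import numpy as np

def lanczos_kernel(a=3, cutoff=0.25, n_taps=25):
    """Lanczos-windowed sinc low-pass kernel; cutoff in cycles/sample."""
    t = np.arange(n_taps) - n_taps // 2
    x = 2 * cutoff * t
    k = 2 * cutoff * np.sinc(x) * np.sinc(x / a)
    k[np.abs(x) > a] = 0.0                      # window: zero outside +/- a lobes
    return k / k.sum()

def gaussian_kernel(sigma, n_taps=25):
    t = np.arange(n_taps) - n_taps // 2
    k = np.exp(-t ** 2 / (2 * sigma ** 2))
    return k / k.sum()

n = 1024
freqs = np.fft.rfftfreq(n)
resp_l = np.abs(np.fft.rfft(lanczos_kernel(), n))
resp_g = np.abs(np.fft.rfft(gaussian_kernel(2.0), n))

# well inside the passband the Lanczos response stays near 1,
# while the Gaussian already attenuates noticeably
passband = freqs < 0.12
print(resp_l[passband].min(), resp_g[passband].min())
```

This is the picture above in numbers: the Lanczos filter approximates a flat passband with a sharp cutoff, while the Gaussian sags across all frequencies.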

OK, that sort of concludes frequency restoration in the spatial domain, or "How to blur an image more in order to make it sharper" :D

There are other frequency-restoration techniques such as deconvolution, inverse filtering and others, but most of them require knowledge of the blur to do the restoration (except blind deconvolution, which is a seriously hard topic). The nice thing with the above approach - wavelet restoration - is that it is guided: you can use the sliders and adjust each frequency band until the image looks right. Regardless of the MTF shape, we rely on our external knowledge of what the proper image should look like to replace the missing MTF information.

 

 


1 hour ago, vlaiv said:

OK, that sort of concludes frequency restoration in the spatial domain, or "How to blur an image more in order to make it sharper" :D

Holy cow!

Thanks Vlaiv- I'm going to read this many times and try to get the concepts.

Maybe I'll just stick to visual lol! I was going to buy a C8 to try imaging but held off until I actually understand the processing...


6 hours ago, Peter Drew said:

It wasn't my "argument"; I've seen this repeated several times "elsewhere". I have a number of SCTs but have not yet been fortunate enough to own or try an Edge version. 😀

They are blooming sharp for an SCT. I hope you get to look through one at some point. Superb with 100-degree EPs :)


5 hours ago, johnturley said:

I would have thought that in theory it shouldn't make any difference, but in practice I wondered whether the Edge HD optics tend to be figured to a higher degree of accuracy than the standard SCT optics.

When I purchased my CPC 9.25, Rother Valley Optics advised me that if my main interest was in viewing planets visually, then it wasn't worth paying the extra for the Edge HD version.

John

I know what you mean, and I agree that on the face of it, it shouldn't make any difference on axis. Having owned various non-Edge and Edge SCTs, I've given this some thought because I can see a difference.

My thoughts are as follows. SCTs start producing coma very quickly from the very centre of the field of view. The Edge optics are aplanatic with a flattened field (pretty much zero coma). So if you take the term "on axis" to mean anywhere roughly in the middle, and not the exact central few millimetres, then I can see how it could be sharper 'on axis'.

Both the white paper and my observations show the Edge to be the sharper SCT, and I wonder if this is because I never, or rarely, observed or imaged with the standard SCT absolutely dead centre on axis.

Now I'm not saying RVO's advice is wrong, because indeed the C9.25 is seen as the "Jewel in the Crown" of the standard XLT Celestron SCTs. I believe this is because of its slightly slower, thus better corrected, spherical primary mirror. I can see why they would say it wasn't worth paying the large premium for the C9.25 Edge. I don't think the gap between the C8 XLT and the C8 Edge is quite as large, so in that case it may be more worthwhile.


A very in-depth post vlaiv, thanks :) I can't say I understood it perfectly, but I get the point about reclaiming detail in post-processing. I mainly observed with the Edge, so for me it's the better choice, especially if you go on to consider non-point sources like the Moon. But for the OP, I can see that they wouldn't need an Edge SCT - though, as many of us agree, aperture is very important. The 100mm frac wouldn't match the 200mm SCT for planetary imaging.

 

