
IMX571 on 8" EdgeHD



4 minutes ago, andrew s said:

Not sure exactly what you're trying to show, but on my device the top leaf is much sharper by eye than the lower one, which also shows artifacts. Regards Andrew

That is exactly my point.

If you have an image that is properly sampled and you cut frequencies above a certain point, it will show in the image.

Take a look at the post above that, where I did the same. Can you see an equal loss of detail in that image? Can you spot any place where a feature is no longer resolvable, like the smallest fibres not being resolvable in the leaf example?


11 hours ago, Dan_Paris said:

 

I don't make an unsubstantiated claim, as you seem to suggest, but share my experience of several years of galaxy imaging. The resolution increase when I swapped a camera at 1"/pix for one at 0.66"/pix was just plain obvious. And none of the serious imagers that I know personally shares your point of view.

 

I don't know if I'm a serious imager or not but I do share Vlaiv's point of view and have found no significant difference in resolution between 0.6 and 0.9"PP.  In the end I went for the 0.9 option (TEC 140/Atik 460 mono.) Indeed, I wrote a feature article saying as much for the British magazine Astronomy Now.  I was half expecting a good deal of comeback from it but there was none and, I must say, many of my imaging guests do feel as I do. Arch enthusiast of all things technological, the late Per Frejvall was persuaded by my TEC 140 and bought one himself. It's still here in other hands.

If you do find more real resolution from 0.6"PP then you do. I looked at my data carefully and concluded that I didn't. We can both be right.

On 01/09/2023 at 21:08, OK Apricot said:

 

I believe this is going to end up oversampled at 0.55"/px, but what's the deal here? Guiding on my EQ6-R is rarely above that on the worst nights, so tracking errors shouldn't be an issue. Seeing? What about my sub frames - what will they look like? What's inherently bad about oversampling?

Careful! I don't think anyone's picked up on this but, to realize the resolution of 0.5"PP, your guiding RMS needs to be half that. (This is a rule of thumb but good enough for government work...) You are very unlikely to reach a guide RMS of 0.25". I can get about 0.33" out of my Mesu 200.

Olly

 


1 hour ago, vlaiv said:

That is exactly my point.

If you have an image that is properly sampled and you cut frequencies above a certain point, it will show in the image.

Take a look at the post above that, where I did the same. Can you see an equal loss of detail in that image? Can you spot any place where a feature is no longer resolvable, like the smallest fibres not being resolvable in the leaf example?

Musing on all this: when stacked and processed, the artifacts in the under-sampled leaf could easily give the impression of enhanced detail if you don't have a higher-resolution reference image.

One way to look at this is that your seeing/guiding/optics need to band-limit your system spatially. The article linked gives some good examples of the problems you can get in normal photography if you under-sample. It says this:

"Many digital camera sensors— especially older cameras with interchangeable lenses— have anti-aliasing or optical lowpass filters (OLPFs) to reduce response above Nyquist. Anti-aliasing filters blur the image slightly, i.e., they reduce resolution. Sharp cutoff filters don’t exist in optics as they do in electronics, so some residual aliasing remains, especially with very sharp lenses. The design of anti-aliasing filters involves a tradeoff between sharpness and aliasing (with cost thrown in)."

Astro imagers have the atmosphere and their mounts and optics to do the filtering! 

My two pennyworth would be it's better to slightly oversample than risk under-sampling. 

Regards Andrew 

Edited by andrew s

36 minutes ago, andrew s said:

My two pennyworth would be it's better to slightly oversample than risk under-sampling. 

I've yet to see aliasing effects from an undersampled image in astro imaging.

Here is a brief explanation of what aliasing really is and why it's not a major concern in astrophotography.

If we start with some function that in the frequency domain looks like this:

[image: band-limited spectrum]

Its sampled form will be this shape repeated left and right out to infinity, spaced at intervals of the sampling frequency.

[image: spectrum copies repeating at sampling-frequency intervals]

Restoring the original signal is then done by applying a box filter in the frequency domain to isolate the original spectrum from all the other copies. This can only be done if there is clear separation between the copies, which is the reason why:

1. the signal must be band limited

2. we need to sample at at least twice the maximum frequency (so that the copies don't overlap)

When we sample at a lower frequency than the one given by the Nyquist criterion, we get this case:

[image: overlapping spectrum copies]

The resulting spectra overlap and add, and no filter can separate them.

This is where aliasing comes from.
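As an aside, the folding described above is easy to demonstrate numerically. Here is a minimal sketch (my own illustration, not from the thread): a 9 Hz sine sampled at 12 Hz, below its 18 Hz Nyquist rate, becomes indistinguishable from a 3 Hz alias.

```python
import numpy as np

# A 9 Hz sine sampled at 12 Hz (below the Nyquist rate of 18 Hz)
# folds down to |9 - 12| = 3 Hz: the two sampled sequences are identical.
fs = 12.0                      # sampling frequency, Hz
t = np.arange(0, 2, 1 / fs)    # 2 seconds of samples

true_signal = np.sin(2 * np.pi * 9 * t)   # under-sampled 9 Hz tone
alias = np.sin(2 * np.pi * -3 * t)        # its folded 3 Hz alias

print(np.allclose(true_signal, alias))    # True
```

Once the samples are taken, no reconstruction filter can tell which of the two tones produced them; that ambiguity is exactly the aliasing described above.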

Now back to astronomical images. In the generation of astronomical images we have three major processes that create blur: the Airy pattern, Gaussian blur from seeing, and Gaussian blur from tracking error (provided the tracking error is random enough).

Look at the frequency response of each of those three components.

The Fourier transform of a Gaussian is itself a Gaussian, so we have:

[image: Gaussian frequency response]

And the Fourier transform of the Airy pattern is this:

[image: Airy pattern frequency response]

All three combined (they multiply in the frequency domain) give a function that falls extremely sharply towards zero.
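For the curious, this fall-off can be sketched numerically. The numbers below (8" aperture, 550 nm, 2" seeing FWHM) are illustrative assumptions, not measurements from the thread; the diffraction term uses the standard circular-aperture MTF formula and the seeing term is the Fourier transform of a Gaussian PSF.

```python
import numpy as np

# Hypothetical values: 8" (203 mm) aperture, 550 nm light, 2" seeing FWHM.
D = 0.203            # aperture diameter, m
lam = 550e-9         # wavelength, m
fwhm = 2.0           # seeing FWHM, arcsec

f_c = D / (lam * 206265)          # diffraction cutoff, cycles/arcsec
sigma = fwhm / 2.355              # Gaussian sigma of the seeing PSF, arcsec

f = np.linspace(0, f_c, 200)      # spatial frequency axis
nu = f / f_c                      # normalized frequency for the Airy MTF

# Diffraction MTF of an unobstructed circular aperture:
airy_mtf = (2 / np.pi) * (np.arccos(nu) - nu * np.sqrt(1 - nu**2))
# FT of a Gaussian is a Gaussian:
seeing_mtf = np.exp(-2 * np.pi**2 * sigma**2 * f**2)

combined = airy_mtf * seeing_mtf  # MTFs multiply in the frequency domain

# The combined response collapses long before the diffraction cutoff:
print(combined[0], combined[50])  # 1.0 at DC, a few percent a quarter of the way out
```

Tracking-error blur (another Gaussian factor) would only make the curve fall faster, which is the point being made: the atmosphere and optics band-limit the signal well before the sensor does.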

To put this into perspective, I'm going to use data posted on another thread (since I have it to hand and it is still linear) and examine it in the frequency domain:

[image: frequency spectrum of the sample data]

The frequency response has been normalized so that the peak is at 1; look how fast it plummets towards zero. By comparison, the telescope MTF alone would still be somewhere around 0.4 at the right edge.

Wherever you put a vertical line in the graph above to represent the sampling point, the overlap that represents aliasing will simply be so small that it won't be seen in the final image.

To prove the point further, I'm going to resample the image above to x10 smaller size using nearest neighbour. In normal images that should create strong aliasing effects, but here we get:

[image: x10 nearest-neighbour downsampled image]

There is simply no trace of any sort of aliasing effect.

Undersampling is a safe operation in astronomical imaging, and many wide-field images are in fact undersampled without any issues (you have to undersample a wide field: you can't fit 8 degrees into a 2000 px wide image without being at 14.4"/px).
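To see what the x10 nearest-neighbour test would do to an image that is not band limited, here is a toy counter-example (my own illustration, not data from the thread): a period-2 checkerboard, pure detail at the Nyquist limit, decimates to a featureless image.

```python
import numpy as np

# A period-2 checkerboard is pure high frequency. Nearest-neighbour
# decimation by 10 (plain strided slicing) keeps only same-parity pixels,
# so the pattern aliases to a completely flat image: the detail is not
# shrunk, it is misrepresented entirely.
y, x = np.indices((1000, 1000))
checker = (x + y) % 2              # fine detail at the Nyquist limit

decimated = checker[::10, ::10]    # x10 nearest-neighbour "resample"
print(decimated.min(), decimated.max())  # 0 0 - all structure gone
```

A well-blurred deep-sky image has essentially no energy at those frequencies, so the same slicing operation is harmless there, which is what the test above demonstrates.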

Edited by vlaiv

3 hours ago, vlaiv said:

@Dan_Paris

Have a look at this and tell me what you think:

[image: bin1 image with its Fourier transform (top) and the low-pass filtered version (bottom)]

The top image is the bin1 image and its Fourier transform.

The bottom-right image is that same Fourier transform with all the values above half the sampling frequency (the frequency that corresponds to x2 coarser sampling) set to zero. I effectively removed all the higher frequencies that would not be captured if you sampled at half the current rate.

Then I did the inverse FT of that, which is the bottom-left image. Can you tell the difference in resolution between the top and bottom images?

And mind you, this was done on processed data, not even on 32-bit floating point with much higher precision, yet the result is evident.

Well, I can see differences in these two images. So that seems to contradict what you just posted. However, I won't continue.

Regards Andrew 


2 minutes ago, andrew s said:

Well, I can see differences in these two images. So that seems to contradict what you just posted. However, I won't continue.

Could you be so kind as to point them out for me?

I really can't see any differences in detail. I can in the level of stretch: the background is darker in the bottom one, but that is just due to a different stretch after the forward/backward FT.

There is also a difference in the noise grain, again due to the removal of high frequencies: noise spans all frequencies equally and will change its appearance when the high frequencies are removed.

But I can't spot any differences in the data itself, so I would appreciate you helping me out by pointing to those differences.


It's difficult due to the difference in stretch, but to me the whole lower image looks grainy. This is more obvious in the faint outer regions.

It may be due to the noise and/or the stretch, but how can you clearly delineate the data from the noise? You can't, especially in the areas where the signal approaches the noise floor.

Regards Andrew 

PS I assume you accept there are differences in the leaf picture given your earlier reply. The same maths applies to both. 

 

Edited by andrew s

14 minutes ago, andrew s said:

PS I assume you accept there are differences in the leaf picture given your earlier reply. The same maths applies to both. 

Of course I do accept that there are differences in the leaf picture; that is why I posted it.

The same math applies to both images, and the math says: if there is something at high frequencies and you remove it, it will be obvious in the image.

On the other hand, if there is no signal in the high frequencies and you remove it, there will be no difference.

This serves to show that the astronomy image is indeed oversampled, as removing the high-frequency components only alters the noise, not the data itself.

Sure, it is hard to differentiate noise from data in low-SNR areas, but the above affects high- and low-SNR areas equally, so if you can't see a difference in a high-SNR area, you won't in a low-SNR area either.

Maybe I can do another comparison: the same thing, but this time I'll blink the images instead?

Edited by vlaiv

3 hours ago, ollypenrice said:

If you do find more real resolution from 0.6"PP then you do. I looked at my data carefully and concluded that I didn't. We can both be right.

I think that this is the best possible conclusion: different setups and different seeing conditions, so we can both be right.

I'm convinced that, with my setup and my seeing conditions, 0.67" gives better-resolved images than 1", having imaged many galaxies with both cameras attached to the same telescope. It may be that the ideal sampling is in between, I don't know. Half the nights my seeing is in the 1.5" to 2" range, and the FWHM produced by my telescope/corrector combination is 4.8 µm or below, i.e. 2 pixels with the IMX183 (an experimental value inferred from my best subs).

I certainly don't claim that this holds for any scope in any seeing conditions.

Edited by Dan_Paris

24 minutes ago, vlaiv said:

Same math applies to both images, and math says - If there is something at high frequencies and you remove it - it will be obvious in the image.

On the other hand - if there is no signal in high frequencies and you remove it - there will be no difference.

Absolutely, agreed. By definition, if you undersample then you are losing the higher frequencies! If your combined system of optics/mount/seeing is capable of delivering 1 arcsec and you undersample, you will lose resolution and information.

A pure star field may look the same, as you're just looking at Gaussian images, which scale. Look at a planet, nebula etc. with 1 arcsec detail and they will be different.

Regards Andrew 

Edited by andrew s

2 minutes ago, andrew s said:

If your combined system of optics/mount/seeing is capable of delivering 1 arcsec and you undersample, you will lose resolution and information.

The comparison above is not dealing with undersampling, but rather oversampling.

I did that to show that an astronomical image sampled at 0.6"/px is oversampled by roughly a factor of x2.


3 hours ago, vlaiv said:

That is exactly my point.

If you have image that is properly sampled and you cut frequencies above certain point - it will show in image.

Sorry, we are at cross purposes then. Your statement above seemed to me to be about undersampling.

Regards Andrew 


Just now, andrew s said:

Sorry, we are at cross purposes then. Your statement above seemed to me to be about undersampling.

Regards Andrew 

That was to emphasize that if you are properly sampled and you remove the high frequencies, it will make a difference. If it does not make a difference, then you are not properly sampled and there is no data in the high frequencies, which means you are oversampled rather than undersampled (undersampling would leave data at all frequencies, but would exhibit aliasing artifacts as well).


16 minutes ago, Dan_Paris said:

For sure it may well be that the ideal sampling is in between though.

Could be; it is really hard to tell, as the differences are very tiny and they arise from different sources.

Sometimes the choice of algorithm can have a significant impact. So can the level of stretch and the amount of sharpening, so processing must also be accounted for.

That trick with Fourier analysis is a very good one, as you can vary how much of the higher frequencies you decide to cut off, thus allowing you to "dial in" the sampling rate. But it is best done on linear data (as most measurements are).
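The Fourier trick can be sketched with numpy along the following lines. This is a reconstruction of the procedure described in the thread, not anyone's actual code; the default 0.25 cycles/pixel cutoff corresponds to zeroing everything above half the Nyquist frequency, and you can adjust it to "dial in" other equivalent sampling rates.

```python
import numpy as np

def lowpass_fraction(img, cutoff=0.25):
    """Zero all spatial frequencies above `cutoff` cycles/pixel.

    Nyquist is 0.5 cycles/pixel, so cutoff=0.25 removes everything that
    sampling twice as coarsely could not have captured anyway.
    """
    F = np.fft.fftshift(np.fft.fft2(img))              # spectrum, DC centred
    fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0])) # cycles/pixel, -0.5..0.5
    fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1]))
    fyy, fxx = np.meshgrid(fy, fx, indexing="ij")
    F[np.maximum(np.abs(fyy), np.abs(fxx)) > cutoff] = 0
    return np.fft.ifft2(np.fft.ifftshift(F)).real      # back to image space
```

If the low-passed result looks identical to the original (apart from noise grain), the data carried nothing above the cutoff, i.e. it was oversampled at least by that factor.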


Well I must say chaps, thank you for the impromptu optical theory lessons 😅. I've tried to have a good read through to understand what you're debating but sadly it is above my intelligence level! 

Could you summarise what you're saying here in layman's terms? 

@ollypenrice I didn't know guiding RMS has to be <half your sampling rate. I was under the impression that as long as the RMS was under your sampling rate, you wouldn't notice any tracking errors. Please could you explain why that is?

Cheers chaps 😊


@OK Apricot

Here is a summary on my part:

- It is OK to have small pixels on a long focal length, as long as you understand that you will be oversampling and that the way to recover the SNR lost to this is to bin your data afterwards.

- You should aim at about ~1.2"/px sampling rate with an 8" EdgeHD, but the actual sampling rate you should strive for can be measured for your conditions with the following formula: FWHM you usually achieve / 1.6 = sampling rate. If you manage 2" FWHM on average, then you should bin your data to be close to 2 / 1.6 = 1.25"/px. Don't stress too much if you are off by some margin; I advocate slight undersampling rather than slight oversampling, as it will produce a better-looking image in post-processing: you will be able to keep the noise down better and apply better sharpening.

- Keep your guide RMS as low as possible. The rule of thumb Olly mentioned is just that, a rule of thumb. The reality is that lower RMS = better image, or rather tighter stars / smaller FWHM. Seeing, scope aperture and mount performance add up in a certain way (not like normal numbers, but as the square root of the sum of squares), so if any of the three components is small compared to the others, its contribution diminishes. You therefore don't need to go overboard chasing a good RMS if your seeing is not very good or you are using a small aperture, like 80 mm or so. For most setups, once the guide RMS is at half the sampling rate (provided the sampling rate is sensibly chosen), its impact simply starts to diminish, hence the rule of thumb.
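Those rules of thumb can be put in a quick sketch. The 1.6 divisor and the quadrature (square-root-of-sum-of-squares) addition come from the posts above; the example numbers are invented, and treating the guiding RMS as just another effective-FWHM contribution is a simplification.

```python
import math

def target_sampling(fwhm_arcsec):
    """Suggested sampling rate ("/px) from the FWHM you typically achieve."""
    return fwhm_arcsec / 1.6

def combined_fwhm(seeing, aperture_blur, guiding):
    """Blur sources add roughly in quadrature, not linearly."""
    return math.sqrt(seeing**2 + aperture_blur**2 + guiding**2)

print(target_sampling(2.0))           # 1.25 "/px for 2" typical FWHM
print(combined_fwhm(2.0, 0.6, 0.5))   # ~2.15": guiding barely matters here
```

Note how the quadrature sum means the smallest term contributes almost nothing: with 2" seeing, halving the 0.5" guiding figure changes the total by only a few hundredths of an arcsecond, which is why chasing ever-lower RMS pays off only under good seeing.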

 


10 hours ago, OK Apricot said:

Well I must say chaps, thank you for the impromptu optical theory lessons 😅. I've tried to have a good read through to understand what you're debating but sadly it is above my intelligence level! 

Could you summarise what you're saying here in layman's terms? 

@ollypenrice I didn't know guiding RMS has to be <half your sampling rate. I was under the impression that as long as the RMS was under your sampling rate, you wouldn't notice any tracking errors. Please could you explain why that is?

Cheers chaps 😊

I think Vlaiv has answered that far better than I would.

I also think it's worth saying that undersampling may not be a problem at all on some targets. Large, diffuse nebulae, for instance, often contain little fine structure to resolve. Galaxies, of course, contain a great deal, and they are the subject of your thread. However, galaxies also produce tidal tails and streams which are inordinately faint.

Arp 94: I imaged this, or tried to, with the TEC 140/CCD rig at 0.9"PP. In the end I gave up. The interest in the image is the tidal streaming, and I just couldn't get it to separate reliably from the background. I had nice detail in the galaxy cores, but that was not what I wanted. When I tried again with the RASA 8 I could not resolve fine core detail at 400 mm FL, but the tidal streams were plain sailing. With finite resources there will always be compromises.

[image: NGC 3227 close crop]

Olly

 

Edited by ollypenrice

Imaging small galaxies will always be a challenge of equipment, location and conditions, but in my book that’s what makes it interesting. We’ll never match the results of the professionals but with the recent developments in cameras and processing software I think we are still seeing improvements in this branch of AP.

Best wishes with your EdgeHD project, please keep us posted.


On 03/09/2023 at 16:06, Dan_Paris said:

There's nothing wrong with  oversampling (within reason), as long the level of read noise from the sensor is well below the photon noise from the sky background.

There is no benefit in binning with CMOS cameras except that it demands fewer computer resources (storage and CPU), but those are cheap these days. If you want to adapt to seeing conditions, rescaling the final processed image always gives better results.

Regarding the ideal sampling, you should first evaluate your seeing conditions, or more accurately what the combination of your seeing conditions and tracking accuracy allows you to achieve.

My current setup (200/750 Newt with ASI183mm) gives me 0.67"/pix, which is a perfect match for my seeing conditions. Indeed, the average of the FWHM values that I got on my luminance stacks this year (24 imaging sessions) is 2", i.e. three times the sampling (ranging from 1.46" to 2.6"). On nights with good seeing my best subs are around 1.3", so I could sometimes benefit from a tighter sampling like 0.55".

My previous camera was an ASI1600mm which gave me 1"/pix. It gave me clearly inferior results, resolution-wise.

Instead of  an 8" EdgeHD, you could consider a 200/1000 Newtonian with a Paracorr (effective focal length 1150mm), which gives an ideal sampling for 2" seeing. In those conditions it would give a larger FOV than the EdgeHD, without sacrificing resolution.

Well, oversampling will unnecessarily increase the exposure time and lower the system's étendue. Depending on how many clear nights you have, this could be a problem or not. From the point of view of telescope étendue, if I compare my 8-inch refractor operating at 0.93"/pixel to a C11 SCT operating at 0.42"/pixel, the refractor is 3.57x better. If the C11 were operating at bin x2, at 0.84"/pixel, then the SCT would be 1.1x better than the 8-inch frac. If I wanted to keep the C11's resolution at 0.42"/pixel and have a bigger étendue than the frac, then I would need an SCT with roughly a 400 mm mirror ...

If you are aiming for resolution, those small pixels need to be coupled with a big mirror in order to compensate for the loss of flux at pixel level. 

And to make matters worse, the finer the plate scale, the better the mount you'll need, because at 0.42"/pixel any guiding error bigger than that will increase your FWHM and defeat your original purpose of high-resolution imaging.

For my setup, if I were to image at 0.42"/pixel I would need a mount capable of sustaining <= 0.2" guiding RMS; that means 10M, ASA, Planewave $$$ :)

