
Next Scope?



I'm looking for advice on astrophotography of small nebulae and galaxies.

I currently use a SW ED80 refractor for large DSOs / nebulae (e.g. the North America Nebula, M42, Andromeda, the Horsehead and Flame Nebulae etc.) and a SW 200P reflector for smaller targets (e.g. planetary nebulae, the Sunflower and Whirlpool Galaxies, the Wizard and Pacman Nebulae etc.).

The mount I use is an HEQ5 Pro (Rowan belt) and the trouble with that is its payload limit. The SW 200P reflector with guiding camera makes this set-up a struggle for imaging in anything more than a slight breeze!

The SW 200P is a great scope (the first one I got), but since I've moved into astrophotography I find it very cumbersome.

I use a DSLR at the moment for image capture.

Can anyone suggest a replacement scope that won't overburden the HEQ5 and yet can still image small targets like the Sunflower Galaxy?

Would the SW ED120 refractor be suitable?

Gerr.


The ED120 has a 900mm focal length and the ED80 600mm, so there is not a lot of gain in the step up as regards a narrower field of view with the same camera.

The ED120 is a fine scope, however.

 

Use the simulation capabilities of the link below to see what you will get with a given scope and a given camera to make your decision.

astronomy.tools

Use the imaging mode, choose your target and camera, and try a set of telescopes to see which you would be best to go for.

 


The really small galaxies require more than 1m of focal length, and even an EQ6 will start to struggle, I would guess. On my GP-DX I use a 762mm focal length scope (Meade SN6), but with a smaller sensor and pixel size, which gives reasonable results on the likes of M51 and M81. I sometimes struggle to get longer subs working reliably, I should add.


Thanks for the replies - I am intrigued by Vlaiv's suggestion - StellaLyra 6" F/9 M-CRF Ritchey-Chrétien Telescope.

Could be a solution.

Maybe using a 2x Barlow lens on a smaller reflector is another thing worth considering?

Gerr.

 


15 minutes ago, Gerr said:

Maybe using a 2xBarlow lens on a smaller reflector is another thing worth considering?

That is only a viable option if you think that you are undersampling.

My guess is that you are not undersampling.

What is your intended working resolution?


I use a Canon 650D:

Sensor type CMOS
Sensor size 22.3 × 14.9 mm (APS-C format)
Maximum resolution 5184 × 3456 (18.0 effective megapixels)

 

I am unfamiliar with what a good working resolution is - as a layman in this area, I'd say any image that does not appear pixelated.

I am able to zoom in on images taken with my set-up, but this is limited by noise and pixelation / definition.

I'm not very good with the science of this, I'm afraid. I know that longer focal lengths allow smaller targets to appear larger in the telescope's field of view, but at the cost of less light-gathering capability (slower scopes / higher f-stop number), I think?

Gerr.

 


Ok, so here is a quick breakdown of things:

- pixels are not little squares, contrary to popular belief - they are just points without dimensions (they look like squares when you zoom in too much and the algorithm used to interpolate produces squares - this is the simplest algorithm, called nearest-neighbour sampling)

- in that sense - no image will ever look pixelated if a different interpolation algorithm is used

- there is a limited amount of detail that telescope + mount + sky can deliver in long exposure photography - and this is an order of magnitude less detail than scopes are capable of delivering (in the majority of cases) when there is no impact of tracking errors and seeing (atmosphere)

- if you are after the most "zoomed in" image that you can get - the above is the limit - you won't get more detail even if you zoom further in, once you are at the limit imposed by telescope + tracking + seeing

- "zoom" can be defined as sampling rate or working resolution - it is expressed in arc seconds per pixel and represents how much of the sky's surface is recorded with a single pixel. The details within that pixel can't be resolved - since all of it will be recorded as a single point.

This depends on pixel size and focal length. Your camera has a 4.3µm pixel size and the 200P, for example, has 1000mm of focal length.

http://www.wilmslowastro.com/software/formulae.htm#ARCSEC_PIXEL

[screenshot of the arcsec/pixel calculator]

(if you are working with a color camera and not mono + filters - you need to factor your working resolution as twice that value - it has to do with how the R, G and B pixels are spread over the sensor - they are effectively spaced 2 pixels apart for each color).
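As a cross-check of that calculator, the formula is simple enough to compute directly (a minimal Python sketch; the ×2 factor is the OSC adjustment just described):

```python
def arcsec_per_pixel(pixel_size_um, focal_length_mm, osc=False):
    # sampling rate = 206.265 * pixel size (µm) / focal length (mm);
    # for a one-shot colour (Bayer) sensor, same-colour pixels sit 2 apart,
    # so the effective rate is twice as coarse
    scale = 206.265 * pixel_size_um / focal_length_mm
    return 2 * scale if osc else scale

# Canon 650D (4.3 µm pixels) on the 200P (1000 mm focal length)
print(round(arcsec_per_pixel(4.3, 1000), 2))            # ~0.89 "/px (mono equivalent)
print(round(arcsec_per_pixel(4.3, 1000, osc=True), 2))  # ~1.77 "/px
```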

So you were working at 1.77"/px with your 200p. That is "higher medium" resolution (this is really arbitrary naming).

Let's say that over 3-4"/px is wide field / very low resolution, 3"/px-2"/px is low resolution, 2"/px-1.5"/px is medium resolution, and <1.5"/px is high resolution.

Most amateur setups and sky conditions simply don't allow for detail better than about 1"/px with long exposure imaging (there are techniques that allow higher resolutions, but those are lucky DSO imaging with large apertures and such).

In order to achieve high resolution - all three must be satisfied - large aperture, excellent tracking and steady skies. In reality, with an HEQ5 and smaller aperture scopes - I would say stick to about 1.3"/px - in fact, if we enter the focal length of the scope that I linked - you'll get just that:

[screenshot of the calculator showing ~1.3"/px]

Another rule of thumb is that your total guide RMS needs to be about half or less of your working resolution. This means that if you want to go for smaller targets - you need to guide with 0.65" RMS or less.
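That rule of thumb is trivial to encode (a sketch using the 1.3"/px figure from above):

```python
def required_guide_rms(working_resolution_arcsec):
    # rule of thumb: total guide RMS should be half (or less) of the working resolution
    return working_resolution_arcsec / 2

print(required_guide_rms(1.3))  # 0.65" RMS target
```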

How is your guiding?

I would not try 1.3"/px without at least 6" of aperture - as telescope aperture also enters the equation.

Another point - since you are using a color camera - use super pixel mode in DSS - that will produce the proper sampling rate for your calibrated images and stack.

Going higher resolution is just a waste of resolution and a waste of SNR - too-small pixels are "less sensitive" pixels.

There is nothing wrong with going low resolution - in fact, when doing wide field, you simply can't do wide field unless you go low resolution - you just won't capture all the detail available (and that is ok for wide field).

True image resolution is when you view it at 100% zoom - one image pixel to one screen pixel - aim for your images to look good like that.


Wow, thanks for the informative reply. Those are some really useful equations for arc second per pixel ("/px) resolution. The PHD guiding stats I tend to achieve are less than 2" RA RMS, with polar alignment error less than 5". I am worried that the scope you suggested is slow (f/9). Won't this mean I have to take longer exposures?

Gerr.


34 minutes ago, Gerr said:

PHD guiding stats I tend to achieve for RA RMS are less than 2" and with Polar alignment error less than 5"

Polar alignment is really not that crucial when you are guiding. Even when not guiding, I feel that people overestimate the importance of polar alignment relative to periodic error (with any half-decent polar alignment - periodic error is likely to cause larger drift than polar alignment error on all but the highest quality drives).

On the other hand - guiding RMS is something that you might want to work on. Could it be that the 2" RMS is with the 8" Newtonian and wind? You really want to get total RMS below 1" at least if you want to work at high resolution.

37 minutes ago, Gerr said:

I am worried about the scope you suggested as being slow (f 9). Won't this mean I have to take longer exposures?

Yes and no :D

The F/ratio of a telescope is not really indicative of imaging speed. We could go into a lengthy explanation of that - but it boils down to: aperture at resolution. Once you have your working resolution set - larger aperture wins - F/ratio does not play a part there (it sort of plays a part in determining working resolution / sampling rate).

In other words - if you compare a super duper fast 80mm scope at F/3.2 and a 6" scope at F/9 - both working at the same sampling rate (arc seconds per pixel) - the 6" simply wins as it has more aperture. The trick is that some sampling rates are hard to get with certain types of scope.

You can't get high resolution with a short focal length scope, and conversely it is hard to get a wide field image (low resolution) with a long focal length scope.

On the other hand - yes, you will benefit from longer exposures for another reason - read noise. Your exposures need to be long enough for some other noise source to overcome read noise. With a long focal length - light pollution is also sampled at a high rate and the sky becomes darker (think how much darker the sky becomes when using a high power eyepiece - same thing). This means that you need longer exposures for light pollution noise to overcome read noise.

The effect is smaller with DSLR-type cameras, as they are not cooled and thermal noise can sometimes swamp read noise before LP does.

In any case, yes, expose for longer individual subs, and expose for as much total exposure time as you can afford per target - that reduces overall noise regardless of how "fast/slow" your system is.
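To put rough numbers on "long enough": one common criterion (my own framing, not something stated in this thread) is to expose until the accumulated sky signal reaches some multiple of the read noise squared, so that sky shot noise dominates. A sketch with hypothetical values:

```python
def min_sub_length_s(read_noise_e, sky_flux_e_per_px_s, swamp_factor=5):
    # sky shot noise variance = accumulated sky electrons; read noise variance = RN^2.
    # Require sky electrons >= swamp_factor * RN^2 so that read noise
    # contributes only a small fraction of the total noise in each sub.
    return swamp_factor * read_noise_e ** 2 / sky_flux_e_per_px_s

# hypothetical numbers: 3 e- read noise, 1.5 e-/px/s sky background
print(min_sub_length_s(3.0, 1.5))  # 30.0 seconds
```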


56 minutes ago, vlaiv said:


Hi Vlaiv

You say that for OSC cameras the true imaging scale is actually double what one might think, due to the Bayer CFA. But what if one were to use Bayer drizzle rather than an interpolation algorithm? Does this effectively bring the imaging scale back down by half again?

Richard Sweeney's recent amazing image of the HH Nebula comes to mind. He used a 2600MC camera with a Tak Epsilon 160, so roughly 1.5" resolution. I'll admit I'm having a hard time accepting that it's really a 3" image - it's that good.

Ps - part of my reason for asking is that my next camera will be either the OSC or Mono version of the 2600. My head says go for the simplicity of OSC, but.......


Good points Vlaiv. My RMS error improves to about 0.6" RMS when I guide with the ED80 refractor. The Newt, being like a big sail, is always difficult to manage and guide accurately in anything more than a force 2 wind.

The 6" Ritchey-Chrétien design reflector is lighter (approx 5kg) and much shorter in length (391mm), so less of a handful for the HEQ5. This set-up should allow for longer guided exposures to offset the higher f-ratio (which converts to f/6.75 with the field reducer), so as you suggest, a possible solution for my conundrum!

I will give this some serious thought.

Many thanks for your very knowledgeable input and guidance - much appreciated. 

Gerr :)

 


Forgot to say, Mabula (the creator of Astro Pixel Processor, which I am a big fan of for calibration and integration) suggests not using super pixel mode for over-sampled OSC images, as the lower resolution would affect registration. What's your opinion on using Bayer drizzle (with appropriate droplet size) and an integration scale of 0.5 instead?

https://www.astropixelprocessor.com/community/tutorials-workflows/does-app-support-software-binning-of-osc-color-images/


8 minutes ago, Xiga said:


In theory, Bayer drizzle, when executed properly, will provide 100% of the resolution available from pixel size alone - the same as a mono camera.

The problem is in the way Bayer drizzle is implemented - and if we go down that route, we really open up a can of worms :D.

Due to the way we image and process our images - even mono is not sampling at the rate pixel size suggests, or rather - our final image is not at the given sampling rate. The main culprit for this is the interpolation algorithm used when aligning images.

If you want perfect Bayer drizzle that recovers 100% of the resolution - you want to dither by integer pixel offsets - so that final registration of subs is a simple "shift and add" - you don't need to use interpolation. This is, by the way, also the best way to produce an image in terms of resolution. Using any interpolation algorithm reduces detail further ...

The good thing is that it is really hard to tell the difference between, say, 1" and 2" or 1.5" and 3" visually in astronomical images. This is because of the way blur in astronomical images works. It is much easier to spot in planetary images, where the limiting factor is the aperture of the telescope and extensive sharpening / frequency restoration is performed.

[screenshot: side-by-side comparison at ~1"/px]

Here is a screenshot of one of my images (btw taken in a red zone with an F/8 scope, about 2h total integration time), presented here at ~1"/px. One copy was sampled down to 2"/px and then resampled back up to 1"/px.

Can you tell which one? Actually, you should be able to tell by the noise grain - the one that has been downsampled to 2"/px and upsampled back to 1"/px has a slightly larger grain and is just a bit smoother.

This is because the detail in the image is not good enough for 1"/px - it is more suited to 1.5"/px - 2"/px sampling (seeing was not the best on that particular night).

Btw, the left image was sampled down and up again, and the right one is the original.

Now, if I do that with 2"/px and 4"/px - you'll probably see the difference, but again, it won't be striking:

[screenshot: side-by-side comparison at 2"/px]

Here again - both have been reduced to 2"/px - but the left one was further reduced to 4"/px and then upsampled back to 2"/px.

First thing to notice - 2"/px is a much more suitable resolution for this level of detail - the image looks detailed and sharp. Second - now you can clearly see the blurring due to sampling at a lower rate - look at the detail in the bridge - it is clearly sharper in the right image - but the difference is not that obvious - you would not be able to tell unless looking at the images side by side.

 


17 minutes ago, Gerr said:


Hi Gerr

I also have an HEQ5-PRO. It's currently getting the StellarDrive treatment by Dave at DarkFrame. I've recently picked up a used RC6 for galaxy hunting. It's definitely at the high end of the scale for the mount, but as long as you don't aim for too high an imaging scale, it should handle it ok. 


Hi Xiga, I'm glad you can vouch for the RC6 - that really helps. My head is starting to hurt trying to understand the science involved in image sampling! But the explanations are very useful for deciding my next camera and telescope upgrade - ££ depending!!

Thanks,

Gerr.


13 minutes ago, vlaiv said:


Thanks Vlaiv. I'll admit it's really not that easy to tell which image is which in your example - I wouldn't have imagined that.

I suppose with an OSC one will always want to use Bayer drizzle. I've only really used it once myself, on an image of M31, and I recall there being a noticeable improvement. So maybe the true resolution when using Bayer drizzle isn't quite the full resolution, but it's probably a lot closer to it than twice the value - would you say?


6 minutes ago, Xiga said:


There are several things said there that I potentially disagree with.

I'm also against super pixel mode - but for a different reason. I advocate splitting the Bayer matrix components into separate images and working with them like mono images. That is the cleanest way to go about it.

Each OSC sub will produce 1 red, 1 blue and 2 green subs - each 1/4 the size of the original. They will indeed have a twice-coarser sampling rate than pixel size alone would suggest.

This step does not alter the data in any way - everything is preserved as is.

Super pixel mode is not like that. When using super pixel mode, two changes happen:

- green data is averaged and only one sample is created out of the two green samples

- there is a 1/4 pixel shift between channels for red and blue, and green is transformed in an even stranger way - one green part of the Bayer matrix is shifted 1/4 in one direction and the other green part is shifted 1/4 in the opposite direction, and then they are averaged.

In my mind that is a mess.

Splitting the data is the cleanest way to go about it. After you have split your data into separate "fields" and treat them as regular mono + filter subs - you can decide how you want to register and stack them.
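The split itself is just array slicing - a minimal sketch assuming an RGGB pattern (check your camera's actual Bayer layout):

```python
import numpy as np

def split_bayer(raw):
    """Split a raw Bayer frame into R, G1, G2, B sub-frames,
    each 1/4 the size of the original, assuming an RGGB layout."""
    r  = raw[0::2, 0::2]  # even rows, even columns
    g1 = raw[0::2, 1::2]  # even rows, odd columns
    g2 = raw[1::2, 0::2]  # odd rows, even columns
    b  = raw[1::2, 1::2]  # odd rows, odd columns
    return r, g1, g2, b

# tiny demo frame: a 4x4 mosaic with pixel values 0..15
frame = np.arange(16).reshape(4, 4)
r, g1, g2, b = split_bayer(frame)
print(r)  # the four "red" positions of the mosaic
```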

I'm also rather against drizzle in general, as I strongly believe it is a misused feature. There is nothing wrong with being undersampled when doing wide field - there is no such thing as "blocky" stars - there is only an inadequate upsampling algorithm when zooming in. Also - you really have to be undersampled to do a wide image in a single go if you don't want to make a mosaic.

Using drizzle on anything except undersampled and properly dithered data is just not going to work properly (dithering the proper amount is another big issue with the drizzle algorithm).

To me, choosing a proper resampling algorithm when doing registration is much more important than the above.

People use DSS to stack their images - but it only uses bilinear interpolation, and bilinear, bicubic and similar interpolations are really resolution killers :D

Here is an example you can try, to understand how interpolation works on data.

Create an image with a patch of pure noise in it:

[image: patch of pure noise]

Now make a copy of that image and shift it by half a pixel in both the x and y directions using some interpolation algorithm.

[image: the shifted copy]

Now, if you do Fourier analysis of these two images - the frequency spectra (amplitudes) should be the same - as shifting by any amount only alters the phase part of the FFT, not the amplitudes.

You can test this by shifting by a whole number of pixels (then you don't need interpolation) - you'll get the same FFTs from both the original and the shifted image.

If you take the FFTs of both images and divide them - you get the shape of the filter that was applied during interpolation.

[image: FFT amplitude spectra of the two images]

Here we can instantly see that something is wrong - left is the FFT of the original image and right is the FFT of the image shifted by 0.5px using bilinear interpolation. If we divide the two images, we get:

[image: ratio of the two spectra]

Look at that - a perfect low pass filter.

Bilinear interpolation just kills off the high frequencies (detail in the image).

That is the reason you get that very grainy noise in DSS stacks.

We need to use much more sophisticated interpolation algorithms - or we need to control our dithers so that each image is shifted by exactly an integer number of pixels compared to the others.

We don't do the latter - nor have the means to (though in theory it should not be that hard if one could connect PHD2 with the imaging software and tell it how much to move so that each image is always shifted by exactly an integer number of pixels).
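The experiment above can be reproduced numerically - a sketch where the half-pixel bilinear shift reduces to a 2x2 average, and the ratio of FFT amplitudes reveals the low-pass filter:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(size=(256, 256))

# bilinear shift by (0.5, 0.5) pixels: each output sample is the
# mean of a 2x2 neighbourhood of the input
shifted = 0.25 * (img[:-1, :-1] + img[:-1, 1:] + img[1:, :-1] + img[1:, 1:])
orig = img[:-1, :-1]  # crop the original to the same shape

amp_orig = np.abs(np.fft.fft2(orig))
amp_shift = np.abs(np.fft.fft2(shifted))

# radial frequency of each FFT bin, in cycles per pixel
fy = np.fft.fftfreq(orig.shape[0])[:, None]
fx = np.fft.fftfreq(orig.shape[1])[None, :]
f = np.hypot(fy, fx)

ratio = amp_shift / (amp_orig + 1e-12)  # shape of the effective filter
print(f"low-frequency ratio:  {ratio[f < 0.1].mean():.2f}")   # close to 1 - passed through
print(f"high-frequency ratio: {ratio[f > 0.4].mean():.2f}")   # well below 1 - attenuated
```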

Like I said - this is a very involved and very technical discussion, so sorry for derailing the original thread on telescope choice - but it is good to know that even our choice of processing workflow has a rather large impact - maybe even bigger than the choice of working resolution.

 


4 minutes ago, Xiga said:


I think that it really depends on what sort of resolution the image supports.

We image at high resolution, but the reality is that most of our images are suited to lower than 1.5"/px - around 2"/px or something like that.

Next - there is the choice of interpolation method used when aligning images. Then there is the matter of how you process your images and what your final SNR is.

If you have good enough SNR - you can sharpen back some of the blur created by this process. Proper sharpening is not "making things up" - it is restoring sharpness that was there at one point.

In planetary imaging it is done all the time and real detail is "sharpened up from the blur". For that reason, I like to call it a frequency restoration process (as opposed to a low pass filter, which is a high frequency attenuation process).

My personal preference would be to do a Bayer split and then treat the images like mono + filter. To my eye that is the least "destructive" approach.

Bayer drizzle is also ok - but the question is, how do you interpolate your data when registering it?

The original drizzle algorithm uses something we can call surface sampling, and if there is no rotation between frames - the math is the same as bilinear interpolation - which, as we saw above, is quite nasty. When there is rotation between frames - things are even worse. I know how I would adapt advanced interpolation to Bayer drizzle - but the question is, was it implemented like that (or in similar fashion) in the software already available?

You know what you can try? You can try to measure things and see what sort of difference there is between methods.

Take the data set you have and make stacks using different registering techniques. Then take each of those masters and measure the R, G and B FWHM on a certain star, using for example AstroImageJ.

The method with the smallest FWHM produced the sharpest result. That way you can see whether Bayer drizzle actually adds back some resolution or is actually blurring things further via bilinear interpolation (as a comparison sample you should split the RGB as one method, and use advanced resampling like Lanczos 4 or 5, or a higher B-spline like quintic).
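AstroImageJ will measure FWHM for you, but as an illustration, FWHM can be estimated from a star cutout with a simple flux-weighted second moment (a rough sketch assuming an approximately Gaussian profile; all names are mine):

```python
import numpy as np

def fwhm_estimate(img):
    # crude FWHM from the flux-weighted second moments of a star cutout,
    # assuming a roughly Gaussian profile: FWHM = 2.355 * sigma
    img = img - np.median(img)          # remove the background level
    img = np.clip(img, 0, None)
    y, x = np.indices(img.shape)
    total = img.sum()
    cx, cy = (x * img).sum() / total, (y * img).sum() / total
    var = ((x - cx) ** 2 * img).sum() / total + ((y - cy) ** 2 * img).sum() / total
    sigma = np.sqrt(var / 2)            # average of the x and y variances
    return 2.355 * sigma

# synthetic Gaussian "star" with sigma = 2 px -> FWHM should come out near 4.7 px
yy, xx = np.mgrid[0:41, 0:41]
star = np.exp(-((xx - 20) ** 2 + (yy - 20) ** 2) / (2 * 2.0 ** 2))
print(round(fwhm_estimate(star), 2))
```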


Thanks Vlaiv, very enlightening as always! I use Astro Pixel Processor for stacking, which I think uses Lanczos-3 and Mabula's own custom interpolation algorithm, which he coins 'Adaptive Airy Disc'.

But I agree, I think we've veered just a tad off-topic here, lol, sorry Gerr! ☺️


Not at all. My knowledge of astro processing has just taken a giant leap thanks to you guys. I have also realised that a field flattener can be counterproductive to my aims in small DSO imaging, as I will crop the target anyway and don't need all of the field of view to be flat. The RC6 seems to be a candidate for planetary imaging too - bonus!!

Gerr.


4 hours ago, Gerr said:

The RC6 seems to be a candidate for planetary imaging too - bonus!!

It's not going to produce the best results in that role.

The problem is the size of the central obstruction. It adds a sort of blur that needs to be sharpened (I wrote a bit about it above) - and the larger the central obstruction, the more sharpening needs to be done. Visually we see that as contrast loss compared to an unobstructed aperture.

Whenever you need to sharpen, you need high SNR - the more you sharpen, the more SNR you need to start with, because sharpening restores high frequencies but also boosts the high frequency components of the noise - so noise gets amplified as well. If you have poor SNR - then the sharpened version will just look too noisy.

Here is what my 8" RC can achieve in planetary role:

[planetary image taken with the 8" RC]

To be honest, that is not the level of detail I would expect from 8" of aperture. Maybe it was just that particular night. I also oversampled quite a bit.

Here is comparison with 5" scope:

[comparison planetary image from a 5" scope]

Although the image is smaller (better matched resolution), I don't think there is much more detail in the image above over this one.

