
Advice Required: Choosing a Long Focal Length Scope



Hi everyone,

I'm just looking for some advice regarding a decision that I've been ruminating over for the past few months.

I am currently in the process of planning the installation of a Pulsar 2.7m dome in my back garden, to allow more imaging time.

Up to this point, for the past three or four years, my imaging setup has been mobile and has comprised a Takahashi FSQ-85EDX, EQ6-R Pro and Atik 383L with a Xagyl 7 position filter wheel.

My plan is to fully automate the observatory (which I already have all figured out) and install the FSQ-85EDX in there for wide-field imaging.

To cut to the chase: my dilemma is that I want to install another, longer focal length scope in either a piggyback or side-by-side configuration with the FSQ, but I cannot decide which scope to go for. I want a scope with which I can image smaller deep sky objects, such as galaxies and planetary nebulae, e.g. M51, M27 etc.

I will also be upgrading the mount to an EQ8, for more stability.

I have narrowed the decision down to the following scopes:

- Sky-Watcher Esprit 150ED.

- CFF 250mm RC.

- Celestron Edge HD 11.

I'm leaning towards the Esprit, as it's likely to have better contrast (due to no central obstruction) and be lower maintenance, i.e. unlikely to require collimation or have mirrors to clean etc. Also, I can install the Alnitak Flip-Flat on this scope and the FSQ (with some modifications).

On the other hand, the CFF 250mm RC, although more expensive, has more aperture, which, for a given image scale and CCD QE, gives shorter exposures. I also like the fact that it looks the business and has inbuilt primary and secondary dew heaters, which a lot of cheaper RCs don't have. I used to own a GSO-type 8" RC, but found it quite frustrating to use, due to the lack of inbuilt dew heaters and poor mechanics. However, I'm not overly concerned about having to collimate the CFF RC, as it will be in a permanent setup. One downside of the CFF RC is that its design means that the Alnitak Flip-Flat or Gemini dust cover systems cannot be installed, so the mirrors will be permanently exposed to dust.

The Edge HD 11 seems like good value for money, considering the aperture; however, I've heard that people have issues with stars becoming bloated. I'm guessing this is because its long focal length invites oversampled images with average CCD cameras.

Can anyone offer any testimonials or advice on these three scopes and perhaps give their recommendations? If there are any other scopes that you can recommend as well, I would appreciate it.

The other considerations I have are that the shutter width (700mm) of the Pulsar dome needs to be wide enough for the setup, and the image scale needs to be sensible for the longer focal length scope. I think CCD cameras with a Sony chip are preferable, due to lower noise and better QE; however, they seem to suit the Esprit more than the other scopes in terms of image scale.

I look forward to your responses.

Many thanks for your help.

Chris



Decide on your imaging resolution and then get the most aperture that will give you that resolution.

If you are going to go for the EQ8 mount, it's safe to say that most of the time, due to seeing, you will be limited to about 1.0 - 1.2"/pixel. With the Atik 383L / KAF-8300 sensor this means a FL of about 1000mm. With this sensor, anything longer and you will be oversampling.
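As a quick check of those numbers: image scale ("/pixel) is 206.265 x pixel size (um) / focal length (mm). A minimal sketch plugging in the KAF-8300's 5.4um pixels:

```python
# Image scale ("/pixel) = 206.265 * pixel size (um) / focal length (mm)
def image_scale(pixel_um, focal_mm):
    return 206.265 * pixel_um / focal_mm

# KAF-8300 pixels are 5.4 um; at 1000 mm focal length:
print(round(image_scale(5.4, 1000), 2))  # 1.11 "/pixel
# Focal length that would give exactly 1.2 "/pixel with the same pixels:
print(round(206.265 * 5.4 / 1.2))        # ~928 mm
```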

This makes the Esprit 150 the perfect choice of the three mentioned above. If you want larger aperture, the SW MN190 is worth a look - it also has a 1000mm FL, but with 190mm of aperture.

The most light gathering will be offered by a 10" F/4 Newtonian.

Using a different camera would allow for a different choice of scopes.


Thanks for the response and information. I will check out the SW MN190.

I do however think that the seeing range of 1"-1.2" per pixel might be a little pessimistic for my location, as I live in a very rural area at a reasonably high elevation.

If I were to go with the Esprit and a CCD with a Sony sensor, then I could always bin if necessary on the camera, or in post-processing afterwards. It's better to oversample slightly on longer focal length scopes, rather than undersample.

I think a Sony chipped CCD would struggle on the other two scopes.


I've found that a sampling rate of 1"/pixel is actually very optimistic for most locations :D

This is not related to seeing alone, as seeing is only part of the equation. Image resolution (resolved detail) depends on a number of factors - seeing, tracking/guiding precision and scope aperture, to name a few. If you can get stars with a FWHM of 1.6" in a long exposure with your particular setup (btw, this is very hard to achieve, as you need a large scope, very good guiding and 1" or less seeing), then 1"/pixel is a good sampling rate. Anything more than 1.6" FWHM and you will effectively be oversampling at 1"/pixel.

If you are interested, I can explain how all of that works, but it involves quite a bit of math.

You certainly can bin in hardware or software - I do it; my high-res setup is an 8" RC with an ASI1600, an effective resolution of ~0.5"/pixel, and I regularly bin x2 or x3 depending on conditions. I prefer CMOS and binning in software (for reasons we can also discuss) over hardware binning and CCD. This is why I mentioned that you could consider a different camera and scope. I was under the impression that you were planning to continue using a KAF-8300 based sensor (it's a good sensor for AP).

Are you planning to get both new scope and new camera? What sensor did you have in mind?


I am enjoying this thread, finding it very interesting as I do not fully understand under / over sampling. I still don't...! If you care to offer an explanation of what each is please and their effects, I would be very grateful. I am also not entirely sure how to go about finding things like the FWHM of my stars and the value of my seeing? These are basic questions, but I feel that if I go on any longer in AP without asking and understanding these basic things it will only lead to a fall at some point soon! Thank you in advance.


6 minutes ago, PhotoGav said:

I am enjoying this thread, finding it very interesting as I do not fully understand under / over sampling. I still don't...! If you care to offer an explanation of what each is please and their effects, I would be very grateful. I am also not entirely sure how to go about finding things like the FWHM of my stars and the value of my seeing? These are basic questions, but I feel that if I go on any longer in AP without asking and understanding these basic things it will only lead to a fall at some point soon! Thank you in advance.

Try this as a starting point, Gav:

https://www.aavso.org/sites/default/files/publications_files/ccd_photometry_guide/PhotometryGuide-Chapter3.pdf  (there's more great stuff there, but this chapter tackles the questions you've raised)

Helen


I would also appreciate an explanation of this too, if you don't mind?

To answer your question regarding the camera: I am going to continue using my 383L with the FSQ-85EDX and add a focal reducer. I was thinking of pairing the Esprit with an Atik ONE 9.0 or 6.0. The ONE 9.0 would give an image scale of 0.72" per pixel, which would oversample; however, 2x2 binning would take it to 1.45" per pixel. Alternatively, I could use the ONE 6.0 and get an image scale of 0.89" per pixel at 1x1, which is closer to the ideal of 1.0" per pixel. So it might oversample slightly, but I presume this would be preferable to undersampling slightly at 1.45" per pixel with the ONE 9.0?

If I were to go with the CFF 250mm RC or Edge HD 11, then the Sony chips would not be suitable, as they would oversample too much. However, the 383L would give an image scale of 1.11" and 0.79" per pixel respectively when binned at 2x2.

In regards to the performance factor (PF) of these configurations, for the Esprit 150 with the ONE 6.0 at 1x1:

PF = 150 x 0.89 x SQRT(0.77) = 117.14

For the CFF 250mm RC with the 383L at 2x2:

PF = 250 x 1.11 x SQRT(0.56) = 207.66

For the Edge HD 11 with the 383L at 2x2:

PF = 279 x 0.79 x SQRT(0.56) = 164.93

So, the CFF with the 383L would have a PF almost double that of the Esprit with the ONE 6.0, meaning shorter exposures. This is perhaps one benefit of the RC over the other scopes? Although perhaps the 16200 or 11000 would be more suitable for the RC or Edge HD 11?
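For reference, a small script of the same PF calculation (using the aperture, image scale and QE figures quoted above):

```python
import math

# Performance factor as used above: aperture (mm) x image scale ("/px) x sqrt(QE)
def performance_factor(aperture_mm, scale_arcsec, qe):
    return aperture_mm * scale_arcsec * math.sqrt(qe)

print(round(performance_factor(150, 0.89, 0.77), 1))  # ~117.1 - Esprit 150 + ONE 6.0
print(round(performance_factor(250, 1.11, 0.56), 1))  # ~207.7 - CFF 250 RC + 383L 2x2
print(round(performance_factor(279, 0.79, 0.56), 1))  # ~164.9 - Edge HD 11 + 383L 2x2
```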


All this mathematics aside... I currently use an EdgeHD 8" with a KAF 8300 chip in a QSI 683 for my long focal length imaging. I sometimes use the 0.7x focal reducer. Am I happy with the results? Almost, but I am never actually satisfied. The stars are bloated, collimation is an issue and there is a slight softness to the image quality. I also have an Esprit 100ED and the results with that are much nearer to my satisfaction level. OK, a shorter focal length is more forgiving (see all the maths above), so the Esprit 100 is always going to give 'better' results.

However, I am extremely close to ditching the Edge 8" in favour of an Esprit 150ED. If I had the cash in the bank I would have done it already! Chances are I would keep the Edge 8" for a while at least as I do really really like the focal length of that scope and the fact that it doesn't create diffraction spikes. I have accepted that the opportunity costs of that roughly 2000mm native or 1450mm reduced focal length are the problems mentioned above. I can live with them. The Esprit 150 would give better images, but some things would still appear pretty small in the field of view... my search for the perfect long focal length telescope that is actually affordable continues...


Let's see if I can explain under / over sampling in simple terms :D

When we capture a signal, we are interested in the mathematical function that represents that signal. To start, imagine the graph of a simple function - let it be a linear function (a straight line at some angle to the X axis). If we measure the "height" of this function at one point, we will not be able to reconstruct the function. On the other hand, if we measure the height at two different points, we will have enough information to fully reconstruct a linear function. We can measure the height at 3, 4 or more different X values - we will still get the same function. So for a linear function, 1 measurement is not enough to precisely determine the function, while 3, 4 or more samples are not needed - two is enough.

This is what under and over sampling are in essence: the number of sampling points, and their density, that we need to precisely reconstruct a function. If we don't sample with enough points, we won't be able to determine the exact shape of the function - there will be some error. If we sample at more points than needed, we won't get a more precise result - we will just "waste" measurements.

The image a telescope produces is something called a band limited signal/function. Every periodic function can be represented as a sum of sine functions with different phases, frequencies and amplitudes. This is the Fourier series. The extension of this method to arbitrary functions is the Fourier transform. It represents the "density" of the different sine components needed to reconstruct the original function. The actual Fourier transform is a complex function, where the complex numbers represent the phase and amplitude of the individual sine components. A band limited signal means there are no sine components in its Fourier transform with frequency above the band limit - or rather, that their amplitude is 0. Due to the nature of light and the circular aperture of a telescope, the image at the focal plane is band limited - this is why we say a telescope can't resolve something.

There is something called the Nyquist theorem (or Nyquist-Shannon sampling theorem) that states the following: in order to fully sample (be able to reconstruct from samples) a band limited function, you need to sample at twice the rate of the highest frequency sine component of that signal. For example: human ability to hear high frequency sound ends at around 20kHz - we can't hear sounds of higher frequency. This is why CD digital audio is sampled at 44,100 samples per second - just over twice the highest frequency component.
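You can see the theorem bite with a tiny numerical sketch (the 9 Hz signal and 10 samples-per-second rate are made up for illustration): sampled below the Nyquist rate, a 9 Hz sine produces exactly the same samples as a 1 Hz sine (with flipped phase), so the original frequency cannot be reconstructed - this is aliasing.

```python
import numpy as np

fs = 10.0                                  # sample rate: 10 samples/s, Nyquist limit 5 Hz
t = np.arange(0, 1, 1 / fs)                # one second of sample times
high = np.sin(2 * np.pi * 9 * t)           # 9 Hz sine - above the Nyquist limit
alias = np.sin(2 * np.pi * (9 - fs) * t)   # a -1 Hz sine
print(np.allclose(high, alias))            # True - indistinguishable once sampled
```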

For a telescope image there is a hard cut-off point in frequency, related to the size of the Airy disk. By hard cut-off I mean that all frequency components of the signal above this cut-off frequency are 0. This, however, is not really applicable to long exposure imaging. It has its use in planetary/lucky imaging, where it defines critical sampling. Using higher sampling rates - meaning fewer arc seconds per pixel - will simply not yield better detail, even theoretically, since the frequencies needed to define that finer detail are non-existent in the telescope image.

With long exposure imaging there is something else in play, and the effective sampling resolution is much lower. I'll explain why. Because the telescope image is "blurred" by the PSF, or Airy disk, and by other factors like seeing and tracking/guiding error, the image is effectively blurred by a low pass filter defined by the star PSF. There is another theorem in maths - the Central Limit Theorem. It says that the sum of random distributions tends towards a Gaussian distribution. This is why seeing and guiding errors combined tend to give us stars that are well approximated by a Gaussian profile.

Here comes the important part: this sort of blur acts as a convolution of the blur kernel - approximated by a Gaussian - with our original image. Convolution in the spatial domain is the same as multiplication in the frequency domain, and the Fourier transform of a Gaussian profile is another Gaussian profile. Now look at this graph:

[Attached graph: a Gaussian profile in the frequency domain]

In the frequency domain, the X axis represents the frequencies of the sine components, with 0 being the component of infinite wavelength (the DC component/offset), and increasing X representing higher and higher frequencies - smaller/shorter wavelengths, i.e. finer detail. Any function multiplied by the above Gaussian will rapidly fall off to 0 with increasing X. A true Gaussian never actually reaches 0, but this is an approximation, and the telescope PSF does hit 0 at some frequency. What is important to see is that even with the approximation we hit very low values quickly (before we reach the telescope's resolving power).

The above advice of a 1.6" star FWHM -> 1"/pixel is derived from such a Gaussian, at the point where its value falls to a few percent of the peak - effectively 0 in terms of image quality. You can do a bit of maths: write down a Gaussian with a certain FWHM, take its Fourier transform and see where its value falls below that threshold to get the "effective" sampling rate.
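If you want to try the maths yourself, here is a rough sketch of that calculation (the cutoff threshold is left as a parameter - exactly where you draw the line is a judgment call, and it moves the answer: a ~10% cutoff lands on the 1.6" FWHM -> ~1"/pixel rule of thumb, while a stricter 1% cutoff asks for somewhat finer sampling):

```python
import math

def effective_sampling(fwhm_arcsec, cutoff):
    """Pixel scale ("/px) that Nyquist-samples a Gaussian star profile of the
    given FWHM, keeping frequencies down to `cutoff` of the peak of its
    Fourier transform (which is itself a Gaussian)."""
    sigma = fwhm_arcsec / (2 * math.sqrt(2 * math.log(2)))   # FWHM -> sigma
    sigma_f = 1 / (2 * math.pi * sigma)                      # sigma of the FT Gaussian
    nu_cut = sigma_f * math.sqrt(2 * math.log(1 / cutoff))   # frequency where FT = cutoff
    return 1 / (2 * nu_cut)                                  # Nyquist: 2 samples per cycle

print(round(effective_sampling(1.6, 0.10), 2))  # ~0.99 "/pixel with a 10% cutoff
print(round(effective_sampling(1.6, 0.01), 2))  # ~0.70 "/pixel with a 1% cutoff
```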

I need to add one more thing: how the star FWHM is formed when imaging. For simplicity we will take three components: 1. the scope's Airy disk - it can be approximated with a Gaussian; 2. the guiding error - also approximated with a Gaussian; 3. the seeing PSF - also approximated with a Gaussian. These three convolve to give the total blur function. The convolution of a Gaussian with a Gaussian is another Gaussian, with the variance of the result being the sum of the variances of the parts.

This can help you determine the star FWHM in your image, given your scope size, guiding precision and seeing FWHM.
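Since the variances add, the FWHMs combine in quadrature. A quick sketch with made-up numbers (the 2.0" / 1.0" / 0.8" values are purely illustrative):

```python
import math

def total_fwhm(*fwhms):
    # Convolving Gaussians: variances add, so FWHMs add in quadrature
    return math.sqrt(sum(f * f for f in fwhms))

# e.g. 2.0" seeing + 1.0" guiding blur + 0.8" scope PSF (all hypothetical):
print(round(total_fwhm(2.0, 1.0, 0.8), 2))  # 2.37"
```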

I set out to explain under / over sampling in simple terms, and I'm not sure I managed to do so. But if you have any additional questions, I'll be happy to try to answer them.


Gav, I have heard that quite a lot in regards to the Edge HD i.e. producing bloated stars. Thanks for the feedback.

Also, thanks for the explanation Vlaiv. It makes sense.

Just from your own perspectives, if you were in my position, which scope and camera combination would you go for?


I've chased the longer focal length Shangri-La and have managed to get through a couple of combinations.

Firstly I tried a C9.25, just as a dabble to see if I liked the longer focal length stuff..... Of course I did!!! So I sold it and got an 8" RC from GSO. This was a pretty good scope as it goes and I was pleased with it, but I always fancied just a little more reach, so sold it! Next I bought an ODK10..... that was a great scope and coupled with the 8300 sensor gave me a resolution of about 0.63".... I loved the combination, but after a couple of seasons realised just how problematic I would find an open tube, as I have zero confidence in taking it apart to clean the mirrors....... so sold it!

Luckily at this time a friend of mine was selling a TMB 152/1200 - I snapped it up and coupled with a Sony sensor in the form of a QSI690 I LOVE the combination. Refractors are just lovely and as easy as you can get. At least when stuff goes wrong it's never the scope!!! In your position therefore I would go for the Esprit 150 in a heartbeat.

However........ I have one more point to make. I use a Mesu 200 with the long focal length scope and it works OK. I have an EQ8 for the shorter focal length dual rig and I don't feel that it guides accurately enough for 0.63" pp. I'd like to add that I am thrilled with my EQ8, and at 3.38" pp it was a no brainer, but I couldn't put the long focal length scope on there with any confidence at all. Perhaps it's just my mount, as they can be rather variable. I am pleased that I have it, don't get me wrong; it works brilliantly for what I use it for....... Also I've never bothered to try tweaking it with software or hardware as it's fine for me.... perhaps it's as simple as my PHD settings not being great, but I don't need to waste the time to find out!!!

 


Thanks Sara for the response. I must admit that I was inspired to get a large refractor for deep sky imaging when I saw that TMB 152 on your website. Hence the interest in the Esprit 150. However, as you said, I have had the feeling that the EQ8 would be the weak link in the imaging setup. I suppose the Mesu or even a Paramount would be more preferable, but like everything in astrophotography, it comes down to cost.

What you said about the mirrors getting dirty and dusty on the ODK10 is one of the main reasons for my hesitation in going for an RC.

When you had the ODK10 and 8300, did you find yourself having to bin a lot, or was the native image scale of 0.63”/pixel adequate?

Chris


I have never used binning - Even when the C9.25 and camera combination (I *think* it was an Atik 460) was giving me about 0.4" pp! 

I really do get what you say about budget when it comes to mounts, but if the mount is your weak link when it comes down to it then you will be buying twice as you'll never get images you are happy with....... I'd have my eye on an Astro Physics mount if I was in the market right now.... perhaps in a while.


Vlad - thank you for the explanation above. That makes some sense, which is great, but it is heavily maths driven and theoretical. I am keen to understand the effects of under / over sampling in terms of pixel scales and 'real world' imaging.

As far as I can work out, it is all to do with how many pixels are used to capture details. If the size of detail available is smaller than the pixel size used then you won't capture sufficient resolution of detail - you are undersampling things and quality suffers. Conversely, if the detail is bigger than the pixel size, then you are oversampling and no amount of extra pixels will capture any greater detail. However, you are not compromising image quality, so oversampling is not bad whereas undersampling is.

Is this all correct so far?

The next thing to consider is what size is the detail that you are trying to image and that's where the airy disk, seeing, focal length and camera pixel size all come in to play.

I think - have I got this right?



42 minutes ago, Chris Willocks said:

Just from your own perspectives, if you were in my position, which scope and camera combination would you go for?

For some reason I really like RC type scopes, so I would go with one of those. If you lack the funds for a Mesu or something even more expensive (I would personally love a Mesu and hope one day I'll have the funds for one), have a look at the CEM60 with encoders. It should allow you to guide to the precision you will need if you are planning to go for 1"/pixel. There is a general rule of thumb that your guide RMS needs to be no more than half your imaging resolution. If you are planning for 1"/pixel, you need a mount that is capable of 0.5" RMS or less. Well, you don't need one, but you want one :D

As for the camera - I like to go with modern CMOS sensors; that is what I'm used to. Partly because I don't have a high end mount - CMOS allows for shorter exposures due to low read noise. There is still no "proper" CMOS sensor for long focal lengths. Ideally it would be an APS-C or full frame sensor with pixels of about 4.5um or so. There are OSC models but no mono with such specs.

We have not touched on one more thing related to sampling rate, and it is "tricky". Pixel size actually introduces a certain pixel blur into the image. This is because a pixel is a surface integration device (it sums signal over its surface) and not a point sampling device. The discussion above holds for point samples. When you have surface sampling you introduce another blur component - you can think of it as averaging over the surface of the pixel, which is effectively what is happening. A smaller pixel size helps with this even if you are oversampling. In this case a 2x2 bin can provide better resolution than a larger pixel under some circumstances. Regular pixel binning will be the same in terms of resolution, but if you are binning in software you don't have to sum adjacent pixel values. You can instead split the image into 4 sub-images (every other pixel in X and Y). Each of those will have half the resolution in each axis - this ensures proper sampling. Each keeps the smaller pixel size, which reduces pixel blur, but you still get 4 images to stack, improving SNR by a factor of 2 - the same as regular binning.
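A rough numpy sketch of that split-and-stack idea (illustrative only - the 4x4 image is just a toy): the four sub-images each take every other pixel, and stacking them averages to the same per-pixel signal as an ordinary 2x2 mean bin.

```python
import numpy as np

def split_bin(img):
    # Four half-resolution sub-images: every other pixel in x and y
    return [img[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)]

img = np.arange(16, dtype=float).reshape(4, 4)
subs = split_bin(img)                               # 4 images to stack
stacked = np.mean(subs, axis=0)                     # stack of the four
binned = img.reshape(2, 2, 2, 2).mean(axis=(1, 3))  # ordinary 2x2 mean bin
print(np.allclose(stacked, binned))                 # True - same average signal
```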

11 minutes ago, PhotoGav said:

Vlad - thank you for the explanation above. That makes some sense, which is great, but it is heavily maths driven and theoretical. I am keen to understand the effects of under / over sampling in terms of pixel scales and 'real world' imaging.

As far as I can work out, it is all to do with how many pixels are used to capture details. If the size of detail available is smaller than the pixel size used then you won't capture sufficient resolution of detail - you are undersampling things and quality suffers. Conversely, if the detail is bigger than the pixel size, then you are oversampling and no amount of extra pixels will capture any greater detail. However, you are not compromising image quality, so oversampling is not bad whereas undersampling is.

Is this all correct so far?

The next thing to consider is what size is the detail that you are trying to image and that's where the airy disk, seeing, focal length and camera pixel size all come in to play.

I think - have I got this right?

Oversampling is bad because it reduces SNR. The target signal we are interested in gets spread over more pixels, so each pixel value is lower. A lower signal value leads to lower SNR. This is the main concern when oversampling - you get lower SNR without any gain in detail.

Undersampling is the opposite - it allows for higher SNR, but at the expense of detail. The detail lost is hard to explain - it's not like something will suddenly vanish from the image. The stars will still be there. It's the local contrast that gets reduced when you lose resolution. A small change in sampling rate will not have a dramatic impact on the image. I've already done some examples in previous threads discussing this; I'm going to try to find it and link it here.

Have a look at the above post - I did 4 comparison images - 1x1, 2x2, 3x3, 4x4 bin, then resampled back to the original resolution. It shows the effect of lower sampling rates on image detail. You will notice that 2x2 binning has almost no effect, 3x3 binning starts to show an effect, while the 4x4 effect is clearly visible as a loss of detail - you can't read the text any more, but you can still see the text. The signal is still there; it's the detail that lets our brain distinguish the letters that is missing.

You can look at the above in terms of sampling rates - 1"/pixel, 2"/pixel, 3"/pixel, 4"/pixel. 2"/pixel will cost you just a hint of clarity over 1"/pixel, and lower resolutions will have more impact. Under normal circumstances you would not resample your images to 1"/pixel, as they would tend to look blurry (much like the text) - you would leave them at the native/captured resolution.

 


Just for fun, and to be a little contentious, I'll chuck another 'scope into the mix: the ODK12, at just over 2m focal length. OK, I'll 'fess up and say I have an interest here, having just put down the deposit on one :eek:. IMHO a nice camera to pair with this is one of the 16200 based jobbies, giving an on-the-edge image scale of 0.61" pp, and with enough pixels to allow binning or cropping.

And to be *really* contentious, I'll chuck an ASA DDM85 Basic into the mix. Having used the cheaper (And no longer made, alas) DDM60 I've fallen in love with direct drive and encoder guiding. No more loss of guidestar, or trying to guide on a hot pixel!

RMS guide error below 0.2"


1 hour ago, PhotoGav said:

if the detail is bigger than the pixel size, then you are oversampling and no amount of extra pixels will capture any greater detail. However, you are not compromising image quality, so oversampling is not bad whereas undersampling is.

My take on this is that if the energy the scope receives from a single star is spread out across many pixels, then each one will only receive a fraction of the total light. But when you read the data out of the sensor, you will get the full noise component from each pixel, so your signal to noise ratio will suffer.
However, even with a CCD that is perfectly matched to your seeing - let's say your seeing is 2 arc-sec and your pixel size is such that each one "sees" 2 arc-sec of sky - it is incredibly rare that a single star will be exactly aligned with just one single pixel throughout a subframe. The atmosphere will still move that star around on your sensor, and there is always the possibility of wind, minute amounts of flexure and temperature variations moving things around too.
If your pixels are larger than your seeing, there is more chance a star will stay on a single pixel. But if that star falls across 2, 3 or 4 pixels just due to its position, then it's just like having that star oversampled.

This is one of the reasons for dithering between subframes, to average out the errors. So in general I would reckon that whatever your design goals, most stars will always be seen by more than 1 pixel, but that when multiple subs are stacked you end up with a Gaussian (random) distribution that will give you round stars. And if they look bloated, just use a processing tool to make them smaller.


22 minutes ago, Chris Willocks said:

Thanks for the recommendations. Has anyone had any experience with the CEM120 mount?

It looks like a good option if you go for the EC version. I don't know what the difference between the EC and EC2 is, though, or whether it's worth the extra cost.

Have a look at this thread:

Positive reviews here as well:

https://www.cloudynights.com/topic/637908-upgrading-mount-ioptron-cem120/

(Btw, it looks like the EC2 has encoders on both axes, while the EC has one on RA only)


Thanks. It looks like a decent option, like the Mesu.

Another thought that crossed my mind is to use the Starlight Xpress Adaptive Optics unit.

Would installing one of these in the imaging setup allow a less precise (and less expensive) mount, e.g. an EQ8, to achieve a similar level of precision to the Mesu or CEM120 without AO?


15 hours ago, DaveS said:

There was a discussion on here a little while ago regarding AO. I think @swag72 used one but eventually gave it up as a bad job.

I've tried a few things thinking it would help and make my life easier!! The AO was an abject failure for me!!

