
Jupiter with ASI290MM and C9.25: to Barlow or not to Barlow, that is the question



Having recently purchased an ASI290MM, I have some fairly decent lunar images and some average Jupiter images (still a work in progress).

I have been mostly imaging at the native focal length, possibly a little more with the focuser/filter wheel in position. This gives me a resolution of 0.25"/pixel; if I use a 1.8x Barlow this becomes approximately 0.14"/pixel. That seems a bit oversampled, especially for UK seeing conditions and the current position of the planets.
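The arithmetic above can be checked with a short sketch (assuming the C9.25's nominal 2350mm focal length and the ASI290MM's 2.9µm pixels):

```python
# Pixel scale in arcsec/pixel: 206.265 * pixel size (um) / focal length (mm)
def pixel_scale(pixel_um, focal_mm):
    return 206.265 * pixel_um / focal_mm

native = pixel_scale(2.9, 2350)        # C9.25 at native focal length
barlow = pixel_scale(2.9, 2350 * 1.8)  # with a 1.8x Barlow

print(f'native: {native:.2f}"/px, with 1.8x Barlow: {barlow:.2f}"/px')
# native: 0.25"/px, with 1.8x Barlow: 0.14"/px
```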

The native setup gives an image that lacks a bit of scale, to be honest. The image below is a work in progress from the 25th.

[Attached image: Jupiter in RGB, 2018-02-25, Pete Presland]

 


I also have this camera for planetary and lunar imaging. I am not an expert in this kind of imaging, but I was told, and have also read, that for this pixel size the optimum sampling is at f/15. So maybe f/18 is a little too much, but Jupiter is also far from an ideal position in northern Europe now, so it will be hard to get decent images this season I think :(

If you have a regular Barlow (not a Powermate or other tele-extender) you can decrease its magnification a little by shortening the distance to the sensor (if that can be done at all).

2 minutes ago, drjolo said:


I forgot you could do that. I have 1.8x and 2x Barlows and a 2.5x Powermate.

14 minutes ago, drjolo said:

I found a link to some useful calculators in my bookmarks, including one for CCD sampling: http://www.wilmslowastro.com/software/formulae.htm#CCD_Sampling . For green light and a 2.9µm pixel it gives f/14.

In some of my "explorations" into this subject, I came up with a slightly different figure than is usually assumed.

Instead of using x3 in the given formula, the value that should be used, according to my research, is 2.4.

So for a camera with 2.9µm pixels, the optimum for green light would be F/11.2.
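As a sketch of that arithmetic (my reconstruction: reading the formula as k pixels per Airy disk radius, so F = k × pixel / (1.22 × λ); the green wavelength of ~0.51µm is my assumption, chosen because it reproduces both the calculator's f/14 at x3 and the F/11.2 figure at 2.4):

```python
def optimal_fratio(pixel_um, wavelength_um, k):
    # k pixels per Airy disk radius; linear Airy radius at focus = 1.22 * lambda * F
    return k * pixel_um / (1.22 * wavelength_um)

GREEN = 0.51  # um, assumed "green" wavelength (my choice to match the thread)

print(f"x3 factor:  f/{optimal_fratio(2.9, GREEN, 3.0):.1f}")  # ~f/14
print(f"2.4 factor: f/{optimal_fratio(2.9, GREEN, 2.4):.1f}")  # ~f/11.2
```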

Here is the original thread for reference:

 

 


A very interesting thread @vlaiv, thanks for sharing.

Using the Dawes limit for the resolving power of my C9.25, it can resolve 0.49 arc-seconds.
According to the sampling theorem I need to sample at twice the frequency of the signal being sampled. So the ideal for my C9.25 would be to sample at about 0.25 arc-seconds/pixel, which would mean no Barlow. It just seems a bit low at F15, let alone F11.2.
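The Dawes-limit arithmetic above, sketched out (116/D is the usual millimetre form of the Dawes limit; ~235mm assumed for the C9.25's aperture):

```python
def dawes_limit(aperture_mm):
    # Dawes limit in arc-seconds: 116 / aperture in mm
    return 116.0 / aperture_mm

d = dawes_limit(235)  # C9.25 aperture ~235mm
print(f'Dawes limit: {d:.2f}"')            # ~0.49"
print(f'sampled at twice: {d/2:.2f}"/px')  # ~0.25"/px
```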
 

22 hours ago, drjolo said:


The calculators also suggest somewhere between F11 and F14 for the ASI290MM, but they also suggest a focal length of between 2578mm (red) and 3528mm (blue).

2 hours ago, Pete Presland said:


The Dawes limit and the Rayleigh criterion are based on two point sources separated so that the second sits at the first minimum of the first's Airy pattern. That is usable visually, with sources of fairly closely matched intensity (like double stars). Applying twice-the-sampling to that criterion is really not what the Nyquist theorem is about. There are two problems with it:

1. It does not take into account that we are discussing imaging, where we have algorithms at our disposal to restore information in a blurred image.

2. It "operates" in the spatial domain, while the Nyquist theorem is about the frequency domain. Point sources transform to a rather different frequency representation even before being diffracted into an Airy disk (one can think of them as delta functions).

The proper way to address this is to look at what the Airy pattern does in the frequency domain (the MTF) and choose the optimum sampling based on that.


Helpful thread. Ideally I need an adjustable Barlow or electronic Barlow wheel for my setup. A 2.5x Barlow is close for blue using vlaiv's multiplier (f/0.76 and 229mm of focal length over), but far off for red (f/7.25 and 2045mm over). With a 2x Barlow, blue is f/8.24 and 1181mm under, but red is f/1.76 under and 645mm over?

 

5 hours ago, MilwaukeeLion said:


Honestly, I don't quite understand what you said :D

But I would like to point something out.

2.4 pixels per Airy disk radius is the theoretical optimum sampling value, based on ideal seeing and the ability to do frequency restoration for frequencies attenuated to 0.01 of their original value. That requires good SNR (like 50-100) and good processing tools capable of doing the restoration.

In a real-life scenario, seeing causes additional attenuation of frequencies (though not a cut-off like the Airy pattern does), combined with the Airy pattern's attenuation and cut-off. So while ideal sampling will allow one to capture all potential information, it is not guaranteed that all of it will be captured. On the other hand, I just had a discussion with Avani in this thread:

where I performed a simple experiment on his wonderful image of Jupiter taken at F/22 with the ASI290, to show that the same amount of detail could be captured at F/11 with this camera. He confirmed that by taking another image at F/11, but said that for his workflow and processing he prefers F/22, as it gives him material that is easier to work with. So while the theoretical value is correct, sometimes people will benefit from lower sampling (if seeing is poor) and sometimes from higher sampling, simply because their post-processing tools handle such data better.

So the bottom line is that one should not try to achieve the theoretical sampling value at all costs. It is still a good guideline, but everyone should do a bit of experimenting (if their gear allows) to find the best sampling resolution for their conditions: seeing, but also tools and processing workflow.

On 02/03/2018 at 09:18, vlaiv said:


Yes, we've been talking about this in another thread!
I don't know if I will be understood; I do not speak English and depend on a translator.
One very interesting thing is this signal-to-noise issue. Vlaiv, considering the Nyquist theorem, used the Fourier transform to determine the best sampling point for image capture, and arrived at F/11. No problem at all! Very straightforward, since the Fourier transform is standard in analogue/digital signal processing.
I found the approach quite pertinent. He demonstrated it with an image of mine, transposing it to what the image would be at F/11 so as to compare the captured detail. I do not know how he does that with a JPEG; every method has its domain of application and its limitations, which he knows (even if he does not mention them). In theory, all of that applies and would hold in an ideal world. But the world is not ideal, and the biggest variable I see is precisely that lack of ideality.
In other words: under controlled conditions, the mathematics of the Nyquist theorem applies in full. In practice there is variation. Why? Because the system is not ideal; it is stochastic, varying from moment to moment. That is the domain of stochastic modelling, which is much more complex and difficult, as it tries to build a mathematical model that comes closer to reality. Can we do it? Yes, but not fully. One could, for example, use the Monte Carlo method to fit a model to the conditions under which you capture your data. For that we would need a large mass of data and a decision about which variables to include. The more variables, the more computation time, which can take hours, days or months, so nobody does it as an amateur.
You would need access to supercomputers or computing clusters. Some of the variables: humidity, temperature, pressure, the number of particles in the atmosphere, the optical distortion of your equipment, electron flow in the CCD, capture and readout speed, data storage format, processor TDP, processing capacity, and so on. That is why the mathematics helps you not to waste time, but it only approaches reality; it is not reality. All of this is a simulation that represents reality, not reality itself. So what is it for? So that you do not waste time on pure empiricism. In other words, the best capture of subtle detail is, in your case, around F/11. It may turn out to be F/18 or F/22, but it will not be F/60 or F/80. How do you know? As the Doctor Strange film says: study and practice. Study tells you where it can be; practice tells you where it really is. Your atmosphere is not mathematical; the model approaches it, but it is not it...


It has been a very interesting discussion. I have not understood all of it, but it seems pretty clear that my Barlow is not going to be used much with the ASI290. If I get some decent conditions, I will try a few experiments with it and post what I get in comparison to the native focal length.

  • 2 years later...

I am interested in this subject as well. If I get a C9.25, what would be the best route for planetary shots?

If I go for the ASI290 OSC, should I stay at native focal length? Or should I try with a 2x or so Barlow?

What if I try with an ASI533?

I guess the only way is to try for ourselves, because there are so many individual parameters that change the end result (seeing, atmosphere, mount, etc.).

N.F.


I recently got the HD925 and the ASI290MC. So far, when imaging Jupiter, I find I get more detail when NOT using my 2x Barlow and staying at the native F11. But I learnt from vlaiv in an earlier post that in non-perfect seeing conditions it's best to go for slightly longer exposures of around 6ms to get the best detail. I don't think there is one mathematically correct answer, as there are so many variables. Practice and experimentation certainly pay off.

3 minutes ago, runway77 said:

I don't know if it helps but I think that this camera doesn't have a built-in IR/UV filter. Without it you probably get blurry images. Try to use an IR/UV filter next time.

The camera does not have a built-in UV/IR filter, but even with one fitted you can still have issues. Because of Jupiter's low position at the moment, atmospheric distortion is also a factor.

On 28/02/2018 at 21:14, Pete Presland said:

I forgot you could do that. I have 1.8x and 2x Barlows and a 2.5x Powermate.

Hi Pete

I purchased 1.5x and 1.75x amplifiers from Siebert Optics in the US. They are very good optically, so they may be worth a Google and a look?

Best Wishes

Harvey


A bit off topic:

This reminds me, if I get the ASI462 camera I'll need the UV/IR cut filter, right? I wonder, the extended red/IR range of this sensor might be useful with an L filter?

Because I had an idea: use an L filter, keep the result as a monochrome image, then shoot RGB and separate each channel for an LRGB mix?

N.F.

11 minutes ago, nfotis said:


L filters generally also cut off UV and IR, so you will need an IR-pass filter to capture any IR. Also, you don't need to split the RGB channels of the colour capture; just add the IR to the colour as luminance.

Edit: just seen that Baader advertise their Clear filter as an L filter, but it seems to pass everything.
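A minimal per-pixel sketch of that "IR as luminance" idea (my illustration, not anyone's actual workflow: Rec.709 luma weights and linear 0..1 pixel values assumed; a real stack would do this over whole frames with numpy or a planetary imaging package):

```python
def apply_luminance(rgb, ir):
    # Replace a pixel's luminance with an IR value by uniform scaling,
    # which preserves the colour ratios (hue/saturation).
    r, g, b = rgb
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec.709 luma
    if luma == 0:
        return (0.0, 0.0, 0.0)
    s = ir / luma  # scale factor that moves luma to the IR value
    return (r * s, g * s, b * s)

# One orange-ish pixel from the RGB capture, with an IR luminance of 0.6
pixel = apply_luminance((0.5, 0.4, 0.3), 0.6)
```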

Edited by CraigT82
19 minutes ago, nfotis said:


You could do something like that, provided that you use an IR-pass rather than an L filter.

That way the sensor will act as monochrome. You'll need a filter that passes above 800nm.

There are two issues with that approach, however.

First, you won't get the resolution you would otherwise get. The Airy disk at 900nm is twice as large as the Airy disk at 450nm. The human eye finds sharpness in luminance information rather than in colour, and you would be shooting luminance in a region where your telescope acts as if it had almost half the aperture. There is an upside: IR is less affected by the atmosphere, so the recording will be easier to sharpen.

Second, luminance is related to colour in human vision; two different colours carry different luminance. Green carries more luminance than red or blue, for example, and this is reflected somewhat in the QE curve of the sensor. We have no way of knowing the relative luminance of different colours in the IR part of the spectrum.

Something that should be relatively bright, like yellow, could end up with very low IR intensity, while something that should be dark, like deep red, might end up with quite a bit of IR intensity. All of this will create an odd-looking image.
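The factor-of-two point above follows directly from the angular Airy radius θ ≈ 1.22 λ/D, which is linear in wavelength (D = 235mm assumed here for the C9.25):

```python
def airy_radius_arcsec(wavelength_nm, aperture_mm):
    # Angular Airy disk radius: 1.22 * lambda / D, converted to arc-seconds
    wavelength_mm = wavelength_nm * 1e-6
    return 206265.0 * 1.22 * wavelength_mm / aperture_mm

blue = airy_radius_arcsec(450, 235)
ir = airy_radius_arcsec(900, 235)
print(f'450nm: {blue:.2f}", 900nm: {ir:.2f}"')  # the 900nm disk is exactly 2x larger
```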


Yeah, I meant a red+IR pass filter (maybe). I was trying to understand where the deep red and IR sensitivity of this sensor could be used...

 

I suppose that we shall see a monochrome version of this sensor soon?

N.F.

 

Edited by nfotis
