

Planetary imaging - what do I need to know.....?


fwm891


Hi All,

I've just started to capture some planetary images with my kit:

Optics: TS RC8 1624mm fl

ES 3x telextender (giving circa 4800 mm fl)

ASI 662 MC uncooled camera (FOV 11.88' x 6.73' - 0.37"/pixel native; 3.96' x 2.24' - 0.12"/pixel with the x3 extender).

iOptron CEM 60 mount (pier mounted in Obsy).

I have a TS 80mm finder/guider with an ASI120MM camera - not used as a guider for planetary work, it primarily acts as a finder.

Software: FireCapture (for capturing SER files), AS!3 and AstroSurface for image processing, Photoshop CS3 for tweaking.

I've had a couple of attempts on Jupiter and Saturn so far, with Jupiter being the easier target: bigger, brighter and higher. However, the results are somewhat disappointing and I'm not sure why.

What should I be looking to do with this set-up? Shortest exposures, maximum FPS, long/short sequences, gain settings (high/low/somewhere between), use of ROI during capture/processing?

I keep the kit well collimated and, where possible, I try to get the system cooled to ambient before starting. I focus (as best I can - normally on a nearby star or a planet's moon) during runs.

Two of my best efforts so far are below.

 

Can someone give me a starting point given the kit available? I look at others' results, read what was done and try to apply that to my situation, but I seem to end up with poorish results each time...

Cheers

Francis

 

2022-09-23-2313_1-U-RGB-Jup____sigma_100r__1503reg_B.jpg

2022-09-23-2315_1-U-RGB-Jup____sigma_100r__999reg_B.jpg


An RC is not the best tool for this as it has a massive central obstruction, so any result will be worse than is possible for the given aperture size.

Other than that - you are using way too much focal length for your camera.

The ideal F/ratio depends on pixel size, and since you are using a 2.9 µm camera you really need to be at F/11.6, or rounding up, F/12. With the current setup you are at roughly twice that (some people do use it like that, but you need excellent seeing so you can use longer exposures).

The next thing to check is exposure length. It needs to be really short - you want to freeze the seeing, so limit yourself to 5-6 ms exposures in most cases.
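If it helps, here is a rough back-of-envelope sketch in Python of how exposure length, frame rate, capture time and the number of stacked frames relate. All the numbers are illustrative assumptions, not recommendations for your exact setup:

```python
# Back-of-envelope only - all numbers here are illustrative assumptions.
exposure_ms = 5          # short exposure to freeze the seeing (5-6 ms as suggested above)
capture_s = 180          # a ~3 minute run (see the rotation discussion further down)
keep_fraction = 0.15     # keep roughly the best 10-20% of frames when stacking

max_fps = 1000 / exposure_ms     # upper bound from exposure alone; real fps also depends on ROI and USB throughput
frames = max_fps * capture_s
stacked = frames * keep_fraction

print(f"~{max_fps:.0f} fps max, ~{frames:.0f} frames captured, ~{stacked:.0f} frames stacked")
# -> ~200 fps max, ~36000 frames captured, ~5400 frames stacked
```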

In fact, check these two recent threads:

 


Good advice above. Your focal ratio is dictated by your pixel size and the best results are obtained at 4x or 5x that, hence the advice to go to f/12 (4 × 2.9 = 11.6). Use the smallest ROI you can get away with, which will help get the frame rate up, and limit the total capture time to 30 seconds for Jupiter otherwise you will need to add in a de-rotation step in your processing.


6 minutes ago, Owmuchonomy said:

limit the total capture time to 30 seconds for Jupiter otherwise you will need to add in a de-rotation step in your processing.

Well, actually no. One can go up to 3-4 minutes with the given setup. I'll explain.

Jupiter is ~440,000 km in circumference and rotates once every ~10 hours. This means that the point on the equator closest to us moves at ~12.22 km/s. Given that the current distance to Jupiter is ~591,000,000 km, this motion is 0.0042656"/s.

In 30s this point will travel ~0.12".

The above setup at its optimal sampling rate is at ~0.258"/px, so the fastest moving point will only move about half a pixel in 30 seconds.

If everything was perfect then yes, 30 s would be a sensible limit to prevent motion blur. However, we have the influence of the atmosphere, and due to seeing different parts of the image "jump around" by more than a fraction of an arc second. If seeing is say 1.5" on a given night, that really means that the distribution of a point's position has a FWHM of 1.5", or a standard deviation of 0.637". That is the average deviation of a point's position from its true position over the course of a few seconds.

On average, seeing creates roughly 5x larger motion of points on the image from frame to frame than rotation does in 30 seconds.

Look at this GIF from the Wikipedia article on astronomical seeing:

Seeing_Moon.gif

Note how much motion/distortion there is from frame to frame.

Stacking software knows how to deal with this - it uses alignment points and creates a "counter distortion". It can't undo blur, but it can create an opposite distortion based on each feature's average position over time; that is how the reference frame is created: alignment point deviations are averaged over a period of time and that is taken as the reference position for that alignment point. It can correct for a feature being out of place by several pixels (an alignment point size of 25 px is often used, so the maximum displacement is 12 px, but in reality it is more like 7-8 px max).

Given this, a feature misalignment on the order of 3-4 px is not a problem and is easily handled by the stacking software, so from the above calculations we can see that there is really no need to derotate video of up to 3-4 minutes, as stacking can handle any rotation.
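To make the arithmetic above easy to check, here is a minimal Python sketch using the same rounded figures (the variable names are mine):

```python
import math

# Figures quoted in the post above (approximate)
rotation_arcsec_per_s = 0.0042656   # apparent motion of the fastest point on Jupiter's equator
sampling = 0.258                    # "/px at the near-optimal sampling for this setup
seeing_fwhm = 1.5                   # arcsec, the example night

rotation_px_30s = rotation_arcsec_per_s * 30 / sampling          # ~0.5 px of drift in 30 s
seeing_sigma = seeing_fwhm / (2 * math.sqrt(2 * math.log(2)))    # FWHM -> standard deviation, ~0.64"
seeing_px = seeing_sigma / sampling                              # ~2.5 px of frame-to-frame jitter

print(f"rotation in 30 s: {rotation_px_30s:.2f} px")
print(f"typical seeing displacement: {seeing_px:.2f} px ({seeing_px / rotation_px_30s:.1f}x larger)")
```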


6 hours ago, vlaiv said:

Well, actually no. One can go up to 3-4 minutes with the given setup. I'll explain.

Over my head but it's all good and I believe everything you say. 😁


1 hour ago, Adam1234 said:

@vlaiv, just out of interest, how did you calculate the 0.0042656"/s? I'm making note of these calculations!

It is a fairly simple calculation really - the trick is to imagine a single point on Jupiter's equator.

440,000 km is the circumference and Jupiter makes one revolution in ~10 hours, so any point on the equator travels at 440,000 km / 10 h, which gives 12.22 km/s when we do the math.

From our perspective, in one second that point will move 12.22 km - imagine a tangent to Jupiter's equator. When the diameter is large compared to the motion, it is as if the point is moving perpendicular to our line of sight.

Here is a little diagram (not to scale because of the distances and sizes involved):

image.png.06c5b3fb6f0c893d21833fc768c0149f.png

The arrow is how much the point will move in one second. This is correct only if the movement is very small compared to the diameter/circumference - otherwise the curvature needs to be taken into account - but 12 km is very small compared to the size of Jupiter, so the distance along the straight line and along the surface are almost the same.

It is then just simple trigonometry: tan(angle) = 12.22 km / distance to Jupiter => angle = arctan(12.22 / 591,000,000)

(just remember to convert to degrees if your calculator is using radians - standard for trig functions is radians if not otherwise indicated)

By the way - if you search Google for "arctan(12.22 / 591,000,000)" it will give you the right answer in radians :D  - and even better, do the search in arc seconds:

image.png.5a9652c95e042d2dcfb688147bac6bb5.png

That is just brilliant :D

By the way - all the numbers come from Google: I searched for the circumference of Jupiter, its rotation speed and the current distance to Earth (and I rounded the results).
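For anyone who would rather script it than use Google, here is the same calculation as a minimal Python sketch (same rounded inputs as above):

```python
import math

# Rounded values from the post above
circumference_km = 440_000        # Jupiter's equatorial circumference
rotation_hours = 10               # rotation period
distance_km = 591_000_000         # Earth-Jupiter distance at the time

speed_km_s = circumference_km / (rotation_hours * 3600)    # ~12.22 km/s
angle_rad = math.atan(speed_km_s / distance_km)             # tangent-line approximation
angle_arcsec = math.degrees(angle_rad) * 3600

print(f"{speed_km_s:.2f} km/s  ->  {angle_arcsec:.7f} arcsec/s")   # ~0.0042656 arcsec/s
```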


7 hours ago, Owmuchonomy said:

Good advice above. Your focal ratio is dictated by your pixel size and the best results are obtained at 4x or 5x that, hence the advice to go to f/12 (4 × 2.9 = 11.6). Use the smallest ROI you can get away with, which will help get the frame rate up, and limit the total capture time to 30 seconds for Jupiter otherwise you will need to add in a de-rotation step in your processing.

So if he had a C14 would that calculation still ring true? Christopher Go uses a 290 with a pixel size of 2.9, so should he be using his at f/11.6 or f/14.6? So that calculation is a bit vague and not really relevant.

And the calculation of 30 secs is out also... you don't de-rotate to correct smeared data, you de-rotate to get sharper images across the disc. That's why you do a series of captures.


3 minutes ago, newbie alert said:

So if he had a C14 would that calculation still ring true? Christopher Go uses a 290 with a pixel size of 2.9, so should he be using his at f/11.6 or f/14.6? So that calculation is a bit vague and not really relevant.

The actual calculation is very straightforward and unambiguous, and it goes like this:

F/ratio = pixel_size * 2 / wavelength_of_light

and it holds true for monochromatic light. The resolution of a telescope depends on the wavelength of light: it is highest in the blue region (shortest wavelengths) and lowest in the red region (longest wavelengths).

When you image in color you have a choice: you can select which wavelength to use as your baseline.

If you want to be absolutely certain you have caught all there is, then you need to use 400nm (or 380nm, depending on your UV/IR cut filter; officially human vision starts at 380nm or even a bit lower, but most UV/IR cut filters cover the 400-700nm range). In that case the formula is as follows:

F/ratio = pixel_size * 2 / 0.4 = pixel_size * 5

I personally recommend going with 500nm as the wavelength even for color imaging. There are a few reasons for this. First, that is the peak of luminance sensitivity:

image.png.3c25115ff723c7ea62ef88f82253b3ec.png

Second, most refractors are best corrected around the 500nm line (because peak luminance sensitivity is there).

And third - the atmosphere bends different wavelengths of light by different amounts. This is the law of refraction, where the angle depends on wavelength (that is why we have rainbows).

Blue light is bent the most (short wavelengths) and red (long wavelengths) is bent the least.

This means that seeing will affect blue light, or rather wavelengths around 400nm, the most, and that this detail will be the most difficult to recover, so you don't need to go all crazy with resolution trying to capture every last bit of something that is going to be blurred anyway.

Remember - the more you increase the F/ratio, the more you spread the light and the less signal and SNR you have.

That is why I recommend 500nm as the wavelength in the above calculation for RGB imaging (for narrowband one should use the actual wavelength, like when using an Ha or OIII filter to minimise seeing when imaging the Moon).

If you put 500nm into the equation then you get

F/ratio = pixel_size * 2 / 0.5 = pixel_size * 4

Both are correct, and if you are pedantic about it, pixel_size * 5 is "more correct", but you'll see almost no difference in captured detail between pixel_size * 5 and pixel_size * 4, and pixel_size * 4 will give you better SNR (which lets you sharpen more and bring out the detail that you did manage to capture in the first place).

By the way - this holds true for any type of scope: an 8" Newtonian, a 4" refractor or a C14. It has to do with the size of the aperture and the physics of light (the wave nature of electromagnetic radiation).
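Expressed as a small Python helper (a sketch only - pixel size and wavelength both in microns):

```python
def critical_f_ratio(pixel_um: float, wavelength_um: float) -> float:
    """F/ratio at which the given pixel size Nyquist-samples the finest detail the aperture can pass."""
    return 2 * pixel_um / wavelength_um

pixel = 2.9   # ASI662MC (and ASI290) pixel size in microns

print(critical_f_ratio(pixel, 0.5))   # 500 nm -> 11.6  (i.e. pixel_size * 4)
print(critical_f_ratio(pixel, 0.4))   # 400 nm -> 14.5  (i.e. pixel_size * 5)
```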


17 minutes ago, newbie alert said:

And the calculation of 30 secs is out also... you don't de-rotate to correct smeared data, you de-rotate to get sharper images across the disc. That's why you do a series of captures.

Not sure I follow?

De-rotation is still needed if a long video is captured, or if one wants to combine (stack) images produced from several short videos.

It is done to remove differences in feature position, but removing differences in feature position is what the stacking software does by default when using alignment points, so for short videos of up to 3-4 minutes there is no need for derotation.

Yesterday I saw a post by someone showing a 3 minute derotated video vs a regular AS!3 stack of that video without derotation - and sure enough, there is no difference between the two.

I'll try to find it and will post the link.


All good stuff @vlaiv - thanks!

I had struggled trying to get to grips with WinJUPOS in previous years, thinking that videos as long as 2 minutes might benefit from de-rotation, but this is clearly unnecessary.

And here is one of my images from last night, improved by advice you gave in another recent post, mainly by using short exposures (3ms here) and stacking the best 12.5% of 20,000 frames (@115fps over ~3 minutes).

image.png.9dcf0676826643130b2bc1a96a2cf373.png

 


3 hours ago, vlaiv said:

Not sure I follow?

De-rotation is still needed if a long video is captured, or if one wants to combine (stack) images produced from several short videos.

I'm not qualified to suggest anything other than what I've read, as I've never used the program, but... the idea is to do a run of stacks, put them through WinJUPOS and de-rotate. In a YouTube video Christopher Go does a run of 6 stacks when seeing is good, more when seeing is bad, and then de-rotates. His comment is that it's like stacking on steroids and helps with smoothing out the noise. If you de-rotate a 3 min video then of course there wouldn't be any difference... same as you wouldn't do a 10 min video and de-rotate, you would do a few stacks and then de-rotate.

That's how I understand it anyway


4 hours ago, vlaiv said:

It is a fairly simple calculation really - the trick is to imagine a single point on Jupiter's equator.

Perfect, thanks!


7 minutes ago, newbie alert said:

I'm not qualified to suggest anything other than what I've read, as I've never used the program, but... the idea is to do a run of stacks, put them through WinJUPOS and de-rotate.

There are two modes that you can use with WinJUPOS.

The first is derotation of single images. This is a useful feature in several cases, one of them being stacking of several stacks taken over a period of time - like you say.

Another useful case is doing LRGB or RGB imaging with a mono camera and filters. You record, for example, a 3 minute video with each of R, G and B. Now your recording spans up to 10 minutes or so (9 minutes of footage plus filter swaps / refocusing in between). In order to align the colors properly, two of the three colors need to be derotated to match the base color.

Then there is derotation of a whole video, where individual frames are derotated prior to stacking - much in the same way as the still images above - depending on their time stamp (which determines how much derotation to apply, based on how much time has passed from the reference moment to the moment of that frame).

That is useful if one records a long video but does not want to slice it up into pieces to create individual stacks and images to be derotated and then stacked again, and instead wants to stack the whole video at once. This can/should be done with videos that are longer than the said 3-4 minutes.


27 minutes ago, newbie alert said:

And in reference to the de-rotated 3 min video - WinJUPOS is done after stacking, and with the video not long enough to show much, if any, movement, what's being derotated?

If, for example, you have 3 videos, each 3 minutes long, spanning say 11 minutes total, each video being a certain color.

Stacking will handle any rotation between the first and last frame of the first video. It will do the same for the second video and for the third video.

This means that each video will be stacked without motion blur from rotation, but when you try to compose an RGB image out of them, these images will represent different times - there will be at least 8 minutes of difference between the first and last stack (if we assume that the reference frame represents the midpoint of each video: with 11 minutes of total time, half of the first video is 1.5 minutes and half of the last video is 1.5 minutes, so the difference is 11 - 3 = 8 minutes between these two time points).

In 8 minutes, features at the center of the disk will move about 8 pixels according to the above calculation for the 8" aperture (roughly half a pixel per 30 seconds). The edges will still align, as features drift progressively more slowly as you move towards the edge of the disk.

This means that in the center you will have misalignment of the R, G and B data but not at the edges, so a simple "align RGB" won't help - you need to derotate the actual stacks / channel images to align them properly.

This is an example of why you would want to derotate the stack of a 3 minute video (but not the video itself).

Another example: say you start your capture and record 3 minutes of video, but a cloud rolls in and blocks the view. It passes after 5 minutes and you then make another 3 minute video, and then you get another interruption for whatever reason (maybe your storage can only handle writing 3 minute videos).

In any case, you now have short recordings, each of which can be stacked on its own without issues, but you want to combine the resulting images by stacking them. In order to stack them, you need to derotate them for the same reason as with RGB - their centers won't align but their edges will.

Now if you've captured 15 minutes of video in a single run, you have two options. Either chop the video up into smaller fragments, stack each fragment and then derotate each of those for a final stack of stacks. The alternative is to simply derotate the whole video and stack it in a single go, without worrying about whether the stacking software can handle the large difference between the first and last frames (or some of the first and some of the last frames, depending on what is left after rejection based on quality).
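As a rough illustration of the quantity being corrected (this is not WinJUPOS itself, just a sketch using the ~10 hour rotation period from earlier):

```python
def rotation_deg(minutes_apart: float, period_hours: float = 10) -> float:
    """Degrees of planetary rotation between two stack reference times (~10 h period as used above)."""
    return minutes_apart / (period_hours * 60) * 360

# Two stacks whose reference times are 8 minutes apart (the RGB example above):
print(f"{rotation_deg(8):.1f} degrees")   # ~4.8 degrees of rotation to correct for
```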


On 25/09/2022 at 17:53, newbie alert said:

So if he had a C14 would that calculation still ring true? Christopher Go uses a 290 with a pixel size of 2.9, so should he be using his at f/11.6 or f/14.6? So that calculation is a bit vague and not really relevant.

That information came originally from Chris Go. It applies to all scopes as far as I know.


44 minutes ago, newbie alert said:

He's using a 2x Barlow, so he's nowhere near f/11.6 or f/14.6

Might explain why his images aren't as good as they perhaps should be.

Here is a comparison of recent images from Chris Go and a few northern European planetary imagers. Chris should be doing better than the rest due to the elevation advantage he enjoys in the Philippines (Jupiter at 80 degrees altitude), but he isn't - in fact I'd say he's doing worse, which I know is an arguable opinion; it could just be his processing technique.

Jupiter comparison.png


21 minutes ago, CraigT82 said:

Might explain why his images aren't as good as they perhaps should be.


Or it could just be the weather, which if it isn't in your favour there's not much you can do about it.


44 minutes ago, newbie alert said:

You're going to have to explain that, as it has no linked relevance in my brain to what I said.

 

Sure, the explanation goes like this (just a bit of math fiddling really):

On the said wiki page you can read about the spatial cut-off frequency - the maximum spatial frequency possible for a telescope of a given aperture.

Aperture and focal length are in fact combined in the F/ratio, because the size of features in the focal plane (in units of length, like millimeters or microns) depends on the focal length, while the aperture dictates how much can be resolved in angular units - the two combined give the maximum spatial frequency.

The formula is given like this:

image.png.3385c6efd93d59d97aef73e6fac3042b.png

From that formula it is very easy to arrive at this:

spatial_wavelength = lambda * F_ratio

We just invert the frequency from cycles per millimeter to a wavelength of millimeters per cycle. Here we use millimeters, but we could use microns or meters as well.

Next we combine this with the Nyquist sampling theorem, which says you need two samples per shortest spatial wavelength (or sampling at twice the maximum frequency), and we can write the following:

spatial_wavelength = 2 * pixel_size

(two pixels in one wavelength or sampling twice per wavelength)

Combining the two:

2 * pixel_size = lambda * F/ratio

and rearranging for F/ratio we get

F/ratio = 2 * pixel_size / lambda

This is the same formula I gave above in the post explaining how to get the proper F/ratio based on pixel size (and why 4 or 5 is the multiplier for pixel size).
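And a quick numeric check of that chain of formulas, as a minimal Python sketch with the 2.9 µm / 500 nm numbers used earlier:

```python
wavelength_um = 0.5    # 500 nm expressed in microns
f_ratio = 11.6         # from F/ratio = 2 * pixel_size / lambda with 2.9 um pixels

spatial_wavelength_um = wavelength_um * f_ratio    # finest cycle the optics can pass: 5.8 um
pixel_um = spatial_wavelength_um / 2               # Nyquist: two samples per cycle

print(pixel_um)   # 2.9 - back to the pixel size we started from
```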

 

