
Histogram levels for the different planets


Chrb1985

Recommended Posts

7 hours ago, Chrb1985 said:

I use 70-90% on R, G and B when I image Jupiter.

For Saturn and Mars I use: R 70%, G 50% and B 25-30%.

How are you guys doing it? I use those settings after I read an article on The London Astronomer:

https://www.thelondonastronomer.com/it-is-rocket-science/2018/6/7/a-quick-guide-to-planetary-imaging

That looks like an interesting article... I need to read it properly. But off the top of my head, provided you aren't clipping data and are saving as RAW, I guess the histogram shouldn't matter too much. Someone will doubtless correct me though! 

When capturing without WP correction the histogram distribution tends to be much as you describe for Mars. I tried comparing this to WP correction applied at the time of capture, i.e. aligning the RGB curves, and it seemed to make little difference. 


8 hours ago, Chrb1985 said:

I use 70-90% on R, G and B when I image Jupiter.

For Saturn and Mars I use: R 70%, G 50% and B 25-30%.

How are you guys doing it? I use those settings after I read an article on The London Astronomer:

https://www.thelondonastronomer.com/it-is-rocket-science/2018/6/7/a-quick-guide-to-planetary-imaging

I did not read the article, just skimmed over it, but there are two things wrong with it straight away.

- first is the advice to target a certain histogram value

- second is the advice to limit planetary captures to very short times (without any mention of capture resolution) - e.g. 90 seconds for Mars or 45 seconds for Jupiter

To see whether 90 seconds really is the max for Mars - please look at this post:

and also the subsequent test performed based on those calculations:

 


18 minutes ago, vlaiv said:

I did not read the article, just skimmed over it, but there are two things wrong with it straight away.

- first is the advice to target a certain histogram value

- second is the advice to limit planetary captures to very short times (without any mention of capture resolution) - e.g. 90 seconds for Mars or 45 seconds for Jupiter

To see whether 90 seconds really is the max for Mars - please look at this post:

and also the subsequent test performed based on those calculations:

 

So I should not target certain histogram values?

Or follow the recommended max capture lengths???


2 minutes ago, Chrb1985 said:

So I should not target certain histogram values?

Or follow the recommended max capture lengths???

No, you should not target a certain histogram value.

You should target a certain sampling rate (a combination of barlows to get the best "zoom" - maximum detail for a given aperture, but not too much "zoom" beyond what the scope can deliver, because that just lowers SNR) and a certain exposure time.

These are the two variables that you should adjust for a given "equipment profile" (gain, or rather read noise, is considered part of the equipment profile).

You want your exposure time to be at or just below the coherence time for your site on a given night, so that you freeze the seeing (usually about 5-6 ms for average conditions) - but not lower, as again you lose SNR that way.

Once you establish the above parameters - you'll end up with a certain histogram value, be that 20% or 80%. As long as there is no clipping / overexposure - you are fine.

Max capture length you can calculate easily by knowing a few facts about your target and equipment. See the example in the thread linked above. Use the calculated value as a guide for max capture length (without the need for derotation). The smaller the scope or the coarser the working resolution - the longer you can capture for ...
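If you want a rough number for that "zoom" / sampling rate, here is a small Python sketch of one common rule of thumb - Nyquist sampling at the aperture's diffraction cutoff, roughly lambda / (2 * D) - which reproduces the 0.21"/px figure for a 10" scope that comes up later in this thread. Treat it as an illustration only (the 2.9 um pixel size is just an example value), not something taken from the article:

import math

def optimum_sampling_arcsec_per_px(aperture_mm, wavelength_nm=510):
    # Nyquist sampling at the diffraction cutoff: lambda / (2 * D), converted to arcseconds
    rad_per_px = (wavelength_nm * 1e-9) / (2 * aperture_mm * 1e-3)
    return math.degrees(rad_per_px) * 3600

def required_focal_length_mm(pixel_size_um, arcsec_per_px):
    # focal length at which one pixel spans the given angle (small-angle approximation)
    return (pixel_size_um * 1e-3) / math.radians(arcsec_per_px / 3600)

print(optimum_sampling_arcsec_per_px(254))    # ~0.21 "/px for a 10" (254 mm) scope
print(required_focal_length_mm(2.9, 0.21))    # ~2850 mm of focal length with 2.9 um pixels

Pick barlows that get you close to that focal length and you are at a sensible "zoom" for the aperture.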

 


15 hours ago, vlaiv said:

No, you should not target a certain histogram value.

[...]

Max capture length you can calculate easily by knowing a few facts about your target and equipment. See the example in the thread linked above. Use the calculated value as a guide for max capture length (without the need for derotation). The smaller the scope or the coarser the working resolution - the longer you can capture for ...

 

But can you give me that equation?


7 hours ago, Chrb1985 said:

The one on how you calculate the max recording time. I did not quite catch that.

There is no "formula" per se, but we can devise one if you are up to it.

Let's do a case study and see what kind of formula we can come up with.

The first thing, and a very important one to understand, is that AutoStakkert!3 can stack subs that are quite shifted from their original position. It can, for example, stack lunar shots that experience quite a bit of motion from frame to frame due to seeing. In fact I believe the limit to be the size of the alignment point - each alignment point is "searched" for to match frame to frame.

In any case this is our first variable, and if we want to be conservative about it we can say - let's allow a max motion of 5 pixels here.

This means that the max motion of a feature between the first and last frame, due to the rotation of the planet, is 5 pixels in total. You can put a different value here if you wish; I think 5 is good enough for this calculation.

How much is 5 pixels in arc seconds? That depends on your sampling resolution.

For example, let's say you are sampling at 0.21"/px (this is the optimum sampling rate for a 10" scope).

How much is 5 pixels in arc seconds then? Well - that one is easy: 5 px x 0.21"/px = 1.05"

The fastest moving feature on the surface of the planet must not move more than 1.05" for the duration of our recording. Let's say we are talking about Mars here. The fastest moving feature due to rotation is at the equator, facing directly towards us. How fast does that move? That depends on the diameter of the planet and its rotation speed.

Let's do the case of Mars - Mars does one full rotation in 1 day and 37 minutes, or 1477 minutes. It has a radius of 3389.5 km. This means that the circumference of Mars at the equator is 2 * PI * 3389.5 = ~21296.86 km.

A feature moves this distance in one whole revolution, so its speed of motion is 21296.86 / 1477 = ~14.419 kilometers per minute.

Now the question is - what angle does 14.419 kilometers subtend at a distance equal to the current Mars distance?

The current Mars distance is 63,423,358 km.

14.419 km at a distance of 63,423,358 km corresponds to an angle of 0.04689"

(use this calculator http://www.1728.org/angsize.htm

or a bit of trigonometry)

So we have 0.04689" per minute, yet we can't let the feature move more than 1.05" - so how many minutes is that?

~22.4 minutes

You can actually record for 22.4 minutes without the first and last frames having more than 5 pixels of motion at the Martian equator - something AS!3 can easily handle, since it can handle 5 px of motion due to seeing.

The article recommends that you should limit your recording to 90 seconds for Mars, yet we just calculated that you can use 22.4 minutes without any issues if you are using a 10" scope at the optimum sampling rate.
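If you want this as a reusable "formula", here is a small Python sketch of the same arithmetic - a rough sketch for illustration, using the input values from this example (5 px tolerance, 0.21"/px, Mars radius, rotation period and current distance):

import math

def max_capture_minutes(radius_km, rotation_minutes, distance_km,
                        arcsec_per_px=0.21, max_drift_px=5):
    # equatorial speed of a surface feature, in km per minute
    speed_km_per_min = 2 * math.pi * radius_km / rotation_minutes
    # angular speed as seen from Earth, in arcseconds per minute (small-angle approximation)
    arcsec_per_min = math.degrees(speed_km_per_min / distance_km) * 3600
    # allowed drift in arcseconds, then the time it takes to accumulate that drift
    max_drift_arcsec = max_drift_px * arcsec_per_px
    return max_drift_arcsec / arcsec_per_min

# Mars with the numbers above - prints ~22.4 minutes
print(max_capture_minutes(3389.5, 1477, 63_423_358))

Plug in Jupiter's radius, rotation period, current distance and your own sampling rate and you get the equivalent limit for Jupiter.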

 


14 hours ago, vlaiv said:

There is no "formula" per se, but we can devise one if you are up to it.

[...]

The article recommends that you should limit your recording to 90 seconds for Mars, yet we just calculated that you can use 22.4 minutes without any issues if you are using a 10" scope at the optimum sampling rate.

 

So about 7 min on each channel? Man, you are a really smart man! What do you do for a living?


1 hour ago, Chrb1985 said:

So about 7 min on each channel? Man, you are a really smart man! What do you do for a living?

Ok, so to really understand what I've calculated above, let's examine a few cases:

1. A sequence shot with an OSC camera.

In this case - you can shoot for ~22 minutes without any need for derotation of the video. Stacking software like AS!3 is capable of compensating for the rotation between the first and last frame (and hence between any other two). This is due to the fact that AS!3 uses alignment points to compensate for the seeing-induced warping of the image - which is very similar to rotation.

2. Three different sequences are shot - one for each filter. The same rule applies as above - you can shoot each filter for 22 minutes, and within each video there will be no need for derotation - AS!3 will handle it.

However - in order to compose an RGB image out of the three separate stacks - you'll need to derotate two of them to match the third one.

This is true even if you shoot 7-minute videos for a total of 21 minutes (less than the 22 that we calculated). This happens because AS!3 creates a reference frame out of the good frames over the duration of the video and determines the true position of the alignment points as the average of their positions across those subs. As a consequence - the resulting stack is a snapshot of the planet as it was midway through that particular video.

If we take for example 7 minutes per channel - each channel image will have no issues with motion blur due to rotation - AS!3 takes care of that - but each of the channel stacks will represent an image of Mars at a different time: about 7 minutes apart, with each time roughly midway through the corresponding video. In order to compose the image - you'll need to derotate two of those frames to align properly with the third, as they are effectively "shot" at different times.

In the end - since you'll need to derotate the channels for alignment anyway - you might as well shoot up to 20 minutes per channel - that way no single channel will have motion blur by itself, and you'll be doing the derotation regardless.

My advice would be to use the following sequence:

- shoot R or B first

- shoot G in the middle

- shoot the remaining channel last (depending on whether you chose to go with R or B first, you'll shoot B or R last - hope this makes sense)

In the end leave G as is - that will be the reference point, so don't derotate that one - rather, derotate the first channel forward in time to match G and derotate the last channel back in time to match G.

Why G as the reference? Well, cameras tend to have the greatest QE in that range and, similarly, human vision is geared towards green carrying the most information in terms of contrast and sharpness. For this reason it should be the best channel of the three, and since derotation slightly blurs the data - it's best to leave G as the reference and derotate the other two - hope this makes sense as well :D
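Just to illustrate the timing, here is a tiny Python sketch with made-up clock times (three back-to-back 20-minute captures, R then G then B - hypothetical numbers, not from any real session):

# each stack effectively represents the planet at roughly the midpoint of its video,
# so the derotation offsets relative to G work out like this:
captures = {"R": (0, 20), "G": (20, 40), "B": (40, 60)}   # start / end, minutes

midpoints = {ch: (start + end) / 2 for ch, (start, end) in captures.items()}
reference = midpoints["G"]

for ch, mid in midpoints.items():
    offset = reference - mid   # + means derotate forward in time, - means back
    print(f"{ch}: derotate by {offset:+.0f} min to match G")

# R: +20 min (forward), G: +0 (it is the reference), B: -20 min (back)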

 

I'm a software engineer / system architect.


56 minutes ago, vlaiv said:

and since derotation slightly blurs data -

This is interesting... I think you said on another thread that it's the interpolation during derotation that harms the data, can you go into more detail on that please? (if you don't mind!)


2 minutes ago, CraigT82 said:

This is interesting... I think you said on another thread that it's the interpolation during derotation that harms the data, can you go into more detail on that please? (if you don't mind!)

No, I don't mind. It is quite technical, but I'm going to simplify it as much as I can.

The thing is that every processing step that requires resampling affects the data. This is particularly important in planetary imaging if it is done prior to wavelets / deconvolution or any other frequency restoration technique.

To show you what happens, I'm going to use plain Gaussian noise, as it has an equal distribution of intensities over frequencies (it has some value at every frequency component, and on average it is uniform). It is also rather easy to see the effect of this additional blurring on pure noise.

Ok, so here is the first image:

[image: pure Gaussian noise - left half unaltered, right half shifted by half a pixel using linear interpolation]

Both halves of this image contain the same thing - pure Gaussian noise; in fact it is the same image - the left side is unaltered, while the right side has just been translated by half a pixel using linear interpolation (I made a copy, did the translation and pasted half of it over the original image).

You should be able to tell that the right side looks blurred compared to the left - the noise grain is not as "sharp" as on the left. People doing long exposure imaging with DSS will recognize this - background noise that looks coarse-grained. That is because DeepSkyStacker uses linear interpolation when aligning frames.

We can actually do some fancy math in the Fourier domain to quantify this blur. Shifting an image in the spatial domain does nothing to the intensities in the frequency domain (it only changes the phases - the magnitudes remain the same). If I take the Fourier transforms of the original noise and of the shifted noise image and divide them, we get the MTF of the interpolation used to translate the data.

Look what happens:

[image: frequency spectrum of the original noise (left) vs the translated noise (right)]

On the left is the frequency spectrum of the original noise image, while on the right is the frequency spectrum of the translated noise image (using linear interpolation). They are clearly different, and there is clearly some attenuation of frequencies going on due to the resampling. Low frequencies are towards the center of the image while high frequencies are towards the edges. The attenuation is in the high frequencies.

If we divide these two images we get the attenuation function - here it is:

[image: the attenuation function (ratio of the two spectra)]

Here is this 2D function plotted in 3D:

[image: 3D plot of the attenuation function]

And here is the profile along a line going from the center in the X direction:

[image: profile of the attenuation function along the X direction]

Now, if this reminds you of a telescope MTF, which looks like this:

[image: typical telescope MTF curve]

then you would be right - it is the same kind of thing, just applied to the data once more. It has a slightly different shape - it affects the data in a slightly different way, attenuating the lower frequencies less than the telescope aperture does - but remember, these effects are cumulative - they in fact multiply each other (and the atmospheric MTF as well - at each step another factor multiplies in).

The thing is - we can't avoid the telescope MTF and the atmospheric MTF, but we can avoid this extra step if we don't subject the data to yet another round of high frequency attenuation.

In reality, things are not going to be as bleak if we use advanced resampling techniques instead of simple linear interpolation. Here is the same interpolation MTF, but this time for Quintic B-Spline interpolation:

[images: MTF of Quintic B-Spline interpolation - 2D map, 3D plot and X-direction profile]

As you can see - this is much better - it does almost nothing to frequencies up to half of the highest frequency in the image.

In an ideal world we could use the Sinc function for the filtering - it has a perfect rectangular frequency response, so no frequency below the maximum frequency is attenuated:

[image: the sinc function and its rectangular frequency response]

Unfortunately, the Sinc function is infinite in extent and can't be applied exactly to finite images.

Ok, now derotation is not a simple translation. It "translates" different pixels by different amounts. This leads to a sort of "wavy" frequency attenuation - some parts of the image get blurred more and some less. Sharpening such an image with wavelets leads to artifacts, as you can imagine.
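If you want to reproduce the noise experiment yourself, here is a rough Python / NumPy sketch of the same idea (shift Gaussian noise by half a pixel with linear interpolation, then compare the amplitude spectra) - just a quick sketch, not the exact tool used for the images above:

import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(size=(512, 512))

# half-pixel shift in x via linear interpolation = average of neighbouring pixels
shifted = 0.5 * (noise + np.roll(noise, 1, axis=1))

# amplitude spectra of both images, with DC moved to the center
spec_orig = np.abs(np.fft.fftshift(np.fft.fft2(noise)))
spec_shift = np.abs(np.fft.fftshift(np.fft.fft2(shifted)))

# a pure shift leaves the amplitudes untouched, so the ratio of the two spectra
# is the MTF of the interpolation itself
mtf = spec_shift / spec_orig

# profile from the center (low frequency) outwards along x
center = mtf.shape[0] // 2
print(mtf[center, center::32])   # ~1 at low frequencies, dropping as frequency increases

Swap the simple averaging for a better interpolation kernel and you will see the curve stay much closer to 1, as with the Quintic B-Spline above.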

Hope this explanation was understandable and useful?

 

 

 


On 26/09/2020 at 18:54, vlaiv said:

Ok, so to really understand what I've calculated above, let's examine a few cases:

[...]

In the end - since you'll need to derotate the channels for alignment anyway - you might as well shoot up to 20 minutes per channel - that way no single channel will have motion blur by itself, and you'll be doing the derotation regardless.

Do you actually mean 20 min per channel? How do you derotate a 20-minute video??


1 hour ago, Chrb1985 said:

Do you actually mean 20 min per channel? How do you derotate a 20-minute video??

Yes, under the conditions given above, you can record Mars as it is now for up to 20 minutes per channel / video.

You don't have to derotate the video; you only need to derotate the stacked image from one video to align it with the images from the previous and next videos (you need to align two channels to the third).

As far as I know WinJupos has a derotation feature - it can derotate a video frame by frame, or derotate a single image.

[image: screenshot of the WinJupos de-rotation options]

This is a screenshot from a video dealing with derotation:

https://www.youtube.com/watch?v=nOqY49FkomM

As you can see - it can derotate a single image, derotate RGB frames (to align them), or derotate a whole video - which is just derotation of the individual frames so that they match in time and orientation.

The subs in a video that is 20 minutes long will differ in rotation - the first few subs will clearly show a different rotation than the last few. The point is that software like AS!3 can compensate for that in the same way it compensates for seeing differences between successive frames.

If you watch this video

https://www.youtube.com/watch?v=sO9KbbzP09U

you will notice how Jupiter jumps around due to seeing. If you take two successive frames and stack them without any alignment - you will get large motion blur. This does not happen in AS!3 because it has alignment points and knows how to figure out the transformation between successive frames to make them "the same", or rather stack-compatible (to some extent it unwarps the warping done by the atmosphere). It does the same with rotation, so there is no need to derotate separately.

Once you record for a longer duration than calculated above - the rotation becomes too great for AS!3 to handle. That is why we did the calculation in the first place. Those 4-5 pixels of shift used in the above calculation were confirmed by Emil himself as the distance that AS!3 will handle.

