

Jupiter, February 28, 2018


astroavani


20 minutes ago, astroavani said:

Hi Geof!
In fact, for this capture I used weak sharpening; usually I set it to 35%, which already comes out sharper. I think you know that the smaller the percentage, the stronger the sharpening.
I take either of the two output images into Registax, but I've noticed a slight gain in detail when using the image already sharpened by AS!2, though we have to be very careful not to overprocess.
I use the Registax wavelet layers to form a gentle slope from left to right: Layer 1 around 25%, Layer 2 at 20%, Layer 3 at 15%, Layer 4 at 10%, Layer 5 at 5%, and Layer 6 I do not touch.
I also use the denoise filters only on layers 1 and 2, at most 25%, all to prevent the image from looking painted.
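To see the whole slope at a glance, here is a minimal Python sketch of the settings just described; the percentages are Avani's, but the dictionary form is purely illustrative, since Registax is driven through its GUI:

```python
# Illustrative summary only: Registax has no scripting API, these are
# simply the per-layer slider values described above.
wavelet_sharpen = {1: 25, 2: 20, 3: 15, 4: 10, 5: 5, 6: 0}  # % per layer
wavelet_denoise = {1: 25, 2: 25}                             # layers 1-2 only

for layer in sorted(wavelet_sharpen):
    sharpen = wavelet_sharpen[layer]
    denoise = wavelet_denoise.get(layer, 0)
    print(f"Layer {layer}: sharpen {sharpen}%, denoise {denoise}%")
```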
I make a few more adjustments to the histogram, contrast, brightness and RGB balance, save to the source folder, and then open the result in Fitswork for deconvolution.
I use IrfanView for some corrections of brightness, contrast and colour, resizing, format conversion, etc.
After that, I'm currently testing Topaz DeNoise, which seems to reduce noise a bit without losing detail.
That is it, in principle!
Best regards

Thanks Avani, this is all very useful information. I did not know about the sharpening % in AS!2 (actually I now use AS!3) as I never used it. Usually I take the unsharpened TIFF from AS!3 directly into Registax, where I usually use only the first 2 wavelets, but with the step increment set to 1, plus a little sharpening and denoise, mostly on layer 1. I then save as TIFF and do the rest of the adjustments in PS (CS2). I am not familiar with Fitswork, IrfanView or Topaz DeNoise, though I did just look at a tutorial :D. I will experiment with sharpening in AS!3 in future, but probably will not try to capture Jupiter yet, until closer to opposition and in warmer weather - my observatory is currently covered in snow...!!

Many thanks, Geof



As I put in another post, I will repeat it here so my friend Geof knows my opinion!
I do not know if you can all understand me; I do not speak English and depend on a translator, and I was travelling during most of the discussion, so I could not give my opinion.
One very interesting thing is this signal-to-noise issue. According to Vlaiv, considering the Nyquist theorem, he used the Fourier transform to determine the best point of image capture, and in this case he arrived at F/11. No problem at all! Very straightforward, since the Fourier transform is used in the manipulation of analogue/digital signals.
I found the approach quite pertinent. On the other hand, he illustrated it with an image of mine, which he transposed into another image representing what it would be at F/11, so as to compare the capture of details. I do not know how he does that with a JPEG image; everything he presented has its share of applicability, and he knows the limitations (even if he does not mention them). In theory, all of that applies and would be the ideal of worlds. But the world is not ideal, and one of the biggest variables I see is precisely that lack of ideal conditions.
Translating: under controlled conditions, all the mathematics will apply, the Nyquist theorem and all. In practice, variation occurs. Why? Because the system is not ideal; it is stochastic, that is, it varies at every moment. This is the domain of stochastic modelling, which is much more complex and difficult, since the aim is to generate a mathematical model that comes closer to reality. Can we do it? Yes, but not in full. For example, one can use the Monte Carlo method to obtain a model that applies to the conditions in which you capture your data. For this we would have to have a large mass of data and determine which data (variables) to consider. The more variables, the more computation time is needed, which can take hours, days or months. So nobody does it at the amateur level.
You would need access to supercomputers or computing clusters. Some variables: humidity, temperature, pressure, number of particles in the atmosphere, optical distortion of your equipment, electron flow in the CCD, data capture and readout speed, data archiving format, processor TDP, processing capacity, etc. That is why the mathematics helps you not to waste time, but it is not reality; it comes close, but it is not reality. All of this is a simulation that represents reality, not reality itself. So what is it for? So that you do not waste time on pure empiricism. In other words, the best capture of subtle details is, in your case, around F/11. It may be F/18 or F/22, but it will not be F/60 or F/80. How do you know? As the Doctor Strange movie says: study and practice. The study tells you where you can be, but practice, where it really is. Your atmosphere does not allow it; it is not mathematical. It comes close, but it is not ....


Hi Avani,

I very much appreciate you writing and translating your thoughts on this topic when English is not your language. I was very interested in vlaiv's initial analysis and proposal that maybe F11 would be best, as I too prefer to image at F22, but with Jupiter, Saturn and Mars all very low this year (and for the next few years) from the UK it is a slow capture process, so F11 may have some advantage and I would be a fool not to consider it. I am not a student of statistical modelling or analogue/digital signal analysis such as the Nyquist theorem, Fourier transform or Monte Carlo method, though maybe I understand a little at a superficial level. I am governed more by what works for me in practice than by what theory says may be possible, but I am also not blind to what theory proposes. I think your penultimate sentence summarises very well how I think about this...

7 hours ago, astroavani said:

The study tells you where you can be, but practice, where it really is.

I maybe would like to try ~F18, but I do not have access to a x1.5 Powermate, so it is either F11 or F22, and for me F22 just turns out better. The most practical part is what my eyes can see when I am trying to achieve good focus in what is a very dynamic, unstable environment - my old eyes just seem to work better with the larger F22 image on the screen, though there is a trade-off, and if I try x3 (F33) it all becomes a mess... :D

Again, many thanks to both you and vlaiv for this very interesting and worthwhile discussion. I look forward to seeing many more excellent planetary images from you over the coming months and years.

Best regards,  Geof


When I changed from f16 to f24 with my 150PL (using a x3 instead of a x2 Barlow) I saw a definite increase in detail. Using an x5 Barlow just gives mush.

As Avani says, I suspect there are more than just one or two variables involved, not least camera resolution. Going to a better camera (which allows capture of 12-bit RAW) gave a further improvement despite slightly larger pixels.

 


Hello everyone!
My best captures have always been using the 16-bit option in FireCapture; I've noticed a marked increase in the number of fine details. This is explained by the fact that capturing in 16 bits greatly increases the number of tones we have available. I think it's something worth trying.
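For a sense of scale, a tiny sketch showing how many tonal levels each capture depth provides (simple arithmetic, nothing specific to FireCapture):

```python
# Number of distinct tonal levels per channel at each capture bit depth.
for bits in (8, 10, 12, 16):
    print(f"{bits}-bit capture: {2 ** bits} levels")
# 8-bit: 256, 10-bit: 1024, 12-bit: 4096, 16-bit: 65536
```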


6 hours ago, astroavani said:

Hello everyone!
My best captures have always been using the 16-bit option in FireCapture; I've noticed a marked increase in the number of fine details. This is explained by the fact that capturing in 16 bits greatly increases the number of tones we have available. I think it's something worth trying.

Can I offer a second opinion on this? :D

While I totally agree with using 16-bit (or 12-bit with modern CMOS sensors) when people need or want to use it, in reality there is no need to do so in some cases.

We can do a quick calculation to show what level of signal one might expect in a single exposure when doing lucky imaging at the focal lengths and exposure times people often use. Let's again use Avani's setup as an example.

First thing to do is determine base conditions:

- Jupiter angular diameter: 45"
- Apparent magnitude: -2.5
- Combined optical throughput of the system: 50% (a rough estimate that lumps central obstruction, losses on mirrors, QE of the sensor, ... into a single value; one could split them up into components and calculate with more precision, but that is not needed for this quick example)
- 1,000,000 photons per square centimeter per second from a magnitude 0 source (also an approximation; it depends on spectral response and the band used, and it ignores atmospheric attenuation, but again good enough for a rough calculation).

So for Jupiter we will have 10 times more photons than from a mag 0 source (10^(2.5/2.5) = 10), i.e. 10,000,000 per second per cm^2.

The aperture of a C14 (35.56 cm) has an area of ~993.15 cm^2, so the total number of photons collected in one second by the telescope will be:

0.5 * 993.15 * 10,000,000

Jupiter's image is spread over ~271,764 pixels (resolution is 0.0765"/pixel, diameter 45", so the diameter in pixels is ~588 px, the radius ~294 px, and the area r^2 * pi).

So each pixel in one second receives (roughly):

0.5 * 993.15 * 10,000,000 / 271,764 = ~18,272 photons per pixel per second

A typical exposure length in lucky imaging is 10 ms, which means we will record an average signal level of ~183 ADU (at unity gain).

So if one operates at F/22 and uses 10 ms or less on Jupiter, 8-bit precision (0-255) is enough to record the result.
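A minimal Python sketch of the arithmetic above, assuming the same round figures (mag 0 flux, 50% throughput, 0.0765"/pixel); the variable names are purely illustrative:

```python
import math

# Rough photon budget for Jupiter on a C14, using the round figures above.
photons_mag0 = 1_000_000                  # photons / cm^2 / s from a mag 0 source
mag = -2.5                                # Jupiter's apparent magnitude
flux = photons_mag0 * 10 ** (-mag / 2.5)  # 10x brighter -> 1e7 photons / cm^2 / s

aperture_cm = 35.56                       # C14 aperture, 14 inches
area = math.pi * (aperture_cm / 2) ** 2   # ~993.15 cm^2
throughput = 0.5                          # combined optics + sensor QE estimate

scale = 0.0765                            # arcsec / pixel
radius_px = 45.0 / scale / 2              # Jupiter diameter 45" -> radius ~294 px
n_pixels = math.pi * radius_px ** 2       # ~271,764 px covered by the disc

per_pixel_per_s = throughput * area * flux / n_pixels
print(f"{per_pixel_per_s:,.0f} photons / pixel / s")                 # ~18,272
print(f"{per_pixel_per_s * 0.010:,.0f} e- per 10 ms sub (8-bit ok)") # ~183
```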

For Saturn and Mars, for example, which are quite a bit fainter, in most cases one would not need more than 8-bit precision either.

So in general it depends on whether one is after the shortest exposure time to freeze the seeing (like going to 5-6 ms exposures).

But why do people see better results with 12-bit capture vs 8-bit capture?

Because it largely depends on exposure length. In the above example, if we opted for a 20 ms exposure we would have a signal level of around ~365, and 8 bits are not enough to record it. There is also the factor of how 8-bit data is produced by a particular sensor. For example, ASI cameras operate in 10-bit mode when doing 8-bit capture, and (I suspect) the sensor simply throws away the 2 LSBs (least significant bits) when converting from 10-bit to 8-bit. This means that when doing 8-bit capture one should select a system gain of at most 0.25 e-/ADU (so one electron is represented by an ADU value of 4, and when the 2 lowest bits are truncated away we still end up with the correct value: 4/4 = 1).
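A small sketch of that truncation argument; note the 2-LSB truncation is the suspicion stated above, not documented ASI behaviour:

```python
# Model of a sensor that derives 8-bit output by dropping the two least
# significant bits of a 10-bit value (assumed behaviour, see above).
def to_8bit(adu_10bit: int) -> int:
    return adu_10bit >> 2   # dropping 2 LSBs divides by 4

# At a system gain of 0.25 e-/ADU, one electron reads as 4 ADU (10-bit):
print(to_8bit(4))   # 1 -> the single electron survives truncation
# At unity gain (1 e-/ADU), one electron reads as 1 ADU (10-bit):
print(to_8bit(1))   # 0 -> the electron is lost entirely
```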

So the bottom line is: if one is after the ultimate speed of capture with very short exposures (or worries about storage space), use 8-bit recording, but make sure you know it is going to be OK to use it (depending on your parameters).

Otherwise you can't go wrong with 12/16-bit capture (the top frame rate will be a bit lower and the recording will take up more space, but those are the only drawbacks).


2 hours ago, vlaiv said:

So each pixel in one second receives (roughly):

0.5 * 993.15 * 10,000,000 / 271,764 = ~18,272 photons per pixel per second

A typical exposure length in lucky imaging is 10 ms, which means we will record an average signal level of ~183 ADU (at unity gain).

If the image has any dynamic range at all to speak of, 8 bits will be wholly inadequate, as many pixels will be far above the average.

 

 


21 minutes ago, Stub Mandrel said:

If the image has any dynamic range at all to speak of, 8 bits will be wholly inadequate, as many pixels will be far above the average.

 

 

Not necessarily:

[image: unprocessed F/11 stack posted by Avani]

This unprocessed stack that Avani posted (the F/11 image straight out of AS!2) shows that the dynamic range is less than 10:1.

Even if you account for the Poisson distribution and use something like 5 sigma, with slight modifications of the above parameters (using a more precise figure of 880,000 photons for the mag 0 source, accounting for each loss in both the atmosphere and the optical train, which will likely give a figure smaller than 0.5, and using a more realistic coherence time of 4-6 ms per frame instead of 10 ms), you can still easily fit in the 0-255 range.
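As a quick check of that 5-sigma argument, a sketch assuming the ~183 e- per 10 ms figure from the earlier estimate (the more conservative parameters above would give an even larger margin):

```python
import math

# Does a ~183 e- mean signal plus 5 sigma of photon (Poisson) noise
# still fit in the 0-255 range of an 8-bit capture?
mean_signal = 183                       # e- per pixel per 10 ms sub (from above)
sigma = math.sqrt(mean_signal)          # Poisson: variance equals the mean
peak = mean_signal + 5 * sigma          # ~251
print(f"5-sigma peak: {peak:.0f} -> fits in 0-255: {peak <= 255}")
```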

But all of that is beside the point; my point was that under certain circumstances you can fit inside 8 bits and under others you won't. I also agreed that people should use 16/12-bit unless they know exactly why they are using 8-bit and that it will work - in those cases 8-bit gives exactly the same result as 16/12-bit.


I disagree Vlaiv; the flip side with images of low dynamic range is that it is also about discrimination between features of similar, but different, brightness, as extra bits allow you to capture subtler differences for stretching later.

 

 


4 minutes ago, Stub Mandrel said:

I disagree Vlaiv; the flip side with images of low dynamic range is that it is also about discrimination between features of similar, but different, brightness, as extra bits allow you to capture subtler differences for stretching later.

 

 

Dynamic range is restored by stacking many subs. It is the nature of light that it comes in discrete packets, so when working in low-light conditions - that is, with a very short sub where the expected maximum signal is in some cases only 100e - the best dynamic range you can have in a single frame, due to physics, is simply 100:1. This can happen even if the target has a high dynamic range by itself - like 1000:1.

Suppose you have a high dynamic range target with an intensity span of 1000:1 between different sampling points, and your exposure is so short that the brightest sampling point only records 100e. Would that mean the least bright point records 0.1e? No, we can't have a fractional number of photons/electrons in a single sub! It just means that on average 9 frames out of 10 will have the value 0, and 1 frame out of 10 will have the value 1 (it is almost like that, but not quite; there can be frames with an electron count of 2, 3 or higher, but they will be rare - 1 in 20, or 1 in 30, or ... - following the Poisson distribution). But when you stack those 9/10 with 0 and 1/10 with 1, the average will indeed be 0.1e, thus restoring the dynamic range: 100:0.1 = 1000:1.

So when doing this sort of imaging, it is the maximum number of photons captured per pixel per sub that counts.
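A small simulation of that example, assuming a true signal of 0.1 e- per sub and using numpy's Poisson sampler:

```python
import numpy as np

# A sampling point whose true signal is 0.1 e- per sub: each sub records
# a whole number of electrons (Poisson distributed), mostly 0 and
# occasionally 1 or more; averaging many subs recovers the fractional
# value, and with it the dynamic range.
rng = np.random.default_rng(1)
subs = rng.poisson(lam=0.1, size=100_000)   # electron counts per sub
print(f"fraction of zero subs: {np.mean(subs == 0):.3f}")  # ~0.905
print(f"stack average (e-)   : {subs.mean():.3f}")         # ~0.100
```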

 


15 minutes ago, cuivenion said:

The SharpCap histogram in the latest version can figure out the best bit depth for you. I don't know if you've seen it yet, but it seems like it may be of interest.

I did not know that, but I think it is a really useful feature - no need for any sort of calculations; the max histogram value based on the selected parameters can tell you what bit precision is needed.


On 06/03/2018 at 17:07, vlaiv said:

Dynamic range is restored by stacking many subs. It is the nature of light that it comes in discrete packets, so when working in low-light conditions - that is, with a very short sub where the expected maximum signal is in some cases only 100e - the best dynamic range you can have in a single frame, due to physics, is simply 100:1. This can happen even if the target has a high dynamic range by itself - like 1000:1.

Suppose you have a high dynamic range target with an intensity span of 1000:1 between different sampling points, and your exposure is so short that the brightest sampling point only records 100e. Would that mean the least bright point records 0.1e? No, we can't have a fractional number of photons/electrons in a single sub! It just means that on average 9 frames out of 10 will have the value 0, and 1 frame out of 10 will have the value 1 (it is almost like that, but not quite; there can be frames with an electron count of 2, 3 or higher, but they will be rare - 1 in 20, or 1 in 30, or ... - following the Poisson distribution). But when you stack those 9/10 with 0 and 1/10 with 1, the average will indeed be 0.1e, thus restoring the dynamic range: 100:0.1 = 1000:1.

So when doing this sort of imaging, it is the maximum number of photons captured per pixel per sub that counts.

 

Forgive me... I see where you are coming from now...

