
Widefield image processing.



Just thinking aloud...

One of the interesting challenges of widefield image processing is that the imager usually has a higher dynamic range in the target than is found in restricted fields of view. We are likely to find patches of obscuring dust which lie well below the background brightness, we have the clear background itself, then nuanced levels of reflecting dust and the full range of emission nebulosity. The dynamic range of the software does not, unfortunately, expand to allow for this. What is more, the imager will want to bring out very large-scale structures in the faint, usually dusty, signal and doing so consumes more of the available range. This is borne out by the fact that, when I add high res close-up data, the background of the original processing is almost always much darker than in the widefield.

Olly


And another issue, for those of us not blessed with frequent clear, dark skies, is vastly different background levels across the panels. I did a 4 panel mosaic with the SY135 to capture M31 and M33 in the same image and really struggled to get a uniform star field density across the panels, due I suspect to varying sky conditions across the sessions.


I would like to point a few things out on this topic.

- A fixed set of intensities mimics the way we see. We are capable of perceiving a larger intensity difference than our computer screens can show - but only because of the anatomy of our eyes: the iris opens and closes depending on the amount of light. If we fix the size of the pupil, there is only a small range of brightness that we can perceive at the same time.

Having a fixed range of brightness in processing is not a limitation of image processing / display systems - it is, for the most part, a limitation of our physiology.

- The problem is not showing a very large intensity range in a single image. The problem is detail / structure. To see detail and structure in something, we need to distinguish enough of the intensity levels that compose that detail and structure. Since we have a limited "budget" of intensities, we can choose which part of the large range to show with fine granularity. We can opt to show the high, medium or low intensity range fine grained, or, say, the low and medium ranges together with less granularity - or everything with the least granularity. This is a processing choice.
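That "budget" idea can be sketched in a few lines of Python. This is a hypothetical piecewise-linear mapping - the split point and level counts are made-up numbers purely for illustration:

```python
import numpy as np

# Hypothetical sketch of the intensity "budget": with 256 output levels,
# spend 200 of them on the faint range [0, 0.1) and the remaining 56 on
# [0.1, 1.0]. All numbers here are made up for illustration.

def allocate_levels(x, split=0.1, low_levels=200, total_levels=256):
    """Map linear values in [0, 1] to integer display levels, giving
    the faint end most of the level budget (a piecewise-linear curve)."""
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    high_levels = total_levels - low_levels
    out = np.where(
        x < split,
        x / split * low_levels,                                        # steep part: fine granularity
        low_levels + (x - split) / (1.0 - split) * (high_levels - 1),  # shallow part: coarse
    )
    return out.astype(int)

levels = allocate_levels([0.0, 0.05, 0.1, 1.0])  # -> [0, 100, 200, 255]
```

Both branches rise from left to right, so the order of intensities is preserved - the faint end simply gets most of the 256-level budget.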

- There are tools that perform "high dynamic range" processing, but I would caution against using them. They attempt to show both low and high intensity regions in high fidelity by a certain technique. Unfortunately, that technique looks unnatural and in my opinion should be avoided.

It relies on a non-monotonic stretch of intensities - something our brain, from everyday life, easily recognizes as unnatural.

Our brain does not feel cheated if we keep one thing in our processing: the order of intensities. If something is brighter in real life and we keep it brighter in the image, all is good; but if we reverse intensities, our brain starts to rebel and says: this is unnatural.

This translates into the slope of the curve in processing software - if we keep it monotonically rising from left to right, the image will look natural (it may look forcefully stretched and bad in a different sort of way, but it will be ok in the domain I'm discussing) - so:

[Curves screenshot: a monotonically rising curve]

that is ok - we have no immediate objection to the image - but this:

[Curves screenshot: a non-monotonic curve that rises, falls, then rises again]

feels unnatural - note how the curve first rises, then falls, then rises again; it is not constantly rising from left to right. This in turn produces some very unnatural looking regions in the image - strange clouds, or parts of buildings that lack detail and look flat / gray.

The slope of the curve is the level of detail in a given intensity region - the steeper the slope, the more detail; where it is flat, less detail.
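A quick way to test whether a curve respects the order of intensities is to check that it never decreases. A minimal sketch, with made-up sample curves:

```python
import numpy as np

# Small sketch: a curve (LUT) keeps the order of intensities if and only
# if it never decreases anywhere. The sample curves below are made up.

def is_monotonic(curve):
    """True if the curve never reverses the order of intensities."""
    return bool(np.all(np.diff(np.asarray(curve, dtype=float)) >= 0))

natural = [0.0, 0.4, 0.7, 0.9, 1.0]     # always rising: looks natural
hdr_style = [0.0, 0.8, 0.5, 0.9, 1.0]   # rises, falls, rises: inversion
```

The second curve is the "HDR" shape described above: two steep regions bought at the price of a falling section in between.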

A basic stretch of an astronomy image looks like this:

[Curves screenshot: a typical astro stretch - steep on the left, flat on the right]

and this is for a reason - we want to show detail in the faint regions (the left of the curves diagram), so we give that part a steep slope, and we don't care about detail in the high intensity regions, so we leave the right part flat.
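That shape can be sketched with an asinh curve, a common choice for astro stretches (the softening parameter value here is my assumption, for illustration):

```python
import numpy as np

# Sketch of the basic astro stretch shape using an asinh curve (a common
# choice; the softening parameter a=0.02 is an assumed value). The curve
# is monotonic, steep near 0 (faint detail) and shallow near 1.

def asinh_stretch(x, a=0.02):
    x = np.asarray(x, dtype=float)
    return np.arcsinh(x / a) / np.arcsinh(1.0 / a)

# Average slope over the faintest 1% vs the brightest 1% of the input range:
faint_slope = (asinh_stretch(0.01) - asinh_stretch(0.0)) / 0.01
bright_slope = (asinh_stretch(1.0) - asinh_stretch(0.99)) / 0.01
```

The slope at the faint end comes out many times larger than at the bright end - exactly the "detail in the faint, don't care about the bright" trade-off, without ever reversing direction.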

Here you can see the problem: in order to have two regions with steep slope, the curve needs to go back down and change the direction of its slope - which is no good, as it reverses intensities and confuses our brain (in effect it creates a negative image superimposed on a positive image, and we find a negative image unnatural).

You might be applying the same technique without realizing it - if you create two layers that you stretch differently and then selectively blend them, you can end up in the above situation, with a reversal of slope.

I'll try to find a classic example of this in high dynamic range processing and post a screenshot.

[Screenshot: M42 processed with HDR - bright core and faint outer nebulosity both showing detail]

Here it is - the central part of M42 shown with all its detail, while still maintaining detail in the faint outer reaches. This is intensity inversion.

58 minutes ago, vlaiv said:

I would like to point a few things out on this topic.


I agree that HDR techniques usually produce an unnatural look and I only use them in astronomy in cases like M42, where the alternative is, to my eye, worse. If the camera itself cannot span the dynamic range there is no way to present it with any curve.

In daytime photography I sometimes enjoy using HDR precisely because it does give an unnatural look. It makes the picture look like a picture, so to speak. It presents an object or view in a way that we don't normally see it and this can stimulate the eye-brain to look again and think again. I like to get it to the level where you think, 'There is something slightly odd about this...'

[Two HDR daytime photographs]

Of course, you may not like it at all!

Olly


14 minutes ago, ollypenrice said:

Of course, you may not like it at all!

In-camera HDR composition does not necessarily entail intensity inversion.

We do regular HDR composition in astronomy imaging - well, the same sort of HDR composition that a camera does.

Normally, the ratio of intensity levels that a camera can capture is roughly 1000:1 for a 14-16 bit camera (if we don't want to show the noise, the lowest signal should really have an SNR of 5 or so, which means a value of about 25 electrons; dividing 65535, i.e. 16 bits, by that gives ~2500, and dividing 16384, i.e. 14 bits, by that gives ~650, so the middle ground is about 1000:1).
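Spelling out the back-of-envelope numbers in that parenthesis (the SNR target of 5 and the ~25 electron minimum signal are the assumptions stated above):

```python
# Back-of-envelope dynamic range of a single exposure, per the figures above.
min_signal = 25                        # ~SNR 5 above the noise, in electrons
range_16bit = 65535 // min_signal      # 2621, i.e. "~2500:1"
range_14bit = 16384 // min_signal      # 655,  i.e. "~650:1"
middle_ground = (range_16bit * range_14bit) ** 0.5   # geometric mean, roughly "1000:1"
```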

However, when we stack, each sub stacked increases this number. We can easily end up with 10000:1 (that is a difference of 10 magnitudes - so we can see both a 5th magnitude star and a 15th magnitude star in the same image).

The stretch then maps that many intensity levels onto the 256 intensity levels available in a regular computer image (per channel, but let's not get into colour now - let's discuss black and white, as most of the information in the image comes from brightness). This stretch can be linear or non-linear, but it can keep the order of intensities.
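As a sketch, here is a monotonic stretch mapping a 10000:1 linear range onto 256 display levels. The choice of a log curve is mine, for illustration; the point is that the mapping quantizes but never reorders intensities:

```python
import numpy as np

# Sketch: a monotonic log stretch mapping a 10000:1 linear range onto the
# 256 display levels. The log curve is an illustrative choice; the mapping
# loses granularity but never reorders intensities.

def to_8bit(x, floor=1.0):
    """Log-stretch linear values in [floor, 10000*floor] to 0..255."""
    x = np.asarray(x, dtype=float)
    y = np.log10(np.clip(x, floor, None) / floor) / 4.0   # 4 decades -> [0, 1]
    return np.clip(np.round(y * 255.0), 0, 255).astype(np.uint8)

codes = to_8bit(np.array([1.0, 10.0, 100.0, 1000.0, 10000.0]))  # -> [0, 64, 128, 191, 255]
```

Each decade of linear intensity lands on a higher display code than the one below it, so the order is preserved end to end.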

All of that is HDR composition (either taking two exposures of different lengths and combining them, or stacking multiple exposures - the result is the same), but it won't necessarily produce an unnatural looking image, because there is no intensity inversion.

Look what happens when I do even a small intensity inversion:

[Screenshot: the same daytime scene with a normal stretch and with a slight intensity inversion]

While in the first image the ceiling and walls of the passage look a bit brighter than the scene would suggest, they still look "natural" - but as soon as you do the intensity inversion, you start seeing artifacts.

