Linear Data??


Astrosurf


Hi Alex

My understanding (coming from a signal processing perspective) is that it refers to data to which no nonlinear processing (curves etc) has been applied. The relationship with the data as it comes off the sensor is linear i.e. notwithstanding saturation, the numbers you see in the image are proportional to the brightness as captured at the sensor.

The important thing about linear processes is that the order in which you apply processing operations doesn't matter -- the end result is the same. In practice this means they can sometimes be made more efficient by combination. It also means they are reversible without losing information, which isn't always the case with nonlinear operations (of course, you can always "undo", but that isn't the same thing).
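That order-independence can be seen with two FIR filters, since convolution (the classic linear filtering operation) is commutative. A minimal sketch using NumPy; the signal and kernels are just illustrative:

```python
import numpy as np

# A toy 1-D "image" row and two linear (FIR) filter kernels.
signal = np.array([0.0, 1.0, 4.0, 9.0, 4.0, 1.0, 0.0])
blur = np.array([0.25, 0.5, 0.25])   # simple smoothing kernel
edge = np.array([-1.0, 0.0, 1.0])    # simple gradient kernel

# Apply the two filters in both orders.
blur_then_edge = np.convolve(np.convolve(signal, blur), edge)
edge_then_blur = np.convolve(np.convolve(signal, edge), blur)

# Because convolution is linear and commutative, the results agree
# (up to floating-point rounding).
assert np.allclose(blur_then_edge, edge_then_blur)
```

The two filters could equally be pre-combined into one kernel (`np.convolve(blur, edge)`) and applied in a single pass, which is the efficiency point made above.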

Some examples of linear operations are addition, subtraction, and certain types of filtering.

Preserving linearity is important in applications such as photometry e.g., variable star magnitude estimation.

I'd be interested to hear other perspectives as mine comes mainly from audio :smiley: but I think the same applies...

cheers

Martin


Just as Martin says - linear means no nonlinear processing has been applied. The calibrated stacks I get out of PI are still linear. I'm not sure the data from a DSS stack would remain linear if you move the saturation sliders, for example, and save with those settings.


The colloquial use of 'linear' is as described above, i.e. processing that applies the same operation to all pixels, and that operation must be linear. So anything that involves raising a pixel value to a power or taking its root, or changing a pixel value based on other pixel values, using a mask, noise reduction, etc. is a nonlinear operation. Simple explanations might say that everything before the first stretch or curves is linear, but you might apply noise reduction before the stretch and so selectively alter pixels, so that is nonlinear too.

In the technical use, linear data has to be gathered in the linear response range of the sensor, so that a given number of photons generates a given number of electrons and thus the same DN (digital number, i.e. brightness value). When you saturate a pixel it becomes nonlinear, since you go from (say) one extra photon = one extra DN to infinite extra photons = zero extra DN as soon as the pixel saturates. In scientific sensors the extra electrons leak (bloom) into adjacent pixels, creating the ugly bright columns you see in some science images. Most consumer sensors have an anti-blooming gate which drains away the excess electrons as a pixel approaches saturation. This makes for nicer pictures but means the sensor is only linear within a certain brightness range.

You can find out where by taking a series of flats from a fixed light source and increasing the exposure length in small but fixed steps. Measure the median brightness of each image and plot the brightness against exposure length on a graph. At first you will get a nice straight line. Where the line starts to curve, that is the brightness where the linear range ends for your sensor and so you would have to ensure that you stay below it for any scientific imaging purposes.
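The procedure above can be sketched in a few lines of Python. The numbers here are synthetic stand-ins for real measured flats, and the 1% deviation threshold is an arbitrary choice:

```python
import numpy as np

# Exposure lengths in seconds and the median ADU of a flat at each one.
# Synthetic data: perfectly linear at 1000 ADU/s, rolling off above 50000 ADU.
exposures = np.arange(1, 81, dtype=float)
ideal = 1000.0 * exposures
median_adu = np.where(ideal < 50000, ideal,
                      50000 + 0.2 * (ideal - 50000))  # roll-off past the knee

# Fit a straight line through the origin using only the faint end,
# where we trust the sensor to be linear.
faint = exposures < 20
slope = np.sum(median_adu[faint] * exposures[faint]) / np.sum(exposures[faint] ** 2)

# The linear range ends where the measured response falls noticeably
# (here: more than 1%) below the fitted straight line.
predicted = slope * exposures
nonlinear = median_adu < 0.99 * predicted
knee_adu = median_adu[nonlinear][0] if nonlinear.any() else None
print(f"Linear up to roughly {knee_adu:.0f} ADU")
```

With real flats you would substitute the measured median values and keep your science exposures below the detected knee.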


Linear means you can still do science with the images: you still have direct access to the number of photons received (the pixel value is some fixed number times the photon count). Once you stretch an image (making it nonlinear) you cannot recover this original photon count. At that point you have chosen Art over Science.
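A minimal illustration of that loss, assuming 8-bit data and a simple gamma stretch (the gamma value of 0.5 is arbitrary):

```python
import numpy as np

# Pretend these are linear 8-bit pixel values: value = k * photon count.
linear = np.arange(256, dtype=np.uint8)

# Apply a nonlinear "stretch" (gamma 0.5) and requantise to 8 bits,
# as any saved stretched image would be.
stretched = np.round(255.0 * (linear / 255.0) ** 0.5).astype(np.uint8)

# Try to undo the stretch with the exact inverse function.
recovered = np.round(255.0 * (stretched / 255.0) ** 2.0).astype(np.uint8)

# Highlight values collide during the stretch, so several distinct
# photon counts now share one pixel value: the original counts are gone.
print(np.unique(stretched).size)          # fewer than 256 distinct levels
print(np.array_equal(recovered, linear))  # False: information was lost
```

Note it is the quantisation after the nonlinear mapping that destroys the counts; a gamma curve applied in unbounded floating point would still be invertible.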


Take a picture of 4 objects in one field of view. The objects are made of identical light bulbs but the first has only one of these bulbs, the second has two, the third has three and the fourth has four. The objects are at the same distance so the fourth is four times brighter than the first, the second is twice as bright as the first, etc. We would, therefore, describe these four objects as having a linear relationship in terms of brightness. If the third object suddenly popped a bulb it would no longer be three times as bright as the first and the relationship would be non linear now - but this hasn't happened!

Next we take an honest camera (one which records the four objects in their true brightness.) We describe this camera as having a linear response because it can and does do this. We photograph the four objects and open the image, unprocessed, in Photoshop or GIMP etc.

We open 'Curves' and see a straight line (a linear graph) and we Alt-click on each object in turn to mark it on the curve. We find that the objects are equally spaced on the straight line, with the fourth four times higher than the first, the third three times higher, etc. The linearity of brightness found in the real objects has been preserved by the camera at capture and preserved (so far) by the imager in Photoshop. This is a linear image.

The trouble is that, if we are astrophotographers, Object One is likely to be some faint thing too feeble to be seen and Object Four is likely to be a bright star which we don't like, so we want to keep the bright star almost as it is and brighten the faint thing by a lot - so we go nonlinear by 'stretching.'

Themos, I don't know that this is always unscientific. It isn't photometric, but is photometry the only use of telescopic data? Would Nicolas Outters have discovered the Squid Nebula if he hadn't stretched his OIII data? It's pretty hard to see in a linear image. I would see this stretching as a means of extracting information - in this case the existence of an unknown object.

Olly


Good point, Olly! Have we done this before?

No, but something not unrelated, I think.  :grin: I'd argue that a well made nonlinear image can also provide structural information about galaxies and nebulae. The trouble is that most of this has now been done, but at one time discoveries could be made just by looking at pictures. (For example, Barnard's ponderings over whether the dark features in space were windows of transparency or patches of dusty obscurity. It was by looking at his pictures that he reached the conclusion we now consider to be correct. They are obscuring dust, of course.)

Olly

PS Another might be the Hubble Deep and Ultra Deep fields? 


Well to be fair, photographic plates and films are nonlinear detectors so they didn't have much choice in the matter.  (I seem to recall it is down to the way the grains in the photographic emulsion react to light.  If a grain is hit by a photon, a sort of 'molecular crack' is formed, and if enough cracks form in a short space of time then the chemical change becomes permanent, but if few cracks form in a given period then they will self-heal. This makes film insensitive to light sources of less than a given number of photons per period of time, and so nonlinear).

I did a bit of an X-ray astronomy course on Coursera before Christmas (good mental workout, but I didn't have time to finish it all). The professional software certainly does use nonlinear stretching of images so that you can spot interesting features by eye, but if you then use tools to measure differences in brightness, etc., it would be by reference to the linear values in the image, not the nonlinear display of them on screen.


Brilliant guys! And thanks so much Olly. It's not often someone can explain things in a way I understand. I'm not putting the other comments down, it's just I have a slightly skewed brain!

Thanks so much all, you've been great!

XXX



One skewed brain deserves another...

:grin:lly


The linearity issue is a bit over-rated. People who process audio spend most of their time thinking in decibels, which is a highly compressed nonlinear (log) scaling. Signal-to-noise ratio is also expressed in dB. Forensic phoneticians work in the same dB-scaled universe in the main. In part this is because the ear's response to intensity is quasi-logarithmic (actually closer to cube root), so it makes sense to talk about units in the same general scale as our sensitivity to sound. Likewise, auditory masking can only be understood because of log compression, and this forms the basis for important applications such as MP3 encoding.

That doesn't mean that audio people ignore linearity when it comes to computations. To compute the energy of two or more sounds it is necessary to revert to the linear domain (by inverting the log), add them, and (crucially) transform right back to the decibel scale. What this tells us is that it is perfectly possible to do good science with signals while spending most of our time in a nonlinear domain. An exactly analogous situation exists with stellar magnitudes. Magnitudes are a nonlinear brightness scale, yet they're the domain where we prefer to work even for applications which involve photometry. That is, the photometry is done correctly in the linear brightness domain, but immediately re-expressed as magnitude for ease of interpretation.
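Both of those round trips follow the same pattern (convert to the linear domain, sum, convert back) and can be sketched with plain arithmetic:

```python
import math

# Adding two sound levels: convert dB -> linear power, sum, convert back.
def combine_db(*levels_db):
    total_power = sum(10 ** (level / 10) for level in levels_db)
    return 10 * math.log10(total_power)

# Same pattern for stellar magnitudes: magnitude -> flux, sum, back to magnitude.
def combine_magnitudes(*mags):
    total_flux = sum(10 ** (-0.4 * m) for m in mags)
    return -2.5 * math.log10(total_flux)

print(combine_db(60, 60))          # two equal sources add ~3 dB -> ~63.01 dB
print(combine_magnitudes(10, 10))  # doubling the flux brightens by ~0.75 -> ~mag 9.25
```

Naively averaging or adding the dB (or magnitude) values directly would give the wrong answer, which is exactly the point about doing the arithmetic in the linear domain.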

So to go back to the original point, it isn't preservation-of-linearity-at-all-costs that is important but the potential for destruction of information by performing nonlinear operations that do not have a unique inverse (i.e., for which it isn't possible to subsequently go back to the linear domain to do the calculations correctly). There are plenty of processes that fit into this category: median, for one; similarly, anything involving max, min or rectification. This is why nonlinearity is dangerous for good science (IMO).
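A trivial demonstration of why a median has no unique inverse: many different inputs collapse to the same output, so no function can map the output back.

```python
import statistics

# Three quite different pixel neighbourhoods...
a = [1, 2, 3]
b = [0, 2, 100]
c = [2, 2, 2]

# ...all collapse to the same median, so given only the output "2"
# there is no way to recover which input produced it.
assert statistics.median(a) == statistics.median(b) == statistics.median(c) == 2
```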

There's plenty that can be done and yet preserve invertibility. Anything involving Fourier transforms is invertible and though I'm not very familiar with them, I believe the same is true of wavelets. 

The other benefit of keeping things linear is simply computational: it is possible to carry out an arbitrary sequence of linear operations in a combined and efficient way (e.g., combined rotation, translation and scaling of an image).
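For example, in homogeneous coordinates a rotation, a scaling and a translation each become a 3x3 matrix, so a whole pipeline of them collapses into one matrix and a single multiply per pixel. A generic sketch, not any particular package's API:

```python
import numpy as np

theta = np.deg2rad(30)

# Individual operations as 3x3 homogeneous-coordinate matrices.
rotate = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
scale = np.array([[2.0, 0, 0],
                  [0, 2.0, 0],
                  [0, 0, 1]])
translate = np.array([[1, 0, 5.0],
                      [0, 1, -3.0],
                      [0, 0, 1]])

point = np.array([1.0, 1.0, 1.0])  # (x, y) = (1, 1)

# Applying the three steps one after another...
step_by_step = rotate @ (scale @ (translate @ point))

# ...gives the same result as pre-combining them into one matrix.
combined = rotate @ scale @ translate
assert np.allclose(combined @ point, step_by_step)
```

This is just associativity of matrix multiplication, which is what makes combining an arbitrary chain of linear operations possible.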

Martin



In the days of film I think that linearity used to be called, more or less, reciprocity, and film imagers wanted to avoid 'reciprocity failure.' I'm talking off the top of my head because I never did this. However, these heroic types used to hypersensitize their film in order to increase the time span over which it would retain its linear response to light.

When we look at the best film images ever taken (say David Malin's) we can see that our huge advantage is being able to manipulate, easily, the nonlinearity of our digital data. We don't have to have the Trapezium burned out in M42.

Olly


Archived

This topic is now archived and is closed to further replies.
