Playing around with deconvolution (programming)


NickK


I found yet another negative aspect - any noise in the PSF becomes valid signal at stacking.

I have an idea to remove the noise from the PSF (as I do with the main image) which should improve things massively.

Now I'm comfortable that the processes are good - after the noise work I may look to start being a bit more rigorous and optimise (possibly GPU it).


  • 5 months later...

Not much progress since the last update - the job has been OTT busy :/

I have just watched an interesting lecture on deep learning for image processing. The interesting bit is that the model they use for identification looks similar to what I'm doing here, i.e. using convolution filters and then a 3D output of correlations for the neural net. The difference is that I'm using an FFT over the full image whereas they're using linear sub-images that are down-sized. In essence they're doing a correlation and then working out if the pattern of correlations from multiple feature filters = man, dog or raspberry, based on the net recognising the location and the object input patterns it's trained on.

The job has a (legally) mandatory 2-week vacation, so as part of that I get to tune out - and one plan is to have a think about this in further detail.

 


  • 5 months later...

Well I'm getting bored.. with the cloudy winter nights.. so I think I may pick this up again :D I need some maths in my life and sometimes it's good to take a step back and view with fresh eyes.

I may also be tempted to look at using AI on the residual noise layer to detect if a pixel is noise or not. At the pixel value level this is critical - the question is not whether a pixel is noise but how much of the pixel value is noise. So I will start researching AI noise estimation and see if there's any mileage. Additionally I will start looking at some performance improvements. I don't think it's worth hitting the GPU yet but there are definitely some algorithmic speed-ups that could be done.


  • 5 months later...

Started playing with GNU Octave. Given my MacBook Pro's memory slot has died I'm down to 8GB and a slow CPU.. this makes the entire thing more portable and a damn sight more elegant.

Guider image:

[screenshot: guider image, 2018-05-12]

 


  • 1 year later...

I have been away.. playing with the idea of building a DAC for my Mac mini with headphones from scratch (an R2R ladder DAC for the audio peeps). The fun is that I spent a week solidly researching the maths behind filtering, which is extremely closely related to images. Interpolation, FIR/IIR and (de)convolution are all seen as filters, for example.

This morning, with a head full of Octave sinc filters, it occurred to me that the deconvolution technique detailed previously in this thread isn't far away from a possibly new noise removal method for astro images.

In audio filtering the FFT is used because sound is a cyclic sine-wave pattern, and that fact means random sample noise can be removed relatively easily.

If we want to remove CCD noise, we look for patterns in the image signal (the equivalent of the sinusoidal cycles in audio). It occurred to me that every bit of light travels through the atmosphere with a blurring point spread function. This represents a repeating pattern.. a cycle of point spread functions (ignoring turbulence variation over x,y for the time being). That means we can identify signals that have travelled through the atmosphere and those that have not (noise from the CCD).

That got me thinking - we could make a system that uses the atmospheric blurring in the long exposure to reduce noise. The PSF noise from a guide star or from the image itself can be reduced by stacking the stars we've identified in the long exposure - this can then be used repeatedly to reduce noise in the psf itself (there is a psf error here too based on x,y turbulence, but let's ignore that - we'll just average it).

So using this lower-noise psf, we can re-run the phase correlation against the original image and use the correlation strengths and the psf as a reconstruction filter. Hey presto, a new reduced-noise image.

As part of the reconstruction filter - it could also deconvolve.

I'm going to investigate this using a Jupyter Notebook with Octave (Python sucks) - first with maths on a 1D slice from a real image, then the same for a full 2D image. Given Octave is single-threaded and memory-based it may run out of memory, so I may have to code it up in C/C++ with FFTW.


So if we take a single-width line, we can convolve it with a simple point spread function that simulates our atmosphere:

[screenshot: the simulated point spread function]

 

Then add these convolved stars into our picture at positions 23, 400 and 850, then add random noise of up to 20% to simulate the camera noise. I've also added 20% noise to the psf above, so we have a noisy guider psf star:

[screenshot: noisy 1D image with convolved stars at 23, 400 and 850]

Next we use phase correlation to detect the psf within the picture - the output here is the correlation, where we notice three peaks over 90% correlated - these are at positions 23, 400 and 850!

[screenshot: phase correlation output with three peaks]

Let's take the noisy image, extract the 60-pixel psf sub-images from the noisy large image and stack them; we see the noise level of the averaged PSF reduce:

[screenshot: stacked, averaged PSF]

The more stars we have (i.e. the more PSF sub-images we have) the more we can average out and remove the noise. You can see the noise is dropping - it's now about 10%.
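For anyone wanting to play along, here's a minimal Octave sketch of this 1D experiment - star positions and noise levels as described above; the gaussian PSF width and the 90% threshold are just illustrative:

% 1D toy: blur three stars with a PSF, add noise, phase-correlate, stack
N = 1024; psfLen = 60;
t = 1:psfLen;
psf = exp(-((t - 30).^2) / (2 * 5^2));     % synthetic gaussian "atmosphere" PSF
psf = psf / max(psf);

img = zeros(1, N);
img([23 400 850]) = 1;                     % three stars
blurred = conv(img, psf);
img = blurred(1:N);                        % convolved stars, original length
img += 0.2 * rand(1, N);                   % up to 20% camera noise
noisyPsf = psf + 0.2 * rand(1, psfLen);    % noisy guider psf

% phase correlation: normalised cross-power spectrum
P = zeros(1, N); P(1:psfLen) = noisyPsf;
R = fft(img) .* conj(fft(P));
xcorr1d = abs(ifft(R ./ (abs(R) + eps)));

peaks = find(xcorr1d > 0.9 * max(xcorr1d));  % spikes land at the star positions

% stack the sub-images under each peak to average the psf noise down
stacked = zeros(1, psfLen);
for k = 1:numel(peaks)
  if peaks(k) + psfLen - 1 <= N
    stacked += img(peaks(k) : peaks(k) + psfLen - 1);
  endif
endfor
stacked = stacked / max(numel(peaks), 1);  % averaged, lower-noise psf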

This process then allows us to use the new stacked sub-images as a new average psf (here's where the x,y variation means our error is growing). The alternative is to simply map out the variation over x,y based on the image stars, so we can build a function(x,y) that describes the atmosphere. We can also use the new psf (plural, depending if we want x,y) to ask "how much of this pixel is noise?" by looking at the correlation strength - the lower the correlation, the higher the probability of noise.

We can then use a reconstruction filter to process the noisy image using the psf and also deconvolve the picture!

 

The above is a simplistic example due to the speed of Octave (it's single-threaded).

 


Just starting to write a full image version:

Here's the guider star as a psf:

[screenshot: mesh plot of the guider star psf]

 

Just don't attempt mesh() with a long exposure 🤣 ....zzzz

Also there's a library called ATLAS that can be compiled on your system - if Octave finds it, it will supposedly use it. Essentially ATLAS provides multi-core/threaded implementations of the underlying maths (BLAS/LAPACK) functions.

 


I seem to understand this topic a bit (not this actual topic - but deconvolution and psfs and image restoration) and I can tell you that you are on to something.

Not sure what your actual approach is, but yes:

1. you can extract the PSF from the image, and a stack of images will improve the SNR of the PSF

2. you can use the PSF to reduce the noise

3. noise reduction and sharpening can be part of a single filter

Let's briefly address these points:

1. PSF - not sure if you should attempt to use the extracted PSF for this application, or just assume a certain analytical form of PSF - like a gaussian (a quick sketch of such profiles follows the list below). The actual PSF of a star will depend on:

- position in the field (take a newtonian scope as the obvious example - coma in the corners)

- spectral type of the star that produced it (take an achromatic refractor as the obvious example, but note it can also happen with perfectly corrected scopes, as shorter wavelengths "bend" more than longer ones)
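For reference, generating analytical profiles is only a few lines of Octave - a minimal sketch with illustrative parameters (circular for simplicity; the Moffat profile is another common analytical choice, with wider wings than a gaussian):

% synthetic 2D PSFs: gaussian and Moffat, on an n x n grid (parameters illustrative)
n = 31; c = (n + 1) / 2;
[X, Y] = meshgrid(1:n, 1:n);
r2 = (X - c).^2 + (Y - c).^2;

sigma = 3;
gauss = exp(-r2 / (2 * sigma^2));
gauss = gauss / sum(gauss(:));           % normalise to unit flux

alpha = 4; beta = 2.5;
moffat = (1 + r2 / alpha^2) .^ (-beta);  % gaussian-like core, wider wings
moffat = moffat / sum(moffat(:));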

I think that taking actual PSFs of stars in the image can be useful when trying to correct a sub that has guiding / tracking issues. That way you can get a blur kernel that is common across the respective stars and deconvolve to correct it.

I've done something similar some time ago, here are details:

Original image (excuse the processing at the time :D):

M27.png

8" F/6 newtonian on Heq5 without guiding. PE is obvious and produces elongation in RA.

After correction

M27_127F_final.png

Stars are nicer but the SNR suffered.

kernels_red.png

Red channel kernels that I got by taking 5 different stars and "comparing" them to a gaussian shape. The actual extraction was done via deconvolution (convolution in the spatial domain is multiplication in the frequency domain, so if a*b = c then a = c/b but also b = c/a, and therefore you can get the blur kernel by deconvolving a star with a "perfect" star profile - which can be synthetic or extracted from subs that don't have trailing).

star_compare.png

Comparison of effects of deconvolution on star profile.

Sorry about this digression - I just wanted to point out that for filtering you can use synthetic profiles and don't need to extract true profiles. True profiles have their use, but it's best if they are used against the same star rather than assumed to be the same across the image.
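As a toy Octave sketch of that b = c/a division - a synthetic star and a known trail kernel, so the recovery can be checked; the 1e-6 term is just a crude regulariser added to avoid dividing by near-zero frequencies:

% recover a blur kernel by frequency-domain division: b = C / A
n = 31; c = (n + 1) / 2;
[X, Y] = meshgrid(1:n, 1:n);
ideal = exp(-((X - c).^2 + (Y - c).^2) / (2 * 2^2));        % "perfect" star profile

trail = zeros(n); trail(c, c-3:c+3) = 1/7;                  % known RA-trail blur kernel
star = real(ifft2(fft2(ideal) .* fft2(ifftshift(trail)))); % trailed star

B = fft2(star) ./ (fft2(ideal) + 1e-6);  % divide the spectra
kernel = fftshift(real(ifft2(B)));       % ~ recovers the 7-pixel trail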

2. This is the key point - the PSF introduces a level of correlation between pixel values, and that can be exploited in denoising in different ways:

3. Take for example an approach that I've come up with:

Use sinc filtering, or rather a windowed sinc (for example a Lanczos filter), to decompose the original image into layers of different frequency components. Deconvolution, or rather frequency restoration in this case, would consist of "boosting" the high frequencies that got attenuated by blurring. The opposite of that is killing high frequencies that are due to noise only.
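A minimal Octave sketch of that decomposition (img is assumed to be loaded already, and the half-band cutoff is arbitrary):

% split an image into low / high frequency layers with a windowed sinc
a = 3;                            % lanczos order
t = -a : 0.5 : a;                 % taps sampled for a half-band cutoff
k = sinc(t) .* sinc(t / a);       % lanczos-windowed sinc
k = k / sum(k);                   % unity DC gain
low  = conv2(k, k, img, 'same');  % separable 2D low-pass
high = img - low;                 % layer to boost (restore) or attenuate (denoise)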

How can you distinguish the two? There is a third component in all of this - the SNR. Each astronomical image begins as a stack of subs. Let's take a simple example - a regular average stack. We take the average value to be the final value of the image, but we can also take the standard deviation of each pixel. From that and the number of stacked subs we can get a "noise" estimation.

After you remove the background signal from the image (wipe background / remove the LP offset) and divide the two, you get an SNR per pixel. We can use that value to determine whether we should "boost" the high frequencies (high SNR) or lower them (low SNR).
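As a sketch, assuming subs is an H x W x N stack of registered subs (the background wipe here is just a crude global median):

% per-pixel SNR estimate from a stack of registered subs
N = size(subs, 3);
avg = mean(subs, 3);                 % the average-stacked image
noise = std(subs, 0, 3) / sqrt(N);   % standard error of the mean, per pixel
signal = avg - median(avg(:));       % crude background / LP offset removal
snr = signal ./ (noise + eps);       % high SNR -> boost, low SNR -> attenuate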

Back to point two - you can use the PSF to do denoising in a similar way to Lucy-Richardson deconvolution, which is based on a Bayesian framework and uses knowledge of the PSF distribution to perform deconvolution. You can again take the above SNR to estimate the signal value and the uncertainty in it (assume some distribution, but a different one than in LR, because you now have a stack and it is not a simple Poisson+Gaussian - shot noise + read noise) and you have a probability associated with the PSF.

In any case - it is a valid point to use PSF characteristics to both denoise and deconvolve an image (and it can be done at the same time as well).


I could use a simple gaussian psf, however I've had better results in the past using the stars themselves. I've also used LR (earlier in the thread I ran LR on GPUs) and implemented FIR/IIR on the GPU for deconvolution.

Also you can use a microscope-style 3D approach with the PSF to find the best point of matching - essentially finding the match for the slopes around a saturated star.


12 hours ago, vlaiv said:

2. This is the key point - PSF introduces a level of correlation between pixel values and that can be exploited in denoising in different ways [...]

It sounds like “gather” - reconstructing by taking each image pixel and summing each scaled psf multiplied by the respective existing image value, done across the whole image.
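Written out (my notation), with window half-width w and the per-pixel correlation weight c(x,y), the gather is:

$$\hat{I}(x,y) = c(x,y)\sum_{u=-w}^{w}\sum_{v=-w}^{w} \mathrm{PSF}(u,v)\, I(x+u,\, y+v)$$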

This is the approach I have used previously.

However I will try a divide and see the results too..

I think going forward an idea I want to explore is probability. The idea is to reclaim signal from the faint background noise.


One of the issues with using the FFT (now I remember!) is ringing; in this case this is an FFT deconvolution, stretched (it may be due to the stretched psf so I'll check that):

[screenshot: FFT deconvolution, stretched]

 

Deconvolved image:

[screenshot: deconvolved image]

 

Original image at same scale:

[screenshot: original image at the same scale]

 

Now to try the technique I've used previously - reconstruction using correlation.


This is with the averaged psf; still got some way to go (ignore the borders - that's so I don't have to fuss with the edge detection):

[screenshot: reconstruction using the averaged psf]

Averaged means the stars are on the same scale as the image - 7 stars were used for the psf.

Stretched, it becomes darker:

[screenshot: stretched reconstruction]

 

Still - there's something I have wrong with this, given there are cycles (echoes).

Got it - the weird boxes are because I use the weight of the correlation map to scale the psf sampling kernel. Also I think there are some coordinate errors going on; too late now, but I will have a look. Fixed one and it makes a large difference to the image contrast. The obvious tell-tale is that the image has flipped..

[screenshot: reconstruction after the coordinate fix]

 

The obvious thing is I can apply a Hann window to give soft psf limits, which will help, but this is the same issue I've seen from the ObjectiveC code that does the same.

Processing time for an image is about 5 minutes, which is not bad as it's running double precision.


This thread is fascinating! I'm a newbie at SGL and now wish I'd come across it previously.

Just a couple of comments for now.

1) I find that the CLEAN algorithm can work rather well on stars.  See https://britastro.org/node/19566 for an early blundering in this area.

2) Co-adding stellar PSFs to reduce  noise is a long-used technique. The observed PSF is generally best represented as an elliptical Gaussian or Moffat profile together with a discrepancy bitmap. This  approach is used in the DAOPHOT software for crowded field photometry.  DAOPHOT can be found in the NOAO package within IRAF or PyRAF which is how I use it. I have not tried to find alternative implementations.

Looking forward to seeing your source code.


10 hours ago, Xilman said:

This thread is fascinating! [...] Looking forward to seeing your source code.


My understanding is that the CLEAN algorithm is, with the original image and psf being the "dirty" inputs:

Find the highest psf match in the dirty image, reduce the dirty image by the scaled psf, plot the match using a clean Gaussian psf into the clean output image, and repeat until a stopping criterion is met.

It is a correlation, then using the correlation map to scale a Gaussian into a clean image. In reality the correlation output is the point output (ignoring error). This is essentially the reconstruction filter I'm performing - I can make a Gaussian and demonstrate later tonight.
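As a minimal 1D Octave sketch of that loop - img is the dirty data and psf a 60-sample, peak-normalised dirty beam with its peak at index 31; the gain and stopping threshold are illustrative:

% Högbom CLEAN, 1D for brevity
dirty = img; clean = zeros(size(dirty));
gain = 0.1; niter = 500;
gpsf = exp(-((-30:29).^2) / (2 * 2^2));    % narrow "clean" gaussian beam, peak at index 31

for it = 1:niter
  [peak, p] = max(dirty);
  if peak < 3 * std(dirty)                 % stopping criterion: near the noise floor
    break;
  endif
  lo = max(p - 30, 1); hi = min(p + 29, numel(dirty));
  dirty(lo:hi) -= gain * peak * psf(lo-p+31 : hi-p+31);   % reduce the dirty psf
  clean(lo:hi) += gain * peak * gpsf(lo-p+31 : hi-p+31);  % plot the clean psf
endfor
clean += dirty;                            % add the residuals back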

I find a parameterised psf works for accurate tracking but is less successful with amateur deconvolution, where the initial deconvolution from the real psf to a parameterised form (or one complete step) is better.

For upsampling - a sinc function, FIR/IIR, or simply drizzling/super-resolution (I've done this in OpenCL before, along with Richardson-Lucy).


Not entirely sure I understand all of the above. Possibly it's because we may be addressing different problems, the clue being that you keep mentioning the use of the guider star images. My starting point is that I have an image and wish to "improve" it using only the information held within that image. Improvement generally means increasing the resolution of the image without damaging its astrometric or photometric properties "too much" --- a rather subjective criterion.

My implementation of CLEAN is still very simple and follows the original Högbom formulation. It can be improved greatly in at least two ways.

The PSF estimation is very poor, being essentially a weighted sum of image patches centred on a few bright stars. Although that has the benefit that the PSF need not be anything like a Gaussian or Moffat profile (jaggies from poor guiding, for instance), it has the disadvantage of having pixel resolution at best for each star, which itself means that the composite PSF is broader than it should be.

The other is that rather than fitting a PSF properly to each source in the dirty map and subtracting, I just centre the PSF at the brightest pixel at each iteration. Again, this does not give sub-pixel matching but, more importantly, is too sensitive to noise in the dirty map. Dark subtraction removes truly hot pixels, but a cosmic ray hit or satellite trail gives a nasty artefact in the final image. So far, removal of bright non-stellar objects has not been implemented.

Even that crude an implementation shows promise on unresolved double stars.


7 minutes ago, Xilman said:

Not entirely sure I understand all of the above. [...] Even that crude an implementation shows promise on unresolved double stars.

I have a data set that contains a number of long exposures, but I also recorded all the 2-second guider images (60x60). The guider camera is noisy but the guider is an OTL through the same scope.

The Octave code uses the first guider star image just so I don't have to point one out :), but it uses the summed sub-image PSF from the long exposure. The original ObjectiveC program takes all the guide star images during the long exposure and stacks them, but I've not coded that up into an Octave function yet.

 


This is the Octave code - it's pretty simple:

%
% Astro noise and deconvolution
% Using the atmospheric turbulence convolution as a cyclic pattern to remove noise
% NickK
% apt-get install liboctave-dev <- required for signal
% pkg install -forge control
% pkg install -forge signal

pkg load signal;
pkg load image;
pkg load fits;

% create objects
[mGuiderImage, guiderHeader] = read_fits_image("/media/deconvTestData/GuiderFITS/PHD_GuideStar_078_212009.fit",0);
[mLongExposure, longExposureHeader] = read_fits_image("/media/deconvTestData/LongExposures/383L-jupiter00009175.fit",0);
printf("guider image size is %ix%i\n", rows(mGuiderImage), columns(mGuiderImage));
% imshow(mLongExposure); hold on;

mPsf = mGuiderImage;
mesh(mPsf);

% create correlated sub images
% fft both images
% cross product
% inverse fft for result

hw=hann(60);
hannWindow = hw*hw';

fftLE = fft2(mLongExposure);
fftInPSF = zeros(rows(mLongExposure),columns(mLongExposure));
fftInPSF(1:60,1:60) = mPsf(1:60,1:60).*hannWindow;
fftPSF = fft2(fftInPSF);

% in reality we don't need to use a full width DFT as the largest cycle length is the size of the PSF

printf("fft completed\n");
printf("  long exposure image is %i x %i\n", columns(fftLE),rows(fftLE));
printf("  guider image is %i x %i\n", columns(fftPSF), rows(fftPSF));

% cross-power spectrum: fftLE .* conj(fftPSF), expanded into components
AR = real(fftLE);
AI = imag(fftLE);
BR = real(fftPSF);
BI = imag(fftPSF);
re = AR.*BR + AI.*BI;
im = AI.*BR - AR.*BI;
magRe = sqrt( re.*re + im.*im );
den = magRe.*magRe + eps;
% normalise by the magnitude - this is what makes it a *phase* correlation
correlationFFT= (re.*magRe ./ den) + j*(im.*magRe ./ den);
correlation = ifft2(correlationFFT);
correlationmap = sqrt( real(correlation).**2 + imag(correlation).**2 );

printf("correlation completed\n");

%imshow(correlationmap); hold on;
%figure;

% rescale 0 to 1 ----- min works on 1D arrays.. 
m = min(min(correlationmap));
rangexc = max(max(correlationmap))- m;
xc = (correlationmap-m)./rangexc;

% scale 0-1 to 16 bit 
xut16 = uint16(xc .* 65535); % use the rescaled 0-1 map; 65535 is the 16-bit max

% imshow(out16); hold on;
imwrite(xut16, "~/testOutCorrelationMap.png");

% printf("  min = %d   range = %d\n", m, rangexc);

printf(" xc size is %i x %i ", rows(xc), columns(xc));

%imshow(image(xc)); hold on;
%figure;
% bar(xc); hold on;
% figure;

% find is 2D, so we get a number of results 
[fRows, fCols] = find(real(xc) >0.9 ); % find returns index positions >90% signal
stacked = zeros(60,60);

printf("- Number of spikes correlated %i\n", rows(fRows));

leCols = columns(mLongExposure)-60;
leRows = rows(mLongExposure)-60;

 for row=1:rows(fRows),
    %printf(" spike at (%i, %i) = %f\n", fRows(row), fCols(row),xc(fRows(row),fCols(row)));
    if( (fCols(row)<leCols) && (fRows(row)<leRows) )
        stacked += mLongExposure(fRows(row):fRows(row)+59,fCols(row):fCols(row)+59);
    endif;
endfor;

% rescale stacked to 0 to 1
stm = min(min(stacked));
strange = max(max(stacked))- stm;
stackedRescaled = (stacked-stm)./strange;
% or average
% stackedRescaled = stacked ./ rows(fRows); % this keeps scaling inline with large image vs stretching


mesh(stackedRescaled); hold on;
figure;

printf(" starting gather deconvolution\n");

SR = zeros(rows(fftLE), columns(fftLE));
SR(1:60,1:60) = stackedRescaled;
res = fftLE ./ (fft2(SR) + eps); % naive inverse filter: image spectrum over psf spectrum (eps avoids divide-by-zero; ringing expected)
out = ifft2(res);

output = sqrt( real(out).**2 + imag(out).**2 );

% rescale
omin = min(min(output));
orange = max(max(output))- omin;
outRS = (output-omin)./orange;

% scale 0-1 to 16 bit 
out16 = uint16(outRS .* 65535); % 65535 is the 16-bit max

% imshow(out16); hold on;
imwrite(out16, "~/testOutFFTDecovolution.png");

printf("starting gather kernel deconvolution\n");

summed = zeros(rows(mLongExposure), columns(mLongExposure));

startTime = time();

for leC = 31 : columns(mLongExposure)-31,
    for leR = 31 : rows(mLongExposure)-31,
        weight = correlationmap(leR-30, leC-30);  % use the correlation power to define scale (offset window)
        w = mLongExposure(leR-30:leR+29, leC-30:leC+29);  % 60x60 window around the pixel
        e = stackedRescaled(1:60,1:60).*weight;           % psf kernel scaled by the correlation
        r = w.*e;
        s = sum( r(:) );                                  % gathered value for this pixel
        summed(leR, leC) = s;                             % write one output pixel (writing the whole block causes box artefacts)
    endfor;
endfor;

whos

endTime = time();

fprintf("Gather complete, total time %d seconds.\n", endTime-startTime);

imshow(summed); hold on; 

% rescale
gmin = min(min(summed));
grange = max(max(summed))- gmin;
gutRS = (summed-gmin)./grange;

% scale 0-1 to 16 bit 
gut16 = uint16(gutRS .* 65535); % 65535 is the 16-bit max

% imshow(out16); hold on;
imwrite(gut16, "~/testOutGatherDecovolution.png");

 

The conj() expansion is there because the code I have in ObjectiveC and FFTW allows mismatched FFTs to be processed, and I use a small window (the code has a for x,y loop around it); in Octave it seems fast enough to just vectorise in parts.

 


Ejecting noise is done by the reconstruction above, but I also have a mechanism in the ObjectiveC code that looks for hot single pixels, places them in a list, then looks at the correlation with the psf - a single point is going to have a low correlation, plus if the hot pixel value is outside the expected value deviation it simply smooths it (that's the noise removal from the first page).
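A rough Octave sketch of that test - the thresholds are illustrative, and img and a 60x60 psf (peak near the centre) are assumed to exist:

% hot-pixel check: a lone spike correlates poorly with the psf
pkg load image;
med = medfilt2(img, [3 3]);
cand = find(img - med > 5 * std(img(:) - med(:)));  % single-pixel outliers
for k = 1:numel(cand)
  [r, c] = ind2sub(size(img), cand(k));
  if r > 3 && c > 3 && r <= rows(img) - 3 && c <= columns(img) - 3
    w = img(r-3:r+3, c-3:c+3);        % local 7x7 patch around the spike
    p = psf(27:33, 27:33);            % central 7x7 of the psf
    if corr2(w, p) < 0.5              % low correlation -> treat as noise
      img(r, c) = med(r, c);          % smooth it out
    endif
  endif
endfor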

