Posts posted by CraigT82

  1. Those are fabulous captures; the subtle colour really brings them to life. As for sheer resolution, I think you could still do better with the 8” newt. What is your sampling rate?
     

    My best detail from the same class of scope (well, an 8.5” newt) is below, and I think you could get close to this with your 8” in excellent seeing and with correct sampling.

     

    [attached image: best detail from the 8.5” newt]

  2. 8 hours ago, vlaiv said:

    It does look good - 2.06 px/cycle is very close to 2 px/cycle.

    Have you done the F-ratio vs pixel size maths on it to confirm the sampling rate?

    It might be that the inner circle is the actual data limit and the outer circle is some sort of stacking artifact.

    There is always some guesswork involved.

    The resulting MTF of the image is a combination of the telescope MTF and the effective seeing from all the little seeing distortions combined when stacking subs. It will change whenever we change which subs went into the stack - and we hope that they average out to something close to a Gaussian (the maths says that in the limit, whatever the seeing is, stacking will tend towards a Gaussian shape). However, we don't know the sigma/FWHM of that Gaussian - and that is part of the guesswork.

    Different algorithms approach this in different ways.

    Wavelets decompose the detail into several images. There is a GIMP plugin that does this - it decomposes the image into several layers, each layer containing the detail from a particular set of frequencies.

    On a graph it would look something like this (the theoretical exact case, though I don't think wavelets manage to decompose perfectly):

    [image: graph of the MTF curve split into frequency bands]

    So we get six images, each consisting of a certain set of frequencies.

    Then when you sharpen - or move a slider in Registax - you multiply that segment by some constant, and if you get that constant just right you end up with something like this:

    image.png.650f852ce0520021c3b57b28826ab1a8.png

    (each part of the original curve is raised back to some position - hopefully close to where it should be). The more layers there are, the better the restoration, i.e. the closer to the original curve.

    This is just a simplified explanation - the curves don't really look straight. For example, in Gaussian wavelets the decomposition is done with a Gaussian kernel, so those boxes actually look something like this:

    [image: decomposition curves with Gaussian band shapes]

    (yes, those squiggly lines are supposed to be Gaussian bell shapes :D ).
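
    To make those layers concrete, here is a minimal numpy/scipy sketch of this kind of multi-scale sharpening (the layer count, blur scales and gains are illustrative guesses, not the actual values Registax or the GIMP plugin use):

        # Sketch of wavelet-style sharpening via an "a trous"-like scheme:
        # repeated Gaussian blurs split the image into layers, each holding
        # a band of spatial frequencies, and each layer gets its own gain
        # before everything is summed back together.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def wavelet_sharpen(img, gains=(1.8, 1.4, 1.1, 1.0, 1.0)):
            img = img.astype(np.float64)
            layers, current = [], img
            for i in range(len(gains)):
                # Each pass roughly doubles the blur scale, so layer i
                # holds detail at scales of about 2**i to 2**(i+1) pixels.
                blurred = gaussian_filter(current, sigma=2.0 ** i)
                layers.append(current - blurred)  # band-limited detail layer
                current = blurred                 # remaining low frequencies
            out = current                         # residual lowest band
            for layer, gain in zip(layers, gains):
                out += gain * layer               # gain > 1 boosts that band
            return out

    With all gains set to 1.0 the layers sum back to the original image unchanged - the perfect-decomposition case in the first graph.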

    Deconvolution, on the other hand, handles things differently. In fact, there is no single deconvolution algorithm, and basic deconvolution (which is probably the best for this application) is simply division.

    There is a mathematical relationship between the spatial and frequency domains which says that convolution in one domain is multiplication in the other (and vice versa): F{f * g} = F{f} · F{g}.

    So convolution in the spatial domain is multiplication in the frequency domain, and therefore one could generate the above curve by some means, use it to divide the Fourier transform (division being the inverse of multiplication), and then take the inverse Fourier transform.

    Other deconvolution algorithms try to deal with the problem in the spatial domain. They examine the PSF and try to reconstruct the image from the blurred image and a guessed PSF - they solve the question: what would the image need to look like, if it was blurred by this PSF, to produce this result? They often use a probabilistic approach because of the random noise involved in the process. They ask which image has the highest probability of being the solution, given these starting conditions - and then use clever maths to solve that.
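
    As a sketch of that division approach (a crude Wiener-style version: the Gaussian PSF is guessed, and the small eps term is a regulariser that stops noise exploding where the PSF's transform is near zero):

        # Deconvolution as division in the frequency domain: blurring is
        # multiplication of Fourier transforms, so dividing the image's
        # transform by the PSF's transform (with regularisation) undoes it.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def fft_deconvolve(img, psf_sigma=2.0, eps=1e-2):
            img = img.astype(np.float64)
            # Guessed Gaussian PSF, centred, same size as the image.
            psf = np.zeros_like(img)
            psf[img.shape[0] // 2, img.shape[1] // 2] = 1.0
            psf = gaussian_filter(psf, sigma=psf_sigma)
            H = np.fft.fft2(np.fft.ifftshift(psf))       # transfer function
            F = np.fft.fft2(img)
            G = F * np.conj(H) / (np.abs(H) ** 2 + eps)  # regularised division
            return np.real(np.fft.ifft2(G))

    For the spatial-domain, probability-based family, scikit-image's richardson_lucy is a readily available implementation to experiment with.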

     

    Thanks! I’m not going to pretend I understood much of that, but it’s given me some direction for further reading. Thanks for taking the time to respond 😊

    I have looked at the planet size in the image and back-calculated the focal ratio; it came out at F/17, so I thought I was actually oversampled with my 2.9 µm pixels.
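
    For reference, my back-of-envelope version of that F-ratio vs pixel size maths (assuming a mid-visual wavelength of 550 nm):

        # The aperture's diffraction cutoff at the focal plane is
        # 1 / (wavelength * F-ratio) cycles per unit length, so sampling at
        # the cutoff is (wavelength * F_ratio) / pixel_size pixels per cycle.
        # 2 px/cycle is critical (Nyquist); more than that is oversampled.
        def pixels_per_cycle(f_ratio, pixel_um, wavelength_um=0.55):
            return wavelength_um * f_ratio / pixel_um

        print(pixels_per_cycle(17, 2.9))   # ~3.2 px/cycle -> oversampled
        print(pixels_per_cycle(11, 2.9))   # ~2.1 px/cycle -> near critical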

  3. 34 minutes ago, vlaiv said:

    You did not manage to restore all the frequencies properly; a "gap" was left in there. Not your fault - it is just the settings you applied when sharpening that resulted in such a restoration.

    Thanks Vlaiv, super informative as ever.
     

    The individual stacked TIFFs were all sharpened with wavelets and deconvolution before going into WinJUPOS… and I guessed at the kernel size during the deconvolution step, exactly as you said.

    Any ideas on how I could properly restore all the frequencies?!

    Sampling looks OK though, does it? I’m not sure.

  4. Had a go using the normal FFT function and got this (loaded up my WinJUPOS derotation output, OSC, with no further post-processing/sharpening). The edge of the circle is at 2.06 px per cycle.

    Does this look right, @vlaiv? There seems to be a large concentration of low frequencies.

     

    [image: FFT of the unsharpened derotation stack - data limit circle at 2.06 px/cycle]

     

    If I try the same thing on the fully post-processed image (no rescaling occurred) I get this weirdness:

    [image: FFT of the fully post-processed image]
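
    For anyone wanting to repeat the check, a rough numpy equivalent of what I did (the filename is a placeholder):

        # Log-magnitude spectrum of the stack: real detail shows as signal
        # out to some radius, beyond which only the noise floor remains.
        # Sampling in px/cycle at that limit is image_size / radius.
        import numpy as np
        import matplotlib.pyplot as plt
        from imageio.v3 import imread

        img = imread("jupiter_derotated.png").astype(np.float64)
        if img.ndim == 3:
            img = img.mean(axis=2)        # collapse OSC/RGB to luminance

        spectrum = np.fft.fftshift(np.fft.fft2(img))
        log_mag = np.log10(np.abs(spectrum) + 1.0)

        plt.imshow(log_mag, cmap="gray")
        plt.title("log-magnitude spectrum")
        plt.show()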

     

     
