
Posts posted by NickK

  1. On 21/06/2020 at 14:48, StuartJPP said:

    If people knew how Switch Mode Power Supplies operate then they'd think twice about using them.

    Think of a Switch Mode Power Supply as a nest full of angry wasps buzzing away, until something (anything) upsets them.

    I am of course being a bit facetious, and allegedly they can be designed properly.

    On the flip side, a freshly charged 12V leisure battery can deliver at most about 14V, give or take. It can deliver a lot of current, so it can melt wires and set fire to things if wired incorrectly, but it will never deliver more voltage than chemically possible.

    Now if only the manufacturers would put an "idiot" diode in line with the circuit board or better yet a FET and derate the component operating voltages/currents then it would be practically impossible to blow a board up by plugging it in, except for actual component failure on that board.

     

    I've recently been heavily involved in producing a 24Vdc -> 240Vdc boost converter for a valve amp. Let's ignore the topology discussion for now, but you're right that SMPS are an inherently noisy way of either stepping down or stepping up power. One or more switches are flipping anywhere from 50,000 to 1,000,000 times a second depending on design and configuration; that high-frequency switching is also the reason the components are smaller. SMPS that plug into the mains typically convert from 240Vac to about 390Vdc before bucking down to the intended 24Vdc or 12Vdc.

    Mean Well produce good supplies - including medical ones. Just ensure that (a) the supply is "isolated" and (b) it has over/under-voltage and short-circuit protection. A lot of the Mean Well units also have PFC, which makes them very efficient. Their 24V supplies only have around 150mV peak-to-peak of ripple. There's some noise, but for audio, shifting the noise up in frequency is a great way to make things quieter (it's easier to filter).

    Now to my point: it is possible to post-regulate (e.g. using a Maida regulator with an LT3080 instead of the old LM317) at up to 1A with 1mV of ripple or below, which puts you at least down to -60dBV - the supply I'm working on is around -120dBV at 240Vdc - while still being efficient. That lower ripple (regardless of frequency range) means low noise for cameras; traditionally this is why APers have used linear regulated power supplies, as they have no switching noise.
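    Treating those ripple figures as RMS relative to 1V, the dB arithmetic is just 20·log10 of the ripple voltage - a quick Octave check of the numbers quoted above:

    % ripple voltage expressed in dBV (dB relative to 1 Vrms)
    ripple_to_dBV = @(vrms) 20*log10(vrms);
    printf("1mV ripple = %.0f dBV\n", ripple_to_dBV(1e-3));   % -60 dBV
    printf("1uV ripple = %.0f dBV\n", ripple_to_dBV(1e-6));   % -120 dBV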

    I have a Mean Well supply on my koi pond filter controller. I've had it blow due to water ingress, but otherwise it's dependable and just works. A new controller box solved the water leak and there's been no problem since. I'd certainly recommend Mean Well supplies.

    As always - careful of humidity and water ingress ;)

     

  2. Another issue with Wine is its support for multithreading. It's not brilliant for high-performance work (i.e. real-time or games) but works well enough for low-end stuff.

    Personally I'd have a look at Linux and INDI/KStars. Lots of drivers, and they seem good enough.

  3. 3 hours ago, Avocette said:

    So would you recommend 2x2 binning in this case? I’m pretty sure I’ve read somewhere that 2x2 binning even for the monochrome 120MM is a good idea when using the Evoguide scope which is 242mm focal length.

    That would probably work well and actually allow a faster frame rate (depending on the seeing).

    There is a focal length vs binned pixel size trade-off, but if the guide star image is large enough then a centre-of-mass calculation shouldn't have a problem - a quick scale check below.
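    For reference, the image scale follows the standard 206.265 × pixel size (µm) / focal length (mm) formula; a quick Octave check, assuming the usual 3.75µm pixels for the 120MM:

    scale = @(pixel_um, fl_mm) 206.265 * pixel_um / fl_mm;   % arcsec per pixel
    printf("1x1: %.2f\"/px\n", scale(3.75, 242));     % ~3.2"/px unbinned
    printf("2x2: %.2f\"/px\n", scale(2*3.75, 242));   % ~6.4"/px binned 2x2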

    All my cameras are mono.

     

  4. 27 minutes ago, don4l said:

    I'm using the 120MC with PHD2.  I've no idea if the fact that it is colour has had any effect.  It just seems to work.

    The only issue is that the Bayer matrix can confuse some star tracking. If the star's wavelength is blocked by one of the colour filters, the tracker may lose the star or find that the star's centre point moves irregularly.
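    A toy illustration of the effect in Octave: a Gaussian star sampled through a hypothetical Bayer attenuation pattern shifts the centre-of-mass estimate (the channel attenuation values here are invented for the demo):

    % synthetic star on a 9x9 grid, true centre at (5.3, 4.7)
    [x, y] = meshgrid(1:9, 1:9);
    star = exp(-((x-5.3).^2 + (y-4.7).^2) / 2);

    % hypothetical Bayer response for a red star: R=1.0, G=0.3, B=0.1
    bayer = ones(9, 9) * 0.3;          % green everywhere...
    bayer(1:2:end, 1:2:end) = 1.0;     % ...red sites
    bayer(2:2:end, 2:2:end) = 0.1;     % ...blue sites
    sampled = star .* bayer;

    % centre-of-mass centroid
    com = @(img) [sum(sum(img .* x)), sum(sum(img .* y))] / sum(img(:));
    printf("mono centroid:  (%.2f, %.2f)\n", com(star));
    printf("bayer centroid: (%.2f, %.2f)\n", com(sampled));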

  5. Right ... getting to the end of the ODroid C2 build..

    I've created a VirtualBox 64-bit x86 Ubuntu install for cross-compiling INDI, KStars and this to ARM aarch64.. so I will start playing with the focuser over the next week(s).

    This will allow me to test using the Mac straight to the focuser (a 12-core Mac with 32GB RAM is a lot faster!), then simply patch over the binaries.

    Once it's working, I'll look at adding stop switches for limits.

     

  6. The C2 uses eMMC for the system drive (16GB), so it's power-to-ready in ~12sec, but it only has USB 2.0 - still fast enough for an SSD image store. With passive cooling there are no vibrations either. I use UBECs to convert from 12V to 5V - small but powerful enough :)

    Like the RPi it does everything in a box :)

    Just trying to pick up courage to reinstall from scratch! Will probably do that tomorrow.

     

  7. 15 hours ago, Gina said:

    All this seems so much easier in KStars/Ekos/INDI with the RPi than it was in Windoze!!  The more I use it the more I'm impressed!  😀

    Amazing to think that something the size of this can do everything including astrometry!

    [attached photo]

    Just playing around - the build on this is from 2016 and was my first 64-bit self-compilation of many INDI/KStars things.. so I'm seriously considering updating it to correct the many package errors that are stopping upgrades etc.

     

  8. So: Guide Camera -> Mount, rather than Guide Camera -> RPi -> Mount?

    ST4 simply says nudge N/S/E/W, and the length of the pulse is how long the nudge is (via the autoguider port on the NEQ6). There isn't any logic or protocol to change speeds etc. If the hand controller decides it's going to do something then it will cause a problem with the tracking.

    Most cameras are simply cameras - ones with an ST4 port simply pass the image to the computer, which analyses it and sends the ST4 pulse instructions back through the camera via USB. Autoguider cameras that analyse the image inside the camera short-cut that loop - no computer is involved in the guiding (the mount doesn't know about the computer).
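    The pulse arithmetic itself is simple: sidereal rate is roughly 15"/s, so at a guide rate of 0.5x sidereal a correction of e arcsec needs a pulse of about e / (0.5 × 15) seconds. A sketch in Octave (the guide rate here is an assumption - use whatever the handset is actually set to):

    sidereal = 15.04;     % arcsec per second of time
    guide_rate = 0.5;     % fraction of sidereal (assumed handset setting)
    err_arcsec = 1.2;     % example guide error
    pulse_ms = 1000 * err_arcsec / (guide_rate * sidereal);
    printf("pulse length %.0f ms\n", pulse_ms);   % ~160 ms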

    INDI has a guider module;

    Quote

    Ekos Guide Module enables autoguiding capability using either the powerful built-in guider, or at your option, external guiding via PHD2 or lin_guider. Using the internal guiding, guider CCD frames are captured and sent to Ekos for analysis. Depending on the deviations of the guide star from its lock position, guiding pulse corrections are sent to your mount via any device that supports ST4 ports. Alternatively, you may send the corrections to your mount directly, if supported by the mount driver. Most of the GUI options in the Guide Module are well documented so just hover your mouse over an item and a tooltip will popup with helpful information.

    So basically set the INDI guider up for internal guiding, connect the ST4 ports up and away you go.. you don't need to remove the handset and use a PHD-style dongle to replace the handset controller.

    It's not as advanced as PHD2 but more info here: https://indilib.org/about/ekos/guide-module.html

  9. 15 hours ago, Alien 13 said:

    Audio design is difficult at the best of times and totally different to any other form of electronics because of the minute signals in use, filters are very hard to get right and over zealous use of them can kill the sound completely.

    I have used specially selected versions of chips like the LM317 in the past with very good results even in Moving Coil phono stages although still preferred ECC81 valves  even for that application.

    Alan

    My initial idea was to get pure digital from PSU to headphones with possibly a passive RC filter at the end.

    Going through all the research it became apparent, for R2R vs single-bit sigma-delta, that the noise from square waves needs careful control (far beyond an active Sallen-Key Butterworth filter). Also the ladder resistances for the higher bits require tighter tolerances (0.01%). Then, by the time the switching on the ladder is operating at ~3MHz, you're getting into high-frequency circuit design.. bypass caps, track layouts etc all become too costly to simply DIY. I find the software and building digital filters relatively easy (including the FPGA), but given the low-volume cost of high-precision resistors and the number of possible mistakes in hardware.. it would be easier to select an OEM board.
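    To see why the tolerance requirement grows with bit depth, here's a quick Monte Carlo sketch in Octave (a simplified binary-weighted model of the ladder rather than a full R2R network solve): perturb each bit weight by the resistor tolerance and look at the worst mid-scale step, which is where ladder DACs misbehave most.

    nbits = 16;
    trials = 1000;
    for tol = [1e-2 1e-3 1e-4]                  % 1%, 0.1%, 0.01% resistors
      err = zeros(1, trials);
      for t = 1:trials
        w = 2 .^ -(1:nbits) .* (1 + tol*randn(1, nbits));  % perturbed bit weights
        low  = sum(w(2:end));                   % code 0111...1
        high = w(1);                            % code 1000...0
        err(t) = (high - low) * 2^nbits - 1;    % step error in LSBs (ideal = 0)
      endfor
      printf("tol %g: worst mid-scale step error %.1f LSB\n", tol, max(abs(err)));
    endfor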

    In the end I will go with a Soekris R2R OEM board (this has USB, FPGA, an ultra-stable clock, and switches and resistors at 0.01%). The board then requires a line buffer or headphone amp, power supplies and control boards. Those are easier to build or source.

  10. 1 hour ago, symmetal said:

    Any noise on the power rails can be greatly reduced by a 2 stage LC filter. I run my astro setup from a single SMPSU with a 2 stage LC filter in each distribution output, values tailored to what that output is supplying. This also isolates noise created by the device the output is connected to from affecting the other outlets. Probably overkill but it doesn't cost much extra and gave me an excuse to make a PCB. :smile:

    As many cameras are now powered from USB, which will be very noisy anyway, and they don't seem to be affected by it, they have their own on board filtering methods. As regulated power supply noise is all high frequency then only small values of L and C are needed so take up little space.

    Probably cheaper than a bank of ultra low noise regulators. :smile:

    Alan

     

    Lol yes - it's easy to make a filter to kill off anything outside of 50Hz from the mains, then regulate to a voltage. Most cameras I have are 12V with the fans (shock horror) also running from the same power.

    Just seemed like an option to provide heavily filtered 12V :) It's possible to run isolated USB with the regulator providing quiet USB power too.
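    For anyone sizing the LC stages Alan describes: the corner frequency is the usual f_c = 1/(2π√(LC)), and each stage rolls off at -40dB/decade above it. A quick Octave check with plausible (assumed) component values:

    L = 100e-6;  C = 100e-6;               % assumed 100uH / 100uF per stage
    fc = 1 / (2*pi*sqrt(L*C));             % ~1.6 kHz corner
    fsw = 100e3;                           % typical SMPS switching frequency
    att = -40 * log10(fsw / fc);           % ideal 2nd-order slope, in dB
    printf("fc %.0f Hz: one stage ~%.0f dB, two stages ~%.0f dB at %g Hz\n", fc, att, 2*att, fsw);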

  11. So I've been exploring the world of DIY audio.. specifically making a headphone "DAC" with an R2R ladder-style DAC. So effectively the headphones are connected to the power supply - thus ultra-low-noise, stable DC power regulation is needed.

    Now my current power system is a supply (normally a battery) that is then fed through an RC-style regulator to give the required voltages and current.

    However the LT3045 regulator is something worth noting .. 

    Quote

    Ultralow RMS Noise: 0.8μVRMS (10Hz to 100kHz)
    Ultralow Spot Noise: 2nV/√Hz at 10kHz
    Ultrahigh PSRR: 76dB at 1MHz
    Output Current: 500mA

    Wide Input Voltage Range: 1.8V to 20V
    Single Capacitor Improves Noise and PSRR
    100μA SET Pin Current: ±1% Initial Accuracy
    Single Resistor Programs Output Voltage
    High Bandwidth: 1MHz
    Programmable Current Limit
    Low Dropout Voltage: 260mV
    Output Voltage Range: 0V to 15V
    Programmable Power Good
    Fast Start-Up Capability
    Precision Enable/UVLO
    Parallelable for Lower Noise and Higher Current
    Internal Current Limit with Foldback
    Minimum Output Capacitor: 10μF Ceramic
    Reverse-Battery and Reverse-Current Protection

    Only 500mA, I hear.. well actually you can connect these in parallel, and the datasheet's reference design shows a 4-device, 2A arrangement that pushes the heat and noise levels down further! So this would be able to run a camera at 12V, but another option is to "upgrade" the internal camera power.
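    The parallel trick works because the regulators' noise is uncorrelated, so it adds in quadrature while the output currents add directly: N devices give 1/√N the noise and N times the current. A quick Octave check against the datasheet figure:

    n_single = 0.8;                        % uVrms, LT3045 datasheet figure
    for N = [1 2 4]
      printf("%d in parallel: %.1f A, %.2f uVrms\n", N, N*0.5, n_single/sqrt(N));
    endfor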

    The downside.. in single quantities the regulator chip itself (surface mount) is about $/£/€5 each.

    Just wondered if anyone had explored creating an astro "ultra quiet" power regulator that could be plugged inline for example.

     

  12. It seems from reports that Catalina has some issues with USB too.

    When you connect the camera to USB can you see it in OSX?

    Hold down Alt when you have the apple menu open (menu from the top left) - "About this Mac" then becomes "System Information". Navigate to the USB part of the tree and you should see all the USB devices it recognises.

  13. Ejecting noise is done by the reconstruction above, but I also have a mechanism in the Objective-C code that looks for single hot pixels, places them in a list, then looks at their correlation with the PSF - a single point is going to have a low correlation - and if the hot pixel's value is outside the expected deviation it simply smooths it (that's the noise removal from the first page).
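    A minimal Octave sketch of that test (not the Objective-C code itself - the window size and thresholds here are made up): a hot pixel correlates poorly with a broad PSF and sits outside the local deviation, so it gets replaced with the neighbourhood median.

    % img: image; psf3: 3x3 centre crop of the measured PSF; kSigma: outlier threshold
    function img = smooth_hot_pixels(img, psf3, kSigma)
      for r = 2:rows(img)-1
        for c = 2:columns(img)-1
          patch = img(r-1:r+1, c-1:c+1);
          nb = patch(:);  nb(5) = [];              % neighbours, centre excluded
          % a single bright point has a low correlation with the PSF shape
          if (img(r,c) > mean(nb) + kSigma*std(nb)) && (corr(patch(:), psf3(:)) < 0.5)
            img(r,c) = median(nb);                 % smooth the hot pixel
          endif
        endfor
      endfor
    endfunction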

  14. This is the Octave code - it's pretty simple:

    %
    % Astro noise and deconvolution
    % Using the atmospheric turbulence convolution as a cyclic pattern to remove noise
    % NickK
    % apt-get install liboctave-dev <- required for signal
    % pkg install -forge control
    % pkg install -forge signal
    
    pkg load signal;
    pkg load image;
    pkg load fits;
    
    % create objects
    [mGuiderImage, guiderHeader] = read_fits_image("/media/deconvTestData/GuiderFITS/PHD_GuideStar_078_212009.fit",0);
    [mLongExposure, longExposureHeader] = read_fits_image("/media/deconvTestData/LongExposures/383L-jupiter00009175.fit",0);
    printf("guider image size is %ix%i\n", rows(mGuiderImage), columns(mGuiderImage));
    % imshow(mLongExposure); hold on;
    
    mPsf = mGuiderImage;
    mesh(mPsf);
    
    % create correlated sub images
    % fft both images
    % cross product
    % inverse fft for result
    
    hw=hann(60);
    hannWindow = hw*hw';
    
    fftLE = fft2(mLongExposure);
    fftInPSF = zeros(rows(mLongExposure),columns(mLongExposure));
    fftInPSF(1:60,1:60) = mPsf(1:60,1:60).*hannWindow;
    fftPSF = fft2(fftInPSF);
    
    % in reality we don't need to use a full width DFT as the largest cycle length is the size of the PSF
    
    printf("fft completed\n");
    printf("  long exposure image is %i x %i\n", columns(fftLE),rows(fftLE));
    printf("  guider image is %i x %i\n", columns(fftPSF), rows(fftPSF));
    
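     % the block below forms fftLE .* conj(fftPSF) by hand and normalises it by
     % its magnitude - i.e. the cross-power spectrum used for phase correlation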
     AR = real(fftLE);
     AI = imag(fftLE);
     BR = real(fftPSF);
     BI = imag(fftPSF);
     re = AR.*BR + AI.*BI;
     im = AI.*BR - AR.*BI;
     magRe = sqrt( re.*re + im.*im );
     den = magRe.*magRe + eps;
     correlationFFT= (re.*magRe ./ den) + j*(im.*magRe ./ den);
     correlation = ifft2(correlationFFT);
     correlationmap = sqrt( real(correlation).**2 + imag(correlation).**2 );
    
    printf("correlation completed\n");
    
    %imshow(correlationmap); hold on;
    %figure;
    
    % rescale 0 to 1 ----- min works on 1D arrays.. 
    m = min(min(correlationmap));
    rangexc = max(max(correlationmap))- m;
    xc = (correlationmap-m)./rangexc;
    
    % scale the rescaled 0-1 map to 16 bit
    xut16 = uint16(xc .* 65535);
    
    % imshow(out16); hold on;
    imwrite(xut16, "~/testOutCorrelationMap.png");
    
    % printf("  min = %d   range = %d\n", m, rangexc);
    
    printf(" xc size is %i x %i ", rows(xc), columns(xc));
    
    %imshow(image(xc)); hold on;
    %figure;
    % bar(xc); hold on;
    % figure;
    
    % find is 2D, so we get a number of results 
    [fRows, fCols] = find(real(xc) >0.9 ); % find returns index positions >90% signal
    stacked = zeros(60,60);
    
    printf("- Number of spikes correlated %i\n", rows(fRows));
    
    leCols = columns(mLongExposure)-60;
    leRows = rows(mLongExposure)-60;
    
     for row=1:rows(fRows),
        %printf(" spike at (%i, %i) = %f\n", fRows(row), fCols(row),xc(fRows(row),fCols(row)));
        if( (fCols(row)<leCols) && (fRows(row)<leRows) )
            stacked += mLongExposure(fRows(row):fRows(row)+59,fCols(row):fCols(row)+59);
        endif;
    endfor;
    
    % rescale stacked to 0 to 1
    stm = min(min(stacked));
    strange = max(max(stacked))- stm;
    stackedRescaled = (stacked-stm)./strange;
    % or average
    % stackedRescaled = stacked ./ rows(fRows); % this keeps scaling inline with large image vs stretching
    
    
    mesh(stackedRescaled); hold on;
    figure;
    
    printf(" starting gather deconvolution\n");
    
    SR = zeros(rows(fftLE), columns(fftLE));
    SR(1:60,1:60) = stackedRescaled;
    res = fft2(SR) ./ fftLE;
    out = ifft2(res);
    
    output = sqrt( real(out).**2 + imag(out).**2 );
    
    % rescale
    omin = min(min(output));
    orange = max(max(output))- omin;
    outRS = (output-omin)./orange;
    
    % scale 0-1 to 16 bit
    out16 = uint16(outRS .* 65535);
    
    % imshow(out16); hold on;
    imwrite(out16, "~/testOutFFTDecovolution.png");
    
    printf("starting gather kernel deconvolution\n");
    
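    % gather kernel: scale the stacked PSF by the local correlation strength,
    % multiply it against the 60x60 window around each pixel and store the sum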
    summed = zeros(rows(mLongExposure), columns(mLongExposure));
    
    startTime = time();
    
    for leC = 31 : columns(mLongExposure)-31,
        for leR = 31 : rows(mLongExposure)-31,
            weight = correlationmap(leR-30, leC-30);  % use the correlation power to define scale (offset window)
            w = mLongExposure(leR-30:leR+29, leC-30:leC+29);
            e = stackedRescaled(1:60,1:60).*weight;
            r = w.*e;
            s = sum( r(:));
            summed(leR-30:leR+29, leC-30:leC+29) = s;
        endfor;
    endfor;
    
    whos
    
    endTime = time();
    
    fprintf("Gather complete, total time %d seconds.\n", endTime-startTime);
    
    imshow(summed); hold on; 
    
    % rescale
    gmin = min(min(summed));
    grange = max(max(summed))- gmin;
    gutRS = (summed-gmin)./grange;
    
    % scale 0-1 to 16 bit
    gut16 = uint16(gutRS .* 65535);
    
    % imshow(out16); hold on;
    imwrite(gut16, "~/testOutGatherDecovolution.png");
    

     

    The conj() expansion (the AR/AI/BR/BI block above) is because the code I have in Objective-C and fftw allows mismatched FFTs to be processed, and I use a small window there (the code has a for x,y loop around it); in Octave it seems fast enough just to vectorise in parts.
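    For what it's worth, that whole expansion collapses to two vectorised lines in Octave (the same maths with conj() used directly, equivalent up to where the eps guard sits):

    X = fftLE .* conj(fftPSF);             % cross-power spectrum
    correlationFFT = X ./ (abs(X) + eps);  % normalise to phase correlation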

     

  15. 7 minutes ago, Xilman said:

    Not entirely sure I understand all of the above.  Possibly it's because we may be addressing different problems, the clue being that you keep mentioning the use of the guider star images. My starting point is that I have an image and wish to "improve" it using only the information held within that image. Improvement generally means increasing the resolution of the image without damaging its astrometrical or photometrical properties "too much" --- a rather subjective criterion.

    My implementation of CLEAN is still very simple and follows the original Högbom formulation.  It can be improved greatly in at least two ways.

    The PSF estimation is very poor, being  essentially a weighted sum of image patches centered on a few bright stars. Although that has the benefit that the PSF need not be anything like a Gaussian or Moffat profile (jaggies from poor guiding for instance), it has the disadvantage of having pixel resolution at best for each star which itself means that the composite PSF is broader than it should be.

    The other is that rather than fitting a PSF properly to each source in the dirty map and subtracting, I just centre the PSF at  the brightest pixel at each iteration. Again, this does not give sub-pixel matching but, more important, is too sensitive to noise in the dirty map.  Dark subtraction removes truly hot pixels but a cosmic ray hit or satellite trails gives a nasty artefact in the final image.  So far, removal of bright non-stellar objects has not yet been implemented

    Even that crude an implementation shows promise on unresolved double stars.

    I have a data set that contains a number of long exposures, and I recorded all the 2-second guider images (60x60). The guider camera is noisy but the guider is an OTL through the same scope.

    The Octave code uses the first guider star image just so I don't have to point one out :) but it uses a summed sub-image PSF from the long exposure. The original Objective-C program takes all the guide star images during the long exposure and stacks them, but I've not coded that up into an Octave function yet.

     

  16. 10 hours ago, Xilman said:

    This thread is fascinating!  I'm a newbie at SGL and now wish I'd come across it previously.

    Just a couple of comments for now.

    1) I find that the CLEAN algorithm can work rather well on stars.  See https://britastro.org/node/19566 for an early blundering in this area.

    2) Co-adding stellar PSFs to reduce  noise is a long-used technique. The observed PSF is generally best represented as an elliptical Gaussian or Moffat profile together with a discrepancy bitmap. This  approach is used in the DAOPHOT software for crowded field photometry.  DAOPHOT can be found in the NOAO package within IRAF or PyRAF which is how I use it. I have not tried to find alternative implementations.

    Looking forward to seeing your source code.


    My understanding is that the CLEAN algorithm is, with the original image and PSF both "dirty":

    For each highest PSF match in the dirty image: reduce the value of the PSF in the dirty image, plot the match using a clean Gaussian PSF into the clean output image, and repeat until the stopping criteria are met.

    It is a correlation, then the correlation map is used to scale a Gaussian into a clean image. In reality the correlation output is the point output (ignoring error). This is essentially the reconstruction filter I'm performing - I can make a Gaussian and demonstrate later tonight.
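    For readers following along, a minimal Högbom-style CLEAN loop in Octave matching the description above (the loop gain, threshold and iteration cap are made-up parameters):

    % dirty: image; psf: odd-sized, peak-normalised PSF patch
    function cleanMap = hogbom_clean(dirty, psf, gain, thresh, iters)
      cleanMap = zeros(size(dirty));
      h = floor(rows(psf)/2);
      for i = 1:iters
        inner = dirty(1+h:end-h, 1+h:end-h);    % keep the PSF window in bounds
        [peak, idx] = max(inner(:));
        if peak < thresh, break; endif
        [r, c] = ind2sub(size(inner), idx);
        r += h;  c += h;                        % back to full-image coordinates
        dirty(r-h:r+h, c-h:c+h) -= gain * peak * psf;  % peel off the dirty PSF
        cleanMap(r, c) += gain * peak;          % record a clean component (delta)
      endfor
      % convolve cleanMap with a clean Gaussian and add the residual if desired
    endfunction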

    I find a parameterised PSF works for accurate tracking, but it is less successful for amateur deconvolution, where deconvolving from the real PSF to a parameterised form first (or doing it in one complete step) is better.

    For upsampling - a sinc function, FIR/IIR, or simply drizzling/super-resolution (I've done this in OpenCL before, along with Richardson-Lucy).

  17. This is with the averaged PSF - still got some way to go (ignore the borders - that's so I don't have to fuss with the edge detection):

    [screenshot of the averaged result]

    Averaged means the stars are on the same scale as the image - 7 stars were used for the PSF.

    Stretched, it becomes darker:

    [screenshot of the stretched result]

     

    Still - there's something I have wrong with this, given there are cycles (echoes).

    Got it - the weird boxes are because I use the weight of the correlation map to scale the PSF sampling kernel. I also think there are some coordinate errors going on - too late now, but I'll have a look. Fixed one and it makes a large difference to the image contrast; the obvious tell-tale is that the image has flipped..

    [screenshot after the coordinate fix]

     

    The obvious thing is I can apply a Hann window to give the PSF soft limits, which will help, but this is the same issue I've seen from the Objective-C code that does the same.

    Processing time for an image is about 5 minutes, which is not bad as it's running in double precision.

  18. 12 hours ago, vlaiv said:

    2. This is the key point - PSF introduces a level of correlation between pixel values and that can be exploited in denoising in different ways :

    3. Take for example approach that I've come up with:

    use Sinc filtering or rather windowed sinc (for example lanczos filter) to decompose original image into layers of different frequency components. Deconvolution, or rather frequency restoration in this case would consist of "boosting" high frequencies that got attenuated by blurring. Opposite of that is killing high frequencies that are due to noise only.

    How can you distinguish the two? There is third component in all of that - by use of SNR. Each astronomical image begins as stack of subs. Let's take simple example - regular average stack. We get average value to be final value of the image, but we can also take standard deviation of each pixel. From that and number of stacked subs we can get "noise" estimation.

    After you remove background signal from image (wipe background / remove LP offset), and divide the two you will get SNR per pixel. We can use that value to determine if we should "boost" high frequencies (high SNR) or lower them (low SNR).

    Back to point two - you can use PSF to do denoising in similar way Lucy-Richardson deconvolution works - it is based on Bayesian framework and uses knowledge of PSF distribution to perform deconvolution. You can take again above SNR to estimate signal value and uncertainty in it (assume some distribution, but different than one in LR because you have stack now and it is not simple Poisson+Gaussian - shot noise + read noise) and you have probability associated with PSF.

    It sounds like a “gather” reconstruction - taking each image pixel, then summing each scaled PSF multiplied by the respective existing image value, across the whole image.

    This is the approach I have used previously.

    However I will try a divide and see the results too..

    I think going forward an idea I want to explore is probability. The idea is to reclaim signal buried in the faint background noise.
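    A sketch of the per-pixel SNR estimate vlaiv describes, assuming the registered, background-subtracted subs are held in a rows x cols x N array called subs (a name made up for the example):

    N = size(subs, 3);
    signal = mean(subs, 3);                 % the stacked image
    noise  = std(subs, 0, 3) / sqrt(N);     % standard deviation of the mean
    snr = signal ./ (noise + eps);          % per-pixel SNR map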

  19. I could use a simple Gaussian PSF; however, I've had better results in the past using the stars themselves. I've also used LR (earlier in the thread I used LR on GPUs) and implemented FIR/IIR on GPU for deconvolution.

    Also you can use a microscope-style 3D approach with the PSF to find the best point of matching - essentially finding the match for the slopes around a saturated star.
