Posts posted by vlaiv

  1. Here are the results, and they are quite "unexpected" to say the least :D

    image.png.b36ba57192c61b87f7370e12778c5389.png

    I generated Gaussian noise in a 512x512 image and measured the standard deviation after fractional binning, at 0.1 intervals of the bin factor (starting at 1 and ending at bin x3).

    Interestingly enough, it is almost a straight line with dips at certain points - whole numbers, 1.5 and 2.5 - probably because the level of correlation is small at those factors.

    9 minutes ago, Martin Meredith said:

    I use fractional binning in my Jocular EAA tool (for colour binning). And I can imagine that in general it is very useful to be able to match binning to seeing, which is always going to be best suited to fractional values.

    It is implemented using interpolation via the rescale function available in the scikit-image module of Python. I'd suggest there are as many ways to implement fractional binning as there are interpolation algorithms, but I imagine any reasonably sophisticated interpolation algorithms (that include antialiasing for example) will do what you need.

    Martin

    This type of binning - at least how I see it - is a bit different from using interpolation, especially for bin coefficients larger than 2.

    Take for example bilinear interpolation - it will only "consult" 4 neighboring pixels, depending on where the interpolated sample falls. With a factor larger than 2, fractional binning will operate on at least 9 pixels.

    But you are right, as implemented above, fractional binning is very much like a rescaling algorithm. I plan to implement it slightly differently though - like the split algorithm described in the first post - at least for some fractions (2 and 3 in the denominator and suitable values in the numerator). That will remove this sort of pixel-to-pixel correlation and implement a sort of "weighting" in the stack, because some pixel values will be present multiple times.
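
    For reference, here is a minimal sketch of how a measurement like the one above could be run, using the split-style fractional binning described in the first post (each pixel split into 10x10 sub-pixels, then k x k blocks of sub-pixels averaged, giving bin factors of k/10). This is not the exact code behind the plot - just a sketch, with a smaller 256x256 noise image to keep it light.

    import numpy as np

    def fractional_bin(img, num, den):
        # Split each pixel into den x den equal sub-pixels, then average
        # num x num blocks of sub-pixels - an overall bin factor of num/den.
        up = np.repeat(np.repeat(img, den, axis=0), den, axis=1)
        h = (up.shape[0] // num) * num
        w = (up.shape[1] // num) * num
        return up[:h, :w].reshape(h // num, num, w // num, num).mean(axis=(1, 3))

    rng = np.random.default_rng(0)
    noise = rng.normal(0.0, 1.0, (256, 256))   # unit-sigma Gaussian noise

    for k in range(10, 31):                    # bin factors 1.0 to 3.0 in steps of 0.1
        b = fractional_bin(noise, k, 10)
        print("bin x%.1f: std %.4f (SNR gain %.3f)" % (k / 10.0, b.std(), 1.0 / b.std()))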

     

  2. Just now, michael.h.f.wilkinson said:

    BTW, the banding I refer to in the 1.1x case should not so much be visible as dark and light stripes, but rather as noisier and less noisy stripes. This could perhaps be detected by calculating the variance per column or row.

     

    It turns out that it was an artifact due to a wrong condition in a for loop - < instead of <= :D - but you are right, that sort of artifact happens when you rotate an image by a very small angle and use bilinear interpolation. It is visible only in the noise if you stretch your histogram right - due to different SNR improvement: a small fraction of one pixel and a large fraction of another gives a small SNR improvement, while equal-size fractions give an SNR improvement of 1.414... So you get a "wave" in the noise distribution going over the image.
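
    A rough sketch of that per-row / per-column check, applied to a pure-noise frame rotated by a small angle with bilinear interpolation (order=1), which is the situation described above. This is just an illustration of the diagnostic, not code from the thread.

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)
    noise = rng.normal(0.0, 1.0, (512, 512))
    rotated = ndimage.rotate(noise, angle=0.5, reshape=False, order=1)

    # Trim the border affected by the rotation, then look at the noise level
    # per column and per row - periodic banding shows up as a periodic
    # variation in these values.
    inner = rotated[32:-32, 32:-32]
    col_std = inner.std(axis=0)
    row_std = inner.std(axis=1)
    print("column std: min %.3f max %.3f" % (col_std.min(), col_std.max()))
    print("row std:    min %.3f max %.3f" % (row_std.min(), row_std.max()))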

  3. 3 minutes ago, michael.h.f.wilkinson said:

    The banding is easily explained in the 1.1x case, as the weights by which pixels are added vary periodically. Looking just at the horizontal binning, in the first pixel the weights are 10 and 1, the second 9 and 2, the third 8 and 3, then 7 and 4, 6 and 5, 5 and 6, etc. At the first pixel the expected gain in S/N is a modest 11/sqrt(101), or just 1.0945; with weights of 5 and 6 we get an improvement of 11/sqrt(36+25), or 1.4084. In the case of 1.5x this effect does not occur, because the weights alternate between 2, 1 and 1, 2 (again just looking at 1-D binning).

    What I would do to debug is to first bin horizontally, and then bin the result vertically.

    Managed to find a bug - no banding is present now, and the power spectrum displays nice attenuation from the center towards the edges.

    I'm now trying to hunt down another bug (don't you just love programming :D) - for some reason I get an SNR improvement of x2 for binning x1.5, which means the values are not calculated correctly (or maybe my initial calculation was wrong - I'll check on a small 3x3 image whether the values are calculated correctly).

  4. 10 minutes ago, michael.h.f.wilkinson said:

    It means some of the power in the Fourier power spectrum of the noise moves to lower spatial frequencies, so the 1.8x suppression at the highest frequencies will be offset partly by an increase at slightly lower frequencies. A certain bumpiness in the background might be seen if you go pixel peeping

    I'm getting that sort of result, but here is what confuses me - it is more pronounced at a low fraction than at a high fraction.

    If I bin by a factor of x1.1, I get quite a few artifacts in pure noise as a result of the change in frequency distribution - but it seems to be only in "one direction". The FFT confirms this. It is probably due to an error in the implementation, but I can't seem to find it :D (as is often the case with bugs).

    Here is example:

    image.png.8cd8d5f90f84f2c6c49c93fed5659f0e.png

    The distribution is no longer Gaussian but narrower, and there are vertical "striations" in the image. Here is the raw power spectrum via FFT:

    image.png.76ed358d4d1c06807a78fc75baec548c.png

    It's obvious that the higher frequencies are attenuated - but again only in a single direction - which is weird and should not happen. There is obviously a bug in the software that I need to find.

    On the other hand, here is bin x1.5 - the effect is much less pronounced:

    image.png.b887d4d18c09c15a6fbc73c47abf25be.png

    Both the image and the distribution look OK, but the power spectrum still shows the above "feature":

    image.png.4eb068101e9096ad1abfd1e7c15a6ea0.png
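
    For anyone wanting to reproduce this kind of check, here is a minimal sketch of a raw power spectrum of a noise frame via FFT. The frame here is just synthetic Gaussian noise standing in for whatever image is being inspected.

    import numpy as np

    def power_spectrum(frame):
        # 2D FFT, shifted so the zero frequency sits at the centre of the array.
        f = np.fft.fftshift(np.fft.fft2(frame))
        return np.abs(f) ** 2

    rng = np.random.default_rng(0)
    frame = rng.normal(0.0, 1.0, (512, 512))
    log_ps = np.log10(power_spectrum(frame) + 1e-12)   # log scale is easier to stretch / inspect
    print(log_ps.shape, float(log_ps.min()), float(log_ps.max()))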

     

     

  5. 29 minutes ago, Tommohawk said:

    I could post all the above as FITS but they are pretty massive files and TBH on visual inspection the reduced size JPEGS above appear identical to the FITS. I'll attach a light - but TBH what I'm now wondering is what the individual flats look like before averaging by Sharpcap. I don't have individual flats subs cos Sharpcap just  stacks them and doesn't keep separate subs. I'll redo the flats as individual subs and repost.

    OK, here is a couple of things you should try out.

    First, yes, go for the ASCOM drivers. Those can also be selected in SharpCap as far as I remember. In my (outdated) version of SharpCap, they are available in the menu:

    image.png.071b72b59629c3102002f68bc21e045c.png

    SharpCap then issues a warning that you are using an ASCOM or WDM driver for a camera that SharpCap can control directly - just click that you are OK with that.

    An alternative would be to use different capture software. Nina seems to be popular free imaging software that you can try out:

    Second would be examining your subs. I wanted you to post your subs for examination - but just a few measurements and an overview of the FITS header information can help. You don't need to post complete files; you can just post the following:

    1. Statistics of each sub - one flat, other flat, related flat darks, ...

    2. Histogram of each sub

    3. Fits header data

    For sub that you uploaded, here is that information:

    image.png.44bd1de387e44196d5e346955cd1462a.png

    From this I can tell you straight away that you have an offset issue. You should not have a value of 16 in your sub even for bias subs, let alone subs that contain signal!

    Here is the histogram, for example:

    image.png.70e954e90233f8638fb430fd3d7fe9bc.png

    This one looks rather good, but I can bet that the darks and flat darks will be clipping to the left due to the offset issue.

    Here is the FITS header:

    image.png.56c36b7ffa85b8dfb13ac62627ecaab0.png

    So I can see that you have your gain set at 300, that the temperature was -15, that you used a 5 minute exposure, and so on... When comparing this information between subs one can see if there is a mismatch.
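
    If you want to pull those numbers out of a sub yourself, here is a minimal sketch using Python and astropy (assuming the subs are ordinary FITS files; the file name below is just a placeholder):

    from astropy.io import fits
    import numpy as np

    with fits.open("flat_001.fits") as hdul:
        data = hdul[0].data.astype(np.float64)
        header = hdul[0].header

    # 1. Statistics of the sub
    print("min/max: %g / %g" % (data.min(), data.max()))
    print("mean: %g, median: %g, std dev: %g" % (data.mean(), np.median(data), data.std()))

    # 2. Histogram (peak position is enough to spot clipping / offset problems)
    counts, edges = np.histogram(data, bins=256)
    print("histogram peak near ADU %g" % edges[np.argmax(counts)])

    # 3. FITS header data (exposure, gain, temperature, ...)
    print(repr(header))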

    Most important thing - don't do "auto" things. When you let software do something automatically for you, you lose a bit of control. In my view it is best to know exactly what the software is doing if you are going to do that. Collect your darks, your flats and your flat darks like you do lights, and then you can make your own masters the way you want. It also gives you a chance to examine individual subs and see if there is something wrong with them.

    Only when you know that your settings work "manually" and you know what the software is doing automatically (by adjusting parameters / reading about what it does in a certain procedure) - then use the automatic approach to save time.

  6. 2 minutes ago, michael.h.f.wilkinson said:

    As I take it, you are exchanging uncorrelated noise for correlated noise. In the case of integer binning (2x2, 3x3, etc) the noise in the resulting pixels is uncorrelated, because each of the pixels is the result of the sum of different pixels in the input image. However, in the case of fractional binning, each pixel contributes to nine pixels in the output, meaning that if a given pixel has a bit of a noise spike this is spread out over 9 pixels in the surroundings, causing local bias.

    I figured that as well.

    Do you by any chance have an idea whether this correlation is going to affect the results of stacking? Does it change the distribution of the noise in such a way that it lowers the SNR gains from stacking?

    I don't think that local correlation of noise is necessarily a bad thing in astro imaging. It certainly can be when you are doing measurements, but for regular photography I can't see any downsides except the possibility of an altered noise distribution.

  7. 6 minutes ago, vlaiv said:

    1.6, 1.7889, 1.6, 1.7889, ....

    1.7889, 1.6, 1.7889, 1.6, ....

    1.6, 1.7889, 1.6, 1.7889, ....

    Every other pixel will have a different SNR improvement in a checkerboard pattern. There is about half of 1.6 and half of 1.7889 - and I honestly don't know how the SNR improvement will average: will it behave like regular numbers or like noise? If it behaves like noise then the combination will be sqrt(1.6^2 + 1.7889^2) = 2.4, but if it behaves like normal numbers the average will be (1.6 + 1.7889)/2 = 1.69445

    It stands to reason that it will average like regular numbers, so I'm guessing that the total average SNR improvement after stacking will be 1.69445

    This logic is much harder to apply when you have a bin factor like x1.414, because the number of possible combinations of areas that go into resulting pixels is infinite, and each one will have a slightly different SNR improvement - and in the end you have to average those out.

    Probably the easiest way to do it would be to actually implement "regular" fractional binning (that one is easy to measure because it works on a single sub and does not need alignment and stacking afterwards), test it on a bunch of subs with synthetic noise, and measure the standard deviation of the result to see what level of improvement there is for each fractional bin coefficient. Maybe we can derive some sort of rule out of it.

    This is actually not correct, as there will be even a third case where we have a distribution like 2x2, 2x2, 2x2 and 2x2 (the second pixel in the second row of resulting pixels), which will have an SNR improvement of x2.

    I'm doing this off the top of my head, but I think that the resulting SNR improvement in this case will be 3/8 * 1.6 + 3/8 * 1.7889 + 1/4 * 2 = ~1.771, but I can't be certain that I got it right this time (could be wrong again).

    Maybe the best thing to do is to implement it and measure - I will do that tomorrow and post the results.

  8. 27 minutes ago, kens said:

    Yes - I made an arithmetic error. To investigate further I tried to work out the optimal binning ratio that gives the best improvement of SNR over expected.

    Lo and behold the optimum is 1.414 which has an SNR 1.707 vs the expected 1.414. Could that be a clue as to the source of the discrepancy?

    I think it could - no wonder the numbers you got have to do with the square root of two; that does not surprise me (because we work in 2D and area goes as the square).

    1.414 is the square root of 2 (well, rounded down), and 0.707 is 1/sqrt(2).

    We do have to be careful when doing things with arbitrary fractions. There are a couple of reasons why I presented fractional binning the way I did above. Doing an x1.5 bin is beneficial in two ways:

    1. It can be implemented in "split" fashion rather easily (not as easy with an arbitrary fraction - you get a very large number of subs that mostly repeat values) - the alternative is to do actual binning on each sub and increase pixel blur.

    2. It is symmetric.

    I'll expand on this a bit and we can then have a look at another example to see if there is indeed the difference that I believe exists.

    When we do an x1.5 bin like I explained above we get a certain symmetry. If we observe the first 2x2 group of resulting pixels we can see that:

    - the first one contains one whole pixel, two halves of other pixels and a quarter of a fourth pixel

    - the second one will contain the same thing - one remaining half (the other half of the orange group), one full pixel (the next original pixel, or group if you will, in the X direction), 1/4 of the blue group and again half of a pixel in the bottom row.

    - the same goes for the third and fourth pixels - the ingredients will be the same, just the geometrical layout will be different, but the math involved for each will be the same.

    Let's now try a different approach where we lose this sort of symmetry. Let's break up the original pixels into groups of 9 (3x3) and bin these little sub-pixels 4x4.

    If we go in the X direction, the first resulting pixel will consist of pieces of 3x3, 3x1, 1x3 and 1x1 sub-pixels. But the second resulting pixel will consist of a slightly different "configuration" - it will be: 2x3, 2x3, 2x1, 2x1.

    We need to do the math for each of these and see if they add up to the same thing (I'm guessing they don't, but we need to check just in case):

    9^2 + 3^2 + 3^2 + 1^2 = 81 + 9 + 9 + 1 = 100

    6^2 + 6^2 + 2^2 + 2^2 = 36 + 36 + 4 + 4 = 80

    So the first pixel will have an SNR improvement of about 16/10 = 1.6, while the second will have 16/sqrt(80) = ~1.7889.

    Now, it seems a bit strange that different pixels have different SNR improvements - but again this is due to noise correlation, and if you do this on single subs and align those subs afterwards, these SNR improvements will average out.

    But the point is - we can't determine the SNR improvement based on a single resulting pixel if there is an asymmetry like the above. In fact, in the above case we will have something like:

    1.6, 1.7889, 1.6, 1.7889, ....

    1.7889, 1.6, 1.7889, 1.6, ....

    1.6, 1.7889, 1.6, 1.7889, ....

    Every other pixel will have a different SNR improvement in a checkerboard pattern. There is about half of 1.6 and half of 1.7889 - and I honestly don't know how the SNR improvement will average: will it behave like regular numbers or like noise? If it behaves like noise then the combination will be sqrt(1.6^2 + 1.7889^2) = 2.4, but if it behaves like normal numbers the average will be (1.6 + 1.7889)/2 = 1.69445

    It stands to reason that it will average like regular numbers, so I'm guessing that the total average SNR improvement after stacking will be 1.69445

    This logic is much harder to apply when you have a bin factor like x1.414, because the number of possible combinations of areas that go into resulting pixels is infinite, and each one will have a slightly different SNR improvement - and in the end you have to average those out.

    Probably the easiest way to do it would be to actually implement "regular" fractional binning (that one is easy to measure because it works on a single sub and does not need alignment and stacking afterwards), test it on a bunch of subs with synthetic noise, and measure the standard deviation of the result to see what level of improvement there is for each fractional bin coefficient. Maybe we can derive some sort of rule out of it.
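
    As a starting point, here is a minimal sketch of that kind of measurement for the 4x4-on-3x3-sub-pixels case discussed above (a bin factor of 4/3). It measures the noise standard deviation at each output position over many synthetic noise frames, so the per-pixel pattern of SNR improvement can be read off directly. This is just a sketch, not the code eventually used in the thread.

    import numpy as np

    def fractional_bin(img, num, den):
        # Split each pixel into den x den equal sub-pixels, then average
        # num x num blocks of sub-pixels - an overall bin factor of num/den.
        up = np.repeat(np.repeat(img, den, axis=0), den, axis=1)
        h = (up.shape[0] // num) * num
        w = (up.shape[1] // num) * num
        return up[:h, :w].reshape(h // num, num, w // num, num).mean(axis=(1, 3))

    rng = np.random.default_rng(1)
    trials = [fractional_bin(rng.normal(0.0, 1.0, (60, 60)), 4, 3) for _ in range(2000)]
    std_map = np.std(np.stack(trials), axis=0)   # noise std at each output position
    snr_gain = 1.0 / std_map                     # per-position SNR improvement

    # The geometry repeats every few output pixels, so a small block is enough
    # to see the whole pattern of per-pixel SNR improvements.
    print(np.round(snr_gain[0:3, 0:3], 3))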

     

  9. 8 minutes ago, kens said:

    On the principle that "you don't get something for nothing" I would think the discrepancy arises from using upsampled pixels. The signal per upsampled pixel is not the same as if they were real pixels, so when you bin red with orange, green and blue, intuitively you are introducing noise. Otherwise you could upsample 4x before binning 6x6 and get an even better result of 2.0

    I'm not sure it would produce a result of 2.0. Let's do the math and see what happens. Instead of splitting each pixel into 4 smaller ones (upsampling by 2, as you would put it), let's split by 4 and repeat the math.

    This time we will be averaging 36 sub-pixels instead of just 9, right? (a 6x6 grid instead of 3x3). So we need to divide the total noise by 36.

    The total noise in this case will be sqrt( (16*red_noise)^2 + (8*green_noise)^2 + (8*orange_noise)^2 + (4*blue_noise)^2 ) = sqrt( 256*noise^2 + 64*noise^2 + 64*noise^2 + 16*noise^2 ) = sqrt( 400*noise^2 ) = 20*noise

    So the total noise of the average will be noise * 20 / 36, and we need the reciprocal of that to see the increase in SNR, so it will be 36/20 - and that is still x1.8.

    You gain nothing by splitting further.

    It is the respective "areas" of each pixel that add up to form the larger pixel that determine the SNR increase - not the number of splits. I used pixel splitting because it allows me to use the same principle that I outlined before, which does not introduce additional pixel blur.
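
    A quick sketch of that point in numbers: for a split factor s (each pixel divided into s x s sub-pixels, s even), an x1.5 output pixel is built from pieces of s*s, s*(s/2), s*(s/2) and (s/2)*(s/2) sub-pixels, and the resulting SNR gain does not depend on s.

    import math

    for s in (2, 4, 8, 16):
        pieces = [s * s, s * (s // 2), s * (s // 2), (s // 2) * (s // 2)]
        gain = sum(pieces) / math.sqrt(sum(p * p for p in pieces))
        print("split %2dx%-2d -> SNR gain %.3f" % (s, s, gain))   # x1.8 in every case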

    By the way, there is an explanation for this, one that is not immediately obvious. With integer binning we are not introducing correlation between the noise of the resulting pixels.

    When we sum a 2x2 grid of pixels, those 4 pixels contribute to the value of one resulting pixel and no other. Each resulting pixel gets its own unique set of 4 original pixels that make it up. With fractional binning this is not the case.

    Look at the image in my first post (the "color coded" one) - the resulting pixel will contain the whole red group (a single pixel), half of the orange and half of the green group, and only a quarter of the blue group. This means that the green group contributes both to this pixel and to the next one vertically. Orange does the same but contributes to the next pixel to the right, and blue contributes to 4 adjacent resulting pixels.

    This introduces correlation in the noise of those resulting pixels - their noise is no longer independent. This is a concern if we are doing any sort of measurement, but it is not a concern for creating a nice image. In fact, if we use any interpolation algorithm when aligning our subs for stacking (other than plain shift-and-add) - which implies sub-pixel alignment of frames - we are introducing both signal and noise correlation in the pixels of the aligned subs. It does not hurt the final image.

  10. 1 hour ago, pete_l said:

    There are no negative values for noise. Noise is a random additive component to the desired signal.

    Yes, there are. Even with Poisson-type noise.

    Consider the following example: we have a barber shop and people go in to have their hair cut. On average 2.4 people come into the barber shop per hour. There will be hours when no one enters, and there will be hours when 4 people appear. The signal in this case is 2.4, and the measured value per hour can be 0 or 1 or 2 or 3 or 4...

    If no one appears, the error for that hour will be -2.4, i.e. measured_value - true_value. If 4 people walk in during a particular hour, the error will be 4 - 2.4 = 1.6.

    The same is true for photons - there is a certain brightness from a target, expressed as a number of photons per time interval - that is our signal value. The measured value will be either less or more than this. Errors can be both negative and positive.

    Noise, on the other hand, is the average error value, or it can be thought of as the likelihood that the measured value is close to the true value. It has a magnitude and that magnitude is always positive - like a vector, it has direction and size, and we consider the size to be always positive.

    1 hour ago, pete_l said:

    Once you have read an image off your sensor, its parameters are fixed. There is a component of the final ADU value of each pixel which will be signal and another that will be the sum of all the various noise sources. Binning does not reduce the level of noise as your experiment above shows (the average noise level is still 25 counts).

    Binning reduces the error and hence the spread of errors - the average error value, i.e. the noise. The signal is still 25 counts, but the deviation from that signal - the average error - goes down when you bin.

    Binning is nothing more than stacking images - just as stacking improves the signal to noise ratio, so does binning. The same thing happens when you average four subs - you in fact average four "overlapping" pixels; with binning you average four adjacent pixels, and provided you are oversampling, those 4 pixels will have roughly the same signal, the same as in four subs.

    2 hours ago, pete_l said:

    There isn't any more data. The total number of pixels is the same - just allocated to 4 different subs. So I cannot see how the SNR can be improved to a better ratio than ordinary binning would produce.

    There is no more data, and splitting the original sub into 4 subs is the same as binning - except for one thing: pixel blur. Pixel blur is a consequence of pixels having a finite size - the larger the size, the more blur is introduced. Sampling should be done with infinitesimally small points, but in reality it is not, and because of this there is a bit of blur. When you add 4 adjacent pixels, you effectively get the same result as using a larger pixel. With splitting you lower the sampling rate (larger spacing between sampling points) but keep the pixel size the same. Each of these 4 subs will be slightly shifted in the alignment phase, but there will be no enlargement of the pixels and no increase in pixel blur.

  11. 1 hour ago, pete_l said:

    I think that only holds true in a continuous system. With discrete data, such as that from an imaging sensor, noise adds arithmetically.

    An example of a continuous system would be where exposure time of a single image is increased. Then the noise component (being random) of two images would change with the square root of the exposure times. While the signal (being deterministic) with the simple ratio of them. However, once the image data is sampled, that becomes a discrete system with unchanging noise / signal parameters.


    The insert below is a snap from a spreadsheet which attempts to be a numerical model. Consider the 6x6 array of pixels labelled 'Signal'. This can be thought of as a single "image" of a small object that has a centre in the outlined square, with a signal value of 17. The pale yellow squares around it suggest what might be seen, compared with a darker background.
    I have added some random noise values. So with both the signal and noise data for each individual pixel there are some SNR values.

    Of course in practice it is impossible to separate the signal data from the noise data for each individual pixel, so all of this is strictly theoretical ;)

    snr-spreadsheet.png.a7d0647542264a995c39e6bc14887250.png

    I don't think this is a very good way to do things. Your noise contains signal in it to start with. Where are the negative values?

    Noise can't be characterized by a single number being the error from the true value. That is just error, not noise. Noise has a magnitude and is related to the distribution of samples that represent true_value + error, or, if you remove true_value, the distribution of the errors.

    Having noise of 1 does not mean that the error is 1 - it means that (depending on the type of distribution of the random variable - let's take Gaussian) 68% of errors will be in the range -1 to 1, 95% of all errors will be in the -2 to 2 range, and 99.7% of all errors will be in the -3 to 3 range. For noise of magnitude 1 there is, for example, a 0.15% probability that the error will be larger than 3 - but it can happen.

    If you have a pixel with a value of 17, you can equally assign an error of -1 or +1 to it, or -2 or 2, or even -5 and 5 (it can happen, but with low probability). Your errors show bias - they are all positive values - so they don't have the proper distribution to be considered noise (although they might be random).

    If you want to create such a spreadsheet and do a "micro case", you need to randomly produce numbers that have the correct distribution. But I think it is better to operate on synthetic images rather than on cells - easier to process.

    Here is an example:

    image.png.35d02a3f0c8dcc95e180729e840b2b75.png

    This is an image with a Poisson noise distribution for a uniform light signal of value 25. Next to it is a measurement of the image - it shows that the mean value is in fact 25 and the standard deviation is 5 (which is the square root of the signal).

    Here is the same image binned 2x2, with measurement:

    image.png.50770ab0c3686b8cc9080c46b904c975.png

    Now you see that the signal remained the same but the standard deviation in fact dropped by a factor of 2.

    SNR has improved by a factor of x2.

    Here it is now binned x3:

    image.png.86f5c69cad7e914dc4fa99b910c22d5a.png

    There is no question about it - binning works, and it works as it should. You should not think about errors being the noise - you need to think about distributions being the noise, having a magnitude like a vector rather than being a plain old number.
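
    Here is a minimal sketch of that same demonstration with synthetic data, for anyone who wants to repeat it: a uniform signal of 25 with Poisson noise, measured before and after software binning (average of adjacent pixel groups).

    import numpy as np

    def bin_average(frame, factor):
        # Average factor x factor groups of adjacent pixels (software binning).
        h = (frame.shape[0] // factor) * factor
        w = (frame.shape[1] // factor) * factor
        return frame[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

    rng = np.random.default_rng(0)
    img = rng.poisson(lam=25.0, size=(1024, 1024)).astype(np.float64)

    for factor in (1, 2, 3):
        b = img if factor == 1 else bin_average(img, factor)
        print("bin x%d: mean = %.3f, std = %.3f" % (factor, b.mean(), b.std()))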

    Here is an example: let's do a 2x2 pixel case by hand - with noise that we invented, but we will try at least to honor the distribution, in that errors can be both positive and negative.

    Let's say that the base number / true value is 25 for all 4 pixels, but we don't know that; we simply have:

    25, 26

    23, 26

    Errors in this case are:

    0, 1

    -2, 1

    What would the average error be? A simple average would in this case be 0, and that would imply that on average there is no error in the values - but there is. This is why we don't use a simple average for errors, but rather something like the RMS used for sine waves - root mean square.

    In the above example our average error will be sqrt((1^2 + 1^2 + 0^2 + (-2)^2)/4) = sqrt(6/4) = ~1.225

    This means that our error is on average a displacement of 1.225 from the true value (either positive or negative).

    Let's average our pixels and see how much error there is then: (25+23+26+26) / 4 = 25

    The error is reduced - the result is now closer to the true value. That is to be expected: if you average some positive and some negative values, the result will be closer to zero (you see now why having all positive values did not produce good results for you?).

     

  12. 3 hours ago, saac said:

    Vlaiv I don't think it is as indeterminable as it may appear. Treating it as a simple adiabatic expansion will get you pretty close to the value in practice - nozzle considerations accepting (choking, throat velocity).  Nor is your target of -50 Celsius overly ambitious; for example a common CO2 fire extinguisher (55 bar , 3 litres) will easily get down to that range.  Discharge a CO2 fire extinguisher through a close fibre cloth and you will be able to collect lumps of dry ice at -78 Celsius - the horn of the extinguisher will itself drop by as much as 50 Celsius if the extinguisher is fully discharged. 

    Out of interest what is your plan, are you looking to build the setup yourself - interesting project, good luck with it. 

    Jim 

    Yes - I thought of building one unit myself. Nothing much to it the way I envisioned it.

    It will consist of an old refrigerator - just the casing and insulation, no need for the other components, so it can be a broken unit. I'll also need a regular refrigerator and a deep freezer to prepare and store the goods. The first step would be to take fruits / vegetables, spread them onto trays and cool them to regular refrigeration temperature - 5C or so. After that I load them into the "fast freeze" unit. That is the old fridge casing with some modifications:

    - an inlet that can be connected to a high pressure tank - with a nozzle

    - maybe a safety valve in case pressure builds up (though on second thought it might even create lower pressure inside: if cool air starts mixing with the hot air inside and there is a drop in overall temperature at closed, constant volume, the pressure will go down - not sure if that will offset the added air)

    - a pressure gauge and thermometer - to monitor what is going on :D

    After the release of the compressed air, if the temperature drops enough, it should then start rising again. If the insulation is good, maybe it will provide a cold enough environment for the deep freeze to be completed (I have no clue how long it will take - maybe 15 minutes or so?).

    The next step is using regular freezer bags to package the goods and storing them in a regular freezer - ready to be used in the winter months :D

    The whole procedure will take maybe less than an hour, so I can do multiple rounds and store enough goods without much expense (except for the electricity bill of the compressor and refrigeration), but like I said - it's more about producing and storing my own food than any economic gain.

  13. Fractional binning is not widely available as a data processing technique, so I'm working on algorithms and software for it, as I believe it will be a beneficial tool both for getting a proper sampling rate and for enhancing SNR.

    In doing so, I came across this interesting fact that I would like to share and discuss.

    It started with a question - if regular binning x2 improves SNR by x2 and binning x3 improves SNR by x3 and so forth, how much does fractional binning improve SNR?

    Let's say, for argument's sake, that we are going to bin x1.5 - how much SNR improvement are we going to get? It sort of stands to reason that it will be x1.5. In fact, that is not the case!

    I'm going to present a way of doing fractional binning that I'm considering, and I will derive the SNR improvement in the particular case of x1.5 binning - because it's easy to do so.

    First I'm going to mention one way of thinking about regular binning, and a feature of binning that this approach improves upon. The binning method discussed here is software binning - not hardware binning.

    In regular x2 binning we are in fact adding / averaging a 2x2 group of adjacent pixels. The following diagram explains it:

    image.png.673197279466771ee7f070d462aa1bbb.png

    We take the signal from a 2x2 group of pixels, add it up and store it in one pixel, then do the same for the next 2x2 group of pixels. This leads to the result being the same as if we had used a larger pixel (in fact x4 larger by area, x2 along each axis). SNR is improved, the sampling rate is halved, and another thing happens - we increase pixel blur because we in effect use a "larger" pixel. Because of this there is a slight drop in detail (very slight) that is not due to the lower sampling rate.

    There is a way to do the same process that circumvents the issue of pixel blur. I will create a diagram for that as well to help explain it (it is the basis for fractional binning, so it's worth understanding):

    image.png.f7c89353eb5c1c063b6861b4d853bdd0.png

    OK, this diagram might not be drawn in the clearest way - but if you follow the lines you should be able to see what is going on. I'll also explain it in words:

    We in fact split the image into 4 sub-images. We do this by taking every 2x2 group of pixels and sending each pixel in that group to a different sub-image - always in the same order. We can see the following: samples are evenly spaced in each sub-image (every two pixels in the X and Y direction of the original image), and the sampling rate has changed by a factor of 2 - the same as with regular binning, we have an x2 lower sampling rate. Pixel size is not altered and values are not altered in any way - we keep the same pixel blur and don't increase it. We end up with 4 subs in place of one sub - we have x4 more data to stack, and as we know, if we stack x4 more data we end up with an SNR increase of x2.

    This approach does not improve the SNR of an individual sub, but it does improve the SNR of the whole stack in the same way that bin x2 improves an individual sub - with the exception of pixel blur.
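
    In code, this "split" version of x2 binning is just slicing - a minimal sketch (not the actual implementation being written for this thread):

    import numpy as np

    def split_bin2(sub):
        # One sub becomes four half-resolution subs, one per position in each
        # 2x2 group. Pixel values and pixel size (pixel blur) are untouched.
        return [
            sub[0::2, 0::2],   # top-left pixel of every 2x2 group
            sub[0::2, 1::2],   # top-right
            sub[1::2, 0::2],   # bottom-left
            sub[1::2, 1::2],   # bottom-right
        ]

    rng = np.random.default_rng(0)
    sub = rng.normal(100.0, 10.0, (1024, 1024))
    quarters = split_bin2(sub)
    print(len(quarters), quarters[0].shape)   # 4 subs at half the sampling rate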

    Now let's see what that has to do with fractional binning. Here is another diagram (hopefully a bit easier to understand - I'll throw in some color to help out):

    image.png.1dd0d9ce4b7affd4890e51f1bbbd161c.png

    Don't confuse the color with a Bayer matrix or anything like that - we are still talking about a mono sensor; the color just represents "grouping" of things. The black grid represents the original pixels. Each color represents how we "fictionally" split each pixel - in this case each pixel is split into a 2x2 grid of smaller pixels, each having exactly the same value. I stress again that this is a fictional split - we don't actually have smaller pixels or anything.

    If we want to do fractional binning x1.5, we will in fact sum / average the outlined groups of "fictional" pixels. Each purple outline will be spaced at a distance 1.5 times the original pixel size - so we have the appropriate reduction in sampling rate. In reality the algorithm will work by splitting subs like above - again to avoid pixel blur - but for this discussion we don't need that. We need to see how much SNR improvement there will be.

    Let's take the case of averaging and examine what happens to the noise. The signal (by assumption) is the same across the pixels involved, so in the average it stays the same. The reduction in noise will in fact be the improvement in SNR.

    We will be averaging 9 "sub-pixels", so the expression will be:

    (total noise of summed pixels) / 9

    What is the total noise of the summed pixels? Noise adds as the square root of the sum of squares. But we have to be careful here: this formula works for independent noise components. While the noise in the first pixel (red group) is 100% independent of the noise in the other pixels (orange, green and blue groups), it is in fact 100% dependent within a group itself - and adds like regular numbers. It is 100% dependent because we just copy the value of that pixel four times.

    So the expression for the total noise will be:

    square_root((4*red_noise)^2 + (2*orange_noise)^2 + (2*green_noise)^2 + (blue_noise)^2)

    or

    sqrt( 16*red_noise^2 + 4*orange_noise^2 + 4*green_noise^2 + blue_noise^2)

    Because we assume that the signal is the same over those 4 original pixels, the noise magnitude will be the same - so although red_noise, orange_noise, green_noise and blue_noise are not the same values in a vector sense, they do have the same magnitude and we can just replace each of them with noise at this stage.

    sqrt(16*noise^2 + 4*noise^2 + 4*noise^2 + noise^2) = sqrt(25 * noise^2) = 5*sqrt(noise^2) = 5*noise.

    When we average the above sub-pixels we end up with the noise being 5*noise / 9, so the SNR improvement will be 9/5 = x1.8

    That is a very interesting fact - we assumed that it would be x1.5, but the calculation shows that it is x1.8.

    Now, either I made a mistake in the calculation above or this is in fact true.
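
    For whoever wants to check, here is a quick numerical sketch of this particular case: the analytic figure from the sub-pixel weights above, plus a brute-force measurement on synthetic Gaussian noise binned with the "fictional split" (each pixel split 2x2, then 3x3 blocks of sub-pixels averaged).

    import numpy as np

    weights = np.array([4, 2, 2, 1])   # whole pixel, two halves, one quarter (in sub-pixels)
    print("analytic SNR gain:", weights.sum() / np.sqrt((weights ** 2).sum()))   # 9/5

    rng = np.random.default_rng(0)
    noise = rng.normal(0.0, 1.0, (900, 900))                  # unit-sigma noise
    up = np.repeat(np.repeat(noise, 2, axis=0), 2, axis=1)    # 2x2 "fictional" split
    binned = up.reshape(600, 3, 600, 3).mean(axis=(1, 3))     # average 3x3 sub-pixel blocks
    print("measured SNR gain:", 1.0 / binned.std())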

    Any thoughts?

     

  14. Thanks all for input on the topic.

    Luckily, I'm not trying to be very precise about all of that - I just wanted an approximation to see if there would be cooling with about a 40K delta T.

    From the above two approaches - energy conservation / an isothermal process, and the second one, adiabatic expansion - I think a safe bet is that there will be some cooling in the range of 8K to 200K delta T :D (although I'm well aware that the second figure is very unlikely).

    It did show, however, that this topic is rather complicated and that there is no single formula that can be used to predict what will happen with good accuracy - it actually needs a more involved simulation to get close. I think it is easier to just test it.

    For all interested, here is what I had in mind:

    I'll probably be moving to a new house (yet to be built - currently in the phase of design and getting the required permits) on a piece of land that I purchased recently (yes, there will be an obsy too :D) - and there is a lot of fruit there; that area is known for fruit production. Although you can buy frozen fruits and vegetables in the supermarket at decent prices, and there is no real commercial incentive to do this sort of thing, I fancied a way to do quick / deep freezing at home that does not involve expensive equipment or dangerous materials (like handling liquid nitrogen and such) - so I came up with the above idea: use compressed air to quickly lower the temperature. A grow-your-own-food kind of approach (and store it for the winter). I had no idea if it would actually work, but recently I stumbled across these:

    (I'm posting links to commercial items - but have no intention of promoting them in any way. If it is deemed against SGL policy - please remove them)

    https://www.minidive.com/en/

    And also a kick starter project that seems to offer similar thing:

    Both of them have 0.5L bottles (that can be filled with a hand pump up to 200 atm of pressure). That is why I used those figures in the above calculations to see if it would work.

    In any case, I fancy the idea of small scuba units which I can use on holidays, and if I could use them for the above as well, that would be nice (I have not found small pressure tanks anywhere else).

  15. OK, this is deeply confusing (as most physics is if one does not know it :D)

    Here is another approach - adiabatic expansion (probably closer to what will happen):

    Taken from this page:

    https://phys.libretexts.org/Bookshelves/University_Physics/Book%3A_University_Physics_(OpenStax)/Map%3A_University_Physics_II_-_Thermodynamics%2C_Electricity%2C_and_Magnetism_(OpenStax)/3%3A_The_First_Law_of_Thermodynamics/3.6%3A_Adiabatic_Processes_for_an_Ideal_Gas

    We have the following:

    image.png.c6360d0ccba694aa947d639cf0f9df43.png

    and hence this:

    image.png.c529b160fae166473b850034aaacad7c.png

    If we have 200 atm of pressure and we quickly release the gas to ambient pressure (1 atm), we will in fact get about 17 liters of gas out of the 0.5 liter container (Cp/Cv for air ranges from 1.4 up to about 1.58 at 100 atm, so I took it to be 1.5 as an approximation).

    And according to the ideal gas law it will be at a temperature of about 50K!

    That sort of makes sense: with PV = nRT at the same pressure, if the temperature is about five times lower, so is the volume (roughly 17 liters instead of the 100 liters from before).
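
    Plugged into numbers (a quick sketch using the values assumed above - 0.5 L at 200 atm and 5C released to 1 atm, with gamma taken as 1.5):

    gamma = 1.5
    p1, v1, t1 = 200.0, 0.5, 278.15                 # atm, litres, kelvin
    p2 = 1.0

    v2 = v1 * (p1 / p2) ** (1.0 / gamma)            # from p * V^gamma = const
    t2 = t1 * (p2 / p1) ** ((gamma - 1.0) / gamma)  # from T * p^((1-gamma)/gamma) = const

    print("expanded volume: %.1f L" % v2)           # ~17 L
    print("final temperature: %.0f K" % t2)         # ~50 K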

    That is a seriously different answer from the one above.

  16. The best I could think of right now is to do a really crude approximation.

    The problem is that every quantity depends on a bunch of other quantities, and one ends up with really complicated differential equations.

    The basis of my crude approximation would be:

    - What sort of energy do I need to compress 100 liters of air to 200 atmospheres?

    - What is the heat capacity of air?

    These of course are not constants (which makes things complicated), but let's do some approximations to at least see what order of magnitude we get as an answer.

    If we assume isothermal compression, then the work done is:

    image.png.3dab395123bdf848c8a7fb7166ba4736.png

    Let's say that we compress at 300 kelvin, and the specific R of air is 287.058 J/kgK. 100L contains about 0.1276kg of air. The work done is ~58 kJ.

    (Not much energy needed in fact).
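
    The same estimate in a couple of lines, using the numbers above (and assuming the standard expression for the isothermal compression work of an ideal gas, W = m * R_specific * T * ln(p2/p1)):

    import math

    m = 0.1276        # kg of air in 100 L (value used above)
    r = 287.058       # J/(kg K), specific gas constant of air
    t = 300.0         # K, compression assumed isothermal at this temperature
    p_ratio = 200.0   # compressing from 1 atm to 200 atm

    work = m * r * t * math.log(p_ratio)
    print("isothermal compression work: %.0f kJ" % (work / 1000.0))   # ~58 kJ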

    Now I'm just going to assume that the heat capacity of air is something like 1 kJ/kgK. This is a really complex topic, but I've found some tables, and this value ranges from 0.7 to 1.5 depending on what you keep constant (constant volume or constant pressure) and what you vary. I reckon 1 kJ/kgK is as good an approximation as any in this case.

    So we have about 60 kJ of energy, we have 0.1276 kg of air, and we need to see how many kelvin we can get out of that :D

    60 * 0.1276 / 1 = about 7.6K?

    Not sure if this approximation is any good, but it suggests that this sort of thing will not get me the delta T that I'm after.

     

  17. 1 minute ago, KevS said:

    Vlad, try the following - it might be right; memory at my age is problematical.

    Can't remember the derivation off the top of my head, but will dig it out if required:

    T(2) = P(2) V(2) T(1) / (P(1) V(1))

    K

    Tried that one - it is for an ideal gas, which does not exhibit that behavior.

    Both @andrew s and I posted the "right solution" for this problem above. Well, I still don't have a solution, but at least I know that it can't be derived from the ideal gas law.

  18. I'm a bit stuck and would appreciate some help on this one.

    I don't know much about this part of physics and, as a consequence, I found myself rather confused :D

    While contemplating a device that would flash freeze fruits and vegetables, I came up with the following idea: take an old fridge (just as an insulated container - it won't do any cooling), put a one atmosphere pressure release valve (or a bit above one atmosphere) on one side of it, and an inlet on the other side.

    Take a 0.5L gas canister capable of storing 200 atm of pressure and fill it up. Let it cool to some temperature like 5C. Connect it to the inlet and release the gas. It should expand into the fridge, and the release valve should open up to let out the (ambient temperature) air that was inside.

    Common sense says that there will be a drop in temperature when air is released from the canister. I'm trying to see (roughly) what sort of temperature difference I will get with this procedure - hoping for about a 40C delta T.

    However, when I try to fiddle around with the ideal gas law I get exactly 0 delta T :D - and that is what is confusing me. I'm probably not doing something right, but I can't figure out where I'm making the mistake. Here is what happens:

    P1 * V1 / T1 = P2 * V2 / T2

    As far as I understand, if I fill a 0.5L canister with air up to 200 atm of pressure, it will contain 100L of air (measured at ambient pressure).

    This means that 0.5 * 200 / T1 = 100 * 1 / T2

    Or T1 = T2 - there will be no change in temperature when I release the gas from the canister?

    I can see that there is likely an error in my reasoning that a 0.5L canister will contain 100L of air at 200 atmospheres - but how do I go about calculating the temperature drop?

  19. Darks are fairly simple to do.

    You need the same settings as lights for them to work, and that means:

    1. Same gain / ISO / offset (depending on whether your camera is a DSLR or a dedicated astro camera)

    2. Same exposure length

    3. Same temperature

    The only difference is that you have to block all light coming into the scope. You can take them while the camera is attached to the scope - by covering the scope with the lens cap - or you can take them with the camera detached from the telescope by putting the camera cover on. Be careful about any sort of light leak - even in the IR part of the spectrum (plastic can be somewhat transparent to IR wavelengths).

    In case your camera does not have set-point temperature control (i.e. is not cooled), try to take darks at approximately the same temperature as your image subs (lights). If your camera has stable bias and is not cooled, you can use something called dark optimization (there is an option in DSS for that).

    Flats are done with the camera on the scope and the focus position roughly the same as when doing light subs. You need either a flat panel or a flat, evenly illuminated white surface in front of the scope. You can even use the sky when it is brighter (not during the night, but early in the morning or early in the evening when the sun is not out but it is still bright enough) - some people put a white T-shirt (or other cloth) over the aperture to diffuse the sky light.

    You need to be careful not to overexpose when doing flats. The histogram peak needs to be at about 75-80% of full value.

    Take flat darks as well; the rule for those is the same as for regular darks, except that they need to match your flats - so same exposure as the flats, same settings as the flats...

  20. There is another way to go about this - doing a real test on this scope. It's going to be a bit involved, but we will get detail on that particular sample.

    One test that can be done by owners themselves, which is a bit involved but not out of reach of amateurs, is Roddier analysis. If you have a planetary camera and a bit of planetary imaging skill, you can do it.
