
M13 RGB


Rodd

Recommended Posts

1 minute ago, Rodd said:

Would you be able to show how this manifests? Offhand I want to say that read noise is a tiny part of the whole story, though that may not be true. Any way to show the difference between 1.6 and 5.1, say?

If one is careful to match exposure length to the read noise of the camera versus other conditions - then read noise is really a non-issue.

Read noise becomes important only if it is significantly large compared to other noise sources. If it is small compared to other noise sources - it makes very little difference.

Here is an example.

Say you have a camera with 1.7e read noise. You image under light pollution that gives you something like 0.5e of signal per second per pixel. You image for 100s per exposure.

In a 100s exposure you'll get 50e of signal from light pollution. That produces ~7.0711e of noise (shot noise, sqrt(50)) just from the light pollution.

Noises add like the square root of the sum of squares. This means that the "total" noise (here meaning just read noise and LP noise - as we simplified things to demonstrate) will be:

sqrt (1.7^2 + 7.0711^2) = sqrt(2.89 + 50) = sqrt(52.89) = ~7.2726

That is an increase in noise of 2.85% - you won't be able to tell a difference in noise of less than about 10% by eye, so read noise makes very little difference in a regular exposure. Now let's see what happens with a binned exposure:

With the binned exposure we have 200e of LP signal instead of 50e (we added 4 pixels together, so the LP signal is 4 times as strong) and the read noise is now 3.4e (read noise adds in quadrature over the 4 pixels: sqrt(4) x 1.7e = 3.4e) - let's do the same math.

LP noise without read noise is sqrt(200) = ~14.1421e

Read noise is 3.4e

These two noises added: sqrt(3.4^2 + 14.1421^2) = sqrt(11.56 + 200) = sqrt(211.56) = ~14.5451e

Increase is now (14.5451 - 14.1421) / 14.1421 = 0.0285 = 2.85%

The same! Although read noise increases by a factor of 2, so does LP noise (since LP signal increases by a factor of 4) - the percentage increase remains the same - minimal.

If you determine exposure length based on the read noise of a single pixel - that exposure length remains valid for the binned version as well, regardless of the fact that read noise increases.

You just need to be careful about the thinking "Oh, I'm binning - I have more sensitive pixels - I don't need as much exposure time as before." That is wrong for CMOS sensors - keep the exposure length the same.
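The arithmetic above is easy to check in a few lines (a Python sketch; the camera figures are just the ones from the worked example, not measurements):

```python
import math

def noise_increase_pct(read_noise_e, lp_signal_e):
    """Percent increase of total noise over LP shot noise alone.
    Noises add in quadrature (square root of sum of squares)."""
    lp_noise = math.sqrt(lp_signal_e)             # shot noise of the LP signal
    total = math.sqrt(read_noise_e**2 + lp_noise**2)
    return (total - lp_noise) / lp_noise * 100

# Single pixel: 1.7e read noise, 0.5 e/s/px of LP for 100s -> 50e
single = noise_increase_pct(1.7, 50)
# Binned 2x2: read noise doubles (sqrt(4) x 1.7 = 3.4e), LP signal x4
binned = noise_increase_pct(3.4, 200)

print(round(single, 2), round(binned, 2))  # 2.85 2.85 - identical increase
```

Since both read noise and LP noise scale by the same factor of 2, the ratio (and hence the percentage) is unchanged - which is the point of the example.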


11 minutes ago, vlaiv said:

keep exposure length the same.

Perhaps not collect as many of them? (This is for people who are binning primarily as a means to reduce total exposure time, as opposed to those who bin to try and pick up ever fainter detail, like IFN.)


10 minutes ago, Rodd said:

Not if the big pixel was an individual, non-binned pixel. Say a camera with 24 um pixels or something. That is what I am trying to get at. If the pixel is truly a single pixel with 100,000 FWC - it will not clip until it surpasses 100,000. If the single pixel is really a collection of binned pixels - it can "miss out on data" that was clipped from individual pixels in the binned association....no?

Again - that is correct - but very unlikely to matter in practice.

If you have stars that saturate single pixels - there might be a couple of hundred of them in an image - mostly in star cores.

The majority of those will be much "stronger" than the FWC of a single pixel - and would saturate even the large pixel. Perhaps only 10% of those - so maybe 10 pixels per whole image - will come out differently between a large pixel and binned small pixels.

In other words - in an image containing a million pixels, binning produces a different result from having larger pixels for only a dozen or so - that is the definition of very unlikely - a dozen per million.


4 minutes ago, Rodd said:

Perhaps not collect as many of them? (This is for people who are binning primarily as a means to reduce total exposure time, as opposed to those who bin to try and pick up ever fainter detail, like IFN.)

Well, my emphasis on binning is to hit the optimal sampling rate and recover SNR that you would otherwise lose by over sampling.

That means leave imaging time as is - just don't over sample and you'll get better SNR - for free.


Oh wow, so much to read. I've been out all day today, so only now getting to this. Thanks @vlaiv for answering my questions, but on first reading of your reply I am still very confused. I did pick up one thing very quickly....

8 hours ago, vlaiv said:

If you work with 32bit float point numbers (and you should), well, there is also max value there, but it is something like ≈ 3.4028235 × 10^38 and you are not likely to produce it even if you add all the pixels in the image.

Doh, there I am using max 65k ADU for saturation at 16 bit, when in fact I'm processing at 32 bit float.... stupid me

I need to spend much more time reading the discussion points in this thread to see if I can understand it any better :icon_confused:

Again thanks everyone for their contributions.


@vlaiv and others, as promised I've re-read this thread a few times and am gradually becoming more comfortable with the terminology around resampling and binning. I had a play in my ImagesPlus astroprocessing software and took a look at some tools in there that I'd never used, as I'd not understood them. One of them is a Resize Files tool, and from looking at that I now see that it is a resample tool that, amongst other things, allows an image to be resampled by a fixed factor including decimals, not just integers, and values below 1 (so downsampling?). As you can see from the below image, the resize tool offers Bilinear, Bicubic and Nearest Neighbour resample options, with Bilinear being the default.

As an experiment I applied a resize factor of 2 (x2 bilinear) to an unprocessed stack of 2hrs of lum subs of the M90 galaxy that was captured at bin 2x2 in camera with my QSI583 CCD, due to oversampling with that camera and my C14 combo. I then binned that resampled stack 2x2 in ImagesPlus (using the Add operation), which got me back to an image of the same dimensions as the original, but now binned 2x2, so a brighter starting point. Next I applied a quick ArcSinH stretch (with an nth power X^n prestretch to darken the background). Interestingly, I found it much easier to reveal the fainter details in the image, including the outer galaxy arms, so I think this may be a useful way of processing data in future. I need to experiment more with that though.

image.thumb.png.f259194ba092f716db890757a8942f56.png

Please note that if I did not resize by x2 before binning 2x2 in software, the stars started to become blocky, but by first resizing, then binning, their roundness (I know that they are not very circular to start with) was preserved. I'd be very interested to have your observations on what I did and my initial findings.

Many thanks,


@geoflewis

The procedure you applied - bilinear up sampling x2 and then binning x2 (thus down sampling back to the original resolution) - has the same effect as simple blurring.

Let me explain. The first part, up sampling, adds no detail. The second part, binning, does not work as you would expect - because you no longer have genuine data, but data produced by interpolation. Only genuine data that obeys a certain noise distribution benefits from binning. Otherwise we'd have a free lunch - up sample, bin/down sample, rinse/repeat - free SNR improvement :D - but it does not work like that.

The only thing you did was introduce pixel-to-pixel correlation. Here, let's look at a 1D scenario to understand what is happening. Let's just look at 2 pixels - A and B.

When we up sample x2 - we "insert" another sample between A and B

We will have A, X, B

Depending on the interpolation used, X is calculated differently. You used bilinear (or, in 1D, simple linear) interpolation - and since X is midway between A and B, it is the simple average of the two. (In general it would be c*A + (1-c)*B, where c is the normalized position between 0 and 1 - 0 being all the way at A and 1 all the way at B. At the midpoint we get 0.5*A + (1 - 0.5)*B = 0.5*A + 0.5*B = (A+B)/2.)

That means we have A, (A+B)/2, B

Now we bin that data (with sum, for example) - we bin the first two up sampled pixels, so our new pixel will be:

A + (A+B)/2, B + (B+C)/2, ...

= 1.5*A + 0.5*B, 1.5*B + 0.5*C, ...

You simply mixed values of A and B into the first pixel, values of B and C into the second pixel, and so on ...

This is pixel-to-pixel correlation, and that:

1) reduces noise - noise is random, but pixel values now stop being fully random: the first binned pixel contains elements of both the first and second original pixels, so it is no longer truly random - it depends on the "external" value of the second pixel to some degree

2) introduces blur. Blur is loss of sharpness. If A is very high in value and B is very low, and you mix the two - you get a value in between - a "smoother" value - contrast has been reduced.
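A minimal 1D numpy sketch of this (pure noise, linear up sample then 2x sum binning, following the A, (A+B)/2, B walkthrough above):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 100_000)        # genuine data: independent noise, std = 1

# Up sample x2 (linear): insert the midpoint (A+B)/2 between every A and B
up = np.empty(2 * a.size - 1)
up[0::2] = a
up[1::2] = (a[:-1] + a[1:]) / 2

# Bin x2 with sum: first binned pixel = A + (A+B)/2 = 1.5*A + 0.5*B, etc.
fake_bin = up[0:-1:2] + up[1::2]

# Genuine 2x sum binning of the original, independent samples
true_bin = a[0:-1:2] + a[1::2]

# Relative noise per unit of signal (signal doubled in both cases)
print(np.std(true_bin) / 2)   # ~0.707 = 1/sqrt(2): the real SNR gain
print(np.std(fake_bin) / 2)   # ~0.79: less gain than genuine binning
r = np.corrcoef(fake_bin[:-1], fake_bin[1:])[0, 1]
print(r)                      # ~0.3: pixel-to-pixel correlation = blur
```

The "binned" upsampled data is less noisy than the original but noisier than genuinely binned data, and its neighbouring pixels are now correlated - exactly the smoothing described above, not free SNR.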

1 hour ago, geoflewis said:

Please note that I found that if I did not resize by x2 before binning 2x2 in software, I noticed that the stars started to become blocky

How so? Can you show this effect?

Stars in the image can't become blocky. In the worst case, a star can be reduced to a single sample - a single point in the image - what we think of as a single pixel. That point has no dimensions - it is a sample, and thus can be neither round nor square.

It is only when we enlarge the image by up sampling, to make it easier to see things, that we see that single sample as some geometric shape. The actual shape depends on the resampling method used.

1. Nearest neighbor will produce a square

image.png.3dd3d72ba6c43544f30deabd869c3181.png

(this is single sample enlarged by x20 using nearest neighbor)

2. Bilinear will produce a diamond

image.png.614efa964a1270a628220ce07e55fbd8.png

3. Bicubic and others will produce something diamond-shaped with a bit of ringing

image.png.49df029b2e46f522b365591e10a0f5cb.png

(cubic convolution image)

4. Lanczos will produce ringing - similar to the Airy pattern

image.png.05d1e85b04238e43590515e63cbbca2e.png

(this is actually Quintic B-spline, but Lanczos would be similar)

It is actually not strange for high-quality resampling methods to produce ringing that resembles the Airy pattern - that is what happens when you remove high frequency components, be it in software or with an actual physical device like a telescope - both have a limiting resolution.
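This is easy to reproduce yourself: up sample a single bright sample with different methods and look at the result (a sketch using scipy's spline interpolation; orders 0/1/3 stand in for nearest neighbor / bilinear / bicubic):

```python
import numpy as np
from scipy.ndimage import zoom

img = np.zeros((9, 9))
img[4, 4] = 1.0                    # one bright sample - no shape of its own

nearest = zoom(img, 20, order=0)   # enlarges to a square block
linear = zoom(img, 20, order=1)    # a "tent" - diamond-shaped in 2D
cubic = zoom(img, 20, order=3)     # diamond-ish, with ringing (negative lobes)

print(nearest.max(), linear.min(), cubic.min())
# nearest and linear never undershoot; the cubic result dips below zero - ringing
```

The negative values in the cubic result are the ringing visible around the star in the screenshots above; nearest neighbor and linear interpolation cannot produce them.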

 


47 minutes ago, vlaiv said:

How so? Can you show this effect?

Stars in the image can't become blocky. In the worst case, a star can be reduced to a single sample - a single point in the image - what we think of as a single pixel. That point has no dimensions - it is a sample, and thus can be neither round nor square.

Thanks again Vlaiv, so clearly I still don't understand this. I will keep trying, but it is difficult over forum chat.

As requested, below is a screen grab of the same region displayed 3 ways. The top left is the original (not resampled) image binned 2x2 (add) and zoomed 400% to show the blocky, pixelated stars - can you see it? The bottom left image is the resampled x2 (upsampled) image then binned 2x2, viewed zoomed 200% so that the image displays at the same dimensions as the top left, not-resampled image - the stars are not blocky. The image on the right is the same as the bottom left image (resampled x2, plus bin 2x2) zoomed 400% for comparison, and the stars are still not blocky. Any ideas why it is not as you expect? I could zoom everything in even further to show the blocky stars in the not-resampled image better.

image.thumb.png.8eeb392abedbe61ee00a17296f1eab75.png

47 minutes ago, vlaiv said:

Let me explain - first part, up sampling adds no detail, second part - binning does not work as you would expect - as you don't have genuine data but data produced by interpolation.

What does this mean? Should I only bin the raw images before calibration and stacking? Would that mean that I also have to bin all my calibration frames in their raw, off-camera state, not the unbinned calibration masters? I am really not understanding how anyone would use binning in software if I can't bin a calibrated, but otherwise unprocessed, stack of raw image data.

Thanks again


10 minutes ago, geoflewis said:

As requested, below is a screen grab of the same region displayed 3 ways. The top left is the original (not resampled) image binned 2x2 (add) and zoomed 400% to show the blocky pixelated stars - can you see it? [...]

It is not the stars that are blocky - it is viewing them at 400% that makes them look blocky, because viewing at 400% must resample the image for display - and your software uses nearest neighbor to do that.

When you up sampled the image yourself, you chose some other resampling method that makes stars look smoother and not pixelated. It is only nearest neighbor that makes stars look pixelated.

Look at this:

Screenshot_2.jpg.281d867e277e3000427050635375d864.jpg

I use IrfanView to view images - above is just some image of stars.

When I zoom in IrfanView 550% to see stars up close, I get this:

image.png.25ed8971c6d09e55e147e8fee8366b23.png

Star is round, but now I'm going to switch off "good" resampling in IrfanView:

image.png.6f7794c8fdb78b97fed331c636ca3fcc.png

Suddenly - that same star now looks pixelated:

image.png.d06ecce7d21cc4531ec9c2df137bec14.png

Pixelation is not a feature of the image - it is a feature of the zoom method used to view the image at a given zoom level. Look at this:

image.png.cc3fe943e638d8a394d2fcfdd778fb99.png

That is one of your resampled stars that you think is smooth (and it is - at only 200% zoom) - but if I zoom in further (again using nearest neighbor interpolation for resampling) - it also becomes pixelated.

21 minutes ago, geoflewis said:

What does this mean? Should I only bin the raw images before calibration and stacking? Would that mean that I also have to bin all my calibration frames at their Raw off camera status, not the unbinned calibration masters? I am really not understanding how anyone would use binning in software if I can't bin a calibrated, but otherwise unprocessed stack of raw image data.

No, it means that binning works as expected only if the noise statistics in the image follow a certain distribution. If you mess with the noise in the image in some way - binning won't work as expected.

For binning to work - you need randomness and independent values.

It is a bit like this - when you stack images, you get SNR improvement by factor of square root of number of stacked images.

This means - stack 4 images and you get SNR improvement of 2.

Now let's suppose that you have only 2 images to stack. The expected SNR improvement will be sqrt(2) = 1.41..., but what if we trick the system? Say we take the first image and copy it 2 more times. Now we have a total of 4 images, so that should give us an SNR improvement of 2 - after all, we are stacking 4 images now, right?

No - it does not work that way - three of those images are what we call linearly dependent vectors - they are the same thing multiplied by a constant and offset by a constant - in this case, the constants being 1 and 0 (multiplied by one and offset by zero).

For stacking to work as you expect - noise must behave like linearly independent vectors - meaning that the noise in each image is completely random in relation to any other image in the stack.
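The "copy a sub" trick can be simulated directly (a numpy sketch with made-up signal and noise levels):

```python
import numpy as np

rng = np.random.default_rng(1)
# four subs: same signal (100), independent noise (std 10)
subs = [100.0 + rng.normal(0, 10, 100_000) for _ in range(4)]

independent = np.mean(subs, axis=0)       # genuine 4-sub stack
# 2 real subs, with the first one copied twice: still "4 frames"
tricked = np.mean([subs[0], subs[0], subs[0], subs[1]], axis=0)

print(round(np.std(independent), 1))  # ~5.0 = 10 / sqrt(4)
print(round(np.std(tricked), 1))      # ~7.9: copies are linearly dependent, no free SNR
```

The copied frames add no new random information, so the noise of the tricked stack (sqrt(10)/4 x 10 ≈ 7.9) is far worse than the genuine 4-sub result.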

Same thing happens with binning - noise in pixels must be totally random with respect to noise in all other pixels (in that group).

When you calibrate your image - you keep operations at the "pixel level" - you don't mix pixel values. You subtract the dark pixel by pixel, you divide by the flat pixel by pixel. You don't mix pixel values. This keeps the noise independent and binning works.

Once you start aligning your subs - that is when you start introducing cross-pixel correlation.

The best way to stack images with respect to SNR is to use integer offsets and no rotations - this is how stacking was first developed - it was called the "shift and add" technique. As soon as we start using sub-pixel shifts to align subs for stacking - we are using some sort of interpolation and we introduce correlation between pixels.

Better interpolation algorithms introduce less of this pixel to pixel correlation.

Some time ago I made a post where I addressed this effect and how the choice of interpolation algorithm used to align images impacts noise grain.

In it you can find a comparison of a few interpolation algorithms and how they act as a low pass filter. The less they act as a filter - the more they preserve the original noise - and the less effect there is on stacking.


16 minutes ago, vlaiv said:

It is not stars that are blocky - it is viewing them on 400% that makes them look blocky - because viewing it on 400% must resample image for viewing - and above software uses nearest neighbor to do that. [...]

No, it means that binning works as expected if noise statistics in the image follows certain distribution. If you mess with noise in the image in some way - binning won't work as expected. [...]

Thanks again Vlaiv, I feel that I've reached the end of the road with my understanding, or in reality, lack of it. I am going round in circles and getting nowhere. I want to understand why, when and how to use resampling and/or software binning to improve my AP workflow. Up to now I never tried it, as I didn't understand it; I did not know what benefit it would be trying to achieve. I thought I understood in-camera (CCD) binning (a better sampling rate for an oversampled camera-scope configuration, as with my 5.4 micron camera pixels and F11 C14 scope + x0.63 telecompressor), but now even that is not so clear. I really appreciate your patience with me and your sharing your deep knowledge of this topic, but I just don't get it, sorry....!!


15 minutes ago, geoflewis said:

Thanks again Vlaiv, I feel that I've reached the end of the road with my understanding, or in reality, lack of it. [...] I really appreciate your patience with me and your sharing your deep knowledge of this topic, but I just don't get it, sorry....!!

You don't need to fully understand a process in order to be able to utilize it.

Don't bother with resampling at all. It is useful in some cases - like when you want to get a certain number of dots per inch for printing purposes or whatever.

Bin your data to recover SNR if you are over sampled.

- you can bin in hardware with CCD cameras or in firmware with CMOS cameras, and you can bin in software. In most cases the difference is very small, so the best guideline is to bin in software for CMOS and bin in hardware for CCD, to get faster downloads and smaller files - and you can additionally bin CCD files in software if there is need for it.

- bin individual subs after calibration, or bin the resulting image after stacking. Bin individual subs if you use an advanced resampling algorithm for alignment during stacking (like Lanczos or Catmull-Rom spline or whatever) - bin the stack result if you use bilinear interpolation (like DSS does).

- don't try to enlarge a binned image by up sampling (unless you need it for printing or whatever) - there is little point in doing so.

- the best method to determine a good sampling rate is to measure star FWHM (in arc seconds) and divide that by 1.6. Bin so that your pixel scale gets close to this value. I personally think it is better to under sample a bit rather than over sample a bit if you can't be spot on - but that is a personal view.
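As a worked example of that last guideline (the focal length and seeing figures here are illustrative assumptions, not measurements from this thread):

```python
# FWHM / 1.6 rule of thumb: bin until the pixel scale approaches target sampling
pixel_um = 5.4        # e.g. a QSI583-class pixel
focal_mm = 2460       # hypothetical: roughly a C14 with x0.63 telecompressor
fwhm_arcsec = 3.5     # hypothetical measured star FWHM

scale = 206.265 * pixel_um / focal_mm       # native sampling, arcsec/pixel
target = fwhm_arcsec / 1.6                  # desired sampling per the rule
bin_factor = max(1, round(target / scale))  # integer bin that lands closest

print(f'native {scale:.2f}"/px, target {target:.2f}"/px -> bin x{bin_factor}')
```

With these assumed numbers the native ~0.45"/px is far below the ~2.19"/px target, so a heavy bin factor comes out - which is why long focal length setups are so often over sampled.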


53 minutes ago, vlaiv said:

You don't need to fully understand process in order to be able to utilize it.

Don't bother with resampling at all. [...] Bin your data to recover SNR if you are over sampled. [...]

Thanks again Vlaiv, this I think I understand.

 

53 minutes ago, vlaiv said:

bin individual subs after calibration or bin resulting image after stacking. Bin individual subs if you use advanced resampling algorithm for alignment during stacking (like Lanczos or Catmull Rom spline or whatever) - bin stack result if you use bilinear interpolation (like DSS does)

I have no idea what method ImagesPlus uses for alignment, and since the developer no longer supports the product, I am unlikely to be able to find out. The auto image set processing tool starts with me loading all my raw data (lights, darks, flats, flat-darks, bias) and ends by kicking out a calibrated, normalised, graded, aligned and combined image using whatever combination method I specify, e.g. min/max exclude, average, one of the sigma clip options, etc. I have been using median sigma combination, but saw that you recommend average instead of median, so I will change to that in future. I can stop the process after calibration, but then normalisation, grading, aligning and combining the calibrated images becomes a tedious manual process that I'd prefer not to do.

I will email the developer to see if he will answer about the alignment method (e.g. is it bilinear?), but I'm not hopeful about getting a reply. If I don't get a reply, can I just take the risk of binning the stack, i.e. what is the risk of a bad result and how bad could that be?

Best regards,


43 minutes ago, jetstream said:
54 minutes ago, vlaiv said:

You don't need to fully understand process in order to be able to utilize it.

I just love this statement! :thumbsup:

I was getting to the point that I thought that I no longer understood English.....


25 minutes ago, geoflewis said:

I was getting to the point that I thought that I no longer understood English.....

I watch these threads trying to learn a bit from each. Whether or not an in-depth understanding like @vlaiv's is reached (most likely not), the path these threads take us down will enhance understanding and give us flexibility in image capture and processing, IMHO.


4 minutes ago, jetstream said:

I watch these threads trying to learn a bit from each. Whether or not an in depth understanding like @vlaiv has, is reached (most likely not) the path that these threads take us will enhance understanding and give us flexibility in image capture and processing IMHO.

Absolutely, at least I certainly hope so, which is why I am so appreciative of very generous people like @vlaiv, who are prepared to indulge folks like me with their time, to try and share their knowledge.


13 minutes ago, geoflewis said:

I have no idea what method ImagesPlus uses for alignment and since the developer no longer supports the product, I am unlikely to be able to find out. [...] If I don't get a reply, can I just take the risk of binning the stack, i.e. what is the risk of a bad result and how bad could that be?

You can email the developer - but don't worry about it too much - most of the differences I'm talking about are academic rather than obvious.

Here - I'll show you by example.

I created two images of pure noise - with a noise value of 1 unit (whatever unit we choose).

image.png.068af7ddf252fa161605bdcb64aa3ae6.png

Now I'm going to shift each by half a pixel, stack and bin - and measure the result.

Then I'm going to bin each, shift them by a quarter of a pixel and stack - and measure the result, so we can compare them.

image.png.469dd6d3adb7a6b426abfef7d33063cc.png

First is stack then bin, second is bin then stack. Both use the same bilinear shift to "align" the frames.

The second one yields a slightly better result in terms of noise reduction - but that might just be a difference caused by the bilinear interpolation itself, as it reduces noise and smooths the image.
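A numpy sketch of that comparison (pure-noise frames; a simple bilinear column shift stands in for alignment, and average combine is used for both stacking and binning):

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.normal(0, 1, (1000, 1000))   # two pure-noise "subs", std = 1
b = rng.normal(0, 1, (1000, 1000))

def bin2(img):
    """2x2 average binning (even dimensions assumed)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def shift_bilinear(img, frac):
    """Bilinear shift along columns by a sub-pixel fraction."""
    return (1 - frac) * img[:, :-1] + frac * img[:, 1:]

# Path 1: "align" b by half a pixel, stack, then bin
stacked = (a[:, :-1] + shift_bilinear(b, 0.5)) / 2
path1 = np.std(bin2(stacked[:, :998]))

# Path 2: bin both first, "align" by a quarter of a (binned) pixel, then stack
ab, bb = bin2(a), bin2(b)
path2 = np.std((ab[:, :-1] + shift_bilinear(bb, 0.25)) / 2)

print(round(path1, 3), round(path2, 3))  # bin-then-stack comes out slightly lower
```

The gap between the two is on the order of a few percent of the noise level - measurable, but well below what anyone would notice visually.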

 

 

 

 


3 minutes ago, vlaiv said:

but don't worry about that too much - most of differences I'm talking about are academic rather than obvious

So when it boils down to it, as far as you are concerned I can bin the stacked linear image, regardless of the alignment methodology, and no one will ever know (see) the difference?


7 hours ago, geoflewis said:

So when it boils down to it, as far as you are concerned I can bin the stacked linear image, regardless of the alignment methodology, and no one will ever know (see) the difference?

Well, in principle - yes. It won't be quite as good as the alternative when you measure it - but anyone would be hard pressed to see the difference visually.

 


9 hours ago, vlaiv said:

Well, in principle - yes. It won't be quite as good as the alternative when you measure it - but anyone would be hard pressed to see the difference visually.

 

Thanks Vlaiv, that sounds good enough for me, so I will continue to experiment with binned and unbinned processing in software. I will continue to bin 2x2 all my subs in camera when shooting with the C14, so a further bin 2x2 in software might be too much.

I know that I've struggled with all of this, but I actually feel as if the fog is beginning to lift, so thanks for your significant contributions over the last few days on that.

BTW I did send an email to the ImagesPlus developer, Mike Unsold, but haven't heard anything back yet; it might just be early days, but I suspect that he doesn't want to take the lid off that particular box again, since he announced over a year ago that he's no longer supporting or developing it and made it a free download, which if you are interested in checking out is at http://www.mlunsold.com/ILOrdering.html

Many thanks,

