Posts posted by vlaiv

  1. 2 hours ago, StarSnap said:

    It is a ZWO EFW with T2 females on both sides.

    The train configuration you suggest should work - a bit of extra distance from the sensor to the filters, but it shouldn't really matter.

    Cheers

    It certainly won't matter on a chip the size of the 178's. Even on larger sensors you should not get much vignetting at that distance.

  2. Are you sure you have the EFW the right way around? What EFW is it? ZWO as well?

    In fact, it does not matter. The EFW has female T2 connections on both sides and your camera has a female T2 connection, but the EFW comes with a T2-T2 adapter.

    This is how you should put together things:

    camera - t2-t2 - EFW - 16.5 extender - FF/FR

    It should all fit together like that, if I'm not mistaken.
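
    If you want to sanity-check the spacing, here is a minimal sketch that sums the optical path of that train against the corrector's required back focus. Every value below is a placeholder assumption - substitute the actual figures from your components' spec sheets.

    # Hypothetical back-focus check for the proposed imaging train.
    # All thicknesses in mm are placeholders - use your components' real specs.
    train = {
        "camera flange-to-sensor": 17.5,  # placeholder - check camera spec
        "T2-T2 adapter": 1.0,             # placeholder
        "EFW body": 20.0,                 # placeholder
        "16.5 extender": 16.5,
    }
    required_back_focus = 55.0  # placeholder - check the FF/FR spec

    total = sum(train.values())
    print(f"train: {total} mm, required: {required_back_focus} mm, "
          f"spare: {required_back_focus - total:+.1f} mm")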

     

  3. Just now, Rodd said:

    Well not reduced.  I did not use a reducer

    Indeed - neither my data nor yours was gathered with a reducer. I plan to use binning, as it has the same effect as a reducer for this purpose - it provides the same aperture at a given resolution.

  4. 3 minutes ago, Rodd said:

    I just collected a bit over 8 hours of the Horsehead. I will post binned and unbinned. I am having trouble seeing the difference between binning and just making the image smaller. Images look better smaller. That's why they look so good on my iPhone. Why not just make it smaller?

    I will do the experiment with the data I have, but if you want, I can do it with the data you've gathered. No need to bin it - just post an aligned and cropped (to remove any alignment / registration artifacts) set of subs and I'll do all the processing.

    Once I do the experiment, you will be able to tell the difference between all the versions - binned, native, reduced, enlarged ....

  5. 2 minutes ago, Rodd said:

    Half the subs, Vlaiv. My whole question revolves around imaging time

    I think I get it now.

    The idea is to simulate the reducer scenario?

    I can do the following: take a data set and stack it at "normal" resolution. Take half of that data set (half the subs), bin it 2x2 to simulate a reducer, and stack that as well. Create a comparison between the images at large scale and small scale.

    Enlarge the small image to match the scale of the larger image and do another comparison.

    I can do the same with a quarter of the subs (that would be the equivalent time when using a x0.5 reducer or 2x2 binning - it should give the same SNR as native resolution).

    Will do that and post results sometime today.
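
    As a quick reference for why quarter-of-the-subs-binned should match the full native stack, here is a sketch of the SNR arithmetic (assuming shot-noise-limited data, where stacking N subs gains sqrt(N) and binning k x k gains a factor of k per pixel):

    import math

    def relative_snr(n_subs: int, bin_factor: int = 1) -> float:
        """Per-pixel SNR relative to one unbinned sub, shot-noise-limited."""
        return math.sqrt(n_subs) * bin_factor

    print(relative_snr(16))     # all 16 subs, native:    4.0
    print(relative_snr(8, 2))   # half the subs, bin 2x2: ~5.66
    print(relative_snr(4, 2))   # quarter subs, bin 2x2:  4.0 - same as native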

     

  6. 3 minutes ago, ollypenrice said:

    One interesting test any of us could do would be to pre-process and post-process an image we already have but using only half the data. My prediction would be this: the half-data image, when shown at some fraction of full size, will look pretty similar to the full data image. It will break down into noise as we take it closer to full size. This would give us a practical feeling for the role of resizing in influencing perceived image quality.

    Olly

    I don't really follow what you propose to be done.

    By "half the data", do you mean half of the subs, or "every other pixel"?

    I'm also not getting the parts "some fraction of the full size" and "we take it closer to full size".

    Once I understand your proposal, I'd be happy to give it a go and show the differences.

  7. 2 minutes ago, Rodd said:

    The question is can you do it and save time imaging.   That is the whole point, not whether you CAN do it

    I think the answer to that question has been given, but to reiterate:

    In my words - yes.

    Olly's position on this is again clear from his last post on the subject, and I'll quote:

    48 minutes ago, ollypenrice said:

    Regarding your M1 example, this is pure 'F ratio myth' territory whether you subscribe to the myth or reject it. My contention is that you don't need the reducer. You could just work in Bin 2 or resample the native image downwards for a comparable result without the expense of the reducer, its more critical focus and its tendency, in some cases, to plague you with internal reflections, spacer distances, tilt, etc.

    Here I'll point you to the part: "you don't need the reducer". In fact, all of the rest said above is true and has also been demonstrated in this thread:

    - Bin x2 in practice will give you the same results

    - Downsampling the image will give you comparable results (and I demonstrated how different resampling methods fare against binning).

    However, just as bin x2 works - so does the reducer.

    In fact, in some cases the reducer will have an edge over binning. For example, if you are sampling at 1"/px and the ideal sampling rate is 1.3"/px, and you happen to have a reducer that shortens your focal length to give exactly that sampling rate - it will be easier to use it than to bin, simply because no suitable fractional binning method is implemented in software yet. The reducer will then provide a better SNR improvement than other forms of resampling.

    There is another benefit of a reducer vs binning - exposure time. With a reducer you get a stronger signal per pixel and it is easier to beat read noise - you need shorter subs. If the sharpness of your image depends on sub duration (shorter subs are sharper than longer subs due to guiding issues, or perhaps wind, or whatever), then the reducer will perform better because you will be able to use shorter subs.
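
    A rough sketch of that exposure-length argument, under simple assumptions (a x0.5 reducer concentrates x4 the signal onto each pixel; a sub is "long enough" once sky shot noise is about 5x the read noise; all flux figures below are made up for illustration):

    def min_sub_length(sky_e_px_s: float, read_noise_e: float,
                       swamp_factor: float = 5.0) -> float:
        """Shortest sub (s) with sqrt(sky * t) >= swamp_factor * read noise."""
        return (swamp_factor * read_noise_e) ** 2 / sky_e_px_s

    read_noise = 1.7               # e-, example CMOS value
    sky_native = 2.0               # e-/px/s at native FL (assumed)
    sky_reduced = sky_native * 4   # x0.5 reducer -> x4 flux per pixel

    print(min_sub_length(sky_native, read_noise))   # ~36 s
    print(min_sub_length(sky_reduced, read_noise))  # ~9 s - x4 shorter subs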

  8. 5 minutes ago, Rodd said:

    So....it CAN'T be done!!!! I get conflicting reports!

    I think that you misinterpreted what Olly said there.

    34 minutes ago, ollypenrice said:

    *Edit: Cannot be usefully resampled upwards as Vlaiv says below.

    What I'm reading (and happy to be corrected) is that Olly is saying in fact:

    Yes, you can upsample it to the same size, but like I pointed out - there is no useful detail to be had by doing so. That does not conflict with what I said: no detail will be gained. In the case of oversampling the image will be the same, because there was no detail there in the first place, and I don't really see the point in doing so, as the image will be more pleasing at lower resolution. If people want to enlarge the image for a better view - they can easily do it themselves by hitting the magnify / zoom-in button.

  9. 13 minutes ago, Rodd said:

    So it makes more sense not to match pixel size to the scope. It makes more sense to use pixels that are too small, because you can have both binned images and, when the seeing is great, higher resolution images

    Well, yes - if you know what you are doing :D

    I'm going to quote the wiki article on Optical transfer function (https://en.wikipedia.org/wiki/Optical_transfer_function):


    Just as standard definition video with a high contrast MTF is only possible with oversampling, so HD television with full theoretical sharpness is only possible by starting with a camera that has a significantly higher resolution, followed by digitally filtering. With movies now being shot in 4k and even 8k video for the cinema, we can expect to see the best pictures on HDTV only from movies or material shot at the higher standard. However much we raise the number of pixels used in cameras, this will always remain true in absence of a perfect optical spatial filter. Similarly, a 5-megapixel image obtained from a 5-megapixel still camera can never be sharper than a 5-megapixel image obtained after down-conversion from an equal quality 10-megapixel still camera. Because of the problem of maintaining a high contrast MTF, broadcasters like the BBC did for a long time consider maintaining standard definition television, but improving its quality by shooting and viewing with many more pixels (though as previously mentioned, such a system, though impressive, does ultimately lack the very fine detail which, though attenuated, enhances the effect of true HD viewing).

    Similarly, you will get a somewhat sharper image if you oversample and then do some tricks to restore the lost SNR. This is one of the reasons for the recent trend of high megapixel sensors with ever smaller pixels - coupled with the low read noise of CMOS sensors, that enables you to get sharper images than with larger pixels (the other reason being that it just sells better when people hear it has "more" of something - more megapixels :D ).

    In theory, an ideal sensor would have extremely small pixels and zero read noise. Such a sensor would provide the sharpest image and would offer fractional binning at any level without introducing artifacts. Due to the nature of the world, that is not possible - a pixel can't be much smaller than the wavelength of light (for example, a 2.4um pixel is only about x4 the wavelength of red light, so you can't go much smaller than that and still manage to record photons), and there will always be some read noise.

  10. 39 minutes ago, Rodd said:

    This does not really get to the point of using a reducer to decrease imaging time for smaller targets within a larger FOV. My original post tried to ask (it's proving to be a difficult thing to articulate).....well, how about this.....you have an 8 inch scope at F10 and you want to image M1. So you say, wait, I will throw on the reducer, image M1 and surrounding environs, then crop out M1 and I will have my picture of M1 in half the time. A very specific use of reducers. I understand all other aspects of the devices. You are interested in the interlacing details in the core, naturally.

    Rodd

    In principle you can do that, and it will lead to faster imaging time (or a better image in the same time). The same thing can be achieved by not using a focal reducer and binning your data instead (almost the same thing - there are very minute differences related to pixel blur and read noise, but you will be hard pressed to see them).

    You can even do what you proposed at one point - if your native FL is oversampling, use a focal reducer to record a smaller image of the object more quickly and then resample back to the wanted size. However, I don't really see the point in resizing back to the original size - no detail will be gained that way (and that is OK, because no detail was present in the first place when oversampling) and the resulting image will look less sharp. You can leave it at lower resolution / smaller object size - it will be aesthetically more pleasing to look at, and if someone wants to view it enlarged (to the same effect as if you had upsampled it as part of processing) - they can just hit the magnify button (Ctrl + in their browser) and the image will be enlarged. Whoever chooses to do this will be aware that the blur is a consequence of enlargement and will not think your image is at fault (but they can object to it if you do it in processing).

    28 minutes ago, Rodd said:

    Thanks. You'd think the PI folks would have offered this. Like pulling teeth.

    Question: is there any difference between software binning and using a camera with bigger pixels that is more suited to a scope? (In my case I am way oversampled with the TOA at native FL and the ASI1600. I can bin to bring the resolution better in line with the FWHM (closer to 2x as opposed to 1.6), or I can switch to a larger pixel camera.) Any major advantage to either? Assume software bin

    In practice there will be no difference between binning x2 and using a pixel twice as large (x4 in surface area - x2 width, x2 height). In theory there are subtle differences, especially if you use software binning.

    There will be a difference in read noise for the binned camera - it doubles "per pixel" - so instead of comparing the small-pixel camera's base read noise to the large-pixel camera's, compare double that value. For the ASI1600, bin x2 gives 7.6um pixels, so when comparing to a camera that has 7.6um pixels - use 3.4e read noise instead of 1.7e.

    If you do the binning a certain way, you will get a slightly sharper image under the same conditions - this is down to pixel blur. The surface of the pixel adds some blur on top of everything else that blurs the image (atmosphere, aperture, tracking/guiding). It is a very small difference, but it is there. The larger the surface of the pixel - the larger the pixel blur. Regular binning does the same to pixel blur as using a larger pixel - so no advantage there. But other types of binning can circumvent the pixel blur issue to some degree.

    There are a few more differences worth mentioning in an "academic" sense. One is hot pixels: when you software bin, a hot pixel effectively costs you "1/4 of a pixel's value"; with a large-pixel camera it costs a whole pixel. Similarly, you can bin in a way that accounts for the fact that not all pixels have the same read noise (some are noisier than others) - a weighted average improves the read noise characteristics.

    I listed these subtle differences just so people are informed about them, but in practice you will see almost no difference between a camera with larger pixels and binning a camera with smaller pixels.
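
    A back-of-the-envelope version of that ASI1600 comparison (software bin 2x2 sums four independent reads, so read noise grows by sqrt(4) = 2):

    import math

    pixel_um = 3.8     # ASI1600 pixel size
    read_noise = 1.7   # e-, ASI1600 read noise

    binned_pixel_um = pixel_um * 2                  # 7.6 um effective pixel
    binned_read_noise = read_noise * math.sqrt(4)   # 3.4 e- per binned pixel

    print(binned_pixel_um, binned_read_noise)
    # Compare 3.4 e- against the native read noise of a real 7.6 um camera.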

  11. 4 minutes ago, Rodd said:

    I think the question was more how to do it in PI, but great info. The question remains: how does this translate to PI?

    Well, the PI documentation does offer some insight into how it's all done - and what options are available:

    https://pixinsight.com/doc/tools/Resample/Resample.html

    https://pixinsight.com/doc/docs/InterpolationAlgorithms/InterpolationAlgorithms.html

    In fact, in this last link you will see the math behind it and the effects it has on the image and on noise in some scenarios.

  12. 42 minutes ago, ollypenrice said:

    Adam and Vlaiv in particular, how about this caveat for the diagrams I posted?

    These diagrams assume workable sampling rates for the camera in all cases. If the longer focal length introduces over-sampling it will add no resolution and the imager would benefit from the reducer since it will add speed without reducing resolution. An alternative would be to bin the data when oversampled.

    I'm left wondering if we don't need a new unit. Arcseconds per pixel is fine for resolution but don't we really need something indicating flux per pixel? This might be indicated by something like square mm of aperture per pixel, no? 

    Olly

     

    Not sure people will understand what a "workable sampling rate" is. An imager might benefit from a reducer even if they are properly sampled at the native focal length, or even undersampled. In fact, they will benefit from a focal reducer in terms of speed if they leave the resulting sampling rate as is and are happy with it - the image will reach target SNR faster.

    Some time ago, when I first took an interest in the whole F/ratio vs speed business, I also tried to come up with a meaningful measure of how fast a system is - something that can be used to compare two setups in terms of speed. It is possible to do, but it defeats the point - too much math is involved for it to be done quickly in one's head (things need to be squared and divided). The only decent thing I got out of it is "aperture at resolution" - but again, that does not help much in this case, nor in the general case, because resolution changes.

    Even if we define some measure - it will be more of a "this setup is capable of delivering" type of thing, not a relative measure that holds in all scenarios (like saying setup A is twice as fast as setup B). Whenever we have weak signal - other things start to matter more than aperture and sampling rate. The processing workflow can have an effect as well - like the number of calibration frames and the algorithms used.
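
    For what it's worth, here is a minimal sketch of the "aperture at resolution" idea as a relative speed figure - photon flux per pixel taken as aperture area times the sky area each pixel covers. This is just my reading of the idea, not an established unit:

    import math

    def speed(aperture_mm: float, sampling_arcsec_px: float) -> float:
        """Relative flux per pixel: aperture area x pixel sky area."""
        area = math.pi * (aperture_mm / 2) ** 2
        return area * sampling_arcsec_px ** 2

    # Twice the aperture at the same sampling rate -> x4 the speed:
    print(speed(200, 1.0) / speed(100, 1.0))   # 4.0
    # Same aperture, reducer coarsens sampling from 1.0 to 1.4 "/px:
    print(speed(100, 1.4) / speed(100, 1.0))   # ~1.96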

  13. 9 hours ago, jjosefsen said:

    Very interesting (and confusing) thread 🙂

    How would you go about resampling (is that the correct term) to a certain resolution in PixInsight?

    Example:

    Let's say I have an image shot at 1"/px and I want to target 1.8"/px because I feel like that is what my skies allow. How would I do that?

    Hope it's ok to ask in this discussion..

     

    You have a couple of options there; not all of them are available in PI.

    You can do fractional binning. That is a whole new topic, and I'm just mentioning it for information.

    The next thing would be a simple resample. Depending on the type of interpolation algorithm used for resampling, you will get different results in SNR improvement (I'll expand on that with examples below). Another thing you could do is bin 2x2 to get to 2"/px and then upsample back to 1.8"/px. This one will "cost" a little detail - but it's likely that most people won't be able to tell the difference - and it will provide a precise SNR improvement.
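
    Here is a minimal numpy/scipy sketch of that second route (bin 2x2, then upsample back to the target rate), assuming linear data at 1"/px; scipy.ndimage.zoom is just one of several interpolators that would do:

    import numpy as np
    from scipy.ndimage import zoom

    def bin2x2(img):
        """Average 2x2 blocks (trims odd rows/cols)."""
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
        x = img[:h, :w]
        return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    img = np.random.normal(100, 10, (1000, 1000))  # stand-in for 1"/px data

    binned = bin2x2(img)                       # now 2"/px, SNR doubled
    final = zoom(binned, 2.0 / 1.8, order=3)   # upsample back to 1.8"/px

    print(img.shape, binned.shape, final.shape)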

    14 hours ago, ollypenrice said:

    Yes, I'm not competent in all the ways of resampling because I don't do it, other than to compensate for JPEG artefacts when posting, but I really don't believe that there will be a serious difference between them. A slight difference, perhaps. We really need examples.

    Here is one setup that will show the effects of different resampling methods. It will be an "artificial" / generated example, but because of that we will be able to get exact numbers.

    I've created an image consisting of two elements - a Gaussian star profile with FWHM 2" and Gaussian noise with magnitude 1. The image is "sampled" at 0.5"/px. Here are the image and measurements:

    [screenshot: generated test image with noise and FWHM measurements]

    This image can be reduced to half its size ("sampling" at 1"/px) without loss of information - it is oversampled. I'm going to use different methods to reduce it, and we can compare the impact on SNR (we measure the noise - the signal stays the same when resampling, so any reduction in noise is an improvement in SNR) and also any loss of resolution - by examining how FWHM changes in the reduced image.
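
    For anyone who wants to reproduce this, here is a sketch of how such a test image can be generated (numpy; the exact size and star amplitude I used may differ):

    import numpy as np

    def make_test_image(size=256, fwhm_arcsec=2.0, scale_arcsec_px=0.5,
                        noise_sigma=1.0, amplitude=100.0):
        """Gaussian star of given FWHM plus Gaussian noise."""
        sigma_px = fwhm_arcsec / scale_arcsec_px / 2.355  # FWHM = 2.355 sigma
        y, x = np.mgrid[:size, :size]
        c = (size - 1) / 2
        star = amplitude * np.exp(-((x - c) ** 2 + (y - c) ** 2)
                                  / (2 * sigma_px ** 2))
        return star + np.random.normal(0.0, noise_sigma, (size, size))

    img = make_test_image()  # 0.5"/px, 2" FWHM star, noise sigma 1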

    In order, I will perform:

    1. Nearest neighbor resampling (or rather just taking every other sample) - this approach should have no impact on SNR or resolution whatsoever

    2. Regular binning 2x2 - it should reduce noise by a factor of 2 and slightly increase FWHM due to pixel blur

    3. Split / sift binning - that is something I developed, so we can compare it to the others

    4. Resampling using linear interpolation

    5. Resampling using cubic interpolation

    6. Cubic B-spline

    7. Cubic O-MOMS

    8. Quintic spline

    Here is a screenshot of the first option:

    [screenshot: nearest neighbor result with noise and FWHM measurements]

    As predicted, the noise remains the same, and FWHM is also about the same (both vary because the image is noisy) - now we are at 1"/px, so the baseline FWHM is 2". The other measurements I'll just list instead of taking screenshots.

    First, SNR:

    [screenshot: noise StdDev measurements for each resampling method]

    Results are in the StdDev column. The baseline and nearest neighbor show no change in SNR. Binning x2 and linear interpolation increase SNR (predictably) by a factor of x2. In this particular case linear interpolation is in fact bin x2 with a half pixel shift (a bin is the average of two pixels, and because we are resizing by a factor of x2, linear interpolation takes 50% of one pixel and 50% of the other - which is the same as the average).

    Split/sift bin gives the best results. We can go a bit deeper into that, but I already gave an outline explanation in the thread on software binning that I linked in one of the earlier posts. The other, more advanced resampling methods give poorer results. That is to be expected, because advanced resampling methods are designed to alter the image as little as possible - and that includes the noise as well as the data. In fact, the most advanced resampling here gives an SNR increase of only ~x1.2, vs the split/sift bin's ~x2.22 boost in SNR.
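
    The binning figure is easy to verify on pure noise - a quick sketch (noise-only data, so the drop in standard deviation is the SNR gain):

    import numpy as np
    from scipy.ndimage import zoom

    noise = np.random.normal(0.0, 1.0, (1024, 1024))

    # Bin 2x2 by averaging: standard deviation should halve.
    binned = noise.reshape(512, 2, 512, 2).mean(axis=(1, 3))

    # Cubic spline downsample to the same size, for comparison.
    spline = zoom(noise, 0.5, order=3)

    print(noise.std(), binned.std(), spline.std())
    # ~1.0, ~0.5, and a value well above 0.5 (less reduction than binning)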

    Let's now look at effects on resolution:

    Method                  FWHMx     FWHMy
    Binning 2x2             2.0762    2.086
    Split/sift binning      2.0164    2.0235
    Linear interpolation    2.0762    2.086
    Cubic interpolation     2.0007    2.0125
    Cubic B-spline          1.9842    1.9988
    Cubic O-MOMS            1.9798    1.9957
    Quintic spline          1.9778    1.9946
    
    

    Due to pixel blur, regular binning increases star FWHM by almost 4% - 2.08" vs 2". The same is of course true for bilinear resampling. The split/sift bin increases FWHM by only about 1% (it is designed to circumvent pixel blur). The more advanced resampling methods keep FWHM pretty much the same - they are designed not to lose any information.

    This little exercise also shows that for 2" FWHM stars you don't lose anything by sampling at 1"/px - in fact, proper sampling for a 2" FWHM image is about 1.25"/px (FWHM divided by about 1.6).
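
    The rule of thumb used here, spelled out (sampling rate = FWHM divided by about 1.6):

    def optimal_sampling(fwhm_arcsec: float) -> float:
        """Rule-of-thumb sampling rate for a given star FWHM."""
        return fwhm_arcsec / 1.6

    print(optimal_sampling(2.0))  # 1.25 "/px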

  14. 8 minutes ago, Rodd said:

    As Olly stated - I do not think those effects are enough to negate his claim:

    " I stand by the optical point it makes and, in practice, I still think that resampling the left hand object image (M33) down to the size of the M33 in the reduced image will produce results only trivially different from each other."

    In principle I agree with that statement; however, it is missing an important piece of information to be complete.

    What type of resampling are we talking about? There are different resampling algorithms with different properties.

    For example - nearest neighbor resampling will make zero improvement in SNR - it will leave SNR exactly the same. Binning as a form of resampling has a very predictable improvement in SNR. Other forms of resampling will have an SNR improvement somewhere in between (or even more, due to pixel-to-pixel correlation).

    Like I mentioned before, the difference between binning and using the equivalent focal reducer is in the read noise contribution and the covered FOV. Other forms of resampling (those that are worth using) will have a slightly smaller SNR benefit compared to binning / the equivalent reducer.

    On the other hand, upsampling behaves differently.

    For most people, who don't resample and have no clue about binning and such - a focal reducer will provide a real benefit in SNR, especially if they are oversampling at the native focal length.

  15. 52 minutes ago, Rodd said:

    So this basically is in agreement with my original post--that one cannot throw on a reducer, image a galaxy, then crop it out and upsample and save time imaging. Thank you

    Rodd

    In fact you can :D

    There is a case where doing that results in virtually the same image quality, and a case where it results in a very small loss of image sharpness.

    If you oversample when imaging at the native focal length to such an extent that using a focal reducer still gives proper sampling or oversampling (but not undersampling) for the given conditions, you will get the same image in less time.

    As explained above - SNR per unit sky area does not change with the use of a focal reducer. What changes is the mapping of that sky onto pixels, and per-pixel SNR does in fact change. An image recorded with a focal reducer will have better SNR in the same time, or will reach the same SNR in less time, than your oversampled image at the native focal length.

    I've already shown in this thread that upsampling an image does not reduce SNR. So an image recorded with a focal reducer will keep its SNR when enlarged.

    The only thing that can happen in this process is that you potentially lose some information, and only in the case where the image with the focal reducer is undersampled. If that is not the case - you will get the same image.

     

  16. 1 hour ago, ollypenrice said:

    This has always been my position on F ratio as well. A good while ago I posted this, which I think makes precisely the same point:

    [diagram]

    With it I posted this:

    [diagram]

    This ignores the effect of object flux per pixel and really needs updating to include it. However, I stand by the optical point it makes and, in practice, I still think that resampling the left hand object image (M33) down to the size of the M33 in the reduced image will produce results only trivially different from each other.

    Adam also said, 'There is no f-ratio myth,  just people who don't understand optics.' I would argue (with a smile!) that the myth arises from failing to understand optics. (In one of my other areas of interest I'd also upbraid myself for the use of 'myth' when the right term would be 'fallacy' but the term 'F ratio myth' was out there so I used it.) 

    I got involved in this debate because we very often see reducers advocated as providing benefits they do not, in fact, produce. As Adam says in the quotation with which I began, we must only compare different apertures at the same focal length. The fact that F stop and aperture are used as synonyms in the camera world is made possible because there is no change in FL. I think there is a myth and this is where it started.

    Olly

    Ah, yes, now I remember.

    You are in fact right in saying that SNR per unit sky area can't be increased with a focal reducer. No question about it - it is equivalent to my statement above that, regardless of how much of the FOV the object occupies, the same aperture will gather the same number of photons from that target.

    However, I do think that you should be careful when saying:

    1 hour ago, ollypenrice said:

    I got involved in this debate because we very often see reducers advocated as providing benefits they do not, in fact, produce.

    as in your own words:

    1 hour ago, ollypenrice said:

    This ignores the effect of object flux per pixel and really needs updating to include it.

    For most people, who don't contemplate what happens at the depths we do in this and similar discussions, focal reducers provide a real benefit.

    They won't be resampling / binning their data, and if the data is not altered that way post acquisition - it will indeed have higher SNR with a focal reducer. Target SNR will be reached in a shorter time, and the image will look less noisy / smoother when viewed at 1:1 magnification. The target will of course be smaller, due to the coarser sampling rate.

    Ultimately, as seen from this example and earlier discussions - coming from you, a well known and accomplished imager, such words carry weight, and people might get confused because they don't read "the fine print".

  17. 13 minutes ago, Laurin Dave said:

    Just read Craig Stark's note.. my reading is that the F-ratio myth would be a myth if it were not for read noise.. in my recent experience taking Ha at the same time through an Esprit 150 / SX-46 and an Esprit 100 / ASI1600, the Esprit 100 stacks are better (which is a trifle annoying.. added together, though, they are even better), even though the Esprit 150 / SX-46 gets 1.5 times more photons per pixel.. read noise though is 9 vs 1.7 on the ASI1600.. on LRGB the 150 combo is streets ahead..

     

    I see the myth in a couple of forms, and one of them is related to read noise.

    1. Myth - a faster F/ratio scope gets to target SNR faster than a slower scope

    2. Myth - two scopes of the same F/ratio are "equal in speed" (a myth even if we use the same camera / pixel size).

    While the first one incorporates the read noise part - read noise is not at the heart of it. For the second myth - read noise is a crucial part (and so is dark current noise). The second myth (or part of it) can be reformulated, and often is used this way - a scope at, say, F/5 is x4 as fast as one at F/10 (or however those F/stops work, I never could remember).

    This is true only if all noise sources depend on aperture and there are no independent noise sources. Shot noise depends on aperture - because it is related to the target signal - and LP noise depends on aperture for the same reason - it is related to the sky signal. But read noise depends on the number of readouts, and dark current noise on the total exposure duration - neither is related to aperture / F/ratio, and they remain the same.

    Once you are in the low light regime - both read noise and dark current noise become important contributors and can't be neglected any more. They change the above statement to: an F/5 scope will be at most x4 faster than an F/10 scope, and in reality it will depend on how bright the target is - it may be faster by only a small percentage.
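
    To put numbers on that, here is a sketch of the standard per-pixel SNR model with a read noise term (sky and dark current omitted for brevity; the flux figures are made-up illustrations). Comparing same aperture and matched sampling - F/5 native vs F/10 binned 2x2 - the only edge the "faster" system has is a single read noise contribution instead of a doubled one:

    def time_to_snr(target_snr, flux_e_s, read_noise_e, sub_s):
        """Total time (s) to reach a per-pixel SNR, from
        SNR^2 = (F*t)^2 / (F*t + (t/sub) * RN^2)."""
        return (target_snr ** 2
                * (flux_e_s + read_noise_e ** 2 / sub_s) / flux_e_s ** 2)

    # Same signal per output pixel either way, but the binned F/10 data
    # carries double the read noise per output pixel (3.4e vs 1.7e).
    for flux in (10.0, 0.05):  # bright vs very faint target, e-/px/s
        t_f5 = time_to_snr(50, flux, 1.7, 60)
        t_f10 = time_to_snr(50, flux, 3.4, 60)
        print(flux, t_f10 / t_f5)  # ~1.0 when bright, toward 4 when faint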

  18. 4 minutes ago, Adam J said:

    If my SNR is already acceptable, though, would that not result in reduced detail, as you can't get detail back by upsampling?

    My current heart project is four frames with the 130PDS and ASI1600mm pro. I find that I need very little noise reduction after re-sampling so am recording true details at the pixel level, hence over four frames I end up with an image that will print to a 60cm x 60cm size on Alu very nicely. 

    Would I be better off getting a 70mm F5 and getting it in one frame but four times the integration in that single frame....not sure but if you only have a hammer every problem is a nail.  

    Adam

    Yes, you are in fact right - I'm somehow always struggling with SNR (Bortle 8 skies? :D ) and although I love maximum possible detail, sometimes better SNR just wins. If you already have plenty of SNR, then yes, a simple resample to get the stars tight enough is all you need.

    Another interesting topic is mosaics vs a single shot with a smaller scope. It is related to this, and if I'm not mistaken it goes like this - scopes of the same speed used with the same camera will give the "same" SNR regardless of whether you shoot a mosaic or a single frame. In practice this does not quite hold, because of small things you need to account for - like changing the FOV between panels (takes time), and a slightly smaller effective FOV due to the overlap needed to piece the mosaic together. Read noise also plays a part, but not an important one - for CCDs it is the same, for CMOS it is higher, but longer subs deal with that.

    Here is a simple "breakdown" of what happens. Let's observe a simple case - same F/ratio, twice the focal length - and consequently x4 the collecting area. Bin x2 will result in the same resolution.

    The large scope collects x4 more light than the small scope at the same sampling rate (difference in read noise only) if we bin x2. But it can only spend 1/4 of the time on each panel compared to the small scope covering its full FOV the whole time. In the end the collected signal is the same, and hence so is the SNR (except for the things mentioned above).
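
    The same breakdown in numbers (a sketch that ignores read noise and the overhead of moving between panels):

    total_time = 4.0   # hours available (arbitrary units)
    panels = 4         # large scope needs a 4-panel mosaic to match the FOV
    flux_per_px = 1.0  # same F/ratio -> same flux per native pixel

    # Small scope: single frame, full FOV, all of the time.
    small = flux_per_px * total_time

    # Large scope: bin 2x2 (sums 4 pixels), but only 1/4 time per panel.
    large = flux_per_px * 4 * (total_time / panels)

    print(small, large)  # equal signal per output pixel -> ~same SNR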

  19. 1 minute ago, Adam J said:

    Well, different methods aside, I know that if I software re-sample my 130PDS data from 1.24"/px to 2"/px (my scale for presenting nebulae) there is a significant increase in image quality / perceived image smoothness, and it gives me a nice balance between detail captured and smoothness when viewed at full scale. That's just using the re-size function in Photoshop, nothing fancy.

    I already linked to a thread about software binning, so you can have a look at the difference between resampling and binning in terms of SNR improvement. It is best done while the data is still linear, and probably the best approach in your case would be to bin x2 to get from 1.24"/px up to 2.48"/px and then upsample back to 2"/px.

  20. Here is an interesting PDF by Craig Stark - it contains much of what we discussed here, and more - some very good examples of images and effects. It shows how one can get the same image by shooting reduced and then enlarging, if the image does not contain the information in the first place.

    It also shows how much detail you lose by undersampling (depending on the detail in the image). I think it is very informative and sheds light on what is to be expected (and I'm guessing many will be surprised by how subtle the differences are, for example between 1"/px and 2"/px) - I also mentioned this above.

    http://www.stark-labs.com/craig/resources/Articles-&-Reviews/ImageSampling_Fratios_SNR_RTMC.pdf

  21. 3 minutes ago, Rodd said:

    I agree with all that. A good place to end. But the real question, which Vlaiv says yes to, and Craig Stark seems to agree with, is: can you use a reducer to image a small target, then crop the image and enlarge the target, with less exposure time than for the equivalent image taken at native FL? Not all agree with this, at least the last I heard. Olly doesn't (or didn't the last we spoke), and I can't fully dive into the "I agree" pool with Olly taking an opposing view.

    I did hear (or better say, read) Olly's position on this once, and yes, as far as I remember he did not fully agree with it, but I'm not sure that is still his position on the topic.

    But instead of hearsay - we can ask @ollypenrice what his stand on all of this is.

  22. 2 minutes ago, Adam J said:

    In software you don't have to re-sample by a factor of 2, you can do it by any amount you like and the algorithm will still work to increase SNR (if that's even the best way to describe it). So you could re-sample to 2"/pix or 1.2"/pix with the corresponding level of improvement in image quality.

    You can then combine both the focal reducer and software to get the desired sampling, while also getting the benefit of a wider field from the reducer.

    That is in fact true, and the only things I would add are:

    - binning produces a known improvement in SNR

    - binning is a form of resampling

    - depending on the type of resampling, a couple of things can happen to your data - loss of resolution due to pixel blur, pixel-to-pixel correlation, and improvement in SNR

    - with regular binning you get pixel blur, but you can bin your data in such a way that there is no increase in pixel blur

    - most other resampling methods don't provide a known increase in SNR, and often have poorer characteristics than binning (pixel-to-pixel correlation / improvement in SNR)

    - there is in fact a fractional binning method that should provide the benefits of both integer binning and resampling. It introduces very small correlation, keeps pixel blur to a minimum, and has a predictable improvement in SNR (equal to the "scale" of the binning, or the square root of the binned surface). The only software I've seen it implemented in is StarTools - and I'm only guessing how it's done there, and I believe there is a better way to do it :D

  23. 5 minutes ago, Rodd said:

    This makes sense.......at .78 arcsec/pix I am way oversampled.  But--software binning would be even better according to Vlaiv. Both together may be too much

    I would like to add that software binning is not always the best option - it is case dependent. If one is way oversampled at the native FL, use of a focal reducer might not result in an optimal sampling rate. For example, 0.78"/px with a x0.7 reducer gives 1.12"/px. If conditions (sky, scope, mount) don't allow for that resolution either - it is better to bin, as 1.56"/px will be closer to the optimal sampling rate.

    14 minutes ago, Rodd said:

    Let's cut to the chase, shall we. You know my gear - it's in my signature for the most part. I have reducers for all 4 scopes, 2 cameras. If it was you, which setup would you use to create the "best" image? Your answer may point to the reason why I am always unsatisfied.

    If we go down that path we will change the direction of this discussion quite a bit. I'm OK with that.

    First I would like us to define what the "best" image is. Then we can start discussing what is needed to create the best image, and after all of that we can see what combination of scope and camera fits those requirements.

  24. 8 minutes ago, Rodd said:

    No way....if there is a star in the upper left hand corner, some photons collected by the scope will go into that star - or another galaxy, or nebula. If the galaxy only fills 1/4 of the FOV, and there are other things in the FOV, most certainly all the photons collected by the scope WILL NOT go into that galaxy!

    You are right - things that are not part of the galaxy will not contribute to the light emitted from the galaxy - but read carefully what I said:

    13 minutes ago, vlaiv said:

    All the photons collected by the scope that originate from the galaxy will go into the "image" of that galaxy, regardless of how large it is in the FOV (except when it extends beyond the FOV).

    I was not talking about additional photons at all - they will be focused onto a different place by the telescope optics. They arrive at the aperture at a different angle and therefore land at a different place in the image.

    However - all the light from the galaxy will end up in the image of that galaxy - no more, no less. There we agree. The only thing I was pointing out was the spread of that light over pixels and the consequent level of signal in each pixel - or, in the end, SNR.
