Everything posted by vlaiv

  1. Sorry, I can't really help you with what you are asking, but I hope you don't mind me asking about the details of your image? What is the FOV, and how many pixels does the image contain? In fact, what constitutes the "largest" image of M31 in the first place?
  2. I find the result of that exercise very interesting. This is AI we are talking about, and it clearly "understands" some concepts and relations and knows how to put them together. It seems that there is much more in our spoken language and our narratives than meets the eye. Things that we take for granted and never think about are present in "the data". For example this bit: the AI "understands" the phrase "to get the upper hand" and that it is OK, in the context of the narrative, to replace hand with paw given that it is all about puppies. It even uses the more formal "felines" instead of "cats", as would appear in an actual script if it were used in a show (somehow the AI mapped the formality of the term to the use of formal terms in that situation).
  3. I only had the opportunity to use one eyepiece labeled AngelEyes - a 7-21mm zoom - and it is a horror show of an eyepiece, so be warned
  4. Just love statements like this one. Could you name a deconvolution method that has been "deprecated" and what made it deprecated, and offer any reason and/or proof that deconvolution methods work differently based on sampling rate - more specifically, that they are less efficient on a lower sampled version of the data?
  5. Correct focus position does not depend on either of those two things, and I don't really see why either over- or under-sampling would prevent anyone from achieving good focus. It's a bit like saying that finding correct focus for visual observing needs very high magnification. I've never had issues where I could not focus at a particular magnification. I did, however, have situations where seeing was so poor that I was not certain I had correct focus.
  6. Defocus is another level of blur that adds up - it reduces the effective resolution of the image and makes it harder to sharpen up (it attenuates high frequencies even more).
  7. Just to clarify further what I mean by the above - look at this: at this scale the images look the same - the high resolution image simply does not look high resolution any more compared to your image, as downsampling has removed any additional detail it had over yours. Now they are matched in level of detail. When this happens - you have reached the point of optimum sampling for your data (if we judge by the visual part alone).
  8. I didn't do anything in particular except take a high resolution reference image that shows the same features as your two images. The original image is very high resolution, as can be seen from this example:

     When you take such an image and scale it to a particular sampling rate - you get the maximum amount of detail that can be recorded at that sampling rate (since there is no blur from optics to speak of). This is useful when you want to compare your own image at a certain resolution with what can be recorded at that resolution. If your image looks more blurry than what can be recorded at a particular resolution and can't be sharpened further - that simply means it is over sampled. It is lacking the detail that corresponds to that image size.

     I made such a comparison with two of your images - one at F/13 and one at F/21. I did a little processing on the reference image - just desaturation and brightness adjustment to try to match it visually to your images. I kept the level of detail intact. What can be seen from this comparison is how much your images are lacking detail compared to the maximum detail that can be recorded at a certain image size. To my eye - the smaller image is more similar to the reference image at that scale than the larger image - or in other words, the larger image lacks more detail compared to what can be recorded at the larger size. To put it differently again - the smaller image is less over sampled than the larger image.

     In any case, I believe it is a good thing to be aware of what a properly sampled image at a certain image size looks like.
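     A minimal sketch of that comparison workflow in Python (my own illustration, not the tool used above; file names are hypothetical and Pillow is assumed) - downscale a high resolution reference to the capture's pixel dimensions and blink the two:

     ```python
     from PIL import Image

     # Hypothetical file names - substitute your own capture and a high
     # resolution reference framed on the same features.
     reference = Image.open("reference_high_res.png")
     capture = Image.open("my_capture.png")

     # Scale the reference down to the capture's pixel dimensions - at that
     # size it holds the maximum detail the sampling rate can represent
     # (assuming both images cover the same field of view).
     reference_matched = reference.resize(capture.size, Image.LANCZOS)

     # Save both for a side by side / blink comparison. If the capture looks
     # softer than the matched reference and can't be sharpened to match it,
     # the capture is over sampled for the detail it actually contains.
     reference_matched.save("reference_matched.png")
     capture.save("capture_for_comparison.png")
     ```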
  9. Depends on the sensor-filter distance, the F/ratio of the optics and the amount of vignetting you are willing to accept, but in general - you want the diagonal of the sensor to be slightly smaller than the filter diameter (clear aperture). For 1.25" filters - you have something like 27mm of clear aperture, so you want to limit your sensor to 4/3 (which has a diagonal of about 22-23mm). APS-C is about 28mm, so you need 36mm filters, and so on ...
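     A quick back-of-the-envelope check of those numbers (just illustrative arithmetic, ignoring the extra margin that sensor-filter distance, F/ratio and acceptable vignetting would add):

     ```python
     import math

     def diagonal_mm(width_mm, height_mm):
         # Sensor diagonal from its width and height
         return math.hypot(width_mm, height_mm)

     # Approximate sensor dimensions
     four_thirds = diagonal_mm(17.3, 13.0)   # ~21.6 mm
     aps_c       = diagonal_mm(23.5, 15.6)   # ~28.2 mm

     # 1.25" filters have roughly 27 mm of clear aperture
     clear_aperture = 27.0

     for name, diag in (("4/3", four_thirds), ("APS-C", aps_c)):
         verdict = "fits" if diag < clear_aperture else "needs a larger filter"
         print(f"{name}: diagonal {diag:.1f} mm -> {verdict} behind a 1.25 inch filter")
     ```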
  10. If you want to resample raw data (linear) - then ImageJ with TransformJ offers a large number of resampling algorithms. If you want to resample a processed image - use IrfanView - it has Lanczos resampling (that is what I used to compare the two Mars images above at the same scale). I think that alignment point size should be matched to seeing conditions rather than to the size of the target. If the seeing quivering is small and rapid - one should use small alignment points, but if it is large in magnitude, or if the seeing is without larger variations of tilt (just softening of the image without quivering) - then it is OK to use larger alignment points.
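      If you would rather do this in Python than ImageJ/IrfanView, a rough equivalent (my assumption, not what was used for the Mars comparison) is scipy.ndimage.zoom for linear data and Pillow's Lanczos filter for the processed image:

      ```python
      import numpy as np
      from scipy.ndimage import zoom
      from PIL import Image

      # Linear (unstretched) data: spline interpolation of selectable order,
      # loosely analogous to the choice of algorithms in ImageJ / TransformJ.
      linear = np.random.rand(1024, 1024).astype(np.float32)  # stand-in for real raw data
      half_res = zoom(linear, 0.5, order=3)  # cubic spline resample to 50%

      # Processed (stretched) image: Lanczos resampling, as in IrfanView.
      img = Image.open("mars_f13.png")  # hypothetical file name
      upscaled = img.resize((img.width * 2, img.height * 2), Image.LANCZOS)
      upscaled.save("mars_f13_x2.png")
      ```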
  11. I agree - but I maintain that that is a consequence of BlurX butchering the binned version, not of the over sampled version being somehow superior to the binned one. HST data serves a particular purpose in these comparisons - not because it is HST data, but because it shows what level of detail can be recorded at a particular sampling rate. When we lower the sampling rate for both the HST data and a particular image - at some point we will have two identical looking images. Any advantage the HST image had over the other image will have been negated by the sampling rate, and at that point the sampling rate is matched to the underlying data of the non-HST image. As long as the HST image is "beating" your image at a particular scale - you are effectively over sampled or not sharpened enough.

      Here - look what happens when I scale down Olly's data to 66% of its size (that will make it effectively ~1.36"/px): the difference in detail between the two starts to reduce even further. I agree that different processing will yield different results - but that was not the point. The point was - at 0.9"/px, sharpening that has been available in Gimp for several years (and in the G'mic collection even longer) will give you better results than BlurX - closer to what the HST data shows at 0.9"/px - but still not quite there yet (because we are over sampled at 0.9"/px, let alone at 0.46"/px). Luminance contains most of the detail/sharpness information of the image, and that is why it is good to compare just luminance. We can use the same chrominance to color each of those luminances and the difference in sharpness will still be the same.

      In any case - that is my view of BlurX - I'm not particularly impressed with it. I believe that better results can be obtained with a different line of processing, and I certainly don't think it can "tease out" detail in over sampled images over properly sampled ones. I've shown what I believe to be evidence to support my view, but of course - people are free to prefer and use any particular processing workflow that suits them, and this should not be viewed as an argument against that.
  12. Just for reference - here is what level of detail can be had at what sampling rate, and a comparison (I tried to make the very high resolution reference image resemble the capture as much as possible without loss of detail): the level of detail in the properly sampled image looks much closer to the level of detail that can maximally be held at that sampling rate. The difference in level of detail in the larger image is greater.
  13. You can get an equally decent image size by taking the properly sampled image and just upscaling it - look at my post above. I personally prefer a smaller but sharper image to a large blurry one.
  14. You say that you prefer the left one of these two: why? (The right one is sampled at F/13 and just upsampled to match the size of the left one.) I can see some features in the left image that are more distinct than in the right image, but I can also see features in the right image that are more distinct than in the left - which leads me to conclude that it is just seeing and stacking variation and a difference in processing between the two. Is there any particular feature that makes the left image better that we can attribute to the increase in sampling?
  15. Yes - but not because of sampling rate, rather because of how BlurX works. It failed to resolve detail in the binned image; it did not "over succeed" on the over sampled image - if you get what I'm saying. This is why I call for a comparison with another set of your two images sharpened by a different sharpening algorithm - one that won't fail on the binned image - much like Olly's data and the sharpening that I applied. Here it is again, but with increased contrast so you can see cluster detail versus the background spiral arm (above was just a quick levels stretch to bring the data out, not proper processing).

      The HST image clearly shows what a 0.9"/px sampling rate supports - how much detail can be captured by it - and both images fail to deliver even that much detail, let alone the level of detail at 0.46"/px. Here - look what can be captured at 0.46"/px: do you still think that the over sampled version captured detail anywhere close to that?
  16. Ok, here is the thing. When we capture data that is either properly sampled or over sampled - that data is still blurred, and any sharpening will bring out that blurred data. Think of it this way: detail brightness is multiplied by some number less than one - that is blurring (just replace "detail" with "frequency" in the previous sentence and you will get the correct statement, but people get scared when they read "frequency", so it is better to use the inaccurate but more familiar term "detail").

      - Large detail is, say, multiplied by 0.9
      - Medium detail is multiplied by 0.5
      - Fine detail is multiplied by 0.1
      - Ultra fine detail, which one might hope to capture by over sampling, is multiplied by 0

      Sharpening is nothing more than taking all those detail levels and dividing by the appropriate number (the inverse operation to multiplication) in order to restore the original value. So we take large detail and divide it by 0.9 and get Some_Value * 0.9 / 0.9 = Some_Value. Similarly, we divide medium detail by 0.5 and get Other_Value * 0.5 / 0.5 = Other_Value. In the same way we restore fine detail by dividing by 0.1, but there is no number that we can divide the following expression by to get the original value: Original_Value * 0 = 0. Whatever number you use as divisor - you won't get the original value, as 0 divided by any number is still zero. Even sharpening methods can't fully restore what can be captured at a certain resolution, because we have finite precision / some error in recording (finite SNR).

      Here, have a look at this: this is roughly 0.9"/px. Left is the Hubble reference, middle is your over sampled image sharpened with BlurX and presented at 50% of original size (again at 0.9"/px), and the right one is the image binned to 0.9"/px and then treated with BlurX. Even at 0.9"/px, with sharpening you can't reach the level of detail that is supported by 0.9"/px (left image), which clearly out-resolves the other two. BlurX did not bring out those features because the over sampled version somehow captured them and the 0.9"/px version did not. It failed to bring out that detail in the 0.9"/px version, but both versions captured that level of detail, and it is still lower than what can be recorded at 0.9"/px. I'm sure that other sharpening algorithms would bring out such detail even from 0.9"/px sampling - as is shown with Olly's data:
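      A small numpy sketch of that multiply/divide picture (toy one-dimensional data, purely illustrative) - it shows that every attenuated frequency can be divided back except the one multiplied by zero:

      ```python
      import numpy as np

      # Toy "image": one row of data, looked at in the frequency domain.
      rng = np.random.default_rng(0)
      signal = rng.normal(size=256)
      spectrum = np.fft.rfft(signal)

      # Blur = multiply each frequency by an attenuation factor that drops
      # towards zero for the finest detail (0.9, 0.5, 0.1, ... 0).
      attenuation = np.linspace(1.0, 0.0, spectrum.size)
      blurred_spectrum = spectrum * attenuation

      # "Sharpening" = divide by the same factors to undo the attenuation.
      # Where the factor was zero there is nothing left to restore, so we
      # only divide where the factor is non-zero and leave the rest at zero.
      restorable = attenuation > 0
      restored_spectrum = np.zeros_like(blurred_spectrum)
      restored_spectrum[restorable] = blurred_spectrum[restorable] / attenuation[restorable]

      restored = np.fft.irfft(restored_spectrum, n=signal.size)

      # Every frequency that was merely attenuated comes back; the one that
      # was multiplied by zero is gone for good, no matter what we divide by.
      print(np.allclose(np.fft.rfft(restored)[restorable], spectrum[restorable]))
      ```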
  17. I agree - but it did not sharpen as well as can be done, and the deblurring softness is just too artificial. What I do object to in your comparison is drawing conclusions about binning/over sampling from over sampled data and binned data that both had BlurX applied to them. You need the over sampled data and the binned data processed differently to be able to say that BlurX brings out something in the over sampled data that is not there in the binned data (provided that binning is producing correctly sampled data). As far as we know - the lack of detail in the binned data might just be because BlurX killed it. It is more than obvious that it kills data in the image. Part of your original image: Part of your bin x2 image from the first post, upscaled to match the size:
  18. How so? To my eyes the right one seems far more natural. Here is an example: from the left one it is not very clear what sort of nebulosity has been captured, the detail is just not there; on the right (at least to my eyes) - it is clear that we have a concentration of hot young stars. In general, the right one seems sharper, with more defined detail. Left one - Hubble reference, middle is Gold-Meinel sharpening, right one is BlurX.
  19. I'm not sure I have actually seen an example of evidence to support this. Could you post an example that shows your findings, or at least explain what you find better in over sampled images (like additional detail revealed, or anything else)?
  20. Here is an example of why I'm not impressed with the plugin: the left image is Olly's one from the last link (fully processed version), the right one is just the luminance from the drop box without that plugin applied - with a simple levels stretch in Gimp + Gold-Meinel deconvolution done in Gimp as well (G'mic plugins).
  21. I can't really pass judgement between the two unless we have data that has not been treated like this - to see the differences. I'm also not very impressed with this plugin. So far - all the examples that I've seen feel too artificial and obviously treated (over processed).
  22. Here is a simple analogy to help you understand what binning really is: imagine you have just 4 pixels - a 2x2 matrix - and that is your whole image. Instead of binning - imagine you do the following: take each pixel and place it in an image of its own. Now you've reduced the resolution of each image, but you have 4 independent images (no pixels repeat). What happens if you stack those 4 images? You get an SNR improvement. The same thing that happens when you stack subs happens when you bin - only at the pixel level.

      Focal length is very important as it dictates the scale of the object - which in turn spreads it over more or fewer pixels. Say you have some galaxy and your aperture captures 1000 photons per exposure from that galaxy. Now, if you spread that galaxy image over 100 pixels - then each pixel will get on average 10 of those photons, but if you spread the light over 1000 pixels then each pixel will on average get 1 photon. The more you spread the light - the less of it is recorded per pixel, and lower signal means worse SNR (per that exposure).

      That is because you are looking at the 1x1 scaled down. Zoom in on it at 100% and then compare it with the 3x3. Scaling an image down works a bit like binning (again, you are trading resolution for lack of noise). Look at this: this is the same image - the same copy - one viewed at 100% zoom and the other scaled to 50% (or 33%) - one looks much smoother than the other, but there is the same level of noise in both (as they are the same images).
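      A small numpy sketch of the binning idea (toy data, photon noise only) - summing each 2x2 block behaves like stacking four independent lower resolution exposures and roughly doubles the SNR:

      ```python
      import numpy as np

      rng = np.random.default_rng(1)

      # Fake "sky": flat signal of 10 photons per pixel on average,
      # recorded with photon (Poisson) noise only.
      signal_per_pixel = 10.0
      frame = rng.poisson(signal_per_pixel, size=(1000, 1000)).astype(float)

      def bin2x2(image):
          # Sum each 2x2 block into a single pixel (software binning).
          h, w = image.shape
          return image.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

      binned = bin2x2(frame)

      # SNR = mean signal / standard deviation of the noise
      snr_native = frame.mean() / frame.std()
      snr_binned = binned.mean() / binned.std()

      print(f"native SNR: {snr_native:.2f}")   # ~ sqrt(10) ~ 3.2
      print(f"binned SNR: {snr_binned:.2f}")   # ~ 2x better ~ 6.3
      ```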