
Everything posted by Martin Meredith

  1. That info is often saved in the FITS header too. 'STACKCNT' comes to mind.
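As a sketch of what reading such a keyword involves: a FITS header is just a sequence of 80-character ASCII 'cards', so a stack count can be pulled out with a few lines of code. The 'STACKCNT' keyword is software-dependent, and the toy header below is invented for illustration; in practice a library like astropy does this properly.

```python
# Minimal sketch: pull a stack-count keyword out of raw FITS header bytes.
# FITS headers are 80-character ASCII "cards"; 'STACKCNT' is an assumed
# keyword -- the exact name depends on the capture software.

def read_keyword(header_bytes: bytes, keyword: str):
    """Scan 80-char FITS cards for `keyword` and return its value as a string."""
    for i in range(0, len(header_bytes), 80):
        card = header_bytes[i:i + 80].decode("ascii", errors="replace")
        if card.startswith(keyword.ljust(8)) and card[8:10] == "= ":
            # The value field runs up to an optional '/' comment separator
            return card[10:].split("/")[0].strip()
        if card.startswith("END"):
            break
    return None

# A toy header with a stack-count card, purely to demonstrate:
cards = [
    "SIMPLE  =                    T",
    "STACKCNT=                   42 / number of stacked subs",
    "END",
]
header = b"".join(c.ljust(80).encode("ascii") for c in cards)
print(read_keyword(header, "STACKCNT"))  # -> 42
```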
  2. The seagull shapes are hot pixels. If you're using StarlightLive there is (I believe) an option to get rid of them; alternatively, you can apply darks, which will have the same effect. On the plus side they provide a nice way to diagnose mount movement! Martin
  3. Indeed, this is a fascinating group of objects. Thanks for your description. With the benefit of a slightly larger FOV it is possible to include yet another intriguing object: VV 1419 (which is in your Abell thread image). This is the brightest member of the cluster. I'm not sure what the apparent triple-nucleus signifies. The SDSS (9) image of this region is marvellous and suggests this part of the sky would make a great AP target. Here's an article about the Abell group
  4. It's been a few years since I looked at this marvellous galaxy (6 to be exact) and I obviously need to go back to it in colour/Ha. This is a shot captured using LodestarLive (complete with overexposed blocky stars).
  5. Lovely shots of some fascinating galaxies. Holmberg II is still on my list. Arp 210 is an intriguing galaxy with so many clear knots. Here's my attempt, which I think is worth comparison if only because the large Lodestar pixels resulted in undersampling on this occasion. When I use ROI in the new version of Jocular with my ASI 290MM I can get down to quite small FOVs and it's pot luck as to whether they get solved. I currently go down to GAIA G mag 15.5 only, so perhaps I need to introduce a few more stars into the platesolving procedure. BTW are you using your ASI via Jocular or via the watched folder?
  6. Nice shot of Abell 610. I particularly like the echelon of 3 (or 4) edge-ons just N of centre. I also looked at this a couple of years back and for some reason left it stacking for quite a while. I'm not sure I got any deeper as a result, but here's the shot. The cluster extends out still further and the area is replete with tiny galaxies.
  7. Alt-az works fine for EAA even at longish focal lengths. I've only ever used alt-az (at 800mm) for the last 8 years and can expose for up to 30s without problems, although it depends which part of the sky I'm pointed at. Nowadays I tend to limit my exposures to the range 5-20s. A different issue you might have with your proposed scope/camera combination is undersampling. I've used an ED77mm Borg refractor at its native f6 with a Lodestar X2 mono and much of the time (ie when seeing was moderate or better) was getting blocky stars due to undersampling. I'm planning to try again with the smaller pixels of my ASI 290MM soon. Martin
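To put rough numbers on the undersampling point: the standard image-scale formula is 206.265 * pixel size (um) / focal length (mm). The specific figures below -- 8.2 um pixels for the Lodestar X2 mono, 2.9 um for the ASI 290MM, and ~462 mm for the 77mm Borg at f6 -- are my assumptions for illustration, not values given in the post.

```python
# Rough sampling check for the two camera options mentioned above.
# Pixel sizes (8.2 um Lodestar X2 mono, 2.9 um ASI 290MM) and the
# 462 mm focal length (77 mm aperture at f/6) are assumed values.

def pixel_scale(pixel_um: float, focal_mm: float) -> float:
    """Image scale in arcseconds per pixel: 206.265 * pixel / focal length."""
    return 206.265 * pixel_um / focal_mm

focal = 77 * 6.0  # Borg 77ED at native f/6 -> 462 mm
for name, pix in [("Lodestar X2", 8.2), ("ASI 290MM", 2.9)]:
    print(f"{name}: {pixel_scale(pix, focal):.2f} arcsec/pixel")
```

With those assumptions the Lodestar comes out around 3.7"/pixel and the ASI 290MM around 1.3"/pixel, which is consistent with the blocky stars on the former in moderate-or-better seeing.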
  8. Does the log specify anything (like missing libusb?). As mentioned here, it appears that some Mac users have to build libusb. Martin
  9. I don't have a final position. You? If you don't like my objections to undersampling so far, don't worry, I have others.

From a theoretical perspective, you're fond of citing Nyquist and reconstruction, but perfect reconstruction applies to band-limited signals only. Under what conditions might we increase our chances of failing to meet that condition -- i.e. of having spatial frequency content above the Nyquist limit? When we are undersampled; and the more undersampled we are, the less 'perfect' our reconstruction becomes. To frame the discussion of undersampling in Nyquist terms is a bit of a mistake in my opinion. Nyquist is incapable of providing a complete picture of the frequency content of the signal prior to sampling. We can even be undersampled when astronomy.tools tells us we're oversampled, unless we incorporate a spatial lowpass filter at some point prior to sampling. Fortunately (?), the scope itself acts as a spatial lowpass filter, and the atmosphere, poor tracking and other disturbances help us out in practice. How much do they help us out? It varies from night to night and from kit to kit. Theory isn't going to be much help there. What we can say is that if the problems I allude to are going to surface, they are going to do so on the nights of best seeing. And by failing to remove frequencies above the Nyquist limit prior to sampling, we will not be able to perfectly reconstruct even the band-limited signal. Any discussion of undersampling only makes sense in terms of seeing first, and Nyquist second. Only by taking seeing into account can we estimate just how bad our 'perfect' reconstruction is going to be.

As an aside, I work in audio/speech, and for a long time the received wisdom was that there was no useful information in speech above 8 kHz, leading to sampling rates of 18-20 kHz.
Now we know that there is quite a lot of informative energy above 8 kHz, demonstrated for instance by colleagues working in speech synthesis who absolutely insist on training their systems using speech data sampled at up to 50 kHz (to catch frequencies up to ~20 kHz). You could say we were undersampled all along -- but at least we were able to use a lowpass filter to ensure that the conditions for the Nyquist theorem to apply were met, and reconstruction was near-to-perfect.

Back to images. So in a hypothetical future where Vlaiv's arguments have convinced readers to go out and buy a sensor with (too-)large pixels, then what? We go ahead and sample the night sky. And we have no way at all of knowing how much energy at high spatial frequencies has found its way into our sensor. They're just photons, after all. They don't know that they're part of some cosmic configuration with their fellow photons that collectively represents a spatial frequency too high to be captured by our too-large pixels. But these photons, having made it to our scope, have to go somewhere. They don't necessarily produce the aliasing artefacts so beloved of computer graphics/vision textbooks, but they're there just the same: unwanted noise. I've always felt globular clusters are good generators of high spatial frequencies.

So on a night of excellent seeing -- the ones we all pine for and celebrate and spend long hours observing in -- those of us with large-pixel sensors (and of course I mean large with respect to focal length -- I'm talking about resolution here) may well harbour a nagging suspicion that our sensor is (1) not going to capture the additional detail that's available, and worse, (2) translating that additional lost detail into noise that is impossible to detect in our captures. Yes, it's a double whammy. (We've mainly been focused in this thread on the first part.)
Here's a shot (in EEVA mode, nothing exciting) from the best night's seeing I can remember in the last 10 years. Do I believe I was undersampled on this occasion? I have absolutely no way of knowing, and neither does Vlaiv. (No need to look too closely at those stars -- you already know what you're going to see.)

BTW Vlaiv: no need to mention Lanczos again -- I get that I can smooth the rough edges off these stars, but perfect reconstruction? Who can say. See above. I never guide and I use alt-az, not eq, so you can imagine that things would be even worse (since I'd be successfully focussing the signal onto a tight range of pixels) if I were to guide and use the mount in eq mode. At least I can use the mount and within-sub field rotation to cover up the sins of undersampling.

In the days of high read-noise CCDs there was maybe a point in not trying too hard to avoid undersampling, but with CMOS, small pixels and low read noise, why risk it? Fractional binning to achieve critical sampling and the best possible SNR may well be the way to go -- more experiments are needed in that direction. But that option is just not there if you're in the region of being undersampled to begin with.

Now I neither expect nor indeed desire to convince you, Vlaiv. I just want to leave this here so that anyone coming to this thread doesn't think that everyone agrees with you. Martin

Added in edit: there are techniques like [1] that can handle some classes of undersampled images by eliminating aliasing to an extent
[1] https://arxiv.org/pdf/astro-ph/9810394.pdf
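The band-limiting argument above can be made concrete with a toy 1-D example: a frequency above the Nyquist limit doesn't disappear when sampled, it folds down onto a lower frequency and becomes indistinguishable from genuine signal -- which is exactly why any lowpass filter has to act before sampling, not after. The numbers here are arbitrary.

```python
# Toy illustration of aliasing: a signal above the Nyquist rate reappears
# ("folds down") as a lower frequency after sampling, and the two sampled
# sequences are numerically identical -- no post-hoc processing can tell
# them apart. Frequencies and sample rate are arbitrary illustrative values.
import math

fs = 48.0                   # sampling rate (arbitrary units)
f_true = 30.0               # signal frequency, above Nyquist (fs/2 = 24)
f_alias = abs(f_true - fs)  # folds down to 18.0

n = range(16)
s_true = [math.cos(2 * math.pi * f_true * k / fs) for k in n]
s_alias = [math.cos(2 * math.pi * f_alias * k / fs) for k in n]

# Maximum difference between the two sampled sequences is ~0:
print(max(abs(a - b) for a, b in zip(s_true, s_alias)))
```

After sampling, the 30-unit signal is literally the same list of numbers as an 18-unit one; that is the sense in which out-of-band energy turns up as unremovable contamination.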
  10. We are in complete agreement on one thing -- that it is going round in circles.
  11. I don't think anyone on this thread is doubting Shannon-Nyquist
  12. How do you know it's not peer-reviewed? Many conferences have very strict peer reviewing (up to 4 reviewers for one in my field). And what's so odd about all the authors coming from the same institution, discussing their company's product? It is really very normal in the scientific world.
  13. Hi Geoff Is your 8" f/10 scope reduced or operating at its native focal length (and if so, are you guiding?). If native, you might also have issues with the small FOV (0.14 x 0.1 degree) for that combination in terms of getting enough stars to stack (or platesolve, if you do that). Under the right circumstances (great tracking and seeing) there are PNs that ought to turn out well with that setup, but much of the time I imagine it will be a struggle. Martin
  14. In the EEVA use case I often zoom well past 3x and it is a perfectly acceptable thing to do. I don't expect resolution to increase, of course, but zooming has other purposes. And on your 3, we will just have to agree to disagree rather than go round in circles. My opinion: oversampling: not bad at all (because of binning). Undersampling: (potentially) irrecoverable loss of detail coupled with square stars, unless a (potentially) artefact-inducing interpolation algorithm is used, always assuming you have good criteria to decide which one to use. Source: https://matplotlib.org/stable/gallery/images_contours_and_fields/interpolation_methods.html (and yes, I spotted that None and none and nearest are the same; not my image, no need to jump in on this; and I know that interpolation is going to be needed; just that starting off with a small-pixel sensor means it is simpler for it to do its job) Martin
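For what it's worth, the 'square stars' effect is easy to reproduce: nearest-neighbour enlargement of an undersampled star simply replicates each sensor pixel as a solid block. A minimal sketch with toy data (not from any real capture):

```python
# Sketch of why the enlargement method matters: nearest-neighbour zoom of
# an undersampled star replicates each pixel into an NxN constant block,
# which is exactly the "square star" look. Toy data for illustration only.

def zoom_nearest(img, factor):
    """Enlarge a 2-D list of values by an integer factor, nearest-neighbour."""
    return [[img[r // factor][c // factor]
             for c in range(len(img[0]) * factor)]
            for r in range(len(img) * factor)]

# A 'star' whose light fell almost entirely on a single pixel:
star = [[0, 0, 0],
        [0, 9, 0],
        [0, 0, 0]]

big = zoom_nearest(star, 4)
# The bright pixel becomes a flat 4x4 square -- 16 identical values,
# with a hard edge and no hint of a round profile:
print(sum(v == 9 for row in big for v in row))  # -> 16
```

A smoother interpolant (bilinear, Lanczos, ...) would ramp between samples instead, which is exactly the trade-off discussed above: rounder-looking stars at the risk of interpolation artefacts.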
  15. I've implemented fractional binning in an EEVA context to allow a user to match the effective pixel size to seeing (in principle) to achieve 'effective critical sampling'. This is all done using a slider, with immediate visual feedback to see how far you can go before detail is lost. At that point SNR improvements due to binning are optimised with no loss in resolution. That's the idea, anyway! From a mathematical point of view, the approach underlying fractional binning is a perfectly well-defined operation, and while normal binning (ie adding groups of pixels) might seem somehow purer and more correct, it has poorer mathematical properties in terms of what it does to the spatial frequency content of the image. But this is probably off-topic for the current thread -- I suggest we have a separate thread on fractional binning if there is interest. Martin BTW There is quite a lot of debate elsewhere about the topics of this thread. Here are some links from CN:
https://www.cloudynights.com/topic/654570-another-view-of-small-pixels-vs-big-pixels/
https://www.cloudynights.com/topic/789663-pixinsight-square-star-issue/?hl=square+stars#entry11370250
https://www.cloudynights.com/topic/757590-undersampling-and-oversampling/?hl=square+stars#entry10907380
https://www.cloudynights.com/topic/756676-advice-on-sensor-sizeunder-oversampling/?hl=square+stars#entry10896038
https://www.cloudynights.com/topic/747946-arc-seconds-per-pixel-confusion-can-you-help/?hl=square+stars#entry10766943
https://www.cloudynights.com/topic/728911-on-sampling-and-seeing-good-sources-for/page-2?hl=square stars
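For anyone curious what fractional binning amounts to, here is a minimal 1-D sketch of one way to do it (my own illustration, not Jocular's actual code): each output pixel averages a source span of non-integer width, with the partially-covered end pixels weighted by their overlap. In 2-D the same operation is applied separably to rows and columns.

```python
# Minimal 1-D fractional binning sketch (illustrative, not Jocular's code):
# output pixel i averages the source interval [i*factor, (i+1)*factor),
# weighting partially-covered source pixels by their overlap fraction.

def frac_bin(signal, factor):
    """Bin a 1-D signal by a (possibly fractional) factor > 1, averaging."""
    out = []
    n_out = int(len(signal) / factor)
    for i in range(n_out):
        lo, hi = i * factor, (i + 1) * factor
        total, weight = 0.0, 0.0
        j = int(lo)
        while j < hi and j < len(signal):
            # Overlap of source pixel [j, j+1) with the output span [lo, hi)
            w = min(j + 1, hi) - max(j, lo)
            total += w * signal[j]
            weight += w
            j += 1
        out.append(total / weight)
    return out

print(frac_bin([1, 1, 1, 1, 1, 1], 1.5))  # constant signal stays constant
```

A slider over `factor` is then just a re-run of this operation at different effective pixel sizes, which is what makes the immediate-feedback UI straightforward.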
  16. Yes, good catch. I've edited the post to remove that example for now. Here's a quiz. These are 8 images taken on different nights during the last couple of years with the exact same kit (800mm scope plus Lodestar). I typically make an impressionist record of the seeing most nights I'm out, so these 8 examples represent 2 each of what I recorded as 'excellent', 'good', 'ok' and 'poor'. Can you match them up? The above images are a reasonable real-world selection of what I see in front of me. All were produced with the same interpolation algorithm. Let's assume Vlaiv is correct and I am never undersampled with this setup and can therefore recover all the information by using e.g. Lanczos interpolation, regardless of how excellent the seeing is. Let's say I go ahead and apply that form of interpolation to everything I look at. My worry with doing that, as I mentioned earlier, is that interpolation methods are not free of artefacts, esp. the more sophisticated ones (the fact that the function offers me a choice of 19 interpolation methods tells its own story here). Would it not be 'purer' and certainly simpler to use a small-pixel (*) camera? Again, speaking from experience using the exact same 'bad' interpolation methods with the ASI 290, they do a much better job with the small-pixel camera, which is hardly surprising, to me at least. Martin (*) Vlaiv, a couple of posts up you say pixels don't have a size and shape; I'm not sure what you're talking about, as both display pixels and sensor pixels do have a size and shape. Sure, they can only represent one value, but that isn't what I'm referring to when I talk about small pixels, and I guess you know that.
  17. Meanwhile, here's a real example of what I'm talking about. I selected this image because it was a night of excellent seeing (as recorded at the time on the capture), and it is when the seeing is very good-excellent that I see square stars with the Lodestar. [edit]: removed incorrect interpolation = None example.
  18. So a simple yes/no Q for Vlaiv: do you agree with this statement: blockiness is *never* an indication of undersampling?
  19. The stars. I also wanted to pick up on something else you said towards the head of this thread. You said "Star won't be appear blocky or angular if it is under sampled. This is myth and is consequence of interpolation / resampling algorithm used" You appear to be saying that the raw data (captured by the sensor array) contains nice round stars that some brutalist interpolation/resampling algorithm has turned into blocks. But you can't possibly mean this. What you have gone on to demonstrate essentially goes in the opposite direction: by using a 'nice' interpolation algorithm we can turn the blocky stars we started out with into somewhat rounder stars. BTW nobody is going to disagree with this, though whether anyone would want to do it when the alternative is to sample properly in the first place is another matter. The fundamental question is why are they blocky in the first place? And the answer is that they are undersampled.
  20. Yes, I saw your earlier panel of stars and to me they look like smoothed squares or diamonds and really not very disc-like, and some contain ringing artefacts. Lanczos is known to produce dark pixel artefacts https://www.astropixelprocessor.com/community/tutorials-workflows/interpolation-artifacts/ (I've also seen this quite a lot). The point surely is that one can avoid this by increased sampling in the first place.
  21. Sure, tomorrow. But what you're seeing is exactly what I see with the tools I use -- no tricks here. Are you saying I should be (a) not enlarging them and instead squinting at the screen during my EEVA session; or (b) using a different tool with different interpolative functions? Or something else? I can only use what I have. Like I say, it's practical experience, which must surely count for something.
  22. Nobody is saying beautiful stellar discs => optimum sampling, but I think it's fair to say that ugly blocky discs are indicative of sub-optimal sampling.
  23. This is a more typical example (Lodestar on the right this time):
  24. And this was a good example for the Lodestar (left)