Posts posted by Scott Badger

  1. 1 hour ago, Adam J said:

    An interesting thought occurs to me: turning the settings up to the max, while not visually pleasing, might produce extremes of results that give a clue as to the underlying process or direction being used by the AI.

    Adam

    Possibly, but it depends on the data I think.....  My seeing is generally poor to terrible and I'm very oversampled at 0.33"/pixel, so it's rare that my PSF is less than the 8 pixel maximum, plus I generally run BX at the default 0.9 sharpening amount, so I can't really push it much further. I've also found (though I haven't heard others make this comment) that when I've tried using BX on data with poor SNR (some of my first images, with little exposure time and taken with a DSLR), it didn't really go awry; it just didn't do much at all.

    Something else to note, and I'm not sure how much difference it makes, but according to RC, BlurX wasn't trained on stars any bigger than 8 pixels. For most, that should be fine, but if your seeing is poor and you're imaging at a relatively long focal length (like me), the results may not be optimal.
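
    For anyone wanting to check where they sit relative to that 8 pixel figure, the standard image-scale formula makes it a quick calculation. A minimal sketch in Python (the pixel size and focal length below are illustrative values only, not a statement of anyone's actual setup):

    ```python
    # Image scale in arcsec/pixel = 206.265 * pixel_size_um / focal_length_mm
    # (206265 arcsec per radian, with the um/mm units absorbing a factor of 1000).
    def image_scale(pixel_size_um, focal_length_mm):
        """Sky sampling in arcseconds per pixel."""
        return 206.265 * pixel_size_um / focal_length_mm

    def star_fwhm_pixels(seeing_fwhm_arcsec, scale):
        """Seeing-limited star FWHM in pixels; compare against BX's 8 px limit."""
        return seeing_fwhm_arcsec / scale

    scale = image_scale(3.76, 2350)                  # illustrative values: ~0.33"/px
    print(f"{scale:.2f} arcsec/px")
    print(f"{star_fwhm_pixels(3.0, scale):.1f} px")  # 3" seeing -> ~9 px, over the limit
    ```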

    BTW, I would encourage the OP to send the problem image to RC. He's very responsive to any inquiries and I'm sure he'd like to see an issue like that.

    Cheers,
    Scott

    • Like 1
  2. 10 hours ago, Adam J said:


    It only matters to me for three reasons: 1) if the result looks fake; 2) if the result unintentionally adds elements that did not appear at all in the original image; 3) if the result removes elements from the original image that you don't want to lose.

    For me the example above covers all three. 

    1) I find the stars too perfect to be believable. 

    2) A weird approximation of a planetary nebula was added. 

    3) In adding that nebula-like object, a star was totally removed from the image as opposed to repaired.

    It could well be settings that the OP has selected causing these issues, but that's another reason to take care.

    Now, does that mean I would not use it myself? No, I would use it. It means that you need to keep a very close eye on what it's doing. For me the biggest sin is adding anything not real to the image, as it moves the hobby too close to the art side and too far from the science side.

    To put a finger on it, I believe it's sometimes treating bright linear structures as stars. Another one I have seen is it creating little blobs on diffraction spikes. All in all it may not matter much, but it makes me think more is going on than just the claimed AI-administered deconvolution. So I would be checking my images.

    It's just an opinion; don't be offended by it.

    Adam

    No offense taken, and I'm enjoying the discussion. I don't disagree with anything you say above, other than your suspicion that more than just deconvolution is involved. I can't prove it, but Russ Croman has been very forthcoming with information about the tools he's created, and about AI in general, and for me at least he's established enough credibility that I'm willing to take him at his word unless or until there's clear evidence that something else is going on. That said, there's certainly no question that BX, like any other decon tool, can create artefacts if the settings aren't optimal, and sometimes, depending on the quality of the data, it won't have much effect at all. Like other tools still in development, bugs can also come up, and as I mentioned before, I think the OP's image may be an example of that. I've not seen anything like it with my use of BX, and there was a similar report in another thread (CL or the PI forum) from someone who had just gotten the new version (I haven't updated mine yet).

    To your examples, I've not seen it turn structures into stars, but I have seen the reverse: stars strung into a filament-like structure. This can happen (as it happened to me with Andromeda) when the stars are small and part of a larger scale structure, like a galaxy. Using the manual PSF setting and enabling "Nonstellar then Stellar" solved that particular issue, but in the end you sometimes just have to back off on the amount setting (or use a smaller PSF), even if you aren't getting as much improvement as you were hoping for. Anyhow, it's no different from any other tool in that artefacts are possible, and like you said, it's up to the operator to assess the results and try again if necessary. Something to note as well is that BX uses a tiled approach (part of what it can do that we can't with traditional tools), so depending on the number of stars in a particular tile, or the quality of stars in that part of the image, the sharpening effect and/or artefacts created can vary across the image. That's why the manual PSF mode should always be used, and maybe why it can appear that an artefact is something 'added' to the image (i.e. no direct/mathematical relationship with the image data) when it occurs in just one area, as opposed to throughout the image like we'd see with a traditional decon tool.
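
    To illustrate the tiling point, here's a toy sketch (emphatically not RC's code, and no claim about BX's internals) showing how any tile-local estimate, in this case a crude star-width proxy, can come out differently in different regions of the same frame when star quality varies:

    ```python
    import numpy as np

    # Toy sketch only -- NOT BlurXterminator's algorithm. It just shows why a
    # tile-local PSF estimate can differ from tile to tile in one frame.
    rng = np.random.default_rng(1)
    img = rng.normal(100.0, 2.0, (256, 256))
    yy, xx = np.mgrid[0:256, 0:256]
    # Tight stars in the top-left tile, bloated stars in the bottom-right.
    for cy, cx, sigma in [(60, 60, 1.5), (90, 40, 1.5), (190, 190, 3.5), (220, 170, 3.5)]:
        img += 800.0 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))

    for ty in (0, 128):
        for tx in (0, 128):
            tile = img[ty:ty + 128, tx:tx + 128]
            py, px = np.unravel_index(np.argmax(tile), tile.shape)
            # Crude width proxy: second moment of the peak row, +/-10 px window.
            lo, hi = max(px - 10, 0), min(px + 11, 128)
            row = np.clip(tile[py, lo:hi] - np.median(tile), 0.0, None)
            offs = np.arange(lo, hi) - px
            width = np.sqrt((row * offs ** 2).sum() / row.sum())
            print(f"tile ({ty},{tx}): width proxy ~ {width:.1f} px")
    ```

    The star-free tiles print an essentially meaningless number, which is the point: a per-tile estimate is only as good as the stars that tile happens to contain.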

    Cheers,
    Scott

  3. I don't know of any processing tool that gives you perfect results, and no one, especially the developer, is claiming that BlurX does. And like any software in continual development, there may be bugs that need fixing; according to a CL thread, it appears the latest release of BX does have a problem, especially when used in the auto PSF mode. When not broken and used properly, though, it can do deconvolution (the algorithm it's actually using) better than we can, mostly because it can work with millions of parameters, tens of millions even, where we can only handle a few. But in the end it's still up to the operator to look at the result of any process, assess it, and decide whether it's an improvement. You can blindly trust the result of any processing software. Or not.

    It *is* true that the nature of a trained neural network tool means a final result can't be 'predicted', or even back-engineered, i.e. no matter how well you know the code and the training, you can't know exactly what leads to a particular result. But for our work, I don't see why that matters. I couldn't follow the math, let alone the code, in most of the processes we use, 'AI' or not. For research purposes, that opaqueness *is* a problem, but again, Russ Croman is the first to say that BlurX or any AI software shouldn't be used for that purpose. That said, professional astronomers have recently gained a better understanding of the M87 black hole by using AI to sharpen the image.

    There is also the DIY vs DFY (done for you) divide, but for me it's like woodworking: there are some things I like to do with hand tools, and others where power tools and guides are my choice. Positioning this as all or nothing ("if you use BX you might as well just go out and get a Stellina and be done with it") is a straw man argument. It would be just as fair for an astrophotographer who uses film and manual guiding to level the same charge at anyone using electronics......

    Arguments regarding image manipulation in general get muddy really fast, and the use of AI has little to do with it. Generally speaking, there's corrective and aesthetic: sharpening stars and fixing their shapes is corrective; reducing the number of stars is aesthetic. That's pretty clear. But what about the background and the use of tools like DBE and noise reduction; where's the line between correction and aesthetic improvement there? Or how we stretch images and add contrast in a non-linear way, especially with tools like GHS? And then there's color...... I really don't see how any line drawn is anything but personal, or why anyone should feel compelled to process their images in a particular way. You can like, or not like, my images for all sorts of reasons, and the world goes on.

    Cheers,
    Scott

    • Like 1
  4. I'm surprised there's been no mention of the social impact that losing our night sky will have/is having, not just from Starlink, but from ground-based light pollution already increasing at an exponential rate. Consider how much of our narratives, mythologies, and religions are invested in the night sky -- the sense of wonder, awe, mystery, scale, and even fright that it brings to our lives. Just one or two generations from now, when stars and planets are simply a fact, not an experience, when all we can see (if anything) is a reflection of ourselves in the technology we've hidden it behind, our worldview can't help but be significantly changed, and I doubt for the better.......

    Scott

  5. 1 hour ago, Neil_104 said:

    I've considered this before, having looked into the telegizmo covers, but was afraid of rust due to evaporation of rain up inside the cover. You've never experienced this then?

    Maybe I should take another look into it, though I'd need to work out another place to dry the washing as the mount will be in the way 😂

    Though I have a small shed-style observatory now (a 4-panel roof instead of a roll-off; better wind protection), I had no problems with the TG cover I used to use. From -15F to 105F, through rain and snow. Cover your USB ports though! Lost one to a mud wasp nest…..

    Cheers,

    Scott

    • Thanks 1
  6. I agree with Robin, plus some of the new tools like BlurXterminator widen the acceptable range, given how much they can improve resolution and eccentricity. It's a dark art and everyone has their own stew recipe, but in the end, knowing whether any single marginal sub is going to improve or diminish the integration, or for that matter the final image, is a guesstimate at best. I don't know if it's possible, but wouldn't it be great if, for a given set of selected subs, PI's Subframe Selector could calculate what the stats of the integration would be?
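
    In the meantime, a back-of-envelope version is possible outside PI. A minimal sketch, assuming the subs are combined as an inverse-variance-weighted average (a simplification, not PixInsight's actual weighting scheme), with made-up signal/noise numbers:

    ```python
    import math

    # Rough stacked-SNR estimate for a noise-weighted average of subs.
    # Simplified model; NOT PixInsight's actual weighting.
    def stacked_snr(signals, noises):
        """signals/noises: per-sub estimates in the same linear units."""
        weights = [1.0 / n ** 2 for n in noises]   # inverse-variance weights
        wsum = sum(weights)
        signal = sum(w * s for w, s in zip(weights, signals)) / wsum
        noise = math.sqrt(sum((w * n) ** 2 for w, n in zip(weights, noises))) / wsum
        return signal / noise

    good = stacked_snr([100.0] * 20, [10.0] * 20)
    with_marginal = stacked_snr([100.0] * 20 + [100.0], [10.0] * 20 + [40.0])
    print(f"20 good subs:            SNR ~ {good:.1f}")           # ~44.7
    print(f"+ 1 poor sub (4x noise): SNR ~ {with_marginal:.1f}")  # ~44.8
    ```

    On paper, a properly down-weighted marginal sub can never make the stack worse, it just adds almost nothing; the real keep-or-bin question is about artefacts (planes, gradients, bloated stars) that weighting doesn't capture.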

    For myself, it's hard not to let the blood-sweat-and-tears factor influence my decisions...... : )

    Cheers,
    Scott

  7. Thanks Vlaiv! In a confusing way, that does make sense!.... : ) And a more complete answer than I got from MB. They simply said that "the lower the arcsec, the better the seeing and the higher the index 1 should be" and "we're currently working on improving the meteoblue seeing forecast". Anyhow, I don't really need a chart, clearly my seeing is just plain bad!....

    Cheers,

    Scott

  8. When I look at the current Meteoblue seeing chart (screenshot attached), the best seeing index value corresponds with the highest arcsec value, which is the opposite of what I would have expected...... Am I missing something basic? The info page says that the arcsec value is based on both indexes and bad layers; does that mean I should pay more attention to the arcsec values than the index values?

    Thanks,

    Scott

    seeing.JPG

  9. I’ll add that for anyone who hasn’t read the documentation for BX, I highly recommend it. It addresses nearly every concern that’s been raised and makes it clear that this isn’t magic. The tool’s output, like every other decon or sharpening tool’s, is an approximation. Many different sharpened images could be convolved back into the same original, and accuracy depends on many factors, including the settings used and the nature of the input data.
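
    That ill-posedness is easy to demonstrate numerically. A minimal sketch (pure numpy, nothing to do with BX itself): two visibly different 'sharp' scenes, blurred by the same PSF, end up so close that any realistic noise makes them indistinguishable, so a deconvolution has to pick one answer from many plausible ones:

    ```python
    import numpy as np

    def gaussian_psf(size=15, sigma=3.0):
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
        return psf / psf.sum()

    def blur(img, psf):
        # FFT-based convolution (periodic boundaries are fine for a demo).
        return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, img.shape)))

    # Two different "truths": a single star vs. a tight double of the same flux.
    a = np.zeros((64, 64)); a[32, 32] = 1.0
    b = np.zeros((64, 64)); b[32, 31] = 0.5; b[32, 33] = 0.5

    psf = gaussian_psf()
    diff = np.abs(blur(a, psf) - blur(b, psf)).max()
    peak = blur(a, psf).max()
    print(f"peak ~ {peak:.1e}, max difference ~ {diff:.1e}")  # difference ~5% of peak
    ```

    Add shot noise and read noise on top and that few-percent difference vanishes, which is exactly why the output has to be read as one plausible reconstruction, not the unique truth.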

    Cheers,

    Scott

    EDIT: RC's own words, "[...] it is up to each of us whether our final images are faithful representations of reality, or grotesque, over-processed messes. [....] A scalpel can be a precision life-saving tool, or a murder weapon, depending on how it's wielded."

    Ha! I don't think even BX's strongest critics have used words like 'grotesque' and 'murder'!..... : )

    • Like 1
  10. 17 hours ago, ollypenrice said:

    I've been inputting PSFs generated by PSFImage but haven't yet come to any conclusions worth sharing.

    Olly

    I found that with the embedded stars of NGC 206 in M31, there was a lot of bridging with the default setting. Using a PSF diameter of 3 px did a much better job, but that’s a much smaller diameter than the actual PSF of the image. I also played around with a lum integration of M51, and using a more accurate value (8 px) significantly increased the de-blurring over either a lower, inaccurate PSF diameter or Auto, but it looked a little unnatural; a sort of ‘glassy’ or ‘liquidy’ look, like I’ve seen with some of the examples posted. The documentation notes that under the automatic setting any coma etc. corrections to stars will also be applied to non-stellar features, and that it’s likely to overcorrect images, or areas of images, that don’t have stars. I’m not sure, though, how an inputted PSF works with a more localized application of deconvolution.
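
    For picking the manual value, one option is to measure a few unsaturated stars yourself rather than guessing. A rough sketch of the idea using second moments on a star cutout (numpy only; PI's FWHMEccentricity script or PSFImage will do this far more rigorously):

    ```python
    import numpy as np

    def fwhm_from_cutout(cutout):
        """Estimate a star's FWHM in pixels from a small cutout via second
        moments. Rough: assumes one unsaturated star on a flat background."""
        data = np.clip(cutout - np.median(cutout), 0.0, None)  # crude bkg removal
        total = data.sum()
        yy, xx = np.indices(data.shape)
        cy = (data * yy).sum() / total
        cx = (data * xx).sum() / total
        var = (data * ((yy - cy) ** 2 + (xx - cx) ** 2)).sum() / total / 2.0
        return 2.355 * np.sqrt(var)   # FWHM = 2*sqrt(2*ln 2) * sigma for a Gaussian

    # Self-test on a synthetic sigma = 2.5 px star (true FWHM ~5.9 px):
    yy, xx = np.mgrid[0:25, 0:25]
    star = 1000.0 * np.exp(-((yy - 12) ** 2 + (xx - 12) ** 2) / (2 * 2.5 ** 2))
    print(f"{fwhm_from_cutout(star):.1f} px")   # ~5.9
    ```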

    In any case, I think we still have much to learn about using BX, and the software probably still has some learning of its own to do, plus development tweaks and fixes, but it seems like it’s being judged against an expectation that v1.0 is going to work perfectly on any image using the default settings.

    I also understand those who feel it makes things too easy, but that’s a strictly personal opinion, and like woodworking, say, you can always choose a power tool or hand tool for any step of the process.

    Cheers,

    Scott

    • Thanks 1
  11. A lot of broad judgments here based on first-time uses and little experimentation..... For example, I haven't seen any mention of using an inputted PSF diameter instead of the auto setting, and I've seen that that can have a significant effect on both artefacts and the degree of de-blurring. Every processing tool can be pushed too far or misused, and RC has made it clear that BX has its limitations. How it's used and when it's used are just as important as whether it's used.

    Cheers,
    Scott

    • Like 1
  12. I have a QHY 268M (CMOS) and a fairly dark site, and even at 120s exposures of M13, I got the vertical banding mentioned above and very low background ADU values (0.007) in the integration.

    In any case, as has mostly been said, longer exposures have a higher SNR because the noise increases as the square root of the signal, i.e. both noise and signal increase over a longer exposure time, but the signal increases faster. The trade-off is a higher percentage of unusable subs due to planes, clouds, poor guiding, etc., and the longer the exposure, the more imaging time each rejected sub wastes.
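
    That square-root relationship is easy to sanity-check with a toy Poisson model (idealised: shot noise only, no read noise, and the flux number is made up):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    rate = 5.0   # hypothetical target flux, electrons/sec/pixel

    for t in (30, 120, 480):
        counts = rng.poisson(rate * t, size=100_000).astype(float)
        snr = counts.mean() / counts.std()
        print(f"{t:3d}s: signal ~ {counts.mean():6.0f} e-, "
              f"SNR ~ {snr:4.1f} (sqrt model: {np.sqrt(rate * t):4.1f})")
    ```

    Quadrupling the exposure doubles the per-sub SNR, which is the 'signal grows faster than noise' point in numbers.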

    Increasing the number of subs doesn't increase the signal, but it increases SNR by allowing the averaging process to push the random noise closer to zero. So more detail is exposed, because the noise it's buried in (in the individual subs) is stripped away. Think of removing the sediment to expose a fossil (and really, our images are photon phossils...). The more subs, the less noise, but at a diminishing return: the noise reduction goes as the square root of the number of subs.
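
    The same toy model shows the stacking side (again shot noise only; the per-sub signal level is made up):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    signal = 600.0   # hypothetical per-sub electrons in a given pixel

    for n in (1, 4, 16, 64):
        subs = rng.poisson(signal, size=(n, 100_000)).astype(float)
        stack = subs.mean(axis=0)                  # average combine
        print(f"N = {n:2d}: stack noise ~ {stack.std():4.1f} e- "
              f"(sqrt model: {np.sqrt(signal / n):4.1f})")
    ```

    Sixty-four subs cut the noise by a factor of eight; the next factor of two costs another 192 subs, which is the diminishing return.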

    EDIT: I think the general rule is: as long an exposure as equipment and conditions allow, and as many exposures as patience allows.... The one exception might be if seeing is particularly bad; then a lucky-imaging approach of taking thousands of subs, each just a few seconds of exposure, can work and improve resolution, if you have the computer power for it.

    Cheers,
    Scott

    • Like 1