Scott Badger

New Members · 19 posts · Reputation: 36 Excellent


  1. Depending on your sampling, drizzle integration can improve resolution, but it requires dithering to work (there's a toy sketch of why after this list). Cheers, Scott
  2. FWIW, my first try with AI4 resulted in ringing around some stars which didn’t occur with AI2 at the same settings, but there was no ringing after running Correct Only first. Cheers, Scott
  3. Maybe the improvement is because the mount is slightly unbalanced in RA and after the flip it’s east-side heavy? Depending on the mount, that can sometimes result in better guiding than perfect balance. Cheers, Scott
  4. Astrophotography is a sea of red herrings…. Cheers, Scott
  5. Yes please. It would be great to hear any feedback he gives. Also, he’s done a number of interviews/presentations with TAIC, Adam Block, VisibleDark, etc., discussing his tools and AI as applied to AP, which are pretty interesting. Cheers, Scott
  6. Possibly, but depends on the data I think..... My seeing is generally poor to terrible and I'm very oversampled at 0.33"/pixel, so it's rare that my psf is less than the 8-pixel maximum, plus I generally run BX at the default 0.9 sharpening amount, so I can't really push it much further. I've also found, though I haven't heard others make this comment, that when I've tried using BX on data with poor SNR (some of my first images, with little exposure time and taken with a DSLR), it didn't really go awry; it just didn't do much at all. Something else to note, and I'm not sure how much difference it makes, but according to RC, BlurX wasn't trained on stars any bigger than 8 pixels. For most that should be fine, but if your seeing is poor and you're imaging at a relatively long focal length (like me), the results may not be optimal (see the quick pixel-scale arithmetic after this list). BTW, I would encourage the OP to send the problem image to RC. He's very responsive to any inquiries and I'm sure he'd like to see an issue like that. Cheers, Scott
  7. No offense taken and enjoying the discussion. I don't disagree with anything you say above, other than your suspicion that more than just deconvolution is involved. I can't prove it, but Russ Croman has been very forward with information about the tools he's created, and AI in general, and for me at least, he's established enough credibility that I'm willing to take him at his word unless or until there's clear evidence that something else is going on. That said, there's certainly no question that BX, like any other decon tool, can create artefacts if the settings aren't optimal and sometimes, depending on the quality of the data, it won't have much effect at all. Like other tools still in development, bugs can also come up, and as I mentioned before, I think the OP's image may be an example of that. I've not seen anything like it with my use of BX, and there was a similar report in another thread (CL or the PI forum) from someone who had just gotten the new version (I haven't updated mine yet). To your examples, I've not seen it turn structures into stars, but I have seen the reverse: stars strung into a filament-like structure. This can happen (as it happened to me with Andromeda) when the stars are small and part of a larger scale structure, like a galaxy. Using the manual psf setting and enabling "Nonstellar then Stellar" solved that particular issue, but in the end you sometimes just have to back off on the amount setting (or use a smaller psf), even if you aren't getting as much improvement as you were hoping for. Anyhow, it's no different than any other tool in that artefacts are possible and, like you said, it's up to the operator to assess the results and try again if necessary. Something to note as well is that BX uses a tiled approach (part of what it can do that we can't with traditional tools), so depending on the number of stars in a particular tile, or the quality of stars in that part of the image, the sharpening effect and/or artefacts created can vary across the image. That's why the manual psf mode should always be used, and maybe why it can appear that an artefact is something 'added' to the image (i.e. no direct/mathematical relationship with the image data) when it occurs in just one area as opposed to throughout the image like we'd see with a traditional decon tool. Cheers, Scott
  8. I don't know of any processing tool that gives you perfect results, and no one, especially the developer, is claiming that BlurX does. And like any software in continual development, there may be bugs that need fixing, and according to a CL thread, it appears the latest release of BX does have a problem, especially when used in the auto psf mode. When not broken and used properly, though, it can do deconvolution (the algorithms it's actually using) better than we can, mostly because it can work with millions of parameters, tens of millions even, where we can only handle a few. But in the end it's still up to the operator to look at the result of any process, assess it, and decide whether it's an improvement. You can blindly trust the result of any processing software. Or not. It *is* true that the nature of a trained neural network tool means a final result can't be 'predicted', or even back-engineered, i.e. no matter how well you know the code and the training, you can't know exactly what leads/led to a particular result. But for our work, I don't see why that matters. I couldn't follow the math, let alone the code, in most of the processes we use, 'AI' or not. For research purposes, that opaqueness *is* a problem, but again, Russ Croman is the first to say that BlurX or any AI software shouldn't be used for that purpose. That said, professional astronomers have recently gained a better understanding of the M87 black hole by using AI to sharpen the image. There is also the DIY vs DFY (done for you) divide, but for me it's like woodworking: there are some things I like to do with hand tools, and others where power tools and guides are my choice. Positioning this as all or nothing ("if you use BX you might as well just go out and get a Stellina and be done with it") is a straw man argument. It would be just as fair for an astrophotographer who uses film and manual guiding to level the same charge at anyone using electronics...... Arguments regarding image manipulation in general get muddy really fast, and the use of AI has little to do with it. Generally speaking, there's corrective and aesthetic; sharpening stars and fixing shapes is corrective, reducing the number of stars is aesthetic. That's pretty clear. But what about background and the use of tools like DBE and noise reduction, where's the line between correction and aesthetic improvement there? Or how we stretch images and insert contrast in a non-linear way, especially with tools like GHS? And then there's color...... I really don't see how any line drawn is anything but personal, or why anyone should feel compelled to process their images in a particular way. You can like, or not like, my images for all sorts of reasons, and the world goes on. Cheers, Scott
  9. I'm surprised there's been no mention of the social impact that losing our night sky will have/is having, not just from Starlink, but already lost to ground-based light pollution increasing at an exponential rate. Consider how much of our narratives, mythologies, and religions are invested in the night sky -- the sense of wonder, awe, mystery, scale, and even fright that it brings to our lives. Just one or two generations from now, when stars and planets are simply a fact, not an experience, when all we can see (if anything) is a reflection of ourselves in the technology we've hidden it behind, our worldview can't help but be significantly changed, and I doubt for the better....... Scott
  10. Another factor to consider is that the collective reflection from the 100,000-plus satellites planned, and their debris, will mean no more Bortle 1 sites. Anywhere. Cheers, Scott
  11. Though I have a small shed-style observatory now (4-panel roof instead of a roll-off, for better wind protection), I had no problems with the TG cover I used to use. From -15F to 105F, through rain and snow. Cover your USB ports though! Lost one to a mud wasp nest….. Cheers, Scott
  12. I agree with Robin, plus some of the new tools, like BlurXterminator, widen the acceptable range given how much they can improve resolution and eccentricity. It's a dark art and everyone has their own stew recipe, but in the end knowing whether any single marginal sub is going to improve or diminish the integration, or for that matter the final image, is a guesstimate at best. Don't know if it's possible, but wouldn't it be great if, for a given set of selected subs, PI's Subframe Selector could calculate what the stats of the integration would be? (There's a rough sketch of that idea after this list.) For myself, it's hard not to let the blood-sweat-and-tears factor influence my decisions...... : ) Cheers, Scott
  13. Thanks Vlaiv! In a confusing way, that does make sense!.... : ) And a more complete answer than I got from MB. They simply said that "the lower the arcsec, the better the seeing and the higher the index 1 should be" and "we're currently working on improving the meteoblue seeing forecast". Anyhow, I don't really need a chart, clearly my seeing is just plain bad!.... Cheers, Scott
  14. When I look at the current Meteoblue seeing chart (screenshot attached), the best seeing index value corresponds with the highest arcsec value, which is the opposite of what I would have expected...... Am I missing something basic? The info page says that the arcsec value is based on both indexes and bad layers; does that mean I should pay more attention to the arcsec values than the index values? Thanks, Scott
  15. A Halloween pair, Witch’s Broom and Ghost of Cassiopeia.
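A toy sketch of the drizzle point in post 1: on a 2x finer output grid, undithered subs all land on the same output pixels and the in-between pixels stay empty, while sub-pixel dithers fill the grid in. This is only a minimal nearest-drop illustration; the function name and offsets are made up here, and real drizzle implementations use overlap-weighted "drop" footprints rather than nearest-pixel placement.

```python
import numpy as np

def drizzle_stack(subs, offsets_px, scale=2):
    """Nearest-drop drizzle of registered subframes onto a finer grid.

    subs       : list of 2-D arrays (the registered subframes)
    offsets_px : list of (dy, dx) dither offsets in input pixels
    scale      : output pixels per input pixel (2 = 2x drizzle)
    """
    h, w = subs[0].shape
    out = np.zeros((h * scale, w * scale))
    hits = np.zeros_like(out)
    for sub, (dy, dx) in zip(subs, offsets_px):
        # Output-grid coordinates of every input pixel, shifted by the dither.
        yi = np.clip(np.round((np.arange(h)[:, None] + dy) * scale).astype(int),
                     0, h * scale - 1)
        xi = np.clip(np.round((np.arange(w)[None, :] + dx) * scale).astype(int),
                     0, w * scale - 1)
        yi = np.broadcast_to(yi, (h, w))
        xi = np.broadcast_to(xi, (h, w))
        np.add.at(out, (yi, xi), sub)   # "drop" each input pixel onto the grid
        np.add.at(hits, (yi, xi), 1.0)  # count coverage per output pixel
    # With all-zero offsets only every other output pixel gets a hit;
    # half-pixel dithers are what populate the finer grid.
    return np.divide(out, hits, out=np.zeros_like(out), where=hits > 0)

# Four half-pixel dithers fully populate a 2x grid; four identical
# pointings would leave 3/4 of the output pixels empty.
rng = np.random.default_rng(0)
subs = [rng.normal(100.0, 5.0, (8, 8)) for _ in range(4)]
result = drizzle_stack(subs, [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)])
```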
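The quick arithmetic behind the 8-pixel remark in post 6: stellar FWHM in pixels is just seeing divided by image scale, so at 0.33"/pixel even ordinary seeing is already at the 8-pixel limit RC reportedly used for BlurX's training. The seeing figures below are illustrative.

```python
def fwhm_pixels(seeing_arcsec, pixel_scale):
    """Stellar FWHM in pixels for a given seeing (") and image scale ("/px)."""
    return seeing_arcsec / pixel_scale

print(fwhm_pixels(2.6, 0.33))  # ~7.9 px: ordinary seeing, right at the 8 px cap
print(fwhm_pixels(2.6, 1.00))  # 2.6 px: the same seeing at a common image scale
```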
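A rough sketch of the "predict the integration's stats" idea in post 12. Subframe Selector doesn't expose this calculation, so the function and its inputs are hypothetical, but the underlying weighted-mean SNR formula is standard, assuming uncorrelated noise and a common signal level across subs. It also shows why a single marginal sub can diminish the stack.

```python
import math

def predicted_stack_snr(sub_snrs, weights=None):
    """SNR of a weighted mean of subs, given per-sub SNRs (unit signal)."""
    if weights is None:
        weights = [1.0] * len(sub_snrs)
    signal = sum(weights)  # each sub contributes weight * (signal == 1)
    noise = math.sqrt(sum((w / s) ** 2 for w, s in zip(weights, sub_snrs)))
    return signal / noise

print(predicted_stack_snr([5.0] * 10))          # ~15.8: sqrt(10) gain over one sub
print(predicted_stack_snr([5.0] * 10 + [1.0]))  # ~9.3: one noisy sub at full weight hurts
```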