Narrowband

GreatAttractor

Members
  • Posts: 521
Everything posted by GreatAttractor

  1. Don't worry, it's not that fragile. I recommend one of those optics cleaning microfibre cloths - sweep the sensor (or the window glass if your camera has one) with minimal pressure (since we just want to catch the specks, not "scrape away a stain") a few times, that should do it. Also, when changing cameras / attaching C-mount lenses etc., minimize the time the sensor is exposed - have a cover ready and reattach it immediately.
  2. Don't forget to grab the latest version, 0.6.1: https://github.com/GreatAttractor/imppg/releases It's much faster: all processing (except image alignment) is now performed on the GPU (details can be found here).
  3. You're welcome! It will probably be more convenient to use imgalt's successor, ImPPG (also for post-processing of stacks).
  4. Hi Nigella, Note that the "prevent ringing" function has no effect in an image like the one above. It's only applied around overexposed (fully white) regions, e.g. when shooting prominences with an overexposed disc (there's an example here: link).
  5. Hi Damian, great results! BTW, don't forget to get the latest version of ImPPG (much faster processing, now runs on the GPU).
  6. ImPPG is written using cross-platform libraries, so it can be compiled as a native OS X application. I have plans to eventually get myself a Mac, iron out any potential incompatibilities and create an OS X binary version for download. For the time being, you could try running the Windows version via Wine (should work fine, at least in the non-GPU-accelerated mode).
  7. A small bugfix update, version 0.6.1, is available at https://github.com/GreatAttractor/imppg/releases
     Bug fixes:
     - Invalid batch processing results in OpenGL mode
     Enhancements:
     - Tone curve window position reset command
     For the rare situations where window positioning does not work correctly, you can now force a reset of the tone curve window's position:
  8. Looks great & uniform. What's the telescope?
  9. ImPPG version 0.6.0 has been released. After some architectural cleanup, I added a GPU/OpenGL back end; that is, almost all processing (except image alignment) is now performed on the GPU. In practical terms, on most computers this means:
     - processing is faster by a factor of several or more for quite big selection sizes and moderate L-R iteration counts
     - the results are rendered immediately as you move the "sigma" slider
     - no delay of image refresh when scrolling with zoom ≠ 100%
     - cubic interpolation does not cause a slowdown
     You can see the new version in action in this short video: https://www.youtube.com/watch?v=giq4jCnC6KM
     Benchmarking result on my system (CPU: Ryzen 2700, 8 cores / 16 threads, 3.2 GHz base; GPU: Radeon R370). A typical workload: batch processing of 200 images, 1.2 Mpix each, with 50 iterations of L-R deconvolution, unsharp masking and tone mapping. (Note that in CPU mode all cores are used.)
     - CPU mode: 2:20 min
     - GPU mode: 19 s
     Even on a 5-year-old laptop with an integrated Intel GPU there is a noticeable speed-up.
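The arithmetic behind the quoted speed-up, as a quick sanity check (numbers taken directly from the benchmark above):

```python
# Quoted benchmark: 200 images of 1.2 Mpix each, 50 L-R iterations;
# CPU mode takes 2:20 min, GPU mode takes 19 s.
cpu_s = 2 * 60 + 20                    # 140 s total in CPU mode
gpu_s = 19                             # 19 s total in GPU mode
speedup = cpu_s / gpu_s                # ~7.4x
ms_per_image_gpu = gpu_s / 200 * 1000  # ~95 ms per image on the GPU
print(f"{speedup:.1f}x faster, {ms_per_image_gpu:.0f} ms/image on the GPU")
```

So the quoted figures work out to a speed-up factor of about 7.4, i.e. "several or more" as stated.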
  10. Image Post-Processor (ImPPG) version 0.6.0 is available at: https://github.com/GreatAttractor/imppg/releases
      New features:
      - GPU (OpenGL) back end for much faster processing
      Enhancements:
      - View scrolling by dragging with the right mouse button (previously: the middle button)
      - Zooming in/out with the mouse wheel (previously: Ctrl + mouse wheel)
      If you experience issues running in GPU mode, make sure you have the latest graphics drivers. (The old CPU mode can still be used.)
  11. Yes, those were the days... Not to mention 300 000-km filaments (2015):
  12. Thanks, very cool material indeed! Reminds me I should finally sit down and make a video tutorial myself.
  13. To emphasize dim prominences, it's indeed best to combine two exposures. But it's not absolutely necessary if the prominences are bright; for example, in the right image below (a 4-pane mosaic made with a Lunt 50), all that was needed was to gently "lift" the tone curve at the dark end (done in ImPPG):
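As an aside, the effect of lifting the dark end of the tone curve can be pictured with a toy gamma curve. This is illustrative only, not ImPPG's actual tone-curve code, and the exponent 0.6 is an arbitrary choice:

```python
def lift_shadows(v, gamma=0.6):
    """Map a normalized pixel value v in [0, 1] through v**gamma.

    A gamma < 1 raises the dark end of the tone curve strongly while
    leaving bright regions almost untouched: dim prominences brighten,
    the already-bright disc barely changes.
    """
    return v ** gamma

# A faint prominence pixel (0.05) roughly triples in brightness,
# while a bright disc pixel (0.9) gains only a few percent.
```

In ImPPG itself this corresponds to dragging the lowest tone-curve control points upward rather than applying a fixed formula.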
  14. My interpretation (ImPPG settings files attached): settings_1.xml settings_2.xml settings_3.xml settings_4.xml As for the last one you posted, the etalon was somewhat off-band; this is indicated by the flat, featureless splotches around the sunspot (I recognize the effect from my old setup using a Lunt 35): When you're tuning the etalon, a good rule of thumb is "being on band = darkest possible image".
  15. Hi Nigella, have fun with ImPPG! If you'd like to post the raw stack (16-bit), I'm sure others (myself included) would be happy to show their processing approaches.
  16. Unfortunately, the reference point alignment phase is not yet as robust as I'd like (though it usually works fine for my Hα material). In general, changing some of these processing settings may help:
      - search radius: try 5-10
      - structure threshold: try higher values
      - brightness threshold: try higher values
      - structure scale: try 2-3
      I have a few ideas for improving this, so stay tuned.
  17. Hello everyone, I've added a description of algorithms used by Stackistry/libskry (an open-source cross-platform stacker): link to PDF
  18. Ouch! That's what I used to do (even with 200-frame time lapses). Got quite "fast" at it, say, 7-8 seconds per image… So eventually I wrote ImPPG for:
      - quick batch processing of multiple stacks
      - automatic alignment of sequences with sub-pixel precision
      Download and tutorial links are available at https://greatattractor.github.io/imppg/
      Exporting the animation can also be done in GIMP:
      - open all aligned frames via File/Open as layers
      - crop/resize everything to taste
      - preview with Filters/Animation/Playback
      - make a GIF via File/Export As..., choose GIF, mark "As animation"
  19. I had a Nikon D40 for a few years and loved it for regular photography (eventually replaced it with a more convenient Micro-4/3 Olympus). As for astrophotographic use, I did my first experiments with it (before transitioning to a modded webcam and then a planetary CCD). For simple Milky Way stacking and Solar System single frames or few-frame stacks, it did the job:
  20. I also got some nice results (for the aperture), with a PGR Chameleon 3 mono camera (ICX445) and a 1.6x Barlow:
  21. Everything shot with 90 mm refractor + Lunt 50 etalon. Smoky prominences (~1 h total): M5.3-class flare on 4/02 (0:46 h total): AR 2661 on fast-forward (5 hours with 5-minute intervals), with a minor C2.1-class flare. Solar rotation clearly visible: 7-pane mosaic:
  22. Yesterday I was bitten by a kernel update (to 4.10.5) on my main computer (I use Fedora 25). The boot process would just stop at some point, with nothing suspicious in the last visible boot messages; the machine was responsive, though, and a Ctrl-Alt-Del reboot was possible. Booting with the previous kernel was fine. After reviewing the boot log with:

      journalctl -k -b -1

      (where -1 means the second-to-last boot, -0 would be the last (successful) boot, etc.), it turned out there was a problem uploading a firmware blob to my Radeon R7 370 (I use the standard open-source driver):

      kernel: [drm] radeon: 2048M of VRAM memory ready
      kernel: [drm] radeon: 2048M of GTT memory ready.
      kernel: [drm] Loading pitcairn Microcode
      kernel: radeon 0000:01:00.0: Direct firmware load for radeon/si58_mc.bin failed with error -2
      kernel: [drm] radeon/PITCAIRN_mc2.bin: 31100 bytes
      kernel: si_fw: mixing new and old firmware!
      kernel: [drm:si_init [radeon]] *ERROR* Failed to load firmware!
      kernel: radeon 0000:01:00.0: Fatal error during GPU init

      Indeed, for my particular Radeon model the newer kernel tries to upload si58_mc.bin, but the file was missing. The solution was to get the file from https://people.freedesktop.org/~agd5f/radeon_ucode/, put it in /usr/lib/firmware/radeon and regenerate the initramfs images:

      dracut --regenerate-all --force
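The recovery steps above can be collected into a short shell sketch. The firmware file name is specific to this card and kernel; check your own boot log for the blob the kernel actually asked for, and run as root:

```shell
# Fetch the microcode blob named in the boot log (si58_mc.bin for this
# particular Radeon R7 370) into the directory where the kernel looks:
cd /usr/lib/firmware/radeon
curl -O https://people.freedesktop.org/~agd5f/radeon_ucode/si58_mc.bin

# Rebuild all initramfs images so the early-boot environment also
# contains the new firmware file:
dracut --regenerate-all --force
```

After a reboot into the new kernel, the "Direct firmware load ... failed" line should be gone from journalctl -k.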
  23. Had a few fruitful sessions with the new 90 mm refractor/Lunt 50 etalon setup: Animations (they take a while to load):
  24. Recently I've dusted off my old ray tracing code, done some OpenGL reading/refreshing, and implemented real-time ray (and path) tracing on the GPU. It turns out that present-day GLSL (the OpenGL shading language) is capable enough, and even an integrated Intel GPU achieves acceptable performance. Video: https://www.youtube.com/watch?v=2lAmO1Ghtn0

      The most important part is the ability to use a hierarchical scene graph (tree); thanks to this, ray tracing's time complexity is only O(log n) w.r.t. the number of scene objects, as opposed to O(n) for hardware rasterisation (i.e. what today's GPUs normally do). Even though GLSL doesn't allow recursion, it's simple to search the tree iteratively, even without simulating a stack (which would eat up precious GPU registers; I've tried that too).

      Now that I feel more confident with OpenGL, GPU acceleration for Stackistry will probably arrive in the not-too-distant future. The quality estimation and shift-and-add phases should be the easiest to port to GLSL. Even if we remain strongly I/O-bound (due to all the shuffling of images between RAM and GPU memory), I think a performance boost by a factor of several is possible.
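The stackless traversal mentioned above can be illustrated outside GLSL. A minimal sketch (my own reconstruction, not the author's shader code), here over a 1-D hierarchy of intervals: each node stores precomputed "hit" and "miss" links, so the whole tree is walked with a single index, with no recursion and no stack. This is the same trick commonly used in GLSL ray tracers.

```python
class Node:
    """One node of a bounding hierarchy laid out flat in an array."""
    def __init__(self, lo, hi, hit_link=None, miss_link=None, leaf_id=None):
        self.lo, self.hi = lo, hi    # bounding interval of this subtree
        self.hit_link = hit_link     # index to visit next if the bound is hit
        self.miss_link = miss_link   # index to visit next if the bound is missed
        self.leaf_id = leaf_id       # object id for leaves, None for inner nodes

def traverse(nodes, x):
    """Return ids of all leaves whose interval contains x.

    No recursion and no explicit stack: the precomputed hit/miss links
    encode both descent into a subtree and skipping over it.
    """
    hits = []
    i = 0  # start at the root
    while i is not None:
        n = nodes[i]
        if n.lo <= x <= n.hi:
            if n.leaf_id is not None:
                hits.append(n.leaf_id)
                i = n.miss_link      # a leaf continues via its miss link
            else:
                i = n.hit_link       # descend into the first child
        else:
            i = n.miss_link          # skip the entire subtree
    return hits

# Tiny example: root [0, 4] with two leaves [0, 2] -> "A" and [2, 4] -> "B",
# stored in depth-first order with the links filled in by hand.
nodes = [
    Node(0, 4, hit_link=1, miss_link=None),
    Node(0, 2, miss_link=2, leaf_id="A"),
    Node(2, 4, miss_link=None, leaf_id="B"),
]
```

With a balanced hierarchy a ray that misses a node's bound skips that whole subtree in one step, which is where the O(log n) behaviour comes from; in a real 3-D tracer the interval test becomes a ray/AABB intersection.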