
jager945

Members
  • Content Count

    61
  • Joined

  • Last visited

Community Reputation

86 Excellent

3 Followers

About jager945

  • Rank
    Nebula

Contact Methods

  • Website URL
    https://www.startools.org


  1. Congrats on a fine first SHO image! Purple stars are extremely common due to stars' low emissions in the Ha band. Some people find the purple objectionable (I can personally take them or leave them). If you are one of those people though, there is a simple trick in ST to get rid of the purple: with Tracking off, launch the Layer module. For Layer mode, choose 'Invert foreground'. Notice how the inverted image now shows those purple stars in green. The Color module has a Cap Green function. Set this to 100%. Launch the Layer module again, once again choosing 'Invert foreground'…
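For anyone who prefers to see the idea in code, the invert / cap-green / invert trick amounts to something like the rough numpy sketch below. The `cap_green` heuristic here (limiting green to the mean of red and blue) is my assumption for illustration; StarTools' actual Cap Green implementation may well differ.

```python
import numpy as np

def cap_green(img):
    # Assumed heuristic: clamp the green channel so it never exceeds
    # the mean of red and blue. StarTools' exact formula may differ.
    capped = img.copy()
    limit = (img[..., 0] + img[..., 2]) / 2.0
    capped[..., 1] = np.minimum(img[..., 1], limit)
    return capped

def remove_purple(img):
    # Purple/magenta in the normal image appears green when inverted,
    # so: invert -> cap green -> invert back.
    inverted = 1.0 - img
    return 1.0 - cap_green(inverted)

# A purple star pixel: high red and blue, low green (values in 0..1)
pixel = np.array([[[0.8, 0.1, 0.8]]])
print(remove_purple(pixel))  # -> neutral [0.8, 0.8, 0.8]
```

The same round-trip logic is why the GUI recipe works: capping green in the inverted image is equivalent to capping purple in the normal one.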
  2. Honestly, something like a used RX 570 also offers very good value for money (though since it's an AMD offering, it will have no CUDA cores). A bit more expensive (as it's a general purpose GPU), but quite a bit more powerful for compute purposes. There are mining versions of these around as well.
  3. Wow, that's quite the difference! It seems your CPU is very much bottlenecking your system here. If it truly is 10 years old, that would make it a 1st-generation i5 on an LGA1156 socket. Depending on your system, an ultra-cheap upgrade would be a Xeon X3440 (or X3450, X3460 or X3470) and overclocking it (if the motherboard allows). This should give you 4 cores and 8 threads at higher clock speeds.
  4. You will probably find the difference is actually even more dramatic when comparing the 1.7 non-GPU and GPU versions like-for-like (both versions are included in the download). The 1.7 version, particularly in modules like Decon, uses defaults and settings that are more strenuous, precisely because we now have more grunt at our disposal. For example, the default number of iterations has been bumped up, while Tracking propagation is now fixed to the "During Regularization" setting (i.e. the setting has been removed and is always "on" in 1.7), which back- and forward-propagates deconvolution's effects…
  5. Great to hear you were able to produce a good image so quickly! Most modules that require a star mask now show a prompt that can generate one for you. Did this not work well/reliably for you? This is (very) likely an artifact of your monitoring application. Your GPU is definitely 100% loaded when it is working on your dataset. As opposed to video rendering or gaming, GPU usage in image processing tends to happen in short, intense bursts; during most routines the CPU is still being used for a lot of things that GPUs are really bad at. Only tasks/stages that can be…
  6. Actually, most applications do not make use of multiple cards in your system (ST doesn't). That's because farming out different tasks to multiple cards is a headache and can cause significant overhead in itself (all cards need to receive their own copy of the "problem" to work on from the CPU). It may be worth investigating for some specific problems/algorithms, but generally, things don't scale that well across different cards.
  7. For anyone interested, this is currently a very cheap solution (~40 GBP) to get your hands on some pretty decent compute performance, if you are currently making do with an iGPU or older (or budget) GPU.
  8. It's worth noting that CUDA is an NVIDIA proprietary technology, so anything that specifically requires CUDA will not run on AMD cards. Writing and particularly optimising for GPUs has been an incredibly interesting experience. Much of what you know about optimisation for general-purpose computing does not apply, while new considerations come to the fore. Some things just don't work that well on GPUs (e.g. anything that relies heavily on logic or branching). E.g. a simple general-purpose median filter shows disappointing performance (some special cases notwithstanding), whereas…
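As an illustration of the median-filter point, here is a deliberately naive 3x3 median filter in Python. The per-pixel sort/select it performs is exactly the kind of data-dependent work that maps poorly onto GPU execution; fixed-size sorting networks are one of the "special cases" that do run well.

```python
import numpy as np

def median3x3(img):
    # Naive 3x3 median filter (interior pixels only). The per-pixel
    # sort/select is data-dependent, branch-heavy work -- the kind of
    # thing that tends to perform poorly on GPUs, branchless sorting
    # networks for fixed kernel sizes notwithstanding.
    h, w = img.shape
    out = img.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.median(img[y-1:y+2, x-1:x+2])
    return out

noisy = np.array([[1., 1., 1.],
                  [1., 9., 1.],
                  [1., 1., 1.]])
print(median3x3(noisy)[1, 1])  # hot pixel rejected -> 1.0
```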
  9. "Por qué no los dos?" ("Why not both?") Have you thought of using the Optolong L-eXtreme data as luminance and using the full spectrum dataset for the colours? Assuming the datasets are aligned against each other, it should be as simple as loading them up in the Compose module (L = Optolong; R, G and B = full spectrum) and setting "Luminance, Color" to "L, RGB" and processing as normal. It should yield the detail of the second image (and reduced stellar profiles), while retaining visual spectrum colouring. Lovely images as-is!
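For the curious, the luminance/colour recombination can be approximated in a few lines of numpy. This is only a crude stand-in for what a Compose-style workflow does; scaling RGB so its brightness matches the luminance frame is my assumed scheme, not StarTools' actual algorithm.

```python
import numpy as np

def compose_l_rgb(lum, rgb, eps=1e-6):
    # Replace the RGB image's brightness with the (higher-SNR) luminance
    # frame while preserving the RGB colour ratios. Crude stand-in for a
    # Compose-style "L, RGB" combination; eps avoids division by zero.
    rgb_lum = rgb.mean(axis=-1, keepdims=True)
    return rgb * (lum[..., None] / (rgb_lum + eps))

lum = np.array([[0.6]])                # narrowband luminance
rgb = np.array([[[0.2, 0.1, 0.3]]])    # full-spectrum colour, mean 0.2
print(compose_l_rgb(lum, rgb))         # ~ [0.6, 0.3, 0.9]
```

The colour ratios (2:1:3) survive unchanged; only the brightness is taken from the narrowband frame.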
  10. Hi, Crop away the stacking artifacts around the edges and fix your flats (these are almost certainly dust donuts). If you cannot (or don't want to) fix the flats, mask out the dust specks before running Wipe. Have a look at the Wipe documentation here, which incidentally shows examples of the same issue you are experiencing above. Hope this helps!
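The reason masking the specks helps can be sketched in a couple of lines. This masked-median background estimate is a toy stand-in (a real Wipe-style routine fits a smooth background model), but it shows why excluded dust pixels can no longer drag the estimate around.

```python
import numpy as np

def background_level(img, mask):
    # Estimate the background from unmasked pixels only, so dust donuts
    # (mask == False) cannot skew the model. A real routine fits a
    # smooth 2D model; a masked median conveys the idea.
    return np.median(img[mask])

img = np.array([[0.10, 0.11],
                [0.02, 0.12]])   # 0.02 = dark dust spot
mask = np.array([[True, True],
                 [False, True]])  # dust spot excluded
print(background_level(img, mask))  # -> 0.11, dust spot ignored
```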
  11. Hi, I replied on the StarTools forum, but thought I'd post the answer here as well. This is a separate thing and doesn't have much to do with AutoDev. Wipe operates on the linear data. It uses an exaggerated AutoDev stretch of the linear data (completely ignoring your old stretch) to help you visualise any remaining issues. After running Wipe, you will need to re-stretch your dataset, because the previous stretch is pretty much guaranteed to no longer be valid/desirable: gradients have been removed and now no longer take up precious dynamic ran…
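A toy numpy example of that dynamic range argument (the numbers are made up purely for illustration):

```python
import numpy as np

signal = np.array([0.01, 0.02, 0.03, 0.02])   # faint object signal
gradient = np.linspace(0.0, 0.5, 4)           # light-pollution gradient
raw = signal + gradient

# Before removal, the gradient consumes almost all the dynamic range...
print(raw.max() - raw.min())        # ~0.51

# ...after removal, the faint signal has the full range to itself, so
# any stretch tuned to the old range is no longer appropriate and the
# data must be re-stretched.
cleaned = raw - gradient
print(cleaned.max() - cleaned.min())  # ~0.02
```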
  12. Hi, Stacking artifacts are clear as day in the image you posted (as is Wipe's signature reaction to their presence). Crop them away and you should be good!
  13. I sincerely hope you will re-read my post. It was made in good faith and answers many of your questions and concerns, directly or through the concise information behind the links. To recap my post: a great deal of your issues can likely be alleviated by flats, by dithering (giving your camera a slight push perpendicular to the tracking direction every x frames will do) and by using a rejection method in your stacker. If that is not something that you wish to do or research, then that is, of course, entirely your prerogative. Given your post above, it's probably not productive I engage…
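For reference, a minimal kappa-sigma rejection stack in numpy shows why dithering plus rejection removes per-frame defects; the kappa value and the mean/std scheme here are just one common variant among the several that real stackers offer.

```python
import numpy as np

def sigma_clip_stack(frames, kappa=2.5):
    # Per-pixel kappa-sigma rejection: samples further than kappa
    # standard deviations from the per-pixel mean (hot pixels, satellite
    # trails, cosmic ray hits) are excluded before averaging. Dithering
    # ensures such defects land on different pixels in each frame, so
    # the rejection has clean samples to fall back on.
    stack = np.stack(frames)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    keep = np.abs(stack - mean) <= kappa * std
    return np.where(keep, stack, 0.0).sum(axis=0) / keep.sum(axis=0)

# Nine clean frames plus one frame ruined by a bright defect:
frames = [np.full((2, 2), 1.0) for _ in range(9)] + [np.full((2, 2), 50.0)]
print(sigma_clip_stack(frames))  # defect rejected -> all 1.0
```

A plain mean of the same stack would come out at 5.9 per pixel; the rejection recovers the true value.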
  14. Hi, I would highly recommend doing as suggested in step 1 of the quick start tutorial, which is perusing the "starting with a good dataset" section. Apologies in advance, as I'm about to give some tough love… Your dataset is not really usable in its current state. Anything you would learn trying to process it will likely not be very useful or replicable for the next dataset. There are three important parts of astrophotography that need to work in unison. These three things are acquisition, pre-processing, and post-processing. Each step is dependent on the other in that…