Everything posted by jager945

  1. Mate, the fact that you commented within minutes on a massively long dissertation about color space conversions means you are either an amazing speedreader of scientific literature, or more interested in "being right" than in actually furthering your understanding of the subject matter. The comprehensive color space conversion research page is literally titled "Camera to CIE XYZ conversions: an ill-posed problem". It literally goes into all the different subjective decisions that need to be made when constructing a solution to the matrix. I cannot fathom how you can maintain that construction of the transform matrix "absolutely does not depend on any conditions" when (some of) those conditions and constraints are spelled out for you. AP puts another heap of subjective variables into the equation (such as picking an illuminant), but we haven't even made it past you acknowledging the basics yet. 😕 Colours don't "have spectrum" (unless you're simply asserting that visible light falls on the electromagnetic spectrum, which is pretty self-evident?). Colours are a human construct (e.g. to a human, "parts of the electromagnetic spectrum appear to have colors under certain conditions", if you really must put it in those terms), which purple vs violet neatly demonstrates (brown is another fun one). Case in point: purple and violet appear - to humans, but no other species - to be shades of the same colour, yet are made by mixing completely different parts of the spectrum. Like last time, it is hard not to conclude that you are not here to learn or ask questions in good faith, and I don't think furthering this discussion is going to be productive. You are free to "believe" what you want about colour, but please don't confuse or misinform others.
  2. Then let's add some slightly more relevant links as well; https://jakubmarian.com/difference-between-violet-and-purple/ (contradicting the incorrect assertion that "in fact color is light of a certain spectrum") http://www.cvc.uab.es/color_calibration/SpaceConv.htm (contradicting the incorrect assertion that "every manufacturer could produce RAW to XYZ color transform matrix as that absolutely does not depend on any conditions") If anyone is interested, in good faith, in more information or further explanations, do let me know.
  3. I'm afraid none of that is true. I recall a similar exchange with you some time ago in which I urged you to study up on this and gave you the sources to do so. It saddens me you have not taken that opportunity.
  4. A green-dominant OSC/DSLR dataset tends to be a good sign. All you need to do is establish and subtract - in the linear domain - the bias for each channel (indeed ABE/DBE or Wipe will do this). Then you color balance - again in the linear domain - by multiplying the individual channels by a single factor per channel. You will now get the expected colouring (in the linear domain). Macbeth charts and/or calibration against them are irrelevant to astrophotography, where objects are mostly emissive and cover a huge (virtually infinite) dynamic range. Nothing is lit by a uniform light source at a single brightness, nothing neatly reflects light in the same way, and there is no commonly agreed-upon white balance either. There is a reason why this functionality is not offered in AP software (StarTools excepted, but I implemented this more as a curiosity for terrestrial purposes and debugging, rather than as something that is a must or recommended) and why instrument manufacturers don't bother providing such a matrix; its assumptions and tightly controlled conditions simply don't apply. Hope this helps!
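For anyone who likes to see the arithmetic, here is a minimal numpy sketch of the two linear-domain steps described above (per-channel bias subtraction, then a single multiplier per channel). The percentile-based bias estimate and the example factors are illustrative assumptions only, not how ABE/DBE/Wipe or any specific tool implements it.

```python
import numpy as np

def linear_color_balance(rgb, factors):
    """Sketch of per-channel bias removal + scaling, all in the linear domain.

    rgb     : HxWx3 float array, straight from the stacker (still linear).
    factors : one multiplier per channel; the values below are placeholders,
              not measured calibration factors.
    """
    balanced = np.empty_like(rgb, dtype=np.float64)
    for c in range(3):
        channel = rgb[..., c].astype(np.float64)
        # crude per-channel bias/pedestal estimate; real gradient modelling
        # (ABE/DBE/Wipe) fits a spatially varying model instead of a constant
        bias = np.percentile(channel, 1.0)
        balanced[..., c] = (channel - bias) * factors[c]
    return np.clip(balanced, 0.0, None)

# e.g. boost red and blue relative to the dominant green channel
# (illustrative numbers only):
# balanced = linear_color_balance(stacked_rgb, factors=(1.9, 1.0, 1.6))
```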
  5. Indeed, if StarTools is not playing ball with your datasets, then feel free to share one with me (Dropbox, Google Drive, WeTransfer, OneDrive, etc.). If StarTools needs lots of babysitting, then almost always, the problem is an acquisition issue (bad flats, not dithering, incorrect stacker settings, data that is no longer linear/virgin, etc.). You can find a fairly comprehensive list here. When evaluating post-processing software, the worst mistake you can make is judging software on its ability to hide issues. Hiding issues is not what post-processing is about. If that's mostly what you are doing and focused on, then you will not progress. The best software maximises and protects the real celestial signal you have captured. Some common pitfalls:
     • Trying to use a gradient removal tool to clean up flat frame issues
     • Trying to use levels and curves to hide flat frame issues
     • Trying to use noise reduction to remove correlated noise grain and pattern noise (instead of dithering)
     • Trying to process channels separately non-linearly
     • Applying color balance corrections, deconvolution, gradient removal and other operations after stretching
     ...and that's sadly just the tip of the iceberg. Getting to the point where you can 100% trust - in terms of signal - what you have captured should be your immediate goal. You should not be wondering whether something is faint nebulosity or some smudge or gradient remnant. You should not be wondering if something is a Bok globule or shock front, or just a smudge or pattern noise. From there, image processing becomes easy, replicable, "zen" and - for most - fun and rewarding. You can then start learning about what goes into a good image, what is sound signal processing and what isn't. AP processing is so much more than just pressing the right buttons - that's just the software. Understanding what the software does - and why - is where things really start. You will then find that a piece of software's eagerness to hide issues is inversely proportional to its sophistication when it comes to preserving signal and faint detail. Don't think you need particularly deep data either. Not everyone can spend hours under pristine skies. At the end of your night, you just need well calibrated data where the only remaining issue is shot noise and nothing else. @wimvb made an excellent suggestion; have a look at someone else's dataset and see if there is anything there that stands out to you. Perhaps try processing it and - if that is helpful - see how different software reacts differently. Clear skies!
  6. Hi, ISO in the digital era is a rather confusing subject, particularly if you come from a daylight/terrestrial background. For earth-based photography you can use the exposure triangle rule-of-thumb, but once you get into low light photography where every photon counts, this falls apart rather quickly. The "problem" is that you are dealing with one digital sensor that only has one, fixed sensitivity. To emulate other sensitivities, signal is either thrown away or artificially boosted. Both of these operations are detrimental to your final signal. For this reason, in astrophotography, you should find your sensor's native ISO, which is the ISO where you eat into your dynamic range the least, while not throwing away (or reshaping) any parts of the dynamic range. This article by Chris van den Berge goes into good detail about what is going on exactly. On the StarTools website, you can find a list of recommended ISO settings that correspond most closely to the native sensitivity of your sensor. It does not list your particular D7200, but it appears an ISO of ~200 gives a linear response at the highest bit-depth (and thus dynamic range). With a fixed ISO, all that truly matters then, is the exposure time you can get away with without over-exposing your image and without tracking error creeping in. A good rule of thumb is to aim for a histogram peak (caused by background light) that sits roughly 1/3rd from the left of your histogram; this leaves ~2/3rds of the dynamic range to describe true celestial signal. You picked M42 as an example, which is actually one of the most troublesome objects, as it is one of the few objects where the core's brightness is such that high dynamic range compositing may be warranted at long exposure times (but definitely not at short exposure times). That would be adding another layer of complexity. Hope this helps!
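If you want to sanity-check that rule of thumb numerically, a rough sketch like the one below will tell you where the sky-background peak sits as a fraction of the sensor's range. The 14-bit full-range value is an assumption; substitute your camera's actual ADU maximum.

```python
import numpy as np

def background_peak_fraction(sub, full_range=2**14, bins=1024):
    # sub: 2D array of raw ADU values from a single sub-exposure.
    # Assumes the most populated histogram bin is the sky background.
    hist, edges = np.histogram(sub, bins=bins, range=(0, full_range))
    peak_bin = int(np.argmax(hist))
    return edges[peak_bin] / full_range

# Rule of thumb from the post: aim for a peak roughly 1/3rd from the left,
# leaving ~2/3rds of the dynamic range for true celestial signal.
# if background_peak_fraction(sub) > 1/3: consider shorter exposures
```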
  7. Hi, I think your data acquisition efforts and precious time spent under the night skies are let down by not calibrating with flat frames. They are really not optional; they would calibrate out the dust smudges and severe vignetting. Right now any signal is mired in uneven lighting, making it impossible for people (and algorithms) to discern artifact from faint signal. Have a look here for some important do's and don'ts when acquiring and preparing your dataset for post-processing.
  8. Absolutely! Sounds like fun. Do let me know what the procedure, preferred format and times, etc. are.
  9. This is fantastic data as always, Grant! The growing body of IKO datasets is among my favourite testing material, as the datasets are good and clean, but still have the tiny flaws that come with the "mere mortal" nature of the equipment and terrestrial location. This one in particular is a great dataset to learn a rather specific compositing situation with: adding Ha detail to an object that mostly emits in other wavelengths. I made a post on the ST forums on combining the Ha and visual spectrum data for this specific case. I hope the OP and the mods are OK with linking to it here (given I'm a vendor etc.). If not, happy to post it elsewhere. Thanks again & wishing you clear skies!
  10. A "circular gradient" sounds like a flats issue; vignetting and unwanted light gradients are rather different things with different origins (uneven lighting vs unwanted added light) and different algorithmic solutions (division vs subtraction). Basic sample-setting algorithms for gradient modelling and removal as found in APP/PI/Siril are very crude tools that, while getting you results quickly, make it very easy to destroy faint detail (e.g. think faint IFN around M81/M82 etc.). In essence, they ask you to guess what is "background" in various places. Unfortunately it is impossible to tell with 100% certainty what is background, as you can't see through the muck that is still there - worse, sometimes there is no true background. Conversely, Wipe asks you - if even necessary - to tell it what is definitely not background. For the rest, it relies on the undulation frequency of the gradient (slowly undulating), which is almost always easily distinguishable (by the algorithm) from actual detail or faint nebulosity (fast undulating). I've likened these different approaches to performing archaeology with a shovel vs doing it with a brush. One causes collateral damage, the other doesn't. For your immediate vignetting problem, ST 1.7 has a dedicated vignetting stage now that might help somewhat, depending on how badly it is off-center. However, you should take flats as soon as practically possible. They are really not optional in AP, and attempting to just "live with vignetting" will severely hamper what you can achieve with your gear; neither algorithm nor human can be 100% certain that what is in your image is true celestial detail otherwise. That said, if you think this is something ST should be able to solve, please feel free to share a dataset with me - I'm always looking for ways to improve ST! Clear skies!
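To make the subtraction-vs-division distinction concrete, here is a minimal sketch (the names and the gradient model are purely illustrative): vignetting and dust attenuate the signal multiplicatively, so flats correct them by division, whereas skyglow is light that was added, so a gradient model is subtracted.

```python
def calibrate_and_wipe(light, master_flat, gradient_model):
    # Inputs are assumed to be numpy float arrays of the same shape.
    # light: dark-subtracted light frame, linear.
    # master_flat: flat field normalised to a mean of ~1.0; vignetting/dust
    #   are multiplicative attenuation, so they are divided out.
    flat_corrected = light / master_flat
    # gradient_model: smooth, slowly-undulating estimate of unwanted *added*
    #   light (skyglow, moonlight); being additive, it is subtracted.
    return flat_corrected - gradient_model
```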
  11. Congrats on a fine first SHO image! Purple stars are extremely common due to stars' low emissions in the Ha band. Some people find the purple objectionable (I can personally take them or leave them). If you are one of those people though, there is a simple trick in ST to get rid of the purple:
     • With Tracking off, launch the Layer module. For Layer mode, choose 'Invert foreground'. Notice how the inverted image now shows those purple stars in green.
     • The Color module has a Cap Green function. Set this to 100%.
     • Launch the Layer module again, once again choosing 'Invert foreground' for the Layer mode. This flips the image back to normal, though this time without purple stars.
     I believe this trick/hack should also work in PixInsight with SCNR.
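For the curious, the invert / cap-green / invert hack boils down to something like the sketch below. The green cap here uses the SCNR-style "average neutral" rule (G' = min(G, (R+B)/2)), which may not match the Color module's exact behaviour.

```python
import numpy as np

def cap_green(rgb):
    # SCNR-style "average neutral" green cap: G' = min(G, (R+B)/2)
    out = rgb.copy()
    out[..., 1] = np.minimum(rgb[..., 1], (rgb[..., 0] + rgb[..., 2]) / 2.0)
    return out

def remove_purple_stars(rgb):
    # rgb: HxWx3 float array in [0, 1], stretched (Tracking off).
    # Purple is the complement of green, so: invert, cap green, invert back.
    return 1.0 - cap_green(1.0 - rgb)
```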
  12. Honestly, something like a used RX 570 also offers very good value for money (though since it's an AMD offering, it will have no CUDA cores). A bit more expensive (as it's a general purpose GPU), but quite a bit more powerful for compute purposes. There are mining versions of these around as well.
  13. Wow, that's quite the difference! It seems your CPU is very much bottlenecking your system here. If it truly is 10 years old, then that would make it a 1st generation i5 on an LGA1156 socket. Depending on your system, an ultra-cheap upgrade would be an X3440 (or X3450, X3460 or X3470) and overclocking that (if motherboard allows). This should give you 4 cores, 8 threads at higher clockspeeds.
  14. You will probably find the difference is actually even more dramatic when comparing the 1.7 non-GPU and GPU versions like-for-like (both versions are included in the download). The 1.7 version, particularly in modules like Decon, uses defaults and settings that are more strenuous, precisely because we now have more grunt at our disposal. For example, the default number of iterations has been bumped up, while Tracking propagation is now fixed to the "During Regularization" setting (e.g. this setting has been removed and is always "on" in 1.7), which back- and forward-propagates deconvolution's effects throughout the processing history for each iteration, rather than just once post-decon (which in itself is already unique). This is precisely why GPU acceleration is so exciting; it allows even more sophisticated brute-force techniques to complete and be evaluated in near-realtime. And it puts "holy grails" like anisotropic deconvolution within reach! The GTX 1070 is an amazing card btw; nice find! The whole Pascal generation of cards is excellent really.
  15. Great to hear you were able to produce a good image so quickly! Most modules that require a star mask now show a prompt that can generate one for you. Did this not work well/reliably for you? This is (very) likely an artifact of your monitoring application. Your GPU is definitely 100% loaded when it is working on your dataset. As opposed to video rendering or gaming, GPU usage in image processing tends to happen in short, intense bursts; during most routines the CPU is still being used for a lot of things that GPUs are really bad at. Only tasks/stages that:
     • can be parallelised
     • are rather "dumb" in terms of logic (with few if-then-else branches)
     • perform a lot of complex calculations AND process large amounts of data
     • complete in milliseconds (up to a couple of seconds or so)
     ...are suitable for momentary GPU acceleration. As a result, during processing, you should see processing switch back and forth between CPU and GPU. If you have a utility that shows you temperatures and/or GPU workload, you should, at the very least, see spikes in temperature and/or GPU usage. Depending on how your monitoring application measures GPU usage, however, these bursts may even be too short to register. During any GPU usage, however, the GPU is fully loaded up. Spikes may be averaged out over time by your monitoring application (with the CPU intermittently doing its thing, leaving the GPU momentarily unused), making it appear only partial usage is happening. That is, as you now hopefully understand, not the case! If your monitoring application can show maximum values (for Windows you can try GPU-Z or Afterburner), you will almost immediately register the GPU being maxed out. Hope this helps!
  16. Actually, most applications do not make use of multiple cards in your system (ST doesn't). That's because farming out different tasks to multiple cards is a headache and can cause significant overhead in itself (all cards need to receive their own copy of the "problem" to work on from the CPU). It may be worth investigating for some specific problems/algorithms, but generally, things don't scale that well across different cards.
  17. For anyone interested, this is currently a very cheap solution (~40 GBP) to get your hands on some pretty decent compute performance, if you are currently making do with an iGPU or older (or budget) GPU.
  18. It's worth noting that CUDA is an NVidia proprietary/only technology, so anything that specifically requires CUDA will not run on AMD cards. Writing and particularly optimizing for GPUs has been an incredibly interesting experience. Much of what you know about optimisation for general purpose computing does not apply, while new considerations come to the fore. Some things just don't work that well on GPUs (e.g. anything that relies heavily on logic or branching). E.g. a simple general purpose median filter shows disappointing performance (some special cases notwithstanding), whereas complex noise evolution estimation throughout a processing chain flies! I was particularly blown away by how incredibly fast deconvolution becomes when using the GPU; convolution and regularisation thereof is where GPUs undeniably shine. My jaw dropped when I saw previews update in real-time on a 2080 Super Mobile! I don't think APP uses the GPU for offloading arithmetic yet, by the way. Full GPU acceleration for AP is all still rather new. GPU proliferation (whether discrete or integrated in the CPU) has just about become mainstream and mature. Hence giving it another look for StarTools. Exciting times ahead! 👍
  19. "Por qué no los dos?" ("Why not both?") Have you thought of using the Optolong L-eXtreme as luminance and using the full spectrum dataset for the colours? Assuming the datasets are aligned against each other, it should be as simple as loading them up in the Compose module (L = Optolong, R, G and B = full spectrum), setting "Luminance, Color" to "L, RGB" and processing as normal. It should yield the detail of the second image (and reduced stellar profiles), while retaining visual spectrum colouring. Lovely images as-is!
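Conceptually, an "L, RGB" composite keeps the colour ratios of the full-spectrum stack while taking its brightness from the narrowband stack. A very crude stand-in for that idea (not the Compose module's actual math) might look like this:

```python
import numpy as np

def lrgb_combine(lum, rgb, eps=1e-6):
    # lum: 2D luminance frame (e.g. the L-eXtreme stack), linear.
    # rgb: HxWx3 full-spectrum stack, linear and registered against lum.
    # Rescale each pixel's colour so its brightness matches the luminance
    # frame while preserving the R:G:B ratios (visual-spectrum colouring).
    old_lum = rgb.mean(axis=-1, keepdims=True)
    return rgb * (lum[..., np.newaxis] / (old_lum + eps))
```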
  20. Hi, Crop away the stacking artifacts around the edges and fix your flats (these are almost definitely dust donuts). If you cannot (or don't want to) fix the flats, mask out the dust specks before running Wipe. Have a look at the Wipe documentation here which incidentally shows examples of the same issue you are experiencing above. Hope this helps!
  21. Hi, I replied on the StarTools forum, but thought I'd post the answer here as well; This is a separate thing and doesn't have much to do with AutoDev. Wipe operates on the linear data. It uses an exaggerated AutoDev stretch of the linear data (e.g. completely ignoring your old stretch) to help you visualise any remaining issues. After running Wipe, you will need to re-stretch your dataset. That is because the previous stretch is pretty much guaranteed to no longer be valid/desirable: gradients have been removed and no longer take up precious dynamic range. Dynamic range can now be allocated much more effectively to show detail, instead of artifacts and gradients. As a matter of fact, as of version 1.5, you are forced to re-stretch your image; when you close Wipe in 1.5+, it will revert back to the wiped, linear state, ready for re-stretching. Before 1.5 it would try to reconstruct the previous stretch, but - cool as that was - it really needs your human input again, as the visible detail will have changed dramatically. You can actually see a progression by gradually making your RoI larger or smaller; as you make your RoI smaller you will notice the stretch being optimised for the area inside your RoI. E.g. detail inside the RoI will become much easier to discern. Conversely, detail outside the RoI will (probably) become less easy to discern. Changing the RoI gradually should make it clear what AutoDev is doing; confining the RoI progressively to the core of the galaxy, the stretch becomes more and more optimised for the core and less and less for the outer rim (side note: I'd probably go for something in between the second and third image). E.g. AutoDev is constantly trying to detect detail inside the RoI (specifically, figuring out how neighbouring pixels contrast with each other), and to figure out what histogram stretch allocates dynamic range to that detail in the most optimal way. "Optimal" being: showing as much detail as possible. TL;DR In AutoDev, you're controlling an impartial and objective detail detector, rather than a subjective and hard to control (especially in the highlights) bezier/spline curve. Having something impartial and objective is very valuable, as it allows you to much better set up a "neutral" image that you can build on with the local detail-enhancing tools in your arsenal (e.g. Sharp, HDR, Contrast, Decon, etc.). Notice how the over-exposed highlights do not bloat *at all*. The cores stay in their place and do not "bleed" into the neighbouring pixels. This is much harder to achieve with other tools; star bloat is unfortunately still extremely common. It should be noted that noise grain from your noise floor can be misconstrued by the detail detector as detail. Bumping up the 'Ignore Fine Detail <' parameter should counter that though. I hope the above helps some, but if you'd like to post a dataset you're having trouble with, perhaps we can give you some more specific advice?
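As a rough illustration of the idea (emphatically not AutoDev's actual algorithm), a detail-driven stretch can be sketched as a histogram weighted by local contrast inside the RoI; the weighted cumulative distribution then becomes the global transfer curve, so dynamic range is spent where neighbouring pixels actually differ.

```python
import numpy as np

def detail_driven_stretch(img, roi=None, bins=4096):
    # img: 2D linear luminance, normalised to [0, 1].
    # roi: optional (y0, y1, x0, x1) region to optimise the stretch for.
    work = img if roi is None else img[roi[0]:roi[1], roi[2]:roi[3]]
    # local contrast between neighbouring pixels (gradient magnitude)
    gy, gx = np.gradient(work)
    detail = np.hypot(gx, gy)
    # histogram of intensities, weighted by how much detail lives at each level
    hist, edges = np.histogram(work, bins=bins, range=(0.0, 1.0), weights=detail)
    cdf = np.cumsum(hist)
    cdf /= max(cdf[-1], 1e-12)  # guard against a perfectly flat frame
    # the weighted CDF becomes the global stretch, applied to the whole frame
    return np.interp(img, edges[:-1], cdf)
```

Pre-blurring `work` before measuring the gradients would play the role of the 'Ignore Fine Detail <' parameter mentioned above, stopping noise grain from being counted as detail.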
  22. Hi, Stacking artifacts are as clear as day in the image you posted (as is Wipe's signature reaction to their presence). Crop them away and you should be good!
  23. I sincerely hope you will re-read my post. It was made in good faith and answers many of your questions and concerns directly or through concise information behind the links. To recap my post; a great deal of your issues can likely be alleviated by flats, by dithering (giving your camera a slight push perpendicular to the tracking direction between every x frames will do) and using a rejection method in your stacker. If that is not something that you wish to do or research, then that is, of course, entirely your prerogative. Given your post above, it's probably not productive I engage with you any further at this time.
  24. Hi, I would highly recommend doing as suggested in step 1 of the quick start tutorial, which is perusing the "starting with a good dataset" section. Apologies in advance, as I'm about to give some tough love.... Your dataset is not really usable in its current state. Anything you would learn trying to process it will likely not be very useful or replicable for the next dataset. There are three important parts of astrophotography that need to work in unison. These three things are acquisition, pre-processing, and post-processing. Each step is dependent on the other in that sequence. Bad acquisition will lead to issues during pre-processing and post-processing. You can try to learn these three things all at once, very slowly, or use a divide and conquer strategy. If you want to learn post-processing now, try using a publicly available dataset. You will then know what an ok (not perfect) dataset looks like and how easy it really is to establish a quick, replicable workflow. If you want to learn pre-processing, also try using a publicly available dataset (made up out of its constituent sub frames). You will then know what settings to use (per step 1 in the quick start tutorial, see here for DSS-specific settings) and what flats and bias frames do. Again, you will quickly settle on a quick, replicable workflow. Finally, getting your acquisition down pat is a prerequisite for succeeding in the two subsequent stages if you wish to use your own data; at a minimum take flats (they are not optional!), dither unless you absolutely can't, and get to know your gear and its idiosyncrasies (have you got the optimal ISO setting?). The best advice I can give you right now, is to spend some time researching how to produce a clean dataset (not deep - just clean!). Or, if you just love post-processing, grab some datasets from someone else and hone your skill in that area. It's just a matter of changing tack and focusing on the right things. You may well find you will progress much quicker. I'm sorry for, perhaps, being somewhat blunt, but I just want to make sure you get great enjoyment out of our wonderful hobby and not endless frustration. Wishing you clear skies and good health! EDIT: I processed it, just to show there is at least a star field in there, but, per the above advice, giving you the workflow would just teach you how to work around dataset-specific, easily avoidable issues...