Narrowband

jager945

Members
  • Posts

    96
  • Joined

  • Last visited

Reputation

138 Excellent

3 Followers

Contact Methods

  • Website URL
    https://www.startools.org


  1. Hi, As others have alluded to, it appears that the dataset's channels were heavily re-weighted - so much so that the only usable luminance signal appears to reside in just the red channel. I'm not sure at what point this happened, but it is the root of the issue (on the StarTools website you can find a guide on preparing your dataset/signal for best results, which should hold for most applications). If you wish to "rescue" the current stack, you can extract the red channel and use it as your luminance (which I think some others have done here), while using the entire stack for your coloring. The gradients then, while troublesome, become fairly inconsequential, as they mostly affect the channels that were artificially boosted; they will only appear in the coloring where the luminance signal allows them to. Processing then becomes more straightforward.

In the visual spectrum, HII regions like this are typically colored a reddish-pink due to a mixture of emissions at different wavelengths (not just H-alpha!). Pure red is fairly rare in outer space (areas filtered by dust are probably the biggest exception). The use of a CLS filter will usually not change this - it just cuts out a part of the spectrum (mostly yellows). What a CLS filter will change, however, is the ability of some color balancing algorithms (including PCC) to come up with anywhere near correct coloring. Though you cannot expect very usable visual spectrum coloring from a CLS filter, you can still use one to create a mostly useful bi-color (depending on the object) that roughly matches visual spectrum coloring, by mapping red to red, and green + blue to both green and blue, e.g. R=R, G=(G+B)/2, B=(G+B)/2 (a small sketch of this remapping follows at the end of this post). This will yield a pink/cyan rendition that tends to match Ha/S-II emissions in red, and Hb/O-III emissions in cyan/green, fairly well as they would appear in the visual spectrum.

In general, if you are just starting out in AP processing, it is worth tackling an important question upfront: are you okay with introducing re-interpreted, deep-faked detail that was never recorded (and cannot be corroborated in images taken by your peers), or are you just in it to create something plausible for personal consumption? Some are okay with randomly missing stars and noise transformed into plausible detail that does not exist in reality; others are definitely not. Depending on your goals in this hobby, resorting to inpainting or augmenting/re-interpreting plausible detail with neural-hallucination-based algorithms as part of your workflow can be a dead end, particularly if you aspire to practice astrophotography (rather than art) and hope to some day enter photography competitions, etc.

FWIW, here is a simple and quick-ish workflow in StarTools that stays mostly true to the signal as recorded, to get you started, if useful to you;

--- Compose: load the dataset three times, once each for red, green and blue. Set Green and Blue total exposure to 0. Keep the result - you will now be processing a synthetic luminance frame that consists entirely of the red channel's data, while using the color from all channels.
--- AutoDev: to see what we're working with. We can see heavily correlated noise (do you dither?), stacking artefacts, heavily varying star shapes and some gradients.
--- Bin: to reduce oversampling and improve signal.
--- Crop: crop away the stacking artefacts.
--- Wipe: Dark anomaly filter set to 3px; you can use correlation filtering to try to reduce the adverse effects of the correlated noise.
--- AutoDev: redo the stretch with a RoI that includes Melotte 15 and IC1795/NGC896. Increase the Ignore Fine Detail parameter until AutoDev no longer picks up on the correlated noise. You should notice full stellar profiles visible at all times.
--- Contrast: Equalize preset.
--- HDR: Optimize preset. This should resolve some detail in Melotte 15 and NGC896.
--- Sharp: defaults.
--- SVDecon: the stars are quite heavily deformed in different ways depending on location (i.e. the point spread function of the detail is "spatially variant"). This makes the dataset a prime candidate for restoration through spatially variant PSF deconvolution, though expectations should be somewhat tempered due to the correlated noise. Set samples across the image, so SVDecon has examples of all the different star shapes. Set Spatial Error to ~1.4 and PSF Resampling to Intra-Iteration + Centroid Tracking Linear. You should see stars coalesce into point lights better (with the same happening to detail that was "smeared out" in a similar way, now being restored). It's not a 100% fix for misshapen stars, but it is a fix based on actual physics and recorded data, as far as the SNR allows. Improved acquisition is obviously the more ideal way of taking care of misshapen stellar profiles though.
--- Color: the processed synthetic luminance is now composited with the coloring. The module yields the expected visual-spectrum-without-the-yellows (strong orange/red and blue) result, consistent with a CLS filter. As mentioned above, you can opt for a bi-color remapping (Bi-Color preset). You should now see predominantly HII red/pink nebulosity, with hints of Hb/O-III blue, as well as some blue stars.
--- Shrink: makes stars less prominent (but never destroys them). Defaults. You may wish to increase Deringing a little if Decon was applied and ringing is visible.
--- Super Structure: Isolate preset, with Airy Disk Radius 10% (this is a widefield). This pushes back the noisy background, while retaining the superstructures (and their detail). You could do a second pass with the Saturate preset if desired.
--- Switch Tracking off: perform final noise reduction to taste. Defaults were used for expediency/demonstration, but it should be possible to mitigate the correlated noise better (ideally it should not be present in your datasets at all though!).

You should end up with something like this; Hope this helps!
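If it helps to see the bi-color remapping spelled out, here is a rough sketch in Python/numpy (purely illustrative - StarTools does this for you via the Bi-Color preset, and the random array is just a stand-in for a real linear stack):

```python
# Rough sketch (not StarTools code): CLS bi-color remap R=R, G=B=(G+B)/2.
# Assumes a linear RGB stack loaded as a float numpy array of shape (H, W, 3).
import numpy as np

def bicolor_remap(rgb):
    """Map red to red, and the mean of green+blue to both green and blue."""
    out = np.empty_like(rgb)
    out[..., 0] = rgb[..., 0]                  # R = R
    gb = 0.5 * (rgb[..., 1] + rgb[..., 2])     # (G + B) / 2
    out[..., 1] = gb                           # G = (G + B) / 2
    out[..., 2] = gb                           # B = (G + B) / 2
    return out

# Example usage with a made-up array standing in for a real (linear) stack:
stack = np.random.rand(100, 100, 3).astype(np.float32)
bicolor = bicolor_remap(stack)
```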
  2. Indeed, apologies, I'm really just adding a little to the misinformation; these filters are indeed technically "wideband" filters (W designation) but - most importantly for your assertions - they are not the type of wideband filters (RGB filters) we all use; they do not record color as-we-see-it. They record different parts of the spectrum; F814W records mostly infrared and F435W records well into the ultraviolet, while there is a substantial gap in visual red response between F814W and F555W. The bottom line is that you will not get visual spectrum images out of this filter set, and using any HST image as a reference for visual spectrum (400-700nm) colouring is a bad (nonsensical) idea!

With regards to (current) AI tools, it is really quite simple; they add stuff that isn't there and isn't real, using external sources. This is in contrast to algorithms that exclusively transform and use what is in your dataset. Such operations can usually be reversed by applying the inverse of the operation to arrive at the original image. Not so with neural hallucination. Its whole premise is precisely to neurally hallucinate "plausible" detail from a given input. Plausible does not equate to real. Let alone the fact that any plausible detail originates from an exclusively non-astro training set in the case of the Topaz suite.

Things like StarNet++ are a solution looking for a problem; other than rendering a plausible starless image for artistic purposes, or using it to create star masks (the latter is absolutely a good use case!), I cannot think of any legitimate reason to introduce data that was never recorded into your photograph. Separating stars from the background for the purpose of compositing later is wholly unnecessary and yields no benefits, only drawbacks (in the form of artifacts). Fortunately, it is usually easy to pick when StarNet was used for that purpose, even for a layperson (the Swiss cheese effect, translucent stars, missing stars, odd discs, etc.).

I have nothing against AI (I studied AI at university!), but the way it is currently employed by "tools" like StarNet and Topaz AI is unsophisticated and gimmicky, rather than making true photographs actually better (again, with the exception of identifying stars for the purpose of masking). There are absolutely legitimate applications for AI in astronomy (and even image processing tasks), but neural hallucination is probably the laziest, lowest-hanging fruit for which a neural net can be deployed. It's literally the turn-your-face-into-a-super-model Instagram filter equivalent of astrophotography (sometimes with equally hilarious/disturbing results - to me anyway). We can do better. And I'm convinced a useful application will come along some day. In the meantime, I would urge anyone who thinks these things are the best thing since sliced bread to do a little research into how they work. It's not magic; the resulting detail is not real, wasn't just "hidden" in your dataset, and was not really recorded by you. The emperor has no clothes.
  3. Unfortunately, there is much misinformation here - from the usual suspect - about colouring. Citing a Hubble narrowband composite (acquired with F435W (B), F555W (V) and F814W (I) filters) as a visual spectrum reference image is all you need to know about the validity of the "advice" being dispensed here.

In ST, the Color module starts you off in the Color constancy mode, so that - even if you don't prefer this mode - you can sanity check your color against known features and processes. A visual spectrum image of a nearby spiral galaxy should reveal a yellow-ish core (less star formation due to gas depletion = only older stars left), a bluer outer rim (more star formation), red/brown dust lanes (white-ish light filtered by dust grains) and purple/pinkish HII areas dotted around (red Ha + other blue Balmer series emissions, some O-III = pink). Foreground stars should exhibit a good random selection of the full black body radiation curve; you should be able to easily distinguish red, orange, yellow, white and blue stars in roughly equal numbers (provided the field is wide enough and we're not talking about any sort of associated/bound body of stars).

You are totally free (and able!) to achieve any look you wish in StarTools (provided your wishes follow physics and sound signal processing practices). For example, if you prefer the old-school (and decidedly less informative) desaturated-highlight look of many simpler apps, it is literally a single click in the Color module on the "Legacy" preset button. With this dataset in particular, you do need to take care that your color balance is correct, as the default colour balance comes out too green (use the MaxRGB mode to find green dominance, so you can balance it out; a rough sketch of what such a diagnostic boils down to follows at the end of this post). It is - again - a single click on a green-dominant area to eliminate this green bias. Detailed help with all of the above can be found in the manual, on the website and in the user notes created by the community on the forums. The Color module documentation even uses M101 to demonstrate how to properly calibrate in StarTools and what to look out for.

StarTools may be easy to get to grips with quickly on a basic level. However, like any other software, learning what the software does (and preferably how it does it) will be necessary to bend it to your will; every module is utterly configurable to your tastes (again, as long as actual recorded signal is respected).

The other important aspect of image processing is using a reference screen that is properly calibrated, both in the color and brightness domain. If you can see a "vaseline"-like appearance of the background after noise reduction, then your screen is set too bright. If the background seems pitch black, it is set too dark. ST is specifically designed to use the full dynamic range it assumes is at its disposal on a correctly calibrated screen, with a correct tapering off of the brightness response (even then, you can add back some Equalized Grain if you really must and don't like the - normally invisible - softness). Unfortunately, there is a shocking number of people who have never calibrated their screens and just assume that what they see is what someone else will see. It doesn't matter which software you use; if your screen is poorly calibrated (too bright or too dim), you will make incorrect decisions and/or fret about things that are not visible on a properly calibrated screen (which, mind you, closely correlates to the average screen you can expect your audience to view your work on!).
When in doubt, check your final image on as many screens as you can get your hands on. Better yet, add a second (or third) screen to your image processing rig if practical.

Finally, one last word of caution on the use of software that neurally hallucinates detail, like Topaz AI Sharpen or Denoise; using these "tools" is a bridge too far for many APers, as the "detail" that was added or "shaped" was never recorded and, while looking plausible to a layperson, is usually easily picked up on as a deep fake by peers with a little more experience. I hope any of this helps!
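For the curious, here is a rough, illustrative sketch (in Python/numpy, not StarTools' actual code) of what a MaxRGB-style diagnostic boils down to: show, per pixel, only the dominant channel, so a green colour cast jumps out as large green areas in the background.

```python
# Illustrative MaxRGB-style diagnostic (not StarTools' implementation):
# each pixel keeps only its dominant channel, so a green bias becomes obvious.
import numpy as np

def max_rgb_view(rgb):
    """rgb: float array of shape (H, W, 3). Returns an image where only the
    dominant channel of each pixel is kept."""
    dominant = np.argmax(rgb, axis=-1)             # 0=R, 1=G, 2=B per pixel
    out = np.zeros_like(rgb)
    for c in range(3):
        mask = dominant == c
        out[..., c][mask] = rgb[..., c][mask]      # keep only the winning channel
    return out

# Usage with a stand-in array (a real image would be loaded from file instead):
img = np.random.rand(64, 64, 3).astype(np.float32)
diagnostic = max_rgb_view(img)
```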
  4. Happy to do a personalised tutorial with your own dataset if that helps (absolutely 0 obligations - we all do this for the love of the hobby!). Making sure your data is the best it can be is incredibly important (see here), particularly for StarTools, but also if you wish to trial PI.
  5. Just to clarify how/why ST takes a different approach here. You are absolutely right, of course, in that the point of a non-linear stretch is to adapt linear data for human brightness perception (which roughly follows a power function). However, the specific design goal for the global stretch in StarTools ("AutoDev") is not to pick any "winners" or "losers" (in terms of detail) in the dynamic range yet. That is, in StarTools we solve for the best "compromise" non-linear stretch that shows all detail equally in the shadows, midtones and highlights. You don't fret so much about "showing the most amount of detail", and the software is able to get you to a ballpark optimal stretch by objective statistical analysis (a toy illustration of the idea follows at the end of this post). You then go on to progressively refine and optimise dynamic range locally - from coarse to ultra-fine - with subsequent tools. The process is much akin to how a sculptor starts with a rough block of stone and progressively carves out finer and finer features. As a result, one global stretch iteration is all it takes in StarTools, while subsequent algorithms (and you) have a much easier time lifting things from the shadows or rescuing them from the highlights. Hope that helps / makes sense. Sorry, OP, for hijacking this thread for this little aside!
More on-topic; definitely have a look at the different trials on offer. And - most importantly - try software with good quality data. The worst thing you can do is judge software on how well it is able to hide flaws, because as you progress in AP, you learn how to minimise flaws and will become much more concerned with making the most of your hard-won data. Clear skies!
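To be clear, the following is only a toy illustration of the idea of choosing one global stretch by an objective criterion, and emphatically not AutoDev's actual algorithm; it simply picks, from a family of power-law stretches, the one whose histogram spreads pixel values most evenly across shadows, midtones and highlights (sketch in Python/numpy):

```python
# Toy illustration only - NOT AutoDev's actual algorithm. The idea: instead of
# eyeballing a curve, pick one global stretch by an objective criterion. Here
# the criterion is "most even spread of pixel values" (histogram entropy).
import numpy as np

def toy_auto_stretch(linear, candidates=np.linspace(1.5, 8.0, 40)):
    """linear: float image normalised to 0..1. Returns (stretched, best_gamma)."""
    best_gamma, best_entropy = None, -np.inf
    for g in candidates:
        stretched = linear ** (1.0 / g)                   # simple power-law stretch
        hist, _ = np.histogram(stretched, bins=256, range=(0, 1))
        p = hist[hist > 0] / hist.sum()
        entropy = -(p * np.log2(p)).sum()                 # evenness of the histogram
        if entropy > best_entropy:
            best_gamma, best_entropy = g, entropy
    return linear ** (1.0 / best_gamma), best_gamma

# Usage with a stand-in "linear" frame (heavily skewed, like real linear data):
frame = np.random.rand(128, 128) ** 3
stretched, gamma = toy_auto_stretch(frame)
```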
  6. Don't go with these guys. Selling a GT710 as a "Gaming PC" is like selling a Fiat 500 as a "Military Vehicle". They are being very dishonest. A GT710 is a bottom-of-the-barrel display adapter and will not run any games. It won't, in fact, be any faster than the iGPU that the i5 already comes with (so it's really just a waste of space, power, and money). For a (very) ballpark idea of where a GPU (or iGPU) ranks in terms of speed for StarTools, have a look here.
EDIT: Depending on where you live / postage, etc., you could make a bid on something like this; https://www.ebay.co.uk/itm/174946058520?hash=item28bb990118:g:CCMAAOSwIIRhR3CB It's an older i7 quad core CPU, but it holds its own against many newer i7 quad cores. Crucially, it comes with a pretty decent GPU. You'd want to add some more storage though. The 2nd generation i7 2xxx CPUs or 3rd generation i7 3xxx CPUs are considered "old" now, but perform almost as well as the 3rd, 4th and even 6th and 7th generations of i7 quad cores. This knowledge lets you save some money on the CPU/system and put that towards RAM, storage or a GPU. You could even decide to buy a cheap GPU used, later or separately (just make sure it will fit in the case and that the power supply can supply the power; this is less of a given when going for office/OEM systems like the Dells and the HPs). If you know someone who will install it for you (or just look up a YouTube video - it's a 5 minute job with a couple of screws), it's a great way to save some cash. For the GPU, 1GB and 2GB GPUs have fallen out of favour, but are perfectly fine for StarTools. You could try to score one with good performance on that OpenCL benchmark (something like an HD7850, for example this guy).
  7. Some sort of discrete graphics card would help immensely with StarTools (as it is fully GPU accelerated as of 1.7). The problem with Small Form Factor machines is that you can only fit the more expensive low-profile cards. See if you can find a machine that fits a full-sized card and will at least let you upgrade later on. SFF machines also tend to have smaller/custom power supplies that limit the sort of GPU you can put in there later on. Just something to keep in mind!
  8. For anyone interested in the method outlined in my previous post, this is a test image (TIFF), graciously created and donated to the public by Mark Shelley, to explore colour retention and rendering. It is a linear image, with added noise and added bias (to mimic, say, light pollution). This is the TIFF stretched with a gamma of 4.0; This is the image once "light pollution" was modelled and subtracted (in the linear domain of course!), and then stretched with a gamma of 4.0; And this is the image once its colours were white balanced independently of the luminance (in the linear domain of course; never "tweak" your colours in PS/GIMP/Affinity once your image is no longer linear - it makes 0 sense from a signal processing PoV!) with a factor of 1.16x for red and 1.39x for blue vs 1.0x for green, and subsequently composited with a gamma 4.0 stretched luminance; Notice how the faintest "stars" still have the same colouring perceptually (CIELAB space was used).

You can also, more naively, force R:G:B ratio retention, but this obviously forces perceptual brightness changes depending on the colours (blue being notoriously dimmer); In this rendition, the blue "stars" seem much less bright than the white stars, but R:G:B ratios are much better respected (ratios are reasonably well preserved, even in the face of the added noise and the modelling and subtraction of a severe, unknown bias). It's a trade-off. But as always, knowing about these trade-offs allows you to make informed decisions. Regardless, notice also how colouring in bright "stars" is resolved until no colour information is available due to over-exposure.

FWIW, any basic auto-balancing routine (for example a simple "grey world" implementation) should come close to the ~1.16:1.0:1.39 R:G:B colour balance (a small sketch of this follows at the end of this post). The benefits of this method should hopefully be clear; it doesn't matter what exposure time was used, how bright the objects are, or how sensitive your camera is in a particular channel - you should get very similar results. All that is required is that the spectral response of the individual channels is "ballpark" equal to that of all other cameras. This tends to be the case - the whole point of the individual channel responses for visual spectrum purposes is to mimic the response of the human eye to begin with. In essence, this method exploits a de-facto "standard" amongst all cameras; errors will fluctuate around the average of the specific spectral responses of all cameras. In my experience, those deviations tend to be remarkably small (which - again - is to be expected by design). Of course, take into account the aforementioned caveats (filter response of mono CCD filters, violet "bump", proper IR/UV cut-off, etc.). All that remains is making sure your white balance/reference can be argued to be "reasonable" (there are many ways to do this, as mentioned before; sampling the average of foreground stars, a nearby galaxy, a G2V star, balancing by known juxtaposed processes like Ha vs O-III, etc.).
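To make the "grey world" idea concrete, here is a rough sketch in Python/numpy (illustrative only; it assumes the bias/light pollution has already been modelled and subtracted in the linear domain, and the stand-in channel scaling is chosen so the recovered gains land near the ~1.16:1.0:1.39 example above):

```python
# Illustrative "grey world" white balance in the linear domain (not production code).
# Assumes bias/light pollution has already been modelled and subtracted, so the
# channel means are dominated by real signal.
import numpy as np

def grey_world_factors(rgb):
    """rgb: linear float array (H, W, 3). Returns per-channel gains that
    equalise the channel means, relative to green (G gain = 1.0)."""
    means = rgb.reshape(-1, 3).mean(axis=0)
    return means[1] / means

def apply_balance(rgb, gains):
    return rgb * gains                        # scale each channel in the linear domain

# Usage with a stand-in linear image whose channels are deliberately imbalanced:
linear = np.random.rand(100, 100, 3) * np.array([0.86, 1.0, 0.72])
gains = grey_world_factors(linear)            # roughly [1.16, 1.0, 1.39] for this stand-in
balanced = apply_balance(linear, gains)
```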
  9. Therein lies the rub. There is no single accepted raw XYZ transformation - even if we all used the exact same camera! (hence the many different "modes" on cameras - see the presentation I linked to earlier). You can definitely attempt to create a calibration matrix (as included by all manufacturers), but even the construction of such a matrix is an ill-posed problem to begin with. Also, you cannot really use a random reflective (like a Macbeth chart) or emissive (like a screen) calibration target either, without ensuring that its SPD is representative of the SPD under which you are expecting to record. As you know, we record objects with many different SPDs (every star has its own unique SPD). Sure, you can standardise on the SPD of our own sun (which is what G2V calibration does), but this is precisely one of those arbitrary steps. Others use the average SPD of all stars in an image, yet others use the SPD of a nearby galaxy.

Is all lost then? No, not entirely! As @vlaiv alludes to, it is possible to create colour renditions that vary comparatively little and are replicable across many different setups, cameras and exposures (though not entirely in the way he suggests). Of course, here too, arbitrary assumptions are made, but they are minimised. The key assumption is that, overall, (1) visual spectrum camera-space RGB response is (2) similar across cameras. (1) This assumes we operate in the visual spectrum - if your camera has an extended response (many OSCs do), add in a luminance (aka IR/UV cut) filter. (2) This assumes that red, green and blue filters record the same parts of the spectrum. Mercifully, this tends to be very close, no matter the manufacturer. One notable exception is that many consumer-oriented cameras have a bump in sensitivity in the red channel to be able to record violet, whereas many B filters (as used with mono CCDs) do not.

This, in essence, sidesteps one XYZ colour space conversion step entirely and uses a derivative (i.e. still white balanced!) of camera-space RGB directly for screen-space RGB. This is, in fact, what all AP-specific software does (e.g. PI, APP and ST). All AP software dispenses with camera response correction matrices entirely, as they simply cannot be constructed for AP scenes without introducing arbitrary assumptions. As a matter of fact, StarTools is the only software that will allow you to apply your DSLR's manufacturer matrix (meant for terrestrial scenes and lighting) if you really wish, but its application is an entirely arbitrary step.

As @vlaiv alludes to as well, the tone curve ("stretch") we use in AP is entirely different (due to things being very faint), which drastically impacts colouring. Local detail / HDR enhancement ("local" stretching) adds to this issue and makes colour even harder to manage (which is one of the reasons why some end up with strange "orange" dust lanes in galaxies, for example - important brightness context that informs psychovisual aspects of colouring is mangled/lost; see @powerlord's fantastic "Brown - color is weird" video). Indeed, "the" solution is to set aside the colouring as soon as its calibration has been performed. You then go on to process and stretch the luminance component as needed. The reasoning is simple; objects in outer space should not magically change colour depending on how an earthling stretches the image.
Once you are done processing the luminance portion, you then composite the stretched image and the "unadulterated" colour information (in a colour space of your choosing) to arrive at a "best of both worlds" scenario (a minimal sketch of this compositing follows at the end of this post). Adhere to this and you can create renditions that have very good colour consistency across different targets, while showing a wealth of colour detail. Crucially, the end result will vary markedly little between astrophotographers, no matter the conditions or setup/camera used. This approach is StarTools' claim to fame and is the way it works by default (but it can obviously be bypassed / changed completely). With identical white references, you should be able to achieve results that are replicable by others, and that is - not coincidentally - very close to what scientific experiments (and by extension documentary photography) are about.
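As a minimal sketch of the general idea (and not StarTools' exact compositing): calibrate the colour while the data is still linear, stretch only a luminance copy, then recombine so the stretched brightness carries the calibrated channel ratios. Note this particular recombination preserves R:G:B ratios - the simpler of the two approaches discussed in my earlier post - rather than the perceptual (CIELAB-based) variant:

```python
# Minimal sketch of "stretch the luminance, keep the calibrated colour"
# (illustrative only). The white-balanced linear RGB supplies the channel
# ratios; a separately stretched luminance supplies the brightness.
import numpy as np

def composite_lum_colour(linear_rgb, stretched_lum, eps=1e-8):
    """linear_rgb: colour-calibrated linear data, shape (H, W, 3), 0..1.
    stretched_lum: non-linearly stretched luminance, shape (H, W), 0..1."""
    lum_linear = linear_rgb.mean(axis=-1)                   # simple luminance proxy
    ratios = linear_rgb / (lum_linear[..., None] + eps)     # per-pixel channel ratios
    out = ratios * stretched_lum[..., None]                 # re-apply stretched brightness
    return np.clip(out, 0.0, 1.0)

# Usage with stand-in data and a simple gamma stretch of the luminance:
linear = np.random.rand(64, 64, 3) ** 3
lum = linear.mean(axis=-1) ** (1.0 / 4.0)                   # e.g. a gamma 4.0 stretch
result = composite_lum_colour(linear, lum)
```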
  10. You are absolutely right, of course, that they can be expressed as XYZ values (apologies for not being precise enough). It's super easy in fact - the coordinates fall on the spectral locus itself (i.e. on the circumference of a CIE xy chromaticity diagram), which, as you point out, is indeed outside of real-world gamuts to begin with (a small worked example follows below). What I meant is that the monochromatic light we record (in raw, camera-specific RGB) cannot be expressed as one pre-determined XYZ value, due to colour space conversion (which necessitates picking a white point, applying response correction matrices - if you so choose - etc.). That is, you cannot say "this XYZ triplet is wrong for the camera-space RGB values/data you recorded".
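As a small worked example (my approximate numbers): H-alpha at ~656nm has a CIE 1931 chromaticity of roughly (x, y) ≈ (0.73, 0.27), which sits on the spectral locus, outside the sRGB triangle. A quick, rough check in Python (approximate CMF-derived XYZ values hard-coded; a proper colour-science library would do this more rigorously):

```python
# Rough check that monochromatic H-alpha light falls outside the sRGB gamut.
# The XYZ values below are approximate CIE 1931 2-degree observer values near 656nm.
import numpy as np

XYZ_HALPHA = np.array([0.21, 0.08, 0.0])        # approximate, un-normalised

# Standard XYZ (D65) -> linear sRGB matrix
M = np.array([[ 3.2406, -1.5372, -0.4986],
              [-0.9689,  1.8758,  0.0415],
              [ 0.0557, -0.2040,  1.0570]])

x, y = XYZ_HALPHA[:2] / XYZ_HALPHA.sum()        # chromaticity, roughly (0.72, 0.28)
rgb_linear = M @ XYZ_HALPHA                     # negative components => out of gamut
print(f"chromaticity ~ ({x:.2f}, {y:.2f}), linear sRGB = {rgb_linear}")
```

The green and blue components come out negative, i.e. no positive mix of sRGB primaries can reproduce that pure spectral colour without gamut mapping.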
  11. Indeed, color cannot be measured in XYZ tristimulus values, but it can be converted into XYZ tristimulus values. The problem is in the measuring domain, and the (arbitrary) assumptions that the measuring necessarily entails. This is, for example, the reason why monochromatic emission lines (H-alpha, O-III) cannot be expressed in XYZ values.
  12. That's a great motivation, as color is a super important but often overlooked aspect of AP. 👍 Color cannot be measured like that, simply because there is no single canonical reference. What looks white to you in daylight over there in Europe looks yellow to me right now here in dark Australia. Color cannot be measured. Some things that cause color can be measured, but their interpretation is not set (see the presentation).
  13. This thread serves little purpose? Color rendering (and also perception) is highly subjective. This excellent presentation by Michael Brown will teach you everything you ever wanted to know about the entire imaging pipeline. This, among other things, includes the influence of color space conversions/white references and tone curves, all of which directly and dramatically influence color rendition. All of these are fair game and are even more arbitrary when dealing with astronomical images (there is no single canonical white reference or illuminant for astrophotographical scenes, nor is there one "ballpark" prescribed tone curve/stretch). That's not to say anything goes, but the continuum of solutions is vast.
  14. Great to hear it helped and is making a great image even better. You are indeed correct about the square root. If you try the different presets, you should notice the number of extra bits (i.e. increased precision in the dataset) going up by whole integers. Do let me know if you'd like a 1.8 preview (which has some improvements/tweaks to Denoise) - I can send you a private link. Clear skies!