Everything posted by jager945

  1. Hi, As others alluded to, it appears that the dataset's channels were heavily re-weighted. So much so that it appears to me the only usable luminance signal resides in just the red channel. I'm not sure at what point this is happening, but this is the root of the issue (on the StarTools website you can find a guide on preparing your dataset/signal for best results, which should hold for most applications). If you wish to "rescue" the current stack, you can extract the red channel and use it as your luminance (which I think some others have done here), while using the entire stack for your coloring. The gradients then, while troublesome, become fairly inconsequential, as they mostly affect the channels that were artificially boosted - they will only appear in the coloring where the luminance signal allows them to. Processing then becomes more trivial.
In the visual spectrum, HII regions like this are typically colored a reddish-pink due to a mixture of emissions at different wavelengths (not just H-alpha!). Pure red is fairly rare in outer space (areas filtered by dust are probably the biggest exception). The use of a CLS filter will usually not change this - it just cuts out a part of the spectrum (mostly yellows). What a CLS filter will change, however, is the ability of some color balancing algorithms (including PCC) to come up with anywhere near correct coloring. Though you cannot expect very usable visual spectrum coloring from a CLS filter, you can still use one to create a mostly useful bi-color (depending on the object) that roughly matches visual spectrum coloring, by mapping red to red, and green + blue to green and blue, e.g. R=R, G=(G+B)/2, B=(G+B)/2 (a rough code sketch of this remapping follows at the end of this post). This will yield a pink/cyan rendition that tends to match Ha/S-II emissions in red, and Hb/O-III emissions in cyan/green fairly well, as they would appear in the visual spectrum.
In general, if you are just starting out in AP processing, these days it is worth tackling an important issue upfront: the question of whether you are okay with introducing re-interpreted, deep-faked detail that was never recorded (and cannot be corroborated in images taken by your peers), or whether you are just in it to create something plausible for personal consumption. Some are okay with randomly missing stars and noise transformed into plausible detail that does not exist in reality; others are definitely not. Depending on your goals in this hobby, resorting to inpainting or to augmenting/re-interpreting plausible detail with neural-hallucination-based algorithms as part of your workflow can be a dead end, particularly if you aspire to practice astrophotography (rather than art) and hope to some day enter photography competitions, etc.
FWIW, here is a simple and quick-ish workflow in StarTools that stays mostly true to the signal as recorded, to get you started, if useful to you;
In the Compose module, load the dataset three times, once each for red, green and blue. Set Green and Blue total exposure to 0. Keep the result - you will now be processing a synthetic luminance frame that consists entirely of the red channel's data, while using the color from all channels.
--- AutoDev: to see what we're working with. We can see heavily correlated noise (do you dither?), stacking artefacts, heavily varying star shapes and some gradients.
--- Bin: to reduce oversampling and improve signal.
--- Crop: crop away the stacking artefacts.
--- Wipe: Dark anomaly filter set to 3px; you can use correlation filtering to try to reduce the adverse effects of the correlated noise.
--- AutoDev: set an RoI that includes Melotte 15 and IC1795/NGC896, and increase the Ignore Fine Detail parameter until AutoDev no longer picks up on the correlated noise. You should notice full stellar profiles visible at all times.
--- Contrast: Equalize preset.
--- HDR: Optimize preset. This should resolve some detail in Melotte 15 and NGC896.
--- Sharp: defaults.
--- SVDecon: the stars are quite heavily deformed in different ways depending on location (i.e. the point spread function of the detail is "spatially variant"). This makes the dataset a prime candidate for restoration through spatially variant PSF deconvolution, though expectations should be somewhat tempered due to the correlated noise. Set samples across the image, so SVDecon has examples of all the different star shapes. Set Spatial Error to ~1.4 and PSF Resampling to Intra-Iteration + Centroid Tracking Linear. You should see stars coalesce into point lights better (with the same happening to detail that was "smeared out" in a similar way, now being restored). It's not a 100% fix for misshapen stars, but it is a fix based on actual physics and recorded data, as far as the SNR allows. Improved acquisition is obviously the more ideal way of taking care of misshapen stellar profiles though.
--- Color: the processed synthetic luminance is now composited with the coloring. The module yields the expected visual-spectrum-without-the-yellows (strong orange/red and blue) result, consistent with a CLS filter. As mentioned above, you can opt for a bi-color remapping (Bi-Color preset). You should now see predominantly HII red/pink nebulosity, with hints of Hb/O-III blue, as well as some blue stars.
--- Shrink: makes stars less prominent (but never destroys them). Defaults. You may wish to increase Deringing a little if Decon was applied and ringing is visible.
--- Super Structure: Isolate preset, with Airy Disk Radius at 10% (this is a widefield). This pushes back the noisy background, while retaining the superstructures (and their detail). You could do a second pass with the Saturate preset if desired.
--- Switch Tracking off: perform final noise reduction to taste. Defaults were used for expediency/demonstration, but it should be possible to mitigate the correlated noise better (ideally it should not be present in your datasets at all though!).
You should end up with something like this; Hope this helps!
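For anyone who would rather do the red-as-luminance extraction and the bi-color remap outside of StarTools, here is a minimal numpy sketch of the arithmetic described above. The array layout (a linear HxWx3 stack) and the function names are my own assumptions for illustration, not anything prescribed by StarTools or the stacker.

import numpy as np

def red_as_luminance(stack):
    # 'stack' is a linear HxWx3 float array fresh from the stacker.
    # Use only the red channel as the (synthetic) luminance signal.
    return stack[..., 0]

def bicolor_remap(stack):
    # R = R, G = B = (G + B) / 2: a rough visual-spectrum bi-color in which
    # Ha/S-II emissions end up red/pink and Hb/O-III emissions end up cyan/green.
    out = stack.copy()
    gb = 0.5 * (stack[..., 1] + stack[..., 2])
    out[..., 1] = gb
    out[..., 2] = gb
    return out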
  2. Indeed, apologies, I'm really just adding a little to the misinformation; these filters are indeed technically "wideband" filters (W designation), but - most importantly for your assertions - they are not the type of wideband filters (RGB filters) we all use; they do not record color as-we-see-it. They record different parts of the spectrum; F814W records mostly infrared and F435W records well into the ultra-violet, while there is a substantial gap in visual red response between F814W and F555W. Bottom line is that you will not get visual spectrum images out of this filter set, and using any HST image as a reference for visual spectrum (400-700nm) colouring is a bad (non-sensical) idea! With regards to (current) AI tools, it is really quite simple; they add stuff that isn't there and isn't real, using external sources. This is in contrast to algorithms that exclusively transform and use what is in your dataset. Such operations can usually be reversed by applying the inverse of the operation to arrive at the original image. Not so with neural hallucination. Its whole premise is precisely to neurally hallucinate "plausible" detail from a given input. Plausible does not equate to real. Let alone the fact that any plausible detail originates from an exclusively non-astro training set in the case of the Topaz suite. Things like StarNet++ are a solution looking for a problem (other than rendering a plausible starless image for artistic purposes of course, or using it to create star masks - the latter is absolutely a good use case!); I cannot think of any legitimate reason to introduce data that was never recorded into your photograph. Separating stars from background for the purpose of compositing later is wholly unnecessary and yields no benefits, only drawbacks (in the form of artifacts). Fortunately, it is usually easy to pick when StarNet was used for that purpose, even for a layperson (the Swiss cheese effect, translucent stars, missing stars, odd discs, etc.). I have nothing against AI (I studied AI at University!), but the way it is currently employed by "tools" like StarNet and Topaz AI is unsophisticated and gimmicky, rather than making true photographs actually better (again, with the exception of identifying stars for the purpose of masking). There are absolutely legit applications for AI in astronomy (and even image processing tasks), but neural hallucination is probably the laziest, lowest hanging fruit for which a neural net can be deployed. It's literally the turn-your-face-into-a-super-model Instagram filter equivalent of astrophotography (sometimes with equally hilarious/disturbing results - to me anyway). We can do better. And I'm convinced a useful application will come along some day. In the meantime, I would urge anyone who thinks these things are the best thing since sliced bread to do a little research into how they work. It's not magic; the resulting detail is not real, wasn't just "hidden" in your dataset, and was not really recorded by you. The emperor has no clothes.
  3. Unfortunately, there is much misinformation here - from the usual suspect - about colouring. Citing a Hubble narrowband composite (acquired with F435W (B), F555W (V) and F814W (I) filters) as a visual spectrum reference image is all you need to know about the validity of the "advice" being dispensed here. In ST, the Color module starts you off in the Color Constancy mode, so that - even if you don't prefer this mode - you can sanity check your color against known features and processes; a visual spectrum image of a nearby spiral galaxy should reveal a yellow-ish core (less star formation due to gas depletion = only older stars left), a bluer outer rim (more star formation), red/brown dust lanes (white-ish light filtered by dust grains) and purple/pinkish HII areas dotted around (red Ha + other blue Balmer series, some O-III = pink). Foreground stars should exhibit a good random selection of the full black body radiation curve. You should be able to easily distinguish red, orange, yellow, white and blue stars in roughly equal numbers (provided the field is wide enough and we're not talking about any sort of associated/bound body of stars). You are totally free (and able!) to achieve any look you wish in StarTools (provided your wishes follow physics and sound signal processing practices). For example, if you prefer the old-school (and decidedly less informative) desaturated-highlight look of many simpler apps, it is literally a single click in the Color module on the "Legacy" preset button. With this dataset in particular, you do need to take care that your color balance is correct, as the default colour balance comes out too green (use the MaxRGB mode to find green dominance, so you can balance it out - a small sketch of the idea follows at the end of this post). It's - again - a single click on a green-dominant area to eliminate this green bias. Detailed help with all of the above can be found in the manual, the website and the user notes created by the community on the forums. The Color module documentation even uses M101 to demonstrate how to properly calibrate in StarTools and what to look out for. StarTools may be easy to get to grips with quickly on a basic level. However, like any other software, learning what the software does (and preferably how it does it) will be necessary to bend it to your will; every module is utterly configurable to your tastes (again, as long as actual recorded signal is respected). The other important aspect of image processing is using a reference screen that is properly calibrated, both in the color and brightness domain. If you can see a "vaseline"-like appearance of the background after noise reduction, then your screen is set too bright. If it seems the background is pitch black, it is set too dark. ST is specifically designed to use the full dynamic range it assumes is at its disposal on an assumed correctly calibrated screen, with an assumed correct tapering off of the brightness response (even then, you can add back some Equalized Grain if you really must and don't like the - normally invisible - softness). Unfortunately, there is a shocking number of people who have never calibrated their screens and just assume that what they see is what someone else will see. It doesn't matter which software you use; if your screen is poorly calibrated (too bright or too dim), you will make incorrect decisions and/or fret about things that are not visible on a truly calibrated screen (which, mind you, closely correlates to the average screen you can expect your audience to view your work on!).
When in doubt, check your final image on as many screens as you can get your hands on. Better yet, add a second (or third) screen to your image processing rig if practical. Finally, one last word of caution on the use of software that neurally hallucinates detail, like Topaz AI Sharpen or Denoise; using these "tools" is a bridge too far for many APers, as the "detail" that was added or "shaped" was never recorded and, while looking plausible to a layperson, is usually easily picked up on as a deep fake by your peers with a little more experience. I hope any of this helps!
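Purely as an illustration of the MaxRGB tip above (and definitely not StarTools' actual implementation), here is a toy numpy sketch that measures how much of a stretched image is green dominant; if a large fraction of the frame (away from genuine O-III areas) comes out green dominant, the green multiplier is probably set too high.

import numpy as np

def green_dominance_fraction(rgb):
    # rgb: HxWx3 stretched image. A pixel counts as green dominant when G exceeds both R and B.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return ((g > r) & (g > b)).mean()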
  4. Happy to do a personalised tutorial with your own dataset if that helps (absolutely 0 obligations - we all do this for the love of the hobby!). Making sure your data is the best it can be is incredibly important (see here), particularly for StarTools, but also if you wish to trial PI.
  5. Just to clarify how/why ST takes a different approach here. You are absolutely right of course, in that the point of a non-linear stretch is to adapt linear data for human brightness perception (which roughly follows a power function). However, the specific design goal for the global stretch in StarTools ("AutoDev") is not to pick any "winners" or "losers" (in terms of detail) in the dynamic range yet. That is, in StarTools we solve for the best "compromise" non-linear stretch that shows all detail equally in the shadows, midtones and highlights. In other words, you don't fret so much about "showing the most detail", and the software is able to get you into a ballpark optimum stretch by objective statistical analysis. You then go on to progressively refine and optimise - from coarse to ultra-fine - dynamic range locally with subsequent tools. The process is much akin to how a sculptor starts with a rough block of stone and progressively carves out finer and finer features. As a result, one global stretch iteration is all it takes in StarTools, while subsequent algorithms (and you) have a much easier time lifting things from the shadows or rescuing them from the highlights. Hope that helps / makes sense. Sorry OP for hijacking this thread for this little aside! More on-topic; definitely have a look at the different trials on offer. And - most importantly - try software with good quality data. The worst thing you can do is judge software on how well it is able to hide flaws, because as you progress in AP, you learn how to minimise flaws and will become much more concerned with making the most of your hard won data. Clear skies!
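AutoDev's actual solver isn't public, so the following is only a toy illustration of the general idea of letting objective statistics pick a ballpark global stretch rather than eyeballing one: it tries a range of gamma values and keeps the one that maximises the entropy of the stretched histogram (i.e. spreads detail most evenly across shadows, midtones and highlights). Function and parameter names are my own.

import numpy as np

def toy_global_stretch(lum, gammas=np.linspace(1.0, 8.0, 71), bins=256):
    # lum: linear luminance, normalised to 0..1.
    best_gamma, best_entropy = 1.0, -np.inf
    for g in gammas:
        stretched = lum ** (1.0 / g)
        hist, _ = np.histogram(stretched, bins=bins, range=(0.0, 1.0))
        p = hist / hist.sum()
        p = p[p > 0]
        entropy = -(p * np.log2(p)).sum()  # histogram entropy of the stretched image
        if entropy > best_entropy:
            best_gamma, best_entropy = g, entropy
    return lum ** (1.0 / best_gamma), best_gamma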
  6. Don't go with these guys. Selling a GT710 as a "Gaming PC" is like selling a Fiat 500 as a "Military Vehicle". They are being very dishonest. A GT710 is a bottom-of-the-barrel display adapter, and will not run any games. It won't, in fact, be any faster than the iGPU that the i5 already comes with (so it's really just a waste of space, power, and money). For a (very) ballpark idea of where a GPU (or iGPU) ranks in terms of speed for StarTools, have a look here. EDIT: Depending on where you live / postage, etc., you could make a bid on something like this; https://www.ebay.co.uk/itm/174946058520?hash=item28bb990118:g:CCMAAOSwIIRhR3CB It's an older i7 quad core CPU, but it holds its own against many newer i7 quad cores. Crucially, it comes with a pretty decent GPU. You'd want to add some more storage though. The 2nd generation i7 2xxx CPUs or 3rd generation i7 3xxx CPUs are considered "old" now, but perform almost as well as the 3rd, 4th and even 6th and 7th generations of i7 quad cores. This knowledge lets you save some money on the CPU/system and put that towards RAM, storage or a GPU. You could even decide to buy a cheap GPU used, later or separately (just make sure it will fit in the case, and that the power supply can supply the power; this is less of a given when going for office/OEM systems like the Dells and the HPs). If you know someone who will install it for you (or just look up a YouTube video - it's a 5 minute job with a couple of screws), it's a great way to save some cash. For the GPU, 1GB and 2GB GPUs have fallen out of favour, but are perfectly fine for StarTools. You could try to score one with good performance on that OpenCL benchmark (something like an HD7850, for example this guy).
  7. Some sort of discrete graphics card would help immensely with StarTools (as it is fully GPU accelerated as of 1.7). The problem with Small Form Factor machines is that you can only fit the more expensive low profile cards. See if you can find a machine that fits a full-sized card, and that will at least let you upgrade later on. SFF machines also tend to have smaller/custom power supplies that limit the sort of GPU you can put in there later on. Just something to keep in mind!
  8. For anyone interested in the method outlined in my previous post, this is a test image (TIFF), graciously created and donated to the public by Mark Shelley, to explore colour retention and rendering. It is a linear image, with added noise and added bias (to mimic, say, light pollution). This is the TIFF stretched with a gamma of 4.0; This is the image once the "light pollution" was modelled and subtracted (in the linear domain of course!), and then stretched with a gamma of 4.0; And this is the image once its colours were white balanced independently of the luminance (in the linear domain of course; never "tweak" your colours in PS/GIMP/Affinity once your image is no longer linear - it makes 0 sense from a signal processing PoV!) with a factor of 1.16x for red and 1.39x for blue vs 1.0x for green, and subsequently composited with a gamma 4.0 stretched luminance; Notice how the faintest "stars" still have the same colouring perceptually (CIELAB space was used). You can also, more naively, force R:G:B ratio retention, but this obviously forces perceptual brightness changes depending on colours (blue being notoriously dimmer); In this rendition, the blue "stars" seem much less bright than the white stars, but the R:G:B ratios are much better respected (ratios are reasonably well preserved, even in the face of the added noise and the modelling and subtraction of a severe, unknown bias). It's a trade-off. But as always, knowing about these trade-offs allows you to make informed decisions. Regardless, notice also how colouring in bright "stars" is resolved until no colour information is available due to over-exposure. FWIW, any basic auto-balancing routine (for example a simple "grey world" implementation) should come close to the ~ 1.16:1.0:1.39 R:G:B colour balance. The benefits of this method should hopefully be clear; it doesn't matter what exposure time was used, how bright the objects are, or how sensitive your camera is in a particular channel - you should get very similar results. All that is required is that the spectral response of the individual channels is a "ballpark" match for that of all other cameras. This tends to be the case - the whole point of the individual channel response for visual spectrum purposes is to mimic the response of the human eye to begin with. In essence, this method exploits a de-facto "standard" amongst all cameras; errors will fluctuate around the average of all specific spectral responses of all cameras. In my experience, those deviations tend to be remarkably small (which - again - is to be expected by design). Of course, taking into account the aforementioned caveats (filter response of mono CCD filters, violet "bump", proper IR/UV cut-off etc.). All that remains is making sure your white balance/reference can be argued to be "reasonable" (many ways to do this, as mentioned before; sampling an average of foreground stars, a nearby galaxy, a G2V star, balancing by known juxtaposed processes like Ha vs O-III, etc.).
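To make the order of operations above concrete, here is a minimal numpy sketch (my own illustration, not Mark Shelley's or StarTools' actual code) of the naive R:G:B-ratio-retention variant: subtract the modelled bias in the linear domain, white balance in the linear domain, stretch a luminance rendition separately, then composite. The 1.16/1.0/1.39 factors are the ones quoted above; the CIELAB-constant variant is more involved and not shown here.

import numpy as np

def composite_ratio_retention(linear_rgb, bias, wb=(1.16, 1.0, 1.39), gamma=4.0):
    # 1. Subtract the modelled bias ("light pollution") while the data is still linear.
    rgb = np.clip(linear_rgb - bias, 0.0, None)
    # 2. White balance the colour information, also in the linear domain.
    balanced = rgb * np.asarray(wb)
    # 3. Stretch a luminance rendition separately (gamma 4.0, as in the example above).
    lum = np.clip(rgb.mean(axis=-1), 0.0, 1.0) ** (1.0 / gamma)
    # 4. Composite: keep the balanced R:G:B ratios, but use the stretched brightness.
    ratios = balanced / (balanced.sum(axis=-1, keepdims=True) + 1e-12)
    return np.clip(3.0 * ratios * lum[..., None], 0.0, 1.0)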
  9. Therein lies the rub. There is no single accepted raw XYZ transformation - even if we all used the exact same camera! (hence the many different "modes" on cameras - see the presentation I linked to earlier). You can definitely attempt to create a calibration matrix (as included by all manufacturers), but even the construction of such a matrix is an ill-posed problem to begin with. Also, you cannot really use a random reflective (like a Macbeth chart) or emissive (like a screen) calibration target either, without ensuring that its SPD is representative of the SPD under which you are expecting to record. As you know, we record objects with many different SPDs (every star has its own unique SPD). Sure, you can standardise on the SPD of our own sun (which is what G2V calibration does), but this is precisely one of those arbitrary steps. Others use the average SPD of all stars in an image, yet others use the SPD of a nearby galaxy. Is all lost then? No, not entirely! As @vlaiv alludes to, it is possible to create colour renditions that vary comparatively little and are replicable across many different setups, cameras and exposures (though not entirely in the way he suggests). Of course, here too, arbitrary assumptions are made, but they are minimised. The key assumption is that, overall, (1) visual spectrum camera space RGB response is (2) similar. (1) This assumes we operate in the visual spectrum - if your camera has an extended response (many OSCs do), add in a luminance (aka IR/UV cut) filter. (2) This assumes that red, green and blue filters record the same parts of the spectrum. Mercifully, this tends to be very close, no matter the manufacturer. One notable exception is that many consumer-oriented cameras have a bump in sensitivity in the red channel to be able to record violet, whereas many B filters (as used in mono CCDs) do not. This, in essence, sidesteps one XYZ colour space conversion step entirely and uses a derivative (i.e. still white balanced!) of camera space RGB directly for screen space RGB. This is, in fact, what all AP-specific software does (e.g. PI, APP and ST). All AP software dispenses with camera response correction matrices entirely, as they simply cannot be constructed for AP scenes without introducing arbitrary assumptions. As a matter of fact, StarTools is the only software that will allow you to apply your DSLR manufacturer's matrix (meant for terrestrial scenes and lighting) if you really wish, but its application is an entirely arbitrary step. As @vlaiv alludes to as well, the tone curve ("stretch") we use in AP is entirely different (due to things being very faint), which drastically impacts colouring. Local detail / HDR enhancement ("local" stretching) adds to this issue and makes color even harder to manage (which is one of the reasons why some end up with strange "orange" dust lanes in galaxies, for example - important brightness context that informs psychovisual aspects of colouring is mangled/lost; see @powerlord's fantastic "Brown - color is weird" video). Indeed, "the" solution is to set aside the colouring as soon as its calibration has been performed. You then go on to process and stretch the luminance component as needed. The reasoning is simple; objects in outer space should not magically change colour depending on how an earthling stretches the image.
Once you are done processing the luminance portion, you then composite the stretched image and the "unadulterated" colour information (in a colour space of your choosing) to arrive at a "best of both worlds" scenario. Adhere to this and you can create renditions that have very good colour consistency across different targets, while showing a wealth of colour detail. Crucially, the end result will vary markedly little between astrophotographers, no matter the conditions or setup/camera used. This approach is StarTools' claim to fame, and is the way it works by default (but can obviously be bypassed / changed completely). With identical white references, you should be able to achieve results that are replicable by others, and that is - not coincidentally - very close to what scientific experiments (and by extension documentary photography) are about.
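One crude way to illustrate the "composite the stretched luminance with the unadulterated colour" idea (not StarTools' actual implementation) is to do the compositing in CIELAB: take the processed, non-linear luminance for L*, and the colour-calibrated (but unstretched) RGB for the chromaticity. This sketch assumes scikit-image is available; the function name is my own.

import numpy as np
from skimage import color  # assumed available (scikit-image)

def composite_lab(stretched_lum, calibrated_rgb):
    # stretched_lum: processed, non-linear luminance, HxW in 0..1.
    # calibrated_rgb: colour-calibrated RGB, HxWx3 in 0..1, used only for its chromaticity.
    lab = color.rgb2lab(calibrated_rgb)
    lab[..., 0] = 100.0 * stretched_lum      # replace L* with the processed luminance
    return np.clip(color.lab2rgb(lab), 0.0, 1.0)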
  10. You are absolutely right of course that they can be expressed as XYZ values (apologies for not being precise enough). It's super easy in fact - the coordinates fall on the spectral locus itself (i.e. they fall on the circumference of a CIE XYZ chromaticity diagram), which as you point out is indeed outside of real-world gamuts to begin with. What I meant is that the monochromatic light we record (in raw, camera-specific RGB) cannot be expressed as one pre-determined XYZ value, due to color space conversion (which necessitates picking a white point, applying response correction matrices - if you so choose - etc.). That is, you cannot say "this XYZ triplet is wrong for the camera-space RGB values/data you recorded".
  11. Indeed, color cannot be measured in XYZ tristimulus values, but it can be converted into XYZ tristimulus values. The problem is in the measuring domain, and the (arbitrary) assumptions that the measuring necessarily entails. This is the reason why monochromatic emission lines (H-alpha, O-III) cannot be expressed in XYZ values.
  12. That's a great motivation, as color is a super important, but often overlooked, aspect of AP. 👍 Color cannot be measured like that, simply because there is no one canonical reference. What looks white to you in daylight over there in Europe looks yellow to me right now here in dark Australia. Color cannot be measured. Some of the things that cause color can be measured; however, their interpretation is not set (see the presentation).
  13. This thread serves little purpose? Color rendering (and also perception) is highly subjective. This excellent presentation by Michael Brown will teach you everything you ever wanted to know about the entire imaging pipeline. This, amongst other things, includes the influence of color space conversions/white references and tone curves, all of which directly and dramatically influence color rendition. All of these are fair game and are even more arbitrary when dealing with astronomical images (there is no one canonical white reference or illuminant for astrophotographical scenes, nor is there one "ballpark" prescribed tone curve/stretch). That's not to say anything goes, but the continuum of solutions is vast.
  14. Great to hear it helped and is making a great image even better. You are indeed correct about the square root. If you try the different presets, you should notice the amount of extra bits (i.e. increased precision in the dataset) going up by whole integers. Do let me know if you'd like a 1.8 preview (which has some improvements/tweaks to Denoise) - I can send you a private link. Clear skies!
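As a back-of-the-envelope illustration of that square-root relationship (my own arithmetic, not the Bin module's exact accounting, and assuming uncorrelated noise): averaging an N x N block of pixels improves SNR by a factor of N, which corresponds to log2(N) extra bits of precision - hence the whole-integer steps.

import math

def extra_bits(bin_factor):
    # Averaging a bin_factor x bin_factor block of pixels with uncorrelated noise improves
    # SNR by sqrt(bin_factor ** 2) = bin_factor, i.e. adds log2(bin_factor) bits of precision.
    return math.log2(bin_factor)

print(extra_bits(2))  # 2x2 binning -> 1.0 extra bit
print(extra_bits(4))  # 4x4 binning -> 2.0 extra bits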
  15. Great image - so much to see and the objects of interest stand out well. 👍 This area is deceptively difficult to process, with so much "dark" dust. You have managed to make it (correctly) look like the dark dust is in front of the background rather than part of it. Excellent! Then you are unfortunately missing out on one of the most important and powerful bits in StarTools. 😔 You should obviously tweak the result to taste, and for your tastes - most likely - the parameter you're looking for is the "Equalized Grain" parameter. It reintroduces noise grain into the denoised result, but in such a way that its perceived magnitude is equal in all areas. In effect, it makes your image look as if it has a fixed SNR across the entire image, regardless of how you stretched or locally processed it. This leaves your image with the type of residual noise signature that terrestrial photographs possess, which is often considered "pleasing" noise (CGI often artificially introduces this sort of noise to come across as more convincing). If that is not what you're after, then there are a myriad of different ways to shape the residual noise grain (the module is a noise grain shaper which, as the name suggests, lets you shape the grain in any manner you please). Being able to use and control StarTools's Denoise module effectively will be even more important in the upcoming 1.8 version, as it will be - more than ever - a vital part of getting the most out of the new deconvolution module and, by extension, your hard-won signal. Any trouble, questions, do let me know!
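The actual module is a configurable grain shaper; purely to illustrate the end effect being described (a residual grain signature that looks uniform across the frame), here is a crude toy that simply adds constant-amplitude grain back onto a denoised, stretched image. Function and parameter names are my own.

import numpy as np

def equalized_grain_toy(denoised, amount=0.01, seed=0):
    # Add grain of constant amplitude everywhere, so the residual noise looks
    # uniform across the frame regardless of how the image was stretched.
    rng = np.random.default_rng(seed)
    return np.clip(denoised + rng.normal(0.0, amount, size=denoised.shape), 0.0, 1.0)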
  16. Sadly you are learning many incorrect things and, frankly, abject nonsense. I urge you to get some second opinions before accepting any of this into your workflow.
  17. Your latest dataset is a massive improvement! There are some small remaining issues, but they mostly have to do with the stacker. These issues are channel misalignment (causing colored fringing) and aberrant chromatic information in the highlights (over-exposed areas). You can see these issues here; There is much I can say about some of the grave misconceptions above, but hopefully the most obvious thing to anyone is that space is not brown - not under any chosen white reference - and that stars are not colourless, nor are light-reflecting objects. As mentioned (but not demonstrated) above, ideally your stars should show colouring that roughly follows the black body radiation curve. Where an individual star sits on the curve (redder or bluer) depends on the chosen white reference of course. Regardless, all star temperatures should be roughly represented in your image, from red -> orange -> yellow -> white -> blue. For example, something like this would be reasonable (but by no means the only "correct" answer); Incidentally, it is not too far off the image in Stellarium (not that that is canonical in any way or was intentional). A blow-up of that, showing star colours a bit better; Colour balancing for DSOs is not an exact science, and there is no one white reference that is canonical. However, there are various well published and - most importantly - substantiated techniques to choose a suitable white reference in your image (none of which, to my knowledge, involve targeting stars with a B-V value of 0.3-0.35, which is very blue - well over 7000K!). These techniques include;
- Using a nearby spiral galaxy as a reference (example source that is not me, and rationale here from an astronomer who is not me)
- Using a sun-like G2V star (B-V around 0.62 - 0.65) as a reference (example source that is not me, another example source that is not me)
- Using the aggregate of a random sampling of an unassociated star field across the image (example source that is not me).
Finally, you can also achieve a good ballpark colour balance with a little knowledge of some of the astrophysics going on in an area. If you know you have multiple, distinct features going on, you can simply make sure all features are showing up well. Examples include HII areas (pink/purple), O-III emissions (teal/green), dust lanes (red/brown), galaxy cores (yellower due to older stars remaining), galaxy rims (bluer due to younger stars), OB-class stars reflecting their blue light in nearby dust, dust-obscured areas tending redder, and so on. (text colouring for demonstration purposes only!) As rightfully mentioned above, with the exception of O-III dominant areas (such as M42's core), green channel dominance is very rare in outer space, so if you measure (for example with a dropper) the green channel being dominant across a large area in your image, you know that the green multiplier is probably set too high. Lastly, it should be emphasised that colour balancing - if this was not clear to someone - needs to be done in the linear domain. In case anyone is interested in how the full imaging pipeline in consumer DSLR cameras works, this work by Dr. Michael S. Brown is one of the most comprehensive and informative presentations I know of on the topic. It goes through all the considerations and transformations that are relevant to converting incoming photons to pixels on your screen, via the various colour spaces involved. Note that this is just for terrestrial scenes for starters!
Things get even more subjective when dealing with many different illuminants in one scene (e.g. stars with various power spectra, narrow-line ionization emissions, etc.), noting that comparatively few objects we image in outer space are reflective (let alone reflecting the single power spectrum of a G2V star's daylight filtered by an earth-like atmosphere at noon). The latter is why mono/OSC imagers (and AP software at large) dispense with camera matrix corrections for DSO scenes, and why NASA keeps re-calibrating its shots depending on lighting conditions on Mars; the lighting conditions in the scene at the time of recording matter! Of course, now that you have read all this, I have to regretfully inform you that white balancing and its intricacies are stupid, and that neither I, nor the PI team, nor NASA, nor people like Mr. Charity know what we're doing, and that all of the above is incorrect and a waste of time. 🤐
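To put rough numbers on the B-V values mentioned above, here is a quick sketch using Ballesteros' (2012) black-body approximation for converting B-V to effective temperature. The formula is my addition for illustration (it is not from the post or any of the linked sources), but it shows why a B-V of 0.3-0.35 is a distinctly blue-white star while ~0.63 is sun-like.

import math

def bv_to_temperature_kelvin(bv):
    # Ballesteros (2012) black-body approximation:
    # T = 4600 * (1/(0.92*BV + 1.7) + 1/(0.92*BV + 0.62))
    return 4600.0 * (1.0 / (0.92 * bv + 1.7) + 1.0 / (0.92 * bv + 0.62))

print(round(bv_to_temperature_kelvin(0.33)))  # ~7300 K: well over 7000K, a blue-white star
print(round(bv_to_temperature_kelvin(0.63)))  # ~5900 K: a sun-like G2V star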
  18. Good to hear you're back on track. 👍 @AbsolutelyN brings up a good point; the L-Pro filter will block some light pollution, which will help record targets with specific emissions that are not in the filtered-out part of the spectrum (Ha, Hb, O-III, etc.). However, galaxies in particular emit at all wavelengths, so you are definitely filtering out some "useful" light. Whether that is worth it versus blocking the unwanted light from light pollution depends on the situation of course. Spectrum response at the Optolong site; https://www.optolong.com/uploads/images/20191111/8b905545f314b530f408a27ce76b4b73.png
  19. Hi, I had a quick look at your dataset, but there are several pre-processing and acquisition issues that really need to be addressed. If using StarTools, please use the recommended settings for DSS.
- The dataset has been drizzled for no good reason
- There is severe pattern/correlated noise visible (probably due to drizzling)
- The dataset was not calibrated with flats - they are really not optional
- The dataset was color balanced
Particularly the lack of flats will keep you from progressing. For recommended DSS settings for use with StarTools, see here. For further recommendations and do's and don'ts, see here. I hope this helps!
  20. To both of you, please feel free to share a stacked, typical night's worth of data with me (here, via the ST forums, PM, whatever works for you). Along with that, a StarTools rendition that you're not happy with and - optionally - some other image that you do like. I'd be happy to give you any pointers. Oftentimes, things are trickier if your data is not clean. The assumption is that the only noise in your image is shot noise (from the signal) and nothing else. That's when ST can really stretch its legs and the engine can properly prove its worth. Something close to this ideal (being shot noise-limited) is absolutely achievable with most gear and instruments. However, if something else is introducing non-random, correlated patterns/artefacts in your image, then you will have to work harder (whether it is ST or some other software). In such cases, ST's algorithms will - just like a human - have a hard time telling apart detail from artefact, and will require help/tweaks from you with this subjective task that is no longer in the realm of physics/mathematics. Some things that can cause a background to get corrupted are; not dithering, bad flats, bad/old bias or darks, accidental application of some post-processing, accidental application of noise reduction (at higher ISOs), unwanted compression artefacts/issues (e.g. Nikon D5300, some Sony DSLRs) or even sensor pixel cross-talk. Symptoms include blotches, mottling, wormy/stringy noise grain, multi-pixel noise grain, streaks, zipper patterns, faint circles/banding/posterization, and/or minor black clipping (due to dark outlier rejection). (I just realised the latter reads like some sort of medication prescription leaflet... 😋). As for processing power and super computers, we already have 2002/2003/2004's super computers in every device! Weta Digital's Lord of the Rings visual FX were rendered with what is - in terms of floating-point operations per second (FLOPS) - the equivalent of a single 2021 entry-level gaming GPU. While not all algorithms are suitable for running on GPUs, CPU power is no longer as dominant as it once was. GPUs simply run rings around CPUs when it comes to raw number crunching, due to their specialised silicon. For example, deconvolution and some forms of noise reduction are several times (3x-20x) faster on the GPU versus the CPU. Try StarTools' CPU vs GPU versions and you will see what I mean. Decon previews can now complete in near real-time on the GPU. We live in amazing times to have something like that sitting on your desk or lap. With the proliferation of these compute capabilities now having achieved sufficient mass-market penetration, you will hopefully see this come to other software like PI and APP as well.
  21. This is the third time you are misquoting what I said. It really doesn't help this back-and-forth if you keep doing that. Where did I ever claim there are no illuminants in outer space? Every single star is an illuminant. Emissions can be illuminants. Anything that emits visual spectrum light can act as an illuminant for a scene if that light is reflected. What is this "defined" standard? You keep conflating the target medium illuminant (D65 in the case of sRGB) with the source colour space illuminant and transform - for which there is no "defined standard". They are two different things, and both colour spaces require specifying. You specify the target medium illuminant (sRGB's D65), but you neglect (or don't know?) that you need to specify the source colour space illuminant as well. That's not what a white point is (if you mean white reference). It's not about luminance/brightness. A white reference (for the purpose of this discussion) is a camera space value that defines a stimulus that should be colorless (brightness does not matter - it can be any shade of gray). Using that stimulus, a correction factor can be calculated for the components. Say the white reference is RGB:63,78,99; then all recorded values should be corrected by a factor of R = 99/63 (1.57), G = 99/78 (1.26), B = 99/99 (1.0). You can of course also normalize the factors to the 0-1 range; R = 1.57/1.57 (1.0), G = 1.26/1.57 (0.8), B = 1.0/1.57 (0.64). (A tiny code sketch of this arithmetic follows at the end of this post.) See DCRAW's "-a", "-A", "-w" and "-r" switches. The "-a" switch is precisely how early digital cameras found their white balance ("auto white balance"). Many cheap webcams still work like this - flip on auto white balance, show them a pastel colored piece of paper with a smiley face drawn on it with a marker, and watch them white balance away the pastel color to a neutral white/gray. Well. Yeah? That's documentary photography, is it not? Why do you think the Mars rovers have a calibration target? They could have just used your magic Macbeth chart on earth, no? Why do you think all digital cameras have special modes for indoor and incandescent lighting? You must be fun at parties. Do you keep your after-dusk party shots brown/yellow, because that's how they look under a completely non-applicable D65 illuminant (which you seem to be obsessed with)? Oh, you're actually serious... Why on earth are you giving the back and forward transform for the sRGB standard here? You seem to be under the mistaken impression that the RAW camera source colour space is some linear version of the sRGB colour space? Nowhere in any RAW converter code will you find an sRGB -> XYZ forward transform, because the RAW source color space is never already sRGB. What happens in a RAW converter is this;
Camera space RGB -> White balance (as a proxy for the illuminant) -> XYZ -> matrix correction -> sRGB
You can examine the DCRAW code to verify this. No joke; you're on the wrong Wikipedia page. You're not actually converting between two color spaces here. You're just going back and forth within one color space (sRGB), which obviously has the same illuminant, as specified in the sRGB standard. Indeed. It is easy to prove you are wrong. Let's input the blue square of a Macbeth chart (470nm, aqua-ish) and use a red monochromatic light (650nm) to see if we get any reflection. The blue square's spectral radiance (watts per steradian, per square metre, per metre) will peak at lambda = 470nm and will quickly fall off for other wavelengths. Pick any number you like for this spectral radiance of the blue square.
Now integrate that over the target power spectrum (which consists solely of our peak at 650nm) and out rolls precisely 0 for X, Y and Z. In other words, if a Macbeth square isn't red around 650nm (our square in the above example isn't), then it isn't reflected when we use a power spectrum that does not include any power in that part of the spectrum. Your Macbeth chart will only reflect its red squares and provide you no (usable) XYZ values for the other squares. You say that about formulas that literally start with "X =", "Y =" and "Z ="!? IT LITERALLY SAYS IN THE SNIPPET YOU POSTED; "multiplied by the spectral power distribution of the illuminant". At this point I really have to ask - are you trolling me? Please be honest. This nonsense is taking up so much of my time, and I really don't want to continue this if it's not to anyone's benefit, or if you're just doing this to get a rise out of me. This is a 100% honest question, as you don't seem to understand or read your own sources, and you make outrageous claims that even a novice photographer would know are just downright bizarre.
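Since the white-reference arithmetic above is easy to get backwards, here is a trivial sketch of exactly that calculation (the RGB:63,78,99 example); the function name is my own.

def wb_factors(white_ref):
    # white_ref: camera-space RGB of a stimulus that should come out colourless.
    m = max(white_ref)
    factors = tuple(m / c for c in white_ref)        # ~(1.57, 1.27, 1.00) for (63, 78, 99)
    peak = max(factors)
    normalised = tuple(f / peak for f in factors)    # ~(1.00, 0.81, 0.64)
    return factors, normalised

print(wb_factors((63, 78, 99)))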
  22. Indeed. But... and I think we found the problem. You seem to be under the mistaken impression there is only one illuminant in the story. There are - at the very least - always two (and almost always more). There is at least one illuminant (and associated white point/power spectrum) for the scene, whether it is the sun, moon, indoor lighting, etc. There is at least one illuminant (and associated white point/power spectrum) associated with your display medium. Fun fact: in the case of the sRGB standard, there are actually two more illuminants involved; the encoding ambient illuminant (D50) and the typical ambient illuminant (also D50). The latter two specify the assumptions made with regards to the viewing environment. There may be one more illuminant (and associated white point/power spectrum) involved if using a camera matrix. Any associated matrix/profile and white point are only valid for the scene you based it on. It is only valid for lighting conditions that precisely match that illuminant's power spectrum. You seem to be under the mistaken impression that your camera matrix and white balance must match the intended display medium's illuminant. That's obviously not the case. Look up any RGB to XYZ conversion formula, and it will have parameters/terms that specify the input illuminant characteristics. Look up any XYZ to RGB conversion formula, and it will have parameters/terms that specify the output illuminant characteristics. Nowhere does it state or require input and output to have the same illuminant. This misunderstanding leads to your very curious insistence that the bird, which any observer in that room would see and experience to be multi-colored, was brown/orange/yellow instead. By refusing to color balance, and not taking into account the conditions in the room, you are not replicating what an observer saw or experienced at all. This is fundamentally incompatible with practising documentary photography. Of course, in the thought experiment, you simply lacked the proper matrix and white balance factors to correct with. The latter is 100% fair of course; you have no way of knowing these precise parameters for the room. But you can definitely approximate them by making some plausible assumptions, based on the content of your data/images. That's simply not true. It is not at all how that works, even with a "perfect" sensor with 100% even spectral sensitivity. You seem to be under the mistaken impression that a camera matrix that was acquired under D65 conditions will be completely identical to a camera matrix that was acquired under, say, LED lighting. That is not the case (though the results may not differ/matter much, depending on the differences in the illuminants' power spectra, after a simple white balance correction). Try deriving a matrix with a monochromatic light source and see how far that gets you. For example, how much light is reflected per coloured Macbeth square heavily depends on the power spectrum of the illuminant; it would be a single peak in the case of monochromatic light, or a complex profile in any other case. At its most extreme, a blue square will reflect no light in the case of a red monochromatic source, etc. (a toy numeric sketch of this follows at the end of this post). If multiple illuminants of different power spectra are at play in a scene (such as stars), things get very complex indeed, to the point of not mattering any more, saturating the entire power spectrum if their light is taken as an aggregate.
Hence the common practice of sampling foreground stars or using entire nearby galaxies as a white reference, and eschewing DSLR matrices entirely. I'm confused; are you saying a source you cited yourself is spouting nonsense? Or just the part that you don't agree with? That's not what the author is proposing. He is merely proposing to use a different input illuminant of D58 (for the "scene"/stars/temperatures) when converting his star temperature data to XYZ; he is not proposing to use a different output illuminant (for the screen) when converting XYZ to sRGB. That correctly remains at D65. The 2006-updated Perl code is here if there is any doubt; http://www.vendian.org/mncharity/dir3/blackbody/UnstableURLs/toolv2_pl.txt
&useSystem($SystemsRGB); - sets the system output space to "sRGB system (D65)"
if(want '-D58') { &set_wp($IlluminantD58); } - switches the illuminant to D58 for the calculation of the XYZ components
Thank you. That is more or less how colour calibration in StarTools works; luminance is processed completely independently of chrominance. Whatever stretch you used, however you enhanced detail, has no effect on the colouring. Once you hit the Color module you can change your colouring without affecting brightness at all (you can even select whether you want that to be perceptually constant in CIELAB space, or channel-constant in RGB space). You can even mimic the way an sRGB stretch desaturates emissive-sourced colours if you so wish. What you can't do in StarTools, however, is have your chrominance data get stretched and squashed along with the luminance/detail. For example, you will never see orange dust lanes in a galaxy with StarTools - they will always look red or brown; even psychovisual, context-based colour perception is taken into account this way (see the "brown" video).
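To make the Macbeth-under-monochromatic-light point concrete, here is a toy numeric sketch. The Gaussian reflectance curve and the narrow-Gaussian "laser" are made up for illustration (not real Macbeth or illuminant data); the point is simply that whatever colour matching functions you then integrate against, a reflected spectrum that is essentially zero yields X = Y = Z of essentially zero.

import numpy as np

wavelengths = np.arange(380.0, 701.0, 1.0)  # nm, visual spectrum

# Made-up Gaussian reflectance for a 'blue' patch peaking at 470 nm (not real Macbeth data).
blue_patch_reflectance = np.exp(-0.5 * ((wavelengths - 470.0) / 20.0) ** 2)

# Monochromatic red illuminant at 650 nm, approximated as a very narrow Gaussian.
red_laser_spd = np.exp(-0.5 * ((wavelengths - 650.0) / 1.0) ** 2)

# Spectrum actually leaving the patch: reflectance times illuminant power.
reflected = blue_patch_reflectance * red_laser_spd
print(reflected.sum())  # effectively zero

# X, Y and Z are each integrals of 'reflected' against a colour matching function, so an
# essentially zero reflected spectrum gives X = Y = Z ~ 0: the blue patch provides no
# usable calibration signal under a 650 nm monochromatic illuminant.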
  23. I'm not sure if you are joking here by playing word games? The alternative is that you really don't understand (or even acknowledge) the massive difference between the partially reflected light from an illuminant, its power spectrum qualities, and, say, a direct monochromatic light source. Let's try a thought experiment; I go into a large empty hall, somewhat dimly lit by 100+ light fixtures with old incandescent bulbs powered at unknown, different voltages. Some are brighter (and thus whiter/yellower), some are dimmer (and thus redder). In the room is a rare rainbow-coloured bird-of-paradise, dodging the fixtures as it flies around. It has never been seen outside the room. I proceed to take 10 pictures of it in flight. I come out of this room. I give my 10 RAW images to you. How is your daylight Macbeth-calibrated matrix going to help you colour-correct the 10 pictures to reveal the "true" colours of that bird accurately in all 10 shots? Are you just going to use your D65-acquired matrix and tell me the bird I saw was a drab yellow/brown/orange? Would you maintain the colours are correct? Would you attempt to colour balance? If so, what would your white point be when doing so? Would you colour balance the 10 images differently, depending on what type of light lit the bird the most during its flight? Would you just average out all the light temperatures for all shots and use that as your white balance? If you decided to colour balance with a custom white point, would you maintain that the Macbeth D65-illuminant-derived colour matrix is still applicable? I'd really like to know where the disconnect is here. Where did I say I implemented a feature I believe is "wrong"? I used the word "inconsequential". That's because, per the actual title of one of the articles I gave to you, colour matrix construction for a tri-stimulus camera is an ill-posed problem (particularly for AP purposes). If you don't know what "ill-posed" means, it is a term used in mathematics to describe a problem that - in its mildest form - does not have one unique solution (deconvolution is another example of an ill-posed problem in astrophotography). In other words, a DSLR-manufacturer matrix-corrected rendition in ST is just one more of an infinite set of plausible colour renditions (even within the more stringent terrestrial parameter confines, such as a single light source of a known power spectrum) that no one can claim is the only, right solution. Ergo, the inclusion of this functionality in ST is neat, but ultimately inconsequential (but not "wrong"!); it is not "better" than any other rendition. Being able to use the manufacturer's terrestrial D65-based colour matrix is particularly inconsequential outside the confines of terrestrial parameters. But if you really want to, now you can. From what you've demonstrated here, it strongly appears to me you are currently ill-equipped to make such a judgement. Worse, making that statement ("don't produce accurate star colors") belies a lack of understanding of what star colour accuracy really means or how it comes about. Which, suitably, brings us to this; There is no range. There are no prescribed values. There is no one accurate star colour. These tables don't take into account relative brightnesses, alternative saturation or - crucially - an alternative white reference, because - obviously - the table would be infinite. Only a click away (http://www.vendian.org/mncharity/dir3/starcolor/details.html) you can read this massive disclaimer by the author; i.e.
it sums up all the stuff I keep trying to explain to you - picking an illuminant/white reference matters to colour, picking your stretch matters to colour, picking your colour space/tri-stimulus conversion method matters to colour, picking your luminance/chrominance compositing matters to colour. And none of these choices in AP are prescribed, nor standardised, nor fixed to one magic value or number or method or technique. And to drive this all home, on yet another page (http://www.vendian.org/mncharity/dir3/blackbody/) you can read; That is, the creator of the table you posted as an "accurate star color range" now, as of 2016, in hindsight, prefers a whopping 700K-yellower D58 rendition of his table for use on his sRGB D65 screen. And that is his absolute right, as it is just as valid a choice. I have no idea what a "scientifically color calibrated" result would even mean or what that would look like. Where did you read/hear that? Perhaps you are referring to the optional/default "scientific colour constancy" mode? It helps viewers and researchers more easily see areas of identical hue, regardless of brightness, allowing for easy object comparison (chemical makeup, etc.). It was created to address the sub-optimal practice of desaturating highlights due to stretching chrominance data along with the luminance data. A bright O-III dominant area should ideally - psychovisually - show the same shade of teal/green as a dim O-III emission dominant area. It should not be whiter or change hue. The object out there doesn't magically change perceived colour or saturation depending on whether an earthling chose a different stretch or exposure time. An older tool you may be more familiar with, that aimed to do the same thing, is an ArcSinH stretch. However, that still stretches chrominance along with the luminance, rather than cleanly separating the two. I implore you to please read and watch the materials provided thoroughly before commenting - particularly the sources you cited yourself. Not doing so just makes for strange, credibility-damaging posts for no good reason. I have seen you give good advice and make positive contributions here on SGL over the years, but this colour stuff is such a strange hill to die on. This is simply not an intuitive subject, and things are much more subtle and non-deterministic than you seem to think. Anyhow, getting back on topic; if you could share with us the linear stack (Dropbox, Google Drive, WeTransfer, OneDrive, etc.), fresh from the stacker without any stretches applied, we can have a look! As said, to me this just looks like the expected green bias (and increased green signal) from this particular OSC, though the image you posted seems to have been processed, so it is hard to draw conclusions from that.
  24. Oof... my head hurts from the Gish gallop and contradictions, again without actually perusing the information shared with you. If we are exclusively dealing with emissive cases (we are not), then why on earth are you calibrating against an exclusively reflective target like a Macbeth chart, purely relying on subtractive colour, as lit by a G2V-type star at noon under an overcast sky (i.e. the D65 standard illuminant as used by sRGB)? None of your calibration targets here are emissive! Also, you are flat out wrong that the illuminant is irrelevant. Not only is its influence explained in the color space conversion article; more importantly, astrophotography deals with more than just emissions (most of which are monochromatic, mind you). Take for example reflection nebulosity (hint: it's in the name!). It's just that the illuminants are not located within 150,000,000 km of the camera but much further away, and may vary per location in the image (both in brightness and power spectrum). That is, we are dealing with multiple, vastly different illuminants, and thus an "average" reference illuminant needs to be chosen. Popular techniques include choosing a G2V star in the field, the average of all foreground stars, or using a nearby spiral galaxy. All are equally valid choices. All will yield different white references and power spectrum assumptions. If you had bothered to watch the video rather than instinctively responding straight away, you would have learned that perceiving brown (or orange) is entirely dependent on its context within the image. You can't "measure" orange, you can't "measure" brown. What looks brown in one image looks orange (or even red) in another, even though the RGB or XYZ values are exactly the same. It has nothing to do with "spectrum". It is not a signal that is recordable. These colours only manifest themselves once they are viewed in the context of a scene. Even with more "conventional" colours, it's not just wavelength that defines colour; Mate, I implement this stuff for a living. StarTools incorporates all DSLR matrices found in DCRAW and more (the matrices are hardcoded in the executable/libraries - with only a few DSLR model exceptions, they are rarely encoded in the actual RAW file). As I mentioned, StarTools is the only software for astrophotography (that I know of) that bothers to incorporate matrix correction at all as part of its colour calibration module (i.e. not at the debayering stage, where light pollution and uneven lighting are still part of the signal). I implemented actual color calibration routines that do all this on stretched data (so luminance signal contamination can be avoided). And even I think it's an inconsequential feature. The matrices are irrelevant for use in AP. This is not terrestrial photography, and none of its assumptions with regards to colour apply. That's not even mentioning that no one stretches their image with exactly the ~2.2 sRGB gamma curve either. I will let others (and history) be the judge of that. I give up.