
Posts posted by jager945


  1. 45 minutes ago, bottletopburly said:

    Excellent cheap solution Ivo I’m assuming you could add a couple of these together on a system to create a cheap powerful Astro processing dedicated machine space allowing.

    Actually, most applications do not make use of multiple cards in your system (ST doesn't). That's because farming out different tasks to multiple cards is a headache and can cause significant overhead in itself (all cards need to receive their own copy of the "problem" to work on from the CPU). It may be worth investigating for some specific problems/algorithms, but generally, things don't scale that well across different cards.

    • Like 1

  2. It's worth noting that CUDA is an NVidia proprietary/only technology, so anything that specifically requires CUDA will not run on AMD cards.

    Writing and particularly optimising for GPUs has been an incredibly interesting experience. Much of what you know about optimisation for general-purpose computing does not apply, while new considerations come to the fore.

    Some things just don't work that well on GPUs (e.g. anything that relies heavily on logic or branching). For example, a simple general-purpose median filter shows disappointing performance (some special cases notwithstanding), whereas complex noise-evolution estimation throughout a processing chain flies!

    I was particularly blown away with how incredibly fast deconvolution becomes when using the GPU; convolution and regularisation thereof is where GPUs undeniably shine. My jaw dropped when I saw previews update in real-time on a 2080 Super Mobile!

    I don't think APP uses the GPU for offloading arithmetic yet by the way. Full GPU acceleration for AP is all still rather new. GPU proliferation (whether discrete or integrated in the CPU) has just about become mainstream and mature. Hence giving it another look for StarTools. Exciting times ahead! 👍
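
    To illustrate why convolution maps so well onto a GPU, here is a minimal, generic sketch (not StarTools code) of FFT-based convolution in Python; assuming the CuPy library is installed, swapping the numpy import for cupy runs the identical arithmetic on an NVIDIA card:

    # Generic FFT-based convolution sketch; dense, branch-free arithmetic like this
    # is exactly the kind of workload a GPU handles well.
    import numpy as np  # assumption: replace with `import cupy as np` to run on an NVIDIA GPU via CuPy

    def fft_convolve(image, psf):
        """Convolve an image with a PSF via the FFT (circular convolution)."""
        psf_padded = np.zeros_like(image)
        kh, kw = psf.shape
        psf_padded[:kh, :kw] = psf
        # Centre the kernel so the result is not shifted.
        psf_padded = np.roll(psf_padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))
        return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf_padded)))

    image = np.random.rand(2048, 2048).astype(np.float32)
    psf = np.ones((7, 7), dtype=np.float32) / 49.0  # simple box PSF, for illustration only
    blurred = fft_convolve(image, psf)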

    • Like 5

  3. "Por qué no los dos?" :)

    Have you thought of using the Optolong L-eXtreme as luminance and using the full spectrum dataset for the colours?

    Assuming the datasets are aligned against each other, it should be as simple as loading them up in the Compose module (L=Optolong, R, G and B = full spectrum) and setting "Luminance, Color" to "L, RGB" and processing as normal. 

    It should yield the detail of the second image (and reduced stellar profiles), while retaining visual spectrum colouring.

    Lovely images as-is!

    • Like 1

  4. Hi,

    I replied on the StarTools forum, but thought I'd post the answer here as well;
     

    Quote

     

    I've been using StarTools for a while now.

    At first I tried to use it 'properly' after following tutorials, starting with Auto Dev. I sort of got it to work, but I've just decided - I really don't understand it.

    I've been bypassing it and just using the manual Develop option, but I feel I should really understand what I'm doing wrong, if anything.

    Whenever I use Auto Dev, I just get a horrible, overexposed mess, no matter if I select a ROI or not. I understand that it's supposed to show me the errors in my image, but it's unusable. I've noticed that if I then use Wipe, it gets even worse - but then, when I click 'Keep' after the wipe, it goes back to how it looked when I opened it, before the Auto Dev!

     

    This is a separate thing and doesn't have much to do with AutoDev. Wipe operates on the linear data. It uses an exaggerated AutoDev stretch of the linear data (completely ignoring your old stretch) to help you visualise any remaining issues. After running Wipe, you will need to re-stretch your dataset, because the previous stretch is pretty much guaranteed to no longer be valid or desirable; gradients have been removed and no longer take up precious dynamic range. That dynamic range can now be allocated much more effectively to show detail instead of artefacts and gradients. As a matter of fact, as of version 1.5 you are forced to re-stretch your image; when you close Wipe in 1.5+, it reverts to the wiped, linear state, ready for re-stretching. Before 1.5 it would try to reconstruct the previous stretch, but - cool as that was - it really needs your human input again, as the visible detail will have changed dramatically.

    Quote

     

    It's completely counterintuitive. There's no progression from 'bad' image to 'good', and frankly I don't get it.

    Can anyone shed some light on this?

     

    You can actually see a progression by gradually making your RoI larger or smaller; as you make the RoI smaller, you will notice the stretch being optimised for the area inside it. Detail inside the RoI will become much easier to discern, while detail outside the RoI will (probably) become harder to discern. Changing the RoI gradually should make it clear what AutoDev is doing;

    file.php?id=1480
    file.php?id=1481

    file.php?id=1482

    file.php?id=1483
    Confining the RoI progressively to the core of the galaxy, the stretch becomes more and more optimised for the core and less and less for the outer rim.
    (side note, I'd probably go for something in between the second and third image :) )

    In essence, AutoDev constantly tries to detect detail inside the RoI (specifically, how neighbouring pixels contrast with each other) and figures out which histogram stretch allocates dynamic range to that detail most effectively; "optimal" here means showing as much detail as possible.

    TL;DR In AutoDev, you're controlling an impartial and objective detail detector, rather than a subjective and hard to control (especially in the highlights) bezier/spline curve.

    Having something impartial and objective is very valuable, as it allows you to much better set up a "neutral" image that you can build on with local detail-enhancing tools in your arsenal (e.g. Sharp, HDR, Contrast, Decon, etc.);
    highlightpreservestretch.gif
    Notice how the over-exposed highlights do not bloat *at all*. The cores stay in their place and do not "bleed" into the neighboring pixels. This is much harder to achieve with other tools; star bloat is unfortunately still extremely common.

    It should be noted that noise grain from your noise floor can be misconstrued by the detail detector as detail. Bumping up the 'Ignore Fine Detail <' parameter should counter that though.  
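
    To make the "impartial detail detector" idea a little more concrete, here is a toy Python sketch (emphatically not AutoDev's actual algorithm; the contrast metric and the stand-in for the 'Ignore Fine Detail <' parameter are my own simplifications). It tries a range of global stretches, measures neighbouring-pixel contrast inside the RoI, and keeps the stretch that maximises it;

    import numpy as np

    def detail_metric(img, ignore_fine_detail=1):
        """Mean absolute difference between neighbouring pixels (a crude 'detail' measure)."""
        if ignore_fine_detail > 1:
            # Crude low-pass: block-average so the metric is blind to fine noise grain.
            h, w = img.shape
            f = ignore_fine_detail
            img = img[:h - h % f, :w - w % f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))
        return np.mean(np.abs(np.diff(img, axis=0))) + np.mean(np.abs(np.diff(img, axis=1)))

    def auto_stretch(linear, roi, gammas=np.linspace(0.05, 1.0, 40)):
        """Pick the gamma-style stretch that shows the most detail inside the RoI."""
        y0, y1, x0, x1 = roi
        best = max(gammas, key=lambda g: detail_metric(linear[y0:y1, x0:x1] ** g, ignore_fine_detail=2))
        return linear ** best  # apply the winning global stretch to the whole image

    linear = np.random.rand(1000, 1000) ** 4          # fake linear data, skewed to the dark end
    stretched = auto_stretch(linear, roi=(400, 600, 400, 600))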

    I hope the above helps some, but if you'd like to post a dataset you're having trouble with, perhaps we can give you some more specific advice?

    • Like 1
    • Thanks 1

  5. 34 minutes ago, Mr niall said:

    Yep did that, not sure what your point is there. With the exception of flats, which I didn't believe I needed at 40mm, everything else was done (including a restack with intersection selected on DSS) I would be amazed if they transformed the situation.

    If you'd like to explain to me a good dithering procedure for a 15 year old DSLR with a wide angle lens mounted on a clockwork timer then I'm all ears.

    That was literally the point of my post; if it isn't any good then I was hoping for advice on perhaps what was causing the issues I was experiencing. The post is even called "Help a Niall" for goodness sake.

    Thanks for being both patronising and condescending. Believe it or not I've been trying to do this for nearly three years with little success. But it is reassuring to know that one quick look at my work is enough for you to make an assessment along the lines of "they've never done this before / or they've no idea what they're doing". Cheers for that. If the solution is to chuck a couple of thousand pounds at mounts at camera's then I'm not really any better off than I was to start with.

    Working with good data is easy. I cant get good data. I don't know how to get good data. I don't know why my data isn't good. That's why I'm asking for help. If you'd read my post then you'd see that.

    Again - that was literally the point of my post. If I knew what the problem was, I wouldn't be asking for help. That was the point of my post.

    Yeah… not sure you how you think you achieved that. Patronising comments aside you haven't actually said anything other than "your data is rubbish". I'm already at "endless frustration", and I'm following the guidance, repeatedly.

    WHAT ISSUES!!!????

    BTW - I did enjoy the irony of "here's a nice version of your picture - but I'm not going to show you how to do it because you're not good enough yet to deserve it". 

    Well thanks, that's me feeling like an idiot again. I'll pack the camera away and try again in another few months. If the sum total of the advice is that I'm either useless or an idiot then that advice probably would have been more useful 3 years and several hundred hours of frustrated effort ago. It's not easy asking for help when are clearly so far behind what nearly everyone else seems to be able to achieve, and believe me I doubt many are "trying harder". 

    Your post is probably the most demoralising thing I've ever read. Good work.

    I sincerely hope you will re-read my post. It was made in good faith and answers many of your questions and concerns directly or through concise information behind the links.

    To recap my post; a great deal of your issues can likely be alleviated by flats, by dithering (giving your camera a slight push perpendicular to the tracking direction between every x frames will do) and using a rejection method in your stacker.

    If that is not something that you wish to do or research, then that is, of course, entirely your prerogative. Given your post above, it's probably not productive for me to engage with you any further at this time.

    • Thanks 1

  6. Hi,

    I would highly recommend doing as suggested in step 1 of the quick start tutorial, which is perusing the "starting with a good dataset" section.

    Apologies in advance, as I'm about to give some tough love...

    Your dataset is not really usable in its current state. Anything you would learn trying to process it will likely not be very useful or replicable for the next dataset.

    There are three important parts of astrophotography that need to work in unison: acquisition, pre-processing and post-processing. Each step depends on the one before it, so bad acquisition will lead to issues during pre-processing and post-processing.

    You can try to learn these three things all at once - very slowly - or use a divide-and-conquer strategy.

    If you want to learn post-processing now, try using a publicly available dataset. You will then know what an ok (not perfect) dataset looks like and how easy it really is to establish a quick, replicable workflow.

    If you want to learn pre-processing, also try using a publicly available dataset (made up of its constituent sub-frames). You will then know what settings to use (per step 1 in the quick start tutorial; see here for DSS-specific settings) and what flats and bias frames do. Again, you will quickly settle on a quick, replicable workflow.

    Finally, getting your acquisition down pat is a prerequisite for succeeding in the two subsequent stages if you wish to use your own data; at a minimum take flats (they are not optional!), dither unless you absolutely can't, and get to know your gear and its idiosyncrasies (have you got the optimal ISO setting?).

    The best advice I can give you right now, is to spend some time researching how to produce a clean dataset (not deep - just clean!). Or, if you just love post-processing, grab some datasets from someone else and hone your skill in that area. It's just a matter of changing tack and focusing on the right things. You may well find you will progress much quicker.

    I'm sorry for, perhaps, being somewhat blunt, but I just want to make sure you get great enjoyment out of our wonderful hobby and not endless frustration.

    Wishing you clear skies and good health!

    EDIT: I processed it, just to show there is at least a star field in there, but, per the above advice, giving you the workflow would just teach you how to work around dataset-specific, easily avoidable issues...

    dss output.jpg

    • Like 1

  7. 1 hour ago, red dwalf said:

    excellent, i love getting tips on how too, loads of videos on youtube but they never cover everything and until you mentioned it i had not notice the halos but they do look a lot better, i really wanted to bring out that faint spiral arm in the galaxy but struggled to do it, any ideas on that ?

    Glad I could help! With regards to the spiral arms, it depends on your dataset. If you'd like to upload it somewhere I'd be happy to have a look.

    For images like these, the HDR module's Reveal All mode may help, as well as the Sharp module from ST 1.6 (DSO Dark preset, overdrive the strength). You'd use these tools specifically as they govern small-medium scale detail.

    It's also possible a different global stretch (with a Region of Interest in AutoDev) can lift some more detail from the murk. Much depends on your "murk" (e.g. how clean and well-calibrated the background is) as well :)

    • Like 1

  8. Nice going! The star halos are likely caused by chromatic aberration. You can use the Filter module in StarTools to kill those fringes; create a star mask with the offending stars in it, then, back in the module, set Filter Mode to Fringe Killer. Now click on different halos and different parts of the halos (e.g. not the star cores, but the halos themselves!) until they have all but disappeared. You'll end up with white-ish stars for the stars that were affected, but many prefer this over the fringes.

    NewComposite-from-startools-and-photoshop.jpg.83b69f84f24d169e0e2b4172ef6f2455.jpg.e90690c020929f7017483743847a1f3b.jpg

    Finally, if you are suffering from colour blindness, be sure to make use of the MaxRGB mode in the Color module. It should allow you to check your colouring and colour balance to a good degree, as long as you can distinguish the brightness of the 3 maxima ok (most people can). See here for more info on how to use it.
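
    For those curious what a MaxRGB-style view boils down to, here is a rough sketch of the concept (the actual Color module may render it differently); per pixel, only the dominant channel is kept, so colour casts show up as brightness differences that are easy to spot even with impaired colour vision;

    import numpy as np

    def max_rgb_view(rgb):
        """Zero out all but the brightest channel in each pixel."""
        winner = np.argmax(rgb, axis=-1)                    # 0 = R, 1 = G, 2 = B
        view = np.zeros_like(rgb)
        rows, cols = np.indices(winner.shape)
        view[rows, cols, winner] = rgb[rows, cols, winner]  # keep only the maximum
        return view

    rgb = np.random.rand(512, 512, 3).astype(np.float32)   # placeholder image data
    diagnostic = max_rgb_view(rgb)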

    Clear skies!

     

    • Like 3

  9. Hi,

    I had a quick look at the datasets. While the Ha signal is really nice, it seems the R, G and B data has all sorts of anomalous patches going on.

    It appears something has gone wrong here. The Green dataset in particular looks like you shot clouds, or perhaps something in your optical train dewed over;

    G.thumb.jpg.cbfa365bf916c5c5be8e860b758881d9.jpg

    (this was the green channel binned to 35%; Crop and default AutoDev for diagnostics)

    As such, I can understand you're having trouble getting anything useful in the visual spectrum.

    That said, your Ha signal is fantastic and you can always create a false-colour Ha image if you want.

    Ha.thumb.jpg.385d0a46511bc37bf720d4185975d05c.jpg

    Once you do acquire a useful RGB signal, you will want to use the Compose module to process chrominance and luminance separately yet simultaneously. You can, for example, use Ha as luminance and R+Ha as red, with G and B as normal.

    Hope this helps!

     

     


  10. Looking great!

    I would highly recommend software-binning your dataset, as the image at its full size is very much oversampled.

    Once you've binned and converted the "useless" resolution into better signal at a lower resolution, you can push the dataset harder, and noise should be much less apparent.
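
    As a rough illustration of what software binning buys you (a generic 2x2 average here, not the Bin module's exact implementation); resolution you cannot use anyway is traded for roughly twice the per-pixel signal-to-noise ratio;

    import numpy as np

    def bin2x2(img):
        """Average each 2x2 block; shot-noise SNR improves by ~sqrt(4) = 2x."""
        h, w = img.shape
        return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    oversampled = np.random.rand(4000, 3000).astype(np.float32)  # placeholder oversampled frame
    binned = bin2x2(oversampled)                                 # 2000 x 1500, cleaner per pixel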

    Clear skies & stay healthy,

    • Like 1
    • Thanks 1

  11. 2 hours ago, Adreneline said:

    Thank you! I’m assuming you used the .fit files I posted to achieve the image in your post. Is your image a straight HOO or is it a luminance augmented HOO? - sorry I’m a little confused. How did you create the synthetic luminance image?

    Thanks again for your help and sorry if I’m missing something obvious.

    Adrian

     

    Apologies for any confusion, Adrian! The two datasets were used by StarTools to automatically;

    • Create a synthetic luminance master (e.g. making the proper blend of 920s of O-III and 2880s of Ha). You just tell ST the exposure times and it figures it out, but in this instance it would have calculated a signal precision of 1.77:1 (Ha:O-III), derived from sqrt(2880/920) for Ha vs sqrt(920/920). So I believe that would have yielded a 1/(1+1.77) * 100% = ~36% O-III vs 1.77/(1+1.77) * 100% = ~64% Ha blend (a toy recreation of this arithmetic follows after this list).
    • Create a synthetic chrominance master at the same time (e.g. mapping Ha to red, O-III to green and also blue)
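
    A toy recreation of that weighting arithmetic in Python (assuming equal per-sub exposure and conditions; this is not StarTools' exact implementation);

    from math import sqrt

    def blend_weights(exposure_a, exposure_b):
        """Relative SNR scales with the square root of total exposure time."""
        ratio = sqrt(exposure_a / exposure_b)           # e.g. sqrt(2880 / 920) ~ 1.77
        return ratio / (1 + ratio), 1 / (1 + ratio)     # ~64% Ha, ~36% O-III

    w_ha, w_oiii = blend_weights(2880, 920)
    print(f"Ha {w_ha:.0%}, O-III {w_oiii:.0%}")         # Ha 64%, O-III 36%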

    If you are using PS or PI, then you can stop reading here as the following will not be possible (you will want to process both datasets with separate workflows and then afterwards combine chrominance and luminance into one image to the best of your abilities).

    In ST, the engine processes both synthetic luminance and chrominance masters simultaneously yet separately (for example, during gradient removal both datasets are treated at once). Most operations only affect the luminance portion (stretching, wavelet sharpening, decon, etc.) until the final colour calibration towards the end of your processing flow (please don't do colour calibration this late in PS or PI though!), which seamlessly merges colour and luminance. During this step, you can boost the colour contribution of any channel (or remap channels at will), completely separate from the brightness and detail you brought out. It's one workflow, one integrated process, with full control over the luminance and chrominance interplay of the final result.

    If interested, this was the entire work flow for that image in ST 1.6;

    --- Compose

    Load Ha as red, O-III as green and O-III - again - as blue
    Parameter [Luminance, Color] set to [L + Synthetic L From RGB, RGB]
    Parameter [Blue Total Exposure] set to [Not set] (we only want to count O-III's contribution once)
    Parameter [Green Total Exposure] set to [0h16m (16m) (960s)]
    Parameter [Red Total Exposure] set to [0h37m (37m) (2220s)] (exposure times have to be multiples of 60s; close enough :))
    --- Bin
    Parameter [Scale] set to [(scale/noise reduction 35.38%)/(798.89%)/(+3.00 bits)]
    Image size is 1663 x 1256
    --- Crop
    Parameter [X1] set to [78 pixels]
    Parameter [Y1] set to [24 pixels]
    Parameter [X2] set to [1613 pixels (-50)]
    Parameter [Y2] set to [1203 pixels (-53)]
    Image size is 1535 x 1179
    --- Wipe

    Will remove gradients and vignetting in both synthetic datasets (Use Color button to toggle between datasets).
    Parameter [Dark Anomaly Filter] set to [6 pixels]
    Parameter [Drop Off Point] set to [0 %]
    Parameter [Corner Aggressiveness] set to [95 %]
    --- Auto Develop
    Parameter [Ignore Fine Detail <] set to [3.0 pixels]
    Parameter [RoI X1] set to [466 pixels]
    Parameter [RoI Y1] set to [60 pixels]
    Parameter [RoI X2] set to [779 pixels (-756)]
    Parameter [RoI Y2] set to [472 pixels (-707)]
    --- HDR
    Defaults

    --- Deconvolution
    Parameter [Primary PSF] set to [Moffat Beta=4.765 (Trujillo)]
    Parameter [Tracking Propagation] set to [During Regularization (Quality)]
    Parameter [Primary Radius] set to [1.3 pixels]
    --- Color

    Duoband preset (defaults parameter [Matrix] to [HOO Duoband 100R,50G+50B,50G+50B])

    Parameter [Bright Saturation] set to [4.70]
    Parameter [Red Bias Reduce] set to [6.91] to boost the blue/teal of the O-III to taste
    --- Psycho-Visual Grain Equalization De-Noise (switch signal evolution tracking off, choose 'Grain Equalize')
    Parameter [Grain Size] set to [7.0 pixels]
    Parameter [Grain Removal] set to [60 %]

    (I think I bumped up the saturation just a little afterwards in the Color module)

    Hope that helps!

    • Like 4

  12. If Ha and OIII are weighted properly to make a synthetic luminance frame, you have some pretty decent signal (your calibration is quite excellent!).

    If your software does not have an automatic weighting feature for compositing your synthetic luminance, it is important to remember that, when weighting stacks made up of sub-frames of equal exposure times but different numbers of subs, the relative signal quality in the individual stacks only increases with the square root of the number of exposures.

    E.g. if you have twice as many Ha sub frames as O-III frames, then signal in the Ha stack is sqrt(2) ~ 1.4x better (not 2x!).

    As the human eye is extremely forgiving when it comes to noise in colour data, you don't need much O-III signal to augment your (deep) luminance rendition with O-III colouring, provided your calibration is otherwise very good. Creating, for example, a typical HOO bi-colour then becomes fairly trivial (as usual, however, stars are pretty dominant in the O-III band);

    NewComposite.thumb.jpg.d8162afafaf7964f60a7ad3ef88fd930.jpg

    Hope this helps!

     

     

     


  13. On 02/02/2020 at 08:35, alacant said:

    Hi. Yeah, I was looking for a free lunch too! I tried StarTools' HDR (I think that's what Ivo would use?) but unfortunately you can't mask in the HDR module, so you end up with a full frame HDR when what you want is just the core. Maybe worth asking over on st. 

    Thanks

    **We've just bagged the 10s frames, so my participation is over,  but they too look dangerously blown in EKOS' fitsviewer. Here's hoping the auto-stretch has overdone it.

    You may indeed be able to rescue some details from the highlights, but if an area is overexposed (very easy to do on M42), the detail and/or colour information is just not there and you will have to change tack;

    In that case, you will want to take a shorter exposure stack, make sure it is aligned with the longer exposure stack, process it to taste (preferably fairly similarly to the other stack) and then use the Layer module to create a High Dynamic Range composite. To do this, put one of the two finished images in the foreground and the other in the background, then choose the 'Minimum Distance to 1/2 Unity' Filter. This filter creates a composite that switches between the background and foreground image depending on which pixel is closest to grey (1/2 unity). To make the switching less apparent/abrupt, bump up the Filter Kernel Radius; this will make the transitions nice and smooth.
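
    As a hedged sketch of how such a "closest to mid-grey wins" composite could work (my reading of the filter's description, not the Layer module's actual code); blurring the selector is the conceptual equivalent of bumping up the Filter Kernel Radius;

    import numpy as np
    from scipy.ndimage import uniform_filter

    def blend_hdr(short_exp, long_exp, kernel_radius=5):
        """Per pixel, favour whichever exposure is closest to 0.5 (1/2 unity)."""
        selector = (np.abs(short_exp - 0.5) < np.abs(long_exp - 0.5)).astype(np.float32)
        selector = uniform_filter(selector, size=2 * kernel_radius + 1)  # smooth the switching
        return selector * short_exp + (1.0 - selector) * long_exp

    short_exp = np.random.rand(1024, 1024).astype(np.float32)   # placeholder stacks, values in 0..1
    long_exp = np.clip(short_exp * 4.0, 0.0, 1.0)               # fake longer exposure with blown core
    hdr = blend_hdr(short_exp, long_exp)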

    • Like 2

  14. 32 minutes ago, vlaiv said:

    I'm not sure that you understand concept of illuminant.

    I'm going to respectfully bow out here. 😁

    If you wish to learn more about illuminants (and their whitepoints) in the context of color spaces and color space conversions, have a look here;

    https://en.wikipedia.org/wiki/Standard_illuminant

    Lastly, I will leave you with links to a site that discusses different attempts to generate RGB values from blackbody temperatures, along with some pros and cons of choosing different white points, the methods/formulas and sources for each, and the problem of intensity.

    http://www.vendian.org/mncharity/dir3/blackbody/

    http://www.vendian.org/mncharity/dir3/starcolor/details.html

    Clear skies,


  15. 1 hour ago, vlaiv said:

    It really depends on how you process your color image. What do you think about following approach:

    ratio_r = r / max(r, g, b)
    ratio_g = g / max(r, g, b)
    ratio_b = b / max(r, g, b)

    final_r = gamma(inverse_gamma(stretched_luminance)*ratio_r)
    final_g = gamma(inverse_gamma(stretched_luminance)*ratio_g)
    final_b = gamma(inverse_gamma(stretched_luminance)*ratio_b)

    where r,g,b are color balanced - or (r,g,b) = (raw_r, raw_g, raw_b) * raw_to_xyz_matrix * xyz_to_linear_srgb_matrix

    This approach keeps proper rgb ratio in linear phase regardless of how much you blow out luminance due to processing so there is no color bleed. It does sacrifice wanted light intensity distribution but we are already using non linear transforms on intensity so it won't matter much.

    Again - it is not subjective thing unless you make it. I agree about perception. Take photograph printed on paper of anything and use yellow light and complain how color is subjective - it is not fault in photograph - it contains proper color information (within gamut of media used to display image). I also noticed that you mention monitor calibration in first post as something relevant to this topic - it is irrelevant to proper color calibration. Image will contain proper information and with proper display medium it will show intended color. It can't be responsible for your decision to view it on wrong display device.

    Spectrum of light is physical thing - it is absolute and not left to interpretation. We are in a sense measuring this physical quantity and trying to reproduce this quantity. We are not actually reproducing spectrum with our displays but tristimulus value since our vision system will give same response to different spectra as long as they stimulate receptors in our eye in equal measure. This is physical process and that is what we are "capturing" here. What comes after that and how our brain interprets things is outside of this realm.

    Yup, that looks like a simple way to process luminance and color separately.

    With regard to the colouring, you're almost getting it; you will notice, for example, that colour space conversions require you to specify an illuminant. For sRGB it's D65 (6500K), i.e. the image is supposed to be viewed under cloudy-sky conditions in an office environment.

    However, in space there is no "standard" illuminant. That's why it is equally valid to take a G2V star as your illuminant, a random selection of stars, or a nearby galaxy in the frame; yet in photometry, white stars are bluer again than the Sun, which is considered a yellow star. The method you choose here for your colour rendition is arbitrary.

    Further to colour spaces, the CIELab and CIE 1931 colour spaces were crafted precisely from people's interpretation of colours (a group of 17 observers in the case of CIE 1931). Colour perception is not absolute; it is subject to interpretation and (cultural) consensus.
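
    For reference, here is a runnable rendition of the pseudocode quoted above (the sRGB gamma helpers and variable names are my own stand-ins, and values are assumed to be normalised to the 0..1 range);

    import numpy as np

    def srgb_gamma(x):
        return np.where(x <= 0.0031308, 12.92 * x, 1.055 * np.power(x, 1 / 2.4) - 0.055)

    def srgb_inverse_gamma(x):
        return np.where(x <= 0.04045, x / 12.92, np.power((x + 0.055) / 1.055, 2.4))

    def colourise(stretched_luminance, r, g, b):
        """Apply linear-domain R:G:B ratios to an already-stretched luminance."""
        peak = np.maximum(np.maximum(r, g), b) + 1e-12     # avoid division by zero
        lin_lum = srgb_inverse_gamma(stretched_luminance)
        return np.stack([srgb_gamma(lin_lum * c / peak) for c in (r, g, b)], axis=-1)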


  16. 9 minutes ago, alacant said:

    Hi. No, it's this one. To our -untrained eyes- it looks fine. We don't have high end equipment; this was taken at 1200mm with an old canon 450d. I'm not sure we can improve...

    I'm sorry... I'm at a loss as to what would be wrong or weird about this image.  😕

    Even the most modest equipment can do great things under good skies and competent acquisition techniques.


  17. 2 minutes ago, alacant said:

    Hi. It's the image posted here, but I'm told that the background is wrong. I honesty can't see it!

    Cheers and clear skies

    If it's the first image at the start of this thread, then the histogram does look a little weird in that there is very little to the right of the mode (it's mostly all contained in one bin when binned to 256 possible values). Is this what you're seeing too? (It could be some artifact of JPEG compression as well.)

    Regardless, the background doesn't look weird to me; if I zoom in, I can see plenty of undulation/noise/detail, which corresponds to the right-hand part of the "bell curve" containing plenty of Poissonian noise/detail. What is the main objection of those saying it is "wrong"?

    Untitled.png.a3744c9acc668396a4a791dbe1fa6209.png


  18. 3 minutes ago, alacant said:

    Ah, so my non-symmetrical histogram is what you'd expect in a finished image?

    TIA

    ss4.jpg.56d24e2f02488001b9f76649c41b8a46.jpg.82ee9ec9c1303bb9a081a59dcbf53d87.jpg

     

    That histogram looks quite "healthy" to me, assuming it is indeed of the image behind the dialog. The mode (the peak of the bell curve) and the noise (and its treatment) on both sides of the mode seem like a good example of what I was trying to explain.

    • Thanks 1

  19.   Hi vlaiv,

    I'm sad that the point I was trying to make was not clear enough. That point boils down to this: processing colour information along with luminance (e.g. making linear data non-linear) destroys/mangles said colour information in ways that make comparing images impossible.

    There are ample examples on the web and on AstroBin of white, desaturated M42 cores in long (but not over-exposed!) exposures of M42, whereas M42's core is really a teal green (due to dominant OIII emissions), a colour which is even visible with the naked eye in a big enough scope. I'm talking about effects like these;

    3ac420e1-66ee-4976-967a-4b3a299c2f9d.jpg

    Left is stretching color information along with luminance (as often still practiced), right is retaining color RGB ratios; IMHO the color constancy is instrumental in relaying that the O-III emissions are continuing (and indeed the same) in the darker and brighter parts of the core, while the red serpentine/ribbon feature clearly stretches from the core into the rest of the complex.

    Colouring in AP is a highly subjective thing, for a whole host of different reasons already mentioned, but chiefly because color perception is a highly subjective thing to begin with. My favourite example of this;

    spacer.png

    (it may surprise you to learn that the squares in the middle of each cube face are the same RGB colour)

    I'm not looking for an argument. All I can do is wish you the best of luck with your personal journey in trying to understand the fundamentals of image and signal processing.

    Wishing you all clear skies,


  20. Hi,

    Without singling anyone out, there may be a few misconceptions in this thread about colouring, histograms, background calibration, noise and clipping.

    In case there are, I will address some of them for those interested;

    On colouring

    As many already know, colouring in AP is a highly subjective thing. First off, B-V to Kelvin to RGB is fraught with arbitrary assumptions about white points, error margins and filter characteristics. Add to that atmospheric extinction, exposure choices, non-linear camera response and post-processing choices (more on those later), and it becomes clear that colouring is... challenging. And that's without throwing squabbles about aesthetics into the mix.

    There is, however, one bright spot in all this, and that is that the objects out there 1. don't care how they are being recorded and processed - they will keep emitting exactly the same radiation signature - and 2. often have siblings, twins or analogs in terms of chemical makeup, temperature or physical processes going on.

    You can exploit points 1 and 2 for the purpose of your visual-spectrum colour renditions;

    1. Recording the same radiation signature will yield the same R:G:B ratios in the linear domain (provided you don't over-expose and your camera's response is linear throughout the dynamic range, of course). If I record 1:2:3 for R:G:B in one second, then I should/will record 2:4:6 if my exposure is two seconds instead (or the object is, for example, twice as bright). The ratios remain constant; only the multiplication factor changes. If I completely ignore the multiplication factor, I can make the radiation signature I'm recording exposure-independent. Keeping this exposure-independent radiation signature separate from the desired luminance processing then allows the luminance portion - once finished to taste - to be coloured consistently. Remember, objects out there don't magically change colour (neither hue nor saturation) depending on how a single human chose his/her exposure setting! (Note that even this colouring is subject to arbitrary decisions with regard to R:G:B ratio colour retention.)

    2. Comparable objects (in terms of radiation signatures, processes, chemical makeup) should - it can be argued - look the same. For example, HII areas and the processes within them are fairly well understood; hot, short-lived O and B-class blue giants are often born here and ionise and blow away the gas around their birthplace. E.g. you're looking at blue stars, red Ha emissions, blue reflection nebulosity. Mix the red Ha emissions and blue reflection nebulosity and you get a purple/pink. Once you have processed a few different objects in this manner (or even compare other people's images with entirely different gear who have processed images in this manner), you will start noticing very clear similarities in coloring. And you'd be right. HII areas/knots in nearby galaxies will have the exact same coloring and signatures as M42, M17, etc. That's because they're the same stuff, undergoing the same things. An important part of science is ensuring proof is repeatable for anyone who chooses to repeat it.

    Or... you can just completely ignore preserving R:G:B ratios and radiation signature in this manner and 'naively' squash and stretch hue and saturation along with the luminance signal, as has been the case in legacy software for the past few decades. I certainly will not pass judgement on that choice (as long as it's an informed, conscious aesthetic choice and not the result of a 'habit' or dogma). If that's your thing - power to you! (in ST it's literally a single click if you're not on board with scientific color constancy)

    On background calibration and histograms

    Having a discernible bell ('Gaussian') curve around a central background value is - in a perfectly 'empty', linear image of a single constant signal value - caused by shot/Poissonian noise. Once the image is stretched, this bell curve becomes lopsided (the right side of the mode - the mode being the peak of the bell curve - expands, while the left side contracts). This bell-curve noise signature is not (should not be) visible any more in a finished image, especially if the signal-to-noise ratio was good (and thus the width/FWHM of the bell curve was small to begin with).

    Background subtraction that is based around a filtered local minimum (e.g. determining a "darkest" local background value by looking at the minimum value in an area of the image) will often cause undershoot (e.g. subtracting more than a negative outlier can 'handle', resulting in a negative value). Undershooting is then often corrected by increasing the pedestal to accommodate the - previously - negative value. However, a scalar is usually applied to such values first, before raising the pedestal; the application of a scalar, rather than truncation, means that values are not clipped! The result is that outliers are still very much in the image, but occupy less dynamic range than they otherwise would, freeing up dynamic range for 'real' signal. This can manifest itself in a further 'squashing' of the area left of the mode, depending on subsequent stretching. Again, the rationale here is that values to the left of the mode are outliers (they are darker than the detected 'real' background, even after filtering and some allowance for deviation from the mode).
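
    A speculative sketch of the "scale the undershoot instead of clipping it" idea (my interpretation of the description above, not any particular application's code);

    import numpy as np
    from scipy.ndimage import median_filter, minimum_filter

    def subtract_background(img, box=64, dark_anomaly=3, headroom=0.5):
        """Subtract a filtered local-minimum background without clipping dark outliers."""
        # The median prefilter plays the role of a dark anomaly filter.
        background = minimum_filter(median_filter(img, size=dark_anomaly), size=box)
        residual = img - background
        negatives = residual[residual < 0]
        if negatives.size:
            # Compress undershooting outliers with a scalar rather than truncating them,
            # then raise the pedestal so nothing ends up clipped to zero.
            worst = negatives.min()
            residual = np.where(residual < 0, residual * headroom, residual)
            residual -= worst * headroom
        return residual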

    Finally, there is nothing prescribing a particular value for the background mode, other than personal taste, catering to poorly calibrated screens, catering to specific color profiles (sRGB specifies a linear response for the first few values, for example), or perhaps the desire to depict/incorporate natural atmospheric/earth-bound phenomena like Gegenschein or airglow. 

    I hope this information helps anyone & wishing you all clear skies!

    • Thanks 2

  21. Quickie in 1.6 beta;

    416044017_M42stackedandunadulterated.png.0327e4d7d2f320f9fd33bc3d020260ec.png

    ---

    Type of Data: Linear, was not Bayered, or was Bayered + white balanced

    Note that, as of DSS version 4.2.3, you can now save your images without white balancing in DSS. This allows for reweighting of the luminance portion thanks to the more precise green channel. However, since this dataset was colour balanced and matrix corrected, this is currently not possible.
    --- Auto Develop

    To see what we got. We can see a severe light pollution bias, noise and oversampling.
    --- Crop
    Parameter [X1] set to [1669 pixels]
    Parameter [Y1] set to [608 pixels]
    Parameter [X2] set to [3166 pixels (-2858)]
    Parameter [Y2] set to [2853 pixels (-1171)]
    Image size is 1497 x 2245
    --- Rotate
    Parameter [Angle] set to [270.00]
    --- Bin

    To convert oversampling into noise reduction.
    Parameter [Scale] set to [(scale/noise reduction 35.38%)/(798.89%)/(+3.00 bits)]
    Image size is 794 x 529
    --- Wipe

    To get rid of light pollution bias.
    Parameter [Dark Anomaly Filter] set to [4 pixels] to catch darker-than-real-background pixels (recommended in cases of severe noise).
    --- Auto Develop

    Final global stretch.
    Parameter [Ignore Fine Detail <] set to [3.9 pixels] to make AutoDev "blind" to the noise grain and focus on bigger structures/details only.

    --- Deconvolution

    Usually worth a try. Let Decon make a "conservative" automatic mask. Some small improvement.

    Parameter [Radius] set to [1.5 pixels]
    Parameter [Iterations] set to [6]
    Parameter [Regularization] set to [0.80 (noisier, extra detail)]

    --- Color

    Your stars exhibit chromatic aberration (the blue halos) and DSS' colour balancing will have introduced some further anomalous colouring in the highlights.

    --- Color
    Parameter [Bias Slider Mode] set to [Sliders Reduce Color Bias]
    Parameter [Style] set to [Scientific (Color Constancy)]
    Parameter [LRGB Method Emulation] set to [Straight CIELab Luminance Retention]
    Parameter [Matrix] set to [Identity (OFF)]
    Parameter [Dark Saturation] set to [6.00]
    Parameter [Bright Saturation] set to [Full]
    Parameter [Saturation Amount] set to [200 %]
    Parameter [Blue Bias Reduce] set to [1.39]
    Parameter [Green Bias Reduce] set to [1.14]
    Parameter [Red Bias Reduce] set to [1.18]
    Parameter [Mask Fuzz] set to [1.0 pixels]
    Parameter [Cap Green] set to [100 %]
    Parameter [Highlight Repair] set to [Off]

    What you're looking for in terms of colour, is a good representation of all star (black body) temperatures (from red, orange and yellow, up to white and blue). Reflection nebulosity should appear blue and Ha emissions should appear red to purple (when mixed with blue reflection nebulosity). M42's core should be a teal green (O-III emissions all powered by the hot young blue O and B-class giants in the core, this colour is also perceptible with the naked eye in a big enough scope). As a useful guide for this object, there is a star just south of M43 that should be very deep red (but I was unable to achieve this at this scale with the signal at hand).

    Final noise reduction (I used Denoise Classic for this one as the noise is too heavy for "aesthetic" purposes).

    --- Filter
    Parameter [Filter Mode] set to [Fringe Killer]

    Put stars with their halos in a mask and click on the offending star halos and their colours. This should neutralise them.
     

    --- Wavelet De-Noise
    Parameter [Grain Dispersion] set to [6.9 pixels]
     

    Hope you like this rendition & wishing you clear skies!

     

    • Like 5