
Everything posted by jager945

  1. Too true - of course there is no 'right' way of processing an image. We do what we feel conveys our intentions best. It's all about knowing the (informed!) choices you can make to get there.
  2. Synthesizing believable diffraction spikes is not as straightforward as it seems. The naive approach is to simply 'paint' a nice-looking replacement star on top. The result of this method is, as we can see, usually not very convincing.

A more physically correct approach requires each individual star to be reduced to a point light, measuring magnitude and temperature by means of photometry, after which the starlight is scattered according to a point spread function that is unique to your telescope. Each and every component in the OTA that obstructs or affects the starlight - down to individual protruding screws, mirror clips, flocking, mirror material, etc. - affects how the starlight looks and behaves. Contrary to what you might think, every pixel in your image is affected by starlight, including starlight from outside the immediate imaging plane. That's why you can still see Alnitak's spikes, for example, without the actual star even being recorded.

Once you have modeled your virtual telescope (along with aperture and focal length), you can start calculating how stars would have looked through such a scope, given the angular size of the image you have recorded. It is also important to take into account focus and seeing conditions, which have an impact on how much detail is visible in the individual stars' diffraction patterns. Additionally, the stars' colors need to be forced into adhering to a black body radiation curve in order to be able to simulate dispersion properly.

The final result will be an approximation at best, but should yield a more convincing, physically correct result. Of course, this doesn't take into account any stretching you have performed on the point source light prior to handing it to a star re-synthesis routine.

I took the liberty of reprocessing your image - hope that's okay. The attached image contains re-synthesized star light.
The effect is subtle (as it should be) but distinctly noticeable when viewed up close/zoomed in. I'd also have to disagree with those who say that (proper) diffraction pattern synthesis is not useful. It is included in StarTools not for aesthetic purposes, but for reconstructing stars that were affected by lens aberrations/corrections (warping their spikes), or for accentuating faint nebulosity in widefields, where star distribution often correlates with gas distribution.

EDIT: Further to the above description, it is also imperative that *all* stars/point lights get the same treatment. Otherwise the diffraction will look 'wrong'.

EDIT: I should also say I guessed an 8" F6 Newtonian with an image at 150 ArcSecs. Other equipment/parameters would look totally different...

EDIT: Doh. Just read you used an ED80. Re-synthesised the image accordingly.

Cheers, Ivo
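To make the "virtual telescope" idea above concrete: in the far-field (Fraunhofer) approximation, the diffraction pattern of a point light is the squared magnitude of the Fourier transform of the aperture. The following is a minimal numpy sketch - not StarTools' actual implementation - that builds a toy Newtonian-style pupil (annulus plus four spider vanes; all dimensions are made-up illustrative values) and computes its monochromatic PSF:

```python
import numpy as np

def aperture_psf(n=512, outer=0.45, inner=0.15, vane_width=0.004):
    """Monochromatic PSF of a toy Newtonian aperture (Fraunhofer approximation):
    the far-field diffraction pattern is |FFT(pupil)|^2."""
    y, x = np.mgrid[-0.5:0.5:n * 1j, -0.5:0.5:n * 1j]
    r = np.hypot(x, y)
    # Annulus: primary mirror minus central obstruction (secondary)
    pupil = (r <= outer) & (r >= inner)
    # Four spider vanes (a thin cross) - these produce the diffraction spikes
    pupil &= ~((np.abs(x) < vane_width) | (np.abs(y) < vane_width))
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil.astype(float)))) ** 2
    return psf / psf.sum()  # normalize to unit energy
```

A full re-synthesis would go further: measure each star's magnitude and temperature photometrically, compute a wavelength-dependent PSF per the star's black body spectrum, and convolve every point source with it - which is why every star must get the same treatment.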
  3. My $0.02 (GBP 0.0134 ) Not all that much of an improvement over what has already been posted. I did want to say something else about the whole background level thing.

If you start to do more advanced processing, it is a bit more complicated than just 'keep 5% or 10% (or whatever your preference) and you'll be fine'. Maintaining a background level at all times during processing eats into your available dynamic range. Human eyes like contrast. Human eyes are also fallible. If you combine these two nuggets of information, you can cheat your way to a perceived higher dynamic range and a 'punchier' image.

For example, this is exactly what Unsharp Mask does. It artificially inflates very local contrast by exaggerating blackness around a star and increasing brightness in the star. We have all seen (I presume) the black ringing/panda eyes (officially the 'Gibbs phenomenon') around stars with Unsharp Mask applied if these stars reside on a non-black background (such as a bright background level or nebula).

While in the case of star sharpening this is an undesirable effect, we can also use this dynamic range tinkering to our advantage. Let's consider an image of a nebula. Depending on your preference, you'd have a background level around it (where dark space is), but for pixels in the nebula itself adding this background level is just a waste of dynamic range - it just makes things harder to see. The nebula itself is typically brighter than the background level anyway, and even if it is not, optimizing the dynamic range for the nebula itself between pure black (e.g. without background level) and pure white will do wonders for the 'punch' of your image. No one will ever see/know you didn't add the background levels to the nebula; welcome to the obscure art of local contrast optimization!

So, long story short, this is why you would want to refrain from willy-nilly maintaining that background level at all costs.
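The Unsharp Mask behaviour described above is easy to demonstrate numerically. This is a minimal sketch (plain numpy/scipy, not any particular program's tool) of the classic formula - add back the difference between the image and a blurred copy - applied to a point-like 'star' sitting on a 20% background level; the mask overshoots below the background around the star, producing the dark ringing:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, sigma=3.0, amount=1.5):
    """Classic unsharp mask: exaggerate local contrast by adding back
    the difference between the image and a blurred copy of itself."""
    blurred = gaussian_filter(img, sigma)
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

img = np.full((64, 64), 0.2)  # 20% background level
img[32, 32] = 1.0             # point-like star
sharp = unsharp_mask(img)
# Pixels in a ring around the star now dip BELOW the 0.2 background:
# the 'panda eyes' / Gibbs-like overshoot described above.
print(sharp.min() < 0.2)
```

Note that on a pure black (0.0) background the clip would hide the undershoot entirely - which is exactly why the artifact only shows up when stars sit on a non-black background level or nebula.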
Lastly, Terry, to answer some questions;

1. Is the aim to bring out as much detail as possible? In that I mean every possible background star is made clearly visible? No, not at all. The greats like Ken Crawford even intentionally leave some details blurry, just to emphasize others. A growing number of people even remove stars from their image altogether, just to show an underlying nebula in all its glory (there are some magnificent examples of this by Fred Vanderhaven and Mike Sidonio). It totally depends on how you want to present your subject and what you feel is the important subject in your particular image. There are many different scales (stars, dust lanes, big billowing clouds) in an image, all vying for attention and emphasis. Emphasize them all and an image looks busy (not a bad thing, just a choice). Emphasize just a few and the eye is drawn to them.

2. How do you know when you have introduced artifacts, i.e. something that is not really there? Like Dennis said, compare with other images. Tying in with your 3rd question, beware of green tints. Very few objects in space have a predominantly green appearance in any part of their complex (a couple of nebs excepted, such as M42's core, the Tarantula neb's core, and probably some more OIII-rich nebs). There is a great free PS plug-in called HastaLaVistaGreen (HLVG), which gets rid of any green in your image. The reasoning is that if a pixel is green and your color balance is correct, then that green pixel must be there because of noise. PixInsight (SCNR) and StarTools (Cap Green) have a similar tool.

Hope this helps! Ivo
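The green-removal logic described above boils down to one line of arithmetic. This is a minimal sketch of the common 'average neutral' variant - capping green at the mean of red and blue - and is an illustration of the general idea, not the exact algorithm of HLVG, SCNR, or StarTools' Cap Green:

```python
import numpy as np

def cap_green(rgb):
    """'Average neutral' green removal: if the color balance is correct,
    a green-dominant pixel is assumed to be noise, so cap the green
    channel at the mean of the red and blue channels."""
    out = rgb.copy()
    out[..., 1] = np.minimum(rgb[..., 1], (rgb[..., 0] + rgb[..., 2]) / 2)
    return out

pixel = np.array([[[0.25, 0.8, 0.75]]])  # a noisy, green-dominant pixel
result = cap_green(pixel)
# Green is capped to (0.25 + 0.75) / 2 = 0.5; red and blue are untouched.
```

Genuinely green subjects (an OIII-rich core, say) get desaturated too, which is why such a tool is applied as a deliberate choice rather than blindly.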
  4. Right on David! When I embarked on the whole imaging adventure I, of course, learned about the 'rockstars' of astro imaging. One of these indisputable 'rockstars' is Rogelio Bernal Andreo (Deep Sky Colors - Astrophotography by Rogelio Bernal Andreo). He is also responsible for my favorite (and very sobering) quote about astro imaging; "There are as many schools of astrophotography as there are astrophotographers." How true. Sure, there are best practices, but at the end of the day everything comes down to your own personal interpretation! Mastering image processing is about mastering the skills to create your interpretation, nothing more, nothing less.
  5. Clipping is of course a big no-no, but I have to vehemently disagree with the black level clipping assertion, Rik (and so do my histograms) - is your screen properly calibrated? I can see no black level clipping at all.

Edit: I attached a version of M13 that has all its pixel levels multiplied by 10x - remember that 0 x 10 would still be 0 (e.g. black), but as you can see, no pixel is perfectly black in this image. I can't see any 'neon'-like clipping/glowing either, but maybe you were just speaking in general? You're right of course that space is not perfectly black, but as long as that is reflected in the image at whatever level (and it is in this case), then an image is 'correct' as far as I'm concerned.

As a matter of fact, I think Terry has done a decent job here with the data at hand. Looking at the raw data (which he posted above), the individual stars in the cluster didn't resolve all that well. The reduced resolution image as posted here almost makes up for the 'wandering' of the stars across multiple pixels. It could maybe use some sharpening, but all in all I think this is a decent interpretation given the data.
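The 10x-multiplication trick is a visual form of a test you can also run numerically: since 0 x 10 is still 0, truly clipped pixels stay black no matter how hard you stretch, so it is equivalent to simply counting pixels at (or below) zero. A minimal sketch, assuming a normalized 0..1 image:

```python
import numpy as np

def check_black_clipping(img, threshold=0.0):
    """Count pixels clipped to pure black and report the darkest level.
    Equivalent to the 'multiply by 10x' visual test: 0 * 10 is still 0,
    so clipped pixels stay black under any multiplicative stretch."""
    clipped = int(np.count_nonzero(img <= threshold))
    return clipped, float(img.min())

# Illustrative data: a gradient whose darkest pixel is 0.001, not 0
img = np.linspace(0.001, 1.0, 100)
clipped, darkest = check_black_clipping(img)
# clipped == 0 and darkest == 0.001: no black level clipping
```

The same check against the histogram (a spike piled up at level 0) is what image-processing programs usually show you directly.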
