
Everything posted by jager945

  1. It's worth noting that CUDA is an NVIDIA-proprietary technology, so anything that specifically requires CUDA will not run on AMD cards. Writing and particularly optimising for GPUs has been an incredibly interesting experience. Much of what you know about optimisation for general-purpose computing does not apply, while new considerations come to the fore. Some things just don't work that well on GPUs (e.g. anything that relies heavily on logic or branching). For example, a simple general-purpose median filter shows disappointing performance (some special cases notwithstanding), whereas complex noise evolution estimation throughout a processing chain flies! I was particularly blown away by how incredibly fast deconvolution becomes when using the GPU; convolution and regularisation thereof is where GPUs undeniably shine. My jaw dropped when I saw previews update in real time on a 2080 Super Mobile! I don't think APP uses the GPU for offloading arithmetic yet, by the way. Full GPU acceleration for AP is all still rather new. GPU proliferation (whether discrete or integrated in the CPU) has just about become mainstream and mature, hence giving it another look for StarTools. Exciting times ahead!
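The difference between the two operations can be sketched on the CPU with NumPy/SciPy (purely illustrative; the point is structural: a box blur is pure multiply-accumulate per pixel, while a median needs a data-dependent partial sort per pixel, which is what makes it awkward on wide GPU lanes):

```python
import numpy as np
from scipy.ndimage import convolve, median_filter

rng = np.random.default_rng(0)
img = rng.random((64, 64)).astype(np.float32)

# 3x3 box blur: a pure multiply-accumulate per pixel -- maps naturally to
# GPU fused multiply-add units, with no data-dependent branching.
kernel = np.full((3, 3), 1.0 / 9.0, dtype=np.float32)
blurred = convolve(img, kernel, mode="nearest")

# 3x3 median: each pixel requires a partial sort of its neighbourhood,
# i.e. data-dependent comparisons -- branch-heavy and SIMD-unfriendly.
medianed = median_filter(img, size=3, mode="nearest")

print(blurred.shape, medianed.shape)
```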
  2. "Por qué no los dos?" ("Why not both?") Have you thought of using the Optolong L-eXtreme data as luminance and the full-spectrum dataset for the colours? Assuming the datasets are aligned against each other, it should be as simple as loading them up in the Compose module (L = Optolong; R, G and B = full spectrum), setting "Luminance, Color" to "L, RGB" and processing as normal. It should yield the detail of the second image (and reduced stellar profiles), while retaining visual-spectrum colouring. Lovely images as-is!
  3. Hi, Crop away the stacking artifacts around the edges and fix your flats (these are almost certainly dust donuts). If you cannot (or don't want to) fix the flats, mask out the dust specks before running Wipe. Have a look at the Wipe documentation here, which incidentally shows examples of the same issue you are experiencing above. Hope this helps!
  4. Hi, I replied on the StarTools forum, but thought I'd post the answer here as well. This is a separate thing and doesn't have much to do with AutoDev. Wipe operates on the linear data. It uses an exaggerated AutoDev stretch of the linear data (completely ignoring your old stretch) to help you visualise any remaining issues. After running Wipe, you will need to re-stretch your dataset; the previous stretch is pretty much guaranteed to no longer be valid or desirable, because the gradients have been removed and no longer take up precious dynamic range. That dynamic range can now be allocated much more effectively to showing detail, instead of artifacts and gradients. As a matter of fact, as of version 1.5, you are forced to re-stretch your image; when you close Wipe in 1.5+, it will revert to the wiped, linear state, ready for re-stretching. Before 1.5 it would try to reconstruct the previous stretch, but - cool as that was - it really needs your human input again, as the visible detail will have changed dramatically. You can actually see a progression by gradually making your RoI larger or smaller; as you make your RoI smaller, you will notice the stretch being optimised for the area inside the RoI. That is, detail inside the RoI will become much easier to discern, while detail outside the RoI will (probably) become harder to discern. Changing the RoI gradually should make it clear what AutoDev is doing; confining the RoI progressively to the core of the galaxy, the stretch becomes more and more optimised for the core and less and less for the outer rim (side note: I'd probably go for something in between the second and third image). In short, AutoDev is constantly trying to detect detail inside the RoI (specifically, figuring out how neighbouring pixels contrast with each other) and to figure out what histogram stretch allocates dynamic range to that detail in the most optimal way - "optimal" being: showing as much detail as possible.
TL;DR In AutoDev, you're controlling an impartial and objective detail detector, rather than a subjective and hard-to-control (especially in the highlights) Bézier/spline curve. Having something impartial and objective is very valuable, as it allows you to set up a much better "neutral" image that you can build on with the local detail-enhancing tools in your arsenal (e.g. Sharp, HDR, Contrast, Decon, etc.). Notice how the over-exposed highlights do not bloat *at all*. The cores stay in their place and do not "bleed" into the neighbouring pixels. This is much harder to achieve with other tools; star bloat is unfortunately still extremely common. It should be noted that noise grain from your noise floor can be misconstrued by the detail detector as detail. Bumping up the 'Ignore Fine Detail <' parameter should counter that, though. I hope the above helps somewhat, but if you'd like to post a dataset you're having trouble with, perhaps we can give you some more specific advice?
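As a toy illustration of the principle (emphatically not StarTools' actual AutoDev algorithm), the idea of "optimising a stretch for detail inside an RoI" can be sketched with a simple neighbour-contrast metric; the gamma stretch, region coordinates and metric below are all my own stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic linear image: a bright "core" on a faint pedestal, plus noise.
x = np.linspace(0, 1, 128)
g2d = np.outer(np.exp(-((x - 0.5) ** 2) / 0.02),
               np.exp(-((x - 0.5) ** 2) / 0.02))
img = np.clip(0.02 + 0.95 * g2d + rng.normal(0, 0.001, (128, 128)), 0, 1)

def contrast(stretched, region):
    """Mean absolute horizontal neighbour-to-neighbour contrast in a region."""
    return np.abs(np.diff(stretched[region], axis=1)).mean()

core = (slice(58, 70), slice(58, 70))      # bright core "RoI"
outskirts = (slice(0, 20), slice(0, 20))   # faint background corner

linear = img                # no stretch (gamma = 1)
stretched = img ** 0.3      # aggressive global stretch (gamma = 0.3)

# The stretch allocates dynamic range to the faint outskirts (contrast up)
# at the expense of the bright core (contrast down) -- the trade-off that
# placing an RoI lets you steer.
print(contrast(stretched, outskirts) / contrast(linear, outskirts))  # > 1
print(contrast(stretched, core) / contrast(linear, core))            # < 1
```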
  5. Hi, Stacking artifacts are clear as day in the image you posted (as is Wipe's signature reaction to their presence). Crop them away and you should be good!
  6. I sincerely hope you will re-read my post. It was made in good faith and answers many of your questions and concerns, directly or through the concise information behind the links. To recap my post: a great deal of your issues can likely be alleviated by flats, by dithering (giving your camera a slight push perpendicular to the tracking direction every x frames will do) and by using a rejection method in your stacker. If that is not something that you wish to do or research, then that is, of course, entirely your prerogative. Given your post above, it's probably not productive for me to engage with you any further at this time.
  7. Hi, I would highly recommend doing as suggested in step 1 of the quick start tutorial, which is perusing the "starting with a good dataset" section. Apologies in advance, as I'm about to give some tough love... Your dataset is not really usable in its current state. Anything you would learn trying to process it will likely not be very useful or replicable for the next dataset. There are three important parts of astrophotography that need to work in unison: acquisition, pre-processing, and post-processing. Each step depends on the one before it; bad acquisition will lead to issues during pre-processing and post-processing. You can try to learn these three things all at once, very slowly, or use a divide-and-conquer strategy. If you want to learn post-processing now, try using a publicly available dataset. You will then know what an OK (not perfect) dataset looks like and how easy it really is to establish a quick, replicable workflow. If you want to learn pre-processing, also try using a publicly available dataset (made up of its constituent sub-frames). You will then know what settings to use (per step 1 in the quick start tutorial; see here for DSS-specific settings) and what flats and bias frames do. Again, you will quickly settle on a quick, replicable workflow. Finally, getting your acquisition down pat is a prerequisite for succeeding in the two subsequent stages if you wish to use your own data; at a minimum, take flats (they are not optional!), dither unless you absolutely can't, and get to know your gear and its idiosyncrasies (have you got the optimal ISO setting?). The best advice I can give you right now is to spend some time researching how to produce a clean dataset (not deep - just clean!). Or, if you just love post-processing, grab some datasets from someone else and hone your skills in that area. It's just a matter of changing tack and focusing on the right things.
You may well find you will progress much quicker. I'm sorry for, perhaps, being somewhat blunt, but I just want to make sure you get great enjoyment out of our wonderful hobby and not endless frustration. Wishing you clear skies and good health! EDIT: I processed it, just to show there is at least a star field in there, but, per the above advice, giving you the workflow would just teach you how to work around dataset-specific, easily avoidable issues...
  8. Glad I could help! With regards to the spiral arms, it depends on your dataset. If you'd like to upload it somewhere, I'd be happy to have a look. For images like these, the HDR module's Reveal All mode may help, as well as the Sharp module from ST 1.6 (DSO Dark preset, overdrive the strength). You'd use these tools specifically as they govern small-to-medium-scale detail. It's also possible a different global stretch (with a Region of Interest in AutoDev) can lift some more detail from the murk. Much depends on your "murk" (e.g. how clean and well-calibrated the background is) as well.
  9. Nice going! The star halos are likely caused by chromatic aberration. You can use the Filter module in StarTools to kill those fringes; create a star mask with the offending stars in it, then, back in the module, set Filter Mode to Fringe Killer. Now click on different halos and different parts of the halos (e.g. not the star cores, but the halos themselves!) until they have all but disappeared. You'll end up with white-ish stars for the stars that were affected, but many prefer this over the fringes. Finally, if you are suffering from colour blindness, be sure to make use of the MaxRGB mode in the Color module. It should allow you to check your colouring and colour balance to a good degree, as long as you can distinguish the brightness of the 3 maxima ok (most people can). See here for more info on how to use it. Clear skies!
  10. Hi, I had a quick look at the datasets. While the Ha signal is really nice, the R, G and B data has all sorts of anomalous patches going on. It appears something has gone wrong here. The green dataset in particular looks like you shot clouds, or perhaps something in your optical train dewed over (this was the green channel binned to 35%; Crop, default AutoDev for diagnostics). As such, I can understand you're having trouble getting anything useful in the visual spectrum. That said, your Ha signal is fantastic and you can always create a false-colour Ha image if you want. Once you do acquire a useful RGB signal, you will want to use the Compose module to process chrominance and luminance separately yet simultaneously. You can, for example, use Ha as luminance and R+Ha as red, with G and B as normal. Hope this helps!
  11. Looking great! I would highly recommend software-binning your dataset, as the image at its full size is very much oversampled. Once you've binned and converted the "useless" resolution into better signal at a lower resolution, you can push the dataset harder, and noise should also be much less apparent. Clear skies & stay healthy,
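A minimal sketch of why software binning trades resolution for signal (synthetic numbers; a 2x2 mean bin averages four independent samples, so the noise drops by sqrt(4) = 2):

```python
import numpy as np

rng = np.random.default_rng(2)

signal = 100.0        # flat "true" signal level
noise_sigma = 10.0
img = signal + rng.normal(0, noise_sigma, (512, 512))

# 2x2 software bin: average each 2x2 block, halving the resolution.
binned = img.reshape(256, 2, 256, 2).mean(axis=(1, 3))

# Averaging 4 independent samples divides the noise standard deviation
# by sqrt(4) = 2, doubling SNR at the cost of resolution.
print(img.std(), binned.std())   # ~10 vs ~5
```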
  12. Apologies for any confusion Adrian! The two datasets were used by StarTools to automatically: (1) create a synthetic luminance master (e.g. making the proper blend of 920s of O-III and 2880s of Ha). You just tell ST the exposure times and it figures it out; in this instance, it would have calculated a signal precision of 1.77:1 (Ha:O-III), derived from sqrt(2880/920) for Ha vs sqrt(920/920) for O-III. So I believe that would have yielded a 1/(1+1.77) * 100% = ~36% O-III vs 1.77/(1+1.77) * 100% = ~64% Ha blend. (2) Create a synthetic chrominance master at the same time (e.g. mapping Ha to red, and O-III to green and also blue). If you are using PS or PI, then you can stop reading here, as the following will not be possible (you will want to process both datasets with separate workflows and then afterwards combine chrominance and luminance into one image to the best of your abilities). In ST, the engine processes both synthetic luminance and chrominance masters simultaneously yet separately (for example, during gradient removal both datasets are treated at once). Most operations only affect the luminance portion (stretching, wavelet sharpening, decon, etc.) until you do final colour calibration towards the end of your processing flow (please don't do colour calibration this late in PS or PI though!), which will seamlessly merge colour and luminance. During this step, you can boost the colour contribution of any channel (or remap channels at will), completely separate from the brightness and detail you brought out. It's one workflow, one integrated process, with full control over the luminance and chrominance interplay of the final result.
If interested, this was the entire workflow for that image in ST 1.6;
--- Compose
Load Ha as red, O-III as green and O-III - again - as blue
Parameter [Luminance, Color] set to [L + Synthetic L From RGB, RGB]
Parameter [Blue Total Exposure] set to [Not set] (we only want to count O-III's contribution once)
Parameter [Green Total Exposure] set to [0h16m (16m) (960s)]
Parameter [Red Total Exposure] set to [0h37m (37m) (2220s)] (exposure times have to be multiples of 60s; close enough)
--- Bin
Parameter [Scale] set to [(scale/noise reduction 35.38%)/(798.89%)/(+3.00 bits)]
Image size is 1663 x 1256
--- Crop
Parameter [X1] set to [78 pixels]
Parameter [Y1] set to [24 pixels]
Parameter [X2] set to [1613 pixels (-50)]
Parameter [Y2] set to [1203 pixels (-53)]
Image size is 1535 x 1179
--- Wipe
Will remove gradients and vignetting in both synthetic datasets (use the Color button to toggle between datasets).
Parameter [Dark Anomaly Filter] set to [6 pixels]
Parameter [Drop Off Point] set to [0 %]
Parameter [Corner Aggressiveness] set to [95 %]
--- Auto Develop
Parameter [Ignore Fine Detail <] set to [3.0 pixels]
Parameter [RoI X1] set to [466 pixels]
Parameter [RoI Y1] set to [60 pixels]
Parameter [RoI X2] set to [779 pixels (-756)]
Parameter [RoI Y2] set to [472 pixels (-707)]
--- HDR
Defaults
--- Deconvolution
Parameter [Primary PSF] set to [Moffat Beta=4.765 (Trujillo)]
Parameter [Tracking Propagation] set to [During Regularization (Quality)]
Parameter [Primary Radius] set to [1.3 pixels]
--- Color
Duoband preset (defaults parameter [Matrix] to [HOO Duoband 100R,50G+50B,50G+50B])
Parameter [Bright Saturation] set to [4.70]
Parameter [Red Bias Reduce] set to [6.91] to boost the blue/teal of the O-III to taste
--- Psycho-Visual Grain Equalization De-Noise
(switch signal evolution tracking off, choose 'Grain Equalize')
Parameter [Grain Size] set to [7.0 pixels]
Parameter [Grain Removal] set to [60 %]
(I think I bumped up the saturation just a little afterwards in the Color module) Hope that helps!
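The weighting arithmetic above can be sketched as follows (exposure totals taken from the post; the square-root relationship between total exposure and signal precision is the stated assumption):

```python
import math

# Exposure totals from the post: 2880 s of Ha, 920 s of O-III.
t_ha, t_oiii = 2880.0, 920.0

# Relative signal precision scales with the square root of total exposure.
w_ha = math.sqrt(t_ha / t_oiii)      # ~1.77
w_oiii = math.sqrt(t_oiii / t_oiii)  # 1.0

ha_fraction = w_ha / (w_ha + w_oiii)      # ~0.64
oiii_fraction = w_oiii / (w_ha + w_oiii)  # ~0.36
print(f"Ha {ha_fraction:.0%}, O-III {oiii_fraction:.0%}")
```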
  13. If Ha and O-III are weighted properly to make a synthetic luminance frame, you have some pretty decent signal (your calibration is quite excellent!). If your software does not have an automatic weighting feature when compositing your synthetic luminance, then it is important to remember that, when weighting stacks made up of sub-frames of equal exposure times but with different numbers of subs, the relative signal quality of the individual stacks only increases with the square root of the number of exposures. E.g. if you have twice as many Ha sub-frames as O-III frames, then the signal in the Ha stack is sqrt(2) ~ 1.4x better (not 2x!). As the human eye is extremely forgiving when it comes to noise in colour data, you don't need too much signal to augment your (deep) luminance rendition with O-III colouring, provided your calibration is otherwise very good. Creating, for example, a typical HOO bi-colour then becomes fairly trivial (as usual, however, stars are pretty dominant in the O-III band); Hope this helps!
  14. You may indeed be able to rescue some details from the highlights, but if an area is over-exposed (very easy to do on M42), the detail and/or colour information is just not there and you will have to change tack. In that case, you will want to take a shorter-exposure stack, make sure it is aligned with the longer-exposure stack, process it to taste (preferably fairly similarly to the other stack) and then use the Layer module to create a High Dynamic Range composite. To do this, put one of the two finished images in the foreground and the other in the background. Then choose the 'Minimum Distance to 1/2 Unity' filter. This filter creates a composite that switches between the background and foreground image, depending on which pixel is closest to gray (1/2 unity). To make the switching less apparent/abrupt, bump up the Filter Kernel Radius. This will make the transitions nice and smooth.
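A sketch of how such a minimum-distance-to-1/2-unity composite could work (my own interpretation of the described behaviour, not the Layer module's actual implementation; the uniform-filter feathering stands in for the "Filter Kernel Radius"):

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(3)
short = np.clip(rng.random((64, 64)) * 0.6, 0, 1)   # short-exposure stack
long_ = np.clip(rng.random((64, 64)) + 0.4, 0, 1)   # long-exposure stack

# Per pixel, keep whichever image is closest to 1/2 unity (mid-gray),
# i.e. the better-exposed of the two.
pick_short = np.abs(short - 0.5) <= np.abs(long_ - 0.5)
composite = np.where(pick_short, short, long_)

# Smoother transitions (the "Filter Kernel Radius" analogue): feather the
# selection mask and cross-fade instead of hard switching.
w = uniform_filter(pick_short.astype(float), size=9)
smooth_composite = w * short + (1 - w) * long_
```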
  15. I'm going to respectfully bow out here. If you wish to learn more about illuminants (and their white points) in the context of colour spaces and colour space conversions, have a look here; https://en.wikipedia.org/wiki/Standard_illuminant Lastly, I will leave you with links that discuss different attempts to generate RGB values from blackbody temperatures, along with some pros and cons of choosing different white points and methods/formulas, sources for each, and the problem of intensity. http://www.vendian.org/mncharity/dir3/blackbody/ http://www.vendian.org/mncharity/dir3/starcolor/details.html Clear skies,
  16. Yup, that looks like a simple way to process luminance and colour separately. With regards to the colouring, you're almost getting it; you will notice, for example, that colour space conversions require you to specify an illuminant. For sRGB it's D65 (6500K); i.e. the image is supposed to be viewed under cloudy-sky daylight conditions in an office environment. However, in space there is no "standard" illuminant. That's why it is equally valid to take a G2V star as an illuminant, a random selection of stars as an illuminant, or a nearby galaxy in the frame as an illuminant; yet in photometry, white stars are bluer again than the sun, which is considered a yellow star. The method you choose here for your colour rendition is arbitrary. Further to colour spaces, the CIELab and CIE 1931 colour spaces were precisely crafted using the interpretation of colours by people (a group of 17 observers in the case of CIE 1931). Colour perception is not absolute. It is subject to interpretation and (cultural) consensus.
  17. I'm sorry... I'm at a loss as to what would be wrong or weird about this image. Even the most modest equipment can do great things under good skies and competent acquisition techniques.
  18. If it's the first image at the start of this thread, then the histogram does look a little weird, in that there is very little to the right of the mode (it's mostly all contained in one bin when binned to 256 possible values). Is this what you're seeing too? (It could be some artifact of JPEG compression as well.) Regardless, the background doesn't look weird to me; if I zoom in, I can see plenty of undulation/noise/detail as a result of the right part of the "bell curve" containing plenty of Poissonian noise/detail. What is the main objection of those saying it is "wrong"?
  19. That histogram looks quite "healthy" to me indeed, if it is indeed of the image behind the dialog. The mode (peak of the bell curve) and the noise (and the treatment of it) on both sides of the mode seem like a good example of what I was trying to explain.
  20. Hi vlaiv, I'm sad that the point I was trying to make was not clear enough. That point boils down to this: processing colour information along with luminance (e.g. making linear data non-linear) destroys/mangles said colour information in ways that make comparing images impossible. There are ample examples on the web or AstroBin of white, desaturated M42 cores in long (but not over-exposing!) exposures of M42, whereas M42's core is really a teal green (due to dominant O-III emissions), a colour which is even visible with the naked eye in a big enough scope. I'm talking about effects like these; left is stretching colour information along with luminance (as often still practised), right is retaining linear RGB ratios. IMHO the colour constancy is instrumental in relaying that the O-III emissions are continuing (and indeed the same) in the darker and brighter parts of the core, while the red serpentine/ribbon feature clearly stretches from the core into the rest of the complex. Colouring in AP is a highly subjective thing, for a whole host of different reasons already mentioned, but chiefly because colour perception is a highly subjective thing to begin with. My favourite example of this: (it may surprise you to learn that the squares in the middle of each cube face are the same RGB colour). I'm not looking for an argument. All I can do is wish you the best of luck with your personal journey in trying to understand the fundamentals of image and signal processing. Wishing you all clear skies,
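The desaturation effect can be sketched numerically; the asinh stretch and the mean-as-luminance choice below are my own stand-ins for whatever a given package does, but the principle (a non-linear stretch applied per channel makes colour depend on exposure, while re-applying linear ratios does not) holds generally:

```python
import numpy as np

# The same radiation signature (R:G:B = 1:3:2 in the linear domain)
# recorded at two different exposures.
faint = np.array([0.01, 0.03, 0.02])
bright = faint * 10.0

def asinh_stretch(v, a=0.01):
    """A typical non-linear stretch (not a pure power law)."""
    return np.arcsinh(v / a)

# Naive: stretch colour along with luminance -> the bright recording ends
# up desaturated relative to the faint one, though the object is the same.
naive_faint, naive_bright = asinh_stretch(faint), asinh_stretch(bright)

# Colour-constant: stretch only the luminance, then re-apply the linear
# (exposure-independent) R:G:B ratios on top of it.
def colour_constant(rgb):
    lum = rgb.mean()
    return asinh_stretch(lum) * (rgb / lum)

cc_faint, cc_bright = colour_constant(faint), colour_constant(bright)
```

Normalising each result to channel fractions shows that the naive renditions disagree between exposures (with the bright one pulled towards neutral gray), while the colour-constant renditions are identical.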
  21. Hi, Without singling anyone out here, there may be a few misconceptions in this thread about colouring, histograms, background calibration, noise and clipping. In case there are, for those interested, I will address some of them. On colouring: as many already know, colouring in AP is a highly subjective thing. First off, B-V to Kelvin to RGB is fraught with arbitrary assumptions about white points, error margins and filter characteristics. Add to that atmospheric extinction, exposure choices, non-linear camera response and post-processing choices (more on those later) and it becomes clear that colouring is... challenging. And that's without squabbling about aesthetics in the mix. There is, however, one bright spot in all this. And that is that the objects out there 1. don't care how they are being recorded and processed - they will keep emitting exactly the same radiation signature, 2. often have siblings, twins or analogs in terms of chemical makeup, temperature or physical processes going on. You can exploit points 1 and 2 for the purpose of your visual-spectrum colour renditions; 1. Recording the same radiation signature will yield the same R:G:B ratios in the linear domain (provided you don't over-expose and your camera's response is linear throughout the dynamic range, of course). If I record 1:2:3 for R:G:B for one second, then I should/will record 2:4:6 if my exposure is two seconds instead (or the object is, for example, twice as bright). The ratios remain constant; just the multiplication factor changes. If I completely ignore the multiplication factor, I can make the radiation signature I'm recording exposure-independent. Keeping this exposure-independent radiation signature separate from the desired luminance processing now allows the luminance portion - once finished to taste - to be coloured consistently. Remember, objects out there don't magically change colour (hue, nor saturation) depending on how a single human chose his/her exposure setting!
(note that even this colouring is subject to arbitrary decisions with regards to R:G:B ratio colour retention) 2. Comparable objects (in terms of radiation signatures, processes, chemical makeup) should - it can be argued - look the same. For example, HII areas and the processes within them are fairly well understood; hot, short-lived O and B-class blue giants are often born here and ionise and blow away the gas around their birthplace. I.e. you're looking at blue stars, red Ha emissions and blue reflection nebulosity. Mix the red Ha emissions and blue reflection nebulosity and you get purple/pink. Once you have processed a few different objects in this manner (or compare other people's images, shot with entirely different gear but processed in this manner), you will start noticing very clear similarities in colouring. And you'd be right; HII areas/knots in nearby galaxies will have the exact same colouring and signatures as M42, M17, etc. That's because they're the same stuff, undergoing the same things. An important part of science is ensuring proof is repeatable for anyone who chooses to repeat it. Or... you can just completely ignore preserving R:G:B ratios and the radiation signature in this manner and 'naively' squash and stretch hue and saturation along with the luminance signal, as has been the case in legacy software for the past few decades. I certainly will not pass judgement on that choice (as long as it's an informed, conscious aesthetic choice and not the result of a 'habit' or dogma). If that's your thing - power to you! (In ST it's literally a single click if you're not on board with scientific colour constancy.) On background calibration and histograms: having a discernible bell ('Gaussian') curve around a central background value is - in a perfectly 'empty', linear image of a single-value/constant signal - caused by shot/Poissonian noise.
Once it is stretched, this bell curve will become lopsided (the right side of the mode (mode = peak of the bell curve) will expand, while the left side will contract). This bell curve noise signature is not (should not be) visible any more in a finished image, especially if the signal-to-noise ratio was good (and thus the width/FWHM of the bell curve was small to begin with). Background subtraction that is based around a filtered local minimum (e.g. determining a "darkest" local background value by looking at the minimum value in an area of the image) will often undershoot (e.g. subtract more than a negative outlier can 'handle', resulting in a negative value). Undershooting is then often corrected by increasing the pedestal to accommodate the - previously - negative value. However, a scalar is often applied to such values first, before raising the pedestal value. I.e. the application of a scalar, rather than truncation, means that values are not clipped! The result is that outliers are still very much in the image, but occupy less dynamic range than they otherwise would, freeing up dynamic range for 'real' signal. This can manifest itself in a further 'squashing' of the area left of the mode, depending on subsequent stretching. Again, the rationale here is that values to the left of the mode are outliers (they are darker than the detected 'real' background, even after filtering and allowing for some deviation from the mode). Finally, there is nothing prescribing a particular value for the background mode, other than personal taste, catering to poorly calibrated screens, catering to specific colour profiles (sRGB specifies a linear response for the first few values, for example), or perhaps the desire to depict/incorporate natural atmospheric/earth-bound phenomena like gegenschein or airglow. I hope this information helps anyone & wishing you all clear skies!
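A sketch of the scale-plus-pedestal idea versus plain truncation (my own toy numbers and formulation, not any particular package's implementation):

```python
import numpy as np

rng = np.random.default_rng(4)
img = 0.10 + rng.normal(0, 0.01, 100_000)   # background 0.10 + shot-noise proxy

background = 0.105          # slightly over-estimated local background
residual = img - background # negative outliers: we have "undershot"

# Truncation clips the negative outliers to zero -- information destroyed.
clipped = np.clip(residual, 0, None)

# Scale + pedestal instead: remap so the most negative value lands at 0
# and the maximum at 1. Nothing clips; ordering is preserved, and the
# outliers simply occupy less dynamic range than they otherwise would.
pedestal = -residual.min()
rescued = (residual + pedestal) / (residual.max() + pedestal)
```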
  22. Quickie in 1.6 beta;
--- Type of Data: Linear, was not Bayered, or was Bayered + white balanced
Note that, as of DSS version 4.2.3, you can save your images without white balancing in DSS. This indeed allows for re-weighting of the luminance portion thanks to the more precise green channel. However, since this dataset was colour balanced and matrix corrected, that is currently not possible.
--- Auto Develop
To see what we got. We can see a severe light pollution bias, noise and oversampling.
--- Crop
Parameter [X1] set to [1669 pixels]
Parameter [Y1] set to [608 pixels]
Parameter [X2] set to [3166 pixels (-2858)]
Parameter [Y2] set to [2853 pixels (-1171)]
Image size is 1497 x 2245
--- Rotate
Parameter [Angle] set to [270.00]
--- Bin
To convert oversampling into noise reduction.
Parameter [Scale] set to [(scale/noise reduction 35.38%)/(798.89%)/(+3.00 bits)]
Image size is 794 x 529
--- Wipe
To get rid of the light pollution bias.
Parameter [Dark Anomaly Filter] set to [4 pixels] to catch darker-than-real-background pixels (recommended in cases of severe noise).
--- Auto Develop
Final global stretch.
Parameter [Ignore Fine Detail <] set to [3.9 pixels] to make AutoDev "blind" to the noise grain and focus on bigger structures/details only.
--- Deconvolution
Usually worth a try. Let Decon make a "conservative" automatic mask. Some small improvement.
Parameter [Radius] set to [1.5 pixels]
Parameter [Iterations] set to [6]
Parameter [Regularization] set to [0.80 (noisier, extra detail)]
--- Color
Your stars exhibit chromatic aberration (the blue halos) and DSS' colour balancing will have introduced some further anomalous colouring in the highlights.
Parameter [Bias Slider Mode] set to [Sliders Reduce Color Bias]
Parameter [Style] set to [Scientific (Color Constancy)]
Parameter [LRGB Method Emulation] set to [Straight CIELab Luminance Retention]
Parameter [Matrix] set to [Identity (OFF)]
Parameter [Dark Saturation] set to [6.00]
Parameter [Bright Saturation] set to [Full]
Parameter [Saturation Amount] set to [200 %]
Parameter [Blue Bias Reduce] set to [1.39]
Parameter [Green Bias Reduce] set to [1.14]
Parameter [Red Bias Reduce] set to [1.18]
Parameter [Mask Fuzz] set to [1.0 pixels]
Parameter [Cap Green] set to [100 %]
Parameter [Highlight Repair] set to [Off]
What you're looking for in terms of colour is a good representation of all star (black body) temperatures (from red, orange and yellow, up to white and blue). Reflection nebulosity should appear blue, and Ha emissions should appear red to purple (when mixed with blue reflection nebulosity). M42's core should be a teal green (O-III emissions, all powered by the hot young blue O and B-class giants in the core; this colour is also perceptible with the naked eye in a big enough scope). As a useful guide for this object, there is a star just south of M43 that should be very deep red (but I was unable to achieve this at this scale with the signal at hand).
--- Filter
Parameter [Filter Mode] set to [Fringe Killer]
Put stars with their halos in a mask and click on the offending star halos and their colours. This should neutralise them.
--- Wavelet De-Noise
Final noise reduction (I used Denoise Classic for this one, as the noise is too heavy for "aesthetic" purposes).
Parameter [Grain Dispersion] set to [6.9 pixels]
Hope you like this rendition & wishing you clear skies!
  23. Thanks for the details Andy! Bortle 4 is definitely helping you here, allowing you to easily capture fainter objects (e.g. galaxies) and nebulosity. The CLS filter will skew colours a little, as it removes part of the spectrum. Yellows are usually impacted once the image is properly color balanced (usually yielding foreground starfields that have a distinct lack of yellow stars). It's not the end of the world, just something to be mindful of. If you're using a 600D, ISO 800 seems to be recommended. For a 6D, it's 1600 or 3200 (source). As far as ST goes, if you want to reduce the Saturation, in the Color module use the slider named... Saturation If you'd like to switch color rendering from scientifically useful Color Constancy to something PI/PS/GIMP users are more used to (e.g. desaturated highlights), try the "Legacy" preset. Finally, a maintenance release update for 1.5 was released a couple of days ago with some bug fixes. Updating is highly recommended. And if you feel adventurous, you can also try the 1.6 alpha, which comes with an upgraded signal quality-aware multi-scale (wavelet) sharpener. Clear skies!
  24. Hi Andy, That's an excellent start! At a glance, your images look nice, clean and well calibrated. It bodes very well! Can you tell us if you used a light pollution filter in the optical train? What software did you use to stack? What were the exposure times for each image, and what sort of skies are you dealing with (e.g. on the Bortle scale)? I'm asking as it gives us an idea of what you can reasonably expect. The stars in the image of the North America Nebula are suffering from the Gibbs phenomenon (aka the "panda eye" effect). This is most apparent when using "dumb" sharpening filters such as Unsharp Mask. If you used StarTools' deconvolution module, choosing a "Radius" parameter that is too high will start introducing these ringing artifacts. Not using a star mask will also cause ringing artifacts around over-exposing stars. Cheers!
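The "panda eye" overshoot is easy to reproduce in one dimension (an illustrative sketch; the step profile and amount are my own numbers, but the dark ring an unsharp mask carves around a bright edge is exactly this mechanism):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# A 1-D "star edge": flat zero background with a bright plateau.
profile = np.zeros(100)
profile[40:60] = 1.0

# Unsharp mask: original + amount * (original - blurred).
blurred = gaussian_filter1d(profile, sigma=3)
sharpened = profile + 1.5 * (profile - blurred)

# Overshoot: values dip below the background just outside the edge --
# the dark "panda eye" ring around bright stars.
print(sharpened.min())   # < 0
```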