
jager945


Posts posted by jager945


  1. Thanks for the details Andy!

    Bortle 4 is definitely helping you here, allowing you to easily capture fainter objects (e.g. galaxies) and nebulosity. The CLS filter will skew colours a little, as it removes part of the spectrum. Yellows are usually impacted once the image is properly color balanced (usually yielding foreground starfields that have a distinct lack of yellow stars). It's not the end of the world, just something to be mindful of.

    If you're using a 600D, ISO 800 seems to be recommended. For a 6D, it's 1600 or 3200 (source).

    As far as ST goes, if you want to reduce the Saturation, in the Color module use the slider named... Saturation :)

    If you'd like to switch color rendering from scientifically useful Color Constancy to something PI/PS/GIMP users are more used to (e.g. desaturated highlights), try the "Legacy" preset. Finally, a maintenance release update for 1.5 was released a couple of days ago with some bug fixes. Updating is highly recommended. And if you feel adventurous, you can also try the 1.6 alpha, which comes with an upgraded signal quality-aware multi-scale (wavelet) sharpener.

    Clear skies!

    • Thanks 1

  2. Hi Andy,

    That's an excellent start! At a glance, your images look nice and clean and well calibrated. It bodes very well!

    Can you tell us if you used a light pollution filter in the optical train? What software did you use to stack? What were the exposure times for each image, and what sort of skies are you dealing with (e.g. Bortle scale)? I'm asking as it gives us an idea of what you can reasonably expect.

    The stars in the image of the North America Nebula are suffering from the Gibbs phenomenon (aka the "panda eye" effect). This is most apparent when using "dumb" sharpening filters such as Unsharp Mask. If you used StarTools' Deconvolution module, choosing a "Radius" parameter that is too high will start introducing these ringing artifacts. Not using a star mask will also cause ringing artifacts around overexposed stars.
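    For anyone curious, the ringing mechanism is easy to reproduce. Here is a minimal numpy sketch (a toy simulation with made-up values, not StarTools code):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Synthetic star field: one bright star on a dim, flat background.
    img = np.full((64, 64), 0.05)
    img[32, 32] = 1.0
    img = gaussian_filter(img, sigma=2)  # the optics blur the point source

    # A "dumb" unsharp mask: original + amount * (original - blurred copy).
    blurred = gaussian_filter(img, sigma=3)
    sharpened = img + 4.0 * (img - blurred)

    # The "panda eye" shows up as an annulus of values *below* the
    # background level around the star -- the Gibbs-style undershoot.
    ring_min = sharpened[32, 24:28].min()
    print(ring_min < 0.05)  # True: undershoot below the background = ringing
    ```

    Deconvolution with too large a Radius misbehaves in a similar way: the restoring kernel overshoots around high-contrast edges, which is why a star mask (or a smaller Radius) tames it.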

    Cheers!

    • Thanks 1

  3. That is a rather excellent image with fantastic usage of the duoband filter!

    You may be interested in the first 1.6 alpha version of StarTools (just uploaded) which includes some specific Duoband functionality in the Color module. It lets you choose from different channel assignments and blends (and thus color schemes) without having to reprocess/reload the data. I love the HOO-ish rendering above though. 👍

  4. 1 hour ago, bottletopburly said:

    Thanks ivo 👍 imaging  equipment canon 1000d Astro Modified ur filter removed , Tecnosky 70 Q /346mm quadruplet 

    idasD2 filter  , Stacked in deepskystacker (wb from camera  not sure if I should have checked that ) processed in startools ,  taken from my location on edge of town  , led  street lights fitted all around town bortle 6 sky 

    Say no more. IMHO, that's not bad at all for Bortle 6 and a 1000D!  Light pollution is such a killer for fainter objects.

    The mod explains how you were able to get the HII areas showing. Nice!

    • Thanks 1

  5. This looks quite good to me. I would have expected a somewhat deeper and more noise free signal for 8.5h though. What instrument did you use to acquire the data? How is the light pollution at your location?

    I can't quite place Peter's comments about the colours and clipping however...

    The foreground stars show the full range of black body temperatures (red->orange->yellow->white->blue are all accounted for). The yellow core (older stars) is present; the bluer outer rim (younger OB stars, more star formation) is correct; the all too often neglected purple/pink HII areas (as can be corroborated/seen here), responsible for the star formation, are just about coming through in the outer rim; and the red/brown dust lanes are present. You have produced a scientifically valuable image in terms of the coloring of temperatures and emissions. Finally, your histogram looks just fine to me and shows no clipping that I can see:

    (attached screenshot of the histogram)

    I'd say nicely done!

    • Like 2

  6. 5 hours ago, paul mc c said:

    My apologies......i did'nt realise that,feel like a right muppet now.

    I'm the muppet here! :D My apologies - I should've made clear I'm the developer. I usually have this in my signature on other forums. I just updated my profile pic to avoid any confusion in the future.

    Clear skies!

    • Like 2

  7. Hi Paul,

    You are producing some fairly good data - kudos!

    Now I could be wrong, but from the color signature (a lack of yellow) I get the impression you are using a light pollution filter. Is that correct?

    The only obvious thing I can recommend to improve your datasets would be to try dithering between frames, as I can see some walking noise ('streaks').

    When processing spiral galaxies, things to look out for are a yellow core (older stars, less star formation), blue outer rim (younger blue stars, more star formation) and purplish/pink HII areas dotted around the arms (you captured these too). You'd also be looking for a good even foreground distribution of star colors, ranging from red->orange->yellow (missing here)->white->blue.

    You are definitely well on your way!

    (attached: andromedauncomp.jpg)

     

    FWIW I processed your image in StarTools 1.5 as follows;

    --- Auto Develop

    Default parameters, to see what we are working with. We can see some stacking artifacts, gradients, walking noise, dust donut at the top.
    --- Crop

    Cropping away stacking artifacts and dust.
    Parameter [X1] set to [20 pixels]
    Parameter [Y1] set to [55 pixels]
    Parameter [X2] set to [2129 pixels (-16)]
    Parameter [Y2] set to [1414 pixels (-14)]
    Image size is 2109 x 1359
    --- Wipe

    Getting rid of gradients. Default parameters.
    Parameter [Dark Anomaly Filter] set to [4 pixels]
    --- Auto Develop

    Final stretch. Clicked and dragged a Region of Interest (RoI) over part of the galaxy.
    Parameter [Ignore Fine Detail <] set to [2.5 pixels]
    Parameter [RoI X1] set to [95 pixels]
    Parameter [RoI Y1] set to [607 pixels]
    Parameter [RoI X2] set to [1963 pixels (-146)]
    Parameter [RoI Y2] set to [964 pixels (-395)]

    --- Deconvolution

    Auto mask. Default settings. Usually worth a try, as the module tends to "know" when and where improvements can be made. It didn't do too much in this case.
    --- HDR

    Default parameters. Shows a little more detail in the core.
    --- Color

    See notes above about what to look for.
    Parameter [Bright Saturation] set to [3.00] (less saturation in the highlights to hide some color channel alignment issues)
    Parameter [Green Bias Reduce] set to [1.09]
    Parameter [Red Bias Reduce] set to [2.09]
    Parameter [Cap Green] set to [100 %] (only when you're done color balancing)

    --- Wavelet De-Noise

    Default parameters.
    Parameter [Grain Dispersion] set to [7.5 pixels]

    Hope this helps at all!

    • Like 3
    • Thanks 1

  8. Hi Danjc,

    The short version of what's going on: stacking artifacts. Crop them away. They are fairly obvious in the image you posted.

    The long version: any StarTools tutorial or video will start by telling you to crop away stacking artifacts. The application itself will warn you about them too if they are detected (as it does with the dataset you posted). When you launch the Wipe module without cropping them away, you are effectively asking Wipe to create and subtract a light pollution model in which those stacking artifacts are not clipped (StarTools will virtually never clip your data unless you explicitly allow it to). In order to satisfy this, Wipe creates a model that locally backs off (e.g. locally subtracts no light pollution at all), causing local light pollution remnants around the edges where the stacking artifacts are located.
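    To make the idea of a "light pollution model" concrete, here is a deliberately naive gradient-removal sketch (a simple least-squares plane fit; Wipe's actual model is far more sophisticated, and all numbers here are invented for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    h, w = 128, 128
    yy, xx = np.mgrid[0:h, 0:w]

    # Faint round target sitting on a linear light-pollution gradient.
    target = 0.2 * np.exp(-((yy - 64.0) ** 2 + (xx - 64.0) ** 2) / 200.0)
    gradient = 0.3 + 0.4 * xx / w + 0.1 * yy / h
    frame = target + gradient + rng.normal(0.0, 0.005, (h, w))

    # Crude gradient removal: fit a plane a + b*x + c*y to the frame,
    # then subtract it as the "light pollution model".
    A = np.column_stack([np.ones(h * w), xx.ravel(), yy.ravel()])
    coef, *_ = np.linalg.lstsq(A, frame.ravel(), rcond=None)
    model = (A @ coef).reshape(h, w)
    wiped = frame - model

    # The background is now flat: the corner-to-corner spread collapses.
    before = abs(frame[0, 0] - frame[-1, -1])
    after = abs(wiped[0, 0] - wiped[-1, -1])
    print(before, after)
    ```

    A near-zero stacking artifact fed into such a fit drags the local model down, which is exactly why cropping first matters.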

    Tutorials can be found here, including a "Quick start" tutorial if you're in a hurry.

    With regards to your dataset, it is fairly noisy (even when binned), which may make it hard(er) for AutoDev to lock onto the detail. A manual Develop will yield better/easier results in that case.

    When/if using Wipe, use the Narrowband preset (as of v 1.5) for this H-alpha data. You may not need to use Wipe if your data is well calibrated and there are no obvious gradients and/or bias signals in your dataset. Your dataset is well calibrated, with nothing too obvious in terms of bias signal.

    (attached: Autosave001.jpg)

    You can get something like the above with a simple workflow like this;

    --- Auto Develop

    Default values, to see what we got.

    We can see stacking artefacts (thin lines around all edges), noise, oversampling.

    --- Bin

    Converting oversampling into noise reduction.
    Parameter [Scale] set to [(scale/noise reduction 50.00%)/(400.00%)/(+2.00 bits)]
    Image size is 1374 x 918

    --- Crop

    Getting rid of stacking artefacts.
    Parameter [X1] set to [7 pixels]
    Parameter [Y1] set to [8 pixels]
    Parameter [X2] set to [1369 pixels (-5)]
    Parameter [Y2] set to [911 pixels (-7)]
    Image size is 1362 x 903

    --- Wipe

    Optional, I'd say (see also comments/reasons above).

    --- Develop

    Final stretch. Choosing a manual Develop here, as AutoDev will have trouble locking onto the finer, fainter detail due to the overwhelming noise.
    Parameter [Digital Development] set to [96.76 %]
    Parameter [Dark Anomaly Filter] set to [20.0 pixels]

    --- Deconvolution

    Usually worth a try. Auto-generate star mask. Only very marginal improvement visible due to signal quality.

    Parameter [Radius] set to [1.3 pixels]

    --- Wavelet Sharpen

    Default parameters. Using same mask that was auto-generated during Decon.

    --- Wavelet De-Noise

    Default parameters. Parameter [Grain Size] set to [8.0 pixels]
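    As an aside, the "converting oversampling into noise reduction" idea behind the Bin step can be demonstrated numerically. A toy sketch (not StarTools' implementation; values are made up):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Oversampled frame: a constant signal plus per-pixel noise.
    signal, sigma = 100.0, 10.0
    frame = signal + rng.normal(0.0, sigma, (1000, 1000))

    # 2x2 software binning: average each 2x2 block of pixels.
    binned = frame.reshape(500, 2, 500, 2).mean(axis=(1, 3))

    # Averaging 4 uncorrelated pixels halves the noise, doubling the SNR.
    snr_gain = frame.std() / binned.std()
    print(snr_gain)  # ~2.0
    ```

    That factor-of-two SNR improvement is "free" whenever the optics didn't resolve detail at single-pixel scale in the first place.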

    Hope this helps!

     

     

     

    • Thanks 1

  9. On 19/08/2019 at 01:50, Mikey07 said:

    Well here's a screenshot of what I got from last night's data and it's fair to say that it aint good. It looks like someone has taken a sander to it! Any pointers on where I'm going wrong would be appreciated. The images were taken using........

    SW Esprit 100ED, SW HEQ5 Mount, Flattener (which is brand new and the first time I've used it), Modified Canon 600D, 24x120s and ISO800.

    I also have a decent .FTS file of M42 from last February so I'll run that through StarTools as well and see how it comes out.

    M31StarTools.jpg

    AutoDev serves a dual purpose; first it is used to show any defects in your image, then it is used to perform your final global stretch after you have mitigated the issues you found earlier.

    In the above image, we can see gradients, dust donuts and (rather severe) walking noise. Flats are the #1 way to improve your image (and processing enjoyment! 😀), with dithering between frames a close 2nd (this solves the walking noise issue). They are really not optional and they don't really cost anything, except some time.

    Also, don't forget you can (and should) click and drag a Region of Interest for AutoDev to optimize for (by default the RoI is the entire image). In the case of the image above, a slice of M31, including its core, would constitute a good sample of the dynamic range we're interested in.

    There are also cases where AutoDev simply cannot lock onto celestial detail. These cases usually involve heavy noise and/or lots of "empty" background. The latter can be solved with an RoI. The former can be solved by increasing the "Ignore fine detail <" parameter, which makes AutoDev "blind" to small noise grain. If you cannot fix or work around the calibration/acquisition issues in your dataset and cannot obtain a good result with AutoDev, you can always resort to a manual Develop.
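    A crude sketch of those two ideas (an RoI-driven stretch, analysed on a noise-blinded copy) may help; this is not AutoDev's actual algorithm, and every number below is invented:

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    rng = np.random.default_rng(2)

    # Linear toy frame: lots of "empty" noisy background, faint detail in the middle.
    img = rng.normal(0.01, 0.002, (200, 200)).clip(0.0, 1.0)
    img[60:140, 60:140] += 0.05  # the detail we care about (our RoI)

    # 1. "Ignore fine detail <": analyse a median-filtered copy, so
    #    single-pixel noise grain cannot drive the stretch.
    # 2. RoI: derive the stretch from the region of interest only.
    roi = median_filter(img, size=3)[60:140, 60:140]

    # Map the RoI's brightness range across the full output range.
    lo, hi = np.percentile(roi, [0.1, 99.9])
    stretched = np.clip((img - lo) / (hi - lo), 0.0, 1.0)

    # The empty background falls to black; the RoI detail gets the full range.
    print(stretched[:40, :40].max(), stretched[60:140, 60:140].mean())
    ```

    The point of the median-filtered analysis copy is that the stretch is still applied to the original pixels; only the *decision* about the curve ignores the grain.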

    As bottletopburly already highlighted though, post-processing becomes infinitely easier and repeatable when you have "good" data. I'm putting "good" between quotes here, because it just means "properly calibrated". It doesn't have to be deep, gradient free or noise free, as long as it has been dithered and properly calibrated with - at least - flats.

    Hope this helps!

    • Like 3

  10. 1 hour ago, cotak said:

    Well the general difference for me is just noise. Which i noticed some people here are very keen to squish. However, for me I prefer to keep a bit in because otherwise it ends up seeming artificial and overly mushy. And when I print even at large sizes the noise disappears.

    Another thing is the color of stars. I don't find it to be as natural looking as when I do color calibration in APP.

    So for me the amount of time spent in ST to get to that point as it does take a while to process, is not justified. And that's just a personal choice as similar results can be had in photoshop and I actually find it's quicker that way for me in absolute processing time. And we are not talking a dog slow old computer, I am doing all this on a i7-4790 with SSD and 32 gigs of ram.

     

    No problem - at the end of the day it's all about what gives you the result you are most happy with!

    While it is of course futile to argue aesthetics, it is important to understand that noise and signal fidelity are physical properties of a dataset (as well as of its intermediate states towards a final image). Workflows, tools and filters all affect noise propagation in different ways as you process your image. Noise and noise propagation, in turn, directly affect how much you can push a dataset; as you know, it is far easier to dig out more detail with a cleaner dataset and signal. For that same reason, we endeavour to get longer exposure times - so we can show more detail. We deep sky AP'ers deal with very faint signal. Noise is the bane of our existence.

    Noise mitigation is an extremely important part of signal processing. The speckles are just a visual manifestation, whose presence can indeed be left to personal taste. However, the underlying noise's mathematical significance is omnipresent, with far-reaching consequences for the effectiveness of processing steps and their cumulative behavior.

    TL;DR Noise mitigation strategies employed by your software and its algorithms while you process, allow you to dig out more detail in your final image - it's not just about leaving or removing the speckles in your final image.
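    The "cleaner signal lets you push harder" point is ultimately the square-root law of stacking; a quick numerical check (toy numbers):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # 64 subs of the same (constant) signal, each with independent noise.
    n_frames, sigma = 64, 1.0
    frames = 1.0 + rng.normal(0.0, sigma, (n_frames, 10000))
    stack = frames.mean(axis=0)

    # Noise falls as sqrt(N): 64 subs -> 8x cleaner signal to dig into.
    noise_gain = frames[0].std() / stack.std()
    print(noise_gain)  # ~8.0
    ```

    Every processing step then either preserves, spends, or squanders that hard-won headroom, which is what noise propagation tracking is about.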

    3 hours ago, bottletopburly said:

    Nicely explained Ivo, when you use develop and not autodev typically what percentage do you go to without saturating the image 

    Glad it helped someone! There is actually a 'home in' button that, once clicked a few times, will converge on a range suitable for your image.

    Personally, I rarely use the Develop module (I use AutoDev with an RoI instead), except in cases where I really need to correct something or where AutoDev fails entirely. The latter can happen in the presence of noise, or when data was already stretched prior (AutoDev assumes a linear dataset).

    Clear skies,

    • Like 1

  11. Thank you for uploading the dataset. It is much appreciated.

    I don't see anything immediately problematic that would have prevented you from achieving the same result in StarTools alone. The dataset is fairly bright, almost as if it has been pre-stretched and is no longer linear. I could be entirely wrong here though.

    Regardless, here is a simple workflow that was meant to emulate the coloring and result you produced and - presumably - are happy with. Simultaneously, it should hopefully demonstrate the major reason why you would want to include StarTools in your workflow: superior noise mitigation and signal fidelity.

    If time vs result is a measure you are primarily concerned with: processing the image took ~10 minutes on a 6-core Xeon from 2010 with an SSD drive. Your mileage may vary, obviously, depending on hardware and on how comfortable you are with operating the software. That said, mostly default parameters and presets were applied. StarTools 1.4.352 was used.

    --- Auto Develop
    To see what we're working with.
    We can see the image is quite bright, has a strong green bias + severe gradients, and is oversampled.
    Noise is already visible. Some stacking artifacts are visible.
    --- Bin
    To convert oversampling into noise reduction.
    Parameter [Scale] set to [(scale/noise reduction 50.00%)/(400.00%)/(+2.00 bits)]
    Image size is now 2329 x 1782
    --- Crop
    To remove stacking artifacts and frame the galaxy better.
    Parameter [X1] set to [97 pixels]
    Parameter [Y1] set to [79 pixels]
    Parameter [X2] set to [2180 pixels (-149)]
    Parameter [Y2] set to [1711 pixels (-71)]
    Image size is now 2083 x 1632
    --- Wipe
    Vignetting preset.
    Parameter [Dark Anomaly Filter] set to [6 pixels] in order to help Wipe ignore dark noise better.
    --- Auto Develop
    We're doing a 2-stage stretch here to achieve a similar stretch to your current image.
    Parameter [Ignore Fine Detail <] set to [3.3 pixels] to make AutoDev ignore noise
    Parameter [Outside RoI Influence] set to [5 %]
    Parameter [RoI X1] set to [755 pixels]
    Parameter [RoI Y1] set to [568 pixels]
    Parameter [RoI X2] set to [1336 pixels (-747)]
    Parameter [RoI Y2] set to [998 pixels (-634)]
    --- Develop
    Second stage, bring brightness down again.
    Parameter [Gamma] set to [0.58]
    --- Deconvolution
    Attempting some modest deconvolution. The earlier binning will have created some small areas that now have a high enough signal-to-noise ratio.
    Parameter [Radius] set to [2.6 pixels]
    --- HDR
    Reveal preset.
    Totally optional, but demonstrates the value of having these sorts of local optimisation tools in your arsenal.
    Parameter [Detail Size Range] set to [1000 pixels]
    Parameter [Strength] set to [1.7]
    --- Color
    Final color calibration, emulating the style of your current image.
    Parameter [Style] set to [Artistic, Not Detail Aware]
    Parameter [Dark Saturation] set to [2.00]
    Parameter [Bright Saturation] set to [2.30]
    Parameter [Saturation Amount] set to [365 %]
    Parameter [Blue Bias Reduce] set to [1.01]
    Parameter [Green Bias Reduce] set to [1.09]
    Parameter [Red Bias Reduce] set to [1.42]
    --- Develop
    I noticed you prefer a pedestal in your background.
    Parameter [Skyglow] set to [4 %]
    --- Wavelet De-Noise
    Final noise reduction - StarTools has now had the longest time to track (see Tracking feature) noise propagation and is ready to autonomously snuff noise out with per-pixel accuracy.
    Default parameters, except parameter [Smoothness] set to [82 %]

    After this short workflow, you should hopefully end up with this;

    (processed image attached)

    (full res image here)

    Any questions, do let me know! I hope this short example has helped demonstrate the value of using StarTools in your current workflow, or at the very least demonstrates StarTools to be a capable (and lower-cost) alternative to other software.

    Clear skies,

     

    • Like 4

  12. 43 minutes ago, cotak said:

    Just a matter of time vs results.

    I find the results from ST to be more pastel colored than with just APP + PS. Since I will do final touch up in PS, and APP has more control over its faster gradient removal tool. There is very little reason to go into ST at all.

     

    Thank you for the explanation. I'm still holding out hope that this may just be a simple misunderstanding, or that you are perhaps confusing StarTools with another application or action sets for PS?

    If you are indeed talking about StarTools, something appears to be terribly wrong. Have you ever processed an image from start to finish in StarTools, or have you been using it for just one or two features?

    With regards to coloring, it appears you may have missed the Style and LRGB Method Emulation settings and/or the Saturation Amount slider in the Color module? If you don't like the default Color Constancy rendering style, it's a single click to change it to the style of other applications (e.g. those of PI or APP), or to effortlessly emulate the LRGB compositing styles/techniques of Kredner/Kanevsky or Gendler with another click. Color calibration can only be performed on linear data, so perhaps that's where things are going wrong?

    Are you aware that StarTools performs other things useful for astrophotography, such as deconvolution, local contrast optimisation, local dynamic range optimisation, wavelet sharpening, color constancy and more, none of which are currently - to my knowledge - offered by either PS or APP? Are you familiar with ST's noise propagation Tracking feature?

    You are certainly not obliged to do so, but if you wish to share your dataset/stack with me, I'd love to prove StarTools' worth in your workflow. Indeed, especially in the area of noise mitigation, the present image may benefit greatly. Depending how challenging the gradients are, Wipe may also be able to clean up the red and green gradient remnants currently visible in the background.

    Do let me know!

    • Like 1

  13. On 23/06/2019 at 22:43, cotak said:

    In this case it is the difference between APP and DSS+startools. Which is why I wouldn't recommend anyone use ST today.

     

    Otherwise it's the same camera and LP filter.

     

    Hi,

    I'd love to know why you wouldn't recommend anyone use StarTools any longer? Is there a dataset you are having trouble with? (happy to do a personalised tutorial if so!)

    Thank you!

    EDIT: I also noticed on astrobin the difference in exposure time between the two images is 5h50m for the new image vs just 1h15m for the old image?

    • Like 2

  14. @bottletopburly You're so right. It's kind of unfair for newbies; there is so much to tackle and if acquisition is poor, then post-processing is so much harder. It's a double whammy. What you did was the perfect approach; divide and conquer. i.e. decoupling learning post-processing from learning data acquisition.

    Perhaps not a video/tutorial, but I thought I'd post this animated GIF here as well. It shows what the years of development between 1.3.5 and 1.4 (to be released soon) have yielded in terms of signal fidelity.

    It's a 400% enlarged crop of a Jim Misti M8 Hydrogen-alpha dataset that has been intentionally non-linearly "mangled" to put Tracking through its paces.
    Specifically, it has been stretched, Contrast enhanced, then linearly deconvolved (using Tracking time travel and precognition of future signal evolution), then noise reduced (also using Tracking).
    Workflow, parameters and settings were kept identical between 1.3.5 and 1.4. The only difference is the algorithms in Decon and Denoise making increasingly sophisticated use of the Tracking data/time travel over the years (with one quality bump applied here that hasn't been publicly released yet).

    (animated GIF comparison attached)

    ("Original" is signal as visible, without any Tracking-enabled modules applied yet)

    • Like 2

  15. Thanks for spreading the word! 😀

    I'm really hoping the Tracking video helps people understand better what is so incredibly special about ST's processing engine vs traditional applications.

    Tracking is actually where most of my time, efforts and R&D are spent. The "time-travel" of your signal is why StarTools can/should yield better results with the same data.

    There are some significant improvements coming up to the Decon module (again!), making even better use of "future knowledge" about your signal.

    I try to explain it on the website;

    Quote

    Because in StarTools you initiate deconvolution at a later stage, the deconvolution module can take into account how you processed the image after the moment deconvolution should normally have been invoked (e.g. when the data was still linear). In a sense, the deconvolution module now has knowledge about a future it should normally never have been privy to (it certainly isn't in other applications). Specifically, that future tells it exactly how you stretched and modified every pixel after the time its job should have been done, including pixels' noise components.

    You know what really loves per-pixel noise component statistics like these? Deconvolution regularization algorithms! A regularization algorithm suppresses the creation of artefacts caused by the deconvolution of - you guessed it - noise grain. Now that the deconvolution algorithm knows how noise grain will propagate in the "future", it can take that into account when applying deconvolution at the time when your data is still linear, thereby avoiding a grainy "future". It is like going back in time a week and telling yourself the lottery numbers to today's draw.
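    For the curious, the deconvolution that the quote builds on is, at its core, the classic Richardson-Lucy iteration. A bare-bones sketch (no regularization, no Tracking; purely illustrative, with a toy PSF and star):

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(observed, psf, iters=50, eps=1e-12):
        """Plain Richardson-Lucy deconvolution (no regularization)."""
        est = np.full_like(observed, observed.mean())
        psf_flip = psf[::-1, ::-1]
        for _ in range(iters):
            denom = np.maximum(fftconvolve(est, psf, mode="same"), eps)
            est *= fftconvolve(observed / denom, psf_flip, mode="same")
        return est

    # Toy linear "star": a point source blurred by a Gaussian PSF.
    truth = np.zeros((33, 33))
    truth[16, 16] = 1.0
    x = np.arange(-3, 4)
    psf = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / 4.0)
    psf /= psf.sum()
    observed = np.clip(fftconvolve(truth, psf, mode="same"), 0.0, None)

    restored = richardson_lucy(observed, psf)
    # Deconvolution re-concentrates the star's flux into a sharper peak.
    print(restored[16, 16] > 2 * observed[16, 16])
    ```

    A regularized variant dampens the multiplicative update wherever per-pixel noise statistics say the "detail" is probably grain; knowing how that grain will later be stretched (as described above) is what makes the suppression so much more precise.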

     

    • Like 1
    • Thanks 2

  16. 15 hours ago, FaB-Bo-Peep said:

     My stack comprises of Large JPEGS so I will need to make sure my DSLR is switched to RAW files for my next project and look at taking some flats. Just had another go this time using a 32 bit FITS file saved from DSS as apposed to a 16 bit TIFF, the file is much larger so must contain more data to start with yes?

    Aha! That explains a lot :) JPEG is an 8-bit lossy image file format, where your camera applies all sorts of processing (stretching, color calibration, sharpening, sometimes noise reduction) before saving the image to the card. You'll want to avoid all this. When processing this dataset, however, you can make use of StarTools' ability to "reverse" the stretch that JPEG encoding applied (choose "Non-linear sRGB source" when opening the dataset), so at least you can work with somewhat linear data. StarTools' modules should then do a better job across the board.
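    The "reverse the stretch" idea boils down to undoing the sRGB transfer curve baked into the JPEG. A sketch of that curve and its inverse (these are the standard sRGB formulas; StarTools' internals may differ):

    ```python
    import numpy as np

    def srgb_to_linear(v):
        """Undo the sRGB gamma curve, recovering (roughly) linear values."""
        v = np.asarray(v, dtype=float)
        return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

    def linear_to_srgb(v):
        """The forward sRGB curve a camera applies when writing JPEGs."""
        v = np.asarray(v, dtype=float)
        return np.where(v <= 0.0031308, v * 12.92, 1.055 * v ** (1.0 / 2.4) - 0.055)

    pixels = np.array([0.0, 0.2, 0.5, 1.0])
    roundtrip = linear_to_srgb(srgb_to_linear(pixels))
    print(np.allclose(roundtrip, pixels))  # True
    ```

    Note the inverse can only approximate the camera's true response; the 8-bit quantization and any in-camera "picture styles" are lost for good, which is why shooting RAW is still the real fix.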

    Honestly though, when you manage to grab signal like this in mere lossy 8-bit JPEGs, this bodes very well for your future endeavours!

    FITS is the favourite file format when it comes to astrophotography datasets, as it is "philosophically" much closer to scientific instrument data, and virtually never used for finished images meant for human consumption. Because you stacked 8-bit JPEGs, you probably won't see any fidelity gains from using 32-bit FITS, but adopting the 32-bit FITS format (integer is best) for your datasets is good practice.

    Quote

    To me it looks like another step in the right direction but it's all subjective and as each time I process in Startools I'm using slightly different settings, tweaks and the masks so obviously it's impossible to exactly replicate previous results, someone could go mad with all this tweaking, does anyone ever truly finish an image?

    You can find all the steps you took in a file called StarTools.log (should be in the same folder as the executables), so you can precisely replicate them if you need to. As of the 1.4.x beta versions, it also stores the masks in that file, so you can recover them too. They are stored in BASE64 format and look like long text strings. For more info on how to convert these strings back to PNGs you can import as masks, have a look on the StarTools website. (I'm the author of StarTools, therefore I don't think it's appropriate to "spam" direct links to my own website, but it should be easy to find)

    Getting "closure" on an image becomes easier once you get to know you gear and the characteristics of the datasets you produce with it. You'll start getting a sense of what you "can get away with", how far you can push things, but also - and that's what this hobby is all about - your personal tastes. Also not unimportant - to some - is being able to document and replicate faithfully the physical & chemical properties of the objects you are imaging. Emissions, reflections, temperatures, shockwaves, tidal tails. All objects have a story; a past, a present, a future.

    • Thanks 1

  17. Hi,

    I can't be totally sure, but it appears to me the dataset has already been non-linearly stretched. If this is the case, then some things like light pollution/gradient removal, color calibration and deconvolution cannot be performed correctly (from a mathematical point of view). StarTools especially is sensitive to this, as much of the processing engine's effectiveness hinges on being able to reliably track signal evolution from linear source (e.g. 1-to-1 photon counts) to final processed image.

    When you save the DSS result, try saving with settings "embedded" rather than "applied". This should give you a dataset that is linear. If you already did this, then the problem may lie somewhere else; be sure to stack only RAW files (e.g. CR2, NEF or ARW files), and do not use any intermediary programs before passing the frames to DSS. And, as everyone else already commented; flats, flats, flats! 😀 They are the single most effective way of improving your datasets (with dithering between frames a close second). Best of all, it's free (just a bit of effort).
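    To illustrate why flats are so effective: calibration simply divides out the optical path's uneven throughput. A toy example (idealised, noiseless, made-up numbers):

    ```python
    import numpy as np

    h, w = 64, 64
    yy, xx = np.mgrid[0:h, 0:w]

    # Optical throughput: vignetting that darkens towards the corners.
    vignette = 1.0 - 0.4 * (((yy - 32) ** 2 + (xx - 32) ** 2) / 2048.0)

    sky = 50.0    # uniform sky signal reaching the aperture
    dark = 10.0   # additive thermal/bias pedestal
    light = sky * vignette + dark    # a raw light frame
    flat = 1000.0 * vignette + dark  # a flat: evenly lit panel or twilight sky

    # Standard calibration: subtract darks, divide by the normalised flat.
    flat_cal = flat - dark
    calibrated = (light - dark) / (flat_cal / flat_cal.mean())

    # The vignetting divides out exactly; the field is uniform again.
    print(calibrated.std())
    ```

    With real data the same division also removes dust donuts, and it is exactly this kind of multiplicative unevenness that no amount of gradient modelling can cleanly distinguish from faint nebulosity or tidal tails.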

    Still, nicely done though; galaxies are not easy to capture from a light-polluted environment, and you've actually managed to capture the faint tidal tail! (problem of course is that gradients and unevenness can look like tidal tails as well, that's why flats are so important; they remove the need to engage in subjective processing trickery and let the dataset speak for itself)

    Good stuff!

    • Thanks 1

  18. Hi,

    Have a look at the Dell Optiplex 9010 and 9020 series, as well as the HP Elite 8300 series (i7 models). They are regularly sold for (relative) peanuts (check eBay for example - use a coupon if you can find one to get more off the price) once they have been written off by the office, university, government organisation, etc. that - often lightly - used them. The i7 processors in these are very, very capable. They use DDR3 memory which, right now is much cheaper than the overpriced DDR4 memory. Upgrading (if needed) to 16GB of RAM should be very cost effective.

    These cases are very compact, but an extra 2.5" SSD drive will fit no problem. SSD drives have recently come down in price considerably.

    If you don't expect to need to fit a graphics card in the future, these machines are an absolute steal and (IMO and AFAIK) the most cost-effective way to get your hands on an extremely capable photo editing machine.

    If you don't have Windows 10 yet, you can get totally legit OEM keys from eBay for a couple of pounds/dollars; the install media can be downloaded freely from Microsoft. Oftentimes, the machines themselves already come with Windows 10 though.

     

    @DaveS

    If you are getting "out of memory" errors (in any software), you are likely running a 32-bit version of the software, a 32-bit Operating System or your virtual memory settings are incorrectly configured. Modern Operating Systems (and applications) should never really bother you with an "out of memory" error, simply because they use your hard/SSD drive to supplement RAM (this may slow things to a crawl, but should never really result in an "out of memory" error!).

     

    • Thanks 1

  19. Hi,

    That's a nice M42 especially considering this is completely new software to you! What catches my eye though is the darker-than-background part between M42 and M43, as well as the outer part of M42 - any idea what caused that? Apart from the core being white (instead of the usual OIII rich greenish/teal), it's a nice image with a good bit of detail in there for sure!

    I just created a quick-start thread on the StarTools forum for the many PI users that have moved to StarTools for their post-processing, but it's also intended for people who are used to StarTools but would like to try PI. It's a 'dictionary' so to speak of terminology in the two programs. You may find it useful as well.

    Cheers,
     


    Just a small correction, this is post-stretching (mildly). Bob and I have been racking our brains, trying to figure out why his stacked image appears smoothed or quantized. He says he's stacking CR2 raws, not JPEGs (confirmed), and is saving the resulting stack in 32-bit integer format.

    You'll notice the absence of any noise, as would be expected from a DSLR stack; instead it shows a smoothed (median filter-like) image with clear 'strata' or islands of equal-brightness pixels where a level of random noise would be expected.
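    For what it's worth, the 'strata' effect is easy to confirm numerically: genuine noise spreads pixel values over many distinct levels, while a quantized image collapses them into a handful. A minimal Python sketch with synthetic data (not Bob's actual stack) illustrating the idea:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated 16-bit stack: a flat background plus Gaussian read noise.
    noisy = (1000 + rng.normal(0, 25, (256, 256))).astype(np.uint16)

    # The same data after heavy quantisation (e.g. saved at reduced bit depth):
    # values collapse into a few discrete 'strata' of equal-brightness pixels.
    quantised = (noisy // 64) * 64

    print("unique levels, noisy stack:    ", np.unique(noisy).size)
    print("unique levels, quantised stack:", np.unique(quantised).size)
    ```

    Running the same unique-value count on the real stack (e.g. after loading it with astropy or PIL) should immediately show whether the data was posterized somewhere in the pipeline.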


  21. Being a Typical man I am not one for reading manuals, I just experiment and learn.

    Yep, I'm the same. To my partner's dismay, I never ask for directions either... :grin:

    Unfortunately for me, most of my captures (there are not a lot of them, as I have just started really) are quite noisy with strange gradients and LP artifacts, and in particular, due to the rushed nature of my capture (my main job is mostly during the evenings), they suffer from a low S/N ratio too. So I do have to work hard at getting anything acceptable out of the data. I have found StarTools quite a powerful piece of software. I guess if it works for me it works for anyone. My main weakness actually is with the wipe module and I am working on it.

    Judging by the image you posted you're getting some pretty decent data regardless!

    You know, while the time spent on acquisition is important, it's also important to recognize that some of us (myself included!) only have limited time and/or resources to spend on the hobby. Yes, more data and better equipment will yield better images in the long run, but much fun can be had with just getting the most out of basic equipment and basic data and learning about your gear and circumstances' strong and weak points.

    I wrote ST also with that in mind - not all of us have the time (or means) to drive to a dark sky site, set up for the night, spend hours acquiring data and drive back again. Some of us have day jobs and live in the suburbs with lots of light pollution. Some of us can't actually pick and choose what time we want to acquire our data (and avoid the moon) and are stuck with acquiring during the weekend when the moon's up. Contrary to what some people and/or software authors say, I'm of the opinion that that's totally cool and you can still enjoy the hobby regardless - I'm hoping for ST to make a difference here and popularize AP a bit more. And as everyone can see across the various AP forums, the stuff that people can shoot these days with just an entry-level DSLR from their suburban backyards is nothing short of amazing, especially when compared to what was possible 20, 15 or even 10 years ago!

    Back on topic, if you need any help with the Wipe module, or would like a personalised tutorial for your data - let me know! :smiley:


  22. I wish that it had better control over masking or gradient removal

    Hi A.G.,

    Anything in particular you wish you had better control over in these two areas? I thought the tools for creating and touching up masks in ST were pretty extensive, but I'm always open to suggestions! :smiley:

    For example, if you combine a mask with the Wipe module, you can very precisely control which parts of the image should be sampled for gradients and which should be left alone, much like PixInsight's DBE or GradientXTerminator. This is quite handy if you have data with dark anomalies that don't represent the background, such as dust bunnies, or you don't wish for Wipe to touch a DSO.

    Thanks!


    There are a number of annotated YouTube videos available that include the data being processed (see the comments accompanying the videos for links to the data); in addition, there is a dedicated tutorial section on the forums.

    Furthermore, StarTools contains in-application help that describes what each module and each parameter does. Finally, StarTools is (uniquely) non-linear in its processing, meaning that the sequence of operations matters less and that nothing is set in stone until Tracking is switched off. This promotes experimentation and avoids overcooking, letting you tweak and fine-tune your image to your heart's content. The 1.3.5 alpha version (which seems stable on Windows and Linux so far, but may create artifacts on MacOSX) adds to that by allowing quick restore points, so that it is even easier to create multiple interpretations of the same data until you've found the ultimate one.

    Everyone's data is different (especially so for beginners), the rule of thumb being that the better your data, the more 'standard' and 'replicable' your workflow will be. StarTools has many modules and ways to fix problems that were introduced during acquisition. The thing is, though, that such problems typically stem from having a sparse or budget setup - these problems and their workarounds will be different for everyone.

    However if your data is of high quality, things become a lot easier.

    You'll notice that they will all roughly go like this: load an image and indicate it is still linear (start Tracking); DEVELOP or AUTO DEVELOP for a first inspection; fix gradients or light pollution in WIPE; redo DEVELOP or AUTO DEVELOP for the final global stretch (specifying a region of interest will help if using AutoDev); fix local shadows and highlights using the HDR and/or CONTRAST modules; wavelet sharpen using the SHARP module (for example to bring out larger-scale structures such as dust clouds or spiral arms); apply deconvolution using DECON; do COLOR correction; switch off Tracking (perform noise reduction).

    Mallorcasaint, I'm looking forward to your data when you get the time! I'm sure there is lots ST can do to help you improve your images in a manner that suits your personal taste.

    With regards to my comment about Photoshop not being, out of the box, a credible platform for astrophotography in comparison to dedicated AP software, I believe this still stands - only after an additional outlay of a couple of hundred dollars in action sets and plug-ins (Noel Carboni's actions, Annie's actions, GradientXterminator, Astra Image, Topaz Adjust or similar, a decent noise reduction plug-in and some sort of wavelet sharpener) does Photoshop gain a feature set that begins to approximate what can be had for much less and done much more efficiently. Even then, basic functions such as FITS import require a standalone program like FITS Liberator, while other functions are cumbersome (ex. keeping data in a linear state), completely irrelevant to astrophotography, or even downright destructive (ex. the brightness/contrast tool - beginner's sin #1).

    • Like 2

  24. Hi,

    As stipulated on the 'Buy' page, the best thing you can do is try the freely downloadable demo, see how it runs on your machine and if you like it. I also offer personalised tutorials for anyone willing to share a typical night's worth of data, as I find it is the best way to get up to speed with ST and get the most out of the software, specific to your situation and gear.

    Mallorcasaint, the same offer is still open to you, but you never replied! I would love to have the chance to change your mind about ST if you would let me.

    ST is a not-for-profit endeavor and as such, I have no trouble recommending you try other software as well - whatever enthuses you the most for our wonderful hobby! For example, if you decide that ST is not your cup of tea, do have a look at PixInsight (they have a free demo as well & be sure to view Harry's videos to get the most out of the trial period).

    I will say however that Photoshop is simply *not* the way to go for astrophotography any longer (and hasn't been for a number of years now). Out of the box, it is ill-equipped to handle even the most basic things that most astrophotographers take for granted these days (ex. deconvolution, light pollution and gradient modelling, wavelet sharpening, local dynamic range optimisation, color calibration with background neutralization, etc.).

    Hope this helps!

    • Like 2

  25. Nice going and congrats on getting the Atik up and running!

    If at any time you need any help with StarTools, do contact me. Happy to do a personal tutorial specific to your data if you wish. In that case, just share a typical night's worth through a service like Dropbox and we'll do our best to take the mystery out of working with ST for you! :)

