
Before using PixInsight I used Photoshop to post-process all my astro photos. I was happy with the results, but the workflows are a bit tedious. I know it works well for some people and they swear by it, but ever since switching to PixInsight I much prefer it, as the workflows are much easier to manage and follow. A steep learning curve, I know, but I think in the long run it's much better than Photoshop for astrophotography.

There is one Photoshop plugin I think is fantastic, though: AstroFlat Pro, which produces a nicely even, flat field for your image and always works. However, I want to do all my processing in PI going forward; I don't want to flip back and forth between PI and PS. I have seen it done in tutorials online with fantastic results, but I find it a pain.

I tried to do the equivalent in PI but I just can't seem to replicate what AstroFlat Pro does. I used Dynamic Background Extractor but that made my image look worse.

Can anyone help or point me in the right direction on how to reduce the brightness (it looks to be light pollution) and make my background a bit darker?

Thanks,
Stefan.

 


I also use PixInsight for my astrophotography pre- and post-processing. I had some mileage with Photoshop, but only for daytime photography, so I figured that since I needed to learn astrophotography post-processing from scratch, I might as well do it with the best tool around. As you said, you can certainly do astro-related things with Photoshop too, but in my eyes it's a tool oriented toward general photography, while PixInsight was born to do exactly this.

That said, if you are having problems with flat fields, the only two solutions I can think of (and they should always be used together, not as substitutes for one another) are:

- Taking proper flats to calibrate your lights with before integrating them. This can be tedious, and in my experience it is very hard to pinpoint the exact exposure settings (histogram peak position) for them to work well. But once I found the right settings, they worked like a charm, and best of all, those settings are repeatable and carry over from session to session. Flats mostly take care of vignetting, uneven illumination and dust spots.

- Extracting the background with either AutomaticBackgroundExtractor or DynamicBackgroundExtraction. I prefer the latter, as I have more control over the placement of the background samples, the tolerance, and how aggressive I want it to be. This takes care of light pollution and evens out the background. The placement of the samples is very important: you do not want to put them on areas of nebulosity, nor let stars (or parts of a star) take up most of a given sample area. Then play with the tolerance. I start from a very low value and refresh the samples until they all turn red (invalid), then increase the tolerance by 0.1 at a time until they all turn green. Once I have found the value that turns them all green, I back up to the previous 0.1 step and increase by 0.01 instead; the correct value will be somewhere between one 0.1 step and the next. Make sure you press "Resize All" to refresh the samples, not "Generate", since the latter replaces all the samples with the default automatic grid and you will lose all the fine-tuning work of manually moving them to the appropriate spots.
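On the first point, the arithmetic that flat calibration performs (the ImageCalibration process handles this in PixInsight) is worth seeing written out. A minimal NumPy sketch, with toy 2x2 arrays standing in for real frames:

```python
import numpy as np

def calibrate_light(light, master_dark, master_flat):
    """Subtract the dark signal, then divide by the normalised master
    flat to correct vignetting, uneven illumination and dust spots."""
    flat_norm = master_flat / np.mean(master_flat)  # mean of the flat becomes 1.0
    return (light - master_dark) / flat_norm

# Toy frames: the light is dimmed to 80% in one corner, and the
# master flat records exactly that same falloff.
light = np.array([[1.0, 0.8], [1.0, 1.0]])
master_dark = np.zeros((2, 2))
master_flat = np.array([[1.0, 0.8], [1.0, 1.0]])

calibrated = calibrate_light(light, master_dark, master_flat)
# The vignetted pixel is restored to the same level as the rest.
```

This is only a sketch of the underlying maths; real calibration also involves bias frames, flat darks and pedestal handling, which PixInsight manages for you.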

If you still have problems with darker areas, try putting more samples on just those areas, more closely spaced. I found this to work too, especially in the corners of the image.
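The coarse-then-fine tolerance search described above can be sketched as a small routine. Here `samples_all_valid` is a hypothetical stand-in for the manual red/green sample check in the DBE dialog; in practice you perform this loop by hand:

```python
def find_tolerance(samples_all_valid, start=0.5, limit=10.0):
    """Step the tolerance up by 0.1 until every sample validates, back
    off one coarse step, then home in with 0.01 increments."""
    # Work in hundredths so repeated float additions cannot drift.
    tol = int(round(start * 100))
    hi = int(round(limit * 100))
    while tol <= hi and not samples_all_valid(tol / 100):
        tol += 10  # coarse pass: 0.1 at a time
    tol -= 10      # back up to the last all-red value
    while not samples_all_valid(tol / 100):
        tol += 1   # fine pass: 0.01 at a time
    return tol / 100

# Toy check: pretend all samples turn green at tolerance 1.23.
print(find_tolerance(lambda t: t >= 1.23))  # → 1.23
```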

Hope this helps and good luck!

Clear skies!


On 26/09/2020 at 07:10, 04Stefan07 said:

I used Dynamic Background Extractor but that made my image look worse.

It might be worth posting a TIFF of the unstretched image.

DBE does a good job of removing background gradients, but it does require some thought as to where you place the samples. You also have to consider whether the background artefact is additive (as in the case of light pollution) or multiplicative (as in the case of vignetting). If the image suffers from light pollution, the modelled background should be subtracted from the image; vignetting effects require division.
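The additive-versus-multiplicative distinction is easy to demonstrate with toy numbers (a sketch of the arithmetic only; in PixInsight the correction is applied by DBE itself, or via PixelMath with the same per-pixel operations):

```python
import numpy as np

signal = np.array([0.10, 0.20, 0.30])            # the "true" sky signal
light_pollution = np.array([0.05, 0.06, 0.07])   # additive gradient
vignetting = np.array([1.00, 0.90, 0.80])        # multiplicative falloff

image = signal * vignetting + light_pollution

# Undo the artefacts in the right order: subtract the additive model
# first, then divide out the multiplicative one.
corrected = (image - light_pollution) / vignetting
print(np.allclose(corrected, signal))  # → True
```

Subtracting a vignetting model, or dividing by a light-pollution model, would leave a residual gradient, which is why choosing the correct operation in DBE matters.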

Background gradients are usually large-scale, so fewer sample points may be needed, but you'll need to compare the generated background with your image to check that any nebulous areas are not being affected too. (I usually start with 7 samples per row and a minimum Default Sample radius of 20.)

Some images also benefit from using either the Axial or Horizontal/Vertical/Diagonal symmetry options.

I usually increase the Tolerance up to 1.5 and then, with Zoom set to 1:1, enable tracking of the current sample and reposition samples so that none of them overlap any stars or nebulosity.

Cheers
Ivor

