
M42 with SGL improvements - what next?



OK, so in my other thread I was given some handy advice to improve my M42 image. Here is the result: 47 x 30s exposures stacked, with flats and darks. There is still some gradient, but compared to the blob I had before it's miles better. I've tried the trial version of GradientXTerminator, but it doesn't seem to do anything.

The next thing is to try to bring out detail in the fainter areas of the nebula. I'm happy with what I'm getting around the Trapezium in the 30s exposures, but I'd like to get the same sort of wispy detail in the outer areas of the nebula that I get in longer exposures (well, 1-minute exposures anyway; my alignment isn't good enough to go beyond that yet). How would I go about merging the detail from these two different exposure lengths?

[attached image: the stacked M42 result]


Hi Matthew,

My suggestion would be the following:

1) From DSS, save the image as a 32-bit integer TIFF file.

2) Go to the PixInsight site and download the 30-day trial, which has all the features of the full version.

3) On the PixInsight website, go to the Video Tutorials section and view the one titled An Introduction to the DynamicBackgroundExtraction, ACDNR and HDRWaveletTransform Tools.

4) Open the 32-bit image in PixInsight.

5) Extract the background and then subtract it from the image (as shown in the video).

6) Finally, apply a histogram stretch.
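Steps 5 and 6 can be sketched roughly in Python. This is a hypothetical illustration, not what PixInsight does internally: the background here is assumed to be already estimated, and the stretch uses a common midtone transfer function.

```python
import numpy as np

def subtract_background(image, background):
    """Subtract an estimated background, then rescale into [0, 1]
    so that no pixel values are clipped."""
    residual = image - background
    lo, hi = residual.min(), residual.max()
    return (residual - lo) / (hi - lo)

def histogram_stretch(image, midtone=0.25):
    """A simple midtone transfer function: a common form of
    non-linear histogram stretch (maps midtone -> 0.5)."""
    m = midtone
    return (m - 1.0) * image / ((2.0 * m - 1.0) * image - m)

# Hypothetical data standing in for a stacked 32-bit image.
img = np.linspace(0.1, 0.6, 100).reshape(10, 10)
bg = np.full_like(img, 0.1)   # assumed flat background model

flat = subtract_background(img, bg)
stretched = histogram_stretch(flat)
```

With `midtone < 0.5` the stretch brightens faint pixels while leaving black and white points fixed at 0 and 1.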

That should do the trick.

Cheers

Simon


32-bit processing should indeed help, whether floating point or integer. My result on your previous 16-bit images was much better than on the 8-bit JPEG (with all its artefacts). Dynamic background subtraction is very much like what ImageJ does with its rolling ball filter: estimate the shape of the background from the low-frequency components of the image, and subtract that from the image.
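A rolling-ball-style background estimate can be approximated with a grey-scale morphological opening (this is not ImageJ's exact algorithm, just a sketch that behaves similarly; the frame data below is made up):

```python
import numpy as np
from scipy import ndimage

def estimate_background(image, radius=15):
    """Approximate a rolling-ball background with a morphological
    opening: erode then dilate with a large kernel, which fits a
    smooth envelope under the local minima of the image."""
    size = 2 * radius + 1
    return ndimage.grey_opening(image, size=(size, size))

# Hypothetical frame: a linear gradient plus a small bright "star".
y, x = np.mgrid[0:64, 0:64]
frame = 0.001 * x + 0.001 * y
frame[30:34, 30:34] += 0.5

background = estimate_background(frame)
flattened = frame - background
```

Features smaller than the kernel (the star) survive the subtraction, while the smooth gradient is removed.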


If you load a 16-bit or 32-bit TIFF into ImageJ, it treats it as a composite of three images, and you must process each separately (which is a bit of a pain). If you just process the first, red dominates, because that is the only channel processed. The mouse scroll wheel lets you page through the three planes (watch the info bar at the top of the image display window) and process the active one. Alternatively, you can first split the channels, process each channel separately, and then merge them again (into a composite, otherwise it remaps to 8-bit/channel RGB).
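The split-process-merge idea looks like this in outline (a generic numpy sketch, not ImageJ's API; the image values are invented):

```python
import numpy as np

def process_per_channel(rgb, func):
    """Split an RGB image into its three planes, apply `func`
    to each plane independently, then stack them back together."""
    channels = [func(rgb[..., c]) for c in range(rgb.shape[-1])]
    return np.stack(channels, axis=-1)

# Hypothetical image with a different background offset per channel.
rgb = np.zeros((8, 8, 3), dtype=np.float64)
rgb[..., 0] += 0.30   # red-dominated background
rgb[..., 1] += 0.10
rgb[..., 2] += 0.05

# Example per-channel step: subtract each plane's own median.
flat = process_per_channel(rgb, lambda ch: ch - np.median(ch))
```

Processing each plane with its own statistics is exactly why a red-dominated background needs all three channels handled, not just the first.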


The PixInsight method works by letting you place sampling points anywhere you want over the image. It can do it automatically as well, but the manual method is extremely powerful. Typically you take a copy of the image and stretch it horribly, so that you can see all of the nebula and identify the places where there is real background and no faint nebulosity. You then place (using the mouse) sampling squares on the background and set up a range of parameters. You then apply this to your original unstretched image, and the sampling squares appear in exactly the same places on the original.

It then creates a three-colour model of the background, each colour channel being an independent model based on what it sees in the image. You then use a subtraction technique with "rescale" enabled. This makes sure that any 'negative' values are not clipped; they are simply rescaled back into the 0 to 1 range (32-bit, usually).

This means it's impossible to actually lose any of the data with this process. And you don't need to 'blur' the image by choosing a 'ball size' that somehow gets rid of certain features while maintaining others, so it should be much more robust.
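The sample-fit-subtract-rescale idea can be sketched as follows. This is only an illustration of the workflow, assuming a second-order polynomial background model per channel (PixInsight's actual model is more sophisticated); the sample positions and frame are hypothetical:

```python
import numpy as np

def fit_background(shape, samples, values):
    """Least-squares fit of a 2nd-order 2D polynomial to hand-placed
    background sample points, evaluated over the whole frame."""
    ys = np.array([p[0] for p in samples], dtype=float)
    xs = np.array([p[1] for p in samples], dtype=float)
    A = np.column_stack([np.ones_like(xs), xs, ys, xs * ys, xs**2, ys**2])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(values, dtype=float), rcond=None)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    basis = [np.ones_like(xx, dtype=float), xx, yy, xx * yy, xx**2, yy**2]
    return sum(c * b for c, b in zip(coeffs, basis))

def subtract_rescale(image, model):
    """Subtract the model, then rescale so 'negative' pixels are
    kept rather than clipped."""
    residual = image - model
    lo, hi = residual.min(), residual.max()
    return (residual - lo) / (hi - lo)

# Hypothetical frame: linear gradient plus a bright patch ("nebula").
yy, xx = np.mgrid[0:64, 0:64]
frame = (0.002 * xx + 0.001 * yy).astype(float)
frame[30:34, 30:34] += 0.5

# Sampling squares placed on real background, avoiding the nebula.
samples = [(8, 8), (8, 32), (8, 56), (32, 8), (32, 56),
           (56, 8), (56, 32), (56, 56)]
values = [frame[p] for p in samples]
model = fit_background(frame.shape, samples, values)
result = subtract_rescale(frame, model)
```

Because the rescale maps the full residual range back into [0, 1], pixels that would have gone negative are shifted rather than discarded.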


That probably fits a potential surface (minimum-curvature interpolation) to the data points you select. The rolling ball method, by the way, does not yield any negative values, because it fits a smooth "surface" through the minimal points in the image; it does not smooth residuals in any way. It also works colour band by colour band (which is not always best). Of course, the rolling ball is not necessarily a very good model of the background, so a model with more degrees of freedom (such as a potential surface) may do better. I will check whether there is an ImageJ plugin for such a method.
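A minimum-curvature surface through scattered points is exactly what a thin-plate spline gives. A small sketch (the sample points and gradient are invented; whether PixInsight uses precisely this interpolant is an assumption):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def minimum_curvature_surface(points, values, shape):
    """Thin-plate-spline interpolation through the selected points:
    the classic minimum-curvature surface."""
    rbf = RBFInterpolator(np.asarray(points, dtype=float),
                          np.asarray(values, dtype=float),
                          kernel='thin_plate_spline')
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    grid = np.column_stack([yy.ravel(), xx.ravel()]).astype(float)
    return rbf(grid).reshape(shape)

# Hypothetical background samples taken from a purely linear gradient.
pts = [(5, 5), (5, 50), (50, 5), (50, 50), (28, 30)]
vals = [0.002 * x + 0.001 * y for (y, x) in pts]
surface = minimum_curvature_surface(pts, vals, (60, 60))
```

Unlike the rolling ball, this surface can follow any smooth gradient implied by the samples, without a kernel size to tune.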


I'll have a look at PixInsight then! I've been playing with the HDR photo combination in Photoshop to see if I can combine an overexposed nebula centre with a correctly exposed one and get an image with lots of wisps and a good centre. So far I can get some pretty weird-looking images, but nothing I'd post in here...
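One simple way to combine the two exposures, sketched in Python: blend them with a smooth mask driven by where the long exposure approaches saturation. This assumes both frames are already aligned and normalised to [0, 1]; the threshold and pixel values are made up for illustration:

```python
import numpy as np

def blend_exposures(short_exp, long_exp, threshold=0.8, softness=0.1):
    """Blend a long exposure (faint wisps, blown-out core) with a
    short exposure (clean core): where the long exposure nears
    saturation, fade smoothly over to the short one."""
    t = np.clip((long_exp - threshold) / softness, 0.0, 1.0)
    return (1.0 - t) * long_exp + t * short_exp

# Hypothetical aligned frames: one faint pixel, one blown core pixel.
long_exp = np.array([0.30, 1.00])
short_exp = np.array([0.20, 0.60])
blended = blend_exposures(short_exp, long_exp)
```

The faint pixel keeps the long-exposure value; the saturated core pixel is replaced by the short-exposure one, with a gradual transition in between.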


