A start on Orion


blackparticle

I got out and about to Devon and had a clear night with a southern view so I had to give it a go. :icon_salut:

I decided to start small to get some core detail and work outwards. I bagged about 5 hrs of data in 15s, 30s, 60s, 90s & 150s subs. I tried to get in at least 40 subs of each exposure length.

Stacking was done in DSS using the "entropy weighted" method which is perfect for something like this.
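
For anyone wondering what the entropy weighting actually does, below is my rough mental model of it in numpy terms - not DSS's actual code, and the tile size and weighting are purely illustrative: each sub gets a per-pixel weight from its local entropy, and the final stack is the weighted average, so busy, well-exposed regions get more say than flat or clipped ones.

import numpy as np

def local_entropy(img, tile=16, bins=256):
    # Shannon entropy of each tile, written back over that tile's pixels.
    weights = np.empty_like(img, dtype=float)
    for y in range(0, img.shape[0], tile):
        for x in range(0, img.shape[1], tile):
            patch = img[y:y + tile, x:x + tile]
            hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
            p = hist[hist > 0] / patch.size
            weights[y:y + tile, x:x + tile] = -(p * np.log2(p)).sum()
    return weights

def entropy_weighted_stack(subs):
    # subs: list of registered 2-D arrays scaled 0..1, any mix of exposure lengths.
    w = np.array([local_entropy(s) for s in subs]) + 1e-6   # per-pixel weights
    return (w * np.array(subs)).sum(axis=0) / w.sum(axis=0)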

NEQ6 - SW Equinox 66 - Meade f0.63x FR (approx 266mm @ f/4) - SXVF-M8C - Lodestar & Finder-Guider

I'm really looking forward to another session to grab the longer subs. The image has been shrunk to approx 25% to view on screen, as it was processed on a laptop and I wanted a portrait orientation for this one.

Alan.

m42___the_orion_nebula_by_blackparticle-d4e8wlp.jpg

Wow! Like you say, lots of subs does the trick - you definitely put the work in :icon_salut: If I'd managed to get an image this good I would be getting it printed on canvas, then straight on the wall. I'm liking the detail you've achieved on the dusty plumes! Lovely pic :D

Cheers. I'm happy with how this is going. I had a couple of 210s subs that didn't go into the stack and I can see where the extra depth in the dust should be.

I'm interested in your combining technique with many sub lengths in small increments. I just used three sub lengths. Did you just leave the blending of sub lengths to DSS?

Olly

Yup. Everything went into the cooking pot and entropy did the rest. For this I'm thinking a run of about 100 x 5s as well to take that last bit of excess glow out of the core.

I like the look of this - and a Meade 0.63x reducer on your Equinox 66, now there's an idea. Any issues with it?

It took a few goes to get the best out of it. I started with the filter wheel between the reducer and the cam but it was over-compensating for the flatness. So it goes -

Cam > Adapter > Reducer > Adapter > Filter Wheel > T-extension > SCT / Equinox Adapter.

No compression fittings and only about 1cm of draw on the focuser. Nice and solid. :icon_salut:

I'm not getting any vignetting, but I do have a fairly small chip. Not sure if this would work with a larger sensor format.
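
If anyone wants to sanity-check the spacing side of it, the usual thin-lens rule of thumb for a focal reducer is that the reduction works out at roughly 1 - d/F, where d is the reducer-to-sensor distance and F is the reducer's own focal length. The numbers below are guesses rather than measurements (I haven't measured my train, and the Meade reducer's focal length is assumed), so treat it as a back-of-envelope check only:

def reduction_factor(reducer_focal_mm, reducer_to_sensor_mm):
    # Thin-lens approximation: the closer the sensor sits to the reducer,
    # the weaker the reduction.
    return 1.0 - reducer_to_sensor_mm / reducer_focal_mm

native_fl, aperture = 400.0, 66.0        # SW Equinox 66 (mm)
r = reduction_factor(240.0, 85.0)        # both figures assumed, not measured
print(f"effective FL ~{native_fl * r:.0f} mm, about f/{native_fl * r / aperture:.1f}")

With those guesses it lands in the same ballpark as the approx 266mm @ f/4 quoted above.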

Are you working with layers in PS for the different sets of subs?

/Jessun

Nope. All done in a single stack.

The entropy method has a tendency to preserve noise if you don't use enough subs. After feeding it a few hundred it really does its thing. :D

Post-processing was done in Photoshop. Levels, curves, contrast and a bit of colour enhancement.

Wow, a great method. So I need to try this - is everything dumped into DSS within a single group? If so, how did you manage darks? I assumed each set of exposures would require its own darks, but I could be wrong?

John B

I've found that with a cooled CCD the dark current is negligible. I've got dark stacks at 25s, 60s and 150s just to cover the hot pixels, and I let DSS determine which master darks to apply and by how much to offset them.

I've had a few pictures where I've noticed a new hot pixel and have had to make a new set of darks for that specific exposure length. As an example, my 10-minute dark files didn't work very well on a 20-minute sub, as some pixels only went hot after 10 mins.
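
For what it's worth, the sketch below is a rough stand-in for the sort of thing DSS's dark handling does automatically - it isn't DSS's actual code, and the bias level is an assumption. The linear exposure scaling is also exactly what falls over when a pixel only goes hot past a certain exposure length, which is why I ended up making fresh darks for the longer subs.

import numpy as np

def subtract_scaled_dark(light, master_dark, light_exp, dark_exp, bias=0.0):
    # Scale only the thermal part of the master dark to the light's exposure,
    # then remove bias and the scaled thermal signal from the light frame.
    thermal = (master_dark - bias) * (light_exp / dark_exp)
    return light - bias - thermal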

This is a really interesting approach which I will try. Like yourself, I will go for a number of exposures at a variety of exposure lengths. I think I will give each group its own darks and then stick each group on its own tab along with flats. Can't wait, and thanks for sharing the method.

John B

I'm interested in this technique of combining the different exposure times in DSS - could the same be done, or would it be advisable, with the Andromeda galaxy, where one risks saturating the centre with too long an exposure?

It should do.

What the entropy method seems to do is go through the subs and adjust the average of each pixel based on the data with the widest dynamic range.

If there is detail on a 15s sub that gets wiped out at 5 minutes, it discards that saturated data from the average.

By doing everything in incremental steps I've effectively layered the image during the stacking and given the process the best choice of data to work from. This has also done well for the fine detail, as the shorter subs are usually slightly sharper when stacked than one big long one.
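
In very rough terms, the combining side of it is something like the sketch below - my own simplified picture rather than DSS's actual algorithm, with the saturation threshold and names purely for illustration. Each sub is normalised by its exposure time, clipped pixels are thrown out of the per-pixel average, and whatever is left gets averaged, so the short subs end up carrying the core.

import numpy as np

def hdr_combine(subs, exposures, sat_level=0.98):
    # subs: registered 2-D arrays scaled 0..1; exposures: seconds for each sub.
    stack = np.array([s / t for s, t in zip(subs, exposures)])   # flux per second
    valid = np.array([s < sat_level for s in subs])              # unsaturated pixels only
    counts = np.maximum(valid.sum(axis=0), 1)                    # avoid divide-by-zero
    return (stack * valid).sum(axis=0) / counts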
