
M81 & M82



Hi There,

Here's my first proper attempt at M81 & M82.

SW 100ED with its matching reducer, on a guided HEQ5 Pro, with an unmodded 700D and a clip-in Astronomik light pollution filter.

This was stacked from about 1 hour of total exposure, using 6-7 minute subs at around ISO 800 if I remember correctly. I used flats, darks and bias as recommended for DeepSkyStacker, although with about half the total exposure time of the lights.

Stacked with DeepSkyStacker, processed with PixInsight.

I think the image is rather blue - and the light pollution filter might actually be blocking out useful light. I'm in the middle of Edinburgh though, so removing the filter might render the image useless. Having used PixInsight for the first time, I'm impressed with its capabilities.

Worth another try without the filter? Or do I stick at it and collect more data to bring out more detail - or do you think it's limited by the length of the individual exposures?

Thanks!

M81 M82.jpg


Nice image. Colour is easy to fix.

Forget about in-camera white balance. You can do all the colour balancing you like in PixInsight. I would also recommend that you do all calibration and stacking in PixInsight. DSS has a reputation for messing up or weakening colour (probably undeserved, as it may be down to the user). Use the BatchPreprocessing script for calibration and stacking.

After image integration, crop any stacking edge artefacts. Then use DBE (DynamicBackgroundExtraction): either a few large samples placed between the stars, or many samples placed without worrying about the stars. Make sure to avoid the galaxies.

Then create a preview containing only background. Use BackgroundNeutralization with that preview as the background reference to get an even, neutral background. Next, use the same preview as the background reference in ColorCalibration, with the entire image as the white reference. That's all there is to it.
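If it helps to see what those two steps are doing, here's a rough Python/numpy sketch of the idea behind BackgroundNeutralization and ColorCalibration (the image, the preview coordinates and the scaling are invented for illustration; PixInsight's actual implementation is more sophisticated than this):

```python
import numpy as np

# Toy linear RGB image in [0, 1]; in practice this would be the
# DBE-corrected stack loaded from file.
rng = np.random.default_rng(0)
img = rng.uniform(0.05, 0.9, size=(200, 200, 3))

# "Preview" containing only background sky (hypothetical coordinates).
bg = img[10:60, 10:60, :]

# BackgroundNeutralization, conceptually: shift each channel so the
# background region has the same level in R, G and B.
bg_medians = np.median(bg, axis=(0, 1))   # per-channel background level
target = bg_medians.mean()                # common neutral level
img_bn = img - (bg_medians - target)      # equalise the background

# ColorCalibration with the whole image as white reference, conceptually:
# scale each channel so the average of the whole frame comes out neutral.
white_ref = img_bn.mean(axis=(0, 1))
img_cc = img_bn * (white_ref.mean() / white_ref)

print("background medians before:", bg_medians)
print("background medians after :", np.median(img_bn[10:60, 10:60, :], axis=(0, 1)))
print("channel means after colour calibration:", img_cc.mean(axis=(0, 1)))
```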


1 hour ago, adh325 said:

Thanks - I can't remember the RGB settings for sure, but I think I may have saved the image from DSS without changes. I'll check next time and give that a try. Cheers.


1 minute ago, wimvb said:

I tried PixInsight for stacking, but I think I must have done something wrong because I got some strange red streaks across the image. The same data came out fine in DSS. Has anyone else had this happen?


9 hours ago, adh325 said:

It's difficult to say what it might be without seeing it. If you post an image showing the streaks, maybe someone can answer your question. What is sometimes described as streaks can also be walking noise: hot pixels not removed by calibration end up in a streak pattern in the final image. But that's just one possibility.
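To illustrate what walking noise looks like: without dithering, a hot pixel stays fixed on the sensor while the sky drifts slightly between subs, so after the frames are registered on the stars the hot pixel traces a short trail. A toy numpy simulation of the effect (all numbers invented, drift exaggerated):

```python
import numpy as np

rng = np.random.default_rng(1)
n_subs, size = 30, 64
hot_y, hot_x = 32, 32                # hot pixel position on the sensor

stack = np.zeros((size, size))
for i in range(n_subs):
    frame = rng.normal(0.0, 1.0, (size, size))  # pure background noise
    frame[hot_y, hot_x] += 50.0                 # hot pixel, fixed on the sensor
    # The sky drifts slowly across the sensor; registering on the stars is
    # equivalent to shifting each frame back by the drift (1 px per sub here).
    registered = np.roll(frame, shift=-i, axis=1)
    stack += registered

stack /= n_subs
# The hot pixel now forms a horizontal streak instead of a single dot.
row = stack[hot_y]
print("streak pixels well above the noise:", int(np.sum(row > 1.0)))
```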


Is anyone able to offer a bit of guidance on the light pollution issue? Is it even worth my time taking a whole load more data with no light pollution filter?

Or am I better off continuing to gather more and more data with the LP clip-in filter?

I'm just wondering if I'm really limiting myself, and whether no matter how many 400 s exposures I take, I won't pick up more detail than I already have?


I live in a light polluted area, and wouldn't be caught dead imaging without my LP filter in place. It blocks out most LP from sodium and mercury lamps (still the most common lamps in city environments). It is less efficient at blocking out LED light. Normally LP filters are designed to be used with all DSOs. The problem with LP is that it also introduces noise, and even if you can remove the LP signal itself during processing, you are still stuck with the noise part. That is why an image may look a lot noisier after background extraction than before. Since this noise is truly random, you can reduce it by stacking more subs. Forget about the rule of diminishing returns. If you want low noise images from a LP area, there are only two solutions: stack more subs, or do NB imaging.

Adding more subs won't increase the signal in your stack, but it will reduce the noise, allowing you to reveal detail that would otherwise be lost in the noise floor.
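If it helps, here is a small numpy sketch (with made-up numbers) of why that works: averaging subs leaves the signal unchanged while the random noise shrinks roughly as the square root of the number of subs:

```python
import numpy as np

rng = np.random.default_rng(2)
true_signal = 10.0    # per-sub signal from the target (arbitrary units)
noise_sigma = 20.0    # per-sub random noise (sky + read noise)

for n_subs in (1, 4, 16, 64):
    # many independent pixels, each averaged over n_subs exposures
    subs = true_signal + rng.normal(0.0, noise_sigma, size=(n_subs, 100_000))
    stacked = subs.mean(axis=0)               # average-combine, as in stacking
    snr = stacked.mean() / stacked.std()
    print(f"{n_subs:3d} subs: signal ~ {stacked.mean():5.1f}, "
          f"noise ~ {stacked.std():5.2f}, SNR ~ {snr:4.1f}")
```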


1 hour ago, wimvb said:

That's useful, thanks - I'll continue what I'm doing then and stack more data. I didn't realise that also helped to reduce noise - that's good.

If the mount is well guided, is it worth trying longer exposures (I have a guide scope fitted)? I'm pretty sure I've achieved 600 s before without any major movement in the image.

I might try a higher ISO next time too. Presumably it's OK to mix all of these in the same stacked image provided you have matching darks/bias etc.?

Last annoying question (sorry!): is there a ratio of darks/bias/flats you should follow? For example, if I'm stacking an hour's worth of light frames, is it acceptable to stack, say, only 30 minutes of darks and bias? Maybe that depends on the imaging equipment...


Try using the longest exposure your equipment and conditions can handle. Then adjust the ISO to get the on-camera histogram in the right position (the main peak well separated from the left edge of the display, maybe 1/4 from the left). If you use a higher ISO, you reduce the dynamic range (= the amount of faint detail you can capture without bloating the brightest stars), although in a light-polluted environment that is hardly an issue. A higher ISO will also introduce more noise. Canon cameras seem to have their best trade-off at about ISO 800.

Stacking will never add more signal to your result. If you don't catch it in a single frame, you won't catch it in 30 frames. During stacking, the final image receives the average of all the sub-frame pixel values. If each pixel comes from, say, 5-minute exposures, the final image will have pixel values of 5-minute exposures. What stacking will do is reduce noise, allowing you to reveal signal that would otherwise be lost in the noise floor. Stacking increases the signal-to-noise ratio (SNR) by decreasing the noise, not by increasing the signal.

If you mix different exposure settings, or data taken over multiple sessions, you may want to calibrate the datasets individually. If you don't have a permanent setup, you are very likely to attach the camera slightly differently each time, which means you can't reuse flats. Once the individual sets are calibrated, you can integrate/stack them all together.

Bias frames: take as many as you can. Usually I take 50 - 60 bias frames. Bias frames are easy, and you can reuse them. A master bias from 100 or more sub-frames is essentially noise-free.

Dark frames: the more, the merrier. As with all other frames, the more you take, the lower the noise level. You take dark frames to capture dark current or amp glow and to identify faulty pixels (hot or cold pixels). Any noise left in the master dark will transfer to the light frames. Since dark frames take longer to collect, I usually aim for as many as my light frames, by letting the camera take dark frames after the light frames: I put the camera (DSLR) in a box with a lid, leave the lens cap on, and let it shoot frames while I sleep.

Flat frames: usually I take 10 - 15. Since flat frames have plenty of signal, noise is less of an issue. Try to keep the signal level in flat frames at about 50% of full scale, or somewhat less. I use T-shirt flats and take them directly after each imaging session, with the scope in the park position; that way I can be sure the optical path hasn't changed. I aim for an in-camera histogram with the central peak at 1/3 to 1/2 from the left. So far, that seems to work.
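For reference, this is the standard calibration arithmetic those frames feed into; a minimal numpy sketch under the usual assumptions (master dark taken at the same exposure and temperature as the lights, flats calibrated with bias only; the array values are invented):

```python
import numpy as np

def calibrate(light, master_dark, master_flat, master_bias):
    """Basic frame calibration: subtract the dark, divide by the
    normalised flat (the master dark already contains the bias signal)."""
    flat = master_flat - master_bias        # remove bias from the flat
    flat_norm = flat / np.mean(flat)        # flat averages to 1 after this
    return (light - master_dark) / flat_norm

# Toy 4x4 frames with invented values.
rng = np.random.default_rng(3)
master_bias = np.full((4, 4), 512.0)
master_dark = master_bias + 20.0                               # bias + dark current
master_flat = master_bias + rng.uniform(20000, 25000, (4, 4))  # vignetting etc.
light = rng.uniform(1000, 2000, (4, 4))

print(calibrate(light, master_dark, master_flat, master_bias).round(1))
```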

Hope this clarifies some.


37 minutes ago, wimvb said:

That's very clear and very useful - it clears up some of my misunderstandings, and I'll post some results soon!

Thanks for taking the time to explain in so much detail.


4 hours ago, wimvb said:

Dark frames: the more, the merrier.

Mmm. Many of us find that with Canon, darks introduce more noise. The impossibility of matching the temperature of the light frames, together with the way Canon writes the CR2 files, are among the reasons cited. Of course, YMMV, but it would certainly be worth doing a with-and-without comparison. Light, flat and bias frames, yes; dark frames, no. If you can dither between the light frames (15 px works well for the 700D), so much the better. HTH.


7 hours ago, alacant said:

Definitely. I also used to think that darks were a waste of time, and used CosmeticCorrection and aggressive pixel rejection instead - until I started guiding. Then I found (by doing the comparison) that darks gave better results, even if not temperature-matched. Another way to use a master dark is as a bad pixel map. As you wrote: worth doing a comparison.
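In case it's useful: building a bad pixel map from a master dark boils down to flagging outliers. A hedged numpy sketch (the sigma threshold and the toy numbers are just placeholders, not what any particular tool uses):

```python
import numpy as np

def bad_pixel_map(master_dark, sigma=5.0):
    """Flag pixels deviating more than `sigma` robust standard deviations
    from the median of the master dark (hot or cold pixels)."""
    med = np.median(master_dark)
    mad = np.median(np.abs(master_dark - med))   # robust spread estimate
    robust_std = 1.4826 * mad                    # MAD -> sigma for Gaussian data
    return np.abs(master_dark - med) > sigma * robust_std

# Toy master dark with a couple of simulated hot pixels.
rng = np.random.default_rng(4)
dark = rng.normal(100.0, 2.0, (100, 100))
dark[10, 10] = dark[50, 75] = 5000.0

print("bad pixels flagged at:", np.argwhere(bad_pixel_map(dark)).tolist())
```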

