First time DSO recommendations.


K3ny0n

Recommended Posts

I've got a SW Explorer 200p on an EQ5 Synscan mount, and I'm hopefully going to hook it up to EQMOD sometime tomorrow.

I know the mount is not ideal for imaging, but I'm not expecting world-class images like others on this forum. Basically just something to show family and friends.

Camera-wise I've got a Canon 1000D. The camera will be attached with a T-ring and adapter.

Just wondering, what object would be ideal for a first-time image?

Any tips would be great too.

Rob.


Not sure exactly what you can see from your location, but anything in the Messier catalogue is good: M8, M13, M16, M17, M20, M31, M45, M51, M81 & M82, M101. All fairly large and bright, so you will be able to get at least a decent image even if your exposures are short. Just remember that total exposure time is more important than the exposure time of a single shot.


I will have to be doing short exposures.

So, let's say, more short exposures captured and stacked together is better than one single lengthy exposure?

Is it just a case of the more exposures, the better the image?

Rob



It's more exposures = a better image. You will obviously get better detail with a 300 s exposure vs a 30 s exposure, but even at 30 s, if you take 60 of them and stack them together you will still be able to get a very good image.
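As a quick illustration (my own toy sketch, not from the thread; the signal and noise numbers are made up), here's why 60 stacked 30 s subs can rival a much longer exposure: any single noisy reading of a pixel can be well off, but the average of many readings lands much closer to the true value.

```python
# Toy model: one pixel, 60 short "subs", each = true signal + random noise.
import random

random.seed(42)

TRUE_SIGNAL = 50.0   # hypothetical photon signal collected in one 30 s sub
NOISE_SIGMA = 10.0   # hypothetical random noise per sub

def take_sub():
    """Simulate one 30 s exposure of a single pixel: signal plus noise."""
    return TRUE_SIGNAL + random.gauss(0.0, NOISE_SIGMA)

# One sub on its own can be off by quite a lot...
single = take_sub()

# ...but the average of 60 subs lands far closer to the true signal,
# because the random noise partly cancels out.
stack = sum(take_sub() for _ in range(60)) / 60

print(abs(single - TRUE_SIGNAL), abs(stack - TRUE_SIGNAL))
```

The stacked value's error is typically a fraction of a single sub's, which is the whole point of long total integration built from short exposures.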


I also have a 200p, and some of my first DSOs were the Dumbbell Nebula and the Whirlpool Galaxy, both of which had good "wow factor" for myself and friends & family, even with my limited imaging skills.

Kind regards

Thomas


As a general rule, I think more exposures will give you less noise and you'll get the best out of the data you've got, but longer exposures will give you more detail as long as you don't get to the point of over-exposing the image.

Right now I'd probably go for M13. It's in the sky for a reasonable amount of time so you have time to get some good exposures, but a decent distance away from the Moon. Once the Moon isn't so bright you'll have a few more options. If you're in Middlesbrough then I'm guessing you suffer from LP a fair bit which won't favour the dimmer objects either. If that's not the case then if you haven't done it already I'd also suggest taking some wide field shots of the Milky Way once the Moon is less of a problem as they can be quite impressive visually. I stacked around 50 exposures of 45 seconds each using my 450D and the 18-55mm kit lens and it did a sufficiently good job to impress my wife, who isn't exactly interested in astronomy.

James


Here's what is probably a DA question, so be kind. If I have, say, 10x 120-second exposures, and each 120 s exposure effectively has the same data in it (120 s worth of photons from the same distant sources), what's the difference between stacking those 10 exposures and taking 9 copies of a single 120 s exposure, then stacking those 10 clone exposures?

I'm guessing if that worked we'd all be doing it, but I don't 'know' why it wouldn't work... Anyone?


If you like, they have the same "signal" data in them -- the image you want -- but they also contain random "noise" data, and because it's random it will be different in each image. By combining the different images you can separate the signal from the noise, leaving just the signal (or close to it). If you stacked ten clones they'd all have the same noise, and you'd not then be able to separate it from the signal because you couldn't tell which was which.

James


I don't know all the science behind it, but basically every exposure is different. It's of the same object, but you get different photons. So if you stacked your 9 copies of that one exposure you would just be stacking the same photons on top of each other instead of adding new photons. I'm sure someone else can probably explain it better. Sorry if this just confused you more.


Actually, as a crude example, let's say you have a pixel with a signal value of 100, but the atmosphere, imaging system and so on generate random noise values between -5 and +5, so your actual pixel value as read from the camera is between 95 and 105. If you take ten images and average the value of the pixel, in general the value will tend towards 100, the actual signal value. If the noise is truly random, the more images you combine, the more likely the average is to tend towards the correct signal value. If you just take one image with a pixel value of 102, say, you'll get 102 for the final value no matter how many copies of it you combine.
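That crude example is easy to run for yourself. Here's a toy version (my own sketch, not from the post): a pixel whose true value is 100, with up to +/-5 of random noise added per frame. Averaging ten independent frames pulls the value towards 100; averaging ten clones of one frame changes nothing.

```python
# Toy demo of stacking independent frames vs stacking clones of one frame.
import random

random.seed(1)

SIGNAL = 100.0  # the "true" pixel value in a perfect image

def noisy_frame():
    """One captured value of the pixel: true signal plus uniform noise."""
    return SIGNAL + random.uniform(-5.0, 5.0)

# Average ten independent frames: the random noise tends to cancel out.
frames = [noisy_frame() for _ in range(10)]
stacked = sum(frames) / len(frames)

# Average ten clones of a single frame: the noise is identical in every
# copy, so averaging leaves the value (and its error) exactly as it was.
clone = noisy_frame()
cloned_stack = sum([clone] * 10) / 10

print(round(stacked, 2), round(cloned_stack, 2))
```

The stacked average sits close to 100, while the cloned stack is just the original frame's value again, noise and all.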

James


To help me understand it, I think of it this way. MSPaint used to have a 'spraycan' brush which, if you held the mouse key down whilst keeping the mouse still, would eventually give you a solid disc of colour; if you clicked it briefly, you would get just a few random* pixels of colour, within the boundary of the eventual disc.

You could, in theory make a large number of individual brief clicks, and stack them to form the solid disc. If you were to make just one click however, and clone it, you would simply be stacking the same pixels on top of each other, and you would never achieve the disc.

Your subs are the equivalent of the brief clicks, and you stack them to build up your image/disc.

*for the purposes of this example.


It's more exposures = a better image. You will obviously get better detail with a 300 s exposure vs a 30 s exposure.

Yeah, that's what I meant; I must have slipped the short exposure in there without thinking. :grin:

If you're in Middlesbrough then I'm guessing you suffer from LP a fair bit which won't favour the dimmer objects either.

James

Yeah, I have a fair bit of LP. Are there LP filters to use while imaging, or are they just for observing?

I'll start with M13 like you said; it looks like it's in a decent part of the sky from where I will be viewing/imaging.

Andrew, your explanation with MSPaint is probably the best way to explain it to someone! I like it :grin:

Rob.


To help me understand it, I think of it this way. MSPaint used to have a 'spraycan' brush which, if you held the mouse key down whilst keeping the mouse still, would eventually give you a solid disc of colour; if you clicked it briefly, you would get just a few random* pixels of colour, within the boundary of the eventual disc.

You could, in theory make a large number of individual brief clicks, and stack them to form the solid disc. If you were to make just one click however, and clone it, you would simply be stacking the same pixels on top of each other, and you would never achieve the disc.

Your subs are the equivalent of the brief clicks, and you stack them to build up your image/disc.

*for the purposes of this example.

If that helps you think about what's going on then I can't argue with it, but I don't really think it's a good analogy for what happens, because the images aren't "added up" like that when you stack them. If I restated your example another way, it would be like each frame containing a few pieces of a jigsaw, and you'd overlay all the frames so you could get all of the pieces of the jigsaw to make a picture.

I'm struggling to think of a simple example that I think illustrates what happens a bit better, but how about this:

You have to paint a wall and the paint has been delivered, but it was mixed by the work experience boy on a Friday afternoon and the ten tins it has arrived in (it's a big wall :)) are all a slightly different colour. They're all fairly close, but they're not the same. The client who wants the wall painted is breathing down your neck, so you empty all the paint into another bucket and mix it up together, and because some tins were a bit light and others a bit dark, it averages out and you end up with something almost indistinguishable from the colour required in the first place, so you use it to paint the wall. The client is happy, pays you a nice fat bonus and you get on the internet that night and blow it all on a 13mm Ethos. The ten initial tins are the subs you start with (or at least, one pixel in them -- the process happens for each pixel). The final colour is what you get in the same pixel of the final image after you stack/mix them.

It's a bit more complex than that and if you really want to bend your brain you can read all about it in the "Handbook of Astronomical Image Processing", but I think that conveys a reasonable idea of how it works.

James


Thanks all - I guess the bit that works for my brain is that while the 'good data' in each sub should be the same, the 'bad data' varies across each individual sub. The point of stacking is to remove the 'bad data', which differs on each frame, while building up the effective exposure time of the final image - making more of the 'good data'.

Stacking copies of a single sub would build up both good and bad data with nothing to distinguish them, so the end result would just be an over-exposed mush.

I can rationalise that in my brain. ;-)


So like you said, the end image is really an average of all the subs taken?

In the simplest way possible :)

Yes, that's not an unreasonable way to look at it at a basic level. It's not the case that in the image you'll have some pixel data that is "good" and some that is "bad". Generally each pixel will have a colour that's close to what it "ought" to be in a perfect image, plus or minus some random amount due to noise. Noise might come from all sorts of sources; small random variations in the electronic signals in the camera circuitry are one cause, for example. Averaging lots of values for a pixel tends to minimise the noise and give you a pixel colour close to the genuine colour.

There's a lot more that goes on than that when you're processing an image especially when you have darks and flats too, but that's the general idea.
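One small experiment (my own, not from the post; the noise figure is invented) makes the averaging effect concrete: the scatter of a stacked pixel value shrinks roughly as 1/sqrt(number of subs), so a 16-sub stack is about four times less noisy than a single sub.

```python
# Measure how the scatter of a stacked pixel shrinks with more subs.
import random
import statistics

random.seed(7)

SIGNAL = 100.0
SIGMA = 8.0  # hypothetical per-sub noise level

def stacked_pixel(n_subs):
    """Average n_subs noisy readings of the same pixel."""
    return sum(SIGNAL + random.gauss(0, SIGMA) for _ in range(n_subs)) / n_subs

def scatter(n_subs, trials=2000):
    """Standard deviation of the stacked value over many repeats."""
    return statistics.stdev(stacked_pixel(n_subs) for _ in range(trials))

s1 = scatter(1)    # scatter of a single sub, roughly SIGMA
s16 = scatter(16)  # scatter of a 16-sub stack, roughly SIGMA / 4

print(round(s1, 2), round(s16, 2))
```

This is the same reason total integration time matters more than the length of any one sub, within reason.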

James


Archived

This topic is now archived and is closed to further replies.
