
The "No EQ" DSO Challenge!


JGM1971


4 hours ago, Filroden said:

I suspect the layering has added noise and given you the impression of it now being blurred. I think there is some fine detail in the nebula, but you probably need to mask the fainter details and perhaps deliberately blur them more while leaving the brighter areas in their full detail. Not only will this draw attention to the detail, it will also hide some of that noise :)

You have a strange colour cast that almost looks like it's a vignette. Have your flats introduced something?

I think it might be the flats because I took them in a rush (it was freezing out there). I also suspect the masking introduced noise. I will try combining the fully stretched file with one that only stretches the core. That way, there will be less noise.
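If anyone fancies trying that blend outside of a layers-based editor like GIMP, here's a rough sketch of the idea in Python (my own toy version, assuming numpy, astropy and scipy are available; the file names, the 0..1 scaling and the blur radius are just placeholders): the core comes from the gentle stretch, everything faint comes from the hard stretch, and a blurred luminosity mask hides the join.

```python
import numpy as np
from astropy.io import fits
from scipy.ndimage import gaussian_filter

# Two aligned versions of the same stack, both scaled to the range 0..1:
# one stretched hard for the faint dust, one stretched gently to keep the core.
hard = fits.getdata("m42_hard_stretch.fits").astype(float)
soft = fits.getdata("m42_core_stretch.fits").astype(float)

# The mask follows the brightness of the gentle stretch: close to 1 over the
# core, close to 0 in the faint outskirts. Blur it so the transition is smooth.
mask = gaussian_filter(np.clip(soft, 0.0, 1.0), sigma=30)

# Core taken from the gentle stretch, everything else from the hard stretch.
blend = mask * soft + (1.0 - mask) * hard
fits.writeto("m42_blend.fits", blend, overwrite=True)
```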


10 hours ago, Herzy said:

Here is the Stacked fits file. You can layer a stretched version and a less-stretched version to preserve the core. If you want to give it a go, please do.

You've really captured some good detail in the surrounding dust but I think you just lack enough data to really stretch it out of the background without the noise going crazy. A background wipe seemed to correct the vignetting and odd colour casts in the periphery of the image, so I think you might want to look at your flats to see what's different for this image. Otherwise, my processing pretty much matches yours. I noticed some horizontal banding in the bottom part of the image. Again, is this calibration?

M42 - Stacked (DSS).jpg

You were right about the stacked image already being very bright! It is possible to push the core harder, but the image takes on an over-processed HDR look and I don't think that works. The good thing is that it's easy to go back and shoot some very short exposures (even 5-10 minutes' worth would be enough) which you can then use to blend into the core. That, and I think it's worth collecting more data so you can really pull out the dust.


15 hours ago, Herzy said:

https://www.dropbox.com/s/30y7o9zhegesrdl/M42 - Stacked (DSS).fts?dl=0

Here is the Stacked fits file. You can layer a stretched version and a less-stretched version to preserve the core. If you want to give it a go, please do.

Well, here's my go, Herzy. Not as sensitively processed as Ken's :icon_biggrin: ; I've been more aggressive, as is my wont. Finished in Lightroom. I wanted to reveal the dust clouds, as these are so often not seen in M42 pictures. A lovely amount of detail in the nebula away from the core; rather envious of that!

But as Ken pointed out, the core is totally washed out (this target seems to need only seconds rather than the full 45), and there is banding just visible. Did you take bias and/or dark frames?

Cheers, Ian

M42 - Stacked (DSS)ST1tighten.jpg

Edited by The Admiral

20 hours ago, rotatux said:

But it's also a matter of precision: if you get one bit (literally) of signal out of each sub, it would take 256 subs to gain 8 bits of precision (not taking noise into account) and have something to stretch -- maybe fewer bits would do; I take 8 as a known reference. Also, depending on sub exposure, you could get less than a full bit of signal (lol! I like quantum physics :)), needing even more subs. There's also the matter of image depth through the imaging software chain: if I get subs with 1 bit of signal each it means they are at bit position 12, so getting 8 bits of precision requires the software to process at least up to 20 bits.

That's where I may be hitting Regim's 16-bit depth limit (for images). On M42 all was fine, but on the Rosette and Horsehead I feel I'm at the stretching limit. With the number of subs I take, the remaining noise should be near level 0 or 1. I stop stretching when the noise becomes significant, but at the same time I understand there's no more signal to come because I've used all the bits within the available depth. Feels like a dead end -- though I'm in the process of trying to "pre-scale" the images, which *will* saturate more stars but which I hope will allow stacking to expose more bits of signal.

I think I get your drift here, Rotatux, though I admit it's pushing my knowledge envelope somewhat! Of course, it's worth adding that stacking works by pushing down the noise level at the bottom end and so extends the dynamic range (DR) that way, but I don't think it does anything to affect the point at which either the wells reach capacity or the ADC saturates. The 'fullness' of the wells when the ADC saturates (which will also affect DR) will be governed by the gain between the sensor and the ADC, i.e. the ISO value. I can see that if the gain is too low then one will not be able to discriminate the fainter parts of the target, so there is an optimum ISO value to use. There has been much written about that, I think.
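Just to put toy numbers on that (my own illustration, nothing from Regim or any particular article): a signal of 0.3 ADU is invisible in any single quantised sub, but as long as there is enough noise to dither the quantisation, the average of a few hundred subs recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)

true_signal = 0.3   # a faint patch of nebula, well below one quantisation step (ADU)
read_noise = 1.0    # about 1 ADU of noise acts as a natural dither
n_subs = 256

# Each sub is signal plus noise, rounded to whole ADU as a RAW file would be.
subs = np.round(true_signal + rng.normal(0.0, read_noise, n_subs))

print(subs[0])      # a single sub is a whole number (mostly 0 or 1), no trace of the 0.3
print(subs.mean())  # the average of 256 subs lands near 0.3 (roughly +/- 0.06)
```

Without that noise every sub would round to the same value and no amount of stacking would add precision, which is roughly why too little signal per sub (or too low a gain) can't be rescued by sheer numbers.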

I presume that you've read Craig Stark's article on "The Effect of Stacking on Bit-Depth"?  (http://www.stark-labs.com/craig/resources/Articles-&-Reviews/BitDepthStacking.pdf)

Ian


Thanks. Ken's image makes the actual nebula look great, and Ian's has great detail in the dust. If I may, I might blend the two together. I did take darks (no bias), but I only took 9, so I thought they wouldn't make a big difference and didn't apply them. Also, I checked the flats and there is that green gradient in them. I'll try to take another set, although it might be too late.


15 minutes ago, Herzy said:

Thanks. Ken's image makes the actual nebula look great, and Ian's has great detail in the dust. If I may, I might blend the two together. I did take darks (no bias), but I only took 9, so I thought they wouldn't make a big difference and didn't apply them. Also, I checked the flats and there is that green gradient in them. I'll try to take another set, although it might be too late.

You can always try converting the flats into mono.

Forgot to add: you don't need to apply both bias and darks if you don't subtract the bias from the darks in the first place (the dark contains the bias anyway). However, you still have to remove the bias from the flats so as not to double up the bias correction. I get myself so confused when I calibrate, as I don't calibrate my darks but I do calibrate my flats. I need to write my own process down at some point so I don't forget it!
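For what it's worth, the arithmetic I try to remember looks like this (just a sketch with made-up names for the master frames, not any particular program's pipeline):

```python
import numpy as np

def calibrate(light, master_dark, master_flat, master_bias):
    # The uncalibrated master dark still contains the bias signal, so one
    # subtraction removes dark current and bias from the light in one go.
    light_cal = light - master_dark

    # The flat must have its bias removed before normalising, otherwise the
    # bias correction would effectively be applied twice.
    flat = master_flat - master_bias
    flat_norm = flat / np.mean(flat)

    return light_cal / flat_norm
```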

Edited by Filroden

Here's my attempt. There's a great amount of data in the image, but it does need some short subs to concentrate on the core. I tried to get as much from the core as possible, but it starts to affect the rest of the image. It's a joy to see this much dust in the image, and the colours are powerful too. Quite difficult to control in ST.

I have had to do a fair bit of noise reduction to cope with the heavy core processing, but it's starting to show.

Well done Hayden for capturing so much.

Thanks for sharing and letting us play.

Nige.

herzym42.jpg


My go too :)

Not as stretched as the others, and I tried to calibrate the colour and get back some of the core (failed at the latter). I saw the core is not totally saturated, but I didn't find the right tool to recover it. Most stars are saturated too, so I had to resort to background stars for the B-V calibration.

It was fun; thanks again for letting us play :)

And Nige... wow, you got the core back, and the nebula colour changes are nice.

m42_fx4.jpg


4 hours ago, The Admiral said:

The 'fullness' of the wells when the ADC saturates (which will also affect DR) will be governed by the gain between the sensor and the ADC, i.e. the ISO value.

Exactly, that was the 'saturation limit' part of my reasoning. Though a physicist would argue that the sensor's pixels keep the same full-well capacity whatever the ISO; it's only the maximum representable electronic level that drops as the ISO rises.

4 hours ago, The Admiral said:

I presume that you've read Craig Stark's article on "The Effect of Stacking on Bit-Depth"?  (http://www.stark-labs.com/craig/resources/Articles-&-Reviews/BitDepthStacking.pdf)

I've read many similar ones, but maybe not this one. I've learned that many of those articles have to be taken with care, as there are often gotchas in the reasoning, sometimes small, sometimes big, so you need to stay alert. Always interesting though; I'm going to read it, thanks Ian.


OK, so I had another go at my M31 image.

This time I re-stacked, adding in some of the 15s subs to give a total of 8m 20s!!

I have played around with curves and levels using GIMP 2.9.5 and also tried adjusting the colour slightly: less red, more blue.

I think it looks better, but the core is still blown out and the edges are now getting a lot of noise coming in. Looks like I need more data (if only it would stop raining).

 

M31 stacked 8m20s-curves123.jpg

M31 stacked 9m30s -curves123.jpg

Edited by mxgallagher
further reprocessing this time with 9mins 30s of data

You're on the right track, just maybe not aggressive enough in your stretching.

Look at what I get with just an 8-bit stretch of your JPEG, using only the levels black point and mid point. You should be able to get far more from your 16-bit image.

M31stacked9m30s-levels.jpeg

The main idea is: 1) stretch with levels (gamma / mid point), or brightness/contrast, or curves, then 2) trim the background with the levels black point.
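If it helps to see those two steps as numbers, here's a bare-bones sketch (assuming the image is already loaded as a float array scaled 0..1; the gamma and black point values are only examples to play with):

```python
import numpy as np

def stretch(img, gamma=0.4, black_point=0.05):
    # 1) Stretch: a gamma below 1 lifts the faint end, much like dragging
    #    the levels mid-point slider to the left.
    img = np.clip(img, 0.0, 1.0) ** gamma

    # 2) Trim the background: set a black point and rescale what remains.
    img = (img - black_point) / (1.0 - black_point)
    return np.clip(img, 0.0, 1.0)
```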

OK, now unless I'm wrong you will see some apparent (strong?) vignetting, so you will need flats. But keep that for after you are comfortable with processing :)


3 hours ago, mxgallagher said:

OK, so I had another go at my M31 image.

This time I re-stacked, adding in some of the 15s subs to give a total of 8m 20s!!

I have played around with curves and levels using GIMP 2.9.5 and also tried adjusting the colour slightly: less red, more blue.

I think it looks better, but the core is still blown out and the edges are now getting a lot of noise coming in. Looks like I need more data (if only it would stop raining).

I don't think you'll get much more from this with just 8 minutes of data. What you've done is a great improvement in terms of pulling some of the detail out and getting a better colour balance. The addition of more data is about the only thing left to do :) Once you have more data it will become easier to stretch and the image will support stronger processing, including noise reduction and sharpening. Let's just hope for some clear skies in February!


On 1/21/2017 at 00:18, ManixZero said:

This was one of the best of about 30 shots at between 10sec and 5 mins.

30 seconds ISO 400 and developed in Lightroom.

I have only been doing this for a month (viewing and imaging) but I am totally hooked.  (I have to improve my focusing technique - I know)

M42-0601.jpg

OK, Guys....

I have taken a couple of images through my telescope and processed them in the only way I know (as a photographer): Lightroom. I have seen how "stacking" seems to bring out detail, but I am not sure what the technique is.

Is it a load of images all at the same exposure? Or is it more of a traditional "bracketing" thing where the images are all shot at different exposure values to create a kind of HDR image?

All my attempts have failed in both Registax and DSS - I have only tried the bracketed images!

So, come on guys, what's the secret to stacking!

 

Cheers

MZ 


28 minutes ago, ManixZero said:

So, come on guys, what's the secret to stacking!

Numbers!

1 - Shoot RAW, not JPEG.

2 - Put lots of 'lights' (RAW images of your target, also known as 'subs') into DSS, all at the same ISO and exposure.

3 - Add darks, bias and flat frames (not essential, but they improve the result) and run DSS.

4 - The end result averages out all the images and has less noise but greater bit depth, so you can stretch it to bring out the detail (see the toy sketch below).
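Not how DSS works internally, just a toy numpy illustration of point 4: averaging N subs keeps the signal where it is but shrinks the random noise by roughly the square root of N.

```python
import numpy as np

rng = np.random.default_rng(1)
signal, noise, n_subs = 100.0, 20.0, 50

# Fifty fake subs of a flat patch of sky: the same signal, fresh noise each time.
subs = signal + rng.normal(0.0, noise, size=(n_subs, 256, 256))

print(subs[0].std())            # one sub: noise of about 20
print(subs.mean(axis=0).std())  # stack of 50: about 20 / sqrt(50), i.e. roughly 2.8
```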

See this link for an explanation of darks, lights and the DSS process:

http://deepskystacker.free.fr/english/index.html

Try the 'User manual', the 'FAQs' and the tutorials to get started.


Hey guys, I have a little confession to make.

I have acquired an EQ3 pro mount, for wide field mainly.

Not giving up on Alt-AZ imaging that's for sure. But it had to be done sometime.

Still keen as ever with the Alt-AZ mount imaging.

Cheers

Nige.


3 minutes ago, Nigel G said:

Hey guys, I have a little confession to make.

I have acquired an EQ3 pro mount, for wide field mainly.

Not giving up on Alt-AZ imaging that's for sure. But it had to be done sometime.

Still keen as ever with the Alt-AZ mount imaging.

Cheers

Nige.

It's a logical step. It will be interesting to hear how you find the difference in setup and use.


2 minutes ago, Filroden said:

It's a logical step. It will be interesting to hear how you find the difference in setup and use.

Indeed,

I have already got my Orion planetary imaging/guide cam and ordered the Orion mini 50mm guider scope, so will be able to guide as well.

Now my wife has agreed to let me build a summerhouse (observatory) at the bottom of our garden, 100 ft from the bungalow, which gives me a 360-degree imaging field and means I can leave the mount set up.

Project for the coming months.

 


11 minutes ago, Nigel G said:

months

How are those defined? Project months are normally measured in 13 weeks :)

I'd love an observatory, but it wouldn't give me much other than somewhere to store stuff. Though being able to align and hibernate the mount does sound good.


Hoping to be ready by September, got a lot of things going on this year.

I had a bit of a play with my modified camera and 135mm lens over the weekend. As there was cloud rolling in and not enough time to image a single object, I kept moving the mount around randomly and grabbing 15s shots. The modded 1200d and CLS filter seem to pick up more Ha than my scope on the same targets.

I found lots of colour around the night sky. I didn't save any of the images as I was just wasting time having fun. Out of a dozen shots, a few were just stars; most had some sort of colour in them, from small patches to large faint clouds.

FUN..

Nige.


5 minutes ago, Nigel G said:

The modded 1200d and CLS filter seem to pick up more Ha than my scope on the same targets.

That's odd? There is nothing about your scope that should reduce Ha detection compared to the 135mm lens.

I've also been trying to put this bad weather to good use too. I have finally got the spacing on my camera down to within 1mm of optimal design and just need to test it with a star field. I can reduce it by 1.5mm and increase it by about the same using Delrin spacers, so I should now be able to get it spot on and finally get a flat field across my full image.

I also invested in a cheap Mini PC which I intend to strap to my mount and run the camera, filter wheel and focuser from. It auto-logs into Windows 10, launching TeamViewer so I can remotely connect to it from the laptop or main PC. It has a single USB3 port for the camera/filter wheel and two USB2 ports, one of which will control the focuser; the second will hopefully control the mount once I figure out how to do that. I still need a bigger memory card as it only has about 15GB of storage and the largest SD card I can find in the house is only 8GB (I was sure I had a 64GB card lying around somewhere).

I'm hoping this allows me to set up at the end of the garden so I can see 20 degrees either side of the meridian (i.e. extend my current view by about 40 degrees). I have a power cable that will reach and with the PC strapped to the mount, only the power cable to the mount and PC will dangle and potentially cause cord wrap issues. Everything else will move with the mount.

The only issue that worries me still (having not tested this in the field) is how I will frame targets after the initial goto. Currently I use a semi-live view on the laptop screen and manually adjust the framing from the mount's handset. I think I can still do this, but there will be a short delay as the live view now has to be sent over the network to the laptop.


13 hours ago, Filroden said:

I'd love an observatory, but it wouldn't give me much other than somewhere to store stuff. Though being able to align and hibernate the mount does sound good.

I wouldn't underestimate the value of somewhere permanent to have your equipment set up and ready at relatively short notice to take advantage of weather breaks etc., Ken. It would lead naturally on to EQ imaging, especially as you can initially spend time getting an accurate polar alignment with the gear in the observatory, saving repeated set-ups from scratch each session.

Good luck with the mini PC too.

Best regards,
Steve


10 hours ago, Nigel G said:

Here's what I mean; these images are cropped from the 135mm lens image to compare against the 150P images.

They all have similar exposure times, and the 135mm lens appears to gather more.

My 135mm lens is f/3.5, and some are f/2 I think; your scope is f/5, which will account for some of the difference. To be fair, I think the 135mm images look like they have been processed more aggressively too.
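Back-of-envelope only (assuming the same camera and exposure time, and ignoring transmission differences), the light per pixel from an extended object scales with one over the focal ratio squared:

```python
# Light per pixel relative to an f/5 scope, for the lens apertures mentioned above.
for f_lens in (2.0, 3.5):
    gain = (5.0 / f_lens) ** 2
    print(f"f/{f_lens}: about {gain:.1f}x the light per pixel of f/5")
```

So roughly twice the light per pixel at f/3.5 and six times at f/2, before any difference in processing.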

