Have decided to sell my trusty Equinox ED80 refractor. Sharp optics, smooth two-speed Crayford rotatable focuser, fitted with a finder shoe, and complete with the original Skywatcher case and tripod-threaded mounting bracket. A 3D-printed Bahtinov mask is included, along with a pair of mounting rings and a Losmandy dovetail plate.
Collection preferred (Sussex coast, not far from Brighton), payment by bank transfer or cash on collection.
Looking for £325.00
I was wondering whether it's possible to image a DSO and capture any real depth. Every 3D astro image online is a simulated effect, so at the start of the year I decided to image M42 twice, six months apart.
Back in March I posted an image of M42 imaged at f/10 (2032mm focal length) through my 8SE on 28th February 2019. Then on 3rd September (set up and captured 15-second subs on 1st September) I captured M42 at the same focal length, same orientation and with very similar subs, for a total exposure of 1 hour 24 minutes. This was almost exactly six months between the two images, so the Earth was about 300 million km from its original position, on the other side of the Sun: the longest baseline I could hope for when imaging a 3D stereo pair.
First attached is the image from September...
I color matched the above image with the image from February, aligned them and below is the end result....
As you can see, there is no detectable 3D effect. There was a slight 3D-ish impression, but this was most likely an artefact of the differences in processing between the two stacks, and of scaling and rotating the two images to align them; so no genuine 3D effect.
Of course the stars and nebula are certainly not on a flat plane, so I believe the lack of any discernible depth is simply down to the distance of M42: the resulting angular shift in the stars is so small that it's beyond the sensitivity of my 8" SCT, the camera's pixel resolution and the tracking accuracy of the CGEM.
Calculation of the expected parallax shift, with the Orion Nebula at 1344 light-years and the Earth-Sun distance at 149,600,000 km:
1344 ly = 1.2715e+16 km
Θ° = tan⁻¹(149.6e+6 / 1.2715e+16)
Parallax shift Θ″ = 2 × 3600 × Θ°
Parallax shift Θ″ ≈ 0.00485″
An angular shift of 0.005″ was never going to be picked up by my system, which tracks with an average accuracy of about 1″ RMS and images at 1.16″/pixel at 2032mm focal length with an 8″ SCT. Even at the best tracking accuracy I have ever seen from my gear, 0.38″ RMS, I'd still be well above 0.005″ and well beyond the 40D's pixel resolution, and all this is before considering atmospheric distortion. Clearly my setup is nowhere near sensitive enough.
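As a sanity check, the parallax figure above can be reproduced in a few lines of Python, using the same constants as the calculation:

```python
import math

LY_KM = 9.4607e12      # kilometres in one light-year
AU_KM = 1.496e8        # Earth-Sun distance in km

d_km = 1344 * LY_KM    # distance to M42, ~1.2715e+16 km
theta_deg = math.degrees(math.atan(AU_KM / d_km))

# Six months apart gives a 2 AU baseline, hence the factor of 2;
# 3600 converts degrees to arcseconds
shift_arcsec = 2 * 3600 * theta_deg
print(f'{shift_arcsec:.5f}"')   # roughly 0.00485 arcsec
```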
This was a good project, but unfortunately the distances to objects in the universe are simply too great, even for objects classed as being in our celestial "backyard". If I hadn't tried this experiment then I would always have wondered, and curiosity would most likely have made me try it eventually.
I scrapped all the Oiii and Sii data I previously took during a full moon (about 15 hours worth) and retook it all when the moon was a bit smaller at 76%. Ha was taken during 98% and 67% moon. All the lights were taken on the following nights: 12th, 19th and 20th September 2019.
Integration times, all in 600s subs unbinned:
Ha = 28.33 hours
Oiii = 5.67 hours
Sii = 5.67 hours
The Ha data is really nice, and unsurprisingly the Oiii and Sii are not as strong (or as nice).
I'm missing that (vital) step in my processing routine of getting the Sii and Oiii properly stretched to match the Ha before combining. I don't really know how to deal with the weaker data properly. Any pointers would be appreciated.
What I do currently:
All the data is loaded into APP into separate channels/sessions.
The data is stacked and registered against the best Ha sub
This produces individual stacks of Ha, Sii and Oiii that are all registered
Each channel is processed with DPP in APP and then saved as a 16bit TIFF
Each is opened in PS
Stars removed with AA and any remnants removed and tidied up
I then open a blank RGB document in PS
I paste Ha into Green, Sii into Red and Oiii into Blue
Adjust the selective colour settings to get 'Hubble palette'
Adjust levels, curves and saturation until it looks OK
All the Ha Sii Oiii data is then combined together in a single 'super' stack in APP using quality weighted algorithm to create a 'luminance'
That luminance layer is adjusted using levels, curves, and NC tools such as local contrast enhancement and deep space noise reduction (using masks to apply as required)
The luminance is pasted onto the above colour layer, and incrementally added using gaussian blur
Cropped and saved.
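Not a fix for the stretching problem, but the channel-mapping step above (Sii into red, Ha into green, Oiii into blue) can be sketched with NumPy, assuming the three stretched, starless stacks are loaded as 2-D float arrays in [0, 1]. The random arrays here are just stand-ins for real data, and match_to is a hypothetical helper that roughly equalises a weak channel against Ha:

```python
import numpy as np

# Stand-ins for the stretched, starless stacks (replace with real data)
rng = np.random.default_rng(42)
ha   = rng.random((64, 64)) * 0.9    # strong channel
sii  = rng.random((64, 64)) * 0.3    # weaker
oiii = rng.random((64, 64)) * 0.25   # weaker

def match_to(ref, ch):
    """Rescale ch so its median and spread roughly match ref's."""
    ch = (ch - np.median(ch)) / (ch.std() + 1e-9)
    return np.clip(ch * ref.std() + np.median(ref), 0.0, 1.0)

# SHO / 'Hubble palette' assignment: Sii -> R, Ha -> G, Oiii -> B
rgb = np.dstack([match_to(ha, sii), ha, match_to(ha, oiii)])
```

This only equalises brightness and contrast crudely; the selective-colour and curves work in PS would still do the real palette tuning.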
Here it is anyway. I hadn't intended to add any more exposure time to this one, but will consider it if expert opinion dictates otherwise!
This exposure of the Orion Nebula region is really just a quick and lazy session: I didn't want to waste a clear night by doing nothing, the scope was already set up and focused so I wouldn't spend much time on setup, and I didn't have a plan for imaging another object, so it seemed a good choice, being a bright and easy target.
I have imaged this object before, but comparing that setup and procedure with my improved tracking accuracy and the now-cooled 40D, I know the result would have been an improvement had I dedicated the necessary exposure time through the necessary NB filters.
The image consists of RGB/OSC, IR-cut filtered subs at ISO 1600: 31x15s, 32x30s, 16x60s, 10x90s and 11x120s.
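For reference, the total integration those subs add up to can be tallied quickly:

```python
# Sub lengths (seconds) mapped to counts, from the capture list above
subs = {15: 31, 30: 32, 60: 16, 90: 10, 120: 11}
total_s = sum(length * count for length, count in subs.items())
print(total_s, total_s / 60)  # 4605 seconds, i.e. 76.75 minutes
```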