I'm working with some recent data from a new(ish) camera. My Ha data is pretty good, but the OIII has these strange artifacts around the borders.
If I use any kind of local normalization it pretty much ruins most of the image, as you can see below...
First the Ha:
OIII with local normalization:
OIII local normalization map; see how it fits with the artifacts:
OIII without any local normalization:
Could these edge artifacts be related to a bad filter, or are they a processing artifact? Has anyone seen anything similar?
I am using Astro Pixel Processor for calibration and stacking. I'm kind of stumped on this one, and I am going to yank out the OIII filter this weekend to have a looksie.
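For reference, here is a minimal sketch of how a tile-based local normalization map is typically built (this is an assumption about the general technique, not APP's actual algorithm), and why dark or partially covered borders blow the map up at the edges:

```python
# Sketch only: estimate a smooth per-tile background for a frame and compute
# the scale that would match it to a reference. Edge tiles containing
# calibration residuals (vignetting, amp glow, registration edges) produce
# extreme scale factors, i.e. the kind of border artifacts seen in the map.
import numpy as np

def tile_background(img, tile=64):
    """Median background per tile; a crude local background estimate."""
    h, w = img.shape
    ny, nx = h // tile, w // tile
    trimmed = img[:ny * tile, :nx * tile]
    tiles = trimmed.reshape(ny, tile, nx, tile)
    return np.median(tiles, axis=(1, 3))          # (ny, nx) grid of medians

def local_normalization_map(frame, reference, tile=64):
    """Per-tile scale factor that would match `frame` to `reference`."""
    bg_f = tile_background(frame, tile)
    bg_r = tile_background(reference, tile)
    return bg_r / np.maximum(bg_f, 1e-6)

# Toy example: a frame with a dark top border (uncorrected vignetting or a
# registration edge) produces ~5x scale factors along that border.
rng = np.random.default_rng(0)
ref = rng.normal(1000.0, 10.0, (512, 512))
frm = ref.copy()
frm[:64, :] *= 0.2                                # simulated dark border
print(local_normalization_map(frm, ref)[0])       # first row of the map is ~5x
```

If the artifacts line up with where the calibrated OIII subs are darker or only partially covered after registration, the map itself may just be amplifying a calibration or edge-coverage issue rather than pointing at the filter.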
This is maybe my 3rd attempt at a galaxy, and I am trying to figure out the best way of doing it since I live in a red zone in the London suburbs.
I took this over 2 nights (well, 1.5 really, as my guiding wasn't working and plate solver wouldn't comply after meridian flip...) with my 8" EdgeHD SCT with 0.7x reducer and Atik 490X. Around 20 Luminance subs at 10 mins each (1x1) and around 15 RGB subs at 5 mins each, but binned 2x2. I use Astrodon filters but also have an LP filter permanently in my image train.
Question: should I only use RGB and create a synthetic L channel, given the light pollution, or continue trying with the actual luminance? Gradients are horrible in the luminance, but the RGB doesn't have as much detail (only the red filter seems to be more sensitive).
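As an illustration of the synthetic-L idea (not an authoritative recipe), one common approach is a noise-weighted sum of the registered, linear R, G and B masters, so the cleaner channels count for more. The file names below are made up and the noise estimate is a simple MAD:

```python
# Sketch: build a synthetic luminance as an inverse-variance weighted sum of
# the linear RGB masters. Assumes the masters are already registered.
import numpy as np
from astropy.io import fits

def synthetic_luminance(channels):
    """Weight each channel by the inverse variance of a robust (MAD) noise estimate."""
    weights = []
    for img in channels:
        sigma = 1.4826 * np.median(np.abs(img - np.median(img)))
        weights.append(1.0 / max(float(sigma), 1e-9) ** 2)
    weights = np.array(weights) / np.sum(weights)
    return sum(w * img for w, img in zip(weights, channels))

# Hypothetical file names for the registered, linear RGB masters
r, g, b = (fits.getdata(name).astype(np.float64)
           for name in ("master_R.fit", "master_G.fit", "master_B.fit"))
synth_l = synthetic_luminance([r, g, b])
fits.writeto("synthetic_L.fit", synth_l.astype(np.float32), overwrite=True)
```

Note that a synthetic L built this way still inherits whatever gradients the individual channels have, so gradient removal on the masters first is usually still needed.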
My star colours are all over the place (colours pop out everywhere, in the wrong way); how can I control this better?
Also, as I wrote, the red filter seems to have much more detail than the rest, and when I add all the channels together in PS the red just overpowers everything. In general, how can I keep the star colours as they are and stop the red and blue going crazy? I am not sure of the name for it, but it looks like chromatic aberration on steroids.
Any other general tips would be great...
Thanks in advance. GFA
PS: I cheated with the core: I just changed the temperature to make it look a bit more glowy; for some reason I barely had any yellow colour in the data... I will post the stacked images if there's interest.
I took some really rough images of M42 the other night; alignment and focus were by eye, the site is heavily light polluted, I used no calibration frames and I have a dusty corrector plate. However, this was the first time I have shot a deep-sky object, and for how rough it was I was pleased, see attached.
I watched a lot of DSS tutorial videos last night and decided to stack the 16 images I have, just to see what I get. Now, I am under no delusions as to the expected quality of the final image; however, stacking made all the stars vanish and the overall quality of the image was less than that of a single frame. Any suggestions? I do not have an example image saved, sorry.
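One common cause, offered as a guess rather than a diagnosis of these particular files: if DSS can't find alignment stars (star-detection threshold set too high, or registration effectively skipped), it ends up averaging unregistered frames, and stars that drift from sub to sub get diluted into the background. A toy illustration of the effect:

```python
# Toy numbers only: 16 subs, each with one star that drifts 1 px per frame.
# Averaging without registration dilutes the star ~16x; undoing the drift
# first (what registration does) preserves the full peak.
import numpy as np

def frame_with_star(shift, size=64, flux=1000.0):
    img = np.full((size, size), 100.0)            # flat sky background
    y, x = size // 2 + shift, size // 2 + shift   # star position drifts
    img[y, x] += flux
    return img

shifts = range(16)
subs = [frame_with_star(s) for s in shifts]

unaligned = np.mean(subs, axis=0)
aligned = np.mean([np.roll(np.roll(f, -s, 0), -s, 1)   # undo the drift first
                   for f, s in zip(subs, shifts)], axis=0)

print("star peak above sky, unaligned:", unaligned.max() - 100)  # ~62
print("star peak above sky, aligned:  ", aligned.max() - 100)    # ~1000
```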
I've been spending some time processing over a cloudy Christmas and realised the thing I find most daunting, difficult and annoying is creating star masks. So my question is: is there a way of creating star masks (in PixInsight preferably, but open to other ways!) which is (a) always accurate, (b) relatively quick and (c) repeatable?
I've worked through the LVA tutorials and looked at David Ault's technique. I also have the Bracken book to go through. The main techniques seem to be:
1. Stretch the extracted lightness, clip the low end, bring down the highlights, then use the StarMask process - I get very inconsistent results with this approach.
2. Similar to the above, but use MMT/MLT to remove nebulosity and create a support image, then use StarMask at different star scales to capture all the stars, and finally use PixelMath to put them all onto one image - I find this very time consuming, with a lot of fiddling with noise settings.
I am very interested to see how people go about this and whether there are any neat tips and tricks to help the process! Thanks!
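Not a PixInsight answer as such, but here is a rough sketch of the multi-scale idea from approach 2 in generic Python: suppress the nebulosity with a heavy blur, threshold the residual at a few scales, and combine the per-scale masks with a pixel-wise maximum (the role the PixelMath max() step plays). All parameters here are illustrative only.

```python
# Sketch of a multi-scale star mask: nebulosity removal by background
# subtraction, per-scale thresholding, and a combined (OR / max) mask.
import numpy as np
from scipy.ndimage import gaussian_filter, binary_dilation

def star_mask(img, scales=(1, 2, 4), k=5.0):
    """Threshold a nebulosity-subtracted image at several scales and combine the masks."""
    background = gaussian_filter(img, sigma=16)          # crude "remove nebulosity" step
    residual = img - background
    noise = 1.4826 * np.median(np.abs(residual - np.median(residual)))
    mask = np.zeros(img.shape, dtype=bool)
    for s in scales:                                     # small, medium, large stars
        smoothed = gaussian_filter(residual, sigma=s)
        mask |= smoothed > k * noise                     # per-scale detection
    return binary_dilation(mask, iterations=2).astype(np.float32)  # grow the edges a little

# mask = star_mask(extracted_lightness)   # 0/1 mask; blur it afterwards for soft edges
```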
Not quite sure what to put in the title here! I've been thinking about my possible future approaches to deep-sky imaging, especially looking at Emil Kraaikamp's approach of taking thousands of 1 sec images. The datasets this will produce are obviously huge - a single frame from my QHY163M is ~31.2MB. Even at 5 sec subs, that's 22+GB of data per hour.
Now, that's fine in theory to process (though it'll take a *long* time to chew through them, even on an 8-core i7!) - sufficient disk space is practical, and I'd scrub the unreduced files once happy anyhow. However, long-term storage is an issue. I couldn't keep all the subs here; even with poor UK weather you're potentially looking at tens of terabytes of data in the relatively short term, and worse once you back it all up.
Is there a feasible approach to keeping only stacked, reduced data (which is obviously much smaller) whereby you can still add to the data at a later point in time? I was thinking along the lines of:
Take x hrs - save reduced, stacked data as a single linear frame.
Later on (possibly years later!), take another y hrs of images - somehow combine with the x hrs before?
(and repeat as needed)
Could this be achieved while still allowing relevant pixel rejection, weighting, noise reduction, etc.? Anyone have a PI workflow that allows this? I guess the same approach applies to standard CCD data, though the data savings are orders of magnitude less...
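One possible compromise, sketched below under the assumption that each saved master is a plain linear average of N equal-length calibrated subs: masters from different sessions can later be merged as an exposure-weighted (or inverse-variance weighted) mean. What you give up is pixel rejection across sessions, since the individual subs are gone by then. File names and sub counts here are made up.

```python
# Sketch: merge two saved linear masters, weighting by the number of subs
# that went into each. This keeps the correct relative weighting but cannot
# recover per-pixel rejection across the two sessions.
import numpy as np
from astropy.io import fits

def merge_masters(master_a, n_a, master_b, n_b):
    """Merge two linear averaged masters, weighted by their number of (equal-length) subs."""
    return (n_a * master_a + n_b * master_b) / (n_a + n_b)

# Hypothetical file names and sub counts
old = fits.getdata("session1_master.fit").astype(np.float64)   # the original x hours
new = fits.getdata("session2_master.fit").astype(np.float64)   # the later y hours
combined = merge_masters(old, 120, new, 80)
fits.writeto("combined_master.fit", combined.astype(np.float32), overwrite=True)
```

Keeping a small sidecar note with each master (number of subs, exposure, gain) is enough to make this kind of later merge reproducible.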