Posts posted by Rodd (7,659 posts, 29 days won)
-
-
-
1 hour ago, Bluemoonjim said:
First class
Thanks!
-
1 hour ago, geeklee said:
Both excellent @Rodd. Really enjoyable Ha detail. I prefer the first version.
Too red. That was my main complaint. Too pink in non-Ha areas. Matter of opinion, I guess.
-
17 minutes ago, mackiedlm said:
I don't see anything botched with these - I think both are excellent. I like the colour and saturation levels.
Thanks Mack. Perhaps botched is too harsh a term.
-
-
Palette has been my downfall in this data. I think I finally rendered a fairly balanced image - not too saturated, though I think some of the Ha is still too colorful. And in reality the whole image is too colorful; certainly not what it looks like in an eyepiece. But it's the best I can manage.....so far. There is a color imbalance that I have tried to minimize. Not sure why. This is the TOA 130 with a .7x reducer. The resolution doesn't seem that much inferior to the .99x flattener, and the FOV is much greater.
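For a rough sense of the FOV gain from the reducer, a small sketch. The TOA-130's ~1000 mm native focal length and the ASI1600's 17.7 x 13.4 mm sensor are my own assumptions (published specs), not figures from the post:

```python
import math

# Assumed specs (not stated in the post): TOA-130 native focal length
# ~1000 mm; ASI1600 sensor dimensions 17.7 x 13.4 mm.
NATIVE_FL_MM = 1000.0
SENSOR_MM = (17.7, 13.4)

def fov_arcmin(focal_length_mm, sensor_mm=SENSOR_MM):
    """Approximate field of view (arcmin) along each sensor axis."""
    return tuple(math.degrees(s / focal_length_mm) * 60 for s in sensor_mm)

fov_reducer = fov_arcmin(NATIVE_FL_MM * 0.70)    # about 87' x 66'
fov_flattener = fov_arcmin(NATIVE_FL_MM * 0.99)  # about 62' x 47'
```

By linear dimension that is roughly a 40% wider field with the .7x reducer, consistent with the "FOV is much greater" observation.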
-
15 hours ago, ollypenrice said:
Now that I've finally spotted this as a framing, I find myself wondering why it's not an established imaging classic. I've always felt the obvious extension was to the north and the Cone region rather than south to these. I suspect Paul Kummer and I will try this one properly at some time, using the RASA. For now this is a small doodle based on a Samyang 135 base image overlaid by an old Tak FSQ HaLRGB Rosette and RASA Sh2-280 and 282 in OSC. We'd actually need two more RASA panels to fill in the little bits that are missing. Samyang and RASA data captured and pre-processed by Paul Kummer.
Olly
Another realistic image. Very nice. There are a few things in AP that have no equal: speed, FOV, aperture, good seeing, low Bortle, quality optics, and processing skill. You have them all! And it shows. Amazing
-
4 hours ago, gorann said:
Great images, both of them, Rodd! As already said, the second one does not give any impression of being overcooked, so I find it hard to choose.
Thanks Goran.
-
4 minutes ago, assouptro said:
Nice image Rodd
I agree, the first one has a slightly more natural look to it but the second one isn’t overcooked and if that was the only one you had posted I wouldn’t have criticised it for being too sharp
I also love the muted colour, I’ve noticed over the years Astro photos have become brighter and more saturated to the point where they become more like a painting than a photographic record, that’s just my opinion fwiw
Thanks for sharing
Bryan
Thanks, Bryan. I agree, saturation and brightness are often quite high. I am guilty of this often. “Painting” is an apt descriptor. Or in my case, “cartoon”. Maybe I am finally learning.
-
19 minutes ago, ollypenrice said:
Really sweet. Maybe the first one for me?
Olly
Thanks, Olly. I kind of suspected you would lean toward the first. “Leave 10% on the table” (or was it 5%?).
-
21 minutes ago, geeklee said:
Superb result @Rodd It's the detail and subtle colour variation that's worth viewing full size.
In isolation, I didn't think so. When comparing side by side, full size with the first one - perhaps. Certainly not in an OTT, detrimental way though 👍
That’s what I was thinking as well.
-
16 hours ago, happy-kat said:
There's a strong sense of looking through a window into further space
Thanks HK!
-
I finally was able to get this to stand up to Bin 1. The palette is pale by design. Bumping up saturation invariably causes problems. Instead of making the data look like what I want, I made it look like what it can handle. I posted two versions - the second has a bit more sharpening--too much?
TOA 130 with .99x flattener and ASI 1600. About 24 hours.
A bit more fine scale sharpening
-
5 hours ago, wimvb said:
Agreed, I've all but given up on that luxury. For my latest image (ngc 1023 and ic239), I measured a fwhm between 3.5 and 4". I didn't even bother with collecting luminance. I'm considering giving up on galaxy hunting for a while, and shoot nebulae instead.
BlurX helps, though some feel AI is not to be trusted
-
2 hours ago, wimvb said:
That looks very impressive so far. Eagerly awaiting the final image.
Thanks, Wim. The challenge is only using data from good seeing. Tries my patience.
-
-
This is an experiment of sorts. I was blessed with decent seeing (average for me is good, good is so rare it's virtually a myth, and 5/5 never happens). I am trying to create as good an image of this target as possible, so I am piling up the data so that I can eliminate all but the best and hopefully come up with a good image. This image is 281 120 sec red subs (9.4 hours) and 173 300 sec Ha subs (14.4 hours). There was no reason to cull the red subs very dramatically, as FWHM values were consistently below 2.5". The final FWHM of the red stack is 1.94", and it is very rare for me to have a stack come in under 2".

I piled on the Ha for two reasons. One, I always manage to lose some of the faint emissions during processing, so I figured I would pile it on and hopefully strengthen the faintest regions. Also, the Moon came out about 12:00 during a couple days of this shoot, so after shooting red with no Moon, I didn't want to lose good-seeing sky time, so I decided to keep collecting Ha for this target instead of starting a new target. The image should probably be displayed at bin 2, based on the math, but I kept it at Bin 1 for the pixel peepers. Pardon the background, which is always the weakest element of my images. Hopefully I get good nights for the GB and L, which will smooth things out.
I accentuated the Ha a bit more than I usually would because it's harder to differentiate it in a mono image. The Ha was added using PixInsight, for those that question PI's ability to handle Ha insertion. I recall a recent post about this.
TOA 130 with .99x flattener and ASI 1600. About 24 hours of red and Ha. AND I STILL COULD USE MORE!!!!
Here's a slightly better core. Only so much can be done with a mono stack....by me, that is!
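The sub counts and the "bin 2, based on the math" remark can be checked with a little arithmetic. A sketch, assuming the ASI1600's 3.8 µm pixels and ~990 mm effective focal length (TOA-130 with the .99x flattener); neither figure is stated explicitly in the post:

```python
# Assumed specs: ASI1600 pixel size 3.8 um; TOA-130 at ~990 mm with
# the .99x flattener. Neither figure comes from the post itself.
PIXEL_UM = 3.8
FOCAL_MM = 1000.0 * 0.99

def hours(n_subs, sub_seconds):
    """Total integration time in hours."""
    return n_subs * sub_seconds / 3600

def pixel_scale(pixel_um, focal_mm, binning=1):
    """Image scale in arcsec/pixel: 206.265 * pixel (um) / FL (mm)."""
    return 206.265 * pixel_um * binning / focal_mm

red_h = hours(281, 120)  # ~9.4 h, matching the post
ha_h = hours(173, 300)   # ~14.4 h, matching the post

bin1 = pixel_scale(PIXEL_UM, FOCAL_MM)     # ~0.79 arcsec/px
bin2 = pixel_scale(PIXEL_UM, FOCAL_MM, 2)  # ~1.58 arcsec/px
# One common rule of thumb samples at about FWHM / 2; a 1.94" FWHM
# stack then wants ~0.97 arcsec/px, i.e. between bin 1 and bin 2.
```

On that rule of thumb, bin 1 at ~0.79"/px is only mildly oversampled for a sub-2" stack, which is why keeping it at Bin 1 for the pixel peepers is defensible.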
-
1 hour ago, Sunshine said:
Yes, but for planetary imaging one is not using a long-exposure camera as in DSO imaging. With planetary imaging the camera is taking a video clip consisting of hundreds of frames; software then picks out the best frames and stacks them. One can even track by hand with a Dobsonian and achieve great images, as I have seen some here do with hand-tracked Dobsonians. Lucky imaging is the term used: many hundreds of frames are taken, but only a few of the best, most steady frames are selected by software for stacking. In fact, one can do this without any mechanical or hand tracking at all; it just means that your video clip will be much shorter, with fewer lucky frames to choose from. With DSOs, where a single long exposure is taken, hyper-accurate alignment and guiding is required to maintain a sharp image and pinpoint stars; not so with planetary imaging.
But what happens to me is the target drifts so that halfway through the video, I have to move the scope. Or if I shoot 4-5 videos, I have to reposition before each one. The drift limits the length of my video, which gives a smaller data set from which to keep, say, 10% or 20% of the frames. It's an annoyance. So I'll move the scope during the video knowing that the frames taken during the movement will be discarded. That works, but always having to worry about it is a pain.
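The selection step in lucky imaging described above can be sketched in a few lines. This is a toy illustration, not the algorithm of any particular stacking package; variance of a Laplacian response is just one common sharpness metric:

```python
import numpy as np

def sharpness(frame):
    """Variance of a simple 4-neighbour Laplacian; higher = sharper."""
    lap = (-4.0 * frame
           + np.roll(frame, 1, axis=0) + np.roll(frame, -1, axis=0)
           + np.roll(frame, 1, axis=1) + np.roll(frame, -1, axis=1))
    return lap.var()

def select_best(frames, keep_fraction=0.1):
    """Rank video frames by sharpness and keep only the best fraction."""
    scores = [sharpness(f) for f in frames]
    n_keep = max(1, int(len(frames) * keep_fraction))
    best_idx = np.argsort(scores)[::-1][:n_keep]
    return [frames[i] for i in best_idx]
```

Frames captured while the scope is being nudged score poorly on any such metric, which is why moving the scope mid-video mostly costs frames rather than ruining the stack.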
-
3 hours ago, Sunshine said:
Tracking for planetary imaging doesn't have to be accurate at all. I can literally eyeball Polaris through the polar scope on my Vixen Polaris mount, which is enough to keep a planet pretty much centered in the FOV for up to 5 min. Imaging deep sky is another story; as you know, it requires much more accurate polar alignment and guiding.
Don’t the planets move differently than the stars? Even though I sometimes get halfway decent PA, I always see the Moon or planets drift so that I need to readjust for each video I make. And my mount has a lunar and planetary setting.
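For the Moon at least, the drift is real motion rather than just polar alignment error: it moves roughly 13 degrees per day against the stars, so a mount tracking at sidereal rate steadily falls behind. A back-of-the-envelope figure:

```python
# The Moon covers ~360 degrees in about 27.32 days relative to the
# stars (one sidereal month), so sidereal tracking sees it drift by:
ARCSEC_PER_DEG = 3600
SIDEREAL_MONTH_S = 27.32 * 86400  # seconds in a sidereal month

drift_arcsec_per_s = 360 * ARCSEC_PER_DEG / SIDEREAL_MONTH_S  # ~0.55 "/s

# Over a 5-minute lunar video that is ~165 arcsec of drift, easily
# visible in a small high-resolution planetary field of view.
drift_per_5min = drift_arcsec_per_s * 300
```

The planets' own apparent motion is far slower, so their drift in a short video is mostly alignment or tracking error; a mount's lunar rate setting exists precisely to compensate for the Moon's extra motion.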
-
25 minutes ago, Sunshine said:
Oh, I didn’t mean guiding, i meant tracking on one axis for short periods when polar aligned, switch motor on and off as needed.
I wish I could polar align that well. I use a RAPA scope that claims it is pretty accurate. But when PHD2 calculates PA, I haven't been under 5 arcmin but once, at 2.3. Usually it warns me it could be better.
-
53 minutes ago, vlaiv said:
Well they sort of do - but not all have motors - there are some that only have manual tracking - which is fine for visual use - where you correct and recenter target every so often
I find I have to do that even with my mount when imaging the planets or the Moon. Kind of frustrating.
-
14 minutes ago, AKB said:
Tracking, the thing which most motorised mounts do, is not to be confused with guiding, which, indeed, generally requires stars.
Ahh. I assumed all mounts track. I guess that’s not true.
-
21 hours ago, Sunshine said:
I’ve done some nice lunar and planetary work, nothing award winning but tracking did help. It just seems easier than stripping and taking videos.
How do you track without stars?
-
I had similar issues - a deluge almost washed away my observatory. I keep my power converters in a cooler beneath the tripod. Like a fool I did not think to place the drain plug on the downslope side, and I left it open! Naturally, the cooler filled with water and sediment and ruined my power converters. The observatory roll-off tracks were buried in mud. No fun imaging in Southern CT.
Leaping Leopard
in Imaging - Deep Sky
Posted
Hard to say. It wasn't great for some but OK for others. But still less than 3" FWHM. I think the image is 2.6" or something like that. I think the SII was better, around 2.2". For me that is pretty good. When subs are less than 2.0" I am ecstatic.