Steve in Boulder
Everything posted by Steve in Boulder
-
By the way, the annotation feature is really nice to use with galaxy groups like NGC 7331.
-
Jocular will read the exposure time from the FITS header, so Starlight Live must not be recording it.
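A quick way to check what your capture software actually writes is to look at the header cards directly. FITS headers are plain ASCII, so no libraries are needed; this is just a diagnostic sketch (not Jocular's actual code), and it checks both `EXPTIME` and the `EXPOSURE` variant some programs use:

```python
def read_exposure(path):
    """Return the exposure time (seconds) from a primary FITS header,
    or None if no exposure keyword is present.

    FITS headers are 80-character ASCII "cards" packed into 2880-byte
    blocks, ending at the END card.
    """
    cards = []
    with open(path, "rb") as f:
        while True:
            block = f.read(2880)
            if not block:
                break
            cards.extend(block[i:i + 80].decode("ascii", "replace")
                         for i in range(0, len(block), 80))
            if any(c.startswith("END") for c in cards):
                break
    for card in cards:
        key = card[:8].strip()
        if key in ("EXPTIME", "EXPOSURE"):
            # value field follows "= " and may carry a "/" comment
            value = card[10:].split("/")[0].strip()
            return float(value)
    return None
```

If `read_exposure` returns None for a Starlight Live file, that would confirm the keyword simply isn't being recorded.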
-
I made my first attempt last night. I used a very crude approach -- comparing the midpoints of the curve within each passband -- and wound up with 12 or 13 seconds for B and 10 for R and G. Here's an example. Not sure it was really worth the (rather lame) effort. Despite the additional integration time, I was having some alignment issues with the B subs. Some thin clouds had moved in and the moon was coming up; I'm not sure whether that was the reason.
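For what it's worth, the midpoint comparison amounts to something like this sketch: read the QE off the sensor curve at each filter passband's midpoint wavelength and scale exposure by the inverse ratio. The QE values here are made-up illustrative numbers, not measurements for any particular camera:

```python
# Hypothetical QE values read off a mono-sensor curve at each filter
# passband's midpoint wavelength -- illustrative only.
qe_at_midpoint = {"R": 0.62, "G": 0.75, "B": 0.55}

def exposure_scale(base_filter="G"):
    """Scale each filter's exposure so all collect roughly equal
    signal, assuming signal is proportional to QE at the passband
    midpoint (a crude single-point approximation)."""
    base = qe_at_midpoint[base_filter]
    return {f: base / qe for f, qe in qe_at_midpoint.items()}
```

With these numbers, B needs about 1.36x the G exposure and R about 1.21x, which is the same ballpark as the 12-13 s B vs. 10 s R/G split above.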
-
By coincidence, I'd just been looking at the color correction material in Berry & Burnell. Instead of using reference stars, perhaps there's an alternative, though cruder, approach: determine the altitude from the plate solver, then look up the relative atmospheric transmittance for R, G, and B. The QE for each R, G, and B passband can be read from the camera's published graphs, and the transmittance of the color filters likewise. I intend to start down this path next session by simply adjusting the exposure time for each filter based on the relative areas under the QE curve for R, G, and B. Just eyeballing the QE graphs and B&B's chart of atmospheric transmittance (Table 17.3), the QE adjustment will dominate above, say, 45 degrees in altitude.
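The area-under-the-curve idea can be sketched as a short calculation. Every curve below is a made-up illustrative sample, not data for any real camera, filter set, or B&B's actual table; the point is just the shape of the computation (integrate QE times filter transmittance over each passband, weight by atmospheric transmittance at the target's altitude, and take inverse ratios):

```python
def trapezoid(ys, xs):
    """Simple trapezoidal integration over sample points."""
    return sum((ys[i] + ys[i + 1]) / 2 * (xs[i + 1] - xs[i])
               for i in range(len(xs) - 1))

# Per filter: wavelength samples (nm) across the passband, with
# hypothetical sensor QE and filter transmittance at each sample.
bands = {
    "R": ([600, 640, 680], [0.65, 0.60, 0.50], [0.90, 0.95, 0.90]),
    "G": ([500, 540, 580], [0.78, 0.75, 0.70], [0.92, 0.96, 0.92]),
    "B": ([420, 460, 500], [0.50, 0.58, 0.65], [0.88, 0.95, 0.90]),
}

# Hypothetical broadband atmospheric transmittance at the target's
# altitude (in practice, from a table such as B&B's 17.3).
atmos = {"R": 0.90, "G": 0.85, "B": 0.75}

def exposure_factors(base="G"):
    """Relative exposure time per filter so each band collects
    roughly equal signal: inverse of the atmosphere-weighted
    area under QE x filter transmittance."""
    signal = {f: atmos[f] * trapezoid([q * t for q, t in zip(qe, tr)], wl)
              for f, (wl, qe, tr) in bands.items()}
    return {f: signal[base] / s for f, s in signal.items()}
```

With these sample curves, B comes out needing the longest exposure, then R, consistent with the dimmer blue QE and poorer blue transmittance.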
-
I notice in retrospect that Berry & Burnell (p. 488) mention the use of longer integration periods for L than for RGB. As they discuss (p. 478), this works because "the human eye and brain readily perceive noise and errors in luminance, but are relatively insensitive to noise and errors in chrominance."
-
I did a little experiment to see if it's reasonable to reduce the total time spent on RGB subs when using LRGB mode, the theory being that L gives the most detail and RGB is there for flavoring. Here are three views of M51 (please bear with my choices for stretching, sharpening, noise reduction, and saturation, even if not to your taste!). They seem pretty close to me. Maybe this wouldn't work as well for other sorts of targets.
-
I don't know how the LAB color space, or the addition of the L subs, affects this, but as a user I think I'd want to be able to specify what color (in RGB terms) each narrowband filter should appear as. If I wanted to see Ha as red and OIII as green so I could easily tell the signals apart, that's what I'd get.
-
This may be, and likely is, a stupid idea, but could Jocular simply read a palette color from its configuration file for each narrowband filter? Then combinations of narrowband filters could be used as I suggested above: stack L + Ha subs, then switch to L + OIII and continue stacking, with each narrowband sub assigned its configured palette color. That could implement a multi-narrowband feature with minimal effort.
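The idea above could look something like this minimal sketch. The JSON schema and the `narrowband_palette` key are invented for illustration; Jocular's actual configuration may be organized quite differently:

```python
import json

# Hypothetical config fragment: one RGB palette color per
# narrowband filter (values 0..1). Invented schema, for
# illustration only.
config = json.loads("""
{
  "narrowband_palette": {
    "Ha":   [1.0, 0.0, 0.0],
    "OIII": [0.0, 1.0, 0.0],
    "SII":  [0.0, 0.0, 1.0]
  }
}
""")

def colorize(sub, filter_name, palette=config["narrowband_palette"]):
    """Map a mono narrowband sub (2-D list of floats, 0..1) to RGB
    pixels using the palette color configured for its filter."""
    r, g, b = palette[filter_name]
    return [[(v * r, v * g, v * b) for v in row] for row in sub]
```

Each incoming sub would be tinted by its filter's configured color before being added to the stack, so Ha and OIII signal would stay visually distinct.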
-
Martin, I plan to start experimenting with the narrowband features soon. A couple questions: How is the palette determined? Is there something to configure in the JSON file? Can one do, for example, an L + Ha stack, pause the stack, switch to L + OIII, and resume stacking (perhaps just adding OIII subs)?