
NGC 2169 -- the "37" cluster


Martin Meredith


Here's a quick shot of one of my favourite winter clusters from a few weeks back (ignore the date) before Orion started slipping away.

 

 

[Attached image: NGC 2169, 12 Apr 19, 21:33]

 

 

I'm starting to add support for multispectral captures to the Jocular EEVA application and this is the first test. It looks odd after months of using it as a monochrome galaxy-hunting engine....

This is mono + RGB filters (4 x G, 4 x R, 3 x B subs), just raw (no weighting etc.). I want to avoid tedious messing about with histograms if at all possible and am looking at other methods of colour calibration, including G2V.
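In numpy terms, the raw combination amounts to something like this minimal sketch (the function name and dict layout are illustrative, not Jocular's actual API; subs are assumed already aligned):

```python
import numpy as np

def stack_filtered_subs(subs_by_filter):
    """Mean-stack the subs taken through each filter and combine them
    into a raw RGB cube. No weighting or calibration is applied.

    subs_by_filter: hypothetical dict mapping 'R'/'G'/'B' to lists of
    aligned 2-D mono arrays (e.g. 4 G, 4 R, 3 B subs).
    """
    channels = [np.mean(np.stack(subs_by_filter[f]), axis=0)
                for f in ('R', 'G', 'B')]
    return np.dstack(channels)  # shape (H, W, 3), raw and unbalanced
```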

I also want to explore LRGB and L+Ha in the coming weeks in time for the summer open cluster/nebula season.

Am open to any ideas/suggestions on this front. 

cheers

Martin



10 hours ago, Martin Meredith said:

I want to avoid tedious messing about with histograms if at all possible and am looking at other methods of colour calibration, including G2V.

Absolutely! This is what distinguishes EEVA-style viewing from imaging. I've never found RGB to be particularly intuitive, especially when combined with a separate L. Something like HSV or Lab, perhaps, as a space to play in?

However good your calibration, you will probably need something to suppress green, like SCNR?  Fortunately, that’s easy and cheap to implement. 
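For reference, the "average neutral" flavour of SCNR really is a one-liner on top of numpy -- a minimal sketch (function name mine):

```python
import numpy as np

def scnr_average_neutral(rgb):
    """SCNR 'average neutral': clamp the green channel to the mean of
    red and blue, suppressing a green cast without touching R or B.

    rgb: float array of shape (H, W, 3) in R, G, B order.
    """
    out = rgb.copy()
    out[..., 1] = np.minimum(out[..., 1], 0.5 * (out[..., 0] + out[..., 2]))
    return out
```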

Tony


Yes, LAB and HSV are on the list (more the former than the latter, since HSV is not a perceptual space and can do horrible things)... I tend to think we fetishise the histogram and it encourages us to do the wrong things (like lining up peaks).

I don't think I need to suppress green since I'm using RGB filters rather than RGGB OSC, but I've no experience with SCNR so I don't currently know. Having said that, I hope whatever colour manipulations I implement will be uniform for mono + filters and OSC (synthetic luminance in the latter case). But OSC comes later.

I'm currently looking for good B-V data for a few objects, starting with NGC 2169. I've marked this up with some old data. It's interesting (but I suppose not surprising) that the three non-blue stars are field stars, i.e. not part of the cluster. They are marked with an asterisk, but their colour data is of course still useful to me.

The first number is the star's designation for this member/non-member, in order of apparent magnitude. The second, if present, is the spectral class. The final value is the B-V colour index.
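For anyone wanting to turn those B-V values into approximate star colours, one route is via temperature. A sketch using Ballesteros' blackbody-based formula (the helper name is mine; from temperature one can then look up or compute an RGB hue):

```python
def bv_to_temperature(bv):
    """Approximate effective temperature in Kelvin from the B-V colour
    index, using Ballesteros' (2012) blackbody formula. Reasonable for
    roughly -0.4 < B-V < 2.0."""
    return 4600.0 * (1.0 / (0.92 * bv + 1.7) + 1.0 / (0.92 * bv + 0.62))

# e.g. a blue cluster member with B-V = 0.0 comes out around 10,000 K,
# while a red field star with B-V = 1.5 comes out around 3,800 K.
```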

(Ignore the timings; these come from dumping subs into the watched folder so are not accurate. This is 3 x 15s.)

[Attached image: annotated NGC 2169, 13 Apr 19]


Nice demonstration of colours in the 37 cluster. Good to see that multispectral is on the way Martin, I think my filterwheel has cobwebs hanging off it! I had some success with multispectral but it was definitely more challenging than mono and the results showed less detail. Including L with the RGB reduced exposure time but washed out the colour. For some reason gradients often became more apparent with colour, and the edges of the frame looked odd, possibly due to poor tracking. Sounds like gradient removal and eyepiece view (cropping) may help with these. I need to try some more multispectral though, as open clusters came to life with colour.


Thanks Rob. Yes, I've had trouble getting LRGB to work well in the past, yet it is ideally suited to EAA (for those with filterwheels).

The plan is to produce separate luminance and chrominance (even if the image is shot through just RGB, or an OSC), then do the usual mono things with luminance as at present. For chrominance, apply automated background subtraction (both background level and first-order gradient removal) before allowing the user to do colour channel balancing (but no histograms!). Then, once all that is done, there will be a graded way to combine lum + chrom (essentially a saturation control).

The balancing part will be complemented (and ideally replaced) by the ability to make use of G2V calibration stars, and Jocular will keep a library of these tagged by altitude and apply them automatically when it can (so building up the G2V library will be an infrequent task). There will also be chrominance channel smoothing (not just binning but fractional smoothing) to reduce colour noise.
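For the automated background + first-order gradient step, one plausible implementation (not necessarily what Jocular will end up doing) is a least-squares plane fit per chrominance channel:

```python
import numpy as np

def subtract_background_plane(channel):
    """Fit an offset plus first-order (planar) gradient to a single
    colour channel by least squares and subtract it.

    channel: 2-D float array. A more robust version would fit only to
    likely background pixels (e.g. below some percentile) rather than
    the whole frame, so stars don't bias the fit.
    """
    h, w = channel.shape
    yy, xx = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(h * w), xx.ravel(), yy.ravel()])
    coeffs, *_ = np.linalg.lstsq(A, channel.ravel(), rcond=None)
    return channel - (A @ coeffs).reshape(h, w)
```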

This is all in theory. Not sure how it will work in practice! But I'm determined to get good colour for star clusters as I love them (esp. the small, dense, faint ones like some of the Berkeleys).

Similar ideas for narrowband. So NB will be mapped to colour channels (with options to choose HOO or SHO and switch between them live, etc.) and then treated as the chrominance channel, allowing it to be combined with the lum at any level of saturation. One of my goals is to be able to controllably blend in H-alpha regions, for instance, as I think it really helps. I was doing an outreach session with Jocular at the weekend and I really wished I could colourise some of the star-forming regions in M101.
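The palette mapping itself is cheap, which is what makes live switching plausible. A sketch of how HOO/SHO might be handled (the table and function are illustrative, not Jocular's actual design):

```python
import numpy as np

# Illustrative palette table: each narrowband filter feeds one or more
# RGB channels. HOO maps Ha to red and OIII to both green and blue;
# SHO maps SII/Ha/OIII to R/G/B respectively.
PALETTES = {
    'HOO': {'Ha': (1.0, 0.0, 0.0), 'OIII': (0.0, 1.0, 1.0)},
    'SHO': {'SII': (1.0, 0.0, 0.0), 'Ha': (0.0, 1.0, 0.0),
            'OIII': (0.0, 0.0, 1.0)},
}

def map_narrowband(stacks, palette='HOO'):
    """Map stacked narrowband frames (a dict of 2-D arrays keyed by
    band name) into an RGB cube to be treated as chrominance."""
    h, w = next(iter(stacks.values())).shape
    rgb = np.zeros((h, w, 3))
    for band, weights in PALETTES[palette].items():
        if band in stacks:
            rgb += stacks[band][..., np.newaxis] * np.asarray(weights)
    return rgb  # balance / saturate downstream, as with RGB chrominance
```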

The other side of the coin is to produce one-click LRGB, L-Ha, etc. captures, which is why I'm working on scripting Nebulosity.

Martin


Just a quick progress report on the results of a first experiment with 'potentially-live' LRGB using LAB colour manipulations. Still some way to go, and the whole process needs a lot of fine-tuning. The reason I'm using LAB is that it allows easy separation of luminance and chrominance and seems to preserve hues regardless of stretch etc., which isn't easy to achieve in RGB space. (I've never used Photoshop but I believe it does everything in LAB internally?)
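To make the hue-preservation point concrete: stretching only the L channel in LAB leaves a and b (and hence hue) untouched. A minimal sketch using scikit-image (the gamma stretch is just a stand-in for whatever stretch is actually applied):

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def stretch_luminance_only(rgb, gamma=0.5):
    """Brighten an image by stretching L alone in LAB space, so chroma
    (a, b) and therefore hue are approximately preserved.

    rgb: float array of shape (H, W, 3) with values in [0, 1].
    """
    lab = rgb2lab(rgb)   # L in [0, 100], a/b roughly in [-128, 127]
    lab[..., 0] = 100.0 * (lab[..., 0] / 100.0) ** gamma
    return np.clip(lab2rgb(lab), 0.0, 1.0)
```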

The L component comes from those subs captured with a clear filter, plus synthetic luminance from the subs collected with RGB filters (there will be various options on whether to use synthetic luminance and how to combine with real luminance, but I wanted to test the full monty). All the black/white point setting and stretching is then applied to this 'total' L component.
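In numpy terms, the 'total' L might look like the sketch below; the 50/50 default and the plain mean are assumptions standing in for the configurable options mentioned above:

```python
import numpy as np

def total_luminance(l_clear, r, g, b, synth_weight=0.5):
    """Combine the real luminance (clear-filter stack) with a synthetic
    luminance derived from the RGB stacks. The weighting scheme here is
    illustrative only."""
    synthetic = (r + g + b) / 3.0
    return (1.0 - synth_weight) * l_clear + synth_weight * synthetic
```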

The RGB channels are individually stacked, then gradient removal is applied, the background subtracted, and the channels combined using a colour-balancing approach based on equalising the brightness of a bright feature in each channel. This is a poor man's G2V technique for the moment, but is said to work well 95% of the time. The RGB channels are then transformed to LAB space and the L channel thrown away and replaced by the total L component (the one which has been stretched). That delivers LRGB. But the interesting bit is then changing the chroma saturation using simple manipulations of the A and B components, which is what I'm showing below. Obviously one wouldn't normally go so far in saturation, but it is interesting to see it at work.
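A sketch of the balance-then-replace-L step (the region-based balance is my reading of the 'equalise a bright feature' idea; all names are illustrative):

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def balance_on_feature(rgb, region):
    """Poor man's G2V: scale the channels so the bright feature in
    `region` (a tuple of slices around, say, an unsaturated star) is
    equally bright in R, G and B, with green as the reference."""
    means = np.array([rgb[region + (c,)].mean() for c in range(3)])
    return rgb * (means[1] / means)

def lrgb_with_saturation(stretched_l, balanced_rgb, saturation=1.0):
    """Swap the stretched total luminance in for L, then scale a and b
    to control chroma saturation."""
    lab = rgb2lab(balanced_rgb)
    lab[..., 0] = 100.0 * stretched_l   # stretched_l assumed in [0, 1]
    lab[..., 1:] *= saturation          # the interesting bit: chroma scaling
    return np.clip(lab2rgb(lab), 0.0, 1.0)

# e.g. region = (slice(100, 120), slice(200, 220)) around a bright star
```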

One thing you can see in the high-saturation case is the appearance of rings on the brighter stars. The centres were probably saturated in the original subs, but in any case the effect can be ameliorated by smoothing the A and B components (like binning, but preserving the original image dimensions). This is done in the lower-right image.
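The chroma smoothing is equally simple in LAB: blur only a and b, leaving L sharp. A sketch (sigma is a free parameter, roughly playing the role of fractional binning):

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.color import rgb2lab, lab2rgb

def smooth_chroma(rgb, sigma=2.0):
    """Reduce colour noise and ring artefacts by Gaussian-smoothing the
    a and b channels only; luminance detail is untouched."""
    lab = rgb2lab(rgb)
    lab[..., 1] = gaussian_filter(lab[..., 1], sigma)
    lab[..., 2] = gaussian_filter(lab[..., 2], sigma)
    return np.clip(lab2rgb(lab), 0.0, 1.0)
```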

What I haven't yet done, but think is needed, is to align the different colour channels separately, as one can see the rings are somewhat offset from the star centres.
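Channel-to-channel registration could be done with phase correlation before any colour combination. A sketch using scikit-image (aligning R and B onto G, with sub-pixel precision via upsample_factor):

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def align_channels(rgb, reference=1):
    """Register the other channels onto the reference (green by default)
    to remove the small offsets that produce coloured rings on stars."""
    ref = rgb[..., reference]
    out = rgb.copy()
    for c in range(3):
        if c == reference:
            continue
        offset, _, _ = phase_cross_correlation(
            ref, rgb[..., c], upsample_factor=10)
        out[..., c] = nd_shift(rgb[..., c], offset)
    return out
```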

The main reason for doing this pilot is to see if it can be done in real time, as there is a lot of computation just to do something simple like alter saturation. It looks like for Lodestar-sized images the manipulations can be done in under 200 ms on my 2015 MacBook Air, so that is usable. Obviously, things like stretching will also slow down when using LAB mode because of the need to convert between colour spaces constantly.

The entire process is automatic: all the program needs to know is the filter used for each sub. Of course, the user will be able to control things like saturation and configure other aspects, but if EEVA users are going to use mono + filters then it is important that it can be done automatically, otherwise it starts to become processing rather than observing.

[Attached image: screenshot, 23 Apr 2019 (saturation comparison, chroma smoothing in the lower right)]

 

As always, comments and suggestions welcome -- this is all new to me although I believe it is pretty standard in astrophotography. I'd be particularly interested to find more LAB resources or hear from APers on where the rings are coming from and how to deal with them (within the EEVA canon of techniques!).

cheers

Martin

 

