

Martin Meredith


Community Reputation

1,756 Excellent


About Martin Meredith

  • Rank
    Sub Dwarf

  1. Hi Greg I thought I'd seen that tartan pattern before with darks but can't now lay my hands on them so I might have been imagining it. Maybe cable issues? BTW I really like the conjunction of the bubble and the open cluster. Yes, I was collecting flats for use with Jocular. cheers Martin
  2. Hi Greg Excellent shots! I particularly like the Antennae tails and the Leo group in the wider context that the Trius/Borg combination provides. And your H-alpha images are really clean. Funny you should mention the filter wheel freezing. I've also had that a couple of times recently, although for some reason I blamed the FW rather than SLL. As you say, a restart cures it. I've a feeling it might depend on the order the USB cables are plugged in, if SLL is already open when plugging in. But that could be wide of the mark. The only time I've seen banding is when using short exposures, where I think it is deliberate to keep the throughput high during focus and framing (I don't understand the details, but I think the image is read in two passes, one for odd rows and the other for even rows, then combined -- or not, in the case of short exposures). Banding affected some flats I took at 0.5 seconds or so (and since I take twilight flats I have to get going a little bit later and use longer exposures.... a t-shirt would help this issue). I wonder if the higher pixel count on the Trius leads to incomplete reads and hence banding. Are the bands horizontal? Martin
  3. Hi Mike It's a tough one to get with the moon so bright, especially with such short exposures. It must have been very satisfying to see it building up as the subs came in. What I really like about this area are the double stars. If you don't mind, here is a shot of mine (I don't recall, but almost certainly little or no moon) from some years back where I got round to labelling them. Ignore the terrible collimation/tilt... Martin
  4. There's an interesting discussion of teletransportation in the philosopher Derek Parfit's book 'Reasons and Persons' https://en.wikipedia.org/wiki/Reasons_and_Persons also discussed here: https://www3.nd.edu/~jspeaks/courses/2007-8/20229/_HANDOUTS/personal-identity-teletransport-split-brain.pdf which raises all sorts of issues about personal identity. It's an excellent book about what it means to have continuity of identity (mind and body) when cells are being replaced etc. Martin
  5. I'm implementing 3 or 4 different approaches to combine L with RGB (LAB, HSV, the simpleton approach and perhaps some basic layer technique) and the user will be able to simply select one and see the effect pretty much immediately. I'm not a big fan of HSV but will include it in early versions purely for experimental purposes. I'd love to work on this together at some point but I need to get the basic colour stuff into the next release of Jocular first. It sounds like we have similar aims in terms of true colour. I'm working my way through the acronym soup of colour spaces. This is a great site: http://www.handprint.com/HP/WCL/color7.html#CIELAB It contains some interesting comments such as how well CIELAB does considering its origin as a "hack" and how it doesn't do so well for yellow and blue (which is worrying given that these are found a lot in AP!). The take-home message for me is that colour spaces like LAB are by no means perfect nor necessarily appropriate and it is likely to be possible to improve things. This is also a good resource on some recent ideas although I'm not sure whether the more complex models have much relevance in AP: http://rit-mcsl.org/fairchild//PDFs/AppearanceLec.pdf Martin
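For what it's worth, the HSV route mentioned above can be sketched in a few lines with Python's standard-library colorsys module: convert RGB to HSV, replace the V channel with the stretched luminance, and convert back. This is a generic illustration (the function name and per-pixel treatment are my own, not Jocular code):

```python
import colorsys

def hsv_lum_replace(r, g, b, lum):
    """Replace the V (value) channel of an RGB pixel with a stretched
    luminance, keeping hue and saturation unchanged.

    All values are assumed to lie in [0, 1]. This works per pixel for
    clarity; a real implementation would vectorise over the image.
    """
    h, s, _ = colorsys.rgb_to_hsv(r, g, b)
    return colorsys.hsv_to_rgb(h, s, lum)
```

One reason HSV is often disliked for this job is that V is simply max(R, G, B), so replacing it does not respect perceived brightness the way a weighted luminance does.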
  6. Thanks for taking the time to comment Vlaiv. Finding appropriate ways to perform calibration is on the agenda, ideally with some fallbacks in cases where plate solving is not available, so your suggested process is something I will take into account. What I'm finding when it comes to colour in AP is that there are a lot of opinions, sometimes with a scientific veneer, but just as things are getting interesting most discussions become mini-tutorials on how to apply techniques or fix colour issues using PS or PI. None of this alters the fact that there is a ground truth out there if instrument and sky calibration is carried out effectively i.e. in the steady state a star continues to emit at a range of wavelengths regardless of who is capturing those photons here on Earth. And it is that information that I am trying to get at and represent as directly as possible. Of course, in the end everything is a visualisation, but some visualisations are more closely related to the source than others. Although incomplete, I like the approach taken here for instance: http://www.vendian.org/mncharity/dir3/starcolor/details.html What I'm aiming for is an approach which preserves the hue component (after appropriate calibration) and essentially gives the user control over saturation and luminosity stretching and not too much else. Since this will be incorporated in a near-live observation application, I want to minimise the need for adjustments and automate as much as possible, and quickly. I also need to collect more LRGB examples to see whether the simpler approach holds up. cheers Martin
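A minimal sketch of that kind of hue-preserving saturation control, assuming the common trick of blending each channel toward a synthetic luminance (an illustrative function of my own, not anything from Jocular):

```python
def scale_saturation(r, g, b, sat):
    """Move each channel toward (or away from) the synthetic luminance.

    sat = 0 gives greyscale, sat = 1 leaves the colour unchanged, and
    sat > 1 boosts saturation. Works on floats or NumPy arrays alike.
    """
    lum = 0.2125 * r + 0.7154 * g + 0.0721 * b
    return (lum + sat * (r - lum),
            lum + sat * (g - lum),
            lum + sat * (b - lum))
```

Because the channel weights sum to 1, the synthetic luminance of the output is unchanged for any value of sat; only the colour offsets around it are scaled.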
  7. I'm writing some code to combine L with RGB and inevitably looking at using the LAB colour space. I understand that LAB enables separation of luminance and chrominance, allowing separate processing of the two types of information, e.g. stretching of luminance, saturation of chrominance. But I've read contradictory things about the Lum/Chrom separation in LAB. Here, for example, it is suggested that L and A are not independent: http://www.schursastrophotography.com/techniques/LRGBcc.html. My own experience to date is that the L overwhelms the colour data unless the RGBs are themselves stretched prior to going into LAB space, potentially damaging RGB ratios. That led me to wonder about the following simple alternative to LAB. 1. Compute a synthetic luminance (I'm using the conventional weighted sum 0.2125*red + 0.7154*green + 0.0721*blue). 2. Compute a luminance weight, defined as the stretched L divided by synthetic luminance, at each pixel. 3. Multiply each of R, G and B by the luminance weight. As I see it, the two key results are that * the synthetic luminance of the new image is equal to the stretched luminance L * colour ratios are preserved (except perhaps in some edge cases) Surely this is what we want in an LRGB combination? Or perhaps I'm missing something. Here's an example for a 200x200 pixel portion of the open cluster NGC 2169 (about 1 minute's worth of 10s subs for each of the LRGB captures with a Lodestar X2 mono guide camera, ~2.1 arcsec/pixel, hence the blockiness). All of the LRGB channels are gradient-subtracted and background-subtracted. Top left is RGB combination only (i.e. no scaling via luminance -- no L at all in this one). Top right is LAB based on this unscaled RGB but with a power-stretched luminance. Note the lack of colour saturation. Bottom left is the simple approach outlined here. Bottom right is LAB applied to the scaled RGB. These are all mildly stretched with a simple power stretch applied to the combined image (i.e. to all channels). I chose a stretch that generates a similar 'stellar depth' and amount of chrominance noise to aid comparison of the lower pair -- this is the comparison that I'm interested in. LAB is quite sensitive to the stretch actually, and to get a similar depth one ends up with quite 'hard-edged' stars. I see somewhat more intense colours for the fainter stars in the simple approach without any commensurate additional saturation of the brighter stars. No doubt with appropriate processing the two lower panels could turn out to be quite similar. The biggest difference is perhaps in computation time: the simple approach is about 10 times faster. I guess my question is: does anyone do the simple and obvious thing outlined above, or are there some pitfalls I haven't considered? Or does LAB have some advantages that cannot be obtained as easily with the simpler approach? (I'm aware of the AB saturation manipulation in LAB for instance, and perhaps that would be harder to achieve, although the saturation is already good as is with the simpler approach compared to the upper left LAB for example.) Any thoughts or comments welcome! Martin
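The three steps described in that post can be sketched directly. This is an illustrative function of my own (the eps guard against division by zero in empty background is my addition; the channel weights are the ones quoted in the post):

```python
def lum_weighted_lrgb(l_stretched, r, g, b, eps=1e-8):
    """Combine a stretched luminance with RGB by per-pixel scaling.

    1. synthetic luminance = 0.2125 R + 0.7154 G + 0.0721 B
    2. weight = stretched L / synthetic luminance
    3. multiply each channel by the weight

    The result has synthetic luminance (almost) equal to l_stretched,
    and R:G:B ratios are preserved wherever the weight is finite.
    Works on floats or NumPy arrays alike.
    """
    syn_lum = 0.2125 * r + 0.7154 * g + 0.0721 * b
    w = l_stretched / (syn_lum + eps)
    return r * w, g * w, b * w
```

One edge case worth noting: where the weight pushes a channel above the display range, clipping is needed, and that clipping is one place the colour ratios can break down for bright stars.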
  8. Thanks Errol! I'm nearly ready to return to Nebulosity scripting. I had it working for an earlier version so it shouldn't be too much effort. Just finishing off colour at present and I definitely want to have 1-shot scripted captures with the filter wheel in time for open cluster season. Martin
  9. So near so far! Agreed, well worth checking. I don't do enough of it myself. I think it is worth at least estimating the magnitude from the FITS data and comparing with other estimates. There seem to be plenty of comparison stars. Martin
  10. https://wis-tns.weizmann.ac.il/object/2019ehk Discovered on the 29th April, same day as your session.... 2019-04-29 22:27:50
  11. Hi Rob Just read about a new SN in M100. See image here: https://www.cloudynights.com/topic/659748-first-light-with-altair-hypercam-294c-pro-tec-cooled/?p=9333418 Wondering if you captured it? It looks like you did to me! (Maybe the first?) Martin
  12. I've always assumed it represents how much total movement (i.e. from the keyframe to the current frame) there is before a new keyframe is deemed to be needed. The reported 'mean displacement' value appears to increase as the stack progresses, and issues with new keyframes/noise appear when that value is greater than the max displacement threshold. I guess setting the max displacement high must run some risk of either taking too long or not aligning (otherwise why not set it high all the time), but I've not noticed too many problems setting it to a reasonably high value. Without knowing how the star registration works in detail it is hard to say more, but this has been my experience. Martin
  13. A great result and write-up. I've observed these a few times without realising they have an Arp designation. Interesting to hear that their interaction may have nothing to do with each other. Re the SLL issue, I vaguely recall Paul mentioning something about the stack being reset when a new keyframe is needed. It doesn't happen to me very much but does occasionally (perhaps something to do with the amount of field rotation in a given part of the sky causing too much movement for alignment to the original keyframe). If so, the solution is to set the max pixel displacement (under stacking) to a higher value. I've just checked and I seem to have it at 32. Martin
  14. Hi Rob Congrats on getting a session in. I'd forgotten what a beauty M100 is with the other galaxies in shot too. I've never tried aligning simply with the finder. I always get it dead centre using the camera, but my slews are almost always off anyway... so perhaps I'll give your technique a go next time. Martin
  15. Hi Bill Very interesting write-up, thanks. I see you have an IC galaxy in there and a few other fainter galaxies. I will try to measure this one next time I'm out (unfortunately not until 2nd week of May). Martin
