

Everything posted by Martin Meredith

  1. Nice work. Would it be possible for you to identify the stars you're using, without too much work on your part? BTW, did you need to take elevation into account in the end? One big difference I note is that mine has a number of redder stars (I don't mean the artefacts at the top). For instance, the curved 'triple' of stars of diminishing magnitude just left of centre, with the peach, dark peach and red stars, is different. I think the example I sent was over-stretched (I noticed I'd left it on my most powerful stretch by mistake). I have a liking for dense star fields, and the background stars are indeed signal, so it is good to see them 🙂 I guess this is the closest I can get to what I personally prefer: an arcsinh stretch for L, a power stretch for RGB, and LAB combination (both stretches are sketched below). cheers Martin
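     A minimal sketch of the two stretches mentioned above, assuming images scaled to [0, 1]; the parameter values are my own illustrative choices, not Jocular's:

         import numpy as np

         def arcsinh_stretch(x, a=10.0):
             # arcsinh lifts faint signal while compressing highlights;
             # dividing by arcsinh(a) keeps the output in [0, 1]
             return np.arcsinh(a * x) / np.arcsinh(a)

         def power_stretch(x, p=0.5):
             # a simple gamma-like power law; p < 1 brightens faint pixels
             return np.clip(x, 0.0, 1.0) ** p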
  2. Here's a link to the open cluster NGC 6709. These are linear fits for the stacked L, R, G and B. If this is suitable/interesting I'll post the other objects. https://www.dropbox.com/s/b82b5q7kra1n2b2/NGC6709.zip?dl=0 There isn't a great deal of easily usable colour info on this cluster online that I can find, although the brightest stars do have colour data, e.g. https://www.researchgate.net/publication/230942346_Multicolor_CCD_photometry_and_stellar_evolutionary_analysis_of_NGC_1907_NGC_1912_NGC_2383_NGC_2384_and_NGC_6709_using_synthetic_color-magnitude_diagrams In LAB space with saturation turned down for subtle colours (and no colour balancing), it looks like this using live-stacking of LRGB: When comparing against any colour data, bear in mind this was captured at around 28 degrees elevation (I was looking for any OC in the south, and there are very few around at this time of year without staying up late ;-). There will be more and higher OCs in a month or two! cheers Martin
  3. Sounds interesting. Have you tried it on many examples yet? I ask because the method I mentioned in the original post works well on some examples and less well than LAB on others. LAB seems a little less sensitive to colour noise. In my last (and only) session since the original post I collected LRGB for 5 targets, in case you want to try out your method... Martin
  4. Hi Roel Yes, it is possible, but I don't have/use Windows so it would be a case of someone giving it a go. There are recipes for doing it, although I suspect it isn't straightforward (I tried a build for the Mac and spent some hours on it without resolving all the issues, so went for a code distribution instead). Jocular installation isn't necessarily difficult once Python and the required libraries are installed. And then the benefit is that future releases don't need these steps: it is just a case of dropping the new release in the right place. cheers Martin
  5. Thanks Bruce. I hope you get some decent skies soon and post some CMOS images. At some point I'm sure I'll acquire a CMOS camera myself but I'm still enjoying myself too much with the Lodestar! Martin
  6. I see you've solved it, but if anyone happens upon this thread and fancies a simple command line solution, ImageMagick (freeware) is a good tool. From the home page: "It runs on Linux, Windows, Mac Os X, iOS, Android OS, and others." It is great for pretty much any format conversion. Essentially, creating an animation is just convert im1.gif im2.gif -loop 0 animation.gif (with lots of other options available) https://imagemagick.org/Usage/anim_basics/ Martin
  7. I use XHIP for the brighter stars http://vizier.u-strasbg.fr/viz-bin/VizieR?-source=V/137D where both B and V are generally given to 2 dp, and Skiff's for the fainter stars http://vizier.u-strasbg.fr/viz-bin/VizieR-3?-source=II/277 which has B-V to 2 dp. Not exactly what you want, but perhaps useful in your venture at some point, is Skiff's Catalogue of Stellar Spectral Classifications http://vizier.u-strasbg.fr/viz-bin/VizieR-3?-source=B/mk&-out.max=50&-out.form=HTML Table&-out.add=_r&-out.add=_RAJ,_DEJ&-sort=_r&-oc.form=sexa You may find other resources at this site too. Martin
  8. Actually Bill, I noticed I didn't answer your question re the colour cast. I think the two things are not related, because I'm creating an independent stack for each filter and applying gradient removal to these stacks independently too, so it won't affect alignment (since stars are extracted prior to each sub joining its stack; see the sketch below). What I'm not doing (and ought to be) is realigning each of the N stacks to each other. That is the next step. In general I want to be able to realign from an arbitrary sub (perhaps automatically) as soon as an alignment issue arises, since in my experience a great many failures to align can be fixed by choosing a different sub for the key stars. Yes, one can run in any order, including interleaved, in SLL. It's just a case of getting it to move the filter wheel in between each one, then typing red, green or blue into the saved FITS name (which is tedious if one interleaves). Also, the number of subs of each type (RGB) shouldn't affect the colour cast, again because I compute the stacked RGBs independently. Having more Rs, for instance, will just lower the noise in that channel. Martin
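     A rough, self-contained sketch of the independent per-filter stacking described above; the alignment and gradient-removal steps here are crude stand-ins for whatever Jocular actually does:

         import numpy as np

         # one independent stack per filter
         stacks = {f: [] for f in ('L', 'R', 'G', 'B')}

         def add_sub(sub, filt):
             # in the real pipeline, stars are extracted and the sub is
             # aligned before it joins its own filter's stack
             stacks[filt].append(sub)

         def stacked_channel(filt):
             s = np.mean(stacks[filt], axis=0)  # more subs just lowers noise
             return s - np.median(s)            # stand-in for gradient removal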
  9. Thanks Rob. Yes, it was the difficulty of getting a good combination of L and RGB in SLL that set me on the path of using LAB space. To be fair, I don't think SLL was designed with LRGB in mind. It does require specific processing to ensure that the L doesn't overwhelm colour information. These results were obtained with the saturation slider quite low, actually. It is possible to apply very large amounts of saturation too, but the results are not necessarily very pleasing! BTW I hope you can get Jocular going. I'm not much help as I don't use Windows these days. Cheers Bill. When I started off last night I collected L first and had problems aligning subsequent RGB data. It might just have been issues with the mount -- I can't be sure -- but when I switched to collecting RGB first I had no alignment issues. I think my alignment algorithm may have a problem going from a large number of stars in one frame to a much smaller number in the next, but not the other way round. If so, it can be fixed and made more robust. It will still be a while before I release the colour version. My filter wheel (from SX) doesn't add much to the optical path (15mm or so?) because the camera sensor can be pushed down to within a whisker of the filters within. All this live LRGB will be much easier when it is scripted from Nebulosity or similar. At present it involves a lot of clicks in SLL and name changes (Jocular identifies the filter from the occurrence of red, green or blue in the FITS name for the moment -- a temporary solution, sketched below). I guess there are maybe 50+ keystrokes for a complete LRGB capture, but this could in theory be reduced to one! Martin
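     A minimal sketch of the stop-gap filter detection described above (the exact matching Jocular uses may differ):

         def filter_from_name(fits_name):
             # look for a colour keyword in the saved FITS filename
             name = fits_name.lower()
             for colour in ('red', 'green', 'blue'):
                 if colour in name:
                     return colour
             return 'luminance'  # assume L if no colour keyword is present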
  10. I had a chance to try out live LRGB captures in Jocular last night for the first time. It's a fairly random collection of objects... M92. M5. Abell galaxy cluster 2151. NGC 6709, an OC in Aquila -- this was quite low, but there are very few OCs in the south until after midnight at present. M57 -- definitely a blue colour cast on this last one that needs to be checked (there is automatic gradient removal in each filter channel but it doesn't look to be doing the right thing here). There are still a few tweaks to make, especially to stack combination, and there is no manual colour balancing yet. I wasn't measuring total exposures, but they are 15s subs except for Abell 2151, where I used 30s. The total sub count is at the top, but in most cases there are excluded subs so these are upper bounds. Generally I collected 4 or 5 subs in each of RGB first, then added the rest as L. I'm finding that relatively small amounts of colour info are needed, along with LAB colour saturation (sketched below), to preserve colour even when a lot of L is included. Mostly I'm using LAB colour space (it is possible to switch to HSV, RGB or a fast hybrid LRGB combination, live, but LAB works best). The only colour controls are saturation, fractional colour binning, and colour stretch (CS in the screenshots). All of these used StarlightLive as the capture engine and for controlling the filter wheel. I still seem to have a fair amount of tilt or something in the optical train that needs to be sorted out too. cheers Martin
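     A minimal sketch of the LAB saturation control mentioned above, assuming a float LAB image such as that produced by scikit-image's rgb2lab; the parameter name is my own:

         import numpy as np

         def saturate_lab(lab, amount=1.5):
             # scale the chrominance channels (a*, b*) while leaving
             # the lightness channel (L) untouched
             out = lab.copy()
             out[..., 1:] *= amount
             return out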
  11. Hi Bill Great to see some more images from you! I hope Jocular is still live-stacking away merrily. I haven't been out for more than a month... NGC 6781 seems really under-rated to me. It has plenty of structure to pull out and sits in a nice crowded star field. I was checking out an old shot of NGC 5466 and it does look a bit more glob-like. There are a ton of really faint stars so if they're just below threshold it can look very sparse. regards Martin
  12. Hi Errol Thanks for the suggestion. I had a look but I think it would be quite hard (but probably not impossible) to do that (different language/OS). I think going down the Nebulosity scripting line will be the first step to having captures under the control of Jocular. Actually, when I first started developing Jocular, I thought lack of capture functionality would be an issue (for me), but having a separate capture engine running on a separate virtual screen and switching between that and Jocular is quite comfortable (almost like they're part of the same program!). The reason for wanting to integrate a capture engine is to handle more complex captures in a single click (auto-flats, dark library construction, RGB, LRGB, SHO etc). Hoping that will be in the next release. The internals for RGB, LRGB etc are all there now so it is just a matter of getting the time to do the Neb integration as smoothly as possible (there are a few wrinkles like the lack of scriptable abort capture to sort out first). cheers Martin
  13. Very nice shots. Arp 293, esp. NGC 6286, is really interesting. I've seen this kind of galaxy before on the Arp list but can't quite place it. It's like one tectonic plate sliding under the other somehow. Here's mine of the Arp 310/311 pairing from some time ago (but almost the same exposure, and using 15s subs too). There's a whole load of faint galaxy smudges here. Of course, now the full moon has passed it is cloudy here... Martin
  14. Hi Greg I thought I'd seen that tartan pattern before with darks but can't now lay my hands on them so I might have been imagining it. Maybe cable issues? BTW I really like the conjunction of the bubble and the open cluster. Yes, I was collecting flats for use with Jocular. cheers Martin
  15. Hi Greg Excellent shots! I particularly like the Antennae tails and the Leo group in the wider context that the Trius/Borg combination provides. And your H-alpha images are really clean. Funny you should mention the filter wheel freezing. I've also had that a couple of times recently, although for some reason I blamed the FW rather than SLL. As you say, a restart cures it. I've a feeling it might depend on the order the USB cables are plugged in, if SLL is already open when plugging in. But that could be wide of the mark. The only time I've seen banding is when using short exposures, where I think it is deliberate, to keep the throughput high during focus and framing (I don't understand the details, but I think the image is read in two reads, one for odd rows and the other for even rows, then combined -- or not, in the case of short exposures). Banding affected some flats I took at 0.5 seconds or so (and since I take twilight flats I have to get going a little bit later and use longer exposures... a t-shirt would help this issue). I wonder if the higher pixel count on the Trius leads to incomplete reads and hence banding. Are the bands horizontal? Martin
  16. Hi Mike It's a tough one to get with the moon so bright, esp. with such short exposures. It must have been very satisfying to see it building up as the subs came in. What I really like about this area are the double stars. If you don't mind, here is a shot of mine (I don't recall, but almost certainly little or no moon) from some years back where I got round to labelling them. Ignore the terrible collimation/tilt... Martin
  17. There's an interesting discussion of teletransportation in the philosopher Derek Parfit's book 'Reasons and Persons' https://en.wikipedia.org/wiki/Reasons_and_Persons also discussed here: https://www3.nd.edu/~jspeaks/courses/2007-8/20229/_HANDOUTS/personal-identity-teletransport-split-brain.pdf which raises all sorts of issues about personal identity. It's an excellent book about what it means to have continuity of identity (mind and body) when cells are being replaced etc. Martin
  18. I'm implementing 3 or 4 different approaches to combining L with RGB (LAB, HSV, the simpleton approach and perhaps some basic layer technique), and the user will be able to simply select one and see the effect pretty much immediately (the LAB route is sketched below). I'm not a big fan of HSV but will include it in early versions purely for experimental purposes. I'd love to work on this together at some point, but I need to get the basic colour stuff into the next release of Jocular first. It sounds like we have similar aims in terms of true colour. I'm working my way through the acronym soup of colour spaces. This is a great site: http://www.handprint.com/HP/WCL/color7.html#CIELAB It contains some interesting comments, such as how well CIELAB does considering its origin as a "hack", and how it doesn't do so well for yellow and blue (which is worrying given that these are found a lot in AP!). The take-home message for me is that colour spaces like LAB are by no means perfect nor necessarily appropriate, and it is likely to be possible to improve things. This is also a good resource on some recent ideas, although I'm not sure whether the more complex models have much relevance in AP: http://rit-mcsl.org/fairchild//PDFs/AppearanceLec.pdf Martin
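     For reference, a minimal sketch of the LAB route for combining L with RGB, assuming float RGB in [0, 1] and using scikit-image's colour conversions; this is one plausible reading, not necessarily how Jocular does it:

         import numpy as np
         from skimage import color

         def lab_combine(L_stretched, rgb):
             lab = color.rgb2lab(rgb)           # split lightness from chrominance
             lab[..., 0] = 100.0 * L_stretched  # replace lightness (LAB L is 0-100)
             return np.clip(color.lab2rgb(lab), 0.0, 1.0)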
  19. Thanks for taking the time to comment Vlaiv. Finding appropriate ways to perform calibration is on the agenda, ideally with some fallbacks in cases where plate solving is not available, so your suggested process is something I will take into account. What I'm finding when it comes to colour in AP is that there are a lot of opinions, sometimes with a scientific veneer, but just as things are getting interesting most discussions become mini-tutorials on how to apply techniques or fix colour issues using PS or PI. None of this alters the fact that there is a ground truth out there if instrument and sky calibration is carried out effectively i.e. in the steady state a star continues to emit at a range of wavelengths regardless of who is capturing those photons here on Earth. And it is that information that I am trying to get at and represent as directly as possible. Of course, in the end everything is a visualisation, but some visualisations are more closely related to the source than others. Although incomplete, I like the approach taken here for instance: http://www.vendian.org/mncharity/dir3/starcolor/details.html What I'm aiming for is an approach which preserves the hue component (after appropriate calibration) and essentially gives the user control over saturation and luminosity stretching and not too much else. Since this will be incorporated in a near-live observation application, I want to minimise the need for adjustments and automate as much as possible, and quickly. I also need to collect more LRGB examples to see whether the simpler approach holds up. cheers Martin
  20. I'm writing some code to combine L with RGB and inevitably looking at using the LAB colour space. I understand that LAB enables separation of luminance and chrominance, allowing separate processing of the two types of information, e.g. stretching of luminance, saturation of chrominance. But I've read contradictory things about the Lum/Chrom separation in LAB. Here, for example, it is suggested that L and A are not independent: http://www.schursastrophotography.com/techniques/LRGBcc.html. My own experience to date is that the L overwhelms the colour data unless the RGBs are themselves stretched prior to going into LAB space, potentially damaging RGB ratios. That led me to wonder about the following simple alternative to LAB (sketched in code below). 1. Compute a synthetic luminance (I'm using the conventional weighted sum 0.2125*red + 0.7154*green + 0.0721*blue). 2. Compute a luminance weight, defined as the stretched L divided by the synthetic luminance, at each pixel. 3. Multiply each of R, G and B by the luminance weight. As I see it, the two key results are that * the synthetic luminance of the new image is equal to the stretched luminance L * colour ratios are preserved (except perhaps in some edge cases). Surely this is what we want in an LRGB combination? Or perhaps I'm missing something. Here's an example for a 200x200 pixel portion of the open cluster NGC 2169 (about 1 minute's worth of 10s subs for each of the LRGB captures with a Lodestar X2 mono guide camera at ~2.1 arcsec/pixel, hence the blockiness). All of the LRGB channels are gradient-subtracted and background-subtracted. Top left is the RGB combination only (i.e. no scaling via luminance -- no L at all in this one). Top right is LAB based on this unscaled RGB but with a power-stretched luminance. Note the lack of colour saturation. Bottom left is the simple approach outlined here. Bottom right is LAB applied to the scaled RGB. These are all mildly stretched, with a simple power stretch applied to the combined image (i.e. to all channels). I chose a stretch that generates a similar 'stellar depth' and amount of chrominance noise to aid comparison of the lower pair -- this is the comparison that I'm interested in. LAB is quite sensitive to the stretch, actually, and to get a similar depth one ends up with quite 'hard-edged' stars. I see somewhat more intense colours for the fainter stars in the simple approach, without any commensurate additional saturation of the brighter stars. No doubt with appropriate processing the two lower panels could turn out to be quite similar. The biggest difference is perhaps in computation time: the simple approach is about 10 times faster. I guess my question is: does anyone do the simple and obvious thing outlined above, or are there some pitfalls I haven't considered? Or does LAB have some advantages that cannot be obtained as easily with the simpler approach? (I'm aware of the AB saturation manipulation in LAB, for instance, and perhaps that would be harder to achieve, although the saturation is already good as is with the simpler approach compared to the top-right LAB, for example.) Any thoughts or comments welcome! Martin
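     A minimal numpy sketch of the three steps above; the epsilon guard against zero synthetic luminance is my own addition:

         import numpy as np

         def simple_lrgb(L_stretched, r, g, b, eps=1e-8):
             # 1. synthetic luminance from the conventional weighted sum
             syn_lum = 0.2125 * r + 0.7154 * g + 0.0721 * b
             # 2. per-pixel weight: stretched L over synthetic luminance
             w = L_stretched / (syn_lum + eps)
             # 3. scale each channel; R:G:B ratios (hence hue) are preserved,
             #    and the new synthetic luminance equals the stretched L
             return r * w, g * w, b * w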
  21. Thanks Errol! I'm nearly ready to return to Nebulosity scripting. I had it working for an earlier version so it shouldn't be too much effort. Just finishing off colour at present and I definitely want to have 1-shot scripted captures with the filter wheel in time for open cluster season. Martin
  22. I don't do it myself but I came across this recent thread that might (or might not) help: https://www.cloudynights.com/topic/657429-how-to-use-sharpcap-with-a-dslr/ Good luck! Martin
  23. Mike, I must admit the Ultrastar mono is tempting. I just want to add that although the kind of sensor I use is remarkably small (in pixel count too) by modern standards (it has 50 times fewer pixels than a 20MP sensor), for most objects it is more than adequate: 6784 members of the 6940-strong NGC catalogue, i.e. nearly 98%, are under 20' in diameter and therefore fit comfortably in the FOV of this camera on an 800mm FL scope (a quick check of the numbers is sketched below). And the kind of detail one can get with these huge pixels is consistently surprising to me. E.g. here's a recent shot of NGC 3294 (ignore the name): This shows tons of detail in this many-armed galaxy. But this represents only a small part (maybe 1/30th) of the field of this tiny sensor (hence the name: NGC 3294 was an unexpected bonus). Here's the full image. Yes, there are objects where a 20MP sensor is going to be worthwhile, but they are relatively few in number, and one must ask if it is worth the extra compute, memory and long-term storage costs most of the time, esp. for galaxy lovers. Sure, long-term storage is cheap, but compute and memory costs provide practical limits on what one is able to accomplish in terms of flexible processing in near real-time. Martin
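     A quick sanity check of those figures, assuming the Lodestar X2's 752x580 sensor with 8.2 micron pixels (sensor figures are my assumption, not from the post):

         import math

         ARCSEC_PER_RAD = math.degrees(1) * 3600

         pixel_scale = (8.2e-6 / 0.8) * ARCSEC_PER_RAD  # ~2.1"/pixel at 800mm FL
         fov_long = 752 * pixel_scale / 60              # ~26' across the long axis
         print(pixel_scale, fov_long)                   # so a 20' object fits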
  24. Plenty of signal in that M51, Nick. It is hard to get the faint outer plumes coming off the three horns. Just to take it down a notch, here's a stack of 15s exposures that I just dug up from 2015 and live-stacked (offline, as it were, but with nothing that couldn't be done live), completely uncalibrated (since I couldn't find any suitable frames). I've not managed to get as much of the faint stuff as you, but I think it shows that there's still life in what is a CCD-based guide camera with monster pixels, in spite of relatively high read noise compared to CMOS, and operating in alt-az to boot. BTW I think 15s is too short on this object for CCDs, but I'd like to repeat the experiment with proper calibration to see how much of the plume is possible, and to check whether it is possible to reduce what I suspect are in part read-noise-related striations in the quadrant at around 4 o'clock relative to M51's core (although some of these I see in your image too, so presumably real). Martin
  25. Thanks Robert! Just hope it works for anyone wanting to use it. As I mentioned in the guide, I am very open to suggestions for new features and happy to work with people trying to get it going on different systems such as Linux, where it has yet to be tested. BTW I just replaced the user's guide with a new version since I noticed the screenshots were of poor quality. So anyone who has downloaded it already ought to replace it with the new version (the written content is the same). Martin