Everything posted by Martin Meredith

  1. Hi Tony They're certainly very nice EAA shots! Great resolution. Looking more closely I can see what you mean about odd effects (more visible on the Albireo shot). There is a certain triangularity that is characteristic of pinched optics, but it is not very equilateral and not apparent everywhere. I think you might get more joy asking the hard-core AP crowd. Have you ruled out stacking artefacts? What does a single sub look like? I'd be interested to know how you get on with CCDInspector. I've always wanted to use it to analyse what I suspect is tilt in my setup but I believe it doesn't (or didn't, last time I looked) run on Macs. cheers Martin
  2. That's about as good as, or better than, I've ever managed to see Jones 1. I just spotted that you're operating at f6.3 -- impressive. I tried it in LRGB recently and found a distinct electric blue/green with the saturation turned all the way up. No idea if this is genuine or not.... Interesting to see there is a very blue star in the middle -- I wonder if this is the central star. It certainly looks quite central and distinct in hue from the rest of the grouplet. This is 8m 20s. I seem to recall there is another Jones PN, or maybe one that is similarly named. I have a JnEr 1 in Lynx listed. Martin
  3. What sorts of exposure times/sub lengths are you using? Maybe M76 will appear brighter with a lower white point? Martin
  4. Hi Gerry I'm not sure I can offer much in the way of techniques but I'm going to throw this into the mix in case it is of interest: One technique we use in experimental studies is pupillometry (essentially, measuring the size of the pupil using an eye-tracker). It has been found that if the brain is engaged in a cognitively-demanding task (in our case trying to extract speech from noise or other talkers), the mean pupil size increases. It's a small effect though and is somewhat transient. I have my doubts about whether it would help in observing, but who knows? There are lots of ways to increase cognitive load while observing that are more pleasant than listening to speech in noise. You could listen to polyphonic music and try to track one of the instruments, for instance. I'm pretty sure it would interfere with the pleasure of observing pretty quickly, but it might be worth a try! The main implication of the 'carving speech from music' example I gave is as a demonstration that neural processing is reconfigurable over the course of seconds and minutes. Some of the forms of distorted speech I use are very far removed from what real speech looks like, so the brain has had no prior experience of them, yet listeners are able to extract meaning after relatively short periods of exposure. What this might mean for observations I'm not sure but there is the possibility that other forms of visual expertise may have a positive impact on observing, if observing can piggy-back on the structures that this other (non-astronomical) visual expertise has put into place. I think this supports the point made by Iain (Scarp). There are lots of studies showing the benefits that musicians have in some speech tasks, for instance. For observers, there could be other ways to develop this kind of complementary visual expertise e.g. microscopy. Cheers Martin
  5. Good to see a WBL thread! I haven't seen WBL 611 before now. It is the equal of many of the fainter Hicksons. Hard to tell how many of those blobs are galaxies -- quite a lot of them I imagine. These groups can turn up some great surprises and quite often appear in starry fields. Here's WBL 666 -- so good I observed it twice (by mistake!), a year apart (it hasn't changed much). The 2019 (colour) version is longer in exposure but results in a much better-defined spiral. There is a lot of action here. The main face-on spiral is NGC 6962, accompanied by NGC 6964. I particularly like the (stellar?) configuration at about 5 o'clock that is another false Saturn. I have a csv file of these groups but it needs a little more work to convert the RAs and Decs into a non-decimal format. Will post at some point soon. Martin
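In the meantime, the conversion itself is only a few lines of Python. Here is a minimal sketch; the file name and the 'name', 'ra' and 'dec' column headings are placeholders rather than the actual layout of my csv:

```python
# Sketch: convert decimal-degree RA/Dec columns in a csv to sexagesimal.
# File name and column headings are illustrative, not the real file.
import csv

def ra_to_hms(ra_deg):
    """RA in decimal degrees -> 'HH MM SS.s' string."""
    hours = ra_deg / 15.0
    h = int(hours)
    m = int((hours - h) * 60)
    s = (hours - h - m / 60.0) * 3600
    return f"{h:02d} {m:02d} {s:04.1f}"

def dec_to_dms(dec_deg):
    """Dec in decimal degrees -> '+DD MM SS.s' string."""
    sign = '-' if dec_deg < 0 else '+'
    d = abs(dec_deg)
    deg = int(d)
    m = int((d - deg) * 60)
    s = (d - deg - m / 60.0) * 3600
    return f"{sign}{deg:02d} {m:02d} {s:04.1f}"

with open('wbl_groups.csv') as f:
    for row in csv.DictReader(f):
        print(row['name'], ra_to_hms(float(row['ra'])), dec_to_dms(float(row['dec'])))
```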
  6. The honest answer is that we don't know how this works, but we have some hints from experimental studies as well as some ideas from computer modelling as to how it might work. I stress these are just my opinions based on published studies (mine and others') over the last 20 years or so. One key finding is that viewers and listeners do not in general hallucinate. Any filling in must be entirely compatible with the sensory evidence. Any counter-evidence can immediately kill off an erroneous interpretation. I've seen this in my own work involving word misperceptions in noise. When the noise is information-less and acts just to mask portions of the word rather than contributing any linguistic information itself, listeners do tend to fall back on prior knowledge when forced to guess what the word is (we see this because they choose more familiar words, so are applying knowledge about how likely different words are to be present). But when the noise itself contains speech information (e.g. competing speech babble), then listeners' misperceptions are not based on word familiarity but turn out to be words that could possibly have occurred by gluing together fragments of the background babble. There are other experiments showing that when interpreting phrases in noise we rely more on acoustic information than on prior knowledge (the opposite of what one might expect). That is, we are straining to find any reliable 'real' evidence and weight it very highly in the decision process. So in the battle between sensory data and prior expectations, sensory data will always win out. Expectations are perhaps best thought of as biases which will only get used when the incoming sensory data is absolutely ambiguous, noisy or lacking -- something that we find a lot when observing DSOs and the like! Sometimes we have a glimpse into the form of these expectations when we cut off sensory input altogether, as during dreaming. The 'no hallucination' principle suggests that during observation, we are indeed reporting what is there or what is compatible with the evidence of our eyes, while at the same time we may be able to make an interpretational 'leap' from poor sensory evidence to what experience tells us is there. In the case of say the GRS, it may not take many photons in the right place to suggest that the feature is present. It is surprising how little sensory data we need to make accurate judgements, and how much experience helps. (Off-topic, but I have recently been working with a form of distorted speech which actually contains no speech at all but is carved out of music; with experience listeners go from identifying 4 out of every 10 words correctly to 6 or 7). Another thing to realise is that filling in ('imputation' as it is sometimes called) is not a one-shot process. There is the potential for filling in at many (dozens) of points during the processing of a visual or auditory scene. Observers might fill in pixel-level intensity details based on simple spatial neighbourhood criteria, and at the same time be making mid-level interpretations about the continuity, say, of a feature such as a line when it is occluded by another feature, amongst others. One property of neural processing is the existence of connections in both directions i.e. not just an upward flow to support the process of interpretation, but a downward flow too. We know much less about the downward connections, but hypothesise that they serve to normalise, stabilise or regularise the upward flowing information.
It is probable that these also represent the mechanism of 'filling in'. So to come to the crunch question, how can the brain fill in information if it doesn't know what to fill in? The key word here is 'know' and the question is really: how do we know what the brain really 'knows'? Every time incoming data is processed, it has the potential to alter the state of what the brain knows, and this is most likely taking place both continuously and in a distributed fashion i.e. at each processing level. For instance, the brain may be learning about most-likely local intensity gradients at the lower levels of processing, about line continuity at the mid-levels, about continuity of shade, shape, outlines etc, but also learning about any form of statistical regularity. I'm sure that some of these regularities are quite abstract and high-level, and enriched by increasing experience. A lunar observer's brain will have different expectations than an experienced observer of Mars, for instance. Experience in a sense is just the distributed accretion of sensory data and its subsequent representation in the brain. Most of the evidence we have is circumstantial and based on computational models. How sense data can best be combined with existing knowledge to produce updated knowledge is one of the main challenges of modern machine-learning. The latest and by far most successful wave of artificial intelligence is entirely based on probabilistic learning in multilayer neural models ('deep networks') which may be pretty good analogues of the distributed processing of information in the brain. One thing we can do with deep networks is turn them on their head and get them to generate scenes or sounds -- in a sense, asking them to explain (or better said, to exemplify) what they have learnt. Visualisation techniques of this kind are still in their infancy but there have been some attempts to show what has been acquired: a good example is Google's DeepDream https://en.wikipedia.org/wiki/DeepDream In principle it would be possible to train a deep network on the kind of observations that reflect what astronomers observe and determine what regularities have been extracted -- we might be surprised. Not sure any of this gets us closer to answering the question, but we are still at a (very) early stage of understanding the brain. cheers Martin
  7. Hi Chris If you check this thread you can find my setup in detail. The Lodestar is my main (only) camera. Although it is marketed as a guide camera, the properties that make it a good guide camera (large pixels, high quantum efficiency) also mean it is very effective for EEVA in terms of delivering reasonable views of objects quickly, which is what we want when observing, as opposed to doing astrophotography. Arguably there are better options out there, both CCD cameras (such as the Ultrastar with slightly smaller pixels and an overall larger field of view, which Mike JW uses very effectively for EAA and which would be my likely upgrade path), and CMOS cameras which tend to have lower read noise and can therefore support faster exposure times. CMOS cameras are just a little more complex in use because it is necessary to choose a gain value as well as an exposure time, but I believe this isn't too much of an issue with experience. A few caveats about Jocular: 1. The version that is currently out there does not support cameras natively. Instead, you need a separate capture engine that dumps FITS files into a folder which is monitored by Jocular. The version that I aim to release next month will support the Lodestar natively and hopefully the Ultrastar too, but there is little chance I will support other cameras in the near future, so for these cameras it is necessary to use the monitored folder approach. 2. It is a source code distribution, so needs some first-time-only setup (basically, installing Python and a few packages). It is not a download, click and open option, so needs a bit of work. However, this means that it works on all systems (OSX, Windows, Linux) and we have working examples and can help collectively with install issues. 3. It only supports mono cameras with relatively small total pixel counts. This comes from the design philosophy of the tool, which is to enable all decisions to be reversed/modified during a live session, in order to make best use of the photons you've collected. Essentially, Jocular supports a 'fast total reprocess of the stack at any time' design and also gives the user complete access to the entire stack of subs in order to edit out poor subs, change the way the stack is combined, and a host of other actions that are not normally possible in EEVA tools. But as a tradeoff, I can only support relatively small pixel counts... 4. It supports colour via mono + filters but doesn't support one-shot colour. Jocular was conceived as an observers' tool, so it does have functionality that supports observing lists, observation planning, session and object logging, reloading previous observations and the like. cheers Martin
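For anyone curious, the monitored-folder idea is simple enough to sketch. The snippet below is not Jocular's actual code, just an illustration of the principle; the folder name and polling interval are arbitrary and it assumes astropy for reading FITS:

```python
# Illustration of a monitored capture folder: a separate capture program
# writes FITS subs into WATCH_DIR and this loop picks up each new file.
# Not Jocular's own code; folder name and polling interval are arbitrary.
import os
import time
from astropy.io import fits  # pip install astropy

WATCH_DIR = 'captured_subs'
os.makedirs(WATCH_DIR, exist_ok=True)
seen = set()

while True:
    for name in sorted(os.listdir(WATCH_DIR)):
        if name.lower().endswith(('.fit', '.fits')) and name not in seen:
            seen.add(name)
            with fits.open(os.path.join(WATCH_DIR, name)) as hdul:
                sub = hdul[0].data
            print(f'new sub {name}: shape {sub.shape}')
            # ...hand the sub over to the live stacker here...
    time.sleep(1)  # poll once a second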
  8. Hi Chris I'm sorry that I can't really help with the ASILive side (though I read reports that it is easy to use). I appreciate your desire for simplicity (I use a single USB cable to my laptop and another to my filter wheel, which I see as reasonably simple). On your other points, I can say that you don't need to guide to do EE(V)A. Also, an 8" aperture is easily more than enough to bring 10s or 100s of thousands of objects into view, a great many of them with lots of detail (check out the main EEVA observing subforum). I use an 8" reflector and have no desire to increase aperture. What you most likely will need is some kind of focal reduction to bring the focal ratio into the region of 4-5 (for DSOs). I don't 'do' the brighter planets, but the longer native focal ratio is I believe better suited to those. I'm sure others with similar scopes to yours will be able to chime in with more specifics. I have fairly extreme views on the camera side (so feel free to disregard!), but nearly everything worth seeing will actually fit on a small (pixel count and physical dimension) sensor (mine is 0.4Mpixels and about 0.4 degrees apparent FOV). The trend amongst EEVA'ers is in the opposite direction, to larger sensors. These are good for larger objects like many nebulae (bright and dark) and a few of the larger open clusters, and M31, and .... that's about it. Pretty much all the NGC/IC, Hicksons, Arps, and other catalogues will 'fit' on the smaller sensor (as of course will the planets). So it is worth thinking about this side of things too. Smaller pixel count sensors are (much) faster to process (stacking), eat up less storage, and are simpler if you end up doing any kind of wireless transfer. The down side of a small sensor is that your GOTOs need to be quite accurate, but I have never found that to be a problem. Happy to elaborate on any of these points! cheers Martin
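To put some numbers on the small-sensor point, the field of view and image scale follow directly from pixel size, pixel count and focal length. A back-of-envelope sketch (the figures below are illustrative values, not a recommendation or anyone's exact setup):

```python
# Back-of-envelope FOV and image scale for a small mono sensor.
# The pixel size, pixel count and focal length are illustrative values.
import math

pixel_um = 8.2      # pixel size in microns
n_pixels = 752      # pixels along the long axis
focal_mm = 800.0    # effective focal length after any reduction

sensor_mm = pixel_um * n_pixels / 1000.0
fov_deg = math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))
scale_arcsec = 206.265 * pixel_um / focal_mm   # arcsec per pixel

print(f'FOV ~ {fov_deg:.2f} deg, image scale ~ {scale_arcsec:.1f} arcsec/pixel')
```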
  9. Hi Callum Some of the planetary nebulae need very short exposures with our sensors. I visited this one recently (will post it in the PN thread at some point when I've found time to read up on it...) with 5s exposures and then found I needed to view it with almost no stretch. I was quite surprised by the appearance: the lower-case 'omega' nebula! Orientated to match yours, this is 1m 20s total (4 x 5 seconds in each of LRGB). BTW This is my overstretched version. Although LRGB is meant to separate luminosity and chromaticity, in practice I find that stretching the L washes out the colour and I suspect that is why you are only getting a tinge of blue. cheers Martin
  10. I'm not a vision expert but I work in speech perception and there are many demos of the brain filling in information based on what ought to be there. In fact, this is the normal mode of operation and not at all exceptional (think about the way we perceive conversations in noise: at the level of the signal reaching the ears, lots of information is already masked and has to be filled in, either explicitly or as a side-effect of recognising what was being said). There's a famous illusion called the phoneme restoration effect where an entire sound is chopped out of the signal and replaced by noise, and listeners swear the sound is present. In a sense it is present, in the same way that a sculpture is present in a cube of marble (indeed, if the noise is not loud enough the illusion doesn't work). The brain appears to employ Occam's razor: of several competing explanations, choose the simplest that is compatible with the sensory evidence. It's a bit old now, but the philosopher Daniel Dennett's book Consciousness Explained has some interesting ideas about the difference between sensation and interpretation and the huge gap between the way we imagine the brain operating and what it is (possibly) actually doing. I guess the boring way to answer the question would be for a psychologist to stand at the end of the scope generating control images where the feature of interest is excised and the observer has to report whether it is present or not. This would be very hard to do properly in the photon-starved, variable-seeing astronomical context because the sensory signal is continually changing. I run into the same issue when I'm trying to decide whether I've detected some faint feature on photos, but at least there we can measure the SNR and apply some kind of criterion for detection. Martin
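As an aside, this is roughly what I mean by an SNR-based criterion on photos. A toy sketch with synthetic data and an arbitrary 5-sigma threshold (none of the numbers are from a real observation):

```python
# Toy detection criterion: estimate the background and its noise, sum the
# counts over a candidate patch, and accept the feature if its SNR exceeds
# a threshold (5-sigma here, purely as an example). Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(100.0, 5.0, (200, 200))   # flat sky plus noise
image[95:105, 95:105] += 4.0                 # faint 10x10 pixel feature

background = np.median(image)
sigma = np.std(image[:50, :50])              # noise from an 'empty' corner

patch = image[95:105, 95:105]
signal = patch.sum() - background * patch.size
noise = sigma * np.sqrt(patch.size)          # noise on the summed patch
snr = signal / noise

print(f'SNR = {snr:.1f} ->', 'detected' if snr > 5 else 'not detected')
```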
  11. Just to clarify that this is a screenshot from the so-far-unreleased version. The older version does have some observation planning built in (it has the same 40k+ object database) but isn't anywhere close to as flexible (no exporting, no alt-az/transit info, no user catalogues etc). I will hasten to get the new version ready to test.... but meanwhile do let me know of any install issues. It isn't plug and play but most of the work is done up front, then later versions ought to be simpler... Martin
  12. Hi Greg The maps link on Zenodo in my sig is working at the moment (it is the huge CERN data repository so it should always be online). They're 9G in all but you can download parts. Let me know if any issues. You need a pdf browser that supports hyperlinks (Acrobat is fine in this regard). Jocular in the version currently available on that thread only supports monitoring a folder for monochrome cameras with relatively small pixel counts (the reasoning is explained in the user guide, but essentially Jocular supports a different model where everything can be reprocessed on the fly, and this doesn't really work well for large sensors). The version that I hope to bring out soon is a major revision with lots of new features, and it will support the Lodestar mono camera natively (and the Ultrastar I hope also), as well as the monitored folder for other cameras. Unfortunately it is a lot of work and outside my programming expertise to support other cameras at the moment (I was pretty surprised I could handle the Lodestar...) but the whole thing is open source and will be on GitHub so there is nothing to stop others adding in support for other cameras. That's the best I can say at the moment. The one part of Jocular that might be of use regardless of the capture side is the session planning tool (example attached). I might develop a separate smaller application that just handles this side. The new version supports both supplied and user DSO catalogues and can export observing lists so might be useful to some people. Martin
  13. Hi Greg This is a big question and a very interesting one, and one that I hope will provoke some discussion. All I can do is recount some of my own experiences. When I first started out in EEVA I spent a few sessions observing well-known objects such as M42, M46 and the like (it was Jan/Feb) and then galaxy season swung around so I turned to looking at well-known galaxies..... but at some point soon after I realised I was capturing much fainter stuff too, and from that moment it became a question of pushing the approach to its limits, and that feeling has not gone away. I obtained a copy of the Night Sky Observing Guide (NSOG) and found that I could make out all the details reported in 20" scopes with my 8" reflector. I suppose this was all part of a long 'calibration' process, to decide on what the boundaries of the technique were for my context. Alvin Huey's guides (mainly free) as well as Reiner Vogel's Hickson guide (free) were all very useful during this stage. Once I had an idea of the capabilities of my setup, I was amazed to find that the number of potentially interesting objects ran into tens or hundreds of thousands. I guess I spent the next 12 months or so gorging on different object types in a semi-haphazard fashion, looking at faint quasars and Abell galaxy clusters, Hicksons, Shakhbazians and the like. There is an overwhelming amount of 'stuff' that can be done with EEVA and I certainly appreciated the need for some structure. These last few years my whole process has become (slightly) more organised in the sense that I have a number of lists active that I am trying to work my way through. There is a lot to be said for looking at objects of a given type or class. It is unfortunate that we sometimes talk about 'trawling' or 'working our way through' a list of objects like the VVs or Arps or Berkeley OCs or whatever and this might give the impression of cranking the handle to the next one of its type, but I've found it isn't like that. When it comes to observing many objects of the same type, I have come to the conclusion that the whole exceeds the sum of the parts. Mike's globular cluster thread is a great example of this and I've found the same when I've focussed on one object type, or one catalogue. Yes, some exemplars are underwhelming (to look at, not necessarily to read about), but these just prime the 'wow' factor when a compelling example of the type comes along. This happens very often. I might start the night with a list of 20-40 objects to potentially look at (and in a good long session I will observe no more than 20), but I find it impossible to predict in advance which will be the 'object of the night' for me. Some innocuous NGC 4-digit number suddenly transforms into the most wonderful object, or the main object of interest is instead relegated because of that amazing 16th mag flat galaxy in the corner of the field, or an interesting colour combination of stars amidst a galaxy cluster... impossible to predict. And by building up a collection of observations of a given type, the astrophysics becomes more intriguing (why is this a barred spiral, what does this Trumpler classification mean, etc) and the contrasts between types become more obvious. In terms of planning a session, even the best printed guides only cover a small percentage of the 'available' objects so at some point it is worth acquiring other planning tools.
I developed the PrettyDeepMaps explicitly for EEVA as I found that the charting apps available to me (on a Mac at least) did not go anywhere near deep enough for the capabilities of EEVA. For instance, I don't know anything else that plots all the individual Shakhbazian members or the like, although things might have changed since 2015-16 when I produced the charts. Although they are all PDFs, the maps come with a set of tables that can serve as a planning tool since everything is hyperlinked to everything else. Nowadays I use these in combination with Jocular for most of my planning since in that way the observing list is integrated into the EEVA tool itself, meaning I can run the sessions with minimal typing, and also because Jocular provides transit times etc that the maps cannot do as they are not tied to a specific observing location. I typically have the charts open on one 'virtual' screen on my laptop and Jocular on the other and am constantly flipping between the two. For each object on my observing list for that evening I use the charts to confirm its position, get it oriented with N at the top, and look for interesting things in the vicinity, to either frame alongside the 'target' or to visit later. In this way I am easily side-tracked... but in a good/fun way. Like Mike, I tend to focus on a small patch of the sky. This patch isn't always 'optimal' in the sense of the right time of year or the best altitude. It is occasionally a lot of fun to head to the deep south to weird constellations like Microscopium. But if I develop a sudden passion to see ring galaxies, for instance, I might be all over the available sky trying to pick them up and compare them, within a single session. In sum, to try to answer your question, there is a lot to be said for picking some catalogue or object type and pursuing it (perhaps in parallel with other lists) to see what grabs your interest. This gives some structure to an observing session (and some would say that since all of it is interesting, it hardly matters what we observe!). In parallel, it is worth exploring the limits of your own equipment/context. A fun thing to do is to find a galaxy cluster and see what is the faintest member you can detect, or an open cluster and identify your stellar magnitude limit, or a distant quasar. Cheers Martin
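As a footnote for anyone without a planning tool that knows their location: astropy will give the current altitude and a rough time to transit in a few lines. The site and target coordinates below are placeholders, not my own location or a specific object:

```python
# Sketch of the altitude/transit information a location-aware planning
# tool can supply. Site and target coordinates are placeholders.
import astropy.units as u
from astropy.coordinates import AltAz, EarthLocation, SkyCoord
from astropy.time import Time

site = EarthLocation(lat=48.0 * u.deg, lon=-1.5 * u.deg, height=50 * u.m)
target = SkyCoord(ra=337.0 * u.deg, dec=0.5 * u.deg)   # illustrative RA/Dec

now = Time.now()
altaz = target.transform_to(AltAz(obstime=now, location=site))
print(f'altitude now: {altaz.alt.deg:.1f} deg, azimuth: {altaz.az.deg:.1f} deg')

# The target transits when the local sidereal time equals its RA
# (a negative result means the transit has already passed).
lst = now.sidereal_time('apparent', longitude=site.lon)
hours_to_transit = (target.ra - lst).wrap_at(180 * u.deg).hourangle
print(f'transit in about {hours_to_transit:.1f} sidereal hours')
```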
  14. I've only ever used my mount in alt-az mode although it does have EQ. I just get the bubble roughly in the centre by eye. I don't really know how critical it is. Most of the time I observe on a less-than-optimal suspended wooden terrace anyway which most likely changes its level as it cools... and certainly does as the dog wanders around. In the same way I don't know how critical getting the stars dead centre during 2-star alignment is, but I always aim for as close as possible (within a few pixels?) in that case and my gotos are usually on the chip somewhere which is good enough for me given how small the FOV is (0.4 degrees). I don't plate solve. Deflavio, you seem to have a well-adjusted setup judging by your images. I most likely have some camera/filterwheel induced tilt and twisted spider vanes which I've been meaning to do something about for about 5 years now... EEVA is pretty forgiving though (which is why I haven't!). Martin
  15. I think they're all interesting especially with a bit of a back-story. I have similar thoughts sometimes about the observation length, but on the other hand I can easily be spending up to 20 minutes on a particularly faint or compelling object so if that ends up with a smoother image, all the better. I occasionally go longer still. Very faint but rich Abell-Corwin-Olowin galaxy clusters are amongst my favourites as I love seeing them appear like fireflies, growing in number as stacking proceeds; and this can take some minutes before more than a few are visible. I think the longest I've spent (so far) has been on a faint quasar that (at the time) had an estimated redshift that put its light in the first billion years of the universe. I've since found that the redshift was overestimated.... so that is a personal challenge I still have. And the other occasion was (at the other distance extreme) trying to spot a very cool star (a brown dwarf called DENIS something :-). Longer observations are sometimes necessary and have their place as the only practical way to see certain things with an 8" piece of glass.
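(For a sense of what 'light from the first billion years' implies, the age of the universe at a given redshift is a one-liner with astropy's built-in cosmology; the redshifts below are illustrative rather than the quasar's actual figure. Anything much beyond a redshift of about 6 was emitted inside the first billion years.)

```python
# Age of the universe at emission for a few illustrative redshifts,
# using astropy's built-in Planck 2018 cosmology.
from astropy.cosmology import Planck18

for z in (5.0, 6.0, 7.0):
    age_gyr = Planck18.age(z).to_value('Gyr')
    print(f'z = {z}: universe was {age_gyr:.2f} Gyr old')
```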
  16. These are lovely bright images of what are very faint galaxies. I agree that SHK 362 is very appealing. SHK 84 is also a nice 'shaksterism' (reminds me of a constellation -- Sagitta perhaps). Martin
  17. It's super-hot during the day all week and cloudy at night...grrr.... but I will cross my fingers. My sensor is only 0.4 degree wide so it would actually have to be sooner than Friday (already too late?). Maybe @Mike JW is interested/cloud-free? Martin
  18. Beautiful composition, Mike. I also love diffraction spikes (although mine display evidence of a twist which I really must sort out). There's so much going on here. I can see the tails and counter tails, more so after consulting Kanipe and Webb's Arp Atlas. There is a double tail (presumably one tail from each galaxy?) joining the two. I had a quick skim to see if 'infall and attraction' is defined but didn't spot anything. I did see a note that 'characterisation of peculiarities is sometimes descriptive rather than literal' but that still doesn't help. Perhaps the term is defined in the original Arp atlas. A little mystery about that quasar: I've consulted my charts and I see two quasars hereabouts. PB 5468 is further in with mag 18.3 and a redshift of 1.87, which is the value you cite. I have Q 2334+019 as further out with mag 18.6 and a redshift of 2.19, making it still more distant. It is entirely possible that the data has been changed since I created the charts though. I have noticed that quasars (mainly their distances) are updated somewhat more frequently than other objects. Martin
  19. What an excellent find! Not one I've seen before. I like the way the dust lane appears to carve up the central bulge, making it look like two long thin galaxies in contact. I am finding that for me there is a definite galaxy aesthetic, often to do with angle of course but also (in this case) the fact that it is just 'hanging there' in amongst a relatively dense star field. That's why I like the WBL groups as they often appear in such locations (there is also at least one catalogue of galaxies behind the Milky Way that can be challenging to spot). One of the many great things about EEVA is that it makes a more comprehensive search of the NGC and other catalogues feasible within a reasonable time frame. Martin
  20. Wall to wall cloud here last night Stu.... but thanks for the heads up. This is indeed just the type of thing EEVA is made for. Martin
  21. Thanks for the paper. The information on exposures is very useful and I see we are in the 27 mags/arcsec^2 range for surface brightness, so well done on getting one (or more) of these! It appears that galaxy M has been known about since at least 2015 (it is on my charts, with ID LEDA 2051985), so I guess the M was to distinguish it from new discoveries? They seem to have included it because of their new radial velocity data that suggests it is a true satellite of NGC 7331. Martin
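(As a rough check on what ~27 mags/arcsec^2 means: the mean surface brightness is just the integrated magnitude plus 2.5 log10 of the angular area in square arcseconds. The magnitude and size in the sketch below are illustrative, not the galaxy's published figures.)

```python
# Mean surface brightness from an integrated magnitude and angular size.
# The magnitude and dimensions are illustrative, not measured values.
import math

m_total = 17.5           # integrated magnitude (illustrative)
a, b = 60.0, 30.0        # semi-axes in arcsec, i.e. roughly 2' x 1'

area = math.pi * a * b   # arcsec^2
sb = m_total + 2.5 * math.log10(area)
print(f'mean surface brightness ~ {sb:.1f} mag/arcsec^2')
```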
  22. Indeed you are right, Mike. I stretched just the luminance to within an inch of its life and had a matching patch of noise in position A, as well as 100s of other matching patches of noise in the vicinity 😉. So no clear ID there. But a definite challenge for another night. Mag 19 ought to be possible but I found this region to be somewhat obscured by the outer arms of NGC 7331 in my shot. Is there a link to any resources describing the 6 dwarves? Thanks again, and to Callum for reminding us what a great galaxy this is. Martin
  23. All three very nice objects, well captured. I was looking at Hickson 94 on Friday -- most unusual grouping of stars and galaxies with some lovely colour contrasts. Martin
  24. Fantastic amount of info there Mike and a wonderful challenge for us all! That is just the kind of evening the EEVA techniques are made for. It has me wondering if I managed to capture any of these in my LRGB shot from the other night. I doubt it but here it is. I will reload it later and take a more definitive look (and stretch it as far as I can...). Colour often doesn't add much but I think in this case the galaxy is bright enough to support fairly clean colour in EEVA length exposures.
  25. Great result Callum. I observed this one too a few nights back in LRGB. This is a 7m15 stack. The central star appears really blue -- I guess this is the result of absorption by the nebula. I tried to orient mine to match yours but I think yours is flipped relative to mine. There's a bonus mag 16.3 galaxy in the shot at my 11 o'clock, your 4 o'clock. Did you combine L with RGB or is it pure RGB? Martin