Everything posted by ollypenrice

  1. My understanding is that interferometric filters work by creating a succession of reflective layers spaced so that only a certain wavelength fits between all these layers and so makes it to the exit. If this is correct (not guaranteed!) then I don't think there is any reason to assume that the first (outer) layer should be reflective. It might be reflective on one filter and not on another... Thinking out loud so don't shoot me. Olly
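     The textbook picture, for what it's worth (this is the simple single-cavity version; real filters stack many dielectric layers): transmission peaks where the gap between the reflective layers holds a whole number of half-wavelengths,

         $$ 2\,n\,d\,\cos\theta = m\lambda, \qquad m = 1, 2, 3, \ldots $$

     where n is the refractive index of the spacer, d its thickness and theta the internal angle of incidence. The cos(theta) term is also why the passband shifts towards the blue for off-axis light in fast optical systems.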
  2. So did mine!! And so did my Astronomik. And so did the Astronomik which they sent me as a replacement for the first one!!! lly
  3. Don't worry about the guidescope / imaging scope ratio. As Tony said earlier, your guide RMS in arcseconds should ideally be no more than half your image scale in the imaging scope. If your guide RMS is 0.6 arcsecs you are good to image at 1.2 arcsecs per pixel. If it's an RMS of 1 arcsec then you're good to image at 2 arcsecs per pixel. What is the image scale of your camera in the new refractor? That's what you need to be looking at and comparing it with the guide RMS. Olly
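     A minimal sketch of that arithmetic, in Python (the pixel size, focal length and guide RMS below are made-up example numbers, not anyone's actual setup):

         def image_scale_arcsec_per_px(pixel_size_um, focal_length_mm):
             """Image scale = 206.265 * pixel size (microns) / focal length (mm)."""
             return 206.265 * pixel_size_um / focal_length_mm

         scale = image_scale_arcsec_per_px(pixel_size_um=3.76, focal_length_mm=550)  # example values only
         guide_rms = 0.6  # arcseconds RMS, example value
         print(f"Image scale: {scale:.2f} arcsec/px")
         print("Guiding is fine" if guide_rms <= scale / 2 else "Guiding will limit the result")

     The same half-the-image-scale check applies whatever the guidescope's own focal length happens to be.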
  4. You are just the man I'm looking for to review my images on the internet. No work is involved: I'll supply the text! lly
  5. The Unihedron Sky Quality Meter is very well respected and its readings provide a common scale used worldwide. I was given one by a guest and SGL member and it has performed flawlessly for years. One of my robotic observatory clients has one sampling the sky all the time, as well. A very, very good product. http://www.unihedron.com/projects/darksky/ Olly
  6. I think the reds are lovely and the blues are great so far as they go, but it's the blues which take the time at the capture stage. I'd be very surprised if anyone managed to get more out of your data than this. The Meade 127 is rather a forgotten scope but I had one and liked it. I reviewed it for Astronomy Now and only sold it because an irresistible TEC140 turned up on the used market. Olly
  7. I'm relieved since I couldn't see anything wrong with my argument. You've expressed the two interpretations of the phrase better than I did and located them both in a mathematical framework. Most kind! Olly
  8. Always nice to do a Barnard object simply because E.E. Barnard was such an admirable man and such a nice man. An early Happy Christmas to you, Sir! Paul Kummer drove the scope and did the stacking and calibrating. My post processing. RASA 8, NEQ6, ASI2600MC Pro. Three hours in three minute subs. Is the colour too intense? On AB and on here it looks more saturated than on my Smugmug site. Full size is here. The little blue reflection nebula is worth a peep close up. https://ollypenrice.smugmug.com/Other/Emission-Nebulae/i-8ZPsWs3/A Olly
  9. Ouch, sorry, I seem to have offended you. That absolutely wasn't my intention and I was finding the discussion amicable and interesting. I believe that there is a semantic ambiguity in the term 'the bottom of the wheel.' The point of contact does move relative to the road because it is not attached to any one point on the tyre. The rubber of the tyre in contact does not. If the point of contact did not move the car would not move. We may have, here, an example of the difference between verbal and mathematical descriptions. Anyway, I repeat my apologies and my assurance that no offence was intended. Olly
  10. I understand this point completely but am saying that there is an alternative description which I think is also valid. 1) We can define the bottom of the wheel as being the strip of rubber molecules in contact with the ground. In this description there is no relative movement between these molecules and the ground at the instant of contact. 2) We can define the bottom of the wheel as the point of contact between the tyre and the road. In this definition, which is also valid, no specific rubber molecules appear in the description. Indeed, we can ignore them entirely and define the point of contact as lying perpendicularly below the centre line of the axle. This point is not stationary relative to the road (unless you momentarily stop time, in which case everything is stationary). It is moving constantly along the road. In the first definition we are looking at relative motion between a strip of molecules and the road and in the second we are looking at movement between a geometric point and the road. Consider these definitions as arising from the observer's point of view. - An observer on the tyre will, at the top of their rotation, see themselves as moving over the road at twice vehicle speed and at the bottom, while being squashed, as not moving relative to the road at all. - An observer at the roadside is not going round with the tyre and observes the point of contact as moving at vehicle speed continuously. When they argue about this in the pub afterwards, the tyre rider will exclaim, 'While I was being squashed I was stuck fast onto the road and not moving at all, relative to it. And I was certainly at the bottom of the wheel.' The roadside observer will counter by booming, 'You were at the bottom of the wheel at that instant but most of the time you weren't. This isn't just about you! It's about the bottom of the wheel. I was pointing a laser at the bottom of the wheel as the car drove along and the laser never stopped moving.' In my opinion they are both right, the ambiguity arising from what we consider to be the bottom of the wheel. Olly
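     For anyone who likes the equations, the standard cycloid description of a wheel of radius R rolling without slipping at speed v (textbook kinematics, nothing specific to this thread) captures both views. A point on the rim that touches the road at t = 0 follows

         $$ x(t) = vt - R\sin\!\left(\tfrac{vt}{R}\right), \qquad y(t) = R - R\cos\!\left(\tfrac{vt}{R}\right) $$

     so its speed is

         $$ \left|\vec{v}_{\mathrm{rim}}(t)\right| = 2v\left|\sin\!\left(\tfrac{vt}{2R}\right)\right|, \qquad \text{while} \qquad x_{\mathrm{contact}}(t) = vt. $$

     The rim point is momentarily at rest each time it touches the road and moves at 2v at the top of its arc (the tyre-rider's experience), while the geometric point of contact travels at a steady v (the roadside observer's laser).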
  11. Another Samyang (mosaic) contribution: Orion, with Paul Kummer and Peter Woods. Olly
  12. This is an interesting point. Not being a mathematician, I look at it conceptually. It seems to me that 'the bottom of the tyre' is an elusive concept because it is not defined by any property of the tyre itself (such as a mark) but by the observer who notes that every part of the moving tyre is, at some point, the bottom of the tyre. 'The bottom of the tyre' is defined by its position relative to the road. An observer looking at a car moving east to west will define 'the bottom of the tyre' as the bit touching the road and will also note that the bit touching the road does move, east to west, which is contrary to your statement that it is not moving at all. For the roadside observer the bottom of the tyre is a point of contact which certainly is moving. That seems to me to be the easy bit. The difficult bit is working out exactly what point of observation discovers no movement forward at the bottom of the tyre. I suppose a number of rubber molecules will be able to shout out, very briefly, 'We are now pinned to the road which we know is not moving!' lly
  13. To pick up on sharkmelley's point about the number of doses of read noise, an important factor is the level of read noise of your camera. Excellent as they were and still are, CCD cameras had considerable read noise and really did best with long subs, guiding and sky permitting. Modern CMOS cameras have low read noise and don't, therefore, benefit from such long exposures. Olly
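     A minimal sketch of that 'doses of read noise' arithmetic, in Python (the sky rate and read noise figures are illustrative guesses, not measurements of any particular camera):

         import math

         def stack_noise(total_minutes, sub_minutes, sky_e_per_min, read_noise_e):
             """Approximate background noise (electrons) in a stack of equal subs:
             sky shot noise plus one dose of read noise per sub, added in quadrature."""
             n_subs = total_minutes / sub_minutes
             sky_electrons = sky_e_per_min * total_minutes
             return math.sqrt(sky_electrons + n_subs * read_noise_e ** 2)

         # Three hours total in each case, split into short or long subs.
         for label, rn in [("CCD-like, 8.0 e- read noise", 8.0), ("CMOS-like, 1.5 e- read noise", 1.5)]:
             for sub_len in (3, 15):
                 print(f"{label}: {sub_len:>2} min subs -> {stack_noise(180, sub_len, 20, rn):.0f} e- noise")

     With the high read noise figure, going from 3 minute to 15 minute subs cuts the stack noise noticeably; with the low read noise figure it barely changes, which is the point above.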
  14. You are absolutely right, processing is hard. In fact it has no ceiling: you can get better and better at it over a lifetime. Capture, on the other hand, is a mechanical process which you can, without an enormous amount of difficulty, get to be perfect within the constraints of your equipment. My advice would be to watch tutorials or read books by people who know what they are talking about. Adam Block, Warren Keller, our own Steve Richards, Robert Gendler, R Jay GaBany. The net is full of U-tubing clowns who flounder around dragging sliders this way and that till they say they 'Get something they like.' The instant you hear that phrase, turn them off. As you are learning processing, make it a rule to understand what you are doing. Clicking and thinking are two different activities! Olly
  15. Certainly. I religiously place experimentation over theory. OK, the image is, I think, deep and clean and the issue of exposure length is no longer all that critical since the CMOS chip has replaced the CCD. What I would want to do, were it my image, is sort out the saturation around the Trapezium. Whether or not you'd need short exposures for that region depends on what the exposure is like in linear form. If it isn't saturated when linear, all you need is a separate stretch blended in using one of the HDR routines, either ready-made or hand-made, so to speak. However, I don't know how you feel about a processing step like that. Personally, I'd also lose the star halos but that would take time and involve another layer of processing. Olly
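     A hand-made version of that blend might look roughly like this (a sketch only, assuming two registered stretches of the same linear data; the threshold and feather values are arbitrary examples):

         import numpy as np

         def blend_core(strong_stretch, mild_stretch, threshold=0.85, feather=0.10):
             """Blend a milder stretch into regions where the strong stretch nears saturation.
             Both inputs are registered float images scaled 0..1."""
             # Weight rises smoothly from 0 to 1 as the strong stretch approaches white.
             weight = np.clip((strong_stretch - threshold) / feather, 0.0, 1.0)
             return (1.0 - weight) * strong_stretch + weight * mild_stretch

     In practice you would usually build the weight from a blurred luminance image so the transition around the Trapezium stays smooth, but the principle is the same as the ready-made HDR routines.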
  16. I never really understand what you're trying to do. My agenda is simple: I try to take the best picture I can, within my means, and I have no preconceived ideas regarding what is good or bad, either in hardware or software, capture or processing, so long as I don't invent. You seem to want to make astrophotos within a set of constraints of your own making: this approach is good, right and laudable, but that one is bad, wrong and reprehensible. Perhaps you might publish a kind of manifesto saying, 'These are the constraints I impose on myself when making astrophotos.' You are, of course, perfectly entitled to work within whatever constraints you impose upon yourself: we all do this. I really don't think you need me or anybody else to offer a critique of your image. Its many strong points and its few weak ones are extremely obvious and you must be perfectly well aware of them. If I were to engage with them, though, I fear that I would be engaging with your unpublished manifesto on how astrophotos should be made. Seriously, why not publish your 'rules of engagement'? Olly
  17. Cracking image with lots of depth and texture. I really do like it and love seeing what's 'really' going on rather than the highly selective content of narrowband. Olly
  18. A friend sent me a link to this intriguing video. I watched the first part and found myself amazed by what it revealed, but I didn't see what was coming once it was applied to astronomy. (I'm not a mathematician!) However, I've studied the sidereal and solar day on an astronomy course and performed observations to measure the sidereal day, so I really should have spotted the connection. D'oh. Olly
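     The connection, for anyone else who missed it: because the Earth moves along its orbit while it spins, the day measured against the Sun is slightly longer than the day measured against the stars,

         $$ \frac{1}{T_{\mathrm{sidereal}}} = \frac{1}{T_{\mathrm{solar}}} + \frac{1}{T_{\mathrm{year}}} $$

     which gives a sidereal day of roughly 23 h 56 m 4 s against the 24-hour solar day.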
  19. It must be because it's the only Squid data I've ever shot. I'd forgotten about that. You did a very good job with it! Certainly did, and even more so for the six people who have robotic scopes based here. Upload speed went up by one thousand, three hundred times. You know the village, of course, and there are probably six people living here who actually use the internet. Lord knows why we were blessed with a fibre connection a mountainous 8km from the main line - but we were. Olly
  20. Are you sure? It came from a very long time ago and originally looked like this: By using star removal and massive doses of NoiseXt I was able to get it to what you see in the present image but this did involve completely erasing all background and applying only the nebula itself, which is a bit borderline, ethically, if I'm honest. Needs must. In our earlier version, prior to adding the Fireworks, we used decidedly better Squid data supplied by SGL member AstroGS and published jointly https://www.astrobin.com/yrz3x8/ but the final 4 panel version is all 'in house.' Olly
  21. Only a very small amount of heat is needed to lift the optics above the dew point and defeat condensation. As well as dew heaters there are low consumption puppy warmers available too! (No, there really are: I have one.) Olly
  22. I agree. On the other hand, some processing tasks are inherently complicated even with good data. Mosaics are the obvious example because small gradients, insignificant in a single frame, add up in a mosaic. Very, very faint signal also tends to be tricky. I mean signal so faint that doubling the data will not significantly assist in the task. And then some systems just do throw up artifacts in need of cosmetic correction. I wish they didn't but they do, and I think that making a good job of them is rewarding. Our cameras also have limited dynamic range, emphatically more limited in a single exposure length, so using multiple exposures or combining multiple stretches is a slightly involved process which can extend the range. I think this is perfectly valid and, again, enjoyable to do. Olly
  23. Of course they have! They are no longer linear but have been stretched. Removing them, masking them, whatever, is just a way of giving their cores a different stretch from their edges, which is exactly what any curves or levels stretch will do. If you want to post un-manipulated stars just post your linear image. When an image is processed, it's processed. Olly
  24. Over-processing can introduce artifacts, certainly, but they can also be inherent to the system. Personally, I see post processing as an activity which involves 1) extracting what's in the data and 2) performing cosmetic correction of artifacts. This involves a negotiation between what's in the data and what's up there in the sky. I guess we'll all differ to a greater or lesser extent on where these boundaries lie, but my bottom line is probably this: if it isn't in the sky I don't want it in my picture. Then again, even that doesn't really work because all stars are point sources in amateur instruments. We are doomed to live in a world of compromises. Still, I agree with your definition of over-processing, I think. You like imaging quickly. Have you considered a RASA? They do leave you with some cosmetic work to do, though. Olly
  25. Once you've tried good binoculars you don't enjoy lesser ones very much. Olly