Posts posted by ollypenrice

  1. My understanding is that interferometric filters work by creating a succession of reflective layers spaced so that only a certain wavelength fits between all these layers and so makes it to the exit.

    If this is correct (not guaranteed!) then I don't think there is any reason to assume that the first (outer) layer should be reflective. It might be reflective on one filter and not on another...
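
    For what it's worth, the textbook condition behind this (assuming, as I am, that each spaced pair of layers behaves like a tiny Fabry-Pérot cavity) is that transmission peaks where a whole number of wavelengths fits the round trip across the gap:

      2 n d cos(θ) = m λ,   m = 1, 2, 3, ...

    where n is the refractive index of the spacer, d its thickness and θ the angle of incidence. It would also explain why tilting such a filter shifts the passband blueward: cos(θ) shrinks, so a shorter λ satisfies the condition.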

    Thinking out loud so don't shoot me.

    Olly

  2. On 12/12/2023 at 19:14, Sarek said:

    Current guiding varies quite a lot but on average I'd say it's usually between 0.6 and 1.0 arcseconds RMS.

    Both scopes are around 5.5 kg, versus 2.0 kg for the SW 72ED.

    If I've used the online calculator at Astronomy Tools correctly, the Starfield, combined with the ASI 533 MC Pro and my 32mm f/4 guidescope, would yield an imaging/guiding ratio of 1:4.73, and my existing setup 1:2.78. Changing to a guidescope with a focal length of 200mm brings the former down to 1:2.85, so virtually the same.

    Does that suggest the current guiding scope might struggle a bit with the Starfield?

    I probably won't be using the Mak for DSOs, just planetary imaging, so guiding is probably not needed.

     

    Don't worry about the guidescope / imaging scope ratio. As Tony said earlier, your guide RMS in arcseconds should ideally be no more than half your image scale in the imaging scope. If your guide RMS is 0.6 arcsecs you are good to image at 1.2 arcsecs per pixel. If it's an RMS of 1 arcsec then you're good to image at 2 arcsecs per pixel.

    What is the image scale of your camera in the new refractor? That's what you need to be looking at and comparing it with the guide RMS.
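
    If it helps, here's a minimal sketch of that arithmetic using the standard plate-scale formula (the ASI533's 3.76 micron pixels are real; the focal length below is only a stand-in, so put in your own):

      # Plate scale in arcsec/pixel from pixel size (microns) and focal length (mm)
      def image_scale(pixel_size_um, focal_length_mm):
          return 206.265 * pixel_size_um / focal_length_mm

      scale = image_scale(3.76, 480.0)   # 480 mm is illustrative, not the Starfield's spec
      guide_rms = 0.8                    # arcsec, a middling value from your range

      # Rule of thumb from this thread: guide RMS <= half the image scale
      print(f"{scale:.2f} arcsec/px;",
            "fine" if guide_rms <= scale / 2 else "guiding may limit you")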

    Olly

  3. 9 minutes ago, Marvin Jenkins said:

    It looks perfect. But then I am blind in one eye and can't see anything out the other.

     

    You are just the man I'm looking for to review my images on the internet. No work is involved: I'll supply the text!

    :grin: Olly

  4. I think the reds are lovely and the blues are great so far as they go, but it's the blues which take the time at the capture stage. I'd be very surprised if anyone managed to get more out of your data than this.

    The Meade 127 is rather a forgotten scope but I had one and liked it. I reviewed it for Astronomy Now and only sold it because an irresistible TEC140 turned up on the used market.

    Olly

  5. 5 minutes ago, Zermelo said:

    I think I see what you're getting at, Olly.

    Agreed, you could take this to mean either:
    (1) A specific, physical point on the wheel's outer edge that is, at the moment under consideration, in contact with road, or
    (2) The location in space identified by the intersection between the wheel's outer edge and a line drawn directly downwards from the wheel's centre

    They happen to be coincident for the purpose of the current discussion.

    I suspect an information modeller would represent the physical points on the wheel circumference as one concept, uniquely identified by (for example) the angular offset θ from some reference zero on the wheel, while the concept (2) above would be more like a role ("bottom of wheel") that can be fulfilled by any of the circumferential points at some appropriate time t (another role of interest might be "top of wheel"). These roles are defined relative to the vehicle, not the wheel. If the motion of the car were known, it could be modelled as a constraint between θ and t (quite simply, if the vehicle's velocity is constant).

     

    I might choose different words, but yes, in sense 2 above, the "point" of contact does move relative to the road, even though any "point" on the wheel (sense 1) that is fulfilling the role of "bottom of wheel" at a given instant will be at rest, relative to the road.

     

    I'm relieved since I couldn't see anything wrong with my argument. You've expressed the two interpretations of the phrase better than I did and located them both in a mathematical framework. Most kind!

    Olly

  6. Always nice to do a Barnard object simply because E.E. Barnard was such an admirable man and such a nice man. An early Happy Christmas to you, Sir!

    Paul Kummer drove the scope and did the stacking and calibrating. My post processing.

    RASA 8, NEQ6, ASI2600MC Pro. Three hours in three minute subs.

    Is the colour too intense? On AB and on here it looks more saturated than on my Smugmug site.


    Full size is here: https://ollypenrice.smugmug.com/Other/Emission-Nebulae/i-8ZPsWs3/A
    The little blue reflection nebula is worth a peep close up.

    Olly

  7. 33 minutes ago, Mandy D said:

    @ollypenrice I'm simply not going to discuss this topic any further with you, as it would appear to be a pointless exercise. I've laid out what is going on and others have taken the time to explain further. There is absolutely no ambiguity regarding what is the bottom (or any other point) of the wheel, with regard to a discussion of the behaviour in the limit.

    Ouch, sorry, I seem to have offended you. That absolutely wasn't my intention and I was finding the discussion amicable and interesting. 

    I believe that there is a semantic ambiguity in the term 'the bottom of the wheel.' The point of contact does move relative to the road because it is not a point attached to any one point on the tyre. The rubber of the tyre in contact does not. If the point of contact did not move, the car would not move. We may have, here, an example of the difference between verbal and mathematical descriptions.

    Anyway, I repeat my apologies and my assurance that no offence was intended.

    Olly

  8. 17 hours ago, Mandy D said:

    Look at a track-laying vehicle, what you would likely call a "caterpillar tractor", and you will soon see that the part in contact with the ground is certainly not moving forwards, and extrapolate from there. You will find that the top of the track is moving, relative to the ground, at twice the forward speed of the vehicle. Is that clearer for you? Remember, all rotational motion involves constant acceleration, even at uniform angular velocities!

    I understand this point completely but am saying that there is an alternative description which I think is also valid.

    1) We can define the bottom of the wheel as being the strip of rubber molecules in contact with the ground. In this description there is no relative movement between these molecules and the ground at the instant of contact.

    2) We can define the bottom of the wheel as the point of contact between the tyre and the road. In this definition, which is also valid, no specific rubber molecules appear in the description. Indeed, we can ignore them entirely and define the point of contact as lying perpendicularly below the centre line of the axle. This point is not stationary relative to the road (unless you momentarily stop time, in which case everything is stationary). It is moving constantly along the road.

    In the first definition we are looking at relative motion between a strip of molecules and the road and in the second we are looking at movement between a geometric point and the road. Consider these definitions as arising from the observer's point of view.

    - An observer on the tyre will, at the top of their rotation, see themselves as moving over the road at twice vehicle speed and at the bottom, while being squashed, as not moving relative to the road at all.

    - An observer at the roadside is not going round with the tyre and observes the point of contact as moving at vehicle speed continuously.

    When they argue about this in the pub afterwards, the tyre rider will exclaim, 'While I was being squashed I was stuck fast onto the road and not moving at all, relative to it. And I was certainly at the bottom of the wheel.' The roadside observer will counter by booming, 'You were at the bottom of the wheel at that instant but most of the time you weren't. This isn't just about you! It's about the bottom of the wheel. I was pointing a laser at the bottom of the wheel as the car drove along and the laser never stopped moving.'

    In my opinion they are both right, the ambiguity arising from what we consider to be the bottom of the wheel. 
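
    For anyone who'd rather see the two descriptions in numbers, here is a little sketch of my own (the radius and speed are made-up values). A fixed point on the rim of a wheel of radius r, rolling without slipping at speed v, traces a cycloid; its speed is zero at the instant it is at the bottom, yet the geometric contact point glides along the road at v the whole time:

      import math

      r, v = 0.3, 20.0      # wheel radius (m) and vehicle speed (m/s), illustrative
      omega = v / r         # rolling without slipping

      def rim_velocity(t):
          # Fixed rim point on a cycloid: x = v*t - r*sin(wt), y = r - r*cos(wt)
          vx = v - r * omega * math.cos(omega * t)
          vy = r * omega * math.sin(omega * t)
          return vx, vy

      print(rim_velocity(0.0))              # (0.0, 0.0): at the bottom, at rest (sense 1)
      print(rim_velocity(math.pi / omega))  # (40.0, ~0): at the top, moving at 2v
      # The contact point of sense 2 is always at x = v*t, moving at v throughout.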

    Olly

     

  9. On 09/12/2023 at 20:03, Mandy D said:

    This is related to epicyclic gearing problems. It also applies to motor-vehicle wheels, but in a slightly different way that means the top of the tyre is travelling forwards at twice the vehicle's forward speed, whilst the bottom of the tyre is not moving forwards at all. 

     

    This is an interesting point. Not being a mathematician, I look at it conceptually. It seems to me that 'the bottom of the tyre' is an elusive concept because it is not defined by any property of the tyre itself (such as a mark) but by the observer, who notes that every part of the moving tyre is, at some point, the bottom of the tyre. 'The bottom of the tyre' is defined by its position relative to the road. An observer looking at a car moving east to west will define 'the bottom of the tyre' as the bit touching the road and will also note that the bit touching the road does move, east to west, which is contrary to your statement that it is not moving at all. For the roadside observer the bottom of the tyre is a point of contact which certainly is moving.

    That seems to me to be the easy bit. The difficult bit is working out exactly what point of observation discovers no movement forward at the bottom of the tyre. I suppose a number of rubber molecules will be able to shout out, very briefly, 'We are now pinned to the road which we know is not moving!'

    :grin: Olly

  10. To pick up on sharkmelley's point about the number of doses of read noise, an important factor is the level of read noise of your camera. Excellent as they were and still are, CCD cameras had considerable read noise and really did best with long subs, guiding and sky permitting. Modern CMOS cameras have low read noise and don't, therefore, benefit from such long exposures.
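
    A back-of-envelope sketch of the arithmetic, assuming simple shot noise on the sky plus one dose of read noise per sub, added in quadrature (all the numbers are illustrative, not any camera's spec sheet):

      import math

      def stack_noise(total_sky_e, n_subs, read_noise_e):
          # Shot noise on the total sky signal plus n doses of read noise
          return math.sqrt(total_sky_e + n_subs * read_noise_e ** 2)

      sky = 36000.0   # electrons of sky signal over the whole session

      # CCD-era read noise (~8 e-): going from 10 subs to 120 costs you visibly
      print(stack_noise(sky, 10, 8.0), stack_noise(sky, 120, 8.0))   # ~191 vs ~209
      # Low CMOS read noise (~1.5 e-): sub length barely matters any more
      print(stack_noise(sky, 10, 1.5), stack_noise(sky, 120, 1.5))   # ~190 vs ~190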

    Olly

  11. You are absolutely right, processing is hard. In fact it has no ceiling: you can get better and better at it over a lifetime. Capture, on the other hand, is a mechanical process which you can, without an enormous amount of difficulty, get to be perfect within the constraints of your equipment.

    My advice would be to watch tutorials or read books by people who know what they are talking about: Adam Block, Warren Keller, our own Steve Richards, Robert Gendler, R Jay GaBany. The net is full of U-tubing clowns who flounder around dragging sliders this way and that till they say they 'get something they like.' The instant you hear that phrase, turn them off.

    As you are learning processing, make it a rule to understand what you are doing. Clicking and thinking are two different activities!

    Olly

  12. 2 hours ago, alacant said:

    LOL. It's called experimentation!

    Much of what we do, both on this forum and on the ground, is answer the 'what if?' or 'what should I do?' type of questions. On the rare occasions where I get a telescope to myself, I like to try and answer some of those questions. If the experiment produces a worthwhile image, that's simply a bonus. It may be irrelevant to the many, but if I can give back just a tiny piece of what this forum has given to me, then surely it is worth sharing.

    Certainly. I religiously place experimentation over theory.

    OK, the image is, I think, deep and clean, and the issue of exposure length is no longer all that critical since the CMOS chip replaced the CCD. What I would want to do, were it my image, is sort out the saturation around the Trapezium. Whether or not you'd need short exposures for that region depends on what the exposure is like in linear form. If it isn't saturated when linear, all you need is a separate stretch blended in using one of the HDR routines, either ready-made or hand-made, so to speak. However, I don't know how you feel about a processing step like that.
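
    In case it's useful, a minimal sketch of the hand-made version of that blend, assuming the linear stack is a float array scaled 0-1 (numpy only; the stretch strengths and mask threshold are made-up numbers to show the idea, not a recipe):

      import numpy as np

      def stretch(img, k):
          # Simple asinh stretch; bigger k lifts the faint signal harder
          return np.arcsinh(k * img) / np.arcsinh(k)

      linear = np.random.rand(512, 512)   # stand-in for your linear stack

      deep = stretch(linear, 500.0)   # hard stretch: nebula shows, Trapezium blows out
      soft = stretch(linear, 20.0)    # gentle stretch: the core detail survives

      # Blend the gentle stretch into the brightest regions only, via a smooth mask
      mask = np.clip((deep - 0.8) / 0.2, 0.0, 1.0)
      hdr = (1.0 - mask) * deep + mask * soft

    The ready-made HDR routines presumably do something like this with cleverer masks.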

    Personally, I'd also lose the star halos but that would take time and involve another layer of processing.

    Olly

  13. I never really understand what you're trying to do. My agenda is simple: I try to take the best picture I can, within my means, and I have no preconceived ideas regarding what is good or bad, either in hardware or software, capture or processing, so long as I don't invent.

    You seem to want to make astrophotos within a set of constraints of your own making: this approach is good, right and laudable, but that one is bad, wrong and reprehensible. Perhaps you might publish a kind of manifesto saying, 'These are the constraints I impose on myself when making astrophotos.' You are, of course, perfectly entitled to work within whatever constraints you impose upon yourself: we all do this.

    I really don't think you need me or anybody else to offer a critique of your image. Its many strong points and its few weak ones are extremely obvious and you must be perfectly well aware of them. If I were to engage with them, though, I fear that I would be engaging with your unpublished manifesto on how astrophotos should be made.

    Seriously, why not publish your 'Rules of engagement?'

    Olly

  14. A friend sent me a link to this intriguing video.

    I watched the first part and found myself amazed by what it revealed, but I didn't see what was coming once it was applied to astronomy. (I'm not a mathematician!) However, I've studied the sidereal and solar day on an astronomy course and performed observations to measure the sidereal day, so I really should have spotted the connection. D'oh.
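
    For anyone else who missed the connection: over a year the Earth turns once more relative to the stars than the number of solar days, which is the same 'extra rotation' the video demonstrates. Hence, using the usual day counts for the tropical year,

      sidereal day = 24 h × 365.2422 / 366.2422 ≈ 23 h 56 min 4 s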

    Olly

  15. 7 minutes ago, Allinthehead said:

    Is it not the Squid data you sent me for the image below?

    [image: 9-panel Squid mosaic]

    It must be because it's the only Squid data I've ever shot. I'd forgotten about that. You did a very good job with it!

    16 minutes ago, AKB said:

    Ooh, that must have changed your online life!  Certainly did mine.  Just the thing for big images (like this) and system updates.

    Tony

    Certainly did, and even more so for the six people who have robotic scopes based here. Upload speed went up by one thousand three hundred times. :grin: You know the village, of course, and there are probably six people living here who actually use the internet. Lord knows why we were blessed with a fibre connection a mountainous 8km from the main line, but we were.

    Olly

  16. 9 hours ago, Allinthehead said:

    I believe I recognise that Squid data!

    Are you sure? It came from a very long time ago and originally looked like this:

    [image: the original Squid data]

    By using star removal and massive doses of NoiseXt I was able to get it to what you see in the present image, but this did involve completely erasing all background and applying only the nebula itself, which is a bit borderline, ethically, if I'm honest. Needs must. In our earlier version, prior to adding the Fireworks, we used decidedly better Squid data supplied by SGL member AstroGS and published jointly: https://www.astrobin.com/yrz3x8/ but the final 4-panel version is all 'in house.'

    Olly

     

     

     

  17. 12 hours ago, alacant said:

    Not imaging. Processing.

    If it's not working out, it's only because you have poor or not enough data. Best to go and get more frames the next night than sit in front of a screen over-processing data which will never yield.

    No theory in support of my claims. Just lots of wasted time!

     

    I agree. On the other hand, some processing tasks are inherently complicated even with good data. Mosaics are the obvious example because small gradients, insignificant in a single frame, add up in a mosaic. Very, very faint signal also tends to be tricky. I mean signal so faint that doubling the data will not significantly assist in the task. And then some systems just do throw up artifacts in need of cosmetic correction. I wish they didn't but they do, and I think that making a good job of them is rewarding.

    Our cameras also have limited dynamic range, emphatically more limited in a single exposure length, so using multiple exposures or combining multiple stretches is a slightly involved process which can extend the range. I think this is perfectly valid and, again, enjoyable to do.

    Olly

  18. On 18/10/2023 at 23:54, alacant said:

     

    It could be that the image looks unusual these days: none of the stars have been manipulated

    Of course they have! They are no longer linear but have been stretched. Removing them, masking them, whatever, is just a way of giving their cores a different stretch from their edges, which is exactly what any curves or levels stretch will do. If you want to post un-manipulated stars just post your linear image.
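
    To put a number on that: any nonlinear stretch applies a different local gain at a star's faint edge than at its bright core. A tiny sketch of my own with an asinh stretch (illustrative values):

      import numpy as np

      def stretch(x, k=100.0):
          return np.arcsinh(k * x) / np.arcsinh(k)

      eps = 1e-6
      for level in (0.01, 0.9):   # faint star edge vs nearly saturated core
          gain = (stretch(level + eps) - stretch(level)) / eps
          print(f"pixel level {level}: local gain {gain:.2f}")   # ~13.3 vs ~0.21

    The edge is amplified roughly sixty times more than the core, so the core gets a different stretch from the edges whether you mask the stars or not.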

    When an image is processed, it's processed.

    Olly

  19. 1 hour ago, alacant said:

    Hi

    I had a go with a refractor but it takes ages; 6 hours. I don't think an hour or so would get much detail. Do post your image and prove me wrong though!

    I think that's important. Even with just 72 minutes, it's tempting to push beyond what's really there.

    When to stop? Having to continue processing to combat artefacts (not just noise) you've introduced is perhaps a good time to decide where processing ends and over-processing begins.

    Over-processing can introduce artifacts, certainly, but they can also be inherent to the system. Personally, I see post processing as an activity which involves 1) extracting what's in the data and 2) performing cosmetic correction of artifacts. This involves a negotiation between what's in the data and what's up there in the sky. I guess we'll all differ to a greater or lesser extent on where these boundaries lie, but my bottom line is probably this: if it isn't in the sky I don't want it in my picture. Then again, even that doesn't really work because all stars are point sources in amateur instruments. We are doomed to live in a world of compromises. :grin:

    Still, I agree with your definition of over-processing, I think.

    You like imaging quickly. Have you considered a RASA? They do leave you with some cosmetic work to do, though.

    Olly
