
Posts posted by vlaiv

  1. Yes, the greatest improvement will come from using a dedicated planetary camera.

    These two images were taken roughly a month apart - both with a 5" newtonian, the first with a modified web cam (Logitech C270 with the front lens removed) and the second with a proper planetary camera (although USB 2.0).

    [Image: capture with the modified web cam]

    [Image: capture with the dedicated planetary camera]

    Almost all conditions, including my planetary imaging ability, were the same - the only difference was the camera used.

    • Like 2
  2. If you really want to give a focal reducer a try - then simply go for a cheap x0.5 1.25" reducer.

    I've used it with a small sensor on several occasions with different scopes and it works.

    Varying the spacing will vary the magnification / compression, so you can experiment.

    See here for some ideas of what to expect:

     

    • Like 1
  3. A larger FOV on lunar is usually achieved by making a mosaic rather than by using a focal reducer.

    Focal reducers, while giving you a large FOV, introduce optical aberrations. These might not be as bad for deep sky imaging, where atmospheric effects are dominant, but if you plan on doing lucky-type planetary imaging (and of course you should - you have the right gear for it) - then you don't want to lose any sharpness.

    You simply image 4 separate panels (or more) and stitch them together to create a large image. I think it took 9 panels the last time I did lunar imaging with a 4" mak and ASI178 to cover the whole lunar disk (the ASI178 is a slightly larger sensor than the ASI224, but the mak has a bit more focal length than the SCT, so they should be in the same ballpark).
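    The panel count can be estimated from the sensor size and focal length. A minimal sketch - the numbers below are illustrative assumptions (ASI178 sensor roughly 7.4 x 5.0 mm, a 4" mak at around 1300 mm focal length, Moon about 0.52 degrees across, 20% panel overlap for stitching), not measured values:

    ```python
    import math

    def fov_deg(sensor_mm, focal_mm):
        # Field of view along one sensor axis, in degrees
        return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

    def panels_needed(target_deg, sensor_mm, focal_mm, overlap=0.2):
        # Panels along one axis, reserving the given overlap fraction
        usable = fov_deg(sensor_mm, focal_mm) * (1 - overlap)
        return math.ceil(target_deg / usable)

    # Assumed numbers: ASI178 ~7.4 x 5.0 mm, ~1300 mm focal length,
    # Moon ~0.52 degrees across
    nx = panels_needed(0.52, 7.4, 1300)
    ny = panels_needed(0.52, 5.0, 1300)
    print(nx, "x", ny, "=", nx * ny, "panels")  # 2 x 3 = 6 with these assumptions
    ```

    With these assumed figures the estimate lands in the same ballpark as the 9 panels mentioned above; the exact count depends on the true focal length and how much overlap you leave for the stitcher.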

    • Like 1
  4. 5 hours ago, The Admiral said:

    Surely it doesn't really matter whether it's a very wide angle lens or is multi-panel mosaic, so long as both celestial poles are in the image.

    Ian

    I guess that the continuity of star trails would be the giveaway - or rather, which stars made which segment of a circle.

    With a panoramic mosaic you need to capture the separate parts and then combine them - and you can't capture them all at the same time.

     

    • Like 1
  5. 8 hours ago, Ags said:

    Would this still be true if the stars were spinning? As they approach, would tidal effects cause the spin to be transferred into orbital motion?

    For all intents and purposes - they would still collide, but in principle - it would have an effect as long as there is any sort of inhomogeneity in stellar composition. This would happen even if the stars were not spinning (any type of asymmetry would cause rotation to start).

    Spin itself just contributes to a stronger curvature of spacetime, since it represents energy - and mass and energy are equivalent.

     

  6. That is quite normal if done with a very wide lens - a fish-eye type that captures more than 180 degrees in a single go.

    Maybe the simplest explanation would be to use your own hands.

    Circle both of your hands in the same direction (like a gym exercise) - here is a drawing of what I mean:

    [Image: drawing of both hands circling in the same direction]

    Say that you move them in the "forward" direction (like the butterfly swimming stroke) - that is exactly how stars move in the northern and southern hemispheres. They perform large circles - but in reality they circle in the same direction (because it's the Earth that is spinning).

    But if you look at your left hand while doing this - it will look as if it's circling clockwise, and if you turn your head to the right to observe your right hand - it will look like it's spinning counterclockwise.

    The above images are simply taken with a lens that can "look at both hands at the same time" - meaning it has more than a 180-degree field of view.

    • Like 2
    • Thanks 1
  7. 2 minutes ago, The60mmKid said:

    Many of the conversations on splitting doubles that I've seen online make claims like, "___ telescope can theoretically split ___ doubles," while taking for granted that atmospheric conditions and eyesight are uncontrollable and important variables. So, I anticipated a similar reply when asking about binoculars. I figure most people asking such questions on such a forum know about the impacts of seeing, etc., so that's partially why I'm surprised that we seem to be considering binoculars in a different way than we do telescopes in this instance.

    I think that it's down to two things:

    - most people who use telescopes to split doubles are familiar with the influence of seeing, so it's often omitted for that reason - but when one wants to be fully accurate, it is included

    - eyesight is not that important a variable if one can change magnification. You select a magnification that allows you to easily see what the telescope is capable of. There is seldom discussion (though it does happen) of what you can split at, say, x40 power. Most of the time the recommendation is to go with very high powers - even higher than one would use for planetary, for example. That removes eyesight from the equation, since at those magnifications the eye has no issue resolving things. With binoculars it is a factor, since one is tied to a fixed magnification - and that magnification tends to be on the very low side, which is not suitable for splitting doubles precisely because of eyesight limits.

    • Thanks 1
  8. 2 minutes ago, The60mmKid said:

    Why is that we talk about theoretical limits of telescope resolution all the time but uncontrolled variables suddenly plague us when it's binoculars 🤔

    Am I missing something?

    Not sure that is true.

    I'm sure that sky conditions play a major part when talking about visual separation of doubles as well.

    Sometimes the theoretical resolution of a telescope is discussed in the context of planetary imaging, for example. There we don't really entertain these variables, as they are effectively excluded by the process itself (lucky imaging, where we discard subs that are too distorted by the atmosphere).

    • Like 1
  9. 44 minutes ago, The60mmKid said:

    After further reflection, my confusion has returned. You mention 3.33' (arc minutes) here, whereas @Mr Spock mentioned 3.3" (arc seconds). Based on my observing experience, the former strikes me as quite wide, and the latter strikes me as quite close.

    Also, is aperture not a variable in this calculation? Would there not be a difference between the resolving ability of a 18x70 binocular and an 18x35 (hypothetically) binocular? That's surprising to me since aperture plays a clear role in the resolving ability of telescopes.

    Ah, sorry - my bad, I pressed ' instead of " (which is just a shift away).

    Aperture is a variable - but binoculars resolve independently of the eyes (they produce the image regardless of whether someone is actually looking through them), so the two don't really compound.

    If the binoculars resolve the pair, and the eyes are able to resolve that already-resolved image - we have a separation; in other cases we don't (if either the binoculars or the eyes can't do their part - or both).

    • Like 2
  10. I think that you won't come close to the theoretical resolution of a 70mm aperture, for several reasons.

    The first is the quality of the optics, but more important is the magnification - it is too low.

    If we assume perfect optics, then it comes down to the visual acuity of the observer.

    https://en.wikipedia.org/wiki/Visual_acuity

    There is a table on the above page that lists MAR for different grades of visual acuity, and that is the important factor - MAR is the minimum angle of resolution, expressed in arc minutes in said table.

    20/20 vision equates to a MAR of 1 - which means that a 20/20 person needs two equal doubles at one arc minute of separation to just resolve them (see the gap).

    Since you have binoculars that provide x18 magnification - that angle will actually be 1 arc minute / 18 = 60 arc seconds / 18 = 3.33' separation.

    This is for a person with 20/20 vision and perfect optics.

    Binoculars are often fast achromats that suffer from spherical aberration, which will somewhat soften the view, so the actual figure will be larger - and if you have less than 20/20 vision, this will add to the separation needed.

    For example, 20/30 vision adds 50% to the separation, so you'll be able to resolve around 5'.
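    The arithmetic above, with the separation expressed in arcseconds (per the ' vs " correction in the earlier reply), as a minimal sketch:

    ```python
    def min_separation_arcsec(mar_arcmin, magnification):
        # Smallest double-star separation (in arcseconds) the eye can just
        # resolve through an instrument of the given magnification
        return mar_arcmin * 60.0 / magnification

    # 20/20 vision: MAR = 1 arc minute, x18 binoculars
    print(min_separation_arcsec(1.0, 18))  # ~3.33 arcseconds
    # 20/30 vision: MAR = 1.5 arc minutes (50% more separation needed)
    print(min_separation_arcsec(1.5, 18))  # 5.0 arcseconds
    ```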

     

    • Like 2
  11. 1 hour ago, Louis D said:

    Just buy another diagonal for use with all of your other eyepieces and leave that modified diagonal dedicated to widest field viewing.  Simply swap diagonals as a unit to move upward in power from the widest field unit.

    I'll probably just swap out my F/10 achromat for this:

    https://www.firstlightoptics.com/stellamira-telescopes/stellamira-110mm-ed-f6-refractor-telescope.html

    For some inexplicable reason, I'm sort of drawn to that scope :D. It's not color-free, but there is something about it ... and it also seems able to deliver very good views with that combination of glasses, according to this:

    https://www.telescope-optics.net/commercial_telescopes.htm#error

    There will be some residual color, but at that level (somewhat more than a 4" F/15) - it will be less than in the 4" F/10 that I already have, and I'm not particularly bothered by CA in that scope.

    A bit more aperture, less focal length (so it can show wider fields) and a bit less color, with the potential for very sharp views (e-line Strehl design limit of 0.997 - almost perfect in that respect) - what's not to like?

  12. 1 minute ago, Marvin Jenkins said:

    Would a wide field instrument be the thing to find a 20 mag or dimmer object?

    I'm more concerned with the lack of resolution of such an instrument than with its FOV.

    FOV is just a bonus, as it allows searching more sky with the same exposure. Narrow-field telescopes need to take multiple separate exposures to cover the same part of the sky, so wide field is clearly a plus here - it allows for a faster search.

    Resolution in terms of pixel size is not the issue either - but resolution in terms of optical performance is. RASA telescopes have a rather high star FWHM (compared to diffraction-limited optics of the same aperture), and I'm afraid that in dense star fields, multiple faint stars will blend together and mask any object that is present.

  13. 4 minutes ago, Nik271 said:

    I believe it's planet X OP is talking about. The problem is that the area of sky the models suggest is too big and the chance of occultation of a star is very small. It's better to look for movement but at these magnitudes (mag 20 or more ) you need big telescopes which means small fields of view, expensive telescope time, so the search is still going on.

    I wonder if telescopes such as the RASA line could be used for that?
    These are wide-field scopes - they can cover a large area of the sky, and they can do it relatively "fast".

  14. 5 minutes ago, iantaylor2uk said:

    I think the argument that more photons are lost when the curtains are open is wrong, because those photons which are transmitted through the window (and effectively "lost" from the room) would have been those photons that would have been absorbed by the curtains (and also "lost" from the room) 

    That is only valid if we treat the curtain as 100% absorbent - a pure black body.

    Some photons do get reflected off the curtain and contribute to the total number of "in flight" photons that represent the illumination of the room. Those that go out the window (assuming an open window - no glass, no reflection) won't spend any more time in the room.

    While we are on the subject of black bodies - what are the temperatures inside and outside?

    Those also contribute some photons, and if it's hotter outside - those photons will enter the room and contribute to the overall illumination :D

     

  15. 1 minute ago, wulfrun said:

    I think OP is using wrong logic in thinking of finite amounts of light being emitted. Not the explanation at all.

    I thought so too at one point - but the argument still stands.

    Let me explain.

    We have a dynamic system in equilibrium - there is a light source, some photons that are "in flight", and there is a "sink" - which is pretty much anything in the room.

    If we think of the photons that leave the room through the window - each could otherwise have been in one of two states: either sunk or "in flight".

    Removing them from the system is not just equivalent to them being sunk - we also remove some of their "in flight" time - and thus we reduce the light level in the room.
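    The equilibrium argument can be sketched numerically: in steady state, the number of photons "in flight" equals the emission rate divided by the total loss rate, so adding an escape channel (the open window) shortens the mean photon lifetime and lowers the illumination. The rates below are made-up illustrative numbers, not physical measurements:

    ```python
    def steady_state_photons(emission_rate, loss_rates):
        # In equilibrium, photons "in flight" = emission rate / total loss rate
        # (each loss channel removes photons in proportion to its rate)
        return emission_rate / sum(loss_rates)

    # Hypothetical rates: closed curtain absorbs everything at one rate;
    # open window splits losses between absorption and escape - but the
    # escape channel raises the TOTAL loss rate
    closed = steady_state_photons(1e6, [2000.0])
    opened = steady_state_photons(1e6, [1500.0, 800.0])
    print(closed > opened)  # True: the open window dims the room
    ```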

    • Like 1
    • Thanks 1
  16. 44 minutes ago, robin_astro said:

    I want to know what makes a shape a shape 🙂

    That one is easy :D

    Any number of points in 2D connected into a contour (no two edges cross and all points are connected to the contour) represents a shape.

    I think we can extend this into higher dimensions (thus defining a 3D shape - though here edges must be replaced with faces or something like that: no edges intersect other edges or faces, and faces are 2D shapes defined by some points and edges).
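    That "no two edges cross" condition is the usual definition of a simple polygon, and it can be checked directly. A minimal sketch using an orientation test, brute-forcing over non-adjacent edge pairs (degenerate collinear cases are ignored for brevity):

    ```python
    def ccw(a, b, c):
        # Orientation of the triple (a, b, c): >0 counterclockwise, <0 clockwise
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    def segments_cross(p1, p2, q1, q2):
        # Proper intersection test: each segment straddles the other's line
        d1, d2 = ccw(q1, q2, p1), ccw(q1, q2, p2)
        d3, d4 = ccw(p1, p2, q1), ccw(p1, p2, q2)
        return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

    def is_simple_polygon(points):
        # A "shape" in the sense above: closed contour, no crossing edges
        n = len(points)
        edges = [(points[i], points[(i + 1) % n]) for i in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                # Skip adjacent edges - they share an endpoint by construction
                if j == i + 1 or (i == 0 and j == n - 1):
                    continue
                if segments_cross(*edges[i], *edges[j]):
                    return False
        return True

    print(is_simple_polygon([(0, 0), (2, 0), (2, 2), (0, 2)]))  # True: square
    print(is_simple_polygon([(0, 0), (2, 2), (2, 0), (0, 2)]))  # False: "bowtie"
    ```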

     

  17. 3 hours ago, chops said:

    However, to my eye, the Laniakea and component superclusters have little to no symmetry, when compared with the diagram of the galaxy ‘big ring’ above. Presumably their science is rigorous enough to exclude the likelihood that the arrangement is chance, and it has therefore been determined without doubt that the ring is in fact that - something with the order that we so desperately seek.

    [Image: simulation of large-scale structure, with two regions marked]

    Orange arrow - a thing that looks like the Laniakea super cluster

    Red arrow - a thing that looks like a ring structure.

    And that is just a simulation. We can easily identify things that look like .... (insert your favorite shape here).

    The point of the cosmological principle is: if we take some large enough cube / region of the universe, and we take another such large region - they look roughly the same in structure - in this case "foamy".

    The element of that structure is the filament - and the bumps in those filaments, or parts of them, are on average below 1 Gly long. This is the largest element of the structure.

    Now, if we were to take a large enough region of space and find a part of it that is, say, 2 Gly or more in size and distinctly more or less dense than the rest - and we don't see such a thing in a different large region of space - then we would say: look, it is a structure - an overdensity that we did not expect.

    Something being in the "shape" of a circle, an elephant or a unicorn is not a structure in that sense - it is just an ordering of stuff with the same average density - and that is fine by the cosmological principle.

    • Like 1
  18. I think that we need to be careful when deciding what a structure is ...

    Here is an image that shows what we believe the structure of the universe to be on large scales:

    [Image: Structure of the Universe]

    Further, let's look at the Laniakea super cluster and its shape:

    [Image: Laniakea super cluster]

    This is our "home" super cluster - marked in blue is our Local Group, in the Virgo super cluster, which is part of Laniakea.

    Now observe Laniakea with its neighboring super clusters:

    [Image: Laniakea and its neighboring super clusters]

    Laniakea is about 500 Mly in diameter, and its first neighbors are the Shapley super cluster and the Coma super cluster. In fact, the Perseus-Pisces super cluster looks like it's also "connected". If we "add" all these structures into a chain - we get a structure that is 1.3 Gly or more in size - but is it really a structure, or the "beginning" of the large-scale cosmic foam shown in the first diagram?

    I'm sure one can identify very interesting "rings", "walls" and "daisy chains" of galaxy clusters across several neighboring super clusters - but is that really a standalone structure, or are we just connecting the dots so that they resemble familiar shapes?

     

    • Like 5
  19. That is some sort of data transfer issue.

    Images are often read out from the sensor bottom-up - and at some point the communication was interrupted or something happened, and instead of the subsequent "scan line" being sent, the last one was repeated - probably because it remained in the "receive buffer" (there is no need to clear it, since it will be overwritten by the next line - but as it happened, the next line did not arrive).
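    One way to confirm that diagnosis is to look for byte-identical adjacent rows in the saved frame - real sensor rows carry noise, so exact repeats almost never occur naturally. A minimal sketch, assuming numpy and a 2D mono frame:

    ```python
    import numpy as np

    def find_repeated_rows(img):
        # Indices of rows that are identical to the previous row -
        # a typical signature of a dropped / re-sent scan line
        same = np.all(img[1:] == img[:-1], axis=1)
        return np.flatnonzero(same) + 1

    # Tiny synthetic frame: row 3 is a verbatim repeat of row 2
    frame = np.array([
        [10, 20, 30],
        [11, 21, 31],
        [12, 22, 32],
        [12, 22, 32],   # <- duplicated scan line
        [14, 24, 34],
    ])
    print(find_repeated_rows(frame))  # [3]
    ```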

     

    • Like 1
    • Thanks 1
  20. 11 minutes ago, LDW1 said:

    As a bit of an aside, what is the purpose of the Adjust +/- slider on the enhancing screen,  on the right side near the AF. Does / will it change the light / dark / contrast of the photos  end result ?  As you darken it you lose outer detail but its sharper, if you lighten it you get a broader detail but its very faded, does that carry through to the final product ?  Everything that I have read from ZWO about the SS I haven't found it mentioned unless I have missed it.

    No idea. I was speaking in general about imaging - I have never worked with the SS, so I don't know what each particular command does.

    It could be either a contrast enhancement - which is just an adjustment of the black / white point - or it could be a gamma setting, which is really a type of non-linear stretch.
