Everything posted by vlaiv

  1. A larger FOV on lunar is usually achieved by making a mosaic rather than by using a focal reducer. Focal reducers, while giving you a large FOV, introduce optical aberrations. These might not be as bad for deep sky imaging, where atmospheric effects are dominant, but if you plan on doing lucky-type planetary imaging (and of course you should, you have the right gear for it) - then you don't want to lose any sharpness. You simply image 4 separate panels (or more) that you stitch together into a large image. I think it took 9 panels the last time I did lunar imaging with a 4" mak and ASI178 to cover the whole lunar surface (the ASI178 is a bit larger sensor than the ASI224, but the mak has a bit more focal length than the SCT, so they should be in the same ballpark).
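
As a rough illustration of the panel count, here is a small sketch that estimates a sensor's field of view and how many panels a lunar mosaic would need. The sensor size (~7.4 x 5.0 mm for the ASI178), the ~1300 mm focal length for the 4" mak, the Moon's ~0.55 degree apparent diameter and the overlap fraction are all assumed figures - plug in your own.

```python
import math

def fov_deg(sensor_mm, focal_length_mm):
    """Field of view along one sensor axis, in degrees."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_length_mm)))

def panels_needed(target_deg, fov_w, fov_h, overlap=0.1):
    """Panels needed to tile a square target, keeping 'overlap' fraction between panels."""
    step_w = fov_w * (1 - overlap)
    step_h = fov_h * (1 - overlap)
    return math.ceil(target_deg / step_w) * math.ceil(target_deg / step_h)

# Assumed figures: ASI178 sensor ~7.4 x 5.0 mm, 4" mak ~1300 mm focal length, Moon ~0.55 deg
w = fov_deg(7.4, 1300)   # ~0.33 deg
h = fov_deg(5.0, 1300)   # ~0.22 deg
print(panels_needed(0.55, w, h))  # ~6 with these numbers; more overlap or framing margin raises the count
```
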
  2. I'm not known to be envious, but seeing that these forum "stickers" come with real world mugs ....
  3. I guess that the continuity of star trails would be the giveaway, or rather which stars made which segment of the circle. With a panoramic mosaic you need to capture the separate parts and combine them - but you can't do that at the same time.
  4. For all intents and purposes - they would still collide, but in principle - it would have an effect as long as there is any sort of inhomogeneity in stellar composition. This would happen even if the stars were not spinning (any type of asymmetry would cause rotation to start). Spin itself just contributes to stronger curvature of spacetime, as it represents energy, and mass and energy are equivalent.
  5. That is quite normal if done with a very wide lens - a fisheye type that captures more than 180 degrees in a single go. Maybe the simplest explanation would be using your own hands. Circle both of your hands in the same direction (like a gym exercise), moving them in the "forward" direction (like the butterfly swimming technique) - that is exactly how stars move in the northern and southern hemispheres. They perform large circles - but in reality they circle in the same direction (because it's the Earth that is spinning). But if you look at your left hand while doing this - it will look as if it's circling clockwise, and if you turn your head to the right to observe your right hand - it will look like it's spinning counter-clockwise. The images above are simply made with a lens that can "look at both hands at the same time" - meaning it has more than 180 degrees field of view.
  6. I think that it's down to two things:
     - most people that use telescopes to split doubles are familiar with the influence of seeing, and it's often omitted for that reason, but when wanting to be fully accurate in a description - it is included
     - eyesight is not that important a variable if one can change magnification. You select a magnification that allows you to easily see what the telescope is capable of. There is seldom discussion (but it does happen) of what you can split with, say, x40 power or similar. Most of the time the recommendation is to go with very high powers, even higher than one would use for planetary, for example. That removes eyesight from the equation, as at those magnifications the eye has no issues resolving things.
     With binoculars it is a thing, since one is tied to a certain magnification - and that magnification tends to be on the very low side of things - which is not suitable for splitting doubles because of eyesight issues.
  7. Not sure if that is true. I'm sure that sky conditions play a major part when talking about visual separation of doubles as well. Sometimes the theoretical resolution of a telescope is discussed in the context of planetary imaging, for example. There we don't really entertain these variables, as they are effectively excluded by the process of planetary imaging (lucky imaging, where we discard subs that are too distorted by the atmosphere).
  8. Ah, sorry - my bad, I pressed ' instead of " (which is just a shift away). Aperture is a variable - but binoculars resolve independently of the eyes (they produce the image regardless of whether someone is actually looking through them), so things don't really compound. If the binoculars resolve the pair and the human eyes are able to resolve that resolved image - we have a separation; in other cases we don't (if either the binoculars or the eyes can't do their part - or both).
  9. I think that you won't come close to the theoretical resolution of a 70mm aperture, for several reasons. The first is the quality of the optics, but more important is the magnification - it is too low. If we assume perfect optics, then it comes down to the visual acuity of the observer. https://en.wikipedia.org/wiki/Visual_acuity There is a table on the above page that lists MAR for different grades of visual acuity, which is the important factor - it is the minimum angle of resolution and is expressed in arc minutes in said table. 20/20 vision equates to an MAR of 1 arc minute - which means that a 20/20 person needs two equal doubles at one arc minute of apparent separation to be able to just resolve them (see the gap). Since you have binoculars that provide x18 magnification - that angle will actually be 1 arc minute / 18 = 60 arc seconds / 18 = 3.33" of separation. This is for a person with 20/20 vision and perfect optics. Binoculars are often fast achromats that suffer from spherical aberration, which will somewhat soften the view, so the actual figure will be larger, and if you have less than 20/20 vision - this will add to the separation needed. For example, 20/30 vision adds 50% to the separation, so you'll only be able to resolve around 5".
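
To make the arithmetic above explicit, here is a small sketch of that eye-limited calculation. The MAR values and the Dawes-type constant (~116 / aperture in mm) are the usual textbook figures; treat the exact numbers as approximate.

```python
def min_split_arcsec(mar_arcmin, magnification):
    """Smallest double-star separation (arcsec) the eye can split through the optics:
    the eye's minimum angle of resolution divided by the magnification.
    Ignores optical quality and seeing."""
    return mar_arcmin * 60.0 / magnification

print(min_split_arcsec(1.0, 18))   # 20/20 vision (MAR ~1'):   ~3.3"
print(min_split_arcsec(1.5, 18))   # 20/30 vision (MAR ~1.5'): ~5.0"

# Compare with what the 70 mm aperture could do on its own (Dawes-type limit, ~116 / D_mm):
print(116 / 70)                    # ~1.7" - well beyond what x18 lets the eye reach
```
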
  10. I'll probably just swap out my F/10 achromat for this: https://www.firstlightoptics.com/stellamira-telescopes/stellamira-110mm-ed-f6-refractor-telescope.html For some inexplicable reason, I'm sort of drawn to that scope. It's not color free, but it has something to it ... and also, it seems to be able to deliver very good views with that combination of glasses, according to this: https://www.telescope-optics.net/commercial_telescopes.htm#error There will be some residual color, but at that level (somewhat more than a 4" F/15) - it will be less than the 4" F/10 that I already have, and I'm not particularly bothered by CA in that scope. A bit more aperture, less focal length (to be able to show wider field views), a bit less color, and the potential for very sharp views (e-line Strehl design limit of 0.997 - which is the potential to be almost perfect in that line) - what is not to like?
  11. I've since managed one observing session with this combination - and it works great. The ergonomics are not that great, especially when using the diagonal and eyepiece in the regular way and then trying to screw in all the adapters in the dark, but once properly set up - it gives great reduced images. Very nice stars to the edge, and it can fit the whole of M31 in the FOV.
  12. I'm more concerned with the lack of resolution of such an instrument rather than its FOV. FOV is just a bonus, as it allows searching more sky using the same exposure. Narrow field telescopes need to take multiple separate exposures to cover the same part of the sky, so a wide field is clearly a plus here as it allows for a faster search. Resolution in terms of pixel size is not the issue either - but resolution in terms of optical performance is. RASA telescopes have a rather high star FWHM (compared to diffraction limited optics of the same aperture) and I'm afraid that in dense star fields - multiple faint stars will blend together and mask any sort of object that is present.
  13. I wonder if telescopes such as the RASA line could be used for that? These are wide field scopes - they can cover a large area of the sky, and they can do it relatively "fast".
  14. That is only valid if we think of the curtain as being 100% absorbent - a pure black body. Some of the photons do get reflected off the curtain and contribute to the total number of "in flight" photons that represent the illumination of the room. Those that go out of the window (if we assume an open window - no glass and no reflection) won't spend any more time in the room. While we are on the subject of black bodies, what are the temperatures inside and outside? Those also contribute some photons, and if the outside is hotter - those photons will enter the room and contribute to the overall illumination.
  15. I thought so too at one point - but the argument still stands. Let me explain. Since we have a dynamic system in equilibrium - there is a light source, some photons that are "in flight" and there is a "sink" - which is pretty much anything in the room. If we think of the photons that leave the room through the window - those could have been in one of two states - either sunk or "in flight". Removing them from the system is not just the equivalent of them being sunk - we also remove some of their "in flight" time - and thus we reduce the light level in the room.
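
A toy way to see the argument: in equilibrium the number of photons "in flight" is roughly the emission rate times the mean time a photon survives before being absorbed or escaping. An open window shortens that mean lifetime, so the steady-state count (and hence the room's brightness) drops. All numbers below are made up purely for illustration.

```python
def photons_in_flight(emission_rate, mean_lifetime_s):
    """Steady-state 'in flight' photon count: rate in, times average time spent flying."""
    return emission_rate * mean_lifetime_s

R = 1e20                 # photons per second from the lamp (made-up figure)
t_closed = 30e-9         # mean photon lifetime with the window covered (made up)
t_open = 20e-9           # shorter: some bounces now escape instead of continuing to fly

print(photons_in_flight(R, t_closed))   # more photons in flight -> brighter room
print(photons_in_flight(R, t_open))     # fewer photons in flight -> dimmer room
```
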
  16. Just to throw in a wrench ... Pure glass reflects about 4% of the light that hits it. It can even become brighter in the room if the curtains absorbed more than 96% of the light before they were pulled aside.
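
The comparison in that last point is just this (the 4% figure is for a single uncoated glass surface; the curtain number is an assumed example):

```python
glass_reflectance = 0.04            # plain uncoated glass sends roughly 4% of the light back
curtain_reflectance = 1.0 - 0.97    # a curtain absorbing 97% reflects only ~3%

# If the curtain reflected less than the bare glass does, pulling it aside can leave
# slightly more light bouncing around the room.
print(glass_reflectance > curtain_reflectance)   # True for these assumed numbers
```
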
  17. This video sheds some light - but not much. Two key points to take away:
     - the mass density was calculated from the attenuation of quasar light? (if I got that correctly)
     - the research claims 5.2 sigma confidence of an over-density in said region - that is significant, if true
  18. That one is easy. Any number of points in 2D connected into a contour (no two edges cross and all points are connected to the contour) represents a shape. I think that we can extend this into higher dimensions (thus defining a 3D shape - but here edges must be replaced with faces or something like that - no edges intersect with other edges or faces, and faces are 2D shapes defined by some points and edges).
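
A minimal sketch of that 2D definition, assuming "no two edges cross" means no two non-adjacent edges properly intersect (collinear overlaps are not handled here):

```python
from itertools import combinations

def ccw(a, b, c):
    """Cross product sign: >0 if a->b->c turns counter-clockwise, <0 if clockwise, 0 if collinear."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, q1, q2):
    """True if the two segments properly intersect (cross at a single interior point)."""
    return (ccw(p1, p2, q1) * ccw(p1, p2, q2) < 0 and
            ccw(q1, q2, p1) * ccw(q1, q2, p2) < 0)

def is_shape(points):
    """Points joined in order (and closed back to the start) form a 'shape'
    if no two non-adjacent edges cross."""
    n = len(points)
    edges = [(points[i], points[(i + 1) % n]) for i in range(n)]
    for i, j in combinations(range(n), 2):
        if j == i + 1 or (i == 0 and j == n - 1):
            continue    # adjacent edges share a vertex - skip them
        if segments_cross(*edges[i], *edges[j]):
            return False
    return True

print(is_shape([(0, 0), (2, 0), (2, 2), (0, 2)]))   # square  -> True
print(is_shape([(0, 0), (2, 2), (2, 0), (0, 2)]))   # bow-tie -> False
```
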
  19. Orange arrow - a thing that looks like the Laniakea super cluster. Red arrow - a thing that looks like a ring structure. And that is just a simulation. We can easily identify things that look like .... (insert your favorite shape there). The point of the cosmological principle is: if we take some cube of the universe / some region large enough, and we take another such large region - they look roughly the same in structure - in this case "foamy". The element of that structure is a filament - and bumps in those filaments, or parts of them, are on average below 1Gly long. That is what represents the largest element of the structure. Now, if we were to take a region of space large enough and find a part of it that is, say, 2Gly or more in size and is distinctly more or less dense than the rest, and we don't see such a thing in a different large region of space - then we would say - look, it is a structure - it is an over-density that we did not expect. Something being in the "shape" of a circle, an elephant or a unicorn - is not a structure in that sense - it is just an ordering of stuff with the same average density - and that is fine for the cosmological principle.
  20. I think that we need to be careful when deciding what a structure is ... Here is an image that shows what we believe to be the structure of the universe on large scales: further, let's look at the Laniakea super cluster and its shape: This is our "home" super cluster - marked in blue is our Local Group, in the Virgo super cluster which is part of Laniakea. Now observe Laniakea with its neighboring super clusters: Laniakea is about 500Mly in diameter and its first neighbors are the Shapley super cluster and the Coma super cluster. In fact the Perseus-Pisces super cluster looks like it's also "connected". If we "add" all these structures into a chain - we get a structure that is 1.3Gly or more in size - but is it really a structure, or the "beginning" of the large scale cosmic foam from the above diagram? I'm sure that one can identify very interesting "rings" and "walls" and "daisy chains" of galaxy clusters in several neighboring super clusters - but is it really a standalone structure, or are we just connecting the dots so that they resemble familiar shapes?
  21. That is some sort of issue with data transfer. Images are often read out bottom-up from the sensor - and at one point communication was interrupted or something happened, and instead of the subsequent "scan line" being sent - the last one was repeated, probably because it remained in the "receive buffer" (there is no need to clear it, since it will be overwritten by the next line - but as it happened, the next line did not arrive).
  22. No idea. I was speaking in general terms about imaging, but I have never worked with SS, so I have no idea what each particular command does. It could be either contrast enhancement - which is just an adjustment of the black / white point - or it could be a gamma setting - which is really a type of non-linear stretch.
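
For what it's worth, the two kinds of adjustment mentioned look roughly like this - a generic illustration, not what the software in question actually does internally:

```python
import numpy as np

def black_white_stretch(img, black, white):
    """Linear contrast adjustment: clip and rescale values between the black and white points."""
    return np.clip((img - black) / (white - black), 0.0, 1.0)

def gamma_stretch(img, gamma):
    """Non-linear stretch: gamma < 1 lifts the faint end, gamma > 1 suppresses it."""
    return np.clip(img, 0.0, 1.0) ** gamma

data = np.linspace(0.0, 1.0, 5)          # stand-in for normalised pixel values
print(black_white_stretch(data, 0.1, 0.8))
print(gamma_stretch(data, 0.45))
```
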
  23. Review in this thread when it arrives and subsequent impressions?
  24. This is often believed, and it is so - but solely because of the way software works. There is no 1:1 correspondence between captured light and emitted light. You can simply increase the brightness on your computer screen or other viewing device - just enter the settings of that device and fiddle with brightness / contrast. Also - different display devices have different brightness. The process from capturing the image up to showing it on screen is always the same, and can be simplified as this:
     - the camera captures a certain number of electrons of signal in the exposure
     - those electrons get converted into ADU - by a conversion factor we know as Gain / ISO - which is expressed in e/ADU units
     - those ADU units get scaled to display units using some conversion which is sometimes known as STF - screen transfer function. The basic version of that is to set the black and white points to appropriate values.
     Now, if you always use the same physical units in this process - you will get an equally bright image every time. For example - if you convert the electron count in your exposure to electrons per second instead of using electrons per 10 seconds or electrons per 30 seconds, and similarly if you use the same gain settings and set the white and black points equally - you will get the same image. On the other side of things - once you have captured a certain number of photons / electrons - no amount of the above math manipulation afterwards can change that, and the image stays the same - it is just emitted from the screen differently. This is why we say that the only thing that is really important is SNR. The difference between 10, 20 and 30 second subs is not the brightness - as that is something you can adjust without changing the contents of the image (increasing brightness does not change the amount of noise, for example) - the difference is SNR - which you can understand as: if you adjust parameters to get equal output for all three images - the 10 second one will be the noisiest, the 20 second one in between, and the 30 second one the least noisy. On the other hand, if you pay attention to read noise and swamp it with LP noise - then 10, 20 and 30 second subs stacked to the same total time (say 5 minutes worth of each) - will produce the same looking images if you adjust the output properly. There will be no difference in noise between equally bright images.
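
Here is a small sketch of that chain and of the "same brightness, different SNR" point. The numbers (signal rate, gain, black / white points) are invented and read noise is ignored, so take it only as an illustration:

```python
import numpy as np

def to_display(electrons, exposure_s, gain_e_per_adu, black, white):
    """electrons -> ADU -> per-second rate -> screen value via a simple black/white point STF."""
    adu = electrons / gain_e_per_adu                 # camera conversion, e/ADU
    rate = adu / exposure_s                          # normalise so exposure length cancels out
    return np.clip((rate - black) / (white - black), 0.0, 1.0)

rng = np.random.default_rng(0)
signal_rate = 50.0                                   # electrons per second from the target (made up)
for t in (10, 20, 30):
    electrons = rng.poisson(signal_rate * t, size=100_000)       # shot noise only
    img = to_display(electrons, t, gain_e_per_adu=1.0, black=30, white=70)
    print(t, f"mean {img.mean():.3f}", f"noise {img.std():.3f}")  # same brightness, noise falls with t
```
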
  25. Maybe place it between the camera and the filter wheel? That way you will only have the weight of the camera hanging on it, and that is probably a bit less. M42 also won't be a problem, as I'm guessing you are already using that to connect the EFW to the camera? The only drawback is that you'll need to redo flats when you change camera orientation (which you should really do anyway, just in case the telescope is causing uneven illumination and not just the filter wheel / dust on the filters). You can always check whether there will be any vignetting by using an approximation. You say that you are 40mm away from the sensor and you are using M42 - so let's say that you have 38mm of clear aperture (2mm on each side for the adapter). The APS-C diagonal is ~28mm, so we have (38 - 28) / 2 = 5mm of "room" on each side. For the light beam to converge by a full 5mm over 40mm of distance - you would need to be at F/4 or faster (that is 10mm of aperture over 40mm of distance, or 5mm from the edge of the beam down to its center). I think you'll be fine at that distance with most scopes, unless you have very fast optics - faster than F/4.
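
The geometry in that last paragraph boils down to one ratio; here is the same estimate as a tiny function (same simple cone approximation and the same assumed numbers):

```python
def fastest_fratio_without_vignetting(clear_aperture_mm, sensor_diag_mm, distance_mm):
    """Fastest focal ratio whose converging light cone to the sensor corner still clears
    an opening 'distance_mm' in front of the sensor."""
    margin = (clear_aperture_mm - sensor_diag_mm) / 2.0   # spare room on each side, e.g. 5 mm
    return distance_mm / (2.0 * margin)                    # cone radius grows by distance / (2 * F)

# ~38 mm clear M42 opening, ~28 mm APS-C diagonal, 40 mm in front of the sensor
print(fastest_fratio_without_vignetting(38, 28, 40))   # 4.0 -> only scopes faster than F/4 should vignette
```
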