Everything posted by vlaiv

  1. Indeed. In fact - regular finders, when adapted to RACI use, will have issues focusing at infinity due to the length of the tube. Close focus will come "naturally" in that case.
  2. I have a very philosophical take on nothing. I equate it with non-existence, and if we start to build an imaginary universe from conceptual building blocks, non-existence is rather easy to define - it is a universe consisting of a single entity. No relationships, that entity having no properties except being itself. That is nothingness - or lack of existence (I know this is a hard pill to swallow, because if "there is something there" we think of existence, but in reality we need at least two entities before we can "declare" existence; furthermore, existence is actually at least two entities and at least one relationship - but that is a given, since as soon as we have two entities we have a relationship between them, as they both "are"). Makes sense? So in my view, nothing is the same as non-existence, a universe with a singular entity defines it, and no, it does not have colour and can't be seen, smelled, heard and so on ... Btw, welcome to SGL
  3. You don't need any additional optics to make binoculars focus closer - just increase the separation between the objective lenses and the eyepieces somehow. If your objective lenses can be unscrewed from the front of the binoculars, it can be as simple as unscrewing them and adding an appropriate spacing ring on that thread before screwing them back on.
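To put a rough number on it, here is a minimal thin-lens sketch - the focal length and target distance are my own illustrative figures, not from any specific binocular:

```python
# Thin lens: 1/f = 1/o + 1/i, so the image of a nearby object forms
# farther back than the infinity focus position (which sits at f).

def extra_extension_mm(focal_mm, object_mm):
    """Extra objective-to-eyepiece separation (mm) needed to refocus
    from infinity onto an object at distance object_mm."""
    image_mm = 1.0 / (1.0 / focal_mm - 1.0 / object_mm)
    return image_mm - focal_mm

# Assumed example: ~200 mm objective focal length, target at 5 m
print(round(extra_extension_mm(200.0, 5000.0), 1))  # -> 8.3
```

So only a few millimetres of extra draw are needed for close-range use, which is why a simple spacing ring does the job.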
  4. With most modern DSLR models, the firmware does "internal" dark subtraction. Each row of pixels on the sensor has its first 16-20 pixels masked. Those pixels are also exposed with the main exposure, but they are effectively in "dark" mode as they are masked and don't get any light. The firmware then takes the average value for each row (due to the manufacturing process, pixels in the same row have similar levels of dark current, so it is best treated per row) and subtracts that from the rest of the row. It is not ideal dark calibration, but it works well enough and makes regular dark calibration redundant (regular darks won't cause issues, but they will add noise back into the image, which you don't want). In fact, given that one does not have control of temperature, this is actually the better solution. Masked pixels are at the same ambient temperature as the rest, so internal dark subtraction is not temperature sensitive the way normal dark calibration is with a DSLR. For sensors that have a distinct bias pattern in the image, I'd recommend removing bias manually (in fact, I'd do that for any sensor), but you can get away with using a numeric offset value - usually 2048 ADU or 1024 ADU depending on the sensor and its ADC.
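A rough sketch of that per-row scheme in code - the masked-pixel count of 16 and the layout are assumptions for illustration; real sensors differ:

```python
import numpy as np

MASKED = 16  # assumed number of masked (optically dark) pixels per row

def internal_dark_subtract(raw):
    """Subtract each row's masked-pixel average from the exposed part
    of that same row, mimicking the firmware's internal dark subtraction."""
    raw = raw.astype(np.float64)
    row_dark = raw[:, :MASKED].mean(axis=1, keepdims=True)
    return raw[:, MASKED:] - row_dark

# Toy frame: 4 rows with different dark levels, true signal of 100 ADU
dark = np.array([[50.0], [52.0], [48.0], [51.0]])
frame = np.full((4, 64), 100.0) + dark   # exposed pixels: signal + dark
frame[:, :MASKED] = dark                 # masked pixels: dark current only
print(internal_dark_subtract(frame).mean())  # -> 100.0
```

Each row's own dark level cancels out, regardless of how it varies from row to row.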
  5. Ok, here is a bit of math to help you out and explain things a bit more. If you are aiming for targets in the range of 10-15 arc minutes, which is 600 to 900 arc seconds (times 60), then in realistic conditions - say you are shooting at 1.5"/px - you will get a target size of 400-600px. I'm guessing that you'll consider such a target to be very small, right? And you want to make it bigger? Thing is - you can't. I mean, you can get a larger target, but you won't get detail and sharpness. Detail and sharpness are related to the star FWHM you can achieve in your images, and that depends on several things: 1. seeing, 2. mount performance (tracking / guiding RMS), 3. aperture size (we are considering diffraction limited performance). Changing an 8" newtonian for an 8" RC or CC won't change anything in the above 3 points. The only place where it could make a difference is actual telescope performance - how sharp an image it gives - and even then that is one of three things and a very, very minor contribution. If you are unhappy with star shapes across the field or something like that and you want to change the scope because of it, then fine, but if you want to make your targets bigger, it simply won't work like that. Target size depends on sampling rate, and sampling rate should depend on how sharp an image you are able to get, which in turn depends on the above 3 parameters. There is actual math for number crunching, but we don't have to do that. Experience tells us the following: the realistic resolution that you'll be able to achieve with an 8" scope on a non-premium mount is 1.5"/px. With excellent seeing, such a combination might be able to deliver 1.3-1.4"/px. With a premium mount it might go down to 1.2"/px, and in theory down to 1.0"/px should be possible - but only with everything perfect: perfect mount, perfect skies and so on ... Even if you achieve the almost impossible and get sharpness for 1"/px, you'll still have galaxies that are from 600px to 900px in size in your image.
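The arithmetic above is simple enough to check directly:

```python
# Target size on sensor (px) = angular size (arcsec) / sampling rate ("/px)

def target_size_px(size_arcmin, arcsec_per_px):
    return size_arcmin * 60.0 / arcsec_per_px

print(target_size_px(10, 1.5))  # -> 400.0
print(target_size_px(15, 1.5))  # -> 600.0
print(target_size_px(15, 1.0))  # -> 900.0  (the near-ideal 1"/px case)
```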
  6. You already have an 8" scope, and if it works and you are satisfied with it, why change? Why did you use a barlow at all? You can get "closer" to the target, but without any additional detail. There is not much point in making the image larger without detail - you can do that in software to the same effect. With 1000mm you should be very near the optimum sampling rate across a range of pixel sizes (even with small pixels you can bin the data).
  7. The actual usability of either of the two scopes will depend on which camera they will be paired with and what sort of mount. A direct comparison between the two without regard for those details would go like this: since you are using the same pixel size on both setups (with no intent of binning, I suppose), you can simply compare F/ratios, like F/7 vs F/5.9 x 1.5 = F/8.85 (assuming that the barlow indeed operates at x1.5). Roughly speaking, the image acquisition time ratio for the same SNR will be (7 x 7) / (8.85 x 8.85) = 49 / 78.3225 = ~0.62562, or the Edge HD will achieve the same SNR in roughly 62% of the imaging time. However, I would not base the choice of setup on this alone. I'd decide on the type of target, then determine the imaging resolution suitable for that target that can be achieved with my gear, and then, based on that, if both scopes can manage the same, I'd choose the one with the larger aperture for imaging. Say that you want to image nebulae at 3"/px - then you'll probably want to use the 80mm scope without a barlow, but if you want to image galaxies at 1.5"/px, then you probably want to use the Edge HD and look into binning the data from your camera.
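The F/ratio arithmetic above, spelled out in a few lines (assuming, as in the post, the same pixel size on both setups and a barlow working at exactly x1.5):

```python
def time_ratio(f_a, f_b):
    """Relative exposure time for equal SNR; with pixel size held fixed,
    time scales as the square of the working F/ratio."""
    return (f_a / f_b) ** 2

f_edge_hd = 7.0         # Edge HD at F/7
f_frac = 5.9 * 1.5      # 80 mm F/5.9 with an assumed x1.5 barlow -> F/8.85
print(round(time_ratio(f_edge_hd, f_frac), 5))  # -> 0.62562
```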
  8. In principle, yes, but there are a lot of small details. Maybe the simplest way to explain what is happening is with the Strehl ratio. We know the Strehl ratio as a sort of "percent" removed from perfect optics - but here is one important thing: it acts on the "height" of features, and that height is related to the above concept of a bulge and dip in brightness. The higher the Strehl for some telescope, the deeper the dip that represents a gap or line feature - there is more contrast compared to the surrounding brightness, so it is easier to see. Now, in reality things get complicated - there is no single Strehl for refractors; there is a Strehl vs wavelength graph. So each wavelength will have a different peak. The final result then goes like this: for each different wavelength of the source (be that a star or the light reflected off the moon or Saturn's rings) and its intensity, that wavelength's Strehl is applied to see how much contrast it will have versus the surroundings; it is then "added" to the sum of light (superimposed or layered, like layers in Photoshop), weighted by the sensitivity of the human eye to that particular wavelength - which also depends on environmental factors (mainly photopic / mesopic / scotopic mode). This shows that we really need to pay attention to scotopic vs photopic vision if we have ED optics. ED optics does not control short wavelengths well (purple fringing), but we are much more sensitive to those wavelengths in scotopic mode and they will reduce contrast significantly (think of a short and cheerful achromat on planetary targets - detail washed out in purple haze).
So yes, spherical aberration will have an impact, but so will everything else that is related to optical quality - and the above shows why Strehl is a convenient way to express things (although it relates to a single wavelength, and things like polychromatic Strehl don't make much sense, as we don't know two things in advance: the spectrum of the source light and our sensitivity over wavelengths given our adaptation).
  9. Well, that is quite misleading. The problem is in the use of the term "resolve". It is not the same as detect. I'll show you an example where a telescope of 7mm (our eyeball) is able to see the contrast difference of a feature that is x1000 smaller than what it is able to "resolve". We see stars on a starry night, right? How is that possible? How can we see something that is way beyond the resolution of our eyeball, even beyond the resolution of large aperture telescopes by a factor of x1000 or more ... A stellar disk, depending on the actual star, will present a profile that is often in micro arc seconds. Some close and large stars are a few milli arc seconds (mas) in diameter. Yet we see them. How is this possible? Well - simply because we see them. It has nothing to do with resolving them. When we talk about aperture and resolution, we should not talk about being able to see a singular feature like a point source or a line, but rather about the ability to resolve two close features - to distinguish them. Be that a pair of stars or two close lines or something similar. When we talk about point sources like stars, or maybe a division in Saturn's rings, or a rille on the surface of the moon, here is what happens: what is a very narrow, sharp peak or dip in light intensity (a star will be a peak and a division in the rings will be a dip) becomes a much fainter, shallow, smooth bump or dip. If the intensity of that shallow bump or dip with respect to the background is too small to trigger our JND - just noticeable difference - we won't see it. But if it is large enough to trigger it, we will notice it. With a telescope that has more resolving power, that bump or dip is narrower and of higher intensity, so the contrast with respect to the background is higher and things are easier to see. That is why we can "resolve" some features in larger apertures while missing them in smaller ones - but we are in fact not resolving, just noticing that something is there, without being able to tell its general shape or whether it's one object or more.
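Here is a small numpy illustration of that idea: a unit-depth, 1-pixel-wide dark line blurred by a narrow PSF (standing in for a large aperture) and a wide PSF (small aperture). The Gaussian PSF and the sigma values are arbitrary choices for the demo:

```python
import numpy as np

def blurred_dip_depth(sigma_px):
    """Depth of a 1-px-wide dark line after Gaussian blur of given sigma."""
    x = np.arange(-50, 51)
    psf = np.exp(-x**2 / (2.0 * sigma_px**2))
    psf /= psf.sum()                 # normalized PSF
    scene = np.ones(201)
    scene[100] = 0.0                 # the narrow dark feature
    image = np.convolve(scene, psf, mode="same")
    return 1.0 - image[100]          # contrast of the dip vs background

narrow = blurred_dip_depth(2.0)  # "large aperture": tight PSF
wide = blurred_dip_depth(6.0)    # "small aperture": broad PSF
print(narrow > wide)             # -> True: deeper dip, easier to notice
```

The feature never becomes resolved in either case; it is just a shallow dip whose depth, relative to our JND, decides whether we notice it.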
  10. The eye sees very differently than a camera does. I'll name a few things that make up this difference. 1. Exposure time. The eye/brain combination integrates for about 15-30ms (think of movies being played at 30fps and screen refresh rates being 60Hz). A camera can expose for a few milliseconds; 2ms is not an uncommon exposure time for lunar imaging. Seeing therefore affects the visual performance of the telescope much more, and will "equalize" things between large and small apertures much more. We often need to wait for a moment of good seeing to see some feature. With imaging, it will be captured even if we were not able to see it in that period of time, because our brain/eye combination blurred the image - or helped seeing blur it. 2. Dynamic range and contrast. When we view a captured image on the screen, we have a very moderate level of dynamic range. The eye is performing close to its optimum as far as resolving goes - it's not dark adapted in the slightest; it is fully in photopic mode of operation. When we are at the telescope in the darkness, the eye is in mesopic / scotopic mode (depending on conditions), and here aperture plays a role. For the same magnification, a larger telescope will throw up a brighter image. Although it can't throw up a brighter image of the moon than that viewed by the naked eye, if we are dark adapted even a bit, it can be blinding. Compare that to car headlights. During the day, looking directly at car headlights is not troubling at all. Look at the same headlights at night and you'll feel blinded. The car headlights did not change - but the environment did. This amounts to the following: I think that finer contrast details are possible in a smaller aperture because of the amount of light / dynamic range, and that this difference would drop if one removed dark adaptation from the equation - one should do planetary / lunar observation in a lit-up environment rather than in darkness. 3. Level of sharpening and reconstruction. Our brain can't sharpen things up like a computer can.
It can do other things - we notice detail by prolonged observation; we need to tease out detail. We don't need to do that with images, because of observing comfort. On the other hand, due to sharpening, a processed image can actually be sharper / more detailed than what the same telescope produces at the focal plane. This is due to the nature of blur and how it works.
  11. To be honest, I was not aware that thru is an informal variant. I was under the impression that both are the same / interchangeable - just different spellings.
  12. To be honest, I'm rather surprised by black hole images. It's a shame it does not understand the concept of a planet seen thru a telescope.
  13. I have another one - an image of the Earth as observed thru an 8" newtonian from the surface of Mars?
  14. Ask it to generate an image of a black hole?
  15. As far as hands and vacuum go, there is an actual example of what might happen. Here is an account told by the person who experienced the whole thing during Project Excelsior - https://en.wikipedia.org/wiki/Project_Excelsior
  16. You can use it with said filters, and in principle you should not worry about it. People remove the UV/IR cut filter from a standard DSLR (or rather replace it) as that one is more restrictive in Ha than the astronomical one. With a color camera you certainly want to have one (unless you want to do something exotic like spectrometry), and having one preinstalled saves you the trouble of using an external one.
  17. There are several makes of cameras with that sensor and they are much cheaper than ZWO. Try searching for the Poseidon model from Player One or the Touptek version. I know that several members evaluated those cheaper models, but I don't know what the verdict is on them (you'll find threads discussing them for sure here on SGL).
  18. Ok, so I looked it up - and there is a handy star that will put you in the vicinity of the SCP, close enough for good tracking. Here is a quote from the wiki article: Not sure what your mount is like, but there are two things that you need to do to get good enough polar alignment. First, make sure your polar scope is properly aligned (once you get the polar scope onto any star - best to use one close to the pole, as those don't move much - rotate the mount in RA manually and the star should stay in the center of the polar scope; if it doesn't, align the polar scope first - look up polar scope alignment / adjustment). Then simply put the cross of the polar scope on BO Octantis and you are done (using the alt / az controls at the base of your EQ mount).
  19. You need to perform proper polar alignment. Not sure how it's done in the southern hemisphere, but even a slight deviation from the SCP (south celestial pole) can have a significant impact on drift. It does not matter if you missed polar alignment in altitude or azimuth - both result in DEC drift. You can use this calculator to work out the drift rate based on your error: http://celestialwonders.com/tools/driftRateCalc.html You'll need to estimate your error, but if you manually set just the altitude and eyeballed the south pole, then you can easily be a degree or two off. For such an error, you can drift up to 15 arc seconds in half a minute at declination 0 (higher declinations will have lower drift). That is significant drift over the course of half a minute and can cause significant trailing. Even a PA error of half a degree will give 4 arc seconds of drift in half a minute. If you are not guiding, you want to be as accurate with your polar alignment as possible. Most people that guide are happy if they get within 10 arc minutes of polar alignment error - then the drift is small enough that it's easily corrected by guiding. You should aim to get your polar alignment error at least as low as that if at all possible.
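As a rough cross-check of those figures - a small-angle approximation in the spirit of the linked calculator; the cos(dec) factor and worst-case assumption are my own simplification:

```python
import math

SIDEREAL_RATE = 15.04  # arcsec of sky motion per second of time

def dec_drift_arcsec(pa_error_deg, seconds, dec_deg=0.0):
    """Approximate worst-case DEC drift for a given polar alignment error."""
    pa_rad = math.radians(pa_error_deg)
    return pa_rad * SIDEREAL_RATE * seconds * math.cos(math.radians(dec_deg))

print(round(dec_drift_arcsec(2.0, 30), 1))  # -> 15.7 ("up to 15 arc seconds")
print(round(dec_drift_arcsec(0.5, 30), 1))  # -> 3.9  ("4 arc seconds")
```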
  20. That is a good idea. If you have a 3d printer, then you can make the SA into a higher resolution spectrograph by adding a slit. That can be an intermediate step before going on to high res items like the Solex. You'd need 2 pairs of lenses for collimation, an optical slit and a 3d printed housing to make it all work. I also have an SA200 and a 3d printer and plan on doing something like that myself at some point.
  21. The Star Analyzer is a diffraction grating, the same as the one used in the Solex - the differences being that the Star Analyzer is a transmission grating and the one used in the Solex is a reflection grating. The other difference is the dispersion power of these elements. In theory, the SA (depending on model) will have from x12 to x24 less dispersion than the grating used in the Solex (the one specified in the design). The SA has either 100 or 200 lines per mm, while the one in the Solex has 2400 lpmm. Well, this part is tricky to answer, because it is not all down to the dispersion of the grating. There are several elements that go into the "equation": 1. Size of slit and seeing blur (whichever is smaller) 2. Size of the collimated beam that is hitting the grating 3. Quality of the collimation optics (this is rarely an issue, but it can be if very low quality items are used) 4. Pixel size of the sensor and how it is matched to the dispersion of light. For a good spectroscope, all these things must be properly matched to achieve the optimum resolution that is possible with said elements. On the other hand, if you want to image the Sun using this technique, I don't think you'll get very satisfactory results in, say, Ha with a low dispersion element like the SA. If I'm not mistaken, good solar scopes have a filter with a band of about 0.5A - that translates into 6563 / 0.5 = ~R13000 to get a very contrasty image of the Sun. This means that we need to illuminate at least 13000 slits on the grating - that means a 65mm wide collimated beam (13000/200) - and that is impossible with a 28mm filter (the max beam width that I'd put on that is about 20mm, so R4000 is the theoretical max for the SA200 - which is more than 1A, not good enough for a solar Ha filter).
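The back-of-envelope numbers from that last paragraph, as a quick calculation:

```python
H_ALPHA = 6563.0  # Ha wavelength in Angstrom

def resolving_power(bandpass_angstrom):
    """R = lambda / delta-lambda."""
    return H_ALPHA / bandpass_angstrom

def beam_width_mm(R, lines_per_mm):
    """Collimated beam width needed to illuminate R grating lines."""
    return R / lines_per_mm

R_needed = resolving_power(0.5)             # ~13000 for a 0.5A solar Ha band
print(round(beam_width_mm(R_needed, 200)))  # -> 66 mm beam needed on SA200
print(20 * 200)                             # -> 4000: max R for a 20 mm beam
```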
  22. Hi and welcome to SGL. I don't think that focus is the issue here. The image looks just right for the equipment you are using. The long focal length means that you'll be over-sampled if you don't bin your data, and use of a regular SCT telescope is bound to impart some spherical aberration on the stars when imaging (it is very hard to hit the exact design distance between the two mirrors, and SCTs introduce spherical aberration, even if perfectly figured, when you change the distance between the mirrors - which you do when focusing). Ideally, you'd want to put your sensor where the eyepiece is when observing for best performance - but no one does that, as it would mean adding several extension tubes (think diagonal + EP away from the OTA) - and even then you are not 100% sure you are free of spherical aberration. When you say "vertical" drift, what exactly do you mean? Vertical with respect to the RA/DEC coordinate system, or vertical with respect to the image above? DEC drift (in the direction of DEC) happens due to poor polar alignment. RA drift happens due to periodic error of the drive - or in other words, because the gears in the mount drive are slightly out of shape (a bit eggy and not perfectly round, because of manufacturing tolerances) - sometimes the drive lags behind the earth's rotation and sometimes it leads. This creates star elongation or perceived drift. There could be other reasons - like an improper tracking rate or similar - but if it's related to the mount's tracking, it is in RA; if it's due to polar alignment, then it is in DEC. In the above image, the RA direction is left-right and the DEC direction is "up/down". So if you have vertical drift with respect to the image, it is due to polar alignment.
  23. 1.25"/px is quite a reasonable resolution with 11" of aperture in good conditions. Not sure if the RASA11 can deliver that sharpness in terms of spot diagram - but as far as diffraction limited systems go, quite a sound resolution.
  24. I love that you used those numbers as an example.
  25. Ah, right - well, that bit is the essence of it all: the telescope projects onto a surface (the focal plane) at a given "magnification" - and surface grows as the square (like volume grows as the cube ...).