Everything posted by vlaiv

  1. I'm more concerned with the lack of resolution of such an instrument rather than its FOV. FOV is just a bonus, as it allows us to search more sky using the same exposure. Narrow field telescopes need to take multiple separate exposures to cover the same part of the sky, so wide field is clearly a plus here as it allows for a faster search. Resolution in terms of pixel size is not the issue either - but resolution in terms of optical performance is. RASA telescopes have rather high star FWHM (compared to diffraction limited optics of the same aperture) and I'm afraid that in dense star fields multiple faint stars will blend together and mask any sort of object that is present.
  2. I wonder if telescopes such as the RASA line could be used for that? These are wide field scopes - they can cover a large area of the sky, and they can do it relatively "fast".
  3. That is only valid if we think of the curtain as being 100% absorbent - a pure black body. Some of the photons do get reflected off the curtain and contribute to the total amount of "in flight" photons that represent the illumination of the room. Those that go out of the window (if we assume an open window - no glass and no reflection) won't spend any more time in the room. While we are on the subject of black bodies, what are the temperatures inside and outside? Those also contribute some photons, and if the outside is hotter - those photons will enter the room and contribute to the overall illumination.
  4. I thought so too at one point - but the argument still stands. Let me explain. Since we have a dynamic system in equilibrium - there is a light source, some photons that are "in flight" and there is a "sink" - which is pretty much anything in the room. If we think of the photons that leave the room thru the window - those could have been in one of two states - either sunk or "in flight". Removing them from the system is not just the equivalent of them being sunk - we also remove some of their "in flight" time - and thus we reduce the light levels in the room.
  5. Just to throw in a wrench ... Pure glass reflects about 4% of the light that hits it. It can even become brighter in the room if the curtains absorbed more than 96% of the light before they were pulled aside.
  6. This video sheds some light - but not much. Two key points to take away: mass density was calculated from the attenuation of quasar light (if I got that correctly), and the research claims 5.2 sigma confidence of an overdensity in said region - that is significant if true.
  7. That one is easy. Any number of points in 2D connected into a contour (no two edges cross and all points are connected into the contour) represents a shape. I think that we can extend this into higher dimensions (thus defining a 3D shape - but here edges must be replaced with faces or something like that - no edges intersect with edges or faces, and faces are 2D shapes defined by some points and edges).
  8. Orange arrow - thing that looks like the Laniakea super cluster. Red arrow - thing that looks like a ring structure. And that is just a simulation. We can easily identify things that look like .... (insert your favorite shape there). The point of the cosmological principle is - if we take some cube of universe / some region large enough and we take another such large region - they look roughly the same in structure - in this case "foamy". The element of that structure is the filament - and bumps in those filaments, or parts of them, are on average below 1Gly long. This is what represents the largest element of the structure. Now, if we were to take a region of space large enough and find a part of it that is say 2Gly or more in size and is distinctly more or less dense than the rest, and we don't see such a thing in a different large region of space - then we would say - look, it is a structure - it is an overdensity that we did not expect. Something being in the "shape" of a circle or an elephant or a unicorn - is not structure in that sort of sense - it is just an ordering of stuff with the same average density - and that is fine for the cosmological principle.
  9. I think that we need to be careful when deciding what a structure is ... Here is an image that shows what we believe to be the structure of the universe on large scales: further, let's look at the Laniakea super cluster and its shape: This is our "home" super cluster - marked in blue is our Local group in the Virgo super cluster, which is part of Laniakea. Now observe Laniakea with its neighboring super clusters: Laniakea is about 500Mly in diameter and the first neighbors are the Shapley super cluster and the Coma super cluster. In fact the Perseus-Pisces super cluster looks like it's also "connected". If we "add" all these structures into a chain - we get a structure that is 1.3Bly or more in size - but is it really a structure or the "beginning" of large scale cosmic foam based on the above diagram? I'm sure that one can identify very interesting "rings" and "walls" and "daisy chains" of galaxy clusters in several neighboring super clusters - but is it really a standalone structure or are we just connecting the dots so that it resembles familiar shapes?
  10. That is some sort of issue with data transfer. Images are often read bottom up from the sensor - and at one point communication was interrupted or something happened, and instead of the subsequent "scan line" being sent, the last one was repeated - probably because it remained in the "receive buffer" (no need to clean it since it will be overwritten by the next line - but as it happened, the next line did not arrive).
  11. No idea. I was speaking in general when imaging, but have never worked with SS so I have no idea what each particular command does. It could be either contrast enhancement - which is just adjustment of black / white point or it could be gamma setting - which is really a type of non linear stretch.
  12. Review in this thread when it arrives and subsequent impressions?
  13. This is often believed, and it is so - but solely because of the way software works. There is no 1:1 correspondence between captured light and emitted light. You can simply increase the brightness on your computer screen or other viewing device - just enter the settings of that device and fiddle with brightness / contrast. Also - different display devices have different brightness. The process from capturing an image to showing it on screen is always the same, and can be simplified like this:
     - camera captures a certain number of electrons of signal in an exposure
     - those electrons get converted into ADUs - by a conversion factor we know as Gain / ISO - which is expressed in e/ADU units
     - those ADU values get scaled to display units using some conversion which is sometimes known as STF - screen transfer function. The basic version of that is to set black and white point to appropriate values.
     Now, if you always use the same physical units in this process - you will get an equally bright image every time. For example - if you convert the electron count in your exposure to electrons per second instead of using electrons per 10 seconds or electrons per 30 seconds, and similarly if you use the same gain settings and set the white and black point equally - you will get the same image. On the other side of things - once you have captured a certain number of photons / electrons - no amount of the above math manipulation afterwards can change that, and the image content stays the same - it is just emitted from the screen differently. This is why we say that the only thing that is really important is SNR. The difference between 10, 20 and 30 second subs is not the brightness - as that is something you can adjust without changing the contents of the image (increasing brightness does not change the amount of noise, for example) - the difference is SNR, which you can understand like this: if you adjust parameters to get equal output for all three images, the 10 second one will be the noisiest, the 20 second one in between and the 30 second one the least noisy. On the other hand, if you pay attention to read noise and swamp it with LP noise - then 10, 20 and 30 second subs stacked to the same total time (say 5 minutes worth of each) will produce the same looking images if you adjust the output properly. There will be no difference in noise for equally bright images.
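     Just to illustrate the above pipeline with some made-up numbers (gain, exposure lengths and electron counts below are purely illustrative, not from any particular camera):

     ```python
     import numpy as np

     def to_display(electrons, exposure_s, gain_e_per_adu, black_point, white_point):
         """Convert captured electrons to an 8-bit display value.

         Normalizing by exposure time (electrons per second) and using the same
         gain and the same black/white points makes subs of different length
         come out equally bright - only their noise differs.
         """
         e_per_s = electrons / exposure_s                                  # normalize to electrons per second
         adu = e_per_s / gain_e_per_adu                                    # camera conversion (e/ADU)
         stretched = (adu - black_point) / (white_point - black_point)     # simple STF: black / white point
         return np.clip(stretched, 0, 1) * 255

     # same sky flux (100 e/s) captured in a 10 s and a 30 s sub:
     print(to_display(1000, 10, 1.0, 0, 200))   # 10 s sub
     print(to_display(3000, 30, 1.0, 0, 200))   # 30 s sub -> same display brightness
     ```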
  14. Maybe place it between the camera and the filter wheel? That way you will only have the weight of the camera hanging on it, and that is probably a bit less. M42 also won't be a problem, as I'm guessing you are already using that to connect the EFW to the camera? The only drawback is that you'll need to redo flats when you change camera orientation (which you should really do anyway, just in case the telescope is causing uneven illumination and not just the filter wheel / dust on filters). You can always see if there will be any vignetting by using an approximation. You say that you are 40mm away from the sensor and you are using M42 - so let's say that you have 38mm of clear aperture (2mm on each side for the adapter). The APS-C diagonal is ~28mm, so we have (38 - 28) / 2 = 5mm of "room" on each side. A light cone that spreads 5mm per side over 40mm of distance corresponds to F/4 (that is 10mm of beam width over 40mm of distance, or 5mm from edge to center). I think you'll be fine at that distance with most scopes unless you have very fast optics - faster than F/4.
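     Here is that vignetting approximation written out as a quick check (using the example numbers from above - 38mm clear aperture, 28mm diagonal, 40mm distance):

     ```python
     def min_safe_fratio(clear_aperture_mm, sensor_diag_mm, distance_mm):
         """F-ratio at which the converging light cone just fits the clear aperture.

         Scopes at this F-ratio or slower (higher F-number) won't vignette the
         edge of the sensor; faster scopes will.
         """
         room_per_side = (clear_aperture_mm - sensor_diag_mm) / 2
         # cone radius at 'distance' before focus is distance / (2 * F); solve for F
         return distance_mm / (2 * room_per_side)

     print(min_safe_fratio(38, 28, 40))   # -> 4.0, i.e. fine at F/4 or slower
     ```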
  15. Could be. It requires a scope / eyepiece combination that has some field curvature to check it out. Usually a fast refractor with wider field eyepieces will give you such a combination. You can check this if you focus in the center and, while keeping your eye relaxed, look at stars at the edge. If they are larger / a bit out of focus and you can "fix" this by either straining your eye or adjusting focus - then you have a case of field curvature (where the center of the field and the edge can't be in focus at the same time). Then it is just a matter of testing with / without contact lenses - if it's harder to do the "switch" from center to edge without touching focus - they are the reason.
  16. These might not be as big an obstacle as it may seem at first. Any near/far sightedness is compensated by focus position (when you focus you adjust for your dioptre). Sometimes astigmatism can be alleviated by choice of exit pupil. Small exit pupils will only use a small portion of your eye lens, which might not be as distorted as the whole thing. Many people with astigmatism get sharp views when looking at the planets at high magnification because of that - as most of the time, planets are viewed with high magnifications and exit pupils of 1mm or less. On the other hand - observing very dim objects like DSOs does not trigger the color sensitive cells, so objects are viewed in black and white - no need to worry about any color blindness.
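     For reference, exit pupil is just aperture divided by magnification - the scope and eyepiece below are only an example setup:

     ```python
     def exit_pupil_mm(aperture_mm, focal_length_mm, eyepiece_fl_mm):
         magnification = focal_length_mm / eyepiece_fl_mm
         return aperture_mm / magnification   # equivalently eyepiece FL / focal ratio

     # e.g. a 200mm F/6 scope (1200mm FL) with a 6mm eyepiece -> 200x and a 1mm exit pupil
     print(exit_pupil_mm(200, 1200, 6))   # -> 1.0
     ```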
  17. Yes, eye is focused to infinity if relaxed when observing. Sometimes there is a bit of field curvature and our eye accommodates to it like it does in normal circumstances. It does so with seeing as well if seeing changes are not too fast.
  18. No idea. I have never seen simulated graphs nor the view through a singlet lens acting as a telescope.
  19. You could use line filters - but that would not produce a good looking RGB image. R, G and B values represent a weighted sum over all frequencies. Or to be precise - for the sRGB color space, the R, G and B components have these color matching functions: Note that sometimes these color matching functions are below zero in value (which would be physically impossible) - which just means that the sRGB color space can't reproduce the full gamut of color vision - some colors that exist and that we can see can't be shown on a computer screen, as they would need to have a negative red value (to put it in layman's terms - they would need to be less red than the pure green on the screen). In any case - single lines won't work. Unfortunately that won't work either. I'll explain why. Spherical aberration and defocus depend on wavelength - in fact they are different for each individual wavelength. Two different stars in the image will produce different spectra, and if you weigh the aberrations by their spectra (say one star is bluish and the other is red in color - in the blue one the aberrations that dominate will be from the blue side of the spectrum, while for the red one they will be from the red side of the spectrum) - you will get a different PSF - one that depends on the spectrum - which you no longer have, as you recorded the summed response and not the individual wavelengths. Software has no way of knowing which PSF to apply to which star - and different stars will produce different PSFs because of that - so you can't deconvolve with one single PSF - you need to use an adaptive one - but that adaptive PSF depends on information that you no longer have (the spectrum of the light). In principle - you could do the following - take a singlet lens or achromatic doublet and record Ha or OIII or any other interesting single wavelength (or small bandwidth) - and apply deconvolution as necessary. Since you are recording one wavelength - the aberrations will be the same on all stars, as they depend on wavelength and there is just one. That would work - you can produce a narrowband image with a simple scope like that.
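     To illustrate the "weighted sum over all frequencies" idea - note that the Gaussian-shaped weights below are only stand-ins for the real sRGB color matching functions, used to show the shape of the calculation and nothing more:

     ```python
     import numpy as np

     wavelengths = np.arange(380, 781, 5)          # nm, visible range

     def gaussian(center, width):
         return np.exp(-((wavelengths - center) / width) ** 2)

     # placeholder weights - real sRGB color matching functions have different
     # shapes (and negative lobes); these only illustrate the integral
     r_bar, g_bar, b_bar = gaussian(600, 50), gaussian(550, 50), gaussian(450, 40)

     spectrum = gaussian(656, 2)                   # e.g. an Ha-like emission line

     # each channel is the spectrum weighted by its matching function and summed
     R, G, B = (np.sum(spectrum * w) for w in (r_bar, g_bar, b_bar))
     print(R, G, B)   # a single line excites the channels in one fixed ratio - no real color information
     ```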
  20. This is correct. The question is - just how much more noise will there be in 10x1 minute vs 1x10 minute exposure, and that depends on how high the read noise is compared to other noise sources - most notably the highest one, which is usually light pollution noise. Noise adds like linearly independent vectors - square root of the sum of squares. If one component is much smaller (just to get an idea of what we are talking about here - if read noise is x3-x5 smaller than any other noise level in that single exposure) - you simply won't see the difference. Humans can't tell the difference of, say, a 5% increase in noise. This 5% corresponds to ~x3 larger light pollution noise. In other words - if you have some read noise and per exposure you have x3 more LP noise - that is the same as having 5% more LP noise and no read noise at all. When you have 0e read noise then 10x1 minute is equal to 1x10 minute as far as SNR goes (tracking and other artifacts not included - just signal quality). The only issue is that LP noise depends on the LP signal - which depends on sky conditions but also on the sampling rate of the camera, as it is the amount of photons from the sky that end up hitting each pixel. General rule:
     - the faster the optics, the shorter the exposures that can be used
     - the brighter the sky, the shorter the exposures that can be used
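     The quadrature arithmetic behind that ~5% figure, with read noise set to 1 unit and LP noise x3 larger:

     ```python
     from math import sqrt

     read_noise = 1.0
     lp_noise = 3.0 * read_noise                    # LP noise x3 larger than read noise

     total = sqrt(read_noise**2 + lp_noise**2)      # noise adds in quadrature
     print(total / lp_noise)                        # ~1.054 -> only ~5% worse than LP noise alone
     ```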
  21. This is work in progress - and it requires knowledge of the tolerances of your printer. It is designed specifically for my printer and for my lens size and the tube size that I plan to use. Not sure if STLs would do you any good. Here is the FreeCAD model that you can tweak and adjust to suit your lens once you measure it - and adjust for your tube size (ID/OD). Hope you'll be able to use it (FreeCAD is open source so it's readily available) Lens cell.FCStd If you have any questions about it, I'd be happy to answer them.
  22. In theory yes. In practice no. The level of dispersion you can tolerate depends on the F/ratio of the lens. Even within a single color band - say from 400-500nm, representing the blue part of the spectrum - you will have defocus between different wavelengths. Only a single wavelength of light will be in focus at any given time and all the rest will be out of focus. The trick is to keep that defocus at an acceptable level - and that you can do only with a very slow telescope. And I don't mean F/10 slow or F/20 slow - I mean a very, very slow telescope. See for example this: https://en.wikipedia.org/wiki/Aerial_telescope (back in the day refractors used a single lens). Even a faster achromatic doublet lens won't produce nice results using that technique.
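     Rough order-of-magnitude estimate of the chromatic blur of a BK7-like singlet (Abbe number ~64), using the approximation that the focal shift across the visible band is about f/V - treat the exact numbers as ballpark only:

     ```python
     def chromatic_blur_arcsec(focal_ratio, abbe_number=64):
         """Very rough angular size of the chromatic blur disc of a singlet lens.

         Longitudinal focal shift ~ f / V; refocusing halfway leaves ~ f / (2V)
         of defocus, which at focal ratio N gives a blur of roughly
         1 / (2 * V * N) radians.
         """
         radians = 1.0 / (2 * abbe_number * focal_ratio)
         return radians * 206265   # radians to arcseconds

     for n in (10, 50, 200):
         print(f"F/{n}: ~{chromatic_blur_arcsec(n):.0f} arcsec blur")
     # F/10 -> ~161", F/50 -> ~32", F/200 -> ~8" - hence the absurdly long aerial telescopes
     ```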
  23. Hi and welcome to SGL. Jupiter will look tiny when you look at it through the viewfinder as it takes up a very small portion of the FOV. Your sensor has 5184 x 3456 px and Jupiter is usually around 100-200px in diameter in images (it really depends on the telescope used and the rest of the setup). A barlow lens is the right choice - but a barlow lens pushes the focus point further out. You might need to add an extension tube to your focuser to be able to reach focus. Planets are usually not imaged the way the rest of things are - they are imaged with a technique called lucky planetary imaging (look it up and maybe watch a few tutorial videos on youtube for that). A DSLR is also not the best choice for planetary imaging, but it can be used if it has a particular option (it needs to record video in 1:1 crop mode at a small resolution like 640x480). For your model's suitability check this: https://www.astropix.com/html/equipment/canon_one_to_one_pixel_resolution.html
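     As a rough illustration of why Jupiter only spans one to two hundred pixels - the focal length and pixel size below are example values, not the original poster's setup:

     ```python
     def planet_diameter_px(angular_size_arcsec, focal_length_mm, pixel_size_um):
         # image scale in arcseconds per pixel
         arcsec_per_px = 206.265 * pixel_size_um / focal_length_mm
         return angular_size_arcsec / arcsec_per_px

     # Jupiter is ~45" across; e.g. a 1200mm scope with a 2x barlow and 4.3um pixels
     print(planet_diameter_px(45, 2400, 4.3))   # -> ~120 px
     ```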
  24. As has been said above - ROI stands for region of interest. It is a handy little feature where you can select only part of the sensor to image with. This reduces the size of the image captured but allows for much higher transfer speeds. The USB connection can sometimes be a bottleneck when imaging and you simply can't transfer data fast enough, even if using a fast USB 3.0 connection. Planets like Jupiter and Saturn are very small on the sensor, so it does not make much sense to capture the whole frame unless you want to, say, capture Jupiter with its moons - something like 640x480 or 800x600 is more than enough. Anyway - if you check the specifications for a camera on the ZWO website - you will find some ROI settings and associated max fps. This is from the ASI178MC. It looks like they stopped doing this for new models as it is now "common knowledge".
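     Back-of-the-envelope frame rates limited purely by transfer bandwidth - the 300 MB/s figure is an assumed usable USB 3.0 rate, not a ZWO specification, and the full-frame size is roughly the ASI178's:

     ```python
     def max_fps(width, height, bytes_per_pixel=2, usable_bandwidth_mb_s=300):
         frame_bytes = width * height * bytes_per_pixel
         return usable_bandwidth_mb_s * 1_000_000 / frame_bytes

     print(max_fps(3096, 2080))   # full 6.4MP frame -> roughly 23 fps
     print(max_fps(640, 480))     # small ROI around the planet -> roughly 490 fps
     ```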
  25. Out of interest, has anyone tried to make a plossl eyepiece? It should not be too hard as it can be made in a symmetrical configuration. I don't know the exact prescription for each lens - but what about using available cemented doublets? A quick search on AliExpress returned this hit: 42mm diameter, 110mm focal length, for about 10 euro with shipping. Two of these spaced at say 50mm would give a combined focal length of 1/f = (42 + 42 - 50) / (42*42), f = 42*42 / 34 = ~52mm. We can even aim for a certain FL - say if I want 45mm of FL then I need to space the lenses so that 42^2 / 45 = 39.2, so 84 - D = 39.2 => D = 44.8mm. Let's do the math again, I messed up - I used the diameter instead of the focal length: 1/f = (110 + 110 - 50) / (110*110), f = 110^2 / 170 = ~71.2mm. It turns out that with these lenses we can only get down to 55mm (with 0 separation, which itself is not possible) and not less! Found it! D32 F86 with a distance of 7.6444mm gives the 45mm. Anyway, you get the idea ...
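     The formula used throughout is the standard two-thin-lens combination - here it is as a small helper, checked against the numbers above:

     ```python
     def combined_fl(f1, f2, d):
         """Combined focal length of two thin lenses (focal lengths f1, f2) separated by d."""
         return (f1 * f2) / (f1 + f2 - d)

     def spacing_for(f1, f2, target_fl):
         """Separation needed to hit a target combined focal length."""
         return f1 + f2 - (f1 * f2) / target_fl

     print(combined_fl(110, 110, 50))   # ~71.2mm for the 110mm doublets at 50mm spacing
     print(combined_fl(110, 110, 0))    # 55mm - the shortest these can reach
     print(spacing_for(86, 86, 45))     # ~7.64mm - the D32 F86 pair hitting 45mm
     ```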