Everything posted by vlaiv

  1. I thought so too at one point - but the argument still stands. Let me explain. Since we have a dynamic system in equilibrium - there is a light source, some photons that are "in flight" and there is a "sink" - which is pretty much anything in the room. If we think of the photons that leave the room through the window - those could have been in one of two states - either sunk or "in flight". Removing them from the system is not just equivalent to them being sunk - we also remove some of their "in flight" time - and thus we reduce the light level in the room (a small steady-state sketch of this is below).
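      A minimal steady-state sketch of that argument, with made-up numbers (the emission rate and photon lifetimes below are purely illustrative): in equilibrium the number of photons in flight equals the emission rate times the average photon lifetime, and opening an extra escape path (the window) shortens that lifetime.

          # Steady state: photons in flight N = R / (total loss rate)
          R = 1e20             # photons emitted per second (made up)
          tau_sink = 30e-9     # average time before absorption by the room, seconds (made up)
          tau_window = 100e-9  # average time before escaping through the window (made up)

          n_closed = R * tau_sink                           # window closed
          n_open = R / (1.0 / tau_sink + 1.0 / tau_window)  # window open: extra loss channel
          print(n_open / n_closed)                          # ~0.77 - fewer photons in flight, dimmer room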
  2. Just to throw in a wrench ... Plain glass reflects about 4% of the light that hits it at each surface. It can even become brighter in the room if the curtains absorbed more than 96% of the light before they were pulled aside.
  3. This video sheds some light - but not much. Two key points to take away: - mass density was calculated from the attenuation of quasar light (if I got that correctly) - the research claims 5.2 sigma confidence of an over-density in said region - which is significant if true
  4. That one is easy. Any number of points in 2D connected into a contour (no two edges cross and all points are connected into the contour) represents a shape. I think we can extend this into higher dimensions (thus defining a 3D shape - but here edges must be replaced with faces or something like that - no edges intersect with other edges or faces, and faces are 2D shapes defined by some points and edges).
  5. Orange arrow - a thing that looks like the Laniakea super cluster. Red arrow - a thing that looks like a ring structure. And that is just a simulation. We can easily identify things that look like .... (insert your favorite shape there). The point of the cosmological principle is this - if we take some cube of the universe / some region large enough, and we take another such large region - they look roughly the same in structure - in this case "foamy". The element of that structure is the filament - and the bumps in those filaments, or parts of them, are on average below 1Gly long. That is the largest element of the structure. Now, if we were to take a region of space large enough and find a part of it that is say 2Gly or more in size and is distinctly more or less dense than the rest, and we don't see such a thing in a different large region of space - then we would say: look, it is a structure - it is an over-density that we did not expect. Something being in the "shape" of a circle, an elephant or a unicorn is not structure in that sense - it is just an ordering of stuff with the same average density - and that is fine for the cosmological principle.
  6. I think that we need to be careful when deciding what a structure is ... Here is an image that shows what we believe to be the structure of the universe on large scales: further, let's look at the Laniakea super cluster and its shape: This is our "home" super cluster - marked in blue is our Local Group in the Virgo super cluster, which is part of Laniakea. Now observe Laniakea with its neighboring super clusters: Laniakea is about 500Mly in diameter and its first neighbors are the Shapley super cluster and the Coma super cluster. In fact the Perseus-Pisces super cluster looks like it's also "connected". If we "add" all these structures into a chain - we get a structure that is 1.3Bly or more in size - but is it really a structure, or the "beginning" of the large scale cosmic foam from the diagram above? I'm sure one can identify very interesting "rings" and "walls" and "daisy chains" of galaxy clusters in several neighboring super clusters - but is it really a standalone structure, or are we just connecting the dots so that they resemble familiar shapes?
  7. That is some sort of issue with data transfer. Images are often read out from the sensor bottom-up - and at one point the communication was interrupted or something happened, and instead of the subsequent "scan line" being sent, the last one was repeated - probably because it remained in the "receive buffer" (there is no need to clear it since it will be overwritten by the next line - but as it happened, the next line did not arrive). A small sketch of that failure mode is below.
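      Purely illustrative sketch of that failure mode (the readout loop and drop rate here are hypothetical, not how any particular camera driver actually works): if the receive buffer is not cleared and a new line fails to arrive, the previous line simply appears twice in the image.

          import random

          def receive_line(y):
              """Pretend transfer: returns row y, or None if the transfer failed."""
              if random.random() < 0.01:      # ~1% chance of a dropped line (made up)
                  return None
              return [y] * 640                # dummy pixel data for row y

          buffer = [0] * 640                  # receive buffer, never cleared
          image = []
          for y in range(480):
              line = receive_line(y)
              if line is not None:
                  buffer = line               # normal case: buffer holds the fresh line
              image.append(list(buffer))      # dropped line -> previous row gets repeated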
  8. No idea. I was speaking in general about imaging, but I have never worked with SS so I have no idea what each particular command does. It could be either contrast enhancement - which is just an adjustment of the black / white point - or it could be a gamma setting - which is really a type of non-linear stretch.
  9. Review in this thread when it arrives and subsequent impressions?
  10. This is often believed, and it is true - but solely because of the way the software works. There is no 1:1 correspondence between captured light and emitted light. You can simply increase the brightness of your computer screen or other viewing device - just enter the settings of that device and fiddle with brightness / contrast. Also - different display devices have different brightness. The process from capturing an image to showing it on screen is always the same, and can be simplified like this:
      - camera captures a certain number of electrons of signal in the exposure
      - those electrons get converted into ADU - by a conversion factor we know as Gain / ISO - which is expressed in e/ADU units
      - those ADU values get scaled to display units by some conversion which is sometimes known as STF - or screen transfer function. The basic version of that is to set the black and white point to appropriate values.
      Now, if you always use the same physical units in this process - you will get an equally bright image every time. For example - if you convert the electron count in your exposure to electrons per second instead of using electrons per 10 seconds or electrons per 30 seconds, and similarly if you use the same gain settings and set the white and black point equally - you will get the same image (see the sketch below). On the other side of things - once you have captured a certain number of photons / electrons - no amount of the above math manipulation afterwards can change that, and the image stays the same - it is just emitted from the screen differently. This is why we say that the only thing that really matters is SNR. The difference between 10, 20 and 30 second subs is not the brightness - that is something you can adjust without changing the contents of the image (increasing brightness does not change the amount of noise, for example) - the difference is SNR, which you can understand like this: if you adjust parameters to get equal output for all three images, the 10 second one will be the noisiest, the 20 second one in between and the 30 second one the least noisy. On the other hand, if you pay attention to read noise and swamp it with LP noise - then 10, 20 and 30 second subs stacked to the same total time (say 5 minutes worth of each) will produce the same looking images if you adjust the output properly. There will be no difference in noise for equally bright images.
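      A minimal sketch of that pipeline (gain, flux, exposure times and black/white points below are made-up numbers): once the signal is normalized to electrons per second and the same screen transfer is applied, a 10 s and a 30 s sub of the same target come out equally bright on screen.

          gain = 0.25                               # e/ADU, made up
          flux = 12.0                               # electrons per second at some pixel, made up

          def to_screen(adu, exposure_s, gain, black=0.0, white=50.0):
              """ADU -> electrons -> electrons per second -> 0..255 display value."""
              e_per_s = adu * gain / exposure_s     # back to the same physical units
              v = (e_per_s - black) / (white - black)
              return round(255 * min(max(v, 0.0), 1.0))

          adu_10s = flux * 10 / gain                # same target, 10 s exposure
          adu_30s = flux * 30 / gain                # same target, 30 s exposure
          # different raw counts, identical brightness once normalized:
          print(to_screen(adu_10s, 10, gain), to_screen(adu_30s, 30, gain))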
  11. Maybe place it between the camera and the filter wheel? That way you will only have the weight of the camera hanging on it, and that is probably a bit less. M42 also won't be a problem as I'm guessing you are already using that to connect the EFW to the camera? The only drawback is that you'll need to redo flats when you change camera orientation (which you should really do anyway, just in case the telescope is causing uneven illumination and not just the filter wheel / dust on filters). You can always see if there will be any vignetting by using an approximation. You say that you are 40mm away from the sensor and you are using M42 - so let's say that you have 38mm of clear aperture (2mm on each side for the adapter). The APS-C diagonal is ~28mm so we have (38 - 28) / 2 = 5mm of "room" on each side. For the light beam to have converged to within 5mm of its center by the time it is 40mm from focus, F/4 is the limit (an F/4 beam is 10mm wide at 40mm from focus, i.e. 5mm from its center to its edge). I think you'll be fine at that distance with most scopes unless you have very fast optics - faster than F/4. A small sketch of that calculation is below.
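      A small sketch of that approximation using the numbers from the post (the helper name is just for illustration; this ignores where exactly the chief ray sits, so treat it as a rough check only):

          def min_f_ratio(clear_aperture_mm, sensor_diagonal_mm, distance_mm):
              """Rough no-vignetting limit: the beam may only be wider than the sensor
              corner by the available margin at the given distance from focus."""
              margin_mm = clear_aperture_mm - sensor_diagonal_mm    # total room, both sides
              return distance_mm / margin_mm                        # need this F-ratio or slower

          # M42 adapter (~38mm clear), APS-C diagonal ~28mm, 40mm from the sensor:
          print(min_f_ratio(38, 28, 40))    # -> 4.0, i.e. fine at F/4 and slower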
  12. Could be. Checking it requires a scope / eyepiece combination that has some field curvature. Usually a fast refractor with a wider field eyepiece will give you such a combination. You can check this if you focus in the center and, while keeping your eye relaxed, look at stars at the edge. If they are larger / a bit out of focus and you can "fix" this by either straining your eye or adjusting focus - then you have a case of field curvature (where the center of the field and the edge can't be in focus at the same time). Then it is just a matter of testing with / without contact lenses - if it's harder to do the "switch" from center to edge without touching focus, that is the reason.
  13. These might not be as big an obstacle as it may seem at first. Any near/far sightedness is compensated by the focus position (when you focus, you adjust for your dioptre). Sometimes astigmatism can be alleviated by choice of exit pupil. Small exit pupils will only use a small portion of your eye lens, which might not be as distorted as the whole thing. Many people with astigmatism get sharp views when looking at the planets at high magnification because of that - most of the time planets are viewed at high magnifications and exit pupils of 1mm or less. On the other hand - observing very dim objects like DSOs does not trigger the color sensitive cells, so objects are viewed in black and white - no need to worry about any color blindness.
  14. Yes, the eye is focused to infinity when relaxed while observing. Sometimes there is a bit of field curvature and our eye accommodates to it like it does in normal circumstances. It does the same with seeing, if the seeing changes are not too fast.
  15. No idea. I have never seen either simulated graphs or the actual view through a singlet lens acting as a telescope.
  16. You could use line filters - but that would not produce a good looking RGB image. R, G and B values represent a weighted sum over all frequencies. Or to be precise - for the sRGB color space, the R, G and B components have these color matching functions: Note that these color matching functions are sometimes below zero in value (which would be physically impossible) - which just means that the sRGB color space can't reproduce the full gamut of color vision - some colors that exist and that we can see can't be shown on a computer screen, as they would need to have a negative red value (in layman's terms - they would need to be less red than the pure green on the screen). In any case - single lines won't work (a small sketch of the weighted-sum idea is below). Unfortunately the deconvolution approach won't work either. I'll explain why. Spherical aberration and defocus depend on wavelength - in fact they are different for each individual wavelength. Two different stars in the image will have different spectra, and if you weigh the aberrations by those spectra (say one star is bluish and the other is red - in the blue one the dominant aberrations will be from the blue side of the spectrum, while for the red one they will be from the red side) - you get a different PSF for each, one that depends on the spectrum - which you no longer have, as you recorded the summed response and not the individual wavelengths. The software has no way of knowing which PSF to apply to which star - and different stars produce different PSFs because of this - so you can't deconvolve with one single PSF. You would need an adaptive PSF, but that adaptive PSF depends on information that you no longer have (the spectrum of the light). In principle you could do the following - take a singlet lens or an achromatic doublet and record Ha or OIII or any other interesting single wavelength (or small bandwidth) - and apply deconvolution as necessary. Since you are recording one wavelength, the aberrations will be the same for all stars, as they depend only on wavelength and there is just one. That would work - you can produce a narrowband image with a simple scope like that.
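      A small sketch of the weighted-sum idea (illustrative only - the arrays below are placeholders; real sRGB colour matching functions would have to be loaded from tabulated data):

          wavelengths = list(range(380, 781, 5))     # nm grid
          spectrum = [1.0] * len(wavelengths)        # placeholder spectral power distribution
          cmf_r = [0.0] * len(wavelengths)           # placeholder colour matching tables
          cmf_g = [0.0] * len(wavelengths)
          cmf_b = [0.0] * len(wavelengths)

          dl = 5.0                                   # nm step of the grid
          R = sum(s * r for s, r in zip(spectrum, cmf_r)) * dl
          G = sum(s * g for s, g in zip(spectrum, cmf_g)) * dl
          B = sum(s * b for s, b in zip(spectrum, cmf_b)) * dl
          # A single emission line only samples the matching functions at one wavelength,
          # so it cannot reproduce these full weighted sums - hence no true RGB from line filters.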
  17. This is correct. The question is - just how much more noise will there be in 10x1 minute vs 1x10 minute exposures, and that depends on how high the read noise is compared to the other noise sources - most notably the largest one, which is usually light pollution noise. Noise adds like linearly independent vectors - the square root of the sum of squares. If one component is much smaller (just to get an idea of what we are talking about here - if the read noise is x3-x5 smaller than any other noise level in that single exposure) - you simply won't see the difference. Humans can't tell the difference of, say, a 5% increase in noise. This 5% corresponds to ~x3 larger light pollution noise - in other words, if you have some read noise and per exposure you have x3 more LP noise, that is the same as having 5% more LP noise and no read noise at all (see the small calculation below). When you have 0e read noise, then 10x1 minute equals 1x10 minute as far as SNR goes (tracking and other artifacts not included - just signal quality). The only issue is that LP noise depends on the LP signal - which depends on sky conditions but also on the sampling rate of the camera, as it is the number of photons from the sky that end up hitting each pixel. General rule: the faster the optics, the shorter the exposures that can be used; the brighter the sky, the shorter the exposures that can be used.
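      A small calculation to back up that ~5% figure (the read noise value is arbitrary; only the x3 ratio matters):

          read_noise = 2.0                        # e per sub, illustrative
          lp_noise = 3 * read_noise               # LP noise x3 larger than read noise

          total = (read_noise ** 2 + lp_noise ** 2) ** 0.5   # add in quadrature
          print(total / lp_noise)                 # ~1.054 - only ~5% worse than zero read noise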
  18. This is a work in progress - and it requires knowledge of the tolerances of your printer. It is designed specifically for my printer and for the lens size and tube size that I plan to use, so I'm not sure if STLs would do you any good. Here is the FreeCAD model that you can tweak and adjust to suit your lens once you measure it - and adjust for your tube size (ID/OD). Hope you'll be able to use it (FreeCAD is open source so it's readily available). Lens cell.FCStd If you have any questions about it, I'd be happy to answer them.
  19. In theory yes. In practice no. The level of chromatic blur depends on the F/ratio of the lens. Even within a single color band - say from 400-500nm representing the blue part of the spectrum - you will have defocus between different wavelengths. Only a single wavelength of light will be in focus at any given time and all the rest will be out of focus. The trick is to keep that defocus at an acceptable level - and you can do that only with a very slow telescope. And I don't mean F/10 slow or F/20 slow - I mean a very, very slow telescope. See for example this: https://en.wikipedia.org/wiki/Aerial_telescope (back in the day refractors used a single lens). Even a faster achromatic doublet won't produce nice results using that technique.
  20. Hi and welcome to SGL. Jupiter will look tiny when you look at it through the viewfinder as it takes up a very small portion of the FOV. Your sensor has 5184 x 3456 px and Jupiter is usually around 100-200px in diameter in images (it really depends on the telescope used and the rest of the setup - a rough estimate is sketched below). A Barlow lens is the right choice - but a Barlow pushes the focus point further out, so you might need to add an extension tube to your focuser to be able to reach focus. Planets are usually not imaged the way the rest of things are - they are imaged with a technique called lucky planetary imaging (look it up and maybe watch a few tutorial videos on YouTube). A DSLR is also not the best choice for planetary imaging, but it can be used if it has a particular option (it needs to record video in 1:1 crop mode at a small resolution like 640x480). For your model's suitability check this: https://www.astropix.com/html/equipment/canon_one_to_one_pixel_resolution.html
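      A rough estimate of how big Jupiter comes out in pixels (the pixel size, focal length and Barlow factor below are assumptions, not taken from the post; Jupiter's apparent diameter varies roughly between 35" and 50"):

          pixel_um = 4.3            # typical DSLR pixel size, assumed
          focal_mm = 1200.0         # telescope focal length, assumed
          barlow = 2.0              # 2x Barlow, assumed
          jupiter_arcsec = 45.0     # apparent diameter of Jupiter, approximate

          scale = 206.265 * pixel_um / (focal_mm * barlow)   # arcseconds per pixel
          print(jupiter_arcsec / scale)                      # ~120 px across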
  21. As has been said above - ROI stands for region of interest. It is a handy little feature where you can select only part of the sensor to image with. This reduces the size of the captured image but allows for much higher transfer speeds. The USB connection can sometimes be a bottleneck when imaging and you simply can't transfer data fast enough, even when using a fast USB 3.0 connection (a rough bandwidth estimate is below). Planets like Jupiter and Saturn are very small on the sensor, so it does not make much sense to capture the whole frame unless you want to, say, capture Jupiter with its moons - something like 640x480 or 800x600 is more than enough. Anyway - if you check the specifications for a camera on the ZWO website - you will find some ROI settings and the associated max fps. This is from the ASI178MC. It looks like they stopped doing this for newer models as it is now "common knowledge".
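      A rough bandwidth estimate of why ROI helps (the throughput figure and per-pixel size are assumptions; real cameras add protocol overhead):

          usb3_mb_per_s = 400.0                 # usable USB 3.0 throughput, assumed
          bytes_per_px = 2                      # 12/14/16-bit data transferred as 2 bytes

          def max_fps(width, height):
              frame_mb = width * height * bytes_per_px / 1e6
              return usb3_mb_per_s / frame_mb

          print(max_fps(3096, 2080))            # full ASI178-sized frame: ~30 fps
          print(max_fps(640, 480))              # small ROI: several hundred fps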
  22. Out of interest, has anyone tried to make a Plossl eyepiece? It should not be too hard as it can be made in a symmetrical configuration. I don't know the exact prescription for each lens - but what about using available cemented doublets? A quick search on AliExpress returned this hit: 42mm diameter, 110mm focal length for about 10 euro with shipping. For two thin lenses of focal lengths f1 and f2 separated by a distance d, the combined focal length follows from 1/f = 1/f1 + 1/f2 - d/(f1*f2), i.e. f = f1*f2 / (f1 + f2 - d). (My first pass at the numbers used the 42mm diameter instead of the 110mm focal length - ignore that.) Two of these spaced at say 50mm would give f = 110*110 / (110 + 110 - 50) = 12100 / 170 = ~71.2mm of combined focal length. We can even aim for a certain FL - I wanted 45mm - but it turns out that with these lenses we can only get down to 55mm (at zero separation, which itself is not possible) and not less. Found it though! A D32 F86 doublet with a separation of 7.6444mm gives 86*86 / (86 + 86 - 7.6444) = ~45mm. Anyway, you get the idea ... (a small sketch of the formula is below)
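      A small sketch of that thin-lens combination formula (the helper name is just for illustration; real doublets will deviate a bit from the thin-lens approximation):

          def combined_focal_length(f1, f2, d):
              """Thin-lens pair: 1/f = 1/f1 + 1/f2 - d/(f1*f2)."""
              return f1 * f2 / (f1 + f2 - d)

          print(combined_focal_length(110, 110, 50))       # ~71.2 mm
          print(combined_focal_length(110, 110, 0))        # 55 mm - shortest possible with this pair
          print(combined_focal_length(86, 86, 7.6444))     # ~45.0 mm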
  23. An ASI662 or ASI585 would suit you well without the use of any Barlow - just directly attached to said scope. Get the ASI662 if the budget is limited, or get the ASI585 if you want to fit a larger part of the Moon in a single go. Both will work well for smaller targets, as you can set the ROI to the wanted size, but the ASI585 has the larger sensor, which means a larger part of the Moon in a single frame (if that is important to you).
  24. HDR composition in the camera does not necessarily entail intensity inversion. We do regular HDR composition in astronomy imaging - well, that sort of HDR composition the camera does. Normally, the range of intensity levels that a camera can capture is roughly 1000:1 for a 14-16 bit camera (if we don't want to show the noise, the lowest signal should really have an SNR of 5 or so - which means a value of about 25 electrons; dividing 65535 for 16 bits by that gives ~2600, and dividing 16384 for 14 bits by that gives ~650, so the middle ground is about 1000:1 - see the small calculation below). However, when we stack, each added sub increases this number. We can easily end up with 10000:1 (that is a difference of 10 mags - so we can see both a 5th magnitude star and a 15th magnitude star in the same image). The stretch maps that many intensity levels onto the 256 intensity levels that are available for a regular computer image (per channel, but let's not get into color stuff now - let's discuss black and white, as most of the information in the image comes from brightness). This stretch can be linear or non-linear - but it can keep the order of intensities. All of that is HDR composition (either taking two exposures of different length and combining them, or stacking multiple exposures - the result is the same) - but it won't necessarily produce an unnatural looking image, because there is no intensity inversion. Look what happens when I do even a small intensity inversion: while in the first image the ceiling and walls of the passage look a bit brighter than the scene would suggest, they still look "natural", but as soon as you do the inversion - you start seeing artifacts.
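      The small calculation behind those ratios (25 electrons is just the SNR ~5 shot-noise figure from above):

          from math import log10

          faintest = 25                                  # e, roughly SNR 5 when shot-noise limited
          print(65535 / faintest, 16384 / faintest)      # ~2621:1 (16-bit), ~655:1 (14-bit)

          # a stacked range of 10000:1 expressed in stellar magnitudes:
          print(2.5 * log10(10000))                      # 10.0 mags, e.g. mag 5 and mag 15 together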
  25. I would like to point out a few things on this topic.
      - A fixed set of intensities mimics the way we see. We are capable of seeing a larger intensity difference than our computer screens can show - but only because of the anatomy of our eyes - we have an iris that opens and closes depending on the amount of light. If we fix the size of our pupil, there is only a small range of brightness that we can perceive at the same time. Having a fixed range of brightness in processing is not a limitation of image processing / display systems - it is a limitation of our physiology for the most part.
      - The problem is not showing a very large intensity range in a single image. The problem is with detail / structure. To be able to see detail and structure in something we need to distinguish enough of the intensity levels that compose that detail and structure. Since we have a limited "budget" of intensities, we can choose which part of the large range we will show in fine detail / granularity. We can opt to show a fine grained high intensity range, or medium intensity range, or low intensity range. We can also opt to show, say, the low and medium range but with less granularity - or maybe everything with the least granularity. This is a processing choice.
      - There are tools that perform "high dynamic range" - but I would caution against using those. They will show both low intensity and high intensity regions in high fidelity. Or rather - the tool will attempt to do this by using a certain technique. Unfortunately that technique is unnatural and in my opinion should be avoided. It relies on a non-monotonic stretch of intensities - something our brain easily recognizes in everyday life as unnatural.
      Our brain does not feel cheated if we keep one thing in our processing - the order of intensities. If something is brighter in real life and we keep it brighter in the image - all is good, but if we reverse intensities - our brain will start to rebel and say: this is unnatural. This translates into the slope of the curve in the processing software - if we keep it monotonically rising from left to right, the image will look natural (it may look forcefully stretched and bad in a different sort of way - but it will look ok in the domain I'm discussing) - so: that is ok - and we have no immediate objection to the image, but this: feels unnatural - note how the curve first rises, then falls, then rises again - it is not constantly rising when going from left to right. This in turn produces some very unnatural looking regions of the image - like strange clouds, or parts of buildings that lack detail and look flat / gray. The slope of the curve in curves is the level of detail in a given intensity region - the more it slopes, the more detail; if it's flat, less detail. A basic stretch in astronomy images looks like this: and this is for a reason - we want to show detail in the faint regions - the left of the curves diagram - so we set that part to a steep slope, and we don't care about detail in the high intensity regions - so we leave that flat on the right. Here you can see that - in order to have two regions with a steep slope - we need to go back down and change the direction of the slope - which is no good as it reverses intensities and confuses our brain (in effect it creates a negative image superimposed on a positive image - and we find a negative image unnatural). You might be applying the same technique without realizing it - if you create two layers that you stretch differently and then selectively blend them, you might end up in the above situation with a reversal of slope (a small sketch of the monotonicity check is below).
      I'll try to find a classic example of this in high dynamic range processing and post a screenshot. Here it is - bringing out the central part of M42 with all the detail while still maintaining detail in the faint outer reaches. This is intensity inversion.
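      A tiny sketch of that monotonicity point (both curves below are made up just to illustrate): a typical astro stretch keeps rising, while a naive "two steep regions" curve has to fall back down somewhere - which is exactly the intensity inversion described above.

          def is_monotonic(curve):
              return all(b >= a for a, b in zip(curve, curve[1:]))

          x = [i / 100 for i in range(101)]          # linear input intensities 0..1

          # typical astro stretch: steep for faint signal, flat for bright signal
          astro_stretch = [v ** 0.25 for v in x]
          print(is_monotonic(astro_stretch))         # True - no intensity inversion

          # naive "HDR" curve: shadows stretched to white, then highlights restart from black
          hdr_like = [v / 0.3 if v < 0.3 else (v - 0.3) / 0.7 for v in x]
          print(is_monotonic(hdr_like))              # False - intensities get reversed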