Everything posted by vlaiv

  1. The problem is that we are used to understanding things by analogy. Most of the time this works well, but sometimes it fails - and in those moments we are very confused and come up with phrases like "no one can understand quantum mechanics", or questions like "how can empty space be expanding?". Here is another question that you probably never thought about, but which is similarly "weird" when you think deeply about it: why do two electrons repel each other? Or another one: why do some particles have a property called spin? What is spinning? Are those particles little balls that are spinning? See there - I just tried to apply an analogy from things I know to electron spin, but the simple fact is that spin is an intrinsic property of the electron. It is something that we observe, and we don't necessarily need to explain it in everyday terms. Similarly, light travels at a fixed speed. Weird? Yes, but only because we are used to other things traveling at different speeds and behaving differently - and only when we try to apply analogy to light do we get confused. You never thought about it, and never had a chance to see it, but empty space simply expands. That is a property of empty space (much like electrons repelling each other, or having spin, or the speed of light being fixed). From General Relativity we know that energy and matter bend space and time. Well, it turns out that space "bends the other way" when empty.
  2. It is a bit more serious than the Astronomy Tools CCD Suitability calculator, but still flawed in my view: 1. The effect of mount tracking / guiding precision is not taken into account. 2. The effect of pixel blur on final FWHM is exaggerated (it has a very minor effect, which can be demonstrated easily). 3. Under/over sampling is improperly reported.
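The pixel-blur point can be sketched numerically. A minimal sketch, assuming the usual approximation that a square pixel acts as a box filter adding variance p²/12 to the Gaussian PSF (the 2" seeing and 1"/px sampling are illustrative values, not from any particular calculator):

```python
import math

def fwhm_with_pixel_blur(seeing_fwhm, pixel_size):
    """Approximate combined FWHM of a Gaussian PSF and a square pixel.

    Assumes pixel blur behaves as a box filter of variance p^2 / 12
    (a common approximation) and that the variances simply add.
    """
    sigma = seeing_fwhm / 2.355                       # FWHM -> Gaussian sigma
    sigma_total = math.sqrt(sigma ** 2 + pixel_size ** 2 / 12)
    return 2.355 * sigma_total

seeing = 2.0   # arcsec FWHM of the seeing-limited star image
pixel = 1.0    # arcsec per pixel
combined = fwhm_with_pixel_blur(seeing, pixel)
print(f"{combined:.3f} arcsec")  # ~2.11 arcsec, only ~5-6% above the seeing FWHM
```

Even at this fairly coarse sampling the FWHM grows by only a few percent, which supports the claim that pixel blur is a minor contributor.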
  3. No - the size of the universe and the rate of expansion are not directly related. Don't think of the rate of expansion as the speed at which the "ends" of the universe move away. Think of it as the speed at which two close points move away from each other (or, to be more precise, any two points in the universe separated by a given distance).
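That proportionality is the Hubble-Lemaître law, v = H0 × d. A minimal sketch, taking the commonly quoted H0 ≈ 70 km/s/Mpc as an assumed round value:

```python
H0 = 70.0  # km/s per megaparsec (approximate measured value, assumed here)

def recession_speed(distance_mpc):
    """Recession speed, in km/s, of a point at the given distance in Mpc."""
    return H0 * distance_mpc

# The rate of expansion is per unit of distance, not a speed of the
# universe's "ends": doubling the separation doubles the recession speed.
for d in (1.0, 2.0, 10.0):
    print(f"{d:5.1f} Mpc -> {recession_speed(d):6.1f} km/s")
```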
  4. It depends on how you define the speed of expansion. The universe is not two distant points moving away from each other at some constant speed - it does not work like that. The best way to think about it is this: picture the universe as a grid, and have any two neighboring points move away from each other with some velocity. I've marked a few pairs of points - and each pair moves apart with the same velocity. However, if you measure the recession speed of points two grid spacings apart, you will find that they move away from each other at twice that speed. Simply put, in the above image, if we look at the top pair - the left point is moving away from its first neighbor with some recession speed, and the right point is also moving away from its first neighbor with that speed - so the two move apart at twice that speed (as speeds locally add "normally"). At the current time this rate is H0, the Hubble constant, measured to be around 70 km/s/Mpc - 70 km per second per megaparsec of distance. In the above image, if two neighboring points were one megaparsec apart, they would move away from each other at 70 km per second. Points that are two megaparsecs apart would move apart at 140 km/s, and so on. The important bit to remember is that we don't have our "warp engine" on - we are not actually moving through space away from every other point. It is the space between us that is stretching. We are not moving away from distant objects at high speed in the sense of special relativity - otherwise, we would measure those distant objects to have enormous mass, close to infinity (and that would cause a whole host of issues). It is the space that is stretching. It also depends on how you define "taking up". The observable universe is 93,000 Mly in diameter and the Virgo cluster is 5 Mly in diameter - that is 93,000 / 5 = 18,600 times smaller in diameter. But if we want to know about "taking up", we need to think in volume and not diameter.
We can simply switch from diameter to volume by raising things to the third power: 18600^3 = 6,434,856,000,000, so the Virgo cluster takes up about 1/6,434,856,000,000 of the volume of the observable universe (as measured by a ruler). You can't compare those two figures directly, as they are not comparable quantities. 13.7 Bly means that light took that much time to travel a certain distance. As I've explained, the light traveled while space was stretching, and the distance the light itself covered does not reflect the current spatial extent of the observable universe. Imagine that you run along a train. The train is 1 km long, but it travels really fast between two major cities - a distance of 50 km - in the time it takes you to run from one end of the train to the other. Your fitness watch will tell you that you only ran 1 km, yet you managed to cover a distance of 50 km. If you did not know about the train, you'd think you ran from London to Luton in a dozen minutes. It is the same with light - its "odometer" reads 13.7 Bly, but the point where it started is now 46.5 Bly away, because the light was on a train (or rather, inside an expanding universe). For that reason we can't take 13.7 Bly as a realistic figure if we want to compare volumes, as it is a distance combined with a view backward in time (the further we "look", the further back in time we see).
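The diameter-to-volume arithmetic above can be checked in a couple of lines:

```python
observable_diameter = 93_000  # Mly, diameter of the observable universe
virgo_diameter = 5            # Mly, rough diameter of the Virgo cluster

ratio = observable_diameter / virgo_diameter   # 18600x smaller in diameter
volume_ratio = ratio ** 3                      # volumes scale with the cube

print(ratio)         # 18600.0
print(volume_ratio)  # ~6.43e12 - Virgo occupies about 1/6,434,856,000,000 of the volume
```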
  5. I don't think it is dew, unless the dew is perhaps on the filter. It would have to form on an element in the converging beam, and in such a place that some of the converging beams are unaffected by it while others (hitting that part of the sensor) are. An example would be something on one edge of the secondary, if the secondary is oversized and the FOV in question is smaller. This is usually not the case - there is vignetting due to the secondary on larger sensors in fast scopes - so if there is dew on the secondary it will affect the whole FOV. It could be tube currents / hot air passing on one side of the tube and not the other. In general those would affect the whole field, but it can happen that they disturb only part of the converging beam, and then they cause issues only on one side of the frame. The fact that the fan made a difference supports this idea.
  6. Depends on how you measure. If we had a ruler that was big enough and could stretch it to the furthest parts of the universe that we see now, that ruler would be ~46.5 Bly long. If instead we time the light that started back in the past from the point we consider the "edge", it has taken ~13.8 billion years to reach us - so we would conclude that the edge is ~13.8 Bly away. How come? We know that speed, length and time are related by a simple formula: speed = distance / time. Simple, right? Well, yes - if you have space and time that don't stretch and bend. Our universe is expanding, and the above formula does not hold. Light indeed traveled for 13.8 billion years to reach us, but while it was traveling, the space "in front" of it and "behind" it was also expanding. Space "in front" contributed to how much the light still needed to cross, while space "behind" the light does not contribute to the length measured from the speed of light and the travel time - yet that space expanded too, and if we laid a ruler down "now", it would show the ~46.5 Bly above to the place where the light first started. This, by the way, is the size of the observable universe - not the whole universe, just the part that we can see. We simply can't see further than that, because light from those parts has not reached us yet. There are parts of the universe that we will never see, no matter how long we wait for light to reach us, simply because the universe is expanding faster than light can catch up. Mind you, the universe is not expanding faster than light locally - it is just that there is so much space between here and where the light is that slow local expansion builds up, over such large distances, into more distance than light manages to cover in a given time period - and there is always more distance left to cover than there was at the start of the journey. We do know that the universe is at least 1000 times larger than what we can observe. This is our current limit on the measurement of the flatness of the universe.
The universe can have a certain geometry to it - positively curved, negatively curved or flat. We measure it to be flat, but the problem is that we can only ever improve our measurement, and never fully confirm flatness (there is always a margin of error, and no matter how precisely we measure, there remains the possibility that it is curved with a curvature smaller than our margin of error). In any case, given our current measurements, we know that the universe is somewhere between 1000 times larger than the observable universe and infinite. So yeah - huge.
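The gap between the ~13.8 Bly light-travel figure and the ~46.5 Bly ruler figure can be sketched with a quick numerical integral of the comoving distance in a flat ΛCDM model. This is a rough sketch: H0 = 70, Ωm = 0.3, ΩΛ = 0.7 are assumed round parameters, and radiation is ignored, so the result comes out slightly lower than the quoted 46.5 Bly:

```python
import math

H0 = 70.0                       # km/s/Mpc, assumed round value
OMEGA_M, OMEGA_L = 0.3, 0.7     # matter / dark-energy fractions, assumed
C = 299_792.458                 # speed of light, km/s
MPC_TO_GLY = 3.2616e-3          # 1 Mpc ~ 3.2616 Mly = 0.0032616 Gly

def E(z):
    """Dimensionless Hubble rate H(z)/H0 for a flat matter + Lambda universe."""
    return math.sqrt(OMEGA_M * (1 + z) ** 3 + OMEGA_L)

def comoving_distance_gly(z_max, steps=200_000):
    """Comoving distance (c/H0) * integral of dz/E(z), by the trapezoid rule."""
    dz = z_max / steps
    total = 0.5 * (1 / E(0) + 1 / E(z_max))
    for i in range(1, steps):
        total += 1 / E(i * dz)
    return (C / H0) * total * dz * MPC_TO_GLY

# Distance "now" to matter whose light left around recombination (z ~ 1100):
# roughly the radius of the observable universe.
print(comoving_distance_gly(1100))  # ~45 Gly - same ballpark as the 46.5 Bly quoted
```

The light-travel time for that same light is ~13.8 billion years, yet the source is now three times further away in ruler distance, which is exactly the distinction made above.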
  7. Yes. That depends. Ha is the strongest signal in many objects. People who image in narrowband know that they must spend a significant amount of time capturing OIII and other wavelengths compared to Ha on most targets (some are exceptions). Similarly, it would seem that H-beta is a much better candidate for observing, since we are far more sensitive there - but H-beta emission lags behind Ha significantly (if it is present at all). Check this resource for example: https://www.prairieastronomyclub.org/filter-performance-comparisons-for-some-common-nebulae/ Interesting discussion with actual hands-on experience observing through an Ha filter: https://www.cloudynights.com/topic/785748-visually-observing-halpha-red-nebulae/
  8. The luminance response is given for normal lighting conditions and essentially represents this: when we present an image in black and white without color information, different colors will look to us like different brightnesses. Here it is applied to individual wavelengths rather than broad-spectrum colors (but broad-spectrum color brightness is derived from the individual wavelength brightnesses of the spectrum). What you want to look at is the rod sensitivity curve, where the dotted line is linear sensitivity (we don't perceive linearly but rather logarithmically - so if something in the graph is at 50% of something else, that does not mean it looks half as bright to us). Combine this with the fact that the graph is normalized and does not represent the absolute sensitivity of the different kinds of receptors. In a previous post I showed the actual blue, red and green cone sensitivities relative to each other (non-normalized), where you can see that we are by far the least sensitive to blue (or rather, blue cones are the least sensitive of the three). Now mentally add rows 2 (very light sensitive), 6 (stimuli added over time), 7 (have more pigment than cones, so can detect lower light levels) and 9 (x20 more of these cells in the eye than the other type), and you can see that even if the graph almost "vanishes" at Ha, we can still see it at night. Not as well as other wavelengths, that is for sure - but we are not effectively blind to Ha in scotopic vision.
  9. That is not quite the response curve of human vision. It is the luminosity function, which is read like this: given two colors at the same light intensity, a group of observers will report their relative brightness as per that curve. In photopic vision it translates to the familiar curve (do keep in mind that computer screens can't display colors as saturated as an actual rainbow, so if we compare, say, 540 nm from a Baader Solar Continuum filter to the graph, the green on screen looks rather dull). If you have a uniform light source - like standard illuminant E - and you shine it at something white and observe through two filters of the same band pass and same peak transmission, but one is OIII and the other Ha, the relative brightness will be as per that curve: Ha will seem darker. But the diagram does not tell you whether Ha will be detected or not, as it applies to bright scenarios (it is a good guide for planetary observation). On the other hand, we have this:
  10. That makes sense. At first I thought it was likely to be IFN showing up as an OIII feature due to funny processing, but now, after you've pointed this out (which makes sense), and with the above calculation of photon rate, I'm slightly more convinced that it could be a legit thing. I'd like independent confirmation though.
  11. Maybe we can go about it the other way. An erg is 1e-7 J, so the total amount of energy is 4 × 10^-25 joules per cm squared per second per arcsec squared. The energy of a single photon is hc / lambda, so let's crunch the numbers: hc = 1.98644586... × 10^-25 J·m, and the OIII wavelength is 500.7 × 10^-9 m = 0.5007 × 10^-6 m. Dividing the two gives ~4 × 10^-19 joules per photon. To get the number of photons per cm squared per second, we divide the two figures: 4 × 10^-25 / 4 × 10^-19 = 10^-6. That is one photon per cm squared per million seconds per arcsec squared - or roughly one photon every few hours per arcsec squared for a perfect 4" telescope (100% transmission, 100% sensor QE and no atmospheric attenuation). Ok, I'll buy into the premise that one needs hundreds of hours to detect it. At least we have a baseline for anyone attempting to capture it - to calculate the needed exposure.
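The arithmetic above can be put into a short script (the 4" aperture is taken as 10.16 cm, with the same idealized 100% transmission and QE as in the estimate):

```python
import math

h = 6.62607015e-34       # Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

surface_brightness = 4e-18 * 1e-7   # erg -> J: J / cm^2 / s / arcsec^2
wavelength = 500.7e-9               # OIII line, in meters

photon_energy = h * c / wavelength             # ~4e-19 J per photon
flux = surface_brightness / photon_energy      # photons / cm^2 / s / arcsec^2

aperture_cm = 4 * 2.54                         # 4-inch telescope, in cm
area = math.pi * (aperture_cm / 2) ** 2        # ~81 cm^2 collecting area

per_hour = flux * area * 3600                  # photons / hour / arcsec^2
print(flux)      # ~1e-6 photons per cm^2 per second per arcsec^2
print(per_hour)  # ~0.3, i.e. about one photon every few hours per arcsec^2
```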
  12. Yes, a slightly broader OIII filter is a much better choice than a blue filter. And if I may ask another question - what do you think of the signal strength inversion in the presented images?
  13. In the paper, it says that the estimated surface brightness is: 4 ± 2 × 10^-18 erg cm^-2 s^-1 arcsec^-2. How am I supposed to read that? With a wide enough margin, might it not even be there, at 0 surface brightness? Anyway, if it is 4 × 10^-18, what would that be in magnitudes (if anyone has a handy calculator)?
  14. What do you make of the supposed continuum subtraction method? (I know we don't have much data, but for the sake of argument let's go with what we know: a blue filter image was somehow subtracted from the OIII image to remove residual light that made it through the filter and is not OIII.)
  15. Do people find value in these mounts and, if so, what is it? Portability? Lack of backlash? Spec-wise, I don't see them being high-performing mounts: peak-to-peak periodic error of >25"; a periodic error period of 288 seconds - faster than on other mounts, which puts strain on guiding (faster change in periodic error); and 0.17"/microstep tracking resolution, in the same ballpark as, for example, the HEQ5 or EQ6 (in fact those have ~0.14"/microstep). A reduction of 300:1 requires the steppers to operate at 128 microsteps. The period is actually one full revolution of the stepper motor: 200 steps × 128 microsteps = 25,600 microsteps per revolution. The sidereal rate is 15.041"/s, and if the stepper resolution is 0.17"/microstep, that makes ~88.5 microsteps per second (sidereal rate / resolution). 25,600 microsteps per motor turn / 88.5 microsteps per second = ~289 seconds per motor turn. I wonder if the PE comes directly from the stepper motor? In any case, I see these mounts as overly expensive for what they offer - unless I'm missing something that people value.
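The step-rate arithmetic above works out like this (0.17"/microstep and 128 microsteps are from the spec quoted above; 200 full steps per revolution is the standard figure for such steppers):

```python
SIDEREAL_RATE = 15.041   # arcsec per second
RESOLUTION = 0.17        # arcsec per microstep (from the mount spec)
FULL_STEPS = 200         # full steps per motor revolution (typical stepper)
MICROSTEPPING = 128      # microsteps per full step

microsteps_per_rev = FULL_STEPS * MICROSTEPPING      # 25600
microsteps_per_sec = SIDEREAL_RATE / RESOLUTION      # ~88.5 at sidereal rate
seconds_per_rev = microsteps_per_rev / microsteps_per_sec

print(seconds_per_rev)  # ~289 s, matching the quoted 288 s periodic error period
```

The near-exact match between one motor revolution at sidereal rate and the quoted periodic error period is what suggests the PE originates at the stepper motor itself.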
  16. Yes, and that is why we have mayonnaise. I'm serious. There is an effect explained by this, called the Casimir effect (https://en.wikipedia.org/wiki/Casimir_effect). In order to understand this effect, we must use virtual particles popping in and out of existence. It turns out that this effect is responsible for mayo not behaving like a liquid and keeping its shape.
  17. Ok, so here is a bit of a brain teaser with respect to that. When we say borrow something, we mean take it with the intent of returning it. We could take it from a "parallel universe", from the "future", or from right here in the space we are talking about. The thing is, there is no such thing as negative energy, so we can't borrow it if there is no energy present. I'm not going to entertain notions of borrowing from a "parallel universe" or "from the future" (or could we borrow from the past? And if we then need to repay it to the future, does that mean things are in some sense deterministic - because what happens if we don't pay it back?). Energy is related to mass - almost equal to mass - one can produce the other and vice versa, and we still have not seen negative mass. That is just one example of how we don't define negative energy. Energy is also by definition the "ability to do work", and while I can personally be on the negative side of my ability to do work, the same can't be said for particles. Let's just talk about borrowing from space itself. This implies that there is a certain pile of energy present in space - some sort of energy density. Given the uncertainty principle for time and energy, we can "borrow" quite a bit of energy for a short amount of time. Now, if the energy density is constant, it means we must "pull" energy from an ever larger volume of space to satisfy the peak energy we are borrowing. At some point we run into a problem with relativity - we can't "pull" energy fast enough while respecting the speed of light for energy transfer. So there is a limit to how much energy we can borrow given a certain energy density, and since the borrowable amount happens to be huge, it follows that the vacuum contains a huge energy density. But this contradicts relativity: the curvature of spacetime depends on the amount of energy in it, and if we indeed had huge energy stored in empty space, we should be able to easily detect the curvature of spacetime in vacuum.
But this is not the case. So the question of where this energy comes from is indeed a sound one, and I think we still don't have an answer to it.
  18. This math is what I'm worried about. I haven't seen it mentioned in the paper, and there is no description of what they did - just a brief mention in the video - and from that I can only conclude that it might be the wrong approach and what is causing the artifact (say, part of the IFN being rendered as OIII because of it). I thought about the possibility of using continuum removal, and the only place I see it possibly working is with another NB filter of slightly wider band - maybe 6.5 nm and 4 nm. But then, what is the point of using the wider band in the first place, when the same imaging time could be spent with the narrower band filter? The wider band filter will simply produce a worse SNR image, and mixing worse SNR data with good SNR data can degrade the whole thing. Then there is the mention of using blue to do it - but why not green as well? Both blue and green filters are designed to pass OIII.
  19. Please don't feel that way. I like being contradicted. It makes me stop, re-question myself and think harder about my claims. If I never do that, I will miss all the times I'm wrong - and it is a bad thing to go around claiming stuff while being wrong. Well, I misread your post for sure, but then again, I'm not sure I understand what you mean by: if something can be seen, it can't exist. I guess it is a matter of semantics. Maybe I can give an example that is confusing me. I can see that nothing satisfies the following condition: x + 1 = x + 2 - see what I did there? I "saw" "nothing". Kidding aside, you are right: if by definition nothing can't be seen, and we see it, then it can't exist as such - but what would be a definition of nothing that says it can't be seen?
  20. Ok, I'm also not overly happy with those black and white images. I see two troubling features that I'd like to see explained before I accept this. 1. Features in the lower image that are of equal brightness to the detected feature, but not related to it. There seems to be some sort of signal on both ends of M31 of similar intensity to the OIII feature - in fact, a whole half of M31 seems to be "swimming" in this signal in the lower image. The top image has too narrow a field of view to tell whether it is present there as well. 2. The difference in stretch and brightness of the feature. I'd expect the two images to show the same amount of stretch for the same / similar brightness of the feature. The feature looks to be about the same brightness in both images (dark versus light - feature vs background) - however, the top image is significantly less stretched than the bottom. In the top image there is a clear gap between M31 and M110; in fact, this "bridge" between the two galaxies is fainter than the OIII feature (less dark in the inverted image). In the bottom image it is the opposite - the bridge between M31 and M110 is very saturated (deep dark in the inverted version) in comparison to the feature. This is impossible if both images show pure OIII and are "normally" stretched - even with a non-linear stretch - as long as the stretch preserves the order of values (larger values stay larger). I don't think the presented images are pure OIII, and I believe they have been manipulated in some way, resulting in two different relative brightnesses (one has the bridge brighter than the feature and the other has it the other way around) - not something I would expect from a regular feature.
  21. Here is a brief paper with a very brief description of the process (no continuum subtraction is mentioned): https://iopscience.iop.org/article/10.3847/2515-5172/acaf7e?fbclid=IwAR12grzwnrY-GHKDBsR9sp9s3sMGPR-hQb2-TBZbi3cNsNfbCJW-98wkG6M It looks like some professional astronomers were involved in the paper. (Well, that is where I got my black and white images.)
  22. I would not go so far as to say that if something can't be seen, it does not exist. I know that by "see" you mean detect in any suitable way, but I'll give you an example of something that you can't "detect" - and yet it exists: pi. Yep, the mathematical constant pi. It is a real thing - it is the ratio of the circumference to the diameter of a circle. Not any one circle - every circle. It is also part of so many relationships and equations, and pops up in places where you might not necessarily expect it. We know a lot about it, really, and all of it was deduced by "thought experiment" of sorts. No one has ever seen it, measured it (to anywhere near the precision we are able to calculate it), or written it down (again, to satisfactory precision) - yet it is there, it is real and it exists.
  23. It could be a legit thing, and that is why I'd like to see more data that is also more convincing. It could be the same data they used, but without fancy processing like that "continuum subtraction" - just good old noisy monochromatic data showing that something is indeed there. Or third-party confirmation that has not been processed in the same way, of course. We see this all over the place - people try to push data beyond what it really shows by using "fancy" / "shiny" new algorithms that often employ AI and "do wonders". I'm not completely against the use of such things to produce a nice image - but not for anything that is remotely scientific.