Everything posted by vlaiv

  1. I would recommend the following workflow to best match color / mono data from two different sources:
     1. Capture mono at the resolution you want to work with (best to keep things above 1"/px if you can, as there is not much point going higher res than that).
     2. Capture OSC data at any resolution (again, use reducers or whatever to get to a reasonable sampling rate - but if you can't, you'll fix that later).
     3. Stack the mono data.
     4. Stack the OSC data using super pixel mode or split debayer mode.
     5. Bin the OSC data to the first suitable resolution lower than the mono data. Say you captured mono at 1.1"/px and your stacked color data (after debayering and stacking) is at 0.7"/px - then bin your color data to 1.4"/px (a small binning sketch follows below).
     Register the color data against the mono data using a registration method that can handle resizing - an ImageJ plugin can easily do this, and I suspect some other software can also handle a mismatch in resolutions. In any case, you should register the OSC data against the mono and not the other way around, so that the OSC data gets stretched / squeezed and the mono data remains at its base size. This is because color data is not that important for sharpness perception. In the end, compose the data into a single image.
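     A minimal numpy sketch of the binning step described above (steps 4 and 5), assuming the stacked data is already loaded as arrays; the file shapes and the 2x2 factor are illustrative, and registration itself is left to ImageJ or whatever alignment tool you use:
     ```python
     import numpy as np

     def bin2x2(img):
         """Average-bin a 2D image by a factor of 2 in each axis (software binning)."""
         h, w = img.shape
         h, w = h - h % 2, w - w % 2              # crop to even dimensions
         return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

     # Example: OSC stack at 0.7"/px, binned 2x2 -> 1.4"/px,
     # ready to be registered against the 1.1"/px mono stack.
     osc = np.random.rand(3, 2000, 3000)          # placeholder for a stacked RGB image, shape (3, H, W)
     osc_binned = np.stack([bin2x2(ch) for ch in osc])
     print(osc_binned.shape)                      # (3, 1000, 1500)
     ```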
  2. You can do both. You can bin an OSC sensor to produce mono data, and you can bin it in a special way so that it is binned x2 and still produces color data. Both of these are "sub optimal" - or rather they don't quite behave as you would expect from a software binning / SNR point of view.
     First - producing mono data. You can bin Bayer matrix data and you will get mono data - but it won't be the same data as produced by a mono sensor, for a couple of reasons:
     - Bayer filters lose a bit of QE (like any filter does - even in the part of the spectrum they pass). Compare mono vs OSC QE for the same sensor (screen shot taken from https://astrojolo.com/gears/colour-camera-versus-mono-price-of-comfort/).
     - The OSC filters overlap a bit, so you'll get a "repeat" of some of the data when you combine the colors - see the above graph at 490nm: G+B will be higher / will create a spike over the regular QE, so the QE curve will be altered by the process of binning.
     - It is really hard to predict the impact of read noise and overall SNR when doing this - partly because of the difference in the QE graph, and because you need to add the read noise of each pixel.
     If you accept all of that, then yes, you'll get mono data (with a particular QE curve and particular read noise), like a normal mono camera.
     You can also produce OSC data by binning every other pixel - that is, binning the red channel separately, green separately and blue separately. However, such an image will have an x4 lower sampling rate than the pixel size would suggest. If that is OK with you, then yes, you can bin and preserve OSC data. In fact, if you use super pixel mode to debayer your data and then bin your subs after debayering, you effectively have the same thing (see the sketch below).
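     To illustrate the two options above, here is a rough numpy sketch. The RGGB Bayer layout and even frame dimensions are assumptions - adjust the offsets for other patterns:
     ```python
     import numpy as np

     def bayer_to_mono_bin(raw):
         """Bin each 2x2 Bayer cell (R, G, G, B) into one 'mono' pixel by averaging."""
         h, w = raw.shape
         h, w = h - h % 2, w - w % 2
         return raw[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

     def superpixel_debayer(raw):
         """Super pixel debayer for an RGGB mosaic: one RGB pixel per 2x2 cell,
         at half the sampling rate of the raw pixel grid (assumes even dimensions)."""
         r = raw[0::2, 0::2]
         g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0
         b = raw[1::2, 1::2]
         return np.stack([r, g, b])               # shape (3, H/2, W/2)

     raw = np.random.rand(4000, 6000)             # placeholder for a raw Bayer frame
     mono = bayer_to_mono_bin(raw)                # 'mono' data with the blended OSC QE curve
     rgb = superpixel_debayer(raw)                # color data at x2 lower sampling rate
     ```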
  3. I'd say that is an understatement https://en.wikipedia.org/wiki/Cosmological_constant_problem
  4. Yes, at those distances, but not under the apple tree
  5. Anything we conclude is only a possibility, not a fact. You could say that things falling in gravity is a fact, but I'll tell you that it has only been our experience so far - we can't really state it as a fact, as it could change at any moment. The only thing you can say is that, within your experience, it has been so. Was it so 10,000 years ago? We can easily say that it is very, very likely that it was - but we don't know that for a fact.
     This should not bother you, though, as any scientific theory can fall at any moment - and when it does, we will work to create a better one that fits more data. There is no clear point at which we can say: you know, expansion of the universe only fits this much data, and now we have gathered some more data and we consider it a fact. Our confidence in a theory grows with each new confirmation, and any deviation is cause to stop and examine what is going on - no matter how well established the theory is.
     Lambda CDM is a rather good cosmological model that explains many things. It is not perfect, as we can now attest - we have different measured values for the Hubble constant from two sources, so either one of them is making a serious measurement error, or our model is somehow flawed. Further investigation will tell.
  6. That really depends on the curvature of the universe. If curvature is zero or negative, we have an infinite universe. If curvature is positive, we can have a closed universe with finite spatial extent. In some special cases, zero curvature can also mean a closed universe - but that requires a non-isotropic universe, and the cosmological principle states that the universe is both isotropic and homogeneous on large scales. The currently measured value of curvature is very close to 0 - it is actually 0 within some margin of error (0.1% or so) - so we can't really tell if it is zero or just very small within measurement error. In fact, this is a problem, as we can never measure it to be exactly 0 - there will always be some measurement uncertainty, and we will always wonder if it is perhaps just a small positive or small negative value. In case it is a small positive value, the universe can be finite, and its size in that case is at least x18 the size of the observable universe - so at least 18 times larger than what we can see.
  7. I understand what you are saying, but at some point we simply need to accept that nature behaves in some fundamental way that maybe can't be explained further with more basic principles / constituents. Why does energy bend space / time? Why is it that space can be bent? Part of that question comes from our preconception of what space is supposed to be like. We did too much Euclidean math and are used to thinking of space in terms of some sort of coordinate system that is "rigid". That in turn creates a sense of puzzlement when we encounter the idea that both space and time can be warped. We can ask why there is dark energy and what it is, in the same sense that we can ask what time is and why it exists, but we must also be prepared to accept that it is a fundamental phenomenon and that we might never find a better answer than simply "because it does".
  8. Don't confuse inflation with the expansion of the universe. Inflation is something that happened very early after the big bang and then it stopped. Regular expansion has been going on ever since, and is now picking up in pace (or rather, it started picking up in pace after about 5-6 By). Current expansion is a feature of GR when applied on cosmological scales.
     A couple of issues here. First, objects can't travel faster than the speed of light - locally, through space/time, with respect to another object. On the scales involved, and due to the expansion of the universe, they can move away from each other faster than the speed of light - but they won't actually be "traveling" at all in the local sense; it is the space between them that is expanding. Second, redshift suggests only 13.7 Bly distances, not twice that - not sure where you've got that info. Third, the particle horizon is actually about 46 Bly away from us at the moment. Do watch those videos that I've posted - they explain how the different distances come together because of the combination of expansion, light travel and time passing.
     No. Dark energy is responsible for the accelerated expansion of the universe, not for the regular expansion of the universe. Let me make an analogy: you throw a ball upwards. It will move away from Earth and slow down - at one point it will stop and fall back to Earth. If you throw the same ball with enough speed, it will never slow down enough to fall back, but will instead continue to infinity. This is called escape velocity. A similar thing happens with the big bang - it had some initial speed of expansion, and how much "stuff" there is in the universe determines how much gravity there is to resist this expansion. If there were no dark energy, the universe would still expand, but it would be slowing down. Since there is dark energy - which acts like the rocket engine in the ball analogy above - the universe is speeding up, or rather the rate of expansion is increasing.
     In any case, we don't need to measure dark energy in a laboratory - the fact that the universe is expanding in an accelerated manner is evidence of the phenomenon we call dark energy (regardless of what it actually is). There is a reason why we call it energy, and it is related to GR. In GR, the curvature of space-time is related to the energy content of that space-time (mass or other forms of energy). Different types of energy behave differently in an expanding universe - matter, for example, "dilutes" as more space is added in between: the density of matter decreases with expansion. Light behaves differently - it gets stretched / redshifted, so it also loses a bit of energy, at a different rate. The thing we call dark energy remains constant, as it is a property of space - you can think of it as "density of space": although space is stretched and more space is "created" in between already existing space, its density remains the same (see the scaling relations below).
     This is important to understand, because dark energy was not always acting to accelerate the universe. At earlier stages, when things were closer together, the density of matter and radiation was the dominant term and it slowed down the expansion of the universe, but as time went by and the universe expanded, matter and radiation diluted and became less dominant, and at one point dark energy, being constant, started to be more important and the universe started to expand in an accelerated manner.
     There is a reason why we often see an image like this. It shows what happened over time. At the very beginning we had the inflation period - but that was a very brief period and it was before the time of last scattering. Then for some time the universe expanded while slowing down, and only recently did it start to expand in an accelerated manner. Left is the shape of a universe whose expansion is slowing down; right is the graph of a universe with accelerated expansion. The first image is a combination of these two shapes - it starts as a slowing-down universe, but then, after the threshold is reached where matter and radiation are diluted enough and their density falls, the dark energy term starts to be significant and the shape starts to change.
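     The dilution argument above can be written compactly with the standard FLRW scaling relations (textbook results, not something specific to this thread), where a is the scale factor and H the expansion rate:
     ```latex
     \rho_{\text{matter}} \propto a^{-3}, \qquad
     \rho_{\text{radiation}} \propto a^{-4}, \qquad
     \rho_{\Lambda} = \text{const}, \qquad
     H^2 = \frac{8\pi G}{3}\left(\rho_m + \rho_r + \rho_\Lambda\right) - \frac{kc^2}{a^2}
     ```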
  9. Here are two more interesting and entertaining videos on this topic: https://www.youtube.com/watch?v=vIJTwYOZrGU https://www.youtube.com/watch?v=AwwIFcdUFrE
  10. There are different "horizons" related to big bang, light travel and expansion of universe.
  11. Not really. The age of the Earth is estimated at about 4.5 billion years, while the age of the universe is estimated at about 13.8 billion years. This means the universe is only about 3 times as old as the Earth.
  12. If we see that light has been traveling for 2 By from an object, we can only conclude that the object is at least 2 By old - but that can't be its actual age; it has to be more than that. Your logic is sound - it can't just pop into existence for us to see after 2 By.
      There are different ways to determine the age of things. If we look at a galaxy, we can estimate what sort of stars are in it based on the spectrum - stellar classes. This is in turn related to the mass of those stars, and the mass of a star determines how long it can live. There are different populations of stars - Population I, II and III - that differ in chemical make-up (again spectroscopy / the color of stars - examining their light). In the early universe there were no heavier elements that could be present in stars - those are made either in stars themselves as a byproduct of fusion, or in supernova explosions. The more complex a star's chemistry is, the higher the likelihood that it was created from the remnants of older stars that exploded. We can therefore look at a galaxy, determine how many Population I and II stars there are, and from that ratio and statistics determine the likely age of that particular galaxy. Similarly, we can observe galaxies in clusters that are of similar age (just as stellar clusters contain stars of similar age). There are a bunch of different methods to estimate the age of something.
      We know an upper bound of about 13.8 By - that is the age of the universe. That we know from something else. We measured the CMB - the cosmic microwave background radiation. Every body that has some temperature emits radiation, and from the shape of the spectrum of that radiation we can measure the temperature of that body. The CMB shows that the radiation released at the time of last scattering has now cooled to about 2.7 K. It has cooled down from its initial temperature due to the expansion of the universe, and we also know what the temperature at the time of last scattering was, because we know the temperature at which plasma forms (a rough worked example is below). From these and other indicators, and the fact that space is expanding, we can calculate the age of the universe. Hope this answers your question a bit?
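      As a rough worked example of that cooling (standard redshift scaling with textbook values for the last-scattering redshift, not figures from the post):
      ```latex
      T_{\text{ls}} = T_{\text{now}}\,(1 + z_{\text{ls}}) \approx 2.7\,\mathrm{K} \times (1 + 1100) \approx 3000\,\mathrm{K}
      ```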
  13. I think that your software for comparing channels is doing something strange. Here is what the G-B channel looks like when you do the math in ImageJ. Most of the image is noise - but notice that we can still identify some of the stars, which means that G and B are not quite the same, just very close (say 51% QE vs 50% QE from the graph alacant posted). A minimal way to reproduce the check is sketched below.
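      If you want to reproduce that check outside ImageJ, a minimal numpy / astropy sketch (the file name and the (3, H, W) FITS layout are assumptions) is:
      ```python
      import numpy as np
      from astropy.io import fits

      # Hypothetical file: a debayered, stacked RGB image saved as FITS with shape (3, H, W)
      rgb = fits.getdata("osc_stack.fits").astype(np.float64)

      diff = rgb[1] - rgb[2]                       # G - B
      print("G-B mean:", diff.mean(), "std dev:", diff.std())
      fits.writeto("g_minus_b.fits", diff, overwrite=True)
      ```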
  14. Welcome to SGL. "Edge" here is probably used in a "poetic" sense rather than to represent a geometrical concept; however, there is a boundary that we can call an edge. As we look into the distance, we are also looking back in time. This is due to the finite speed of light - it takes time for light to reach us, and we see things as they were at the moment the light started its journey. At some point in distance we no longer see galaxies, as you note yourself, but beyond that comes a "time" past which we cannot see at all. We literally can't see past that point. This "barrier" has a name: it is known as the surface of last scattering. Before that time, the "stuff" in the universe was in a different state - it was plasma, and that state is opaque to electromagnetic radiation. Photons scattered in all directions - it is a bit like fog, in the sense that everything is haze and you can't see far. We can't see past this point - we can only see up to and including it. The light that comes from that point is known as the CMB - cosmic microwave background - and it is very much redshifted because it is old (expansion of the universe), so it is not visible light; it is in the radio part of the spectrum and strongest in the microwave. This can be considered a sort of edge to the volume we are able to see.
  15. Just a word of caution. The tube-ring solution suggested above works - I've used it like that on an HEQ5 mount for imaging - but if you want to keep the scope "dual purpose" it will be a bit more difficult. The "ears" that are used on the dob mount will get in the way. When you want to mount this scope in Alt-Az, you want the eyepiece in the same position as on the dob mount, which means the ears will be pointed towards the dovetail bar and will in fact cause trouble with the rings and dovetail bar. Here is an image with the dovetail bar mounted "underneath" the OTA: in this configuration I wasn't able to rotate the OTA - the dob mount ears would hit the dovetail bar and prevent the OTA from rotating. You want the dovetail bar mounted on the side of the scope. The only simple solution is to remove the ears completely - they can be unscrewed from the OTA - however, I suspect it will be a PITA to put them back on each time you swap mounts. Maybe the dovetail bar can be screwed into one of the ears? That might make swapping between mounts easier.
  16. Add the Nyquist sampling criterion (two pixels per cycle of the finest resolvable detail) and take lambda to be either 400nm - the lower end of the spectrum - or around 500nm, since blue is more affected by seeing and you probably won't achieve full resolution in blue, and you have the formula for critical sampling: F-ratio = 2 x pixel size / lambda (worked out below). At 475nm for 2.9µm pixels that gives F/12.2.
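      Written out, the critical sampling condition referenced above (p = pixel size):
      ```latex
      F_{\text{crit}} = \frac{2p}{\lambda} = \frac{2 \times 2.9\,\mu\mathrm{m}}{0.475\,\mu\mathrm{m}} \approx 12.2
      ```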
  17. I'd say that at most you need x2.7 barlow as F/12 is the most you'll need for 2.9µm pixel size. APM x2.7 coma correcting barlow element?
  18. Very interesting. Just for my benefit, that is the Balmer series 13->2 transition, right? According to this formula: https://en.wikipedia.org/wiki/Balmer_series the wavelength is 373.34nm (worked out below). What sort of filter are you using to get this?
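      For reference, the arithmetic behind that wavelength, using the Rydberg formula with R ≈ 1.0974 x 10^7 m^-1:
      ```latex
      \frac{1}{\lambda} = R\left(\frac{1}{2^2} - \frac{1}{13^2}\right)
      \quad\Rightarrow\quad
      \lambda \approx \frac{1}{1.0974 \times 10^{7}\,\mathrm{m^{-1}} \times 0.2441} \approx 373.3\,\mathrm{nm}
      ```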
  19. Indeed it should - but I would not be surprised if it actually does not. There have been examples before of filters that should not pass IR actually passing the IR part of the spectrum.
  20. If you can make your darks light leak proof during the day - that will certainly do the trick at night.
  21. I just checked - the ASI533 has just a plain AR-coated window, which means it is sensitive in the IR part of the spectrum; a DSLR is not. This could mean the problem is related to the focal reducer - but it might not be. It could be a simple IR leak that just started to show for a different reason. What has changed in your setup or environment recently?
  22. How do you take your darks? Do you put some sort of cover on the front of the scope? Here is, for example, a solution from one member here: But light leaks can happen in different places - if you have extension tubes, or even at scope seams - here is another case of light leak handling from the CloudyNights forum:
  23. That actually suggests a light leak of sorts. What do you use to cover your scope? Light leak can come from back side - but also from front side. If you have plastic scope cover - that won't stop IR radiation from getting in. Most people add aluminum foil in that case.
  24. That looks like a rather nasty reflection. I wonder if a flat would deal with it (I doubt it, but worth a try). Did you take flats?