Everything posted by vlaiv

  1. According to this post: the problem might be only in the extension tube used. The 90mm one is too long - total distance needs to be 90.3mm and the camera eats up 17.5mm of it. There might also be a bit of a collimation issue or tilt - not sure which. Collimation is easier to check and fix. Stars seem to be more affected by coma on the left side of the frame than on the right; also, the bottom right corner seems rather fine compared to the others:
  2. Yes, you will get bigger images - but devoid of small detail. It is a bit like enlarging already existing images in software - the image does get bigger but it is lacking detail and looks blurry. There is another very important downside to using a slower F/ratio than needed - and that is signal to noise. You need to achieve good SNR in your stack in order to be able to sharpen the image and retrieve the detail that is there, and if your SNR is low - you won't be able to do that because of noise. You will be tempted to use longer exposures to get good SNR - and that is a no-no in planetary imaging, as you want to freeze the seeing - which means using exposures up to 5ms and not longer. So - no benefits, only downsides. Use F/13.
  3. It is the Starizona SCT corrector - and it does correct for coma. Coma is present in the SCT design. You are right - you can't use a coma corrector designed for a Newtonian on an SCT - but this is a proper corrector for an SCT scope.
  4. Just measure it. Your exposure length needs to be such that background / LP noise is at least x3-5 larger than read noise. Simple as that. Take one sub - measure LP levels, convert to electrons, take the square root and compare to published read noise values for the gain you are using. Increase sub length accordingly (depending on the measured ratio - mind the difference between signal and noise, where noise is the square root of the signal).
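The measurement described above can be turned into a quick calculation. A minimal sketch - the 25e- LP level and 1.7e- read noise are made-up example values, not from any particular camera:

```python
import math

def required_exposure_scale(lp_signal_e, read_noise_e, target_ratio=5.0):
    """How much longer subs need to be so that LP (sky) noise
    swamps read noise by target_ratio, per the rule above."""
    lp_noise = math.sqrt(lp_signal_e)        # shot noise of the LP signal, in e-
    current_ratio = lp_noise / read_noise_e
    if current_ratio >= target_ratio:
        return 1.0                           # subs are already long enough
    # signal grows linearly with exposure but its noise only with the
    # square root, so exposure must grow by the square of the shortfall
    return (target_ratio / current_ratio) ** 2

# made-up example: 25 e- of LP signal per sub, 1.7 e- read noise
# -> LP noise is 5 e-, ratio ~2.94, so subs need to be ~2.9x longer
scale = required_exposure_scale(25, 1.7)
```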
  5. According to Starizona, the sensor needs to be positioned 90.3mm from the start of the back thread of the corrector. It has very strict tolerances for larger sensors, so you need to get your spacing right. The ASI2600 has 17.5mm of back focus and therefore you need a total of 72.8mm of extension between corrector and camera. I would personally look to get a 70mm extension and then add 2.8mm of spacers using distancing rings. An alternative is to get something like this to dial in the correct distance (like use a 40mm regular extension and add this variable one to dial in the exact distance): https://www.firstlightoptics.com/adapters/baader-varilock-46-lockable-t-2-extension-tube.html
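As a quick sanity check of the spacing arithmetic, using the figures quoted above:

```python
# back-focus spacing sketch using the figures from the post
corrector_back_focus_mm = 90.3   # distance corrector thread -> sensor required
camera_back_focus_mm = 17.5      # ASI2600: sensor sits this far inside the body

extension_needed_mm = corrector_back_focus_mm - camera_back_focus_mm
# 72.8mm - e.g. a 70mm tube plus 2.8mm of spacer rings
```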
  6. People with permanent setups and electronic focusers / filter wheels (that are able to repeat position) often reuse flats. A good thing about flats is that you can do them after the session if you see that something changed - like a bit of dust settled on the optics or moved. Dew usually won't change flats as it is way out of focus (not being close to the focal plane) - and acts as a general light block over the whole sensor equally.
  7. Ideally - you want everything matched between subs and corresponding darks (lights and regular darks, flats and flat darks). There won't be a very huge difference, but there can be - especially if you use longer flats (like a couple of seconds). I think there was something odd with the subs - it is not due to integration, and I don't think it was due to LP - LP creates simple gradients - this has light all over the place. I don't think it is the flats - I think your flats are working properly, as you have one dust doughnut (R channel) and that is calibrated just fine - can't seem to find it in the insanely stretched sub - but I can see those strange light gradients. Here is what I would advise you to do if you want to diagnose this. Take two dark files - with the usual settings that you plan to use in future (offset, gain, temperature and exposure length) - take one while the camera is on the telescope, and do the second in a dark room with the camera covered and placed "face down" on a desk to minimize any chance of light leak. Post those two darks for inspection (or inspect them yourself - do stats on both - compare average and median values and stretch them very hard and see if they look different - also subtract them and look at their difference - ideally it should have 0 mean / median value and be pure noise without any patterns).
  8. The only difference as far as data goes is the level of read noise. With modern low read noise CMOS cameras - even if you bin x4-x5 you will still be at or below the read noise levels of CCD sensors (say 1.7e read noise binned x5 will produce an equivalent pixel with 8.5e read noise - that is still in CCD territory). Binning in software allows you to decide on the bin factor depending on the conditions of the particular session, and you can choose to use "split" binning rather than regular binning - which produces ever so slightly sharper results (removes pixel blur) if the image is properly sampled.
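The binned read noise figure quoted above follows from independent read noise adding in quadrature - a small sketch:

```python
import math

def binned_read_noise(read_noise_e, bin_factor):
    # a software bin sums bin_factor x bin_factor pixels; independent read
    # noise adds in quadrature, so total = read_noise * sqrt(number_of_pixels)
    n_pixels = bin_factor ** 2
    return read_noise_e * math.sqrt(n_pixels)

# 1.7e camera binned x5 -> 8.5e equivalent read noise, as in the post
equivalent = binned_read_noise(1.7, 5)
```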
  9. You had me there for a moment. Had to check the calendar to make sure
  10. Photons do regularly suffer loss in energy and I think it has been measured (at least indirectly, with clocks). Whenever they start off in a strong gravity well and leave it - they lose some energy and experience gravitational red shift. They also experience gravitational blue shift when "falling" into our galaxy cluster, our galaxy, our solar system and finally down to Earth.
  11. Don't think you need the x2 barlow. Start without it. That scope is F/13, and by the formula I gave above, as the ASI224 has 3.75um pixel size, the ideal F/ratio is 3.75 * 4 = F/15. You are already at F/13 and would thus need a 15/13 = ~x1.15 barlow. Using a x2 barlow will put you at about F/26 - that is much further away from F/15 than F/13 is. Even using a x1.3 barlow will put you at F/16.9 - which is still too much, and it would be better to just use F/13 as is.
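The barlow arithmetic above can be sketched like this (the x4 pixel-size factor is the rule of thumb referenced in the post, not a universal constant):

```python
def ideal_f_ratio(pixel_size_um):
    # rule of thumb from the post: critical F/ratio ~ 4 x pixel size in um
    return 4 * pixel_size_um

def barlow_needed(native_f_ratio, pixel_size_um):
    # magnification factor to bring the native F/ratio up to the ideal one
    return ideal_f_ratio(pixel_size_um) / native_f_ratio

# ASI224 (3.75um pixels) on an F/13 scope: ideal is F/15, so only ~x1.15
factor = barlow_needed(13, 3.75)
```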
  12. Out of interest, in the first link you posted, the flat was taken at -15C and the flat dark was taken at -5C. There is also a significant difference between the dates of capture - one was taken on the 19th (flat dark) and the other on the 25th (flat). The focal length of the instrument also differs. I don't really think flats are to blame here. Maybe there is some sort of light leak from somewhere. Do you have any light nearby when taking any of the subs, or is it done in complete dark?
  13. What you are talking about here is interstellar reddening. It can fairly easily be calculated how much it would impact light from a distant galaxy depending on the density of matter in intergalactic space. I won't bother with that - I'm just going to say that it fails completely as wavelengths get longer, since microwaves and radio waves don't really scatter all that well off atoms due to the difference in size. The CMB at that temperature could not possibly form due to scattering.
  14. I get what you are asking, and one of the ways it can be done is to look at densities. Since we look into the past when we look into the distance - we can do the following: given the speed of recession of each galaxy and the time elapsed - we can calculate how many galaxies there should be per cubic volume of space. We then start counting galaxies in every direction at certain distances, and this should give us the cumulative expansion up to that point which resulted in such a density of galaxies per volume. Maybe the simplest analogy would be water pouring out of a container. Let's say that we don't really know the speed at which water is flowing out, but we have snapshots of how much water there was at the beginning, and then after 1 minute, and then after 2 minutes, and so on. Knowing that - we can reconstruct the "curve" of water flow out of the container. In reality - that is not what happens, or at least that is not the primary method of how things are calculated. In reality, a mathematical model is developed by taking certain assumptions - like how the amount of radiation in space impacts the geometry of space time (how that energy curves the space), how the amount of stuff / matter impacts the geometry of space time (again, energy bends space time) - and how both of these change as you change the volume of space. You then add a cosmological constant, for good measure and because it seems reasonable (and maybe later call it your blunder, but people find that the data best fits what we observe when it is there) and get a set of differential equations for how things should behave. These equations depend on some values - like the amount of stuff in the universe, the amount of radiation in the universe and all of that. Then you take measurements and let a computer try to best fit the equation parameters to what you observe.
In the process you find out that the energy content of the universe is roughly 71.4% that dark energy stuff that has constant density with respect to space, that regular matter you can see is only 4.6%, that energy in the form of EM radiation is a very small percentage at the moment, and that there seems to be a huge chunk of stuff missing - that behaves like ordinary matter but can't be seen anywhere. You call that dark matter and scratch your head wondering where you have gone wrong. In the meantime - you start hearing that many others report findings that would be best explained if there was this dark matter that can't be seen but interacts via gravity with stuff that can be seen. In the end you conclude that most evidence points to your model being correct. Then you take calculations from your model and it tells you things like: what the age of the universe is, how much stuff of different kinds there is in the universe, how fast it was expanding depending on time and how that relates to the content of the universe at a particular time - and everything seems to fit rather nicely. You are able to plot nice graphs of expansion curving this way or that way. If you want a bit better understanding of how these equations are derived - there is a very good lecture series by Susskind on cosmology. In one of his first lectures on the topic - he explains and derives these equations without the use of General Relativity - with Newtonian gravity only - easy to follow if you have a very basic understanding of differential equations.
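For reference, the differential equations mentioned above are the Friedmann equations. The first one, in its standard textbook form (a sketch, not quoted from Susskind's lectures), relates the expansion rate to the energy content:

```latex
\left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho \;-\; \frac{k c^2}{a^2} \;+\; \frac{\Lambda c^2}{3}
```

where $a$ is the scale factor, $\rho$ the total energy density (matter plus radiation), $k$ the spatial curvature and $\Lambda$ the cosmological constant mentioned above.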
  15. Yes, indeed - if we think of space and time as this background that is given and that things happen in - much like a theater stage - then it is very unintuitive to think of space being created. That is one of the reasons I find the notion of space and time being a product of something else very plausible.
  16. Ok, not sure if I can sensibly continue where I left off. I was trying to offer a hint of an explanation of what it would mean that entanglement gives rise to space and time. I might be waaay off with this - as it is an area of active research, but I do believe it is related to relationships, or especially the concept of a mathematical relation. Certain types of relations give rise to certain structures. I'll give you two simple examples using binary relations. A binary relation is simply a mapping between elements of two sets. It can be thought of as an on/off association - either there is a link (correlation, in entanglement lingo) between elements of the sets or there is not. Such relations can have specific properties - like symmetry. Symmetry is the property that if A is in relation with B then B is in relation with A, and this holds for any A - element of the first set - and B - element of the second set. To give you an example from everyday life - such a relation is "being brother to" - if we observe the set of all males. If Adam is Benjamin's brother - then Benjamin is also Adam's brother. There is also transitivity, or a transitive relation. If A relates to B and B relates to C then A relates to C. Examples of such relationships are "being longer than" in a set of swords, or being a descendant in the human population. If we take another property like reflexivity - which simply means that any element is in relation with itself - and we combine those three properties of relations - we get a structure of sorts. A relation that is reflexive, transitive and symmetric defines what it means to be equal. We can change one of the above to something else - and we will have a different structure. Say we introduce antisymmetry - if A is in relation to B and B is in relation to A, then A and B must be the same element. Then we have a new structure that has transitivity, reflexivity and antisymmetry - and such a relation gives rise to order. An example would be "less than or equal to" in a set of numbers.
If we have such a relation over a set of something - then we can order the elements according to this relation. Space is nothing more than a set of relations - how anything is positioned with respect to other things - in front, behind, left, right, far, close, ... Time is the same thing but for events - in fact, space time is "space" for events - which has such properties, and if entanglement creates relations between elements - it can give rise to structures, and space-time can be the resulting complex structure that arises because things entangle. Makes sense?
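Those relation properties are easy to check mechanically. A small sketch over a toy set, representing a relation as a set of ordered pairs:

```python
# check the relation properties described above on a toy set
def is_reflexive(rel, elements):
    return all((a, a) in rel for a in elements)

def is_symmetric(rel):
    return all((b, a) in rel for (a, b) in rel)

def is_transitive(rel):
    return all((a, d) in rel for (a, b) in rel for (c, d) in rel if b == c)

elems = {1, 2, 3}
# "less than or equal to" as a set of pairs
leq = {(a, b) for a in elems for b in elems if a <= b}
# reflexive + transitive but NOT symmetric -> it gives rise to an order
```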
  17. The balloon is both a good and a bad analogy. It is good as it shows how a certain speed of neighbors moving apart can lead to higher speeds at more separated galaxies. It also shows that there is a case where everything moves away from everything else - that is usually hard for people to imagine. It is a bad analogy in a number of ways: - it is a 3d object but used as a 2d analogy. Your question shows what sort of confusion this leads to, as you instinctively asked about the thickness of the rubber and what the meaning of that is. You should only look at the surface of the balloon and forget that it has thickness, or even that it is a 3d object, for the analogy to work properly. - it implies stretching of sorts, and that does not correspond to reality. If we were to draw a ruler with marks between two galaxies on the balloon itself - it would also stretch as the balloon is being blown up - but the number of marks would remain the same. That sort of stretching and bending of things could be happening, but we would not be able to tell in any way, as it would preserve distances - there would still be 10 marks between the two galaxies - regardless of how much air is in the balloon. A much better explanation is that space is being created between galaxies, and it can be related to the balloon case - not by focusing on the balloon fabric that stretches - but merely on the surface of the balloon and the actual distance between galaxies along that surface. If you blow the balloon up to a larger size and take a tape measure and measure the distance between two galaxies along the surface of the balloon (and not "inside" the surface - like drawn onto it) - then you will see that the distance actually grew, as there are now more millimeters between the galaxies. As for the density of space - space itself is empty so it can't have density. However there is something that has density in the actual equations used in the cosmological model that explains this - and it has a name: the cosmological constant, or popularly - dark energy.
Whatever that is - it is modeled as, and behaves as if, it has constant density - there is always a constant amount of that stuff per cubic unit of volume. This is one of the reasons people prefer not to think of space as stretching but rather as being created between galaxies. If you take two galaxies that are at one moment 10 units of length away from each other, and you take 10 units of volume along those ten units of length (like 10 meters of distance and you take 10 cubic meters of space, 1m in height, 1m in width and 10m in length) - it will contain 10 units of this cosmological constant. As the two galaxies move apart and there are now 11 units of length between them - there are also going to be 11 units of dark energy or cosmological constant between these two galaxies - as if someone "inserted" one volume of space with the same properties as all other volumes of space. That is how dark energy behaves in the equations, and this is why we say that space is expanding - but not stretching, because this mysterious stuff has constant density - it is somehow tied to the amount of space that exists between things.
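That "constant density" behaviour can be contrasted with ordinary matter in a couple of lines - a toy sketch in arbitrary units, just to illustrate the scaling:

```python
# toy scaling comparison: matter dilutes as the universe expands,
# a cosmological constant keeps the same density per unit volume
def matter_density(a, rho0=1.0):
    return rho0 / a ** 3       # same amount of stuff spread over more volume

def dark_energy_density(a, rho_lambda=1.0):
    return rho_lambda          # constant, regardless of the scale factor a

# doubling all distances dilutes matter x8 but leaves dark energy unchanged
```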
  18. Well, you can't know it in advance simply because there is no way of telling what the seeing will be on a particular night. What you can do, however, is do a retrospective and form "a sense" or "a feeling" of what your gear is capable of delivering (and also test the above formulae). You can always measure the FWHM of star profiles in your subs after you take them to see if they match your expectations. It is best to record guide RMS and also note down the seeing forecast for the night and see if it matches the measured FWHM in your subs. You can find a seeing forecast for your location here: https://www.meteoblue.com/en/weather/outdoorsports/seeing/london_united-kingdom_2643743 (just select the proper location - I made a link with London as the location). There is a column that describes seeing in arc seconds - that is the FWHM value without the impact of guiding and aperture size. When using the above formulae, do pay attention that two things can skew the results. 1. Local seeing. The above forecast is for global seeing, but local seeing effects can make things worse. Things like shooting over heated houses in winter, or large bodies of water or pavement in summer, can cause issues and a worse final FWHM than calculations suggest. 2. Type of scope and its optical performance. Above I outlined how things work for a perfect aperture (actually - I never did give the Airy disk RMS approximation, so I'll do it here). The RMS approximation of an aperture is given by 47.65/aperture_size, where aperture size is in millimeters. An 80mm scope will have an RMS value of ~0.596", a 100mm scope ~0.477", a 150mm scope ~0.318" and a 200mm scope ~0.238". It falls quickly (as it depends on the inverse of aperture size) and becomes insignificant compared to other sources, as we have seen. The above however holds only for diffraction limited scopes.
It is often the case that poor focusing, or using a focal reducer or other optical corrector (like a coma corrector) - which reduces outer field aberrations but makes the scope less than diffraction limited over the whole field (a small tradeoff for good looking stars at the edges) - can give a somewhat worse FWHM than predicted by the above formulae. The above formulae give a best case scenario - so that is something to keep in mind.
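The aperture RMS values listed above come straight from the 47.65/aperture approximation:

```python
def aperture_rms_arcsec(aperture_mm):
    # Airy disk RMS approximation given in the post (aperture in millimeters)
    return 47.65 / aperture_mm

# matches the values quoted above:
# 80mm -> ~0.596", 100mm -> ~0.477", 150mm -> ~0.318", 200mm -> ~0.238"
```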
  19. Yes, that is correct. The usual explanation given is that gravity is strong enough to hold a galaxy together, while the expansion is weak enough to be stopped within it - things don't "fly apart". It is a bit like pulling on a rubber band versus pulling on a piece of chewing gum. The rubber band is strong enough for a regular pull to be stopped - it will expand a bit, but after that it will stay the same; but if you pull on chewing gum - it does not have enough force to resist and will keep getting ever more stretched. Ok, that is somewhat hard to explain, but let's start like this. Entanglement between two entities is the introduction of correlation between their properties. Say you have something that can be of any color. Two such things entangled would have some level of similarity in color - say if one has a warm color - the other would have a warm color as well, and the same for cold colors - if the first has a cold color (like blue or green) - the other will have a cold color as well. That is weak correlation. Entanglement just means some level (strong or weak) of correlation or anti correlation between entities (anti correlation is when a property in one entity correlates with what we consider the opposite property in the other entity - like if one is a cold color then you know the other will be warm - or the warm - cold combination). Now, I postulated a long time ago when I thought about these things - that you need entities and relationships to form existence. Entanglement represents different types of relationships that can form, and there exists a relationship between elements that acts like 3d space (you can be left of something, right of something, up/down, forward/backward, and there is order and distance). I have to go now, but will expand more later.
  20. I don't think we can just say that light gets tired. What we can say is this, and it again boils down to the same thing - there is no expansion but instead space between galaxies is stretching. With this continuous stretching of space, light gets stretched as it moves through it and wavelengths become longer - red shifted. However, the effects of the two are the same - we see different densities of galaxies depending on how far into the past we look, and we see red shifts when we look far into the past. In fact - I think the preferred point of view is that space is being stretched rather than "things are flying apart". This "space is being stretched" also makes sense in the context of new research directions. There is now an interesting hypothesis being explored, and I like the way it sounds and what it means. The hypothesis is that space and time are not fundamental but arise from quantum entanglement. I think it is a neat idea, as it would explain the "elasticity" of space and time (as being of a certain level of entanglement / phase correlation and so on) - why we see gravity affecting things (presence of things that are entangled) and the reason why space would be able to stretch between galaxies. By the way - stretch is probably not the best term - many people use the term "to be created" - more space is being created between galaxies. Again - there is no sensible way to justify that point of view if space is fundamental, but if it is a consequence of entanglement - then why wouldn't it be created depending on how fields entangle ...
  21. One of the approximations made in the above discussion is that seeing and guiding/tracking accuracy are independent variables (linearly independent vectors add in quadrature). This means that seeing FWHM is the one that would be measured with a perfect mount, and that tracking/guiding accuracy is the one that would be measured in a completely still atmosphere. There is usually some correlation between the two - but it can be reduced by using longer guide exposures - often 2s is given as enough time for atmospheric seeing to average out and tracking issues not to show (but sometimes 4s and longer is needed if seeing is particularly poor). The mount must have smooth error and be mechanically sound for this. Multi star guiding helps to further separate the two, so it is a beneficial thing. It won't improve the above results - but it will make them more accurate. It shortens the exposure needed to isolate seeing effects. With a multi star setup even 1s is enough to remove seeing from the guiding equation.
  22. Don't mix up change in acceleration and change in speed. Acceleration is change in speed - or more precisely, here, increase in speed. Not true - at least, not plainly obvious. The correct sentence that is plainly obvious should be: the further away objects are - the faster they are moving away from us. Speed, not acceleration. Correct. Again not true, or rather not obvious - what is obvious is that they have the biggest speed away from us. Again - speed, not acceleration. Two things here. First, speed - the fact that more distant objects are going faster and closer objects slower does not have anything to do with acceleration (whether expansion is increasing, constant or decreasing). It has to do with the fact that the universe is expanding at some rate. You can sort of equate this with everyone moving away from their neighbor at some speed. Why do we then see distant things moving faster? Well - because they are not our neighbors. Imagine this: A - B - C. B sees A moving away and C moving away from them at some speed. C will see B moving away from them at the same speed - but then, they must see A moving away at twice that speed. This can extend to a chain of galaxies, and the further down the chain you go - the faster a galaxy moves away from us - because each moves at the same speed from its neighbor. Now to acceleration. Acceleration would be a small change in the above neighbor to neighbor speed. We can't see that directly, or rather - it takes complex math, or a complex diagram to be drawn, in order to see if things are accelerating, decelerating or staying the same (constant speed). This can be read off the diagram. If you plot distance vs time - you can always see if an object is accelerating or decelerating at a particular instant in time. This is what constant speed looks like - distance between two things increases linearly with time. No acceleration.
This is what acceleration looks like on such a graph: it is the change in the slope of the graph that represents acceleration - if it is curved upwards - it is gaining speed, or accelerating. If it is curved downwards - it is slowing down / decelerating. You might have seen an image like this that explains how the universe (according to our models) behaved in the past: the curve on this graph has meaning - and the meaning is the one I described above. When the graph is curving "downwards" (not necessarily pointing down - just bending in that direction) - we have deceleration; when it is curving upwards - we have acceleration. From the above image - you can read how acceleration changed over the age of the universe. First there was the big bang and then there was massive acceleration in expansion speed - this is called inflation - you see strong curving upward, but then we see curving downward in the "dark ages" period. The universe was still expanding at this stage, but the expansion was slowing down. Then after the period of reionization, the structure of the universe changed (the mass/density thing, or how much radiation vs regular matter there is) and the universe started accelerated expansion once again - but at a much slower rate than in the inflation period (the mechanism that drives accelerated expansion is different). So the universe was always expanding - there was always some recession speed between neighboring galaxies - but this speed changed during the lifetime of the universe - it was rapidly accelerating, then it was decelerating and now we are in a very mild acceleration epoch. Sorry if this was too complex - but I don't really know how to answer your question more simply than this.
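The A - B - C chain argument above can be sketched numerically (the 70 units per step is an arbitrary made-up value):

```python
# toy chain of equally spaced galaxies: each recedes from its neighbour at
# the same speed, so the speed seen from galaxy 0 grows linearly with
# distance down the chain - no acceleration needed
v_neighbour = 70.0   # made-up recession speed per "step" down the chain
speeds_seen = [n * v_neighbour for n in range(5)]
# the 2nd galaxy down the chain recedes twice as fast as the 1st, and so on
```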
  23. While pixel size and guide RMS are often connected with some sort of rule of thumb - I'd rather try to explain it a bit differently. For the moment, forget pixel size. Forget pixels completely. Let's just look at what happens at the focal plane of the telescope as the light comes in. There are three major contributors to the blur that happens to the image: - seeing - guiding/tracking precision - aperture size. Each one of these produces some level of blur on its own. In perfect conditions with perfect guiding - there is a maximum magnification of the telescope that can be used. After that the image just gets bigger without detail - it is blurred. Seeing of course blurs the detail and tracking precision does the same. When all three are present - they combine and produce a larger blur than each of the individual components. Thing is - they don't combine in a trivial way - just by simply adding some numbers. They combine in a more complicated way than that (in fact - quite a bit more complicated, by a process called convolution). However, we can simplify things by using some approximations that will help us understand what is going on. Each of the components can be represented by an RMS value. Guide RMS comes as an RMS value already. Seeing comes as FWHM, but can be converted to RMS by simply dividing it by 2.355. The telescope aperture, or more precisely the Airy disk it produces, can also be represented as RMS (although via a somewhat more complicated calculation) - but for the time being, we won't pay much attention to it - other than to say that for most apertures - like 80mm+ - it is the smallest component. You will notice that I emphasized the smallest component in the last sentence - that is for a reason. The simplified formula for calculating total blur goes like this: square_root(first_rms_squared + second_rms_squared + third_rms_squared). So we have a square root of a sum of squares. If this reminds you of the Pythagorean theorem - then great, because we want to look at it in that way. Why?
Well, because of this case: if the shorter leg is significantly smaller than the longer leg, then the hypotenuse is almost the same length as the longer leg. I'll reiterate in a bit different way - if one component of the sum is much smaller than the others - then it contributes much less. You will say - hold on, that is true for an ordinary sum as well: 10 + 1 = 11, and 10 and 11 are not that far apart. Yes, but look what happens when we add 10 and 1 in quadrature - square_root(100 + 1) = ~10.05. Look how much smaller the difference gets when things are added this way. What does it all mean - how big can your guiding error be then? Well - that depends on the largest factor, or what your guide RMS is compared to the other big RMS out there, which is seeing. You mention seeing of 1.2" FWHM - well, yeah, that does not happen, or rather happens once a year on an average site, if at all. The usual value is 2" FWHM, or 3" FWHM if seeing is average to poor. 1.5" FWHM is excellent seeing for most sites. Let's translate that into RMS: 1.5" FWHM = 0.637" RMS, 2.0" FWHM = 0.85" RMS, 3.0" FWHM = 1.274" RMS, 4.0" FWHM = 1.7" RMS. In order not to contribute much - we need guide RMS to be quite a bit smaller than seeing RMS, and if your guide RMS is 1.2" - it is never significantly smaller than seeing RMS. Guide RMS needs to be as small as you can make it. Simple as that. Only when you reach 0.2-0.3" RMS levels can you say: ok, I made it small enough compared to average seeing conditions (~x4 smaller) so I don't have to worry too much about it. What about that guide RMS versus pixel size thing? Making your RMS half of your imaging scale is a good rule of thumb that works for most common cases. Here is an example: say you are imaging in 3" FWHM seeing and you have a 1.2" RMS guide error. The rule of thumb says you should image at 2.4"/pixel - as that is twice your guide RMS. Let's see if that is true.
3" FWHM is 1.274" RMS, and combined with 1.2" RMS that gives: sqrt(1.274^2 + 1.2^2) = 1.75" RMS, or ~4.12" FWHM (when we multiply back by 2.355). The optimum sampling rate for that level of blur is 4.12 / 1.6 = 2.575"/px - which is very close to 2.4. Even if you add aperture size into the mix - you still get very close results in the common range - that is 1.5"/px to 2.5"/px, 2" FWHM - 3" FWHM seeing, 0.7"-1.2" guide RMS, 4"-8" aperture. However, if you want accurate results - there are more complex formulae that will calculate the effective resolution of your system and what pixel size to use (but these contain some approximations - like perfect optics, which is not always the case, and so on ...). Bottom line - always make your guiding the best you can (lowest RMS value). Don't "settle" until you reach 0.2-0.3" RMS. Mind you - such low numbers are not always possible with mass produced mounts, so do research on what can be done and at what cost.
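The worked example above, as a small sketch (2.355 converts Gaussian FWHM to RMS, and the /1.6 optimum-sampling factor is the one used in the post):

```python
import math

FWHM_TO_RMS = 2.355   # divide a Gaussian FWHM by this to get its RMS

def total_fwhm(seeing_fwhm, guide_rms):
    # independent blur sources add in quadrature as RMS values
    seeing_rms = seeing_fwhm / FWHM_TO_RMS
    return math.sqrt(seeing_rms ** 2 + guide_rms ** 2) * FWHM_TO_RMS

def optimum_sampling(seeing_fwhm, guide_rms):
    # FWHM / 1.6 optimum sampling rule from the post, in arcsec/pixel
    return total_fwhm(seeing_fwhm, guide_rms) / 1.6

# 3" FWHM seeing + 1.2" RMS guiding -> ~4.12" FWHM -> ~2.58"/px
```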
  24. O, I'm gonna travel drunk, that is for sure!
  25. Why do you think we should travel with Earth as the reference point? How about the Sun? Maybe we will jump 500 years into the past only to find ourselves floating in space, as Earth moved on in its trajectory around the Sun. Maybe our reference point is the largest center of mass nearby - so we will be located in the interstellar medium, as the Sun also moved away on its trajectory rotating around the galactic center. Ok, so I'm guessing Earth is the reference then, but if you travel in time only - you are very likely to end up in some other place on Earth - as Earth rotates about its axis. In just 12h one is located on the opposite side of the Earth. I think that oak tree is the least of our worries