Everything posted by vlaiv

  1. Since I moved to the countryside I often get to see very prominent rainbows. Here is an image I took with my phone that shows both the primary and the secondary bow:
  2. I would argue that no mount needs a guide rate that high. What is wrong with guiding at x0.25, for example? Just to put things into perspective - even if you guide at 0.5 second intervals (which is a very fast guide cycle and probably unnecessary) - you can still correct 1.875" of error in a single correction, which is almost two arc seconds of drift or whatever else needs correcting (a quick sketch of this arithmetic is below). As you can see, there is plenty of "corrective power" in x0.25, and such a slower guide speed is much easier on the mount and your setup. From physics we know that F = ma, or force equals mass times acceleration, and acceleration is change in speed per unit time. A pulse at x0.9 of sidereal is a large speed change over a very short time - that means much more acceleration (or deceleration, in the case of an RA correction against the direction of the mount's motion), which jerks your setup and introduces oscillations that need to dampen down (and in general inflate the FWHM of your stars).
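     A rough sketch of the arithmetic behind the figures above (all numbers illustrative, not taken from any particular mount): how much drift a full-length guide pulse can take out per cycle, at a few guide rates and cycle lengths.

```python
# Correction capacity per guide cycle, assuming the pulse can fill the whole cycle.
SIDEREAL = 15.0  # arcsec per second of time (approximate, at the celestial equator)

for rate in (0.25, 0.5, 0.9):        # guide rate as a fraction of sidereal
    for cycle in (0.5, 1.0, 2.0):    # guide cycle / maximum pulse length, seconds
        correction = rate * SIDEREAL * cycle   # arcsec removed by one full-length pulse
        print(f'x{rate} rate, {cycle} s pulse -> {correction:.3f}" of correction')
```

     Even the "slow" x0.25 rate with a 0.5 s cycle gives 1.875" per correction, which is far more than typical drift per cycle.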
  3. You are quite right about being careful not to overexpose in some cases, but in principle it is never OK to underexpose. We should really say that one should always expose as much as is sensible or needed. In photometry, which you mentioned, underexposing produces less reliable results because SNR suffers. There is even a technique to avoid overexposing the star core while still capturing as much signal as possible: the star is defocused a bit so that its profile is no longer sharp in the center but spread into a doughnut. That allows for longer exposure and accumulation of signal while avoiding overexposed pixels. Similarly in planetary / lucky imaging - one could argue that the subs are underexposed, but they are not in the general sense: they are not shorter than they need to be. In order to freeze the seeing one does use very short exposures - like 5 ms or less - and a single frame can look very dark, but the maximum exposure is governed by the seeing and no one would go lower than is actually needed to get a sharp image. For, say, lunar imaging one might go shorter than what is needed to freeze the seeing, but that is because the signal is so strong (the Moon is a very bright target) that shot noise swamps the read noise - shortening the sub duration then has little impact on final SNR, while shorter exposures give even more seeing-stable subs.
  4. No. In AP we use stacking - many individual exposures. The rules that apply are:
     - SNR of the stack depends on total integration time - the more time you spend, the better the image.
     - Individual sub duration is primarily determined by the level of read noise of your camera compared to the other noise sources (I say primarily because you might also choose based on other criteria - the ability of your mount to track precisely, storage space, the likelihood of individual subs being ruined by wind or other effects ...). A numeric sketch of this is below.
     - Overexposed parts of the image (mostly star cores, but sometimes bright parts of targets as well - galaxy cores or very bright nebula regions) are handled with a set of shorter exposures that capture data for those parts only; you blend those in during processing to replace the overexposed parts of the regular stack.
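     A minimal numeric sketch of the sub-duration rule, assuming made-up target and sky fluxes (none of these numbers come from the post): for a fixed total integration time, the read noise penalty only shows up when the subs are short.

```python
# Final stack SNR for a fixed total integration time, varying sub length and read noise.
import math

def stack_snr(total_s, sub_s, target_eps, sky_eps, read_e):
    """SNR of the stacked image for a faint target.
    target_eps, sky_eps - target and sky flux in electrons per pixel per second
    read_e              - read noise in electrons per sub
    """
    n_subs = total_s / sub_s
    signal = target_eps * total_s
    # shot noise of target + sky grows with total time; read noise grows with sub count
    variance = (target_eps + sky_eps) * total_s + n_subs * read_e ** 2
    return signal / math.sqrt(variance)

total = 4 * 3600          # 4 hours total, split different ways
for read in (1.5, 3.0):
    for sub in (5, 30, 120, 600):
        print(f"read {read}e, {sub:>4}s subs -> SNR {stack_snr(total, sub, 0.2, 2.0, read):.1f}")
```

     With long enough subs both cameras converge to practically the same SNR; only very short subs let the read noise drag the result down.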
  5. The problem is with the predictive power of the theories. If it walks like a duck and quacks like a duck, it is easy to think that it is therefore an accurate description of the underlying physical reality.
  6. You can image Jupiter in 3-4 minute imaging runs. If the planet drifts through the FOV in 25 seconds, you'll need to nudge the scope a couple of times to re-acquire it. This will probably be hard at first, but I'm guessing you'll get the hang of it. You can use software to discard the frames where the planet is outside the FOV while you are putting it back on the sensor. The alternative is to make a DIY equatorial platform for the Dob and have tracking for up to an hour without needing to move the scope by hand. (Rough drift arithmetic is sketched below.)
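     A back-of-envelope sketch of the drift time (the focal length, Barlow and sensor width here are hypothetical example values, not the poster's setup):

```python
# How long a planet near the celestial equator takes to drift across an untracked sensor.
import math

SIDEREAL = 15.04                     # arcsec per second of time at the celestial equator
focal_mm = 1200 * 3                  # e.g. a 6" F/8 Dob with a 3x Barlow (hypothetical)
sensor_w_mm = 5.6                    # e.g. a small planetary camera sensor (hypothetical)
dec_deg = 0.0                        # declination of the target

fov_arcsec = 206265 * sensor_w_mm / focal_mm          # field width on the sensor
drift_rate = SIDEREAL * math.cos(math.radians(dec_deg))
print(f'FOV width: {fov_arcsec:.0f}" -> crosses the frame in about {fov_arcsec / drift_rate:.0f} s')
```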
  7. The Light Pollution Atlas 2022 also offers click-for-more-information when opened in OpenStreetMap. I'm currently at 20.34 according to that data source (which I think is very much the case). It also pointed me to a possible new observing location that I must relay to my local astronomy buddies.
  8. I completely missed that! Thanks!
  9. Very nice, but is there any way to get coordinate mapping? As is, the resolution is not good enough to pinpoint a location. According to that map I'm somewhere between the 20.0-20.5 and 20.5-20.9 areas. The above gives 20.84, but I'm sure the level of LP has risen since 2015, so it is more like 20-20.5 now.
  10. https://www.lightpollutionmap.info Click on the map where you want and you will get an estimate of the sky conditions that looks like this:
  11. FYI there are two Numberphile videos on the subject, one from 2014 and one from 2024. The video you are talking about refers to the first Numberphile video; it does not address the latest one.
  12. Maybe it is down to the achro not being optimized in red. We often focus on the blue side of the spectrum, because an achromat's defocus is bigger on that side (the violet halo in particular), but optical designers have freedom in how they tilt this curve: in the above image we see three different ways to optimize an achromat - blue, red and green (nothing to do with colors) - which bring different pairs of Fraunhofer lines to a common focus. The blue curve has the smallest violet bloat but, on the other side, the largest defocus in the red part of the spectrum. Then there is also the matter of spherochromatism. All of that can cause a red planet to lose contrast against black and white features.
  13. It is actually true. Start off with the Moon to get to know the scope and learn the basics of observing (maybe even watch some introductory YouTube videos on how to observe well). Saturn and Jupiter will be very small but will show detail in good conditions. You'll need to wait for Mars to get into a favorable position - well, this is true for all the planets, you need to wait for their "season". For galaxies and deep sky objects, dark sky is the key: get away from the city and street lights. You also need transparent skies. Observing is a skill and it is learned - the more you observe, the better you become.
  14. https://www.firstlightoptics.com/dobsonians/skywatcher-skyliner-150p-dobsonian.html
  15. Can't tell if that is a joke or you posted the wrong link by mistake.
  16. To me, it seems like a pointer toward a different way of expressing our theories, and a path to get there. If we have a theory that necessarily involves summing to infinity and we need clever regularization tricks to avoid that, it just sounds like we are calculating, in the wrong way, something that in fact does not involve infinity. By examining how best to regularize things, I'm guessing we will get more insight into how to formalize our theory differently so as to avoid infinite sums in the first place.
  17. We might be focusing on bits that are less important. What do you say about:
     - the paper coming out about the selection of the regularization weighting function
     - the fact that the choice of weighting function that produces a finite value is related to symmetries in physics
  18. Why not? I put my trust in fellow SGL members who recommend said video whenever the topic of read noise and sub duration comes up. I'm also aware that SharpCap (the software whose author is Dr Robin Glover) can do the needed calculations to determine the read noise of a camera and a proper sub duration.
     No, we are not actually having a civil conversation about this - you keep bashing what has been said without any arguments. You have shown absolutely nothing and keep giving vague statements on the topic.
     This is wrong on several counts. I will list them:
     1. The first part is partly correct - it does require the signal to be above the read noise, but the signal needs to be above the total noise, not just the read noise. SNR needs to be higher than 1 (or in fact about 5 for reliable detection).
     2. An increase in total integration time won't necessarily raise SNR above a certain threshold. If we use exposures that are very short - a few milliseconds, for example - we introduce too much read noise for SNR to get above the limit. In fact, for any total integration time I can select a sub duration that will keep SNR below 1 due to read noise alone.
     3. Using faster optics is not the only way to increase photon flux per pixel. The same is achieved by using larger pixels, or by binning smaller pixels to a larger size.
     I agree with most of what you said here - although I don't know what you mean by "SNR of object signal vs read noise". That statement makes no sense: you should not compare a ratio of two quantities (SNR is the ratio of signal to noise) to one of those quantities (read noise). I'll just assume you meant SNR (signal to noise ratio).
     I never claimed differently - although you have tried to misinterpret what I said in that way several times. However, I will repeat what light pollution (LP) is good for: it shows you the sub duration beyond which read noise makes virtually no difference, so you don't need to use longer subs. In fact, this is not exclusive to LP - it applies to any noise source that grows with time. When any noise source other than read noise becomes significantly larger than the read noise, read noise stops having a significant impact on total SNR (unlike the example above, where sufficiently short exposures let read noise overpower the signal for any integration time). This brings equality to cameras with different read noise: if one camera has 1.5e of read noise and another has 3e, there are sub lengths for each of them that produce the same final SNR for the same total integration time (a quick numeric sketch of this is below).
     I agree. No point in further "debating" this topic.
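     A quick numeric sketch of that last claim, with made-up flux values (none of them from the post): since 3e is twice 1.5e, the noisier camera needs subs four times longer to put the same total read noise into the stack, and then the final SNR comes out identical.

```python
# Two cameras with different read noise reach the same stack SNR for the same
# total time once the sub lengths are scaled by the square of the read-noise ratio.
import math

def stack_snr(total_s, sub_s, target_eps, sky_eps, read_e):
    n_subs = total_s / sub_s
    variance = (target_eps + sky_eps) * total_s + n_subs * read_e ** 2
    return target_eps * total_s / math.sqrt(variance)

total, target, sky = 4 * 3600, 0.2, 2.0   # 4 hours; fluxes in e/pixel/s (illustrative)
print("1.5e camera, 60 s subs :", round(stack_snr(total, 60, target, sky, 1.5), 2))
print("3.0e camera, 240 s subs:", round(stack_snr(total, 240, target, sky, 3.0), 2))
```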
  19. Maybe they have different size secondaries? The 150PL, being on an EQ mount, is more suited to planetary work and thus has a smaller secondary, while the 150P is a general purpose scope with a larger secondary.
  20. It does not. I was talking about swamping read noise with some other noise source. You can't misinterpret what has been said - it is very clear, and what it says is that the impact of the read noise on the final SNR depends on the other noise sources.
     The signal is what it is - it does not change whether we spend one hour in 60 subs or the same hour in 5 subs. Neither do any of the noise sources that depend on time. The only thing that changes is the total amount of read noise, and the impact that this has on the final image depends solely on the other noise sources and their magnitude compared to the read noise. If the read noise is small compared to any other noise source in a single exposure, its impact on the overall noise, and thus on SNR, will be minimal. I've shown you the calculation for this: if any other noise source is at a level x5 (or more) that of the read noise in a single exposure, then the total increase in noise (and decrease in SNR) due to read noise - comparing, for example, to a camera with zero read noise - is less than 2% (a short check of this figure is sketched below).
     I can't comment on what Robin Glover presented in that video as I have not watched it. I've only concluded from comments by other members that he presents valid, well-known statements (and thus is surely right in his presentation).
     Since you say that you don't agree with me - could you please answer my question about the difference that 1.5e versus 3e of read noise makes, depending on shooting conditions? It is OK to disagree with me as long as you put forward a different view and provide actual facts in support of it. Alternatively, you can point out what is wrong in what I've said (and preferably cite a source).
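     A short check of the "x5 swamping" figure quoted above (pure arithmetic, no assumptions beyond noises adding in quadrature):

```python
# If another noise source is 5x the read noise, adding the read noise in
# quadrature increases the total noise by under 2%.
import math

read = 1.0
other = 5.0 * read
with_read = math.sqrt(other ** 2 + read ** 2)   # independent noises add in quadrature
increase = with_read / other - 1.0
print(f"total noise increase due to read noise: {increase * 100:.2f}%")   # ~1.98%
```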
  21. Here is a bit more detail on why the above is considered valid practice (the key statement is sketched below the link): https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/
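     For reference, the central statement of that post, paraphrased from memory (treat it as a sketch rather than a quotation): smoothing the divergent sum with a cutoff function isolates the divergence in a cutoff-dependent term, while the constant term is the familiar -1/12.

```latex
% Smoothed-sum version of 1 + 2 + 3 + ..., where \eta is a smooth cutoff
% with \eta(0) = 1 and compact support:
\sum_{n=1}^{\infty} n\,\eta\!\left(\tfrac{n}{N}\right)
    = C_{\eta}\,N^{2} \;-\; \tfrac{1}{12} \;+\; O\!\left(\tfrac{1}{N}\right),
\qquad
C_{\eta} = \int_{0}^{\infty} x\,\eta(x)\,\mathrm{d}x .
% The divergent piece C_\eta N^2 depends on the chosen cutoff; the constant
% term -1/12 does not, matching \zeta(-1) = -1/12.
```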
  22. You are left with LP noise as well - you can't remove that, and it exists because LP signal was present. Not sure what you mean by this - I never said anything remotely like that. Never said such a thing. I never said that signal is a distraction; could you please point to the place where I said so? Signal itself is not the key, signal to noise ratio is.
  23. OK, so the drop-off in illumination is easiest to see in daylight, or maybe at dusk or dawn. You might also be able to see it against a light polluted night sky - it depends on how bad it is. Here is what it looks like (it was surprisingly hard to find a good example): you will see the field stop, more or less sharp (red arrow), which has nothing to do with vignetting, and then a darker ring close to the field stop (blue arrow) where whatever is in the view is a bit darker.
     Now, if I say that there is a drop-off of 50% at the edge, this means that only 50% of the light reaches the parts next to the field stop. It does not mean the image will look half as bright there, because our vision is not linear - 50% of the light looks more like 80% as bright to us (our perception of brightness is roughly logarithmic). We often don't notice a difference of 7% or less, so if illumination only drops to 93% at the edge the image will look normal to our eyes right up to the field stop.
     In planetary viewing this won't be very important. In lunar viewing it might be, as it will spoil the view. In DSO viewing it is very important, as 50% of the light means ~0.75 magnitudes of difference (the arithmetic is sketched below). If you have a star or object at threshold visibility in the center of the field and you move it to the edge, it will disappear and you won't be able to see it because of this. Such vignetting can also be annoying if your skies are not pitch dark and you have some LP, because the sky gradient will behave like the sky in the image above - bright in the center and getting darker toward the field stop, which itself is completely dark.
     With your 6" F/8 and the two 1.25" eyepieces you mentioned, you'll probably need to really look for it to see it at all.
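     A small sketch of the magnitude figure quoted above: converting an illumination fraction at the edge of the field into a magnitude difference.

```python
# Magnitude difference corresponding to a given illumination fraction.
import math

for fraction in (0.93, 0.5):
    delta_mag = -2.5 * math.log10(fraction)   # standard magnitude formula
    print(f"{fraction:.0%} illumination -> {delta_mag:.2f} mag dimmer at the edge")
```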