Everything posted by vlaiv

  1. We might be focusing on bits that are less important. What do you say about:
     - the paper coming out about selection of the regularization weighting function
     - the fact that a choice of weighting function that produces a finite value is related to symmetries in physics
  2. Why not? I put my trust in fellow SGL members who recommend said video whenever the topic of read noise and sub duration comes up. I'm also aware that SharpCap (software whose author is Dr Robin Glover) can do the needed calculations to determine camera read noise and proper sub duration.

     No, we are actually not having a civil conversation about this - you keep bashing what's been said without any arguments. You showed absolutely nothing. You keep giving vague statements on the topic.

     This is wrong on several accounts. I will list them:
     1. The first part is partly correct - it indeed requires the signal to be above the read noise, but the signal needs to be above the total noise - not just the read noise. SNR needs to be higher than 1 (or in fact about 5 for reliable detection).
     2. An increase in total integration time won't necessarily raise SNR above a certain threshold. If we use exposures that are very short - for example a few milliseconds - then you will introduce too much read noise for SNR to get above the limit. In fact, for any total integration time I can select a sub duration that will keep SNR below 1 due to read noise (see the short numerical sketch below).
     3. Using faster optics is not the only way to increase photon flux per pixel. That is also achieved by using larger pixels or by binning smaller pixels to a larger size.

     I agree with most of what you said here - although I don't know what you mean by "SNR of object signal vs read noise". That statement makes no sense. You should not compare a ratio of two quantities (SNR is the ratio of signal to noise) to one of those quantities (read noise). I'll just assume you meant SNR (signal to noise ratio).

     I never claimed differently - although you have tried to misinterpret what I've said in that way several times. However, I will repeat what LP is good for - it tells you the sub duration beyond which, for a given read noise, longer subs make virtually no difference, so you don't need to use them. In fact, this is not exclusive to LP - it applies to any noise source that grows with time: once any noise source other than read noise becomes significantly larger than read noise, read noise stops having a significant impact on total SNR (unlike the example above, where we use sufficiently short exposures and read noise overpowers the signal for any integration time).

     Having this capability brings equality to cameras with different read noise. If one camera has 1.5e of read noise and the other has 3e of read noise - there are sub lengths for each of them that will produce the same final SNR for the same integration time.

     I agree. No point in further "debating" this topic.
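     A rough numerical sketch of point 2 above (the numbers here are illustrative assumptions, not measurements from any particular setup): fixed total integration time, a faint target, and increasingly short subs.

         import numpy as np

         target_rate = 0.05      # target signal, e-/pixel/s (assumed)
         sky_rate    = 0.5       # light pollution signal, e-/pixel/s (assumed)
         read_noise  = 1.7       # e- RMS per exposure (assumed)
         total_time  = 4 * 3600  # 4 hours total integration, in seconds

         for sub_s in [0.005, 0.1, 1, 10, 60, 300]:
             n_subs = total_time / sub_s
             signal = target_rate * total_time
             noise = np.sqrt(target_rate * total_time     # target shot noise
                             + sky_rate * total_time      # LP shot noise
                             + n_subs * read_noise**2)    # read noise, once per sub
             print(f"{sub_s:7.3f} s subs -> stack SNR = {signal / noise:.2f}")

     With millisecond subs the stacked SNR stays well below 1 no matter the total integration time, while the same four hours in long subs gives an SNR well above the detection threshold.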
  3. Maybe they have different sized secondaries? The 150PL, being on an EQ mount, is more suited to planetary work and thus has a smaller secondary, while the 150P is a general purpose scope with a larger secondary.
  4. It does not. I was talking about swamping read noise with some other noise source. You can't misinterpret what has been said - it is very clear.

     It does not imply the same thing - it is very clear in what it says, and that is that the impact of the read noise on final SNR depends on the other noise sources. Signal is what it is - it does not change if we decide to spend one hour in 60 subs or the same hour in 5 subs. Neither does any of the noise sources that are time dependent. The only thing that changes is the read noise - the total amount of it. And the impact that this has on the final image depends solely on the other noise sources and their magnitude compared to the read noise. If read noise is small compared to any other noise source in a single exposure - its impact on overall noise, and thus SNR, will be minimal. I've shown you the calculation for this. If any other noise has a level that is x5 (or more) the read noise in a single exposure - then the total increase in noise (and decrease in SNR) with respect to read noise (for example, comparing this case to a camera with 0 read noise) is less than 2%.

     I can't comment on what Robin Glover presented in that video as I have not watched it. I've only concluded from comments of other members that he is presenting valid, known statements (and thus is surely right in his presentation).

     Since you say that you don't agree with me - could you please answer my question about the difference that 1.5e versus 3e of read noise makes, depending on shooting conditions? It is ok to disagree with me as long as you put forward a different view and provide actual facts in support of it. Alternatively, you can point out what it is that is wrong in what I've said (and preferably cite a source).
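     For reference, the "less than 2%" figure is just uncorrelated noise added in quadrature (the same calculation as in the light pollution post further down):

         \sqrt{\sigma_r^2 + (5\sigma_r)^2} = \sigma_r\sqrt{26} \approx 5.099\,\sigma_r \approx 1.0198 \times (5\sigma_r)

     so a x5 swamp factor costs about 1.98% in total noise compared to having no read noise at all.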
  5. Here is a bit more detail on why the above is considered valid practice: https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/
  6. You are left with the LP noise as well - you can't remove that, and it exists because we had LP signal present. Not sure what you mean by this - I never said anything remotely like that. I never said any such thing. I never said that signal is a distraction - could you please point to the place where I said so? Signal itself is not the key, signal to noise ratio is.
  7. Ok, so drop off in illumination is easiest to see in daylight, or maybe at dusk or dawn. You might also be able to see it against a light polluted night sky. It really depends on how bad it is. Here is what it looks like (it was surprisingly hard to find a good example):

     So you will see the field stop - more or less sharp (red arrow) - which has nothing to do with vignetting, and then you will see a darker ring close to the field stop (blue arrow) - where whatever is in the view will be a bit darker.

     Now, if I say that there is a drop off of 50% at the edge - this means that only 50% of the light reaches the parts next to the field stop. This however does not mean that the image will look half as bright, because our vision is not linear. If there is 50% of the light, we don't see it as half as bright as 100% light - it's more like 80% (our vision is logarithmic in nature). We often don't notice a difference of 7% or less - so if the drop off at the edge is only 7% (93% illumination), the image will look normal to our eyes right up to the edge.

     In planetary viewing this won't be as important. In lunar it might be, as it will spoil the view. In DSO viewing it is very important, as 50% light means ~0.75 magnitudes of difference. If you have some star or object that is at threshold visibility in the center of the field and you move it to the side - it will disappear and you won't be able to see it because of this. Such vignetting can also be annoying if your skies are not pitch dark and you have some LP - the sky gradient will behave like the sky in the above image: bright in the center and getting darker towards the field stop, which itself will be completely dark.

     With your 6" F/8 and the two 1.25" eyepieces you mentioned, you'll probably have to really look for it to see it at all.
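     The ~0.75 magnitude figure is just the usual flux-to-magnitude relation applied to a 50% illumination drop:

         \Delta m = 2.5 \log_{10}\!\left(\frac{I_{\mathrm{centre}}}{I_{\mathrm{edge}}}\right) = 2.5 \log_{10}(2) \approx 0.75

     (and a 7% drop corresponds to only ~0.08 magnitudes, which is why it goes unnoticed).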
  8. Sure is. We often use short FL eyepieces for planetary to get enough magnification. Hardly anyone uses less than, say, x70-x80 to view the planets. That would mean the lowest magnification EP would be something like 15mm. Even a wide field 15mm EP will have a relatively small field stop - 20ish mm. At 10mm away from the optical axis at the focus point, the illumination diagram looks like this:

     Again not 100% - but that is closer to, say, 80% vignetting at the lowest power wide field planetary eyepiece.
  9. Yet it has only 22% CO - meaning 33mm. If the secondary is placed so that the focal plane is, say, 200mm away from the secondary mirror (about 90mm to the edge of the tube and another 110mm for focuser and drawtube), then the converging beam at the secondary will be about 25mm across. I think that around 19mm away from the central axis the illumination drop off will be 50%! That corresponds to an eyepiece with a field stop of 38mm. Even the 1.25" field is not fully illuminated. The intersection of the two - the converging beam and the secondary mirror - at the edge of the 1.25" field looks like this:
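     A rough sketch of the geometry behind those numbers (a plain similar-triangle approximation that ignores secondary offset and focuser baffling; the 150mm aperture, F/8 and 200mm distance are the assumptions used above):

         # Approximate Newtonian illumination geometry
         aperture = 150.0        # mm
         focal_length = 1200.0   # mm (F/8)
         secondary = 33.0        # mm minor axis (22% CO)
         d = 200.0               # mm from secondary to focal plane

         # on-axis converging cone diameter at the secondary plane
         cone = d * aperture / focal_length
         # diameter of the 100% illuminated field
         full_field = (secondary * focal_length - d * aperture) / (focal_length - d)
         # off-axis radius where roughly half the cone misses the secondary
         half_radius = secondary * focal_length / (2 * (focal_length - d))

         print(f"cone at secondary  : {cone:.1f} mm")        # ~25 mm
         print(f"100% field diameter: {full_field:.1f} mm")  # ~9.6 mm
         print(f"~50% at radius     : {half_radius:.1f} mm") # ~19.8 mm off-axis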
  10. Indeed - however, you will find that most 6" F/8 Newtonians come with 1.25" focusers while most 4" F/10 achromats come with 2" focusers. The limitation on the 6" F/8 Newtonian comes from the size of the secondary mirror / central obstruction. To be a general purpose scope, CO is often around 25%. That means 1.5" in diameter (a quarter of the aperture: 6/4 = 1.5). You can't effectively illuminate a 2" field with a 1.5" secondary mirror - you will get a lot of vignetting - and the compromise is to use only 1.25" eyepieces. Do another comparison between the two scopes - one using a max field stop EP in the 2" variety and the other using something like a 32mm Plossl (which has the largest field stop in the 1.25" format); see the quick comparison below.
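     A quick true-field comparison along those lines (the field stop values are typical catalogue figures and the 1000mm / 1200mm focal lengths are the two scopes discussed - all assumptions for illustration):

         import math

         def true_fov_deg(field_stop_mm, focal_length_mm):
             # true field of view from field stop diameter and telescope focal length
             return math.degrees(field_stop_mm / focal_length_mm)

         print(true_fov_deg(46.0, 1000.0))  # ~2.6 deg - 2" wide field EP on the 4" F/10
         print(true_fov_deg(27.0, 1200.0))  # ~1.3 deg - 32mm Plossl on the 6" F/8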
  11. Only if you use an F/5 Newtonian instead of an F/8. Two different instruments.
  12. Out of interest, what is the purpose of this inquiry? Are you looking to get yourself a new scope and wondering which way to go, or something else? We have not discussed all the other things that go with owning a telescope - size, mounting options, portability, storage and so on. Are these relevant for this discussion?
  13. LP signal is made out of photons, right? Those photons always behave the same, regardless of whether they come from the target or from LP, which means that they have associated noise - Poisson noise. The LP signal itself is very easy to remove. It is usually either constant or in the form of a linear gradient (in some cases it can be a higher order polynomial, but in any case easy to model and remove). What you can't remove is the Poisson shot noise tied to that LP signal, and it is this noise that adds to the total noise in the image.

     Since both read noise and this LP shot noise have zero correlation - they add like linearly independent vectors, or square root of sum of squares. We can easily calculate the increase in noise given the "swamp factor" - how many times the LP noise is larger than the read noise. Say that we have some read noise X and x5 larger LP noise. Total noise will be:

     sqrt(X^2 + (5*X)^2) = sqrt(X^2 + 25*X^2) = sqrt(26*X^2) = X*sqrt(26) = 5.099*X

     In other words - if we have LP noise that is x5 larger than the read noise, it is the same as having LP noise that is 5.099 / 5 = 1.0198... times larger (about 2%) and no read noise at all. By choosing a suitable sub duration we can ignore the read noise of the camera by assuming we have 2% more light pollution noise.

     That 2% might seem like a significant value - but in reality it is far from it. Over the course of the evening, due to the target changing position, its apparent magnitude will change by 0.05 or so most of the time (atmospheric attenuation). That translates into ~5% change in signal level and consequently more than 2% change in SNR - so a 2% increase in noise happens just because the earth spins and is not something you'll notice in the image.

     Hear me out. What I'm saying is not my personal opinion. I'm just listing verifiable facts. I've seen people often mention this video, and I think it will benefit you to watch it as well: https://www.youtube.com/watch?v=3RH93UvP358
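     A little sketch of the swamp factor arithmetic for a few values (pure quadrature addition, same as above; the read noise of 1 is arbitrary since only the ratio matters):

         import numpy as np

         read_noise = 1.0  # arbitrary units - only the ratio matters
         for swamp in [1, 2, 3, 5, 10]:
             lp_noise = swamp * read_noise
             total = np.sqrt(read_noise**2 + lp_noise**2)
             penalty = (total / lp_noise - 1) * 100
             print(f"swamp x{swamp:2d}: total = {total:.3f} x RN, "
                   f"penalty vs zero read noise = {penalty:.2f}%")

     which gives a ~41% penalty at x1, ~5.4% at x3 and ~1.98% at x5 - hence the usual advice of a x3-x5 swamp factor.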
  14. I also took their test once and "failed" (actually my score was above the maximum measured by their test). I decided to play "hard to get" and did not become a member (although they really wanted me to). True story.
  15. 4" F/10 achromat is completely different instrument. Mine, although cheap mass produced item (SW Evostar 102) showed me views of Jupiter and Saturn one evening that are probably in top 5 views of all times of these targets that I had with any scope.
  16. Well, here is one thing that you probably can't do with a 6" F/8 Newtonian: It has a bit more focal length, but the main issue is the size of the secondary mirror and whether it will illuminate 47mm of field.
  17. Of what? I'm inclined to say that a 6" F/8 Newtonian of good optical figure would best a 4" F/10 achromat at everything except wide field views.
  18. At this point in time I'm finding it much easier to answer what drives me away from SGL rather than to it. It's become part of my life, so I'm not thinking in terms of being drawn to it (much like you don't think about what draws you to your family - it's your family, you know); however, there are things that I must actively combat in order not to be driven away from it.
  19. Why? If we look at a weighting function up to N - sure, it will produce a smaller result than all ones up to N, but if we include the weighting coefficients above N which are greater than zero - why do you think the sums will differ in the limit where N tends to infinity?

     But when you are summing the series you are doing the same thing - you are summing weighted numbers, except in this case the weights are 1,1,1,1,1,1, .... ,0,0,0,0,0, ...

     I'm not sure that you are understanding the weighting function properly. Let me show you with a graph. Green is poorly drawn and it should look like this: there should be a smooth transition at each ever higher N. For the most part the two weighting functions are roughly the same - only around N is there a difference in how they pass through the actual value of N - either discontinuously or smoothly (which makes it an analytic function). Both of the above weighting functions would produce similar looking graphs when you plot the calculated values against N like you suggested. The only difference is that when you apply the rigorous mathematical framework of limits, some smooth weighting functions give a converging result while for others the result remains divergent (as in the case of the step function); see the asymptotic below.

     Maybe it is best to think of it this way: let's solve X^2 = -4. If you try the "brute force" approach of finding the square root of -4, you might end up in a calculation without end (try any iterative method designed for positive numbers in order to calculate X). But if we write the above expression a little differently - like this: X^2 = 4 * i^2 - then it is trivial to calculate that X = 2i, even if we use an iterative method to find the square root of 4, which will work in this case.

     You might say - but you used a trick! Sure, but it is a valid, well defined mathematical trick that is consistent with the rest of mathematics - so not much trickier than, say, checking if a number is divisible by two by examining its last digit.
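     For reference, the precise statement from the Terence Tao post linked earlier (quoted from memory, so treat the exact error term as approximate): for any smooth, compactly supported cutoff \eta with \eta(0) = 1,

         \sum_{n=1}^{\infty} n\,\eta\!\left(\frac{n}{N}\right) = C_{\eta} N^{2} - \frac{1}{12} + O\!\left(\frac{1}{N}\right), \qquad C_{\eta} = \int_{0}^{\infty} x\,\eta(x)\,dx

     The divergent part C_{\eta} N^{2} depends on the choice of cutoff, but the constant term -1/12 is the same for every smooth cutoff - while the sharp step-function cutoff gives N(N+1)/2 with no stable constant term at all.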
  20. Brightness of the image does not depend on the captured data but rather on the white point you use and the brightness of the display device used to show the image. The important metric is signal to noise ratio - can something be detected above the noise level. I've shown that, while in a single unbinned image there is no way to detect the tidal tails as they are below the noise floor (you need an SNR of about 5 for reliable detection), they can easily be seen in the binned image because of the higher SNR.

     Quite the opposite - they need to be truly random with zero correlation in order to add like that. If they were fully deterministic they would add like ordinary numbers do - like signal does - plain old addition.

     No comment
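     A one-line sketch of why binning raises SNR (assuming software sum-binning and uncorrelated per-pixel noise): binning k x k pixels multiplies the signal by k^2 while the noise only grows by k, since it adds in quadrature:

         \mathrm{SNR}_{k\times k} = \frac{k^{2} S}{\sqrt{k^{2}\sigma^{2}}} = k\,\frac{S}{\sigma}

     so 2x2 binning buys roughly a factor of 2 in SNR at the cost of resolution.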
  21. Signal does not need to swamp the read noise at all - and when imaging the faint stuff it almost never does, at least not the signal of interest - the target signal. What is important when we talk about read noise is to swamp the read noise with some other type of noise. Out of the basic types of noise present when imaging, only read noise is "per exposure". All the others are time dependent - dark current noise, light pollution noise and target shot noise. Since the target signal is often weaker than the read noise in a single exposure, that is not our candidate. Neither is thermal noise (dark current noise). The only real candidate is light pollution noise.

     Noise adds like linearly independent vectors, and if one is much bigger than the other the result will be very close to the larger one (think of a right angled triangle with one side particularly short - that makes the hypotenuse almost the same length as the other side).

     The histogram is just a distraction - it has almost zero value in astronomical imaging. The main thing it can tell us is whether there is some clipping, but we can see that from the stats as well, so there is no real reason to use the histogram. When you want to calculate optimum exposure length you should simply measure the background signal per exposure, derive the LP noise from it (which is the square root of the sky signal) and compare that with your read noise. This LP noise should be 3-5 times larger than the read noise. Any exposure longer than that just brings diminishing returns (there is only a 2% difference in SNR if one uses a x5 swamp factor versus a single long exposure - and humans can't tell that difference in SNR by eye). A short sketch of that calculation follows below.

     This makes no sense - as I've shown above with the example of one sub. In that sub the read noise is many times larger than the signal in the tidal tails, yet binning works just fine. The SNR impact of read noise depends on the other noise sources and not on the signal we are trying to capture. As long as we keep it (the read noise, that is) below some fraction of some other noise source, it is irrelevant regardless of how low our target signal is.

     Let me ask you a question like this: say you have a fast scope with large pixels and two different cameras that differ only by read noise. The first camera has 1.5e of read noise, and the second camera has 3e of read noise. Under which circumstances will you actually see a decrease in SNR between these two setups?

     Full well size has nothing to do with bringing signal below read noise levels into view. The full well size of a camera is largely inconsequential in astrophotography.
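     A minimal sketch of that rule of thumb (the sky rate and read noise here are just example values - measure the background rate from your own calibrated subs):

         sky_rate   = 1.2   # e-/pixel/s, measured background signal rate (assumed)
         read_noise = 1.7   # e- RMS per exposure (assumed, from camera spec)
         swamp      = 5     # target ratio of LP noise to read noise (3-5 suggested)

         # LP noise in a sub of length t is sqrt(sky_rate * t); we want it to be
         # at least swamp * read_noise, so solve for t
         min_sub = (swamp * read_noise) ** 2 / sky_rate
         print(f"minimum useful sub length ~ {min_sub:.0f} s")  # ~60 s for these numbers

     Anything much longer than that sub length buys essentially nothing in SNR for a fixed total integration time.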
  22. Actual FPS will depend on several factors - but the most important one is the size of the frame you are downloading. Another is the actual USB speed of the computer and the settings in the capture software (there is a setting called USB speed or similar which regulates how much of the USB bandwidth is hogged by the camera - raise that value to get better FPS, but lower it if you start experiencing an unresponsive camera, freezes or similar).

     @Dunc78 I concur it is probably the ASI224. Given certain priorities, some other model might be better suited (for example, for lunar, sensor size might be more important than max FPS, as the moon is a rather stationary target that often needs a larger FOV; or for Ha solar, since the 656nm wavelength is captured with Ha scopes that are often at very high F/ratio, larger pixels are an important factor in order not to have to bin the data).
  23. It's not bad - it is just one of the weighting schemes, and it indeed produces an infinity. Many different weighting schemes also produce infinity, but there are some that produce -1/12. I also think that you can't have an arbitrary weighting scheme - only one that satisfies certain criteria. I'm not sure what those criteria are, but as far as I can tell Terence Tao did research into that and probably proved that a certain class of weighting schemes are equivalent. Perhaps one criterion is that the weighting function needs to tend to 0 as one moves to the right of N faster than N approaches infinity, and similarly that the weighting function needs to tend to 1 as one moves to the left of N (again faster than N goes to infinity) - or some other requirement like that.

     My firm belief is that the sum 1+2+3+4+5+.... off to infinity is one way of calculating a certain value - but a flawed way of doing it, as it does not converge in the classical sense. Which does not mean that the actual number does not exist. Another way of calculating the same value would be to use a different weighting function - and some of those weighting functions are not flawed and do allow you to calculate the value that way. The zeta function is yet another way to calculate the same value (which again works).

     The point is not that the infinite sum equals -1/12, but rather that we have some value that is -1/12 and that there are different ways to calculate it - one of which fails, but we know why it fails, and when we encounter this way of calculating we know what the answer should be, regardless of the fact that we can't actually pull off that particular calculation. This is why it works in physics - we know the answer is right; it is just the "algorithm" to calculate it that is flawed, and the above paper gives us better insight into why it's flawed and what the correct ways are to calculate such values, which we can use when we stumble onto a flawed way of calculating them.
  24. I don't think it was aimed at anyone in particular - but at the general notion that often gets repeated: the advice "get a large Newtonian" rather than "get a large aperture telescope of any design type that suits you best". Both will have the same speed at the same pixel scale, but other types can be, and often are, more manageable than a large Newtonian on several counts. First, they can often be used without corrective optics, which (in order to correct over a larger field) often reduce the Strehl ratio of the telescope in the center. Second, they can be of a compact design, which is of course easier to manage and mount.

     There are some drawbacks - like ease of collimation (which I think is debatable) and soundness of construction - but that is just a different type of discussion: bad vs good telescope execution. Newtonians also have one major thing going for them, and that is price - they are often cheaper. There are some other smaller things in favor of folded designs - better baffling, slower to dew up, easier to produce flat fields (less chance of light leak) and so on - but again, that might be better directed at bad vs good telescope execution rather than at inherent design type.