Everything posted by vlaiv

  1. Indeed - that is the equivalent of saying that a larger scope gathers more light. The problem is that this light is spread over more photoreceptors, and each photoreceptor gathers the same number of photons as before.
  2. If we keep the exit pupil constant, we have a constant number of photons per eye receptor. Our eye receptors are fixed in the sense of how many there are per mm² at the back of our eye. If we keep the exit pupil the same - a larger telescope will gather more photons, but at the same time it will magnify the image more, and the projection of the object on the back of our eye will be larger (the eye's focal length is fixed) - covering more square millimeters. These extra photons will be spread over more receptors, and on average each receptor will gather the same number of photons as before (for the same exit pupil, the increase in aperture and the increase in magnification simply cancel each other out).
  3. I just thought of another point that we have not considered. We take it that JND is constant - but maybe it is not. JND stands for Just Noticeable Difference. In the discussion so far we referred to it as contrast ratio - or rather, we referred to the physical aspect rather than the perceptual aspect. Say we have two sources of light - how different do they need to be in order for us to be able to tell them apart? I've found somewhere that this threshold for vision is about 7%. People doing variable star observations (without a computer) can chip in here - what is the magnitude difference that can be seen between two stars? That 7% is a flux ratio of 1.07, which is equal to about 0.073 magnitudes of difference (see the sketch after this post). The question is - is this 7% (or whatever the actual value is) constant, or does it change as we approach the low light limit? Maybe in a low light scenario we need a larger JND, and that is part of the problem?
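For reference, the percent-to-magnitude conversion above is just the standard Pogson relation - a quick check in Python, using the values from the post:

```python
import math

jnd = 0.07  # ~7% brightness difference threshold quoted above

# Pogson relation: delta_m = 2.5 * log10(flux ratio)
delta_mag = 2.5 * math.log10(1 + jnd)
print(round(delta_mag, 3))  # 0.073 mag, matching the figure in the post
```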
  4. That is an empirical fact. I'm trying to understand why it happens, because logic / science says the following: if you have an object with a certain surface brightness and you decrease the aperture size and the magnification equally - the apparent surface brightness of the object does not change. The same number of photons hits the back of our eye - per unit surface or per receptor cell (as we have a definite density of receptor cells). How come that with the same number of photons hitting our receptor cells - in one case we see something and in the other we don't? That is the question.
  5. Well, I think that we need to experiment. There is a rather nice way to conduct this sort of experiment - one just needs a zoom eyepiece and a variable aperture mask for their telescope. That way we can vary the different parameters - like magnification and aperture size - and see how the visibility of the object changes with them.
  6. Oh, c'mon, just because I used "that" circle-viewed-at-an-angle image? What's wrong with a plastic circular disk with half of a world map printed on it? (btw, yes, you do want to try Rakia https://en.wikipedia.org/wiki/Rakia )
  7. Although I agree with some of the points, I find it hard to follow when I see a graph explained as: "This diagram shows that, for a given background (e.g. the night sky), less contrast is needed to see a larger object" - yet the graph shows brightness down to mag 30 per arcsecond squared (and contains data down to mag 27), when the night sky is never darker than about mag 22 - the natural brightness of a dark sky. Further, if magnification is the issue: given a certain surface brightness vs background brightness, say in an 8" scope at a certain exit pupil where the object is seen, and at the same exit pupil in a 4" where the object is not seen - we can conclude that it is down to magnification, as the contrast ratio and the actual surface brightness of both sky and object are the same. We can then double the magnification in the 4" scope, and that should keep the contrast ratio the same, because both target and sky will drop in brightness by the same amount - and if we did not push the target out of the zone of perception (enough photons received) - we should now detect the object in the 4" scope as well, right? Does that happen? I don't think so.
  8. Could you elaborate? Resolution here is rather inconsequential, I think, as we are talking about medium magnifications and extended faint objects - not planetary detail.
  9. Not sure what you mean by integrated brightness? If you are referring to the integrated brightness of the object - well, that does not change. As surface brightness goes down with magnification, the area increases - and the two cancel each other out in the end. That is sort of obvious - for a given scope we can only gather so much light from the object; changing magnification and exit pupil can't change that. On the other hand, the total amount of light gathered from any given object is proportional to aperture surface, and indeed an 8" scope does collect more light than a 4" scope. But look at my argument above about surface brightness and size - take either two different-sized galaxies of the same surface brightness, or the same galaxy placed so that only a portion of it is visible in the FOV. That will not change the visibility of the object - yet it is analogous to using a larger scope and a smaller scope at the same exit pupil - and those two do differ in what they show.
  10. I hear what you say, but although I phrased the question like that - what I really want to know is how come we see more in bigger scopes. Yes, point sources are easy to explain. Planetary detail is easy to explain. What I'm failing to explain is surface brightness. A few years ago I had a rather satisfying session with an 8" scope next to a 4" scope at a dark(ish) location. There was a clear distinction between the views provided by the two scopes - for example, the 8" scope showed all "members" of Markarian's chain, but the 4" scope showed only the 3 brightest ones, regardless of the magnification used. The scopes in question were an 8" F/6 dob and a 4" F/5 short frac.
  11. I'm not sure that I buy into that explanation for size. It would mean that two galaxies of the same surface brightness would be seen differently - or rather, one seen and the other not - depending on their size? There is an easy test we can do on that one - take an EP that has a smaller field stop so as to avoid vignetting issues - maybe an Ortho with a small AFOV will do. Take any galaxy that is close to the threshold - just barely distinguished - and observe it while it is whole in the FOV vs with only 1/4 of it in the FOV - a bit like this:
  12. Nice image. I don't particularly like the star reduction that you performed (or at least that is what I think happened - all stars are tight but there are halos around them, and the galaxy cores also look funny), and I don't like the plasticky feel of the background. There is also a central bit that has a very strange red cast - maybe a flats issue in the red channel? I know there is pressure to show as much as you can from the data you have, but maybe try processing it less aggressively. Don't worry if your image does not show the full extent of the galaxy or whatever.
  13. Why would you need x64 as many subs? You would only need that if you set the exposure length much shorter for the 8-stop camera (see the sketch after this post) - but you should not base your exposure length on dynamic range at all; base it on the ratio of read noise to the other dominant noise source. Maybe both cameras have the same level of read noise, and the one with 14 stops of DR simply has a much larger full well capacity? In that case you would take the same number of subs with each - and recover the saturated parts from the 8-stop camera with a few filler subs.
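For anyone wondering where x64 would come from - it is the premise being questioned here: shortening exposure purely by the DR difference. A quick sketch of that arithmetic, with the stop counts from the post:

```python
dr_low, dr_high = 8, 14  # stops of dynamic range of the two cameras

# If exposures were shortened by the full 6-stop difference to protect highlights,
# you would need 2^6 = 64x as many subs for the same total integration time.
subs_ratio = 2 ** (dr_high - dr_low)
print(subs_ratio)  # 64 - but exposure length should be set by read noise swamping, not DR
```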
  14. If you mean this: that is quite a large range of sizes for a small difference in contrast visibility. Take for example the "Seen" section - there are at least 6-7 waves that are essentially the same in terms of contrast visibility - and the largest is easily twice the size of the smallest in spatial extent. To me, the graph above alone does not explain what is going on ...
  15. This is not click bait but a genuine question (although it just occurred to me that putting "this is not click bait" in the title is guaranteed to make it click bait). Surface brightness depends on exit pupil, right? I mean, take an 8" F/6 scope, a 4" F/6 scope and our regular John/Jane Doe galaxy with uniform surface brightness - like M33 or perhaps the Fireworks galaxy. If we choose EPs that provide the same exit pupil in both scopes - we will in fact make the views in both scopes equally bright. The 8" has x4 the aperture area and twice the focal length (both being F/6 scopes), so using, say, a Baader Morpheus 17.5mm in both scopes gives the same exit pupil of around 3mm. Now, the 8" gathers x4 more photons - but due to its twice longer FL it will have twice the magnification, so the galaxy will be double in size, or x4 in surface area. x4 more photons over an x4 larger surface simply means the same number of photons hitting the same surface in our eye (see the sketch after this post). We should see things the same (only bigger in the bigger scope). Yet field experience says that we can easily see some faint fuzzies in an 8" but struggle to see them in a 4". How come?
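A minimal numeric sketch of the cancellation described above (Python; assuming 200mm/1200mm and 100mm/600mm for the two F/6 scopes):

```python
ep_fl = 17.5  # Baader Morpheus focal length in mm

for aperture, focal_length in [(200, 1200), (100, 600)]:
    exit_pupil = ep_fl / (focal_length / aperture)  # ~2.9mm for both F/6 scopes
    magnification = focal_length / ep_fl
    photons = aperture ** 2          # light grasp scales with aperture area
    image_area = magnification ** 2  # retinal image area scales with magnification squared
    print(exit_pupil, magnification, photons / image_area)

# photons / image_area comes out identical for both scopes -
# same number of photons per unit area of retina, as argued in the post
```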
  16. Oval halos are made by the oval cross section of a circular cable. It is oval because it is tilted, and tilted stuff needs collimation - that bit is obvious. Take a circle and look at it face on - it will be a circle - but look at it from an angle and it will be an ellipse. For example, this image clearly shows what happens when you look at a circle from an angle:
  17. That is far from ideal. Almost no amateur is capable of producing images that are truly below 1"/px. The math behind all of this is somewhat more complex than a simple rule of thumb like that. There are three major components that impact the final FWHM of stars in the image: seeing FWHM, guiding performance and scope aperture.

If you want things simplified - the optimum sampling rate is the FWHM of the stars that you actually produce, divided by 1.6. For example - you shoot your image, measure the FWHM of the stars in it and it turns out to be 3.5" - the optimum sampling rate in that case is 3.5"/1.6 = 2.1875"/px.

Saying that seeing alone is responsible for the star profile FWHM is incorrect. Seeing FWHM is defined as the FWHM of a star profile imaged for 2 seconds with a very large aperture (large enough that its contribution is negligible - that means 20" or more). If you have 2" FWHM seeing and use a 60mm scope - you can't expect to get stars of 2" FWHM in your image. This is because the Airy disk of a 60mm scope alone is ~4.3", and the two "mix" to produce the final result (they convolve, in mathematical terms). Add to that your guiding errors if you image for a few minutes, as mounts don't track perfectly and smear your stars some more.

You have a rather big aperture at 8" and you should be OK at 1.7"/px, so yes, bin your image. In fact - you can image, calibrate and stack normally, and then at the end, while the data is still linear and before you start any processing - decide if you want to bin and by how much. On some nights it will make sense to bin x2, and on some nights it will make sense to bin x3. This will also depend on what sort of coma corrector you are using. Some coma correctors introduce a bit of spherical aberration, enlarging stars in the image and reducing resolution further (the discussion above on seeing, Airy disk and guiding holds for diffraction limited optics; coma correctors, field flatteners and reducers often bring a telescope below the diffraction limit - which is fine for long exposure imaging, but should be taken into account - stars will be just a tad wider than with a diffraction limited scope).

If you are in doubt whether an image can be binned - there is a rather simple method of testing this: take the original image, down sample it to a smaller resolution, then up sample it back to the original resolution and look for differences (see the sketch after this post).

Here is an example. This is part of an M51 image that has an optimum sampling rate of around 2"/px (taken with an 8" RC scope on a modded HEQ5 with guide error in the 0.5-0.6" RMS range). I actually binned it initially to 1"/px (since my native resolution is 0.5"/px - ASI1600 and a 1600mm FL scope). You might think that this image looks good and sharp - but it is not properly sampled; let me show you. Here it is down sampled to 2"/px using Lanczos resampling. Now that is what I would call sharp - not the first image. This one looks properly sampled (btw, all of this applies when you view the image at 100% zoom - if you have it rescaled to fit the screen, you might not notice any of this). I'm now going to enlarge that small image to 200%. Can you spot any differences from the original image (except perhaps lower noise)? This goes to show that no detail is lost at 2"/px - and SNR is gained at 2"/px. That is the point of binning - process your data so that no detail is lost in the image and you have the best SNR possible for your data.
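The down sample / up sample test described above is easy to script - a minimal sketch using Pillow (the file name is hypothetical; do this on the linear data and compare at 100% zoom):

```python
from PIL import Image

# hypothetical input: your calibrated, stacked, still-linear image
img = Image.open("m51_linear.tif").convert("F")  # 32-bit float mode, safe for 16-bit data
w, h = img.size

down = img.resize((w // 2, h // 2), Image.LANCZOS)  # simulate bin x2 via Lanczos resampling
back = down.resize((w, h), Image.LANCZOS)           # blow it back up to the original size

back.save("roundtrip.tif")
# Compare roundtrip.tif with the original at 100% zoom - if you can't spot lost detail,
# the data is oversampled and binning x2 costs nothing while improving SNR.
```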
  18. Dynamic range of the sensor used is really not an important metric for long exposure astrophotography (it is useful for daytime photography, where we take a single exposure). Stack two images - you increase DR by 1 stop / 1 bit; stack 4 in total - you increase DR by 2 stops / 2 bits, and so on. The increase over the base DR depends on how many subs you stack. You should determine your base exposure length so that read noise is swamped by some other noise source. Usually that is LP noise, but with a DSLR it can be dark current noise as well, since the camera is not cooled and dark current can be considerable. Once you determine your base exposure length - shoot as many subs as you can in your "budgeted" imaging time. If you have saturation (and most likely you will) - shoot a couple of short "filler" exposures at the end. In the end - let's think about what dynamic range is and how it is calculated. It is max signal divided by read noise (see the sketch after this post). But what if we had a zero read noise camera? Would DR be infinite in that case? To me, dynamic range should represent the ratio of the strongest to the weakest recorded signal. It can never be infinite.
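Just to pin down the definition used above - a quick sketch with made-up sensor numbers:

```python
import math

full_well = 50000  # e-, max signal before saturation (assumed value)
read_noise = 1.6   # e- RMS (assumed value)

# DR as usually quoted: max signal over read noise, expressed in stops
dr_stops = math.log2(full_well / read_noise)
print(round(dr_stops, 1))  # ~14.9 stops - and with zero read noise this blows up to infinity
```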
  19. Given the size of the image, I think I was using a x2 barlow at the time, but I'm not sure what magnification it actually gave. Probably this model: https://www.365astronomy.com/GSO-2x-Barlow-2-Element-Achromatic-Barlow It has a barlow element with a 1.25" filter thread that you can unscrew and screw onto the nose piece of the camera. I probably used it like that, since at the time I was not familiar with all the technical details like critical sampling rate (max F/ratio) and such. By the way - placing the barlow element further away gives larger amplification of focal length, and placing it closer gives smaller amplification. Therefore, if you have such a barlow with a removable lens element - you can vary magnification simply by using different 1.25" (in this case, or T2 in other cases) extensions. We can actually calculate what sort of barlow I was using from the image. Jupiter in the image measures roughly 143px across. The image was taken in April 2015, and the apparent diameter of Jupiter at that time was around 40". If we divide the two, we get 40" / 143px = 0.28"/px. I was using a camera with 3.75µm pixels, which gives a FL of ~2760mm. So the barlow used was 2760 / 900 = x3 (see the sketch after this post). At the time I did not have a x3 barlow. I had x2 and x2.5 GSO barlows, so I probably used the latter with a bit more spacing to give it x3 magnification (not that I aimed for that - I just inserted the nose piece of the camera into the barlow and that is what it gave me).
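Here is that calculation spelled out (Python; 206.265 is the usual arcsec-per-radian constant scaled for µm pixel size and mm focal length):

```python
jupiter_arcsec = 40.0  # apparent diameter of Jupiter, April 2015
jupiter_px = 143.0     # measured diameter on the image
pixel_um = 3.75        # camera pixel size
native_fl = 900.0      # scope focal length in mm

sampling = jupiter_arcsec / jupiter_px        # ~0.28 "/px
effective_fl = 206.265 * pixel_um / sampling  # ~2765mm effective focal length
print(round(effective_fl / native_fl, 2))     # ~3.07 - so roughly a x3 barlow
```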
  20. The term magnification does not apply to cameras, and camera FOV is only indirectly related to the resolving power of the telescope. With cameras, the maximum resolving power of a telescope is related to pixel size. The SV105 has a 3µm pixel size (from what I see via a quick Google search), which means that the optimum F/ratio for it is F/11.7, or about F/12. Your scope is F/6.92, so a x2 barlow would be the most suitable, raising your F/ratio to F/13.8. Barlow magnification changes depending on the distance from the optical element to the sensor - so if you can bring it a bit closer, you may be able to get F/12 or F/11.7 (see the sketch after this post). This is the theoretical maximum "magnification" (although, like I said, that is the wrong term here). Using a higher F/ratio or a longer focal length will result in a larger image without any additional detail, so this is the limit. You can use a shorter FL and, besides maybe missing out on the smallest features - no harm is done. In reality, the maximum level of detail will be governed by atmospheric turbulence / seeing.

If you want to observe Jupiter via computer - then just go ahead and dial in the exposure length for the best looking image, but if you want to image Jupiter - well, that is altogether another matter. Planets are imaged using something called the lucky imaging approach. A movie is made consisting of very short exposures - like 5-6ms each - and many frames are captured, often tens of thousands. This video is then fed into special software that selects the best frames - the ones least distorted by the atmosphere - and a stack is created (meaning an average of these best frames), which improves signal to noise ratio, making the image less noisy. In the end, special sharpening algorithms are applied that reverse some of the atmospheric influence and the limitations of the optics, and one ends up with a decently sharp image of the planet - like this one: This was captured with the exact same scope (SW 130/900 Newtonian) and a similar camera - QHY5LIIc - using the above lucky imaging technique. By the way - welcome to SGL.
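The F/ratio figures above come from the usual critical sampling rule of thumb (F/ratio ≈ 2 × pixel size / wavelength) - a quick sketch, assuming green light as the reference wavelength:

```python
pixel_um = 3.0        # SV105 pixel size
wavelength_um = 0.51  # green light, a common reference wavelength

critical_fratio = 2 * pixel_um / wavelength_um  # ~F/11.8 - optimum for this pixel size
scope_fratio = 900 / 130                        # SW 130/900 gives F/6.92
barlow_needed = critical_fratio / scope_fratio  # ~x1.7, so a x2 barlow is the closest fit
print(critical_fratio, scope_fratio, barlow_needed)
```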
  21. I have no issue with circular cable management - in the same way I don't mind diffraction spikes. I do, however, object when the cable circle is tilted - in the same way I object when a scope is not collimated properly.
  22. Yes, that is correct. You really need only two types of exposures - regular ones, and short ones to fill in the saturated bits of the regular exposures. You don't need more than that. Bias and flat files can be shared between the two. You only need separate darks - and only if you don't do dark scaling (for example, CCDs with set point temperature cooling can use the same darks scaled down to match the short exposure length). Blending of the data is ideally done at the linear stage, and here is the "logic" for it: take all pixels in the regular stack that saturate and replace them with short_stack * time_ratio from the short stack. You don't need to be exact about saturated pixels - you can use a simpler condition, like all pixels with a value larger than 95% of the max value, for example. Time_ratio is just the ratio of the two exposure times - so if you have 2 minute regular exposures and 10 second short exposures, you would multiply the short stack pixel values by 120s / 10s = 12 (see the sketch after this post). I think that DSS can do something similar for you automatically if you use "Entropy Weighted Average (High Dynamic Range)" as the stacking algorithm. Here is what the technical info on DSS has to say about this method: I have tried this method of stacking and it works - but I have not tried it with different exposure times, so I'm not sure it will work well for that use case.
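A minimal sketch of that blending logic in Python/numpy (the file names and the 95% threshold are just the example values from above; both stacks must still be linear, calibrated and aligned):

```python
import numpy as np

# hypothetical inputs: calibrated, linear, aligned stacks saved as numpy arrays
long_stack = np.load("long_stack.npy")    # stack of 2 minute subs
short_stack = np.load("short_stack.npy")  # stack of 10 second subs

time_ratio = 120.0 / 10.0                 # ratio of the two exposure lengths = 12

threshold = 0.95 * long_stack.max()       # treat pixels above 95% of max as saturated
blended = np.where(long_stack >= threshold, short_stack * time_ratio, long_stack)
np.save("blended.npy", blended)
```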
  23. For me - neither. I'm not sure what the reason is for enlarging the image so much (3-4 times larger than it should be). If the images are presented at critical sampling rate - then it is B:
  24. This is the closest to those scopes that I have (taken with a 130mm F/6.92 Newtonian with an apparently spherical mirror): Taken in 2015.