Everything posted by vlaiv

  1. I don't see that happening. The only thing you might gain in going from the 310 to the 360 is maybe a reduction in read noise. According to this: the difference will be minimal - both are at about 1.35e, give or take.
  2. I think that etendue encapsulates much more, but it also misses a bit that is important when we want to understand all of this. It is essential to this discussion, and I believe pretty much everyone will agree that it is implicitly implied. It is an essential component of why telescopes work the way they work - a conservation law that explains why we see equal brightness of a star regardless of where in the fully illuminated field we put it. However, I believe it is a concept that is missing the noise part and hence can't be used to explain SNR and consequently the speed of the system. It is also a very difficult concept for most people to grasp. For this reason, I don't find it very useful in explaining things. We need to find, or attempt to find, something that is easy to understand but still provides an accurate and applicable way for people to think about the speed of telescopes. There might not be such a thing - at least not an easy one.
     What I wanted to point out is this: if someone approached the majority of astrophotographers with the following question - ED80 + ATIK 414EX color or ATIK 428EX color vs Skywatcher Maksutov 102mm and ASI294 - I have a sneaky suspicion that 95% of answers would be along the lines of "ED80 + Atik, of course", with screams of horror when an F/13 Maksutov is mentioned in an astrophotography context. Then I say, let's look at the following: the Mak has enough illuminated field to capture the same FOV, it has enough aperture advantage to offset mirror reflectivity and central obstruction, and it will in principle produce the same image in the same amount of time (at half the price?) - see the sketch below. I do agree that the Mak is going to be a bit harder to work with - does the mirror shift? You'll need an OAG rather than a guide scope, and so on - but that is beside the point. If all I said was "I have 100mm of aperture that covers 0.85 x 0.6 degrees - is it going to be fast?", I might get a different answer than the above.
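
A rough sanity check of the "enough aperture advantage" claim - the 35% obstruction, 90% mirror reflectivity and 97% lens transmission below are assumed typical values, not measured figures for these scopes:

```python
import math

# Rough light-grasp comparison: ED80 refractor vs 102mm Maksutov.
# Obstruction/reflectivity/transmission figures are assumed typical values.

def refractor_area(aperture_mm, transmission=0.97):
    """Unobstructed aperture area scaled by assumed lens transmission."""
    return math.pi * (aperture_mm / 2) ** 2 * transmission

def obstructed_area(aperture_mm, obstruction_frac, reflectivity=0.9, surfaces=2):
    """Aperture area minus central obstruction, scaled by mirror losses."""
    clear = math.pi * (aperture_mm / 2) ** 2 * (1 - obstruction_frac ** 2)
    return clear * reflectivity ** surfaces

ed80 = refractor_area(80)
mak102 = obstructed_area(102, obstruction_frac=0.35)  # ~35% obstruction assumed

print(f"ED80 effective area  : {ed80:7.0f} mm^2")
print(f"Mak102 effective area: {mak102:7.0f} mm^2")
print(f"ratio: {mak102 / ed80:.2f}")   # ~1.2 - the Mak comes out slightly ahead
```
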
  3. Not sure - I will need to look it up. I've heard of etendue but have no idea what it is.
  4. Just had a thought that could be interesting for discussion on this thread. We talk about the speed of a telescope and the speed of a camera - but those things don't have a speed. They can't produce an image on their own. They need to be paired with the other component to be able to produce an image. Thus we can only talk about a telescope's potential to be fast and a camera's potential to be fast. In that sense, I feel that aperture is an essential component of a telescope's potential to be fast. Another component that is often overlooked in these discussions is the fully illuminated and corrected field (well, it does not need to be fully illuminated to be useful, but we do like a fully corrected field). A scope with a small useful field has lower potential than a scope with a large useful field - as the latter can be paired with larger cameras. A third component that is neither "good" nor "bad" would be focal length. I think that the three of those in some way represent a scope's potential to be fast. I still don't clearly see the relationship between the three and a scope's potential to be fast. Does anyone have any ideas about this?
  5. I understand the term faster/speed like this: for a given target, setup and expected SNR - how long does one need to expose to reach the expected SNR? More time is slower and less time is faster (same as real speed - how much time it takes to travel from A to B). When we start thinking like that, then we tend to start thinking about components of the setup in similar terms. How fast is that scope, or how fast is a particular sensor? In itself, that does not make much sense, as a camera by itself can't perform the task of imaging - and neither can a telescope by itself. What we really mean is: this piece of equipment has "the potential" to be fast if properly paired with the other part. In that sense the F/ratio of the scope is not necessarily a measure of the potential to be fast, while sensor size is a measure of the potential to be fast. We are used to the speed of a scope being the ratio of focal length and aperture, but it is aperture in reality that represents a telescope's potential to be fast.
     Here is an example (see the sketch below): take a 100mm F/5 telescope and pair it with a camera with a 10mm diagonal. The F/5 telescope is fast, right? Can we match that speed with an F/10 telescope by any chance? We actually can. Take a sensor that has a 20mm diagonal and the same pixel size as the first sensor and put it on a 100mm F/10 scope. It will give the same FOV but more pixels. Bin those pixels and you get exactly the same setup as the first one. Pixel size will be the same, FOV will be the same, aperture will be the same, sampling rate will be the same. SNR in a given time will be the same (except for read noise - which you can mitigate by using a different sub length, depending on your LP levels) and hence the speed will be the same. How is that possible if one telescope is "slow" and the other is "fast"? Because F/ratio is not a very suitable measure of the potential to be fast. We have two other measures: aperture and sensor size. In this case aperture plays its part: same aperture - same potential speed - just pair it with a suitable camera.
     You can also look at it the other way around - a larger sensor has the potential to "go faster": it can either offset a "slow" scope and make it equally fast as a "fast" scope, or it can make a "same speed" scope much faster. If we take a 200mm F/5 scope with the larger sensor - it will again give the same FOV, the same sampling rate after pixel binning and so on, but the aperture is now x4 in surface area over the 100mm F/10 example. Instead of being of the same speed, the 200mm F/5 scope with the larger sensor will be much faster than the 100mm F/5 with the smaller sensor - even though both are "same speed" F/5 scopes.
     In the end, let me recap: we can talk about the speed of the system and not of components, but if we want to talk about the speed of components - it's probably better to talk about the potential to be fast rather than actual speed, as a single component can't get the job done without being paired with the other parts of the system.
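
A minimal sketch of that equivalence in numbers - the 5um pixel size is an assumed example value, everything else is from the post:

```python
import math

# 100mm F/5 + 10mm-diagonal sensor vs 100mm F/10 + 20mm-diagonal sensor
# binned 2x2: same aperture, same sampling rate, same FOV - same speed.

def setup(aperture_mm, f_ratio, sensor_diag_mm, pixel_um, bin_factor=1):
    fl = aperture_mm * f_ratio                         # focal length, mm
    scale = 206.265 * pixel_um * bin_factor / fl       # arcsec per (binned) pixel
    fov = math.degrees(2 * math.atan(sensor_diag_mm / (2 * fl)))  # diag FOV, deg
    return scale, fov

fast = setup(100, 5, 10, 5)                  # "fast" F/5 + small sensor
slow = setup(100, 10, 20, 5, bin_factor=2)   # "slow" F/10 + large sensor, binned

print(f'F/5  + 10mm sensor: {fast[0]:.2f} "/px, {fast[1]:.2f} deg FOV')
print(f'F/10 + 20mm sensor: {slow[0]:.2f} "/px, {slow[1]:.2f} deg FOV')
# both print 2.06 "/px and 1.15 deg - identical setups in practice
```
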
  6. There seems to be a contradiction in the table provided. It might be me and my understanding. What is considered "Image Size" at fixed DPI? I see two possibilities - the size of the object in the image, or the size of the actual image (width x height) / the FOV of the image. But we already have FOV, so I'm inclined to believe it is object size in the image. If we say that it is object size, then:
     Pixel size increased -> object size reduced: true
     Sensor size increased -> object size increased: false
     If we assume the opposite - that image size is the actual pixel count (width x height) or the FOV - then:
     Pixel size increased -> image size decreased: true
     Sensor size increased -> image size increased: true
     but: Focal length increased -> image size increased: false
  7. It is faster in one particular case - trying to get a larger FOV than the small sensor is capable of. In that case you are limited to the mosaic technique (we exclude focal reducers for the time being, or assume one is already fitted in the form of a CC/FF). Imagine that you have to take 4 panels to cover the target with the small sensor. The large sensor will cover all of that in one go - let's say one hour. To get an hour's worth of signal per panel, you need to spend 4h total to get the same image with the smaller sensor.
  8. It shows something else - speed depends on sensor size as well as aperture size. Not something that we consider often. Take for example ASI1600 + 8" RC scope vs ASI6200 + 16" RC scope. Roughly speaking, since the ASI1600 has a diagonal of about 22mm and the ASI6200 has a diagonal of about 45mm - we can say the latter is twice as large a sensor (in height and width), or x4 in surface area. The 16" RC will have twice the focal length of the 8" RC. Thus both will capture the exact same FOV (give or take a bit) - see the sketch below. With the use of binning, you can make them sample at the same sampling rate. The 16" RC + ASI6200 will be the clear winner, as it grabs x4 more light and illuminates the same FOV. In that sense we can say that with a scope it is aperture per FOV that matters, and the basic unit of FOV is the arc second squared. In the end, we conclude what visual observers knew all along - with a scope, aperture is king. As imagers, we need to add our own saying, and I propose: with sensors - sensor size is king.
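
A quick sketch of that FOV comparison - I'm assuming F/8 for both RCs (typical for the design) and using the rough diagonals from the post:

```python
import math

# FOV and light-grasp comparison: 8" RC + ASI1600 vs 16" RC + ASI6200.

def fov_deg(sensor_diag_mm, focal_len_mm):
    """Diagonal field of view in degrees."""
    return math.degrees(2 * math.atan(sensor_diag_mm / (2 * focal_len_mm)))

print(f'8"  RC + ASI1600: {fov_deg(22, 203 * 8):.2f} deg')   # ~0.78 deg
print(f'16" RC + ASI6200: {fov_deg(45, 406 * 8):.2f} deg')   # ~0.79 deg
print(f'aperture area ratio: {(406 / 203) ** 2:.0f}x')        # x4 the light
```
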
  9. Something similar. I would not agree that all targets and all conditions follow the same rule - x2.51 per magnitude of difference. It depends on the brightness of the target - I did my calculation based on mag26 targets, and also on my particular setup (sampling rate affects both target and LP signal in the same way - spread over more pixels - but signal grows linearly with time while LP noise grows as the square root of the LP signal).
  10. How about if we take pixels out of the equation and just concentrate on aperture per arc second squared as the "speed" of the scope? I know that many will say "but you can't take pixels out of the equation", and I say: we can match resolution by using fractional binning or other resampling methods (see the sketch below).
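
A minimal sketch of fractional binning via resampling - the image is random stand-in data and the x1.4 factor is just an example:

```python
import numpy as np
from scipy.ndimage import zoom

# Shrink an image by a non-integer factor to match a coarser sampling rate.

rng = np.random.default_rng(0)
img = rng.normal(100.0, 10.0, size=(700, 700))   # stand-in for a linear sub

resampled = zoom(img, 1 / 1.4, order=1)          # bilinear resample by x1.4

print(img.shape, "->", resampled.shape)          # (700, 700) -> (500, 500)
```
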
  11. That is not a misconception - a large sensor will cover more of the sky and hence gather more light. Not more light from the same target - but more total light (a small sensor won't gather any light from targets that fall outside its FOV - but a large sensor will pick up on those). If you want to do a wide field shot with a smaller sensor - you need to do a mosaic - and spend much more time doing it to match the output of the large sensor. Alternatively, you can use the large sensor on a large scope and get a faster system. A large sensor is going to be faster than a small one if matched with a proper scope.
  12. Not quite as easy as that. SNR depends on signal - which is what is discussed here - but also on all noise sources: read, thermal, shot and LP noise all play a part. In recent times, LP noise has become quite a dominant noise component, since set-point cooling has become widely available in amateur setups and more and more man-made light is present at night. Defining the speed of a telescope by light gathering capability only is very useful for comparing two scopes - but can be really misleading to some. We can conclude that a particular scope and camera make a very fast system - yet people in heavy LP will struggle to get a decent image in multiple nights of exposing. For example - I calculated (or would estimated be a better term?) that the exposure difference between my current and future site - mag18.5 vs mag20.8 - will be about x6 in exposure time. Same scope - x6 faster in darker skies!
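
A back-of-envelope version of that estimate: when LP noise dominates, time to reach a given SNR scales linearly with sky flux. This pure LP-limited case gives ~8x for mag18.5 vs mag20.8; the x6 figure above presumably folds in the other noise sources as well.

```python
# How many times more exposure a brighter sky needs, LP-noise-limited case.

def sky_flux_ratio(mag_bright_sky, mag_dark_sky):
    """How many times brighter the brighter sky is."""
    return 10 ** (0.4 * (mag_dark_sky - mag_bright_sky))

print(f"{sky_flux_ratio(18.5, 20.8):.1f}x more exposure in the brighter sky")
```
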
  13. Depends on what you mean by resolution. That word has many uses that can sometimes get mixed up. If by resolution you mean sampling rate / number of pixels per part of the sky, or the number of pixels of the image in general - guiding accuracy does not play any direct part in that (it is related through the other use of the word if you want good results). On the other hand, if by resolution you mean actual detail captured / resolved (hence resolution) - guiding precision does play a significant part. Better guiding means less blurring due to poor tracking - in turn that means a sharper image. Sometimes it can depend a lot on guiding precision and sometimes there won't be much difference between good and bad guiding - atmosphere/seeing can play a significant part as well - so much so that in some cases good guiding simply won't help. Sometimes poor seeing prevents you from guiding well - the guide star just jumps around too much.
     This is true. A larger scope does not necessarily mean better resolution - in the sense of resolving things. In particular this is true for undermounted large scopes. There are other intricate details of how atmospheric seeing effects depend on aperture size and so on - not an easy topic to either understand or predict correctly. In general - larger aperture has the possibility of resolving more, but a host of factors - seeing, guiding and such - can interfere and prevent it from delivering.
     Yes and no. It does affect highly contrasting features, but such features often have enough signal for the impact to be minimal. If something is high contrast (like a star against black space) - one component has to have a strong signal, and strong signal means good SNR. On the other hand, if you have a low contrast feature - like uniform nebulosity - then blurring won't do pretty much anything to disturb the signal and hence the SNR (a blur of uniform intensity is again that same uniform intensity; the only place where you see blur is where there is contrast - see the sketch below). In most practical cases, a bit more blurring due to poor guiding or poor seeing won't affect SNR much. There will be cases - for example, trying to catch a very faint star - where it will reduce SNR and can mean the difference between detection and no detection, but for regular imaging it won't make a difference.
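
A quick demo of that blur argument - blurring a uniform patch leaves it unchanged, while blur only shows at a contrast boundary (values are arbitrary):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

uniform = np.full((64, 64), 50.0)   # "uniform nebulosity"
edge = np.zeros((64, 64))
edge[:, 32:] = 50.0                 # sharp high-contrast step

print(np.allclose(gaussian_filter(uniform, sigma=2), uniform))  # True - unchanged
print(gaussian_filter(edge, sigma=2)[0, 28:36].round(1))        # smooth ramp
```
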
  14. You are right - larger aperture means more photons captured. Given two scopes - a smaller one and a larger one - both paired with cameras (binned or otherwise) that result in the same sampling rate / resolution, and the cameras being otherwise equal (QE, noise and all) - the larger scope will be faster. Why don't people see this sometimes? Because a larger scope means a longer focal length, which means a higher sampling rate, and most people don't bother to bin or otherwise adjust the sampling rate. Another thing is that small changes in SNR might not be obvious to the human eye - no two images are processed the same, but they are measurable quantities. If you want to visually assess the difference in SNR - it is best to form a single image while the data is still linear - by using one half from one image and the other half from the second image. That way processing is guaranteed to be the same, as you are processing only one image. Of course, you must take care to "equalize/normalize" signal levels so that they match when you form the single image (see the sketch below).
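
A minimal sketch of that half-and-half comparison - img_a and img_b are random stand-ins for the two linear stacks, and median scaling is one simple way to equalize the levels:

```python
import numpy as np

rng = np.random.default_rng(1)
img_a = rng.normal(100, 5, (400, 400))   # stand-in for stack from setup A
img_b = rng.normal(120, 5, (400, 400))   # stand-in for stack from setup B

img_b *= np.median(img_a) / np.median(img_b)   # equalize background levels

# join left half of A with right half of B, then process this one image
combined = np.hstack([img_a[:, :200], img_b[:, 200:]])
print(combined.shape)   # (400, 400)
```
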
  15. I'm rather sad I missed all the commotion here
  16. Looking at the curve - I would say it's a decent LPS filter. You won't have much trouble getting the color balance right and it should cut into the heaviest LP (sodium lamps and such). In case you are in mag19+ skies - I would say it will help quite a bit. I don't know what the build quality is or what the optics are like, but you'll see if it's OK soon enough. Poor optics can sometimes distort stars or make them bloat a bit, and there can be odd reflections and halos - hopefully, you won't have to deal with any of those.
  17. That looks like bloom. Most CCDs nowadays have ABG - an anti-blooming gate - which prevents this from happening. Bloom is electrons from a pixel well overflowing into adjacent pixels and filling them up as well. Maybe in bin x4 on your CCD (and I'm assuming these images are from a CCD) the ABG is not working 100% properly and lets "a few" electrons (enough to fill an adjacent pixel well to a certain level) escape? Btw - here is what bloom looks like: it is usually a streak in the direction of the CCD columns (the same direction the CCD is read out - by shifting electrons from pixel to pixel until they reach the end of a column).
  18. Keep your distance - a few meters - wear protective gear - a mask - and wash your hands afterwards? Given the current stats in the UK, the probability of a random person having the infection is very low - about a third of a percent. As long as both parties involved understand that certain behavior and protection are required - there is minimal possibility of anything going wrong.
  19. I would use 2" LPS filter and I would put it in front of CC, so it would be: camera, FW, CC, LPS filter, scope
  20. I've found these images online - specifically on the LightPollutionMap.info website - you select different SQM/SQM-L/SQC readings to be shown on the map. Most of them are by Andrej Mohar. I searched for software that could convert all-sky images to SQM readings, but I have found none. Anyone with a suitable planetary camera and a short focal length lens should be able to make such a map - provided software is available for it.
     In principle, the software is just a bit more complicated than measuring sky background on regular images. This is because, for the most part, regular images taken with a telescope are more or less uniform in geometric distortion (large zoom means low distortion). Here we have a "fisheye" type lens that has very high distortion. A pixel in the center does not cover the same surface area as a pixel at the edge of the image, and this needs to be taken into account when creating the map, because we need the amount of light per surface area. Other than that - filtering out bright stars - which can be accomplished by taking multiple exposures over the course of half an hour, for example, and then doing sigma clip stacking - rejecting any brighter pixels as having a bright star in them (see the sketch below). Not sure how one would subtract the Milky Way from the image though.
     But you are right - if you can do an all-sky image with your gear like this one: you can produce a report like this one: If you visit lightpollutionmap.info and download these measurements, you'll find even more interesting images, for example this one: which happens to be a cylindrical projection of brightness together with markings of large cities/towns and their distances - very cool. I've just seen that there is an "SQC 1.9.3" marking on these images - could be a software name and version number ... off to search the net to see if I can find it.
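
A minimal version of that star-rejection idea - a sigma-clip stack that masks bright outliers (stars drifting through pixels over the exposure series) before averaging; 'frames' is random stand-in data:

```python
import numpy as np

def sigma_clip_stack(frames, kappa=2.5):
    """Per-pixel mean after masking values more than kappa sigma above the median."""
    data = np.asarray(frames, dtype=float)
    med = np.median(data, axis=0)
    std = np.std(data, axis=0)
    clipped = np.ma.masked_array(data, data > med + kappa * std)  # clip bright side only
    return clipped.mean(axis=0).filled(np.nan)

frames = np.random.default_rng(2).normal(100, 10, size=(20, 64, 64))
print(sigma_clip_stack(frames).shape)   # (64, 64) sky-background estimate
```
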
  21. SQM is not very precise - it has a measurement cone FWHM of about 45 degrees. SQM-L is a bit more precise - it has about 20 degrees of FWHM for measurement (a narrower part of the sky is measured). Both can give somewhat inaccurate readings because they average the reading over said part of the sky. This is why it is advised to aim at zenith - the least change in brightness can be expected there. To put it into perspective - the difference between mag20.9 and mag21.1 is 0.2 mags. You can use the standard magnitude formula to see the ratio of intensities between the two: mag = -2.5 * log10(I1/I2), or I1/I2 = 10^(-0.4 * mag) = 10^(-0.08) = ~0.83. The sky at the darker site is about 83% as bright as at the brighter one - only about 17% dimmer. Mind you - we don't perceive even that 17% directly - our vision is closer to the magnitude scale - it is the same difference as between a mag6 and a mag6.2 star - very small in terms of perceived brightness. Probably the best thing to have is something like this: It does however require a specialized lens / camera and a computer to generate. From it you can see how the SQM reading changes depending on the direction and altitude of measurement. In my view, it's worth having such a chart for an observatory / regular observing site.
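A minimal version of that calculation in code, using the example values from above:

```python
# I1/I2 from the standard magnitude formula: delta_mag = -2.5 * log10(I1/I2).

def intensity_ratio(mag1, mag2):
    """Intensity ratio I1/I2 for sources of magnitude mag1 and mag2."""
    return 10 ** (-0.4 * (mag1 - mag2))

print(f"mag21.1 sky vs mag20.9 sky: {intensity_ratio(21.1, 20.9):.2f}")  # ~0.83
```
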
  22. Jupiter at what magnification and at what time of the year? It could almost be possible with a small scope. Jupiter is about 45" in size, and for an 80mm scope the Airy disk diameter is 3.21" - a difference of around x14. That can "easily" be covered with a set of eyepieces - take a 32mm GSO Plossl and a 2mm Vixen HR planetary - a difference of x16. The 80mm scope + 2mm Vixen HR will show an Airy disk larger than Jupiter appears in the same scope with the 32mm GSO Plossl.
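The arithmetic behind that example - at an assumed 550nm wavelength the Airy disk comes out ~3.5"; the 3.21" quoted above corresponds to a shorter (~510nm) wavelength:

```python
import math

# Airy disk diameter for an 80mm aperture vs Jupiter's ~45" apparent size.

aperture_m = 0.08
wavelength_m = 550e-9   # assumed; use ~510e-9 to reproduce the 3.21" figure

airy_diam = math.degrees(2 * 1.22 * wavelength_m / aperture_m) * 3600  # arcsec

print(f'Airy disk diameter : {airy_diam:.2f}"')
print(f'Jupiter / Airy disk: {45 / airy_diam:.1f}x')
```
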
  23. We can use the simplification that I gave above - no need to think of photons as waves or anything - we think of waves that produce a certain intensity of light, which you can relate to photon count for higher intensities, or to the probability that a photon will hit for low intensities (same thing really). In this context a photon is a "hit" rather than a particle - a +1 count in our CCD detector. Everything else that we use is more or less classical wave optics.
  24. I'm not even certain about the 50 to 150 part - the threshold of detection is about 7 photons (5-9 in some sources). Since we are talking about the eye/brain system - we need to take into account image build-up and the filters that our brain uses (like a noise suppression filter - we never seem to see photon noise at those levels, and it should be obvious).
  25. I think that it is - we only need to substitute the notion of intensity of light with the notion of probability of photon detection in a specific area (and then integrate over that area).