Everything posted by vlaiv

  1. I understand the term faster/speed like this: for a given target, setup and expected SNR - how long does one need to expose to reach that SNR. More time is slower and less time is faster (same as with real speed - how much time it takes to travel from A to B).

When we start thinking like that, we tend to think about components of the setup in similar terms. How fast is that scope, or how fast is a particular sensor? In itself this does not make much sense, as a camera by itself can't perform the task of imaging - and neither can a telescope by itself. What we really mean is: this piece of equipment has "the potential" to be fast if properly paired with the other parts. In that sense the F/ratio of the scope is not necessarily a measure of the potential to be fast, while sensor size is. We are used to the speed of a scope being the ratio of focal length and aperture, but in reality it is aperture that represents the telescope's potential to be fast.

Here is an example. Take a 100mm F/5 telescope and pair it with a camera that has a 10mm sensor diagonal. The F/5 telescope is fast, right? Can we match that speed with an F/10 telescope by any chance? We actually can. Take a sensor with a 20mm diagonal and the same pixel size and put it on a 100mm F/10 scope. It will give the same FOV but more pixels. Bin those pixels and you get exactly the same setup as the first one: same pixel size, same FOV, same aperture, same sampling rate. SNR in a given time will be the same (except for read noise - which you can mitigate by adjusting sub length, depending on your LP levels) and hence the speed will be the same.

How is that possible if one telescope is "slow" and the other is "fast"? Because F/ratio is not a very suitable measure of the potential to be fast. We have two other measures: aperture and sensor size. In this case aperture plays its part - same aperture means the same potential speed, provided it is paired with a suitable camera. You can also look at it the other way around - a larger sensor has the potential to "go faster": it can either offset a "slow" scope and make it as fast as a "fast" scope, or it can make a "same speed" scope much faster. If we take a 200mm F/5 scope with the larger sensor - it will again give the same FOV and the same sampling rate after binning, but the aperture now has x4 the surface area of the 100mm F/10 example. Instead of being the same speed, the 200mm F/5 scope with the larger sensor will be much faster than the 100mm F/5 with the smaller sensor - and both are "same speed" scopes by F/ratio.

In the end, let me recap: we can talk about the speed of the system and not of components, but if we want to talk about the speed of components, it's probably better to talk about the potential to be fast rather than actual speed, as a single component can't get the job done without being paired with the other parts of the system.
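Here is a minimal numeric sketch of the 100mm example above (plain Python; the 4um pixel size is just an assumed value, and QE / read noise are taken as equal and ignored):

```python
import math

def pixel_scale(pixel_um, focal_mm):
    """Sampling rate in arc seconds per pixel."""
    return 206.265 * pixel_um / focal_mm

def fov_deg(sensor_mm, focal_mm):
    """Approximate field of view in degrees (small-angle approximation)."""
    return math.degrees(sensor_mm / focal_mm)

pixel = 4.0  # um, assumed example value

# Setup A: 100mm F/5 (500mm FL) with a 10mm diagonal sensor, native pixels
a_scale, a_fov = pixel_scale(pixel, 500), fov_deg(10, 500)

# Setup B: 100mm F/10 (1000mm FL) with a 20mm diagonal sensor, binned 2x2
b_scale, b_fov = pixel_scale(pixel * 2, 1000), fov_deg(20, 1000)

print(f'A: {a_scale:.2f}"/px over {a_fov:.2f} deg diagonal')
print(f'B: {b_scale:.2f}"/px over {b_fov:.2f} deg diagonal')
# Same aperture, same sampling rate, same FOV -> same light per pixel per unit time
```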
  2. There seems to be a contradiction in the table provided - though it might just be me and my understanding. What is meant by "Image Size" at fixed DPI? I see two possibilities: the size of the object in the image, or the size of the actual image - width x height - or the FOV of the image. But we already have FOV, so I'm inclined to believe it is the object size in the image.

If we say it is object size, then: "pixel size increased -> object size reduced" is a true statement, but "sensor size increased -> object size increased" is false. If we assume the opposite - that image size means the actual pixel count (width x height) or the FOV - then "pixel size increase -> image size decrease" is true and "sensor size increase -> image size increase" is true, but "focal length increase -> image size increase" is false.
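To make the "object size" reading concrete, here is a small sketch (the pixel sizes, focal lengths and the ~660 arcsec object are just example values): object size in pixels depends on focal length and pixel size, while sensor size only sets how much sky fits around the object.

```python
def object_size_px(object_arcsec, pixel_um, focal_mm):
    """Size of an object on the sensor, in pixels - sensor size does not appear."""
    scale = 206.265 * pixel_um / focal_mm   # arc seconds per pixel
    return object_arcsec / scale

print(object_size_px(660, 3.8, 800))    # ~674 px
print(object_size_px(660, 7.6, 800))    # pixel size doubled -> ~337 px (smaller)
print(object_size_px(660, 3.8, 1600))   # focal length doubled -> ~1347 px (larger)
```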
  3. It is faster in one particular case - trying to capture a larger FOV than the small sensor can cover. In that case you are limited to the mosaic technique (we exclude focal reducers for the time being, or assume one is already fitted in the form of a CC / FF). Imagine that you need 4 panels to cover the target with the small sensor. The large sensor will cover all of that in one go - let's say one hour. To get an hour's worth of signal per panel, you need to spend 4h total to get the same image with the smaller sensor.
  4. It shows something else - speed depends on sensor size as well as aperture size, which is not something we consider often. Take for example ASI1600 + 8" RC scope vs ASI6200 + 16" RC scope. Roughly speaking, since the ASI1600 has a diagonal of about 22mm and the ASI6200 has a diagonal of about 45mm, we can say the latter is twice as large a sensor (in height and width), or x4 in surface area. The 16" RC will have twice the focal length of the 8" RC, so both will capture the exact same FOV (give or take a bit). With binning you can make them sample at the same rate. The 16" RC + ASI6200 will be the clear winner, as it grabs x4 more light and illuminates the same FOV. In that sense we can say that with a scope it is aperture per FOV that matters, and the basic unit of FOV is the arc second squared. In the end we conclude what visual observers knew all along - with scopes, aperture is king. As imagers, we need to add our own saying, and I propose: with sensors - sensor size is king.
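A rough back-of-envelope version of that comparison (the ~1600mm / ~3200mm focal lengths and the 22mm / 44mm diagonals are illustrative round numbers, not exact specs):

```python
import math

def fov_diag_deg(diag_mm, focal_mm):
    """Diagonal field of view in degrees (small-angle approximation)."""
    return math.degrees(diag_mm / focal_mm)

def aperture_area(d_mm):
    """Collecting area of an unobstructed aperture in mm^2."""
    return math.pi * (d_mm / 2) ** 2

small = (aperture_area(203), fov_diag_deg(22, 1600))   # 8" RC + ASI1600
large = (aperture_area(406), fov_diag_deg(44, 3200))   # 16" RC + ASI6200

print(f'8" RC + ASI1600 : {small[0]:.0f} mm^2 over {small[1]:.2f} deg diagonal')
print(f'16" RC + ASI6200: {large[0]:.0f} mm^2 over {large[1]:.2f} deg diagonal')
# Same FOV, x4 the collecting area -> x4 the light over the same patch of sky
```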
  5. Something similar. I would not agree that all targets and all conditions follow the same rule of x2.51 per magnitude of difference. It depends on the brightness of the target - I did my calculation based on mag26 targets, and also on my particular setup (sampling rate affects both target and LP signal in the same way - both get spread over more pixels - while signal grows linearly with time and LP noise grows as the square root of the LP signal).
  6. How about we take pixels out of the equation and just concentrate on aperture per arc second squared as the "speed" of the scope? I know that many will say - but you can't take pixels out of the equation - and I say: we can match resolution by using fractional binning or other resampling methods, as in the sketch below.
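A minimal sketch of what I mean by matching resolution through resampling (scipy's zoom is just one convenient tool here, and the sampling rates are made-up values):

```python
import numpy as np
from scipy.ndimage import zoom

img = np.random.rand(1000, 1500)   # stand-in for a calibrated linear image
native_scale = 0.85                # "/px we have
target_scale = 1.30                # "/px we want to match
resampled = zoom(img, native_scale / target_scale, order=3)
print(resampled.shape)             # roughly (654, 981)
```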
  7. That is not a misconception - a large sensor will cover more of the sky and hence gather more light. Not more light from the same target, but more total light (a small sensor won't gather any light from targets that fall outside of its FOV - a large sensor will pick those up). If you want to do a wide field shot with a smaller sensor, you need to do a mosaic - and spend much more time doing it to match the output of the large sensor. Alternatively, you can use the large sensor on a large scope and get a faster system. A large sensor is going to be faster than a small one if matched with a proper scope.
  8. Not quite as easy as that. SNR depends on signal - which is what is discussed here - but also on all the noise sources: read, thermal, shot and LP noise all play a part. In recent times LP noise has become quite a dominant noise component, since set point cooling is now widely available in amateur setups and more and more man-made light is present at night. Defining the speed of a telescope by light gathering capability only is very useful for comparing two scopes - but it can be really misleading to some. We can conclude that a particular scope and camera make a very fast system - yet people in heavy LP will struggle to get decent images in multiple nights of exposing. For example, I calculated (or better, estimated) that the exposure difference between my current and future site - mag18.5 vs mag20.8 - will be about x6 in exposure time. Same scope - x6 faster under darker skies!
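For reference, the raw sky brightness ratio between those two sites works out as below - the ~x6 figure above is my estimate for my setup and targets, since read, thermal and target shot noise pull the gain below the purely LP-limited case:

```python
bright_site = 18.5   # mag per arcsec^2
dark_site = 20.8

flux_ratio = 10 ** (0.4 * (dark_site - bright_site))
print(f"Sky is ~{flux_ratio:.1f}x brighter at the mag{bright_site} site")
# ~8.3x; only when LP noise completely dominates does exposure time scale with this ratio
```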
  9. Depends what you mean by resolution. That word has many uses that can sometimes get mixed up. If by resolution you mean sampling rate - number of pixels per part of the sky, or number of pixels of the image in general - guiding accuracy does not play any direct part in that (it is related through the other use of the word if you want good results). On the other hand, if by resolution you mean actual detail captured / resolved (hence resolution) - guiding precision does play a significant part. Better guiding means less blurring due to poor tracking, which in turn means a sharper image. Sometimes the result depends a lot on guiding precision and sometimes there won't be much difference between good and bad guiding - atmosphere/seeing can play a significant part as well, so much so that in some cases good guiding simply won't help. Sometimes poor seeing prevents you from guiding well - the guide star just jumps around too much.

This is true. A larger scope does not necessarily mean better resolution - in the sense of resolving things. In particular this is true for undermounted large scopes. There are other intricate details of how atmospheric seeing effects depend on aperture size and so on - not an easy topic to understand or predict correctly. In general a larger aperture has the possibility of resolving more, but a host of factors - seeing, guiding and such - can interfere and prevent it from delivering.

Yes and no. It does affect highly contrasting features, but such features often have enough signal for the impact to be minimal. If something is high contrast (like a star against black space) - one component has to have strong signal, and strong signal means good SNR. On the other hand, if you have a low contrast feature - like uniform nebulosity - then blurring won't do much of anything to disturb the signal and hence the SNR (a blur of uniform intensity is again that same uniform intensity; the only place you see blur is where there is contrast - see the small demonstration below). In most practical cases a bit more blurring due to poor guiding or poor seeing won't affect SNR much. There will be cases - for example trying to catch a very faint star - where it will reduce SNR and can mean the difference between detection and no detection, but for regular imaging it won't make a difference.
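A tiny demonstration of the contrast point (made-up uniform patch and edge, with a Gaussian blur standing in for seeing/guiding error):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

flat = np.full((100, 100), 50.0)    # uniform nebulosity, arbitrary units
edge = np.zeros((100, 100))
edge[:, 50:] = 50.0                 # a sharp, high-contrast boundary

print(np.allclose(gaussian_filter(flat, sigma=3), flat))   # True - blur changes nothing
print(gaussian_filter(edge, sigma=3)[50, 45:55].round(1))  # the boundary gets smeared out
```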
  10. You are right - larger aperture means more photons captured. Given two scopes - a smaller one and a larger one - both paired with cameras (binned or otherwise) that give the same sampling rate / resolution, and with the cameras otherwise equal (QE, noise and all), the larger scope will be faster. Why don't people always see this? Because a larger scope means a longer focal length, which means a higher sampling rate, and most people don't bother to bin or otherwise adjust the sampling rate. Another thing is that small changes in SNR might not be obvious to the human eye - no two images are processed the same - but they are measurable quantities. If you want to visually assess the difference in SNR, it is best to form a single image while the data is still linear, using half of one image and the other half from the second image. That way processing is guaranteed to be the same, as you are processing only one image. Of course, you must take care to equalize/normalize the signal levels so that they match when you form the single image - a sketch of this is below.
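A minimal sketch of the splicing idea, assuming the two stacks are already registered and still linear (the simple gain+offset fit is just one way to normalize, and the function name is mine):

```python
import numpy as np

def splice_for_comparison(stack_a, stack_b):
    """Left half from stack_a, right half from stack_b, after matching
    stack_b to stack_a with a simple linear (gain + offset) fit."""
    gain, offset = np.polyfit(stack_b.ravel(), stack_a.ravel(), 1)
    b_matched = gain * stack_b + offset
    out = stack_a.copy()
    half = out.shape[1] // 2
    out[:, half:] = b_matched[:, half:]
    return out
```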
  11. I'm rather sad I missed all the commotion here
  12. Looking at the curve, I would say it's a decent LPS filter. You won't have much trouble getting the color balance right, and it should cut into the heaviest LP (sodium lamps and such). If you are in mag19+ skies, I would say it will help quite a bit. I don't know what the build quality is or what the optics are like, but you'll see soon enough if it's ok. Poor optics can sometimes distort stars or make them bloat a bit, and there can be odd reflections and halos - hopefully you won't have to deal with any of those.
  13. That looks like blooming. Most CCDs nowadays have an ABG - anti blooming gate - which prevents this from happening. It is electrons from a pixel well overflowing into adjacent pixels and filling them up as well. Maybe in bin x4 on your CCD (and I'm assuming these images are from a CCD) the ABG is not working 100% properly and lets "a few" electrons escape - enough to fill the adjacent pixel wells to a certain level? Btw, here is what blooming looks like: it is usually a streak in the direction of the CCD columns (the same direction the CCD is read out, by shifting electrons from pixel to pixel until they reach the end of a column).
  14. Keep distance - a few meters, wear protective gear - mask and wash hands afterwards? Given the current stats in UK, probability of random person having infection is very low - about 1/3 of a percent. As long as both parties involved understand that certain behavior and protection is required - there is minimal possibility of anything going wrong.
  15. I would use 2" LPS filter and I would put it in front of CC, so it would be: camera, FW, CC, LPS filter, scope
  16. I've found these images online - specifically on the LightPollutionMap.info website - you can select different SQM/SQM-L/SQC readings to be shown on the map. Most of them are by Andrej Mohar. I searched for software that could convert all sky images to SQM readings, but I have found none. Anyone with a suitable planetary camera and a short focal length lens should be able to make such a map - provided software is available for it.

In principle, the software is just a bit more complicated than measuring sky background on regular images. This is because, for the most part, regular images taken with a telescope are more or less uniform in geometric distortion (high magnification means low distortion). Here we have a "fisheye" type lens with very high distortion. A pixel in the center does not cover the same surface area of sky as a pixel at the edge of the image, and this needs to be taken into account when creating the map, because we need the amount of light per surface area. Other than that, there is filtering of bright stars - which can be accomplished by taking multiple exposures over the course of, say, half an hour and then doing sigma clip stacking, rejecting any brighter pixels as having a bright star in them (a minimal sketch is below). Not sure how one would subtract the Milky Way from the image though.

But you are right - if you can do an all sky image with your gear like this one: You can produce a report like this one: If you visit lightpollutionmap.info and download these measurements, you'll find even more interesting images, for example this one: which happens to be a cylindrical projection of brightness together with markings of large cities/towns and their distances - very cool. I've just seen that there is an SQC 1.9.3 marking on these images - could be a software name and version number ... off to search the net to see if I can find it.
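A minimal sketch of that star-rejection step (assuming the all-sky frames are already loaded as equally sized numpy arrays; the kappa and iteration values are just reasonable guesses):

```python
import numpy as np

def sigma_clip_stack(frames, kappa=2.5, iterations=3):
    """Average a series of frames, rejecting bright outliers (stars) per pixel."""
    data = np.stack(frames).astype(np.float64)            # shape: (n_frames, h, w)
    mask = np.zeros(data.shape, dtype=bool)
    for _ in range(iterations):
        clipped = np.ma.array(data, mask=mask)
        limit = (clipped.mean(axis=0) + kappa * clipped.std(axis=0)).filled(np.inf)
        mask |= data > limit                              # clip only the bright side
    return np.ma.array(data, mask=mask).mean(axis=0).filled(np.nan)
```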
  17. SQM is not very precise - it has a measurement cone FWHM of about 45 degrees. SQM-L is a bit more precise - about 20 degrees FWHM (a narrower part of the sky is measured). Both can give somewhat inaccurate readings because they average the reading over that part of the sky. This is why it is advised to aim at the zenith - the least change in brightness can be expected there.

To put it into perspective - the difference between mag20.9 and mag21.1 is 0.2 mags. You can use the standard magnitude formula to see the ratio of intensities between the two: mag = -2.5 * log(I1/I2), or I1/I2 = 10^(-mag/2.5) = 10^-0.08 = ~0.83. The sky at the darker site is about 83% as bright as at the brighter one. Mind you, our vision works closer to the magnitude scale anyway - it is the same difference as between a mag6 and a mag6.2 star - very small in terms of perceived brightness.

Probably the best thing to have is something like this: It does however require a specialized lens / camera and a computer to generate. From it you can see how the SQM reading changes depending on the direction and altitude of the measurement. In my view, it's worth having such a chart for an observatory / regular observing site.
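The same conversion as a small helper, for checking readings quickly:

```python
def intensity_ratio(mag1, mag2):
    """Ratio I1/I2 of two surface brightness readings (mag per arcsec^2)."""
    return 10 ** (-0.4 * (mag1 - mag2))

print(intensity_ratio(21.1, 20.9))   # ~0.83 - the darker site's sky is ~17% dimmer
print(intensity_ratio(6.2, 6.0))     # same ratio as between a mag6.2 and a mag6.0 star
```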
  18. Jupiter at what magnification and what time of the year? It could almost be possible with a small scope. Jupiter is about 45" in size and for an 80mm scope the Airy disk diameter is about 3.21" - a difference of around x14. That can "easily" be bridged with a pair of eyepieces - take a 32mm GSO Plossl and a 2mm Vixen HR planetary - a difference of x16. The 80mm scope + 2mm Vixen HR will make the Airy disk appear larger than Jupiter does in the same scope with the 32mm GSO Plossl.
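A quick check of those numbers - the 3.21" figure corresponds to roughly 510nm light (the wavelength is an assumption on my part):

```python
import math

wavelength = 510e-9   # m, assumed
aperture = 0.080      # m

airy_diameter = math.degrees(2 * 1.22 * wavelength / aperture) * 3600
print(f"Airy disk diameter: {airy_diameter:.2f} arcsec")   # ~3.21"
print(f"Jupiter / Airy disk: {45 / airy_diameter:.0f}x")   # ~14x
```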
  19. We can use simplification that I gave above - no need to think of photons as waves or anything - we think of waves that produce certain intensity of light - which you can relate to photon count for higher intensities or probability that photon will hit for low intensities (same thing really). In this context - photon is a "hit" rather than particle - +1 number in our CCD detector. Everything else that we use is more or less classical wave.
  20. I'm not even certain about 50 to 150 part - threshold of detection is about 7 photons (5-9 in some sources). Since we are talking about eye/brain system - we need to take into account image build up and filters that our brain use (like noise suppression filter - we never seem to see photon noise at those levels and it should be obvious).
  21. I think that it is - we only need to substitute notion of intensity of light with notion of probability of photon detection in specific area (and then integrate over that area).
  22. No need to use that - use a bit of wave mechanics to describe what is going on; it is fairly "easy" to understand. It is very similar in nature to the double slit experiment - different paths come together at certain places and the phase of the light makes them either reinforce or destructively interfere. A bit of math will tell you that you need to integrate over the aperture, and very soon you'll see that what you derived is actually a Fourier transform of the aperture. Now you have a tool to examine Airy patterns of different types of aperture (obstructed, with spider support, hexagonal, square, ...). If you go further, you'll also notice that a further Fourier transform of a particular Airy pattern gives you the MTF - this is how the Airy pattern affects the image produced by the aperture. This explains why there is a maximum resolving power of a telescope and why an unobstructed aperture gives better contrast for visual. There is a field of optics called Fourier optics that deals with all of this. https://en.wikipedia.org/wiki/Fourier_optics
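A small numerical illustration of that idea, under the usual far-field (Fraunhofer) assumptions - grid size and units are arbitrary, it only shows the shapes:

```python
import numpy as np

n = 512
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
aperture = (np.hypot(x, y) < 32).astype(float)    # unobstructed circular pupil

field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(aperture)))
psf = np.abs(field) ** 2                          # Airy pattern (intensity)
mtf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf))))
mtf /= mtf.max()

print(psf.shape, round(mtf[n // 2, n // 2], 3))   # MTF is 1.0 at zero frequency
# e.g. matplotlib's imshow(np.log1p(psf)) shows the familiar Airy rings;
# mask the pupil with a central obstruction or spider and watch both change.
```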
  23. You don't really need different magnification, as it does not change the properties of the Airy disk except in relation to something else. Magnification on its own is a scale factor - a unit conversion, call it what you will - and has absolutely no impact on the Airy disk unless it is put in relation to something else. We use the term magnification to signify that it is in relation to human vision. We use the term sampling rate to signify that it is in relation to a camera sensor. In fact, magnification and sampling rate are two slightly different concepts - the first is about angular sizes (magnification specifies a ratio of angular sizes), while sampling rate is related to projection - a ratio of angular size to length, or to pixel size (which represents the unit length), in linear space.
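To put the two terms side by side as formulas (the numbers are arbitrary examples):

```python
def magnification(scope_fl_mm, eyepiece_fl_mm):
    """Ratio of angular sizes - dimensionless."""
    return scope_fl_mm / eyepiece_fl_mm

def sampling_rate(pixel_um, focal_mm):
    """Arc seconds of sky projected onto one pixel."""
    return 206.265 * pixel_um / focal_mm

print(magnification(1000, 10))      # 100x
print(sampling_rate(3.75, 1000))    # ~0.77 "/px
```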
  24. I think it might be photon shot noise. The eye / brain combination is very good at filtering this out - at very low light levels, when it should be obvious, we never see it. I think this mental noise filter is engaged in particular circumstances and fails in others - most notably with Ha on a bright day, when the eye/brain is "not expecting" it to happen as there is plenty of light around. It happened to me once with an imaging Ha filter - not a solar one, just a 7nm wide 1.25" filter - I was in a dim room, it was really sunny outside and I looked through the filter at the outside scene. It looked a bit like the regular scene (in deep red) with old cathode-ray-tube-TV-not-being-tuned type noise superimposed over it. Anyway, the noise filter thing is the explanation I came up with - I might be wrong and it could be something entirely different - but it is the only feasible explanation I've found so far.
  25. I have a couple of notes on this one - just random points that ought to be taken into account when talking about such phenomena:

- all stars form an equal Airy disk (this is not strictly true, as stars have different spectra and the Airy disk size depends on wavelength - but we can say they are equal for the purpose of this discussion), since it is a property of the telescope and not of the star.

- at threshold levels of vision, human vision does not quite behave like a smooth function - we can say that it is non linear (in fact it is always non linear in our sensation, but that is not what is meant here - it is non linear in physical response). Hence spreading objects of a certain brightness equally will result in a change of contrast between them - this is why (together with other factors, like how well we perceive contrast depending on object size) there is a magnification that shows a particular galaxy best under given conditions: it darkens the background the most and the galaxy the least, creating the best contrast.

- we have to look at the profile of the Airy pattern - it is not flat / uniform, it is rather pointy. While at a certain magnification we can resolve the whole disk, with faint stars only the parts of the Airy profile bright enough to trigger a response are seen, and those might not be resolved and can still look point like. This graph can probably explain it better: if we don't see the rest of the Airy disk - only the top of the peak - that visible part will not be resolved even if the full disk would be.