Everything posted by vlaiv

  1. I've written this many times before, and I'm probably getting boring with it by now, but here it is one more time:

     - The nature of light is such that a circular aperture of limited size produces an image that is limited in the frequency domain. There is a maximum level of detail that such an aperture can record. This is a well-known law of physics and there is no way around it (no simple way, anyway - there have been some attempts at "super resolution", namely speckle interferometry, but that is simply not applicable to planetary imaging). The cut-off frequency is given by a simple formula: https://en.wikipedia.org/wiki/Spatial_cutoff_frequency
     - The Nyquist sampling theorem is a mathematical theorem and it is proven correct - there is no arguing about it. It states that if we have a band limited signal (and with a telescope aperture we do - see the above point), we need to sample it at twice the maximum frequency component in order for our samples to completely and faithfully restore the signal.

     Using these two, we can fairly simply derive a rule for the F/ratio of a telescope system. The pixel size is the sampling step, and we need two samples per period of the cut-off frequency, so in the above formula we set the cut-off frequency to 1/(2*pixel_size) and get:

     1/(2*pixel_size) = 1/(lambda * f_ratio)

     which we can rearrange to 2*pixel_size = lambda * f_ratio, or f_ratio = 2*pixel_size / lambda, where lambda is the wavelength of light that we want to record faithfully. The visual spectrum runs from 400nm to 700nm, and if we use 400nm as the shortest wavelength we want to record, the formula becomes f_ratio = 2 * pixel_size / 0.4 (if we use micrometers for pixel size then we must use micrometers for wavelength, and 400nm is 0.4um), which further turns into:

     f_ratio = 5 * pixel_size

     Now, why people feel they need to sample at higher rates (slower F/ratios or longer focal lengths) is something that I personally can't explain. Everything that can be recorded will be recorded at the sampling rate described above; there is no need for a higher F/ratio as far as recording the signal goes. I know that some people feel they get better results, and that can be due to a number of reasons - processing workflow, or simply a "feeling that the image is better" (we often have trouble comparing two images sampled at different rates, and conditions can never be completely the same to exclude other influences). What I do know is that whenever someone over-samples, the data can be: a) shown to be over-sampled in the frequency domain, and b) down-sampled without loss of information and then up-sampled back to the original size and still look the same.

     Not only that, you can even sample at lower rates with virtually no loss of information, for several reasons. I often say that the wavelength to use in the above equation should be 500nm instead:

     - We are much more sensitive to luminance than to chrominance, and the peak of our luminance sensitivity is above 500nm. The detail we see comes mostly from that light.
     - The atmosphere bends shorter wavelengths much more than longer ones (think rainbow), and seeing affects shorter wavelengths more than longer ones. 400nm is going to be affected much more by the atmosphere than, say, 700nm (this is often exploited by lunar imagers who use Ha narrowband filters to tame the seeing further), so there is a greater chance that we will actually lose information at 400nm.
     - Refractors are optimized for 500-550nm (same reason as point 1 of this list) and will produce their best results at those wavelengths.

     Furthermore, a difference of 20% in sampling rate is not as big as people often think. We can see this if we take a very detailed image, reduce its size to say 80% using some sophisticated resampling method and then scale it back to 100%. There will be some very small degradation - but not as much as one might expect without testing it and seeing it live.

     In the end - OSC vs mono. Yes, in principle OSC captures at a lower sampling rate than the pixel size suggests because of the Bayer matrix. Red and blue are captured at half the frequency, while green is captured at ~0.7071 (1 over square root of 2) of what the pixel size suggests, but this matters for a single image or when video is debayered prior to stacking. If one uses AS!3 (and one should), an algorithm called Bayer drizzle is employed to restore the color, and it makes up for the lower sampling rate of an OSC camera (in fact, that is one of the rare occasions where the drizzle algorithm works in amateur setups). For all intents and purposes OSC and mono+filters can be seen as equal for planetary imaging (except when doing specific stuff like NB, or maybe methane or UV).
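A minimal sketch of the F/ratio rule above (Python; the function name is just illustrative):

```python
# Cut-off frequency of a circular aperture: f_c = 1 / (lambda * F).
# Nyquist asks for a sampling frequency 1 / pixel_size >= 2 * f_c,
# which rearranges to F = 2 * pixel_size / lambda.

def critical_f_ratio(pixel_size_um: float, wavelength_um: float = 0.5) -> float:
    """Slowest F/ratio at which the sensor still records everything the aperture passes."""
    return 2.0 * pixel_size_um / wavelength_um


if __name__ == "__main__":
    for pixel in (2.9, 3.75, 4.63):
        print(f"{pixel} um pixels: F/{critical_f_ratio(pixel, 0.4):.1f} at 400 nm, "
              f"F/{critical_f_ratio(pixel, 0.5):.1f} at 500 nm")
```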
  2. In that case, I feel that your approach is maybe wrong? I'd be happy to answer any genuine questions that you personally have in order to deepen your knowledge, and as you already know - from the depth of one's knowledge comes the ability to educate others. Why would you post a poll about shooting under the bright moon instead of having a discussion about the impact of light pollution on the SNR of the image, and the impact of the brightness of the moon on the light pollution? The first will give you just a bunch of opinions (and their worth is questionable, as anyone can have an opinion), while the other will possibly provide access to verifiable facts and a deeper understanding of the topic.
  3. Do keep in mind that you need some "pre/post processing" for best results with the C925 (even with a focal reducer) and such small pixels. I'd keep the resulting sampling rate at around 1.5"/px in either case. The ASI294 with the x0.7 reducer on the C925 will give you ~0.58"/px - I would bin such data x3 after stacking and before processing. The ASI533 is going to be sampling even higher at ~0.47"/px - again, at least a x3 bin. If you use them natively, then bin x4. The sketch below shows the arithmetic.
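A rough sketch of those numbers (assuming the C925's nominal 2350 mm focal length; the helper names are made up for illustration):

```python
def sampling_rate_arcsec(pixel_um: float, focal_length_mm: float) -> float:
    """Image scale in arcseconds per pixel: 206.265 * pixel size [um] / focal length [mm]."""
    return 206.265 * pixel_um / focal_length_mm

def bin_to_target(current_rate: float, target_rate: float = 1.5) -> int:
    """Integer software bin that brings the sampling rate closest to the target."""
    return max(1, round(target_rate / current_rate))

focal_length = 2350 * 0.7  # C925 with x0.7 reducer
for name, pixel in (("ASI294", 4.63), ("ASI533", 3.76)):
    rate = sampling_rate_arcsec(pixel, focal_length)
    print(f'{name}: {rate:.2f}"/px -> bin x{bin_to_target(rate)}')
```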
  4. Just recently we discussed the fact that people even use AI to write articles for them:
  5. It is not uncommon for people to start a website on some particular topic that they might not know much about. The intent is to earn money from traffic and advertising. I could, for example, start a website on snowboarding - write some texts after some online research and put up a lot of adverts for snowboarding equipment in order to earn revenue. Given that I know almost nothing about snowboarding, I would have no idea how complex the topic is when I start researching it and might mistake it for a rather simple endeavor. "Get some ski clothing, a snowboard and hit the mountains, and yes, check out our sponsor along the way ..."
  6. Looks perfectly fine. You have to note that the ASI294 has a 4.63um pixel size and the ASI533 has a 3.76um pixel size. That is x1.5 in signal level per pixel if they are both used on the same scope. The difference can be even bigger if there is an additional difference between scopes. If you use both cameras on the same scope, you need x1.5 more total imaging time with the ASI533 to match the 294 (if they have the same QE, and they probably do) - see the quick check below. What scopes are you using with these cameras?
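Quick check of that x1.5 figure (signal per pixel scales with pixel area, assuming the same scope and the same QE):

```python
asi294_pixel_um, asi533_pixel_um = 4.63, 3.76
area_ratio = (asi294_pixel_um / asi533_pixel_um) ** 2
print(f"Per-pixel signal ratio: x{area_ratio:.2f}")  # ~x1.52, hence ~x1.5 more total time with the ASI533
```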
  7. It is fairly easy to do if you have access to a 3D printer. I'm slowly starting to push the idea that a 3D printer is as essential to astro amateurs as a barlow lens or a T2 extension.
  8. If you make a worm / worm wheel arrangement, then you will have extremely precise control over focus position. One whole revolution of the stepper motor will be just "one tooth" of the focusing ring. That can give you enough motion for the V-shaped curve to work with minimal motion past infinity focus ...
  9. Fair point - if the moon is very low due south and one images due north, then it is a viable option.
  10. One way of doing that is to slightly tweak the sensor distance so that infinity focus is no longer at the infinity position of the focusing ring but somewhere closer - that way there will be some travel past infinity focus for automatic focusing to work with.
  11. Those poll results are very interesting. I made a joke at the beginning that I like to image 180 degrees away from the full moon - but if you think about it, that is "on the other side" of the Earth, and when the moon is full the sun is on the other side, so what I actually said is that I like to image best during daytime - hence the joke. In any case, 11 people so far chose either 90 or 180 degrees away from a very bright moon, and I'd argue that that is a highly inefficient use of imaging time. We can stipulate that a bright moon is past first quarter or before third quarter. In any case, to be 90 degrees or more away from it, you'll be imaging either during dusk, dawn or daytime if the moon is closer to full, or very low towards the horizon - like 20-30 degrees or below. All very inefficient use of imaging time. I wonder why people do it like that?
  12. https://skyandtelescope.org/astronomy-resources/astronomy-questions-answers/how-does-the-moons-phase-affect-the-skyglow-of-any-given-location-and-how-many-days-before-or-after-a-new-moon-is-a-dark-site-not-compromised/ According to this, a full moon overhead gives SQM 18, which is equivalent to a Bortle 8 zone. Adding the same amount of illumination to an already bright sky will just double the amount of background photons, and that won't even change the SQM by 1 (a x2 increase is ~ +0.75 magnitudes). Some calculations suggest that it takes about x35-40 more exposure compared to pristine dark skies (SQM 22). When I did some calculations, moving to skies 2 magnitudes darker (from 18.6 to 20.6) yielded something like a x6 reduction in the total exposure time needed to achieve the target SNR for regular targets (like SQM 26-28 faint galactic outer arms at SNR 5+). The sketch below reproduces those numbers.
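A small sketch of that arithmetic (assuming the background-limited case, where the exposure needed for a given SNR scales roughly with the sky background flux):

```python
def sky_flux_ratio(sqm_bright: float, sqm_dark: float) -> float:
    """Background flux ratio between two sky brightness values in mag/arcsec^2 (Pogson scale)."""
    return 10 ** (0.4 * (sqm_dark - sqm_bright))

# Full moon overhead (SQM ~18) vs pristine dark sky (SQM ~22):
print(f"x{sky_flux_ratio(18.0, 22.0):.0f} more background")       # ~x40 -> the "x35-40 more exposure" figure
# Moving two magnitudes darker, SQM 18.6 -> 20.6:
print(f"x{sky_flux_ratio(18.6, 20.6):.1f} less exposure needed")  # ~x6.3 -> the "~x6 reduction" figure
```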
  13. I usually image when the full moon is 180 degrees away from the target
  14. In that case you would use 2:1 ratio or 2*lum + nir
  15. Well, I did not explain that first bit very well, and in fact I was probably incorrect in what I said. Here is what I meant:

      Say you expose for 30 seconds of luminance and you get a 100 ADU value. Further, let's say that in the same time you'd get 50 ADU of NIR. If you exposed with a clear filter for those 30 seconds you'd get 150 ADU total.

      What I was trying to say is: handle with care the cases where you have, say, 1 minute lum subs and 2 minute NIR subs. If we go by the above analogy, in a 1 minute sub you'd get 300 ADU with a clear filter, but you will have 200 ADU of luminance in one minute and 200 ADU of NIR in two minutes (50 ADU / 30 s = 200 ADU / 120 s). Now if you simply add the two, you'd get 400 ADU, and the ratio of lum to NIR is no longer "organic" as it was before - it is not 2:1 (100:50) but 1:1, or 200:200 - because the NIR exposure is different.

      If you've used the same exposure length for both, you can simply add them (provided that you used an average stack instead of a sum stack - which we mostly do), but if you used different sub lengths, then you must scale the signal of one of the masters to match the exposure length of the other. This is done so that you keep the original signal ratio between the two parts of the spectrum. You of course don't have to do that, and you can simply add them up in 1:1 ratio, but that would throw off the "color balance" (if we were talking about RGB and their ratios - not that this has any color information in it - so keeping the ratio is the more realistic thing). Hope this clears things up a bit. There is a small sketch of the scaling below.
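A minimal sketch of that scaling (hypothetical arrays, assuming both masters are average stacks in ADU):

```python
import numpy as np

def add_masters(lum: np.ndarray, nir: np.ndarray,
                lum_exposure_s: float, nir_exposure_s: float) -> np.ndarray:
    """Scale the NIR master to the luminance sub length before adding,
    so the lum:NIR signal ratio stays as it was captured."""
    return lum + nir * (lum_exposure_s / nir_exposure_s)

# Toy numbers from the post: 200 ADU of lum per 60 s sub, 200 ADU of NIR per 120 s sub.
lum = np.full((2, 2), 200.0)
nir = np.full((2, 2), 200.0)
print(add_masters(lum, nir, 60.0, 120.0))  # 300 ADU everywhere - the original 2:1 ratio is preserved
```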
  16. Yes, this sort of looks like some pinching, but I'm not 100% certain it is. This star maybe shows the effect the best. It looks like flaring in 4 directions. It reminds me of issues with 130/150PDS line of reflectors that have issues when focuser is protruding into optical path - but these tend to be to one side only: Maybe some screws on 4 sides do the same with your setup?
  17. It really does depend on what you want to achieve.

      The simplest thing indeed is to do a weighted sum of the two masters. Say you have 5 hours of regular luminance and 3 hours of NIR - then you would combine them in 3:5 ratio. This would be the equivalent of capturing a single luminance over the extended range rather than two split ones. What logically follows is: why do you have things split at all? Why didn't you simply record the full spectrum the camera is capable of for luminance?

      The second way you could combine the two could be described as a "per pixel SNR weighted" method. For this you need to make an SNR map of each of the two masters. That is a fairly easy thing to do if you can repeat your stack but with standard deviation instead of average. I'll explain with a simple case. Say you use a regular average stack (nothing fancy like sigma rejection, as it is easier to understand, but the method works with fancy stacking as well). You average 100 subs into one image. Then you calculate the standard deviation of the pixels into another image. Then you divide that standard deviation by 10 (because 10 is the square root of 100, and we stacked 100 subs - noise is x10 less after stacking in the final result). Now you have two images - signal and noise. You can divide the two to get the SNR map (Signal / Noise - simple, right?).

      Why would we want to do this? Well, in order to get the best result from adding regular luminance and NIR luminance. Look at these cases:
      - Neither luminance contains signal in a particular region - we want to "average" the background noise to get the best result.
      - Regular luminance contains signal and NIR luminance does not - we want to use just the regular luminance; adding NIR will just add noise.
      - Both contain signal - here we add them up normally.

      You can easily see that we need to weight based on noise to optimize signal to noise, but how do we do that? We solve a maximization problem. Let the weight of regular luminance be P; the weight of NIR luminance is then 1-P. Noise adds like the square root of the sum of squares, so we take the noise map for each pixel and we look at:

      resulting_snr = (signal_regular*P + signal_nir*(1-P)) / sqrt((noise_regular*P)^2 + (noise_nir*(1-P))^2)

      We need to choose P for each pixel so as to maximize the above expression. Find the first derivative and solve for it being equal to zero, with the constraint that P is in the 0-1 range. A rough sketch of this is given below.

      In the end, I want to mention one more thing that you can do, and that is interesting on several targets - like M42. We have StarNet++ and similar star removal techniques. These can be used to get a "stars only" version of the data. In dense nebulae where stars are born there is a lot of dust, and visible light can't easily penetrate it, but IR can. It is therefore interesting to use NIR to record the stars present in those regions and add them to the regular image.
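A rough sketch of the per-pixel weighting (assuming non-negative signal maps, the maximization above has a closed-form solution: weight each master by its signal divided by its noise squared, then normalize; the helper names are made up):

```python
import numpy as np

def noise_map(std_of_subs: np.ndarray, n_subs: int) -> np.ndarray:
    """Noise of an average stack: per-pixel standard deviation of the subs divided by sqrt(N)."""
    return std_of_subs / np.sqrt(n_subs)

def snr_weighted_combine(sig_lum, noise_lum, sig_nir, noise_nir):
    """Pick P per pixel to maximize
    (sig_lum*P + sig_nir*(1-P)) / sqrt((noise_lum*P)^2 + (noise_nir*(1-P))^2)."""
    eps = 1e-12
    w_lum = np.clip(sig_lum, 0.0, None) / (noise_lum ** 2 + eps)
    w_nir = np.clip(sig_nir, 0.0, None) / (noise_nir ** 2 + eps)
    total = w_lum + w_nir
    # Where neither master shows signal, fall back to a plain 50/50 average.
    p = np.where(total > 0.0, w_lum / np.maximum(total, eps), 0.5)
    return p * sig_lum + (1.0 - p) * sig_nir
```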
  18. If you want ideas for the content of your article, just explore previous posts here on SGL. Countless threads were started on what would be a good combination of equipment for a particular AP purpose, and an equal number of answers were given over the course of years.
  19. Stars - check. Planets - check. ISS - not so much. The thing zooms around very fast and the best you can hope for is to have it fly thru your field of view in less than a second at medium power - much faster at high power (a fraction of a second). Other interesting things (galaxies, nebulae, clusters ...) - check, although those require darker skies.

      Something like this is a very serious starter scope: https://www.firstlightoptics.com/ursa-major-telescopes/ursa-major-8-f6-dobsonian.html (in fact, it is a scope for life - most people that get one of these keep it around). It has a lot of aperture so it will give you plenty of light and the ability to resolve things (x200-x300 is a piece of cake for this kind of scope).

      There are several drawbacks:
      - it is heavy and bulky (do look up an 8" newtonian on youtube to get an idea of how large the thing is compared to an average person)
      - it requires some adjustment from time to time (it is called collimation - aligning the mirrors for best image - there are tutorials online and it only requires tightening / loosening a couple of screws)
      - it won't allow for any sort of photography, except the occasional snap with a mobile phone at the eyepiece (and that probably only of the moon and maybe bright planets).

      Photography is really another level and there is no simple way of just "snapping a few pics" at the telescope. It requires time, dedication and an extensive budget. Even very basic photography (which is just taking long exposure images with a camera and lens on a star tracker) will easily eat up the whole budget that you listed.
  20. The original meaning of the word achromatic (a-chromatic = without color): achromatic refractors were indeed achromatic (in the above sense) with respect to the singlet lenses of the day (back when they were invented). If we go by that definition of the term achromatic - then yes, reflector telescopes are achromatic.
  21. Not sure what you are trying to point out here? Regardless of whether the OP is a beginner or not, the questions have been asked in "beginner" fashion (most likely because the content is geared towards beginners). Still, from the question we don't have enough information to give a sensible answer.

      In general, when planning an imaging system, you really need to answer the following questions:
      - what types of targets do I want to image
      - what sort of money do I want to spend doing so.

      Based on the answers to those questions, you proceed to:
      - identify the best mount (if any) that is suitable to fulfill that task (you can't expect to image close-up galaxies with a mount such as an EQ5 - it simply won't allow for 1.2"/px resolution)
      - identify the working resolution and needed FOV - which will point you to a particular camera / focal length combination
      - see what scope will fit your budget at that focal length, with the needed correction for the sensor size.

      If you want a scope that will satisfy multiple roles, that is a much harder thing to find. Most imagers end up with at least two scopes for different uses: a small APO refractor for wide field and a larger reflector for close-up imaging.
  22. Hi and welcome to SGL. Yes, that is somewhat wrong terminology, but I sort of understand what you are asking, so I'll try to answer as best I can.

      There are two major things that a telescope does for us - one is to gather light and the other is to provide magnification. If you want to know whether a book can be read from a certain distance, then you are interested in the magnification part of telescope operation. The simplest way to explain this: a telescope can usefully magnify up to a certain magnification that depends on telescope size. The actual useful magnification will depend on you as the observer and how sharp your vision is, but the general rule is to multiply the aperture size in millimeters by x1-x2. For example, a 4" or 100mm telescope can usefully magnify up to say x200. This does not mean that you can't magnify up to say x400 with such a telescope - it just means that the image will get larger but blurry, without additional detail.

      Back to the book example: with simple trigonometry we can see that a book at 200 meters, when magnified x200, will look like a book 1 meter away (see the small check below). If you can read it at 1 meter, then yes, you'll be able to read it at 200 meters with a telescope that provides x200 magnification. Things are a little different when doing astronomy, because there is atmosphere in the way (it's a bit like looking over a fire or over a hot asphalt road in summer - everything shimmers and blurs), so the maximum useful power will depend on how stable the atmosphere is - but also on what type of object you are looking at.

      This brings us to the last part of the answer - are all telescopes equal? To answer that, you must understand the "too much magnification" part and how it affects things. That blur I mentioned at high magnification is actually loss of contrast (at certain spatial frequencies - a bit technical, really). It will affect different things differently. Text in a book is high contrast detail - it is black on white - it does not get more contrasty than that. Planetary detail is not like that. Lunar detail is not like that. It is often a subtle difference in contrast and color, and any blurring will hide such detail.

      In principle, all telescopes that are diffraction limited will get to something like 95% in sharpness at high power, but the last few percent will be determined by the quality of the optics. Better optics will simply give slightly sharper views at those high powers, and can reveal those subtle features. It is really hard to explain the difference until you start looking thru a telescope, or even better, compare two telescopes side by side. This difference is either very big or hardly noticeable at all - it depends on how you look at it. If someone without much experience takes a look, the image will seem pretty much the same in the two scopes, but an experienced observer will see a small crater on the moon in one scope and fail with the other (an inexperienced observer simply won't see it in either scope) - and that is a major difference: the crater is either seen or not. It is the same difference in scopes - just described differently by different people, so personal experience is the key there.
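A tiny check of the book example (small-angle approximation; the numbers are just illustrative):

```python
import math

def apparent_angle_deg(size_m: float, distance_m: float, magnification: float = 1.0) -> float:
    """Angle subtended by an object, multiplied by the telescope's magnification."""
    return math.degrees(magnification * size_m / distance_m)

# A 5 mm tall letter at 200 m through x200 subtends the same angle as at 1 m with the naked eye.
print(f"{apparent_angle_deg(0.005, 200.0, 200.0):.3f} deg through the scope")
print(f"{apparent_angle_deg(0.005, 1.0):.3f} deg at 1 m by eye")
```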
  23. Well, there is no simple answer to that question. It will very much depend on the camera selected to be paired with the telescope. Camera and telescope should be viewed as a system, and the same telescope will perform differently depending on which camera it is paired with. The closest thing to an all-around scope on a budget for a beginning imager would be an F/5 newtonian in 5" or 6" size.
  24. This point is very ambiguous. To someone just starting out in AP, an average budget can mean something between £500 and £1000. For someone longer in the game, that can mean £2000-£3000. Not the telescope itself - the number one concern is the mount. You start by getting the biggest/best mount you can for your budget.