Everything posted by vlaiv

  1. The Sun has about the same apparent size in the sky as the Moon. It is also extremely bright. Solar Ha observing in fact uses extreme filtering - much more than, say, an Ha nebula filter. A good solar Ha filter passes 0.5-0.7 Å, where Å is an angstrom, or 0.1 nm (a tenth of a nanometre). Ha filters for observing nebulae are usually around 7 nm wide, so one filter passes 7 nm while the other has a ~0.07 nm bandwidth - about a x100 difference. It has to be that way because the Sun is so bright: you can't normally observe it with a telescope unless you have some serious filtering to reduce the amount of light - otherwise it can damage your eyes and cause skin burns. For that reason - never point a scope at the Sun unless you have a proper filter; even a simple finder scope can cause serious burns.

We observe the Sun and Moon at very high magnification - x100 and above - because the detail we are interested in is often small. Full solar disk viewing is also interesting, but usually people want to see tiny detail and use high magnification.

Nebulae are completely different. They are often much larger than the Sun or Moon (although some, like planetary nebulae, are even smaller) and they are very, very faint. Using too much magnification can be counterproductive - they often don't have tiny detail to be seen, and raising magnification makes the view dimmer, which is a problem for already dim objects. We often use peripheral vision to observe nebulae since they are so dim, and we can't see detail with peripheral vision.

What is claimed for these Cemax eyepieces is that they have special coatings that make the Ha image very sharp / high contrast at the high magnifications used when viewing the Sun.
Not going to go into whether that is true and how much difference there is between such an eyepiece and a regular plossl - but the point is: whatever they claim happens with such an eyepiece will not affect views of nebulae, because of the way we observe them. They are faint, so we can't really see much difference in contrast. (There is something called JND - just noticeable difference - and for humans the JND for a visual stimulus is about 7%-10%, so we can see a difference of about 7% in signal strength. That is a lot of photons for a bright target like the Sun, but only a few photons for dim targets like nebulae - and we can't see individual photons; there need to be at least 7-8 photons for detection.) We also use low power to observe them, and any difference in sharpness of the eyepiece is not critical at low power since we can't resolve it at low power.
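The bandwidth comparison above can be put into numbers - a quick sketch using only the figures from the post (0.7 Å solar etalon vs a 7 nm nebula filter):

```python
# Compare a solar Ha etalon to a typical Ha nebula filter (figures from the post).
solar_ha_bandwidth_nm = 0.07   # 0.7 angstrom expressed in nm
nebula_ha_bandwidth_nm = 7.0   # typical nebula Ha filter

ratio = nebula_ha_bandwidth_nm / solar_ha_bandwidth_nm
print(f"The nebula filter passes a ~x{ratio:.0f} wider band than the solar filter")
```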
  2. I think so too, but the description says that selected coatings are optimized for Ha usage - and that can be true, as there are different types of coatings: some optimized for UV, some for IR and some for visual. For example, look here: https://www.evaporatedcoatings.com/optical-coatings/ar-coatings/ You'll find several different reflectance graphs depending on the type of coating applied.
  3. It won't do much good for Ha nebula viewing. Those eyepieces are meant for solar Ha observing, where there is plenty of signal and image brightness is much higher than with nebula viewing. Magnifications are also much higher - as when observing the moon and planets - so it is sharpness and contrast that matter for clearly seeing features on the surface of the sun (and only with a special filter - never turn a telescope to the sun unless you have proper solar filters!).
  4. You are comparing a mono camera with 7.4µm pixels shooting luminance versus a color camera with 4.63µm pixels. Well, there is a difference between a CCD and a CMOS image - or shall we put it like this - there is the possibility of a difference (whether there will be any depends on how it is used). With CCDs, individual subs had much more signal: larger pixels and longer exposures (needed to overcome read noise) simply mean more signal per sub.

There is absolutely no difference between CMOS and CCD stacks made by summing subs (for the same total duration and other parameters) - but there is a difference for an average stack. Here is an example:

2 + 2 + 2 + 2 + 2 versus 5 + 5 (where 2 and 5 are the signal in CMOS vs CCD subs respectively)

That is just 10 versus 10, but if you average them you get 2 versus 5. What does this mean? The same information is just "crammed" into the left part of the histogram - at lower ADU values - with CMOS cameras. With CCDs it was OK to use 16-bit image mode when processing, but with CMOS sensors it is no longer OK to do that if one uses short exposures, since pixel sizes are small.

Another thing that happens is that you need to stretch more. If you are careful, the SNR achieved will be the same (or even higher with CMOS, thanks to higher QE and lower read noise), so stretching harder will not create issues - but you need software capable of handling a hard stretch without introducing artifacts.

Why don't you try, for example, Gimp to process your image - the latest version (2.10 or newer, as that is the version that gained full 32-bit per channel support). Save your image in a 32-bit floating point per channel format and use Gimp to process it. (Gimp is open source / free, so it will cost you nothing to try - except some time to download and install it.)
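The sum-versus-average point above can be shown in a few lines - a sketch using the post's own numbers (five short CMOS subs against two longer CCD subs carrying the same total signal):

```python
import numpy as np

# Per-sub signal values from the post: CMOS subs carry 2 units each,
# CCD subs carry 5 units each, same total integration.
cmos_subs = np.array([2.0, 2.0, 2.0, 2.0, 2.0])
ccd_subs = np.array([5.0, 5.0])

# Sum stacks are identical - same information either way.
print(cmos_subs.sum(), ccd_subs.sum())    # 10.0 10.0

# Average stacks differ - the CMOS result sits lower in the histogram,
# which is why a harder stretch (and a >16-bit workflow) is needed.
print(cmos_subs.mean(), ccd_subs.mean())  # 2.0 5.0
```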
  5. CCDs also had gain - but you were not able to change it. If you want something like that with CMOS sensors - just pick a gain and stick with it.
  6. If you want to compare results between CMOS and CCD cameras - then you have to say (or show) what sort of results you were getting with CCDs, in what sort of time and with what gear. If you want to see whether you managed to get the best out of your data - or learn how to, if that is not the case - it is probably best to attach the raw / linear data and see what others are capable of producing. If you worry that you did not stack properly, giving access to the original data and letting others stack it for you so you can compare is the best course of action.
  7. Completely forgot about that (I just saw that I participated briefly in that thread). Will need to revisit it.
  8. Don't think you'll see much difference between the two.
  9. Ok, so here is a quick breakdown of the terms, what they represent, and how to calibrate in different circumstances.

We have dark signal and dark noise, and bias signal and bias noise (which is usually referred to as read noise). Bias signal and read noise are not related in any obvious way. The signal is the bit you want to remove by calibration, while the noise bit is reduced by stacking (the signal to noise ratio is improved). In order for stacking to work, the noise needs to be truly random.

Bias signal is just an "offset" added to pixel values - not the same for every pixel, but in general pretty uniform as far as value goes. In modern CMOS cameras you can set this overall level using the offset parameter.

Dark signal and dark signal noise are completely related. Dark signal is a buildup of electrons due to thermal fluctuations in the electronics. Dark signal noise is just randomness in this buildup - similar to the shot noise associated with light signal. There is a strong relationship: the dark signal noise has a magnitude that is exactly the square root of the dark signal (expressed in electrons - in ADUs this does not hold).

When you shoot a bias exposure - it contains only the bias signal (and read noise, but we don't care about the noise bit here).
When you shoot a dark exposure - it contains both bias signal and dark current signal.
When you shoot your regular light exposure - it contains bias, dark and light signal.

The point of calibration is to remove all signals and leave only the light signal (the light gathered by the telescope - you don't care about the thermal properties of the camera or the offset added, and you don't want them as they mess things up). You can subtract darks from your lights, and that removes both dark current signal and bias signal, as darks contain both. You can use both bias and darks when calibrating your lights - but there is really no point in doing so, as darks remove both (using bias in addition to darks won't mess things up, as the algorithm produces correct results).

There is one special case where you can and need to use bias, and that is dark scaling - which in general you should not do unless you know what you are doing (both knowing what you are doing and being sure your camera is capable of it). That is when you use a different exposure time for darks and lights and want to compensate by scaling the darks.

You can use bias only for calibrating lights - but that is not the proper way to do things. It will work in two distinct cases:

- a DSLR that internally subtracts dark current for you, so all that is left is bias. This is actually a good thing - newer DSLR cameras have some clever ways to measure dark current while exposing, so the "dark" is taken at the same temperature as the light.
- your camera has exceptionally low dark current at the temperature used and your exposure is short enough that the dark current is virtually 0 for the duration of the exposure. This is something you really need to check for your setup, as two identical cameras can behave differently depending on scope and light pollution levels.

For example, the ASI533 has 0.00013 e/s/px - very low dark current. But if you expose for, say, 5 minutes, the total dark current will be 0.039e. That might seem very low - and in principle it is. It is only a very small percentage of the background signal in most cases, but what if you use high resolution, shoot under very dark skies, and your background signal per exposure is something like 0.1e? Then the dark current is no longer negligibly small compared to the background signal and you will still see over-correction in the corners (if you have strong vignetting).
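The signal bookkeeping above can be sketched numerically. The figures here (bias 500e, dark current 10e, target 100e) are made up for illustration:

```python
# Per-pixel signal bookkeeping for calibration (illustrative numbers, in electrons).
bias = 500.0             # offset signal added by the camera
dark_current = 10.0      # thermal signal accumulated over the light exposure
target_signal = 100.0    # the light signal we actually want to keep

light = bias + dark_current + target_signal   # what a light frame records
dark = bias + dark_current                    # same exposure length, shutter closed
bias_frame = bias                             # (near) zero-length exposure

# Subtracting the dark removes bias and dark current in one go:
calibrated = light - dark
print(calibrated)   # 100.0 - only the light signal remains

# Dark scaling (only if your camera supports it): bias must come off first,
# because only the dark-current part scales with exposure time.
long_dark = bias + dark_current * 2           # dark taken at twice the exposure
scaled = light - (bias_frame + (long_dark - bias_frame) * 0.5)
print(scaled)       # 100.0 again
```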
  10. I personally would not be happy with stars like that in my image and would seek a way to improve them further. To my eyes there are three corners showing issues - which again points to some sort of tilt. Now, these are minor imperfections, and I myself have had images with astigmatic stars in the corners, so it will be up to you whether you want to pursue it. For some reason, I feel that stars need to be as good as possible over the whole FOV.
  11. Well, if you are happy - I'm happy as well. It is indeed a big improvement on the starting one.
  12. It has to do with leveling - your mount should be level and the altitude scale should be precise enough. Actual elevation above sea level has zero impact. Atmospheric refraction can also have an impact on this, but the effect is rather small: up to half a degree at the horizon, falling to just a few arc minutes at 40° altitude.
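Those refraction figures match Bennett's empirical formula, a standard approximation (not from the post itself) for refraction as a function of apparent altitude:

```python
import math

def refraction_arcmin(apparent_alt_deg):
    """Atmospheric refraction via Bennett's empirical formula, in arcminutes."""
    return 1.02 / math.tan(
        math.radians(apparent_alt_deg + 10.3 / (apparent_alt_deg + 5.11))
    )

print(f"{refraction_arcmin(0):.1f}' at the horizon")  # roughly half a degree
print(f"{refraction_arcmin(40):.1f}' at 40 degrees")  # just over an arcminute
```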
  13. Why do you say that? That is precisely why you have bright corners. Dark frames don't remove noise. No calibration frames remove noise - they all remove / correct some sort of signal. Dark frames remove dark current signal.

Imagine the following scenario. You have 80% illumination in the corners due to vignetting and 100% in the center of the frame. Your light sub gathers 100 electrons over the whole field. Your dark current is 10e.

In the center you will therefore have 100 electrons (no vignetting) plus 10e of dark current - 110e in total.
In the corner you will have 80 electrons of light (80% illumination) plus 10e of dark current - 90e in total.

You divide that by the flat frame, which is 1 for the center and 0.8 for the corners:

110 / 1 = 110
90 / 0.8 = 112.5

What just happened? How come the corner is brighter than the center? All we did was apply a correct flat frame? Look what happens when you remove the dark first:

(110 - 10) / 1 = 100
(90 - 10) / 0.8 = 100

No more bright corners! After removing the dark signal we have proper flattening of the illumination.
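The arithmetic above, run end to end (same numbers as in the post):

```python
# Flat correction with and without dark subtraction (numbers from the post).
light_center, light_corner = 110.0, 90.0  # e: 100e/80e of light + 10e dark current
dark = 10.0
flat_center, flat_corner = 1.0, 0.8       # normalised flat, 80% corner illumination

# Without dark subtraction the corners over-correct:
print(light_center / flat_center, light_corner / flat_corner)  # 110.0 vs ~112.5

# With dark subtraction the field flattens properly:
print((light_center - dark) / flat_center,
      (light_corner - dark) / flat_corner)                     # ~100.0 vs ~100.0
```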
  14. Well, that half-resolution recommendation is just a rule of thumb. A bit more precise formulation would be:

- star FWHM in the image depends on several factors: seeing, aperture size / telescope spot diagram, and guiding performance.
- sampling rate should be FWHM / 1.6

Here is an example for an 80mm diffraction limited telescope with 2" and 3" seeing and 1" and 1.5" guiding RMS:

2" seeing, 1" RMS = 3.39" star FWHM, or 2.12"/px
2" seeing, 1.5" RMS = 4.295" star FWHM, or 2.68"/px
3" seeing, 1" RMS = 4.06" star FWHM, or 2.54"/px
3" seeing, 1.5" RMS = 4.84" star FWHM, or 3"/px

It is better to image in 3" seeing with 1" RMS guiding than in 2" seeing with 1.5" RMS. Why is that? Because of the units. RMS and FWHM are not the same, and the relationship is FWHM = 2.355 * RMS (for a gaussian curve). For this reason a 1" difference in seeing FWHM amounts to about the same as a 0.5" change in guide RMS (even a bit less in this case).
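The figures above are consistent with adding the three blur sources in quadrature. This sketch assumes an Airy-disc FWHM of ~1.02 λ/D at 550 nm for the scope term (my assumption, not stated in the post) and reproduces the table to within a few hundredths of an arcsecond:

```python
import math

def expected_fwhm(seeing_fwhm, guide_rms, aperture_mm, wavelength_nm=550.0):
    """Combine seeing, guiding and diffraction blur in quadrature (arcsec)."""
    guide_fwhm = 2.355 * guide_rms  # gaussian RMS -> FWHM
    # Diffraction-limited star FWHM ~ 1.02 * lambda / D, converted to arcsec
    scope_fwhm = math.degrees(
        1.02 * wavelength_nm * 1e-9 / (aperture_mm * 1e-3)) * 3600
    return math.sqrt(seeing_fwhm**2 + guide_fwhm**2 + scope_fwhm**2)

for seeing, rms in [(2, 1), (2, 1.5), (3, 1), (3, 1.5)]:
    fwhm = expected_fwhm(seeing, rms, 80)
    print(f'{seeing}" seeing, {rms}" RMS -> {fwhm:.2f}" FWHM, {fwhm / 1.6:.2f}"/px')
```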
  15. That sort of paints a bleak picture, doesn't it? In reality, a P2P error of say 35" is spread over 638 seconds, so it takes about 300s to cover those 35" (a full cycle is a Hobbit's tale - there and back again ), or about 1" per 10 seconds (give or take). Somewhere along the curve things will be slower, and somewhere faster - so at least 50% of 30s subs will meet that "RMS = half of imaging resolution" criterion even unguided. This was taken on an HEQ5 with a planetary type camera - unguided. I don't remember the exposure length, but it was something like 30s-1m, not longer than that. It is 4.6"/px (taken with an F/7.6 scope and a tiny sensor in heavy LP - not the best combination). Above is 100% zoom - just to show that stars are nice and round. I don't think I've discarded any subs.
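The drift rate estimate above can be checked assuming a roughly sinusoidal periodic error (a simplification - real PE curves have harmonics):

```python
import math

# Sinusoidal periodic error model: 35" peak-to-peak over a 638s worm period.
p2p = 35.0       # arcsec
period = 638.0   # seconds

amplitude = p2p / 2
peak_rate = amplitude * 2 * math.pi / period  # arcsec/s at the steepest point
avg_rate = p2p / (period / 2)                 # mean rate over a half cycle

# avg_rate comes out near 0.11"/s, i.e. roughly 1" per 10 seconds as stated.
print(f'peak drift: {peak_rate:.3f}"/s, average: {avg_rate:.3f}"/s')
```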
  16. I agree, and I think it is a pity there is no single "obvious/sensible" answer to this question - a setup that is relatively affordable and "can do it all" (to an extent, obviously). The above is based on what I perceive to be the common beginner question when looking for starter gear in this hobby. So far I've gathered that most people do want to be able to do some visual, and want to do some "mixed meat" astrophotography - basically to try a bit of everything: some lunar/planetary, some DSO - a bit of galaxies, a bit of wide field and so on - and their budget is £300-500 most of the time. Obviously, there is no solution to that equation, but what answer comes closest? Something needs to be sacrificed, but what? And if there are several options - how best to present pros and cons for people to choose for themselves?

This is somewhat off topic, but I guess it is related. I think that the start into "serious" AP (one that can handle most aspects of AP and do them well) is still the set of requirements I listed (10Kg+, 1" RMS, ...). Here is another question. Which one would you recommend to a novice in the £1k category:

70mm ED scope + flattener + AZ GTi mount with mods + DSLR

versus

manual EQ5 + single RA motor + 130PDS + SW 0.9 CC + DSLR
  17. This is a very good combination for wide field imaging. Let's see what sort of money it will all cost, and - the main question - how likely is a novice to upgrade from that gear at some point after purchase?
  18. Testing out flats:
- take two sets of flats with different settings
- calibrate one against the other and observe the result - it should be perfectly flat, uniform noise (no variation in brightness). Don't forget flat darks for each master flat.

Testing out bias:
- examine the average ADU across the sequence - it should be fairly uniform (very small differences in the second or third decimal place)
- take a set of bias frames, power down the camera and computer, then take another set. Stack each with a simple average and subtract the two stacks - you should get perfectly uniform noise with a mean ADU value of 0.

Darks - same as with flats. With darks there is one more test: the mean ADU value should rise linearly with exposure time for a given set-point temperature. You can check whether you have a proper bias by taking a set of bias frames, a set of darks at one exposure and a set of darks at double that exposure. Then stack each and produce:

(long darks - bias) / 2 - (short darks - bias)

This frame should again be uniform noise with a mean ADU value of 0.
  19. Ok, I'll get the ball rolling by listing all the EQ type mounts I could find in the HEQ5 price class:

iOptron CEM26 (a bit more expensive, but let's call it the same price class if it's up to £100 above the HEQ5)
Celestron AVX
Bresser Messier EXOS 2
Explore Scientific EXOS-2 PMC-Eight (these two appear to be different mounts although they share the EXOS 2 moniker and come from "sister" companies - one uses stepper motors, the other servos)
Explore Scientific iEXOS-100 PMC-Eight
EQ5
EQ35
EQ3-2

Out of these, the first two are in the same price class as the HEQ5, so it is worth comparing them directly for performance; however, neither brings significant savings, and both will draw the same "Whoa, that's expensive!" response from a beginner. Both EXOS 2 mounts are interesting as they claim ~13Kg photographic payload - yet they both look like EQ5 class mounts. The EQ5/35/3 simply lack weight capacity. Why is that, when the EQ5 can supposedly handle a 7Kg payload for photographic purposes? Because any scope capable of 2"/px will weigh at least 4-5Kg; add a flattener/CC, DSLR body, guide scope + guide camera and you are already pushing those 7Kg.

I have to correct myself - the EXOS 2 can't take a 13Kg photographic payload (18Kg visual); that seems to be an error on @FLO's website. On the TS website both EXOS 2 mounts (the Bresser and the ES one) are listed at 13Kg max payload, and they come with either one or two 4.5Kg counterweights (not something I'd expect from an 18Kg class mount).

So there you go. Is the HEQ5 a realistic recommendation for a beginner or not? I'm also open to other star tracker setup examples that should be recommended to beginners instead - with an explanation of use case scenarios and expected performance.
  20. Can you explain this a bit, or even better - give a recommendation for a beginner setup that "keeps one's focal length short and has a good camera" versus the "dreaded HEQ5 option"? --------------- Interestingly enough, no one has come up with a recommendation for an HEQ5 alternative with the specs that I gave (10Kg payload, 2"/px imaging, 30" P2P PE, 1" RMS guiding)?
  21. That is a very interesting find. I had no idea a camera could behave like that. It is good to know that such a thing is possible.
  22. I think that will be fine, but the best thing to do is to try it.
  23. It's the latest member of the Ender 3 family - an upgrade over the V2, but with a direct drive extruder and bed level probe pre-installed. A few additional upgrades as well - but all the other important bits are the same as the V2. I still don't have it, but I have a long list of things I'll be using it for - astro related.
  24. I guess it is some sort of OCD - binary type. I'm a computer programmer, and to me it makes sense to take a power-of-2 number of subs because it can be easily divided with integer math (binary shifting). The real reason would be reduction of noise. The ASI1600 is a 12-bit camera and I use unity gain. That makes a single flat kind of noisy - with only ~4000 ADU levels, at say 75% of histogram I'm at ~3000e peak value. Someone using a 14 or 16 bit ADC (like in CCDs with 30000-40000 FWC) will have a single exposure with x10 the signal, so at the start my flat exposure has x3-x4 worse SNR than someone using a CCD. To get to the same level I would need x9-x16 more flats in my stack. If a person with a CCD uses 20-30 flat subs, I'd need what, 200+ to compensate - see, it sort of checks out.

This is of course an exaggeration - even a single exposure with 3000e worth of signal will have an SNR of 50+, and since we are dividing by the flats, the resulting noise polluting the image is very small - but like I said, it's an OCD thing. Flats are very short, I can do it, and the count divides nicely by shifting (power of two) - so why not.
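The shot-noise arithmetic above, spelled out (3000e and 30000e per-flat signal levels are the post's figures; SNR of a shot-noise-limited exposure is the square root of the signal in electrons):

```python
import math

# Single 12-bit flat at ~3000 e of signal, shot-noise limited:
single_flat_snr = math.sqrt(3000.0)   # a bit under 55, i.e. "SNR of 50+"

# A deep-well CCD flat at ~30000 e has x10 the signal...
ccd_snr = math.sqrt(30000.0)          # about x3.2 better per sub

# ...so matching its stacked SNR takes ~x10 as many flats,
# since stacked SNR grows with the square root of the sub count.
flats_needed_ratio = (ccd_snr / single_flat_snr) ** 2
print(flats_needed_ratio)
```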
  25. I also wanted to get that one, but now I'm looking at the new Ender 3 S1.