Everything posted by vlaiv

  1. In order to use that analysis you quoted - you need a calibrated sub and to measure its background. The 829 value is the ADU value from an uncalibrated sub divided by 4 (the value actually measured on the file is ~3316), and it still contains the offset. If you want to use the method you quoted me on - you need to do the following: take one of the calibrated subs (calibrated = (sub - dark) / flat, where flat is normalized to 1); if you are still working with padded numbers - divide by 2^(16 - camera_bits) and multiply by e/ADU, then measure. That way you will get pure background signal in electrons - and the square root of that will be the LP noise.
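     A minimal sketch of that measurement in Python - the file names, patch coordinates and camera constants here are placeholders, substitute your own:

     ```python
     import numpy as np
     from astropy.io import fits

     CAMERA_BITS = 14   # 12, 14 or 16, depending on camera
     E_PER_ADU = 1.0    # e/ADU at your gain setting (from manufacturer graph)

     light = fits.getdata("light.fits").astype(np.float64)
     dark = fits.getdata("master_dark.fits").astype(np.float64)
     flat = fits.getdata("master_flat.fits").astype(np.float64)

     # calibrated = (sub - dark) / flat, flat normalized to 1
     calibrated = (light - dark) / (flat / flat.mean())

     # undo 16-bit padding (x4 for 14-bit, x16 for 12-bit), convert to electrons
     background_e = calibrated / 2 ** (16 - CAMERA_BITS) * E_PER_ADU

     # median of a star-free background patch (coordinates are an example)
     patch = background_e[100:200, 100:200]
     signal_e = np.median(patch)
     lp_noise_e = np.sqrt(signal_e)  # Poisson: noise = sqrt(signal)
     print(f"background: {signal_e:.1f} e, LP noise: {lp_noise_e:.2f} e")
     ```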
  2. I switched to 16-bit and it improved my guiding (or at least that is how I perceived it). It allows much higher SNR to be obtained in PHD2. With 8-bit guiding I often had SNR in the range of 20-50 - with 16-bit I can get >1000.
  3. I've never done it myself (nor any Ha solar imaging for that matter), but here is how I would do it:
     - Shoot the same as above - a video of the solar disk or some features, every couple of minutes (it will take up a lot of space).
     - Process each image individually like you normally do - stacking, wavelets / deconvolution.
     - Register all subs against each other, or rather against one reference frame. Use software that is capable of registering against features - ImageJ with plugins should be able to do it. If you are doing full disk imaging, align disk against disk rather than features, as you want the rotation to show in the animation.
     - Normalize all subs against the reference - meaning make them equally bright.
     - Export as an animated sequence - either as a sequence of frames, or maybe do feature interpolation (morphing) between subsequent subs (morphing makes the animation much smoother but can look artificial if there is a large difference between subs).
  4. I expected to see some processed animation in that video. Since I downloaded it - why not give it a "spin" in AS!3 and Registax 6? I downsampled it in the end to 50% of the size, as it seems a bit oversampled as is.
  5. That is vignetting. Telescopes can illuminate only a limited part of the focal plane. At some distance from the center, the amount of light reaching the focal plane from the objective simply starts to fall and ends up at 0 (due to the construction of the telescope and internal baffling). This is normal. You are using a very large sensor - a full frame sensor with a ~43mm diagonal. Most telescopes can't fully illuminate such a large sensor. You need to do two things here: 1. use flats, 2. crop to a sensible size. Even with flats that correct vignetting - you don't really want to use the far corners of the frame. In fact, you probably want to limit your image to the area that receives 80% or more light. Anything lower than that will have lower SNR, and it will start showing as increased noise after flat fielding.
  6. What is your setup like, what camera are you using? Is it the 485mc? Maybe you could try EP projection instead of using a reducer? If I'm not mistaken, an eyepiece like an 18mm ortho should give you a full disk view and would make the solar disk about 25 degrees in angular size. The 485mc with a 12mm C/CS lens should be able to fit the whole disk onto the sensor.
  7. Contrast or brightness? Brightness variations can be solved with flats, but contrast issues can't. I don't have experience with Ha solar etalons, but logic tells me that issues with an etalon should present themselves as contrast issues - the etalon being off band / passing more continuum and so on. We have to be careful about what we are seeing. Sometimes issues with sharpness are thought of as issues with contrast. In some sense they are - blur is loss of contrast at some frequencies. However - I don't think that issues with an etalon will present themselves as blur.

     I see two things (actually three - I'll talk about the third at the end): variation in brightness, and blurring. The variation in brightness can easily be seen if one really reduces the size of the image: there are two dark patches - one to the left and one at the bottom. In general, there seems to be a lack of uniformity in the illumination. I think this effect can be solved with flats. There is also an issue with sharpness in the image. If we compare those two patches, it does not take much to conclude that the bottom one seems too blurred out. I have no idea where this effect is coming from, and I'm not sure it is down to the etalon.

     How did you take the image? I ask because there is a third thing: these look like compression artifacts - like jpeg / mpeg compression after sharpening / wavelets. This makes me wonder if the above blur has something to do with the way the image was taken / created.

     As far as issues with the etalon go - this is how I think they would present themselves: instead of the normal view, you would see a slightly brighter view (more continuum) with "drowned" Ha features that are hard to make out.
  8. Maybe the astigmatism is coming from the mirror? If there is load on the mirror that is making it "squeezed" along one axis - that can produce astigmatism.
  9. Ah ok, so I'll explain that formula you wrote above in detail so you can understand it.

     DN = (read_noise^2 * swamp / gain + offset) * (2^16 / 2^camera_bits)

     This is a very unfortunate type of equation, as the meaning of the terms is not well defined, but I will explain what everything means and produce the same - or rather a similar - equation from the procedure I presented above.

     First - the read noise bit is fine, and you can take it from the graph on the ZWO website. It is about 1.5e for your camera. Swamp here is "background" swamp - not noise "swamp" - as it acts on the squared value of the noise, not on the noise directly (remember the square relationship between noise and associated signal). If you put swamp = 10 in the above equation, it is the same as putting 3.16 in the procedure I showed you above. Gain is e/ADU for your gain setting. Offset is very unfortunate naming: it is not the offset that you set in your drivers, and it is not the offset that we measured in electrons from the bias - it is the ADU value from the bias already divided by 4. This is the problematic part of the equation. As written, it is very ambiguous what type of value you should use, and it is probably the worst type of value to use, as it is "in between steps".

     The last bit is the calculation of that multiplicative constant for ADU values: 12-bit camera = 16, 14-bit camera = 4, 16-bit camera = 1. It is bit manipulation and there are several ways to write it down - all equal. You can write it as 2^(16 - camera_bits), for example. That is the notation I would use:

     2^(16-16) = 2^0 = 1 (for a 16-bit camera)
     2^(16-14) = 2^2 = 4 (for a 14-bit camera)
     2^(16-12) = 2^4 = 16 (for a 12-bit camera)

     In fact, 2^16 / 2^camera_bits = 2^16 * 2^-camera_bits = 2^(16 - camera_bits) - it is the same expression rearranged a bit.

     In any case, I would write the above equation like this:

     ADU = ((read_noise * swamp)^2 / gain) * 2^(16 - camera_bits) + offset

     It is the same type of equation, but swamp and offset have slightly different meanings. Swamp here is 3.16 or 5 (like I explained above - it is "inside the square root"). Gain is of course the e/ADU value for your camera gain, and offset is simply the measured mean ADU value of a bias sub (without any division or anything - just the straight mean of pixel values as they are produced by the capture app).

     Let's run that equation on the above data to see what we get:

     ADU = ((1.5 * 5)^2 / 1 e/ADU) * 4 + 2800 = 56.25 * 4 + 2800 = 225 + 2800 = 3025

     You need to measure 3025 on your sub - the median value of a patch of background.

     I don't like this approach, as it does not let you quickly calculate how much longer your sub needs to be to hit the target. If you measure say 2000 and you calculated 3000 - it simply does not mean that you should extend your sub by 3000/2000 with this approach, because most of both values is offset, and offset does not scale with time - it needs to be removed. What you want to do is calculate the expected background value in electrons, calculate the measured background value in electrons (after subtracting the offset), and the ratio of the two will be the ratio of your exposure times. Makes sense?
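     The same calculation as a small Python sketch - the constants are the example values from this post, swap in your own camera's numbers:

     ```python
     READ_NOISE_E = 1.5    # e, from the ZWO graph
     SWAMP = 5             # noise swamp factor (3.16 .. 5 is sensible)
     GAIN_E_PER_ADU = 1.0  # e/ADU at your gain setting
     CAMERA_BITS = 14      # 12, 14 or 16
     BIAS_MEAN_ADU = 2800  # straight mean of a bias sub, as the capture app reports it

     pad = 2 ** (16 - CAMERA_BITS)  # 4 for a 14-bit camera
     target_adu = (READ_NOISE_E * SWAMP) ** 2 / GAIN_E_PER_ADU * pad + BIAS_MEAN_ADU
     print(target_adu)  # 3025.0 - median background value to aim for on the sub
     ```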
  10. I'm not sure if it will help. The fact that you have little crosses means that there are two types of astigmatism working at the same time - sagittal and tangential. Tangential astigmatism elongates stars towards the center of the field, while sagittal elongates them perpendicular to that. The fact that you have them combined means that some optical element is producing both effects at the same time, and I'm not sure if changing distance could solve this (but you could try). Here is the diagram from the wiki page on astigmatism: that is the simple type, where the focus point changes between the two kinds - but notice that between the two there is a point where both are present. Having a cross means that you are at the point of best focus - the field is "flat" - but there is residual astigmatism of both types.
  11. Maybe look into StarNet and processing of the starless image? You can then blend the stars back in afterwards. However, I would advise you to deal with the problem on the "hardware" level. Part of the issue with the stars is star bloat produced by fast ED doublet optics. I took two random stars with the surrounding star field and split the channels. A different level of bloat can be seen in the different channels. I think that focus is a bit off as well because of this - trying to make the stars smallest in all three channels at once. It worked for red and green, which seem to be equal (with possibly red being focused a bit better than green) - blue is of course the worst. Taming the far edges of the spectrum might help here. It would allow for better focusing (focusing on green) and would remove some of the star bloat. The answer to this puzzle is the Astronomik L3 luminance filter.
  12. Yes, but why would you want to go with such a large factor? If you've seen such a large factor being used - it's probably not the factor for read noise, but rather for background level. Signal and associated noise are related by the following equation (for Poisson type noise): signal = noise^2, or noise = sqrt(signal). If you apply a factor of 10 to the signal - that is the same as applying a factor of sqrt(10) to the noise - which is ~x3.16. Here is a quick calculation of how much "total" difference read noise makes for different "swamp" factors (see the check below this post):
      - noise swamped by x3 - total noise increases by ~5.4%
      - noise swamped by x5 - total noise increases by ~1.98%
      - noise swamped by x10 - total noise increases by ~0.5%
      We can't visually notice an increase in noise of less than 5-10%, so it is probably enough to swamp read noise by x3, but I prefer to swamp it by x5. Swamping it by x10 is really overkill, as you certainly can't perceive an increase of less than half of 1% in noise. What you are probably referring to is the "swamping background signal" kind of thing, where x10 refers to signal, not noise - and is equivalent to swamping noise by sqrt(10) = ~x3.16, a bit more than x3, which is on the edge of what we are able to detect. Whichever number you choose in the x3-x5 range - you'll be fine. Going over x5 really makes no sense, as the improvement is minimal and can't be perceived.
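     Those percentages follow from adding independent noises in quadrature: if the background noise is k times the read noise, the total is sqrt(k^2 + 1) times the read noise, so the increase over background noise alone is sqrt(k^2 + 1) / k - 1. A quick check (just this arithmetic, nothing else assumed):

     ```python
     import math

     for k in (3, 5, 10):
         increase = math.sqrt(k**2 + 1) / k - 1
         print(f"swamp x{k}: total noise up by {increase:.2%}")
     # swamp x3: total noise up by 5.41%
     # swamp x5: total noise up by 1.98%
     # swamp x10: total noise up by 0.50%
     ```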
  13. +1 astigmatism. Not sure where it is coming from - but probably CC as @alacant pointed out.
  14. Try the fits (the right one out of the two posted) - it seems linear. Here is a quick process in Gimp after background removal in ImageJ. I did nothing to the data except bin x3 and background removal - just levels / curves in Gimp. Interestingly, the star field looks far richer than in any other processing above.
  15. Why? We just calculated it above on real data.
  16. Ideally, you want to measure background LP on a calibrated sub - as it contains only light signal, with all other signal removed. However, we can measure an uncalibrated sub and then remove the offset measured from the bias, and ignore dark current, as it is probably very small and won't affect the results too much. This is the reason I asked you for just a bias and a light sub.

     Your bias has a mean value of ~699.23 ADU. Although the camera is set to an offset of 70, that does not really mean anything - it is just a number / percent / something in the drivers. The actual offset is measured from the bias sub. You measured the mean bias value to be ~2800 ADU - but that is a 14-bit value padded with two zeros to make it 16-bit, and hence multiplied by 4 (don't worry about this bit - just remember: 14-bit cameras are multiplied by 4, 12-bit cameras by 16, 16-bit cameras are as is). 2800 / 4 = 700, so the actual value is the ~699.23 ADU from above - that is our offset.

     Next I took your light, split it into channels, took the green channel and made a selection somewhere with not much vignetting (again - we are working with uncalibrated data, so aim for the least vignetted region) and not much target. Here I measured a median value of 829 (again - the measured value / 4). Our LP signal is thus 829 - 699 = 130.

     From the read noise we saw that you need 1.5 * 5 = 7.5, and 7.5^2 = 56.25 as the background signal value - and yours is 130. 130 / 56.25 = ~2.3111 = ~2.3. You are currently exposing about 2.3 times longer than you need to. Since you are exposing 300s - you can expose for 130s, or even 2 minutes, and you will be fine.

     Should you reduce exposure time to 2 minutes? Well, that depends on you. If you feel comfortable working with 5 minute exposures - there is no need to reduce to two minutes, but if you are losing subs due to guiding issues or wind or whatever reason and you feel more comfortable with shorter subs - you can use shorter subs, down to about 2 minutes.
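     The whole check as a small script - the values below are the measurements from this post, replace them with your own:

     ```python
     READ_NOISE_E = 1.5    # e
     SWAMP = 5             # target: background noise 5x the read noise
     GAIN_E_PER_ADU = 1.0  # e/ADU
     OFFSET_ADU = 699.23   # mean of bias sub (already un-padded)
     SKY_ADU = 829         # median of background patch on light (un-padded)
     EXPOSURE_S = 300      # current sub length

     target_e = (READ_NOISE_E * SWAMP) ** 2                 # 56.25 e
     measured_e = (SKY_ADU - OFFSET_ADU) * GAIN_E_PER_ADU   # ~129.8 e
     ratio = measured_e / target_e                          # ~2.31
     print(f"needed exposure: ~{EXPOSURE_S / ratio:.0f}s")  # ~130s
     ```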
  17. I'm afraid not. The artifacts are there in every frame. They are usually not seen in a single sub - after all, that is the reason they are tolerated at all: they don't mess up the image in normal usage. The problem with lucky imaging and those artifacts is that we sharpen our result to bring out detail. We can do that because we stack many subs and have high SNR. If you sharpen just one image - you will bring out the noise - but by stacking many images you average the noise out, because it is random, and it is no longer an issue for sharpening. The same process happens with the artifacts when sharpening - the only difference being that artifacts are not random, so they don't really average out. They remain in the image after stacking, as the sketch below illustrates.
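     A tiny numpy illustration of that point (toy numbers, not real data): random noise shrinks by sqrt(N) when stacking, a fixed pattern does not shrink at all.

     ```python
     import numpy as np

     rng = np.random.default_rng(0)
     n_frames, shape = 400, (64, 64)

     artifact = np.zeros(shape)
     artifact[30:34, 30:34] = 5.0  # fixed pattern present in every frame

     # each frame = fixed artifact + fresh random noise (sigma = 10)
     stack = np.mean(
         [artifact + rng.normal(0, 10, shape) for _ in range(n_frames)], axis=0
     )
     print(f"noise std after stacking: {stack[:20, :20].std():.2f}")       # ~10/sqrt(400) = 0.5
     print(f"artifact level after stacking: {stack[30:34, 30:34].mean():.2f}")  # still ~5
     ```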
  18. One thing that needs to be taken into account is the bit depth of the camera. Actual ADU values might be multiplied by 4 or 16 (14-bit or 12-bit camera) - this will skew results. @Pitch Black Skies Can you post a single raw / uncalibrated sub from one of your sessions and a single raw bias sub - again, straight from the camera?
  19. Yes, LPS type filters such as the L-Pro - or similar filters like the Hutech IDAS LPS P2, for example - do disturb color balance, but the effect is much smaller if one does proper color calibration of the data. Quite a bit of color information can be restored even when using such filters. The Astronomik L2 filter is a specialty item and is therefore probably priced accordingly. L3 and L2 remove some of the far regions of the 400-700nm range - L3 is a bit more aggressive than L2 at this. This helps bring down star bloat with faster doublets that don't have perfect color correction. Usually the far ends of the visible spectrum are most out of focus with such scopes, and L2 and L3 serve to remove those offending parts of the spectrum. Actually, no - the Askar DUO band filter is much narrower, at around 7nm, and while both will show the same part of the spectrum that is of interest - Ha+OIII - the Askar filter will remove much more of the LP. You can still use the UHC, and it will work, but the Askar will probably work better in the same role.
  20. Without any filter, there will be a lot of IR photons hitting the sensor. There are a few camera models that have a strong red and IR response, and that will make everything look pink. See for example the ASI485mc model - above 700nm it has a much stronger red response. Any IR light will be registered as predominantly red and will be added to the image as a reddish hue.
  21. Ok, I see. Here is the thing - you can only achieve so much with a given aperture. A small scope - a 50-60mm one - will simply resolve less than a 100mm one, even with seeing involved. This does not mean you can't image smaller galaxies - it just means that you need to understand that there will be a limit to what you can resolve, or how "close in" you can get. Part of getting in close is FOV. You can adjust FOV by cropping your image. As long as you don't try to enlarge your image past what you can resolve (and get just a blurry result) - you will be fine. To give you some idea - let's take a few smaller galaxies and see what sort of size they have. M51 is 11 arc minutes x 7 arc minutes - that is 660 x 420 arc seconds. M63 is similar in size, and so is M94. To this size you need to apply your sampling rate - one that you can achieve with such a small telescope - and that would be 3"/px. The galaxy size that you can resolve will be around 200-300px across (see the quick calculation below). This is your target size. In order to fit the galaxy into the FOV - you should really crop to about 1000px to 1200px max in width. Here is M51 in the FOV that you would get if you crop your image to 1000px and sample at ~3"/px (the FOV is the whole gray background - not just the part in the center - I simply did not have a large enough M51 image to make the example). You can't zoom in further into such an image - it is as large as it is, only 1000px across. But it will look nice.
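     The sizing arithmetic from the post as a one-liner sketch - angular size divided by sampling rate gives the target size in pixels:

     ```python
     M51_ARCMIN = (11, 7)       # M51 apparent size, arc minutes
     SAMPLING_AS_PER_PX = 3.0   # realistic sampling rate for a small scope

     size_px = tuple(round(a * 60 / SAMPLING_AS_PER_PX) for a in M51_ARCMIN)
     print(size_px)  # (220, 140) - the galaxy spans roughly 200px in the image
     ```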
  22. That really depends on what you are trying to image. If you are trying to image the Moon or planets - then there is a certain F/ratio that depends on pixel size (see the note below). This is only feasible if you are using the lucky imaging technique - shooting a very large number of images with very short exposures (up to 5ms). Some DSLRs are capable of doing this in video mode - but not all (because they use compression for video and that causes artifacts). It is better to use a dedicated planetary camera to shoot these kinds of images. If you want to shoot deep sky objects - then in general you don't need to use a barlow. Even a x2 barlow will be too much magnification, because the atmosphere blurs the detail.
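     As a side note - the pixel-size-dependent F/ratio mentioned above is commonly stated as critical F/# = 2 * pixel_size / wavelength (a rule of thumb added here for reference, not something from the post itself), which for green light of ~500nm works out to about 4x the pixel size in microns:

     ```python
     # Common critical-sampling rule of thumb for planetary imaging
     # (an assumption added for illustration): F/# = 2 * pixel / wavelength.
     PIXEL_UM = 2.9        # example pixel size of a typical planetary camera
     WAVELENGTH_UM = 0.5   # green light, ~500nm

     critical_f_ratio = 2 * PIXEL_UM / WAVELENGTH_UM
     print(f"F/{critical_f_ratio:.1f}")  # F/11.6 for 2.9um pixels
     ```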
  23. Just to be very clear - this is the setup that goes into a 1.25" barlow: it is a T2 ring for the DSLR with a 1.25" / T2 nosepiece screwed in - you can insert this into a barlow like any other 1.25" eyepiece.
  24. It is possible and fairly simple - use a 1.25" / T2 extension (you can do that with any 1.25" barlow). Although you can achieve any FOV that way - it is not possible to get a detailed image at any FOV. With a x5 barlow and an F/6 scope you'll be operating at F/30 - you will get a very blurry image that way, without much detail.