Everything posted by vlaiv

  1. Stars are actually very useful in determining the blur level in an image. Blur is characterized by its PSF - point spread function. A star is, for all intents and purposes, a point-like source at such a distance, so when you image a star you are in fact recording the exact PSF profile, and analyzing it will tell you everything you need to know about the nature of the blur. The same approach is used to test optics - look at the star pattern and you can "read" the performance of the telescope, its PSF and MTF. When star testing a telescope we tend to use the defocused pattern because it is easier to read, but in principle, what you read from a defocused pattern you can read from a star profile as well. The only thing that can skew your analysis via a star profile is the fact that atmospheric turbulence is not the same across the whole field, and this matters in short exposures where there is not enough time for these small differences in seeing to average out - as in planetary imaging. That is why adaptive optics works only for a very narrow FOV - it tracks just one "star" (usually an artificial laser beacon) and can correct for disturbances only in the vicinity of that "star". In our case, things average over time and stars are a good match for the overall PSF that has affected the image.
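If you want to put a number on it, here is a minimal sketch of reading the PSF width (FWHM) off a star by fitting a Gaussian to its profile. The 1D profile and all names here are hypothetical, just to illustrate the idea, not how any particular tool does it:

```python
# Sketch: estimate FWHM of a star profile by Gaussian fitting (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, x0, sigma, offset):
    return amp * np.exp(-(x - x0) ** 2 / (2 * sigma ** 2)) + offset

# Synthetic example profile: a "star" blurred to sigma = 2 px plus background noise
x = np.arange(31, dtype=float)
rng = np.random.default_rng(0)
profile = gaussian(x, amp=1000, x0=15, sigma=2.0, offset=50) + rng.normal(0, 5, x.size)

popt, _ = curve_fit(gaussian, x, profile,
                    p0=[profile.max(), x[np.argmax(profile)], 2.0, np.median(profile)])
fwhm_px = 2.355 * abs(popt[2])        # FWHM = 2*sqrt(2*ln2) * sigma
print(f'FWHM ~ {fwhm_px:.2f} px')     # multiply by "/px to get FWHM in arc seconds
```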
  2. Well, for all those interested, I did google translate:
  3. Yep, check your calculations. There is no way such a large mirror can be 25mm thick - most of that thickness is going to be ground away if the mirror is to be fast, and it ought to be (see the quick sagitta estimate below). An F/3 800mm scope is going to have a 2400mm FL - or be almost two and a half meters long. Anything slower than that is going to be as tall as a house. The next thing to worry about is the impact of mirror thickness on its ability to maintain figure. When you have a chunk of glass that thin, it is going to bend under its own weight, and that will produce astigmatism in virtually any part of the sky apart from when looking straight up at the zenith. I know a newtonian seems like the easiest thing to do, but you might consider a two-mirror system instead? Instruments of that size are probably best made in a Nasmyth configuration. Here is a schematic: Such a telescope does not need a central hole in the back, and hence no hole in the primary, but uses two curved mirrors and one flat. It can be mounted on a motorized dob-type alt/az mount, with the addition of a camera rotation device if you need to counter field rotation (not needed for visual, but probably needed for imaging and science work). The flat mirror and "focuser" are mounted on the altitude rotation axis, so the focuser is pretty much stationary. It is actually fixed in place if you have your scope on a large platform where the observer rotates with the scope. In any case it is easily accessible - no need for ladders or anything.
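As a rough back-of-the-envelope check of the "most of that thickness gets ground away" point, here is a small sketch of the sagitta of an 800mm f/3 mirror (spherical approximation, my own numbers for illustration, not a design calculation):

```python
# Sketch: how deep the curve of an 800 mm f/3 mirror is (sagitta), spherical approximation.
D = 800.0                             # mirror diameter in mm
f_ratio = 3.0
focal_length = D * f_ratio            # 2400 mm
R = 2 * focal_length                  # radius of curvature of the mirror surface
sagitta = (D / 2) ** 2 / (2 * R)      # r^2 / (2R)
print(f"Sagitta ~ {sagitta:.1f} mm")  # ~16.7 mm removed at the centre of a 25 mm blank
```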
  4. I can see an improvement there - averaging about 2.5px or so? If you reach something like 1.8px on average, that will be the limit of what your sampling rate can capture; for tighter stars it's worth going with a finer resolution than 1.96"/px.
  5. Well actually, if you look at this for example, you will see that the 14" is going to beat the refractor in terms of resolution - the same pair of stars as in the above image. And here I'm not talking about the size of each star - I'm talking about the "dark space" between them in the bottom image (which is over sampled, but there is still a cleaner split).
  6. Ok, yes, if by any chance you get a blown out core in your subs, then of course either reduce exposure or do "filler" subs, but I'm almost certain you won't saturate the galaxy core in 15 minutes. In principle, binning 2x2 on camera will save space, but I would advise against it. If you use long subs, you won't have that much data anyway.
  7. I think that @JamesF gave that advice in light of long exposure AP, where you can cycle between filters for each sub (or couple of subs). With planetary imaging this is of course not feasible since exposures are really short, but you can still use a version of this approach - shoot a 1 minute video in each filter, then change and do the next filter. When you finish all three (or four if you do LRGB, but I'm not sure it is worth it - maybe just do RGB to start with), return to the first filter and do another set. This way you can choose the best 6 minutes of video - two for each filter, depending on seeing - as you can cover as much as half an hour or more by cycling through the filters. I'm not sure you will be able to use all the recorded material due to motion blur, even if you derotate your videos. Yes, each channel will have lower SNR compared to lum, but that is simply the way things are if you want to shoot in color - the same thing happens if you use an OSC sensor, since only 1/4 of the pixels pick up red, 1/4 blue and 1/2 green - again meaning less signal and lower SNR. This should not worry you much, as there is no other reasonable way to do it (you could in theory use multiple scopes and shoot all filters simultaneously, but like I said, let's keep things reasonable). Yes, wavelets work on both mono and color data. In fact wavelets work on mono data "exclusively" - but when you think about it, R, G and B looked at individually are mono data - they become color only as a consequence of "interpretation" by hardware capable of displaying color.
  8. Well, my view on the matter is that you should keep to about 15 minute exposures. If you use the reducer you'll need to bin x3 to match the x4 bin of the color data (btw, if you don't want to upload subs so we can try processing them the way I described, maybe you would be willing to do it yourself? In that case let me know, and I can post the plugin for ImageJ that will do this for you and explain how to use it). No need to use lower than unity gain, and an offset of 50. These should work well with the longer exposure. Don't worry if you blow out a few stars in luminance - once you stretch your data, those will saturate anyway. With luminance it is not critical to have full star profiles like it is with color, where you need the proper balance of R, G and B - and you lose the proper ratios if star cores are clipped.
  9. Ok, let's run some calculations to see what we can get. As far as I can tell the 60Da has 4.3um pixels, so you were sampling at (fl 3556mm) 0.25"/px, or in reality 0.5"/px given that it is an OSC sensor. There is "space" to bin x2 additionally to get to about 1"/px, which would probably be a good sampling rate given the aperture and guiding. The easiest way to do it is to first split the bayer matrix and then do an additional split bin. Not sure if you can do it in your software, but if you would like to do a test, maybe the simplest thing would be to upload calibrated subs as 32bit fits, and I can prepare all the files for you to stack again? If you have something like 7 subs, this will result in 28 subs for the red and blue channels and 56 subs for green (yes, I know, "magic"). With the ASI1600 you will be sampling at about 0.22"/px - which is really "crazy" and should be binned at least x4 or maybe even x5 in some cases (the aim is to get to about 1"/px - a bit less if seeing and guiding are exceptional and a bit more if the atmosphere is not so good). Btw, since you will be over sampling by this much - use longer exposures. Short exposures on a CMOS sensor work because read noise per pixel is relatively small compared to other noise sources, namely LP noise. The darker the skies, the longer the exposures you need. When you over sample like this you spread the signal over more pixels and each pixel gets less signal. This is also true for LP, so your background is going to be really dark in a single sub if you use a short exposure. If you can guide for 15 minutes - yes, use that sort of exposure at this resolution with the ASI1600 as well, because at that resolution 16 pixels divide the signal of a single pixel at 0.88"/px, and in order to get the same level of signal as a 1 minute exposure at 0.88"/px you will need 16 times that amount of time.
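For reference, a small sketch of the sampling rate arithmetic used above (206.265 * pixel size in um / focal length in mm; the ASI1600's 3.8um pixel is assumed here):

```python
# Sketch: arc seconds per pixel for the setups discussed above.
def sampling_rate(pixel_um, focal_mm):
    return 206.265 * pixel_um / focal_mm

print(sampling_rate(4.3, 3556))       # ~0.25 "/px for the 60Da at 3556 mm
print(sampling_rate(4.3, 3556) * 4)   # ~1.0 "/px after bayer split plus an extra x2 bin
print(sampling_rate(3.8, 3556))       # ~0.22 "/px for the ASI1600 (3.8 um pixels assumed)
```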
  10. Oh, I like it! Yes, there is tilt, and now it clearly shows, as there is a "gradient" of defocused stars - worst in the top left corner. This is one of my favorite targets, and it looks like it is quite a challenging one. I'm yet to see a sharp rendition of it (well, at least sharp to my liking). I like the final version, and love the fact that you brought it down in resolution, since your shots at 1:1 are very over sampled. How did you do it? Did you bin your data? The thing with the ASI1600 is that you are going to have a crazy sampling rate, and you will need to bin it quite a lot to get to a decent sampling rate.
  11. Let's see what we can gather from this. First - DSS seems to overestimate FWHM by just a tad - AstroImageJ measures most stars at around 2.8-2.9 pixels FWHM, with larger ones at 3+px, so I would say the average is about 2.9 - not far off the 3.0px that DSS gave (maybe DSS is only looking at the brighter stars). This gives a rather large FWHM in arc seconds - 2.9px x 1.96"/px = ~5.7". We do have the info that you had about 1" RMS guide error and that you are using an 80mm scope. Let's assume somewhat poorer seeing to see if we can get close to that 5.7" FWHM. According to the calculation, you should be getting quite a bit lower FWHM for your stars with this setup in regular seeing conditions (assuming an 80mm scope, 1" RMS guide error and 2" seeing) - about 3.4"-3.5", which is ideal for a sampling rate of about 2"/px - but for some reason you are getting the higher value of 5.7". It might be that seeing is indeed poor - I see you are doing M31, and I'm guessing it is still in the east when you shoot it? What sort of landscape do you have due east? It might be that there are heat sources there creating local seeing effects (houses, a road, a large body of water) that impact your FWHM significantly. For the time being, under your circumstances, about 2"/px is probably the optimal sampling rate. If you want to move "up" in resolution, it might be worth considering a 6" or 8" scope and aiming for 1.5"/px, but only if you can get to about 1.8-2px FWHM in your subs in DSS with the gear that you already have.
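The kind of estimate referred to above can be sketched as a quadrature sum of the independent blur sources (my own approximation of the calculation, assuming Gaussian blurs; not necessarily the exact method used):

```python
# Sketch: expected star FWHM from aperture, guiding error and seeing, added in quadrature.
import math

def fwhm_estimate(aperture_mm, guide_rms_arcsec, seeing_fwhm_arcsec, wavelength_nm=550):
    # diffraction-limited FWHM of the aperture, in arc seconds
    airy_fwhm = 1.03 * math.degrees(wavelength_nm * 1e-9 / (aperture_mm * 1e-3)) * 3600
    guide_fwhm = 2.355 * guide_rms_arcsec            # RMS -> FWHM for a Gaussian
    return math.sqrt(airy_fwhm ** 2 + guide_fwhm ** 2 + seeing_fwhm_arcsec ** 2)

print(f'{fwhm_estimate(80, 1.0, 2.0):.2f}"')         # ~3.4" for 80 mm, 1" RMS guiding, 2" seeing
```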
  12. These show an improvement in comparison to the last post - now you are down to a 3.1px average. Out of interest, what is your total guide RMS (in arc seconds)? And also, could you post a single sub - one with a known FWHM (from DSS - for example 1min lum 31-1_006.fit, which has a FWHM of 3.0)? I would like to compare it against another FWHM measuring method to see if it is indeed 3.0px. Btw, if DSS's FWHM measurement is good, you need FWHM figures of about 1.6px to fully exploit the resolution that you already have.
  13. I think that you have pushed the data too much. Here is a quick tweak on levels and curves in Gimp of your original image (with a bit of wavelet denoising as well):
  14. Yes, this part confused me and that is why I suggested attaching one of your subs. A decent stretch - either by the capture application as a preview or in PS or similar - should show plenty of stars in the background and any captured nebulosity or glob/galaxy, whatever you were after. If you missed your target, then it will obviously not show, but the star field should show nonetheless.
  15. I meant to ask, in regards to the setup, how do you find the filter thread connection on the lens - is it strong enough to carry both lens and camera without any issues? I was always under the impression (maybe due to cheap/stock Canon lenses) that the filter thread at the front of the lens is not the firmest thing to use for attachment.
  16. I was referring to this in fact. I have heard of people having trouble using APT with dedicated astro cameras once they switch from DSLRs. I have no doubt that APT works well for DSLR type devices, but those occasional complaints suggest that it's not geared towards usage with dedicated astro cameras. It might be as simple as already pointed out by @Laurin Dave and @david_taurus83 - the fact that auto stretch works differently with ASCOM than with a DSLR in APT. If not properly auto stretched, you will just see black sky with the odd bright star here and there. As for adjusting the workflow - I would start by looking at different capture software if you can't get APT to play ball with an ASCOM based camera. There is a free application ("on the rise" - there is even a thread named like that) that has quite a lot of features and gets favorable reviews - it's called NINA, and here is the thread about it: You can also give SGP lite a go, as it is a free (and limited) version?
  17. There won't be much difference in going from unity gain to gain 200 in terms of the quality of a single captured sub. You will lower read noise from about 2.6e to about 2.25e while losing some dynamic range (going from about 12.5 stops down to 11 stops). Depending on your LP levels and the duration of your exposure, this can have minimal effect on the resulting subs. I think that most of your frustration with this camera comes from the wrong workflow - one geared towards a DSLR type camera. If you spend a little time adjusting your workflow, you'll be much happier with your camera.
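For context, "stops of dynamic range" is just log2(full well / read noise). A tiny sketch below; the full-well values are illustrative placeholders chosen to reproduce the quoted stops, not the actual specs of the camera discussed:

```python
# Sketch: dynamic range in stops from full well capacity and read noise (illustrative values).
import math

def dynamic_range_stops(full_well_e, read_noise_e):
    return math.log2(full_well_e / read_noise_e)

print(round(dynamic_range_stops(15000, 2.6), 1))   # ~12.5 stops (example full well)
print(round(dynamic_range_stops(4600, 2.25), 1))   # ~11 stops (example full well at higher gain)
```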
  18. Why not post a single light sub so we can have a look at what you are dealing with? It might be that everything is fine and it is down to the software that you are not seeing what you expect to see (a different workflow to DSLR).
  19. Not sure if using the red channel as luminance is such a good idea. I understand that you want sharpness for luminance, but you will get quite an unnatural appearance of the target if you select the red channel as your luminance. Human vision is geared towards green being the luminance - for the same source intensity, we perceive green as the brightest of the three: R, G and B. Here is, for example, the luminance formula when you have sRGB components: This means that we will perceive green as being about 2 times brighter than red of the same intensity, and even brighter than blue (although the above table gives x3.5 and almost x10, that is for linear and not gamma corrected values - after correction the difference is smaller). In answer to your question, I would just shoot the same number of frames per channel, as the sensor's sensitivity in each band is roughly in line with how luminous we perceive that part of the spectrum to be. After that just do a regular R, G, B compose from these filters, white balance the image and do wavelet sharpening. That should give you good results.
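The table/formula referenced above did not come through here. For reference, the commonly quoted sRGB (Rec.709) luminance weighting for linear RGB, which appears to be what the post refers to (its ratios of roughly x3.4 green-to-red and x10 green-to-blue match the "x3.5 and almost x10" remark), is:

Y = 0.2126 * R + 0.7152 * G + 0.0722 * B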
  20. Well, something is off, that is for sure. I suspect some sort of misalignment, but can't be sure if it is collimation. Examine stars in different parts of the image and you will see different things:
Bottom left corner: stars look almost ok. I would say there is a slight elongation, but I can't be sure of the cause - it might be periodic error / guiding. What sort of target spread did you have in PHD - was it elongated in one direction or circular?
Close to the center of the field: clearly shows defocus, maybe a bit of miscollimation, because the bottom left part is brighter.
Top right corner: again, stars are tighter - but this time clearly elongated.
Top left / bottom right corners show in principle the same thing: a bit of astigmatism and lateral chromatic aberration, but otherwise almost fine.
You should probably try without the reducer first to see how well the optics are collimated. Then check the spacing on the reducer as well as possible tilt (rotate camera + reducer, and the camera with respect to the reducer, to see what happens to star shapes in the corners). I searched online for any clues and found some: it appears that the Meade ACF design uses hyperbolic surfaces and is therefore much more sensitive to collimation. It also has a thicker corrector plate than Celestron SCTs, and this leads to more lateral chromatic aberration.
  21. If you are already at close to 2"/px (1.96"/px), then these FWHM values are quite high - and you should not be trying for a higher sampling rate. If DSS is correct about these values, your FWHM is 3.3px x 1.96"/px = ~6.47". Under these conditions your sampling rate should be about 4"/px. My guess is that on this particular night the seeing was poor and, as a consequence, the guiding was not very tight.
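A tiny sketch of the rule of thumb used in these posts (optimal sampling rate of roughly FWHM / 1.6, as in the earlier "1.6px FWHM" remark), with this post's numbers plugged in:

```python
# Sketch: from measured star FWHM to a suggested sampling rate.
fwhm_px = 3.3
scale = 1.96                        # current sampling, "/px
fwhm_arcsec = fwhm_px * scale       # ~6.47"
optimal_scale = fwhm_arcsec / 1.6   # ~4 "/px
print(f'FWHM ~ {fwhm_arcsec:.2f}", suggested sampling ~ {optimal_scale:.1f} "/px')
```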
  22. I'm not sure if it's a feature of the scope or of the way our brain/eye system works. There is a limit to how low in magnification we can go without getting too large an exit pupil and turning our eye into the aperture stop. On the other hand, different parts of the FOV that we normally have (without any optics) have different properties - there is central vision, which gives the best sharpness, and there is peripheral vision for threshold detection at low light levels (averted vision). Having the light "crammed" into a smaller space does increase our ability to detect contrast. If we need to move our eye from one patch of a certain brightness to a different patch of the image with a different brightness, we might not be able to detect the contrast change; having both patches in the same part of our eye's FOV - seeing them simultaneously - makes it much easier for our brain to see the difference. You can test this by taking two very similar colors and separating them so you have to turn your head to look at each - you might not be able to tell the difference - but put them next to each other and you will instantly see that there is in fact a difference. These "HET" scopes give you a similar effect and are hence better for detecting these low contrast features - not because they are somehow better at rendering the contrast - it is just that they compose the image in such a way that it is easier for our eye/brain combination to see that contrast. I quite love the idea of a larger aperture that you can use while still comfortably seated - I would not mind having something like a 16-18" F/3 scope and an appropriate corrector.
  23. I'm really torn between these two. This is so interesting. I like them both for precisely opposite reasons! I love the saturation and "sparky" feel of the first one, and the subtle tone of the second one.
  24. Your example is a bit flawed. Although we don't usually work with a uniform spectrum, that is not the most important thing. You calculated the number of photons hitting the "whole" sensor, and that is rather meaningless in terms of SNR, as SNR depends on intensity per pixel. Most users will use the full resolution of the mono sensor, so we need to calculate per pixel SNR for a given time. Actually, may I suggest a bit more realistic scenario so we can examine what happens? Here are the initial assumptions of the calculation:
- Same sensors in terms of pixel size, QE and other things, one being mono and the other OSC.
- We won't take read noise or dark current into consideration for the moment.
- We are shooting a relatively bright target in moderately dark skies - photon flux for the target and for the sky will be the same (for example mag21 skies and a mag21 target - quite reasonable, but we won't calculate photon flux for each; we assume the same parameters as above - a uniform spectral source and 60 photons per hour over the full spectrum).
- 1 photon per minute over the full spectrum, 4h total exposure time, filters divide the spectrum into thirds and have 100% transmission in their respective band; sky flux is the same as the target - 1 photon per minute over the whole spectrum and uniform (so 1/3 per band).
- We will compare the resulting SNR per R, G and B value - all calculations are per pixel, with one difference: the OSC "pixel" is twice the size and is composed of 4 bayer matrix pixels - RGGB.

Let's do OSC first. Total photons:
R - 80 target, 80 sky
G - 160 target, 160 sky
B - 80 target, 80 sky

SNR per channel:
Red - signal 80, noise (shot + sky) = ~12.65 (sqrt(80+80)), so SNR = 80 / ~12.65 = ~6.3246
Blue will be the same, so SNR in blue = ~6.3246
Green - signal 160, noise (shot + sky) = ~17.8885 (again sqrt(shot^2 + sky_noise^2) = sqrt(160+160) = sqrt(320)), so green SNR = 160 / ~17.8885 = ~8.9443

For OSC and a 4h exposure, the RGB SNR per channel is (6.3246, 8.9443, 6.3246).

Let's now do the same for the mono/LRGB combination. It requires a bit more math since we need to do a "color compose" of LRGB, but it's quite straightforward and we only need to do it for a single color, because we have a uniform light source. Total photons (per pixel):
L - 60 target, 60 sky
R, G, B - 20 target, 20 sky each

SNR per captured channel:
L - signal 60, total noise ~10.9545, SNR = ~5.477
R, G, B - signal 20, total noise ~6.3246, SNR = ~3.1623

But we need the SNR per RGB and not per LRGB, so we are going to apply the RGB ratio method of color composing. This means that L will be multiplied by R/(R+G+B) for the ratio of red, for example. We simplify things by assuming that R+G+B is pure signal with no noise (we could do the full calculation, but it would be a bit more complicated), so it has the value 60. The R channel then looks like this:

(60s + 10.9545n) * (20s + 6.3246n) / 60 = 60s*20s/60 + 60s*6.3246n/60 + 10.9545n*20s/60 + 10.9545n*6.3246n/60

We have three components of noise and one of signal. The signal is obviously 20 (60s*20s/60), and we need to calculate the total noise by noise addition. Let's first calculate the magnitude of each noise term:
60s * 6.3246n / 60 = 6.3246
10.9545n * 20s / 60 = 3.6515
10.9545n * 6.3246n / 60 = 1.1548

Now we can calculate the total noise as sqrt(n1^2 + n2^2 + n3^2) = sqrt(40.0006 + 13.3335 + 1.3336) = ~7.3938.
So SNR per R = 20 / 7.3938 = ~2.705.

Per pixel SNR for R, G and B after simple ratio color composing is (2.705, 2.705, 2.705). Even if we equalize the sampling rate and add 4 pixels of the mono+LRGB approach together, we still end up with lower per-channel RGB SNR: (5.41, 5.41, 5.41) vs (6.3246, 8.9443, 6.3246).

This shows that the speed of the mono/LRGB approach does not come from the bayer matrix being inherently inferior to mono. It in fact comes from the things we left out - read noise and the difference in QE between regular filters/mono and bayer filters. The point made by @ollypenrice about the ability to use different LRGB ratios than the basic one is also very valid, and the above calculation would look different had we done so.
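A short sketch reproducing the numbers above, under the same simplifying assumptions (uniform spectrum, 1 photon per minute over the full band, 4h total, sky flux equal to target flux, shot and sky noise only):

```python
# Sketch: per-channel SNR for an OSC super-pixel vs mono + LRGB with RGB-ratio compose.
import math

minutes = 4 * 60
full_band = minutes * 1.0           # 240 photons per pixel over the full spectrum in 4 h
per_band = full_band / 3            # 80 photons per R/G/B band

# --- OSC: a 2x2 RGGB super-pixel exposed for the whole 4 h ---
osc = {"R": per_band, "G": 2 * per_band, "B": per_band}          # target photons per channel
osc_snr = {c: s / math.sqrt(s + s) for c, s in osc.items()}      # sky flux equals target flux
print("OSC SNR:", {c: round(v, 4) for c, v in osc_snr.items()})  # {'R': 6.3246, 'G': 8.9443, 'B': 6.3246}

# --- Mono + LRGB: one pixel, 1 h per filter ---
L = 60.0                            # full-spectrum photons in 1 h
rgb = 20.0                          # one third of the spectrum in 1 h
L_noise = math.sqrt(L + L)
rgb_noise = math.sqrt(rgb + rgb)

# RGB-ratio compose: channel = L * R/(R+G+B), treating R+G+B = 60 as noiseless
signal = L * rgb / 60.0
noise = math.sqrt((L * rgb_noise / 60) ** 2
                  + (L_noise * rgb / 60) ** 2
                  + (L_noise * rgb_noise / 60) ** 2)
print("Mono SNR per channel:", round(signal / noise, 4))          # 2.705 (x2 ~ 5.41 for 4 binned pixels)
```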
  25. I gathered the general layout, I just wondered how you managed to put everything together (adapters and connections) and what sort of lens you used. As far as I can tell from the image, the lens is narrow enough to fit inside a T2 extension or something? You also happen to have two rotators in this setup - one manual and one electronic? The EP is probably this one (or similar) with a T2 thread? https://www.teleskop-express.de/shop/product_info.php/info/p1428_TS-Optics-40-mm-1-25--SuperView-eyepiece-with-T2-connection-for-cameras.html