
Everything posted by vlaiv

  1. No, I'm pretty sure I got it right - you can see it yourself in the above images. The enlarged no-drizzle version lacks sharp per-pixel noise because it was enlarged without any sophisticated rescaling algorithm (I just pressed Ctrl+ in Firefox to get it to 200% display size). Mind you, lack of high-frequency noise might look like sharpness, but the actual signal is no sharper - there is no additional detail, which is to be expected, since drizzling will not work in amateur setups. Although the majority of people disagree with this, drizzle in fact requires such guiding/dithering precision that you can place half-pixel offsets between subs. That is possible for Hubble, with its pointing and stabilization, but not for amateur setups (drizzle was developed for Hubble's undersampled images). On the other hand, drizzle will definitely lower SNR, because you are stacking fewer samples per output pixel, and that can be seen in the above images.
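To illustrate the SNR point, here is a minimal Monte-Carlo sketch in Python - not a real drizzle implementation, just the statistics: if a small drop footprint means only a fraction of the subs contribute to any given output pixel, the noise of the stack goes up accordingly. All numbers are hypothetical.

```python
# Minimal Monte-Carlo sketch (hypothetical numbers, not a drizzle
# implementation): averaging fewer samples per output pixel - which is
# what a small drizzle drop footprint causes - raises the stack noise.
import numpy as np

rng = np.random.default_rng(0)

def stacked_noise(n_subs, fraction, sigma=10.0, trials=20000):
    # number of subs actually contributing to one output pixel
    k = max(1, round(n_subs * fraction))
    # pure-noise samples; the signal term is omitted as it averages out
    samples = rng.normal(0.0, sigma, (trials, k))
    return samples.mean(axis=1).std()

print(stacked_noise(32, 1.00))  # all subs contribute: ~ 10/sqrt(32) ~ 1.77
print(stacked_noise(32, 0.25))  # 1/4 contribute: ~ 3.54, twice the noise
```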
  2. Could you please describe the layout of this? Since this is a "relay"-based system, have you thought of adding a half slit to it? By half slit I mean the following:
  3. Still not seeing it ... The upper is the enlarged no-drizzle version, the lower is the drizzled version. If anything, the upper looks like it has better SNR, and I'm not seeing added sharpness in the second (drizzled) one.
  4. Actually, the difference between mono and OSC is not as drastic as it may seem. Let's analyze the actual differences between the same sensor in OSC and mono variants to better understand which one is "faster" and by how much. "Speed" will of course vary depending on several factors, but we can draw some general conclusions. Given 4 hours of imaging time, let's see what sort of signal will be gathered with the same scope and same sensor, in mono vs OSC.

Mono: each pixel gathers signal for 1 hour per filter, so we end up with 1h each of L, R, G and B.

OSC: putting other differences aside for the moment and concentrating on gathered signal - total imaging time is 4h, but red signal is recorded by 1/4 of the pixels, blue by 1/4 and green by 1/2. This translates into 1h of R (4h on 1/4 of the pixels is the same as 1h on "all pixels"), 1h of B and 2h of G. If we had an LRGB bayer matrix on the OSC, in principle we would gather 1h of each of LRGB, the same as mono.

So what are the real differences between the two? Here is a list:

1. Filters in the bayer matrix are less effective than interference filters covering the whole chip. This is in part because bayer filters are absorption filters rather than interference filters (this is a guess on my part, I might be wrong, but it seems reasonable, since interference filters on each pixel would be hard to fabricate and reflections would probably be a serious problem). This translates into lower quantum efficiency for the OSC sensor. Here is a comparison (taken from astrojolo's website and a related article, probably worth checking out: https://astrojolo.com/gears/colour-camera-versus-mono-price-of-comfort/)

2. Sampling rate - with OSC you have a coarser sampling rate, meaning you won't be able to record all the high frequencies, and that often leads to undersampling (though one would be hard pressed to spot that last bit of sharpness in an image).

3. The same thing that happens when we compare many short subs to fewer long subs happens here as well - the impact of noise, because of "sparse" pixels and slightly lower QE. The difference between OSC and mono will depend on target brightness - this is why many have found that for really faint signal you want mono + filters - and the LP impact will also be somewhat higher on OSC, hence "OSC works best with fast optics under dark skies".

The fact that we get 2h of green is not such a big problem - it just happens to suit daytime photography, because of the way our vision works. If we examine the "green" response (it is not strictly green, but close to it - cameras need white balance to get it right) against the curve of our eye's sensitivity to light, we see a very good match between the two - and this is for a reason. Look at the green line representing our eye's response to brightness, and for example the ASI183MC QE graph (green line): for daytime photography, where we expect luminance to match what we would see with our own eyes, the green bayer filter is actually a luminance filter - it represents the luminance of each color as our eye/brain sees it. In this regard OSC actually gets 2h of "L", 1h of R and 1h of B worth of data in a 4h exposure.

We use "full spectrum" as L in astrophotography because every photon counts, and although our AP images differ slightly from what our eyes would see in terms of relative brightness, it does not matter much, because we end up stretching our data anyway and changing the brightness response to show all the dynamic range captured. After all of this: OSC is slower, there is no doubt about it, but it is not necessarily much slower than mono + filters - the difference depends more on other factors than on the technology itself (type of target, sky transparency, levels of LP and such).
  5. How about this then: https://www.firstlightoptics.com/se-series/celestron-nexstar-6se.html Or maybe even the 8" version (that one is closer to 15kg).
  6. Quite a bit of difference - let's just list the relevant stats so you can compare them:

600D at ISO800 (close to unity gain):
- pixel size: 4.3um
- peak QE: 41%
- read noise: 3.2e
- dark current: ~0.4e/px/s (at about 33C sensor temperature)

ASI294 at Gain 120 (close to unity gain):
- pixel size: 4.63um
- peak QE: ~75% (estimated by ZWO)
- read noise: ~1.8e
- dark current: 0.0065e/px/s (at -15C)

So: larger pixels (not necessarily what you need, but they mean higher per-pixel sensitivity and lower resolution at the same focal length), higher QE (more than 80% higher) and lower read noise (nearly half). Dark current is also quite high on the 600D - let's take a 200 second exposure and see how much noise comes from dark current alone. Accumulated dark current in a 200s exposure will be about 80e, and the square root of that is ~8.94e - almost three times the read noise, so it's not negligible at all. Dark current and read noise combined give ~9.5e of noise, versus ~2.13e with the ASI294 (1.8e of read noise plus 1.14e of dark current noise from a 200s exposure at -15C). The 600D does have more active pixels due to its smaller pixel size (both are APS-C type sensors): 5184 x 3456 vs 4144 x 2822.
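For anyone who wants to reproduce these figures, here is the arithmetic as a short Python snippet (all inputs are the numbers quoted above):

```python
# Recomputing the noise figures quoted above: dark current shot noise
# combined in quadrature with read noise, per pixel, in electrons.
import math

def total_noise(read_noise_e, dark_current_e_s, exposure_s):
    dark_e = dark_current_e_s * exposure_s   # accumulated dark signal
    dark_noise = math.sqrt(dark_e)           # shot noise of dark signal
    return math.sqrt(read_noise_e**2 + dark_noise**2)

print(round(total_noise(3.2, 0.4, 200), 2))     # 600D:   ~9.5 e
print(round(total_noise(1.8, 0.0065, 200), 2))  # ASI294: ~2.13 e
```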
  7. Not sure why we would go through the trouble of splitting the beam when we can have a multi-scope setup, each scope having its own sensor and filter. A quad scope, each tube equipped with an L, R, G and B filter respectively and aimed at the same part of the sky, would provide further benefits - like increased total aperture (or, if you prefer to look at it that way, capturing LRGB at the same time at the full resolution and full aperture of a single scope).
  8. Just noticed that the quoted article is from 2013 - if they had wanted to market such a sensor, they would have done it by now. I don't think it's going to happen any time soon.
  9. That is actually quite clever, as it combines the above approach with color subtraction. Two clear pixels plus a red and a blue one! It looks like the best option for OSC astronomy. This sort of sensor would be a bonus for EEVA as well.
  10. There are a couple of things going on here, and all of them are quite "normal". First, you are using a full frame sensor with a scope that does not have that large a corrected field - hence the distortions in the corners. That is a limitation of the gear used and you can't do much about it except crop. Star trailing in the corners of your stack is another consequence of the gear and imaging method - as noted, you dithered quite a lot between subs. You cover a fairly large field with this setup, and when you map a "round" thing onto a "flat" thing there will be distortion - the sky is a sphere while the imaging surface, the sensor, is flat. This leads to slight distortion of star positions across the sensor, very similar to the barrel / pincushion distortion you get from a wide lens in regular photography (or from some eyepieces in visual use). There would be no problem if your frames pointed to the same place every time (small dither), but a large dither means that different parts of the image are stretched / distorted differently, so DSS has trouble aligning them, as it can't handle image distortion of this sort. It expects "flat" images - geometrically flat, with stars the same distance from each other in every frame - which is not the case here. Software capable of correcting this kind of distortion will produce a better stack. I think APP can do it, as it lists among its features: "advanced image registration using true optical distortion correction".
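To see how large this projection effect can be, here is a small Python sketch under the usual gnomonic (tangent plane) approximation; the 400mm focal length and 5um pixel size are arbitrary example values, not the actual setup:

```python
# On a flat sensor a star at angle theta from the optical axis lands at
# r = f * tan(theta) (gnomonic projection), so the same 1-degree star
# separation spans more pixels near the frame edge than at the centre -
# which is why large dithers change star-to-star distances between subs.
import math

f_mm, pixel_um = 400.0, 5.0  # example values only

def radius_px(theta_deg):
    return f_mm * math.tan(math.radians(theta_deg)) * 1000 / pixel_um

for start in (0.0, 5.0, 10.0):
    sep = radius_px(start + 1.0) - radius_px(start)
    print(f"1 deg starting {start:4.1f} deg off-axis -> {sep:7.1f} px")
# prints ~1396.4, ~1409.2 and ~1444.3 px respectively
```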
  11. I think the next simple advancement would be an OSC camera with a slightly different bayer matrix - instead of RGGB, one can imagine something like LRGB, with one pixel being full spectrum instead of having a filter.
  12. That depends on the response curves of the filters. L = R+G+B only in a "special" case - where L, for example, covers the 400-700nm range, and B, G and R cover 400-500, 500-600 and 600-700nm respectively. There should be no gaps and no overlaps, and the QE of the filters should be the same. However, the technique I described above can be used regardless of whether the filters subdivide the 400-700nm range precisely. If one has algorithms that:

a) normalize frame intensity - meaning equalize both signal strength and background. Such an algorithm should be part of the processing workflow anyway, because even with a single filter there will be differences in intensity and background level between subs over a single evening: the target moves, so atmospheric attenuation changes with air mass, and LP levels change during the night with both time and target position in the sky;

b) do good SNR-based sub weighting (again, one should use this anyway, because of a));

then stacking L together with synthetic L subs made out of R+G+B, and similarly stacking colors with subs made out of L minus the other colors, will improve SNR and still produce perfectly acceptable results. In both cases - regular LRGB and this sort of mixed LRGB - one needs to do color calibration to get proper color, so it makes no difference if one color comes out a bit stronger or weaker than with single filters (their strengths depend on the QE of the filters and of the camera in the first place, so they are not "proper" either). L itself depends on the QE of the sensor, so it is not uniform, yet we don't have trouble using it like that. If some data is missing, or there is some overlap, it is like using a sensor with a different QE curve behind a regular L filter - the results will be acceptable in the same way they are in the first place, depending on the QE curve of the sensor.
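As a rough illustration of the stacking side of this, here is a minimal Python sketch; it assumes calibrated, intensity-normalized subs already loaded as numpy arrays, and the SNR weighting shown is just one crude proxy among many:

```python
# Minimal sketch: build synthetic L subs from RGB triples and stack
# everything with SNR-based weights. Assumes normalized numpy arrays.
import numpy as np

def synthetic_L(r, g, b):
    """Synthetic luminance from one RGB triple of subs (L ~ R+G+B)."""
    return r + g + b

def snr_weight(sub, bg_box=np.s_[:64, :64]):
    """Crude SNR proxy: median signal over a background noise estimate.
    The 64x64 corner patch used for the noise is just an example choice."""
    noise = sub[bg_box].std()
    return np.median(sub) / noise if noise > 0 else 0.0

def weighted_stack(subs):
    w = np.array([snr_weight(s) for s in subs])
    return np.average(np.stack(subs), axis=0, weights=w)

# usage: stack real L subs together with synthetic ones, e.g.
# L_final = weighted_stack(L_subs + [synthetic_L(r, g, b)
#                                    for r, g, b in rgb_triples])
```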
  13. I have one better for you. Let's do LRGB, but to get the most out of the data, let's compute the luminance stack from L and (R+G+B) subs, the R stack from R and (L-(G+B)), the B stack from B and (L-(G+R)), and so on ... I think that would be the best use of the data. The CMY approach is in principle doable, and has even been tried and measured for performance - look here: http://www.astrosurf.com/buil/us/cmy/cmy.htm I'll quote just the conclusion at the end, for those who don't want to read the whole page:
  14. Ok, let's do a comparison of "features": Canon 6D vs ASI183MM (mono cooled version), each with an appropriate scope so that both give a nice resolution of 2"/px and are light enough to be handled by an HEQ5. I'm going to "assume" "lower end" scopes here, so no Taks and such.

In order to sample at 2"/px with a color sensor that has 6.5um pixels, you need ~1350mm of focal length. I know, I know - many will point out that 6.5um at 1350mm gives 1"/px, but that is for a mono sensor, where each pixel counts. With an OSC sensor, if you want a true sampling rate of 2"/px, the like-colored pixels are spaced twice as far apart in the bayer matrix, so you need twice the focal length. You can do interpolation debayering as many do, but that is not real sampling - it interpolates sparse samples. You won't record actual information, you'll make it up, and due to Nyquist you won't recover the higher frequencies and detail that way. There is a decent "candidate" scope for this focal length, though I'm not sure it can really cover a full frame sensor: a 6" RC has 1370mm of focal length, and according to the TS website there is a x1.0 field flattener for RC scopes that should correct up to a 45mm circle. Not sure it will produce acceptable results with a full frame sensor on a 6" RC, but let's suppose it will. So we've sorted the Canon: we found a combination that makes it truly sample at 2"/px while gathering 6" worth of light.

Let's see what we can pair with the ASI183 to give the same FOV and sampling rate. Due to its very small 2.4um pixels, we would be limited to ~250mm of focal length, which is very hard to get with any but the smallest scopes. But we are going to use a trick here. The ASI183 has a very large pixel count - 5496 x 3672 - and since we decided to use "super pixel mode" on our Canon to get a true sampling rate, effectively cutting its pixel count by 4 (2x in width and 2x in height), we can do something similar with the ASI183: bin x2 in software, which lets us use 500mm of focal length. What sort of scope can we use then? We could choose some wacky hyperbolic newtonian at F/2.8 or something like that, but since we said no Taks, we will disregard this option (although I've seen a very interesting offering from TS at half the price of a Tak Epsilon), and let's also skip the Boren-Simon astrographs, although a 6" F/3.6 would be fairly interesting in this combination. I think the only "realistic" option here is a 4" refractor reduced to about 500mm (thus F/5 - I don't think there would be any problems getting there), although a 130PDS with the x0.73 CC might be a cheaper option - but that would give ~475mm, which is sort of "cheating", as we would sample at ~2.1"/px rather than 2". Anyway, let's go with that: Canon 6D on a 6" scope vs ASI183 on a 4" scope.

Canon 6D:
- light gathering of the scope: 225%
- effective resolution: 2736 x 1824 px
- read noise: 4.8e at ISO800 (source: https://clarkvision.com/articles/evaluation-canon-6d/ )
- QE: peak of 49% (according to this: https://www.dpreview.com/forums/post/53054826 )
- remarks: no set point temperature control / questionable calibration

ASI183MM:
- light gathering of the scope: 100%
- effective resolution: 2748 x 1836 px
- read noise: 4.4e (2.2e at unity gain; binning x2 gives 4.4e of read noise per "pixel")
- QE: peak of 84%
- remarks: proper calibration

With regards to FOV: virtually the same, so no difference there. If we assume a nebula target, the Canon is at a disadvantage even with 2" more aperture: QE accounts for a 170% difference in captured light, and with OSC you capture Ha with only 1/4 of the pixels - an additional 400% difference. Together that is 680%, or x3 the 225% advantage that comes from aperture. The ASI183 will also have lower read noise, and will calibrate properly. There ya go - if you aim for a fixed sampling rate and an HEQ5, for imaging nebulae the ASI183 on a 4" scope is a better choice than the Canon 6D on a 6" scope.
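For reference, the sampling-rate arithmetic used throughout this post is just arcsec/px = 206.265 * pixel_size(um) / focal_length(mm); a few of the numbers above, recomputed in Python:

```python
# Plate-scale formula behind the focal length choices above.
def sampling_rate(pixel_um, focal_mm):
    return 206.265 * pixel_um / focal_mm

print(sampling_rate(6.5, 1350))      # 6D, per-pixel spacing: ~0.99 "/px
print(sampling_rate(2 * 6.5, 1350))  # bayer-spaced colour:   ~1.99 "/px
print(sampling_rate(2 * 2.4, 500))   # ASI183 binned x2:      ~1.98 "/px
```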
  15. Depends on your intended imaging resolution. What camera do you plan to use it with? If we go by your signature, meaning the ASI1600, then I would choose the following, provided your mount can be guided to about 0.5" RMS: 10" RC + Riccardi x0.75 FF/FR. That gives a working focal length of about 1500mm and, when you bin your subs x2 in software, around 1"/px. For that sort of resolution you will need good seeing and good guiding. An 8" RC + the above reducer will be more forgiving, as it gives 1200mm reduced, and at 3.8um binned x2 that results in 1.3"/px.
  16. I can understand issues with storage, but problems with setup? You place the base on the ground, then put the OTA on the base, and you are ready to go ... I'm not sure any other scope has an advantage in this regard - it is as good as it gets: place the mount on the ground, put the OTA on the mount. The only thing that beats it is a grab&go setup - you place the whole thing on the ground and you are ready to go. Mind you, such a grab&go will have much lower light gathering, and you won't see as much of the deep sky as with 8" of aperture. Planets and the Moon will be doable (although, again, aperture matters there as well).
  17. That one is indeed difficult. I prefer set point cooling because it allows for proper calibration. That rather rules out DSLR type cameras, but you have a very good point. A larger sensor with a larger scope will have a "speed" advantage over a small sensor with a small scope providing the same FOV. Even if the sampling rates are mismatched, in principle, after using fractional binning to equalize them, aperture will win. By how much, and whether it will overcome the SNR-per-unit-time advantage that mono offers - I guess that depends on the actual numbers (QE of the sensors involved, their total surface, the scopes involved, etc ...). But if we go there, we will start discussing the proper mounts needed to handle a large scope + large sensor vs a smaller combo, price/performance ratios and all the nice things we tend to discuss anyway in separate topics.
  18. But that is a very significant advantage - not so much in lowering dark current and the associated noise, but in being able to perform proper calibration.
  19. I do understand OSC/narrowband imaging when one already has an OSC camera, no budget to invest in a mono camera, and wants to try something new or combat LP. Choosing an OSC camera based on its ability to do narrowband, however, is rather wrong, even with duo/tri/quad band filters. Mono is simply more sensitive in this role, and the difference is quite large: only 1/4 of the sensor is sensitive at Ha/SII wavelengths, about one half at OIII, and one quarter at Hb. These are rough figures, but you get the idea. Duo/tri/quad band filters let you image several wavelengths at the same time, but this is not restricted to OSC sensors - they can be used with mono sensors as well, and I think that is a great way to bring the LRGB approach to narrowband: the multiband filter serves as luminance and single band filters provide the color.
  20. Hm, tough one ... In the second image I like the way the cluster is rendered / presented - it is not too "in one's face" - the outer regions are more subtle, which I like. There are, however, some things I prefer in the first image. One is color tone: I find it more natural in the first image - blue stars are as they ought to be and there are yellow stars. The second image has a certain red shift in tone, as far as I can tell. Also, in the first image the star shapes are better, or more natural looking - they appear too flat in the second image for some reason. I'll post a comparison to explain it better. First image: in this section you can tell that the arrow points to two stars, although they are "touching". Same region in the second image: you can almost tell in this one too, but the star edges are too abrupt - it almost looks like a single elongated entity, because some "glow" or "softness" is missing around the stars (the background, on the other hand, looks better to me in the second image). Here is another example - two groups of stars that look better resolved in the first image, with a somewhat "flatter" look in the second (also notice the red cast in what should be yellow stars):
  21. Yes, it should be - just be careful: if it is an ADU value, you might need to convert it to electrons for the above to work.
  22. Rather simple - you take a sub of an "empty" piece of the sky (there really is no empty piece of the sky, but aim to miss the Milky Way and any major galaxy or nebula, and aim for a sparse star field) and you calibrate that sub with a dark (you can also apply a flat, but in principle it's not needed if you measure in the central region to avoid vignetting and also avoid any major dust shadows). Then you measure the signal in each channel - dividing it by the exposure length and the number of pixels measured gives you the sky flux per pixel per second. You also need to know your e/ADU gain and convert the measured ADU values to electrons.
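A minimal Python sketch of that measurement, assuming a dark-calibrated sub already loaded as a numpy array (the function and parameter names are just illustrative):

```python
# Sky flux in e-/px/s from a dark-calibrated sub, measured on a
# central crop to stay clear of vignetting.
import numpy as np

def sky_flux(calibrated_sub, exposure_s, gain_e_per_adu):
    h, w = calibrated_sub.shape
    crop = calibrated_sub[h//3 : 2*h//3, w//3 : 2*w//3]  # central third
    mean_adu = crop.mean()              # average sky level per pixel
    return mean_adu * gain_e_per_adu / exposure_s

# e.g. sky_flux(sub, exposure_s=120, gain_e_per_adu=1.0)
```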
  23. You are right - according to https://www.telescope-optics.net/Mak-Newton.htm there is some coma in the MN design, though it should be less than in a standard newtonian. Although it has a spherical primary, it looks like the meniscus corrector in front introduces some coma into the system. I've never heard of people using a coma corrector with this scope, even for imaging. The quote from the linked page (at the bottom) does suggest that tilt could be the culprit for funny stars in the corner, as these scopes tend to be sensitive to it. Images I've seen taken with this scope usually don't show coma in the corners, but they might be cropped. People have, however, had corner star issues caused by the collimation of this scope, like in this thread:
  24. This scope is a Maksutov Newtonian, so it should not have coma - no coma corrector needed. There are a couple of things that can be wrong. First is collimation. I have no idea how to collimate a MakNewt, but it has a secondary mirror as well as a spherical primary. The primary can be tilted - that will "move" it off the optical axis with respect to the corrector plate. I don't think the corrector plate can or needs to be collimated - but I might be wrong there. The secondary can probably also be collimated. There should be resources online both for checking collimation and for performing it. Here is an example of a thread that deals with MN collimation - might be of interest: https://www.cloudynights.com/topic/590912-collimation-on-a-maksutov-newtonian/ Other than that - it can be sensor tilt. Although the camera adapter is a "snug fit" inside the focuser, either the focuser itself can sit at an angle, or there may be enough play to cause tilt. The first thing to do would be to examine each corner and also check collimation. Tilt is probably the last thing to consider if it is indeed a snug fit.
  25. The 150PDS will be well suited to the 183MC if you use super pixel mode when debayering the color data. With a focal length of 750mm (~675mm with the x0.9 CC) you get ~0.66"/px for the 2.4um pixel size. That would be oversampling, and if you were using a mono camera I would advise binning to 1.32"/px; but since you are using an OSC camera, each color is in effect sampled at twice the pixel spacing - effectively every 4.8um - which gives you 1.32"/px, provided you don't interpolate while debayering ("fill in the blanks", i.e. calculate the "missing" data between pixels). The simple way to achieve this is super pixel mode for debayering, as sketched below.
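A minimal sketch of super pixel debayering for an RGGB mosaic, assuming the raw frame is a numpy array with even dimensions:

```python
# Super pixel debayer: each 2x2 RGGB cell becomes one RGB pixel,
# halving resolution instead of interpolating missing color data.
import numpy as np

def superpixel_debayer(raw):
    # cast to float to avoid integer overflow when averaging greens
    m = raw.astype(np.float64)
    r = m[0::2, 0::2]                        # red sites
    g = (m[0::2, 1::2] + m[1::2, 0::2]) / 2  # average of both green sites
    b = m[1::2, 1::2]                        # blue sites
    return np.dstack([r, g, b])              # half-width, half-height RGB

# e.g. a 5496x3672 raw frame from the 183MC becomes a 2748x1836 RGB
# image, with each color effectively sampled on a 4.8um grid
```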