vlaiv

  1. I would not consider ROI to be a particularly interesting feature for EAA, at least not with fast CMOS sensors. By fast I mean fast download times - USB 3.0 devices. Even USB 2.0 devices have much faster download times than CCDs, where download time depends on the pixel read clock (higher clock speeds increase read noise - so fast readout is useful for framing/focusing and such, but you want a slow/steady readout for actual subs). With CMOS sensors you can just read out the full frame and crop it in software if you want ROI-type functionality. ROI is useful if you go for the ultra-fast readout needed in planetary imaging to achieve high FPS, where you need to download a frame in 5 ms or so. For EAA, where each sub runs from a second or two up to dozens of seconds, downloading a full frame in a couple of hundred milliseconds is no biggie (most sensors can achieve at least 10-15 FPS at full frame).

Here is an alternative way of looking at color sensors, which can help when deciding whether they are worth using over mono (the only benefit being color recording / display in a "single go"). Look at a color sensor as four different sensors "overlapping" - that is basically what you get with a bayer matrix: one sensor with a red filter, one with a blue filter and two with green filters. Each of those sensors has (and this needs a bit of mental work):

1. half the sampling rate of a mono sensor with the same pixel size
2. less pixel blur because of the smaller pixels (with mono sensors, sampling rate and pixel size are tied together, but in this view the sampling rate is that of twice-larger pixels while the pixel size stays the same)
3. because of the above, 1/4 of the QE that a sensor with the larger pixels would have

To put this in perspective, imagine you want to use an OSC sensor in place of a mono sensor. You have a certain resolution set for the mono sensor - let's say you want to go for 1.6"/px. What would be the appropriate OSC sensor to achieve the same sampling rate, and how would it behave? It would be the one with pixels half the size - that would make it "sample at 0.8"/px", but in reality every color would still be sampling at 1.6"/px.

If you need help seeing this, think of sampling resolution not in terms of pixel size, but in terms of how far you need to move to get to the next sample. The first red pixel is at 0"; the next pixel is at 0.8" but it is not red, so you skip it; move again (we are moving in the X direction) and you reach the next red pixel at 1.6". So the first red pixel is at 0", the next at 1.6", the next at 3.2", and so on - the actual sampling rate in red is 1.6"/px. The same goes for the other channels, with green treated as G1 and G2 (the same spectral response, but we look at it as two different sensors). A small sketch of this view follows below.

Now look at pixel size. A mono sensor would have the whole 1.6" x 1.6" pixel surface collecting light, but here only 0.8" x 0.8" collects "red" light, and the same for "blue" - so each channel can be thought of as having 1/4 the QE of the mono pixel. I think this is a better way to think about OSC sensors, particularly for EAA applications.
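A minimal numpy sketch of the "four overlapping sensors" view, assuming an RGGB pattern - the array here is just a synthetic stand-in for a raw OSC frame:

```
import numpy as np

# Stand-in raw frame; a real one would come from the camera driver.
mosaic = np.arange(8 * 8, dtype=float).reshape(8, 8)

r  = mosaic[0::2, 0::2]  # red:    every other pixel in both directions
g1 = mosaic[0::2, 1::2]  # green 1
g2 = mosaic[1::2, 0::2]  # green 2
b  = mosaic[1::2, 1::2]  # blue

# Each channel has half the sampling rate of the full mosaic: with the
# mosaic at 0.8"/px, red samples land 1.6" apart, exactly as described.
pixel_scale = 0.8                 # "/px of the full mosaic (assumed)
print(r.shape)                    # (4, 4) - half the resolution each way
print(pixel_scale * 2)            # 1.6 - effective per-channel sampling
```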
  2. 1 - Debayer settings are probably RGGB, but the camera manual should specify that. Actually, it's given on the ZWO website.

2 - Same as with any other camera. You need a source of light that gives uniform illumination. This can be a flat box, or even a white t-shirt over the scope aperture pointed at a uniformly lit sky at dusk / dawn. Some people use a laptop screen and software that produces a white screen. Just shoot such frames and be careful that no histogram peak (you should have three of them, since it's an OSC camera) is clipping. Make them peak at about 3/4 of the way to the right on the histogram (a quick check script follows below).

3 - For PHD2 performance you can look at the live graph and observe RA, DEC and combined RMS values (expressed in arc seconds - ", not px / pixels). Note the peak-to-peak (P2P) error as well. This will tell you how good your guiding is at any particular moment. If you want to assess a whole session, you can load the PHD2 log files into suitable software and examine the same values and other characteristics of your guide log.

As for the attached gif, it's a bit hard to say since it's not debayered and the pattern can be seen, but some stars look a bit misshapen. This could be due to different causes - optics alignment, guiding performance, or the scope's inherent aberrations (like coma in a newtonian or astigmatism in an RC). Pinpointing it would require examining multiple frames and the final stack.
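A minimal sketch of the flat-frame check from point 2, assuming an RGGB layout, a 16-bit camera and a file named "flat.fits" (all assumptions - adapt to your setup):

```
import numpy as np
from astropy.io import fits  # assumes astropy is installed

flat = fits.getdata("flat.fits").astype(float)  # hypothetical flat frame
full_scale = 65535.0                            # 16-bit output assumed

channels = {
    "R":  flat[0::2, 0::2],
    "G1": flat[0::2, 1::2],
    "G2": flat[1::2, 0::2],
    "B":  flat[1::2, 1::2],
}
for name, ch in channels.items():
    # Each channel should sit around 3/4 of full scale, with no clipping.
    print(f"{name}: median at {np.median(ch) / full_scale:.0%} of full scale,"
          f" max {ch.max() / full_scale:.0%}"
          + ("  <-- clipping!" if ch.max() >= full_scale else ""))
```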
  3. There is no magnification associated with a DSLR or any other sensor. Magnification is the ratio between the angular extent of an object in the sky when viewed with the naked eye and its apparent angular extent when viewed through the eyepiece. An image, on the other hand, has no magnification, because apparent size depends on both the screen size and the observer's distance from that screen. Put the same image on your computer screen and view it from 2 ft away, then move 20 ft away - the image will look smaller (regular perspective; the whole computer screen will look smaller as well).

What helps when comparing the extent of a target visible in the eyepiece vs its extent on the surface of the sensor is the physical size of things at the focal plane. Your sensor is 35.9 mm x 24 mm, which makes its diagonal about 43 mm. A 25 mm eyepiece has a field stop of about 22 mm, or about half of your sensor diagonal. So if M31 fits along your sensor diagonal, you will be able to see about half of it in your 25 mm eyepiece. The Stellarium plugin mentioned above shows this nicely.
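The arithmetic behind that comparison, as a tiny sketch (sensor and field stop both sit at the focal plane, so their ratio is the ratio of sky coverage on the same telescope):

```
import math

sensor_w, sensor_h = 35.9, 24.0           # mm (full-frame DSLR)
diagonal = math.hypot(sensor_w, sensor_h) # ~43.2 mm
field_stop = 22.0                         # mm, typical 25 mm eyepiece

print(f"sensor diagonal: {diagonal:.1f} mm")
print(f"eyepiece shows {field_stop / diagonal:.0%} of the sensor diagonal")
```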
  4. M81/82 and M51 are surely visible if M1 is. You should have a go at those. M3/M5, and later in the night M13, should be something to check out - they should all be visible even under strong LP. Maybe have a go at the Virgo galaxies as well.
  5. Such a large sensor is suited to larger scopes. There is a really good mono alternative that sits between the 178/290 and the 294 in both size and price - what do you think about the 183 mono sensor?
  6. This is related to mount "smoothness" - one of the key aspects of "premium" mounts. You want your mount to be able to go 5 or more seconds between corrections without worrying whether it's going to spike, i.e. it needs a smoothly changing error. That way you can comfortably use longer guide exposures, which smooth out seeing and everything else.
  7. I'm sure that both the CEM60 and CEM60EC are capable of, let's say, 0.5-0.6" RMS guided performance - but so are the HEQ5/EQ6 (belt modded and tuned, that is). The true difference would be seen in actual RMS and P2P errors. If the EC version can do, for example, 0.3-0.4" while the regular one only gets down to 0.5" RMS, and the EC version never has a P2P error larger than 0.6-0.7" while the regular one goes up to 1.2" and beyond - then that puts them into separate mount classes performance-wise.
  8. Thanks for that. I'm not a member of CN (nor tempted to join) so I can't ask further questions, but that thread does not inspire confidence in the claim that the CEM60EC is not worth the money. No actual figures are given, the guide scope may be a small one of ~250mm FL - which would not provide adequate resolution to really see the difference in RMS that would differentiate the two mounts - and guiding 20 min exposures is not that big of a feat. I'm certain that the CN member really sees no benefit from the EC version, but the question remains whether someone with different requirements would make the same call.
  9. Any details on this? How do the two compare?
  10. At first I thought it would be an issue, but after running some tests, it turned out not to be much of one. Current sensor designs use read noise to dither quantization errors, and even with quite a bit of clipping (gain above unity) there is not much impact - the noise distribution remains fairly OK and stacking works as expected. I simulated the ASI1600 at the 0 gain setting, and although there is an increase in noise due to quantization error, it still stacks OK, since read noise is 3.4e at this setting (a small simulation sketch follows below). I personally still like to use unity gain, or higher gain settings that don't introduce quantization errors into the photon count (read noise does get shaped by this - but it always is, regardless of the e/ADU used) - i.e. integer-fraction e/ADU values: 1/2, 1/3, etc.
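A minimal sketch of that kind of test: quantize a faint signal at e/ADU above unity (the ASI1600 at gain 0 is roughly 5 e/ADU with ~3.4e read noise - assumed values) and check that stacking still recovers the signal, because read noise dithers the quantization steps:

```
import numpy as np

rng = np.random.default_rng(0)
signal_e, read_noise_e, e_per_adu, n_subs = 7.3, 3.4, 5.0, 10000

photons = rng.poisson(signal_e, n_subs)                 # shot noise
subs_e = photons + rng.normal(0, read_noise_e, n_subs)  # add read noise
subs_adu = np.round(subs_e / e_per_adu)                 # quantize to ADU
stack_e = subs_adu.mean() * e_per_adu                   # stack, rescale

# The stacked estimate lands very close to the true mean despite the
# coarse 5 e/ADU quantization.
print(f"true signal: {signal_e} e, stacked estimate: {stack_e:.2f} e")
```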
  11. That is something you should definitely try out - a simple uncooled camera with very low read noise can be a really interesting option for such a large scope and very short exposures. Check out these, done with similar setups: https://www.cloudynights.com/topic/502370-4-second-exposures-asi174mm-16-dob-m51-and-m57/ https://www.cloudynights.com/topic/536494-m51-in-poor-seeing-20001s-asi1600mm-cool/
  12. It's about stacking and the way noise adds. Noise adds like linearly independent vectors (in quadrature), so large LP shot noise will "absorb" a small amount of read noise. Read noise is the only parameter that does not depend on time - signal, shot noise, thermal noise and LP noise all depend on time, and when you stack these you get the same result as going with a single long exposure. The only place where multiple stacked exposures differ from a single long exposure (of the same total time) is read noise. This is of course in terms of SNR; other things also matter - the amount of data, the need for precise tracking / guiding, frames lost to unforeseen events, etc. (A short sketch of the SNR comparison follows at the end of this post.)

Once you have a source of noise that is dominant over read noise - whatever that may be: LP noise, thermal noise or target shot noise - the difference between stacked frames and a single exposure becomes very, very small. As it happens with AP, targets are faint and we use cooled cameras, so the only thing left to dominate read noise is LP noise - hence the whole logic. Indeed, one would improve the SNR of the result by using a single exposure over multiple exposures even when read noise is swamped, but the difference is so small that it's not worth it (in light of the other things related to multiple exposures).

Back to the original topic. Whether the traditional long exposure goes away will really depend on two factors. One is computing power and algorithms - and that develops really fast; even now we don't need more powerful hardware to do this, everything is already invented and affordable, we just need software support. The second is even lower read noise sensors.

Just as a thought experiment, but not far removed from what could be expected in the near future, imagine a zero-read-noise sensor - a "digital" photon counting device that adds no read noise. Such a device could take exposures of any length and produce exactly the same result as an equivalent device (pixel size, QE and the rest) doing a single exposure. Yes, this means one could take 5 ms exposures, stack 2,880,000 of them and get exactly the same result as a single 4h exposure. There would be less need for full well depth and less need for a high ADC bit count.

One might wonder: well, how about storage space? Why not do the stacking in real time - no need to store all those frames, as live stacking can be used to produce a number of "sub stacks" or even a single stacked image.

What would be the benefit of such an approach? Well, we just showed that one would have no need to store a large number of subs (or even a small number). Other benefits: no need for guiding whatsoever - even poorly performing mounts would be excellent. There would also be a way to select frames based on seeing - something people doing planetary lucky imaging already employ. Passing satellite? Just drop that half a second... Wind / cable snag? Just drop those few seconds and continue... That is all that's needed to completely change both the way we gather images and the quality of the result.

Btw, DSO lucky imaging is something people are already doing. I've seen some extraordinary images made with large dobsonian telescopes and tens of thousands of short exposures.
Here are a couple of threads on this topic on CN: https://www.cloudynights.com/topic/550166-m57-with-short-exposures-10000x500ms/?hl=lucky imaging https://www.cloudynights.com/topic/550864-ngc-40-with-short-exposures-21000x500ms/?hl=%2Bngc40+%2Bwith+%2Bshort#entry7443798 And people are writing presentations to explain this type of imaging: http://www.cedic.at/arc/c11/dwn/CEDIC11_FilippoCiferri.pdf The only things "holding back" this type of imaging are read noise and good software to process things on the fly.
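The promised sketch of the SNR argument: N stacked subs differ from one long exposure only through the read-noise term, which enters once per read. The rates below are illustrative assumptions, in electrons:

```
import math

target_rate, lp_rate, read_noise = 0.5, 5.0, 1.5  # e/s, e/s, e per read
total_time, n_subs = 3600.0, 120                  # 1h total as 120 x 30s

def snr(total_time: float, n_reads: int) -> float:
    signal = target_rate * total_time
    noise = math.sqrt(signal + lp_rate * total_time
                      + n_reads * read_noise ** 2)  # quadrature sum
    return signal / noise

print(f"single exposure: SNR = {snr(total_time, 1):.2f}")   # ~12.79
print(f"{n_subs} stacked subs: SNR = {snr(total_time, n_subs):.2f}")  # ~12.71
# With LP noise dominant, the two come out nearly identical; with
# read_noise = 0 they would be exactly equal.
```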
  13. No, don't do that. Small-sensor CMOS "dedicated" cameras are not a replacement for a DSLR. There are cases when you want such a camera - like when you need a guider / fast planetary camera, or an EAA camera: something lightweight with a small sensor. Or perhaps you have a set of filters and want to give mono + filters a go, but then again there are better models for that (with larger sensors). The true step up from a DSLR in terms of dedicated cameras would be either a large mono version, or a cooled camera, or a combination of the two. A large-format color CMOS dedicated camera without cooling can be a better option than a DSLR in certain respects, but it has drawbacks as well. The positives are lower read noise, no need for modding, easier mounting and lower weight. The negatives are the price, astro-only use, and the need for a laptop to operate it.
  14. Not an easy one. I'm inclined to say 178, but for EAA applications the 290 has a slight advantage in lower read noise. If we for example take a "10-bit ADC" (or shall I put it as 1024e full-well capacity rather than bit count, for easier understanding) for short exposures and compare the read noise of the two cameras: the ASI178 gets there at a gain setting of about 233, where read noise from the published graph is about 1.4e; the ASI290 will have that much FW at a gain of about 232 (a strange coincidence that these two numbers are so close - don't read much into it), where read noise is about 1.1e. (A small sketch of this full-well arithmetic follows below.)

It's clear that the 290 has a slight advantage in this regard - lower read noise lets you reach the same SNR with shorter exposures (for the same total integration time), which is a very positive thing for EAA. On the other hand, the 178 has three times as many pixels - this allows for binning, either "on the fly" or afterwards, to further boost SNR. It gives good resolution with your proposed setup and in general I find it quite an OK sensor.

I don't have any examples of the 290 for DSO, but here are two images taken with the 178 - a slightly different model, but a comparable setup. Mine is the color cooled model, and these were shot at 384mm F/4.8 (80mm F/6 with a x0.79 FF/FR). Both were "binned" (superpixel mode) and the second one was cropped because it came from a two-night session (not perfect alignment). The sky was on the border of white / red zone - around SQM 18.5. The first is 4h of exposure under good conditions, the second is 8h over two nights under slightly worse conditions (a bit of fog/haze - transparency issues).
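The full-well arithmetic, assuming ZWO's gain setting is in units of 0.1 dB (so effective full well falls as FW(g) = FW(0) / 10^(g/200)) and taking zero-gain full wells of roughly 15000e (ASI178) and 14600e (ASI290) from published specs - both assumptions, check your camera's graphs:

```
def full_well(fw0_e: float, gain_setting: int) -> float:
    # Gain setting in 0.1 dB units; 200 units = 20 dB = a factor of 10.
    return fw0_e / 10 ** (gain_setting / 200)

print(f"ASI178 @ gain 233: {full_well(15000, 233):.0f} e")  # ~1026e, '10-bit'
print(f"ASI290 @ gain 232: {full_well(14600, 232):.0f} e")  # ~1010e
```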
  15. I think there is your explanation, together with the CLS filter. It's just a regular type of noise, in this case dominant in the red part of the spectrum. If you look closely, you will see that it's not all red - there is a bit of blue as well (probably the blue black point was pushed down in comparison to red). Green is lacking because the CLS cuts off the part of the spectrum where green is sensitive (the middle of the spectrum). If you split your subs into R, G and B components and process each as mono, you will find that each is noisy - a quick way to check follows below.
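A minimal sketch of that per-channel check, assuming a debayered sub stored as a (3, H, W) RGB FITS file named "sub.fits" (both assumptions - adapt to how your software stores channels):

```
import numpy as np
from astropy.io import fits  # assumes astropy is installed

rgb = fits.getdata("sub.fits").astype(float)
for name, channel in zip("RGB", rgb):
    # Robust background-noise estimate via median absolute deviation.
    mad = np.median(np.abs(channel - np.median(channel)))
    print(f"{name}: background sigma ~ {1.4826 * mad:.1f} ADU")
```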