Everything posted by vlaiv

  1. The checkerboard pattern that you are seeing is not related to darks (at least not directly). If you look at the image at 100% zoom level - it's gone: it is there only when the image is resized (to fit the screen, for example) and a low quality interpolation method is used for the resize. With high quality interpolation resizing it won't happen. Here is an example of the effect: on the left is the image viewed at 66% of original size, and on the right is the image resampled to 66% of original size using Lanczos interpolation. I have noticed this effect before and I'm not 100% sure what is causing it, but my suspicion is that it is caused by linear debayering in combination with a very slight rotation when aligning frames - just a few degrees. Look what happens to gaussian type noise when I rotate it by ~1 degree using bilinear interpolation (I adjusted the contrast so it can be seen more easily): a grid pattern emerges.
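Here is a minimal numpy/scipy sketch of that rotation experiment, not necessarily how the original test was done - it rotates pure gaussian noise by ~1 degree with bilinear interpolation and shows the periodic modulation of the noise level that reads as a grid:

```python
import numpy as np
from scipy.ndimage import rotate

# Rotating pure gaussian noise by ~1 degree with bilinear interpolation (order=1):
# the amount of smoothing each output pixel gets depends on its sub-pixel shift,
# and that shift repeats periodically across the frame - hence the grid pattern.
rng = np.random.default_rng(42)
noise = rng.normal(0.0, 1.0, size=(512, 512))
rotated = rotate(noise, angle=1.0, reshape=False, order=1)

crop = slice(50, -50)  # stay away from edges affected by the rotation
for name, img in (("original", noise), ("rotated", rotated)):
    row_std = img[crop, crop].std(axis=1)
    print(name, round(row_std.min(), 2), round(row_std.max(), 2))
# The original noise level is flat from row to row; after bilinear rotation it
# swings periodically with row position - visible as a grid once stretched.
```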
  2. From the text I linked, the isoplanatic angle is calculated to be 2" in visible light. To put this into perspective, here is an image of Jupiter as it would appear under high magnification, with a 2"x2" area marked: the yellow dot is the corrected area. In fact, at this scale it is even less than that - about a single pixel - but I was not able to mark it that small with the tool I was using. This is with x300 magnification. The rest of the image would be "normally blurred" (or even a bit more than normally, as it would be affected by the deformed mirror - the mirror would be deformed to correct just that tiny yellow patch, but it would cause additional blur for the rest of the image where the deformation does not match the seeing).
  3. Here is a bit more on this, taken from: http://www.vikdhillon.staff.shef.ac.uk/teaching/phy217/telescopes/phy217_tel_adaptive.html
  4. It would be useless for amateur observing needs. The problem is that seeing is both time and direction dependent. Each point in the sky has its own wavefront aberration and is in principle different from any other point in the sky. To be precise, there is a very small "window" around a point that has the same / similar aberration - something like a few arc seconds in radius - and this is the zone that adaptive optics works in. Such a system can't even correct a field large enough for planetary observation (in visible light) - it is mostly used for stellar sized objects.
  5. I don't think that is the case. As far as I can see from the images, all edges on the focuser tube are parallel to the secondary spider vanes. That would just make the existing spider spikes stronger.
  6. Everything looks pretty standard. The only thing that I can think of is to check those mirror clips. Is one protruding into the light path (covering part of the main mirror) much more than the other two? That could be the cause of a spike at a 60 degree angle - but that would be an additional small spike on top of the already existing ones that are at 90 degrees. Even the reflection of the secondary spider looks ok from this angle.
  7. That would be helpful. If you can (even a phone-at-the-eyepiece type of image), take both focused and defocused images of Vega. One will show the spikes and their angle, and the other will show collimation.
  8. I'd be really surprised to see them at any angle other than 90 degrees. Secondary offset is normal and depends on the speed of the scope (it is more important in faster scopes like F/5 and F/4) - it has nothing to do with what you are seeing.
  9. Diffraction spikes are formed by straight edges in the optical system. The resulting spike is perpendicular to the edge that created it. A single vertical stalk will create a horizontal spike. This spike will extend both ways from the star in the horizontal direction - not just one. The cross that we see at 90 degrees on bright stars with telescopes that have a regular spider comes from the supports - but not as one might think: the vertical support creates the horizontal spike and the horizontal support creates the vertical spike. It is interesting that three spider supports at 120 degrees holding the secondary will create 6 spikes oriented in a hexagonal lattice. In order to have two spikes that are roughly at 60 degrees to each other, you'd need some sort of V-shaped straight edge in the optical system with the sides of the V at 60 degrees. Either that or an X shape - again where two angles would be 60 degrees and the remaining two 120. Any sort of issue with the primary, like pinched optics or a tilted primary, would show primarily in the star image. What is your defocused star image like?
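As a quick numerical illustration of the "spike perpendicular to the edge" rule, here is a minimal sketch; the grid size, pupil radius and vane width are arbitrary values chosen just for the demo. The far-field diffraction pattern is proportional to |FFT(aperture)|^2, and a single vertical vane across a circular pupil produces a horizontal spike:

```python
import numpy as np

# |FFT(aperture)|^2 approximates the far-field diffraction pattern. A single
# vertical vane across a circular pupil should therefore add a horizontal spike.
n = 512
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]

aperture = (x**2 + y**2 < 200**2).astype(float)   # circular pupil, radius 200 px
aperture[np.abs(x) < 2] = 0.0                     # thin vertical vane across it

psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(aperture)))) ** 2
psf /= psf.max()

# Compare intensity 50 px from the star along the horizontal vs vertical axis:
print(psf[n // 2, n // 2 + 50], psf[n // 2 + 50, n // 2])
# The horizontal value is far larger - the spike is perpendicular to the vane.
```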
  10. Your question and example don't seem to match. To the question: in principle, no. If camera A is better than camera B, it will be so in Bortle sky X and in Bortle sky Y alike. The better camera is always better, regardless of what sort of sky you use it in. The only thing that can change is the "quality gap" between the cameras. For the example, the answer is yes (see the mismatch?) - but so will any other camera. This performance improvement has nothing to do with the camera itself, but rather with light pollution as a source of noise. With astro imaging it's all about the SNR - and in a given amount of time the signal will be the same in Bortle 4 and Bortle 7/8 (the useful signal / target signal, that is). What will be different (and substantially so) is the level of noise. This has nothing to do with the camera used and will be so for any camera. Mono included.
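A small sketch of that argument, with made-up electron rates and read noise values (not real camera specs), just to show that the better camera stays better under both skies while the gap shrinks when sky noise dominates:

```python
import math

# "The better camera is better in both skies, only the gap changes."
# All rates are made-up illustration values (electrons/s and electrons), not specs.
def sub_snr(target_eps, sky_eps, read_noise_e, exposure_s):
    """Per-sub SNR: target shot noise + sky shot noise + read noise."""
    signal = target_eps * exposure_s
    noise = math.sqrt(signal + sky_eps * exposure_s + read_noise_e**2)
    return signal / noise

for sky_name, sky_eps in (("Bortle 4", 0.5), ("Bortle 8", 8.0)):
    for cam_name, read_noise in (("camera A", 1.5), ("camera B", 3.5)):
        print(sky_name, cam_name, round(sub_snr(0.2, sky_eps, read_noise, 120), 2))
# Camera A (lower read noise) wins under both skies, but under the bright Bortle 8
# sky the sky shot noise dominates and the gap between the cameras nearly vanishes.
```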
  11. No. Yes. I don't think it is related to the clips. Are you sure your collimation is ok? Did you remove the secondary spider while tending to the primary, and if so, was it properly returned to its place?
  12. Hi and welcome to SGL
  13. I think that it is in fact right. 263 kHz is a rather slow readout rate. https://www.aavso.org/what-download-time-ccd-readout
  14. Indeed - I just found some specs for HiPERCAM and it does allow ~1000Hz sampling, which is of course much better than regular amateur planetary cameras. The only issue that I can see is a somewhat higher read noise than wanted at ~4.5e, but given that the sampling rate is 0.081"/px versus the critical sampling for up to 400nm and a 10.4m aperture of ~0.003967"/px - so about half of that - it might not impact results as much (it can be viewed as a camera that has 2.25e of read noise at the critical sampling rate). There is only one thing that I can't decipher, and that is the relation of readout speed in kHz to frame rate. Does it depend on ROI? It says that the above 4.5e is for a 263 kHz readout rate, so I'm guessing that would be 263,000 A/D conversions per second, right? With ~200fps (or should we go with 263fps for simplicity) - that is only 1000 A/D conversions per frame - and that is 33x33 pixels? That can't be right?
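A quick back-of-the-envelope check of the numbers quoted above - a sketch based only on the figures in this post, not on HiPERCAM documentation:

```python
# Values quoted above, not taken from HiPERCAM documentation.
pixel_rate_hz = 263_000          # quoted ADC conversion rate (263 kHz)
frame_rate_hz = 263              # assumed frame rate, chosen for round numbers

conversions_per_frame = pixel_rate_hz / frame_rate_hz   # 1000 conversions per frame
roi_side_px = conversions_per_frame ** 0.5              # ~31.6, i.e. roughly 32x32 px

print(conversions_per_frame, round(roi_side_px, 1))
# A single 263 kHz output could only read a ~32x32 px ROI at these frame rates,
# so presumably the quoted frame rates rely on multiple parallel readout channels.
```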
  15. What are you going to use then to capture the data? I'm not convinced that the device usually attached will provide the necessary capability (very fast readout of a small ROI + low read noise?).
  16. What are your future astronomy plans? Do you plan on doing long exposure astrophotography with a DSLR (or dedicated astronomy camera) at some point? If so, any planetary camera is a sound investment at the moment - you'll always be able to use it as a planetary camera and it will double as your guide camera at some point. For planetary imaging you want several things:
    - High frame rate, with a lot of short exposures taken. This is why we talk about video rather than still images. We are in fact taking a large number of short exposures (you can think of them as stills) - 20-30 thousand subs per run is not uncommon for targets like Jupiter or Saturn. In order to do this, the camera runs at very high FPS - 150-200fps is typical. You can't do this with a DSLR - you'll be limited to say 30fps to 60fps depending on the model, and this means that in the same time you'll capture x5-6 less data.
    - You want your data to be "intact" - what we call raw data. You don't want any manipulation done to it - especially compression that loses information (so called lossy compression). A DSLR can shoot video, but in order to do so and still be able to write it to SD cards (which are not known for great speeds) it will be compressed. Check if your DSLR is capable of shooting in RAW video mode.
    - You want to match pixel size to the F/ratio of your scope - or rather the other way around: given the pixel size, you want the F/ratio to be between x4 and x5 that number. Say your pixel size is 4um - then you want to be between F/16 and F/20 (x4-x5). You use a barlow lens to achieve this (see the sketch after this post).
    In any case, seriously consider getting a dedicated planetary / guider camera like this one: https://www.firstlightoptics.com/zwo-cameras/zwo-asi224mc-usb-3-colour-camera.html (it is fairly cheap at the moment, but still one of the best models for planetary imaging). Just note that you'll need a laptop or other computer with a USB 3.0 port to capture and process the data. By the way, you can use an eyepiece as well as a barlow to achieve magnification. You can even use both of them at the same time (although without any benefits and with some drawbacks). An eyepiece works a bit differently than a barlow lens - it will allow you to either magnify or even reduce the image size, depending on the distances involved (between eyepiece, focal plane and camera sensor). The general rule is that fewer optical elements means a better image, so while you can use an eyepiece, a barlow is both simpler to mount and use and has fewer optical elements, so it will produce a better image - it is the preferred choice for this and other uses.
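A minimal sketch of that pixel size rule of thumb; the 4um pixel and F/6 scope below are made-up example values, not anyone's actual setup:

```python
# The "pixel size x4-x5 = target F/ratio" rule of thumb from the post above.
def target_f_ratio(pixel_um, low=4.0, high=5.0):
    """Suggested F/ratio range for planetary imaging: pixel size in um times 4-5."""
    return pixel_um * low, pixel_um * high

def barlow_needed(native_f_ratio, wanted_f_ratio):
    """Barlow magnification required to reach the wanted F/ratio."""
    return wanted_f_ratio / native_f_ratio

low, high = target_f_ratio(4.0)           # 4 um pixels -> F/16 to F/20
print(low, high)                          # 16.0 20.0
print(round(barlow_needed(6.0, low), 2))  # an F/6 scope needs roughly a x2.7 barlow
```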
  17. I hope you get the chance. I think the results would be very interesting to see. If you ever get the chance to do it, maybe do a couple of runs with different exposure times / focal lengths (with / without a barlow and changing the sensor / barlow element distance). With 10m of aperture, I guess one can afford to undersample somewhat if that will reduce the exposure time needed to go sub-millisecond - just in case seeing behaves differently on such large apertures (although from what I read on adaptive optics for large telescopes, it behaves pretty much the same).
  18. No. What color banding are you talking about? Those are often perpetuated myths.
First, I've shown you how you can easily get 16bit data by stacking 16 subs from a 12bit ADC - to the same level of performance as data acquired with a true 16bit ADC.
Second, light comes in photons - very few photons. Most targets out there that we image need far fewer bits per exposure. A signal of, say, a few hundred ADUs per exposure is deemed very strong, and you only need 8-9 bits to properly record it.
Third, in astrophotography we mostly deal with stacking - regardless of what type of ADC we use. This means that we end up with a bit count that is bigger than either 12 or 16 bits in our final stacks. For each doubling of the number of subs stacked we add one bit of bit depth. Stack 32 subs - we increase bit depth by 5. Stack 128 subs - and we add 7 bits. That is why stacking 16 subs produces the same result with 12bit as we get with 16bit: 16 is 2^4, so it is like adding 4 bits, and 12+4 = 16. The general formula is: number of bits added = log2(number of stacked subs).
The only difference between a few long and many short subs in stacks that add up to the same total time is in read noise. It is the only thing that adds per sub and not "per time". CMOS sensors are optimized for shorter exposures as they have lower read noise. They don't need 16bit ADCs to perform the same job - they can use 12bit ones.
Back to the banding - I'm writing this on an 8bit display. In principle, I have a very tough time seeing banding in any sort of gradient. 10bit displays are now considered the point at which humans can't notice banding at all. Even a single 12bit sub has x4 finer "granulation" of intensity levels, let alone a stack, which produces a much higher bit range.
What you should mention if you want to talk about color banding is the bit count used when processing images. I've seen most people use 16bit fixed point format for image processing - and that can be a problem for modern CMOS sensors. Data should be in 32bit floating point from the moment we start calibrating until we are finished with our processing and save it for display (then it will be saved in 8bit format).
Because of the way CMOS sensors work, we use shorter exposures and more subs in our stacks. Shorter exposures mean that the signal per sub will be lower - and that ADU values will be lower in general. If we use a fixed point format and bunch up our signal on one side of its range, we will lose "resolution", or the number of levels available. Here is an example to show you what I mean.
Say that you image a target that produces 100e/minute of signal. You do that with a CCD and expose for 10 minutes, and you do the same with a CMOS but expose for only a minute. You get the same total imaging time (the CMOS will produce x10 more subs in the stack, so SNR will be the same). You use the average stacking method for both. The CCD will have 1000e per sub (100e/minute for 10 minutes), the CMOS will have 100e per sub. The average of ~1000e values is ~1000e - no matter how many subs you average. The average of ~100e values is ~100e - again, it does not matter that you average more samples. If you record those two resulting stacks in 16bit fixed point, those average values will be rounded and you introduce rounding error. This rounding error is larger compared to the absolute signal value for the CMOS, because of the shorter exposures (although SNR is the same for both images). We have now reduced the signal to 100 distinct levels - and that is less than 7 bits - which is something our eye can notice, especially after stretching.
But this happens only if we process our data in 16bit fixed point format. If we have our data in 32bit floating point numeric format - we won't be introducing rounding error when averaging and we won't have these distinct levels. Stretching will work normally.
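A minimal numpy sketch of both points - the log2(N) bit gain from averaging and the rounding loss when the result is stored as 16bit integers - using made-up signal levels (100e per sub, 128 subs):

```python
import numpy as np

# Made-up example: a faint target giving ~100e per short sub, stacked 128 times.
# Averaging 128 subs adds ~log2(128) = 7 bits of effective depth - but only if
# the result is kept in floating point rather than rounded to 16bit integers.
rng = np.random.default_rng(0)
subs = rng.poisson(100.0, size=(128, 1000)).astype(np.float32)  # 128 subs x 1000 px

stack_float32 = subs.mean(axis=0)                       # 32bit floating point stack
stack_int16 = stack_float32.round().astype(np.uint16)   # same stack in 16bit fixed point

print(len(np.unique(stack_float32)))  # hundreds of distinct values: fine gradation kept
print(len(np.unique(stack_int16)))    # only a handful of integer ADU levels remain
```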
  19. No. CMOS sensors have much lower read noise than CCDs. Again - this does not explain how a 12bit ADC is low quality and a 16bit ADC is high quality. From an astrophotography perspective there is absolutely no difference. Let me show you with a rather simple example. Say you compare a 12bit CMOS sensor that has 1.7e of read noise with a 16bit CCD sensor that has 7e of read noise. For the sake of argument, let's suppose that both sensors have a FWC that matches their bit count and operate at a system gain of 1e/ADU (that is not the case with CCD sensors, as they often have less than 65K FWC, while CMOS sensors more often have higher than 4K FWC and thus use variable gain). Since there are 4 bits of difference between the CMOS and the CCD, it stands to reason that the CCD can expose x16 times longer than the CMOS, right? But what happens if we take 16 subs with the CMOS and just add them up? The signal adds up - and yes, it becomes the same as the one taken with the CCD. All time dependent signals/noises just add up. The only one that is not time dependent is read noise. There will be 16 "doses" of read noise in the CMOS stack - so let's add those up to see what noise level we get. Noise adds as the square root of the sum of squares, so we have sqrt(16 x 1.7e^2) = 4 x 1.7e = 6.8e. Hm, the same as the CCD, or still slightly better (6.8e vs 7e). How is the 12bit ADC inferior?
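The same comparison as a one-liner, using the example read noise values from the post:

```python
import math

# 16 short CMOS subs added together vs one long CCD sub of the same total exposure.
cmos_read_noise_e = 1.7
ccd_read_noise_e = 7.0
n_subs = 16

# Uncorrelated noise adds in quadrature - square root of the sum of squares.
stacked_cmos_read_noise = math.sqrt(n_subs * cmos_read_noise_e**2)

print(round(stacked_cmos_read_noise, 2), ccd_read_noise_e)  # 6.8 vs 7.0 - the 12bit CMOS holds up
```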
  20. Not sure if I can recommend any particular book / source that contains all the information. I picked up things and pieced them together from a variety of sources. Often recommended is "Making Every Photon Count", but I'm not sure how much it goes into technical detail. In any case, if you have any questions, I'd be happy to help.
  21. I did not go through all of the text, but I have noticed a few things. First - all this data on DSLRs (and more) is available online from several sources. Look here: https://www.photonstophotos.net/ Next - you listed both "cost" and "costly" as disadvantages of CCDs for some reason. Then - you listed "low quality 12bit ADC", "non linear pixel sensitivity" and "amp glow" as disadvantages of CMOS. Can you explain what you mean by low quality 12bit ADC? In what way is it low quality? Does it produce digital values that are somehow lower quality than standard? Does it introduce some sort of error into the ADC process (different from read noise)? Which CMOS sensors have you found to have pixel nonlinearity? Amp glow is a term we use with CMOS sensors, but it originated with CCD sensors (there was a single amplifier unit on the CCD that would thermally "glow" and cause electron build up on one side of the sensor) - which means amp glow is not an exclusive feature of CMOS sensors, and it can be calibrated out.
  22. I think that the "green issue" in astrophotography is an issue of misunderstanding. People often reach for things like "Hasta La Vista Green" or SCNR in PI - and in my view that is simply the wrong thing to do. Green is an essential component of the image, although it might not be so obvious - it is the strongest component of brightness. I've often posted an image that nicely shows the effect.
Another way to see the importance of green is to look at the RGB -> luminance transform, where the Y component (which is essentially luminance) is calculated from the RGB triple as roughly Y = 0.2126*R + 0.7152*G + 0.0722*B - in other words, luminance is over 70% green channel. Artificially killing green really hurts the luminance information.
The problem with green comes from lack of color calibration. The sRGB color matching functions represent an ideal sensor - one that is matched to our computer screens (or at least to the sRGB color space that all images on the internet without an explicit color space should be encoded in). The fact that some parts of the graph are negative has to do with the fact that sRGB is limited in gamut and can't display some colors properly. For example, it can never display pure OIII color properly, as it would need to produce "negative" red light in order for our eyes to see the accurate color (which is impossible). But the important thing is - compare that to normal sensor QE curves, or add RGB filters to a grey monochrome curve: in either case, green will be the strongest of the three, but for correct sRGB color we need it to be the weakest of the three, as seen on the sRGB color matching function graph. Sensors simply produce stronger green than they should for the color space we are working in, and we need to compensate for that - we need to color calibrate our sensor (as each will have a different QE curve, but sRGB has a standard one).
The thing with saturation is that it is not a physical property of light. This is something that I think most people don't think about or realize when working with images. There are two different "approaches" or "sides" to color. One is physical and the other is perceptual. Maybe an easier to understand analogy would be temperature. With temperature we have a measurable quantity - the physical value of the temperature of something. We say something is at 15C and that is something that can be measured. There is also a perceptual side to temperature - we can touch something and it will feel cold, warm, hot, freezing - a wide variety of terms, but the most important thing is that how it feels does not depend solely on temperature. It also depends on other factors - the ambient conditions and the condition of our body (being the sensory system).
Color is the same in this regard. What we perceive as color depends on the conditions that we do our observing in. For example, if we take 6500K light and look at it in daylight / normal office conditions, it will be white light, but if we take the same light and view it in the evening, next to an incandescent light bulb or maybe a fireplace, it will no longer look white - it will look bluish. The light has not changed - our perception did.
Thus we distinguish two different types of color spaces. One relates to the physical side of things, while the other attempts to tackle the perceptual side of things. If we look at the terminology for CIECAM02, for example, it will be clearer what this means: https://en.wikipedia.org/wiki/CIECAM02 All those names - hue, saturation, chroma, brightness - are things that we perceive. There is nothing in the electromagnetic spectrum that says one light should be more saturated than another (at least nothing obvious - we need to derive complex mathematical equations to be able to calculate saturation). The explanations of those terms are on the same page.
We are attempting to create uniform color spaces - which means that the same numerical "distance" results in the same perceived difference in color. If we boost saturation by 10 and then by another 10, we should perceive that increase in saturation as linear as well. In any case, your choice of LAB color space is appropriate for saturation manipulation as it is intended to be a (relatively) uniform perceptual color space, but maybe CIECAM02 / CIECAM16 would be an even better color space to do that in, as you'll have finer grained control over what you want to do with color.
Here is the thing: there is no such thing as a color-weak image. It is what it is - a certain spectrum of EM radiation, and if that spectrum happens to produce color that is weak in colorfulness or saturation, then that is just the case. If you take a photo of a pale beige ambient scene (like the one shown), you don't want it to end up looking garishly oversaturated (I did not even do a good job of killing that green in the plant :D).
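A tiny sketch of that luminance relation; the image array below is random data, purely for illustration:

```python
import numpy as np

# Rec.709/sRGB relative luminance: Y is ~70% green.
def rgb_to_luminance(rgb):
    """Relative luminance from linear RGB using Rec.709 coefficients."""
    return rgb @ np.array([0.2126, 0.7152, 0.0722])

img = np.random.default_rng(1).random((4, 4, 3))    # fake linear RGB image
no_green = img * np.array([1.0, 0.0, 1.0])          # "kill the green" experiment

print(round(rgb_to_luminance(img).mean(), 3))       # normal brightness
print(round(rgb_to_luminance(no_green).mean(), 3))  # roughly 70% of the brightness is gone
```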
  23. Here is what M31 really looks like: the inner part of the galaxy is full of yellow stars. There is some brown / darker color in the dust lanes and Ha regions, and there are white / bluish young stars in the outer parts. The color of M31 is mostly composed of starlight, and stars shine with light that corresponds to their temperature. Since both the Ha regions and the young hot stars are relatively faint compared to the yellow core, it is hard to separate them nicely, and people often boost saturation to bring out those bluish stars - but it is very easy to overdo it.
  24. Yes. No. There are a number of things that need to be done for a successful lucky imaging run. Freezing the seeing is one of those, and it depends on the behavior of the atmosphere rather than on the size of the optics. The atmosphere is in motion, and it is the speed of this motion that determines how short an exposure time you need. A large aperture will in fact have some advantage over a smaller aperture in terms of coherence time in some cases. If you have a 10 meter aperture and a 30cm aperture and a bad layer that moves more or less uniformly in the atmosphere, then in the time it takes to completely change the "aberration content" over the 30cm aperture, it will only change ~3% of the wavefront over the 10m aperture. In any case, even when you've frozen the seeing, it is not going to present you with a clear image. It is still a distorted image on several different levels, and those need to be corrected in software. You need to correct for the tilt component with the use of alignment points when stacking, and you need to stack to average out the rest of the wavefront errors so that you get a nice symmetric blur that is close to gaussian in nature, so that you can sharpen the image nicely. You also need good SNR to be able to sharpen. Using shorter exposures will only be counterproductive. Although you have a big aperture, if you are imaging at the diffraction limit (critical / optimum sampling for the given aperture) you will get the same level of signal per pixel in an 8" scope as in a 10 meter one, regardless of pixel size (see the sketch below). This is because everything is "matched" - the F/ratio is determined by the pixel size, so pixel size is no longer a factor, and with fixed F/ratio (and pixel size) any two scopes will have the same "speed" in terms of light gathering. Using shorter exposures will just give you less SNR per sub, and lower total SNR means that you can't sharpen as aggressively as might be needed. Ultimately, for best results, even with lucky imaging, one needs moments of good seeing. If that does not happen, results won't be as good as possible.
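A minimal sketch of the "same signal per pixel at critical sampling" point, assuming critical sampling of roughly lambda/(2*D) radians per pixel and a fixed physical pixel size (so the required focal length scales with aperture); the 4um pixel and the two apertures are example values:

```python
# Critical sampling at a fixed pixel size forces the same F/ratio on any aperture.
def critical_f_ratio(pixel_um, wavelength_nm=500.0):
    """F/ratio at which a pixel of the given size critically samples the diffraction limit."""
    return 2.0 * (pixel_um * 1e-6) / (wavelength_nm * 1e-9)

for aperture_m in (0.2, 10.0):                    # ~8" scope vs a 10 m scope
    f_ratio = critical_f_ratio(4.0)               # same 4 um pixel -> same F/ratio (16)
    focal_length_m = f_ratio * aperture_m         # focal length grows with aperture
    print(aperture_m, f_ratio, focal_length_m)
# Same F/ratio and same pixel size -> the same photon flux per pixel from an
# extended target, regardless of aperture; the larger scope resolves finer detail.
```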
  25. Just use whatever barlow element is cheapest / easiest for you to get. You can vary the magnification of a barlow by changing the distance between it and the sensor (closer to the sensor - less magnification; further away from the sensor - more magnification; just add some sort of extension tube) to dial it in to suit your needs. Here is a good one: https://www.firstlightoptics.com/barlows/astro-essentials-125-2x-barlow-with-t-thread.html You can add a T2 extension between it and the camera's T2 adapter, but by the looks of it, if you attach it directly with a T2 ring to the DSLR body, it should already be at about x3 or so. If you find that you have too much magnification - the barlow element detaches and has a 1.25" filter thread, which you can use to connect it to a shorter 1.25" nosepiece that has a T2 thread, and so on... The camera will be the limiting factor for you, and if you are even a tiny bit serious about planetary imaging, it is better to invest in a planetary type camera.
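For reference, a rough thin-lens sketch of why the magnification grows with the element-to-sensor distance; the ~65mm focal length below is a made-up value for a generic "x2" barlow, not a spec of the linked Astro Essentials unit:

```python
# Thin-lens approximation for a barlow (negative) lens: M = 1 + d / |f|,
# where d is the distance from the barlow element to the sensor.
def barlow_magnification(distance_mm, barlow_focal_length_mm=65.0):
    """Approximate magnification of a barlow element placed distance_mm in front of the sensor."""
    return 1.0 + distance_mm / barlow_focal_length_mm

print(barlow_magnification(65.0))    # ~x2 at its nominal spacing
print(barlow_magnification(130.0))   # ~x3 when moved about twice as far from the sensor
```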