Everything posted by vlaiv

  1. You will again have to explain this bit, since what you are saying does not add up. First you talk about information being SNR, then you use this definition: "Increased integration will not resolve finer detail at pixel level."

     One 1s sub carries the same signal as one hour of integration if we use the term signal to describe average photon flux. The only difference is the precision to which that photon flux is recorded - the level of noise. Since longer integration does not change the photon flux being recorded, nor does it help resolve finer detail, I'm asking again - what sort of additional information does longer integration or stacking provide?

     I now understand the issue. The issue is that SNR is an inherently mathematical thing and is not defined by perceptual interpretation. It is a ratio of two quantities - the above mentioned flux intensity and noise, where noise is the statistical deviation of measured values from the true flux intensity. This is the only definition of signal to noise ratio - and its name is given accordingly: signal to noise ratio. There is only one way SNR can be used in this or any other context - the way it is defined and what it represents.

     Does stacking improve SNR? Yes it does. Does binning improve SNR? Again, yes it does. The same mathematical process is responsible for both. Does this mean that one is "true" SNR improvement and the other is somehow fake SNR improvement? No, since they both conform to the same definition of what SNR is - signal to noise ratio.

     Does binning lower detail in the image? Well, that depends. Was the original image over sampled? If so, then no - it can't reduce detail that was not there to begin with. But even if the image is properly sampled or under sampled to begin with, binning will still improve signal to noise ratio, as the signal - the photon flux over a certain area, small or large pixel - is constant irrespective of selected pixel size or the math we do with the measured value. The signal is always the same, whether recorded on one sub or on 10 subs stacked. It is the noise that we manipulate by taking averages, and that is what improves SNR (see the sketch below).
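
A quick numerical sketch of that last point (the photon flux, sub count and image size are made-up values): stacking leaves the measured signal level unchanged while the noise drops with the square root of the number of subs.

```python
import numpy as np

rng = np.random.default_rng(0)
true_flux = 100.0   # "true" photon flux per pixel per sub (made-up value)
n_subs = 60

# 60 synthetic one-minute subs of a flat patch, Poisson photon noise only
subs = rng.poisson(true_flux, size=(n_subs, 256, 256)).astype(float)

single = subs[0]
stack = subs.mean(axis=0)   # average stacking preserves the signal level

print("signal, single sub:", single.mean())   # ~100
print("signal, stack:     ", stack.mean())    # ~100 - same photon flux
print("noise,  single sub:", single.std())    # ~10 = sqrt(100)
print("noise,  stack:     ", stack.std())     # ~10 / sqrt(60), roughly 1.3
```
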
  2. This Samyang lens that I purchased (at a very nice second hand price, I might add) has a very interesting feature. It is a cinematic lens rather than a photographic one - which means it has a continuous aperture ring rather than one with "clicks". It is actually a T1.5 rather than F/1.4 model. In most cases fast lenses need to be stopped down somewhat, depending on whether a narrow band filter or an OSC camera is used. The continuous aperture enables me to try F/1.6 and F/1.7 or perhaps F/2.2 and see which one is acceptable. With a regular lens one is limited to choices like F/2 and F/2.8 or similar and nothing in between.

     What sort of additional information does regular stacking provide? Do you think that the signal is somehow different in a single 1 minute sub in comparison to 60 stacked 1 minute subs? Both images captured the same photon flux - if you measure the brightness of a patch of nebula you'll get the same answer, different only in the amount of uncertainty / level of noise. It is not the signal that changed between the two - it is the level of noise polluting that signal that changed.
  3. Completely agree. Can't wait to receive my Samyang 85mm F/1.4, hopefully tomorrow, to see what it can do
  4. Here is then one thing for you to try and see. Take one of your datasets - align all the subs, normalize them, but leave one sub aside after that. Stack all the other subs - use average stacking (not addition - we want to preserve the signal level).

     Now subtract the single sub from the stack. The result should be mostly the noise in that single sub (given the number of subs we regularly use in our stacks, the impact of the noise of all the other subs will be minimal). Measure the standard deviation of the result - this will give you the average noise level in that single sub.

     Now bin 2x2 both the single sub and the stack in software. Subtract again and again measure the standard deviation. Explain how we just measured x2 less noise with the binned versions than the original if binning "inherently keeps both signal and noise the same" (a simulation of this experiment is sketched below).
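
A rough simulation of that experiment with synthetic Poisson subs - the flux, sub count and image size here are made-up values; with real data you would use your own aligned and normalized subs:

```python
import numpy as np

rng = np.random.default_rng(1)
flux, n_subs = 200.0, 50   # made-up photon flux per pixel and sub count
subs = rng.poisson(flux, size=(n_subs, 256, 256)).astype(float)

single = subs[0]                # the sub we leave aside
stack = subs[1:].mean(axis=0)   # average stack of all the other subs

def bin2x2(img):
    """Average 2x2 pixel blocks (software binning that preserves the signal level)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

noise_full = (single - stack).std()                     # ~ noise of that single sub
noise_binned = (bin2x2(single) - bin2x2(stack)).std()   # ~ half of the above

print(noise_full, noise_binned)
```
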
  5. No - it is an actual improvement in SNR - as legitimate a thing as hardware binning. Software binning differs from hardware binning in only one thing - the level of read noise. With hardware binning it stays the same, while with software binning it increases by the bin factor - if you bin 2x2, read noise increases by a factor of x2, for bin 3x3 read noise is x3 higher, etc.

     With Poisson noise there is no difference between these two cases:
     - one pixel captures 4 times as many photons
     - 4 pixels capture the regular number of photons and are then summed.

     Let's do the math. A single pixel captures N photons. SNR there is N / sqrt(N) = sqrt(N) (noise is the square root of the signal). Now we increase the surface of the pixel by a factor of 4 and we capture 4*N photons. What is the SNR now? It is sqrt(4*N) = 2*sqrt(N). Capturing 4 times as many photons yields an SNR that is twice as large.

     Now the second case. We have 4 pixels, each having SNR of N / sqrt(N) (top is signal, bottom is the noise associated with that signal). Let's add those 4 together to see what SNR we are going to get. Signal adds regularly, so we have N + N + N + N = 4*N. Noise adds in quadrature, so we will have sqrt( (sqrt(N))^2 + (sqrt(N))^2 + (sqrt(N))^2 + (sqrt(N))^2 ) = sqrt( N + N + N + N ) = sqrt( 4 * N ) = 2 * sqrt(N). The new signal to noise ratio is therefore 4 * N / (2 * sqrt(N)) = 2 * N / sqrt(N).

     Signal to noise ratio went from N / sqrt(N) to 2 * N / sqrt(N) - it increased by a factor of two. It does not matter whether you see the binned pixel as having a x4 larger surface capturing x4 more photons, or as a regular sum of 4 regular pixel values. The SNR improvement (in all those quantities that follow a Poisson distribution - target signal, sky signal and dark current) will be the same. Read noise is the only exception to this and the only difference between software and hardware binning (see the sketch below).
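
A minimal sketch of the same math, drawing Poisson samples for both cases (N and the number of trials are arbitrary values chosen for the demonstration):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 400.0        # made-up mean photon count per small pixel
trials = 200_000

big_pixel = rng.poisson(4 * N, trials)                  # one pixel with x4 the area
four_summed = rng.poisson(N, (trials, 4)).sum(axis=1)   # four regular pixels summed in software

print("single small pixel SNR:", N / np.sqrt(N))                          # sqrt(N) = 20
print("big pixel SNR:         ", big_pixel.mean() / big_pixel.std())      # ~ 2*sqrt(N) = 40
print("4 pixels summed SNR:   ", four_summed.mean() / four_summed.std())  # ~ 40 as well
```
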
  6. Not sure what you are asking? Software binning in most cases can be viewed as stacking - you take 4 pixel values and you average them out - it improves SNR by factor of 2 (square root of number of samples averaged). In most cases signal in those 4 pixels is the same (or very close in value).
  7. Far from it! Software binning is as useful as hardware binning in terms of SNR. Well, maybe not 100% the same - the reason is read noise. With hardware binning, pixels are actually joined to form one super pixel which is read once. With software binning, pixels are joined together to create one super pixel, but they are each read out regardless, as the joining is done in software and the reading happens prior to that.

     Exposure length has to do with read noise only. If read noise were 0, there would be no difference between 3600 subs one second long and one sub an hour long. This is because all signals in a sub grow linearly with time. Target signal, LP sky background, thermal / dark current - all are linear functions of time. All except read noise - the only one that happens per sub, and each sub has the same "amount" of it. We determine our sub duration based on one condition - once there is another significant source of noise that is bigger than read noise, we can stop our exposure, as the impact of read noise becomes very small in the resulting stack.

     We usually use LP noise as this other noise source that will become significant because:
     1. It grows with time - the longer the sub, the larger the LP signal and hence LP noise - at some point it will become large enough
     2. We have cooled cameras and in most cases thermal noise is much lower than LP noise
     3. LP levels are fairly consistent for a given imaging site - unlike target shot noise. Sometimes we choose bright targets and sometimes we choose faint targets (in fact, the latter is way more likely). In any case, we need a consistent metric that will work even if we are imaging extremely faint targets (and hence shot noise is very small).

     All of the above means that we really need to compare read noise to LP noise to determine the limit of our exposure. Olly rightly noted that this ratio depends on pixel size, or rather sampling resolution, and not just the f/ratio of the scope used. There are different ways to change sampling resolution, but if the scope is fixed then there are only three:
     1. change pixel size - use a camera with larger pixels
     2. use hardware binning if your camera supports it
     3. use software binning - all "cameras" support this, because it is done in software and does not really depend on the camera used to record the image.

     I just noted that exposure length depends on cases 1 and 2 - simply because the LP signal level depends on sampling resolution, hence LP noise depends on sampling resolution, and finally the ratio of LP noise and read noise depends on sampling resolution. It does not, however, depend on point 3. This is due to the way software binning works.

     Imagine you have an image consisting of read noise data only - so a bias file where the bias/offset signal is a perfect 0 and only random read noise is there. Each pixel has some value of read noise N. We add 4 adjacent pixels together - in this case we'll just be adding 4 x read noise N. Noise adds in quadrature, so we will have sqrt( N^2 + N^2 + N^2 + N^2 ) = sqrt( 4 * N^2 ) = N * sqrt(4) = N * 2. The noise of the resulting binned pixel is twice the noise of the individual pixels. When binning we have to take this into account - the read noise of a software binned pixel is twice as large (or 3 times if we are binning 3x3, or 4 times if binning 4x4, etc.).

     Now let's see what happens if we have 4 pixels that recorded sky background and have some value M (M is pretty much equal across a 2x2 group of pixels, since sky background does not change that fast even if there is a gradient). The resulting pixel will have 4*M as signal. But what will be the noise associated with that signal? It will be sqrt(4*M) = 2*sqrt(M). The original pixels had sky background M and hence sky background noise of sqrt(M). The binned pixel has sky background of 4*M and associated LP noise of 2*sqrt(M).

     In both cases - read noise and sky background - bin x2 resulted in a noise increase by a factor of 2. The ratio of these noise magnitudes remains the same before and after software binning: sqrt(M) / N = (2 * sqrt(M)) / (2 * N). This does not mean that software binning does not work - it simply means the following. If you have your setup and you calculated / measured that a 5 minute exposure is good for that setup:
     1. If you can hardware bin and decide to hardware bin - you need to recalculate the exposure length, as it will be different
     2. If you software bin your data - no need to change exposure length; if 5 minutes was good for single pixels, 5 minutes will be good for software bin x2 pixels (and x3 and x4).

     Hope that I have now explained it better and not created additional confusion (a small numerical check of the ratio argument is sketched below).
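
A small numerical check of that ratio argument, with made-up read noise and sky signal values:

```python
import numpy as np

read_noise = 2.0    # e- per pixel read, made-up value
sky_signal = 25.0   # e- of sky background per pixel per sub, made-up value

# Per original pixel
lp_noise = np.sqrt(sky_signal)
print("unbinned LP/read noise ratio:", lp_noise / read_noise)

# Per 2x2 software-binned super pixel (4 values summed)
lp_noise_bin = np.sqrt(4 * sky_signal)   # sky signal x4, so sky noise x2
read_noise_bin = 2 * read_noise          # 4 reads added in quadrature -> x2
print("binned LP/read noise ratio:  ", lp_noise_bin / read_noise_bin)  # same ratio
```
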
  8. Not sure what the formula is. One that I usually use is ~ x5 ratio of LP noise to read noise.
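
One way to turn that rule of thumb into an exposure length - just a sketch, with hypothetical read noise and sky rate values that you would measure for your own setup (LP noise after t seconds is sqrt(sky_rate * t)):

```python
read_noise = 1.7   # e-, hypothetical camera value
sky_rate = 0.8     # e- per pixel per second from light pollution, hypothetical measured value
k = 5              # target ratio of LP noise to read noise

# Require sqrt(sky_rate * t) >= k * read_noise
t_min = (k * read_noise) ** 2 / sky_rate
print(f"minimum sub length is roughly {t_min:.0f} s")   # ~90 s for these values
```
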
  9. We can't ignore pixel size or hardware binning - but it turns out that software binning does not change things. Let's take a 2x2 software bin as an example. Adding together 4 pixels, as in summing values, will increase the background sky signal by a factor of x4 - thus increasing sky noise by a factor of 2 (since noise is the square root of the signal). Adding together 4 pixels via software binning will also increase read noise by a factor of x2 (same as stacking 4 pixels - the square root of 4 is two). Both sky noise and read noise increase by the same factor, so their ratio remains unchanged.

     However, this is not the case with hardware binning - where read noise remains unchanged while sky noise changes, because sky signal per pixel changes. It is not the case with pixel size either - small pixels require longer exposures than large pixels because they accumulate less sky signal and less sky noise, while read noise has nothing to do with pixel size (it is per camera model).
  10. Yes, under the conditions given above, you can record Mars as it is now for up to 20 minutes per channel / video. You don't have to derotate the video, you only need to derotate the stacked image from a video to align it to the images from the next and previous videos (you need to align two channels to the third). As far as I know WinJupos has a derotation feature - it can derotate a video frame by frame, or derotate a single image. This is a screen shot from a video dealing with derotation: https://www.youtube.com/watch?v=nOqY49FkomM As you can see - it can derotate a single image, RGB frames (to align them), or a whole video - which is just derotation of single frames, making them match in time and orientation.

     Subs in a video that is 20 minutes long will differ in rotation - the first few subs will clearly be of different rotation than the last few subs. The point is that software like AS!3 can compensate for that in the same way it compensates for seeing differences between successive frames. If you watch this video https://www.youtube.com/watch?v=sO9KbbzP09U you will notice how Jupiter jumps around due to seeing. If you take two successive frames and stack them without any alignment, you will get large motion blur. This does not happen in AS!3 because it has alignment points and it knows how to figure out the transformation between successive frames to make them "the same", or rather stack compatible (to some extent it unwarps the warping done by the atmosphere). It does the same with rotation, so there is no need to derotate separately.

     Once you record for a longer duration than calculated above, the rotation becomes too great for AS!3 to handle. This is why we did the calculation in the first place. Those 4-5 pixels of shift used in the above calculation were confirmed by Emil himself as the distance that AS!3 will handle.
  11. It could probably be added to software but it would depend on the lens used. Wide angle lenses have pretty serious distortion (the fish eye thing), so things are not straightforward. Your best bet is to measure the distance from Polaris to a star trail and figure out the Declination from that (you need to plate solve at least one single frame to get an idea of arc seconds per pixel, but also of what level of distortion the lens produces). The start and end of a trail, in combination with the start and end time of the recording, could provide further clues for figuring out the Right Ascension of a given star (see the rough sketch below).
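
A very rough sketch of the Declination part, with hypothetical numbers and the simplifying assumption that Polaris sits exactly on the pole (it is about 0.7 degrees off, and lens distortion is ignored entirely):

```python
arcsec_per_pixel = 75.0        # from plate solving one frame (hypothetical value)
trail_to_polaris_px = 1150.0   # measured distance from Polaris to the trail, in pixels (hypothetical)

# Treat Polaris as if it sat exactly on the celestial pole - approximate only
polar_distance_deg = trail_to_polaris_px * arcsec_per_pixel / 3600.0
declination_deg = 90.0 - polar_distance_deg
print(f"Dec is roughly {declination_deg:.1f} deg")   # ~66 deg for these values
```
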
  12. Will you be going mono or OSC for your planetary camera? My personal view is that OSC is a far easier way to do it than all those electronic filter wheels and derotations and so on. I would say - look at the ASI224 / ASI385 for OSC planetary. Look at the ASI290 or ASI178 for mono.

     Depending on what type of camera you get and what pixel size it has, you'll need to use a barlow to get to the optimum sampling rate. Best is to get only the barlow element rather than a complete barlow lens. This way you can dial in the distance needed to achieve the wanted F/ratio. Probably the best choice there is the Baader VIP barlow (one of the best barlows there is).

     In any case, if you go with the ASI290 and its 2.9um pixel size - the mono option - the F/ratio that you need will be F/11.375. For the ASI178 and 2.4um pixel size - F/9.4. Both of these F/ratios are very close to the native F/10 and I would not bother with a barlow in either of these two cases.

     The ASI224 and ASI385 (which is just a bit larger sensor and worth getting if you want to do Lunar) have 3.75um pixel size, but are also OSC sensors, which means they require a somewhat larger F/ratio - F/29.4 - and special "processing": you either need to do super pixel debayering or downsize the final result x2 if you want a very sharp image. In any case, I would aim for a x3 barlow with these cameras.

     The ASI174 is not very suited for planetary imaging (it is good for solar Ha and lunar use though) due to high read noise, but if you want to give it a spin before you purchase something else, then the best F/ratio to do that would be F/23 in the case of mono, and double that, F/46, in the case of the OSC version of the ASI174 (for the OSC version you'll need to downsample by x2 or use super pixel mode - otherwise it might look too soft).
  13. If you want to use DSLR to capture planets, first thing to do would be to see how to record raw video instead of compressed video. As far as I know - Magic Lantern enables you to record raw video. You need something like 640x480 for planets and hopefully your DSLR can do this (I think you need Canon for use with Magic Lantern). Look at this video: https://www.youtube.com/watch?v=sEXVvry2oiA Or other videos on youtube that explain how to shoot raw video on your DSLR.
  14. This is actually a very easy thing to do. One just needs to pair a negative and a positive lens element. Here is a simple ray diagram: The negative lens element needs to be placed at a specific distance from the focal plane - its own focal length - and it will create parallel rays. Parallel rays are ideal for the etalon to do its job, and afterwards one just needs to put in another small doublet (like 50-60mm) to bring the rays to focus. I guess that the red thing with the pressure tuner is the etalon together with two suitable lenses - front negative and rear positive. Such an assembly can do the same job on any F/6 scope (if the original 70mm lens is F/6, to have 420mm FL), if placed at the proper distance from the focal plane.
  15. Ok, this is interesting - to answer my own question - indeed - very modular: This is same scope with Ha etalon (assembly should I add) - removed. Anyone seeing PST style mod in the making? Just a F/6 donor scope and some machining?
  16. I'm sort of being dumb here. This is the image of the item: Can anyone explain how this works? From image, it is obviously sub aperture etalon. That means it needs to have some sort of negative lens in front of etalon and positive lens behind etalon in order to collimate light beam for etalon to be efficient. How does one remove all of that and why does the text say it is 60mm aperture for Solar and 70mm aperture for nighttime use? If it is indeed front mounted etalon - which would explain everything above - will it indeed be pressure tuned?
  17. The problem with dynamic range is that it is a completely useless thing in astrophotography. This is because we do stacking. Dynamic range might be a useful metric for a single exposure. It tells you how much exposure compensation you can get in post processing if you miss your exposure in regular photography. In astrophotography we do stacking, and each new stacked image increases the dynamic range of the stack. Want better dynamic range? Simply stack more images (see the sketch below).

     Another thing that dynamic range tells us is how likely it is that we will saturate bright parts of the image. Again - not a problem in astrophotography. With regular photography and a single exposure, any saturation is simply lost data - no way of recovering actual pixel values. The same thing happens in AP, but in AP we are used to taking multiple exposures and we can simply do "exposure bracketing" - take a bunch of long exposures and, at the end, take a few short exposures that we will use to recover signal that is saturated in the long exposures.

     Cooling in itself is not very important. Yes, it lowers the noise, but if the ambient temperature is low enough, the passive cooling these cameras have - especially in winter - does a good job. The problem with passive cooling, and even active cooling with a fan, is that it does not have set point regulation. You can't say - cool the camera to N degrees and keep it there. This is rather important if you want to match darks and lights. If you go for some sort of DIY solution to camera cooling, maybe pay more attention to getting a stable sensor temperature, by creating some sort of feedback loop, than to cooling efficiency. Again, this will be ambient temperature dependent, but if you can keep the sensor at a set temperature, that is a big plus. With TEC you need to be careful of icing and dewing issues. Maybe keep the camera temperature above 0 and do what it takes to prevent dewing once the camera body temperature drops below ambient.
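
A simplified sketch of why stacking buys dynamic range - it assumes the noise floor is set by read noise alone and uses made-up sensor values:

```python
import math

full_well = 50_000.0   # e-, hypothetical sensor full well
read_noise = 3.0       # e-, hypothetical sensor read noise

def stack_dynamic_range_bits(n_subs):
    # Averaging n subs leaves the saturation point where it is,
    # while the read-noise floor drops by sqrt(n), so dynamic range grows by sqrt(n)
    return math.log2(full_well / (read_noise / math.sqrt(n_subs)))

for n in (1, 4, 16, 64):
    print(n, round(stack_dynamic_range_bits(n), 1))   # each x4 in sub count adds about 1 bit
```
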
  18. Sure, the ZWO website has specs for the ASI178 and there you can see the relationship between e/ADU and read noise: If we consult this diagram - a gain of about 400 has the lowest read noise, but the differences are very small. You shouldn't really concern yourself with read noise if you don't have a cooled camera. For any sensible sub duration, thermal noise is going to swamp read noise. Also note that the ASI178 does not have a sweet spot - all gain values are below unity, since the camera is 14 bit and gain 0 is an e/ADU of about 0.9.

     The relationship between gain and read noise is given by the above graph. The relationship between dynamic range and gain is a bit more complicated. First, note that gain with ASI cameras is in 0.1dB units, which means that gain 270 is actually a 27dB increase over gain 0. If gain 0 is an e/ADU of 0.9, then gain 270 will be ~22.387 times smaller, so about 0.04 e/ADU. You can calculate this with the decibel formula - the value in dB is equal to 20 * log10 of the ratio. Now we can see the maximum number of electrons that can be recorded with this gain factor - the 14 bit max number is 16384 ADU; let's convert to electrons: 16384 * 0.04 = 655.36. We need to divide by the read noise to get dynamic range: 655.36 / 1.38e = ~475. How much is that in bits? ~8.9 bits. Dynamic range at gain 270 is about 8.9 bits. But you can check the graph (the same calculation is sketched in code below).

     If you are using SharpCap, it might be sensible to use the ASCOM driver instead of the native drivers for long exposure. My ASI178 requires a very high offset - last time I used it, I decided on offset 256.
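
The same gain 270 calculation written out as a short script (the 0.9 e/ADU and 1.38e figures are the ones read off the graph above):

```python
import math

adc_bits = 14
gain0_e_per_adu = 0.9   # e-/ADU at gain 0, read off the ZWO graph
gain_db = 27.0          # gain setting 270 = 27 dB
read_noise_e = 1.38     # e- read noise at that gain, read off the graph

scale = 10 ** (gain_db / 20)                 # ~22.39
e_per_adu = gain0_e_per_adu / scale          # ~0.04 e-/ADU
max_signal_e = (2 ** adc_bits) * e_per_adu   # ~655 e- before the ADC clips
dynamic_range = max_signal_e / read_noise_e  # ~475

print(round(e_per_adu, 3), round(max_signal_e), round(math.log2(dynamic_range), 1))  # ~8.9 bits
```
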
  19. If you don't want set point cooling, then you don't really need active cooling at all. Set point cooling has the edge over regular cooling - either active, like a small fan, or passive, just having an aluminum heat sink (and the camera body acts as one) - in that it provides a repeatable temperature. With fan or passive cooling, sensor temperature will depend on ambient temperature. TEC cooling with a set point will bring the camera to an exact temperature regardless of ambient temperature (for the most part - it can only reach deltaT below ambient, where deltaT is camera specific).

     This means that with TEC cooling you can get perfect darks. You set your camera to a certain temperature like -10C or 0C or even 5C - it does not really matter - and you take darks that match your lights taken at the same temperature. Dark current noise is very small for the most part, and the advantage is not so much in going below zero - the advantage is much more in having a stable and repeatable temperature for both lights and darks.

     The ZWO ASI183 that is comparable to that Altair is actually £548 vs £499: https://www.firstlightoptics.com/zwo-cameras/zwo-asi-183mc-usb-3-colour-camera.html
  20. Not at all - ASI178mmc - no use of a barlow, just prime focus. Actually, for imaging, probably the most important aspect of a telescope is aperture. Most telescope designs are diffraction limited in the center of the field. Even a largish central obstruction is not detrimental for imaging, as its effects are corrected in the usual processing step of sharpening (either wavelets or deconvolution). Compare the above image taken with 102mm of aperture and this one: which was taken with a 130mm scope with an apparently spherical mirror at F/6.9 (should be very poor for high power visual). The fact that it was a spherical mirror did not seem to impact imaging results after sharpening.

     Use of a barlow lens really depends on the wanted resolution (and with planetary imaging one will almost always aim for critical sampling - the best resolution delivered by the aperture) and pixel size. If we aim for critical sampling then the F/ratio required is really dictated by pixel size only (the aperture size that is related to the max level of detail is part of the F/ratio, and focal length and pixel size determine resolution - throw everything into the mix and you end up with a formula for F/ratio that depends on pixel size only; see the sketch below).

     If you decide to stick with the 130p and have a go at planetary imaging - depending on pixel size, which can be 2.4um, 2.9um or 3.75um with most planetary cameras, you'll need F/9.4, F/11.4 or F/14.7 for critical sampling. With an F/5 scope, you'll need a x2 barlow to get approximately the first at F/10, a x2.2 barlow to get approximately the second F/ratio, and a x3 barlow to get approximately the third.
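
The F/ratios above follow from critical sampling; a small sketch, assuming ~510 nm green light and two pixels per lambda*F resolution element:

```python
wavelength_um = 0.51   # assumed green light, ~510 nm

def critical_f_ratio(pixel_um, osc=False):
    """F/ratio that critically samples the diffraction limit (two pixels per lambda*F).
    OSC sensors need roughly double because of the Bayer matrix."""
    f = 2 * pixel_um / wavelength_um
    return 2 * f if osc else f

for pixel in (2.4, 2.9, 3.75):
    print(pixel, round(critical_f_ratio(pixel), 1))   # 9.4, 11.4, 14.7
```
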
  21. If there is any flexibility in the budget - look at 294 offering - preferably cooled. That is going to be better match (also larger sensor).
  22. As a contrast, here is your M31 image processed to resemble the visual appearance under dark skies - as it would look to the human eye if the signal were amplified enough: Quite a contrast, isn't it?
  23. I would say it depends. The Mak102 can be a very potent planetary scope. It is also very affordable and easy on the eyepieces. Mine gives a very good image. Here are some images that I recorded with that scope: It is not very expensive, and one might argue that if you want to get a good 3-4mm eyepiece (or a 10mm + x3 barlow) to achieve high magnification with the 130p, you won't spend much more getting the whole scope.
  24. Both yes and yes. Yes, nebulae have color that one could see if the light were intense enough. The second yes is because people sometimes color nebulae on purpose. There are two main types of images of nebulae: regular "color" images, or something like this: and then there are images of nebulae made with the narrow band imaging technique that uses false color - or something like this:

     You will notice that these two images have different color but depict the same object. The first image is a so called RGB image and tries to mimic what we would see if somehow the light from the nebula were strong enough to cause color sensation. For the most part, people don't really get the exact color, and that is mostly due to color balance and the way cameras work. If one tries to capture exact color then special care must be taken to calibrate the color in the images properly. Most astrophotographers don't do that and often boost saturation and process their images in different ways. In the end, the real color would resemble image number one but would not be 100% the same.

     The second image is colored differently on purpose. This is a special method of recording the image that targets specific gasses that compose the nebula. Hydrogen, Oxygen, Sulfur and Nitrogen are often used. Three different gasses are recorded and then a color is assigned to each of them. In this case the colors don't really resemble anything real but are instead used to show where the different gasses are inside the nebula. It shows the nebula's structure better than a regular color image.