Everything posted by vlaiv

  1. All the photons collected by the scope that originate from the galaxy will go into the "image" of that galaxy regardless of how large it appears in the FOV (except when it extends beyond the FOV). That is how a telescope works - it collects photons and forms an image out of all of them. There is a slight distinction between a large galaxy that fills the FOV and a smaller one that is only half the size (or rather a quarter of the size - half the width and half the height): the distribution of the light. All the light collected will be the same, but it will be spread over more pixels in the case where the galaxy fills the FOV. It's like taking a canister of water and filling glasses - if you have fewer glasses, each will contain more water once you divide all the water between them (provided none spills or overflows). It is this level in each glass / pixel that gives rise to SNR. The more water / photons per pixel you have, the better the SNR. Better SNR in the same time means the same SNR in less time (a faster system).
  2. Yes, it looks like bin x2 of the 183 sensor would give an edge over the KAF-8300 due to less read noise and better QE - but the question is: why would you use a x2 barlow and then bin x2, if you get the same sampling rate with the regular pixel size and without the barlow? Btw, here is a guideline on maximum achievable resolution - expected FWHM of the star divided by 1.6-1.8 (see the sketch below). So if your star has a FWHM of about 3", then the maximum useful sampling rate to record such an image would be 3"/1.6 = 1.875"/px. To get to 1"/px you need very tight stars - around 1.6" FWHM. That means excellent tracking, largish aperture and steady skies. You don't have to sample at 1"/px to get data that is sampled at 1"/px - as you already know, you can sample at 0.5"/px for example and bin later in software (for CMOS sensors). In fact, there is a better way to bin than regular binning. See this thread for some details:
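A quick sanity check of that rule of thumb, as a minimal Python sketch (the helper name is mine, not from any library):

```python
# Rule of thumb from above: max useful sampling rate ~= star FWHM / 1.6-1.8.
def max_useful_sampling(fwhm_arcsec, factor=1.6):
    """Coarsest pixel scale (arcsec/px) that still records all the
    detail present in stars of the given FWHM."""
    return fwhm_arcsec / factor

print(max_useful_sampling(3.0))  # 1.875 "/px for 3" FWHM stars
print(max_useful_sampling(1.6))  # 1.0 "/px needs ~1.6" FWHM stars
```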
  3. I would not do that either, but if you are starting AP on a limited budget - you can in fact use it to get decent results. It is also good to know that it can be done - and more importantly why it can be done and under what circumstances.
  4. I'm assuming you in fact wanted to quote me, although you are quoted as saying that (maybe you quoted your quote of my text - irrelevant). This in fact has to do with SNR and brightness. You can increase the "brightness" of something as you please. You can't do that with SNR - it remains the same. That is why we always talk about SNR rather than brightness.

If you have a pixel with a value of, let's say, 10 and you multiply it by some number - let's say 5 - its brightness will now be 50. You changed the "brightness" simply by multiplying by a number. Now here is another case. Again the pixel has the value of 10, but you know that it is a wrong value. The right value is 12. This means that there is an error in the pixel value - or noise. The pixel value deviates from the true value by 2 (12 - 10 = 2), and the ratio of the two is 6 = 12/2 (true signal divided by noise). Now let's again multiply that pixel by 5 and see what we get. The recorded pixel value will be 50 (5*10), but the true value of the pixel should be 60 (12*5). So the error is now 10 instead of 2 (it also got multiplied by 5). The SNR of this new image is 60/(60-50) = 6. It has not changed. Multiplying an image by some number alters its brightness, but the SNR remains the same (see the sketch below).

Now, you are right that changing the sampling rate will affect brightness - it is in fact recorded signal - but you can equalize the signal like above with a simple multiplication, and you can't do that with SNR - that is the important bit. You can lose some resolution if there is detail there in the first place. If there is no detail, you won't lose anything - the two images will in fact be the same. In fact, most people don't have a sense of how much sharpness/detail you actually lose when changing the pixel scale by a factor of two. The answer is: in most cases that we face when imaging - not a lot. It is quite a subtle effect.

You are looking at this the wrong way around. Images look better when reduced because reduction causes an improvement in SNR - almost always (it depends on the way you reduce, and there is only one way to reduce that does not affect SNR). If your image at 1:1 is good enough and has no perceivable noise, then reduction will reduce the noise further, but it will still not be perceivable - and the images will look the same in quality. Zooming in, or enlarging an image, does not change the quality of the image (or the signal, to be precise) - it adds nothing to it (depending on the way it is enlarged) - it is only our perception of the image that changes. Imagine the following scenario: instead of enlarging the image, you are observing the same image on a very large display with very large pixels, up close. It will not look as good as on a regular monitor - your perception of the recorded data will change, not the image.

Ok, I can't be responsible for what other people say. What I say should not be taken at face value either. That is why we have mathematics and science - they are unambiguous and can be verified for correctness and consistency. If what someone is saying can be verified against science / math facts, then yes, what that person is saying points towards what science and math say - and that is what you should believe. And if you don't believe it, there is another way to go about it - test it and verify whether it is indeed so.

On the subject of focal length and using a reducer, it depends on a couple of factors: 1. the size of the pixels used, 2. the effective resolution the system can provide (scope / mount / atmosphere), and 3. interest in recording at the limit of resolution (for some images / measurements you are not interested in the finest detail, for example). Also bear in mind that many focal reducers don't operate at x0.5 without severely degrading the image quality (optical aberrations).
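Here is that worked example as a minimal Python sketch (variable names are mine):

```python
# Scaling pixel values changes brightness but not SNR.
true_value, recorded = 12.0, 10.0
noise = true_value - recorded                  # error of 2
snr = true_value / noise                       # 12 / 2 = 6

k = 5.0                                        # arbitrary brightness multiplier
snr_scaled = (k * true_value) / (k * true_value - k * recorded)

print(snr, snr_scaled)                         # 6.0 6.0 - SNR is unchanged
```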
  5. I'm not sure it is my place to convince anyone of anything. It is in fact the math and theory that offer the explanation, and the only thing I can do is help you understand it.
  6. For the wave specification, I think green light of about 500nm is used (I've seen figures from 500 to 550nm). This is because it is the peak of the human vision sensitivity curve. Sometimes that figure is given along with the 1/x wave specification. Often when working with refracting telescopes, red, green and blue correction are specified separately because they differ. And yes, the explanation is correct - it is the deviation from the perfect figure in terms of wavefront - by what fraction of a single wavelength it deviates. It is not an overly accurate figure because it does not say how it deviates - it can be smooth or "pointy", but in most cases it is in fact a smooth deviation.
  7. There are only two differences between 2x2 binning and using a focal reducer (a x0.5 one, of course):
- with software binning you will have slightly higher read noise; using a focal reducer is like hardware binning - read noise per pixel stays the same (this difference applies only to software binning / CMOS sensors)
- with 2x2 binning the FOV remains the same (you can't change the size of the chip by binning), while with a FR it increases (it still does not change the size of the chip, but it does change the geometry of the light, making the "relative" chip size larger).
Otherwise they are the same.
  8. It works both ways - for the same camera and same scope (aperture), adding a barlow "makes pixels smaller" - they end up collecting less light, and SNR will be lower for the same imaging time compared to the same scope without the barlow. It does not matter whether you use a longer focal length scope with the same aperture or a barlow (provided the barlow is perfect and has no light loss - but even if it has, with modern coatings the difference will be minimal). The F/ratio myth is not a myth if you keep the pixel size the same. It only becomes a myth if you assume it to be true always, regardless of pixel size. A statement like "an F/5 scope will always be faster than an F/10 scope" is incorrect, because it does not consider pixel size. It will be true if you use the same camera, but if you put a camera with pixels twice as large on the F/10 scope (an otherwise equal camera), they will have the same speed. If you put a camera with x3 larger pixels on the F/10 scope, it will in fact be faster than the F/5 scope with x3 smaller pixels (see the sketch below).
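A minimal sketch of that arithmetic in Python - signal per pixel scales with aperture area times the sky area each pixel covers; the example numbers (100mm aperture, 3.75um / 7.5um pixels) are mine:

```python
# Photons per pixel ~ (aperture area) * (sky area per pixel).
# Pixel scale in arcsec/px is 206.265 * pixel_um / focal_mm.
def relative_speed(aperture_mm, focal_mm, pixel_um):
    pixel_scale = 206.265 * pixel_um / focal_mm    # arcsec per pixel
    return aperture_mm**2 * pixel_scale**2         # arbitrary units

f5  = relative_speed(100, 500,  3.75)   # F/5 with small pixels
f10 = relative_speed(100, 1000, 7.5)    # F/10 with pixels twice as large
print(f5, f10)                          # identical values - same imaging speed
```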
  9. I can easily refute that. It involves understanding how telescopes work, a bit of the nature of light and some mathematics. If you wish, we can go into the details of it, but here is a very simplified version. When using the same camera (same pixel size, same QE, ...) and the same telescope at reduced and native focal length, you will get different pixel scales. Let's say that at native focal length your pixel scale is 1"/px, and reduced you have 2"/px. In the first instance, a pixel will gather all the light from a 1"x1" area of the target. In the second instance, reduced, a pixel will gather photons from 2"x2". Because of the way a telescope works, all the photons originating from said areas that fall on the aperture will be focused onto the given pixel. In the 2"x2" case, we simply have more signal collected in the same amount of time in comparison to the 1"x1" case. Because of the nature of the noise sources present, more signal in a given amount of time means better SNR (some sources stay the same per pixel - like read noise and dark current noise, while some have a square root dependence on the level of signal - like shot noise and LP noise). Just to make it clear why this is, observe the following:

1, sqrt(1) = 1, ratio of those is 1/1 = 1
4, sqrt(4) = 2, ratio of those is 4/2 = 2
9, sqrt(9) = 3, ratio of those is 9/3 = 3
....

In other words - you increase the signal, and the ratio of that signal to its associated noise increases (SNR), while the other noise sources remain the same per pixel. I'm just showing here that more signal per pixel will always produce better SNR, even if we account for all the noise sources (I can do the precise math formula for SNR if you want - it will still show the same). Btw, shot noise is equal to the square root of the signal because light comes in photons - Poisson distribution. For the same aperture, a pixel covering more sky will have better SNR than a pixel covering less sky.

You have mixed resolution into all of that, and yes, you are right - in some cases a system with a reducer will lose some of the high frequency components compared to the unreduced system - there will be some detail loss. But let's look at what happens with SNR if you upsample the image - do you lose it? Here is a little test: we start with an empty image and add gaussian noise of magnitude 1. We measure it, and it is indeed 1. Now I upsample the image by x2 and again measure its noise to see what we get (a sketch of this test is below). The noise did not increase - in fact it decreased somewhat (I'll explain why) - and we know that the signal remains the same if we upsample the image (resizing does not make an image brighter or fainter). The SNR of the upsampled image remained the same, or to be precise, it improved a bit. There is no other conclusion but to say that in your mentioned workflow, for the same total time, the system with the reducer will provide better SNR, even if you end up upsampling the resulting image (which I would not personally do - nothing is gained by it except a larger scale but somewhat blurry image - I prefer a smaller scale sharp image instead).

Just a final note - an upsampled pure noise image has less noise because noise is random and is distributed over all frequencies. When you upsample something, you are in fact missing the highest frequencies - their value is 0 - which means that the noise component at those highest frequencies is also 0, but pure noise should be distributed over all frequencies, so it should also have a component at the highest frequencies. By upsampling we effectively removed noise from the highest frequencies, and the total noise has to go down because of this (it is equal to the sum of noise over all frequencies).
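A minimal numpy/scipy sketch of that upsampling test (image size, seed and interpolation order are mine; the original measurement was done in an image editor):

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=(256, 256))  # pure gaussian noise, sigma = 1
print(noise.std())                             # ~1.00

up = zoom(noise, 2, order=3)                   # x2 upsample, cubic interpolation
print(up.std())                                # below 1 - interpolation adds no
                                               # noise power above the original
                                               # sampling frequency
```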
  10. I think it is better to look at it this way: a CCD camera with 5e read noise and a certain pixel size - let it be for example 4um - with hardware binning effectively produces a camera that has 5e read noise and 8um pixels. A CMOS camera with 2e read noise and the same 4um pixels, with software binning, effectively produces a camera that has 4e read noise and 8um pixels (see the sketch below). Regardless of the resulting read noise, one can deal with it by increasing the single exposure duration (fewer subs for the same total imaging time) - as it again adds, like you said, when stacking subs - and by increasing sub length you can make it much smaller than the other noise sources (a bit harder for narrowband, but doable).
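The 4e figure comes from per-pixel read noise adding in quadrature; a minimal sketch (function name is mine):

```python
import math

def software_binned_read_noise(read_noise_e, n=2):
    """Read noise of an n x n software-binned superpixel: the read noise
    of the n*n summed pixels adds in quadrature."""
    return math.sqrt(n * n) * read_noise_e

print(software_binned_read_noise(2.0))  # 4.0e for a 2e CMOS, 2x2 software bin
# Hardware binning (CCD) reads the combined charge once,
# so the 5e camera stays at 5e after binning.
```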
  11. I do. I owned an ST102 - that is a 102mm F/5 scope - and chromatic aberration was horrible on bright targets, but that is not what the scope was intended for - it is a wide field scope. I sold that one and got an Evostar 102 F/10. Yes, it does have blue fringing, but I found that the Baader Contrast Booster helps a lot with that. The Moon is almost clear of CA with it and still has a rather neutral tone. I briefly observed Jupiter with it on one occasion, and again there was no bright purple halo with this filter. You can use a Wratten #8 yellow as a cheaper alternative, but it will give a slight yellow cast to the image. What you can further do to reduce chromatic aberration is to use an aperture mask. It reduces aperture and increases F/ratio at the same time. It will limit the max usable magnification, but will provide a much clearer image. For example, using an 80mm aperture mask you get an F/12.5 scope. If you look at the above posted table, you are in the green zone and still have a scope capable of delivering around x160 magnification. I'm sorry, but that simply won't work. You can image with such a scope though and get some decent results. You don't even need a mono camera for that. There are a couple of tricks that you need to employ in order to minimize chromatic aberration issues (one of the problems is that you will no longer have an F/5 scope, but that does not matter - it is aperture at a given resolution that determines the speed of the system). Here is an example. First, the ST102 F/5 scope without any "tweaking": the image is rather poor, as it was one of my first attempts with an OSC camera (small sensor) and this scope. The camera had no cooling and it was a QHY5IILc I believe - so not a very good camera. But that is not the point - the resolution is terrible because of CA issues and there is large blue bloat around the stars (and red around smaller stars). Now here is an image done with the same scope and a different OSC camera - but not too different. Still 3.75um pixel size, still uncooled - it is an ASI185. And later the same target, a bit smaller field but better color correction: as you can see - no fringing, and the images are quite "decent". The trick is to use an aperture mask (66mm) and a Wratten #8 filter - or that is what I used. No special processing was done to remove bloat from the stars.
  12. Not true. Given a certain unbinned SNR, regardless of the dominant noise source, bin x2 will provide an SNR increase by a factor of 2. It acts on the total noise - dark noise, shot noise, LP noise and read noise combined.
  13. I'm not sure why this is causing so much headache to people - it is in fact simple. Using a focal reducer will in fact increase the speed of reaching target SNR regardless of the fact that: 1. the whole target fits both the unreduced and reduced FOV, and 2. the same aperture is used and the total number of captured photons from the target remains the same.

When we talk about SNR in terms of imaging it is rather simple - one is interested in the number of photons per pixel - that is what counts towards the SNR. It is the signal level per pixel that goes into that equation. Using a focal reducer means that (from the above two points) the same amount of light from the target is divided among a smaller number of pixels (coarser sampling rate). Each pixel therefore receives more signal, and that means it has better SNR (for the same total imaging time - or, the system is faster to reach target SNR). Although the common belief is that this holds only for extended targets, it is actually true for stars as well, to some extent. A star profile is about the same expressed in arcseconds (FWHM or another measure), and when we decrease the sampling rate, the star profile is again spread over fewer pixels. Stars are almost never a single pixel unless the image is hugely undersampled - so they are in fact spread over a number of pixels. The relation between resolution and spread is not as straightforward as with extended targets, but in principle that is what happens - increase the sampling rate and you decrease SNR; decrease the sampling rate and you increase SNR.

Using a focal reducer is not part of the F/ratio myth. It works and it speeds things up. The F/ratio myth is about something else - it is about saying that a fast scope will always be faster than a slow scope. That is not true, because it does not take into account pixel size (again, how much light is reaching individual pixels). A slow scope with large pixels can be faster than a fast scope with smaller pixels. That is the F/ratio myth. When using the same camera / same pixels, a reducer does raise the speed of reaching target SNR.
  14. Ah wait. You are looking for a single item and not a general recommendation for students to get for themselves? Here is an interesting item then - it offers both a simple, no computer / laptop approach for use in the field, and also the advanced feature of set point cooling (it will need a power source, like a battery or something). It is a bit on the heavier side.
  15. In the "economy" price range, I'm not sure there will be much of a difference in sensor quality. Yes - go with Canon. I've got the impression that Canon raws are the least fiddled with by camera firmware, and that is a bonus. Pixel sizes are not as important for lens / short scope wide field setups, and SA dispersion can be controlled by the lens, so again not much impact from the camera body. I think that the choice of lens is going to be more important here. It needs to be relatively fast with good star shapes. It needs to have some common filter thread, and there should be a thread reducer to 1.25" in order to use the SA front mounted. One thing that you can recommend would be astro modding - removing the anti-alias filter and replacing the UV/IR cut filter. If you can find an online tutorial for that / a video of some sort, go with the body that is used in the tutorial. Second hand cameras can be really cheap, and there is no need for a new unit to start with.
  16. I thought that too, but look at the diameter of the "T2" part - it is about the same as, if not wider than, the 2" nosepiece. I don't think it is in fact a T2 / 2" adapter. It might be an M48 / 2" sort of extension tube with a "limiter" - but my guess is that it is indeed related to a DSLR, like @Swoop1 said.
  17. The ASI224 does not have an integrated UV/IR cut filter, and one needs to be provided if required for the application (the cover window is only AR coated). It is also very sensitive in the IR part of the spectrum (even more so for the color camera, because above ~800nm the sensitivity of the bayer matrix is equal across channels and the camera can effectively be treated as mono in that range). Here is the published spectral response for it: although the graph is cut off below 400nm, we can sort of guess it goes down to about ~320nm. One thing to note when trying to capture the "whole" spectrum that the sensor is capable of recording is the overlap of different orders. In order to capture the full spectrum you need to capture it in "pieces". If the camera is sensitive down to 320nm, the second order spectrum will start overlapping at twice that value, so already at 640nm (see the sketch below). I'm guessing that the DSLR is not astro modded and has a UV/IR cut filter in place - that is why it is limited to the 420-680nm range. One way to split the spectrum would be to use a UV/IR cut filter for the 400-700nm range, and something like a 495nm long pass from Baader (yellow) or a 570nm long pass (again from Baader - orange) to record the NIR spectrum. With the 495nm long pass, overlap will start at around 990nm, and with the 570nm long pass you don't have to worry about the second order, since the sensor is not sensitive above about 1100nm and there will be no overlap. For the UV part of the spectrum, don't use any filters and extract the 300-500nm range, for example.
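The order-overlap arithmetic as a minimal sketch (function name is mine) - second-order light of wavelength L lands at the same place as first-order light of 2*L, so a range starting at the cut-on wavelength is clean up to twice that value:

```python
# Second order of wavelength L overlaps first order at 2 * L, so the
# usable first-order range runs from cut_on to 2 * cut_on.
def second_order_overlap_nm(cut_on_nm):
    return 2 * cut_on_nm

print(second_order_overlap_nm(320))  # 640 nm - unfiltered sensor (~320nm cut-on)
print(second_order_overlap_nm(495))  # 990 nm - with a 495nm long pass
print(second_order_overlap_nm(570))  # 1140 nm - beyond sensor sensitivity (~1100nm)
```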
  18. There is a bunch of options really. In CMOS "arena" there are ASI120, ASI178, ASI290 (and of course other vendors based on those sensors). In CCD "arena" look at Moravian G-1200 for example - it has similar specs (or Atik GP - same sensor, a bit less expensive).
  19. An important part of the "equation" of the microlens artifact seems to be missing, so I'll mention it for all who want to get to the bottom of this: filter type and spacing. I don't think that the microlens effect exists on its own. It needs some sort of reflective surface for interference effects to kick in. I've heard the explanation that the sensor cover window is not AR coated (not the chamber cover window, but the sensor's - much closer to the sensor surface) - but I don't necessarily buy into it. If it were in fact true, the microlens effect would be fairly similar in size and only impacted by the speed of the scope and the wavelength. I'm not saying that the sensor cover window is not playing a part - I'm just saying that I think it is due to a combination of multiple reflective surfaces, including any filters used. Many people use 1.25" filters with the ASI1600, and that means placing them fairly close to the sensor. Slower scopes allow for placing them further away. Maybe the general filter distance plays a part and is partly responsible for the fast scopes - no artifact / slow scopes - artifact phenomenon.
  20. I was expecting something along these numbers. This splitting approach works best for two reasons:

1. Pixel blur. It makes a small but measurable difference. The larger the pixel surface, the larger the pixel blur. With the regular binning method (both hardware and software) we are in fact increasing pixel size. This adds a small amount of blur to the resulting data. With splitting of subs we leave the pixels as they were, and the result will have a slightly smaller FWHM in comparison to the regular bin method because of that (a sketch of the split is below).

2. There is also a small SNR boost over regular binning. This has more to do with the way the data is stacked than with the binning process itself. The resampling needed to align frames (although resolution does not change, we need sub pixel shifts and sometimes a bit of rotation to align images, so we must resample / interpolate the image) has an effect on both noise and signal. Depending on the type of resampling used, it can have a significant effect. The signal can be additionally blurred while the noise is cut down as a result. This is why I recommend Lanczos resampling - it offers the best results for properly sampled data. It will cut off noise frequencies above the sampling frequency, and it will add almost no blur to the image if the image is sampled properly. Coupled with the fact that we did not change pixel values by splitting, so the data keeps its original noise distribution (unlike with regular binning, where noise is reduced and the data is a bit blurred due to pixel blur), this enables the resampling itself to do a bit of noise reduction by "cleanly" cutting the higher noise frequencies.

Yes, that is precisely what I used. If you work with OSC cameras as well, you can use this tool to separate colors, but you need to know your bayer matrix order. After you split your subs into channels, you can continue using an RGB type of processing approach - you will have a set of red subs, a set of green subs (twice as many - better SNR, which is good, as luminance depends mostly on green) and a set of blue subs.
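A minimal numpy sketch of that sub-splitting idea (function name is mine; the actual tool is the ImageJ plugin in the next post) - each of the four offsets keeps the original pixel size, at half the sampling rate:

```python
import numpy as np

def split_bin(sub):
    """Split one mono sub into four half-resolution subs by taking every
    other pixel at the four possible (row, col) offsets. Pixel size is
    unchanged, so no extra pixel blur is introduced."""
    return [sub[0::2, 0::2], sub[0::2, 1::2],
            sub[1::2, 0::2], sub[1::2, 1::2]]

# x4 as many subs in the stack -> SNR improves by sqrt(4) = 2,
# the same gain as a regular 2x2 bin.
sub = np.random.default_rng(1).normal(size=(100, 100))
print([q.shape for q in split_bin(sub)])   # four (50, 50) subs
```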
  21. It should be fairly easy to do. Just have your subs calibrated and saved in a single folder in 32bit fits format. Download ImageJ and put it somewhere on your file system (it does not install, you can run it from that folder). I can either provide you with the source code or the compiled plugin. The latter is simpler, as you just need to copy it into the plugins folder inside the ImageJ folder (it is already there). You need to restart ImageJ, if it was open, for it to recognize the new plugin. After that it is only a matter of:

1. File / Import / Image Sequence - here you select the first image in your folder and it should pick up all the others (there are options to "filter" which subs you want to load). This will open a "stack" in ImageJ.
2. Next you open Plugins / Sift bin V2 and select the following settings: it should create another stack of images that contains x4 as many subs and has twice smaller height and width.
3. Save As / Image Sequence will let you save those subs - choose fits format, give it a name and a number of digits (they will be labeled, for example, name001, name002, ...).

That is it. The plugin is attached: Sift_Bin_V2.class
  22. I can send you the source of an ImageJ plugin that will do it, if you want? Not sure if you are familiar with that software package - it is free and written in Java, so it works on various operating systems. At 1.16"/px, bin x2 will produce a sampling rate of 2.32"/px, and that is just a bit coarser than what is needed for 3.14" FWHM stars, which should be around ~1.95"/px - so not much loss, but there will be a bit of an increase in FWHM due to this. The rest is down to pixel blur. What resampling method are you using when registering your subs? I think there is an option in PI to select Lanczos resampling (maybe it is even the default) - you should use that one for sub registration after binning.
  23. What was the original sampling rate? Btw, can you try the "split" bin to see if you retain some of the sharpness and get the same SNR improvement? I'm not sure that there is an option in PI to do a split bin, but I've seen a script somewhere that does it (it is not actually designed for that, but for splitting a bayer matrix into 4 color subs - R, 2xG and B; if you run it on a mono sub, you will end up with 4 subs at x2 coarser sampling rate but the same pixel size - you increase the number of stacked subs by a factor of x4 and that leads to an overall improvement in SNR by a factor of x2 - same as binning).
  24. I think that the AZ-GTI is more versatile, as it can: 1. operate in alt-az mode for observing, and 2. guide in both axes. It really depends on the focal lengths that you plan to use for AP. I don't know which one is more precise in tracking and guiding, though. If you want a really wide field, maybe the AZ-EQ Avant + tracking motor is enough.
  25. I don't think the above refers to the usefulness of on-chip binning with CMOS in terms of SNR gain. It refers to the fact that doing it on chip provides only one benefit - smaller files. There are several benefits to doing it in software - look at my post above. Both on chip and in software will provide the same SNR gain and will be equal if "standard" binning is used.