Everything posted by vlaiv

  1. Sub normalization is very important for sigma clip to work properly (it is called background calibration in DSS). Unfortunately - the DSS version is not the best way to do this, as it only considers the mean / median pixel value of the sub. It should instead account for both LP and atmospheric extinction, effectively performing a linear fit between two subs. A small number of subs with background calibration turned off - or even a large difference in transparency between frames - will make the sigma clip algorithm fail, as pixel values will have high dispersion and the statistics will fail to isolate hot pixels.
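To illustrate what I mean by a linear fit between two subs (rather than matching only the mean / median), here is a minimal numpy sketch. The function name and the simple iterative outlier rejection are just for illustration, not how any particular stacking software does it:

```python
import numpy as np

def normalize_to_reference(sub, reference, clip_sigma=3.0):
    """Linearly fit one sub against a reference sub (gain + offset) so that
    both the LP offset and transparency / extinction differences are removed
    before sigma-clip stacking."""
    a = sub.ravel()
    b = reference.ravel()
    mask = np.ones(a.shape, dtype=bool)
    # iterate a few times, rejecting outliers (stars, hot pixels) from the fit
    for _ in range(3):
        gain, offset = np.polyfit(a[mask], b[mask], 1)
        residual = b - (gain * a + offset)
        sigma = np.std(residual[mask])
        mask = np.abs(residual) < clip_sigma * sigma
    return gain * sub + offset
```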
  2. Yes, it would be quite automated. It would read the FWHM of all subs and show you the distribution. You would only need to select a target FWHM somewhere in that distribution and it would do the rest. There is a similar thing that can be done to correct known aberrations that might also be of interest. I see quite a few people imaging without a coma corrector to start with. This approach could use synthetic coma to reverse its effects. It would have some drawbacks - since coma depends on distance from the optical axis - it would have to be variable PSF deconvolution and as such it would introduce more noise in the outer regions of the FOV, but I guess it is worth a try to implement something like that.
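The FWHM-measurement part is straightforward in principle - fit a 2D Gaussian per star and convert sigma to FWHM. A rough sketch (names are illustrative; a real tool would use a proper star detector rather than hand-picked cutouts):

```python
import numpy as np
from scipy.optimize import curve_fit

def star_fwhm(cutout):
    """Estimate FWHM (in pixels) of a single star cutout by fitting a
    symmetric 2D Gaussian. FWHM = 2*sqrt(2*ln 2)*sigma ~= 2.355*sigma."""
    y, x = np.indices(cutout.shape)

    def gauss2d(coords, amp, x0, y0, sigma, bkg):
        xx, yy = coords
        return (amp * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2)) + bkg).ravel()

    guess = (cutout.max() - cutout.min(),
             cutout.shape[1] / 2, cutout.shape[0] / 2,
             2.0, float(np.median(cutout)))
    popt, _ = curve_fit(gauss2d, (x, y), cutout.ravel(), p0=guess)
    return 2.355 * abs(popt[3])

# measure many stars per sub, take the median per sub, then look at the
# distribution over all subs and pick the target FWHM from it
```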
  3. I have an algorithm that will keep all the data and use most of it. I just don't have time to implement it - but hopefully that will change soon. We often use deconvolution on the final stack to sharpen it up - but that is really not the best way to use deconvolution. Most deconvolution algorithms are designed with a simple premise in mind - there is some read noise, some Poisson / shot noise and the image has been blurred with a known kernel. When we deconvolve a stacked image - it is no longer simple statistics. We have changed the noise statistics by using interpolation for aligning, and stacking combines the individual PSFs / blur kernels into a very complex shape. We then use an approximate blur kernel for deconvolution.

My idea goes like this. We split the data into three groups - subs at target FWHM (with very small variation from target FWHM), subs below target FWHM and subs above target FWHM. We stack the subs at target FWHM to get a reference PSF. For each sub below target FWHM we derive a convolution kernel by deconvolving reference stars with sub stars and averaging the result. We use this kernel to convolve stars in the sharper subs to make them the same as the reference frame. We do the similar but opposite thing with subs that have higher FWHM - we find a deconvolution kernel and then deconvolve those subs. After this operation all subs will have about the same FWHM (this will also correct for star elongation if we form the reference from subs with round stars). Subs from the below-FWHM group will have improved SNR, while subs from the above-FWHM group will have worse SNR.

In the end we need a very good adaptive stacking algorithm that can take into account different levels of SNR (per pixel, not per image, as there is no single SNR value per image). I already have that bit implemented. The above should produce better SNR than throwing away poor subs - without loss of resolution.
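For the kernel-derivation step, something along these lines would do - a minimal numpy sketch of deriving a matching kernel by regularized division in the Fourier domain and applying it. The regularization constant and function names are my own assumptions; in practice you would average kernels from many stars, and going the other way (deconvolving subs worse than the reference) needs a proper deconvolution algorithm rather than naive division:

```python
import numpy as np
from scipy.signal import fftconvolve

def matching_kernel(sub_psf, reference_psf, eps=1e-3):
    """Kernel k such that sub_psf convolved with k ~= reference_psf.
    Derived by regularized (Wiener-like) division in the Fourier domain.
    Both PSFs are assumed to be same-sized, centered star stamps."""
    sub_f = np.fft.fft2(np.fft.ifftshift(sub_psf))
    ref_f = np.fft.fft2(np.fft.ifftshift(reference_psf))
    kernel_f = ref_f * np.conj(sub_f) / (np.abs(sub_f) ** 2 + eps)
    return np.real(np.fft.fftshift(np.fft.ifft2(kernel_f)))

def match_sub_to_reference(sub, kernel):
    """Blur a sharper-than-reference sub so its stars match the reference PSF."""
    return fftconvolve(sub, kernel, mode="same")
```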
  4. Depends on what kind of subs you measured. To be sure - you can plate solve a single sub to get the exact sampling rate.
- Without focal reducer, a regularly debayered image will have a sampling rate of 3.76 * 206.3 / 2000 = ~0.388"/px
- Without focal reducer, a splitCFA image (or rather a color channel sub) will have twice the above, so 0.776"/px
- With focal reducer at 1430mm of FL, a regularly debayered image will have 3.76 * 206.3 / 1430 = ~0.542"/px
- splitCFA + reducer will hence give 1.085"/px
To be honest, I'm not surprised with those FWHM results. Even slightly worse than excellent seeing will easily push resolution above 3" FWHM (or to around 2"/px equivalent).
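The arithmetic above as a tiny helper, in case anyone wants to plug in their own numbers (206.3 is just the usual arc-seconds-per-radian constant scaled for microns and millimeters):

```python
def sampling_rate(pixel_um, focal_length_mm, osc_split=False):
    """Sampling rate in arc seconds per pixel.
    With split-CFA data the effective pixel spacing per color is doubled."""
    factor = 2.0 if osc_split else 1.0
    return factor * pixel_um * 206.3 / focal_length_mm

print(sampling_rate(3.76, 2000))                   # ~0.39 "/px, debayered, no reducer
print(sampling_rate(3.76, 2000, osc_split=True))   # ~0.78 "/px, splitCFA
print(sampling_rate(3.76, 1430, osc_split=True))   # ~1.09 "/px, splitCFA + reducer
```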
  5. Actually - maybe try it, you might be surprised. Proper sampling rate can be deduced from the data - look at the average FWHM in arc seconds in your subs and you can get a sense of the sampling rate required. The relation is rather straightforward: FWHM / 1.6 will give you the sampling rate. If you have 3.52" FWHM or larger - 2.2"/px is actually the proper sampling rate for that case. Even if your FWHM is smaller than 3.52" you might not lose much in terms of detail, yet you might gain quite a bit in terms of SNR.
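The same rule of thumb as a one-liner (purely the FWHM / 1.6 arithmetic from above):

```python
def target_sampling_rate(fwhm_arcsec):
    """Rule of thumb: sample at roughly FWHM / 1.6 arc seconds per pixel."""
    return fwhm_arcsec / 1.6

print(target_sampling_rate(3.52))  # 2.2 "/px
```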
  6. From the list of equipment in the signature - the image is most likely from an APS-C sized sensor - that is ~28mm diagonal, so any coma at the edges would be the same as at the edge of a 32mm eyepiece (which has a field stop of 27mm). Here is the level of coma in an 8" SCT: 0.25 degrees at 2000mm of focal length corresponds to an 8.72mm radius (while the above would be at ~14mm radius - almost twice as far out). The problem is that a 32mm plossl won't magnify the image enough to properly resolve coma at that level. If we look at the spot diagram - coma is 4-5 times larger than the airy disk, and the airy disk for an 8" scope is 1.28". A 32mm eyepiece will give x62.5 magnification and 1.28" magnified x62.5 is 80 arc seconds or 1.33 arc minutes. The limit of resolution of the human eye is about 1 arc minute - so coma at 6-7 arc minutes is going to be only barely resolved - and certainly not very apparent. In images, on the other hand, it will be visible, as long focal length often means a high sampling rate - like 1"/px or less with SCTs - and coma will then be at least 8-10px in size.
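The eyepiece arithmetic spelled out (the 1.28" airy disk and 4-5x coma factor are the figures quoted above, not exact values for every 8" SCT):

```python
focal_length_mm = 2000
eyepiece_mm = 32
airy_disk_arcsec = 1.28   # quoted airy disk size for an 8" scope
coma_factor = 4.5         # coma spot roughly 4-5x the airy disk at the field edge

magnification = focal_length_mm / eyepiece_mm        # x62.5
apparent_airy = airy_disk_arcsec * magnification     # ~80" = 1.33 arc minutes
apparent_coma = coma_factor * apparent_airy / 60     # ~6 arc minutes at the eye
print(magnification, apparent_airy, apparent_coma)
```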
  7. Exactly. If you think about it - a color sensor / bayer matrix is nothing more than RGB filters on top of pixels. It is just a matter of grouping each color into an image - and that is what splitCFA does. You'll also notice that each of those 4 images is in fact half the width and height of the full sensor. This is the reason why color sensors effectively have lower resolution than the one indicated by pixel size (pixels of the same color are really spaced "two pixels apart", as they are interleaved depending on the color filter on them).
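This is really all splitCFA does - slicing the mosaic into four half-sized planes. A numpy sketch (the channel order below assumes an RGGB pattern; other patterns just shuffle the slices):

```python
import numpy as np

def split_cfa(mosaic):
    """Split a raw Bayer mosaic (RGGB assumed) into R, G1, G2, B planes,
    each half the width and height of the full sensor."""
    r  = mosaic[0::2, 0::2]
    g1 = mosaic[0::2, 1::2]
    g2 = mosaic[1::2, 0::2]
    b  = mosaic[1::2, 1::2]
    return r, g1, g2, b
```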
  8. That does not seem right. Here is what you should do:
- calibrate
- SplitCFA
- take images from one green directory and move them into the second green directory. This is because the bayer matrix has one red, one blue and two green pixels - the 4 resulting directories will contain: red, blue and two times green parts of the image. You want all green subs in a single place.
From this point the workflow does not differ from a LRGB workflow - you now have "per filter" subs in your directories (minus luminance, as an OSC camera does not produce that). Next steps can be:
- IntegerResample (or this can be done later)
- Integration (integrate each color separately but align them all against the same reference frame)
- RGB combine
- (you can do IntegerResample at this stage as well if you haven't done it above)
- Process your image as you normally would
  9. Yes, splitCFA sounds like a proper name for that procedure. Drizzle on the other hand is something you want to skip - except for bayer drizzle, that one is legit, but it is used when you want to make color sensor have the same resolution as mono version (which means that you are not over sampled at resolution given by pixel size).
  10. If my "math" is correct - every black hole has the same gravitational field at its event horizon - that is why we call it event horizon, it is the place where gravity is so strong that it curves light paths so much that light cannot escape black hole any more (no event beyond event horizon can be seen as all light trajectories from it curve back towards singularity).
  11. I see why you might think that, but consider this - which one is the original aperture then: this: or this one: In either case - the central dot is strangely displaced from the original aperture - and that happens with coma, so the actual effect is a combination of coma and astigmatism in the SCT rather than the shadow of the baffle tube (which would show as a decrease in brightness rather than a blockage, since the defocused pattern is an image of the entrance aperture - everything away from the entrance pupil will be "out of focus").
  12. Why do you think that? Here is a simulation of defocus + coma versus the actual image: The actual image shows a bit of astigmatism as well, which I did not simulate (that is why the outer stars have an "almond" shape - because of the combination of coma, astigmatism and defocus).
  13. If binning is to work - you need to be careful what sort of debayering you are using. Any sort of interpolation debayering will artificially increase the resolution of the image and further binning of that data will be a "null operation" (like multiplying and dividing by the same number). In order for binning to work - you need to treat OSC data as it is - at lower resolution to start with. The best way to debayer the data is "split debayering". If you are using PixInsight - there should be a script to do that. @ONIKKINEN mentioned it a few times, but since I don't use PixInsight - I don't really remember what it's called or how it's used.
  14. It depends on whether you use a CMOS or CCD sensor. With a CCD sensor it makes sense to use hardware binning. With a CMOS sensor - it is really the same. I prefer doing it like you did up until now - download regular subs and then decide afterwards what sort of bin factor to apply. I even apply different bin factors depending on what I'm trying to do with the data. I might use one bin factor for the image, as I like the ability to zoom in to see detail - and a different, higher bin factor to produce some sort of data diagram. For example - if you want to produce a surface brightness chart - you want higher SNR data and you don't really care about small details in the image - then you can use a higher bin factor. There are also several different ways to bin data - and you can choose among them in processing. You can even bin after stacking (although this can have different results depending on what sort of alignment interpolation you used) - just make sure you do it while the data is still linear, prior to any processing.
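Software binning itself is trivial - here is the plain average NxN bin as a numpy sketch (a real implementation would also decide what to do with edge rows/columns when the dimensions are not divisible by the factor; here they are simply cropped):

```python
import numpy as np

def software_bin(image, factor=2):
    """Average-bin an image by an integer factor.
    Improves per-pixel SNR by roughly `factor` at the cost of resolution."""
    h, w = image.shape
    h, w = h - h % factor, w - w % factor          # crop to a multiple of factor
    view = image[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return view.mean(axis=(1, 3))
```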
  15. Yes, it does take a finite amount of time in the reference frame of the falling person. I however wonder if the falling person ever reaches the singularity or even passes the event horizon. All that light from the universe falling after the person (but at the speed of light) gets blue shifted - the closer to the event horizon, the more so - and at some point it will be seriously gamma in nature and will possibly rip anything material apart? There are so many things at play that it is really hard to imagine what will actually happen.
  16. I think you have the cropping part reversed - cropping is needed if you have a reducer, for two reasons. First, edge of field aberrations (if they exist in a setup) will be worse with a reducer (not true for a matched reducer / flattener, as those correct for aberrations), and second is framing - reducers often give more space around the objects and if you want to emphasize them - you want to crop away that excess space.
  17. Ok, so the CCD suitability calculator is wrong in a couple of ways (although people still seem reluctant to accept that). Not really sure how to best explain this one. To me it is obvious, but that is simply because of the way I think about things (given my background and previous work in graphics, math, signal processing ...). I guess the simplest analogy is measurement and measuring accuracy. You can measure a tall building with a yard stick to good precision, right? You can't measure the height of your desk with a yard stick very accurately, right? In the first instance - if you round to a length of the stick - your error will be a few percent. In the second case - you can miss by up to 50% - which is a large error. In order to measure something accurately - you need to have enough precision, and that precision depends on the size of the thing you are measuring.

The same goes for an image. How densely packed your pixels need to be (it is actually the "distance" between two pixels that counts and not the size of a single pixel) depends on how big the detail is that you want to measure / record. It turns out that the detail you get in astro images depends on seeing, mount performance and aperture size. It is often not as high as we would like (and think it is). If there is no detail to be measured - using high resolution will produce the same results as using low resolution (measuring desk height is equally efficient in millimeters and in microns within a given error of measurement). Since images are impacted by noise - if the measurement error is lower than the noise - you won't even see the measurement error and for all intents and purposes - both measurements will yield the same results. (In reality there is an additional component - there really is no detail beyond a certain scale - but that has to do with the wave nature of light, and frequencies and Fourier transforms and the Nyquist sampling theorem and so on.) For most amateurs, the top limit seems to be around 1.5"/px. Sometimes, people with excellent mounts, great skies and large apertures can go down to 1"/px - but I don't think I've ever seen an image that is effectively below 1"/px - unless it comes from professional telescopes that are often more than 1m in diameter and situated at high altitude sites with great seeing.

Ok, so why bin or match pixel size to what we can capture? Even if there is no detail to be recorded - why does it hurt to use smaller pixels? Well, it hurts on two different levels - the first is real and the second is aesthetic.

1. We often base our decisions on "speed" - we don't want to expose for 20h to get a good image, so we want a fast setup. People value small F/ratios and use focal reducers for that same reason - but what they should be looking at instead is avoiding over sampling. If you image at 1.1"/px instead of 1.5"/px - it will take you (1.5 x 1.5) / (1.1 x 1.1) = ~1.85, or 85% longer, to reach the same SNR. This is simply because you spread the light over a larger surface and lower the recorded signal - less signal, lower SNR, simple as that. So instead of looking at F/ratio and focal reducers and all that - people should really look at their effective pixel size and optimize that. Speed of the system is best described as "aperture at resolution". Fix the resolution to what you can realistically achieve given your sky and mount (and potential aperture size) and then throw as much aperture at that resolution as you possibly can. Just make sure that you can also vary pixel size (one part of the resolution equation) - either by changing camera or by binning.

2. The aesthetic reason is - max zoom.
Many people are unaware of this because they like to look at the whole image. Your image above is rendered by SGL on my desktop computer at 67% of its original size - and it looks good because at that scale it is properly sampled. I like to look at small detail in the image next to the main object - I like to see background galaxies or small features, and I often look at the image at 100% - to be able to see all of that. If the image is oversampled - it just looks blurry and bad at 100%, so the question is - if you did not want people to look at the image at full size - why did you bother to make it that size in the first place? Alternative view - if you are going to make an image - then make it viewable / nice looking at all zoom levels up to 100% (most people know that zooming past 100% simply yields poor results, as the image can't record detail past the 100% level). In a way - over sampling is the same as zooming in past 100% - you get the same result - things get bigger but there is no extra detail in the image - everything looks bigger and blurry.
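The 85% figure above is just the ratio of pixel areas on the sky - something like this:

```python
def exposure_penalty(target_res, used_res):
    """How much longer you need to expose at `used_res` (arcsec/px) to reach
    the same per-pixel SNR you would get at `target_res` (arcsec/px)."""
    return (target_res / used_res) ** 2

print(exposure_penalty(1.5, 1.1))   # ~1.86 -> about 85% longer
```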
  18. I was basing my assumption on the fact that light gets blue shifted when it falls into a gravity well - and hence all clocks are seen to tick faster (the same reasoning as light escaping a gravity well being red shifted and clocks being seen to slow down). You are quite right - we won't be able to see the light that has not reached us yet, but this poses another question: will a person falling into a black hole still see the end of the universe in a "flash" - the same one a person floating in empty space would see over a huge amount of time? If "universe clocks" seem to speed up as you descend into a black hole, and light that can't reach us while falling into a black hole won't reach us - is that equivalent to the "big rip" that will happen due to the accelerated expansion of the universe? I'm just wildly guessing here, but what if the falling observer does indeed see the end of the universe as it will be - so as to preserve the idea of time speeding up for the rest of the universe - only the picture they will see is not what we think it is, but rather what will be visible (or in fact not visible) due to galaxies moving away from us and the big rip happening in the end?
  19. Not sure what you are asking. I'll give my view on the subject of the reducer and if I fail to answer your question, then let me know so I can be more specific. There are two things that I can think of at the moment that are important for a discussion on focal reducers and RCs.

1. Usability of the focal reducer. This is the question of usable / corrected field with a given scope. Because of data published by TS, I was under the impression that the focal reducer won't do much in widening the FOV on large sensors. TS states that the RC10, for example, has a corrected field of about 30mm. When using the focal reducer, those 30mm get shrunk to 30 * 0.67 = ~20mm, so only the central 20mm of a 28mm APS-C sensor (or about 3/4 of it) should have good stars. The reasoning comes from the fact that this is a simple reducer and not a focal reducer / field flattener that corrects for curvature (which reduces other aberrations as well - or makes them less visible). I've read somewhere that although this is not a FF/FR - it does flatten the field a bit. In any case - the above would make it not quite usable on an APS-C sized sensor. RC8" users report that they had to use it at x0.72-0.75 with 4/3 sized sensors to get good results (changing the reducer-sensor distance changes the reduction factor) - which is in line with this, since the RC8" also has ~30mm of usable field and 30 * 0.72 = 21.6mm (the ASI1600 has a diagonal of 22.2mm). This is the reason I was surprised to see a much larger corrected field in your image with the RC10, as TS claims 30mm for it as well.

2. Imaging speed. Many people equate F/ratio with speed of image capture. This is only true if one keeps pixel size constant (does not bin or use a different camera). Using a focal reducer therefore "increases" the speed of the system as it reduces the F/ratio. Given that we over sample with long focal length scopes - this is not correct, as we need to bin our data. For this reason - it is better to look at the final sampling rate and decide on the focal reducer based on that and not on F/ratio alone. It can happen that you have a faster system without the reducer than with it, if you can match the sampling rate better with binning. This is particularly true in cases where you need to crop away part of the image due to outer field aberrations (like in the above case) or framing. If you can frame the target without the reducer - then why use the reducer when you can better match the resolution of the image without it?

For example, you have 2000mm of focal length and a 3.76um color camera. Let's say your target resolution is 1.5"/px. Should you use the focal reducer or not (if the target can be framed without it)? The actual sampling rate of an OSC camera is half that of mono, so we have 2 * 3.76 * 206.3 / 2000 = 0.776"/px as the base sampling rate. Bin that x2 and you get 1.55"/px - an excellent match for the target resolution. What about the case with the reducer? You'll have ~1400mm FL (1340, or if you have a slightly different reduction - like in your case, 1400 and something - so let's round that to 1400) and sampling without binning will be: 2 * 3.76 * 206.3 / 1400 = ~1.11"/px. Now that is over sampled if you don't bin and under sampled if you bin x2 (2.22"/px). Most people won't bin and they will in fact have a slower system. Things change if you can actually achieve 1.1"/px (very hard to do) - then it makes sense to use the focal reducer, as 0.776"/px would be over sampling and slower. Now, from what I can measure on your image - you left it at 1.1"/px - and it can be seen, the image is slightly over sampled.
One of these two images was reduced to 66% of its original size and then upscaled back to the original size (which would make it lose detail, if detail is there to begin with), the other is the original without any change. Can you tell which one is which? This means that all the detail in the image could have been captured at 66% resolution - or 1.66"/px. 1.55"/px would be a better imaging resolution, and in this case - it would be faster and better to image without the reducer and bin x2 after you debayer without interpolation.
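The downscale / upscale test is easy to replicate at home - a minimal sketch, assuming scikit-image is available (the 0.66 factor matches the example above; any decent resampling filter will do):

```python
from skimage.transform import resize

def resolution_test(image, factor=0.66):
    """Shrink the image and blow it back up to the original size.
    If the result looks identical to the original, the image was
    over sampled by at least 1/factor."""
    small_shape = (int(image.shape[0] * factor), int(image.shape[1] * factor))
    small = resize(image, small_shape, order=3, anti_aliasing=True)
    return resize(small, image.shape, order=3)
```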
  20. Ah, ok - that is the same one I have. That is probably a copy of the following item: https://www.teleskop-express.de/shop/product_info.php/info/p4955_Astro-Physics-0-67x-Focal-Reducer-for-astrophotography.html - same specs, but the AstroPhysics reducer has been around for much longer. It was designed for slow refractors and other flat field telescopes (F/9 and above) and it turned out that it works well with RCs as well.
  21. You are quite right! I was expecting much worse. Yes, there is some astigmatism, but this reducer does a good job of flattening the image a bit (it's not a flattener but it does have a slight flattening effect). I'll need to revisit it with my RC8" and ASI1600 - it should work well in that combination also. Just one more question - did you use the CCDT67 or the CCD47 (one is the original AP reducer while the other is the TS/Chinese copy)? I have the CCD47 version, and my initial impression on the RC8" was not as good.
  22. Thanks, but I don't speak xisf. Maybe fits? (Don't bother if it is too much hassle to upload it.)
  23. That is quite interesting. 1426 / 2040 = ~x0.7. The effective reduction factor was x0.7 and the ASI2600 has a diagonal of 28.3mm - that would mean the used field was 28.3 / 0.7 = ~40.4mm. TS lists a usable field of 30mm for the 10" version. That is quite a difference in field size.
  24. Given that the ASI2600mc is an APS-C sized sensor and that you are using an x0.67 reducer - how much of the frame did you have to crop away / what is the useful surface of the sensor in this combination?
  25. That camera would be my choice for a large aperture (and focal length) scope. I don't really mind that it has small pixels - there are really no downsides to binning it - even 4x4. You can think of it like this: do you know of a full frame CCD camera that has 15um pixel size, ~90% QE and 6e read noise (with 1.5e read noise and 4x4 binning you get an effective read noise of 6e with this CMOS) that costs $4000? In any case - I stand by my advice from above - figure out a realistic sampling rate for small targets, figure out the focal length and bin factor to get that sampling rate with 3.76um pixel size, and get the largest aperture at that focal length that is corrected over a 43mm field and that you can effectively mount (given your mount and obsy limits).
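The 15um / 6e equivalence is just software binning arithmetic - the pixel pitch scales linearly with the bin factor while read noise adds in quadrature over the binned pixels:

```python
import math

def binned_equivalent(pixel_um, read_noise_e, factor):
    """Effective pixel size and read noise after software binning factor x factor."""
    eff_pixel = pixel_um * factor                       # 3.76um x 4 = ~15um
    # summing factor*factor pixels adds read noise in quadrature:
    eff_read_noise = read_noise_e * math.sqrt(factor * factor)   # sqrt(16) * 1.5e = 6e
    return eff_pixel, eff_read_noise

print(binned_equivalent(3.76, 1.5, 4))   # (~15um, 6e)
```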