Everything posted by vlaiv

  1. That is actually quite a good thing - if one takes advantage of it. The human eye/brain system is most sensitive to noise and detail in the green part of the spectrum (the Y component of the XYZ color space), and green in OSC cameras is a very close match to the Y component. The best processing for this case is to create an OSC RAW -> XYZ color space conversion matrix and then process Y as luminance, using X and Z as color information - see the sketch below.
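A minimal sketch of the idea in Python, assuming an already debayered linear RGB image and the generic sRGB/Rec.709 RGB -> XYZ matrix (a camera-specific matrix derived from color calibration would be the proper choice):

```python
import numpy as np

# Linear RGB -> XYZ matrix for sRGB / Rec.709 primaries, D65 white point.
# A camera-specific matrix would be more accurate for a given sensor.
RGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def rgb_to_xyz(rgb):
    """rgb: (H, W, 3) linear float array -> (H, W, 3) XYZ array."""
    return rgb @ RGB_TO_XYZ.T

rgb = np.random.rand(100, 100, 3)   # stand-in for a debayered linear stack
xyz = rgb_to_xyz(rgb)
Y = xyz[..., 1]                     # process this as "luminance"
X, Z = xyz[..., 0], xyz[..., 2]     # keep these as color information
```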
  2. It does not have amp glow, and the dark current is low and appears to be uniform - which makes it much easier to perform dark scaling (sketched below).
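A rough sketch of dark scaling, assuming master frames are available and the dark signal scales linearly with exposure time (function and variable names here are illustrative):

```python
import numpy as np

def scale_dark(light, master_dark, master_bias, dark_exp, light_exp):
    """Scaled dark subtraction. Works well when dark current is uniform
    and there is no amp glow, since the dark signal then scales linearly
    with exposure time."""
    dark_current = master_dark - master_bias   # pure dark signal
    scale = light_exp / dark_exp               # exposure-time ratio
    return light - master_bias - dark_current * scale
```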
  3. Here is one more quick point about that. I found it very useful to simply generate an image with two elements - a Gaussian PSF and a block of random (Gaussian) noise. Whatever you do to that image will be easily measurable: a change in the FWHM of the Gaussian profile will indicate any loss of resolution, and a change in the standard deviation over the random block will represent the SNR gain. A minimal sketch follows below.
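Something like this, as a sketch (image sizes and FWHM values are arbitrary):

```python
import numpy as np

def gaussian_psf(size, fwhm):
    """Gaussian 'star' of a given FWHM, centered in a size x size patch."""
    sigma = fwhm / 2.355                        # FWHM = 2*sqrt(2*ln 2)*sigma
    y, x = np.mgrid[:size, :size] - (size - 1) / 2
    return np.exp(-(x**2 + y**2) / (2 * sigma**2))

# Test image: a Gaussian star plus a block of pure Gaussian noise.
img = np.zeros((256, 256))
img[64:128, 64:128] += gaussian_psf(64, fwhm=6.0)
img[160:224, 160:224] = np.random.normal(0, 1.0, (64, 64))

# After any processing step: re-measure the star's FWHM (resolution loss)
# and the noise block's standard deviation (SNR change).
print(img[160:224, 160:224].std())
```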
  4. How things have changed over the years. Just a decade ago, cameras like the Atik 314 were perfectly suitable beginner CCD cameras - and those have only ~1400x1000 resolution. Now we are not happy if we can't make a FullHD image. Mind you - most DSOs are in fact small enough to fit in a 1000x1000 image. Even if we somehow manage to sample at 1"/px, 1000px will be almost 17 arc minutes - over a quarter of a degree (about half the diameter of the full Moon).
  5. That is how I process OSC data "by default": split the channels without debayering and stack each as if it were mono. Green gets a x2 "boost" in the number of subs over blue and red that way (which is a x1.41 SNR improvement). A sketch of the split follows below.
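A minimal sketch of the split, assuming an RGGB pattern (other patterns just permute the offsets):

```python
import numpy as np

def split_cfa(raw):
    """Split an undebayered RGGB frame into its four CFA planes,
    each at half the linear resolution of the sensor."""
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    # Stack r and b as mono; g1 + g2 together give green twice the subs.
    return r, g1, g2, b
```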
  6. It is part of the stacking - or rather, a special way of stacking the data. The Bayer matrix is left as is, and each frame is then aligned with the main stack. Pixels are then stacked by color rather than against each other: after a frame is aligned, if a pixel is an R pixel it gets stacked into the R channel of the image, if it is a blue pixel it gets stacked into the B channel, and if it is a green pixel it gets stacked into the green channel. Deep Sky Stacker supports this stacking mode. AutoStakkert!3 (planetary stacking) handles OSC data like that by default. Not sure about other stacking software. A simplified sketch follows below.
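A simplified sketch of the idea, assuming whole-pixel alignment and an RGGB pattern (real implementations such as Bayer drizzle also handle sub-pixel shifts):

```python
import numpy as np

# RGGB layout: (channel, row offset, column offset); green occupies two cells.
RGGB = [(0, 0, 0), (1, 0, 1), (1, 1, 0), (2, 1, 1)]

def stack_cfa_frame(rgb_sum, counts, frame):
    """Accumulate one *aligned*, undebayered frame into per-channel sums.
    Each pixel contributes only to the channel of its CFA filter; 'counts'
    records how many samples each output location received per channel."""
    for ch, dy, dx in RGGB:
        rgb_sum[dy::2, dx::2, ch] += frame[dy::2, dx::2]
        counts[dy::2, dx::2, ch] += 1

h, w = 1000, 1000
rgb_sum = np.zeros((h, w, 3))
counts = np.zeros((h, w, 3))
# for frame in aligned_frames: stack_cfa_frame(rgb_sum, counts, frame)
# stacked = rgb_sum / np.maximum(counts, 1)
```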
  7. Yes. OSC effectively samples at half the sampling rate of an equivalent mono sensor. You can interpolate the data to get the same pixel count / image size as mono, but you can't interpolate detail. What you can do is use Bayer drizzle (if properly implemented). That will give you almost identical resolution to the mono version. I usually advocate against the use of regular drizzle, but Bayer drizzle works (this is because the pixels are indeed smaller - only "sparsely" distributed in the Bayer matrix).
  8. I showed two ways of doing it above. One is to imagine pixels as squares (given that they are surface sampling and spaced in a rectangular / square grid - although, as we have seen, pixels on a sensor are not really squares; they can be oval shaped or have rounded corners) and then attribute weight according to the surface covered by the large square. The alternative is to see pixels as sampling points, locate the binned position and assign weights based on inverse distance to the surrounding pixels. Either maps to exactly the proper binning expression when doing integer binning: the binned pixel as a square "covers" all 4 squares of the binned group, so each has a weight of 1 (or 1/4 if we are averaging). Likewise, if we view pixels as sample points, we put the binned pixel smack in the middle of those 4 pixels - the distances are the same, so again we have equal weights of 1 or 1/4, depending on whether we sum or average.

I just wanted to show the properties of bilinear interpolation. Shifting an image should not change it - it should remain the same. All interpolation algorithms blur the image a bit when the image is shifted - the question is by how much. The only interpolation that does not alter the image / function, provided the function is properly sampled, is Sinc interpolation. The problem is that Sinc interpolation is "infinite" in spatial extent, and we would need an infinite size image to apply it. This is due to the fact that we actually have a band limited image, and if an image is band limited, it is infinite in spatial extent. Luckily for us, the values far away from, say, a star in the image rapidly tend toward zero and are swamped by noise. That is why we can use sensors that are finite in size and still get a normal looking image. It also means that we can use approximations to the Sinc function that are finite in spatial extent and still get good results. You can see a bit more on that in the thread linked here. The important bit is a comparison of the different low pass filters that resampling algorithms represent. These are just polynomials - linear, cubic, quintic and so on. The point is to approach the "box" filter - one that will not change any frequency within the sampled frequencies.

All of this shows that bilinear interpolation has the least desirable properties for image manipulation, and since it is mathematically equivalent to fractional binning, fractional binning also has those undesirable properties. What is the point of doing fractional binning if you are going to additionally blur the image and again lower its optimum sampling rate - why not go to that lower sampling rate with regular binning straight away?

I'm not sure we are progressing on the original topic. Yes, the discussion is developing, but we are not really discussing the tool that describes scope / camera suitability for long exposure astrophotography (or the EEVA extension, maybe even planetary). I proposed a model that can be used for all three. In long exposure astrophotography, all three factors are important - seeing, mount performance and aperture size. In EEVA, we might not need to model the mount if exposures are short enough that the mount does not play a part - that leaves seeing + aperture size. In planetary, we use short exposures to beat the seeing - that leaves only aperture size for determining the proper pixel size if we want optimum sampling. The only drawback for planetary in the proposed model is that we approximate the Airy pattern with a Gaussian curve. That enables us to easily calculate the total blur - otherwise we would need to calculate it in the frequency domain (MTF × FT of the seeing Gaussian × FT of the mount precision Gaussian). For planetary, we should use just the Airy pattern / spatial cutoff frequency to get the best / most accurate results.
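A sketch of the Gaussian approximation described above: sigmas of convolved Gaussians add in quadrature, and here the Airy core is approximated by a Gaussian of FWHM ≈ 1.03 λ/D (that factor is an approximation on my part, not a value from the original post):

```python
import numpy as np

def total_fwhm(seeing_fwhm, guide_rms, aperture_mm, wavelength_nm=500):
    """Approximate total blur FWHM (arcsec) as the quadrature sum of three
    Gaussians: seeing, mount error, and an Airy pattern approximated by a
    Gaussian. Convolving Gaussians adds their sigmas in quadrature."""
    guide_fwhm = 2.355 * guide_rms                 # RMS (sigma) -> FWHM
    airy_fwhm_rad = 1.03 * wavelength_nm * 1e-9 / (aperture_mm * 1e-3)
    airy_fwhm = np.degrees(airy_fwhm_rad) * 3600   # radians -> arcsec
    return np.sqrt(seeing_fwhm**2 + guide_fwhm**2 + airy_fwhm**2)

# e.g. 2" seeing, 1" RMS guiding, 100 mm aperture -> ~3.3" FWHM
print(total_fwhm(2.0, 1.0, 100))
```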
  9. Not sure what this means? The MTF of the optics and the spatial cutoff frequency without the influence of atmosphere and mount are one thing; the PSF in a long exposure is something completely different (there we have seeing and mount performance in addition to the optics).
  10. You can simply do experiments with ImageJ and bilinear resampling acting as fractional binning. Bilinear resampling introduces noticeable blurring. [Image: a block of random noise on the left; on the right, the same block shifted by 0.5px in both x and y - the blurring is more than evident.] An interesting feature of the FFT is that the frequency spectrum does not change if we translate an image - this lets us calculate the spectrum before and after interpolation and see what sort of filter the interpolation represents. [Figure: top row - frequency spectra of the original and shifted images, and the result of dividing the first two; bottom row - surface plot and profile of the resulting filter response.]
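The experiment is easy to reproduce outside ImageJ as well; a sketch using scipy's bilinear shift:

```python
import numpy as np
from scipy.ndimage import shift

noise = np.random.normal(0, 1, (256, 256))
shifted = shift(noise, (0.5, 0.5), order=1)    # order=1 = bilinear

# Blur shows up directly as a drop in standard deviation
# (crop one pixel of border to avoid edge padding effects).
print(noise.std(), shifted[1:-1, 1:-1].std())  # ~1.0 vs ~0.5

# In the frequency domain, translation only changes phase, so the ratio
# of amplitude spectra isolates the filter the interpolation applied.
f_ratio = np.abs(np.fft.fft2(shifted)) / np.abs(np.fft.fft2(noise))
```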
  11. Their premise is wrong: there is a well known relationship between aperture size and spatial cutoff frequency: https://en.wikipedia.org/wiki/Spatial_cutoff_frequency If you rearrange this for a 3.75µm pixel size, you get that you need F/15 optics for critical sampling (not F/7 nor F/10 as proposed), for a 500nm wavelength.
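The rearrangement as a quick check:

```python
# Critical (Nyquist) sampling for the diffraction-limited cutoff frequency
# cutoff = 1 / (lambda * F#). Two pixels per cutoff period means
# pixel = lambda * F# / 2, i.e. F# = 2 * pixel / lambda.
pixel_um = 3.75
wavelength_um = 0.5                     # 500 nm
f_ratio = 2 * pixel_um / wavelength_um
print(f_ratio)                          # 15.0 -> F/15, as stated above
```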
  12. If you really want to match that resolution, maybe the best approach would be the following: bin by the first integer factor that gets you undersampled below the target resolution - in your case a 5.8µm pixel size - and then drizzle to the wanted resolution with appropriate parameters.
  13. I made numerous posts about fractional binning and related mathematical concepts here on SGL. I'm not very fond of fractional binning, because in essence it is mathematically very similar to (if not the same as) bilinear interpolation, which is arguably the worst kind of interpolation. This can be seen with a simple example: take a 3x3 matrix of pixels and bin them by a factor of 1.5 (thus reducing them to 2x2). Here we think of pixels as squares for the purpose of calculating their effective sampling "area" (although on the sensor they might not be squares, and may have an oval or some other shape). Let's think about how we would calculate the top left red pixel from the underlying black pixels. We need to take the whole of (1,1), half of (1,2), half of (2,1) and a quarter of (2,2) (by value, of course) and add them up (we can then divide by the "total" area if we want the average). That would be the fractional binning process. Here are some things to observe:

1. Pixel (1,2) will contribute its value to both the top left and top right red pixels. Similarly, the central black pixel (2,2) will contribute its value to all four resulting red pixels. This introduces pixel-to-pixel correlation, which equals blur, and the pixels are no longer linearly independent (this might matter in scientific analysis, as their associated noise is no longer random but in part depends on the same underlying pixel).

2. The process is the same as bilinear interpolation. Imagine that we want to calculate the value of a point at coordinates (1.33, 1.33), given that the black pixels above have coordinates in the 1,2,3 range for (x,y). What would the mathematical expression for that be? A sum of the four values weighted by proximity along each axis - the bilinear weights - and you'll see that you again get 1 : 0.5 : 0.5 : 0.25 (a short numeric check follows below).

3. We don't get a predictable SNR improvement. With regular binning, if the data is genuine, we get a predictable SNR improvement: x2 for bin 2x2 and x3 for bin 3x3. Here we don't get a x1.5 improvement in SNR if we bin x1.5.
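A quick numeric check of point 2, showing that the fractional-binning overlap weights and the bilinear weights at a fractional offset of 1/3 coincide:

```python
import numpy as np

# Fractional bin 3x3 -> 2x2 by factor 1.5: the overlap of each 1.5-wide
# output "square" with the unit input squares gives per-axis weights of
# 1.0 (fully covered pixel) and 0.5 (half-covered pixel).
w = np.array([1.0, 0.5])
weights_2d = np.outer(w, w)              # [[1, 0.5], [0.5, 0.25]]

pixels = np.arange(1, 10, dtype=float).reshape(3, 3)
top_left = (pixels[0:2, 0:2] * weights_2d).sum() / weights_2d.sum()

# Bilinear interpolation at fractional offset 1/3 in the same 2x2
# neighbourhood produces the identical ratio 1 : 0.5 : 0.5 : 0.25.
d = 1 / 3
bilinear = np.outer([1 - d, d], [1 - d, d])
print(bilinear / bilinear[0, 0])         # [[1, 0.5], [0.5, 0.25]]
```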
  14. Sometimes even x4. I've posted an M51 taken with that setup that is best sampled at bin x4. That also goes for this image, for example, taken on a night of rather poor seeing - just look at the size of those stars, very bloated. I binned this x3 (it is something like 1500 x 1100px in size) while in reality it is better suited to bin x4.
  15. I do it regularly. I have an RC8 and ASI1600. Natively that is 0.48"/px - way oversampled. Every image I take with that combination needs to be binned.
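The usual plate scale formula as a quick check (assuming the common 8" f/8 RC figures of 1624 mm focal length and the ASI1600's 3.8 µm pixels):

```python
# Plate scale in arcsec/pixel: 206.265 * pixel size (um) / focal length (mm).
def plate_scale(pixel_um, focal_mm):
    return 206.265 * pixel_um / focal_mm

print(plate_scale(3.8, 1624))   # ASI1600 on an RC8: ~0.48"/px
```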
  16. There is also the question of OSC sensors that @Adam J pointed out. The problem with OSC sensors is that, contrary to popular belief, they are not the same as equivalent mono versions as far as sampling / resolution / detail go. OSC sensors always sample at half the resolution the pixel size leads you to believe, and the final image is just enlarged x2 in software. For example, if we take a modern DSLR with small pixels that has something like 6000 x 4000 pixels (24Mpixel), we expect the resulting image to be 6000 x 4000 px - but it is not, as far as sampling goes. It is actually 3000 x 2000 (6Mpixel), artificially scaled up x2 so that the pixel count is "returned" to 24Mpix. What does this mean?

1. We can't calculate the sampling rate based on pixel size - we need to take a twice smaller pixel.
2. People won't easily accept this lower resolution - after all, the software produces the correct image size, right?
3. Binning such an enlarged image after stacking simply does not bring the SNR benefit of binning a regular image that was not enlarged (true data).
4. The preferred way to debayer such images is either super pixel mode or, if one really wants to exploit the full resolution, Bayer drizzle (if it is implemented correctly). A sketch of super pixel mode follows below.

The main problem here is that people will have a hard time adapting to all of this, because so much software just ignores the above and "works out of the box" with interpolation debayering. It takes quite a bit of effort to process OSC data properly.
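A minimal sketch of super pixel debayering for an RGGB frame:

```python
import numpy as np

def superpixel_debayer(raw):
    """Super pixel debayering: each 2x2 CFA cell becomes one RGB pixel
    (the two greens are averaged). No interpolation is involved, so the
    output honestly reflects the true half-resolution sampling of OSC."""
    r = raw[0::2, 0::2]
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2
    b = raw[1::2, 1::2]
    return np.dstack([r, g, b])   # shape (H/2, W/2, 3)
```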
  17. It could be that SGL recompresses it upon upload, and jpeg compression may have something to do with it.
  18. An actual comparison would take linear data, but in any case it shows two very important things: you are a bit oversampled in that image (probably not 2"/px, but not as high as 1.4"/px either - we can't really tell without an actual FWHM measurement), and the differences are very small when things get this sharp (and it is indeed a sharp image). The way the image is processed will have more impact than the difference between 1.8"/px and 1.4"/px if the data is handled properly.
  19. Check that left one against the original. (Hint: I think I left "sharpen when resampling" turned on.)
  20. I think it would be interesting to see it and compare with the FWHM approximation. If it's too much hassle then don't bother, but if not, I would be interested in checking it out. We can go by HFR for the time being. If it is 1.7-1.8", that would put HFD at 3.4-3.6" (HFR is half flux radius while HFD is half flux diameter). For a perfect Gaussian, HFD is the same as FWHM, so it is safe to say that your images have 3.4-3.6" FWHM star profiles. That is what I would expect from 100mm of aperture under "regular conditions" (seeing of 2" and guiding of about 1" RMS - the actual calculation gives ~3.3" FWHM). By the way, that is a 2"/px sampling rate rather than 1.4"/px. Btw, one of these two has been reduced to 66.7% in size and then resized back up to the original size (150%).
  21. Out of interest, what is the average FWHM of stars in the image, in linear data?
  22. One of the issues was with drivers - no drivers available for Win7. I guess support has really ended for Win7 and we are forced to move forward (although I was quite happy with that version).
  23. As far as I understood, the OP is looking for a longer FL scope to pair with the new camera?
  24. This was a rather interesting experience. I hoped to just "transplant" my SSD into the new setup - Windows 7 is usually resilient enough to withstand hardware changes like that with a few driver updates - but the new hardware is not compatible! There are no drivers for Windows 7, and UEFI won't even see the SSD as a bootable disk - it needed some compatibility setting turned on, but since I used integrated graphics instead of discrete, that was not an option, as graphics would not work in compatibility mode. So I had to do a fresh install of Windows 11. That required fiddling with the BIOS - turning on secure boot, TPM and whatnot just to be able to install it. Windows 11 is rather bloated with preinstalled applications, so it took quite a bit of time to set up (turn off) everything I did not need/want. In any case, papa's got a brand new machine now - it is literally at least x4 as powerful as the last one (6 core / 12 thread vs 2 core / 4 thread, 32GB RAM vs 8GB RAM, 6.4GB/s NVMe SSD vs 550MB/s SATA 3 SSD and so on).
  25. I think the camera choice needs to be paired with the telescope choice and, of course, depends on the mount. The first step should be determining the desired working resolution. Then we can discuss whether that is realistic and under what circumstances. In the end, we can see which camera and binning combination gets us closest to it.