Everything posted by vlaiv

  1. Although those concepts are mathematical, understanding them is really a matter of physics - why those particular mathematical constructs are used when they are used. Out of a larger class of mathematical constructs, each with certain features, these are used in physics because of their suitability - they fit the underlying physics. Maybe this analogy will help - consider vectors, the little-arrow kind. We have vectors in 2d, 3d and higher dimensions. Once you understand the basic concepts related to them - addition, dot product and so on - you have mathematical knowledge of vectors, but calculating a mechanical system in 3d requires 3d vectors - a particular kind of vector - and you need to know what sort of mathematical manipulation yields a physical result. That part is physics rather than mathematics.
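Just to make the little-arrow analogy concrete, here is a minimal Python sketch of the two operations mentioned (the numbers are arbitrary, and reading the dot product as "work done" is just one possible physical interpretation):

```python
# Minimal illustration: 3D vectors as tuples, with the two operations
# mentioned above - addition and dot product.

def add(a, b):
    """Component-wise vector addition."""
    return tuple(x + y for x, y in zip(a, b))

def dot(a, b):
    """Dot product - in physics e.g. work = force . displacement."""
    return sum(x * y for x, y in zip(a, b))

force = (3.0, 0.0, 4.0)         # N (arbitrary example values)
displacement = (2.0, 1.0, 0.0)  # m

print(add(force, (1.0, 1.0, 1.0)))  # (4.0, 1.0, 5.0)
print(dot(force, displacement))     # 6.0 J - the "physical result"
```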
  2. I think you can do better with the data - you still have a mono image although your frames / camera are OSC. It was shown earlier in this thread that when you debayer the image, there is a slight offset between the channels due to atmospheric dispersion. If you debayer your frames, stack them, align the R, G and B channels and then do the sharpening, that should improve your results. As it is, there is additional blur because R, G and B are slightly offset. Even so, a bit of wavelet sharpening is going to give results. Registax wavelets: AstraImage multiresolution sharpening:
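A hedged sketch of that channel alignment step, using scikit-image's phase correlation to measure the sub-pixel offset of R and B relative to G after stacking (parameter values are arbitrary, not taken from the thread):

```python
# Estimate the small per-channel offset caused by atmospheric dispersion
# and re-align R and B onto G before sharpening.
import numpy as np
from scipy.ndimage import shift
from skimage.registration import phase_cross_correlation

def align_channels(rgb):
    """rgb: float array of shape (H, W, 3), already debayered and stacked."""
    g = rgb[..., 1]
    out = np.empty_like(rgb)
    out[..., 1] = g                     # green is the reference channel
    for idx in (0, 2):                  # red and blue
        ch = rgb[..., idx]
        offset, _, _ = phase_cross_correlation(g, ch, upsample_factor=20)
        out[..., idx] = shift(ch, offset)   # shift channel onto green
    return out
```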
  3. Collimation errors are somewhat different - I think I excluded them in my post; I believe the aberration needs to be symmetric, though I might be wrong about that. As for frequency restoration, we do it regularly - whenever we sharpen an image using wavelets. Although wavelets seem somewhat "magical", they operate on the principle given above. I can briefly explain the principle of wavelet sharpening and give you an example of how to do it by hand. I will then expand on it a bit and present a better approach based on second generation wavelets and the Lanczos kernel. It is a bit mathematically involved, but I'll try to keep it simple and understandable.

Let's for now look at the MTF diagram above and think about what it means - it has frequencies on the horizontal axis. We can think of an image as a function in 2d space, where pixel intensity is the value of the function at coordinates given by X and Y. Such a function can be represented via Fourier analysis as a sum of sine waves, each having a different frequency (and phase and amplitude). The important word in the previous sentence is "sum".

I'll now briefly skip to an analogy with numbers and digits. Imagine you have a number like 314 - it has 3 digits, one representing hundreds, the second multiples of ten and the last one ones. Such a number is in fact 3*100 + 1*10 + 4*1. Now let's think about how we can get each of those digits by some math - isolate it from the rest of the number. Here is what I propose: we find a way to replace the digit in any one place with 0. Then all we need to do is subtract such a number from the original. If we want hundreds we do 314 - 014 = 300. If we want the digit in second place (tens) we do the same, 314 - 304 = 010, and of course for ones, 314 - 310 = 4.

If we want to "isolate" a particular frequency and do something with it, we need to take the original function, produce a function that has a value of 0 at that particular frequency, and subtract it from the original. The result will be only that frequency. How do we do that - "kill off" a certain frequency? It is called a band filter - we need to filter out that particular frequency. With images it can be done with convolution - an operation in the spatial domain that is equivalent to multiplication in the frequency domain. In our example with numbers we would do 314 - 011 & 314 = 300, where & stands for "multiply each digit with the corresponding digit" (multiplying frequencies).

Here comes the funny part - blurring an image is a convolution. If we blur with a gaussian kernel (so called gaussian blur) we will be "killing off" high frequencies. Take the image we want to sharpen, make a copy, blur it some more and subtract the two - what is left is only the high frequency components of the original image. Now we take that high frequency image, multiply it by some factor to restore the frequencies, add it back to the blurred copy, and we get a sharpened original. I know it sounds funny - blur the image more to sharpen it - but it is just like the numbers above: in order to isolate a particular digit we need a version of the number with 0 in place of the wanted digit. Blurring is just that - reducing the value of a "particular digit". Do this with multiple blurred images, each with progressively larger blur, and you will be able to isolate different frequency bands that you can "boost" by different amounts. This is in essence what wavelets do. Let's do an example to see how it works in practice.
First we take some image like this: then we apply gaussian blur - this will be our "planet" that we will attempt to sharpen up a bit: The next step is to blur it a bit more, like this: Then we subtract the more-blurred version from the base blurred version, and we get this: You can already see that the difference, containing the higher frequencies, is easier to read than either blurred version - it is however low contrast (stretched here, but in reality it would be a rather "smooth" gray with barely visible lettering). We can now multiply that image by some number to "enhance" it - for example 5 - and add it to the "more blurred" version to get a slightly sharper version than our blurred baseline: Spread out like this it is hard to see the improvement, so I'll create a blinking gif to make the difference easier to see - here it is, blurred baseline vs the slightly sharpened version: We only did one "band" of frequencies and it already shows an improvement in sharpness.

I would like to add that gaussian blur is not the best blur for this. The Fourier transform of a Gaussian is a Gaussian, which means that when we blur with a gaussian we are multiplying in the frequency domain by a gaussian. We won't have a proper filter in the frequency domain; rather, some frequencies will be attenuated a bit and some more, but none will be exactly 0. Compare the two shapes: what we want is a blur shaped like the black line, which gives a cutoff filter, but with gaussian blur we get the blue line, and that is why gaussian blur does not give us optimum results. A cutoff filter is very hard to implement as a blur - in principle it is the sinc function, sin(x)/x, as can be seen on this graph: but the sinc function has values all the way to infinity and our image is limited in size, so we can't use it for blurring. There is, however, another, clever function - the windowed sinc, and in particular the Lanczos kernel, which is a sinc function windowed by another sinc function. We can for example compare a truncated sinc blur (truncated due to the finite size of the image) with a lanczos blur; their respective filters are: we see that the yellow line looks much more like a cutoff filter than the gaussian. The zig-zag pattern of the sinc is because it is limited to the size of the image - otherwise it would tend to a perfect cutoff, as an infinite sinc function would, like this: the more frequencies we include, the closer it gets to a perfect cutoff - the first frequency is just a sine, but as we increase the "range" of the sinc it tends to a perfect cutoff as the sinc tends to infinity in spatial extent.

Ok, that sort of concludes frequency restoration in the spatial domain, or "how to blur an image more in order to make it sharper". There are other frequency restoration techniques, like deconvolution, inverse filtering and others, but most of them require knowledge of the blur to do the restoration (except blind deconvolution, which is a seriously hard topic). The nice thing about the above approach - wavelet restoration - is that it is guided: you can use sliders and adjust each frequency band until the image looks right. Regardless of the MTF shape, we rely on our external knowledge of what a proper image should look like to replace the missing MTF information.
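For anyone who wants to try the "blur more to sharpen" recipe in code, here is a minimal Python sketch using a Gaussian blur for simplicity (as noted above, a windowed-sinc / Lanczos kernel would give a cleaner cutoff); the boost factor of 5 matches the example, the sigma is arbitrary:

```python
# One band of "by hand" wavelet-style sharpening, as described above.
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen_one_band(img, sigma=2.0, boost=5.0):
    """img: 2D float array - the blurred 'planet' we want to sharpen."""
    more = gaussian_filter(img, sigma)   # copy, blurred a bit more
    band = img - more                    # only the high-frequency band remains
    return more + boost * band           # amplify the band and add it back

# Repeating this with several progressively larger sigmas gives the set of
# adjustable frequency "layers" that wavelet sharpening tools expose as sliders.
```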
  4. I'm going to make some bold statements below. Although the Edge is sharper on axis than the regular C8, for imaging that is not a crucial thing. It does help, and it is better to have perfect optics, but I stress again - it is not crucial. It is much more important for visual. Why is that? Well, any sort of symmetric aberration that happens on axis (I don't mean coma or astigmatism, which can be due to poor collimation or design features off axis) will alter the Airy pattern in a predictable way. It will alter it by lowering the Strehl ratio. The same thing happens with, for example, a central obstruction. There is a calculation showing that the SCT central obstruction is roughly equivalent to 1/4 wave spherical on an unobstructed scope - or close to it.

Why is it not crucial? For the same reason we can obtain sharp images with a central obstruction. I borrowed this nice graph from a CN post on a similar topic (discussing central obstruction): It shows MTF, or modulation transfer function, for different apertures and central obstructions - a similar graph can be plotted for various aberrations and a less than tight spot diagram. What does this graph represent? On the X axis we have "detail", or to be more precise spatial frequency - when an image is represented in the frequency domain by a Fourier transform, it is a composite (sum) of sine waves with different frequencies and phases. The X axis represents these frequencies - to the left lower frequencies (longer wavelengths, or "larger detail"), to the right higher frequencies (shorter wavelengths - "finer detail"). The Y axis represents attenuation - by how much any given frequency is attenuated. 0.6 on the Y axis means that particular frequency is attenuated to 60% compared to a fully resolved image. The line represents the fact that as we go higher in frequency (finer and finer detail) there is more attenuation, until we reach 0 - no frequency after that is visible; we have reached the limit of the aperture. Attenuation can be viewed as contrast for visual use - the higher the line at one particular place, the greater the contrast at that frequency, and the easier it is to see features of "that size" (it is not strictly true to equate frequency and feature size, but close enough for a rough understanding). This graph explains why an unobstructed aperture gives the best contrast for visual, and why a larger scope provides finer detail than a smaller aperture even when somewhat obstructed.

When it comes to imaging, something else happens - there is a processing stage, and in particular there is sharpening involved. What does sharpening do? It "restores" attenuated frequencies. It is a bit like having a sound equalizer control (like on old stereos): the higher we push a certain frequency, the more it is amplified. We want to amplify all those attenuated frequencies to restore them to their original amplitude, being 1. This is in essence what wavelets in Registax, for example, do (we could discuss those further, but to a first approximation that is what happens): the bars here are the levers of the "equalizer" - the more you push one to the left, the more you amplify that frequency range. The first layer is the finest frequencies (rightmost in the MTF graph above) and from there it goes down to coarser frequencies. The number next to the slider is how much you amplify - in this particular case the highest frequencies are amplified by x13.4, meaning that the person doing the wavelets judged those frequencies to be at roughly 7.5% of their original value (1/13.4), so they need to be multiplied by 13.4 to get back to 1.
If you have high enough SNR you can in principle restore all frequencies back to their original values - a fully resolved image, regardless of the shape of the MTF. This also means that the more the MTF deviates from a clear aperture (Strehl 1), the higher the SNR needed to do the restoration. This is why it is better to have a sharper scope (and smaller CO) - but in principle it is not essential. The main problem is of course noise - noise is also distributed over different frequencies (randomly), and the more you boost a certain frequency of the signal, the more you also boost that frequency component of the noise, making it larger and more obvious. SNR helps there - if you have a large enough SNR, you can boost the signal to the needed level while the noise is still sufficiently low not to show.

Here is the bold statement: in principle, given high enough SNR, we could fully resolve any astro image, even a long exposure DSO image influenced by seeing, up to the resolution provided by the aperture. It follows from the above explanation of frequency restoration and the fact that the PSF in the case of seeing is a Gaussian distribution and its MTF is also Gaussian - which never falls to 0, even at infinity. The limiting factor can thus only be the aperture of the scope. In real life we struggle to get decent SNR, let alone have enough of it to do a complete frequency restoration.
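A hedged 1D sketch of the noise trade-off just described (all numbers arbitrary): boosting a frequency band in the Fourier domain multiplies whatever noise lives in that band by the same factor, which is why SNR sets the limit on how far the restoration can be pushed.

```python
# Boost one frequency band of a noisy signal and watch both the weak
# "fine detail" and the in-band noise scale by the same factor.
import numpy as np

rng = np.random.default_rng(1)
n = 4096
t = np.arange(n)
fine_detail = 0.05 * np.sin(2 * np.pi * 0.2 * t)   # weak high-frequency signal
data = fine_detail + rng.normal(0, 0.02, n)        # plus noise

spectrum = np.fft.rfft(data)
freqs = np.fft.rfftfreq(n)                         # cycles per sample

boost = 10.0                                       # the "equalizer slider"
band = (freqs > 0.15) & (freqs < 0.25)             # band containing the detail
spectrum[band] *= boost
restored = np.fft.irfft(spectrum, n)

# The detail amplitude went up x10 - but so did the noise inside that band.
print(np.std(data), np.std(restored))
```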
  5. Actually no, this is what you in principle need:
- calculus
- algebra (meaning algebraic structures like vector spaces and groups/rings/ ....)
- Fourier analysis
- complex numbers
The important thing is the ability to think in abstract terms - which should be covered by the second item above, algebraic structures. Most of the QM formalism is expressed in these abstract structures. For example, when we speak about vector spaces, most people imagine plain old vectors - little arrows that have an origin, direction and magnitude. But a vector space does not mean those vectors - it can mean a space of real functions on which you can use certain operations. Vectors are thus members of a set - or in simple terms, some entities that behave in a certain way. Here is what wiki lists as the set of math tools needed:
  6. 1000x1000 is a very large ROI for Saturn. Have a look at the ZWO specs for the 178 model: Just as a comparison, if you maxed out at 50ms then you were getting somewhere around 20fps. In 4 minutes that is about 4800 frames total. You say you stacked the 45% best - that is 2160 subs, or an SNR increase over a single sub of about ~x46. Let's say you used 8bit capture and 6ms exposure on a 640x480 ROI with a USB3 connection. That would give you around 166fps (limited by exposure time), and in 4 minutes you would collect around 40000 frames. Now take just the 15% best of that - it is 6000 subs, or an SNR increase over baseline of x77. The baseline will have lower SNR because of the 6ms exposure, but you will be able to compensate for that with the higher SNR boost from stacking considerably more frames, even if only using 15% of them.
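The arithmetic behind those numbers, assuming the usual square-root-of-frames SNR scaling from stacking:

```python
# Frames collected and SNR gain from stacking, assuming SNR improves as
# the square root of the number of stacked frames.
import math

def stacked_snr_gain(fps, minutes, keep_fraction):
    total = fps * minutes * 60
    kept = total * keep_fraction
    return kept, math.sqrt(kept)

print(stacked_snr_gain(20, 4, 0.45))    # ~2160 frames kept, ~x46 SNR gain
print(stacked_snr_gain(166, 4, 0.15))   # ~6000 frames kept, ~x77 SNR gain
```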
  7. Try using ROI - that will help with frame rate, as less data to transfer means better transfer speeds. Going with shorter exposures does increase noise, but it is essential for freezing the seeing - long exposures will average the distortions produced by the atmosphere and create a strong "motion" blur. You can use 30ms exposures only in the best conditions; if the seeing is not good you will need to go shorter than that, often as low as 5-6ms (even if the USB link won't provide 200fps at a small ROI).
  8. I think @Anthonyexmouth has one, but it's probably a recent acquisition, so I'm not sure how much time there was to get the hang of it. You will be slightly oversampling with the C8HD natively, but that really depends on your mount and skies. Sampling with OSC models is a bit different than with mono models, where the pixel width is the measure of the sampling interval. With an OSC camera, because of the bayer matrix, you can think of the pixels as being spaced 2 widths apart rather than one, with 4 grids overlaid on top of each other, each slightly offset: a red grid, a blue grid and two green grids. If you think of it that way, at the native FL of the C8HD, 2032mm, you will be sampling each color at 0.94"/sample - which is just a bit of oversampling if you have a very good mount. If you put a x0.7 reducer on that scope (which would be a good idea), then you will be working at 1.34"/sample, and that is an excellent sampling rate for such an aperture (again, a suitable mount that can be guided to a total RMS of less than around 0.5" is implied). In order to work in this mode you should use superpixel debayer mode - it is the closest thing implemented in software. Better still is splitting the bayer matrix into sub fields - one red, one blue and two green. In any case you should not consider this camera to be 4144 x 2822 here, but rather 2072 x 1411 pixel resolution. If I were in the market for OSC - the 294 cooled would be my choice.
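For reference, the sampling figures above can be reproduced with the usual 206.265 * pixel spacing / focal length formula; the 4.63 µm pixel size is my assumption (the figure commonly quoted for the 294-class sensor mentioned at the end):

```python
# Arcseconds per sample for an OSC sensor, treating each colour as sampled
# every 2 pixel widths (see the bayer-grid reasoning above).
def arcsec_per_sample(pixel_um, focal_mm, osc=True):
    spacing_um = pixel_um * (2 if osc else 1)
    return 206.265 * spacing_um / focal_mm

print(arcsec_per_sample(4.63, 2032))         # ~0.94"/sample at native FL
print(arcsec_per_sample(4.63, 2032 * 0.7))   # ~1.34"/sample with x0.7 reducer
```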
  9. I think it might be related to "blue lettering"
  10. Not sure what you mean by adding a grayscale copy of the RGB as luminance, but if for example you have an OSC image (rather than a mono/RGB variant) and you want to apply an LRGB workflow to it, meaning: 1. create artificial luminance 2. process that luminance (stretch, denoise, all the fancy stuff one does) 3. apply the RGB ratios from the RGB data to that luminance after color calibration - you could end up with a better looking, less noisy image. Why is that? The thing with OSC data is that you have twice as many green pixels (data) as red and blue. If you apply the luminance transform for linear sRGB given above, you will take mostly green and just a bit of blue and red, and green has the best SNR because you have twice as much data - so your luminance will be a bit less noisy. The other thing you can do: when composing and processing LRGB-style, you can denoise the color data much more without blurring the final image - most of the sharpness is carried by the luminance.
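A rough sketch of step 3 (re-applying the RGB ratios to the processed luminance); function and variable names are placeholders, and it assumes colour-calibrated linear RGB data:

```python
# LRGB-style recombination: the processed luminance carries the detail,
# the RGB data only contributes per-pixel colour ratios.
import numpy as np

def recombine(processed_lum, rgb_linear, eps=1e-9):
    """processed_lum: stretched/denoised 2D luminance.
    rgb_linear: (H, W, 3) colour-calibrated linear RGB data."""
    rgb_sum = rgb_linear.sum(axis=-1, keepdims=True) + eps
    ratios = rgb_linear / rgb_sum              # per-pixel colour ratios
    return processed_lum[..., None] * ratios   # apply ratios to luminance
```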
  11. @iwols Have you tried settings and workflow discussed in recent thread involving this particular sub you posted?
  12. A C8 (EdgeHD or not), if properly collimated, should give you very good results. What seems to be the problem with your images? Maybe something in your workflow is limiting the setup's potential to deliver. In answer to the original question, if I were in the market for a planetary imaging scope under £1000, this would be my weapon of choice: https://www.teleskop-express.de/shop/product_info.php/info/p10753_TS-Optics-8--f-12-Cassegrain-telescope-203-2436-mm-OTA.html and possibly a second hand C9.25 if it could fit within the budget (a new one without tax at FLO is now discounted to about 1225e, but after shipping, local import fees and taxes it would be about 50% over budget). Aperture rules, and I would even consider the largest possible newtonian that would fit that budget on an EQ platform - but that would be quite difficult to "operate" in an imaging role.
  13. This is a somewhat complex topic and I'll try to answer your question - hopefully in an understandable way. Let's first look at producing a mono image from RGB data. There are multiple ways to produce a luminance layer from RGB data and most of them reduce to this formula: c1*R + c2*G + c3*B = mono value. The choice of c1, c2 and c3 will determine the outcome.

For example, suppose you wish to produce a luminance layer from a mono camera and RGB filters that would be the same as capturing the luminance layer with an L filter, and you happen to have RGB filters that split the 400-700nm range in a disjoint way and cover the whole range (in principle no filters do this, but most come very close, as the following graph shows: there is a bit of overlap around 500nm and a bit of poor coverage at 580). Then c1, c2 and c3 will simply be 1 each, so the formula will be: R + G + B. That translates into - take all light between 400nm and 500nm (blue), 500nm and 600nm (green) and everything above 600nm up to 700nm, add them together and you will get all light between 400nm and 700nm.

There is another approach often quoted for getting luminance data from color: if you happen to have linear sRGB data (not the case with OSC or mono+RGB filters unless you do color calibration of such data), then the coefficients c1, c2 and c3 have the values 0.2126, 0.7152, 0.0722 (as defined by the sRGB standard). Why such different numbers? This is because in this case luminance is designed to mimic our eye's sensitivity to light - for the same intensity, we perceive green to be the brightest color, red less bright and blue even less so. I often show this image as a demonstration of how different colors (different mixes of the three primaries - r, g and b) give a different sense of brightness: you can see that pure green has something like 80% brightness, red about 54% and blue only 44% (these percentages differ from the coefficients above because here a gamma of ~2.2 has been applied by the RGB standard, while the values above are for linear data).

Therefore if you want to mimic the sensor response you should just add the colors together, but if you want brightness as we would perceive it, you should use different coefficients (which depend on the color space / color calibration of your RGB data - the example above is for linear sRGB). There are other ways (other coefficients) to do the above, with slightly different results. In regular image processing, because you will be applying a nonlinear stretch, there won't be much difference between the approaches, and the resulting image will be influenced more by your processing than by the choice of method for luminance (sensor based, perceived luminance based or something else). For example, with a DSLR or OSC sensor you might consider using different coefficients for producing luminance, based on the fact that color sensors with a bayer matrix have twice as many green pixels as red or blue - this means more data gathered in the green part of the spectrum and better SNR. Couple that with the type of target you are shooting and you can arrive at different c1, c2 and c3 values based on best SNR.

This gets us to the difference between LRGB and pure RGB. Why do people use the LRGB approach vs RGB when in principle both provide the same result in terms of image? It comes down to SNR and perceived noise in the image. It is a known fact that the human eye/brain system is more sensitive to variations in luminosity than to variations in color - we tend to spot luminance noise more easily than noise in the color data. Shooting L gets you better SNR than adding RGB together.
This is because there are other noise sources besides shot noise - dark current noise and read noise. Imagine the above scenario where you have RGB filters that split the 400-700nm range. To gather all the light in that range you can either use an L filter in a 10 min sub, or you can use the RGB filters, each in a 10 min sub. The L sub will have the dark current noise of a 10 minute exposure and one dose of read noise, while adding the RGB subs will result in 3 doses of read noise and 30 minutes of dark current. This may not seem like a lot, but it does have quite a bit of impact in some cases - like when capturing faint signal (comparable to the read noise amplitude), or when shooting, for example, a narrowband target that does not have much, if any, light in the green part of the spectrum. In that case adding G to R and B to produce luminance just adds noise and no signal at all, and it would be better to produce the luminance by adding only R and B together, leaving G out completely (setting coefficient c2 to 0). Thus LRGB vs RGB depends on many factors, and the ratio of time spent on L to that spent on RGB will affect the final result in the LRGB case. Like I've said, it is a fairly complex topic and there is no simple and straightforward answer.

However, for processing purposes it is better to have the luminance and RGB information separate, as you can apply different processing to each (remember the human eye/brain sensitivity to noise in luminosity vs color), so even having RGB data and creating an artificial luminance out of it can help with processing - especially if you pay attention to how you create the luminance, to maximize its SNR.

As a last point, I want to briefly talk about the "best of both worlds" - how to maximize the effectiveness of LRGB imaging. As far as I'm aware, currently no one is using this approach and it is not really supported in software, but it is something that I'm working on. When working with the above filters that "add up" to the full 400-700nm range, we have seen that adding R+G+B produces the same result as having L (a bit lower SNR, but the same signal captured) - this can be useful. When you shoot your LRGB data and start stacking L, you can include the R+G+B part if you have an algorithm that deals with subs of different SNR - this way you improve your L by adding more data. That is not all - if you look at the equation L = R+G+B, you can easily see that rearranging it leads, for example, to R = L - (G+B), and other combinations - which means you can augment your color data by using L in a similar way. The moral of this short final part is that although LRGB is often better than pure RGB, we are still not utilizing its full potential in data reduction. Hopefully software that does will be available soon. Hope this helps with understanding artificial vs "natural" luminance.
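To tie the two sets of coefficients together, here is a small sketch of the c1*R + c2*G + c3*B formula and the L = R + G + B rearrangement mentioned at the end (channel inputs are placeholders for calibrated data, either scalars or arrays):

```python
# Luminance from RGB with a choice of coefficients, plus the rearrangement
# that lets L augment a colour channel.

def luminance(r, g, b, c=(1.0, 1.0, 1.0)):
    """c1*R + c2*G + c3*B."""
    c1, c2, c3 = c
    return c1 * r + c2 * g + c3 * b

SENSOR = (1.0, 1.0, 1.0)                # R + G + B ~ what an L filter collects
SRGB_LINEAR = (0.2126, 0.7152, 0.0722)  # perceived brightness, linear sRGB

def red_from_lum(l, g, b):
    """If L = R + G + B, then R = L - (G + B)."""
    return l - (g + b)
```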
  14. The dark that you uploaded looks rather normal, except that I think your offset is set too low - you might want to increase it. It is probably set in the range of 20-25. Raise it to something like 60ish. Do this on a new target and take another set of darks to use after the setting change. Don't mix the subs that you have now (with offset 20ish) with new subs that have offset 60ish (you can mix lights calibrated with the old setting and with the new setting, each calibrated with its corresponding darks / flats / etc, but don't "cross" use calibration frames - if you know what I mean). The background in the top image looks darker than the background in the bottom one - this is due to different levels of stretch. But if you look at the stats panel, the bottom image has a much "darker" background than the top image: vs They are either of different exposure times, or the upper image was shot in rather poor conditions - meaning low ALT for the target and in the direction of LP. In any case, those specks might be an artifact of stretching in SGP rather than a true artifact of the image. In order to see what the actual sub looks like, it's best to take one calibrated light sub and examine that - maybe post one calibrated sub of the above target here so we can run stats on it and see if there are unusual specks in the background or whatever.
  15. This is a very good suggestion, as there are two possible cases:
- crop
- scale
In principle you should be able to tell by stretching one of the lights hard and comparing it to the master flat. If you happen to have dust shadows that will help quite a bit, as you will be able to tell their respective positions and whether cropping or scaling helps.
  16. This one was not easy at all. After all of my trickery, this is what I was able to get out of that data: I think I rejected a total of about 8 frames, plus another 4 green fields - they simply refused to register with the ImageJ registration plugin so they had to be removed.
  17. Well, my best effort at handling this data (for now red channel only) involves binning x2 in software to increase SNR - then we get a good outline of the nebulosity: My "sophisticated" stacking algorithm does not yet support sigma clip, so some satellite trails are visible. The final image will be quite small in size - this is due to the fact that I debayer by splitting channels rather than interpolating, and then I did a bin x2, which brings the image size down x4 compared to the original sub size - the above screenshot is 1:1 (or 100% zoom). Will try processing the other channels as well and combining the data to get a resulting color image ...
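For anyone curious, a hedged sketch of that reduction (split-channel debayer followed by a 2x2 software bin); it assumes an RGGB pattern, which may not match the actual camera:

```python
# Split the Bayer mosaic into its four colour fields (no interpolation),
# then software-bin 2x2. Each step halves the linear size, so the result
# is x4 smaller than the raw sub.
import numpy as np

def split_bayer_rggb(raw):
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return r, g1, g2, b

def bin2x2(img):
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]                                   # trim to even size
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# quick check on a toy 4x4 "raw" frame
raw = np.arange(16, dtype=float).reshape(4, 4)
r, g1, g2, b = split_bayer_rggb(raw)   # each field is 2x2
print(bin2x2(r).shape)                 # (1, 1)
```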
  18. @Anthonyexmouth No wonder you have trouble stacking and processing your image. The data is rather poor. A lot of the subs are just high altitude clouds and LP reflected off them. Here is a little gif that I made - it's the red channel, binned to a small size and linearly stretched to show the frame to frame difference. Some of the frames contain nothing more than LP glow and only the brightest stars. Stacking such data will produce poor results - both in terms of SNR and in terms of significant strange gradients. I will try to get the best out of this data using my own "sophisticated" algorithm designed to handle cases where there is a significant SNR difference between subs, but I don't expect much from this data. I will need to remove at least a couple of frames that are very poor.
  19. I have not worked with a tablet/laptop screen so I can't comment on that. I think some people are using it successfully. You can keep the short exposure times you have now. I have a rather strong flat box and my flat exposures are also very short - just a few ms. I only once had a problem with flats, and that was because of a faulty power supply connector on my flat panel. The flat panel was flickering rapidly, probably due to sparking on a less than perfect power connection, but it was not something you could see with the naked eye - the light appeared uniform. The frequency of the flickering was probably way too high for the eye to see, but at such short exposures it was noticeable as banding in the flats. Once I figured it out, it was an easy fix with a bit of soldering. The best way to go about flats in your case is to set the flat exposure manually (don't leave it on auto or whatever) so that all three histogram peaks get into the right part of the histogram if possible, just avoid clipping the bright areas. Take as many flats as you can, then leave all settings as they are and do another run but with the scope cover on. Simple as that. I advocate a large number of all calibration subs - I tend to use 256 of each (I have this thing about binary numbers - and that is 2^8). If you dither your subs then you can use fewer calibration subs; you are already using a pretty decent number - 50 of each. Btw, I'm just about to stack your subs using ImageJ to see what sort of result I get in that workflow. Will post results.
  20. What do the files look like? Do they have a cr2 extension? What is their size? Can you open them in another application and check stats - like image size and such? Canon has a free utility for image enhancement - maybe use it to see what the images look like?
  21. Not familiar with APT, but I suspect that is because of the need for "automatic" flat exposure - it is Aperture priority mode, meaning that the aperture does not change (nor could it, since you have your camera on the scope instead of a lens), so the camera automatically determines the needed exposure time. It should not affect the file/image size.
  22. The only reason I can think of, apart of course from DSS having trouble reading those particular files, would be a change of image size in the camera settings. Did anyone by any chance use that camera during the daytime? Maybe they set a different image size / returned to jpeg capture or similar and you did not notice?