Everything posted by vlaiv

  1. Could you post just a crop of the linear data of that region? I'm intrigued to figure out what has happened to the data in order to display that web-like feature. I have an idea, but would need the data to confirm it. We see it as a web - and the AI made it into a web-like structure - because it is lacking the color differentiation present in the Hubble image. Out of the 5 spikes going out from the center, the top one is actually a different color in the HST image and represents a gap in the background rather than some material. After being blurred, it starts to have a very similar shape to the other blurred features (while the features are different, their blurred versions look morphologically the same), and since the red channel is clipped, the color also starts being the same (the difference being in the non-clipped portion of the red channel - that is what I'm hoping to find in the linear data).
  2. Point taken. I will be more careful about attributing certain artifacts to the AI side of things, and will try to be more constructive.
  3. Did you apply NoiseX to the image you posted to show that the feature is there, or to the linear data? The image that you posted to show the feature has the red channel completely clipped in that region, and all the data there was lost because of that. As such, it can't show Ha features properly - whatever is interpreted as an Ha feature comes from the blue and green channels (which I guess should not be happening, but that is again the technical side of things talking).
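      If it helps, a clipped region is easy to verify on the linear data itself - a rough Python sketch along these lines (file name and crop coordinates are just placeholders, not from your data):

```python
# Rough check (placeholder file name and crop coordinates): what fraction of
# red-channel pixels in the suspect region sit at or near the maximum value?
import numpy as np
from astropy.io import fits

data = fits.getdata("linear_rgb_stack.fits")   # assumed shape (3, height, width), linear data
red = data[0]

crop = red[1200:1400, 900:1100]                # hypothetical crop around the feature
white_point = red.max()                        # or the known saturation level of the data
clipped_fraction = np.mean(crop >= 0.98 * white_point)

print(f"{clipped_fraction:.1%} of red pixels in the crop are within 2% of the white point")
```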
  4. There is, however, a difference between the two. Other processes work the same regardless of "any training" they had - or rather, they don't have training and their result is always the same for the same input image. With AI tools, if you apply different training to the neural network and present it with the same image, it will produce a different output. In that sense, the result of the AI algorithm is not made by analysis of that image alone; the input is actually (training data set, input image) rather than just (input image). That alone does not "disqualify" the algorithm - it is the nature of the change applied to the image that is important. There are "fixed" algorithms that distort the image in what we could argue is an unacceptable way. The problem with AI is that you can't really tell whether it's going to do that or not, for two reasons: 1. it is too complex for simple analysis - it is very hard to predict the output of an algorithm of that complexity; 2. we don't know the complete input set - we don't know what sort of training the neural network underwent.
  5. I think you gave the answer in the second half of that sentence. There is no reason why not - and hence the answer is yes. A dual band filter will pass both Ha and OIII - and while Ha is almost invisible to a dark adapted person, so is the sky glow at that wavelength. The filter will act as an OIII filter 99.9% of the time (unless there is a really bright Ha source - like M42, where it could make a small difference over a regular OIII filter in what it shows).
  6. It's not just the tools - I think I have a different idea of how to value an image than most people. We could call it much more technical. To me, an astronomy image is valuable if it's more informative and more correct - if one can learn more from it. In the above example, to me the Hubble image is better - not because it is visually more pleasing (a nicer or prettier image) - but because it is clearer in showing certain features and better at making a distinction between them. If one calls for critique and suggestions, that is what I do; I stay within the limits of my mindset (most of the time - sometimes I'm able to switch to the aesthetics of the thing and comment on that), I notice where the image is lacking in things that I hold important, and that is what I present. Most other people probably prefer the artistic side of the image and place greater value on that (and I tend to agree in some aspects - like when an image manages to stir up emotions about the vastness of the universe, or general awe because of the sheer range of magnitudes of size, energy and so on).
  7. I guess that I'm caught up in technicalities, like the fact that we are doing astrophotography and not astroimaging, astropicturing or astrodrawing.
  8. The equivalent physical process for visual would be: 1. amplify all light coming from the target - let's say by x100; 2. filter all light except Ha down to, let's say, 1% of its signal strength - this can be achieved with a narrowband filter that lets 100% of Ha pass while cutting everything else down to 1%. The net effect is that Ha ends up boosted x100 while everything else stays at roughly its original level. That is what is wanted, right? The same image, but with only the Ha signal specifically boosted by a large factor so it's made visible? Again, the same procedure I mentioned can be used - except we would modify the strength of the Ha signal in the mix.
  9. My bad - I did not draw the arrow precisely, but I thought it was obvious what I was referring to. I was specifically referring to this feature: vs While it has similar general morphology, the details are different (to my eye). What we identify as Ha filaments in the Hubble image can be matched to the web-like lines in the upper image - except that in the upper image we have a ring in the center of the image with a clear spike that goes almost vertically from it, and there is no Ha signal in the bottom image that matches that. That is another example where the AI thought it was seeing Ha signal and decided to put it there - although it is really not Ha signal at all.
  10. Actually - no. An interference red filter does not mimic our vision, and once we capture data with it we lose the ability to be visually as accurate as possible. Here is a simple way to see this: look at typical response curves of astronomical filters. Imagine for a moment that you have 610nm light and 656nm (Ha wavelength) light. You captured 100 photons of each - in two different images. Could you tell which source is 610nm and which is 656nm from those images? No, because both images would be exactly the same: 100 photons in red and no green and no blue data (or rather zero values).
      Now if you look at this image - the best approximation that can be shown on a computer screen (sRGB color space) of the colors of individual wavelengths - you will see that we can tell these two apart when looking at the light itself. We can distinguish them by eye - but we did not capture them as distinct. This is true for any type of red in the image, not just single wavelengths. You can have 3 different scenarios: 1. just Ha; 2. some other light mixed with Ha; 3. some other light (that might be red) without Ha.
      When using RGB + Ha filters you will never reach point 1 and the ability to render light as pure Ha (deep, dark red) if you don't use subtraction and remove Ha from the red channel. You also need a way to distinguish the two reds from each other (like 610nm and 656nm in the above example). There is a way to best do this that is based on science (much like what is used to create the above spectrum image on sRGB screens).
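      To make the point concrete, here is a toy Python sketch with made-up, idealized filter passbands - both wavelengths produce the exact same (R, G, B) triplet after capture, even though the eye can tell them apart:

```python
# Toy illustration (idealized, made-up passbands): after capture through an RGB
# interference filter set, 610nm and 656nm light both become "100 photons in red,
# 0 in green, 0 in blue" - the captured data cannot distinguish the two reds.
def rgb_response(wavelength_nm, photons=100):
    red   = photons if 580 <= wavelength_nm <= 700 else 0
    green = photons if 490 <= wavelength_nm < 580 else 0
    blue  = photons if 400 <= wavelength_nm < 490 else 0
    return (red, green, blue)

print(rgb_response(610))   # (100, 0, 0)
print(rgb_response(656))   # (100, 0, 0) - identical, yet visually a different red
```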
  11. The interesting thing is that there is an accurate way of adding Ha data to RGB data - but no one is using it (and most would probably not prefer the scientific result over the layered composition or pixel math that is used instead).
  12. I would personally avoid using AI tools completely, but that is just my view on things.
  13. There are two things that can happen when using a "normal" sharpening tool that distort the image.
      1. Increase in noise levels. Sharpening boosts the high frequency components of the image, but noise is spread over all frequency components, so the part of the noise that is in the high frequency range will be boosted as well (we can't effectively separate the two - otherwise denoising would be a trivial affair).
      2. Inaccurate blur kernel assumption. In order to effectively sharpen the image, the blur that was applied to the image must be guessed accurately - otherwise additional distortion can happen. For most astronomical images a Gaussian type blur is a very good guess and not many issues arise if one uses that as the blur kernel. The FWHM of that blur kernel is also imprinted in the image - it is the star FWHM.
      In that sense, with a good sharpening tool there is little that can go wrong except for "excessive use of the tool" and the fact that it creates additional noise.
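      As an illustration of the second point, here is a minimal sketch of frequency-domain sharpening that assumes a Gaussian blur kernel with FWHM equal to the measured star FWHM. This is a generic Wiener-style inverse filter, not any particular tool's algorithm, and the regularization constant k is an arbitrary placeholder that trades sharpening against noise amplification:

```python
# Minimal sketch: Wiener-style sharpening assuming a Gaussian PSF whose FWHM (in pixels)
# is taken from star measurements. k is an arbitrary regularization placeholder -
# smaller k sharpens more but also boosts the high-frequency noise more.
import numpy as np

def gaussian_psf(shape, fwhm_px):
    sigma = fwhm_px / 2.355                      # FWHM = 2.355 * sigma for a Gaussian
    y, x = np.indices(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    psf = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def wiener_sharpen(image, fwhm_px, k=1e-2):
    psf = gaussian_psf(image.shape, fwhm_px)
    H = np.fft.fft2(np.fft.ifftshift(psf))       # transfer function of the assumed blur
    G = np.fft.fft2(image)
    F = G * np.conj(H) / (np.abs(H) ** 2 + k)    # regularized inverse filter
    return np.real(np.fft.ifft2(F))

# usage (assumed variable): sharpened = wiener_sharpen(luminance, fwhm_px=3.2)
```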
  14. I just wanted to point out that if you image at such a pixel scale, features that are below the resolving capability of the system on a given night will be made up to look like something that is not there if you use AI tools (and sometimes even if you are not below the resolving capability of the system). If you look at the size of that "ring" structure, it is about the same size as a pair of stars in the left part of the image that are not resolved properly. Once the image is shown at the resolution that is supported by the data, there is very little difference between it and the reference image:
  15. I added the red emphasis - that is a screenshot from NoiseX. It is a neural network that is trained to "guess" what the actual image is given the noisy version (it sort of makes up the features). Here are a few examples I'm referring to: Top is the Hubble reference, and bottom is your image. The two stars in the top reference image have no Ha signal around them. In the bottom one, the AI tool mistook noise around the stars for Ha signal and "connected" it with other "Ha signal" that is in the area - all of which is at noise level and not true Ha signal if we look at the above reference image. Then there is this "web" of features that does not look like that in the reference image: there is no central ring-like structure with 5 "spikes" coming out of it - nothing like that in the reference image. Then there are subtle variations in what appear to be bright and dark regions in Ha.
  16. I'd ease off on the AI tools - there is a lot of made-up detail that is just not there in the galaxy.
  17. Welcome to SGL. Very nice project! I've been doing something similar lately, only with belts and a NEMA 17 motor (trying to get the cheapest solution possible).
  18. Actually, all it takes is for the imaging software to talk to the guiding software. It already does that on some basic level - the imaging software sends dithering commands and reads the guiding graph (waiting for it to settle in order to continue the exposure). What the imaging software needs to do is be able to read the guider calibration, or send its own plate solve data to it. Instead of dithers being completely random, they need to be random in such a way that star positions in two different light frames end up an integer number of pixels apart.
      In principle - if you don't have significant field rotation - you can already do this: force sub alignment to end up on an integer boundary. The problem with this is that you'll increase FWHM significantly, because there is the possibility of a sub pixel shift between frames, as the guider and imaging system are not in sync when performing the dither (the dither can have an arbitrary value - and it often does - like a 10.35 pixel shift in RA). Once you have two subs that you need to shift 10.35px in order to best align them, but you only shift them by 10px (integer value, no need for interpolation), you introduce that 0.35px elongation in your stars - or you increase FWHM. Ideally, the dither needs to be 10px to start with (in imaging camera space).
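      A rough sketch of what the imaging software would need to do - the calibration numbers and function name here are hypothetical, the point is just quantizing the dither to whole imaging-camera pixels and then expressing it in guide-camera pixels:

```python
# Hypothetical numbers/names - quantize the requested dither to an integer number of
# imaging-camera pixels, then convert it into guide-camera pixels using the known
# rotation and pixel scales of the two sensors.
import numpy as np

def dither_in_guider_pixels(req_dx_px, req_dy_px, rotation_deg, image_scale, guide_scale):
    dx, dy = round(req_dx_px), round(req_dy_px)          # whole imaging-camera pixels
    theta = np.deg2rad(rotation_deg)
    dx_as, dy_as = dx * image_scale, dy * image_scale    # shift in arcseconds
    gx = ( dx_as * np.cos(theta) + dy_as * np.sin(theta)) / guide_scale
    gy = (-dx_as * np.sin(theta) + dy_as * np.cos(theta)) / guide_scale
    return (dx, dy), (gx, gy)

# a "10.35 px" dither request becomes exactly 10 px on the main sensor,
# whatever fractional value that happens to be on the guide sensor:
print(dither_in_guider_pixels(10.35, 0.0, rotation_deg=12.0, image_scale=1.5, guide_scale=3.8))
```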
  19. Yes, that is the correct process. Split CFA can be used instead of split bin x2. If you want to bin to a higher number, however, you'll need to use a dedicated script for that (split CFA works only on 2x2, not 3x3, 4x4 and so on).
      There are no downsides if you want "good" SNR improvement. Any blur will in fact increase SNR somewhat, but that is "poor" SNR improvement because it is tied to pixel to pixel correlation that is not present in the original data. If you measure SNR on different stacking methods, you will see that FWHM and SNR are in an inverse relationship because of that. Gains in SNR due to this are small (as are the gains in FWHM) compared to the overall gain of binning.
      Ideally, you would want to do the "traditional" shift and add technique. This method allows you to skip interpolation altogether - but it requires the ability to guide very precisely and for the guide system to be connected to the main camera, or for the software to understand the relationship between the guide sensor orientation and the main sensor orientation. When doing dithers, the software can issue a move command that will place the main sensor an integer number of pixels in the X and Y direction compared to the last sub. When stacking, all you need to do is then shift each sub in the opposite way compared to that dither (if you think about it, nothing is different from regular imaging except that you ensure no interpolation is needed - just a pixel index shift to align the subs; the data is left as is, no pixel values are changed in the alignment process). Polar alignment also needs to be perfect to avoid any field rotation.
      In any case, since this is very hard to do on amateur setups with the gear and software that we use, the above method is the next best thing. It preserves the most sharpness (introduces the least amount of blur) and keeps the data as clean as possible (which is beneficial in the processing stage - denoise algorithms work better if the data is good than if it is already too correlated).
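      For reference, the shift and add part itself is trivial once the dithers are integer - something like this minimal sketch (offsets are assumed to be known exactly, e.g. from the dither log):

```python
# Minimal sketch of shift-and-add with integer offsets only: each sub is aligned by a
# plain index shift (the inverse of its recorded dither), so no pixel value is ever
# interpolated. Offsets are assumed known exactly from the dither commands.
import numpy as np

def integer_shift_stack(subs, offsets):
    """subs: list of 2D arrays; offsets: list of (dx, dy) integer dithers per sub."""
    acc = np.zeros_like(subs[0], dtype=np.float64)
    for sub, (dx, dy) in zip(subs, offsets):
        acc += np.roll(sub, shift=(-dy, -dx), axis=(0, 1))   # undo the dither by index shift
    # np.roll wraps around at the edges, so a border strip equal to the largest
    # dither should be cropped from the final stack
    return acc / len(subs)

# usage: result = integer_shift_stack([sub1, sub2, sub3], [(0, 0), (10, -7), (-4, 12)])
```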
  20. There is a "pixel blur" component to the FWHM which is usually small, but can make a difference. Larger pixels contribute to larger FWHM because of this. The cause is that pixels are not point sampling devices - they "integrate" over their surface, and this integration creates a tiny blur at the pixel level. Since all blurs add up, this increases FWHM when converted to arc seconds.
      Btw - it is always preferable to stack at higher resolution and bin at the end. Stacking involves registration, and registration uses some sort of interpolation to do its job. Interpolation introduces another little bit of blurring to the data - which is very similar to that pixel blur. The level of blur depends on the interpolation algorithm used (I've written about this before in one topic): Lanczos provides the best results while surface methods and bilinear interpolation provide the worst (in terms of blur introduced). In any case, interpolation of binned pixels makes the interpolation blur operate on a larger pixel size - and hence introduces a larger blur when converted into arc seconds.
      I think the best approach would be this:
      1. use bin x1 to capture the data
      2. prepare the data in bin x2 format - not by regular binning but by using split bin (similar to split debayer - it leaves the pixel size the same but halves the resolution and produces x4 more images to stack)
      3. stack that with Lanczos resampling for alignment
      This approach should produce the best FWHM in the resulting stack compared to other methods.
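      A minimal sketch of what I mean by split bin x2 (for a raw OSC frame this is the same slicing as split debayer - the four outputs are the R, G1, G2 and B sub-mosaics):

```python
# Minimal sketch of "split bin x2": each 2x2 cell contributes one pixel to each of four
# half-resolution images. Nothing is averaged here and no blur is added - the SNR gain
# comes later, from stacking x4 as many subs.
import numpy as np

def split_bin_2x2(image):
    h, w = image.shape[0] // 2 * 2, image.shape[1] // 2 * 2   # trim to even dimensions
    image = image[:h, :w]
    return [image[0::2, 0::2],    # top-left position of each 2x2 cell
            image[0::2, 1::2],    # top-right
            image[1::2, 0::2],    # bottom-left
            image[1::2, 1::2]]    # bottom-right
```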
  21. There probably is not, but the ASI Air is aware that you are using an OSC camera and can choose the bin method based on that. The only thing that you can do is select bin x2, and it will apply the appropriate bin method based on whether it detected an OSC camera or not.
  22. And do you know what the ASI Air did when binning the data? Did it perform the operation that I outlined above? If so, color can be preserved. Do have another look at my previous post - it explains two different ways of x2 binning that can be performed on OSC data. Both are bin x2 - one is the "naive" bin x2 that is suited to mono data and destroys color information, while the other is still bin x2 but is suited to OSC data and preserves color. If you have color data in the end, the second version was performed.
  23. Hi and welcome to SGL. To be honest, I did not read your post as it is long and for some reason I lack the patience at this moment. I did see you mention sampling rate, Dawes limit and such - so here is a brief guide on how to choose a good telescope for galaxies.
      First determine the sampling rate that you'll manage to achieve. From experience, I'll say that it is 1.4 - 1.5"/px. Now, this sampling rate can be achieved with the selected camera and a range of focal lengths. Say that you keep the IMX183 sensor with 2.4um pixel size. Your target focal lengths will be multiples of 330mm: 330mm for bin x1, 660mm for bin x2, 990mm for bin x3, 1320mm for bin x4 and so on.
      In the next step, get the most aperture you can manage at any of those focal lengths (but at least 5", because if you go lower than 5" you'll realistically have a tough time reaching 1.5"/px). If possible, avoid excessive use of correctors or try to use the best ones. Ideally you want a scope that is diffraction limited over a large enough field and sensor.
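      The arithmetic behind those numbers is just the usual sampling formula, sampling ("/px) = 206.265 * pixel size (um) / focal length (mm) - a small worked example:

```python
# Worked example of the sampling-rate arithmetic: focal length needed for a target
# sampling rate, given the pixel size (206.265 converts um/mm into arcseconds).
def focal_length_mm(pixel_um, target_arcsec_per_px):
    return 206.265 * pixel_um / target_arcsec_per_px

fl = focal_length_mm(2.4, 1.5)                    # IMX183 pixels at 1.5"/px -> ~330mm
print([round(fl * b) for b in (1, 2, 3, 4)])      # [330, 660, 990, 1320] for bin x1..x4
```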
  24. Everything is right if done properly. Bin x2 simply means the following:
      - reduce the width and height of the image by x2
      - improve SNR by a factor of x2
      - average 2x2 pixels to produce a single output pixel
      (see, the number x2 is present everywhere - a similar thing happens with x3 bin and x4 bin and so on, except a different number appears in each statement).
      In order to get a black and white image, the following must happen: you take a group of 2x2 pixels of different "color" (different filters of the bayer matrix) and average those to form output pixels. But if you do the following: you take only red pixels (2x2 of them) and average them to one red output pixel, you take the 2x2 neighboring blue pixels and average those to one blue output pixel, and you do the same with the "top left" green group and the "bottom right" green group - then you preserve color information, because you only average red with red, green with green and blue with blue pixels. Only when you average red, green and blue pixels together do you lose color information (as in the first example).
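      A minimal sketch of that color-preserving bin x2, assuming an RGGB bayer pattern - within each 4x4 block, same-colored pixels are averaged, and the result is again an RGGB mosaic at half the resolution:

```python
# Minimal sketch (assuming an RGGB mosaic): average each 2x2 group of same-colored
# pixels from a 4x4 block into one output pixel, producing a half-resolution mosaic
# that still carries the bayer pattern - i.e. bin x2 without destroying color.
import numpy as np

def cfa_aware_bin2(mosaic):
    h, w = mosaic.shape[0] // 4 * 4, mosaic.shape[1] // 4 * 4   # trim to multiples of 4
    m = mosaic[:h, :w].astype(np.float64)
    out = np.empty((h // 2, w // 2), dtype=np.float64)
    for dy in (0, 1):                     # the four CFA positions within a 2x2 cell
        for dx in (0, 1):
            plane = m[dy::2, dx::2]       # all pixels of one color
            binned = (plane[0::2, 0::2] + plane[0::2, 1::2] +
                      plane[1::2, 0::2] + plane[1::2, 1::2]) / 4
            out[dy::2, dx::2] = binned    # write back into the same mosaic layout
    return out
```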
  25. Depends on how you bin them. Binning is trading spatial information for an improvement in SNR. You basically "stack" adjacent pixels, so it is a form of stacking - but instead of the temporal "direction", like stacking multiple exposures, you use the spatial direction: you sum adjacent pixels. With OSC data you have two choices - you can pay attention to which adjacent pixels you stack and preserve color, or you can just stack the closest ones and then you destroy the bayer matrix information and lose color. If you take 2x2 groups of pixels and produce a single pixel in the output, you lose color information. If you take 4x4 groups of pixels and produce a 2x2 group of pixels in the output, paying attention to add only same-colored pixels together, you preserve the information (and get the same result as if you had used an OSC camera with twice as large pixels but half the pixel count in horizontal and vertical).