Everything posted by vlaiv

  1. An eyepiece becomes too expensive for me when there is another eyepiece that offers near or the same performance for less money.
  2. No, there is no accepted standard. I would argue that one should follow a set of technical standards when attempting to recreate the actual color of an object (within acceptable limits), but otherwise people are free to do what they like with their images - a sort of artistic freedom. There were a couple of discussions on what is "allowed" in the processing of astrophotography without it being considered "cheating" / "making data up", and my view on that topic is rather conservative. I don't really like the clone stamp tool or the use of brushes or morphological filters for star reduction; I also prefer minimal noise reduction and minimal sharpening, if any. I also don't like boosting saturation - although most people are used to seeing images like that: over-saturated.
  3. Ok, I understand now what you are after - you just want to make sure you preserve the color that came out of the camera as is, right? Well, that is not quite possible. We stretch our data and in doing so we change it. RGB ratios change when you stretch the data, and even if you stretch it in a particular way that keeps the relative ratios, you are again going to change what we see as color. Different stretches of the same RGB ratio produce a different color sensation in our eye. Bright orange is, well, orange, but dark orange is no longer orange - it is brown to our eye. Although it is the same spectrum of light, we see it as a different hue.

Having said that, if you don't intentionally mess with color, you should be able to preserve the raw color from the camera "in general" (not accurate in terms of color reproduction, but it can still be considered authentic as made by your equipment). The level of processing involved will determine how "accurate", or rather "preserved", the color is (and there are different metrics of "preservation" - one related to the light spectrum, for example, and one related to our perception).

If you look again at that image I posted - yes, an OIII + Ha combination can produce orange color: it can range from a green/blue combination across pale yellow to orange and red. In fact, when doing a bicolor image we often intentionally change the color. That is because the OIII signal can often be much fainter than the Ha signal and the image will end up being mostly red. If we want to show OIII structure, we will often boost it compared to Ha, and that will shift the color along the above line towards the green end.

So here is what the "as is" raw color from the camera looks like: this is to be expected - the Ha signal is much stronger and the image is almost completely red. If you want to show OIII a bit more clearly, you need to boost it separately. That is already deep in false color territory, as we adjust the color components separately. Now that we have made OIII stronger, it shows some structure - in the upper right corner there is a wall that still has dominant Ha, but the rest of the nebula shows OIII presence as well.

This shows that both of your versions are "correct" - it just depends on how you define correct. If you want data as it came out of the camera, a linked stretch will provide that. If you want to emphasize the otherwise weaker OIII signal, use an unlinked stretch. A small numerical sketch of why stretching shifts color ratios follows below.
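Here is a minimal numerical sketch of the point about stretching (my own illustration, not anyone's pipeline): the same power-law curve is applied to every channel, yet the R:G ratio, and with it the perceived hue/saturation, changes. The pixel values are made up for illustration.

```python
import numpy as np

# Hypothetical linear pixel dominated by Ha (red); values are made up for illustration.
rgb_linear = np.array([0.20, 0.02, 0.01])

def stretch(x, gamma=0.25):
    # Simple power-law stretch, same curve for every channel (a "linked" stretch).
    return x ** gamma

rgb_stretched = stretch(rgb_linear)

print("R:G before stretch:", rgb_linear[0] / rgb_linear[1])        # 10.0
print("R:G after stretch: ", rgb_stretched[0] / rgb_stretched[1])  # ~1.8
```

Even though nothing was done to the color "on purpose", the 10:1 red dominance collapses to under 2:1 after the stretch, which is why stretched images look desaturated and shifted in hue compared to the raw linear data.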
  4. I'm interested, but I don't own a copy of PI so I would prefer a standard file format like 32-bit floating point FITS rather than XISF, which is a PixInsight-only file format.

On the color topic - well, the answer is rather complex. The fact that this is in essence a narrowband image does not mean that it is necessarily false color. If the color is to be considered accurate, we must decide what color we are talking about:

- The actual color of the object and stars - no luck there, the filter used completely obliterated that color information.
- The color of the light that passed through the filter - we can talk about that color.

Since these are emission type objects with Ha and OIII emission lines, we can talk about those colors. You can choose to do an actual narrowband color image, and you may be partially or fully successful in recreating the actual color of the captured light. That largely depends on the ratio of Ha to OIII signal. Both the Ha and OIII colors are outside of the sRGB color gamut, and we will talk here only about the sRGB gamut as I presume the intent is to display the image online, where sRGB is the implied standard (we could also talk about other, wider gamut color spaces, but only a few people would be able to use that as it requires both a wide gamut display and a properly set up operating system that can display wide gamut images).

This is a chromaticity diagram that shows all "colors" (at max brightness) that exist, with the actual colors from the sRGB color space shown. The other colors (gray area) simply can't be properly displayed by your screen / the sRGB color space. I outlined a line on this diagram. It connects the ~500nm point on the spectral locus to the 656nm point on the spectral locus. These are the OIII and Ha colors. Any light consisting of these two wavelengths (any combination of strengths of each) will have a color that lies on that particular line. The sRGB color space can only show those colors that are inside the colored triangle along the black line. All other colors further left or right on that line will be too saturated a green/teal or too saturated a deep red - a computer screen simply can't display them.

If your image consists of combinations that lie inside the triangle - great, you'll be able to fully display the color of the light that passed through the filter. If not, we must use a trick. There are several ways to do it: we can just clip the color along the line - if the color lies in the deep reds, we show it as the reddest color along the line that we can display, and similarly for the OIII side. Another approach is perceptual mapping - we instead choose the color that looks most like the color we can't display, such as a dark deep red for Ha. This process is known as gamut mapping.

You can also decide to do a false color narrowband image, similar to SHO or HSO images, but since you only captured two wavelengths you can only create a bicolor image. The above is also a bicolor image, just with accurate colors for the captured light. With false color you can choose any bicolor scheme you like, such as HOO or similar.

In any case, before you choose any of these, you actually need to extract the color data from your image. You used a color sensor with a duo-band filter and you'll need some math to extract the actual data. Look here: we can see that the L-eXtreme only passes ~500nm and ~656nm, but if we look at the QE graph for the ASI071, we can see that 500nm ends up being picked up by all three channels, and so does 656nm.
In fact, we can write a couple of equations:

red = OIII * 4.5% + Ha * 78%
green = OIII * 68% + Ha * 9%
blue = OIII * 45% + Ha * 3%

From these you can get OIII and Ha in several different ways using pixel math - I recommend the following:

Ha = (red - green / 15.111) * 1.292
OIII = (green - red / 8.667) * 1.46

If you want accurate color, you then need to convert Ha + OIII into an XYZ value and convert XYZ to sRGB; otherwise, just use Ha and OIII to compose RGB as HOO or another bicolor combination. A small pixel-math sketch follows below.
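Here is a minimal sketch of that extraction in Python/NumPy (any environment with per-channel pixel math will do). It assumes the data is already debayered, calibrated and still linear; the array names and placeholder data are just for illustration - only the coefficients come from the equations above.

```python
import numpy as np

def extract_ha_oiii(red, green):
    """Recover Ha and OIII from the linear R and G channels of an
    L-eXtreme + ASI071 image, using the coefficients derived above."""
    ha   = (red   - green / 15.111) * 1.292
    oiii = (green - red   / 8.667)  * 1.46
    return ha, oiii

# rgb is assumed to be a (height, width, 3) linear float image
rgb = np.random.rand(100, 100, 3).astype(np.float32)  # placeholder data
ha, oiii = extract_ha_oiii(rgb[..., 0], rgb[..., 1])

# Simple false-color HOO composition: Ha -> red, OIII -> green and blue
hoo = np.dstack([ha, oiii, oiii])
```

For the accurate-color route you would instead build XYZ from the Ha and OIII intensities (using their chromaticities) and then apply the standard XYZ-to-sRGB conversion with gamut mapping.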
  5. There is probably one more variable that is off - at least according to the AstroBin info for that animated version, an SII filter was used (probably to tame the seeing). That is a different wavelength from the 540nm used for the simulation above.
  6. It says the D3300 is not supported. I also checked the APT website - the D3300 is not supported there either; Ivo mentioned that Nikon does not publish a comms library for this model (although it is basically the same as the D5300). As far as I can tell you won't be able to control it via computer, and that is a bit of a problem - but it won't be detrimental for starting in AP. You can either work with 30s exposures to start with or get an intervalometer for this camera - like this one: https://www.amazon.com/PHOLSY-Control-Intervalometer-Shutter-Replaces/dp/B01N133BI6/ref=sr_1_5?dchild=1&keywords=intervalometer+nikon&qid=1623774250&sr=8-5 or similar. You can use a remote or tethered one for AP. Save your subs to the card and later transfer them to a computer for processing.
  7. If you are the sort of person that enjoys extreme sports / horror movies - then yes. Otherwise - no, as it won't make much difference. I did that with my first Newtonian when I flocked it, and at the time I was so worried that I would damage the mirror in one way or another that it caused an adrenaline rush. Now, looking back at it after some time, I really do think that it was unnecessary as far as the views go - but it was an interesting exercise and possibly a good way to get more comfortable around optical equipment.
  8. I'd say start with what you have. The D3300 is a nice camera to get you started in astrophotography. You'll just need a suitable connection ring. I think that this one is suitable: https://www.firstlightoptics.com/adapters/borg-nikon-f-adaptor.html but I'm not 100% sure. Your camera should have a Nikon F mount and you need a T2 adapter for that mount.
  9. Here is another sim, this time a 10" scope capturing granulation. Left is an excellent capture by @AbsolutelyN using a 10" scope and an ASI178mm camera, and right is a simulation of granulation from the image above. I'm not sure if I matched the pixel scale properly since I'm missing the pixel scale for both images - I just measured the cells and assigned 2" to the measurement. I also tried to match contrast and brightness.
  10. Don't worry, I perfectly understand where you are coming from and I do believe that you are seeing what you are seeing. This is not the first time people say that they see more at the eyepiece than theory predicts should be possible. Well - not theory, but rather my interpretation of theory / my simulation. I have no doubt that the theory is correct (otherwise it would not be a scientific theory) - it is often our misuse of it that makes bad predictions. In any case, if practice and theory/simulation strongly diverge, there are a couple of possible explanations:

- My understanding of the theory is incomplete, my application of the theory is outside its domain of validity, or I simply made a mistake applying it (wrong simulation parameters or just an error in the simulation process).
- There are additional parameters that we neglected / omitted that are significant enough to make the difference.
- There are other factors that result in the perception of something that might not be physical reality. When I say this I don't mean that you are imagining things; I simply mean that the eye-brain system is very complex and does a lot of things "behind the curtains" to enable us to see the way we see, and some of that might have a particular effect in this case. That is the reason we have optical illusions, for example, or the reason we never see photon noise (although we should - we are sensitive enough to detect light at that level - our brain does noise suppression).

In any case, I feel that it is beneficial to pursue this discrepancy further, and I already have a couple of ideas of how to go about it (not for this particular case - but to test the resolution of actual optics on an object such as Jupiter without the influence of the atmosphere, to see/record how optics of different quality render the image).
  11. Indeed. I think that the OP has enough material to at least somewhat assess the differences between 100mm and 80mm scopes for WL observation - both simulated and first-hand accounts.
  12. Is that the sharpened version? If I take the image you posted at the beginning of the thread and compare it to the 80mm scope simulation enlarged to the same resolution, we get this: vs. Mind you, the second one is not wavelet sharpened / deconvolved and is simulated at 540nm, not full spectrum. That looks pretty similar to me.
  13. Do keep in mind that the image was produced at 540nm - or rather simulated for that light - as the Baader Continuum filter is centered at that wavelength. At 540nm, the critical sampling rate for an 80mm scope is 0.7"/px - which means that a typical cell is represented by only 3 pixels. In those three pixels across, one needs to fit both walls of the cell and its interior. I just don't see how we can resolve a typical cell at that wavelength clearly enough to see interior and walls.

Here is what 0.7"/px looks like regardless of resolution - and that is sharper than the optics will produce: here we have a comparison of the original image and one sampled at 0.7"/px and then enlarged (Lanczos resampling) to match the size of the right image. Even without the effects of aperture, we can't really say that we see cells and walls. Add to that the blurring by the telescope optics (a perfect one - better even than the almighty Tak) and you'll get this:

Here is the full procedure for creating the image: I took the original JPEG image, converted it to a 32-bit monochromatic version, applied an inverse gamma of 2.2 to bring it to linear (hopefully whoever made the image followed the sRGB standard), and measured a random cell size. It measured 270px across, so the base image has a base resolution of 1/135 "/px. I generated an Airy disk pattern, calculated the critical sampling rate for 540nm, convolved the linear image with the Airy disk and scaled it to the critical sampling rate (no need to make it bigger - but we can always enlarge it if needed without loss of detail, as there is no detail past the critical sampling rate). I applied a green LUT to further simulate the 540nm filter and applied a forward gamma of 2.2. The image was converted to 8-bit and saved as JPEG (high quality settings). A rough sketch of this procedure in code is below.

@Stu Please don't think that I'm negating your experience. There are several things that could be wrong with my simulation. I could have made a wrong measurement of a cell - taken a particularly large cell that is actually 3" instead of 2" in size (although I did aim for an average one and measured it in the shorter direction just to be sure). I could have applied the theory in a wrong way - or simply made an error in the calculation. That is why it is important to have repeatable results when doing simulations - similarly to experiments. I would be much happier if someone else also did the simulation so we can compare results. There is also the already mentioned question of dynamic range / contrast and monitor calibration, pixel size and viewing distance.
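For anyone who wants to reproduce this, here is a rough sketch of the procedure in Python with NumPy/SciPy. It is not the exact script I used - the placeholder image, kernel size and clipping are illustrative - but it follows the same steps: inverse gamma, convolution with an Airy pattern, resampling to the critical rate, forward gamma.

```python
import numpy as np
from scipy.special import j1
from scipy.signal import fftconvolve
from scipy.ndimage import zoom

D = 0.080                    # aperture in meters (80mm)
lam = 540e-9                 # wavelength in meters (Baader Continuum)
base_scale = 2.0 / 270.0     # "/px of the source image (a ~2" cell measured at 270 px)

def airy_psf(scale_arcsec, size=1023):
    """Airy pattern intensity sampled at scale_arcsec per pixel."""
    grid = np.arange(size) - size // 2
    r = np.hypot(*np.meshgrid(grid, grid)) * scale_arcsec   # radius in arcsec
    theta = np.radians(r / 3600.0)                          # radius in radians
    x = np.pi * D * theta / lam
    x[x == 0] = 1e-12                                       # avoid 0/0 at the center
    return (2.0 * j1(x) / x) ** 2

img_srgb = np.random.rand(2048, 2048)       # placeholder for the mono granulation image
img_linear = img_srgb ** 2.2                # inverse gamma (assumes sRGB-like source)

psf = airy_psf(base_scale)
blurred = fftconvolve(img_linear, psf / psf.sum(), mode="same")

critical_scale = 206265 * lam / (2 * D)     # ~0.7 "/px for 80mm at 540nm
resampled = zoom(blurred, base_scale / critical_scale)

result = np.clip(resampled, 0.0, 1.0) ** (1.0 / 2.2)   # forward gamma for display
```

The only physics in there is the Airy pattern and the critical sampling rate; everything else is bookkeeping, which is why independent repeats of the simulation would be valuable.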
  14. Well, here is a simulation so we can see what it looks like. I took this high resolution image of granulation (the best I could find - in fact it's titled "Highest_resolution_photo_of_Sun_(NSF)_as_of_January_20,_2020"): this is actually a scaled-down image - the original is a huge 7320px x 7320px, which is plenty of resolution for the simulation; individual cells are huge and nicely resolved. I took two perfect 80 and 100 mm scopes, added a Baader Solar Continuum filter (540nm simulation) and produced the respective images. In reality the view is going to be worse than this because of imperfect optics and seeing effects: 80mm: 100mm: Both of these squares represent the full image posted above (the first one scaled down to fit the screen). Although you can see the granulation texture, you really can't resolve single cells in either the 80 or the 100mm scope. Most cells are visually joined into a larger blob that we see and take for an individual cell - but it isn't one.
  15. Lenses don't perform well at the edges of the field at wide aperture settings. If you want good correction to the edge of the sensor, try stopping the lens down to F/4 or even further.
  16. In most cases I was seeing-limited. I'm not an experienced solar observer and most of the time I was actually looking at some phenomenon - like an eclipse or a transit - rather than doing actual solar observation. I remember seeing granulation, but it was on one or maybe two occasions. My current WL solar equipment is actually very well suited to running an experiment on this - I have both an 80mm F/6 APO and a 102mm F/10 achromat. Together with a Lunt Herschel wedge and a Baader solar continuum filter it is a decent WL setup. I have no reason to disagree, as you have first-hand experience. To a first approximation it looks like 80mm is not able to resolve granulation, but that is not what theory is saying (and in fact, when theory disagrees with practice, it is not because the theory is wrong - it is because it is not properly applied). The best that I could do is make a simulation of the view in 80mm and 100mm scopes so we can see how much better the view would be in the 100mm scope under perfect conditions. The alternative would be for me to get out and make a comparison in WL between the two scopes and attempt to see granulation in the 80mm one. That would require good seeing. The last time I observed in WL I had trouble even seeing faculae near the limb, and a friend that I invited over for a session could not see them at all (which clearly shows that even limited observing experience helps in seeing detail).
  17. I agree 100% - however, that does not mean that the eye at the eyepiece can see all there is in the same data. There are a couple of important points related to how we see and why processed images show more detail than can be observed at the eyepiece.

1. Contrast enhancement. Humans have something called JND - just noticeable difference. We can't tell the difference between two levels of brightness if it is not as large as the JND. A processed image has no issue with this.

2. Motion blur. We observe in the presence of astronomical seeing. In lucky imaging we use exposure times that are at least 5-6 times shorter than what our visual system uses as integration time. We can look at 30fps video and see no individual frames - our brain "integrates" for at least 30ms. We often use 5ms or shorter exposures for lucky imaging, and in 30ms the atmosphere easily blurs the image further.

3. We use sharpening in processing. Our brain can't do that. Here is the MTF of a perfect aperture: when we sharpen the image up, we "raise" this line to be (almost) horizontal. When it is like the above, it acts as a low-pass filter and the image is blurred. In the best of seeing that is the limit of our vision, but not of a processed image - a processed image can be sharper (which is again local/micro contrast that can be further enhanced). A small sketch of that MTF curve follows below.
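As a concrete anchor for point 3, here is a small sketch of that curve - the standard diffraction-limited MTF of an unobstructed circular aperture, with spatial frequency normalized to the cutoff D/λ (the script is illustrative, not taken from any particular package):

```python
import numpy as np

def perfect_mtf(nu):
    """Diffraction-limited MTF of an unobstructed circular aperture.
    nu is spatial frequency normalized to the cutoff D/lambda (0..1)."""
    nu = np.clip(nu, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(nu) - nu * np.sqrt(1.0 - nu ** 2))

for nu in np.linspace(0.0, 1.0, 6):
    print(f"frequency {nu:.1f} of cutoff -> contrast {perfect_mtf(nu):.2f}")

# Sharpening / frequency restoration in effect divides each spatial frequency by an
# estimate of this curve, "raising" the response toward flat - something the
# eye-brain system cannot do at the eyepiece.
```

The curve starts at 1 at zero frequency and falls to 0 at the cutoff, which is exactly the low-pass behaviour described above.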
  18. No, not much - I have observed on a couple of occasions, but never with an aperture smaller than 4". I based my claim that 80mm won't be sufficient simply on the difference between the Airy disk sizes of those two scopes and the fact that 4" is often recommended as the minimum needed to clearly show granulation (see the link I posted - it says that granulation can be seen in excellent conditions with at least a 4" scope). The size of granules according to Wikipedia (https://en.wikipedia.org/wiki/Granule_(solar_physics)) and the size of the Airy disk support that.
  19. The diameter of the Airy disk is 2.5", but you can split doubles that are separated by the radius of the Airy disk, which is half that size. Top image - the Airy pattern of two stars separated by the Airy disk diameter (and having the same intensity / magnitude). Middle image - this is often called the Rayleigh criterion for resolution: we can still resolve two stars as separate entities when their separation is the Airy disk radius (and not the diameter). This happens when the maximum of one star sits at the first minimum of the other star. Here is a better graph of that: it shows that although the Airy disk diameter is 2.5", you can split stars separated down to 1.25". The numbers work out as shown below.
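For reference, a quick worked form of the relation, assuming λ ≈ 500nm and D = 102mm (which reproduces the 2.5" figure quoted above):

```latex
\theta_{\mathrm{Rayleigh}} = 1.22\,\frac{\lambda}{D}
  = 1.22 \times \frac{500\times10^{-9}\,\mathrm{m}}{0.102\,\mathrm{m}}
  \approx 5.98\times10^{-6}\,\mathrm{rad} \approx 1.2''

\theta_{\mathrm{Airy\;diameter}} = 2\,\theta_{\mathrm{Rayleigh}} \approx 2.5''
```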
  20. Spot as in visually see, or spot as in record with a lucky imaging approach, stack and then sharpen up? Lucky imaging, stacking and then sharpening / frequency restoration will produce an image with detail that can't be observed at the eyepiece - our brain does not possess the ability to sharpen the image like that.
  21. Both are very close to the limit of resolution needed to see granulation. Solar granules are about 1500 km in size, which makes them about 2 arc seconds across. Some are larger, some are smaller. The Airy disk size of an 80mm scope is 3.2", while that of a 102mm is 2.5" (see the numbers below). I think you have a better chance of seeing granulation in a 100mm scope than in an 80mm one - so if that is interesting to you, get the larger aperture scope. In fact, this article: https://skyandtelescope.org/observing/observing-the-sun/ says that granulation can be seen in excellent conditions with at least a 4" scope.
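A quick check of those numbers (assuming λ ≈ 500nm, which gives the Airy figures quoted; the granule is taken as 1500 km seen from 1 AU):

```latex
\theta_{\mathrm{granule}} \approx \frac{1.5\times10^{6}\,\mathrm{m}}{1.496\times10^{11}\,\mathrm{m}}
  \approx 1.0\times10^{-5}\,\mathrm{rad} \approx 2.1''

\theta_{\mathrm{Airy}} = 2.44\,\frac{\lambda}{D}:\qquad
  D = 80\,\mathrm{mm} \Rightarrow \approx 3.2'', \qquad
  D = 102\,\mathrm{mm} \Rightarrow \approx 2.5''
```

So a typical granule is smaller than the Airy disk of either scope, which is why both are marginal for this.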
  22. Thanks for that link. I found the following paper last night and will look into it as well as the FAQ list you provided: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6434053/
  23. https://www.youtube.com/watch?v=sE18PBQRzDQ It does not really answer how good the optics are, but compared to the FS 102 the following comment is given: "Tak has better star image, but in focus, you really can't tell". Start at about 8:00 for the comparison of the two scopes.
  24. What does it mean for me to weight the branch? I'm an observer; I flip a coin. The world splits - the first branch is heads + me having recorded heads, and the second is tails + me having recorded tails (and of course the environment and decoherence and everything else entangled into a particular entity). What is the justification for assigning a weight of 1/3 to the first copy of me and 2/3 to the second copy of me? Why is any particular copy "privileged"? Just because QM says so? QM only says so because there is something in the actual workings of the world that makes me (a single copy) detect heads 1/3 of the time and tails 2/3 of the time. You have to show that the above split leads to me (any probable copy of me - the one that I am at the moment) having a basis to deduce that the probabilities are 1/3 and 2/3, and verify that by experiment. This is not something that happens sometimes - this is something that happens all the time and is one of the most confirmed scientific theories. Many worlds must provide a mechanism for that being so if it is to be the correct interpretation. Saying that the world splits into two - one with a weight of 1/3 and the other with a weight of 2/3 - just makes no sense if you don't explain what those weights are. We can never test probabilities on a single measurement - we need many measurements, and they will converge on a particular probability. Binary branching will never produce convergence to 1/3 - 2/3 probabilities, and since the coin has only two eigenstates, only binary branching is possible. We used to joke back in school that any event has 50% probability - either it will happen or it won't. The many worlds interpretation is a bit like that.
  25. But this exactly contradicts QM - we know the relative frequencies of events based on the eigenvalues from the wavefunction. QM tells us that the probability of heads is 1/3 and the probability of tails is 2/3 - however, in no branch is this true. If we pick any branch at random after 10000 coin flips, it is very likely that we will end up in a branch that has roughly the same number of heads and tails - but QM tells us that we should likely end up in one having a 1/3 - 2/3 ratio instead.