Posts posted by vlaiv

  1. 6 minutes ago, simonharrison said:

    Interesting, didn't expect that re: Barlow, as I'd read so much on forums about pixel size * 5 being the optimum F/number to image at.

    Add the Nyquist sampling theorem (two pixels per cycle of the finest detail the aperture passes), take lambda to be either 400nm - the lower end of the spectrum - or around 500nm (blue is more affected by seeing and you probably won't achieve full resolution in blue anyway), and you have the formula for critical sampling: F/ratio = 2 x pixel size / lambda.

    At 475nm with 2.9µm pixels that gives F/12.2.
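
    If you want to check the arithmetic, here is a minimal sketch of that calculation (plain Python, just restating the formula above):

        # Critical F-ratio from Nyquist sampling: the pixel has to be at most
        # half the period of the finest detail the optics pass, which gives
        # F = 2 * pixel_size / wavelength.
        pixel_size = 2.9e-6   # m (2.9 um pixels)
        wavelength = 475e-9   # m (blue end, as discussed above)

        critical_f_ratio = 2 * pixel_size / wavelength
        print(round(critical_f_ratio, 1))   # ~12.2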

  2. 8 minutes ago, Quetzalcoatl72 said:

    Here's a stacked image with my DSLR and reducer; if it's a leak it would be from the ASI533, however that's not the case without the reducer.

    I just checked - the ASI533 has just a plain AR coated window, which means it is sensitive in the IR part of the spectrum. A DSLR is not.

    This could mean that the problem is related to the focal reducer - but it might be that it is not. It could be a simple IR leak that just started to show for a different reason. What has changed in your setup or environment recently?

  3. 2 minutes ago, Quetzalcoatl72 said:

    I do not use covers of any kind, can you give me an example?

    How do you take your darks? Don't you have to put some sort of cover on the front of the scope?

    Here, for example, is a solution from one member here:

    [attached photo]

    But light leaks can happen in different places - at extension tubes or even at scope seams. Here is another case of light leak handling from the CloudyNights forum:

    [attached photo]

  4. 1 minute ago, Quetzalcoatl72 said:

    When I was taking darks the next day (not added to this image) I could see similar rings on my darks, which is literally impossible, right?

    That actually suggests a light leak of sorts.

    What do you use to cover your scope? A light leak can come from the back side - but also from the front side. If you have a plastic scope cover - that won't stop IR radiation from getting in. Most people add aluminum foil in that case.

  5. 2 hours ago, msacco said:

    Thanks for the great comment once again! So is there some 'accepted standard' here on which of them should be used? Or is it really just a matter of personal preference and each of them could be considered 'okayish'?

    No, there is no accepted standard.

    I would argue that one should follow a set of technical standards when attempting to recreate the actual color of the object (within acceptable limits), but otherwise people are free to do what they like with their images - a sort of artistic freedom.

    There were a couple of discussions on what is "allowed" in astrophotography processing without it being considered "cheating" / "making data up", and my view on that topic is rather conservative. I don't really like the clone stamp tool or the use of brushes or morphological filters for star reduction, and I prefer minimal noise reduction and minimal sharpening, if any. I also don't like boosting saturation - although most people are used to seeing images like that: over-saturated.

  6. 58 minutes ago, msacco said:

    My question is, would the correct thing be to combine my results into some sort of HOO? And would having these orange colors I got be considered 'incorrect'?
    Again, it's not like I expect the image to represent the 'true color' of the area, but the only thing I want is to have the correct colors in terms of 'these colors are not added' or w/e, I want my image to be 'genuine' to the way I captured the data, which is OSC with a dual band filter.

    Would the orange thing I got meet these requirements?

    OK, I understand now what you are after - you just want to make sure you preserve the color that came out of the camera as-is, right?

    Well, that is something that is not possible. We stretch our data and in doing so we are changing it. RGB ratios change when you stretch your data - and even if you stretch it in a particular way that keeps the relative ratios - you are again going to change what we see as color. Different stretches of the same RGB ratio will produce a different color sensation in our eye.

    Bright orange is, well - orange, but dark orange is no longer orange - it is brown to our eye - although it is the same spectrum of light, we see it as a different hue.

    Having said that - if you don't intentionally mess with color - you should be able to preserve the raw color from the camera "in general" (that is not accurate in terms of color reproduction - but can still be considered authentic, as made by your equipment). The level of processing involved will determine how "accurate", or rather "preserved", the color is (and there are different metrics of "preservation" - one related to the light spectrum, for example, and one related to our perception).

    If you again look at that image I posted - yes, an OIII + Ha combination can produce an orange color:

    [attached image: Ha/OIII colour line]

    It can range from green/blue combination across pale yellow to orange and red.

    In fact, when doing a bicolor image - we often intentionally change the color. That is because the OIII signal can often be much fainter than the Ha signal and the image will end up being mostly red. If we want to show OIII structure - we will often boost it compared to Ha and that will shift the color on the above line towards the green end.

    So here is what the "as is" raw color from the camera looks like:

    [attached image]

    This is to be expected - the Ha signal is much stronger and the image is almost completely red.

    If you want to show OIII a bit more clearly - well, you need to boost it separately. This is already deep in fake color territory - as we adjust the color components separately.

    [attached image]

    Now that we made OIII stronger - it shows some structure - in the upper right corner there is a wall that still has dominant Ha, but the rest of the nebula shows OIII presence as well.

    This shows that both of your versions are "correct" - it just depends on how you define correct. If you want the data as it came out of the camera - a linked stretch will provide that. If you want to emphasize the otherwise weaker OIII signal - use an unlinked stretch.
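
    If it helps to see that difference in pixel-math terms, here is a minimal sketch (assuming simple linear stretches and made-up Ha/OIII arrays, just for illustration):

        import numpy as np

        # Made-up linear channel data: Ha much stronger than OIII, as in this image
        ha = np.random.rand(100, 100) * 0.8
        oiii = np.random.rand(100, 100) * 0.1

        def stretch(x, black, white):
            # simple linear stretch with clipping
            return np.clip((x - black) / (white - black), 0, 1)

        # Linked stretch: same black/white point for both channels,
        # Ha/OIII ratio is preserved and the image stays mostly red
        ha_linked, oiii_linked = stretch(ha, 0, 0.8), stretch(oiii, 0, 0.8)

        # Unlinked stretch: each channel stretched to its own range,
        # OIII is boosted relative to Ha and the colour shifts towards green
        ha_unlinked = stretch(ha, ha.min(), ha.max())
        oiii_unlinked = stretch(oiii, oiii.min(), oiii.max())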

     

  7. 20 hours ago, msacco said:

    I'm interested, but I don't own a copy of PI, so I would prefer a standard file format like 32-bit floating point FITS rather than XISF, which is a PixInsight-only file format.

    On the color topic - well, the answer is rather complex.

    The fact that this is in essence a narrowband image does not mean that it is necessarily false color. If the color is to be considered accurate - we must decide what color we are talking about:

    - Actual color of the object and stars - well, no luck there, the filter used completely obliterated that color information

    - Color of the light that passed through the filter - we can talk about that color. Since these are emission type objects with Ha and OIII emission lines - we can talk about those colors.

    You can choose to do an actual narrowband color image - and you may be partially or fully successful in recreating the actual color of the captured light. That largely depends on the ratio of Ha to OIII signal.

    Both the Ha and OIII colors are outside of the sRGB color gamut, and we will talk here only about the sRGB gamut as I presume the intent is to display the image online, where sRGB is the implied standard (we could also talk about other, wider gamut color spaces - but only a few people would be able to use that, as it requires both a wide gamut display and a properly set up operating system that can display wide gamut images).

    [attached image: chromaticity diagram]

    This is a chromaticity diagram that shows all the "colors" (at max brightness) that exist - with actual colors shown only inside the sRGB color space. That is because the other colors (the gray area) simply can't be properly displayed by your screen / the sRGB color space.

    I drew a line on this diagram. It connects the ~500nm point on the spectral locus to the 656nm point on the spectral locus. These are the OIII and Ha colors. Any light consisting of these two wavelengths (any combination of strengths of each) will have a color that lies on that particular line.

    The sRGB color space will only be able to show those colors that are inside the colored triangle along the black line. All other colors further left or right on that line will be too saturated a green/teal or too saturated a deep red. A computer screen simply can't display those.

    If your image consists of combinations that lie inside the triangle - great, you'll be able to fully display the color of the light that passed through the filter. If not - we must use a trick. There are several ways to do it: we can just clip the color along the line - if a color lies in the deep reds, we show it as the reddest color along the line that we can display, and similarly for the OIII side. Another approach would be to do perceptual mapping - we instead choose the color that looks most like the color we can't display - like a dark deep red for Ha. This process is known as gamut mapping.

    You can also decide to do a fake color narrowband image - similar to SHO or HSO images - but you only captured two wavelengths, so you can only create a bicolor image. The above is also a bicolor image - but with accurate colors for the captured light.

    With fake color - you can choose any bicolor scheme you like - HOO or similar.

    In any case - before you choose any of these - you actually need to extract the color data from your image. You used a color sensor with a duo-band filter and you'll need some math to extract the actual data.

    Look here:

    [attached image: L-eXtreme bandpass graph]

    We can see that the L-eXtreme only passes ~500nm and ~656nm, but if we look at the QE graph for the ASI071:

    [attached image: ASI071 QE graph]

    We can see that 500nm ends up being picked up by all three channels, and so does 656nm. In fact we can write a couple of equations:

    red = OIII * 4.5% + Ha * 78%

    green = OIII * 68% + Ha * 9%

    blue = OIII * 45% + Ha * 3%

    From these you can get OIII and Ha in several different ways using pixel math - I recommend the following:

    Ha = (red - green / 15.111) * 1.292

    OIII = (green - red / 8.667)  * 1.46

    If you want to get accurate color, then you need to convert Ha + OIII into an XYZ value and then convert XYZ to sRGB; otherwise - just use Ha and OIII to compose RGB as HOO or another bicolor combination.
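
    For what it's worth, here is the same pixel math as a small Python / numpy sketch (the red/green arrays here are just stand-ins for your linear, unstretched channels):

        import numpy as np

        # Stand-ins for the real data - use your linear (not stretched,
        # not white-balanced) red and green channels from the debayered stack
        red = np.random.rand(1024, 1024)
        green = np.random.rand(1024, 1024)

        # Invert the two mixing equations above (blue is redundant, so it is not used)
        ha = (red - green / 15.111) * 1.292
        oiii = (green - red / 8.667) * 1.46

        # Simple false-colour HOO composition: Ha -> red, OIII -> green and blue
        hoo = np.dstack([ha, oiii, oiii])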

     

  8. 45 minutes ago, AbsolutelyN said:

    It's actually an 8" scope - I had to stop the aperture down to the size constraint of an a4 sheet of baader solar film. Taken with a 3x barlow so 3600mm. Interesting comparison - needs good seeing though. 

    There is probably one more variable that is off - at least according to the AstroBin info for that animated version, it says an SII filter was used (probably to tame the seeing). That is a different wavelength than the 540nm used for the simulation above.

  9. 1 minute ago, Davey-T said:

    It says D3300 is not supported.

    I also checked the APT website - the D3300 is not supported; Ivo mentioned that Nikon does not publish a comms library for this model (although it is basically the same as the D5300).

    10 minutes ago, NorthOfNorth said:

    We recently tried to connect it to my laptop for my son to do youtube videos and had a terrible time trying to connect it to any software. Would that be a problem while trying to learn?

    As far as I can tell - you won't be able to control it via a computer, and that is a bit of a problem - but it won't be detrimental for starting out in AP.

    You can either work with 30s exposures to start with, or get an intervalometer for this camera - like this one:

    https://www.amazon.com/PHOLSY-Control-Intervalometer-Shutter-Replaces/dp/B01N133BI6/ref=sr_1_5?dchild=1&keywords=intervalometer+nikon&qid=1623774250&sr=8-5

    or similar.

    You can use a remote or a tethered one for AP.

    Save your subs to the card and later transfer them to a computer for processing.

  10. If you are the sort of person that enjoys extreme sports / horror movies - then yes :D

    Otherwise - no, as it won't make much difference.

    I did that with my first Newtonian when I flocked it, and at the time I was so worried that I'd damage the mirror in one way or another that it caused an adrenaline rush :D. Now, looking back at it after some time - I really do think that it was unnecessary as far as the views go - but it was an interesting exercise and possibly a good way to get more comfortable around optical equipment.

  11. Here is another sim :D

    This time 10" scope capturing granulation:

    [attached image: capture vs simulation comparison]

    On the left is an excellent capture by @AbsolutelyN using a 10" scope and an ASI178MM camera, and on the right is a simulation of granulation from the image above. I'm not sure if I matched the pixel scale properly since I'm missing the pixel scale for both images - I just measured the cells and assigned 2" to the measurement. I also tried to match contrast and brightness.

  12. 8 minutes ago, Stu said:

    Sorry to bang on about this, I guess I get a little ‘challenged’ by being told I can’t see something because of theory, vs the real world experience of what is practically achievable.

    Don't worry, I perfectly understand where you are coming from and I do believe that you are seeing what you are seeing.

    This is not the first time people have said that they are seeing more at the eyepiece than theory predicts should be possible. Well - not the theory, but rather my interpretation of the theory / my simulation. I have no doubt that the theory is correct (otherwise it would not be a scientific theory) - and it is often our misuse of it that makes bad predictions.

    In any case, if practice and theory/simulation strongly diverge, then there are a couple of possible explanations:

    - My understanding of the theory is incomplete / my application of the theory is outside of its domain of validity, or I simply made a mistake applying it (wrong simulation parameters or just an error in the simulation process).

    - There are additional parameters that we neglected / omitted that are significant enough to make the difference.

    - There are other factors that result in the perception of what might not be physical reality. When I say this - I don't mean that you are imagining things, I simply mean that the eye-brain system is very complex and does a lot of things "behind the curtain" to enable us to see the way we see, and some of that might have a particular effect in this case. That is the reason we have optical illusions, for example, or the reason we never see photon noise (although we should - we are sensitive enough to detect light at that level - however our brain does noise suppression).

    In any case, I feel that it is beneficial to pursue this discrepancy further, and I already have a couple of ideas of how to go about it (not for this particular case - but to test the resolution of actual optics on an object such as Jupiter without the influence of the atmosphere - to see/record how optics of different quality render the image).

  13. 22 minutes ago, michael.h.f.wilkinson said:

    I think we agree that the granulation is visible at 80 mm aperture, but that the shapes of the grains themselves are not well resolved.

    Indeed.

    I think that the OP has enough material to at least somewhat assess the differences between 100mm and 80mm scopes for WL observation - both simulated and first-hand accounts.

  14. 6 minutes ago, michael.h.f.wilkinson said:

    But then I image at roughly 0.5" per pixel and get this

    Is that a sharpened version?

    If I take the image you posted at the beginning of the thread and compare it to the 80mm scope simulation enlarged to the same resolution, we get this:

    [attached image]

    vs

    [attached image: 80mm simulation enlarged]

    Mind you, the second one is not wavelet sharpened / deconvolved and is simulated at 540nm and not full spectrum.

    That looks pretty similar to me.

     

  15. 6 minutes ago, michael.h.f.wilkinson said:

    I also see a sharper image than what vlaiv posted as representative of the view of an 80mm. The main issue here is that I am not sure his image reproduces the dynamic range of the live image that well. The input image has been processed, and might not represent contrast faithfully. Higher contrast always results in higher apparent resolution.

    Do keep in mind that the image was produced at 540nm - or rather simulated in that light - as the Baader Continuum filter is centered at that wavelength.

    At 540nm, the critical sampling rate for an 80mm scope is 0.7"/px - that means a typical cell is represented by only about 3 pixels. In those three pixels across, one needs to fit both walls of the cell and the interior. I just don't see how we can resolve a typical cell at that wavelength clearly enough to see the interior and walls.
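
    Quick check of that number, if anyone wants to verify it:

        # Critical sampling rate for an 80 mm aperture at 540 nm:
        # the cutoff frequency of the aperture is D / lambda cycles per radian,
        # Nyquist needs two samples per cycle -> pixel scale = lambda / (2 * D)
        wavelength = 540e-9            # m
        aperture = 0.080               # m
        rad_to_arcsec = 206265

        pixel_scale = wavelength / (2 * aperture) * rad_to_arcsec
        print(round(pixel_scale, 2))   # ~0.7 "/px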

    Here is what 0.7"/px looks like regardless of resolution - and that is sharper than the optics will produce:

    [attached image]

    Here we have a comparison of the original image and one sampled at 0.7"/px and then enlarged (Lanczos resampling) to match the size of the right image.

    Even without the effects of aperture - we can't really say that we see cells and walls. Add to that the blurring by the telescope optics (a perfect one - better even than the almighty Tak :D ) and you'll get this:

    [attached image]

    Here is the full procedure for creating the image:

    I took the original JPEG image, converted it to a 32-bit / monochromatic version, did an inverse gamma of 2.2 to bring it to linear (hopefully whoever made the image followed the sRGB standard), and measured a random cell size. It measured 270px across, so we now have a 1/135 "/px base resolution for our base image.

    I generated an Airy disk pattern, calculated the critical sampling rate for 540nm, convolved the linear image with the Airy disk and scaled it down to the critical sampling rate (no need to keep it bigger - we can always enlarge it later without loss of detail, as there is no detail past the critical sampling rate).

    I applied a green LUT to further simulate the 540nm filter and did a forward gamma of 2.2. The image was converted to 8-bit and saved as JPEG (high quality settings).
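
    For anyone who wants to repeat this, here is a rough Python sketch of that procedure (my own reconstruction, not the exact script I used - the input array here is just a stand-in for the linearized source image):

        import numpy as np
        from scipy.ndimage import zoom
        from scipy.signal import fftconvolve
        from scipy.special import j1

        RAD_TO_ARCSEC = 206265.0

        def airy_psf(aperture_m, wavelength_m, pixel_scale_arcsec, half_size=300):
            # Airy disk intensity pattern sampled at the given pixel scale;
            # half_size should cover at least a couple of Airy rings at this scale
            rad_per_px = pixel_scale_arcsec / RAD_TO_ARCSEC
            y, x = np.mgrid[-half_size:half_size + 1, -half_size:half_size + 1]
            k = np.pi * aperture_m / wavelength_m * np.hypot(x, y) * rad_per_px
            with np.errstate(invalid="ignore", divide="ignore"):
                psf = (2 * j1(k) / k) ** 2
            psf[half_size, half_size] = 1.0   # central value (limit of the expression at k = 0)
            return psf / psf.sum()

        base_scale = 1.0 / 135.0                    # "/px, from the measured 2" cell = 270 px
        img = np.random.rand(2048, 2048)            # stand-in for the 32-bit mono source image
        img_lin = img ** 2.2                        # inverse gamma -> linear

        psf = airy_psf(0.080, 540e-9, base_scale)   # perfect 80 mm aperture at 540 nm
        blurred = fftconvolve(img_lin, psf, mode="same")

        critical_scale = 540e-9 / (2 * 0.080) * RAD_TO_ARCSEC   # ~0.7 "/px
        small = zoom(blurred, base_scale / critical_scale)      # resample to the critical rate

        result = np.clip(small, 0, 1) ** (1 / 2.2)  # forward gamma for display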

    @Stu

    Please don't think that I'm negating your experience. There are several things that could be wrong with my simulation. I could have made a wrong measurement of a cell - taken a particularly large cell that is actually 3" instead of 2" in size (although I did aim for an average one and measured it in the shorter direction just to be sure). I could also have applied the theory in the wrong way - or simply made an error in the calculation.

    That is why it is important to have repeatable results when doing simulations - similarly to experiments. I would be much happier if someone else also did a simulation so we could compare results.

    There is also the already mentioned question of dynamic range / contrast, and of monitor calibration, pixel size and viewing distance.

  16. Well, here is a simulation so we can see what it looks like. I took this high resolution image of granulation (the best I could find - in fact it's titled "Highest_resolution_photo_of_Sun_(NSF)_as_of_January_20,_2020"):

    [attached image: granulation reference image]

    This is actually a scaled-down version - the original is huge, 7320px x 7320px - which is plenty of resolution for the simulation; individual cells are large and nicely resolved:

    [attached image: crop of the original]

    I took two perfect 80 and 100 mm scopes, added a Baader Solar Continuum filter (540nm simulation) and produced the respective images. In reality, the view is going to be worse than this because of imperfect optics and seeing effects:

    80mm:

    [attached image: 80mm simulation]

    100mm:

    [attached image: 100mm simulation]

    Both of these squares represent the full image posted above (the first one scaled down to fit the screen).

    Although you can see the granulation texture - you really can't resolve single cells in either the 80 or the 100mm scope. Most cells are visually simply joined into a larger blob that we see and think is an individual cell - but it's not.

     

  17. 1 hour ago, Stu said:

    Did you view granulation when you observed? What sort of powers did you use?

    In most cases - I was seeing limited. I'm not an experienced solar observer and most of the time I was actually looking at some phenomenon - like an eclipse or a transit - rather than doing proper solar observation.

    I remember seeing granulation - but it was on one or maybe two occasions.

    My current WL solar equipment is actually very well suited to running an experiment on this :D - I have both an 80mm F/6 APO and a 102mm F/10 achromat. Together with a Lunt Herschel wedge and Baader solar continuum filter - it is a decent WL setup.

    1 hour ago, Stu said:

    I don’t agree that it is not visible in smaller scopes.

    I have no reason to disagree, as you have first-hand experience.

    To a first approximation, it looks like 80mm is not able to resolve granulation, but that is not necessarily what the theory itself is saying (and in fact - when theory disagrees with practice - it is not because the theory is wrong, it is because it is not properly applied).

    The best I could do is make a simulation of the view in 80mm and 100mm scopes so we can see how much better the view would be in the 100mm scope under perfect conditions.

    The alternative would be for me to get out and do a comparison in WL between the two scopes and attempt to see granulation in the 80mm one. That would require good seeing. The last time I observed in WL I had trouble even seeing faculae near the limb, and a friend I invited over for the session could not see them at all (which clearly shows that even limited observing experience helps in seeing detail).
