Everything posted by vlaiv

  1. Pixel peeping spreads like wildfire
  2. Here is an interesting idea one might try: any color mapping between NB data and RGB color space will come as a transform matrix. This is due to the nature of light - it is linear - meaning that you can scale it (multiply intensity by a constant) and add it (shine two lights at a sensor and the number of photons in each wavelength adds from the two sources). As such, any transform applied to light needs to be linear - and for vectors, that is matrix multiplication. That really means that:

     R = c1 * Ha + c2 * OIII + c3 * SII
     G = c4 * Ha + c5 * OIII + c6 * SII
     B = c7 * Ha + c8 * OIII + c9 * SII

     Of course, the point is to find coefficients c1 ... c9 that give a pleasing result. One way of doing it is to go the "other way around". Take an RGB chromaticity chart and select your primaries for the image. In such a chart you can select any three points to represent your Ha, OIII and SII signal. All colors in the image will lie within the triangle specified by those three colors. For example - you want to avoid green and you want yellow to be Ha. You also don't really like pink and purple tones in the image - so maybe select your three primaries accordingly.

     Next, open your image processing application (like Gimp or PS, or whichever has a color picker) and note down the RGB values of the colors you have chosen as primaries. Say SII has (1.0, 0.095, 0) as its RGB triplet, Ha has (1.0, 0.91, 0) and OIII has (0, 0.6, 1). The final color is then given as:

     color = Ha * (1, 0.91, 0) + OIII * (0, 0.6, 1) + SII * (1, 0.095, 0)

     Now we group the first numbers in the parentheses - that being red - and write:

     red = Ha * 1 + OIII * 0 + SII * 1 = Ha + SII
     green = Ha * 0.91 + OIII * 0.6 + SII * 0.095
     blue = Ha * 0 + OIII * 1 + SII * 0 = OIII

     There you go - we created our own palette with three primaries so that Ha is yellow, SII is red, and we avoided green and purple / pink colors.
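     A minimal sketch of how that matrix could be applied to stacked, still-linear narrowband data. The array names, normalization to 0-1 and helper function are my assumptions for illustration, not from the post:

     ```python
     import numpy as np

     # Each COLUMN is the RGB triplet chosen for one channel (Ha, OIII, SII);
     # each ROW is one output equation (R, G, B) from the post above.
     palette = np.array([
         [1.0,  0.0, 1.0],    # R = 1.0*Ha + 0.0*OIII + 1.0*SII
         [0.91, 0.6, 0.095],  # G = 0.91*Ha + 0.6*OIII + 0.095*SII
         [0.0,  1.0, 0.0],    # B = 0.0*Ha + 1.0*OIII + 0.0*SII
     ])

     def map_palette(ha, oiii, sii, matrix=palette):
         """Apply a 3x3 color matrix to three narrowband channels.

         ha, oiii, sii: 2D arrays of equal shape, linear data scaled to 0-1.
         Returns an (H, W, 3) RGB image clipped to 0-1.
         """
         nb = np.stack([ha, oiii, sii], axis=-1)     # (H, W, 3)
         rgb = np.einsum('ij,hwj->hwi', matrix, nb)  # matrix multiply per pixel
         return np.clip(rgb, 0.0, 1.0)

     # Example with synthetic data:
     ha, oiii, sii = (np.random.rand(100, 100) for _ in range(3))
     rgb = map_palette(ha, oiii, sii)
     ```

     Changing the palette then just means swapping the three columns of the matrix for different chosen primaries.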
  3. An alternative explanation would be addition of vectors. Think of a school math problem with vectors - like adding velocities. You are moving left at 0.71 and up at 0.43 - what is your total speed?
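     Since those two components are perpendicular (left and up), the total speed is just the length of the summed vector:

     ```latex
     v = \sqrt{0.71^2 + 0.43^2} = \sqrt{0.5041 + 0.1849} \approx 0.83
     ```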
  4. Do check out ImageJ and its stitching plugins in that case (it is Java based, so it will work on any OS): https://imagej.net/plugins/image-stitching
  5. I like this version the best - as far as processing goes.
  6. I'm neither an NB nor a PI guru, but here are a few fun facts.

     SHO has Ha as green for a purpose. Green "participates" the most in the luminance information of the image, and Ha is often the strongest NB component, so mapping it to green gives the best SNR where it matters most. (Out of the three primary components - R, G and B - green carries the most luminance information, over 80%, with red and blue at 54% and 44% respectively.)

     The human eye/brain system is most sensitive to luminance information. It perceives noise in luminance the most. We are perfectly able to watch a "black and white" movie or image - which is just luminance information - while the corresponding chrominance part looks flat and carries far less information. Take a photo of a Ferrari as an example: the luminance of that image shows all of the detail - we just don't see color - while the chrominance information on its own only lets us guess that it is some sort of car, probably a racing car, with no idea of the setting or anything else.

     To me - the idea of killing green in SHO images and trying to make the "Hubble palette" something that it is not - is a complete abomination. For the above reasons and some technical ones - most cameras are simply most sensitive in the green part of the spectrum (that helps with daytime photography, coincides with the fact that we are most sensitive in green, and also coincides with the fact that the green channel carries the most luminance information). Since people don't know how to properly handle raw camera data and convert it to proper color - most astronomical images (which really do best starting off as raw data) have a green cast. 99% of objects in space are not green because they are stellar in nature (which means one of the star-like colors), and instead of learning how to properly deal with color - people came up with "hasta la vista green" or SCNR scripts or whatever. In fact - people got so frustrated with the green they get from their raw data that they started treating all green - even in false color images - as something unnatural.

     In the end - my advice would be: if you don't mind going against popular expectations - don't kill the green in SHO - embrace it.
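     If you want to try this split yourself, here is a rough sketch using the Rec.709 luma weights - my choice of convention, and not the exact figures quoted above, which depend on how "participation" is measured. Luminance is a weighted sum dominated by green; chrominance is roughly what is left once luminance is divided out:

     ```python
     import numpy as np

     # Rec.709 luma weights: green dominates the luminance.
     WEIGHTS = np.array([0.2126, 0.7152, 0.0722])  # R, G, B

     def split_luma_chroma(rgb):
         """Split an (H, W, 3) RGB image (floats 0-1) into luminance and chrominance.

         Returns (luma, chroma): luma is (H, W); chroma is (H, W, 3) with the
         luminance divided out - roughly the "flat" color-only layer described above.
         """
         luma = rgb @ WEIGHTS                            # weighted sum per pixel
         chroma = rgb / np.maximum(luma[..., None], 1e-6)
         return luma, chroma

     # Example with synthetic data:
     rgb = np.random.rand(64, 64, 3)
     luma, chroma = split_luma_chroma(rgb)
     ```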
  7. Yes, sorry about that. I find these things rather easy as I've been working with computers all my life and I often forget that there is a learning curve. Here is what it means in a nutshell.

     Your image has quite large stars. It means that it is over sampled. Over sampled images lose SNR because light is spread over too many pixels. They look blurry as well - like when you magnify an image too much.

     What I'm proposing is the following. Imagine that your image above was "more zoomed out" - but instead of being small, it was larger - as if the additional parts of the sky next to the basic image were imaged too. This is called a mosaic and is made out of panels. You shoot individual images like you did of the central portion - but of the surrounding parts as well.

     That is the first step. Some imaging software has mosaic planners - which simplify things, but you can do it via Stellarium or other planetarium software. You start by determining overlap points - put one in each corner (use shift + left click to place a marker and shift + right click to remove it). Now move the FOV so that it is positioned as a particular panel of the mosaic - and put a marker in the center. For, say, the top right panel of a 3x3, you can then read off the RA/DEC coordinates of that marker - so you know where to position the scope. If you can't do that precisely - then you'll have to look at the stars and aim for a particular star close to the marker to be in the center of the FOV. That way you can get positions for each of the 9 panels.

     You divide your session into 9 equal parts and take subs for each of those 9 panels (you only need one set of calibration frames). Once you are done - stack them like you normally would, like you did for the image above, but once stacking is done - while the data is still linear - bin your data x3 (for a 3x3 mosaic). This is done in processing software. If you don't have any software that supports this option - get ImageJ (it is open source scientific image manipulation software written in Java so it will run on any OS) - it has the option to bin your data under Image / Transform / Bin. Just select 3x3 in this case and the average method. Do that with all 9 images.

     Then make a composite image out of them. You can do this with various ImageJ plugins that create a mosaic (it is called Stitching). The advantage of doing it like that is that you can do it while still linear. Alternatively - process each of the 9 panels and then use Microsoft ICE to stitch them into the final image. There is also iMerge software that will stitch already stretched images together. Or you can do it in Gimp if you want - by using layers, translation and rotation. That is how I created an earlier image of mine (you could actually see the panels because the image was very stretched and I did not use flats). That image was also done with a very small sensor / planetary type camera at a similar focal length to yours - about 500mm.

     I know this all sounds a bit intimidating - but why not give it a go?
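     The ImageJ bin command just averages blocks of pixels; if you prefer to script it, here is a minimal numpy sketch of 3x3 average binning of a stacked, still-linear panel (the function and variable names are my own, not from ImageJ):

     ```python
     import numpy as np

     def bin_average(img, factor=3):
         """Software-bin a 2D image by averaging factor x factor blocks of pixels.

         Edge rows/columns that do not fill a complete block are cropped off.
         """
         h, w = img.shape
         h, w = h - h % factor, w - w % factor   # crop to a multiple of factor
         blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
         return blocks.mean(axis=(1, 3))

     # Example: a 1000x1200 panel becomes 333x400 after 3x3 binning
     panel = np.random.rand(1000, 1200)
     binned = bin_average(panel, factor=3)
     print(binned.shape)   # (333, 400)
     ```

     Average binning trades resolution for SNR - each output pixel is the mean of 9 input pixels, so (for uncorrelated noise) pixel-to-pixel noise drops by roughly a factor of 3.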
  8. I would say the stars do look better. Is that with the ASI224? Things do still look a bit over sampled. The ASI224 is a small sensor. Maybe try a mosaic? 3x3 with each panel binned?
  9. I'm struggling to understand how this image came to be. Why is it 14224 x 11407 px, and how did you combine 6 panels to get that? Why is it so terribly over sampled? A quick measurement gives about 0.7"/px, but you say you took this image with an Esprit 150 with a 0.77 FR - which would make it about 800mm of FL. The pixel size needed for 0.7"/px at 800mm FL is 2.7µm. Did you drizzle x2 the base resolution from the 8300 for some inexplicable reason? If you look at the image at 100% - the stars are huge. This looks over sampled by a factor of x6!
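     For reference, the 2.7µm figure follows from the usual small-angle pixel scale formula:

     ```latex
     \text{scale}\,[''/\text{px}] = 206.265 \times \frac{\text{pixel size}\,[\mu\text{m}]}{\text{FL}\,[\text{mm}]}
     \;\Rightarrow\;
     \text{pixel size} = \frac{0.7 \times 800}{206.265} \approx 2.7\,\mu\text{m}
     ```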
  10. That is based on an imprecise measurement - the same as the RA, DEC and total RMS figures. The same values are used to calculate and plot the graphs. I'm sure the Mesu is an excellent mount and I'm sure that the actual performance is not far from this, but I'd rather have 0.3" total RMS that I know is correct than 0.2" total RMS that is measured without enough precision.
  11. Yes - inside nebula - we would just see empty space around us.
  12. Here is an interesting development. I proposed that we take an extended object and place it in the center of the FOV and then half way outside the field stop - to test the integrated brightness argument. @jetstream told me that he has done so many times and there is no change in contrast/brightness perception whatsoever.
  13. Well, first - as was pointed out, visual is not equivalent to prime focus photography - it is equivalent to the afocal method. Second - F/ratio does not equate to speed for photographic applications. A proper definition of speed would be "aperture at resolution". In your comparison to the HST - you are lacking knowledge of the pixel size used for both instruments. F/ratio really does nothing for visual - a 4" F/5 scope will show the same view as a 4" F/10 scope when paired with a suitable eyepiece (say 10mm in the F/5 versus 20mm in the F/10). This has been pointed out by Stu just a few posts ago:
  14. How severe does it need to be to significantly impact DSO images with long exposures? It will certainly impact planetary performance, but a 130mm scope has critical sampling in OIII at 0.4"/px and the above images are probably taken at at least x3 that, so 1.2"/px or more. Given 2" seeing, is there going to be a significant difference if, say, spherical aberration is as poor as 1/2 wave?
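     The 0.4"/px figure is Nyquist sampling of the diffraction cutoff of a 130mm aperture at the OIII wavelength (~500nm):

     ```latex
     \text{critical sampling} = \frac{\lambda}{2D}
     = \frac{500\times 10^{-9}\,\text{m}}{2 \times 0.13\,\text{m}}
     \approx 1.92\times 10^{-6}\,\text{rad}
     \approx 1.92\times 10^{-6} \times 206265'' \approx 0.4''
     ```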
  15. I'm not buying that. The image shows that you have 0.02px error in RA and 0.01px error in DEC, which translates to 0.14" and 0.13" respective errors. I really don't think you have reliable data when you are trying to measure star position to 1/50 - 1/100 of a pixel. As far as I know - centroid algorithms can determine position down to about 1/16-1/20 of a single pixel. Your guide system is just too coarse to be able to reliably measure guide error in that range (large pixels, small guide scope).
  16. Yes, but only extended sources - point sources remain of the same brightness. This is the crux of the question. If you take an 8" scope and a 4" scope and produce the same exit pupil - you should get the same brightness of an extended object. On one hand - the 8" scope gathers x4 more photons, but on the other hand - the effective F/ratio, as you put it, will be twice as high compared to the small scope if you keep the exit pupil the same (twice as long a FL) - which will make the image x4 as large by surface - and hence "dimmer". The two cancel out perfectly. We should see things equally bright in the 8" scope and the 4" scope. To put it in astrophotography terms you seem to be familiar with - an 8" F/5 scope will be equally fast as a 4" F/5 scope - right? Why do we see faint stuff in the 8" F/5 scope that we don't see in the 4" F/5 scope when using the same eyepiece?
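     A short sketch of why the two effects cancel, for a fixed exit pupil p (so magnification M = D / p):

     ```latex
     \text{photons gathered} \propto D^2, \qquad
     \text{image area} \propto M^2 = \frac{D^2}{p^2}
     \;\Rightarrow\;
     \text{surface brightness} \propto \frac{D^2}{D^2/p^2} = p^2
     ```

     Surface brightness at the eye depends only on the exit pupil, not on the aperture - which is exactly the claim above.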
  17. No, pixel scale is not included in those conditions. I'm strictly speaking about the focal plane of the telescope - which one produces a sharper image. In your example - 12" at 1.5"/px vs 4" at 1.5"/px - the 12" will produce a sharper image - smaller star FWHM - if both scopes are diffraction limited. There are however cases - usually vastly under sampled - that will result in virtually the same images. Say we have 12" at 8"/px and 4" at 8"/px - they will produce virtually the same image, as image resolution is determined solely by the (under) sampling here. Square pixels produce something called pixel blur. In normal cases it is very small and overpowered by the other blurs in the image, but with heavy under sampling it overpowers all other blurs (similar to seeing - in very poor seeing, it is the seeing that dominates and hides guide errors and aperture size).
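     One common way to put a number on pixel blur - my formulation: treat the square pixel of width p as a box filter and quote its Gaussian-equivalent width:

     ```latex
     \sigma_{\text{pixel}} = \frac{p}{\sqrt{12}} \approx 0.29\,p,
     \qquad
     \text{FWHM}_{\text{pixel}} \approx 2.355\,\sigma_{\text{pixel}} \approx 0.68\,p
     ```

     At 1.5"/px that is only about 1" of FWHM-equivalent blur, easily swamped by seeing; at 8"/px it is about 5.4" and dominates everything else.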
  18. Why do you shoot luminance with narrowband? That defeats the point of doing narrowband in the first place. Divide your imaging time between Ha, OIII and SII. In general - OIII will be much fainter than Ha, and SII fainter still. However, this depends on the target. How much you capture of each will also depend on how you process your data. Ha, being the strongest, needs the least amount of time to reach a set SNR. However, if the target is such that you can use Ha as luminance - then higher SNR for it will be beneficial, as luminance needs more SNR - we are far more sensitive to detail in luminance than in chrominance data and will see noise in luminance much more.
  19. If you want to compare camera systems to visual observation - then use the afocal method instead. This is analogous to visual, as we have a telescope, an eyepiece, a camera lens (the eye's lens) and a sensor (the retina). Now if you want to magnify the image - use a different eyepiece; the F/ratio of the camera lens, of the human eye, and of the telescope for that matter, will not change.
  20. Maybe using Ha as luminance would solve that? RGB can be binned x2 or x3 if it is only supplying chrominance data, as we are far less sensitive to loss of detail in chrominance than in luminance. Here is a quick and dirty example of using the Lum from the Ha version on the RGB only version in Gimp:
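     Outside of Gimp, the same layer trick can be sketched in a few lines - here using scikit-image's Lab conversion, with CIELAB lightness standing in for luminance (my choice of tool and color model; the post used Gimp layers):

     ```python
     import numpy as np
     from skimage.color import rgb2lab, lab2rgb

     def apply_ha_luminance(rgb, ha):
         """Replace the lightness of an RGB image with an Ha channel.

         rgb: (H, W, 3) floats in 0-1 (already stretched).
         ha:  (H, W) floats in 0-1 (already stretched), same size as rgb.
         """
         lab = rgb2lab(rgb)          # L in 0-100, a/b carry the chrominance
         lab[..., 0] = ha * 100.0    # swap in Ha as the new lightness
         return np.clip(lab2rgb(lab), 0.0, 1.0)

     # Example with synthetic data:
     rgb = np.random.rand(64, 64, 3)
     ha = np.random.rand(64, 64)
     lum_rgb = apply_ha_luminance(rgb, ha)
     ```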
  21. That is a tough question to answer. We would need to know its surface brightness. The fact that it is an Ha object does not help much, as we don't have good sensitivity in Ha. We would need to know how much Hb there is alongside the Ha (how much hydrogen is excited to emit Hb as well) - as that is easier to see. Its size also plays a part - but how much - that bit is still not quite clear to me.
  22. I prefer the HaRGB version. Imaging time brings diminishing returns. Adding one more hour of data makes a huge difference if you have just one hour of data - but makes almost no difference at all if you already have 8h. Is the HaRGB image going deeper? Well, yes it is - and it goes deeper as you would expect from roughly doubling the exposure - so everything is fine there, but did you "need" to spend that much time on the target? I'm afraid I can't answer that one. There you go - that nebulosity is clearly more visible in the HaRGB image.
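     To put rough numbers on the diminishing returns (assuming the stack is shot-noise limited, so SNR grows with the square root of total integration time):

     ```latex
     \text{SNR} \propto \sqrt{t}:\qquad
     \frac{\sqrt{2\,\text{h}}}{\sqrt{1\,\text{h}}} \approx 1.41 \;(+41\%),
     \qquad
     \frac{\sqrt{9\,\text{h}}}{\sqrt{8\,\text{h}}} \approx 1.06 \;(+6\%)
     ```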
  23. F/ratio is the ratio of focal length to aperture of the telescope - and it does not change irrespective of the eyepiece used in the telescope. No, the F/ratio remains the same and has nothing to do with things getting darker. You really don't have to buy into my argument - it is an empirical fact - next time you are at the telescope, observe the background sky brightness at two different magnifications. Observe a star under two different magnifications (low enough that you don't start to resolve the Airy disk, as that is the point where a point source stops being a point source).
  24. I think that we would see something similar - a bit darker, but rather similar. Anyone who has had the opportunity to view the Milky Way from a very dark site under great transparency with good dark adaptation sees tons of features. I once had such an experience by accident - and I remember being shocked by what I saw. I was traveling by bus to Greece at night. We stopped at the border crossing between North Macedonia and Greece. This border crossing is known for constant wind, so the air is always clear. It is away from major LP sources - or at least it was then; this was back around 2006 or so. While waiting at the border crossing we got out of the bus to stretch our legs - and then I saw it. I was speechless. Quite a memorable experience. Here are some experiences under truly dark skies: https://www.cloudynights.com/topic/692768-what-is-the-impression-under-a-bortle-1-sky/ Floating in outer space would be "even darker" - as there is no atmosphere to interfere at all. No extinction - and not even natural LP. Just darkness.
  25. It is everyone's understanding and experience that larger telescopes let us see fainter objects - we are just trying to understand why that is. For point-like sources, that is sort of easy to understand. When you zoom in on a point source - well, it stays point-like - a star will be point-like in both a large and a small scope. A point-like star covers only a very limited number of detector cells and that number does not change between the large and the small scope. The total photon count does change - and when you divide a larger total photon count over the same number of receptors - each receptor gets more photons - hence the image is brighter. With extended sources - photons spread around with magnification - that is why they are different from point sources and are called extended sources - they behave differently. You don't need a bright planet to see this - just look at the background sky. It has some brightness to it and is an extended source itself. Increase magnification and it will get darker.