Everything posted by vlaiv

  1. I'm not sure how much integration time that is, but I've seen similarly deep or even deeper images done by a single imager. I don't know how they combine subs - but look at the post above - I'm not aware of any software that will perform the necessary computations when combining images with largely different SNR, and an ineffective combination can lead to a drop in SNR below even a single source (the result can be worse than the best submission if one is not careful).
  2. Could be, but we need to be careful about several things. The first is SNR compatibility. Each stack that users provide would also need an SNR map to go with it. Regular stacking only works when the images have the same SNR. If you combine images with different SNR, you can actually "spoil" the better stack with the worse one.

Here is a simple example of how things work in this regard. Say you have one image with SNR 10 and another with SNR 2 - what SNR will their average have? Take the simple case where the signal in each is 10 and the noise is 1 and 5 respectively (10/1 = 10 and 10/5 = 2). The average of 10 and 10 is, well, 10, so the signal stays the same. What is the average of the noise? It is the added noise divided by the number of items added. The only thing we have to be careful about is how noise adds - it does not add like a simple sum, it adds as the square root of the sum of squares. The averaged noise is thus sqrt(5^2 + 1^2) / 2 = sqrt(26) / 2 ≈ 2.55, and the resulting stack SNR is 10 / 2.55 ≈ 3.92! We started with one image of SNR 10 and ended up with a result of SNR 3.92!

Hold on, you might say, how does stacking work then? Let's try the same thing with two images that both have SNR 10, i.e. signal 10 and noise 1. The signal part is easy - the average is again 10. Repeating the noise part gives sqrt(1^2 + 1^2) / 2 = sqrt(2) / 2 ≈ 0.7071, so the resulting SNR is 10 / 0.7071 ≈ 14.14. We have improvement!

This is why we must use a weighted average, where a poor SNR image contributes much less to the final stack than a good SNR image. The problems with this are: a) no implemented algorithm uses per-pixel weights (PixInsight, for example, uses per-sub weights, and only a uniform gray sub has the same SNR for all pixels - regular images have a different SNR for each pixel, spanning a very large range, from SNR 50+ down to below 1); b) no implemented algorithm has good enough SNR estimation.

Luckily, there is a method we can use if each submitted image comes with an SNR map (an SNR value for each pixel). To build one, next to the signal stack we must also submit a stack produced with standard deviation stacking instead of regular average stacking. This gives us the noise for each pixel. Then we can pose the following question: for a given pixel with noises n1, n2, n3, ..., nk, which coefficients c1, c2, c3, ..., ck with c1 + c2 + c3 + ... + ck = 1 minimize sqrt((c1*n1)^2 + (c2*n2)^2 + (c3*n3)^2 + ... + (ck*nk)^2)? We take the first derivative, equate it with zero, solve the resulting set of equations, and end up with the coefficients c1, c2, ..., ck that we must use for that pixel. We then repeat the same thing for every other pixel of the final stack (see the sketch below).

And we have not even touched the problem of different resolution in each image (nor the differences in QE between sensors and filters, and the fact that the signal won't be the same because of this).
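To make the weighting concrete, here is a minimal numpy sketch of the per-pixel scheme described above. Solving that minimization (Lagrange multipliers) gives c_i = (1/n_i^2) / sum_j(1/n_j^2), i.e. inverse-variance weighting; the array shapes and names are just illustrative.

```python
import numpy as np

def weighted_stack(signals, noises):
    """Combine stacks per pixel with weights that minimize the resulting noise.

    signals, noises: arrays of shape (k, H, W) - per-pixel signal and noise
    (e.g. from average and standard-deviation stacking of each submission).
    Minimizing sqrt(sum((c_i * n_i)^2)) with sum(c_i) = 1 gives
    c_i = (1/n_i^2) / sum_j(1/n_j^2), i.e. inverse-variance weighting.
    """
    inv_var = 1.0 / noises**2                     # 1 / n_i^2 per pixel
    weights = inv_var / inv_var.sum(axis=0)       # normalize so weights sum to 1
    stacked = (weights * signals).sum(axis=0)     # weighted signal
    stacked_noise = np.sqrt(((weights * noises)**2).sum(axis=0))  # combined noise
    return stacked, stacked_noise

# The SNR 10 / SNR 2 example from the post: noises 1 and 5, signal 10 in both.
sig = np.full((2, 1, 1), 10.0)
noi = np.array([1.0, 5.0]).reshape(2, 1, 1)
s, n = weighted_stack(sig, noi)
print(s.item(), n.item(), s.item() / n.item())    # SNR ~10.2, better than either input
```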
  3. I just analyzed an M51 image from Astrobin. It has a sampling rate of ~0.52"/px, yet it is oversampled by a factor of x2.75-x3 (frequency analysis suggests between 5.5px and 6px per cycle). This means the data actually captures detail at about 1.5"/px. It did not even come close to 1"/px.
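For reference, the back-of-the-envelope arithmetic behind that estimate (Nyquist sampling is 2 px per cycle, so the oversampling factor is the measured px-per-cycle divided by 2):

```python
sampling = 0.52                                  # "/px, as measured on the image
for px_per_cycle in (5.5, 6.0):
    oversampling = px_per_cycle / 2              # Nyquist is 2 px per cycle
    print(oversampling, sampling * oversampling) # ~1.43 - 1.56 "/px effective
```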
  4. My view is that that sort of imaging is only feasible with large apertures (12"+), and even then it only provides a means to approach the upper seeing resolution bound, not to cross it (as happens in planetary imaging, where exposures are in the millisecond range). I'd be really surprised to see an image with an effective resolution finer than 1"/px.
  5. Not sure if I can find any. There are some images on that website, but no explanation or annotation for them? There are several targets and a reference frame. Those pages also contain images of the targets, but I'm not sure whether these are the result of the collaboration? The download data link says it is only available to members ...
  6. LP is actually not the limiting factor. It is just a source of light pollution noise and, like other types of noise, you just need enough integration time to reach the target SNR. What does present a challenge is: a) resolution, b) the lack of efficient algorithms, implemented in available software, for stacking data from different sources. Do you know if that collaboration has actually produced any results so far, and where they can be seen?
  7. That barlow I linked is excellent optically, is designed for newtonians (it corrects coma over a larger field) and has x2.7 magnification (the actual magnification varies with distance). Optimum sampling is when the F/ratio is x4-x5 the pixel size, so in your case that is 3.75 x 4 = 15 and 3.75 x 5 = 18.75. If your scope is F/6 and you add the x2.7 barlow, you get F/16.2 - right in the middle of that range (in fact, closer to F/15, and I prefer the lower of the two numbers for several reasons).
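The same arithmetic, spelled out (values taken from the post):

```python
pixel_size = 3.75                                 # µm
f_low, f_high = 4 * pixel_size, 5 * pixel_size    # optimum F/ratio range: F/15 - F/18.75
native_f = 6
barlow = 2.7
print(f_low, f_high, native_f * barlow)           # 15.0 18.75 16.2
```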
  8. Not sure about that. I think it will take the same time - it has to do with the surface covered by APs, not with their number and size. If I'm not mistaken, the square around an AP is the "search area" - and for each point in the search area, displacement correlation against the reference image is measured and the best match is added to the "displacement mesh". This means that every pixel belonging to the AP surface will be checked once as the center of correlation (for a given AP). A rough sketch of this kind of search is below.
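For illustration only, a rough sketch of that kind of brute-force search - not the actual code of any particular stacking program, just the idea of trying every displacement in the search area and keeping the best correlation:

```python
import numpy as np

def best_displacement(reference, frame, cy, cx, ap_radius=16, search_radius=8):
    """Find the (dy, dx) that best matches the AP centered at (cy, cx).

    reference, frame: 2D arrays. The AP patch is taken from the reference and
    compared against shifted patches of the frame within the search area.
    """
    ref_patch = reference[cy - ap_radius:cy + ap_radius, cx - ap_radius:cx + ap_radius]
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            patch = frame[cy + dy - ap_radius:cy + dy + ap_radius,
                          cx + dx - ap_radius:cx + dx + ap_radius]
            # normalized cross-correlation as the similarity score
            score = np.sum((ref_patch - ref_patch.mean()) * (patch - patch.mean()))
            score /= (ref_patch.std() * patch.std() * ref_patch.size + 1e-12)
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift   # goes into the "displacement mesh" for this AP
```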
  9. That. Try to make the alignment point big enough that any shimmer is caught by it (imagine the point where the AP sits and watch how much it jumps around - all those jumps need to stay inside the box). Don't make it larger than it needs to be, as large APs can sometimes confuse the stacking software - the same-looking features can fall into two APs (think of small craters on the Moon, for example). Stacking software is not very smart - it looks for correlation between details and can't distinguish one crater from another.
  10. It's not a matter of wanting to or not - I do want to. It's more a matter of finding the time and sticking with it to get it done. To be honest, there is also the somewhat scary part of exposing myself to the public eye through videos (I'm not a very extroverted person), but I think that won't be much of a problem once I get going. I'll get started as soon as I find some spare time today - at least to check out all the ways I can record and edit the video (I must switch between the phone camera and a DSLR for video recording, since I'll be using the phone in the procedure and can't shoot video with it at that point).
  11. And scope is 8" F/6 newtonian, right? https://www.teleskop-express.de/shop/product_info.php/info/p5662_APM-Comacorrected-1-25--ED-Barlow-Element-2-7x---photo---visual.html
  12. What exactly about the color issue is confusing you? I'm going to have a go at explaining things in the simplest of terms.

First I'm going to define the problem, and it is probably best stated as: "Why do people imaging Jupiter end up with different looking images, while people imaging a Coca-Cola can don't?" Would that be the best description of the color issue?

The explanation can be split into two parts: 1. measurement and 2. interpretation.

We all measure the light that reaches us, but we use different sensors to do so. It's like trying to measure the length of London Bridge, but each of us using a different stick. We would all get a different numerical value for the length of the bridge - I'd get 300 sticks, someone else would get 250 sticks, and so on. No one is wrong, although our results differ. If we want to get the same numerical value, we must calibrate our measuring devices: determine the length of each stick, or create a mapping between the stick and a predefined length used for measurement. This is sensor-to-XYZ mapping. XYZ is the standard for measurement, and each sensor needs to be mapped to it. This gives us the same numerical values when recording.

The second part is interpretation (which can be used or not, depending on the effect we want to achieve). To describe it, I'm going to ask you to imagine what 20C feels like - in two different circumstances. Imagine you went ice swimming and immediately entered a room at 20C. How would that temperature feel to you? Warm? Cold? Now imagine you were sitting in a sauna and then entered a room at 20C. Warm? Chilly? The same numerical value from a calibrated device measured the same physical quantity, yet we experience it quite differently given our conditions. The same happens with color: our visual system adapts to the environment and we perceive the same light as having different color depending on conditions (just like the temperature above).

These things combine to create the difficulty of capturing an image and reproducing it faithfully. What we often call "color balance" can actually be split into two components: the first is color calibration and the second is the perceptual transform. When we shoot a simple image with our phone, the software does this for us automatically. The sensor is color calibrated at the factory for that particular smartphone model (or camera), and we do the perceptual transform in part by choosing a "white balance" to reflect the shooting conditions. We must just remember that "white balance" is only half of the perceptual transform - the other half is "implied" by the sRGB standard - but we can choose to perform the whole transform ourselves.

What does the perceptual transform mean? It can be explained with the temperature analogy. Imagine the following scenario: there is a device that records both the 20C temperature of that room and the fact that you were in the sauna just a minute ago. Some time in the future you want to "feel" the same temperature sensation from that recording, but this time you are sitting in a comfortable 25C environment. 20C is not much colder than 25C, but the additional information lets us calculate that you must have felt much colder going from the sauna to 20C - and this transform gives us something like 12C. So we cool a glass of water to 12C, you dip your hand in it, and you feel the same sensation as you did when going from the sauna into the 20C room. So this part is not about actual numbers but about what we feel.
Back to color - we see colors differently given ambient conditions. The perceptual transform tries to keep what we see constant, rather than the numerical values that are recorded (the physical quantity related to the light spectrum), given two sets of conditions. It works like this: given a set of conditions and an XYZ measurement, what is the XYZ triplet that will produce the same mental response in different conditions? White balance is a very simplified version of this, and it answers: given the source light temperature and XYZ, what is the XYZ triplet when viewing the image on a computer screen in a normally lit room (office ambient)? In fact, the sRGB standard defines the reference ambient viewing conditions.

More complex perception models exist that let you map between different environments, but I propose we don't use those - specifically because no one has floated in orbit around Jupiter to compare the "feeling" of the colors to those on a computer screen.

This does not mean, however, that we can't make a recording and reproduction of the color in terms of physical values. The appearance of color does not change with the distance we take the image at (excluding the atmosphere and the effects of atmospheric extinction), so we can be confident that what we record from this distance from Jupiter will be the same as a recording from a spacecraft in Jupiter's orbit. Furthermore, by following a set of standards, we can also ensure that the light coming from the computer screen creates the same stimulus as the light coming from Jupiter. That is color matching without perceptual matching (we made our ball 20C, so the ball and the room are at the same physical temperature, regardless of whether that temperature feels cold or hot to the touch).

We can even rely on our model and attempt to recreate the "feel" of the color, but in order to do that we must define the environmental factors in which the observing of Jupiter takes place. Are you in a space suit floating around, or are you in a spaceship with dim lights on, looking at Jupiter through a window, or is the inside of the spacecraft brightly lit, and what is the temperature of that ambient illumination, and so on ...
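To make the "white balance as half a perceptual transform" idea concrete, here is a minimal sketch of a chromatic adaptation in XYZ using the standard Bradford matrix (the two white points chosen here are just example values):

```python
import numpy as np

# Bradford cone response matrix (standard published values)
M_BFD = np.array([[ 0.8951,  0.2664, -0.1614],
                  [-0.7502,  1.7135,  0.0367],
                  [ 0.0389, -0.0685,  1.0296]])

def adapt_xyz(xyz, white_src, white_dst):
    """Map an XYZ triplet recorded under white_src so it 'looks the same'
    under white_dst (both whites given as XYZ triplets)."""
    rho_src = M_BFD @ white_src          # cone-like responses of the source white
    rho_dst = M_BFD @ white_dst          # and of the destination white
    scale = np.diag(rho_dst / rho_src)   # von Kries scaling of each channel
    M = np.linalg.inv(M_BFD) @ scale @ M_BFD
    return M @ np.asarray(xyz, dtype=float)

# Example: adapt from a D50-ish white to D65 (the sRGB display white)
d50 = np.array([0.9642, 1.0000, 0.8249])
d65 = np.array([0.9505, 1.0000, 1.0888])
print(adapt_xyz([0.5, 0.5, 0.5], d50, d65))
```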
  13. Not quite. It is a bit more complicated than that (but not much). Instead of using only 3 values as "RGB weights", one produces 9 values as a color calibration matrix. The 3 weights are just the main diagonal of a 3x3 matrix when used like that, with the other 6 values discarded as being 0. If we want a more accurate result, we should use all 9 (or rather the full 3x3 matrix).

Then there is the matter of color space. People think there is a single RGB space, but there is actually an infinite number of RGB spaces, each defined by the actual R, G and B primaries used. The de facto standard is the sRGB variant of the RGB color space, but we don't want to produce the result in an RGB space directly. We want instead to use the XYZ color space, as it is the root of everything color related. It is an absolute color space that is well defined and modeled on human vision; the Y component closely corresponds to luminance. sRGB, on the other hand, is a relative color space (the difference being that RGB has a white point and a black point, while XYZ does not - it relates to photon counts: non-negative values without an upper bound or a white point) that is also gamma corrected (XYZ is a linear color space).

So our first task is to derive the 9 values for the linear color conversion from camera raw data to XYZ space. We use the least squares method with a number of measured samples (say 20 or so different color patches) to do this - see the sketch below. It is a bit of a tedious task, but it can be somewhat automated with ImageJ and a spreadsheet app.

After we have XYZ data, the rest is just mathematical transforms that are well defined - either going directly from XYZ to sRGB, or doing some sort of transform in XYZ space (atmospheric correction or a perceptual color transform) and then going to the target color space. XYZ is really the basis for color management - it is the "real raw" color data.
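A minimal sketch of that least-squares step, assuming the raw R, G, B of ~20 patches and their reference XYZ values have already been measured (the data below is synthetic, just to show the fit):

```python
import numpy as np

# Reference XYZ values for N color patches (random placeholders here,
# standing in for the published values of a color checker chart)
rng = np.random.default_rng(0)
xyz = rng.uniform(0.05, 0.95, size=(20, 3))

# Simulated raw camera responses: an unknown 3x3 mixing plus a little noise
true_M = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.8, 0.1],
                   [0.0, 0.3, 0.7]])
raw = xyz @ true_M.T + rng.normal(0, 0.005, size=(20, 3))

# Least-squares fit of the 3x3 color calibration matrix: xyz ≈ raw @ CCM.T
CCM, *_ = np.linalg.lstsq(raw, xyz, rcond=None)
CCM = CCM.T                           # rows map raw (R, G, B) -> (X, Y, Z)
print(CCM)                            # should be close to np.linalg.inv(true_M)

# Applying it to an image: every pixel's raw triplet goes through the same matrix
# image_xyz = image_raw.reshape(-1, 3) @ CCM.T
```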
  14. I can certainly do that. The thing is - we don't all need to do it. People can compute the color calibration matrix for a particular camera model and then others can simply reuse it. I can post CCMs for the ASI178 and ASI185 with either the ZWO UV/IR cut or the Baader UV/IR cut filter, and for the ASI1600 with different filter combinations (Baader CCD LRGB or some other combination of absorption planetary filters - just to show that one need not use an RGB model for imaging and can still produce an RGB image). With this exercise I was actually hoping for people to confirm what I'm saying and see for themselves that it really works, rather than taking my word for it (on more than a few occasions people have had doubts about things I say, although I'm not really conveying an opinion but rather facts that can be verified from a variety of sources). If many people using this approach produce a color of an object that we all agree is the same, then it must be working, right? (And anyone can compare the image to the object itself and see whether the color is accurate or not.)
  15. Not a huge problem, of course. Certainly, if one has the means to attach the camera to a lens and owns a lens, then sure. A guide scope is another alternative, as is a finder/guider, provided there is a means to attach the camera to it. Anything that will project a more or less focused image of the phone screen onto the sensor will do (as well as that of an object).

Well, that gives me an interesting idea - how about a pinhole camera? A simple cover over the camera nose piece with a tiny hole can be used to project the phone screen image in a dark room, and this can be done at a very short distance. Picture the geometry: on the left is the phone screen, the middle vertical bar is the pinhole cover, and on the right is the sensor. The distances on either side of the pinhole go into a simple ratio equation that gives us the right phone distance to cover the sensor completely. All we need to know is the distance from pinhole to sensor, the sensor size and the phone screen size, and we can calculate the required phone distance (a small calculation is sketched below).

This is maybe the best option as a) it does not require additional optics, and b) it can be done indoors and does not require a very large distance.
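The ratio itself, with purely illustrative numbers for the distances and sizes:

```python
# Similar triangles through the pinhole:
# projected_size = screen_size * pinhole_to_sensor / screen_to_pinhole
pinhole_to_sensor = 20.0    # mm, roughly a nose-piece length (example value)
sensor_size = 7.4           # mm, long side of a small planetary sensor (example value)
screen_size = 140.0         # mm, long side of a phone screen (example value)

# Distance at which the projected screen is exactly as large as the sensor;
# any closer and the screen more than covers the sensor.
exact_cover_distance = pinhole_to_sensor * screen_size / sensor_size
print(exact_cover_distance, "mm")   # ~378 mm - comfortably indoors
```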
  16. Well, it depends on sensor size and focal length. A simple FOV calculation (sketched below) will show how much of the mobile phone will fit on the sensor. An alternative is to use a finder guider or some other sort of optics that does not have as long a focal length as the primary scope.
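A minimal version of that FOV calculation (thin-lens approximation, with example numbers only):

```python
# Linear field of view at a given distance: fov ≈ sensor_size * distance / focal_length
focal_length = 1200.0    # mm (example: an 8" F/6 newtonian)
sensor_size = 7.4        # mm (example small planetary sensor)
distance = 45_000.0      # mm, i.e. roughly 45 m

fov_at_distance = sensor_size * distance / focal_length
print(fov_at_distance, "mm")   # ~277 mm - roughly a couple of phone lengths
```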
  17. In fact, it is best if we pair the camera with the telescope that will be used for recording, as telescopes can impart a "color cast" onto the image. Not all optics provide a color-neutral image. Reflectors, for example, have a reflectivity curve that is not uniform over the whole spectrum (although the variations are small - about 1-2%).
  18. Or placing a mobile phone 40-50 m away from the telescope, not focusing all the way (so the pixels won't be resolved), and taking the average of the recorded values over some area.
  19. The problem is that those are not "material" colors but rather digital ones (they will depend on the screen used to display them). I wonder if any of these items are "universal".
  20. Well, yes, "source of truth" is major problem here as we don't have lab equipment to measure color spectra accurately. I think that we can keep things simple and assume that our mobile phones are fairly well calibrated in factory? We can use those as our calibration devices in the scope of this experiment. So idea would be - to take reference color checker pattern, display it on our cell phone and image it with our planetary camera (or cameras - I have two or even three if I use ASI1600 + filters as well) and use that to derive suitable transform. Then we can take object that we all have access to and are fairly confident it is the same color in all our items and we shoot it and post results. Color should match fairly well across all our images. Once we have that - we have a way of matching target color regardless of camera model used. That is important first step.
  21. Thinking about it, we actually need to make sure we all have calibrated equipment and can produce the same color image of a known object first. I have tried to illustrate this point on the DSO imaging side of things, where people give themselves quite a bit of creative freedom in interpreting the colors of deep sky objects. We can start with a simple challenge: let's pick an object that is readily available and has a well defined color. I think a well known brand of something can serve for that, as brands want to be recognizable and often do their best to ensure that packaging looks the same all over the world. If we google for images of a Coca-Cola can, we will see the same red in most images taken with smartphones. This is because proper color management has been implemented in smartphones (for the most part, although they keep adding more and more "beauty filters"). I bet we can replicate that with our mobile phones, but can we do it with our planetary cameras?
  22. We can always use the "simplest" baseline color as "accurate". Since color is a psycho-visual thing, we can borrow the "Einstein" approach and ask: what will all observers agree upon? The atmosphere is an Earth-bound element - we can't expect it to be relevant for a Martian or a "Lunarian", for example - so the baseline is the light as captured in space. Next, we need to avoid the subjective part (which depends on viewing conditions and on the recollection of what a color "felt" or "looked" like) and say the following: if we project the actual color spectrum from a computer screen and the "captured" one side by side into one's eye, everyone will agree that they match, or that they look the same. This is a simple color matching approach that does not try to replicate color (as a broad term) but just to match whatever a person sees under the given conditions. It is also, luckily for us, the simplest approach, as it does not involve perception space transforms (although those are being developed and are now quite accurate and useful). To do it we only really need 2 of the 3 steps above, with the third step replaced by the simple, well defined XYZ -> sRGB transform (sketched below).
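For completeness, that well defined XYZ -> sRGB step is just the standard sRGB matrix (D65 white) followed by the sRGB gamma encoding:

```python
import numpy as np

# Standard XYZ (D65) -> linear sRGB matrix from the sRGB specification
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def xyz_to_srgb(xyz):
    """Convert an XYZ triplet (scaled so white ~= 1) to 8-bit sRGB."""
    rgb_lin = XYZ_TO_SRGB @ np.asarray(xyz, dtype=float)
    rgb_lin = np.clip(rgb_lin, 0.0, 1.0)
    # sRGB transfer function (gamma encoding)
    rgb = np.where(rgb_lin <= 0.0031308,
                   12.92 * rgb_lin,
                   1.055 * rgb_lin ** (1 / 2.4) - 0.055)
    return np.round(rgb * 255).astype(int)

print(xyz_to_srgb([0.9505, 1.0000, 1.0888]))   # D65 white -> ~[255, 255, 255]
```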
  23. Auto white balance uses a fairly simplistic algorithm - in fact there are several algorithms, and none are guaranteed to produce accurate color. One approach aligns histogram peaks, which is a sort of "gray world" method - it assumes that images are on average "gray", meaning the same amount of each color component should be present in the image.

There is actually a very well defined workflow that should be used if we want to present an accurate color image of the planet, but it is somewhat complex. I can outline the steps needed, and we can then discuss them.

The first step is always the same, and that is color calibration of the sensor used. We need a common baseline, and luckily we have one: it is called the standard XYZ observer, and we can think of it as a generic sensor that we must match. Its response curves should be viewed as the QE of a standardized sensor; compare them to the QE curves of, say, an ASI678MC or ASI224. The curves are different, and our job is to derive the transformation that turns recorded raw R, G and B into XYZ.

Once we have the raw XYZ components, we need to account for atmospheric extinction. This is the reddening of the image when the object is at a lower altitude (more atmosphere); a sketch of this correction is given below.

The last step depends on what we want to show: a) the actual "color" of the planet's material, or b) the color of the planet as it is in orbit. To understand the difference, imagine holding a white paper - one that is white when you take it outside on a sunny day. Now imagine you are holding the same paper in orbit. The difference is that it is not illuminated by the same light. Just as there is atmospheric extinction for the target, there is for the Sun as well (it also looks red when viewed close to the horizon). In our solar system, objects are illuminated by a 5778K black body radiator; in daylight, however (due to all the blue light scattered by the atmosphere), we actually have a 6500K illuminant. We need to choose between the two - do we want the image of, say, Jupiter as it floats in empty space with us floating there observing it, or the look of the "Jupiter material" as if it sat in our room while we work at our computer?

You will notice that none of the above steps is actually employed by any of the photographers, and the consequence is that all the colors come out different.
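As an illustration of the extinction step, here is a sketch using the standard extinction law, corrected_flux = flux * 10^(0.4 * k * X), with X the airmass and k a per-channel extinction coefficient in magnitudes per airmass. The coefficients below are typical illustrative values for a reasonably dark site, not measurements:

```python
import numpy as np

# Illustrative broadband extinction coefficients (mag/airmass); blue-ish light
# is absorbed more than red, which is what causes the reddening.
K_R, K_G, K_B = 0.10, 0.20, 0.30

def correct_extinction(rgb_linear, altitude_deg):
    """Undo atmospheric reddening for a linear (R, G, B) pixel value."""
    # Plane-parallel airmass approximation, adequate above ~20 deg altitude
    airmass = 1.0 / np.sin(np.radians(altitude_deg))
    k = np.array([K_R, K_G, K_B])
    # Extinction dims each channel by 10^(-0.4*k*X); multiply by the inverse
    return np.asarray(rgb_linear) * 10 ** (0.4 * k * airmass)

# Example: a pixel recorded with the planet at 35 degrees altitude
print(correct_extinction([0.50, 0.40, 0.30], altitude_deg=35))
```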