Everything posted by vlaiv

  1. I don't think it particularly matters, but I can't be sure. It probably depends on the polar alignment method used. There is a method that does not require precise alignment of the scope to the RA axis - it only examines field rotation in a given scope. After all, it's about where the scope points as it rotates around the RA axis and what would be expected if the RA axis were properly aligned to the NCP. On the other hand, a different polar alignment routine does require the optics to be aligned with the RA axis - same as with PoleMaster and the like - these observe a single field around Polaris and how it rotates when the RA axis rotates.
  2. Next to issues with amp glow (which is not really amp glow in CMOS) there is often an issue with bias levels. I think it has something to do with the way CMOS cameras work / fast readout. There appear to be two different readout modes that one can't control - at least that is what I've read (it might not be related to this issue after all) - a short one, less than 1 s, where timing is kept in the sensor itself, and a long one, where the application controls exposure length. In any case, bias levels can differ between these two regimes (or in general - bias level can depend on exposure length). Not all cameras show this behavior, but I had such an issue with my ASI1600 - a bias sub had a larger mean ADU than a 10 s dark sub - and that of course should not be the case.
     Another thing that can happen, though I doubt it happens with serious CMOS cameras (I only had it with guider cams and small sensors), is automatic bias offset. The sensor does internal offset calibration on power up. That means the bias is not stable across power cycles and you need to always take bias / darks at the same time as (or rather after) lights / flats. No reusing of subs for later.
     Because of all of those issues, one should not scale darks with CMOS unless they determine that they can do it properly with their sensor data (the best way would of course be to take two sets of darks and one set of bias, try dark/dark calibration and examine the result for any DC offset and patterns - it should be pure noise with 0 mean ADU). One way to do dark scaling is to use two sets of darks of different exposure instead of dark and bias. For example, a 2 min dark and a 1 min dark can be used to get: bias as 2 * 1_min - 2_min, and dark current only as 2_min - 1_min. After that it is easy to do scaling - see the sketch below.
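     To make that arithmetic concrete, here is a minimal numpy sketch of the two-dark-sets approach. It assumes master_dark_1min and master_dark_2min are already-stacked (average) master darks loaded as float arrays (the names are hypothetical), and that dark current scales linearly with exposure time.

```python
import numpy as np

def split_bias_and_dark(master_dark_1min, master_dark_2min):
    """Derive a synthetic bias and per-minute dark current from two master darks."""
    bias = 2.0 * master_dark_1min - master_dark_2min       # 2*(b + d) - (b + 2d) = b
    dark_per_min = master_dark_2min - master_dark_1min     # (b + 2d) - (b + d) = d
    return bias, dark_per_min

def scaled_master_dark(bias, dark_per_min, exposure_min):
    """Build a master dark for an arbitrary exposure length."""
    return bias + dark_per_min * exposure_min

# usage (hypothetical data):
# bias, d1 = split_bias_and_dark(md_1min, md_2min)
# md_5min = scaled_master_dark(bias, d1, 5.0)
```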
  3. I just love the number of handles it has at the front - for one to brace themselves when having a first look. On the topic of bino vs mono, as stated - there is no perceived brightness increase, but there is a benefit to it. In the same way that observing a target for longer makes us see more, so does binoviewing. The eye/brain system does not work like a camera sensor - there is no single exposure - it is a continuous feed and the brain does all sorts of wonderful things to it. It has a noise filter - you never seem to see the photon shot noise, although there should be shot noise visible at the light levels the eye is able to detect (just a handful of photons can be detected, I believe the threshold is something like 6-7). Our brain sort of forms a "residual image" - it remembers what it saw just a bit ago, and the image sort of stacks in our brain - this is why prolonged observing helps us see more: once we see one detail and "know what to expect" at that place, it's easier to see it. Binoviewing does a similar thing - our brain can use noise filtering more effectively - if a detail comes from one eye only, maybe it's noise, but if both eyes detect it - it's probably signal.
     Btw, I managed to see shot noise once; for anyone interested - here is how it happened. I was sitting in a relatively lit up room (daylight, no artificial light) - but there were some shades on the windows so it was not overly bright. It was a bright sunny day outside, so plenty of light there. I held a Ha filter at some distance from my eye (not very close but not at arm's length either, about 20-30 cm away) and looked through it - I was looking at the bright outside, through the open terrace door. The image through the filter was distinctly deep red, but it behaved like an old TV not tuned to any particular channel - just that white noise. In this case it was "red/black" noise over the image, changing rapidly (really much like TV noise). I think it happened because of the lack of light coming from that direction while surrounding light levels were high enough to "turn off" the brain's denoising.
  4. Just a small update. I'm now at 95% confidence level that I'll go with the Mak 102. This is due to a number of reasons:
     - It turns out that I'll probably need to spend a bit more money on the rig than I previously thought. The initial idea was to purchase the AzGti + scope package, but it seems that scopes coming in this package don't have collimation screws, unlike regular OTA versions - at least the Mak 102. This can be seen on images of this scope, and it's also confirmed by a Skywatcher rep over on CN in one thread. I would not want to get myself into a situation where I can't collimate my scope and need to send it back. There is also a concern about the quality of the scope if it's not the same as the regular Skymax OTA. I also figured that I want a bit more accessories with the scope - ES 82 6.7mm and a quality 1.25" diagonal.
     - Many people report that the tripod coming with this package is rather poor in stability (probably to be expected from an entry level scope/package), so purchasing only the AzGti head and an adapter for an EQ5 class steel tripod seems like the better option. I have a spare tripod for this use (moved my HEQ5 onto a Berlebach Planet).
     - I've come across an image of the back side of the Mak 102 that is good enough to do a rough measurement of the rear baffle tube diameter. According to this rough measurement - it is about 23 mm in diameter. That would mean there is some vignetting on a 32mm plossl (field stop of 26-27mm), but it should not be more than 50% or so at the edges. It also means that the Mak 102 can provide a larger TFOV than the Mak 127 (not by much, but still a bit larger). This is the Orion version, but I guess the SW one can't be far off (if not exactly the same).
     - I don't think stray light will be a problem with these Maks even at very close focus, according to these images: this is an image of the Mak 127 - it looks like light reaches about halfway down the baffle tube, so I think placing the focal plane roughly 40mm away from the end of the baffle tube should not be impacted by stray light.
  5. What sort of large dob do you have? Aperture rules in planetary imaging (given decent optical quality). In planetary AP, the mount is not as important as in regular AP (where its importance can't be overstated - it really needs to be the best you can afford). The cheapest way into very good planetary imaging would be adding tracking to an already existing dob, if it is large and of decent optical quality. If it is a goto dob - you are pretty much set; if not - have a look at a suitable equatorial platform. That combination won't be as "plug & play" as a goto mount - there will be a learning curve to get the scope onto target, especially with a small chip - maybe a flip mirror or similar could help there - but that combination will get the job done and you will get good results - just use a good planetary camera and an appropriate barlow / telecentric lens.
     Depending on your budget, the next option for planetary - both viewing and imaging - would be the largest compact scope you can afford mounted on an AZEQ6 - something like a C9.25 or maybe even a 10" Classical Cassegrain.
     For DSO imaging, to start with you want something in the range of 500-700mm focal length, and there are quite a range of options there:
     - for rather painless imaging - a refractor + suitable field flattener / reducer will do the trick
     - a cheap option would be a newtonian + coma corrector (130PDS / 150PDS class scope). You can go with an 8" F/4 scope as well - but as you go faster, collimation, tube stiffness and rigidity, and the quality of the focuser become increasingly important, so you really step outside the cheap range there
     - Maksutov newtonians are worth a look as well in this category
  6. Very different requirements, so it is very hard to package all of that together. It depends on how serious you are about each requirement. If you are looking at a bit more serious planetary work, an 8" scope is the starting aperture. The problem with that is it won't provide a wide enough field for the larger nebulae (most galaxies and planetaries will be fine at that focal length). On the other hand, if planets are something to try out but you have no great expectations - you can limit yourself to a 5-6 inch scope.
     In the first case, there are really only a few choices:
     1. EdgeHD 8"
     2. Classical Cassegrain - again 8" - like this one: https://www.teleskop-express.de/shop/product_info.php/info/p10753_TS-Optics-8--f-12-Cassegrain-telescope-203-2436-mm-OTA.html
     3. 8" F/6 Newtonian (maybe even F/5, but that one will have a larger secondary if optimized for photo work, collimation will be more demanding for planets, and a higher power barlow / telecentric is required to reach critical sampling)
     4. 8" RC scope (mind you, while a better option for DSO photography - it's not the best for planetary work and might not suit visual as much as the other scopes - large central obstruction).
     While you can go with a larger scope - I would not recommend that unless you are rather serious about planetary work and are ready to completely neglect wider field DSO work.
     In the second case - there are many more options:
     1. A decent maksutov newtonian (Intes Micro), or even their photo maksutov cassegrain: https://www.teleskop-express.de/shop/product_info.php/info/p3316_Intes-Mikro-Alter-M606---Photo-Maksutov-Cassegrain-152-912mm.html There are other options, like the SW 190 (Mak-Newt) and the Explore Scientific Comet Hunter 150mm Mak-Newt
     2. A decent apo triplet in the 130mm class
     3. An F/5 newtonian of course - something like the 150PDS
     You should consider the cameras you plan to use for each role, and your preferred targets, when making a choice between scopes.
  7. This is very useful info, thanks for that. The "problem" is that I would not use a small sensor at prime focus (too high a sampling rate, too small a TFOV), but rather in a sort of afocal arrangement. An eyepiece + lens would be used between scope and camera. This acts as a sort of focal reducer, depending on the focal lengths of the eyepiece and lens (you can actually choose the reduction factor). For example, the ASI178 has a ~8.9mm diagonal. With a 32mm plossl and a 12mm lens - it would behave like a ~23.7mm diagonal sensor (see the quick calculation below) - and vignetting there would certainly show (even the field stop of the EP could show, depending on the AFOV of the eyepiece and the angle of the lens). The fact that there is slight vignetting on a 28.4mm diagonal sensor suggests that the fully illuminated field has a smaller diameter, but that the scope can illuminate up to 28mm of field. I would certainly both use and recommend flats in this role. The 32mm plossl (most likely to be used as the projection EP) has a 27mm field stop if I'm not mistaken.
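     A quick sanity check of that equivalent-diagonal figure, just restating the ratio in code (the 8.9 mm, 32 mm and 12 mm values are the ones quoted above):

```python
def equivalent_diagonal(sensor_diag_mm, eyepiece_fl_mm, lens_fl_mm):
    # the eyepiece + lens relay reduces the effective focal length by lens/eyepiece,
    # so the sensor spans the native focal plane as if it were larger by eyepiece/lens
    return sensor_diag_mm * eyepiece_fl_mm / lens_fl_mm

print(equivalent_diagonal(8.9, 32.0, 12.0))   # ~23.7 mm, as stated above
```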
  8. The test is fairly simple, and here is a diagram "of my concerns": these sorts of scopes are made to be used with a diagonal mirror, and although they have an extensive range of focus, I suppose that optimal focus is somewhere around 100mm or so behind the end of the scope (the end of that receptacle with the T2 thread - I would otherwise call it the end of the focuser, but the focuser here is internal). In the configuration I plan to use for EEVA, the eyepiece will be quite a bit forward - it will be inserted in the back tube directly (without a diagonal). This is primarily because of weight / stiffness / balance issues - the closer to the tube the better (there will be an EP projection adapter, a small lens and a camera hanging off the eyepiece). My concern is that if the focal plane is put too close to the OTA, there could be a light leak - light going directly past the secondary, down the main mirror central hole and onto the sensor. This would reduce contrast quite a bit, and most people doing that sort of thing probably live in LP areas (it's not a good thing even under dark skies).
     Testing for this is quite easy - just point the scope at the sky (of course be careful about the Sun), place your eye at the end of the back tube (you can't push your eyeball inside obviously, but place it fairly close - where the eyepiece shoulder would be) and move it to the side so that the secondary is not centered but off to the side (bottom part of the image). If you see sky at the edge - bad; if not, and you only see the secondary - good.
     I have a couple of scopes that I could use to test this sort of eyepiece projection thing, but my idea was to do it on a scope that is not usually recommended for EEVA because it is too slow, and a scope that could fit the above role - being a beginner scope that can pretty much do it all - visual and "taking some images" (in the appropriate configuration). Of course it is also an excuse to get another scope - one that I would use in a sort of grab & go / lunar scope role. I planned to use the Evostar 102 F/10 (not ED) in that role, but it does not quite fit it the way I expected - too large/heavy and a bit shaky on the AZ-4 mount, so the Evostar 102 will be for other things (solar work with a herschel wedge and solar Ha with a combo Quark one day, maybe a bit of imaging - again trying to see if I can make a usable imaging scope out of it with some tweaks - for those people who can't afford APO scopes for that). For the above reasons - I'm leaning towards the Mak 102 - it's lighter, more affordable and probably better suited to the AzGti mount.
     One of the more important things would be what sort of TFOV each of these provides - I want the one with the larger TFOV. Normally the one with the shorter FL will provide the larger TFOV, but in this case I'm not sure. Both scopes, according to reports online, have a smaller back opening than is needed to fully illuminate a 1.25" eyepiece. Given that, the fact that this exit hole might be smaller on the Mak 102, and that the F/ratio of the Mak 102 is a bit slower - I can't really tell which scope would provide the larger TFOV. That is one of the reasons I'm interested in the illuminated field of both scopes.
  9. Don't worry about not checking straight away, I'm in no hurry with this. If you manage to find the time and it's not too much trouble for you - just take a look in daylight; as long as you can't see sky past the secondary - that should be ok. An odd reflection off the baffle tube is fine - that is what it is there for.
  10. At one point I did actually consider the Bresser Maks, but given the pro/con lists above you can see that they're not quite suitable for my needs:
     - It's slightly heavier than the Mak 127 (which is already where I don't feel comfortable on a mount quoted at 5kg carrying capacity)
     - It has a very long focal length - something I would not fancy for visual, nor for the primary use - EEVA (small TFOV capability)
     I did look at the 102 version from Bresser - but there is also the issue of getting one. FLO and TS don't stock 102mm OTAs, and Bresser does not ship to my country for some reason. If I were looking for a scope that is primarily visual and meant for the Moon and planets - I would probably go with a classical Cassegrain from TS (either 6" or 8"). The scope I'm after now has a different primary role.
  11. I would like to know the following, if anyone can provide the info:
     - How "sturdy" is the mount? It's rated at 5kg, but I'm guessing that is a bit optimistic. What's the mechanical finish like?
     - Would a counterweight shaft + CW help in AZ mode for a slightly heavier scope - like a 3-3.5 kg OTA + accessories?
     - I like that it is a full goto mount, but I will probably not use it as such for visual - can it work without extensive star alignment? Can I just set it level, point it roughly north and tell it: "there you go, you are all set now, I just need fairly decent tracking from you and not precise gotos"?
     - I have the hand controller from my HEQ5 Syntrek (not Synscan) - once I do the initial setup via the phone app, would I be able to use that just to adjust target position and pan around at low speed - like "cruising" the terminator on the Moon or such?
     - Have you tried it in EQ mode, and if so - what sort of wedge would be recommended for it? I've seen a couple of models and they differ in price quite a bit - can I get by with a cheap one or should I invest more there? I'll use it in EQ mode very sparsely and only to do some EEVA.
     - Is there ASCOM software for this mount that presents it as a proper mount via WiFi? What sort of connection would I need if I choose not to use WiFi and go wired?
     Any help with the above much appreciated.
  12. Yes - if it does what it "says" it does - it checks the mean ADU level at the linear stage (prior to any stretch).
     One issue that I have with the attached master flat is that it is still in 16-bit format. That is a "no-no" in my book - it should be 32-bit float. 16 bits is simply not enough to preserve all the precision needed. Don't know why that is, but all calibration masters (and intermediary masters like the master flat dark) should be in 32-bit precision. That way you won't add any noise due to bit precision (and that will happen pretty quickly with a 16-bit format if you go above 16 calibration subs - and most people use, and should use, more than that). There is a small sketch of this below.
     As for the flat master itself - don't know, it looks rather "normal". It might be a bit noisier than I'm used to, but looks fine otherwise. Here is a comparison between your master flat and my master flat (mine is 32-bit, 256 subs stacked with 256 flat darks - at unity gain):
     Yours (1:1 zoom and crop somewhere near center):
     Mine (same zoom and roughly somewhere near center):
     Mine looks less noisy / smoother, but it shows a certain pattern that is present on my ASI1600 - it is much better seen when one zooms in more: it is a sort of checkerboard pattern of pixel sensitivities - nothing to worry about, as the differences between pixels are at most about 2%, and it calibrates lights fine. This is a manufacturing "artifact" - but you need a lot of exposures to "expose" that variation - otherwise it stays within the noise.
     By the way - if you shoot your flats at high gain - use a bit more than 32 subs for them - it will reduce noise, or preferably shoot at normal / unity gain.
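     As a rough illustration of the 16-bit point, here is a toy numpy sketch (the flat level and sub count are made up): averaging integer subs produces fractional ADU values, and writing the master back as 16-bit integers rounds those away, while a 32-bit float master keeps them.

```python
import numpy as np

rng = np.random.default_rng(1)
flat_level = 25000.0                                 # hypothetical flat level in ADU
subs = rng.poisson(flat_level, size=(64, 50_000))    # 64 simulated flat subs

master_f32 = subs.mean(axis=0).astype(np.float32)    # keeps sub-ADU precision
master_u16 = np.rint(master_f32).astype(np.uint16)   # 16-bit master: whole ADU only

print("noise left after stacking (ADU):", (master_f32 - flat_level).std())
print("error added by 16-bit rounding (ADU):", (master_f32 - master_u16).std())
```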
  13. Christmas presents time ... Nothing like "surprising" oneself with another piece of astro kit, right? The thread title pretty much says it all - I need some swaying one way or another, but as mentioned - there is a twist to this story. It will be an AZGti mounted scope and its primary use is as a testing rig (or rather a purchasing excuse?). I'm strongly motivated to write EEVA software and feel that one of these might be the perfect platform to test it on. In part this decision is driven by the fact that many people look for an affordable scope that will do it all - some planetary viewing, some planetary imaging, some DSO viewing, some "DSO imaging" - or as they frequently put it - they "would like to record what they saw in an image". Many will say that such a slow scope is not suited for EEVA, but I intend to trial it in a specific "configuration" - eyepiece + suitable lens + small sensor CMOS camera (which can be used for both planetary and EEVA). I concluded that it is possibly a good combination for this and would need to try it out. The secondary role of the scope will of course be some quick viewing, probably mostly Moon gazing and the occasional quick peek at DSOs (those within reach in my LP for the time being). A sort of grab & go setup that does not require extensive preparation/setup time and is suitable for "off the balcony" use.
     I'll list my pros/cons so far and some technical questions people might be able to answer (owners and former owners of said scopes).
     102 pros:
     - I really like the compact format and light weight of this scope
     - Slightly slower - which means easier on the EPs and probably fewer optical issues for a Mak (Gregory type). Smaller secondary? Does this impact baffling / any stray light issues?
     - Potentially the choice for more people looking for the above mentioned type of scope due to slightly lower cost (not an issue for me, but could be for some)
     - Focal length of 1300mm - I think I would feel less "boxed in" because it is very close to the FL of the scope I regularly use to observe - a 200/1200 dob
     - The latest version has a "groove" at the front of the scope (probably to fit the cover) - which could be used to fit a dew / stray light shield as well?
     102 cons:
     - I have a slight feeling that mechanically this version is not the same as the 127 - designed more for the "masses" and aimed at being affordable. At some point the 127 was marketed as "Black Diamond" if I'm not mistaken, along with the 150 and 180 versions - which gave the impression that those three were higher class instruments (not only by aperture and price) than the 90 and 102, which were sort of beginner scopes?
     - For visual it will have lower resolution and light grasp vs the 127
     127 pros:
     - Obviously aperture, both in terms of planetary performance and for DSO / EEVA
     - Possibly a mechanically better instrument?
     - Possibly better optically?
     127 cons: (should I even write those, or can we just "invert" the pros for the 102?)
     - Bulkier / heavier - mind you, I plan to use it on the AZGti - with eyepiece, EP projection adapter, cooled ASI camera (178mmc) and small lens - I have a sense it will all build up towards 4kg+ rather quickly. But there is also the issue of size - I just love how the 102 looks as a small (but capable) instrument.
     - Cool down issues?
     - I feel that 1500mm FL will be a bit too much for a grab & go / general purpose scope
     That would be about it I think - maybe I'll remember something later as well. I'm also interested in a couple of technical characteristics of said scopes, so I would be grateful to owners if they can provide such data:
     - For both scopes, do you know what the fully illuminated circle is (I would need that to calculate the usable TFOV for EEVA - see the rough numbers below - and the proper combination of eyepiece / lens)? I always assumed that both scopes would illuminate a 1.25" field (so up to about 28-30mm) but I'm not quite sure - this might not be full illumination and there could be some vignetting. I've read somewhere that the back openings are quite a bit smaller than this?
     - I'm not interested in the working aperture of the scopes - I know that it's a bit less than one would believe from the scope designation (102mm and 127mm), but it does not really matter - I accept that as a design limitation (the need for a larger secondary and some baffling inside). However, I would like to know whether there is stray light in the field at the "close" focus position - or rather, when one sights down the tube on the EP side - can you see past / to the side of the central obstruction, and if so - at what distance from the T2 thread (the end of the scope in the literal sense)? If you check this - move your eye to the edge rather than keeping it on axis to check for a gap.
     - Of course, in the context of what is written above - any difference that you feel could sway me one way or another. Thanks
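     For reference, here are the nominal numbers I'm comparing, using the usual field-stop formula TFOV(deg) ≈ 57.3 * field_stop / focal_length and assuming (which is exactly what's in question) that both scopes fully illuminate the 27 mm field stop of a 32 mm plossl:

```python
def tfov_deg(field_stop_mm, focal_length_mm):
    return 57.3 * field_stop_mm / focal_length_mm

print("Mak 102 (1300 mm):", round(tfov_deg(27, 1300), 2), "deg")   # ~1.19 deg
print("Mak 127 (1500 mm):", round(tfov_deg(27, 1500), 2), "deg")   # ~1.03 deg
```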
  14. M1: Nice image, although I'm not a fan of artificial diffraction spikes
  15. I did not think of that, and you might be right, but I never said that the measurement is solely dependent on the kit used - in fact, you will be much better off throwing time at the problem rather than money
  16. Again, something similar happens with an sRGB display if you try to view an image that has been recorded in a wider gamut - colors that the monitor can't display will be "clipped" to colors that it can display. In that sense it is the same, isn't it? In any case - I'll accept that gamut relates to reproduction rather than sensors and will look up metameric failure, so thanks for that.
  17. Correct. As far as I know - there was a version of the drivers with the offset locked at 50, but now there seems to be a version with advanced settings where you can again adjust the offset. This happens with the PRO version of the cameras. My ASI1600 has the offset setting enabled in the drivers all the time. I might be wrong about the above though.
     Masterflat should be the one with the master flat dark applied - not entirely sure, but that is the usual name for the final flat used for calibration - masterflat. You can check that by loading a single flat and measuring its mean ADU value, and doing the same with the master flat - if they are the same, then the flat dark was not removed; if they differ (the mean value of the master is a bit lower) then the master flat dark was removed. There is a small sketch of this check below.
     If you are talking about the files that you uploaded at the beginning of this thread - no, they do not suffer from such artifacts, and any artifact that you are seeing is due to the software used to view them. Some software will use bilinear interpolation when rescaling images for viewing at a smaller size (zoomed out) - try looking at 100%, 1:1 pixel ratio (one screen pixel - one image pixel) - there should be no artifacts present. Bilinear interpolation can create such bands - those are in fact not signal bands, but bands of different SNR (slightly lower noise in some regions due to the bilinear interpolation) - and they will only show if you stretch so that the noise is clearly visible.
     Here is what your light looks like to me: that is the left edge of the image stretched to insanity. Btw, I can also recreate the effect which you are seeing by resampling to quarter size (about fit to screen) with bilinear resampling. Here is what the stretched histogram version looks like (gimp levels / curves): again no sign of the artifacts that you mention. It's full of stars
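     A minimal sketch of that mean-ADU check, assuming FITS files and astropy (the file names here are just placeholders):

```python
import numpy as np
from astropy.io import fits

single_flat = fits.getdata("flat_001.fits").astype(np.float64)
master_flat = fits.getdata("master_flat.fits").astype(np.float64)

print("single flat mean ADU:", single_flat.mean())
print("master flat mean ADU:", master_flat.mean())
# means roughly equal      -> flat darks were NOT subtracted from the master
# master noticeably lower  -> master flat dark was applied
```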
  18. I'm still having issues with the definition of the word gamut. Here is the relevant part of the wiki article: I see no reason why we could not adopt the above definition to include a device capable of recording - in the same sense as the set of colors found within an image.
  19. I have a couple of issues with that article. It goes on to state that, for example, CIE XYZ is not a good color space for a couple of reasons:
     - because it is not perceptually uniform
     - because simple linear transforms between that color space and some other color spaces lead to errors
     I had the impression that for the above reasons (both known things) the author deems the CIE XYZ color space somehow wrong or inferior. There is another issue that I have - for example, using the RGB matching functions as an example without explaining how the experiment was conducted (using three pure wavelength primaries and a reflective color arrangement). Here is a quote from the wiki article on CIE XYZ describing that: the three primaries used in the test, shown on the xy chromaticity diagram - any color that can be produced by additive mixture of the primaries lies inside the triangle. In any case, the issues that exist with color reproduction can't be attributed to some inferiority of the CIE XYZ color space.
  20. I had a sneaking suspicion that I don't fully understand the term gamut. In some sense I do have a similar understanding of it to yours - it is a subset of all possible chromaticities (lightness is not important here as it is a function of intensity, at least I think so). It is useful to think in relative terms, so we can say that human vision is the full gamut and any subset of that is a narrower/smaller gamut than the full gamut?
     In any case, I fail to see how a sensor does not have the property of being capable of recording the whole or part of the gamut, or in fact of distinguishing what the human eye is capable of distinguishing. Not all sensor/filter combinations are alike - some have "full gamut" while others do not, and we can pose the question: how large is the gamut of a certain sensor/filter combination?
     Let me give you an example. You have a regular tri band filter/sensor combination in the graphs above - look at the sensor QE curve and each filter transmission curve to get an idea of the combined curves. But one does not need to use such filters; one can use the following filters instead: these are Astronomik filters - they have some overlap and are more like the human eye response functions - my guess is that such a filter/sensor combination has a larger gamut. The same is true for OSC sensors with response curves like this: the above diagram can clearly identify each wavelength by a unique ratio of raw "RGB" components - I suspect that it covers the whole gamut.
     Another example, of an even smaller gamut: take regular RGB filters, any sensor, and add a light pollution filter to it - something like a CLS or IDAS LPS P2. Those filters block light completely in some ranges - and the filter/sensor combination won't even be able to record those wavelengths, let alone make a distinction between them.
     I think it would be a good idea to find a way to characterize a sensor/filter combination - ones that can produce all the colors a computer screen is capable of showing (sRGB gamut) and those that can't.
  21. Dark flats are used to create the master flat. You take a set of flats and stack those (average stack method). You take a set of dark flats and stack those (again average stack method). Subtract the two and that will give you the master flat (flat stack - flat dark stack); see the sketch below. Again, what seems to be the issue with your flats? I've downloaded the two attached files but can't really see what is wrong with them.
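     In numpy terms, the whole procedure is just this (a minimal sketch - 'flats' and 'dark_flats' are assumed to be lists of 2-D arrays already loaded from your files):

```python
import numpy as np

def make_master_flat(flats, dark_flats):
    flat_stack = np.mean(flats, axis=0)             # average stack of flats
    flat_dark_stack = np.mean(dark_flats, axis=0)   # average stack of dark flats
    return (flat_stack - flat_dark_stack).astype(np.float32)   # master flat, 32-bit float
```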
  22. For anyone interested in this topic, it turned out that the above is fairly easy to do (at least I think so at this point - I don't yet have a full mathematical proof).
     If we observe any two wavelengths w1 and w2 with arbitrary intensities, it turns out that in XYZ space this forms a plane:
     X = a * w1 + b * w2
     Y = c * w1 + d * w2
     Z = e * w1 + f * w2
     where w1 and w2 are the parameters of this parametric form of the plane (being the intensities of the respective wavelengths) and a,b; c,d; e,f are the values of the X, Y and Z color matching functions at the particular wavelengths w1 and w2. After we transform this plane with the XYZ -> xyY transform, it turns into a line. I still don't have a mathematical proof of that, but I did check it numerically (graph plotting), and it is in line with this section of the wiki article on CIE XYZ space and the xy chromaticity diagram: source: https://en.wikipedia.org/wiki/CIE_1931_color_space The projection of that line onto xy will also be a line.
     In the last step, if we derive a matrix transform (3x3 matrix) between camera raw space and XYZ - it will preserve planes (matrices represent linear transforms), so any two wavelengths on our sensor with their intensities will also form a line once transformed into the xy chromaticity diagram.
     From this it is easy to see that all one needs to do is take each wavelength (for example 400-700nm with a step of 1nm), calculate the raw triplet from the sensor + filter graphs, transform by the transform matrix to get the matching XYZ, and then derive xy via the known XYZ -> xyY transform. After plotting that on the xy chromaticity diagram, the gamut of the sensor/filter combination will be the convex shape defined by those points. (Mind you, this shape will depend on the chosen transform matrix, so choosing that matrix is another interesting topic, as is the transform error.) There is a small sketch of the procedure below.
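     Here is a rough numpy sketch of that procedure. The response curves below are made-up Gaussians standing in for real data - swap in the tabulated CIE color matching functions and your measured sensor QE x filter transmission curves, and replace M with whatever raw -> XYZ matrix you derive (the identity here is only a placeholder):

```python
import numpy as np

wl = np.arange(400, 701, 1.0)                      # wavelengths in nm, 1 nm step

def gauss(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# made-up stand-ins for the real curves:
cmf = np.stack([gauss(600, 40) + 0.35 * gauss(445, 25),   # rough x_bar(lambda)
                gauss(555, 45),                            # rough y_bar(lambda)
                1.7 * gauss(450, 25)], axis=1)             # rough z_bar(lambda)
sensor_rgb = np.stack([gauss(620, 50),                     # raw R = QE x red filter
                       gauss(540, 50),                     # raw G
                       gauss(470, 40)], axis=1)            # raw B
M = np.eye(3)                                              # placeholder raw -> XYZ matrix

def to_xy(XYZ):
    s = XYZ.sum(axis=-1, keepdims=True)
    return XYZ[..., :2] / s                                # (x, y) = (X, Y) / (X + Y + Z)

# one unit-intensity monochromatic line per wavelength: raw triplet -> XYZ -> xy
sensor_locus_xy = to_xy(sensor_rgb @ M.T)   # sensor gamut = convex hull of these points
eye_locus_xy = to_xy(cmf)                   # spectral locus of the CMFs, for comparison
```

     Plotting sensor_locus_xy against eye_locus_xy on the chromaticity diagram then shows directly how much of the full gamut a given sensor/filter combination spans, for a given choice of M.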
  23. Not sure what this relates to - Bose-Einstein statistics, or the general idea that there is a signal of a certain magnitude and that measurement yields a numerical value polluted by noise? If we are talking about signal and measurement, and if we have a mathematical model that describes that process, why do you think it is not useful to think about it as if there were a definite signal with a definite photon rate, but it is the measurement that can't be done with absolute precision?
     It's a bit like thinking about probability in the propensity sense. We can say that there is a definite probability that a die will land on a certain number - that being 1/6 - although no finite number of measurements will yield that value with certainty (in the sense that there is no point at which adding more measurements can't disturb it / make it more precise). We don't talk about that probability being dependent on the number of measurements - we accept it as a maybe fictional but, for our purposes, very real tendency with a definite and precise numerical value associated with it.
  24. I had a feeling that this sort of thinking would cause much confusion. The idea was to try to introduce a sort of mathematical thinking rather than anecdotal when it comes to what the signal is, what measurement gives, and above all the importance of SNR and how it relates to imaging.
     You are right - I stand corrected. Excluding some subs with a measured 0 value will not change the image - it will only change the SNR of the resulting stack - same as if we choose to discard any sub regardless of its measured value. It will however skew our measurement if we want to determine the absolute value (not important for the image, but important if one tries to measure photon flux). The average of 0,0,0,1,1 = 2/5 is not the same as the average of 1,1 = 2/2 = 1. Discarding some measurements just because they are zero introduces a bias error.
     I was referring to a statement that I've heard a couple of times before, and it goes: "You can't record something that is fainter than the background noise." That is in fact not true. Suppose that we have some galaxy that is faint and the signal from it is less than the read noise of the camera. In fact it is so faint that there are some subs that contain no photons from that galaxy. Regardless of the above, the signal of the galaxy is recorded in each frame - with a certain error. Stack enough such subs and you will be able to get the noise down so that the SNR is enough to see that galaxy. It is not out of reach by virtue of being below the noise in a single sub. Let's say that the SNR in a single sub is something like 0.1 on average - the galaxy is 10 times fainter than the noise in that sub. You need something like 3-5 SNR to be able to detect the galaxy. Stack 10000 such subs and the resulting SNR for that galaxy will be 0.1 * sqrt(10000) = 10 - you will be able to see it clearly. (There is a small simulation of this below.)
     No one said that you can take a 1 minute sub, multiply it by 10 and get a 10 minute sub. You can take the measured signal from a 1 minute sub, multiply it by 10 and you will get the measured signal of a 10 minute sub (roughly - there will be uncertainty in the measured value). The problem is that there is noise associated with each measurement, and when you multiply your 1 minute sub - you multiply the measured values - which are not pure, but polluted with measurement noise - so you end up with the same measured signal value but a different noise value, both in its random realization and in its magnitude. A 100 minute sub produces a smoother result not because the signal suddenly became stronger just because we chose to do the measurement differently. The noise level is associated with the measurement, and the way we measure changes the noise level - this is why the result is a smoother image. For image intensity - you can choose to add measured values together or you can choose to multiply a single sub by some constant. Both will change the intensity of the image. What we are interested in is SNR. That is why we stack - not to increase image intensity but to boost SNR.
     Ok, I'm so confused with this one. Most of the light that we are dealing with when taking images of astronomical objects is in fact thermal light (star light), but according to sources, the Bose-Einstein distribution has an SNR of n_mean / sqrt(n_mean + n_mean^2) = sqrt(n_mean / (n_mean + 1)). This means that it is always less than 1, regardless of how much exposure time we use?
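     A small simulation of that "galaxy fainter than the read noise" example (the numbers are made up to match the SNR 0.1 case above: per-sub signal 0.5 e-, read noise 5 e-, 10000 subs):

```python
import numpy as np

rng = np.random.default_rng(2)
signal, read_noise, n_subs = 0.5, 5.0, 10_000

# each sub records the faint signal plus shot noise plus read noise;
# plenty of individual subs measure 0 photons from the "galaxy"
subs = rng.poisson(signal, n_subs) + rng.normal(0.0, read_noise, n_subs)

print("stacked value:", subs.mean())                          # converges on 0.5
print("expected stack noise:", read_noise / np.sqrt(n_subs))  # ~0.05 -> SNR ~ 10
```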