Everything posted by vlaiv

  1. Why use approximate methods when we actually have the variance for each pixel, given that we have a set of images that we are trying to stack? In fact, imagine you have 10 subs with an unknown signal, each polluted with Gaussian type noise. How are you going to estimate the magnitude of that Gaussian noise? (This is actually quite useful as a way of determining the read noise of a sensor, for example.) Well, you stack 9 subs using the average method - that keeps the signal the same and reduces the noise to 1/3 of its original value. Now you subtract the remaining sub from that stack - the signals have the same magnitude and cancel out, and you are left with the 1/3 noise and the full noise adding in quadrature. The result will be sqrt(1/9 + 9/9) = sqrt(10)/3. You measure the standard deviation of the resulting image and divide it by sqrt(10)/3 in order to get the noise in each of these subs. If you have a large number of subs where the noise differs a bit, and you average them out and subtract the remaining sub, you'll still get a value very close to the noise in that particular sub. This is because noise adds in quadrature, and if you add two noises with significantly different magnitudes, the result will be very close to the larger one. Why not exploit this?
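A minimal sketch of that measurement in Python/NumPy (the function name is hypothetical; it assumes `subs` is a list of calibrated, aligned frames as 2D arrays):

```python
import numpy as np

def estimate_sub_noise(subs, index):
    """Estimate the Gaussian noise in subs[index] by subtracting it from
    the average of the remaining subs. The difference between an n-sub
    average and a single sub has std = sigma * sqrt(1/n + 1), so dividing
    by that factor recovers sigma of one sub (sqrt(10)/3 for n = 9)."""
    others = [s for i, s in enumerate(subs) if i != index]
    n = len(others)
    diff = np.mean(others, axis=0) - subs[index]
    return np.std(diff) / np.sqrt(1.0 / n + 1.0)
```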
  2. Say we have a session conducted under dark skies (no or minimal LP) with the target being tracked from the zenith down to, say, 30° altitude. Atmospheric extinction is going to produce considerably different target signal intensity, but empty space is going to have pretty much constant a) SNR - being 0, as the signal is 0, and b) noise - mostly read noise. In this case the background is best averaged with equal coefficients - any change in coefficients is going to produce less noise reduction and higher resulting noise in the background. The signal, however, will benefit from different coefficients in order to get the best SNR in signal areas. We end up with a noisier background if we base our coefficients on peak SNR.

In any case, what I'm trying to say is that there are better ways of stacking uneven SNR data. I developed an approach that gives very good results, and here is an outline of it (see the sketch after this list):

1. The image is stacked using a regular average.
2. The resulting stack is divided into zones based on pixel intensity (similarly to how you produce a histogram by putting pixel values into bins - bins can be arranged linearly over the pixel range or in quadratic fashion, which I believe is better as it gives fine-grained control in low signal areas versus high signal areas where SNR is already high).
3. For each image in the stack and for each zone, noise is estimated - by stacking all other subs with their current respective weights and then subtracting the current sub. The standard deviation of the result is taken and divided by the coefficient (1 - sub_zone_coefficient) / (sum_other_coefficients) - that gives us the noise estimate in the current zone of the current sub.
4. Once we have a noise estimate for each zone, we solve a minimization problem to find new coefficients that maximize the SNR of the weighted stack for that zone.
5. A new image is stacked using the zones and the calculated weights - and we return to step 2. Only a couple of iterations are needed to get stable coefficients.
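A rough sketch of that loop, under simplifying assumptions: the leave-one-out noise estimate uses a plain average of the other subs rather than the weighted stack of step 3, and the per-zone SNR maximization of step 4 is replaced by its closed-form solution for equal signal, inverse-variance weights:

```python
import numpy as np

def zone_weighted_stack(subs, n_zones=8, n_iter=3):
    n = len(subs)
    arr = np.array(subs)                             # (n, H, W)
    w_map = np.full(arr.shape, 1.0 / n)              # per-pixel weights, start equal
    for _ in range(n_iter):
        # steps 1/5: stack with current weights
        stack = np.sum(w_map * arr, axis=0) / np.sum(w_map, axis=0)
        # step 2: quadratic bin edges - finer zones in low-signal areas
        lo, hi = stack.min(), stack.max()
        edges = lo + (hi - lo) * np.linspace(0, 1, n_zones + 1) ** 2
        zone_of = np.clip(np.digitize(stack, edges) - 1, 0, n_zones - 1)
        weights = np.full((n, n_zones), 1.0 / n)
        for z in range(n_zones):
            mask = zone_of == z
            if mask.sum() < 100:                     # too few pixels to estimate
                continue
            # step 3: leave-one-out noise estimate for each sub in this zone
            sigma = np.empty(n)
            for i in range(n):
                others = np.delete(arr, i, axis=0)
                diff = np.mean(others, axis=0)[mask] - arr[i][mask]
                sigma[i] = np.std(diff) / np.sqrt(1.0 / (n - 1) + 1.0)
            # step 4: SNR-optimal weights for equal signal: w ~ 1/sigma^2
            weights[:, z] = (1 / sigma**2) / np.sum(1 / sigma**2)
        w_map = weights[:, zone_of]                  # zone weights per pixel
    return np.sum(w_map * arr, axis=0) / np.sum(w_map, axis=0)
```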
  3. Except there is a problem with that - there is no single S/N value for the image, and hence a single weight can't describe the contribution of a particular sub.
  4. Too complex - let's just, for the sake of argument, adopt the definition: "a predator is something that is likely to eat you if you let it, and often even without you letting it"
  5. I sense that at some point a spring will be introduced to the assembly, or maybe a permanent magnet?
  6. Quite right. Maybe this graph will help: (the most interesting part is the distinction between grazing and browsing - which is, if one believes the wiki, eating low vegetation vs eating high vegetation)
  7. Apparently it is exactly that, although the wiki article on predation gives a slightly different definition: So, if a mosquito bites you and sucks your blood, it is not predation, although it is consumption of a part of another living organism. When we eat an apple it is not predation, but when we eat wheat or corn that is predation, as those plants are effectively killed in the process of harvest? Maybe it is not predation as you don't have to kill the plant in order to consume parts of it? What constitutes eating another organism? How much of it do you have to eat in order for that to be called eating another organism?
  8. That is quite condescending. In any case, I felt obligated to write the above so that people reading this thread would not be misled by false information, and further - everyone is free to research for themselves. For those interested, here are the relevant links: https://en.wikipedia.org/wiki/CIE_1931_color_space https://en.wikipedia.org/wiki/SRGB
  9. Light has an exactly defined color - in fact, color is light of a certain spectrum. It is a big fallacy that there is no proper color balance / accurate color in astrophotography. Every manufacturer could produce a RAW to XYZ color transform matrix, as that absolutely does not depend on any conditions - it can be mathematically derived from the sensor's characteristics, its spectral sensitivity curves. However, manufacturers don't do that. The RAW to XYZ matrix is the only thing one needs to accurately match recorded color within the camera gamut and the gamut of the display device - for example sRGB.
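To make the pipeline concrete, here is a minimal sketch of applying such a transform (the RAW -> XYZ matrix below is an identity placeholder - the real one must be derived from the sensor's spectral sensitivity curves; the XYZ -> linear sRGB matrix is the standard D65 one):

```python
import numpy as np

# Standard linear XYZ -> sRGB matrix (D65 white point)
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

# Placeholder: the camera-specific RAW -> XYZ matrix (identity here;
# the real one comes from the manufacturer or a calibration shoot)
RAW_TO_XYZ = np.eye(3)

def raw_to_linear_srgb(raw_rgb):
    """raw_rgb: (..., 3) array of linear RAW values; returns linear sRGB.
    Gamma encoding is applied separately, after this transform."""
    m = XYZ_TO_SRGB @ RAW_TO_XYZ      # compose into one RAW -> sRGB matrix
    return raw_rgb @ m.T
```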
  10. Not sure that there has to be special pressure. The components of evolution are what life is all about. 1. Reproduction - without it, when "units" of life "break down" in some way, either through environmental impact or the pure randomness and probability of quantum nature, life ceases to exist. 2. Mutations - that is inherent in the randomness of quantum mechanics. 3. Fitness function - again related to the environment; we do need to live in this universe, and this universe is dynamic, so there will be a changing environment - otherwise there would be no life (entropy and all that).
  11. Wouldn't we all want chickens to have eight legs? I guess octopuses are the next best thing
  12. By the way, evolution seems to be a "mathematical construct" in its nature. Heuristic searches can be performed by use of a couple of simple concepts - population, offspring, mutation and a fitness function. Offspring randomly inherit traits from their parents, mutations are randomly added, and the fitness function is the cut-off point - should an offspring survive into the next generation of the population or not. It also represents the search criteria. You can do very funny things this way - for example, create a computer program to find computer programs that solve a problem. You start with simple sequences of code that will run on a virtual machine. You create offspring in a similar way to how genes mix - by taking sequences of code from the parents and attaching them in linear fashion. The fitness function equates to 0 if the code produces an error (can't be run on the virtual machine), or to a positive integer that describes how close the output is to a solution of your problem for a given input - the next generation consists of the highest scoring offspring. See this for further details: https://en.wikipedia.org/wiki/Genetic_algorithm and this: https://en.wikipedia.org/wiki/Evolutionary_algorithm (a toy sketch of the loop follows below). As such, I believe evolution is likely to happen all over the universe rather than being specific to the Earth.
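A toy sketch of the loop described above (the bit-string target and the specific rates are arbitrary choices for illustration):

```python
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # arbitrary search target

def fitness(genome):
    # Higher is better: count positions matching the target
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    # Offspring inherits a linear prefix from one parent, suffix from the other
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    # Flip each bit with small probability
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(100):
    offspring = [mutate(crossover(*random.sample(population, 2)))
                 for _ in range(40)]
    # Fitness is the cut-off: only the highest scoring survive
    population = sorted(population + offspring, key=fitness, reverse=True)[:20]
    if fitness(population[0]) == len(TARGET):
        break

print(generation, population[0])
```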
  13. That is actually quite right. In theory that is "the best" focal ratio to use, as it follows from the above formula if we plug in 400nm as the shortest wavelength of interest (I just realized that I gave the wrong formula above - now corrected; if we use focal_ratio then aperture should not be on the left hand side). In any case, focal_ratio = 2 * pixel_size / 0.4 = 2 * 2.5 * pixel_size = 5 * pixel_size (as 0.4 is the reciprocal of 2.5). In practice you can use x4, which is equivalent to using 500nm in the above formula. This is because the blue part of the spectrum is heavily impacted by the atmosphere and blurs the most (the blue part of the spectrum is most susceptible to dispersion).
  14. Oh, then you must go for Yad al-Jauzā ("Hand of Orion") (again from wiki article, Etymology section)
  15. I think it is more based on experience, and a lot of the time even experienced observers can be surprised if they have not tried such or a similar combination before. Sometimes it is hard to extrapolate what is to be expected. In the absence of actual experience, there are tools that can help you figure out what you could possibly expect. One such tool is this one: https://astronomy.tools/calculators/field_of_view/

For example, the Evostar 120 with the 25mm MA eyepiece that comes with it will give you the following view of the Moon:

The same eyepiece will give you the following view of the Orion Nebula (M42):

So you can play with many eyepiece/scope combinations to get a feel for things. The only thing it can't show you is what different apparent fields of view look like. If I tell you that one eyepiece has an AFOV of 52° vs another eyepiece that has a 68° AFOV - that is something that can't be displayed on a computer screen (well, it can, but not easily, and you really need to get in close for that to work). What you can do is use a wall or white board or similar to get a sense of different AFOVs. You can use this tool to get the key pieces of information: http://www.1728.org/angsize.htm

Say you have a wall and you stand 1 meter away from it. What diameter does a circle drawn on the wall need to be in order to represent an AFOV of 52 degrees? 97.55 cm. How about 68°? 134.9 cm. You can take something that is roughly 98 cm or 135 cm wide, place it one meter in front of you, and that is how wide the view will look (see the sketch after this post for the geometry). Alternatively, you can take the width of your window and calculate how far away you need to stand in order for the width of the window to subtend a certain angle.

Probably not. You can upgrade the finder at any time, and it is really only a matter of comfort. A straight-through finder sometimes requires very awkward head positioning in order to use it - especially if you are looking for something near the zenith (scope pointing straight up). That is the reason people swap for one with a 90° prism / RACI type. Otherwise even a simple finder serves the purpose - particularly for planets. It is very easy to just point a scope at a planet even with a simple finder (if it's aligned properly). For DSO hunting it does pay to have a better and larger unit, as those will show more stars by which you can orient yourself, and sometimes even the DSOs themselves (at least the bright ones).

I have used at least 5 Skywatcher scopes so far (two Newtonians, two refractors and a Maksutov) and while the focusers were not very good, I never failed to focus properly with any of them. The difference between a good and a bad focuser is not whether it can do its job (although there are focusers that can't even achieve that - and those go into the awful category) but how it feels when you are focusing - is it very smooth and easy to use, or do you need a bit more force and feel a sort of roughness? Does the focuser have play in the draw tube, or is it very solid and won't budge when you move it side to side, and so on. I now have an Evostar 102 and it has pretty much the same focuser as the Evostar 120 - it is a perfectly usable focuser. It is far from buttery smooth, but the Synta-glue can be removed and proper grease applied (btw, Synta-glue is some sort of "lubricant" that Synta - the manufacturer of these scopes - uses, which feels more like glue than lubricant as it is very sticky). There are a few tutorials online on how to make it better. Having said that, if I could choose between that and the Hexafoc from Bresser on the scope of my choosing - I'd go for the latter.
FLO often has items on discount - customer returns, ex-expo items and so on, but due to the current situation many retailers are very low on stock and prices have risen somewhat. Not sure if I could give a sensible recommendation there. You'll probably have some more time to think about your decision before items are back in stock - and in the meantime you can scan the second-hand listings.
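The wall-circle numbers in the post above come straight from chord geometry - a quick sketch for other AFOV/distance combinations:

```python
import math

def wall_circle_diameter(afov_deg, distance_m=1.0):
    """Diameter of a circle that subtends afov_deg when viewed from
    distance_m away: d = 2 * distance * tan(AFOV / 2)."""
    return 2 * distance_m * math.tan(math.radians(afov_deg) / 2)

print(wall_circle_diameter(52))   # ~0.9755 m - the 97.55 cm above
print(wall_circle_diameter(68))   # ~1.349 m - the 134.9 cm above
```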
  16. Although I'm accustomed to a totally different pronunciation, it seems that something very similar to 'beetle-juice' is the accurate English pronunciation. Source: https://en.wikipedia.org/wiki/Betelgeuse In my language it is pronounced: Bɛtlɡɛz (probably the closest would be 'battel-gase')
  17. It can - but I don't think there is much point in doing so. Part of the polar alignment routine is to place 0 (or any other marker) at the 12 o'clock position as a reference point. This is done by first centering Polaris on the central cross and then using only the alt controls to move Polaris up to the 12 o'clock position (the mount needs to be level for this to work properly). You then release the RA clutch and rotate your mount until 0, 3, 6 or 9 is at the 12 o'clock position indicated by Polaris. This number then becomes your reference: you place it on the hour angle of Polaris (which you should get from an external source for your date, time and location - or perhaps the handset?) and after that you polar align by placing Polaris back on the reference point.
  18. The Bayer matrix is RGGB - so that might be a problem, but in general OSC cameras need to have color correction performed on their data to get good color. A DSLR has a color correction matrix embedded, depending on which lighting you select (when you choose a white balance, you choose a particular color transform). Very rudimentary color correction is performed on OSC sensors when you do a flat exposure, if your flat source is white (and in principle it should be).

If you want to do such color correction without taking flats (and I strongly advocate the use of proper calibration with flats), you can do a photometric measurement on a star of your choice and then scale the R, G and B channels accordingly. Mind you, you need to perform these measurements on a star that is not clipped, and while the data is still linear, prior to stretching. Any star that is close to 6500K is a good calibration source (F2, for example). If you can't find an F2 star, you can try with a G star and adjust the color temperature later in processing.

The ideal way to get good color would be to derive a color transform matrix. If you have a DSLR, you can use any color chart - even your smartphone showing a color chart like this:

You shoot your color chart target with both the DSLR and the ASI533. Leave the white balance on the DSLR on "authentic" (or however the setting is called - the one that does not apply a particular white balance). Use DCRAW to extract an XYZ image from the DSLR. Measure the values of the colors while the data is still linear, and then derive the color transform matrix between the two sets of colors using the least squares method (a sketch follows below). Multiply the resulting matrix with the XYZ -> sRGB color matrix to get a RAW -> sRGB matrix for the ASI533.
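A minimal sketch of the least-squares step, assuming you have already measured matching linear patch values from both cameras (array names are hypothetical):

```python
import numpy as np

def derive_color_matrix(asi_rgb, dslr_xyz):
    """Least-squares fit of a 3x3 matrix M such that asi_rgb @ M.T ~ dslr_xyz.
    asi_rgb, dslr_xyz: (n_patches, 3) linear measurements of the same
    color chart patches - ASI533 RAW values and DCRAW-extracted XYZ."""
    M, *_ = np.linalg.lstsq(asi_rgb, dslr_xyz, rcond=None)
    return M.T

# Compose with the standard XYZ -> sRGB matrix (see the earlier sketch)
# to get the final RAW -> sRGB matrix for the ASI533:
# raw_to_srgb = XYZ_TO_SRGB @ derive_color_matrix(asi_rgb, dslr_xyz)
```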
  19. Predation is again a concept related to our terminology. It is no different from consuming plants or inorganic material. We make the distinction based on our understanding of life, but it describes the very nature of life - gathering resources from surroundings that sustain life. We would not say that a cow is a predator, would we? But it also eats living things. You could argue that it is not actively pursuing its prey, but a wildebeest will travel many miles in order to reach grasslands - is that predation? As long as the "predator" does not have an idea of its "prey" as being a living thing that needs to be hunted - what makes it a predator?
  20. Hi and welcome to SGL. A sturdier mount is always a better option - so pick the EQ5 if you can. If you are mainly interested in lunar / planetary observing, then you would benefit from a motorized mount. You don't have to get a complete goto setup, as those are expensive - you can simply add an RA tracking motor to your EQ mount. These come as add-on kits, but you can also DIY a solution for mount motorization (usually an Arduino based controller and a stepper motor - there are several open source projects for this).

Out of the two scopes - well, it depends. I'll give you my view on this, but you'll have to see what your priorities are. The Bresser has the larger aperture and the longer focal length of the two. Aperture is important for both deep sky objects and planetary, but focal length is not. Focal length is important for planetary but restricts how wide you can go with a scope. You won't be able to go wider than about 2.2° with a 1200mm scope. Some larger objects out there benefit from a wider field of view. The difference is not that big - the Evostar 120 will give you a max field of about 3°, so maybe 30% larger? The Bresser is of course longer and heavier, and heavier and longer scopes require heavier and sturdier mounts. If you are worried about storage - well, that is also a minus for the Bresser, as it is longer (again, not by much). By the way, one possible storage solution is to get this: http://oklopbags.com/telescope-bags/bag-for-refractors/ and keep the scope under your bed or somewhere where you can lay it down without it being in the way. It is also a handy transport bag.

I personally don't like the look of Bresser scopes - there are a couple of things that bother me. First is the use of plastic parts - like the screws that hold the finder. The finder also has a non-standard shoe, so if you want to change the finder you'll have to replace the whole assembly and possibly even drill holes if the provided screw holes don't match the stock finder shoe. The focuser on the Bresser scope seems better (the one on the Evostar is usable but not great). I don't particularly like oversized dew shields, but that is personal taste - nothing wrong with a wider dew shield. The best dew shields are those that can retract / slide back, but you don't get that on budget scopes.

You get two eyepieces with the Skywatcher and a 2" diagonal - which is better, as it allows for 2" eyepieces. 2" eyepieces will give you wider fields of view for deep sky objects - completely unimportant for planets. Of these two eyepieces, the 25mm is usable, and the 10mm is going to be used only until you get a replacement, as it is not really good. The 26mm EP that comes with the Bresser is probably quite fine. The Bresser comes with a 1.25" diagonal, so if you want to use 2" eyepieces with it you'll have to get a 2" diagonal separately (an additional expense).

Both scopes can be stopped down by using a simple aperture mask (you can cut one out of cardboard, or maybe print it on a 3D printer if you have one and want something longer lasting). This means that your reasoning against the Evostar 102 is a sound one - the Evostar 120 can be turned into a similarly performing scope with a simple aperture mask. The only remaining difference will be the weight of the scope.

If your primary targets are planetary and lunar and you have problems with storage - why don't you consider a Maksutov? It will sit happily on an EQ3 mount and take up very little space. It will be free of chromatic aberration. The only drawback is that it will have the narrowest field of view of the lot - so wide field DSO observing is out of the question with it.
The price will be similar for the scope: https://www.firstlightoptics.com/maksutov/skywatcher-skymax-127-ota.html or https://www.firstlightoptics.com/bresser-telescopes/bresser-messier-mc-127-1900-maksutov-cassegrain-ota.html Hope this helps somewhat.
  21. Indeed. On top of that, a 92° 17mm eyepiece will give larger magnification versus, say, a 52° eyepiece with the same field stop, so even if a star at the edge has the same coma due to the same field stop size, the 92° eyepiece will magnify the image much more and make the coma larger and easier to see.
  22. Here are some guidelines to help you out.

- Newtonians have a diffraction limited field that is way smaller than the sensor you are planning on using, so if you want to use that camera (or it is your only choice) you'll probably want to use ROI - region of interest.
- ROI helps with achievable FPS (a smaller ROI enables higher FPS because less data needs to be transferred over the USB link, but there is a limit). You are right about the Moon allowing for much lower FPS rates, but the Moon does change over the course of the night. It is not as obvious, but it changes both size and orientation as well as position with respect to the Sun - which in turn changes the level of shadows. I have not researched this matter extensively, but I'd suggest that you keep your sessions limited to maybe half an hour?
- A smaller ROI means you won't be able to do the full disk at once - that is what mosaics are for - so research how to shoot and stitch those.
- You can use a lower sampling rate than the critical sampling rate, but I would advise against going over the critical sampling rate, as it really does not do anything for image quality and only lowers SNR. You can calculate the F/ratio that you need for a given pixel size using the following formula:

F_ratio = 2 * pixel_size / wavelength

(the version with aperture in it, focal_length = 2 * aperture * pixel_size / wavelength, gives the required focal length instead - aperture is already accounted for in the focal ratio). Use the same units of length for all quantities (say micrometers for all). (references: https://en.wikipedia.org/wiki/Spatial_cutoff_frequency, https://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem). Use a Barlow lens to get close to the critical sampling rate if you want the most resolution out of your images.
- Use a narrowband filter to tame the atmosphere. People often use a Ha filter, an IR pass filter or the Baader Solar Continuum 540nm filter when doing lunar with a mono camera.
- Don't use the histogram to set your exposure - in fact, don't look at the histogram at all. Set your exposure length below 5-6ms to beat the seeing.
- Use very high gain settings, as that lowers the read noise. Read noise is one of your worst enemies here.
- Do proper calibration (meaning take darks, flats and flat darks).

The diffraction limited field of a Newtonian is given by the expression at: https://www.telescope-optics.net/newtonian_off_axis_aberrations.htm With an F/5 scope that gives 1.389 mm from the optical axis, or 2.77 mm diagonal. You need to select your ROI to that size (or that size amplified by the Barlow lens, if you use one). A sketch of these two calculations follows below.
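A sketch of those two calculations (the F³/90 constant in the coma-field formula is inferred from the 1.389 mm at F/5 quoted above - check the linked page for the exact expression):

```python
def critical_f_ratio(pixel_um, wavelength_um=0.5):
    # Critical sampling: F_ratio = 2 * pixel_size / wavelength (same units)
    return 2 * pixel_um / wavelength_um

def coma_limited_radius_mm(f_ratio):
    # Diffraction-limited (coma) field radius of a Newtonian, ~F^3 / 90 mm,
    # consistent with the 1.389 mm quoted above for F/5
    return f_ratio**3 / 90.0

print(critical_f_ratio(2.9))          # e.g. 2.9 um pixels -> F/11.6
print(coma_limited_radius_mm(5))      # F/5 -> ~1.39 mm off-axis
print(2 * coma_limited_radius_mm(5))  # ~2.78 mm ROI diagonal
```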
  23. Also, good and bad are inherently terms that we defined because we evolved to the point of developing a sense of ethics and morality.
  24. I've heard about guiding "fighting" with encoders. Maybe if you use longer guide exposures and let the encoders do their job between guide exposures? What was your guide exposure like? I would set it to somewhere between 5-8s to start with.
  25. I have not noticed coma in my F/6 8" Newtonian with a 28mm 68° eyepiece (to be fair, I was not looking for it either). The thing with coma is that it is magnified the same as everything else in the focal plane when using an eyepiece. What could be perceived as large coma in an image could well end up unresolved by our eye at low magnification. If you look at this diagram, you'll notice that the geometric size of coma at F/6 equals the Airy disk size at 3mm away from the optical axis. The Airy disk size for, say, an 8" F/6 is 1.28", and if you are using x40 magnification it will appear 51" across. An observer with sharp eyesight can't resolve anything below about 1 arc minute, so the coma blur needs to be at least a few arc minutes across when magnified in order for you to be able to tell that it is coma. Note that the above is the geometric radius - light will actually be concentrated away from that circle, into the "neck".
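The arithmetic in that last part, as a snippet:

```python
# Apparent size at the eyepiece = true angular size * magnification.
airy_disk_arcsec = 1.28               # 8" F/6, from the post above
magnification = 40
apparent = airy_disk_arcsec * magnification   # ~51 arcsec
# A sharp eye resolves ~60 arcsec at best, so a coma-sized blur
# at x40 stays below the threshold of visibility.
print(apparent)
```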