Everything posted by vlaiv

  1. If you want that - then do what I've suggested above. Make two stacks from the same alignment data - you align all your images on stars, so the stars stay fixed. Stacking with max will show all meteor trails emanating from the radiant (like a fireworks image), while stacking with sigma clip will give you a nice star field image without meteor trails. Then you combine the two to superimpose the meteor trails onto the clean star field image.
  2. What exactly do you want to achieve with that? What sort of effect? Stacking is performed in order to enhance features that repeat across the images. Meteor trails are transient features and appear on a single frame; with normal stacking they will be seriously weakened or even erased (if using sigma clipping). If you want an effect similar to star trail images - a background sky full of meteor trails - then stack with a maximum rather than an average function (just align on stars as you normally would). Alternatively, you can combine two approaches for the best effect: 1. Stack one image with sigma reject to get a nice background image. 2. Stack one image with the max method to get all the meteor trails. 3. Combine the two images by stretching the first more and the second less, again using the max method (or some alternative with layers in PS/Gimp).
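The two-stack approach described above can be sketched with numpy. This is only an illustration, assuming the frames are already star-aligned float arrays; the function names and the kappa/iteration defaults are my own choices, not from any particular stacking package:

```python
import numpy as np

def sigma_clip_stack(frames, kappa=2.5, iters=3):
    """Mean-stack with iterative sigma clipping: pixels far from the
    per-pixel mean (meteor trails, satellites, hot pixels) are rejected."""
    data = np.stack(frames).astype(np.float64)
    mask = np.ones_like(data, dtype=bool)
    for _ in range(iters):
        kept = np.where(mask, data, np.nan)
        mu = np.nanmean(kept, axis=0)
        sigma = np.nanstd(kept, axis=0)
        mask = np.abs(data - mu) <= kappa * sigma
    return np.nanmean(np.where(mask, data, np.nan), axis=0)

def combine_trails(frames, background_gain=1.0, trails_gain=1.0):
    """Clean background from the sigma-clip stack, trails from the max
    stack, merged with a per-pixel maximum (the 'stretch' step is modeled
    here as simple gain factors)."""
    background = sigma_clip_stack(frames)
    trails = np.max(np.stack(frames), axis=0)
    return np.maximum(background * background_gain, trails * trails_gain)
```

In practice you would stretch both stacks non-linearly before the final maximum; the gain parameters just stand in for that step.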
  3. Yes, that is right, but in order to relate RMS to something meaningful it is best to think in terms of a Gaussian distribution. Sigma is just a measure of that bell-shaped curve. Another way to think about it in terms of the spot diagram: roughly two thirds of all rays (about 63% for a Gaussian-shaped spot) land within the circle of RMS radius. We have a similar thing with guiding, when you have an RMS value. RMS is just the root mean square of where all rays land with respect to dead center (their distance from the center, where they would ideally land). Roughly, that translates into a blur of sort-of Gaussian shape - but the actual shape of the blur is much more complicated to calculate and involves wave superposition. The spot diagram is simply a plot of all points and does not show the "density" of points properly. There are other ways to draw a spot diagram - for example one zoomed in enough that you can see individual ray hits. How dense a spot diagram looks depends on two things: first, how many rays are cast, and second, how "zoomed in" the diagram is and how large the individual spots are drawn. These "sparse" spot diagrams show the density of spots a bit better than "compact" diagrams like the one above. For example, in the middle image we can see that red is concentrated in the center but does have a little tail. That is actually some coma (and possibly something else). I managed to find a wave simulation as well as the corresponding spot diagram - they look similar, except that the wave simulation shows the "ripples" and interference effects of light waves. There you can see that the "core" is bright and that the coma tail is fainter - which corresponds to the spot density. In any case, the GEO radius is simply the radius of a circle that contains all ray hits (again, that does not mean it is the full extent of the blur, as you need to account for the wave properties of light - but as in the above example, they correspond well enough). I think they are given for all wavelengths unless otherwise stated.
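As a quick sanity check of the "fraction of rays inside the RMS radius" claim, here is a small Monte Carlo sketch. It assumes a purely Gaussian spot, which real ray-traced spots are not; for that idealized case the fraction comes out near 63%:

```python
import numpy as np

rng = np.random.default_rng(0)
# Model ray hits as an isotropic 2D Gaussian "spot" (illustrative only).
n = 200_000
hits = rng.normal(0.0, 1.0, size=(n, 2))
r = np.hypot(hits[:, 0], hits[:, 1])       # distance of each hit from center

rms_radius = np.sqrt(np.mean(r ** 2))      # root-mean-square distance
frac_inside = np.mean(r <= rms_radius)     # fraction of rays within RMS circle
geo_radius = r.max()                       # GEO radius: encloses every hit

print(rms_radius, frac_inside, geo_radius)
```

Note that the GEO radius of a sampled Gaussian grows with the number of rays cast, which is one reason RMS radius is the more stable figure of merit.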
  4. Surely F/8 (I don't own a single scope with this F/ratio). It allows you to get the maximum exit pupil (7mm with a 56mm 2" Plossl eyepiece), while still being easiest on the eyepieces (the slower the scope, the better the image in the eyepiece, except for the most expensive eyepieces).
  5. Well, in that case, if it is FOV that you are interested in - just see what will give you the FOV that you want, is within your budget, and can be easily mounted. As long as you don't try to get high resolution images and just want to do wider field, you don't have to worry. Do be aware that you'll probably oversample with that camera in most cases - maybe use super pixel mode to debayer your images, or bin the data x2 after stacking. If you want both high resolution and a wide FOV - then consider creating mosaics.
  6. Not entirely. If I understand you correctly - by practical, you mean: we do experiments to confirm stuff and we use science for practical applications, like knowing how to calculate things to make a radio or whatever. That is a large part, but science has a predictive side as well. Once we accept a certain model, thinking about that model will yield insight about nature that we previously did not have. It will certainly end up in the practical again - like: the atom is made up of some particles, so let's try to split it. Without the insight that the atom is made of smaller particles, the idea of splitting it, and how best to go about it - we would not have the practical part. In that sense, there is a part of science that is just mental process without a practical part. This is where theoretical physicists come in (there is simply too much work for one person, so we must split the effort - have theoretical physicists and experimental physicists, and even split them by field of study).
  7. When you say a scope in the 400-600mm range - what do you actually mean? Why do you need that FL range? Is it the FOV? Is it the sampling rate? What sort of camera do you have, and what sort of image resolution do you want to achieve?
  8. Have you read this? https://en.wikipedia.org/wiki/Libration Maybe there is an answer to your question there.
  9. I don't think you should be that harsh on the spot diagrams of astrographs. As a rule, they have optical elements aimed at providing good correction over a larger flat field. That comes at a price - you simply can't maintain the diffraction limit in that scenario. For the most part that is fine, as the final resolution of the image depends not only on the optics but on mount performance and of course seeing. If you put a good 4" astrograph next to a regular 4" diffraction limited telescope and compare their center field in average imaging conditions - you won't see much difference - both will be limited to about 2"/px sampling (the astrograph will be just a tad less sharp in the center of the field, but it might be barely noticeable). In any case, they are sharp if you treat them the right way. The problem with the majority of imaging today is that pixels are getting too small. Just a decade or two ago, in the CCD era, a pixel size of about 5-9um or even more was standard. Nowadays it is at least half that size, if not less. The above telescope will produce sharp images at 9um pixel size - no problem (we calculated that). A nice and simple way to check what your optics are capable of is to get an artificial star, take some shots of it across the field and then measure the FWHM of those star profiles (just make sure the artificial star is far enough away when doing this - like 50m or more). Divide the FWHM by 1.6 and that is your optimum sampling rate for the optics alone. For larger diffraction limited scopes you might actually run into the issue of pixels being too large - planetary imagers often need barlows to match plate scale to pixel size. For this reason, it is sensible to perform this test on astrographs rather than diffraction limited scopes.
  10. In that combination you are actually oversampling rather than undersampling. From above, the telescope is able to deliver ~4.3"/px and you want it to record 2"/px. How about we compare it to a perfect 100mm aperture to see the difference. We have an RMS radius of about 3" with this scope. The FWHM of the Airy disk is 1.025 * lambda / lens_diameter (in radians), so this gives us 1.025 * 0.55um / 100000um in radians. That is (180 / pi) * 1.025 * 0.55 / 100000 degrees = 32.3 / 100000 degrees. Now we need to convert that to arc seconds, so we multiply by 3600 (which is 60*60) and we end up with 32.3 * 3600 / 100000 = 116282 / 100000 = ~1.16". So the FWHM of a perfect telescope is ~1.16" - we need to calculate the equivalent RMS, so we divide by 2.355 and get ~0.49" RMS. That is about x6 less than the above telescope.
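That Airy-disk arithmetic can be packaged as a tiny helper. This is just a sketch; the 1.025 * lambda / D and FWHM = 2.355 * RMS relations are the ones used in these posts, and the function name is mine:

```python
import math

def airy_fwhm_arcsec(aperture_mm, wavelength_um=0.55):
    """FWHM of the Airy pattern, 1.025 * lambda / D, converted to arcseconds."""
    fwhm_rad = 1.025 * (wavelength_um * 1e-6) / (aperture_mm * 1e-3)
    return math.degrees(fwhm_rad) * 3600

fwhm = airy_fwhm_arcsec(100)   # perfect 100 mm aperture at 550 nm -> ~1.16"
rms = fwhm / 2.355             # Gaussian-equivalent RMS -> ~0.49"
print(fwhm, rms)
```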
  11. I just use simple formulae that are available for different things. For example, the Airy disk radius in radians is 1.22 x lambda / aperture diameter, where lambda and aperture diameter are expressed in meters (or micrometers - it does not matter as long as they are the same). Lambda is usually taken to be 550nm (or 0.55um) as that sits in the middle of the 400-700nm visible range. Find more information here: https://en.wikipedia.org/wiki/Airy_disk For FWHM - look here: https://en.wikipedia.org/wiki/Full_width_at_half_maximum It is usually taken that FWHM = 2.355 * RMS, as that holds for a normal distribution (Gaussian bell shape) - when you have an RMS spot radius. Then there is a handy formula to relate angles and micrometers for a telescope (a bit of trigonometry really), which goes: angle in arc seconds = size_in_um * 206.3 / focal_length_in_mm. It is handy for calculating arc seconds per pixel - if you, for example, put in some pixel size: 3.75 * 206.3 / 700 = 1.105"/px (700mm is the focal length, 3.75um is the pixel size, and it solves for the angle in arc seconds). Alternatively it can serve to convert micrometers to arc seconds for some focal length - just use a size of 1um, so you get 1 * 206.3 / 700 = ~0.295". So there is 0.295" per 1um at 700mm, or 1/0.295 = ~3.4um per arc second. In the end, if you want the ideal sampling rate for long exposure - you take the FWHM size of a star and divide it by 1.6 to get arc seconds per pixel for optimum sampling (the explanation for this is rather complex and involves the Fourier transform, the convolution theorem and Nyquist sampling). With that we can see that the above telescope with the x0.6 reducer will produce around 7" FWHM stars without even the influence of seeing or mount guide error. That is ~4.375"/px, or at 420mm an ideal pixel size of 8.9um based on that alone.
It is important to remember: the shorter the focal length, the "tighter" the spot diagram needs to be in micrometers to provide a sharp image - because with less focal length, every micrometer covers more sky.
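The handy relations above can be collected into a few helpers (function names are mine; the 206.3 constant and the FWHM / 1.6 rule are taken straight from the post):

```python
def arcsec_per_um(focal_length_mm):
    """Plate scale: arcseconds covered by 1 um at the focal plane."""
    return 206.3 / focal_length_mm

def arcsec_per_pixel(pixel_um, focal_length_mm):
    """angle ["] = size [um] * 206.3 / focal_length [mm]"""
    return pixel_um * arcsec_per_um(focal_length_mm)

def optimum_sampling(star_fwhm_arcsec):
    """Optimum long-exposure sampling rate: FWHM / 1.6, in arcsec per pixel."""
    return star_fwhm_arcsec / 1.6

print(arcsec_per_pixel(3.75, 700))  # ~1.105 "/px
print(optimum_sampling(7.0))        # ~4.375 "/px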
  12. There are spot diagrams published for all combinations - with the x1 flattener, the x0.8 reducer and the x0.6 reducer. Here we have the RMS radius of the spot diagram. With a focal length of 700mm and a reduction of 0.6, that comes to 420mm of effective focal length. A 6um RMS spot radius equals 3" at 420mm - so that is quite low resolution. If we were to calculate the effective aperture: a 3" RMS spot radius is equivalent to 3" * 2.355 = 7.065" FWHM, which is in turn equal to 2.44 * 7.065" / 1.025 = 16.82" Airy disk diameter. That is the same as a 16.46mm aperture scope. Like I said - not much resolving potential.
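The chain of conversions above (RMS spot radius to an "effective aperture") can be sketched as one function. This is illustrative only, follows the same steps as the post, and the function name is mine:

```python
import math

def equivalent_aperture_mm(rms_um, focal_length_mm, wavelength_um=0.55):
    """Aperture whose Airy disk matches a given RMS spot radius, following
    RMS -> FWHM (x2.355) -> Airy diameter (x2.44 / 1.025) -> D = 2.44*lambda/theta."""
    rms_arcsec = rms_um * 206.3 / focal_length_mm          # spot size on the sky
    airy_diam_arcsec = rms_arcsec * 2.355 * 2.44 / 1.025   # equivalent Airy diameter
    airy_diam_rad = math.radians(airy_diam_arcsec / 3600)
    return 2.44 * (wavelength_um * 1e-3) / airy_diam_rad   # solve for D, in mm

print(equivalent_aperture_mm(6, 420))  # ~16.5 mm, close to the ~16.46 mm above
```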
  13. As far as I can tell, that diagram shows lateral chromatic aberration - although the scale is really exaggerated. Here is a comparison of the two types of chromatic aberration, with the associated ray trace diagrams. Longitudinal is the one we are more familiar with: different colors have different focal lengths and converge at different points along the optical axis. Lateral behaves differently: all wavelengths of light are focused in the same place along the optical axis (at the focal plane), but at different distances from the center of the focal plane. You need a complex optical system to produce lateral chromatic aberration - like a telescope + eyepiece (especially wide AFOV eyepieces exhibit this) or a camera lens with multiple elements (7+) which corrects other aberrations but introduces this one.
  14. I looked up Tic Tac encounter and found this: https://www.history.com/videos/uss-nimitz-tic-tac-ufo-declassified-video Honestly, I don't see any sort of piloting, intelligent control or anything similar.
  15. How do you assert that something is "reacting to its surroundings" as opposed to randomness in phenomena?
  16. FreeCad? You can make a 3D model to help you visualize things and then it is rather easy to make technical drawings of different projections of that model. Software is free / open source, so it might not be as polished - but it works. I use it for modeling of 3d parts that I 3d print.
  17. Were you guiding? If so - what is your guide log like?
  18. For the best wifi experience you really need dedicated wifi gear paired with parabolic antennas. That way you can create links up to a couple of kilometers long that work well. For simple use cases, cable is hard to beat.
  19. If you are running indy / indigo, why not have only server on remote machine and other software on your work computer instead of using VNC?
  20. Sorry, that's supposed to be Klipper - advanced firmware for 3D printers. And a Banana Pi is, well, a fruity-flavored single board computer, much like a Raspberry Pi.
  21. I have another suggestion, but it is sort of an advanced use case. You can have an RPi just sitting next to your gear to provide USB-over-ethernet functionality. With Linux it is fairly easy to set that up and have "remote" USB ports, so you can run all the software on your laptop inside the house (provided it also runs Linux), as long as there is an ethernet connection between the two. I'm now thinking of converting my 3D printer - which currently runs Klipper on a Banana Pi - to that configuration. There are Orange Pi SBCs that are very small but well suited to providing remote USB functionality for a fraction of the cost of a Raspberry Pi. I'd run my Klipper on a virtual machine on my virtualization server and connect to the 3D printer via USB over ethernet.
  22. Well, I thought that I saw some stars wobble up and down - and yes, you can see some if you pay attention. That is the phenomenon I'm talking about - but to be sure, why don't you take your latest session - the one from which you posted the crop in your initial post - and simply make an animation of the frames. You can even crop them to reduce the size - just make sure you crop every image to the same area.
  23. Do you have an external power outlet on your house? A wired connection will most definitely help. Wifi can be really poor quality, especially if there are multiple networks around. If you have an external power outlet (and I'm guessing you do, since you power your rig somehow) - there are ethernet-over-powerline devices that don't cost much and support up to gigabit link speeds.
  24. Not ideal, because it is aligned on the comet - as the mount tracks and makes errors, those errors affect both the stars and the comet the same way. By aligning on the comet, you removed those errors.
  25. Ok, so let's do this step by step. This is the definition of magnitude: mag = -2.5 * log_base_10 (flux / reference_flux). So, say we have a star that is x100 dimmer than our reference star - its magnitude will be -2.5 * log_base_10 (1/100) = -2.5 * -2 = 5. So a magnitude 5 star is x100 less bright than a magnitude 0 star. Now, from the properties of logarithms: if star A is x10 less bright than star B and star B is x20 less bright than the reference star - star A will be x10 * x20 = x200 times less bright than the reference star. Brightnesses multiply, but magnitudes add / subtract, so magA = magRefB + magBA (the magnitude from reference to star A is the magnitude from reference to B plus the magnitude from B to A). It is just the simple rule of logs and products. Now back to the discussion of sky background brightness. What is SQM? SQM is defined like this: if you had a star of some magnitude and spread its light over 1 arc second x 1 arc second - and did this for every arc second squared of the sky - that would be your brightness. The amount of photons (or flux) coming from a patch of the sky must be the same as if coming from a single star of a certain magnitude - then we say the sky has a brightness of that magnitude per arc second squared. Given the above, it is easy to calculate the magnitude of a patch of the sky if you can find the ratio of flux between it and some star in the image. Say you have 20 ADU per arc second squared (again - you average per pixel and use the plate scale to get the per arc second squared value) and you've identified a mag9 star in the image which you measured at 40000 ADU. A hypothetical star corresponding to a 1x1" patch of the sky would be 40000/20 = x2000 dimmer than our mag9 star. If we calculate the magnitude of that ratio, it will be mag = -2.5 * log_base_10(1/2000) = ~8.25mag. Now we have a mag9 star and a difference of an additional 8.25 magnitudes; this means that our sky is 9 + 8.25 = mag17.25, or SQM 17.25. It is as simple as that. Not necessarily - it would depend on the spectrum of the reference star and that of the sky background.
Say we have two stars with the same spectrum, one just x10 brighter than the other. Sensor + filter will simply integrate over that spectrum (with the sensor sensitivity folded in as well). If the stars have the same spectrum and one is x10 brighter, then that holds true at every wavelength. It does not matter if you integrate over the 400-500nm range, the 500-600nm range or some other range like 656 +/- 3nm; once you perform the integral and take the ratio of the two, you will get the same number: x10. The problem arises when you have two sources with different spectral characteristics and you try to compare sections of those spectra - as in the comparison of common spectra for different stellar classes. There you can see, for example, that B5V has higher B filter values than a K0V star, and the ratio of integrations over the B region won't be the same as the ratio of integrations over the V region, as most values are simply higher. This is, by the way, how we define stellar color (in the astronomical sense) - as the difference in magnitudes between two filters compared to a reference star. So there are cases where you have to be careful how you "transition" from one filter to another in terms of magnitudes and apply a correction, but for simple SQM you can just use a green filter (or the green component of a color image). That will be a very good approximation of the V filter in most cases.
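The SQM arithmetic in the worked example condenses into one small function. This is a sketch only: it assumes linear, dark/bias-calibrated ADU values, a background already averaged to per-arcsecond-squared units via the plate scale, and a green/V-like filter as discussed; the function name is mine:

```python
import math

def sky_sqm(sky_adu_per_arcsec2, star_adu, star_mag):
    """Sky brightness in mag/arcsec^2 from the flux ratio between a known
    star and the average sky background per arc second squared."""
    ratio = star_adu / sky_adu_per_arcsec2   # how much dimmer the 1x1" sky patch is
    return star_mag + 2.5 * math.log10(ratio)

print(sky_sqm(20, 40000, 9))  # the worked example: ~17.25
```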