Everything posted by vlaiv

  1. I think it will be like that. In your case you don't even have to worry about the horizontal / vertical aspect - since you have a square sensor, it does not matter if RA is aligned with the horizontal or vertical side of the image. The only thing to pay attention to would be rotation direction. I would personally do the following - find a pair of stars that are bright enough and are horizontal / vertical (the line going through them is parallel with either the horizontal or vertical edge of the frame) in your wanted orientation. Then in the field, after rotating the camera by 90° and taking frame / focus shots (or plate solving) - check to see if those stars are in fact as expected. If you rotate to -50 degrees - then the stars will be at an angle of 10 degrees to the edge.
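The field check above can be put into numbers - given pixel coordinates of the two stars (from a plate solve or read off the frame), the angle of the line through them relative to the horizontal frame edge tells you how far off the wanted orientation you are. A minimal sketch; the coordinates are made up:

```python
import math

def pair_angle_deg(x1, y1, x2, y2):
    """Angle of the line through two stars relative to the
    horizontal frame edge, folded into (-90, 90] degrees."""
    ang = math.degrees(math.atan2(y2 - y1, x2 - x1))
    if ang > 90:
        ang -= 180
    elif ang <= -90:
        ang += 180
    return ang

# Made-up pixel coordinates: a pair lying along the frame diagonal
print(pair_angle_deg(0, 0, 100, 100))  # 45.0
```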
  2. vlaiv

    NGC 1333

    You've come a long way, young padawan
  3. If you want to be very specific about it - then yes, choose the channel that has the lowest background signal. But you really don't have to be with OSC data, because of how OSC sensors work and how human vision works. Here, look at this example (which you can do yourself in any image processing software): First image is the baseline. I've chosen an image that does not have much green in it - rather like astrophotography - blue and red (or orange) are the base colors. Second image is with some level of noise added to the blue channel alone. Third image is the same level of noise, this time added to the green channel only. The thing is - we are much more sensitive to variation in brightness than in color / hue. Green carries the bulk of the brightness information. Even OSC sensors are built that way - look at a raw image from, say, a 533MC - you will see that it is very green and needs "color balance" to produce a proper image. The Bayer matrix of an OSC sensor has 2 green pixels for every 1 red and 1 blue - again because green carries the most information about the image. If you swamp the read noise in the green channel - you've done 99% of the work as far as visual quality of the image goes. Want to be very accurate - sure, go for the lowest signal, but the fact is - it won't make much of a difference.
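The effect is easy to quantify with standard luma weights (Rec. 709 here - an assumption, actual OSC color matrices differ, but green dominates in all of them): the same noise injected into green disturbs perceived brightness roughly ten times more than into blue. A rough sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Rec. 709 luma weights - green dominates perceived brightness
W = np.array([0.2126, 0.7152, 0.0722])   # R, G, B

base = np.full((64, 64, 3), 0.5)         # flat mid-grey test image
noise = rng.normal(0.0, 0.05, (64, 64))  # one noise realisation

def luma_noise(channel):
    """Std-dev of perceived brightness after adding the same
    noise to a single colour channel."""
    img = base.copy()
    img[..., channel] += noise
    return (img @ W).std()

for name, c in (("R", 0), ("G", 1), ("B", 2)):
    print(name, round(luma_noise(c), 4))
```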
  4. Not sure that physical law being the same constitutes symmetry. Time has two symmetries - time reversal and time shift symmetry. The first one states that physical laws are equally applicable if you turn the arrow of time. If instead of going forward - time is taken to flow backward - the laws would still hold the same. The shape / form of physics laws does not change at all. A crude representation of this would be the fact that a ball follows a parabolic trajectory no matter if time moves forward or backward. Take a movie of a flying ball and play it backwards - the trajectory will "look right" regardless of the fact that time is flowing backward. Similarly - if you take a sped up video of planets orbiting a star - there is no way of telling if that movie is played forward or backward. Our physical laws (mostly) follow this symmetry down to the particle level - or should we say that what we see stems from the universe behaving that way on the particle level. However - that symmetry is "broken", or T symmetry does not hold; here is an interesting (and lightweight) article on the subject: https://bigthink.com/starts-with-a-bang/laws-physics-not-time-symmetric/ Then there is time shift symmetry - which states that the result of an experiment should be the same if one performs it "in the evening" or "in the morning". Time T0 is just an arbitrary moment and as such does not influence the result of a physics experiment (again, laws do not change). This is also not always the case. In general relativity, where time is tied in with space in space-time - it can bend and flow differently, and it does matter at what time you perform the experiment, as time is no longer linear and omnipresent / universal.
  5. I'm now wondering if we could explain such symmetry in layman's terms? What would happen if we were to take some object and rotate it 360 degrees around one axis - while simultaneously rotating it around another by 180? Depending on the object - we would probably need to do two such rotations in order to get back where we started. Think of, say, a car - rotate it around the Z axis (which means spin it so that the headlights sweep the whole horizon) by 360 degrees while rotating it about the X axis by 180 degrees - so it lands on its roof after half a rotation. Two such rotations would put the car in the same orientation it started with. The problem with this approach is that you can combine both of these rotations into a single rotation around some third axis (neither X nor Z - but some vector in 3d) - and as such - it will still behave like "normal" rotations in 3d - after 360 it will return to the original state. But it does show that if you have two "linked" properties that can't be readily interchanged (like vector directions are in 3d) - you can have a case where you need to do two rotations to get the object back to normal. Another option would be to have a car that switches its color if ever placed upside down - then one rotation would swap its color from red to blue and another rotation would return it from blue to red, thus rendering the initial state after two rotations.
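The "needs two turns to get back" behaviour is exactly what spin-1/2 objects do, and it can be checked with a couple of lines of linear algebra: in SU(2) a 360° rotation of a spinor gives minus the identity, and only 720° restores it. A small sketch:

```python
import numpy as np

# A spin-1/2 rotation about z by angle a: U(a) = exp(-i a sigma_z / 2)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def rot_z(a):
    return np.cos(a / 2) * np.eye(2) - 1j * np.sin(a / 2) * sigma_z

print(np.allclose(rot_z(2 * np.pi), -np.eye(2)))  # True: 360 deg flips the sign
print(np.allclose(rot_z(4 * np.pi), np.eye(2)))   # True: 720 deg restores it
```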
  6. Maybe this will help? Haven't watched it yet, but I have a sense it will be good enough to explain things (it apparently has 6 parts, above is part 1).
  7. You could still be seeing the effect of resolving the artificial star. If you get the same pattern on the Airy disk when you move the star further away - then it is probably related to mirror pinch. However, it is highly unlikely that you'll see 9 sides in your Airy disk even if it is a pinch. That would mean the mirror is being pinched equally at every point - not a likely thing to happen. I'd think that the most likely scenario with a pinch is a triangle or similar.
  8. One of the distances is related to resolving the small opening of the artificial star - yours being a 50 micron star. The other has to do with spherical aberration. You can calculate the first distance quite easily - look into what the resolving power of your aperture is and then look at which distance the star presents an angular size that is smaller than this value. For spherical aberration - things are not quite as easy, as you need to understand quite a bit of optics to calculate that one - but it boils down to this: A point source "radiates a spherical wavefront" around it. When this wavefront hits the aperture - it is slightly bent. When we look at objects that are far away - they are effectively at "infinity" as far as the numbers are concerned - that sphere has infinite radius and thus is almost flat - we get a flat wavefront. This is what we want, and what scopes are optimized for. If our point source is not at infinity - it will introduce some level of spherical aberration to the wavefront. When you examine the image in a telescope - you won't know how much of the spherical aberration comes from the telescope itself and how much is from the fact that the source is close and not at infinity. The closer the source - the more spherical aberration there will be (the ratio of distance to aperture changes - so the part of the sphere changes - because the aperture is fixed but we change the distance). At some distance this spherical aberration induced by closeness of the target gets very small, can't be detected and won't affect the results of optical tests. That is the second distance at which you need to place the artificial star if you want to check for spherical aberration of your telescope and get accurate results.
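The first distance is straightforward to put into numbers. A sketch using the Rayleigh criterion; the aperture value is just an example assumption:

```python
def min_star_distance_m(aperture_mm, star_um, wavelength_nm=550.0):
    """Distance beyond which an artificial star of the given size
    subtends less than the Rayleigh limit, i.e. is unresolved."""
    theta = 1.22 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3)  # radians
    return (star_um * 1e-6) / theta

# Hypothetical 200 mm aperture looking at a 50 micron star
print(f"{min_star_distance_m(200, 50):.1f} m")  # 14.9 m
```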
  9. Being a developer - it actually took me quite a bit of time in order to understand the joke
  10. Reducer is worth using if you want to widen the FOV somewhat. However - you can't really go to arbitrary reduction factor with it. Maybe best way to think about it in first iteration would be not in terms of focal length reduction but rather as "sensor size increase". Say you want to use x0.6 reducer on your scope. Your sensor is 16mm diagonal as is. With x0.6 reducer - you will have 16mm / 0.6 = 26.666mm sensor size. Almost APS-C size, and at that size - several things start to happen. First - edge of the field astigmatism starts to show, second - depending on type of reducer - you will start to get vignetting. Pretty much all the things that you already noticed. You can certainly use x0.5 reducer on this scope - but in that case, I'd look into sensor below 10mm in diagonal. In fact - for best results, stick to 4/3 sensor size - approximately 22-23mm in diagonal. This would mean 16/22 = ~0.72 in your case.
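The "sensor size increase" view above, written out as a couple of helper functions (numbers are the ones from the post; the 22 mm corrected-field figure is the 4/3 diagonal mentioned there):

```python
def effective_diagonal_mm(sensor_diag_mm, reduction):
    """A focal reducer makes the sensor 'look' this much larger
    to the telescope's corrected field."""
    return sensor_diag_mm / reduction

def max_reduction(sensor_diag_mm, corrected_diag_mm):
    """Strongest reduction that keeps the virtual sensor inside
    the well-corrected field."""
    return sensor_diag_mm / corrected_diag_mm

print(round(effective_diagonal_mm(16, 0.6), 1))  # 26.7 - nearly APS-C
print(round(max_reduction(16, 22), 3))           # 0.727 - the ~0.72 above
```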
  11. Why don't you just bin the data? That will act as a x2 reducer as far as arc seconds per pixel are concerned. It won't widen your field of view though, but not sure if you are interested in that
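Binning can also be done in software after capture; a minimal numpy sketch of a 2x2 sum bin (on most CMOS cameras "hardware" binning is usually the same arithmetic done by the driver):

```python
import numpy as np

def bin2x2(img):
    """Software 2x2 bin: sum each 2x2 block into one pixel.
    Doubles arcsec/px (like a x2 reducer for sampling) while
    keeping the same field of view."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

a = np.arange(16, dtype=float).reshape(4, 4)
print(bin2x2(a))  # each output pixel is the sum of a 2x2 block
```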
  12. What is 0? It is the only thing that stands between + and - !
  13. Even if that means breaking the odd physics law here and there, but what is that in comparison to a satisfied customer?
  14. Look at that: taken from this page: https://www.firstlightoptics.com/stellamira-telescopes/stellamira-90mm-ed-triplet-f6-refractor-telescope.html
  15. You know that old copy&paste technique for getting the text on the web page? Sometimes you copy the wrong thing
  16. Yep, and it is not much worse than fully opened. You can see both here: https://www.samyanglens.com/en/product/product-view.php?seq=323 under the data tab. The fact that both look pretty much the same at F2 and F8 is just a sign of a good design. You can't really have diffraction limited optics with so many elements, corrected for most aberrations including flat field over a large sensor size. That of course does not mean that this lens can't be used for lower resolution work - and indeed it works excellently for that. Blockiness of stars (I just love how we use non existent words - no spell check support, is it blockyness or blockiness? ) is a function of the resampling method used. If one uses nearest neighbor - they get a square; other methods give different results. The closer one gets to an ideal filter (unfortunately, an ideal filter does not exist, as we would need it to be spatially infinite - because it needs to be finite in the frequency domain) - the more "Airy disk"-like the enlarged star becomes. This is a single pixel enlarged by 2000% (x20 enlargement) - with different algorithms used. Top left is nearest neighbor - which simply produces a square, is the simplest method and is the root of the blocky stars myth. Back in the day most software used this for enlarging images, as it is by far the easiest method to both implement and execute - it does not require fancy math knowledge - so people concluded that under sampled stars look like squares because that is how they saw them in software. Top right is bilinear interpolation - it creates a sort of "star" pattern in that one "pixel". Bottom left is bicubic and bottom right is quintic b-spline. Note how the "first diffraction ring" starts forming with the advanced methods.
This is because both telescope aperture and a limited number of samples do the same thing - they limit the maximum frequency in the frequency domain, which results in a ripple kind of effect in the spatial domain: This animation is brilliant at explaining the actual effect - if you use an infinite number of sine waves - you can actually get a square pattern by summing them all - but if you stop at a certain point - you can't produce a sharp edge and there is some residual rippling left. This is why Airy rings form - if we had an aperture of infinite size and we could capture all frequency components - we would simply have the star image as a single dot - but since we cut off higher frequencies - we get this residual rippling that has not been canceled out by the higher frequencies.
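The "sum of sine waves" point can be reproduced in a few lines: truncate the Fourier series of a square wave and the sharp edge never quite forms - a fixed overshoot ripple (the Gibbs phenomenon) remains near the edge no matter how many terms you keep, which is the 1-D analogue of diffraction rings around a star. A sketch:

```python
import numpy as np

# Truncated Fourier series of a square wave: a hard frequency
# cut-off leaves residual ripple near the edge (Gibbs phenomenon),
# the 1-D analogue of diffraction rings around a star image.
x = np.linspace(0, 2 * np.pi, 10001)

def square_partial_sum(n_terms):
    s = np.zeros_like(x)
    for k in range(n_terms):
        n = 2 * k + 1                        # odd harmonics only
        s += (4 / np.pi) * np.sin(n * x) / n
    return s

for n in (1, 5, 50):
    print(f"{n:3d} terms -> peak {square_partial_sum(n).max():.3f}")
# the ideal square wave tops out at exactly 1.0; the overshoot stays
```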
  17. Undersampling does not produce blocky stars - that is just a myth (and a very persistent one - like many other myths in amateur astronomy). At 5.7"/px - you will certainly not be under sampling with a lens - you might even be over sampling. Just to give you some idea - at F/2.8 and with 135mm of focal length - your aperture will be opened to ~48mm. Airy disk diameter alone is 5.77" at 550nm. Add to that tracking and seeing and you'll be close to optimal sampling even with a perfect aperture. However - look at this: At 30 lp/mm we already have ~95% attenuation of the signal. There are 1000um in a mm, so 1000um / 30 = 33.33um - even for 16.66um pixel size we will see some sharpness loss, let alone for 3.75um pixel size. This is solely due to sharpness of the optics and not related to other things. Lenses are simply not diffraction limited. Under sampling does not produce softening at any pixel scale - quite the opposite. What under sampling does is produce aliasing artifacts when conditions are right (and you are truly under sampling). For example: this image as is already shows some effects of aliasing, which means that it is already under sampled. Look at the side of this building - if you look at the pattern produced by the windows as a whole - you will see a strange curved "wave" appear, formed by the windows - although in reality we know that there are no curved wavy features on the building. Look what happens when I resize this image down without taking care of the under sampling issues (there are algorithms that can deal with this to some extent by using anti aliasing filters): the pattern just got worse and more visible. That is aliasing, and it is the effect of under sampling. There are no other effects - we don't get "softening" as an effect of under sampling - both images are very sharp - just look at the windows on the face of the building - they are very nicely defined and the edges are high contrast. Nothing soft about this image.
With astronomical images it is very, very hard to see the effects of under sampling, because of the way astronomical images are composed and because of the way the under sampling artifact - namely aliasing - works. Even when you are grossly under sampled with astronomical images - you won't see any aliasing issues. In a nutshell - you don't need to worry about blocky stars, don't need to worry about under sampling, and if your image turns out to be slightly soft when viewed at full size - that is because you are probably over sampling for that lens with 3.75um pixel size, as the lens itself is not diffraction limited.
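Aliasing is the same mechanism in 1-D: sample a sine above the Nyquist limit and the samples are indistinguishable from a much lower frequency - a false large-scale pattern, just like the wavy pattern the window grid produces. A minimal sketch:

```python
import numpy as np

# Aliasing in 1-D: a 9-cycle-per-unit sine sampled at only 10
# samples per unit (Nyquist limit = 5 cycles) is indistinguishable
# from a 1-cycle sine of opposite sign - a false low frequency.
k = np.arange(10)
t = k / 10.0

high = np.sin(2 * np.pi * 9 * t)   # above Nyquist
low = np.sin(2 * np.pi * 1 * t)    # its alias

print(np.allclose(high, -low))     # True: the samples match exactly
```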
  18. https://www.firstlightoptics.com/adapters/astro-essentials-sky-watcher-9x50-finder-to-c-adapter.html This page suggests that 9x50 from Skywatcher is 51mm with pitch 0.75 On the other hand, it might be 2" - 32tpi? That would be 50.4mm and thread pitch of 0.79375mm In any case, do you want metal part or 3d printed one? Both are available online for 9x50 from sky watcher (above is to C adapter, but here is T2 version: https://www.teleskop-express.de/shop/product_info.php/info/p4520_TS-Optics-T2-Adapter-for-autoguiders-to-Skywatcher-50-mm-and-30-mm--finder-scopes.html) Or 3d printed version: https://www.thingiverse.com/thing:3444749 (this was just a quick search). Note that 3d printed version suggests 2" SCT connection on this finder which would be 2"-24 tpi, according to https://agenaastro.com/articles/guides/astronomy-threads-explained.html#small
  19. I don't know of any simple method except comparing stars in center of the field with and without coma corrector. If they look the same - then you are ok. Here are a few interesting links comparing various coma correctors: https://www.astrofotoblog.eu/?p=856 and this one: https://www.cloudynights.com/topic/554686-coma-corrector-compariosn/ Note on the second link - there are a few comparison images that clearly show star bloat / softness of the image in center of the field with simple 2 element coma correctors.
  20. Bortle class does not play into this. Idea of binning 2x2 is a sound one and I support that line of reasoning. I also think that it will be good working resolution given your setup. What is usual total guide RMS you achieve with CGX-L? If it is around 0.5"-0.7" RMS, then you'll be optimally (or close to optimally) matched in your setup. Just make sure you use good coma corrector - one that does not add spherical aberration and can correct over needed field at F/4. Poorer coma corrector will make image softer than it needs to be - robbing you of some resolution.
  21. Maybe post raw unprocessed stack to see what other people can make of it? I'm guessing that you'll be surprised by results, but that also means that various pieces of software will be used - not necessarily what you have available. In any case - there is a learning curve for processing, and seeing what is possible with your data will give you incentive to hone your skills and maybe even try different approaches.
  22. Well, it boils down to this:
      1.2"/px - 1"/px : use a large scope (8"+) of good figure on a premium mount in good seeing conditions to fully exploit that resolution
      1.5"/px - 1.2"/px : use a largish telescope (5"-8") of good figure on a vg to premium mount in good seeing conditions
      1.8"/px - 1.5"/px : this is the "standard" high resolution range for most people. Use 4"+ aperture on a good enough mount, <1" RMS guiding (preferably below ~0.7" RMS total)
      1.8"/px - 3"/px : the normal range of resolution where you don't chase close up / detail but rather determine resolution based on focal length, pixel size, sensor size .. all the usual stuff; most mounts will work, as will scopes from 60mm upwards (but one should really move towards 2"/px+ with imprecise mounts and smaller scopes)
      3"/px and above is wide field stuff. It can be said that it is under sampling in some cases - but you are far away from seeing aliasing issues (or blocky stars for that matter - those don't exist really) and you can happily under sample. In fact - at lower resolutions you hit the issue of pixel size and focal length and you can only image with lenses rather than scopes (below 300mm of FL) - which brings in another thing - lenses are not like scopes, they are not diffraction limited and have a blur of their own, so you again stop being under sampled because of the sharpness of the optics - or rather lack of it.
  23. First expression is in mm, second is in inches (I misread the explanation on the page - first expression is PV error, second is RMS error; different numerical constants go for mm / inches - they can be found on the page). D is diameter, F is F/ratio, alpha is field angle and h is linear height in the focal plane. Maybe best to use h, as we have field stop diameters. For 16mm it is 18.2mm, or h = 9.1mm, and for 24mm we have 27.2mm field stop diameter, or h = 13.6mm. So we have 38 * 9.1 / 4^3 = 38 * 9.1 / 64 = ~5.4 waves of coma at the edge of the field with a 16mm 68 degree EP at an F/4 telescope, vs 38 * 13.6 / 6^3 = 38 * 13.6 / 216 = ~2.4 waves of coma at the edge of the field with a 24mm 68 degree EP at an F/6 telescope. Coma would be about half with the F/6 scope. Source: https://www.telescope-optics.net/newtonian_off_axis_aberrations.htm
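Plugging the quoted expression into a short function makes the comparison easy to repeat for other eyepiece / focal ratio combinations (this just mechanizes the arithmetic above):

```python
def coma_waves(h_mm, f_ratio):
    """P-V wavefront error of coma, in waves, at linear field
    height h for a paraboloid - the 38*h/F^3 expression above."""
    return 38.0 * h_mm / f_ratio ** 3

print(round(coma_waves(9.1, 4), 1))   # 5.4 waves: 16mm EP at F/4
print(round(coma_waves(13.6, 6), 1))  # 2.4 waves: 24mm EP at F/6
```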
  24. It's still a pretty much proven mathematical theorem (https://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem) and as such - can't really become outdated. Sure, you can sample at much higher frequencies - but what is the point if you have a band limited signal? You can perfectly reconstruct it by sampling at the Nyquist frequency - no need for higher frequencies. With imaging, sampling at a higher rate actually hurts your SNR, as light is spread over more pixels and each pixel gets less signal as a consequence. Here is a thread that I started:
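The SNR point can be illustrated with a toy per-pixel calculation (the flux and read-noise numbers are made up but representative): the same star flux spread over 2x2 finer pixels gives each pixel a quarter of the signal, while read noise is paid in full by every pixel.

```python
import math

def per_pixel_snr(flux_per_px, read_noise):
    """SNR of one pixel with shot noise plus read noise."""
    return flux_per_px / math.sqrt(flux_per_px + read_noise ** 2)

flux = 400.0   # electrons collected by one 'coarse' pixel (made up)
rn = 3.0       # e- read noise (made up)

coarse = per_pixel_snr(flux, rn)         # sampled near Nyquist
fine = per_pixel_snr(flux / 4, rn)       # same light, 2x finer sampling
print(round(coarse, 1), round(fine, 1))  # 19.8 9.6
```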