
Everything posted by vlaiv

  1. Very interesting point, and indeed I think you are right. However, I have an observation to make - I'm not sure if I'm right about this as it just sprang to mind: the Sun is much more massive and much further away, and in fact it accounts for only 46% of the Moon's tidal influence according to this article: https://oceanservice.noaa.gov/education/tutorial_tides/tides02_cause.html summarized in this image: However, if we look at three points - one at the center of the Earth and one on each side - those distances differ much more relative to the distance to the Moon than they do relative to the distance to the Sun. Not sure if I put that correctly, so here is another attempt - the radius of the Earth is a much larger fraction of the distance to the Moon than of the distance to the Sun. This means that tides due to the Sun should be much smaller, even though the Sun exerts about half the pull of the Moon, right?
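The ratio can be sanity-checked directly: tidal (differential) acceleration scales as 2GMr/d^3, so G and the Earth-radius factor cancel when comparing Sun and Moon. A minimal sketch in Python, using standard values for the masses and mean distances:

```python
# Tidal (differential) acceleration scales as 2*G*M*r / d^3,
# so G and Earth's radius cancel in the Sun/Moon ratio.
M_sun = 1.989e30   # kg
M_moon = 7.348e22  # kg
d_sun = 1.496e11   # m, mean Earth-Sun distance
d_moon = 3.844e8   # m, mean Earth-Moon distance

ratio = (M_sun / d_sun**3) / (M_moon / d_moon**3)
print(f"Sun tide / Moon tide = {ratio:.2f}")  # ~0.46, matching the NOAA figure
```

The d^3 in the denominator is exactly the "Earth radius vs distance" argument above: the Sun's far greater pull is more than cancelled by how little that pull varies across the Earth's diameter.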
  2. I think you are overthinking it. However you explain it - the Moon pulling on the water, the Moon making the Earth pull less on the water, or water just following a straight path in bent spacetime - you are essentially describing the same phenomenon. Perhaps the third option would be "the most precise" explanation, as far as our understanding goes at this point. There is more to the whole story - low/high tide changes every 12h rather than 24h - this means there is a "counter bulge" on the opposite side. That one is due to the system having a certain springiness in it; it is just oscillation in sync with the motion of the Moon (similar to what happens when marching on a bridge or wind blowing over structures - ah, yes, resonant frequency is the term). As for water moving away from the Earth - there is spinning, but there is also pressure. Water is not hugely compressible, but it does compress somewhat and there are enormous pressures at depth. If we were to suddenly turn off gravity - all that ocean water would bounce off the surface of the Earth into space. The mass of the water is not the same at sea level - because a) sea level is not at the same height over the surface of the Earth, and b) the surface of the Earth is not the same distance from the center of the Earth.
  3. I was thinking the same thing. How much does print speed influence all of this? My reasoning is that one can lower fan speed when printing slowly and let layers bond better together. The time spent printing more slowly would be about the same as the preparation and annealing above (at least for small parts).
  4. I think that TS made a copy/paste error from their RC line of scopes. They even have a reflectivity graph on their website for the 8" RC model - dielectric coatings:
  5. There is no real answer to that question. I'm going to say - yes, that is rather normal under some circumstances. In any case, there are some other things that can be said - for example, the QHY183c is vastly oversampled. Both images are processed / stretched more than the data allows. Both have very high background levels.
  6. That is a lot of hours and I'm sure the SNR of the resulting image must be high, however, you seem to have lost the "bridge" and the background looks too compressed / flat. Colors are also way too saturated in my opinion. Btw, the "bridge" that I'm talking about is the apparent brighter connection between M110 and M31, as seen in this image (crop) from the Wikipedia M31 page: Same region crop from my M31 image - again this part is clearly visible, although total exposure is nowhere near yours (16h total in LRGB - 4h each).
  7. Not sure why you decided to crop it like that? In any case, the data is rather good. There are some issues with flats, but I did not try to correct those. Here is one way it can be processed:
  8. That depends on the distance between primary and secondary, and consequently on focus position. Here is a quick diagram: The secondary in the position on the left will not reduce the effective aperture of the primary. The secondary on the right will reduce the effective aperture of the primary. If you want to know whether your primary will be stopped down - just measure your focus position and the diameter of the tube. For example, you have a 245mm primary, and let's say you have a 255mm tube diameter. You also have the focus position at about 100mm above the tube. This means the distance from the focal plane to the center of the secondary is 100 + 255/2 = 227.5mm. You have an F/6.3 scope, so the size of the beam at the secondary is 227.5 / 6.3 = 36.11mm. In this case, using a 46mm secondary would not stop down your aperture, but it would reduce the fully illuminated field in comparison to a 50mm one. Of course, for your exact case you would need to measure the distance between the secondary center and the focal plane - the above is just an example.
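To make that worked example reusable, here is a minimal sketch of the same calculation (the function name and layout are mine; the numbers are the ones from the post):

```python
def beam_size_at_secondary(focuser_height_mm, tube_diameter_mm, f_ratio):
    """On-axis light cone diameter where it crosses the secondary.

    The secondary sits on the tube axis, so its distance to the focal
    plane is the focuser height plus half the tube diameter.
    """
    secondary_to_focus = focuser_height_mm + tube_diameter_mm / 2
    return secondary_to_focus / f_ratio

# Example from above: focus 100mm above a 255mm tube, F/6.3 scope
print(beam_size_at_secondary(100, 255, 6.3))  # ~36.1mm -> a 46mm secondary won't stop it down
```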
  9. Hi @c3dr1c What I'm seeing is just regular SkyWatcher sample-to-sample variation and periodic error in line with most mounts of that class. The periodic error seems to be around 20" or so P2P. As I see it - you can return the mount and try another one in that class - but there is no guarantee that you won't get similar issues (some mounts are fine, and some are like the one you have now). The other option is to tune the mount yourself. Of course, there is a third option - stretch your budget and get a better mount. I also had issues with my HEQ5 mount. Like you - I had a very bad spike in RA that repeated regularly and it had more than 5" amplitude. It was due to a broken housing on one of the bearings: So I stripped down my mount, replaced all the bearings, replaced the grease and tightened everything properly. I also did a belt mod (you don't have to do that on your mount since you already have a belt transmission). Here is an example of a PE analysis that I did at some point: You will see that I had something like 46" P2P periodic error prior to the belt mod.
  10. Indeed, sorry for the out-of-place mention, I corrected a wrong autocomplete
  11. I would say that over 2"/px is still a sensible value, and over 4"/px can be as well - it really depends on what you are doing. You can't shoot M31 with a regular camera and a single panel if you don't go to 2-3"/px. M31 is over 3 degrees long, that is 3 x 60 x 60 = 10800 arc seconds. If your camera has something like 3000 x 2000 px, you are bound to use ~3"/px to get it in frame. One can always do a 3x3 panel over M31 at 1"/px, but that is going to take some time to complete
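That framing check is a one-liner; a minimal sketch, assuming the target's long axis in degrees and the sensor's long axis in pixels:

```python
def coarsest_sampling_to_fit(target_size_deg, sensor_px):
    """Arcsec/px needed (at minimum) to fit the target on the sensor."""
    return target_size_deg * 3600 / sensor_px

# M31 at ~3 degrees on a 3000px-wide sensor:
print(coarsest_sampling_to_fit(3, 3000))  # 3.6 "/px, in line with the ~3"/px above
```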
  12. I'm with @AstroTim on this one. An SCT has a focal length that changes with focus position - the "further out" you focus (on a regular telescope that would be racking the focuser further out; in the case of an SCT it means the two mirrors get closer together), the more the focal length increases. It is not uncommon for an F/10 SCT to operate at F/12 or more. This is particularly pronounced when using 2" accessories (longer FL) and focal reducers. Focal reducers shift the focal point towards the scope - this means that one needs to push it "further out" to reach focus in the standard configuration (diagonal + eyepiece) - again extending the focal length of the scope. Take a look here:
  13. I'm not sure what the FOV of the electronic viewfinder is? It also means that the above 1.04M-dot display does not have 500ppi, as each R, G and B dot is counted separately.
  14. I found this interesting comparison online: In theory, with lucky imaging technique and processing style, even a 4" scope should start to resolve this target. There are 4 nice stars around this target that can be used to judge how good a particular frame is (FWHM value of each).
  15. Indeed it does: I doubt it uses the above system; it probably uses an eyepiece and a small display, something like an inch or two. It really needs to have high DPI in order to show a good image; however, I believe that the limiting factor in the eVscope is not the display. It uses the IMX224 sensor (same as the ASI224) which has 1280 x 1024 pixels. This means that regardless of the display used, the resolution of the view will be about the same as proposed above - ~1000 px per 50 degrees, or a pixel size of about 3 arc minutes. It also has the ability to save the image and show it on a tablet - this will also be available in the "open source" version. If you are referring to LCD displays on modern DSLR/mirrorless type cameras? I'm not sure those have very high DPI? Do you have any example? A quick search for Canon M6 screen specs returned this: It has a 3:2 screen, so 1250 x 832 or something like that. The diagonal is ~1501 pixels and the screen is 3", so about 500 ppi. That is a bit higher, but not much higher, than the 400ppi of a display that is both cheap and already adapted for use on an RPI. If we do a search on high-PPI devices we get this: That is double the resolution of the above mentioned screen. Ideally we would want something like 1200ppi, ready to be used on a Raspberry Pi and not very big in size - about 4-5" in diagonal. Not sure we can find such a device, but I think for the time being, 400ppi will suffice. Really, in a recent test, if one was not looking for pixels - they were not obvious and were only seen when the graphics emphasized them - sharp edges and saturated colors. In a nice astronomical image with smooth transitions - I doubt they will be seen. Another alternative would be to use a narrower-AFOV eyepiece - like 30 degrees, but I guess that would be rather unpleasant.
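All the ppi figures in this thread come from the same relation - diagonal pixel count divided by the diagonal in inches. A minimal sketch (`ppi` is just my helper name):

```python
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch from resolution and screen diagonal."""
    return math.hypot(width_px, height_px) / diagonal_in

print(ppi(1920, 1080, 5.5))  # ~400 - the phone discussed above
print(ppi(1250, 832, 3.0))   # ~500 - the Canon M6 screen estimate
```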
  16. Just a quick update - I had a very rewarding quick session on M31 a moment ago. I opened an image of M31 from the wiki page - a very nice rendition with Ha combined, placed my phone on the floor and held the finder scope against the desk for added stability. Some notes: - when stable, the image is really sharp; there is some pincushion distortion that is evident if straight lines enter the FOV (edge of image for example) - about 70cm is enough to match phone width (portrait mode) to the eyepiece FOV (my desk is about 70cm from the floor and I held the finder scope pressed against the desk), and about 5cm or so was the distance of the eyepiece from the back of the finder scope - I was rather hard pressed to make out single pixels in the astronomical image. On the phone UI it took some effort, but they could be seen - I guess this is because of the graphics used. At a bit more zoom (finder lowered about 10cm below the desk) - pixels started to be obvious even in the astro image. - A heavily processed image looks rather artificial through the eyepiece. It really needs just basic processing to look more pleasing - finding the noise floor, from that finding max brightness (either dictated by the noise floor distribution or by the max brightness of the target) and then applying gamma of 2.4 for a natural look.
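A minimal sketch of that "basic processing" recipe in Python/numpy - the percentile used to stand in for a proper noise-floor estimate is my assumption:

```python
import numpy as np

def basic_stretch(img, gamma=2.4, floor_percentile=20):
    """Subtract an estimated noise floor, normalize, encode with gamma."""
    floor = np.percentile(img, floor_percentile)     # crude noise-floor stand-in
    out = np.clip(img - floor, 0, None).astype(float)
    out /= out.max()            # max brightness set by the target
    return out ** (1 / gamma)   # display gamma of 2.4 for a natural look
```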
  17. I'm not sure we are talking about the same thing, so I'll expand a bit on my idea if I was not clear in the initial post. Most people familiar with the eVscope know that this stacking platform uses a display in front of an eyepiece to show stacking results - thus creating the "sensation" of real observation through the telescope with enhanced capabilities. For those who want to recreate this concept - it is really easy to do so. Take an eyepiece, I used a 32mm GSO plossl, and unscrew the 1.25" barrel from it, exposing the field stop. Take your smartphone and turn it on to show some interesting picture. If your eyepiece does not have an exposed field lens - you can carefully place the field stop onto the phone screen. This should scratch neither the field lens nor the phone surface. If you look through the eyepiece - you will see the image on the phone quite enlarged. You can do this with a computer screen as well. This works because an eyepiece is supposed to show us the image formed at the focal plane of the telescope. It is an actual image, and if one placed a piece of paper there - that image would show; it would be rather tiny, but it would be in focus. The same thing happens with a sensor - the image is rendered on the surface of the sensor and this is why the sensor can record it. Since an eyepiece can magnify an image placed at its focal plane (the field stop should be at the focal plane of the eyepiece) - if we place a computer or phone screen there, the eyepiece will show that image as well, rather nicely in focus and all. Since the 32mm plossl has a field stop diameter of about 27mm - the image that we see will be whatever is on the phone screen within those 27mm. This is a bit of a problem for a nice image, as you will see. Phone pixels are not sufficiently dense to render a nice image without pixelation showing. I'll explain why. The first thing to understand is that humans resolve down to one arc minute. If we have a line pair separated by a one arc minute gap - most of us should be able to tell that there are two lines there. This also means that we will see individual pixels / pixelation if pixels are significantly bigger than one arc minute. But how big are phone pixels? You can calculate that, but we can also use an internet search. My particular phone has around 420ppi (pixels per inch), I believe. It has a 5.5" diagonal and 1920 x 1080 resolution. The diagonal of the screen will contain sqrt(1920^2 + 1080^2) = ~2203 pixels. This divided by 5.5 inches gives ~400ppi. I was a bit off, but close enough. We will use the 400ppi figure as I'm sure it is more accurate. Ok, so how big are the pixels when we are viewing with the eyepiece directly against the screen? Well, we have a 27mm field stop and we have 50 degrees of AFOV (or 52 degrees - depending on who you ask, but let's go with 50). 27mm is 1.063" and at 400ppi that will give around 425 pixels across the diameter. That is about 8.5 pixels across one degree of AFOV, or 8.5 pixels per 60 arc minutes. This makes a pixel 7 arc minutes big. No wonder we can easily see it. How do we make pixels smaller? Well, there is a very simple way of doing it, and it involves the contraption from this image: Yes, it is a simple lens (or in our case we won't be using a simple lens, we will be using an achromatic doublet). Note that we can make a larger object look smaller by using a lens. We can also make a small object look larger - it just depends on where we place the object and where we want the image to form. But what will we achieve by using a lens?
Well, in my case - and by the way, the same exact IPS panel is available for purchase online for about $120 to be used with a Raspberry Pi, so one does not need to do this with their phone - having a 5.5" screen means that we can have more than 27mm used to show the image. The height of the phone in landscape mode is 2.7" (1080px / 400ppi). In millimeters that is 68.58mm. We want to squeeze 68.58mm into 27mm, so we need a magnification (or rather minification?) factor of about x2.54. With proper spacing of the display screen and the 50mm achromat doublet from a finder scope (I just used a finder scope as it was easy to try, works rather well and is cheap - I bet almost anyone doing EEVA will have one in a drawer) we can have 1000 pixels in 50 degrees (1080 to be precise, but I think we can safely assume that at least 1000 will be visible - it would take very precise placement to have all 1080 fit in the FOV without gaps at top and bottom). Now things look a bit better. This is 20 pixels per degree, or each pixel is only 3 arc minutes. Still not what we need, but I don't think we could find a higher density display that easily. This btw makes me wonder what sort of display was used in the original eVscope? When I wrote that I tried all of this - I actually took the 32mm Plossl, my phone and a 50mm finder scope and was able to fit almost the full height (in landscape mode) of my phone into the FOV. Just remember to remove the finder scope eyepiece (mine just unscrews easily). An actual diagram of this telescope would look something like this: If I were to build this rig, I would also include the following as EEVA equipment: 1. ASI183mc/mm (or other vendor) - I would probably go for the cooled version, but one could also use the regular non-cooled version to save a few bob. 2. 102mm F/7 ED refractor (~4kg weight) 3. x0.6 Long Perng FF/FR 4. AzGti in EQ mode. This should provide a rather decent FOV and also plenty of different resolutions / magnifications. At lowest magnification it could almost fit the whole Pleiades into the FOV (the actual FOV is a bit larger, but astronomy tools does not have a x0.6 focal reducer, and we should really only observe the circle that can be fitted inside this rectangle). While at highest magnification it would actually have a FOV like this - for small galaxies: So how do I figure this? The ASI183 has 3672px in height. If we do an ROI at native resolution and take only 1/4 of those pixels, that is 918px, and stretch that to our 1000px display - we get that sort of magnification. This is "native" resolution. But we might want to use a 1/2 ROI instead - that will mean using super pixel mode instead of interpolating, and we "squeeze" 1836 of the camera's pixels onto 1000 display pixels for this sort of FOV: In the end we can use super pixel mode and bin that x2, so we squeeze the whole 3672px of the camera onto 1000px of display for this sort of FOV: So as you see - you not only get an enhanced scope, you also get 3 eyepieces with 3 different magnifications to go with it. BTW, @nfotis hope this explains things a bit better - and as you see, it has nothing to do with planetary observation, although we could use the same approach to do lucky imaging and automatic sharpening? Not sure how one would develop the sharpening part to be automatic, though.
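The pixel-size bookkeeping from this and the previous post fits in a few lines; a minimal sketch assuming a 50-degree AFOV eyepiece with a 27mm field stop (`pixel_arcmin` is my name for it):

```python
def pixel_arcmin(ppi, field_stop_mm=27.0, afov_deg=50.0, minification=1.0):
    """Apparent size of one display pixel through the eyepiece.

    minification > 1 means a relay lens squeezes more of the screen
    into the field stop (x2.54 in the finder-objective setup above).
    """
    pixel_mm = 25.4 / ppi / minification  # effective pixel pitch at the field stop
    px_across_fov = field_stop_mm / pixel_mm
    return afov_deg * 60 / px_across_fov  # arc minutes per pixel

print(pixel_arcmin(400))                     # ~7' - screen pressed against the eyepiece
print(pixel_arcmin(400, minification=2.54))  # ~2.8' - with the x2.54 relay, the ~3' above
```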
  18. It was quite long ago and if I remember correctly - I think I did each one separately - until I got the best looking image. Btw, I've managed to find the theoretical value since, and it shows that I was very close with this simulation and measurement. The expression for the cutoff frequency, found here https://en.wikipedia.org/wiki/Spatial_cutoff_frequency is f_cutoff = 1 / (lambda * F#), where lambda is expressed in millimeters and F# is the telescope F/number. Since the airy disk radius is given as X = 1.22 * lambda * F#, we can see that the airy disk radius is 1.22 times the period of the maximum spatial frequency and 2.44 times the critical sampling step, while the airy disk diameter is 4.88 times the sampling step - this is very close to my measured value of 4.8.
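A minimal sketch of those relations, with lambda in millimeters so the frequency comes out in cycles/mm (the F/8 value is just an example):

```python
wavelength_mm = 550e-6  # 550nm green light, in mm
f_number = 8.0          # example F/8 scope

cutoff = 1 / (wavelength_mm * f_number)        # spatial cutoff frequency, cycles/mm
sampling_step = wavelength_mm * f_number / 2   # critical (Nyquist) sampling step, mm
airy_radius = 1.22 * wavelength_mm * f_number  # X = 1.22 * lambda * F#

print(cutoff)                           # ~227 cycles/mm
print(airy_radius / sampling_step)      # 2.44 - airy radius vs sampling step
print(2 * airy_radius / sampling_step)  # 4.88 - airy diameter vs sampling step
```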
  19. Both scopes are operating at F/4.7 - F/4.8, right? That should give roughly the same brightness. Are you using the binoviewer in the 24" and not in the 15"? Viewing with both eyes can make the image appear brighter, although each eye gets only 50% of the light.
  20. This reminds me of another "well established truth" - that the F/ratio of the scope determines how fast one will acquire an image. Both lack a crucial piece of information. In this case it is the magnification used. 4" of aperture can create the same surface brightness of the projection on the retina as 8" by using suitable magnification (in the case of F/ratio, the missing piece is the sampling rate or pixel size).
  21. I'm not sure I'll build a prototype of that as I'm not overly interested in having an artificial scope. I was just interested in the principle of operation. However, I am interested in a slightly different thing that is in principle the same as above, except it allows a wide audience to observe at the same time. Everything would be the same, except the output would not go to a high density display housed in the OTA but rather to a pico projector. That way we could have the image projected on a projection screen - enough for a small audience of up to 10-15 people. As for stacking software, I'm aware of Jocular; not sure where it is hosted as an open source project. I have all the algorithms needed and some fancy additional ones. In fact, I would be happy to share them if you want. Here is an example of background detection and background removal: Here is one image someone here on SGL posted (I found it in my downloads section) that has a very strong gradient: First iteration estimated background: Gradient: After removal of the gradient in the first iteration we can do a few additional iterations for better results, so the second iteration background is: Additional gradient: Final image after two iterations: The algorithm models the background as a linear feature, although it can easily be adapted to work with higher order polynomials.
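A minimal sketch of the linear-background idea - fit a plane a*x + b*y + c to the image by least squares and subtract it. A real version would mask out stars and iterate as described above; higher order polynomials just mean more columns in the design matrix:

```python
import numpy as np

def remove_linear_background(img):
    """Model the background as a plane a*x + b*y + c and subtract it.

    Single pass, no star masking - a production version would
    sigma-clip bright pixels and iterate, as described above.
    """
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    background = (A @ coeffs).reshape(h, w)
    return img - background, background
```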
  22. Happened to me only once - very damp conditions, but I did get some dew on the secondary.
  23. Depends on the target. There are two different types of targets - extended targets and point sources. Any target is a point source provided it is not sensed by more than one receptor. This holds for both visual and imaging - the target is spread over a single pixel or a single "visual cell" (here it could actually be a few real cells - not sure how this works in combination with the brain). In simple terms - a point source is a star that has not been resolved into an airy disk. Everything else is an extended source. An extended source keeps the same brightness if you increase aperture but keep the exit pupil the same. In order to understand why - one just needs to see the condition for keeping the same exit pupil with increasing aperture. The exit pupil is the image of the aperture, and its size is the size of the aperture divided by the magnification. If we increase the aperture, to keep the exit pupil constant, we need to increase the magnification by the same amount. If we have a 100mm scope and a 1mm exit pupil, this means that we are at x100 magnification. If we increase the aperture to 200mm, in order to keep the 1mm exit pupil, we need to change the magnification to x200. This change in magnification means that the light on the sensor (eye or camera sensor) is spread around more, and in fact - it is now over a 4 times larger surface (if we increased aperture and magnification by x2). The surface of the aperture changed by a factor of x4 and the area that the light gets spread over also increased x4 - so the brightness per unit sensor area remains the same. We see the same brightness and the sensor captures the same number of photons per pixel. With point sources - the light does not get spread and it stays "in the same place", since the image is not resolved. With a point source, brightness is increased by an increase of aperture - up to the moment when the point source gets resolved and is no longer a point source.
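The bookkeeping in that 100mm/200mm example can be written out directly; a minimal sketch:

```python
def surface_brightness_ratio(aperture1_mm, aperture2_mm, exit_pupil_mm):
    """Extended-object surface brightness ratio at a fixed exit pupil.

    Light grasp scales with aperture area, but the magnification needed
    to hold the exit pupil spreads it over an area scaling the same way.
    """
    mag1 = aperture1_mm / exit_pupil_mm  # x100 for 100mm at 1mm exit pupil
    mag2 = aperture2_mm / exit_pupil_mm  # x200 for 200mm at 1mm exit pupil
    light_ratio = (aperture2_mm / aperture1_mm) ** 2  # x4 more light gathered
    spread_ratio = (mag2 / mag1) ** 2                 # x4 larger projected area
    return light_ratio / spread_ratio

print(surface_brightness_ratio(100, 200, 1.0))  # 1.0 - brightness unchanged
```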
  24. No, actually, this is not going to be a tutorial on how to do that, although it could develop into one over time - an open source kind of eVscope. Here is the idea and the list of parts: 1. 114mm F/4.5 Newtonian 2. Cheap 2" coma corrector like the SW 2-element x0.9 CC or Baader MPCC 3. 2" helical focuser 4. Raspberry Pi 5. Az-GTI mount 6. 50mm finder scope 7. 32mm plossl eyepiece 8. PVC pipe 9. Your mobile phone. The scope is mounted on the AzGti and the stock focuser is replaced with the 2" helical focuser (don't throw away the stock focuser). This step will probably need a 3d printed base for the helical focuser and some tweaking of the telescope. The RPi will operate the mount, the camera and the stacking software, which will be accessed via phone. All of this up until now is just regular EEVA. Now comes the optical part - I just tested it and it works very well. We just need a re-imaging lens, and a 50mm finder scope works very well. The 32mm Plossl has about a 27mm field stop. A decently sized phone (I'm using a Xiaomi Mi A1) with a 1080p display will be about x3 that size in height, with 1080 pixels to display. This is something like x3 less resolution than the human eye can resolve, and indeed - one can almost see pixels, but the point is to "shrink" the phone by width to fit into the field stop. If we place our phone at an appropriate distance from the 50mm lens and use the eyepiece on the other side, again at an appropriate distance (lens formula: 1/phone_distance + 1/eyepiece_distance = 1/focal_length_of_finder, while also accounting for the magnification, or rather minification, factor - see the sketch below), we will actually get a very nice view of the phone screen filling the FOV. All we need is some sort of tubing and a means to mount everything so it is on the optical axis. Do remember to unscrew the finder eyepiece first. I just did a test holding everything in hand and I managed to get a very nice, almost aberration free image (some shake and tilt were inevitable with the hand held configuration). There you go - an EEVA "scope" - complete with a refractor looking scope, a focuser and a real eyepiece. For those who want more quality - a high dpi OLED screen for the Raspberry Pi might be the solution. All we need is stacking software that works with INDILIB or INDIGO. Of course, one does not need to use the above configuration (which is the one used by the eVscope) and is free to use whatever EEVA setup they are already using - the point is just to display the image on a small high DPI device and use a 50mm lens and eyepiece to view it.
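A minimal sketch of the spacing calculation from that lens formula, for the x2.54 minification worked out earlier. The ~180mm focal length is my assumption for a typical 50mm finder objective; reassuringly, the resulting phone distance is close to the ~70cm desk-height figure from the earlier session report:

```python
def conjugate_distances(focal_length_mm, minification):
    """Spacings for 1/phone_distance + 1/eyepiece_distance = 1/f
    with the image scale reduced by the given minification factor."""
    eyepiece_distance = focal_length_mm * (1 + 1 / minification)  # lens to field stop
    phone_distance = minification * eyepiece_distance             # lens to phone screen
    return phone_distance, eyepiece_distance

do, di = conjugate_distances(180, 2.54)  # assumed ~180mm finder objective FL
print(f"phone at {do:.0f}mm, field stop at {di:.0f}mm")  # ~637mm / ~251mm
```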