
Build your "Unistellar eVscope" for a fraction of the price


vlaiv

Recommended Posts

No, actually, this is not going to be a tutorial on how to do that, although it could develop into one over time - an open source kind of eVscope.

Here is the idea and list of parts:

1. 114mm F/4.5 Newtonian

2. Cheap 2" coma corrector like the SW 2-element x0.9 CC or Baader MPCC

3. 2" helical focuser

4. Raspberry PI

5. Az-GTI mount

6. 50mm finder scope

7. 32mm Plossl eyepiece

8. PVC pipe

9. Your mobile phone :D

The scope is mounted on the AZ-GTi, and the stock focuser is replaced with the 2" helical model (don't throw away the stock focuser). This step will probably need a 3D printed base for the helical focuser and some tweaking of the telescope.

The RPi will operate the mount, the camera and the stacking software, which will be accessed via the phone.

All of this, up until now, is just regular EEVA. Now comes the optical part - I just tested it and it works very well. All we need is a re-imaging lens, and the 50mm finder scope works very well for that.

A 32mm Plossl has about a 27mm field stop. A decently sized phone (I'm using a Xiaomi Mi A1) with a 1080p display will be about x3 that size in height, with 1080 pixels to show. That is something like x3 less resolution than the human eye can resolve, and indeed one can almost see pixels, but the point is to optically "shrink" the phone's width to fit into the field stop.

If we place the phone at an appropriate distance from the 50mm lens, and the eyepiece on the other side again at an appropriate distance (lens formula: 1/phone_distance + 1/eyepiece_distance = 1/focal_length_of_finder, also accounting for the magnification, or rather minification, factor), we will actually get a very nice view of the phone screen filling the FOV. All we need is some sort of tubing and a means to mount everything on the optical axis.
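The spacing arithmetic can be sketched with the thin lens formula. The finder's focal length here is an assumption on my part (180mm as a typical value for a 50mm finder - check yours), and the x2.54 minification factor is the one worked out further down:

```python
# Thin lens spacing for the phone relay: 1/d_obj + 1/d_img = 1/f.
# ASSUMPTIONS: finder objective focal length of 180mm (check yours),
# and the x2.54 minification derived later in the post.

f = 180.0            # finder objective focal length, mm (assumed)
minification = 2.54  # screen height (68.58mm) / field stop (27mm)

# The relayed image is d_img/d_obj times the object size, so for a
# x2.54 reduction we need d_obj = minification * d_img. Substituting
# into the lens formula and solving:
d_img = f * (1.0 + 1.0 / minification)  # lens to eyepiece focal plane, mm
d_obj = minification * d_img            # phone to lens, mm

print(f"phone to lens: {d_obj:.0f} mm, lens to eyepiece: {d_img:.0f} mm")
```

With these assumed numbers the phone ends up roughly 64cm from the lens and the eyepiece's focal plane about 25cm behind it, which is in the same ballpark as the hand held test described later in the thread.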

Do remember to unscrew the finder's eyepiece first.

I just did a test holding everything in hand and managed to get a very nice, almost aberration free image (some shake and tilt were inevitable with the hand held configuration).

There you go - an EEVA "scope", complete with a refractor looking tube, a focuser and a real eyepiece. For those who want more quality - a high DPI OLED screen for the Raspberry Pi might be the solution.

All we need is stacking software that works with INDILIB or INDIGO.

Of course, one does not need to use the above configuration (which is the one used by the eVscope) and is free to use whatever EEVA setup they are already using - the point is just to display the image on a small, high DPI device and view it through the 50mm lens and eyepiece.

 

 


Hi Vlaiv

I look forward to seeing a prototype 🙂

As for stacking software, surely all that is needed for it to work with INDILIB/GO is to have the FITS files dumped to some directory which the stacking software knows about. I have some open source Python stacking software that could be adapted to this scenario (it would need some work, but in principle it is all there, so long as the RPi can handle the required Python dependencies). As well as stacking, it is useful to be able to perform automatic background level estimation and gradient removal for this type of application, to give users a limited but still useful palette of adjustments.

Best

Martin


2 minutes ago, Martin Meredith said:

Hi Vlaiv

I look forward to seeing a prototype 🙂

As for stacking software, surely all that is needed for it to work with INDILIB/GO is to have the FITS files dumped to some directory which the stacking software knows about. I have some open source Python stacking software that could be adapted to this scenario (it would need some work, but in principle it is all there, so long as the RPi can handle the required Python dependencies). As well as stacking, it is useful to be able to perform automatic background level estimation and gradient removal for this type of application, to give users a limited but still useful palette of adjustments.

Best

Martin

I'm not sure I'll build a prototype of that, as I'm not overly interested in having an artificial scope. I was just interested in the principle of operation.

However, I am interested in a slightly different thing that is in principle the same as above, except it allows a wide audience to observe at the same time. Everything would be the same, except the output would go not to a high density display housed in the OTA, but rather to a pico projector.

That way we could have the image projected onto a projection screen - enough for a small audience of up to 10-15 people.

As for stacking software, I'm aware of Jocular, though I'm not sure where it is hosted as an open source project. I have all the algorithms needed, and some fancy additional ones :D

In fact, I would be happy to share them if you want. Here is an example of background detection and background removal.

Here is an image someone posted here on SGL (I found it in my downloads folder) that has a very strong gradient:

[image attachment]

First iteration step estimated background:

[image attachment]

Gradient:

[image attachment]

After removing the gradient in the first iteration, we can do a few additional iterations for better results. The second iteration background is:

[image attachment]

Additional gradient:

[image attachment]

Final image after two iterations:

[image attachment]

The algorithm models the background as a linear (planar) feature, although it can easily be adapted to work with higher order polynomials.
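For anyone who wants to experiment, here is a minimal sketch of the same idea (my own rough illustration, not the code behind the images above): fit a plane a*x + b*y + c to the image by least squares, subtract it, and iterate while clipping bright pixels out of the fit:

```python
import numpy as np

def remove_linear_gradient(img, iterations=2, clip_sigma=2.0):
    """Iteratively fit and subtract a planar background a*x + b*y + c.

    Bright (presumably object) pixels are clipped out of the fit on
    each pass, so the plane tracks the sky rather than the target.
    """
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(h * w)])
    result = img.astype(float)
    mask = np.ones(h * w, dtype=bool)
    for _ in range(iterations):
        coeffs, *_ = np.linalg.lstsq(A[mask], result.ravel()[mask], rcond=None)
        plane = (A @ coeffs).reshape(h, w)
        result = result - plane
        # keep only pixels near or below the residual background next pass
        mask = result.ravel() <= clip_sigma * result.std()
    return result

# Synthetic check: a pure linear gradient should flatten to ~zero.
test_img = 0.5 * np.arange(100)[None, :] + 0.2 * np.arange(80)[:, None] + 10.0
flat = remove_linear_gradient(test_img)
print(float(np.abs(flat).max()))  # effectively zero
```

A real version would want a more robust star mask, but the iterate-fit-subtract loop is the same shape as the two-pass example shown above.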


I suppose that an eyepiece holder for phones would help immensely. You could also capture up to 4K video for planetary imaging (subject to phone storage, if you don't have SD card capability).

 

N.F.


52 minutes ago, nfotis said:

I suppose that an eyepiece holder for phones would help immensely. You could also capture up to 4k video for planetary imaging (subject to phone storage, if you don't have SD card capability)

 

N.F.

I'm not sure we are talking about the same thing, so I'll expand a bit on my idea in case I was not clear in the initial post.

Most people familiar with the eVscope know that this stacking platform uses a display in front of an eyepiece to show the stacking results - thus creating the "sensation" of real observation thru the telescope, with enhanced capabilities.

For those who want to recreate this concept - it is really easy to try. Take an eyepiece - I used a 32mm GSO Plossl - and unscrew the 1.25" barrel from it, exposing the field stop. Take your smartphone and turn it on to show some interesting picture. If your eyepiece does not have an exposed field lens - you can carefully place the field stop onto the phone screen. This should scratch neither the field lens nor the phone surface. If you look thru the eyepiece - you will see the image on the phone quite enlarged.

You can do this with a computer screen as well.

This works because the eyepiece is meant to show us the image formed at the focal plane of the telescope. It is an actual image: if one placed a piece of paper there, the image would show on it - rather tiny, but in focus. The same thing happens with a sensor - the image is rendered on the surface of the sensor, which is why the sensor can record it. Since the eyepiece magnifies whatever sits at its focal plane (the field stop should be at the focal plane of the eyepiece) - if we place a computer or phone screen there, the eyepiece will show that image too, rather nicely and in focus.

Since the 32mm Plossl has a field stop diameter of about 27mm - the image we see will be whatever is on the phone screen within those 27mm. This is a bit of a problem for a nice image, as you will see: phone pixels are not sufficiently dense to render the image without visible pixelation.

I'll explain why. The first thing to understand is that humans resolve down to about one arc minute. If we have a line pair separated by a one arc minute gap - most of us should be able to tell that there are two lines there. This also means that we will see individual pixels / pixelation if the pixels are significantly bigger than one arc minute.

But how big are phone pixels? You can calculate that, but we can also use an internet search. My particular phone has around 420ppi (pixels per inch), I believe. It has a 5.5" diagonal and 1920 x 1080 resolution. The diagonal of the screen contains sqrt(1920^2 + 1080^2) = ~2203 pixels. Divided by 5.5 inches, that gives ~400ppi. I was a bit off, but close enough. We will use the 400ppi figure, as I'm sure it is more accurate.

Ok, so how big are the pixels when we view with the eyepiece directly against the screen? Well, we have a 27mm field stop and 50 degrees of AFOV (or 52 degrees - depending on who you ask :D , but let's go with 50). 27mm is 1.063", and at 400ppi that gives around 425 pixels across the diameter.

That is about 8.5 pixels across one degree of AFOV, or 8.5 pixels per 60 arc minutes. This makes each pixel about 7 arc minutes big. No wonder we can easily see them.
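The numbers above can be reproduced in a few lines, assuming the 5.5" 1080p phone and the 27mm / 50 degree Plossl figures from this post:

```python
# Arithmetic for the eyepiece-directly-on-screen case.
# Assumptions from the post: 5.5" 1080p phone, 32mm Plossl with a
# 27mm field stop and 50 degrees AFOV.

diag_px = (1920**2 + 1080**2) ** 0.5   # ~2203 px along the diagonal
ppi = diag_px / 5.5                    # ~400 pixels per inch

field_stop_mm = 27.0
afov_deg = 50.0

pixels_across = field_stop_mm / 25.4 * ppi           # px spanning the field stop
arcmin_per_pixel = afov_deg * 60.0 / pixels_across   # apparent pixel size

print(f"{ppi:.0f} ppi, {pixels_across:.0f} px across the field stop, "
      f"{arcmin_per_pixel:.1f} arcmin per pixel")
```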

How do we make pixels smaller?

Well, there is a very simple way of doing it, and it involves the contraption from this image:

[image: image formation in a convex lens]

Yes, it is a simple lens (or in our case not quite - we will be using an achromatic doublet).

Note that we can make a large object look smaller by using a lens. We can also make a small object look larger - it just depends on where we place the object and where we want the image to form.

But what will we achieve by using lens?

Well, in my case, having a 5.5" screen means that we have more than 27mm available to show the image. (By the way, the exact same IPS panel is available for purchase online for about $120 for use with the Raspberry Pi, so one does not need to do this with their phone.)

The height of the phone in landscape mode is 2.7" (1080px / 400ppi), which is 68.58mm. We want to squeeze 68.58mm into 27mm, so we need a magnification (or rather minification?) factor of about x2.54.

With proper spacing of the display screen and the 50mm achromatic doublet from the finder scope (I used a finder scope as it was easy to try, works rather well and is cheap - I bet almost anyone doing EEVA has one in a drawer), we can have 1000 pixels across 50 degrees (1080 to be precise, but I think we can safely assume that at least 1000 will be visible - it would take very precise placement to fit all 1080 into the FOV without gaps at the top and bottom).

Now things look a bit better. This is 20 pixels per degree, or each pixel is only 3 arc minutes. Still not what we need, but I don't think we could easily find a higher density display.
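In code, the minification step works out like this (same assumptions as before: 400ppi screen, 27mm field stop, 50 degree AFOV, and conservatively ~1000 visible rows):

```python
# The same arithmetic after the optical "minification": the whole
# 1080px screen height is squeezed into the 27mm field stop.
# Assumes the 400ppi screen and 50 degree AFOV from the post.

screen_height_mm = 1080 / 400 * 25.4   # 68.58 mm of screen at 400 ppi
field_stop_mm = 27.0
afov_deg = 50.0

minification = screen_height_mm / field_stop_mm  # ~x2.54
visible_px = 1000  # assume ~1000 of the 1080 rows actually land in the FOV
arcmin_per_pixel = afov_deg * 60.0 / visible_px  # 3 arcmin, down from ~7

print(f"minification x{minification:.2f}, {arcmin_per_pixel:.1f} arcmin per pixel")
```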

This btw makes me wonder what sort of display was used in original eVscope?

When I wrote that I tried all of this - I actually took the 32mm Plossl, my phone and the 50mm finder scope, and was able to fit almost the full height (in landscape mode) of my phone into the FOV. Just remember to remove the finder scope's eyepiece first (mine just unscrews easily).

The actual diagram of this telescope would look something like this:

[image attachment]

If I were to build this rig, I would also include the following as EEVA equipment:

1. ASI183mc/mm (or another vendor's equivalent) - I would probably go for the cooled version, but one could also use the regular uncooled version to save a few bob.

2. 102mm F/7 ED refractor (~4kg weight)

3. x0.6 Long Perng FF/FR

4. AzGti in EQ mode

This should provide a rather decent FOV and also plenty of different resolutions / magnifications.

At the lowest magnification it could almost fit the whole of the Pleiades into the FOV:

[image attachment]

(the actual FOV is a bit larger, as astronomy.tools does not have a x0.6 focal reducer option, and we should really only consider the circle that can be fitted inside this rectangle)

While at the highest magnification it would have a FOV like this - for small galaxies:

[image attachment]

So how do I figure this?

The ASI183 has 3672px in height. If we do an ROI at native resolution and take only 1/4 of those pixels, that is 918px; stretch that onto our 1000px display and we get that sort of magnification. This is "native" resolution.

But we might want to use a 1/2 ROI instead - this means using super pixel mode rather than interpolating - and we "squeeze" 1836 camera pixels onto the 1000 display pixels, for this sort of FOV:

[image attachment]

In the end, we can use super pixel mode and bin that x2, squeezing the whole 3672px of the camera onto the 1000px of the display, for this sort of FOV:

[image attachment]

So as you can see - you not only get an enhanced scope, you also get 3 eyepieces with 3 different magnifications to go with it :D
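The three modes can be tabulated quickly. The 2.4um pixel size of the IMX183 and the ~428mm effective focal length (102mm F/7 with the x0.6 reducer) are my own assumed figures for this sketch:

```python
# FOV for the three readout modes, mapped onto a ~1000px display.
# ASSUMPTIONS (mine, not from the post): 2.4um IMX183 pixels and a
# ~428mm effective focal length (102mm F/7 with the x0.6 reducer).

pixel_um = 2.4
focal_mm = 102 * 7 * 0.6        # 428.4 mm effective focal length
display_px = 1000

arcsec_per_px = 206.265 * pixel_um / focal_mm   # ~1.16"/px at the sensor

modes = [("1/4 ROI, native", 918),
         ("1/2 ROI, super pixel", 1836),
         ("full height, super pixel + bin x2", 3672)]

for name, sensor_px in modes:
    fov_arcmin = sensor_px * arcsec_per_px / 60.0
    print(f"{name}: {fov_arcmin:.1f} arcmin of sky on {display_px} display px")
```

With these assumptions the three modes span roughly 18, 35 and 71 arc minutes of sky in height, which is the "three eyepieces" effect described above.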

BTW, @nfotis, hope this explains things a bit better - and as you can see, it has nothing to do with planetary observation. Although, we could use the same approach to do lucky imaging and automatic sharpening? Not sure how one would make the sharpening part automatic, though.


Just a quick update - I had very rewarding quick session on M31 a moment ago :D

I opened an image of M31 from its wiki page - a very nice rendition with Ha blended in - placed my phone on the floor, and held the finder scope against the desk for added stability.

Some notes:

- when stable, the image is really sharp; there is some pincushion distortion that becomes evident if straight lines enter the FOV (the edge of the image, for example)

- about 70cm is enough to match the phone width (portrait mode) to the eyepiece FOV (my desk is about 70cm from the floor and I held the finder scope pressed against it), with the eyepiece about 5cm or so behind the back of the finder scope

- I was rather hard pressed to make out single pixels in the astronomical image. On the phone UI it took some effort, but they could be seen - I guess because of the graphics used. With a bit more zoom (finder lowered about 10cm below the desk) - pixels started to become obvious even in the astro image.

- a heavily processed image looks rather artificial thru the eyepiece. It really needs just basic processing to look pleasing: finding the noise floor, from that finding the max brightness (dictated either by the noise floor distribution or by the max brightness of the target), and then applying a gamma of 2.4 for a natural look.
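That last point could be sketched like this - my own rough reading of "basic processing", with a hypothetical basic_stretch helper; a real version would estimate the noise floor more carefully than a plain median:

```python
import numpy as np

def basic_stretch(img, gamma=2.4):
    """Noise floor from the median, white point from the maximum,
    then a gamma curve - one simple reading of 'basic processing'."""
    img = img.astype(float)
    floor = np.median(img)                       # crude noise floor estimate
    span = max(float(img.max()) - floor, 1e-12)  # guard against flat frames
    stretched = np.clip((img - floor) / span, 0.0, 1.0)
    return stretched ** (1.0 / gamma)            # gamma 2.4 display curve

# Tiny demo frame: background at 10, two brighter "object" pixels.
demo = np.array([[10.0, 10.0, 10.0],
                 [10.0, 60.0, 110.0]])
out = basic_stretch(demo)
print(out)
```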


Sounds an interesting project. I hadn't realised the eVscope used an electronic eyepiece - I thought it just displayed out to a tablet. Maybe they use the type of display found in modern cameras, for the improved resolution in a small panel?


3 minutes ago, DaveL59 said:

Sounds an interesting project, I hadn't realised the EVscope used an electronic eyepiece, thought it just displayed out to a tablet. Maybe they use the type of display in modern cameras for the improved resolution in a small panel?

Indeed it does:

[image attachment]

I doubt it uses the above system; it probably uses an eyepiece and a small display, something like an inch or two. It really needs a high DPI in order to show a good image. However, I believe the limiting factor in the eVscope is not the display - it uses the IMX224 sensor (same as the ASI224), which has 1280 x 1024 pixels.

This means that regardless of the display used, the resolution of the view will be about the same as proposed above - ~1000px per 50 degrees, or a pixel size of about 3 arc minutes.

It also has the ability to save the image and show it on a tablet - this will also be available in the "open source" version :D

If you are referring to the LCD displays on modern DSLR/mirrorless cameras - I'm not sure those have very high DPI? Do you have an example?

A quick search for Canon M6 screen specs returned this:

[image attachment]

It has a 3:2 screen, so 1250 x 832 or something like that. The diagonal is ~1501 pixels, and at 3" that is about 500ppi. That is a bit higher, but not much higher, than the 400ppi of the display that is both cheap and already adapted for use with the RPi.

If we search for high PPI devices, we get this:

[image attachment]

That is double the resolution of the above mentioned screen. Ideally we would want something like 1200ppi, ready to be used on the Raspberry Pi and not very big - about 4-5" in diagonal. Not sure we can find such a device, but I think for the time being 400ppi will suffice. Really, in the recent test, if one was not looking for pixels - they were not obvious, and were only seen when the graphics emphasized them: sharp edges and saturated colors. In a nice astronomical image with smooth transitions - I doubt they will be seen.

Another alternative would be to use a narrower AFOV eyepiece - like 30 degrees - but I guess that would be rather unpleasant.


I don't have any specs - I was just thinking of what I'd read in reviews of some of Sony's SLT cameras. The A77 SLT's finder is listed as a "2.4M dot OLED viewfinder", so a reasonable resolution. Of course, if there are no optics where the eyepiece is in the eVscope, then a larger panel could be dropped in, as per your suggestion, to give a better impression of looking at the real object directly via the eyepiece.


6 minutes ago, DaveL59 said:

I don't have any specs, was just thinking to what I'd read in reviews of some of Sony's SLT cameras. The A77 SLT says the finder is "2.4M dot OLED viewfinder" so a reasonable resolution. Of course if there's no optics where the eyepiece is in the eVscope then a larger panel could be dropped in as per your suggestion to give a better impression of looking at the real object directly via the eyepiece.

[image attachment]

I'm not sure what the FOV of an electronic viewfinder is? It also means that the above 1.04M dot display does not have 500ppi, as each R, G and B dot is counted separately.
