
The software lens


NickK


Yes. You read that right. No physical lens - just a CCD sensor and a hole (not a pinhole either).

The idea is that a software mathematical model of the light passing through the hole is used to reconstruct the image.

Where do you focus? Well, the mathematical model essentially provides the focusing within it.

This has been done for microscopes - the question is.. could the same be done for telescopes?

At 2130.. I've just finished a work call.. the work day has ended.. but it's left my brain pondering beyond the box.. 


Is this like Lytro?

A camera that stores vector data for the light so that focus can be changed after shooting.

Could always strap one to a scope and find out ;)



This sounds great. I have often wondered whether software could transform the light from something like a 4" non-achromatic lens at, say, f/1 into a pin-sharp image. I know my camera already has lens-correction data that goes part of the way.

Alan


http://techland.time.com/2013/06/03/finally-a-camera-without-a-lens-and-a-sensor-the-size-of-a-pixel/

And this was the original item:

http://www.technologyreview.com/view/515651/bell-labs-invents-lensless-camera/

In this case they use an LCD screen to section the image: sampling each square, applying deconvolution, then moving to the next square and building up an image.
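As a toy sketch of the idea (the sizes and the random on/off masks here are illustrative assumptions, not details of the Bell Labs design): each LCD mask pattern yields one summed-intensity measurement from a single sensor, and the scene is recovered in software by inverting the resulting linear system.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                       # toy 8x8 scene
scene = rng.random(n * n)   # unknown image, flattened

# Each "exposure": the LCD displays a random on/off mask and a
# single sensor records the total light passing through it.
m = n * n                   # take as many exposures as pixels
masks = rng.integers(0, 2, size=(m, n * n)).astype(float)
measurements = masks @ scene

# Recover the scene by inverting the linear system in software --
# the "focusing" lives entirely in this reconstruction step.
recovered, *_ = np.linalg.lstsq(masks, measurements, rcond=None)

print(np.max(np.abs(recovered - scene)))  # tiny: exact recovery up to rounding
```

With fewer exposures than pixels, the real systems lean on compressed-sensing solvers instead of plain least squares, exploiting the fact that natural images are sparse in some basis.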

The system is similar to the Lytro system in that it breaks the image into subsections, but Lytro uses micro-lenses (although the micro-lenses are not one per pixel). In short, it uses a lens array as an array, exploiting the properties of the technique so that the focal plane (focus) can be adjusted at any time from the image data by processing.

It would be possible to do the same with a micro-lens array from Thorlabs and then use a GPU to process the image.
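A toy 1-D sketch of the shift-and-add refocusing such a micro-lens array enables (all numbers here are hypothetical): each sub-aperture sees the subject at a slightly different parallax, and choosing the per-view shift after capture is what "focusing in software" means.

```python
import numpy as np

# Toy 1-D light field: views[u] is the image seen through
# sub-aperture u of a (hypothetical) micro-lens array.
n_views, n_pix = 5, 64
x = np.arange(n_pix)
true_shift = 2  # pixels of parallax per aperture step for our subject

views = np.array([
    np.exp(-0.5 * ((x - 32 - true_shift * (u - n_views // 2)) / 2.0) ** 2)
    for u in range(n_views)
])

def refocus(views, shift_per_view):
    """Shift-and-add: pick the focal plane after capture."""
    acc = np.zeros(views.shape[1])
    for u, view in enumerate(views):
        acc += np.roll(view, -round(shift_per_view * (u - len(views) // 2)))
    return acc / len(views)

sharp = refocus(views, true_shift)   # shift matches the subject's parallax
blurred = refocus(views, 0)          # "wrong" focal plane: peaks misaligned
print(sharp.max() > blurred.max())   # correct shift re-aligns the peak
```

Sweeping `shift_per_view` is equivalent to racking a focuser, except it happens after the exposure.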

My only question is: how does the LCD camera differ from a pinhole camera, just with multiple holes being sampled? A sort of dithering using an LCD screen. The same techniques used in scanning spectrographs could also work for this, but without the need for a lens (albeit in one dimension); the LCD would be better for the scanning spectrograph, though it would have to be on the light side of the diffraction assembly.


Thinking about the LCD screen.. as a DIY version of this..

You could use an LCD pixel array from a standard LCD screen, then dither a couple of screens together so you have smaller sub-pixels.

Naturally the amount of light required for multiple screens would be higher.. it would make a great scanning spectrograph mechanism.


The only issue I see with astro in this area is:

1. Concentration of light onto a pixel - the amount of light falling within a 7.4µm pixel, for example, is going to be limited.

2. The amount of time a camera will need to image means an EQ tracking mount - although alt-az could also be modelled, as the image moves over time (effectively self-dithering).

Now the annoying bit: point 1 means we'd need to collect more light from the specific point in space, radiated as parallel collimated light - and that means a lens, mirror, or a pinhole to bend the light.. the light still needs to be in phase at the sensor plane.
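A quick back-of-envelope on point 1, using the 7.4µm pixel and an 8" aperture as in the posts above: collected light scales with area, so the ratio of aperture diameters squared tells you how much a telescope out-gathers a bare pixel-sized hole.

```python
# Back-of-envelope: how much more light does a telescope aperture
# gather than a bare hole the size of one sensor pixel?
pixel = 7.4e-6      # m, the pixel pitch quoted above
aperture = 0.2032   # m, an 8" mirror

ratio = (aperture / pixel) ** 2   # collected light scales with area
print(f"{ratio:.3g}")             # roughly 7.5e8
```

Eight-plus orders of magnitude is the gap any lensless astro scheme has to close with exposure time or sensor area.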

So unless we have an 8" sensor array (or an array of sensors).. but the idea is interesting nonetheless.

Using the micro-lens array (Lytro) would mean never having to focus your AP scope ever again: never having focuser droop, never losing AP subs again because they are out of focus.

Using an LCD screen may be great for high-brightness targets - planets, the Moon and solar (although the power transferred by a 0.4A slit is quite low for a sensor).


For those bent on terrifying themselves with the maths: http://statweb.stanford.edu/~markad/publications/ddek-chapter1-2011.pdf :D

From an initial scan over it, I'm assuming the LCD camera works by taking sub-samples, then building up the image, possibly by inverting the measurement coefficients.

So in theory, a single tube to provide more collimated light (although light within the scene would not be truly parallel, due to the atmosphere) with a sensor at the end, dithered over the scene, would work.

If you put a 50/50 prism in there, you could work out what was actually parallel and what wasn't, and correct for it in software. However, that breaks the no-glass rule..

