Optics are cool. No-focus cameras.
[Aug. 9th, 2003|02:27 am]
Forget auto-focus cameras. What would be cool is a camera that stored an image of a scene at all focal depths simultaneously. Later, in software, you could run an analysis of the sharpness in every region and infer the physical depth at each point.
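That sharpness analysis is sketchable today. Below is a minimal depth-from-focus toy: it scores each slice of a focal stack by a local Laplacian (a crude sharpness measure) and takes, per patch, the index of the sharpest slice as the inferred depth. All names here (`focal_stack`, `depth_from_focus`, the 8-pixel patch size) are illustrative assumptions, not any real camera's API.

```python
import numpy as np

def local_sharpness(img, patch=8):
    """Mean absolute Laplacian per patch -- a crude sharpness measure."""
    lap = np.abs(
        -4 * img
        + np.roll(img, 1, 0) + np.roll(img, -1, 0)
        + np.roll(img, 1, 1) + np.roll(img, -1, 1)
    )
    h, w = img.shape
    # average over non-overlapping patch x patch blocks
    return lap[:h - h % patch, :w - w % patch] \
        .reshape(h // patch, patch, w // patch, patch).mean(axis=(1, 3))

def depth_from_focus(focal_stack, patch=8):
    """Per patch, return the index of the sharpest slice in the stack.

    focal_stack: list of grayscale frames, index 0 = near focus,
    last = far focus. The returned index is a proxy for physical depth.
    """
    scores = np.stack([local_sharpness(f, patch) for f in focal_stack])
    return scores.argmax(axis=0)
```

A real version would need a better focus measure and per-pixel smoothing, but the idea is just this argmax over the stack.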
Then, so much cool stuff is possible:
-- show the image with varying focal depth based on eye-position sensors (already available), so you could "look around" an image like it's real, focusing on different parts instead of the one focal depth the photographer chose. That'd be a trippy effect on a single surface. More lifelike, though:
-- make stereoscopic images like in ViewFinder cameras by synthesizing two views from two virtual points on either side, recompositing each from the original 50-100 simultaneous exposures, and then either keeping that as-is (with a fixed focal depth), or adding the eye tracking again, so it's not only the-mind-thinks-3-D but focal-depth tracking as well, and you could explore it like above. (good for VR?)
-- automatically composite images that are in focus everywhere at once. Another weird effect.
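That last effect (often called focus stacking) follows directly from the stack: for every pixel, keep the value from whichever slice is sharpest there. A minimal sketch, assuming `focal_stack` is a list of same-sized grayscale frames; the function name and the Laplacian sharpness measure are illustrative choices.

```python
import numpy as np

def all_in_focus(focal_stack):
    """Composite one image from a focal stack: per-pixel sharpest slice."""
    stack = np.stack(focal_stack)                      # (n, h, w)
    # per-pixel sharpness: absolute Laplacian of each slice
    lap = np.abs(
        -4 * stack
        + np.roll(stack, 1, 1) + np.roll(stack, -1, 1)
        + np.roll(stack, 1, 2) + np.roll(stack, -1, 2)
    )
    best = lap.argmax(axis=0)                          # (h, w) winning slice
    ys, xs = np.indices(best.shape)
    return stack[best, ys, xs]                         # sharpest value per pixel
```

Per-pixel selection like this is noisy in practice (real stackers blend across slices), but it's enough to show the composite is pure software once the stack exists.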
But, how to make such a camera? Lots of ways, depending on the time period.
Near-future: the camera starts at zero focus and, after the button press, takes 50-100 pictures back-to-back with ultra-sensitive CCDs as it quickly sweeps focus from zero to infinity during the "shutter speed". The sensitive CCDs would be the trick; otherwise the extremely short per-frame exposures (the shutter speed divided by the number of pictures) wouldn't work. The memory requirements aren't as hard. You should be able to fix everything up in software later if the data is good enough. I don't think this is far out.
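Back-of-envelope numbers for that sweep, using assumed figures (a 1/60 s total "shutter speed", 100 frames, a 5-megapixel sensor at 2 bytes per pixel -- none of these come from a specific camera):

```python
# Per-frame exposure: the total shutter speed split across the burst.
total_exposure_s = 1 / 60
n_frames = 100
per_frame_s = total_exposure_s / n_frames
print(f"per-frame exposure: {per_frame_s * 1e6:.0f} microseconds")

# Burst memory: frames x pixels x bytes per pixel.
megapixels = 5
bytes_per_pixel = 2
burst_bytes = n_frames * megapixels * 1e6 * bytes_per_pixel
print(f"burst size: {burst_bytes / 1e9:.1f} GB")
```

So the sensor has to gather a usable image in a couple hundred microseconds (hence the ultra-sensitive CCDs), while the burst fits in on the order of a gigabyte -- which is why the memory side is the easier problem.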
More-future (my idea after reading the excellent book "Hacking Matter"): a programmable-matter surface with a grid of quantum dots, large sections of which vary their index of refraction atop a light sensor for each pixel. Now the camera could continually focus each individual point with no moving parts. The on-camera computer would continually process the blurriness data from each region of pixels and vary each region's index of refraction in an algorithm designed to maximize sharpness everywhere, while also recording the inferred physical depth at each point so the blurriness could be regenerated if you wanted to do the above effects later.
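The control loop in that last idea is just per-region hill climbing on sharpness. Here's a toy sketch: `make_sharpness_model` stands in for the real optics (a hypothetical bump peaked at each region's unknown best index of refraction), and `focus_regions` nudges each region's index up or down whenever that improves its score. Everything here is an invented stand-in, not programmable-matter physics.

```python
import numpy as np

def make_sharpness_model(best_indices):
    """Toy optics: sharpness peaks when a region's index hits its best value."""
    def sharpness_at(region, n):
        return np.exp(-((n - best_indices[region]) ** 2) / 0.01)
    return sharpness_at

def focus_regions(n_regions, sharpness_at, n0=1.5, step=0.01, iters=500):
    """Greedy hill climb per region; returns the tuned index array."""
    idx = np.full(n_regions, n0)
    for _ in range(iters):
        for r in range(n_regions):
            here = sharpness_at(r, idx[r])
            # try a small step in each direction, keep any improvement
            for trial in (idx[r] + step, idx[r] - step):
                if sharpness_at(r, trial) > here:
                    idx[r], here = trial, sharpness_at(r, trial)
    return idx
```

Because every region climbs independently, this is the "no moving parts, everything in focus at once" behavior, and the converged index per region is exactly the depth proxy you'd record for replaying blur later.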