
Portable Holoscopic 3D Camera Adaptor for Raspberry Pi


Abstract

Holoscopic 3D imaging (also known as integral imaging) is a promising technique for capturing full-colour spatial images with a single-aperture holoscopic 3D camera. It mimics the fly's-eye technique using a microlens array, in which each lens views the scene at a slightly different angle to its neighbour, recording three-dimensional information onto a two-dimensional surface. To date, holoscopic 3D camera adaptors have been designed and prototyped only for large-scale SLR cameras. This paper proposes an innovative design for prototyping a holoscopic 3D camera adaptor for the Raspberry Pi, a credit-card-sized single-board computer. The proposed method extends the utilisation of holoscopic 3D imaging and enables the technology to expand into applications such as security, medical, entertainment, inspection, autonomous and robotic systems, where 3D depth sensing and measurement are a concern.
A. Albar
Brunel University, Electronic and Computer Engineering, College of Engineering, Design and Physical Sciences
London, United Kingdom
abdul.albar@outlook.com
Keywords - Holoscopic 3D image, integral image, light field, plenoptic, microlens array, 3D camera, Raspberry Pi, 3D depth measurement, digital refocusing, 3D camera adaptor, 3D Pi-camera
I. INTRODUCTION
3D imaging systems have advanced in recent years and are still being researched and utilised in a wide range of applications, such as broadcasting, medical imaging, robotic vision, inspection, security, self-driving vehicles and entertainment [1]. There are different types of 3D imaging principles, such as stereoscopic [2], multiview [3], holographic [4] and holoscopic [5], each with its own advantages depending on the application. The most commonly used technique is stereoscopic "stereo vision", which mimics the human visual system for both acquisition and visualisation: two cameras, slightly distanced from each other, capture images from different viewing angles, and this facilitates the perception of depth when the left-eye and right-eye images are viewed by the left and right eyes respectively [6]. In such systems, the location and optical parameters of each separate camera must be synchronised and calibrated so that triangulation methods can be applied to each image to determine the correspondence between pixels [7]. Based on this correspondence, a disparity map containing depth information can be generated.

978-1-5090-1000-4/16/$31.00 ©2016 IEEE

M. R. Swash
Brunel University, Electronic and Computer Engineering, College of Engineering, Design and Physical Sciences
London, United Kingdom
rafiq.swash@brunel.ac.uk

One of the limitations of a stereoscopic system is its use of two cameras: this adds great complexity to the system, as well as to depth perception, because the cameras need to be accurately calibrated for each setup [7]; it also increases the cost and size of the system and is therefore not an ideal solution for dynamic applications. A multiview 3D system is based on the stereoscopic principle in that it uses the human-eye technique, but it employs more than two cameras to accommodate more viewers as well as to create a motion-3D effect.
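The triangulation step described above reduces to the classic relation Z = f·B/d. The sketch below shows the arithmetic; the focal length, baseline, pixel pitch and disparity values are purely illustrative and not taken from the paper.

```python
def depth_from_disparity(disparity_px, focal_length_mm, baseline_mm, pixel_pitch_mm):
    """Classic stereo triangulation: Z = f * B / d, with the disparity
    converted from pixels into sensor millimetres first."""
    d_mm = disparity_px * pixel_pitch_mm
    return focal_length_mm * baseline_mm / d_mm

# Illustrative only: a 3.6 mm lens, a 60 mm baseline, 1.4 um pixels and a
# 20-pixel disparity place the point roughly 7.7 m from the cameras.
z = depth_from_disparity(20, 3.6, 60, 0.0014)
```

This also makes the calibration burden concrete: any error in the assumed baseline or focal length scales directly into the recovered depth.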
II. HOLOSCOPIC 3D IMAGING PRINCIPLE
Recent research shows that holoscopic 3D imaging (H3D), or integral imaging, is a promising technique due to its simplistic form of data acquisition and visualisation, while still offering robust and scalable spatial information. The H3D technique was first proposed by Lippmann in 1908 [8] for capturing and reproducing a three-dimensional optical model of a scene [9]. However, the development of H3D was slow at the time due to the limited technology. Nevertheless, with the rapid advancement of electronic sensors and display technologies over the last two decades, H3D has been revived [10]. Lippmann's main idea was that one can record many elemental images of a 3D scene on a 2D matrix sensor (i.e. CMOS or CCD) [11], with each elemental image storing a different perspective of the 3D scene. This is achieved by inserting a microlens array (MLA) in front of the sensor, or by using a camera array to capture the 3D object from multiple views. The object can then be reproduced using flat LCD display technology with a reintegrated MLA in the display, replaying the original object or scene in full colour and with continuous parallax in all directions [9]. Moreover, since images with different perspectives are captured, the triangulation method can be used to generate a disparity map.
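Although the paper does not give its processing code, the elemental-image idea above can be illustrated with a minimal sketch: assuming a grayscale H3D image and square microlenses each covering lens_px x lens_px pixels, an orthographic viewpoint image is formed by collecting the same pixel offset from under every lens.

```python
import numpy as np

def extract_viewpoints(h3d_image, lens_px):
    """Slice a holoscopic (integral) image into orthographic viewpoints.

    h3d_image : 2D grayscale array; lens_px : pixels under each square
    microlens. Viewpoint (u, v) collects pixel offset (u, v) from under
    every lens, giving lens_px * lens_px low-resolution viewpoint images.
    """
    h, w = h3d_image.shape
    eh, ew = h // lens_px, w // lens_px
    tiles = h3d_image[:eh * lens_px, :ew * lens_px].reshape(eh, lens_px, ew, lens_px)
    # Axes reordered so the per-lens pixel offset indexes the viewpoint.
    return tiles.transpose(1, 3, 0, 2)

# A 6x6 image with 3x3-pixel lenses yields 3x3 viewpoints of 2x2 pixels each.
img = np.arange(36).reshape(6, 6)
vps = extract_viewpoints(img, 3)
```

A colour image would simply carry an extra channel axis through the same reshape; the sketch keeps to one channel for clarity.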
To date, holoscopic 3D cameras have been designed for large-scale SLR cameras [12], as shown in Fig. 2, and miniaturisation has not yet been explored. The current H3D adaptor used for the SLR employs a relay lens to relay the holoscopic 3D image from the MLA onto the sensor. This solves the pseudoscopic problem [12]; however, the relay lens adds substantial length to the adaptor, which limits the applications a H3D system is capable of serving. One of the advantages of a holoscopic system compared to a stereoscopic one is its compactness, and hence it has huge potential for use in areas such as robotic vision, inspection and medical applications for the purpose of 3D depth measurement [13], digital refocusing [14], 3D model reconstruction and many more.

DMIAF 2016
Fig. 1 - (a) and (b) demonstrate the principles of the holoscopic 3D system [9]: (a) recording, with a microlens array in front of the pickup device (CMOS); (b) replay, with a microlens array over a flat-panel LCD producing a full 3D optical model for the viewer.
Therefore, the proposed portable holoscopic 3D camera adaptor will enable users to apply the holoscopic 3D imaging system to a wider range of applications, such as robotics, medical and entertainment. It will also promote seamless and cost-effective integration with commonly used devices (such as the Raspberry Pi), and will thus accelerate the expansion and development of the technology across a wide range of applications.
Fig. 2 - Holoscopic 3D camera prototype by the 3DVIVANT project at Brunel University [12]: (a) schematics, showing the objective lens and relay lens; (b) prototype.
III. PROPOSED MINIATURISED HOLOSCOPIC 3D CAMERA ADAPTOR FOR RASPBERRY PI
Innovative approaches are proposed for an effective design of a portable holoscopic 3D camera adaptor for the Raspberry Pi; this is the first time such an adaptor has been designed and prototyped for embedded systems. This will maximise the use of a H3D imaging system, especially for the purpose of 3D depth measurement, where reducing the size as well as enabling portability are key. Two different designs and prototypes of the adaptor were produced for the Raspberry Pi using a single aperture, which pursues seamless integration.
The proposed adaptor is specially designed for the Raspberry Pi camera; however, it can be adjusted to work with various devices. The main principle behind the proposed adaptor is to integrate a fly's-eye MLA into a Raspberry Pi camera to create a portable holoscopic 3D camera lens adaptor. This will enable the camera to capture H3D images, which can then be processed on the Raspberry Pi for 3D depth sensing and depth measurement, as well as for generating different perspective viewpoint images for robust visual signal processing and classification, e.g. robotic vision.
Design 1: Without a close-up lens
Besides the Pi-camera module with its objective lens, the main design of the adaptor consists of a fly's-eye microlens array, an objective lens and a field lens. The key here is to reduce the depth of field of the Pi-camera lens in order to reduce the length of the adaptor. This is done by twisting the lens outward, away from the sensor; that way there are fewer parts in the adaptor and less distortion in the captured image. Fig. 3.a shows the design construction. To further enhance the captured images, a field lens is added back-to-back with the MLA to remove most of the vignetting [12].
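The link between twisting the lens outward and close focusing can be illustrated with the thin-lens equation; the 3.6 mm focal length and 0.2 mm extension below are illustrative values, not the paper's specification.

```python
def focus_distance_mm(f_mm, extension_mm):
    """Thin-lens focus distance when the lens sits `extension_mm` further
    from the sensor than its infinity-focus position (1/f = 1/u + 1/v,
    with image distance v = f + extension)."""
    v = f_mm + extension_mm
    return 1.0 / (1.0 / f_mm - 1.0 / v)

# Illustrative: unscrewing a 3.6 mm lens by 0.2 mm pulls the plane of
# focus in to 68.4 mm, short enough to image the MLA inside the adaptor.
u = focus_distance_mm(3.6, 0.2)
```

The shorter focus distance also shrinks the depth of field, which is what the design exploits to keep the adaptor compact.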
Design 2: With a close-up lens
The second design consists
of
a fly's eye microiens array, an
objective lens and a close-up lens as illustrated in Fig. 3.b. The
purpose
of
the close-up lens
is
to focus the back
of
the MLA
into the built
in
lens
of
the raspberry pi camera and to keep a
minimal depth
of
fIeld,
in
order to keep the length
of
the
adaptor minimal.
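The effect of a close-up (diopter) lens can be sketched with basic optics: mounted on a camera focused at infinity, it moves the plane of focus to its own focal length. The +10 diopter figure below is illustrative only, not the strength used in the prototype.

```python
def closeup_focus_distance_mm(diopters):
    """With the main lens focused at infinity, a supplementary close-up
    lens of power P diopters moves the plane of focus to its own focal
    length: 1000 / P millimetres in front of the lens."""
    return 1000.0 / diopters

# Illustrative: a +10 diopter close-up lens focuses about 100 mm away,
# the kind of short working distance needed to image the back of the MLA.
d = closeup_focus_distance_mm(10)
```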
Fig. 3 - Illustration of the proposed designs of the adaptor: (a) Design 1, without a close-up lens (camera module); (b) Design 2, with a close-up lens (camera module with lens twisted outward).
Prototypes
Based on Designs 1 and 2, we constructed two different prototypes: Prototype 1 (Fig. 4.a), which follows Design 1 (Fig. 3.a) in its construction and method, and likewise Prototype 2 (Fig. 4.b), which is based on Design 2 (Fig. 3.b).
Fig. 4 - Adaptor prototypes: (a) experimental setup without a close-up lens (Pi-camera, MLA and objective lens, approximately 90 mm in length); (b) assembled prototype of the adaptor with a close-up lens.
Prototype 1 specification
Raspberry Pi camera: sensor size: 25 x 24 x 9 mm; 2D resolution: 2592 x 1944 pixels
Lenses per inch (LPI): 40.03
Pixels per lens: 50

Prototype 2 specification
Raspberry Pi camera: sensor size: 25 x 24 x 9 mm; 2D resolution: 2592 x 1944 pixels
Lenses per inch (LPI): 40.03
Pixels per lens: 70
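The specification figures above imply a simple microlens geometry. The sketch below derives the lens pitch from the LPI value and the number of elemental images across a row from the pixels-per-lens figure; it assumes nothing beyond the numbers listed.

```python
MM_PER_INCH = 25.4

def mla_geometry(lpi, image_width_px, pixels_per_lens):
    """Derive basic microlens-array figures from the prototype spec."""
    lens_pitch_mm = MM_PER_INCH / lpi                 # centre-to-centre spacing
    lenses_across = image_width_px / pixels_per_lens  # elemental images per row
    return lens_pitch_mm, lenses_across

pitch, n_lenses = mla_geometry(40.03, 2592, 50)  # Prototype 1 figures
# pitch is roughly 0.63 mm; about 52 elemental images span the 2592-px width
```

The same pitch with 70 pixels per lens (Prototype 2) simply means each lens is magnified over more sensor pixels, trading viewpoint count for per-view resolution.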
The two prototypes are relatively cheap to assemble; the two most expensive components are the Pi-camera (£20) and the objective lens (~£5), while the rest come at a much lower price. The MLA comes in an A4 sheet costing anywhere between £3 and £15 per sheet, depending on the pitch and LPI.
IV. RESULTS
Using the prototypes, we captured and processed the H3D images below to extract viewpoint and elemental images. Fig. 5 shows a H3D image (1649 x 1245 pixels) acquired using Prototype 1 and processed to extract the viewpoint images (31 x 1235 pixels), which are of acceptable quality for utilisation.
Fig. 5 - (a) Acquired holoscopic 3D image from Prototype 1; (b) extracted organic viewpoint 1; (c) extracted organic viewpoint 2; (d) VP1 and VP2 disparity map.
The disparity map generated from the organic viewpoints in Fig. 5 shows some depth information, but it is far from perfect. It can be improved by using computational resolution enhancement algorithms in post-production [9][15][16][17].
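Disparity estimation between two extracted viewpoints can be sketched with naive sum-of-absolute-differences block matching. This is a simplified stand-in, not the authors' pipeline, and the block size and search range are arbitrary choices; the refinement algorithms cited above exist precisely because this baseline is crude.

```python
import numpy as np

def block_match_disparity(left, right, block=8, max_disp=16):
    """Naive SAD block matching between two viewpoint images.

    For each `block` x `block` patch of `left`, search up to `max_disp`
    pixels leftward in `right` for the best match; the winning shift is
    that patch's disparity.
    """
    h, w = left.shape
    disp = np.zeros((h // block, w // block), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = left[y:y + block, x:x + block].astype(float)
            best_sad, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):
                cand = right[y:y + block, x - d:x - d + block].astype(float)
                sad = np.abs(ref - cand).sum()
                if sad < best_sad:
                    best_sad, best_d = sad, d
            disp[by, bx] = best_d
    return disp

# Sanity check on a synthetic ramp image shifted by 3 pixels: interior
# blocks recover a disparity of 3.
left = np.tile(np.arange(40), (32, 1))
right = np.zeros_like(left)
right[:, :-3] = left[:, 3:]
disp = block_match_disparity(left, right, block=8, max_disp=8)
```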
Fig. 6 shows the H3D image acquired from Prototype 2 and processed to extract the elemental images, which are used to generate a disparity map for 3D depth information.

Fig. 6 - (a) Acquired holoscopic 3D image from Prototype 2; (b) extracted elemental image 1; (c) elemental image 2; (d) disparity map generated from EI1 and EI2 (red is close, blue is far).
The two designs use a very similar approach in the way they are implemented; the results of each, though, are quite distinct. The output of the first prototype showed strong H3D features, from the captured image through to the viewpoint-extraction processing. The second prototype, on the other hand, exhibited stereoscopic imaging features more than H3D ones.
V. CONCLUSION
In this paper, we have proposed a miniaturised, portable holoscopic 3D camera adaptor for the Raspberry Pi, which is a credit-card-sized single-board computer. Holoscopic 3D imaging pursues a simplistic form of spatial imaging and offers true and robust spatial information; it can therefore be utilised for various applications, such as interaction, depth sensing and inspection. We designed and prototyped two different adaptors, and the resulting images are illustrated and evaluated. Both orthographic viewpoint images and perspective elemental images are extracted from the acquired holoscopic 3D images. In addition, the elemental images are used to generate a 3D depth map by calculating a disparity map that exhibits usable depth detail. All acquired and generated images are in the organic format, but the images can be further improved using computational resolution enhancement algorithms in post-production.
VI. REFERENCES
[1] D. S. Pankaj, R. R. Nidamanuri, B. Pinnamaneni, and B. P. Prasad, "3-D imaging techniques and review of products," Sep. 2013.
[2] C. Connolly, "Stereoscopic imaging," Sensor Review, vol. 26, no. 4, pp. 266-271, Oct. 2006.
[3] A. Kubota, A. Smolic, M. Magnor, M. Tanimoto, T. Chen, and C. Zhang, "Multiview imaging and 3DTV," IEEE Signal Processing Magazine, vol. 24, no. 6, pp. 10-21, Nov. 2007.
[4] P. Hariharan, Optical Holography: Principles, Techniques and Applications. Cambridge University Press, 1996.
[5] A. E., A. A., A. Maysam, M. R. Swash, A. F. O., and F. J., "Scene depth extraction from holoscopic imaging technology," IEEE, 2008, pp. 1-4.
[6] "How 3-D PC glasses work," HowStuffWorks, 2003.
[7] A. Wilson, "Choosing a 3D vision system for automated robotics applications," in Vision Systems, 2014.
[8] G. Lippmann, "Épreuves réversibles donnant la sensation du relief," Journal de Physique Théorique et Appliquée, vol. 7, no. 1, pp. 821-825, 1908.
[9] M. R. Swash, C. Fernandez Juan, A. Aggoun, O. Abdulfatah, and E. Tsekleves, "Reference based holoscopic 3D camera aperture stitching for widening the overall viewing angle," IEEE, 2004, pp. 1-3.
[10] J. C. Barreiro, M. Martinez-Corral, G. Saavedra, H. Navarro, and B. Javidi, "High-resolution far-field integral-imaging camera by double snapshot," Optics Express, vol. 20, no. 2, pp. 890-895, Jan. 2012.
[11] B. Javidi, J.-Y. Son, J. T. Thomas, and D. D. Desjardins, "Three-dimensional imaging, visualization, and display 2010 and display technologies and applications for defense, security, and avionics IV," 2010.
[12] A. Aggoun, E. Tsekleves, M. R. Swash, D. Zarpalas, A. Dimou, P. Daras, P. Nunes, and L. D. Soares, "Immersive 3D holoscopic video system," IEEE MultiMedia, vol. 20, no. 1, pp. 28-37, Jan.-Mar. 2013.
[13] E. Alazawi, A. Aggoun, O. Abdulfatah, and M. R. Swash, "Adaptive depth map estimation from 3D integral images," IEEE International Symposium on Broadband Multimedia Systems and Broadcasting, London, UK, June 2013.
[14] R. Ng, "Digital light field photography," 2006.
[15] J. Makanjuola, A. Aggoun, M. Swash, P. Grange, B. Challacombe, and P. Dasgupta, "3D-holoscopic imaging: a novel way to enhance imaging in minimally invasive therapy in urological oncology," Journal of Endourology, vol. 26, suppl. 1, Sep. 2012.
[16] M. R. Swash, A. Aggoun, O. Abdulfatah, B. Li, J. C. Jacome, E. Alazawi, and E. Tsekleves, "Pre-processing of holoscopic 3D image for autostereoscopic 3D display," 5th International Conference on 3D Imaging (IC3D), 2013.
[17] E. Alazawi, M. Abbod, A. Aggoun, M. R. Swash, and O. Abdulfatah, "Super depth-map rendering by converting holoscopic viewpoint to perspective projection," 3DTV-CON: In Pursuit of Next Generation 3D Display, Budapest, Hungary, 2-4 July 2014.