ISSN (ONLINE) : 2395-695X
ISSN (PRINT) : 2395-695X
Available online at www.ijarbest.com
International Journal of Advanced Research in Biology, Ecology, Science and Technology (IJARBEST)
Vol. 1, Issue 4, July 2015
1
All Rights Reserved © 2015 IJARBEST
High Resolution Image Reconstruction with Smart
Camera Network
R.Nikitha1, C.K.Sankavi2, H.Mehnaz3, N.Rajalakshmi4, Christo Ananth5
U.G.Scholars, Department of ECE, Francis Xavier Engineering College, Tirunelveli1,2,3,4
Associate Professor, Department of ECE, Francis Xavier Engineering College, Tirunelveli 5
Abstract—In this work, a feature
distribution scheme is proposed for object
matching. In this approach, information is distributed in
such a way that each individual node maintains only a
small amount of information about the objects seen by
the network. Nevertheless, this amount is sufficient to
efficiently route queries through the network without
any degradation of the matching performance. Digital
image processing approaches have been investigated to
reconstruct a high resolution image from aliased low
resolution images. The accurate registrations between
low resolution images are very important to the
reconstruction of a high resolution image. The proposed
feature distribution scheme results in far lower network
traffic load. To achieve the maximum performance as
with the full distribution of feature vectors, a set of
requirements regarding abstraction, storage space,
similarity metric and convergence has been proposed to
implement this work in C++ andQT.
Index Terms—Computer vision, Object Reconstruction,
Visual Sensor Networks
I. INTRODUCTION
A Visual Sensor Network is a network of
spatially distributed smart camera devices capable of
processing and fusing images of a scene from a variety of
viewpoints into some form more useful than the individual
images. A visual sensor network may be a type of wireless
sensor network (WSN), and much of the theory and application of
the latter applies to the former. The network generally
consists of the cameras themselves, which have some local
image processing, communication and storage capabilities,
and possibly one or more central computers, where image
data from multiple cameras is further processed and fused.
Local processing of the image data reduces the total
amount of data that needs to be communicated through the
network. Local processing can involve simple image
processing algorithms (such as background subtraction for
motion/object detection, and edge detection) as well as
more complex image/vision processing algorithms (such as
feature extraction, object classification, scene reasoning).
Thus, depending on the application, the camera nodes may
provide different levels of intelligence, as determined by the
complexity of the processing algorithms they use. The
cameras can collaborate by exchanging the detected object
features, enabling further processing to collectively reason
about the object's appearance or behavior. At this point the
visual sensor network becomes a user-independent,
intelligent system of distributed cameras that provides only
relevant information about the monitored phenomena.
Therefore, the increased complexity of vision processing
algorithms results in highly intelligent camera systems that
are oftentimes called smart camera networks.
The issue of ensuring and preserving coverage of
an area with controlled redundancy using WSNs has been
widely investigated, and efficient algorithms have been
proposed.
The main goals of coverage optimization
algorithms are to preserve coverage in case of sensor failure
and to save energy by putting redundant sensor nodes to
sleep. Choosing which nodes to put in sleeping or active
mode should be done carefully to prolong the network
lifetime, preserve coverage and connectivity, and perform
the task at hand. However, when camera sensors are
involved, three-dimensional coverage of space is required,
which increases the complexity of the coverage issue.
Coverage of networked cameras can be simplified by
assuming that the cameras have a fixed focal length lens, are
mounted on the same plane, and are monitoring a parallel
plane.
Visual data collected by camera nodes should be
processed, and all or only the relevant data streamed to the
base station (BS). It is
largely agreed that streaming all the data is impractical due
to the severe energy and bandwidth constraints of WSNs.
Since processing costs are significantly lower than
communication costs, it makes sense to reduce the size of
data before sending it to the BS. However, visual data
processing can be computationally expensive. Reliable data
transmission is an issue that is more crucial for visual sensor
networks (VSNs) than for conventional scalar sensor
networks. While scalar sensor
networks can rely on redundant sensor readings through
spatial redundancy in the deployment of sensor nodes to
compensate for occasional losses of sensor measurements,
this solution is impractical for VSNs, which are
characterized by higher cost and larger data traffic.
Moreover, most reliable transmission protocols proposed for
conventional scalar data WSNs are based on link layer
acknowledgment messages and retransmissions. They are
therefore not suitable for visual data transmission due to
their stringent bandwidth and delay requirements. The initial
phase of visual data processing usually involves object
detection. Object detection may trigger a camera's
processing activity and data communication. Object
detection is mostly based on light-weight background
subtraction algorithms and presents the first step toward
collective reasoning by the camera nodes about the objects
that occupy the monitored space.
Since detection of objects on the scene is usually the first
step in image analysis, it is important to minimize the
chances of false object detection. Thus, reliability and
light-weight operation will continue to be the main
concerns of image processing algorithms for object
detection and occupancy reasoning.
The main objective of [1] is to provide
reconstruction theory and techniques for image
reconstruction and creating enhanced resolution images
from irregularly sampled data. The relationship between the
aperture function, the measurement sampling, and the
reconstruction has been examined in this paper. The
methodology used in this paper is an image reconstruction
and resolution enhancement algorithm. This algorithm
provides improved-resolution images by taking advantage
of oversampling and the response characteristics of the
aperture function to reconstruct the underlying surface
function sampled by the sensor. It can generate images from
the observations at a resolution better than the mainlobe
aperture resolution of the sensor.
The algebraic reconstruction technique (ART) and
scatterometer image reconstruction (SIR) algorithms can be
termed resolution enhancement algorithms because of their
ability to fully reconstruct attenuated signal components.
SIR is more robust than multiplicative ART (MART) and
additive ART (AART) in the presence of noise. AART and
MART produce slightly different results owing to their
different regularizations. The results show that the image
reconstruction and resolution enhancement algorithms such
as AART, MART, and SIR provide an effective way to
increase the effective resolution of remotely sensed
imagery. The advantage of this paper is that the sampling
and aperture function considerations in the design of the
sensor system provide better resolution. The main drawback
is that the high-pass nature of the reconstruction filter
increases the noise power, which limits the number of
iterations that can be run before noise overtakes the
reconstruction.
The main objective of [2] is to develop a new
algorithm for density estimation using the expectation-
maximization (EM) algorithm with a maximum-entropy
(ME) constraint. The proposed Maximum-Entropy
Expectation-Maximization (MEEM) algorithm provides a
recursive method to compute a smooth estimate of the
maximum likelihood estimate. The MEEM algorithm is
particularly suitable for tasks that require the estimation of
a smooth function from limited or partial data, such as image
reconstruction and sensor field estimation. The methodology
used in this paper is Maximum-Entropy Expectation
Maximization algorithm. The MEEM algorithm is used to
provide the optimal estimates of the weight, mean,
covariance for kernel density estimation. The basic EM
algorithm estimates a complete set from partial data sets and
therefore we propose to use the EM and MEEM algorithms
in these image reconstruction and sensor network
applications. The EM algorithm relies on a simple extension
of the lower-bound maximization method to prove that the
algorithm converges to a local maximum of the bound
generated by the Cauchy-Schwarz inequality, which serves
as a lower bound on the augmented likelihood function.
The results indicate that, in most cases, the
maximum-entropy variant outperforms the conventional EM
algorithm. When a small number of centers is used, the
minimum-entropy penalty gives better results than both the
conventional EM algorithm and the maximum-entropy
penalty. This is due to the characteristics of maximum and
minimum entropy. The advantages of this paper are that the
maximum-entropy solution provides a smooth solution while
the minimum-entropy solution provides the least smooth
distribution, and that it performs considerably better than
various other methods.
The objective of [3] is to develop a theory of phase
singularities (PSs) for image representation. PSs are
calculated using Laguerre-Gauss filters; they contain
important information about an image and provide an
efficient and effective tool for image analysis and
representation. PSs are invariant to translation and rotation,
and the positions of PSs contain nearly complete
information for reconstructing the original image up to a
scale. To examine the usefulness
of PSs, we develop two applications: object tracking and
image matching. In object tracking, the iterative closest
point (ICP) algorithm is used to determine the
correspondences of PSs between two adjacent frames. The
use of PSs allows us to precisely determine the motions of
tracked objects. In image matching, we combine PSs and
scale-invariant feature transform (SIFT) descriptor to deal
with the variations between two images and examine the
proposed method on a benchmark database. The ICP
algorithm is used for aligning two groups of points based on
geometrical information. The ICP starts with a rough initial
estimation on the transformation between the two groups of
points, and then iteratively refines the transformation by
identifying the matching points and minimizing an error
metric.
The result shows that PSs are generally stable to
real noise and image deformation and the proposed method
is used to find a large number of matching points for each
pair, which are distributed over the whole images. The
advantage of this paper is that the method is more robust
and correct matching pairs can be found.
The main objective of [4] is to collect considerably
less data than conventional systems, and display only what is
relevant for the task at hand. The proposed method is not an
alternative when the perfect reconstruction of arbitrary
images is required, but nevertheless operates within the
same framework by extracting information from
compressive measurements. Compressed sensing holds the
promise for radically novel sensors that can perfectly
reconstruct images using comparatively simple hardware
and considerably fewer samples of data. In surveillance
applications, vast regions of the image may not contain
objects of interest and may therefore not be of significance
to the operator.
The methodology used in this paper is a set of
reconstruction algorithms. In this approach, reconstruction
using compressed sensing always requires more samples
than if it were possible to directly measure projections on an
underlying basis in which the object is sparse. This paper is
not concerned with perfect reconstruction of the full image
from relatively few samples, but with the reconstruction
of specific objects that are present in the image. The
simulation results show that the proposed approach can be
realized assuming different basis sets to represent the object,
and irrespective of the choice of basis set, the weighting
process always yields a better result. The advantage of the
paper is to achieve the greatest possible compression and
reconstruction fidelity and the weights can be optimized to
emphasize greater discrimination between the objects and
background which should lead to enhanced visualization of
interested objects in the image.
The objective of [5] is to present a novel approach
for the study of signal reconstruction from randomly
scattered sensors in a multidimensional space. The random
sampling using constant-mean point processes yields an
unbiased estimate of the signal. Iterative reconstruction
scheme is the methodology used in this paper. The classical
iterative reconstruction forms a sequence of unbiased
estimates of band-limited signals, which converges to the
true function in the mean-square sense. The use of an ideal
band-limited operator in the iterative reconstruction method
improves the reconstruction substantially and removes
many of the artifacts. The iterative estimation method
performs efficiently even when the sensors are sparse. The
performance of the iterative estimation method for 2-D
image reconstruction and field estimation from Poisson and
uniformly distributed sensors are also demonstrated in this
method. The field estimation problem is formulated as
signal reconstruction from scattered sensors. This approach
is an extension of the problem of image reconstruction from
limited samples. The solution to these problems is based on
classical methods for function estimation from irregular
samples. When the samples are distributed according to a
homogeneous Poisson process in the plane, the point
process is constant mean and corresponds to the density of
the process in the limit as the number of samples approaches
infinity.
The simulation results rely on a finite number of
Poisson-distributed random samples on a bounded region.
We interpret these random samples as an extraction of a
bounded region from an unbounded plane with an infinite
number of Poisson samples. The advantage of this paper is
that the energy is confined within a certain bandwidth and
improves the reconstruction of images.
II. EXISTING SYSTEM
A computer vision algorithm is used in the existing
system. This algorithm processes a large amount of digitized
visual data, so high-end hardware is required for processing.
It leads to a star network structure centered on a powerful
processing unit. In this system, only one central processing
node is used, so processing a large amount of digitized data
is difficult. The advantage of this method is that it allows
simplicity of routing. A conceptual problem of this
centralized approach is that it is not scalable, i.e., it does not
scale with the number of sensors used. When additional
nodes are added to such a configuration, the central
processor becomes a major bottleneck. In some cases, the
number of visual sensors may go into the hundreds. It is
obvious that the requirements for transmitting and
processing the data in such a large system are
correspondingly large.
III. PROPOSED SYSTEM
In this project, a framework of feature distribution
scheme is proposed for object matching. Each individual
node maintains only a small amount of information about
the objects seen by the network. Nevertheless, this amount
is sufficient to efficiently route queries through the network
without any degradation of the matching performance.
Efficient processing has to be done on the images received
from nodes to reconstruct the image and respond to user
query. The proposed feature distribution scheme results in
far lower network traffic load. To achieve the same
performance as with the full distribution of feature vectors,
a set of requirements regarding abstraction, storage space,
similarity metric and convergence has to be proposed, and
the work is implemented in C++.
The SQL database package is used for database
connectivity. The SQL database performs functions such
as insert, delete and update. The insert function is used to
insert the data into the database. The delete function is
used to delete the entire row in a database. The update
function updates all the data in the database.
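The three database operations described above might look as follows; the table and column names are hypothetical, since the schema is not given here.

```sql
-- Hypothetical schema: one row per stored frame.
INSERT INTO frames (node_no, frame_no, filename, stitched)
VALUES (1, 42, 'fg_001_042.png', 0);

DELETE FROM frames
WHERE node_no = 1 AND frame_no = 42;   -- removes the entire row

UPDATE frames
SET stitched = 1
WHERE node_no = 1 AND frame_no = 42;   -- marks the frame as stitched
```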
IV. RESULTS AND DISCUSSION
In this module, the user first sets the path of the folder
where the background images are present. After selecting
the path, the user clicks the start button to process the
filename of the background image, find the node number of
that background, and store it in the database. Clicking close
exits the window. Fig.1. shows the storage of background
images in the database.
Fig.1. Receive background image
A. RECEIVE FOREGROUND OBJECT
Fig.2. Receive foreground object
In this module, the user first sets the path of the folder
where the foreground objects are present. After selecting
the path, the user clicks the start button to process the
filename of the foreground object, find the node number and
frame number of that foreground, and store them in the
database. Clicking close exits the window. Fig.2. shows
the storage of the processed foreground object in the database.
C. IMAGE STITCHING
Fig.3. Image Stitching
In image stitching, the node number, frame number,
and the co-ordinates of the foreground objects in the
background image are looked up in the database. The
objects that have not yet been stitched with the background
are taken first, and the corresponding node number of each
object is found. That node's background image is then
taken, stitched with the foreground object, and stored in the
database. After stitching, the process is complete and a
message is shown to the user. Clicking close exits the
window. Fig.3. shows the process of image stitching.
D. USER QUERY PROCESSING
Fig.4.Object Reconstruction
When a user wants to know about the foreground
objects present during a given period, the user enters the
starting date (from date) and the ending date (to date), and
also the node number, i.e., the node from which the user
wants the foreground object to be seen. The server retrieves
the correct background and foreground objects from the
database and displays them to the user. Fig.4. shows the
reconstructed object.
V. CONCLUSION
In this work, a framework of feature distribution
scheme is proposed for object matching. In this approach,
information is distributed in such a way that each individual
node maintains only a small amount of information about the
objects seen by the network. Nevertheless, this amount is
sufficient to efficiently route queries through the network
without any degradation of the matching performance.
Digital image processing approaches have been investigated
to reconstruct a high resolution image from aliased low
resolution images. The accurate registrations between low
resolution images are very important to the reconstruction
of a high resolution image. The proposed feature
distribution scheme results in far lower network traffic load.
To achieve the same performance as with the full
distribution of feature vectors, a set of requirements
regarding abstraction, storage space, similarity metric and
convergence has been proposed, and the work has been
implemented in C++ and Qt.
REFERENCES
[1] Foroosh, Zerubia, and Berthod, “Extension of phase correlation to
subpixel registration,” IEEE Trans. Image Process., vol. 11, no. 3, pp.
188–200, Mar. 2002.
[2] Huang, Burnett, and Deczky, “The importance of phase in image
processing filters,” IEEE Trans. Acoust., Speech, Signal Process., vol.
ASSP-23, no. 6, pp. 529–542, Jun. 1975.
[3] Khan and Shah, “Consistent labeling of tracked objects in multiple
cameras with overlapping fields of view,” IEEE Trans. Pattern Anal. Mach.
Intell., vol. 25, no. 10, pp. 1355–1360, Oct. 2003.
[4] Khan and Shah, “Tracking multiple occluding people by localizing
on multiple scene planes,” IEEE Trans. Pattern Anal. Mach. Intell.,vol.
31, no. 3, pp. 505– 519, Mar. 2009.
[5] Lee, Romano, and Stein, “Monitoring activities from multiple video
streams: establishing a common coordinate frame,” IEEE Trans. Pattern
Anal. Mach. Intell., vol. 22, no. 8, pp. 758–767, 2000.
[6] Long, Hardin, and Whiting, “Resolution enhancement of spaceborne
scatterometer data,” IEEE Trans. Geosci. Remote Sensing, vol. 31, pp.
700–715, May 1993.
[7] Lowe, “Local feature view clustering for 3D object recognition,” IEEE
Conference on Computer Vision and Pattern Recognition, Kauai, Hawaii,
2001, pp. 682–688.
... This method is tested with different set of images including CT and MR images especially 3D images and produced perfect segmentation results. [52] proposed a work, in this work, a framework of feature distribution scheme is proposed for object matching. In this approach, information is distributed in such a way that each individual node maintains only a small amount of information about the objects seen by the network. ...
Preprint
Full-text available
At the forefront of technological innovation and scholarly discourse, the Journal of Electrical Systems (JES) is a peer-reviewed publication dedicated to advancing the understanding and application of electrical systems, communication systems and information science. With a commitment to excellence, we provide a platform for researchers, academics, and professionals to contribute to the ever-evolving field of electrical engineering, communication technology and Information Systems. The mission of JES is to foster the exchange of knowledge and ideas in electrical and communication systems, promoting cutting-edge research and facilitating discussions that drive progress in the field. We aim to be a beacon for those seeking to explore, challenge, and revolutionize the way we harness, distribute, and utilize electrical energy and information systems..
... This method is tested with different set of images including CT and MR images especially 3D images and produced perfect segmentation results. [52] proposed a work, in this work, a framework of feature distribution scheme is proposed for object matching. In this approach, information is distributed in such a way that each individual node maintains only a small amount of information about the objects seen by the network. ...
Preprint
Full-text available
The research on Quantum Networked Artificial Intelligence is at the intersection of Quantum Information Science (QIS), Artificial Intelligence, Soft Computing, Computational Intelligence, Machine Learning, Deep Learning, Optimization, Etc. It Touches On Many Important Parts Of Near-Term Quantum Computing And Noisy Intermediate-Scale Quantum (NISQ) Devices. The research on quantum artificial intelligence is grounded in theories, modelling, and significant studies on hybrid classical-quantum algorithms using classical simulations, IBM Q services, PennyLane, Google Cirq, D-Wave quantum annealer etc. So far, the research on quantum artificial intelligence has given us the building blocks to achieve quantum advantage to solve problems in combinatorial optimization, soft computing, deep learning, and machine learning much faster than traditional classical computing. Solving these problems is important for making quantum computing useful for noise-resistant large-scale applications. This makes it much easier to see the big picture and helps with cutting-edge research across the quantum stack, making it an important part of any QIS effort. Researchers — almost daily — are making advances in the engineering and scientific challenges to create practical quantum networks powered with artificial intelligence
... This method is tested with different set of images including CT and MR images especially 3D images and produced perfect segmentation results. [52] proposed a work, in this work, a framework of feature distribution scheme is proposed for object matching. In this approach, information is distributed in such a way that each individual node maintains only a small amount of information about the objects seen by the network. ...
Preprint
Full-text available
JoWUA is an online peer-reviewed journal and aims to provide an international forum for researchers, professionals, and industrial practitioners on all topics related to wireless mobile networks, ubiquitous computing, and their dependable applications. JoWUA consists of high-quality technical manuscripts on advances in the state-of-the-art of wireless mobile networks, ubiquitous computing, and their dependable applications; both theoretical approaches and practical approaches are encouraged to submit. All published articles in JoWUA are freely accessible in this website because it is an open access journal. JoWUA has four issues (March, June, September, December) per year with special issues covering specific research areas by guest editors. The editorial board of JoWUA makes an effort for the increase in the quality of accepted articles compared to other competing journals..
... This method is tested with different set of images including CT and MR images especially 3D images and produced perfect segmentation results. [52] proposed a work, in this work, a framework of feature distribution scheme is proposed for object matching. In this approach, information is distributed in such a way that each individual node maintains only a small amount of information about the objects seen by the network. ...
Preprint
Full-text available
Proceedings on Engineering Sciences examines new research and development at the engineering. It provides a common forum for both front line engineering as well as pioneering academic research. The journal's multidisciplinary approach draws from such fields as Automation, Automotive engineering, Business, Chemical engineering, Civil engineering, Control and system engineering, Electrical and electronic engineering, Electronics, Environmental engineering, Industrial and manufacturing engineering, Industrial management, Information and communication technology, Management and Accounting, Management and quality studies, Management Science and Operations Research, Materials engineering, Mechanical engineering, Mechanics of Materials, Mining and energy, Safety, Risk, Reliability, and Quality, Software engineering, Surveying and transport, Architecture and urban engineering.
... This method is tested with different set of images including CT and MR images especially 3D images and produced perfect segmentation results. [52] proposed a work, in this work, a framework of feature distribution scheme is proposed for object matching. In this approach, information is distributed in such a way that each individual node maintains only a small amount of information about the objects seen by the network. ...
Preprint
Full-text available
Utilitas Mathematica Journal is a broad scope journal that publishes original research and review articles on all aspects of both pure and applied mathematics. This journal is the official publication of the Utilitas Mathematica Academy, Canada. It enjoys good reputation and popularity at international level in terms of research papers and distribution worldwide. Offers selected original research in Pure and Applied Mathematics and Statistics. UMJ coverage extends to Operations Research, Mathematical Economics, Mathematics Biology and Computer Science. Published in association with the Utilitas Mathematica Academy. The leadership of the Utilitas Mathematica Journal commits to strengthening our professional community by making it more just, equitable, diverse, and inclusive. We affirm that our mission, Promote the Practice and Profession of Statistics, can be realized only by fully embracing justice, equity, diversity, and inclusivity in all of our operations. Individuals embody many traits, so the leadership will work with the members of UMJ to create and sustain responsive, flourishing, and safe environments that support individual needs, stimulate intellectual growth, and promote professional advancement for all.
... This method is tested with different set of images including CT and MR images especially 3D images and produced perfect segmentation results. [52] proposed a work, in this work, a framework of feature distribution scheme is proposed for object matching. In this approach, information is distributed in such a way that each individual node maintains only a small amount of information about the objects seen by the network. ...
Preprint
Full-text available
Most experts would consider this the biggest challenge. Quantum computers are extremely sensitive to noise and errors caused by interactions with their environment. This can cause errors to accumulate and degrade the quality of computation. Developing reliable error correction techniques is therefore essential for building practical quantum computers. While quantum computers have shown impressive performance for some tasks, they are still relatively small compared to classical computers. Scaling up quantum computers to hundreds or thousands of qubits while maintaining high levels of coherence and low error rates remains a major challenge. Developing high-quality quantum hardware, such as qubits and control electronics, is a major challenge. There are many different qubit technologies, each with its own strengths and weaknesses, and developing a scalable, fault-tolerant qubit technology is a major focus of research. Funding agencies, such as government agencies, are rising to the occasion to invest in tackling these quantum computing challenges. Researchers — almost daily — are making advances in the engineering and scientific challenges to create practical quantum computers.
Preprint
It is no surprise that quantum computing will prove to be a big change for the world. Practical quantum computing can be a good substitute for traditional computing methods in many of today's applications, now that technology has grown by leaps and bounds. It has a wide range of applications, from cryptography, climate change and weather forecasting, and drug discovery and development to financial modeling and artificial intelligence. Giant firms have already begun applying quantum computing in the field of artificial intelligence, while today's search algorithms are still mostly designed according to classical computing methods. By comparing quantum computers for data mining with their counterpart classical systems, we can understand their significance and apply new techniques to obtain new real-time results and solutions.
Preprint
Published since 2004, Periódico Tchê Química (PQT) is a triannual (published every four months), international, fully peer-reviewed, open-access journal that welcomes high-quality, theoretically informed publications in the multi- and interdisciplinary fields of Chemistry, Biology, Physics, Mathematics, Pharmacy, Medicine, Engineering, Agriculture and Education in Science. Researchers from all countries are invited to publish in its pages. The journal is committed to achieving broad international appeal, attracting contributions and addressing issues from a range of disciplines. Periódico Tchê Química is a double-blind peer-reviewed journal dedicated to expressing views on the covered topics, thereby generating a cross-current of ideas on emerging matters.
Preprint
Onkologia i Radioterapia is an international peer-reviewed journal that publishes both clinical and pre-clinical research related to cancer, and provides the latest information in the fields of oncology and radiotherapy to clinical practitioners as well as basic researchers. Submissions can be made through the online submission system (Editorial Manager) or by email as an attachment to the journal office; for any issue, the journal office can be contacted by email or phone for prompt resolution. Onkologia i Radioterapia is a peer-reviewed, Scopus-indexed medical journal publishing original scientific articles (experimental, clinical and laboratory), reviews and case reports in the field of oncology and radiotherapy. In addition, it publishes letters to the Editorial Board, reports on scientific conferences, book reviews, and announcements of planned congresses and scientific meetings. The journal appears four times a year. All articles published with www.itmedical.pl and www.medicalproject.com.pl are now available on our new website.
Preprint
The journal is published every quarter and contains 200 pages in each issue. It is devoted to the study of Indian economy, polity and society. Research papers, review articles, book reviews are published in the journal. All research papers published in the journal are subject to an intensive refereeing process. Each issue of the journal also includes a section on documentation, which reproduces extensive excerpts of relevant reports of committees, working groups, task forces, etc., which may not be readily accessible, official documents compiled from scattered electronic and/or other sources and statistical supplement for ready reference of the readers. It is now in its nineteenth year of publication. So far, five special issues have been brought out, namely: (i) The Scheduled Castes: An Inter-Regional Perspective, (ii) Political Parties and Elections in Indian States : 1990-2003, (iii) Child Labour, (iv) World Trade Organisation Agreements, and (v) Basel-II and Indian Banks.
Article
We address the issue of tracking moving objects in an environment covered by multiple uncalibrated cameras with overlapping fields of view, typical of most surveillance setups. In such a scenario, it is essential to establish correspondence between tracks of the same object seen in different cameras to recover complete information about the object. We call this the problem of consistent labeling of objects seen in multiple cameras. We employ a novel approach of finding the limits of the field of view (FOV) of each camera as visible in the other cameras. We show that, if the FOV lines are known, it is possible to disambiguate between multiple possibilities for correspondence. We present a method to automatically recover these lines by observing motion in the environment. Furthermore, once these lines are initialized, the homography between the views can also be recovered. We present results on indoor and outdoor sequences containing persons and vehicles.
Article
Occlusion and lack of visibility in crowded and cluttered scenes make it difficult to track individual people correctly and consistently, particularly in a single view. We present a multi-view approach to solving this problem. In our approach we neither detect nor track objects from any single camera or camera pair; rather, evidence is gathered from all the cameras into a synergistic framework, and detection and tracking results are propagated back to each view. Unlike other multi-view approaches that require fully calibrated views, our approach is purely image-based and uses only 2D constructs. To this end we develop a planar homographic occupancy constraint that fuses foreground likelihood information from multiple views to resolve occlusions and localize people on a reference scene plane. For greater robustness, this process is extended to multiple planes parallel to the reference plane in the framework of plane-to-plane homologies. Our fusion methodology also models scene clutter using the Schmieder and Weathersby clutter measure, which acts as a confidence prior to assign higher fusion weight to views with less clutter. Detection and tracking are performed simultaneously by graph-cuts segmentation of tracks in the space-time occupancy likelihood data. Experimental results, with detailed qualitative and quantitative analysis, are demonstrated in challenging multi-view, crowded scenes.
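The core fusion step described above can be sketched compactly: warp each camera's foreground-likelihood map onto the reference plane with its homography, then combine the warped maps multiplicatively so that plane occupancy needs support from every view. The sketch below is a minimal numpy illustration under simplifying assumptions (nearest-neighbour sampling, known homographies, no clutter prior or graph-cuts step); all function names are illustrative, not the paper's implementation.

```python
import numpy as np

def warp_to_reference(like, H, out_shape):
    # Nearest-neighbour inverse warp of a foreground-likelihood map onto
    # the reference plane: for each reference pixel p, sample `like` at H @ p.
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    q = H @ pts
    qx = np.round(q[0] / q[2]).astype(int)
    qy = np.round(q[1] / q[2]).astype(int)
    ok = (qx >= 0) & (qx < like.shape[1]) & (qy >= 0) & (qy < like.shape[0])
    out = np.zeros(h * w)
    out[ok] = like[qy[ok], qx[ok]]
    return out.reshape(h, w)

def fuse(likelihoods, homographies, out_shape):
    # Product fusion: a plane cell is occupied only if every view supports it.
    acc = np.ones(out_shape)
    for like, H in zip(likelihoods, homographies):
        acc *= warp_to_reference(like, H, out_shape)
    return acc
```

Because the fusion is a product, a person occluded in one view (likelihood near zero there) is suppressed on the plane unless that view's evidence is genuinely missing, which is why the paper extends the construction to multiple parallel planes.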
Article
In this paper, we have derived analytic expressions for the phase correlation of downsampled images. We have shown that for downsampled images the signal power in the phase correlation is not concentrated in a single peak, but rather in several coherent peaks mostly adjacent to each other. These coherent peaks correspond to the polyphase transform of a filtered unit impulse centered at the point of registration. The analytic results provide a closed-form solution to subpixel translation estimation, and are used for detailed error analysis. Excellent results have been obtained for subpixel translation estimation of images of different nature and across different spectral bands.
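As a concrete illustration of the basic mechanism behind phase correlation, here is a minimal numpy sketch that recovers a translation to integer-pixel accuracy; the paper's actual contribution, the closed-form subpixel analysis of the coherent peaks in downsampled images, is not implemented here, and the function name is illustrative.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the circular shift d such that b = roll(a, d), to integer pixels."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    # Cross-power spectrum, normalized so that only phase information remains.
    R = np.conj(A) * B
    R /= np.maximum(np.abs(R), 1e-12)
    corr = np.fft.ifft2(R).real
    # The impulse peak sits at the translation; wrap indices to signed shifts.
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = a.shape
    return (int(dy - h) if dy > h // 2 else int(dy),
            int(dx - w) if dx > w // 2 else int(dx))
```

For downsampled images the single sharp peak spreads into several adjacent coherent peaks, and the paper shows how these peaks admit a closed-form subpixel translation estimate.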
Conference Paper
There have been important recent advances in object recognition through the matching of invariant local image features. However, the existing approaches are based on matching to individual training images. This paper presents a method for combining multiple images of a 3D object into a single model representation. This provides for recognition of 3D objects from any viewpoint, the generalization of models to non-rigid changes, and improved robustness through the combination of features acquired under a range of imaging conditions. The decision of whether to cluster a training image into an existing view representation or to treat it as a new view is based on the geometric accuracy of the match to previous model views. A new probabilistic model is developed to reduce the false positive matches that would otherwise arise due to loosened geometric constraints on matching 3D and non-rigid models. A system has been developed based on these approaches that is able to robustly recognize 3D objects in cluttered natural images in sub-second times.
Article
A method for generating enhanced resolution radar images of the Earth's surface using spaceborne scatterometry is presented. The technique is based on an image reconstruction technique that takes advantage of the spatial overlap in scatterometer measurements made at different times to provide enhanced imaging resolution. The reconstruction algorithm is described, and the technique is demonstrated using both simulated and actual Seasat-A Scatterometer (SASS) measurements. The technique can also be used with ERS-1 scatterometer data. The SASS-derived images, which have approximately 4-km resolution, illustrate the resolution enhancement capability of the technique, which permits utilization of both historic and contemporary scatterometer data for medium-scale monitoring of vegetation and polar ice. The tradeoff between imaging noise and resolution inherent in the technique is discussed.
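The principle behind this abstract, that many overlapping coarse measurements of the same scene jointly constrain a finer grid, can be shown with a toy 1D example. The sketch below builds the averaging (footprint) matrix and inverts it by least squares; this is only an illustration of the overlap idea, not the paper's actual iterative reconstruction algorithm or its treatment of the noise/resolution trade-off, and the function name is invented for the example.

```python
import numpy as np

def overlap_reconstruct(footprints, values, n):
    # Each measurement is the mean of the fine-grid cells inside one coarse,
    # overlapping footprint. Stack those averaging rows into a matrix and
    # recover the fine grid by least squares.
    A = np.zeros((len(footprints), n))
    for r, idx in enumerate(footprints):
        A[r, idx] = 1.0 / len(idx)
    x, *_ = np.linalg.lstsq(A, np.asarray(values, float), rcond=None)
    return x
```

The overlap is what makes this work: non-overlapping footprints would leave the fine grid underdetermined, whereas shifted, overlapping footprints (as in repeated scatterometer passes) add independent constraints.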
Article
Monitoring of large sites requires coordination between multiple cameras, which in turn requires methods for relating events between distributed cameras. This paper tackles the problem of automatic external calibration of multiple cameras in an extended scene, that is, full recovery of their 3D relative positions and orientations. Because the cameras are placed far apart, brightness or proximity constraints cannot be used to match static features, so we instead apply planar geometric constraints to moving objects tracked throughout the scene. By robustly matching and fitting tracked objects to a planar model, we align the scene's ground plane across multiple views and decompose the planar alignment matrix to recover the 3D relative camera and ground plane positions. We demonstrate this technique in both a controlled lab setting where we test the effects of errors in the intrinsic camera parameters, and in an uncontrolled, outdoor setting. In the latter, we do not assume synchronized cameras and we show that enforcing geometric constraints enables us to align the tracking data in time. In spite of noise in the intrinsic camera parameters and in the image data, the system successfully transforms multiple views of the scene's ground plane to an overhead view and recovers the relative 3D camera and ground plane positions
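The planar alignment at the heart of this approach reduces to estimating a 3x3 homography from matched point tracks on the ground plane. Below is a minimal numpy sketch of the standard DLT (Direct Linear Transform) estimator; it is illustrative only, since the paper additionally performs robust matching of tracked objects and decomposes the alignment matrix to recover 3D camera and ground-plane positions, neither of which is shown here.

```python
import numpy as np

def dlt_homography(src, dst):
    # Estimate H with dst ~ H @ src (homogeneous) from >= 4 correspondences;
    # rows of src/dst are [x, y]. Each correspondence contributes two rows
    # of the linear system A h = 0, solved via the SVD null vector.
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the projective scale
```

In practice the correspondences come from noisy tracks, so a robust wrapper (e.g. RANSAC over track points) is used before trusting the decomposition of H.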
Article
We demonstrate that phase accuracy is extremely important in image processing filters and express the hope that more work will be done on the development of filter design techniques which use phase as well as magnitude specifications.
Long, Hardin, and Whiting, "Resolution enhancement of spaceborne scatterometer data," IEEE Trans. Geosci. Remote Sensing, vol. 31, pp. 700–715, May 1993.