Available online at www.ijarbest.com
International Journal of Advanced Research in Biology, Ecology, Science and Technology (IJARBEST)
Vol. 1, Issue 1, April 2015
17
All Rights Reserved © 2015 IJARBEST
Reconstruction of Objects with VSN
M.Priscilla1, B.Nandhini2, S.Manju3, S.Shafiqa Shalaysha4, Christo Ananth5
U.G.Scholars, Department of ECE, Francis Xavier Engineering College, Tirunelveli1,2,3,4
Assistant Professor, Department of ECE, Francis Xavier Engineering College, Tirunelveli 5
Abstract — In this object reconstruction scheme with feature distribution, the images received from the nodes must be processed efficiently to reconstruct the image and respond to user queries. Object matching methods form the foundation of many state-of-the-art algorithms; therefore, this feature distribution scheme can be applied directly to several state-of-the-art matching methods with little or no adaptation. The remaining challenge lies in mapping state-of-the-art matching and reconstruction methods onto such a distributed framework. When the user submits a query, the reconstructed scenes can be converted into a video file format and displayed as a video. This work can be brought to real time by implementing the code on the server side or on a mobile phone that communicates with several nodes to collect images and objects, and it can then be tested in real time against user query results.
Index Terms — Computer vision, Object Reconstruction, Visual Sensor Networks.
I. INTRODUCTION
Visual sensor networks are envisioned as
distributed and autonomous systems, where cameras
collaborate and, based on exchanged information, reason
autonomously about the captured event and decide how to
proceed. Through collaboration, the cameras relate the
events captured in the images, and they enhance their
understanding of the environment. Similar to wireless
sensor networks, visual sensor networks should be data-
centric, where captured events are described by their names
and attributes. Communication between cameras should be
based on some uniform ontology for the description of the
event and interpretation of the scene dynamics. The main
goals of coverage optimization algorithms are to preserve
coverage in case of sensor failure and to save energy by
putting redundant sensor nodes to sleep. Choosing which
nodes to put in sleeping or active mode should be done
carefully to prolong the network lifetime, preserve coverage
and connectivity, and perform the task at hand. However,
when camera sensors are involved, three-dimensional
coverage of space is required, which increases the
complexity of the coverage issue. Coverage of networked
cameras can be simplified by assuming that the cameras
have a fixed focal length lens, are mounted on the same
plane, and are monitoring a parallel plane.
Visual data collected by camera nodes should be processed and all or only the relevant data streamed to the base station (BS). It is
largely agreed that streaming all the data is impractical due
to the severe energy and bandwidth constraints of WSNs.
And since processing costs are significantly lower than
communication costs, it makes sense to reduce the size of
data before sending it to the BS. However, visual data
processing can be computationally expensive.
Reliable data transmission is an issue that is more
crucial for VSNs than for conventional scalar sensor
networks. While scalar sensor networks can rely on
redundant sensor readings through spatial redundancy in the
deployment of sensor nodes to compensate for occasional
losses of sensor measurements, this solution is impractical
for VSNs, which are characterized by higher cost and larger
data traffic. Moreover, most reliable transmission protocols
proposed for conventional scalar data WSNs are based on
link layer acknowledgment messages and retransmissions.
They are therefore not suitable for visual data transmission, which has stringent bandwidth and delay requirements.
In redundantly deployed visual sensor networks a
subset of cameras can perform continuous monitoring and
provide information with a desired quality. This subset of
active cameras can be changed over time, which enables
balancing of the cameras' energy consumption, while
spreading the monitoring task among the cameras. In such a
scenario the decision about the camera nodes' activity and
the duration of their activity is based on sensor management
policies. Sensor management policies define the selection
and scheduling of the camera nodes' activity in such a way
that the visual information from selected cameras satisfies
the application-specified requirements while the use of
camera resources is minimized.
In visual sensor networks, sensor management
policies are needed to assure balance between the oftentimes
opposite requirements imposed by the wireless networking
and vision processing tasks. While reducing energy
consumption by limiting data transmissions is the primary
challenge of energy-constrained visual sensor networks, the
quality of the image data and application QoS improve as the
network provides more data. In such an environment, the
optimization methods for sensor management developed
for wireless sensor networks are oftentimes hard to directly
apply to visual sensor networks. Such sensor management
policies usually do not consider the event-driven nature of visual sensor networks, nor do they consider the unpredictability of data traffic caused by event detection.
The main objective of [1] is a hierarchical feature distribution scheme for object recognition. The paper builds on the principle that each individual node in the network holds only a small amount of information about
the objects seen by the network; this small amount is sufficient to route queries efficiently through the network. Computer vision algorithms must process, store or transmit large amounts of data, which demands enormous network resources. Because node memory is restricted and data transmission is costly, the visual data must be distributed in a more efficient way.
The methodology used in this paper is a hierarchical feature distribution scheme. The scheme has to fulfill some additional requirements regarding feature abstraction, feature storage space, the similarity measure and convergence. The visual sensor that originally sees the unknown object retains complete information about that object. The hierarchical feature distribution scheme results in a lower total network traffic load and does not affect the performance of the tested computer vision methods; its drawback is that computer vision methods operate on large amounts of data, which can overload a communication-constrained distributed network.
The objective of [2] is to study the classical problem of object recognition in low-power, low-bandwidth distributed camera networks. The paper proposes an effective framework for distributed object recognition using smart cameras and a computer at the base station. The base station uses multiple decoding schemes to recover multiple-view object features based on distributed compressive sensing theory. Object recognition is a well-studied problem in computer vision, and distributed object recognition has mainly been pursued in two directions. First, when multiple images share a set of common visual features, correspondence can be established across camera views. Second, when the camera sensors lack the communication resources to stream the high-dimensional visual features among camera views and perform feature matching, distributed data compression can be used to encode and transmit the features.
The methodology used here is random projection, in which a projection function encodes histogram vectors in a lower-dimensional space, and multiple decoding schemes recover the multiple-view object features based on distributed compressive sensing theory. Such a distributed object recognition system is suitable for band-limited camera sensor networks. Random projection has gained much attention in applications where prior information about the source data and the computational power of the sensor modalities are limited. In this paper a reduction function is used to compress the high-dimensional histograms. The main disadvantage is that the system recognizes nearby objects but not distant ones.
In [3], the authors proposed a high-resolution image reconstruction algorithm that accounts for inaccurate subpixel registration. They use a multichannel image reconstruction algorithm suited to multiframe environments. The proposed algorithm is robust against registration error noise and does not require any prior information about the original image. An iterative reconstruction algorithm is adopted to determine the regularization parameter and to reconstruct the image. Inaccurate pixel registration gives rise to an ill-posedness problem, and the attainable image resolution is limited by physical constraints; subpixel-level motion estimation is itself a very difficult problem. Multichannel image deconvolution approaches are particularly well suited to multiframe environments. The problem of estimating a high-resolution image from low-resolution images is ill-posed, since many solutions satisfy the constraints of the observation model. A well-posed problem is formulated using the regularized multichannel image deconvolution technique, with a set of theoretical approaches used to obtain the solution; prior knowledge about the image is assumed, which restricts the solution space. The result is a significant increase in the high-frequency detail of the reconstructed image, especially when compared with bicubic interpolation and the conventional approach.
The proposed algorithm is robust and insensitive to registration error noise, and it does not require any prior information about the original image or the registration error process.
The objective of [4] is to capture a short video sequence scanning a certain object and utilize information from multiple frames to improve the chance of a successful match in the database. Object instance matching is a cornerstone component in applications such as image search, augmented reality and unsupervised tagging. The common flow in these applications is to take an input image and match it against a database of previously enrolled images of objects of interest. Capturing an image corresponding to an object already present in the database is difficult, especially for 3D objects. The methodology used in this paper is an object matching algorithm with two techniques: (1) object instance matching and (2) keypoint filtering. Object instance matching incorporates keypoints from previous frames, up to a maximum time window, into the matching. Keypoint filtering uses two filtering schemes: the first selects and propagates keypoints from the previous frame, and the second compares candidate images against the highest-scoring match according to a threshold on the database matching score.
This object matching algorithm improves object instance identification by exploiting time-sequence information when matching objects captured in a video against a database of images. The drawbacks of this approach are that performance degrades with larger sets and that it incurs additional space and computational complexity.
The main objective of [5] is to reconstruct complete object trajectories across the fields of view of multiple semi-overlapping cameras. Reconstructing global trajectories across multiple fields of view requires fusing multiple simultaneous views of the same object as well as linking trajectory fragments captured by individual sensors.
In this paper a global trajectory reconstruction algorithm is used. This approach uses segments of
trajectories generated by individual cameras and performs
trajectory association and fusion on a common ground
plane. The association and fusion identifies fragments of
transformed trajectories generated by each object. These
fragments are fused and connected across the fields of view
using temporal consistency and object identity. The
proposed approach does not require any inter-camera
handoff procedure when objects enter or exit the
fields-of-view of each individual sensor. Local trajectory
segments are extracted from each sensor and those
corresponding to the same object are first associated and
then fused. Then, a spatiotemporal linkage procedure is
applied to connect the fused segments in order to obtain the
global complete trajectories across the distributed setup.
The results of trajectory association show that matching performance is related to the segment length, which in turn depends on tracking performance and on the ground-plane transformation, since both affect the accuracy of the objects' attributes. The main drawbacks of this approach are tracking failures, transformation errors, and trajectory metadata from individual sensors that may be corrupted by errors and inaccuracies caused by noise, object re-entrances and occlusions.
II. PROPOSED SYSTEM
So inorder to overcome the inefficient behaviour
of the existing method, in this paper a method has been
proposed which uses hierarchical feature distribution
scheme for object matching. In this, the information is
distributed hierarchically in such a way that each individual
node maintains only a small amount of information about the
objects seen by the network. This amount is sufficient to
efficiently route queries through the network without any
degradation of the matching performance. A set of
requirements that have to be fulfilled by the
object-matching method to be used in such a framework is
defined. Four requirements (abstraction, storage, existence
of a metric, convergence) are defined for the matching
method and thus provide an algorithm for the efficient
routing of the network queries during the matching. The
object matching and reconstruction is performed in the base
station. The proposed method is implemented in C++ and
QT and it works on linux environment. Object
Reconstruction can be used in mobile applications and is
open source. In mobile applications, this method is used to
view the reconstructed object through mobile at any time.
electronically for review.
In this project, a feature distribution framework is proposed for object matching. Each individual node maintains only a small amount of information about the objects seen by the network; nevertheless, this amount is sufficient to route queries efficiently through the network without any degradation of the matching performance. The images received from the nodes must be processed efficiently to reconstruct the image and respond to user queries. The proposed feature distribution scheme results in a far lower network traffic load. To achieve the same performance as the full distribution of feature vectors, a set of requirements regarding abstraction, storage space, the similarity metric and convergence must be satisfied; this work is implemented in C++.
III. RESULTS AND DISCUSSION
First, all the background images are collected and stored in a folder, and the coordinates (X1, X2, Y1, Y2) are noted for each background image in that folder. During file processing, the filename of each background image is parsed to find the node number of that background, which is stored in the database. Fig. 1 shows a sample background image.
Fig. 1. Background Image
A. RECEIVE FOREGROUND OBJECT
The foreground objects separated from the background images are collected and stored in a folder that does not contain the background images. The coordinates (X1, X2, Y1, Y2) are noted for each foreground object in that folder. During file processing, the filename of each foreground object is parsed to find the node number and frame number of that foreground, which are stored in the database. Fig. 2 shows a processed foreground object.
Fig. 2. Foreground Object
B. IMAGE STITCHING
The node number, frame number and coordinates of each foreground object in the background image are looked up in the database. The objects not yet stitched onto a background are taken first, and the corresponding node number of each object is found. Then the node's corresponding background
image is taken, stitched with the foreground object, and the result is stored in the database. Fig. 3 shows the foreground object stitched into the background image.
Fig. 3. Image Stitching
C. USER QUERY PROCESSING
When a user wants to know which foreground objects were present during a given period, the user enters a starting date (the from date), an ending date (the to date), and the node number from which the foreground objects should be shown. The server retrieves the matching background and foreground objects from the database and displays them to the user. Fig. 4 shows the result of a user query, which retrieves the stitched image from the database and shows it to the user.
Fig. 4. User Query
IV. CONCLUSION
By this object reconstruction with a feature distribution scheme, the images received from the nodes must be processed efficiently to reconstruct the image and respond to user queries. Object matching methods form the foundation of many state-of-the-art algorithms; therefore, this feature distribution scheme can be applied directly to several state-of-the-art matching methods with little or no adaptation. The remaining challenge lies in mapping state-of-the-art matching and reconstruction methods onto such a distributed framework. When the user submits a query, the reconstructed scenes can be converted into a video file format and displayed as a video. This work can be brought to real time by implementing the code on the server side or on a mobile phone that communicates with several nodes to collect images and objects, and it can then be tested in real time against user query results.
REFERENCES
[1] Baumberg, "Reliable feature matching across widely separated views," in IEEE Int. Conf. Computer Vision and Pattern Recognition, 2000.
[2] Byrne, "Iterative image reconstruction algorithms based on cross-entropy minimization," IEEE Trans. Image Process., vol. 2, no. 1, pp. 96–103, Jan. 1993.
[3] Candès and Tao, "Near-optimal signal recovery from random projections: Universal encoding strategies?" IEEE Trans. Information Theory, vol. 52, no. 12, pp. 5406–5425, 2006.
[4] Candès, Romberg, and Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Information Theory, vol. 52, no. 2, pp. 489–509, Feb. 2006.
[5] Chen, Defrise, and Deconinck, "Symmetric phase-only matched filtering of Fourier-Mellin transforms for image registration and recognition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 16, no. 12, pp. 1156–1168, Dec. 1994.
[6] Chen, Varshney, and Arora, "Performance of mutual information similarity measure for registration of multitemporal remote sensing images," IEEE Trans. Geosci. Remote Sens., vol. 41, no. 11, pp. 2445–2454, Nov. 2003.
... The proposed optimized scheme outperforms the conventional schemes with respect to blocking probability. [25] discussed about Reconstruction of Objects with VSN. By this object reconstruction with feature distribution scheme, efficient processing has to be done on the images received from nodes to reconstruct the image and respond to user query. ...
Preprint
Full-text available
At the forefront of technological innovation and scholarly discourse, the Journal of Electrical Systems (JES) is a peer-reviewed publication dedicated to advancing the understanding and application of electrical systems, communication systems and information science. With a commitment to excellence, we provide a platform for researchers, academics, and professionals to contribute to the ever-evolving field of electrical engineering, communication technology and Information Systems. The mission of JES is to foster the exchange of knowledge and ideas in electrical and communication systems, promoting cutting-edge research and facilitating discussions that drive progress in the field. We aim to be a beacon for those seeking to explore, challenge, and revolutionize the way we harness, distribute, and utilize electrical energy and information systems..
... The proposed optimized scheme outperforms the conventional schemes with respect to blocking probability. [25] discussed about Reconstruction of Objects with VSN. By this object reconstruction with feature distribution scheme, efficient processing has to be done on the images received from nodes to reconstruct the image and respond to user query. ...
Preprint
Full-text available
The research on Quantum Networked Artificial Intelligence is at the intersection of Quantum Information Science (QIS), Artificial Intelligence, Soft Computing, Computational Intelligence, Machine Learning, Deep Learning, Optimization, Etc. It Touches On Many Important Parts Of Near-Term Quantum Computing And Noisy Intermediate-Scale Quantum (NISQ) Devices. The research on quantum artificial intelligence is grounded in theories, modelling, and significant studies on hybrid classical-quantum algorithms using classical simulations, IBM Q services, PennyLane, Google Cirq, D-Wave quantum annealer etc. So far, the research on quantum artificial intelligence has given us the building blocks to achieve quantum advantage to solve problems in combinatorial optimization, soft computing, deep learning, and machine learning much faster than traditional classical computing. Solving these problems is important for making quantum computing useful for noise-resistant large-scale applications. This makes it much easier to see the big picture and helps with cutting-edge research across the quantum stack, making it an important part of any QIS effort. Researchers — almost daily — are making advances in the engineering and scientific challenges to create practical quantum networks powered with artificial intelligence
... The proposed optimized scheme outperforms the conventional schemes with respect to blocking probability. [25] discussed about Reconstruction of Objects with VSN. By this object reconstruction with feature distribution scheme, efficient processing has to be done on the images received from nodes to reconstruct the image and respond to user query. ...
Preprint
Full-text available
JoWUA is an online peer-reviewed journal and aims to provide an international forum for researchers, professionals, and industrial practitioners on all topics related to wireless mobile networks, ubiquitous computing, and their dependable applications. JoWUA consists of high-quality technical manuscripts on advances in the state-of-the-art of wireless mobile networks, ubiquitous computing, and their dependable applications; both theoretical approaches and practical approaches are encouraged to submit. All published articles in JoWUA are freely accessible in this website because it is an open access journal. JoWUA has four issues (March, June, September, December) per year with special issues covering specific research areas by guest editors. The editorial board of JoWUA makes an effort for the increase in the quality of accepted articles compared to other competing journals..
... The proposed optimized scheme outperforms the conventional schemes with respect to blocking probability. [25] discussed about Reconstruction of Objects with VSN. By this object reconstruction with feature distribution scheme, efficient processing has to be done on the images received from nodes to reconstruct the image and respond to user query. ...
Preprint
Full-text available
Proceedings on Engineering Sciences examines new research and development at the engineering. It provides a common forum for both front line engineering as well as pioneering academic research. The journal's multidisciplinary approach draws from such fields as Automation, Automotive engineering, Business, Chemical engineering, Civil engineering, Control and system engineering, Electrical and electronic engineering, Electronics, Environmental engineering, Industrial and manufacturing engineering, Industrial management, Information and communication technology, Management and Accounting, Management and quality studies, Management Science and Operations Research, Materials engineering, Mechanical engineering, Mechanics of Materials, Mining and energy, Safety, Risk, Reliability, and Quality, Software engineering, Surveying and transport, Architecture and urban engineering.
... The proposed optimized scheme outperforms the conventional schemes with respect to blocking probability. [25] discussed about Reconstruction of Objects with VSN. By this object reconstruction with feature distribution scheme, efficient processing has to be done on the images received from nodes to reconstruct the image and respond to user query. ...
Preprint
Full-text available
Utilitas Mathematica Journal is a broad scope journal that publishes original research and review articles on all aspects of both pure and applied mathematics. This journal is the official publication of the Utilitas Mathematica Academy, Canada. It enjoys good reputation and popularity at international level in terms of research papers and distribution worldwide. Offers selected original research in Pure and Applied Mathematics and Statistics. UMJ coverage extends to Operations Research, Mathematical Economics, Mathematics Biology and Computer Science. Published in association with the Utilitas Mathematica Academy. The leadership of the Utilitas Mathematica Journal commits to strengthening our professional community by making it more just, equitable, diverse, and inclusive. We affirm that our mission, Promote the Practice and Profession of Statistics, can be realized only by fully embracing justice, equity, diversity, and inclusivity in all of our operations. Individuals embody many traits, so the leadership will work with the members of UMJ to create and sustain responsive, flourishing, and safe environments that support individual needs, stimulate intellectual growth, and promote professional advancement for all.
... The proposed optimized scheme outperforms the conventional schemes with respect to blocking probability. [25] discussed about Reconstruction of Objects with VSN. By this object reconstruction with feature distribution scheme, efficient processing has to be done on the images received from nodes to reconstruct the image and respond to user query. ...
Preprint
Full-text available
Most experts would consider this the biggest challenge. Quantum computers are extremely sensitive to noise and errors caused by interactions with their environment. This can cause errors to accumulate and degrade the quality of computation. Developing reliable error correction techniques is therefore essential for building practical quantum computers. While quantum computers have shown impressive performance for some tasks, they are still relatively small compared to classical computers. Scaling up quantum computers to hundreds or thousands of qubits while maintaining high levels of coherence and low error rates remains a major challenge. Developing high-quality quantum hardware, such as qubits and control electronics, is a major challenge. There are many different qubit technologies, each with its own strengths and weaknesses, and developing a scalable, fault-tolerant qubit technology is a major focus of research. Funding agencies, such as government agencies, are rising to the occasion to invest in tackling these quantum computing challenges. Researchers — almost daily — are making advances in the engineering and scientific challenges to create practical quantum computers.
... The proposed optimized scheme outperforms the conventional schemes with respect to blocking probability. [25] discussed about Reconstruction of Objects with VSN. By this object reconstruction with feature distribution scheme, efficient processing has to be done on the images received from nodes to reconstruct the image and respond to user query. ...
Preprint
Full-text available
It is no surprise that Quantum Computing will prove to be a big change for the world. The practical examples of quantum computing can prove to be a good substitute for traditional computing methods. Quantum computing can be applied to many concepts in today’s era when technology has grown by leaps and bounds. It has a wide beach of applications ranging from Cryptography, Climate Change and Weather Forecasting, Drug Development and Discovery, Financial Modeling, Artificial Intelligence, etc. Giant firms have already begun the process of quantum computing in the field of artificial intelligence. The search algorithms of today are mostly designed according to classical computing methods. While Comparing Quantum Computers with Data Mining with Other Counterpart Systems, we are able to understand its significance thereby applying new techniques to obtain new real-time results and solutions.
... The proposed optimized scheme outperforms the conventional schemes with respect to blocking probability. [25] discussed about Reconstruction of Objects with VSN. By this object reconstruction with feature distribution scheme, efficient processing has to be done on the images received from nodes to reconstruct the image and respond to user query. ...
Preprint
Full-text available
Published since 2004, Periódico Tchê Química (PQT) is a is a triannual (published every four months), international, fully peer-reviewed, and open-access Journal that welcomes high-quality theoretically informed publications in the multi and interdisciplinary fields of Chemistry, Biology, Physics, Mathematics, Pharmacy, Medicine, Engineering, Agriculture and Education in Science. Researchers from all countries are invited to publish on its pages. The Journal is committed to achieving a broad international appeal, attracting contributions, and addressing issues from a range of disciplines. The Periódico Tchê Química is a double-blind peer-review journal dedicated to express views on the covered topics, thereby generating a cross current of ideas on emerging matters.
... The proposed optimized scheme outperforms the conventional schemes with respect to blocking probability. [25] discussed about Reconstruction of Objects with VSN. By this object reconstruction with feature distribution scheme, efficient processing has to be done on the images received from nodes to reconstruct the image and respond to user query. ...
Preprint
Full-text available
Onkologia I Radioterapia is an international peer-reviewed journal which publishes both clinical and pre-clinical research related to cancer. The journal also provides the latest information in the field of oncology and radiotherapy to both clinical practitioners and basic researchers. Submissions for publication can be made through the online submission system (Editorial Manager) or by email as an attachment to the journal office. For any issue, the journal office can be contacted by email or phone for instant resolution. Onkologia I Radioterapia is a peer-reviewed, Scopus-indexed medical journal publishing original scientific articles (experimental, clinical, laboratory), reviews, and case studies (case reports) in the field of oncology and radiotherapy. In addition, it publishes letters to the Editorial Board, reports on scientific conferences, book reviews, and announcements about planned congresses and scientific meetings. Oncology and Radiotherapy appears four times a year. All articles published at www.itmedical.pl and www.medicalproject.com.pl are now available on our new website.
Preprint
Full-text available
The journal is published every quarter and contains 200 pages in each issue. It is devoted to the study of Indian economy, polity and society. Research papers, review articles, and book reviews are published in the journal. All research papers published in the journal are subject to an intensive refereeing process. Each issue of the journal also includes a section on documentation, which reproduces extensive excerpts of relevant reports of committees, working groups, task forces, etc., which may not be readily accessible, official documents compiled from scattered electronic and/or other sources, and a statistical supplement for ready reference of the readers. It is now in its nineteenth year of publication. So far, five special issues have been brought out, namely: (i) The Scheduled Castes: An Inter-Regional Perspective, (ii) Political Parties and Elections in Indian States: 1990-2003, (iii) Child Labour, (iv) World Trade Organisation Agreements, and (v) Basel-II and Indian Banks.
Article
Full-text available
The related problems of minimizing the functionals F(x) = αKL(y, Px) + (1−α)KL(p, x) and G(x) = αKL(Px, y) + (1−α)KL(x, p), respectively, over the set of vectors x ≥ 0 are considered. KL(a, b) is the cross-entropy (or Kullback-Leibler) distance between two nonnegative vectors a and b. Iterative algorithms for minimizing both functionals using the method of alternating projections are derived. A simultaneous version of the multiplicative algebraic reconstruction technique (MART) algorithm, called SMART, is introduced, and its convergence is proved.
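The simultaneous multiplicative update behind SMART can be sketched in a few lines of NumPy. This is a minimal illustration of the technique, not the paper's exact formulation: the toy system, variable names, and iteration count are our own.

```python
import numpy as np

def smart(P, y, n_iter=500):
    """Simultaneous MART (SMART): multiplicative updates that drive Px
    toward y, minimizing the Kullback-Leibler distance KL(y, Px) over x >= 0."""
    m, n = P.shape
    x = np.ones(n)                       # strictly positive starting point
    col_sums = P.sum(axis=0)             # per-component normalization s_j
    for _ in range(n_iter):
        ratio = y / (P @ x)              # y_i / (Px)_i
        # x_j <- x_j * prod_i ratio_i^(P_ij / s_j), computed in log space
        x *= np.exp((P.T @ np.log(ratio)) / col_sums)
    return x

# toy consistent system with a known nonnegative solution
P = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]])
x_true = np.array([2.0, 1.0])
x_hat = smart(P, P @ x_true)
```

Because the updates are multiplicative and the start is positive, the iterates stay in the nonnegative orthant automatically, which is the appeal of MART-style schemes over additive ones.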
Conference Paper
Full-text available
We present a robust method for automatically matching features in images corresponding to the same physical point on an object seen from two arbitrary viewpoints. Unlike conventional stereo matching approaches, we assume no prior knowledge about the relative camera positions and orientations. In fact, in our application this is the information we wish to determine from the image feature matches. Features are detected in two or more images and characterised using affine texture invariants. The problem of window effects is explicitly addressed by our method: our feature characterisation is invariant to linear transformations of the image data, including rotation, stretch and skew. The feature matching process is optimised for a structure-from-motion application where we wish to ignore unreliable matches at the expense of reducing the number of feature matches.
Article
Full-text available
Accurate registration of multitemporal remote sensing images is essential for various change detection applications. Mutual information has recently been used as a similarity measure for registration of medical images because of its generality and high accuracy. Its application in remote sensing is relatively new. There are a number of algorithms for the estimation of joint histograms to compute mutual information, but they may suffer from interpolation-induced artifacts under certain conditions. In this paper, we investigate the use of a new joint histogram estimation algorithm called generalized partial volume estimation (GPVE) for computing mutual information to register multitemporal remote sensing images. The experimental results show that higher order GPVE algorithms have the ability to significantly reduce interpolation-induced artifacts. In addition, mutual-information-based image registration performed using the GPVE algorithm produces better registration consistency than the other two popular similarity measures, namely, mean squared difference (MSD) and normalized cross correlation (NCC), used for the registration of multitemporal remote sensing images.
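The similarity measure underlying this approach can be illustrated with a minimal histogram-based mutual information estimator. Note this is the simple plug-in estimator the paper improves upon, not the GPVE algorithm itself; the bin count and image sizes are arbitrary choices for illustration.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Estimate mutual information between two equally shaped images
    from their joint intensity histogram (simple plug-in estimator)."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = hist / hist.sum()                   # joint distribution
    p_a = p_ab.sum(axis=1, keepdims=True)      # marginal of a
    p_b = p_ab.sum(axis=0, keepdims=True)      # marginal of b
    nz = p_ab > 0                              # avoid log(0)
    return float((p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
noise = rng.random((64, 64))
# an image shares far more information with itself than with noise,
# which is why MI peaks when two images are correctly aligned
assert mutual_information(img, img) > mutual_information(img, noise)
```

In registration, one image is transformed over a search space and the transform maximizing this measure is taken as the alignment; GPVE replaces the hard binning above with a smoother joint-histogram estimate to suppress interpolation artifacts.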
Article
Full-text available
Presents a new method to match a 2D image to a translated, rotated and scaled reference image. The approach consists of two steps: the calculation of a Fourier-Mellin invariant (FMI) descriptor for each image to be matched, and the matching of the FMI descriptors. The FMI descriptor is translation invariant, and represents rotation and scaling as translations in parameter space. The matching of the FMI descriptors is achieved using symmetric phase-only matched filtering (SPOMF). The performance of the FMI-SPOMF algorithm is the same or similar to that of phase-only matched filtering when dealing with image translations. The significant advantage of the new technique is its capability to match rotated and scaled images accurately and efficiently. The innovation is the application of SPOMF to the FMI descriptors, which guarantees high discriminating power and excellent robustness in the presence of noise. This paper describes the principle of the new method and its discrete implementation for either image detection problems or image registration problems. Practical results are presented for various applications in medical imaging, remote sensing, fingerprint recognition and multiobject identification
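The translation-handling core of this method, phase-only matched filtering, can be sketched with plain phase correlation. This sketch omits the Fourier-Mellin descriptor stage that gives the full method its rotation and scale invariance, and the test image and shift values are our own.

```python
import numpy as np

def phase_correlation(f, g):
    """Phase-only matched filtering for pure translations: the
    correlation surface has a sharp peak at the shift between f and g."""
    F, G = np.fft.fft2(f), np.fft.fft2(g)
    cross = F * np.conj(G)
    cross /= np.abs(cross) + 1e-12        # discard magnitude, keep phase only
    corr = np.fft.ifft2(cross).real
    return np.unravel_index(np.argmax(corr), corr.shape)

rng = np.random.default_rng(1)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(5, 9), axis=(0, 1))
peak = phase_correlation(shifted, img)    # recovers the (5, 9) shift
```

Normalizing away the spectral magnitude is what gives phase-only filtering its high discriminating power: the result is ideally a delta function at the true displacement, robust to illumination and noise.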
Article
This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes, f(t) = Σ_{τ∈T} f(τ)δ(t − τ), obeying |T| ≤ C_M·(log N)^{−1}·|Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1 − O(N^{−M}), f can be reconstructed exactly as the solution to the ℓ1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T|·log N). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1 − O(N^{−M}) would in general require a number of frequency samples at least proportional to |T|·log N. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples, provided that the number of jumps (discontinuities) obeys the condition above, by minimizing other convex functionals such as the total variation of f.
Article
Suppose we are given a vector f in a class F ⊂ ℝ^N, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the nth largest entry of the vector |f| (or of its coefficients in a fixed basis) obeys |f|(n) ≤ R·n^{−1/p}, where R > 0 and p > 0. Suppose that we take measurements y_k = ⟨f, X_k⟩, k = 1, ..., K, where the X_k are N-dimensional Gaussian vectors with independent standard normal entries. Then for each f obeying the decay estimate above for some 0 < p < 1 and with overwhelming probability, our reconstruction f#, defined as the solution to the constraints y_k = ⟨f#, X_k⟩ with minimal ℓ1 norm, obeys ‖f − f#‖_{ℓ2} ≤ C_p·R·(K/log N)^{−r}, r = 1/p − 1/2. There is a sense in which this result is optimal; it is generally impossible to obtain a higher accuracy from any set of K measurements whatsoever. The methodology extends to various other random measurement ensembles; for example, we show that similar results hold if one observes a few randomly sampled Fourier coefficients of f. In fact, the results are quite general and require only two hypotheses on the measurement ensemble which are detailed.
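The minimal-ℓ1 reconstruction from Gaussian measurements can be posed as a linear program by splitting the unknown into positive and negative parts. This is a toy sketch of the recovery principle, not the paper's numerical procedure; the problem sizes and names are our own.

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, y):
    """Basis pursuit: min ||x||_1 subject to Ax = y, as a linear
    program via the split x = u - v with u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                   # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])            # A(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(2)
n, m, k = 50, 25, 3                      # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))          # Gaussian measurement ensemble
x_hat = l1_recover(A, A @ x)             # recover from 25 measurements
```

With far fewer measurements than unknowns (25 versus 50 here), the ℓ1 solution recovers the 3-sparse signal exactly, which is precisely the phenomenon the result above quantifies.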