ABSTRACT: RNA interference (RNAi) is considered one of the most powerful genomic tools, enabling drug discovery and the study of complex cellular processes through high-content screens. This field, the subject of the 2006 Nobel Prize in Physiology or Medicine, has drastically changed conventional methods of gene analysis. RNAi experiments produce a large number of images. Even though a number of capable special-purpose methods have recently been proposed for processing RNAi images, there is no customized compression scheme for them; hence, efficient tools are required to compress these images. In this paper, we propose a new efficient lossless compression scheme for RNAi images. A new predictor specifically designed for these images is introduced. It is shown that pixels can be classified into three categories based on their intensity distributions. Using this classification, which relies on intensity fluctuations among a pixel's neighbors, a context-based method is designed. Comparisons of the proposed method with existing state-of-the-art lossless compression standards and well-known general-purpose methods demonstrate its efficiency.
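The paper's predictor is tailored to RNAi images and is not reproduced here; as a generic illustration of neighborhood-based prediction for lossless image coding, the following sketch implements the well-known Median Edge Detector (MED) predictor from JPEG-LS, which likewise chooses its estimate from a pixel's causal neighbors:

```python
def med_predict(left, above, above_left):
    """Median Edge Detector (MED) predictor, as used in JPEG-LS.

    Near a horizontal or vertical edge it picks the min or max of
    the left/above neighbors; otherwise it returns the planar
    estimate left + above - above_left. The encoder then codes
    only the (typically small) residual pixel - prediction.
    """
    if above_left >= max(left, above):
        return min(left, above)
    if above_left <= min(left, above):
        return max(left, above)
    return left + above - above_left
```

A context-based coder in the spirit of the abstract would then switch among residual models according to how strongly these neighbors fluctuate.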
IEEE Journal of Biomedical and Health Informatics. 03/2013; 17(2):259-68.
ABSTRACT: The compressive sensing (CS) paradigm uses simultaneous sensing and compression to provide an efficient image-acquisition technique. The main advantages of the CS method are high-resolution imaging using low-resolution sensor arrays and faster image acquisition. Since the imaging philosophy of CS imagers differs from that of conventional imaging systems, new physical structures have been developed for cameras that use the CS technique. In this paper, a review of different hardware implementations of CS encoding in the optical and electrical domains is presented. Considering recent advances in CMOS (complementary metal-oxide-semiconductor) technologies and the feasibility of on-chip signal processing, important practical issues in the implementation of CS in CMOS sensors are emphasized. In addition, CS coding for video capture is discussed.
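The CS encoding step reviewed above amounts to taking m random projections of an n-sample scene with m < n. The sketch below illustrates this with random ±1 (Bernoulli) measurement rows, a pattern chosen because it maps naturally onto a programmable hardware mask; the specific matrix and parameters are illustrative assumptions, not tied to any particular imager in the survey:

```python
import random

def cs_measure(x, m, seed=0):
    """Compressive acquisition: m random projections of signal x.

    Each measurement y_i = <phi_i, x>, where phi_i is a random
    +/-1 row. With m < len(x), sensing and compression happen in
    a single step; a sparse x is later recovered by an external
    reconstruction algorithm (not shown).
    """
    rng = random.Random(seed)  # fixed seed: encoder/decoder share the mask
    y = []
    for _ in range(m):
        phi = [rng.choice((-1, 1)) for _ in range(len(x))]
        y.append(sum(p * v for p, v in zip(phi, x)))
    return y

signal = [0.0] * 64
signal[5], signal[40] = 3.0, -1.5   # a 2-sparse scene
measurements = cs_measure(signal, m=16)  # 16 samples instead of 64
```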
ABSTRACT: Determining an object's location in a specific region is an important task in many machine vision applications. Different parameters affect the accuracy of the localization process. The quantization process in the charge-coupled device of a camera is one source of error; it permits only an estimate, rather than identification, of the exact position of the observed object. A cluster of points in the field of view of a camera is mapped onto a single pixel. These points form an uncertainty region. In this paper, we present a geometrical model that analyzes the volume of this uncertainty region as a criterion for object-localization error. The proposed approach models the field of view of each pixel as an oblique cone. The uncertainty region is formed by the intersection of two cones, each emanating from one of the two cameras. Because of the complexity of modeling the intersection of two oblique cones, we propose three methods to simplify the problem. In the first two methods, only four lines are used. Each line goes through the camera's lens, modeled as a pinhole, and then passes through one of the four vertices of a square fitted around the circular pixel. The first proposed method projects all points of these four lines onto an image plane. The second method uses line-cone intersections instead of the intersection of two cones; by applying line-cone intersections, the boundary points of the intersection of the two cones are determined. In the third approach, the extremum points of the intersection of two cones are determined by the Lagrangian method. The validity of our methods is verified through extensive simulations. In addition, we analyze the effects of parameters such as the baseline length, focal length, and pixel size on the estimation error.
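The four-line simplification described above can be sketched as follows. Assuming a pinhole at the origin and an image plane at distance `focal_length` (coordinate conventions are my assumption, not the paper's), each of the four rays passes through one vertex of the square fitted around the circular pixel:

```python
def pixel_corner_rays(cx, cy, pixel_size, focal_length):
    """Replace a pixel's oblique viewing cone by four rays.

    Each ray starts at the pinhole (origin) and passes through one
    vertex of a square of side `pixel_size` fitted around the
    pixel centered at (cx, cy) on the image plane z = focal_length.
    Returns the four direction vectors (x, y, f); intersecting
    these rays from two cameras bounds the uncertainty region.
    """
    h = pixel_size / 2.0
    corners = [(cx - h, cy - h), (cx - h, cy + h),
               (cx + h, cy - h), (cx + h, cy + h)]
    return [(x, y, focal_length) for x, y in corners]
```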
ABSTRACT: Visual surveillance of a designated air space can be achieved by a randomly distributed camera sensor network spread over a large area. The location and field of view of each battery-operated sensor, after a calibration phase, are known to a central processing node. To increase the lifetime of the network, the density of distributed sensors can be such that a subset of sensors covers the required air space. As a sensor dies, another sensor should be selected to compensate for the dead one and reestablish complete coverage. This process should continue until complete coverage is no longer achievable with the remaining sensors; thereafter, a graceful degradation of coverage is desirable.
The goal is to extend the lifetime of the network while maintaining the maximum possible coverage of the designated air space. Since selecting a subset of sensors for complete coverage of the target area is an NP-complete problem, we present a number of heuristics for this case. The proposed methods fall into two groups. In the first, sensors are prioritized based on their visual and communication properties, and selection is performed according to the prioritizing function. In the second, we apply traditional evolutionary and swarm-intelligence algorithms. The performance of the proposed methods is evaluated through extensive simulations.
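One plausible prioritizing heuristic for this NP-complete selection problem (the classic greedy set-cover rule, used here as an illustration rather than the paper's exact ranking function) repeatedly activates the sensor covering the most still-uncovered points:

```python
def greedy_cover(coverage, targets):
    """Greedy sensor selection for area coverage.

    `coverage` maps each sensor id to the set of target points it
    can see; `targets` is the set of points that must be covered.
    Sensors are activated one at a time, always choosing the one
    that covers the most still-uncovered points, until coverage is
    complete or no sensor adds anything (graceful degradation).
    Returns (active sensors, points left uncovered).
    """
    coverage = {s: set(pts) for s, pts in coverage.items()}  # don't mutate caller's data
    uncovered = set(targets)
    active = []
    while uncovered:
        best = max(coverage, key=lambda s: len(coverage[s] & uncovered),
                   default=None)
        if best is None or not coverage[best] & uncovered:
            break  # complete coverage no longer achievable
        active.append(best)
        uncovered -= coverage.pop(best)
    return active, uncovered
```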
Journal of Network and Computer Applications 01/2013; 36(1):409–419. · 1.47 Impact Factor
ABSTRACT: The population of seniors is growing in most countries, and many of them live alone at home. Falls are among the most dangerous events that often happen and may require immediate medical care. Automatic fall-detection systems could help elderly people and patients live independently. Vision-based systems have advantages over wearable devices. These visual systems extract features from video sequences and classify fall and normal activities, but the features usually depend on the camera's view direction. Using several cameras to solve this problem increases the complexity of the final system. In this paper, we propose to use variations in the silhouette area obtained from only one camera. We use a simple background-separation method to find the silhouette, and we show that the proposed feature is view-invariant. The extracted feature is fed into a support vector machine for classification. Simulation of the proposed method on a publicly available dataset shows promising results.
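The feature pipeline described above can be sketched minimally. Assuming grayscale frames flattened to lists and a fixed difference threshold (both my simplifications; the paper's background-separation method is not specified here), the silhouette area and its frame-to-frame variation are:

```python
def silhouette_area(frame, background, threshold=25):
    """Simple background separation: count pixels that differ from
    the background model by more than `threshold`. The count is
    the area of the person's silhouette in this frame."""
    return sum(1 for f, b in zip(frame, background)
               if abs(f - b) > threshold)

def area_variation(areas):
    """Frame-to-frame change of the silhouette area. This is the
    view-invariant feature of the abstract: it stays small during
    normal activities and spikes during a fall, and the resulting
    sequence is what gets fed to the SVM classifier."""
    return [abs(a2 - a1) for a1, a2 in zip(areas, areas[1:])]
```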
IEEE Transactions on Biomedical Engineering 11/2012; · 2.15 Impact Factor
ABSTRACT: The population of seniors living alone is growing in most countries. Surveillance systems help them stay at home and reduce the burden on the healthcare system. Automatic visual surveillance systems have advantages over wearable devices: they extract features from video sequences and use them for event classification. However, these features depend on the position of the cameras relative to the person, so multiple cameras are needed for higher accuracy, which increases cost and complexity. In this paper, we propose using the silhouette area combined with the inclination angle as robust features that can be measured with only one camera in an arbitrary direction. Through rigorous simulations on a publicly available dataset, the error rate of the system is found to be less than 1%.
IEEE International Conference on Multimedia and Expo; 01/2012
ABSTRACT: Block-based motion estimation (ME) is the most computationally intensive part of a video encoder. In this paper, a new genetic-based search algorithm is proposed to speed up the process of finding the best matching block. In the proposed method, the fitness function is compared with a predefined threshold to stop performing unnecessary computations. The initial population of the genetic algorithm is selected based on the spatial and temporal correlation among motion vectors, and the algorithm exploits the co-directionality among neighboring vectors. Since many blocks in a frame can be regarded as stationary or quasi-stationary, the initial population includes such blocks. Simulation results on popular test video sequences indicate higher performance of the proposed algorithm compared to the latest fast motion-estimation algorithm.
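The early-termination idea in the abstract, comparing the cost against a threshold to avoid unnecessary computation, can be illustrated with the standard sum-of-absolute-differences block cost (the genetic search itself is omitted; blocks are flattened to lists here for simplicity):

```python
def sad(block_a, block_b, stop_at=None):
    """Sum of absolute differences between two blocks, with early
    termination: the candidate is abandoned as soon as its partial
    cost exceeds `stop_at` (the best cost found so far, or a
    predefined threshold), returning None instead of finishing
    the pixel-by-pixel accumulation."""
    total = 0
    for a, b in zip(block_a, block_b):
        total += abs(a - b)
        if stop_at is not None and total > stop_at:
            return None  # candidate rejected early
    return total
```

In a genetic ME loop, `stop_at` would be the fitness of the best individual so far, so most candidate blocks are discarded after only a few pixels.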
Iranian Conference on Electrical Engineering (ICEE); 01/2012
ABSTRACT: As concerns about copyright protection have grown among multimedia owners in recent years, many watermarking algorithms have been proposed to protect the copyright of digital images. These methods are either spatial-domain or frequency-domain techniques. It is essential for a watermarking method to have acceptable robustness, which is why many existing methods try to improve their robustness against signal-processing modifications. In this paper, a block-based watermarking scheme is proposed that embeds a binary logo into the Contourlet coefficients of an image. To increase robustness, embedding is done at two scales, and the watermark is inserted into the DCT coefficients of the Contourlet blocks to diffuse the effects of the embedding throughout the coefficients. Experimental results, and a comparison with a robust Contourlet-domain method, show that the proposed scheme has better robustness against some image-processing attacks while improving fidelity. Furthermore, the proposed algorithm has the advantage of a blind extraction phase.
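To make the "blind extraction" property concrete, here is a common per-coefficient embedding rule, quantization index modulation (QIM), shown as a generic illustration and not the paper's exact formula; `step` is an assumed quantization step that trades robustness against fidelity:

```python
def embed_bit(coeff, bit, step=8.0):
    """Quantization index modulation on a single transform
    coefficient: quantize to a multiple of `step`, then offset by
    step/2 when the watermark bit is 1. Larger `step` survives
    stronger attacks but distorts the image more."""
    q = round(coeff / step) * step
    return q + (step / 2.0 if bit else 0.0)

def extract_bit(coeff, step=8.0):
    """Blind extraction: the bit is recovered from the
    coefficient's position within its quantization cell, with no
    need for the original (unwatermarked) image."""
    return int(round((coeff % step) / (step / 2.0))) % 2
```

In the scheme described above, such a rule would be applied to selected DCT coefficients of each Contourlet block, at both embedding scales.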
International Conference on Communications (Workshop); 01/2012