Multimedia and Imaging Databases
... Consequently, a compact and reliable shape representation and a well-suited similarity distance are necessary. A useful shape description should be invariant to translation, rotation, scaling and starting-point transformations [10,11]. In general, shape representations are classified into two categories: boundary-based and region-based. ...
... Several shape description approaches have been developed in the two categories. For example, area, compactness and bounding box may be cited for the region-based category, and perimeter and curvature for the boundary-based category [10,11]. Other methods of shape description are based on Fourier theory and on moment theory. ...
... A set of 7 moments was identified by Hu, and is invariant to geometric changes. These moments are called invariant moments [10,11]. A simple method to represent a contour is the Freeman code (chain code), which encodes a closed shape by approximating the continuous contour with a sequence of numbers, each number corresponding to a segment direction. ...
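The Freeman chain code just described can be sketched in a few lines. The 8-direction numbering (0 = east, counter-clockwise) and the coordinate convention below are one common choice, not necessarily the exact convention of the cited papers.

```python
# Sketch of Freeman chain coding (8-connectivity). A contour is assumed to be
# an ordered list of adjacent (x, y) pixel coordinates of a closed shape.
DIRECTIONS = {
    (1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
    (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7,
}

def freeman_chain_code(contour):
    """Encode a closed contour as a sequence of direction numbers."""
    code = []
    n = len(contour)
    for i in range(n):
        x0, y0 = contour[i]
        x1, y1 = contour[(i + 1) % n]  # wrap around: the shape is closed
        code.append(DIRECTIONS[(x1 - x0, y1 - y0)])
    return code

# A 2x2 pixel square traversed counter-clockwise:
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(freeman_chain_code(square))  # [0, 2, 4, 6]
```

Because the code records only relative moves, it is translation-invariant; rotation by multiples of 45° corresponds to adding a constant modulo 8.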
Content-based image retrieval (CBIR) is an important issue in the computer vision community. Both visual and textual descriptions are employed when users formulate their queries. Shape is one of the most important visual features, as it corresponds to the region of interest in images. Consequently, the shape representation is fundamental. The representation must be compact and accurate, and the comparison must be invariant to geometric transformations such as translation, rotation and scaling, even though the representation itself may vary under rotation. In this paper, we propose a shape comparison technique based on the flat segments of the contour. The segmentation uses Freeman coding and run-length coding. The lengths of the flat segments make up a length vector, which is used to compare the similarity of the shapes. Experimental results from tests on the standard SQUID database are reported.
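The flat-segment idea can be illustrated by run-length encoding a Freeman chain code: runs of a repeated direction are the flat segments, and their lengths form the length vector. The `min_run` threshold and helper names below are illustrative assumptions, not the paper's actual algorithm.

```python
from itertools import groupby

def run_lengths(chain_code):
    """Run-length encode a Freeman chain code into (direction, run length) pairs."""
    return [(d, len(list(g))) for d, g in groupby(chain_code)]

def length_vector(chain_code, min_run=2):
    """Lengths of the 'flat' segments, i.e. runs of at least min_run equal directions."""
    return [n for _, n in run_lengths(chain_code) if n >= min_run]

code = [0, 0, 0, 1, 2, 2, 4, 4, 4, 4, 6, 6]
print(run_lengths(code))    # [(0, 3), (1, 1), (2, 2), (4, 4), (6, 2)]
print(length_vector(code))  # [3, 2, 4, 2]
```

A similarity between two shapes could then be computed as, for example, an L1 or L2 distance between normalized length vectors.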
... In programming or database languages, the concept of a unique object identity is commonplace (Khoshafian and Copeland 1986). Object identity has been defined as the trait that distinguishes an object from all others (Khoshafian and Baker 1996). Identity provides a way to represent the individuality or uniqueness of an object, independent of its attributes and values. ...
... An alternative approach in either case is to fully decompose the data [Kho96]. At first sight, this may not appear to be promising from a performance point of view, but studies [Cop85,Kho87] have shown that it can work very well, and commercial databases based on this philosophy have been successfully marketed. ...
... The present project has extended or complemented previous work in various ways. To take just two examples, Copeland and Khoshafian [Cop85,Kho96,Kho87] used only one sort order in their implementation, whereas two sort orders are employed here on the triple store, and potentially on the lexical store as well, which has been shown to be highly beneficial. Monet [Bon96] uses Binary Association Tables, which appear to introduce a great deal of redundancy, whereas in the lexical store described here, data values are held just once. ...
This thesis introduces a new approach to understanding the issues relating to the efficient implementation of a binary relational database built upon a triple store. The place of the binary relational database is established with reference to other database models, and a detailed description of a new triple store implementation is presented, together with a definition of the architecture. The use of a model, which reflects the performance of the triple store database, is described, and the results of performance investigations are presented. In the first, the use of more than one sort order in the triple store database is analyzed, and the use of two sort orders is found to be optimal. In the second, the effect of compression in the triple store is considered, and compared with other approaches to compressing the non-index portion of a database management system. In conclusion, the model successfully predicts the effect of using two sort orders, and this was confirmed upon subsequent incorporation into the database. It is also found that significant performance gains can be made by the use of compression in the triple store. It is shown that by extending the compression algorithm even greater gains could be made. In addition, it is found that by keeping the design of the database as simple and pure as possible, a foundation for a variety of higher level views can be achieved, leading to the possibility of the triple store being used as the foundation for new databases.
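The benefit of two sort orders on a triple store can be illustrated with a toy in-memory version: one ordering answers subject lookups, the other answers object lookups, each with a binary search instead of a scan. The class and method names are assumptions, and a real implementation would of course use disk-based indexes rather than sorted Python lists.

```python
import bisect

class TripleStore:
    """Toy triple store keeping two sort orders (illustrative sketch only)."""

    def __init__(self, triples):
        self.spo = sorted(triples)                        # (subject, predicate, object)
        self.ops = sorted((o, p, s) for s, p, o in triples)

    def _range(self, index, prefix):
        # Binary search for all entries starting with the given prefix tuple.
        # Padding with the maximal code point assumes values never contain it.
        lo = bisect.bisect_left(index, prefix)
        hi = bisect.bisect_right(index, prefix + (chr(0x10FFFF),) * (3 - len(prefix)))
        return index[lo:hi]

    def by_subject(self, s):
        return self._range(self.spo, (s,))

    def by_object(self, o):
        return [(s, p, o_) for o_, p, s in self._range(self.ops, (o,))]

store = TripleStore([("alice", "knows", "bob"), ("bob", "knows", "carol"),
                     ("alice", "likes", "carol")])
print(store.by_subject("alice"))
print(store.by_object("carol"))
```

With only the first sort order, the object lookup would degenerate to a full scan, which is the performance effect the thesis's model quantifies.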
... Address-based identity mechanisms, therefore, are considered to compromise identity when ideally the language should provide separate mechanisms for the two concepts. The use of identifier keys (i.e., attributes that uniquely identify a tuple) to distinguish objects as practiced in current GISs and database systems is not completely satisfactory as it confuses issues of data value with identity (Khoshafian and Copeland 1986; Bonfatti and Pazzi 1995; Khoshafian and Baker 1996). One problem with the identifier keys approach is that these keys cannot be changed. ...
Current geographic information systems (GISs) have been designed for querying and maintaining static databases representing static phenomena and give little support to those users who wish to represent dynamic information or incorporate temporality into their studies. In order to integrate phenomena that change over space and time in GISs, a better understanding of the underlying components of change and how people reason about change is needed. This paper focuses on a qualitative representation of change. It offers a classification of change based on object identity and the set of operations that either preserve or change identity. These operations can be applied to single or composite objects and combined to express the semantics of sequences of change. An iconic, visual language is developed to represent the various types of change and applied to examples to illustrate the application of this language. Such a formalization of the basic components of change lays the foundation for a new generation of formal models that captures the semantics of change and leads to improved interoperability between GISs and process models or simulation software.
... With a query, the user of a system can retrieve data or learn whether some specific data exists. A query is called exact matching [KB96] if the required data is in the database exactly as specified by the query. We call a query partial matching if it is not exact matching but the domain space of the query is a subset of the domain space in the database. ...
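One hedged reading of this exact/partial matching distinction, for attribute-value records (the dictionary representation and helper names are illustrative assumptions, not the formal definitions of [KB96]):

```python
def exact_match(query, record):
    """Exact matching: every queried attribute appears in the record
    with exactly the queried value."""
    return all(record.get(k) == v for k, v in query.items())

def partial_match(query, record):
    """Partial matching: not exact, but the query's attributes (its 'domain')
    are a subset of the record's attributes."""
    return not exact_match(query, record) and set(query) <= set(record)

db_record = {"name": "ana", "city": "recife", "year": 1997}
print(exact_match({"city": "recife"}, db_record))    # True
print(partial_match({"city": "olinda"}, db_record))  # True: attribute known, value differs
print(partial_match({"country": "br"}, db_record))   # False: attribute outside the domain
```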
In several applications, the available information is incomplete or imprecise. If the system includes temporal information, by means of a temporal database, the temporal information may also be incomplete. We present here a prototype which stores and retrieves incomplete data and temporal information. We show how the system treats several types of queries and how monotonic and non-monotonic updates are performed.
... With a query, the user of a system can obtain information about an application represented in a database. A query is called exact matching with respect to a database [KB96] if the required data exists in that database in a similar form. We say a query is partial matching with respect to a database if it is not exact matching, but its domain is a subset of the domain in that database. ...
Abstract. To model the real world more faithfully, it is important to be able to represent concepts such as time and data uncertainty. This paper presents our attempt to create a theory for handling incomplete temporal objects. We first implemented a system for manipulating incomplete temporal objects, called MITO, as a way of gaining experience. We present here an overview of MITO and a temporal modal logic used to formalize the retrieval of temporal objects. Keywords: imprecise information, incomplete temporal objects, temporal modal logic.
... One of the reasons is the lack of reliable methods for content analysis of the different media types; thus, basic mechanisms and technologies for the realisation of multimedia database management systems are not yet available. Many existing relational and object-oriented databases handle multimedia objects as BLOBs (Binary Large Objects) [1,2] and describe their content by a manually compiled and limited set of keywords. Retrieval is then realised by a full-text search over the assigned set of keywords. ...
... It should be noted that the actual image of a photograph or a newspaper clipping is stored in the CD jukebox. The Image Handling Tool enables the system administrator to scan the images of newspaper clippings and photographs and relate them to information stored in the database [8]. Thereby, the result of a database query for photographs also contains the actual images related to the database information. ...
Publishing organisations have been traditionally innovative in adapting new technologies and business practices. The organisation of their information, as well as its presentation to consumers, has been their major concern for several years. The use of the Internet for the presentation of this information in a multimedia format is an extremely interesting topic, especially for content providers such as newspaper organisations. For such organisations, the Internet could be seen as a new marketplace in which to conduct business and promote their products. In this paper, a case study of a newspaper organisation going online is presented. Specifically, it illustrates how a new architecture for storing and retrieving information has been used to support the publishing process and provide an alternative, automated and more sophisticated way of delivering the final product, i.e. the newspaper, to the customer.
... It is noted that most of the existing techniques for multimedia information retrieval are based on the use of conventional database structures to handle large collections of high-dimensional multimedia data. Although the recent research on multimedia database systems [3], [6], [11] has made advances in the creation of large multimedia databases with effective facilities for query processing, it is mainly focused on data modeling and structuring. Object-oriented models have the capacity for retrieving one or more media samples satisfying a particular set of conditions; however, when faced with large quantities of data stored at a low level of information granularity, they can be very slow. ...
This paper describes a new approach to fast content-based color image retrieval in a wavelet-based hierarchical structure with data warehousing techniques. To tackle key issues such as image data indexing, similarity measures, search methods and query processing in retrieval for large color image archives, we extend the concepts of conventional data warehouse and image database to an image data warehouse for effective color indexing. In contrast to existing systems, which employ a fixed mechanism for similarity measurement, we propose to integrate wavelet multi-resolution decomposition with data summarization for hierarchical color representation and flexible similarity measurement. In addition, a guided search scheme is introduced in conjunction with data partitioning and aggregation techniques to speed up query processing. The proposed retrieval method is tested on RGB and YUV color spaces, and the experimental results demonstrate its effectiveness and efficiency for content-based color image retrieval.
... Obviously, there are many constraints on the use of these indexes: manual annotation is a big problem in large databases (it is time-consuming); the domain of the application and personal knowledge bias the choice of these indexes; etc. Unfortunately, existing indexes are always limited in their ability to capture the salient content of an image ("a picture is worth a thousand words") [11,14,44,3,7,20]. ...
Proceedings of SBBD 1997
... On the other hand, their ability to accommodate non-textual values is limited, which makes them a second choice for multimedia developments and for nonstandard applications involving complex objects in engineering, science, publishing and similar fields [11, 15, 27]. Object-oriented databases have more to offer in this respect (see e.g. [21] regarding suitability for multimedia). Since most of them are implemented as persistent extensions of C++, which acts simultaneously as implementation, data definition and data manipulation language, the separation of layers becomes fuzzier. ...
Database theory is a mature science with many well-researched and implemented techniques. Examples are the layered architecture with external, conceptual, and internal models, integrity constraints, descriptive query languages, transactions, etc. Visual information systems pose a new challenge to the database community, and database researchers must open their world to the new media and the needs of world-wide networking. At the same time, projects in the area of visual information systems, spatial data, multimedia, digital libraries, etc. should look to database technology for trusted solutions. This paper exemplifies the above theme with experiences gained from the ESCHER database prototype, which features visual interaction for browsing and editing all kinds of data.
... This explosion of multimedia data has rapidly created the need for new, well-suited tools that enable users to manage and retrieve this new kind of information efficiently. A research objective in the management of multimedia data is to develop new retrieval tools that permit users to manipulate (retrieve and represent) multimedia information as easily as traditional data (strings, numbers), and as intelligently as textual information [10]. Research has started by developing content-based image retrieval systems such as QBIC [3], Virage [9], Chabot [13], VisualSEEk [15], NeTra [11], Photobook [14], etc. (in [7], 40 systems have been analyzed). ...
Many types of user would find it valuable to search collections of music via queries representing music fragments, but such searching requires a reliable technique for identifying whether a provided fragment occurs within a piece of music. The problem ...
... The technological evolution of communication systems and the degree of maturity achieved in areas as different as signal processing, databases or computer vision have brought about the proliferation of information systems whose objective is the efficient storage and management of large amounts of multimedia data [1,8]. ...
This paper presents a parallel implementation of a content-based information retrieval (CBIR) system which deals with an image database composed of data from over 29 million bidimensional RGB images, equivalent to 1.45 TB of graphical data. The application has been designed for a distributed-memory multiprocessor environment, and has been implemented on a cluster of twenty-five PCs using MPI. The paradigm that best fits the problem's needs is a farm-based solution: a master process distributes the work load among the slave processes, and when these have finished, the master collects the partial results computed by each slave process. In order to evaluate this solution, the experimental results have been compared with those achieved using a Silicon Graphics Origin 2000, a shared-memory machine with eight processors. This paper analyzes the performance offered by both approaches from the viewpoints of speed, price and scalability, presenting the conclusions that can be extracted from the comparison of the results.
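The farm paradigm described above can be sketched with Python's multiprocessing in place of MPI; the toy L1 distance and three-element feature vectors are placeholders for the real image features, and the function names are assumptions.

```python
from multiprocessing import Pool

def distance_to_query(image_vector):
    """Slave-side work: distance between one stored feature vector and the
    query (a toy L1 distance; the real system compares image features)."""
    query = [0.5, 0.5, 0.5]
    return sum(abs(a - b) for a, b in zip(image_vector, query))

def farm_search(database, workers=4):
    """Master distributes the database over the workers, collects the
    partial results, and keeps the best match: the farm pattern."""
    with Pool(workers) as pool:
        distances = pool.map(distance_to_query, database)
    best = min(range(len(distances)), key=distances.__getitem__)
    return best, round(distances[best], 6)

if __name__ == "__main__":
    db = [[0.1, 0.2, 0.3], [0.5, 0.4, 0.5], [0.9, 0.9, 0.9]]
    print(farm_search(db, workers=2))  # (1, 0.1)
```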
... audio, video, image, text) to a wide variety of machines in enterprise-wide, heterogeneous environments. DBMSs have taken an important step toward supporting Client/Server architectures and non-standard applications such as hypermedia systems [13], imaging databases [24], on-line control systems [21], or CAD-CAM applications. However, none of the DBMSs is yet general enough to effectively support a large spectrum of different applications, as shown in [31]. ...
This paper presents the architecture of Phasme, a high-performance application-oriented database system manager providing key facilities for the fast development of object-oriented, relational-based or other kinds of applications. Differing from conventional database systems, application-oriented servers are independent of any particular data model but can cooperate with any, offer facilities to exploit new hardware architecture trends, are general enough to efficiently support a wide range of heterogeneous objects, and offer facilities to enforce application consistency of related objects. Phasme, a Parallel Application-Oriented Database System (AODMS), has been designed to meet the new information systems' requirements and to use the power and the trends of the new generation of hardware. The major contributions of Phasme are the application-oriented architecture, the data storage manager, the dynamic optimization of both inter- and intra-operation parallelism, and the exploitation of operating system services such as multi-threading or memory mapping for efficient concurrent executions.
... Recent advances in the research on multimedia databases [144,48,195,99] enable the creation of large multimedia databases which can be queried in an effective way. These advances, combined with research into multimedia databases and advances in data mining in relational databases [92], have made it possible to create multimedia data mining systems. ...
Thesis (Ph. D.)--Simon Fraser University, 1999. Includes bibliographical references.
... It is obvious that the photographic archive of a daily newspaper should favor news images rather than, say, scientific ones, while the ratio may be reversed for a monthly magazine with cultural aims (cf. [7]). ...
Dimensioning mass memories for a digital library is a very challenging problem. This is due to the uncertainty in defining the typological classes of the library "objects" and their amounts. In fact, the size of mass memories depends not only on the number of volumes in each class, but also on the page sizes, the type of printing and the images represented in each volume, which affect the efficiency of the various compression methods used in recording digital pages (both lossless and lossy). As a consequence, in dimensioning a digital library, every deterministic (analytical) method is useless, at least in the majority of cases. It seems more appropriate to use statistical and probabilistic methods, such as Monte Carlo methods. These methods make it possible to simulate the amount of memory needed by each volume in the library, as a limiting case. Of course, statistics allow one to simulate only a "few" volumes, extending the results to all the volumes in the library. Above all, simulation allows very interesting "what-if" analyses, varying the hypotheses about the library structure and the policies governing its management and costs. Last but not least, the evaluation of the needed amounts of mass memory must be completed by the design of the memory hierarchy and an analysis of the mean life expectancy of the physical supports, together with their technological obsolescence, all items strictly correlated with system efficiency for online digital libraries. An example of digital library dimensioning is presented using ad hoc software (DBD.EXE), used by the author during the CILEA course "Design of a Digital Library".
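A minimal Monte Carlo sizing sketch in the spirit described above, assuming invented per-page size distributions (none of the figures come from the text, and DBD.EXE's actual model is not reproduced here):

```python
import random

def simulate_library_size(n_volumes, trials=10_000, seed=42):
    """Monte Carlo sketch of digital-library sizing. Pages per volume and
    megabytes per page are drawn from illustrative, assumed distributions."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        total_mb = 0.0
        for _ in range(n_volumes):
            pages = rng.randint(100, 600)               # pages per volume
            mb_per_page = rng.choice([0.05, 0.3, 1.2])  # text / mixed / image-heavy
            total_mb += pages * mb_per_page
        totals.append(total_mb)
    totals.sort()
    # Report the median and a high percentile for "what-if" sizing decisions.
    return totals[len(totals) // 2], totals[int(0.95 * len(totals))]

median_mb, p95_mb = simulate_library_size(n_volumes=50, trials=2000)
print(f"median ~ {median_mb:.0f} MB, 95th percentile ~ {p95_mb:.0f} MB")
```

Varying the distributions or the class mix is exactly the kind of "what-if" analysis the text advocates.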
With the huge amount of information available online, the World Wide Web is a fertile area for data mining research. Web mining research stands at the crossroads of several research communities, such as databases, information retrieval, and, within AI, especially the sub-areas of machine learning and natural language processing. However, there is a lot of confusion when comparing research efforts from different points of view. In this paper, we survey the research in the area of Web mining, point out some confusion regarding the usage of the term Web mining, and suggest three Web mining categories. Then we situate some of the research with respect to these three categories. We also explore the connection between the Web mining categories and the related agent paradigm. For the survey, we focus on representation issues, on the process, on the learning algorithm, and on the application of recent works as the criteria. We conclude the paper with some research issues.
The increasing ability of computer systems to process extensive still-image and moving-image material in addition to conventional media is evident in a wide variety of application scenarios (see e.g. [WiIm96]). These include digital libraries, video management systems, media archives, video-on-demand systems and, increasingly, business systems that store and manage media of various kinds. The foundation of such systems are powerful database management and storage systems, which thus represent essential components of multimedia systems.
In this chapter, a new approach that enables building active 3D/VR applications, called X-VR, is presented. The term “active” is used to describe applications that enable server-side user interaction, dynamic composition of virtual scenes, access to on-line data, selection of content, alternative visualizations, personalization, and implementation of persistency. In the X-VR approach, two new techniques are used: dynamic content modeling, which provides the prerequisite infrastructure for building active 3D/VR applications, and database modeling of virtual worlds, which enables building high-level database models of 3D virtual environments. Dynamic content modeling is accomplished by the use of the X-VRML language. X-VRML is a new high-level XML-based language that extends 3D content standards allowing convenient access to databases, object-orientation, parameterization, and imperative programming techniques. For database modeling of virtual worlds, a model called X-VRDB is proposed. In this model, a virtual world is conceptually divided into several distinct elements that are separately represented in a database. These two techniques constitute the main building blocks of advanced interactive 3D/VR applications.
The sections in this article are: 1. Image Sensing Equipment; 2. Image Computing Equipment; 3. Video and Image Compression Equipment; 4. Image Database Equipment; 5. Image Display Equipment; 6. Image Printing Equipment. Keywords: image acquisition; scanners; video capture; image display; DSP chips; image databases; computer networks; hardware approaches to video and image coding; printers
Abstract. This work describes the development of a prototype multimedia information system for searching and disseminating collections of permanent historical documents belonging to the Arquivo Público Mineiro, and the computing techniques and tools used in its construction. 1. Introduction This work describes the development of a prototype multimedia information system for searching and disseminating collections of permanent historical documents belonging to the Arquivo Público Mineiro, and the computing techniques and tools used in its construction. The main objective of developing the prototype was to advance the computerization of the Arquivo Público Mineiro, which stores about 2000 linear meters of administrative and historical documentation about the state of Minas Gerais (more than ten million document pages!), dating from the eighteenth century. This work emphasizes the use of new computing technologies for storing and managing multimedia data such as images, video, audio and free text (GHAFOOR, 1995), since these means of representing information are the ones most used in the documents of the collections that make up a public archive. 2. Multimedia Applications Multimedia information systems use several types of multimedia data concurrently and are capable of organizing, synchronizing and presenting this complex and comprehensive set of information interactively (DAVID, 1996; KHOSHAFIAN, 1996). According to ADJEROH (1997), these systems are characterized by the integration of different types of multimedia data from diverse sources. Multimedia applications can be found wherever there is a need to manage complex data.
Classic examples include education (local and distance training, digital libraries), health (telemedicine, medical image databases), entertainment (games, video on demand, interactive TV) and business (videoconferencing, electronic commerce). The Internet and multimedia are moving toward an indissoluble partnership. A variety of tools and techniques is being developed to support multimedia in networked environments. Proof of this is INTERNET-2, a high-speed network for multimedia applications, already being deployed in Brazil by RNP. 3. Public Archives: Concept and Challenges A public archive consists of the set of documents produced or received by government institutions in the course of their specific administrative, judicial or legislative functions (Arquivo..., 1996; Guia..., 1993). The institutions that maintain public archives or other documentary collections face several problems, generally arising from the large accumulation of documents and their fragility, most notably the risk of degradation of the originals due to their direct and frequent handling, and the difficulty of access to the information by researchers and the general public (Arquivo..., 1996). The growing demand for complete and easily retrievable information from large documentary collections has led to the emergence of advanced methods and technologies in the field of digitization, storage, retrieval and presentation of historical documents (GARCIA, 1994).
Indexing structures of modern multimedia databases almost always use a predefined measure of distance, which is usually independent of the data. However, knowing how a given data point relates to other data points in the database can often yield insight into what data points should be considered similar. Computationally expensive data mining operations allow users to classify data into clusters. However, these operations require several passes through data, and classifications can be made obsolete when data is inserted, deleted, or modified. As a response to this problem, we developed and compared two indexing schemes that facilitate cluster-based queries.
The paper presents the file structure of a multimedia database management system designed to manage and query medium-sized personal digital collections that contain both alphanumerical information and digital images. The software tool allows creating and deleting databases, creating and deleting tables in databases, updating data in tables, and querying. Along with the classical functions of a DBMS, an element of originality of this system is that it offers the possibility of inserting new images into the database, together with their relevant information, in a special data type called IMAGE. This type can be used to store all the information regarding the image: color characteristics, texture characteristics, width, height, etc.
Current research in the domain of geographic information science considers possibilities of including another dimension, time, which is generally missing to this point. Users interested in changes have few functions available to compare datasets of spatial configurations at different points in time. Such a comparison of spatial configurations requires large amounts of manual labor. An automatic derivation of changes would decrease amounts of manual labor. The thesis introduces a set of methods that allows for an automatic derivation of changes. These methods analyze identity and topological states of objects in snapshots and derive types of change for the specific configuration of data. The set of change types that can be computed by the methods presented includes continuous changes such as growing, shrinking, and moving of objects. For these continuous changes identity remains unchanged, while topological relations might be altered over time. Also discrete changes such as merging and splitting where both identity and topology are affected can be derived. Evaluation of the methods using a prototype application with simple examples suggests that the methods compute uniquely and correctly the type of change that applied in spatial scenarios captured in two snapshots.
The paper presents a relational database management system for managing and querying visual information. To accomplish this, the DBMS implements the Image data type. A record of this type stores the binary image along with its automatically extracted color and texture characteristics. A color histogram with 166 colors in HSV space represents the image color information, and a vector of 12 values obtained by applying Gabor filters represents the texture information. The two characteristic vectors are used in the content-based visual query process. Besides this original approach to visual information storage, the DBMS has a visual interface for building content-based queries using color, texture or both. A Select command adapted for this type of query is built and executed by the DBMS. This system can easily be used, efficiently and at low cost, in areas where medium-sized image collections are gathered.
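The HSV colour histogram described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 166-bin quantization scheme is simplified here to a uniform 8×4×4 = 128-bin grid, and histograms are compared with a simple L1 distance.

```python
# Sketch: quantize pixels into an HSV colour histogram and compare two
# histograms with L1 distance. Bin counts are illustrative assumptions.
import colorsys

def hsv_histogram(pixels, h_bins=8, s_bins=4, v_bins=4):
    """pixels: iterable of (r, g, b) tuples with components in [0, 1]."""
    hist = [0.0] * (h_bins * s_bins * v_bins)
    n = 0
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        hi = min(int(h * h_bins), h_bins - 1)
        si = min(int(s * s_bins), s_bins - 1)
        vi = min(int(v * v_bins), v_bins - 1)
        hist[(hi * s_bins + si) * v_bins + vi] += 1
        n += 1
    # Normalize so histograms of different-sized images are comparable.
    return [c / n for c in hist] if n else hist

def l1_distance(h1, h2):
    """L1 (city-block) distance between two normalized histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))
```

A query then reduces to computing the histogram of the query image and ranking stored images by this distance.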
With the fast development of E-commerce, there is an immediate need for an efficient and effective personal identification and verification system for the
security of network access. To overcome the limitations of the current existing password-based authentication services on
the Internet, we apply biometrics computing technology to achieve high performance. To tackle the challenge of the integration
of multiple biometrics features within a single platform to satisfy the requirements of various identification purposes, we
introduce a new approach to personal identification with multimodal biometrics and data warehousing techniques. In contrast
to the existing systems which employ a fixed mechanism for data representation and similarity measurement, we extend the concepts
of conventional data warehouse and biometrics database to biometrics data warehouse for effective data representation and
storage. In addition, we propose a fuzzy neural network to provide automatic and autonomous classification for the verification
and identification outputs by integrating fuzzy logic technology and the Back Propagation Feed Forward (BPFF) neural network.
To increase the speed and flexibility of the process, we use mobile agents as the steering tool for parallel processing in
a distributed environment, which includes hierarchical biometric feature representation, multiple feature integration, dynamic
biometric data indexing and flexible search. An agent development tool named “Aglet” is used as a programming framework to
illustrate the combination of multiple biometrics features in a hierarchical structure for fast and reliable identity verification
and identification. The experimental results demonstrate the feasibility of the proposed approach to network security in e-commerce applications.
Keywords: personal identification, biometrics computing, data warehousing, feature extraction and indexing, fuzzy neural network, mobile agent, parallel processing, distributed computing
Image data are omnipresent in various applications. A considerable volume of data is produced, and we need tools to efficiently retrieve relevant information. Image mining is a new and challenging research field that tries to overcome some limitations reached by content-based image retrieval. Image mining deals with making associations between images from a large database and presenting a summarized view. After a state-of-the-art review of the image retrieval field, this chapter presents some work and ideas on the need to define new descriptors that integrate image semantics. Clustering and characterization rules are combined to reduce the search space and produce a summarized view of an annotated image database. These data mining techniques are performed separately on visual descriptors and textual information (annotations, keywords, web pages). A visual ontology is derived from the textual part and enriched with representative images associated to each concept of the ontology. Ontology-based navigation can also be seen as a user-friendly and powerful tool to retrieve relevant information. These two approaches should make the exploitation and exploration of a large image database easier.
Multimedia databases include many types of new data. One common property of these data items is that they are very large.
We exploit the concept of transformational representations to logically store images without the bitmap. This technique is similar to views in traditional database systems. The technique is useful when users are editing images in the database to create new images. Our method produces significant space savings over already compressed images, and potentially greater savings for video and audio. The method requires the support of multimedia editors. This paper emphasizes the meta-structure needed by the database to support multimedia editing.
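The transformational idea can be sketched as follows. The class and operation names below are illustrative assumptions, not the paper's actual meta-structure: a derived image is stored as a reference to a base bitmap plus the edit log that produced it, and the bitmap is recomputed only on demand.

```python
# Sketch (assumed design): store edit operations instead of a new bitmap,
# and replay them against the base image when the result is needed.

class DerivedImage:
    def __init__(self, base_id, ops=None):
        self.base_id = base_id      # key of the stored (compressed) base bitmap
        self.ops = list(ops or [])  # edit log, e.g. ("crop", (box,))

    def record(self, op_name, *args):
        """Log an edit instead of materializing a new bitmap."""
        self.ops.append((op_name, args))

    def materialize(self, load_base, apply_op):
        """Replay the edit log against the base bitmap."""
        image = load_base(self.base_id)
        for name, args in self.ops:
            image = apply_op(image, name, args)
        return image
```

The space saving comes from storing only `base_id` and `ops` for each derived image, much as a view stores a query rather than its result.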
Nowadays there is an explosion of multimedia information. A huge quantity of still and video images is stored on the Internet, and a large number of images stored on other media have been converted to digital format. For example, TV images and newspapers have been converted to digital form, making their processing, distribution and storage an easy task. More than 2,700 digital pictures are taken every second (85 billion images per year in total). For example, PhotoWorks hosts tens of millions of images on its web site. These common images are complemented by special-purpose images, such as medical images, estimated at 2 billion per year.
This chapter presents an original dedicated integrated software system for managing and querying alphanumerical information
and images from medical domain. The software system has a modularized architecture with the following functions: medical data
acquisition from three primary sources, processing this information for extracting useful information, compacting data in
a unitary format and storing the information in a database controlled by a multimedia relational database management system.
The main and original function of the multimedia database management system (MMDBMS) is its ability to execute content-based queries, since visual information is very important in the medical domain. The MMDBMS has a visual interface for building complex content-based visual queries. This interface allows the user to choose the query image and the characteristics to be used, i.e. colour, texture or their combination.
Integrating semantically heterogeneous databases requires rich data models to homogenize disparate distributed entities with relationships and to access them through consistent views using high level query languages. In this paper, we first survey the IRO-DB system, which federates object and relational databases around the ODMG data model. Then, we point out some technical issues to extend IRO-DB to support multimedia databases on the Web. We propose to make it evolve towards a three-tiered architecture including local data sources with adapters to export objects, a mediator to integrate the various data sources, and an interactive user interface supported by a Web browser. We show through an example that new heuristics and strategies for distributed query processing and optimization have to be incorporated.
Current and future telecommunication systems will rely heavily on some key IT baseline technologies including databases. We
present our analysis of distribution requirements in databases for telecommunications. We discuss those needs in GSM and IN
CS-2. Our requirements analysis indicates that the following are important issues in databases for telecommunications: single-node
read and temporal transactions, clustered read and temporal transactions, distributed low-priority read transactions, distributed
low and middle level management transactions, support for partitioning and scalability, dynamic groups, replication control,
deadlines for local transactions, and limited global serialization control.
In this paper a system called CLIMS (CLausthal Image Management System) for content-based image retrieval, an important subsystem of a general multimedia database, is presented. It offers querying by sketch and by image example, and uses colour- and wavelet-based features for the comparison of images. Each image in the database is represented by a set of wavelet coefficients and colour attributes, which form the foundation for retrieval. To enable efficient similarity search, a VP-tree index structure and the Lq metric are introduced and discussed. With an extension of the original VP-tree algorithm, a ranking of the n most similar images is possible. The efficiency of the proposed retrieval methods is evaluated on a sample general image catalogue.
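The Lq metric mentioned above is the Minkowski distance of order q; a minimal sketch of it, together with a brute-force top-n ranking, is given below. The VP-tree index that CLIMS uses to prune this search is not reproduced here, and the function names are illustrative.

```python
# Sketch: Lq (Minkowski) distance between feature vectors, plus a linear
# scan that ranks the n most similar database entries. A VP-tree would
# prune this scan; the metric itself is unchanged.

def lq_distance(x, y, q=2):
    """Minkowski distance of order q between two feature vectors."""
    return sum(abs(a - b) ** q for a, b in zip(x, y)) ** (1.0 / q)

def rank_most_similar(query, database, n=5, q=2):
    """Return the n (key, distance) pairs closest to the query vector."""
    scored = [(key, lq_distance(query, vec, q)) for key, vec in database.items()]
    scored.sort(key=lambda kv: kv[1])
    return scored[:n]
```

With q=2 this is the Euclidean distance; q=1 gives the city-block distance, and other q values weight large coordinate differences differently.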
The integrated management of standard structured data and
time-dependent data is still too complex to be covered by a single
system. So, there will be distinct systems, such as multimedia DBMSs and
media servers for, e.g., video data, each doing only what it can do best.
However, some functions desired by users and applications necessarily
involve both systems. A prime candidate for this is the use of
hypermedia link anchors in continuous media. The idea behind this
approach is to enhance the client software to improve the interplay
between structured data in the database (hypermedia structures and
metadata) and the video stream. On initiating the playout of the video,
the client receives a list of anchors from the MMDBS and organizes them
in local structures. It also receives a handle for the video to be
played out and uses it to contact the media server. The overall
architecture and the role of the different kinds of systems are shown.
Query and update processing are distributed over the different kinds of
servers.
We present a content-based query system for an image database. Queries are based on color, shape content and textual description of images. We show that without efficient synergy between image analysis techniques, particularly mathematics, and database technology, it is very difficult to find effective, feasible and efficient solutions to the problem of retrieval in visual information systems. We focus our presentation on the usefulness of mathematical representations of color and shape, and their integration in the database. Finally, we present the usefulness of knowledge discovery in the extraction of shared features from relevance feedback.
Querying image databases with similarity searches and relevance feedback has been largely investigated in the literature. In contrast, browsing has not been studied as much. We propose a browsing technique based on clustering and Galois (concept) lattices. The former technique helps to avoid the high cost incurred by Galois lattices alone. The result is a kind of hypertext of images that combines classification and visualization issues in a high-dimensionality space.
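The Galois connection underlying such a concept lattice can be sketched as follows. This is a naive enumeration over a toy binary image/feature context, not the paper's clustering-accelerated construction; the context and names are illustrative.

```python
# Sketch: each formal concept pairs a maximal set of images (extent) with
# the set of features they all share (intent). Closing every non-empty
# image subset enumerates the concepts; real systems avoid this blow-up.
from itertools import combinations

def common_features(images, context):
    """Features shared by every image in the set."""
    sets = [context[i] for i in images]
    return set.intersection(*sets) if sets else set()

def matching_images(features, context):
    """Images possessing every feature in the set."""
    return {i for i, fs in context.items() if features <= fs}

def formal_concepts(context):
    """Enumerate (extent, intent) pairs by closing every image subset."""
    objects = sorted(context)
    concepts = set()
    for r in range(1, len(objects) + 1):
        for subset in combinations(objects, r):
            intent = common_features(set(subset), context)
            extent = matching_images(intent, context)
            concepts.add((frozenset(extent), frozenset(intent)))
    return concepts
```

Browsing then amounts to moving between neighbouring concepts: generalizing drops features and enlarges the image set, specializing does the reverse.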
This paper presents an overview of parallel architectures for the efficient realisation of digital libraries, considering image databases as an example. The state-of-the-art approach to image retrieval uses a priori extracted features, which limits the applicability of the retrieval techniques, as a detailed search for objects and for other important elements cannot be performed. Well-suited algorithms for dynamic feature extraction and comparison are not often applied, as they require huge computational and memory resources. Integration of parallel methods and architectures enables the use of these alternative approaches for improved classification and retrieval of documents in digital libraries. Therefore, prototypes implemented on a symmetric multiprocessor (SMP) and on a cluster architecture are introduced in the paper. Performance measurements with a wavelet-based template matching method resulted in a reasonable speedup.
The spatial, temporal, storage, retrieval, integration and
presentation requirements of multimedia data differ significantly from
those for traditional data. A multimedia database management system
provides for the efficient storage and manipulation of multimedia data
in all its varied forms. We look into the basic nature of multimedia
data, highlight the need for multimedia DBMSs, and discuss the
requirements and issues necessary for developing such systems.
This project sets out to present, in a fairly detailed manner, the concept of a multimedia database and everything related to it. With this in mind, the project has been structured so that the reader learns the basic concepts of databases in general and multimedia databases in particular, as well as the technologies available to manage them: how they work, what they offer, etc.
This research work falls within the scope of content-based image retrieval systems, in particular retrieval by texture. The goal of this work is to allow the user to browse large image databases without formulating queries in a specific query language. To reach this objective, we divided the work into two main parts. The first part concerns the extraction and identification of a texture model composed of relevant attributes. To this end, we proposed to study two texture models: co-occurrence matrices and Tamura attributes. The selection and validation of the characteristic model were carried out through several applications proposed in this thesis, after reducing the dimension of the representation space of the texture models. Browsing is then performed with Galois lattices through an HTML interface, after an interpretation phase that maps the numerical texture model to a semantic model. The transcription from numerical to semantic is treated as a problem of discretizing continuous numerical values. Another problem arises when the size of the image database grows: the performance of the browsing system degrades. To address this problem, we propose to build summaries, which additionally allow search and browsing to be focused on a set of target images rather than on the whole database.
Multimedia objects are components of distributed systems that communicate via a network. We present a framework for the organisation of a Web-based application design tool, the multimedia object repository (MOR). The structure of the MOR is briefly described, and we discuss design considerations for multimedia content, data organisation, storage, image processing, retrieval and security.
We present a prototype implementation of a metadata generation method that assigns exemplary textual indexes to digital images for image retrieval. The proposed method uses image processing to compute structural similarity, based on visual features, between query images and prepared sample images that carry sample indexes, and selects the sample indexes of the most similar image as metadata for the query images. The use of image processing allows a cost-efficient implementation of content-based metadata indexing for visual objects.
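The index-propagation step described above can be sketched as follows. Feature extraction is replaced here by an assumed precomputed vector per image, and the squared Euclidean distance stands in for the paper's structural-similarity measure; both are illustrative assumptions.

```python
# Sketch: the query image inherits the keywords of the visually most
# similar sample image. Distance function and vectors are placeholders
# for the actual visual-feature comparison.

def propagate_metadata(query_vec, samples):
    """samples: list of (feature_vector, keywords) pairs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, keywords = min(samples, key=lambda s: dist(query_vec, s[0]))
    return keywords
```

The cost efficiency comes from annotating only the sample set by hand; all other images receive metadata automatically through this nearest-sample lookup.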