Handling Large Volumes of Mined Knowledge with a Self-Reconfigurable Topology on Distributed Systems.
Sch. of Comput. Sci. & Inf., Univ. Coll. Dublin, Dublin, Ireland
DOI: 10.1109/ICMLA.2008.30
Conference: Seventh International Conference on Machine Learning and Applications, ICMLA 2008, San Diego, California, USA, 11-13 December 2008
Nowadays, massive amounts of data, often geographically distributed and owned by different organisations, are being mined. As a consequence, large volumes of knowledge are being generated, raising the problem of efficient knowledge management in distributed data mining (DDM). The main aim of DDM is to fully exploit the benefit of distributed data analysis while minimising the communication overhead. Existing DDM techniques perform partial analysis of local data at individual sites and then generate global models by aggregating the local results. These two steps are not independent, since naive approaches to local analysis may produce incorrect or ambiguous global data models. To overcome this problem, we introduce a distributed knowledge map based on an efficient self-reconfiguring network topology to represent easily and exploit efficiently the knowledge mined on large-scale distributed platforms. This also facilitates the integration and coordination of local mining processes and existing knowledge to build global models. In this paper, we implement this knowledge map and present some preliminary results on its performance.
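The pitfall mentioned above, that naive aggregation of local results can yield an incorrect global model, can be illustrated with a minimal sketch. This is an assumed example (not the paper's algorithm): each site mines frequent items locally with a support threshold, and a coordinator naively merges the surviving counts, losing items that are frequent globally but infrequent at every individual site.

```python
from collections import Counter

def local_mine(transactions, min_support):
    """Count item frequencies at one site; keep only locally frequent items."""
    counts = Counter(item for t in transactions for item in t)
    return {item: c for item, c in counts.items() if c >= min_support}

def naive_global_merge(local_results):
    """Aggregate local counts; items pruned at every site are lost."""
    total = Counter()
    for result in local_results:
        total.update(result)
    return dict(total)

# Item "b" occurs 4 times globally, but only 2 times per site, so with
# min_support=3 each site prunes it and the merged global model misses it.
site1 = [["a", "b"], ["a"], ["a", "b"]]
site2 = [["a", "b"], ["a"], ["b"], ["a"]]
merged = naive_global_merge([local_mine(site1, 3), local_mine(site2, 3)])
assert "b" not in merged   # globally frequent (4 >= 3) yet absent
assert merged["a"] == 6
```

This is exactly why the two steps are not independent: a correct global model requires coordinating the local analyses, which is what the knowledge map is intended to support.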
Conference Paper: The Data Wave: Data Management and Mining
ABSTRACT: Nowadays, massive amounts of data that are often geographically distributed and owned by different organisations are being mined. As a consequence, a large amount of knowledge is being produced, raising the problem of efficient knowledge management and mining. The main aim is to develop data mining infrastructures that fully exploit the benefit of the knowledge contained in these very large data repositories. To this end, we introduced the "knowledge map" approach to represent easily and efficiently the knowledge mined on a large-scale platform such as the Grid. This also facilitates the integration and coordination of local mining processes along with existing knowledge to increase the accuracy of the final models. In this paper, we discuss its advantages and its design issues.
19th IEEE International Workshops on Enabling Technologies: Infrastructures for Collaborative Enterprises, WETICE 2010, Larissa, Greece, 28-30 June 2010, Proceedings; 01/2010
Conference Paper: Ontology for knowledge management and improvement of data mining result
ABSTRACT: Nowadays, large bodies of data in different domains are collected and stored. Efficient extraction of useful knowledge from these data has become a huge challenge. This leads to the need for distributed data mining (DDM) techniques, and it also creates the complex problem of managing the mined results. To solve this problem, we propose the Knowledge Map Ontology (KMO) architecture, which allows an efficient representation of knowledge to guide users in its extraction. KMO uses repositories built from ontologies. The architecture is distributed according to Tree P2P (TreeP), because ontologies are structured as trees. We show that this architecture is efficient and necessary in settings where knowledge is distributed, varied, and derived from very large quantities of data.
IEEE International Conference on Spatial Data Mining and Geographical Knowledge Services, ICSDM 2011, Fuzhou, China, June 29 - July 1, 2011; 01/2011
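The tree-structured distribution described above can be sketched minimally. The names below are assumptions for illustration, not the paper's KMO implementation: each peer in a TreeP-style overlay is responsible for a branch of the ontology, and a concept query is routed down the tree toward the subtree whose path prefix matches.

```python
class PeerNode:
    """One peer in a tree-shaped overlay storing part of an ontology."""

    def __init__(self, prefix):
        self.prefix = prefix      # concept-path prefix this peer is responsible for
        self.children = []
        self.knowledge = {}       # concept path -> mined result stored at this peer

    def add_child(self, child):
        self.children.append(child)
        return child

    def lookup(self, concept):
        """Route a concept query down the tree, checking local storage first."""
        if concept in self.knowledge:
            return self.knowledge[concept]
        for child in self.children:
            if concept.startswith(child.prefix):
                return child.lookup(concept)
        return None

root = PeerNode("")
geo = root.add_child(PeerNode("geo"))
geo.knowledge["geo/landuse"] = "clusters-v1"
assert root.lookup("geo/landuse") == "clusters-v1"
assert root.lookup("med/genes") is None
```

Because the overlay mirrors the ontology's tree shape, each lookup touches only the peers on one root-to-leaf path rather than broadcasting to every site.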