
Felix Heine
- Prof. Dr.
- Hochschule Hannover
About
- 52 publications
- 2,922 reads
- 372 citations
- Current institution: Hochschule Hannover
Publications (52)
Advanced persistent threat (APT) attacks present a significant challenge for any organization, as they are difficult to detect due to their elusive nature and characteristics. In this paper, we conduct a comprehensive literature review to investigate the various APT attack detection systems and approaches and classify them based on their threat mod...
Although machine learning (ML) for intrusion detection is attracting research, its deployment in practice has proven difficult. Major hindrances are that training a classifier requires training data with attack samples, and that trained models are bound to a specific network. To overcome these problems, we propose two new methods for anomaly-based...
Data is becoming ever more ubiquitous while its importance grows. The quality and outcome of business decisions are directly related to the accuracy of the data used in predictions. Thus, high data quality in the database systems used for business decisions is essential; otherwise, bad consequences in the form of commercial loss or even l...
Nowadays business decisions heavily rely on data in data warehouse systems (DWH), thus data quality (DQ) in DWH is a highly relevant topic. Consequently, sophisticated yet still easy to use solutions for monitoring and ensuring high data quality are needed. This paper is based on the IQM4HD project in which a prototype of an automated data quality...
Apache Hadoop is a popular technology that proved itself as an effective and powerful framework for Big Data analytics. It broke from many of its predecessors in the “computing at scale” space by being designed to run in a distributed fashion across large amounts of commodity hardware instead of a few expensive computers. Many organizations have co...
Cloud architectures are being used increasingly to support Big Data analytics by organizations that make ad hoc or routine use of the cloud in lieu of acquiring their own infrastructure. On the other hand, Hadoop has become the de-facto standard for storing and processing Big Data. It is hard to overstate how many advantages come with moving Hadoop...
Data streams are becoming more and more important in modern computer infrastructures. The amount of data processed by computer systems has increased continuously in recent years, to the point that storing all of the data is inefficient or even impossible. Consequently, the most feasible way to get information out of the data is to compute it in a stream proce...
Outlier detection is an important tool for many application areas. Often, data has some multidimensional structure so that it can be viewed as OLAP cubes. Exploiting this structure systematically helps to find outliers otherwise undetectable. In this paper, we propose an approach that treats streaming data as a series of OLAP cubes. We then use an...
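The abstract above describes treating streaming data as a series of OLAP cubes and detecting outliers in that structure. A minimal sketch of this general idea (not the paper's actual algorithm; the per-cell z-score test and all names here are illustrative assumptions) could look like:

```python
# Sketch: aggregate each stream window into an OLAP-style cube
# (cell key = dimension values, cell value = summed measure), then
# flag cells whose current value deviates strongly from their history.
# This is an illustration of the general idea, not the paper's method.
from collections import defaultdict
from statistics import mean, stdev

def cube(window, dims):
    """Aggregate one stream window into cube cells."""
    cells = defaultdict(float)
    for rec in window:
        cells[tuple(rec[d] for d in dims)] += rec["value"]
    return cells

def outlier_cells(history, current, threshold=3.0):
    """Flag cells whose current value is a z-score outlier vs. history."""
    flagged = []
    for key, val in current.items():
        past = [c.get(key, 0.0) for c in history]
        if len(past) >= 2 and stdev(past) > 0:
            z = (val - mean(past)) / stdev(past)
            if abs(z) > threshold:
                flagged.append((key, round(z, 2)))
    return flagged
```

Exploiting the cube structure this way lets an anomaly in one specific dimension combination stand out even when the overall stream totals look normal.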
In computer networks many components produce valuable information about themselves or other participants, especially security analysis relevant information. Although such information is intrinsically related as components are connected by a network, most of them still operate independently and do not share data amongst each other. Furthermore, the...
Automated assessment of computer programs submitted by students serves two main purposes: it may be used to increase grading efficiency and process optimization in large courses on one hand. On the other hand, if integrated properly into a suitable learning context, it may also improve student learning by supporting tutoring capabilities. Both aspe...
For the automated assessment of solutions to programming exercises, a multitude of grader programs for different programming languages has by now been developed. To give learners as well as teachers access to as many graders as possible via their familiar LMS, the concept of a generic web service interface (Grappa) is presen...
Monitoring a computer network's security state is a difficult task as network components rarely share their information. The IF-MAP specification defines a client/server-based protocol that enables network components to share security information among each other, which is represented in a graph structure. Visualization of this data is challenging...
In this paper we present a concept and prototypical implementation of a software system (aSQLg) to automatically assess SQL statements. The software can be used in any introductory database class that teaches students the use of SQL. On one hand, it increases the efficiency of grading students' submissions of SQL statements for a given problem statem...
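The core idea of automatic SQL assessment as described above can be sketched by comparing a submission's result set against a reference solution on a fixture database. This is a minimal illustration under assumed names, not aSQLg's actual implementation, which also covers feedback, style, and cost checks:

```python
# Sketch: grade a submitted SQL statement by running it against a test
# database and comparing its result set with a reference solution.
# Illustrative only -- aSQLg's real checks are considerably richer.
import sqlite3

def assess_sql(submitted, reference, setup_sql, ordered=False):
    """Return True iff the submitted query yields the reference result set."""
    con = sqlite3.connect(":memory:")
    con.executescript(setup_sql)  # create tables and insert fixture rows
    try:
        got = con.execute(submitted).fetchall()
        want = con.execute(reference).fetchall()
    except sqlite3.Error:
        return False  # syntactically or semantically invalid submission
    finally:
        con.close()
    return got == want if ordered else sorted(got) == sorted(want)
```

Comparing unordered result sets by default is the natural choice here, since SQL gives no row-order guarantee unless the exercise explicitly requires an ORDER BY.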
Automated program assessment, as a supplementary aid in programming education, enables an additional learning experience for students. The immediate feedback these systems give on submitted exercise solutions provides students with important guidance for successfully completing the exercise. At many universi...
As part of computer science teaching, methods of automated program assessment are increasingly employed in programming education. For the programming languages Java and SQL, the in-house developed and specially adapted tools "Graja" and "aSQLg" are available at Hochschule Hannover for this purpose. In an evaluation study...
In p2p based data management applications, it is unrealistic to rely upon a centralized schema or ontology. The p2p paradigm is more than a new underlying infrastructure. It supports an emergent approach to data management where the data is generated and inserted into the network in a decentralized fashion. Thus, each peer or group of peers will hav...
Within an enterprise various information systems have to be run. Enterprise Application Integration (EAI) has become a well-established way to integrate such heterogeneous business information systems and to map business processes to the technical system level. To do so, workflow systems and middleware are employed to constitute SOAs. Thereby, Web...
In this chapter, we describe the BabelPeers project. The idea of this project is to develop a system for Grid resource description and matching, which is semantically rich while maintaining scalability and reliability. This is achieved by the distribution of resource data over a p2p network, combined with sophisticated mechanisms for query processi...
XGR (XML Data Grid) and BabelPeers are both data management systems based on distributed hash tables (DHT) that use the Pastry DHT to store data and meta data. XGR is based on the XML data model; BabelPeers uses the Resource Description Framework (RDF) for its data. XGR and BabelPeers have different but complementary functionality. On the one h...
Berners-Lee’s vision of the Semantic Web describes the idea of providing machine readable and processable information using key technologies such as ontologies and automated reasoning in order to create intelligent agents. The prospective amount of machine readable information available in the future will be large. Thus, heterogeneity and scalabil...
The Resource Description Framework provides a powerful model for structured knowledge representation that allows the inference of new knowledge. Because of the anticipated scope of semantic information available in the future, centralized databases will become incapable of handling the load. Peer-to-Peer based distributed databases offer better sca...
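Distributing RDF over a DHT, as in the abstracts above, commonly rests on indexing each triple under all three of its terms so that a lookup with any single bound term can be routed to one node. The sketch below illustrates that general indexing scheme with assumed names; it is not BabelPeers' actual query processing, which is far more sophisticated:

```python
# Sketch: store each RDF triple three times in a DHT, under the hash of
# its subject, predicate, and object. A query with one bound term then
# needs to contact only the peer responsible for that term's hash.
# Illustrative simplification, not the BabelPeers implementation.
import hashlib
from collections import defaultdict

class DhtRdfStore:
    def __init__(self, n_peers=8):
        # Each "peer" is just a local dict; a real DHT routes over a network.
        self.peers = [defaultdict(set) for _ in range(n_peers)]

    def _peer(self, term):
        h = int(hashlib.sha1(term.encode()).hexdigest(), 16)
        return self.peers[h % len(self.peers)]

    def insert(self, s, p, o):
        for term in (s, p, o):  # three index entries, one per term
            self._peer(term)[term].add((s, p, o))

    def lookup(self, term, pos):
        """All triples with `term` at position 0 (subject), 1 (predicate), or 2 (object)."""
        return {t for t in self._peer(term)[term] if t[pos] == term}
```

The threefold replication trades storage for lookup locality: whichever triple pattern position is bound, the query reaches the right peer in one DHT routing step.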
In large-scale distributed systems, information is typically generated in a decentralized manner. However, for many applications it is desirable to have a unified view of this knowledge, allowing one to query it without regard to the heterogeneity of the underlying systems. In this context, two main requirements have to be fulfilled. On the one hand, we need a fle...
In this paper, we describe the architecture of the virtual resource manager VRM, a management system designed to reside on top of local resource management systems for cluster computers and other kinds of resources. The most important feature of the VRM is its capability to handle quality-of-service (QoS) guarantees and service-level agreements (SL...
In large-scale distributed systems, information is typically generated in a decentralized manner. However, for many applications it is desirable to have a unified view of this knowledge, allowing one to reason about it and to query it without regard to the heterogeneity of the underlying systems. In this context, two main requirements have to be fulfilled. On the...
Advance reservations are an important concept to support QoS and workflow scheduling in Grid environments. However, the impact of reservations from the Grid on the performance of local schedulers is not yet known. Using discrete event simulations we evaluate the impact of reservations on planning-based resource management of standard batch jobs....
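At the heart of planning-based resource management with advance reservations sits an admission test: a request is accepted only if node capacity is never exceeded over its whole interval. The sketch below shows that test in isolation, with assumed names; the paper itself evaluates the broader impact via full discrete event simulations:

```python
# Sketch: admission test for an advance reservation against an existing
# plan. Capacity usage can only change at interval endpoints, so it
# suffices to check those event times. Illustrative, not the simulator.
def admit(reservations, start, end, nodes, capacity):
    """reservations: list of (start, end, nodes); return True iff the
    new request (start, end, nodes) fits without exceeding capacity."""
    events = sorted({start, end, *[t for r in reservations for t in r[:2]]})
    for t in events:
        if start <= t < end:
            used = nodes + sum(n for s, e, n in reservations if s <= t < e)
            if used > capacity:
                return False
    return True
```

Checking only interval endpoints rather than every time step keeps the test linear in the number of existing reservations, which matters when a planning-based scheduler probes many candidate start times.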
Next Generation Grid applications will demand Grid middleware with a flexible negotiation mechanism supporting various kinds of Quality-of-Service (QoS) guarantees. In this context, a QoS guarantee covers simultaneous allocations of various kinds of different resources, such as processor runtime, storage capacity, or network bandwidth, which are...
In this paper we present a new approach to semantic resource discovery in the grid. A peer-to-peer network is used to distribute and query the resource catalogue. Each peer can provide resource descriptions and background knowledge, and each peer can query the network for existing resources. We do not require a central ontology for resource descrip...
Grid Computing promises an efficient sharing of world-wide distributed resources, ranging from hardware, software, expert knowledge to special I/O devices. However, although the main Grid mechanisms are already developed or are currently addressed by tremendous research effort, the Grid environment still suffers from a low acceptance in different u...
Highly scalable parallel computers, e.g. SCI-coupled workstation clusters, are NUMA architectures. Thus good static locality is essential for high performance and scalability of parallel programs on these machines. This paper describes novel techniques to optimize static locality at compilation time by application of data transformations and data d...
The BIS-Grid project, a BMBF-funded project in the context of the German D-Grid initiative, focuses on realising Enterprise Application Integration using Grid technologies, enabling small and medium enterprises both to integrate heterogeneous business information systems and to use external D-Grid resources and services with affordable, effo...