Future Internet

Published by MDPI

Online ISSN: 1999-5903

Articles


Figure 1. Converged Networks QoS Management Framework (CNQF) subsystems and operational entities in a converged network scenario.
Table 1. MIB OIDs used in CNQF netmon class for bandwidth monitoring (RFC 1213).
Figure 2. Linux-based testbed for CNQF development and experimental evaluation.  
Figure 3. CNQF enabled adaptive measurement-based QoS management procedures scenario.  


Adaptive Measurement-Based Policy-Driven QoS Management with Fuzzy-Rule-based Resource Allocation

November 2013

·

112 Reads

Fixed and wireless networks are increasingly converging towards common connectivity with IP-based core networks. Providing effective end-to-end resource and QoS management in such complex heterogeneous converged network scenarios requires unified, adaptive and scalable solutions to integrate and co-ordinate diverse QoS mechanisms of different access technologies with IP-based QoS. Policy-Based Network Management (PBNM) is one approach that could be employed to address this challenge. Hence, a policy-based framework for end-to-end QoS management in converged networks, the Converged Networks QoS Management Framework (CNQF), has been proposed within our project. In this paper, we discuss the CNQF architecture, a Java implementation of its prototype, and the experimental validation of its key elements. We then present a fuzzy-based CNQF resource management approach and study the performance of our implementation with real traffic flows on an experimental testbed. The results demonstrate the efficacy of our resource-adaptive approach for practical PBNM systems.
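The adaptive, measurement-based allocation described in this abstract can be sketched as a small fuzzy controller that maps measured link utilization to a rate adjustment. The membership functions, rule set, and rates below are illustrative assumptions, not the actual CNQF rule base:

```python
# Illustrative sketch of fuzzy-rule-based bandwidth allocation
# (membership functions and rules are assumptions, not CNQF's rule base).

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def allocate(utilization, base_rate_kbps):
    """Map measured link utilization (0..1) to an adjusted allocation rate."""
    # Fuzzify: degree of membership in low / medium / high utilization.
    low = tri(utilization, -0.5, 0.0, 0.5)
    med = tri(utilization, 0.2, 0.5, 0.8)
    high = tri(utilization, 0.5, 1.0, 1.5)
    # Rules: low -> grant 1.5x, medium -> keep 1.0x, high -> throttle to 0.5x.
    # Defuzzify with a weighted average of the rule outputs.
    num = 1.5 * low + 1.0 * med + 0.5 * high
    den = low + med + high
    return base_rate_kbps * (num / den)

print(round(allocate(0.1, 1000)))  # lightly loaded link: rate is scaled up
print(round(allocate(0.9, 1000)))  # congested link: rate is scaled down
```

Measured utilization would come from the netmon monitoring entities; a real controller would also smooth measurements over time before fuzzifying them.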

Three Steps to Heaven: Semantic Publishing in a Real World Workflow

June 2012

·

162 Reads

Semantic publishing offers the promise of computable papers, enriched visualisation and a realisation of the linked data ideal. In reality, however, the publication process contrives to prevent richer semantics while culminating in a 'lumpen' PDF. In this paper, we discuss a web-first approach to publication, and describe a three-tiered approach which integrates with the existing authoring tooling. Critically, although it adds limited semantics, it does provide value to all the participants in the process: the author, the reader and the machine.

The Street Network Evolution of Crowdsourced Maps: OpenStreetMap in Germany 2007–2011

January 2012

·

2,016 Reads

The OpenStreetMap (OSM) project is a prime example in the field of Volunteered Geographic Information (VGI). Worldwide, several hundred thousand people are currently contributing information to the “free” geodatabase. However, the data contributions show a geographically heterogeneous pattern around the globe. Germany counts as one of the most active countries in OSM; thus, the German street network has undergone an extensive development in recent years. The question that remains is this: How does the street network perform in a relative comparison with a commercial dataset? By means of a variety of studies, we show that the difference between the OSM street network for car navigation in Germany and a comparable proprietary dataset was only 9% in June 2011. The results of our analysis regarding the entire street network showed that OSM even exceeds the information provided by the proprietary dataset by 27%. Further analyses show the scale of errors to be expected in the topology of the street network, and assess the completeness of turn restrictions and street name information. In addition to the analyses conducted over the past few years, projections have also been made about the point in time by which the OSM dataset for Germany can be considered “complete” in relative comparison to a commercial dataset.

Figure 3. Assumed and actual communication between architectural layers.  
Figure 4. Conjecture and confirmation in an agile process.  
Test Driven Development: Advancing Knowledge by Conjecture and Confirmation

December 2011

·

2,852 Reads

Test Driven Development (TDD) is a critical agile software development practice that supports innovation in short development cycles. However, TDD is one of the most challenging agile practices to adopt because it requires changes to work practices and skill sets. It is therefore important to gain an understanding of TDD through the experiences of those who have successfully adopted this practice. We collaborated with an agile team to provide this experience report on their adoption of TDD, using observations and interviews within the product development environment. This article highlights a number of practices that underlie successful development with TDD. To provide a theoretical perspective that can help to explain how TDD supports a positive philosophy of software development, we have revised Northover et al.’s conceptual framework, which is based on a four stage model of agile development, to reinterpret Popper’s theory of conjecture and falsification in the context of agile testing strategies. As a result of our findings, we propose an analytical model for TDD in agile software development which provides a theoretical basis for further investigations into the role of TDD and related practices.

Evolving Web-Based Test Automation into Agile Business Specifications

December 2011

·

117 Reads

Usually, test automation scripts for a web application directly mirror the actions that the tester carries out in the browser, but they tend to be verbose and repetitive, making them expensive to maintain and ineffective in an agile setting. Our research has focussed on providing tool-support for business-level, example-based specifications that are mapped to the browser level for automatic verification. We provide refactoring support for the evolution of existing browser-level tests into business-level specifications. As resulting business rule tables may be incomplete, redundant or contradictory, our tool provides feedback on coverage.

How Can We Study Learning with Geovisual Analytics Applied to Statistics?

January 2012

·

258 Reads

It is vital to understand what kinds of learning processes Geovisual Analytics creates, since employing Geovisual Analytics tools in education produces particular activities and conditions. Understanding the learning processes created by Geovisual Analytics first requires an understanding of the interactions between the technology, the workplace where the learning takes place, and learners’ specific knowledge formation. Studying these types of interaction demands careful consideration of the theoretical perspectives underlying the research design and methods. This paper first discusses a common, and then a less common, theoretical approach used within the fields of learning with multimedia environments and Geovisual Analytics: the socio-cultural perspective. The paper then advocates this constructivist theoretical and empirical perspective for studying learning with multiple-representation Geovisual Analytics tools. As an illustration, an outline of a study made within this theoretical tradition is offered. The study was conducted in an educational setting where the Statistics eXplorer platform is used. Discussion of our study results shows that the socio-cultural perspective has much to offer in terms of the understanding that can be reached in studies of this kind. We therefore argue that empirical research analyzing how specific communities use various Geovisual Analytics tools to evaluate information is best positioned in a socio-cultural theoretical perspective. Learn more about this project at http://ncva.itn.liu.se/vise and about Geovisual Analytics at http://ncva.itn.liu.se.

Low-Cost Mapping and Publishing Methods for Landscape Architectural Analysis and Design in Slum-Upgrading Projects

December 2011

·

1,038 Reads

The research project “Grassroots GIS” focuses on the development of low-cost mapping and publishing methods for slums and slum-upgrading projects in Manila. In this project smartphones, collaborative mapping and 3D visualization applications are systematically employed to support landscape architectural analysis and design work in the context of urban poverty and urban informal settlements. In this paper we focus on the description of the developed methods and present preliminary results of this work-in-progress.

Architecture and Design for Virtual Conferences: A Case Study

December 2011

·

818 Reads

This paper presents a case study of the design issues facing a large multi-format virtual conference. The conference took place twice in two different years, each time using an avatar-based 3D world with spatialized audio including keynote, poster and social sessions. Between year 1 and 2, major adjustments were made to the architecture and design of the space, leading to improvement in the nature of interaction between the participants. While virtual meetings will likely never supplant the effectiveness of face-to-face meetings, this paper seeks to outline a few design principles learned from this experience, which can be applied generally to make computer mediated collaboration more effective.

A Service-Oriented Architecture for Proactive Geospatial Information Services

December 2011

·

238 Reads

Advances in sensor networks, linked data, and service-oriented computing indicate a trend in information technology toward an open, flexible, and distributed architecture. However, existing information technologies lack effective sharing, aggregation, and cooperation services to handle the sensors, data, and processing resources needed to fulfill users’ complicated tasks in near real-time. This paper presents a service-oriented architecture for proactive geospatial information services (PGIS), which integrates sensor, data, processing, and human services. PGIS is designed to organize, aggregate, and co-operate services by composing small-scale services into service chains to meet complicated user requirements. It is a platform to provide real-time or near real-time data collection, storage, and processing capabilities. It is a flexible, reusable, and scalable system to share and interoperate geospatial data, information, and services. The developed PGIS framework has been implemented and preliminary experiments have been performed to verify its performance. The results show that the basic functions, such as task analysis, managing sensors for data acquisition, service composition, and service chain construction and execution, are validated, and that the important properties of PGIS, including interoperability, flexibility, and reusability, are achieved.
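The service-chain idea at the core of the PGIS abstract can be illustrated with plain function composition: each small-scale service consumes the output of the previous one. The service and task names below are invented placeholders, not part of the PGIS implementation:

```python
# Hypothetical sketch of composing small-scale services into a service chain,
# in the spirit of the PGIS architecture (service names are invented).

from functools import reduce

def acquire(task):
    """Simulated sensor service: fetch raw observations for the task area."""
    return {"task": task, "raw": [1.0, 2.0, 3.0]}

def process(data):
    """Simulated processing service: aggregate the raw observations."""
    data["mean"] = sum(data["raw"]) / len(data["raw"])
    return data

def publish(data):
    """Simulated delivery service: format the product for the user."""
    return f"{data['task']}: mean={data['mean']:.1f}"

def chain(*services):
    """Compose services so the output of each feeds the next."""
    return lambda request: reduce(lambda x, s: s(x), services, request)

flood_monitoring = chain(acquire, process, publish)
print(flood_monitoring("flood-watch"))  # → flood-watch: mean=2.0
```

In a deployed system each step would be a remote web service and the chain would be built by the task-analysis component rather than hard-coded.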

Figure 1. The location of the island of Sardinia (http://maps.google.com).  
Figure 3. (a) An example of descriptive and relational slots assigned to the class "Scoping report" in Protégé; (b) A graphical representation of relations corresponding to these slots.
An Ontology of the Strategic Environmental Assessment of City Masterplans

December 2011

·

110 Reads

Following a discussion on the semantics of the term “ontology”, this paper discusses some key points concerning the ontology of the Strategic Environmental Assessment procedure applied to city Masterplans, using sustainability as a reference point. It also assumes the implementation of Guidelines of the Autonomous Region of Sardinia as an experimental context, with the objective of proposing the SEA ontology as an important contribution to improve SEA’s effectiveness.

When Atoms Meet Bits: Social Media, the Mobile Web and Augmented Revolution

December 2012

·

1,629 Reads

The rise of mobile phones and social media may come to be historically coupled with a growing atmosphere of dissent that is enveloping much of the globe. The Arab Spring, UK Riots, Occupy and many other protests and so-called “flash-mobs” are all massive gatherings of digitally-connected individuals in physical space; and they have recently become the new normal. The primary role of technology in producing this atmosphere has, in part, been to effectively link the on and the offline. The trend to view these as separate spaces, what I call “digital dualism”, is faulty. Instead, I argue that the digital and physical enmesh to form an “augmented reality”. Linking the power of the digital–creating and disseminating networked information–with the power of the physical–occupying geographic space with flesh-and-blood bodies–is an important part of why we have this current flammable atmosphere of augmented revolution.

Mobile Phones Bridging the Digital Divide for Teens in the US?

December 2011

·

2,179 Reads

In 2009, just 27% of American teens with mobile phones reported using their devices to access the internet. However, teens from lower income families and minority teens were significantly more likely to use their phones to go online. Together, these surprising trends suggest a potential narrowing of the digital divide, offering internet access to those without other means of going online. This is an important move, as, in today’s society, internet access is central to active citizenship in general and teen citizenship in particular. Yet the cost of this move toward equal access is absorbed by those who can least afford it: Teenagers from low income households. Using survey and focus group data from a national study of “Teens and Mobile Phone Use” (released by Pew and the University of Michigan in 2010), this article helps identify and explain this and other emergent trends for teen use (as well as non-use) of the internet through mobile phones.

Figure 2. Mind-mapping the Crime-Social-Landuse relationships. 
Figure 6. (a) Original CLC Colour Scheme; (b) Initial GeoServer Output; (c) Final GeoServer Output.
Sharing Integrated Spatial and Thematic Data: The CRISOLA Case for Malta and the European Project Plan4all Process

December 2011

·

317 Reads

Sharing data across diverse thematic disciplines is only the next step in a series of hard-fought efforts to ensure barrier-free data availability. The Plan4all project is one such effort, focusing on the interoperability and harmonisation of spatial planning data as based on the INSPIRE protocols. The aims are to support holistic planning and the development of a European network of public and private actors as well as Spatial Data Infrastructure (SDI). The Plan4all and INSPIRE standards enable planners to publish and share spatial planning data. The Malta case tackled the wider scenario for sharing of data, through the investigation of the availability, transformation and dissemination of data using geoportals. The study is brought to the fore with an analysis of the approaches taken to ensure that data in the physical and social domains are harmonised in an internationally-established process. Through an analysis of the criminological theme, the Plan4all process is integrated with the social and land use themes as identified in the CRISOLA model. The process serves as a basis for viewing sharing as one part of the data cycle rather than an end in itself: with a solid protocol in place, the foundations have been laid for implementing the datasets in the social and crime domains.

Table 1. Assessment of functionality of existing systems complementary to the EASY vision.
Figure 4. EASY prototype topic package with aerial imagery and polygonal data illustrating property boundaries, water courses and road networks. 
Extension Activity Support System (EASY): A Web-Based Prototype for Facilitating Farm Management

December 2012

·

240 Reads

·

Lindsay Smith · Ian Miller · [...] · Eloise Seymour

In response to disparate advances in delivering spatial information to support agricultural extension activities, the Extension Activity Support System (EASY) project was established to develop a vision statement and conceptual design for such a system based on a national needs assessment. Personnel from across Australia were consulted and a review of existing farm information/management software undertaken to ensure that any system that is eventually produced from the EASY vision will build on the strengths of existing efforts. This paper reports on the collaborative consultative process undertaken to create the EASY vision as well as the conceptual technical design and business models that could support a fully functional spatially enabled online system.

Tool or Toy? Virtual Globes in Landscape Planning

December 2011

·

533 Reads

Virtual globes, i.e., geobrowsers that integrate multi-scale and temporal data from various sources and are based on a globe metaphor, have developed into serious tools that practitioners and various stakeholders in landscape and community planning have started using. Although these tools originate from Geographic Information Systems (GIS), they have become a different, potentially interactive and public tool set, with their own specific limitations and new opportunities. Expectations regarding their utility as planning and community engagement tools are high, but are tempered by both technical limitations and ethical issues [1,2]. Two grassroots campaigns and a collaborative visioning process, the Kimberley Climate Adaptation Project case study (British Columbia), illustrate and broaden our understanding of the potential benefits and limitations associated with the use of virtual globes in participatory planning initiatives. Based on observations, questionnaires and in-depth interviews with stakeholders and community members using an interactive 3D model of regional climate change vulnerabilities, potential impacts, and possible adaptation and mitigation scenarios in Kimberley, the benefits and limitations of virtual globes as a tool for participatory landscape planning are discussed. The findings suggest that virtual globes can facilitate access to geospatial information, raise awareness, and provide a more representative virtual landscape than static visualizations. However, landscape is not equally representative at all scales, and not all types of users seem to benefit equally from the tool. The risks of misinterpretation can be managed by integrating the application and interpretation of virtual globes into face-to-face planning processes.

Metadata For Identity Management of Population Registers

December 2011

·

506 Reads

A population register is an inventory of residents within a country, with their characteristics (date of birth, sex, marital status, etc.) and other socio-economic data, such as occupation or education. However, data on population are also stored in numerous other public registers such as tax, land, building and housing, military, foreigners, vehicles, etc. Altogether they contain vast amounts of personal and sensitive information. Access to public information is granted by law in many countries, but this transparency is generally subject to tensions with data protection laws. This paper proposes a framework to analyze data access (or protection) requirements, as well as a model of metadata for data exchange.

An Online Landscape Object Library to Support Interactive Landscape Planning

December 2011

·

513 Reads

Using landscape objects with geo-visualisation tools to create 3D virtual environments is becoming one of the most prominent communication techniques to understand landscape form, function and processes. Geo-visualisation tools can also provide useful participatory planning support systems to explore current and future environmental issues such as biodiversity loss, crop failure, competing pressures on water availability and land degradation. These issues can be addressed by understanding them in the context of their locality. In this paper we discuss some of the technologies which facilitate our work on the issues of sustainability and productivity, and ultimately support for planning and decision-making. We demonstrate an online Landscape Object Library application with a suite of geo-visualisation tools to support landscape planning. This suite includes: a GIS based Landscape Constructor tool, a modified version of a 3D game engine SIEVE (Spatial Information Exploration and Visualisation Environment) and an interactive touch table display. By integrating the Landscape Object Library with this suite of geo-visualisation tools, we believe we developed a tool that can support a diversity of landscape planning activities. This is illustrated by trial case studies in biolink design, whole farm planning and renewable energy planning. We conclude the paper with an evaluation of our Landscape Object Library and the suite of geographical tools, and outline some further research directions.

A Land Use Planning Ontology: LBCS

December 2012

·

697 Reads

Urban planning has a considerable impact on the economic performance of cities and on the quality of life of their populations. Efficiency at this level has been hampered by the lack of integrated tools to adequately describe urban space in order to formulate appropriate design solutions. This paper describes an ontology called LBCS-OWL2 specifically developed to overcome this flaw, based on the Land Based Classification Standards (LBCS), a comprehensive and detailed land use standard to describe the different dimensions of urban space. The goal is to provide semantic and computer-readable land use descriptions of geo-referenced spatial data. This will help to make programming strategies available to those involved in the urban development process. There are several advantages to transferring a land use standard to an OWL2 land use ontology: it is modular, it can be shared and reused, it can be extended and data consistency maintained, and it is ready for integration, thereby supporting the interoperability of different urban planning applications. This standard is used as a basic structure for the “City Information Modelling” (CIM) model developed within a larger research project called City Induction, which aims to develop a tool for urban planning and design.
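The benefit of a computer-readable land-use classification, as described in the LBCS-OWL2 abstract, can be illustrated with a toy subsumption check: a parcel tagged with a specific category is retrievable by broader queries. The categories below are invented examples rather than actual LBCS codes, and a real deployment would use an OWL2 reasoner over the ontology itself:

```python
# Toy sketch of subsumption over a land-use class hierarchy
# (categories are invented examples, not actual LBCS codes).

SUBCLASS_OF = {
    "SingleFamilyResidence": "Residence",
    "Residence": "LandUse",
    "RetailSales": "Commerce",
    "Commerce": "LandUse",
}

def is_a(cls, ancestor):
    """True if cls equals ancestor or is (transitively) a subclass of it."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = SUBCLASS_OF.get(cls)
    return False

# Parcels annotated with specific categories answer broader queries,
# which is the service an OWL2 reasoner provides for the full ontology.
parcels = {"parcel-17": "SingleFamilyResidence", "parcel-42": "RetailSales"}
residential = [p for p, c in parcels.items() if is_a(c, "Residence")]
print(residential)  # → ['parcel-17']
```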

Natural Resource Knowledge and Information Management via the Victorian Resources Online Website

December 2011

·

1,281 Reads

Since 1997, the Victorian Resources Online (VRO) website (http://www.dpi.vic.gov.au/vro) has been a key means for the dissemination of landscape-based natural resources information via the internet in Victoria, Australia. The website currently consists of approximately 11,000 web pages, including 1900 maps and 1000 downloadable documents. Information is provided at a range of scales—from statewide and regional overviews to more detailed catchment and sub-catchment levels. At all these levels of generalisation, information is arranged in an organisationally agnostic way around key knowledge “domains” (e.g., soil, landform, water). VRO represents a useful model for the effective dissemination of a wide range of natural resources information; relying on partnerships with key subject matter experts and data custodians, including a “knowledge network” of retired land resource assessment specialists. In this paper, case studies are presented that illustrate various approaches to information and knowledge management with a focus on presentation of spatially contexted soil and landscape information at different levels of generalisation. Examples are provided of adapting site-based information into clickable maps that reveal site-specific details, as well as “spatialising” data from specialist internal databases to improve accessibility to a wider audience. Legacy information sources have also been consolidated and spatially referenced. More recent incorporation of interactive visualisation products (such as landscape panoramas, videos and animations) is providing interactive rich media content. Currently the site attracts an average of 1190 user visits per day and user evaluation has indicated a wide range of users, including students, teachers, consultants, researchers and extension staff. The wide range of uses for the information and, in particular, the benefits for natural resource education, research and extension have also been identified.

Theoretical Foundations of the Web: Cognition, Communication, and Co-Operation. Towards an Understanding of Web 1.0, 2.0, 3.0

March 2010

·

3,090 Reads

Currently, there is much talk of Web 2.0 and Social Software. A common understanding of these notions is not yet in existence. The question of what makes Social Software social has thus far also remained unacknowledged. In this paper we provide a theoretical understanding of these notions by outlining a model of the Web as a techno-social system that enhances human cognition towards communication and co-operation. According to this understanding, we identify three qualities of the Web, namely Web 1.0 as a Web of cognition, Web 2.0 as a Web of human communication, and Web 3.0 as a Web of co-operation. We use the terms Web 1.0, Web 2.0, Web 3.0 not in a technical sense, but for describing and characterizing the social dynamics and information processes that are part of the Internet.

Figure 1. Learning spaces and negotiated meaning. 
Learning Space Mashups: Combining Web 2.0 Tools to Create Collaborative and Reflective Learning Spaces

September 2009

·

239 Reads

In this paper, Web 2.0 open content mashups or combinations are explored. Two case studies of recent initial teacher training programmes are reviewed where blogs and wikis were blended to create new virtual learning spaces. In two separate studies, students offer their views about using these tools, and reflect on the strengths and weaknesses of this approach. There is also discussion about aggregation of content and a theorization of how community and personal spaces can create tension and conflict. A new ‘learning spaces’ model will be presented which aids visualization of the processes, domains and territories that are brought into play when content and Web 2.0 tools are mashed up within the same space.

Deficit Round Robin with Fragmentation Scheduling to Achieve Generalized Weighted Fairness for Resource Allocation in IEEE 802.16e Mobile WiMAX Networks

December 2010

·

863 Reads

Deficit Round Robin (DRR) is a fair packet-based scheduling discipline commonly used in wired networks, where link capacities do not change with time. However, in wireless networks, especially wireless broadband networks such as IEEE 802.16e Mobile WiMAX, there are two main considerations that violate the packet-based service concept of DRR. First, resources are allocated per Mobile WiMAX frame; to achieve full frame utilization, Mobile WiMAX allows packets to be fragmented. Second, due to high variation in wireless channel conditions, the link/channel capacity can change over time and location. We therefore introduce Deficit Round Robin with Fragmentation (DRRF), which allocates resources per Mobile WiMAX frame in a fair manner by allowing for varying link capacity and by transmitting fragmented packets. Like DRR and Generalized Processor Sharing (GPS), DRRF achieves perfect fairness. DRRF results in a higher throughput than DRR (an 80% improvement) while causing less overhead than GPS (eight times less). In addition, in Mobile WiMAX, the quality of service (QoS) offered by service providers is associated with the price paid, much as in a cellular phone system, where users may be required to pay air-time charges. Hence, we have also formalized a Generalized Weighted Fairness (GWF) criterion which equalizes a weighted sum of service time units or slots (temporal fairness) and transmitted bytes (throughput fairness) for customers who are in poor channel conditions or at a greater distance from the base station versus those who are near the base station or have good channel conditions. We use DRRF to demonstrate the application of GWF. These fairness criteria satisfy basic requirements for resource allocation, especially for non-real-time traffic. We therefore also extend DRRF to support other QoS requirements, such as minimum reserved traffic rate, maximum sustained traffic rate, and traffic priority. For real-time traffic, i.e., video traffic, we compare the performance of DRRF with deadline enforcement to that of Earliest Deadline First (EDF). The results show that DRRF outperforms EDF (achieving higher throughput within the promised delay latency) and maintains fairness under an overload scenario.

Figure 1. Virtual service networks and VPNs in SINET3.
Figure 3. Core network topology and initial BoD users.
Dynamic Resource Allocation and QoS Control Capabilities of the Japanese Academic Backbone Network

September 2010

·

76 Reads

Dynamic resource control capabilities have become increasingly important for academic networks that must support big scientific research projects at the same time as less data-intensive research and educational activities. This paper describes the dynamic resource allocation and QoS control capabilities of the Japanese academic backbone network, SINET3, which supports a variety of academic applications with a wide range of network services. It describes the network architecture, networking technologies, resource allocation, QoS control, and layer-1 bandwidth-on-demand services. It also details typical services developed for scientific research, including the user interface, resource control, and management functions, and includes performance evaluations.

QoS Provisioning Techniques for Future Fiber-Wireless (FiWi) Access Networks

June 2010

·

55 Reads

A plethora of enabling optical and wireless access-metro network technologies have been emerging that can be used to build future-proof bimodal fiber-wireless (FiWi) networks. Hybrid FiWi networks aim at providing wired and wireless quad-play services over the same infrastructure simultaneously and hold great promise to mitigate the digital divide and change the way we live and work by replacing commuting with teleworking. After overviewing enabling optical and wireless network technologies and their QoS provisioning techniques, we elaborate on enabling radio-over-fiber (RoF) and radio-and-fiber (R&F) technologies. We describe and investigate new QoS provisioning techniques for future FiWi networks, ranging from traffic class mapping, scheduling, and resource management to advanced aggregation techniques, congestion control, and layer-2 path selection algorithms.

A Service Oriented Architecture for Personalized Universal Media Access

December 2011

·

153 Reads

Multimedia streaming means delivering continuous data to a plethora of client devices. Besides the actual data transport, this also needs a high degree of content adaptation respecting the end users’ needs given by content preferences, transcoding constraints, and device capabilities. Such adaptations can be performed in many ways, usually on the media server. However, when it comes to content editing, like mixing in subtitles or picture-in-picture composition, relying on third party service providers may be necessary. For economic reasons this should be done in a service-oriented way, because many adaptation modules can be reused within different adaptation workflows. Although service-oriented architectures have become widely accepted in the Web community, the multimedia environment is still dominated by monolithic systems. The main reason is the insufficient support for working with continuous data: generally the suitability of Web services for handling complex data types and stateful applications is still limited. In this paper we discuss extensions of Web service frameworks, and present a first implementation of a service-oriented framework for media streaming and digital item adaptation. The focus lies on the technical realization of the services. Our experimental results show the practicality of the actual deployment of service-oriented multimedia frameworks.
