Book (PDF available)

Innovations in Computing Sciences and Software Engineering

Editors: Tarek Sobh and Khaled Elleithy

Abstract

Innovations in Computing Sciences and Software Engineering includes a set of rigorously reviewed world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of Computer Science, Software Engineering, Computer Engineering, and Systems Engineering and Sciences. Topics covered:
• Image and Pattern Recognition: Compression, Image Processing, Signal Processing Architectures, Signal Processing for Communication, Signal Processing Implementation, Speech Compression, and Video Coding Architectures.
• Languages and Systems: Algorithms, Databases, Embedded Systems and Applications, File Systems and I/O, Geographical Information Systems, Kernel and OS Structures, Knowledge-Based Systems, Modeling and Simulation, Object-Based Software Engineering, Programming Languages, and Programming Models and Tools.
• Parallel Processing: Distributed Scheduling, Multiprocessing, Real-Time Systems, Simulation Modeling and Development, and Web Applications.
• Signal and Image Processing: Content-Based Video Retrieval, Character Recognition, Incremental Learning for Speech Recognition, Signal Processing Theory and Methods, and Vision-Based Monitoring Systems.
• Software and Systems: Activity-Based Software Estimation, Algorithms, Genetic Algorithms, Information Systems Security, Programming Languages, Software Protection Techniques, and User Interfaces.
• Distributed Processing: Asynchronous Message Passing Systems, Heterogeneous Software Environments, Mobile Ad Hoc Networks, Resource Allocation, and Sensor Networks.
• New Trends in Computing: Computers for People with Special Needs, Fuzzy Inference, Human-Computer Interaction, Incremental Learning, Internet-Based Computing Models, Machine Intelligence, and Natural Language.
Publisher: Springer
Contents

Reviewers List

1. Recursive Projection Profiling for Text-Image Separation (Shivsubramani Krishnamoorthy et al.)
2. Risk in the Clouds?: Security Issues Facing Government Use of Cloud Computing (David C. Wyld)
3. Open Source Software (OSS) Adoption Framework for Local Environment and its Comparison (U. Laila and S. F. A. Bukhari)
4. Ubiquitous Data Management in a Personal Information Environment (Atif Farid Mohammad)
5. Semantics for the Asynchronous Communication in LIPS, a Language for Implementing Parallel/Distributed Systems (Amala VijayaSelvi Rajan et al.)
6. Separation of Concerns in Teaching Software Engineering (Izzat M. Alsmadi and Mahmoud Dieri)
7. Student Model Based On Flexible Fuzzy Inference (Dawod Kseibat et al.)
8. PlanGraph: An Agent-Based Computational Model for Handling Vagueness in Human-GIS Communication of Spatial Concepts (Hongmei Wang)
9. Risk-Based Neuro-Grid Architecture for Multimodal Biometrics (Sitalakshmi Venkataraman and Siddhivinayak Kulkarni)
10. A SQL-Database Based Meta-CASE System and its Query Subsystem (Erki Eessaar and Rünno Sgirka)
11. An Intelligent Control System Based on Non-Invasive Man Machine Interaction (Darius Drungilas et al.)
12. A UML Profile for Developing Databases that Conform to The Third Manifesto (Erki Eessaar)
13. Investigation and Implementation of T-DMB Protocol in NCTUns Simulator (Tatiana Zuyeva et al.)
14. Empirical Analysis of Case-Editing Approaches for Numeric Prediction (Michael A. Redmond and Timothy Highley)
15. Towards a Transcription System of Sign Language for 3D Virtual Agents (Wanessa Machado do Amaral and José Mario De Martino)
16. Unbiased Statistics of a Constraint Satisfaction Problem - a Controlled-Bias Generator (Denis Berthier)
17. Factors that Influence the Productivity of Software Developers in a Developer View (Edgy Paiva et al.)
18. Algorithms for Maintaining a Consistent Knowledge Base in Distributed Multiagent Environments (Stanislav Ustymenko and Daniel G. Schwartz)
19. Formal Specifications for a Document Management Assistant (Daniel G. Schwartz)
20. Towards a Spatial-Temporal Processing Model (Jonathan B. Lori)
21. Structure, Context and Replication in a Spatial-Temporal Architecture (Jonathan Lori)
22. Service Oriented E-Government (Margareth Stoll and Dietmar Laner)
23. Fuzzy-rule-based Adaptive Resource Control for Information Sharing in P2P Networks (Zhengping Wu and Hao Wu)
24. Challenges In Web Information Retrieval (Monika Arora et al.)
25. An Invisible Text Watermarking Algorithm using Image Watermark (Zunera Jalil and Anwar M. Mirza)
26. A Framework for RFID Survivability Requirement Analysis and Specification (Yanjun Zuo et al.)
27. The State of Knowledge Management in Czech Companies (P. Maresova and M. Hedvicakova)
28. A Suitable Software Process Improvement Model for the UK Healthcare Industry (Tien D. Nguyen et al.)
29. Exploring User Acceptance of FOSS: The Role of the Age of the Users (M. Dolores Gallego and Salvador Bueno)
30. GFS Tuning Algorithm Using Fuzzimetric Arcs (Issam Kouatli)
31. Multi-step EMG Classification Algorithm for Human-Computer Interaction (Peng Ren et al.)
32. Affective Assessment of a Computer User through the Processing of the Pupil Diameter Signal (Ying Gao et al.)
33. MAC, A System for Automatically IPR Identification, Collection and Distribution (Carlos Serrão)
34. Testing Distributed ABS System with Fault Injection (Dawid Trawczynski et al.)
35. Learning Based Approach for Optimal Clustering of Distributed Program's Call Flow Graph (Yousef Abofathi and Bager Zarei)
36. Fuzzy Adaptive Swarm Optimization Algorithm for Discrete Environments (M. Hadi Zahedi and M. Mehdi S. Haghighi)
37. Project Management Software for Distributed Industrial Companies (M. Dobrojevic et al.)
38. How to Construct an Automated Warehouse Based on Colored Timed Petri Nets (Fei Cheng and Shanjun He)
39. Telecare and Social Link Solution for Ambient Assisted Living Using a Robot Companion with Visiophony (Thibaut Varène et al.)
40. Contextual Semantic: A Context-aware Approach for Semantic Web Based Data Extraction from Scientific Articles (Deniss Kumlander)
41. Motivating Company Personnel by Applying the Semi-self-organized Teams Principle (Deniss Kumlander)
42. Route Advising in a Dynamic Environment - A High-Tech Approach (M. F. M. Firdhous et al.)
43. Building Security System Based on Grid Computing To Convert and Store Media Files (Hieu Nguyen Trung et al.)
44. A Tool Supporting C Code Parallelization (Ilona Bluemke and Joanna Fugas)
45. Extending OpenMP for Agent Based DSM on GRID (Mahdi S. Haghighi et al.)
46. Mashup-Based End User Interface for Fleet Monitoring (M. Popa et al.)
47. The Performance of Geothermal Field Modeling in Distributed Component Environment (A. Piorkowski et al.)
48. An Extension of Least Squares Methods for Smoothing Oscillation of Motion Predicting Function (O. Starostenko et al.)
49. Security of Virtualized Applications: Microsoft App-V and VMware ThinApp (Michael Hoppe and Patrick Seeling)
50. Noise Performance of a Finite Uniform Cascade of Two-Port Networks (Shmuel Y. Miller)
51. Evaluating Software Agent Quality: Measuring Social Ability and Autonomy (Fernando Alonso et al.)
52. An Approach to Measuring Software Quality Perception (Radoslaw Hofman)
53. Automatically Modeling Linguistic Categories in Spanish (M. D. López De Luise et al.)
54. Efficient Content-based Image Retrieval using Support Vector Machines for Feature Aggregation (Ivica Dimitrovski et al.)
55. The Holistic, Interactive and Persuasive Model to Facilitate Self-care of Patients with Diabetes (Miguel Vargas-Lombardo et al.)
56. Jawi Generator Software Using ARM Board Under Linux (O. N. Shalasiah et al.)
57. Efficient Comparison between Windows and Linux Platform Applicable in a Virtual Architectural Walkthrough Application (P. Thubaasini et al.)
58. Simulation-Based Stress Analysis for a 3D Modeled Humerus-Prosthesis Assembly (S. Herle et al.)
59. Chaos-Based Bit Planes Image Encryption (Jiri Giesl et al.)
60. FLEX: A Modular Software Architecture for Flight License Exam (Taner Arsan et al.)
61. Enabling and Integrating Distributed Web Resources for Efficient and Effective Discovery of Information on the Web (Neeta Verma et al.)
62. Translation from UML to Markov Model: A Performance Modeling Framework (Razib Hayat Khan and Poul E. Heegaard)
63. A Comparative Study of Protein Sequence Clustering Algorithms (A. Sharaf Eldin et al.)
64. OpenGL in Multi-User Web-Based Applications (K. Szostek and A. Piorkowski)
65. Testing Task Schedulers on Linux System (Leonardo Jelenkovic et al.)
66. Automatic Computer Overhead Line Design (Lucie Noháčová and Karel Noháč)
67. Building Test Cases through Model Driven Engineering (Helaine Sousa et al.)
68. The Effects of Educational Multimedia for Scientific Signs in the Holy Quran in Improving the Creative Thinking Skills for Deaf Children (Sumaya Abusaleh et al.)
69. Parallelization of Shape Function Generation for Hierarchical Tetrahedral Elements (Sara E. McCaslin)
70. Analysis of Moment Invariants on Image Scaling and Rotation (Dongguang Li)
71. A Novel Binarization Algorithm for Ballistics Firearm Identification (Dongguang Li)
72. A Schema Classification Scheme for Multilevel Databases (Tzong-An Su and Hong-Ju Lu)
73. Memory Leak Sabotages System Performance (Nagm Mohamed)
74. Writer Identification Using Inexpensive Signal Processing Techniques (Serguei A. Mokhov et al.)
75. Software Artifacts Extraction for Program Comprehension (Ghulam Rasool and Ilka Philippow)
76. Model-Driven Engineering Support for Building C# Applications (Anna Derezinska and Przemyslaw Oltarzewski)
77. Early Abnormal Overload Detection and the Solution on Content Delivery Network (Cam Nguyen Tan et al.)
78. ECG Feature Extraction using Time Frequency Analysis (Mahesh A Nair)
79. Optimal Component Selection for Component-Based Systems (Muhammad Ali Khan and Sajjad Mahmood)
80. Domain-based Teaching Strategy for Intelligent Tutoring System Based on Generic Rules (Dawod Kseibat et al.)
81. Parallelization of Edge Detection Algorithm using MPI on Beowulf Cluster (Nazleeni Haron et al.)
82. Teaching Physical Based Animation via OpenGL Slides (Miao Song et al.)
83. Appraising the Corporate Sustainability Reports - Text Mining and Multi-Discriminatory Analysis (J. R. Modapothala et al.)
84. A Proposed Treatment for Visual Field Loss caused by Traumatic Brain Injury using Interactive Visuotactile Virtual Environment (Attila J. Farkas et al.)
85. Adaptive Collocation Methods for the Solution of Partial Differential Equations (Paulo Brito and António Portugal)
86. Educational Virtual Reality through a Multiview Autostereoscopic 3D Display (Emiliyan G. Petkov)
87. An Approach for Developing Natural Language Interface to Databases Using Data Synonyms Tree and Syntax State Table (Safwan Shatnawi and Rajeh Khamis)
88. Analysis of Strategic Maps for a Company in the Software Development Sector (Marisa de Camargo Silveira et al.)
89. The RDF Generator (RDFG) - First Unit in the Semantic Web Framework (SWF) (Ahmed Nada and Badie Sartawi)
90. Information Technology to Help Drive Business Innovation and Growth (Igor Aguilar Alonso et al.)
91. A Framework for Enterprise Operating Systems Based on Zachman Framework (S. Shervin Ostadzadeh and Amir Masoud Rahmani)
92. A Model for Determining the Number of Negative Examples Used in Training a MLP (Cosmin Cernazanu-Glavan and Stefan Holban)
93. GPU Benchmarks Based On Strange Attractors (Tomáš Podoba et al.)
94. Effect of Gender and Sound Spatialization on Speech Intelligibility in Multiple Speaker Environment (M. Joshi et al.)
95. Modeling Tourism Sustainable Development (O. A. Shcherbina and E. A. Shembeleva)
96. Pi-ping - Benchmark Tool for Testing Latencies and Throughput in Operating Systems (J. Abaffy and T. Krajčovič)
97. Towards Archetypes-Based Software Development (Gunnar Piho et al.)
98. Dependability Aspects Regarding the Cache Level of a Memory Hierarchy Using Hamming Codes (O. Novac et al.)
99. Performance Evaluation of an Intelligent Agents Based Model within Irregular WSN Topologies (Alberto Piedrahita Ospina et al.)
100. Double Stage Heat Transformer Controlled by Flow Ratio (S. Silva-Sotelo et al.)
101. Enforcement of Privacy Policies over Multiple Online Social Networks for Collaborative Activities (Zhengping Wu and Lifeng Wang)
102. An Estimation of Distribution Algorithms Applied to Sequence Pattern Mining (Paulo Igor A. Godinho et al.)
103. TLATOA COMMUNICATOR: A Framework to Create Task-Independent Conversational Systems (D. Perez and I. Kirschning)
104. Using Multiple Datasets in Information Visualization Tool (Rodrigo Augusto de Moraes Lourenço et al.)
105. Improved Crack Type Classification Neural Network based on Square Sub-images of Pavement Surface (Byoung Jik Lee and Hosin "David" Lee)
106. Building Information Modeling as a Tool for the Design of Airports (Julio Tollendal Gomes Ribeiro et al.)
107. A Petri-Nets Based Unified Modeling Approach for Zachman Framework Cells (S. Shervin Ostadzadeh and Mohammad Ali Nekoui)
108. From Perspectiva Artificialis to Cyberspace: Game-Engine and the Interactive Visualization of Natural Light in the Interior of the Building (Evangelos Dimitrios Christakou et al.)
109. Computational Shape Grammars and Non-Standardization: a Case Study on the City of Music of Rio de Janeiro (Felix A. Silva Junior and Neander Furtado Silva)
110. Architecture Models and Data Flows in Local and Group Datawarehouses (R. M. Bogza et al.)

Index

Chapters (100)

This paper presents an efficient and very simple method for separating text characters from graphical images in a given document image, based on Recursive Projection Profiling (RPP) of the document image. The algorithm pushes the projection profiling method [4] [6] close to its limits, extracting almost everything the method can offer. The projection profile reveals the empty space along the horizontal and vertical axes, exposing the gaps between characters and images. The algorithm proved to be efficient, accurate and simple. Though some exceptional cases were encountered owing to the drawbacks of projection profiling, they were well handled with some simple heuristics, resulting in a very efficient method for text-image separation.
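As the abstract explains, the method projects ink onto each axis and cuts the image at empty gaps, recursing into each band. The sketch below illustrates that idea in NumPy under stated assumptions: the alternating-axis strategy, the function names and the (top, left, height, width) block format are illustrative choices, not the chapter's exact algorithm, and the text/image decision heuristics are omitted.

```python
import numpy as np

def runs(mask):
    """(start, end) pairs of consecutive True entries in a 1-D boolean mask."""
    out, start = [], None
    for i, m in enumerate(mask):
        if m and start is None:
            start = i
        if not m and start is not None:
            out.append((start, i))
            start = None
    if start is not None:
        out.append((start, len(mask)))
    return out

def rpp(img, y0=0, x0=0, horizontal=True, blocks=None):
    """Recursively cut a binary image (1 = ink) at empty projection gaps."""
    if blocks is None:
        blocks = []
    profile = img.sum(axis=1 if horizontal else 0)   # project onto one axis
    segments = runs(profile > 0)                     # ink bands between gaps
    if segments == [(0, len(profile))]:              # no gap along this axis
        if not horizontal:                           # none on either axis: leaf
            blocks.append((y0, x0, img.shape[0], img.shape[1]))
        else:
            rpp(img, y0, x0, horizontal=False, blocks=blocks)
        return blocks
    for a, b in segments:                            # recurse into each band
        if horizontal:
            rpp(img[a:b, :], y0 + a, x0, False, blocks)
        else:
            rpp(img[:, a:b], y0, x0 + a, True, blocks)
    return blocks

page = np.zeros((8, 8), dtype=int)
page[1:3, 1:4] = 1                                   # a small "word"
page[5:7, 5:8] = 1                                   # a separate component
print(rpp(page))   # [(1, 1, 2, 3), (5, 5, 2, 3)] as (top, left, height, width)
```

In the paper's setting, the resulting leaf blocks would then be classified as text or image by simple heuristics, for example on block size and aspect ratio.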
Cloud computing is poised to become one of the most important and fundamental shifts in how computing is consumed and used. Forecasts show that government will play a lead role in adopting cloud computing, for data storage, applications, and processing power, as IT executives seek to maximize their returns on limited procurement budgets in these challenging economic times. After an overview of the cloud computing concept, this article explores the security issues facing public sector use of cloud computing and looks at the risks and benefits of shifting to cloud-based models. It concludes with an analysis of the challenges that lie ahead for government use of cloud resources.
According to the Business Software Alliance (BSA), Pakistan ranks among the top 10 countries with the highest piracy rates [1]. To overcome the problem of piracy, local Information Technology (IT) companies are willing to migrate towards Open Source Software (OSS). For this reason, the need for a framework/model for OSS adoption has become more pronounced. Research on the adoption of IT innovations has commonly drawn on innovation adoption theory. However, over time some weaknesses have been identified in the theory, and it has been realized that the factors affecting the adoption of OSS vary from country to country. The objective of this research is to provide a framework for OSS adoption in the local environment and then to compare it with the existing framework developed for OSS adoption in other, more advanced countries. This paper proposes a framework for understanding the relevant strategic issues, and it also highlights problems, restrictions and other factors that prevent organizations from adopting OSS. A factor-based comparison of the proposed framework with the existing framework is provided in this research.
This paper presents novel research on the Personal Information Environment (PIE), a relatively new field to be explored. A PIE is a self-managing pervasive environment. It contains an individual's personal pervasive information associated with the user's related or non-related contextual environments. Contexts are vitally important because they control, influence and affect everything within them by dominating their pervasive content(s). This paper shows in depth how a Personal Information Environment is achieved for a user's devices, which are to be spontaneous and readily self-manageable on an autonomic basis. The paper shows an actual implementation of pervasive data management for a PIE user, covering the appending and updating of PIE data from the last device used by the user to other PIE devices for further processing and storage needs. Data recharging is utilized to transmit and receive data among PIE devices.
This paper presents the operational semantics of the message passing system for a distributed language called LIPS. The message passing system is based on a virtual machine called AMPS (Asynchronous Message Passing System), designed around a data structure that is portable and can go with any distributed language. The operational semantics that specifies the behaviour of this system uses structural operational semantics to reveal the intermediate steps, which helps with the analysis of its behaviour. We are able to combine this with the big-step semantics that specifies the computational part of the language to produce a cohesive semantics for the language as a whole.
Software Engineering is one of the recently evolving subjects in research and education. Instructors and books addressing this field of study lack a common ground regarding which subjects should be covered in teaching introductory or advanced courses in this area. In this paper, a proposed ontology for software engineering education is formulated. This ontology divides software engineering projects and study into different perspectives: projects, products, people, process and tools. Further or deeper levels of abstraction of those fields can be described at levels that depend on the type or level of the course to be taught. The goal of this separation of concerns is to organize the software engineering project into smaller, manageable parts that are easy to understand and identify. It should reduce complexity and improve clarity. This concept is at the core of software engineering. The 4Ps concerns both overlap and remain distinct, and the research points to both sides. Concepts such as ontology, abstraction, modeling, and views or separation of concerns (which we are attempting here) always include some sort of abstraction or focus, with the goal of drawing a better image or understanding of the problem. In abstraction or modeling, for example, when we model the students of a university in a class, we list only relevant properties; many student properties are ignored and not listed because they are irrelevant to the domain. The weight, height, and color of a student are examples of such properties that would not be included in the class. In the same manner, the goal of the separation of concerns in software engineering projects is to improve understandability and consider only relevant properties. We also hope that the separation of concerns will help software engineering students better understand the large number of modeling and terminology concepts.
In this paper we present the design of a student model based on a generic fuzzy inference design. The membership functions and the rules of the fuzzy inference can be fine-tuned by the teacher during the learning process (at run time) to suit pedagogical needs, creating a more flexible environment. The design is used to represent the learner's performance. In order to test the human-computer interaction of the system, a prototype was developed with limited teaching materials. Interaction with this first prototype demonstrated the effectiveness of decision making using fuzzy inference.
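To make the mechanism concrete, here is a minimal zero-order (Sugeno-style) fuzzy inference sketch. The variables, membership triples and rule outputs are invented placeholders, not the chapter's rule base; the point is that a teacher could retune `score_sets` and `time_sets` at run time, as the abstract describes.

```python
def tri(x, a, b, c):
    """Triangular membership with peak at b; a == b or b == c gives a shoulder."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Teacher-adjustable membership definitions (hypothetical values).
score_sets = {"low": (0, 0, 50), "medium": (30, 55, 80), "high": (60, 100, 100)}
time_sets  = {"fast": (0, 0, 20), "slow": (15, 40, 60)}     # minutes per task

rules = [  # (antecedents, crisp performance value): zero-order Sugeno style
    ({"score": "high",   "time": "fast"}, 95.0),
    ({"score": "high",   "time": "slow"}, 80.0),
    ({"score": "medium", "time": "fast"}, 70.0),
    ({"score": "medium", "time": "slow"}, 55.0),
    ({"score": "low",    "time": "fast"}, 40.0),
    ({"score": "low",    "time": "slow"}, 20.0),
]

def infer(score, time):
    """Weighted-average defuzzification over all fired rules."""
    inputs = {"score": (score, score_sets), "time": (time, time_sets)}
    num = den = 0.0
    for antecedent, out in rules:
        strength = min(
            tri(inputs[var][0], *inputs[var][1][label])
            for var, label in antecedent.items()
        )
        num += strength * out
        den += strength
    return num / den if den else 0.0

print(round(infer(score=85, time=10), 1))   # 95.0: strong, fast learner
```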
A fundamental challenge in developing a usable conversational interface for Geographic Information Systems (GIS) is the effective communication of spatial concepts in natural language, whose meanings are commonly vague. This paper presents the design of an agent-based computational model, PlanGraph. This model helps the GIS keep track of the dynamic human-GIS communication context and enables the GIS to understand the meaning of a vague spatial concept under the constraints of the dynamic context.
Meta-CASE systems simplify the creation of CASE (Computer Aided System Engineering) systems. In this paper, we present a meta-CASE system that provides a web-based user interface and uses an object-relational database management system (ORDBMS) as its basis. The use of ORDBMSs allows us to integrate the different parts of the system and simplifies the creation of meta-CASE and CASE systems. ORDBMSs provide a powerful query mechanism. The proposed system allows developers to use queries to evaluate and gradually improve artifacts and to calculate values of software measures. We illustrate the use of the system with the SimpleM modeling language and discuss the use of SQL in the context of queries about artifacts. We have created a prototype of the meta-CASE system using the PostgreSQL™ ORDBMS and the PHP scripting language.
The Third Manifesto (TTM) presents the principles of a relational database language that is free of the deficiencies and ambiguities of SQL. There are database management systems that are created according to TTM. Developers need tools that support the development of databases using these database management systems. UML is a widely used visual modeling language. It provides a built-in extension mechanism that makes it possible to extend UML by creating profiles. In this paper, we introduce a UML profile for designing databases that correspond to the rules of TTM. We created the first version of the profile by translating existing profiles for SQL database design. After that, we extended and improved the profile. We implemented the profile using the UML CASE system StarUML™. We present an example of using the new profile. In addition, we describe problems that occurred during the profile development.
The investigation of the T-DMB protocol required us to create a simulation model. The NCTUns simulator, which is open source software and allows the addition of new protocols, was chosen for the implementation. This is one of the first steps of the research process. Here we give a brief overview of the T-DMB (DAB) system and describe the proposed simulation model and the problems we encountered during the work.
Keywords: T-DMB, Digital Radio, NCTUns
One important aspect of Case-Based Reasoning (CBR) is Case Selection or Editing – the selection of cases for inclusion in (or removal from) a case base. This can be motivated either by space considerations or by quality considerations. One of the advantages of CBR is that it is equally useful for boolean, nominal, ordinal, and numeric prediction tasks. However, many case selection research efforts have focused on domains with nominal or boolean predictions, and most case selection methods have relied on such problem structure. In this paper, we present details of a systematic sequence of experiments with variations on CBR case selection. In this project, the emphasis has been on case quality: an attempt to filter out cases that may be noisy or idiosyncratic and are not good for future prediction. Our results indicate that case selection can significantly increase the percentage of correct predictions at the expense of an increased risk of poor predictions in less common cases.
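The chapter's specific editing algorithms are not reproduced here, but a simple regression analogue of Wilson editing conveys the idea of filtering noisy or idiosyncratic cases: drop a case when its value disagrees strongly with a leave-one-out prediction from its nearest neighbours. The names and thresholds below are illustrative assumptions.

```python
import numpy as np

def edit_cases(X, y, k=3, tol=1.5):
    """Keep a case only if its target lies within tol * local spread of its
    k nearest neighbours' mean (leave-one-out)."""
    keep = []
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                        # leave the case itself out
        nn = np.argsort(d)[:k]
        pred, spread = y[nn].mean(), y[nn].std() + 1e-9
        if abs(y[i] - pred) <= tol * spread:
            keep.append(i)
    return np.array(keep)

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50, 1))
y = 2.0 * X[:, 0] + rng.normal(0, 0.1, 50)   # a smooth numeric task
y[7] += 25.0                                 # inject one noisy case
kept = edit_cases(X, y)
print(len(kept), 7 in kept)                  # case 7 (the outlier) is dropped
```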
Accessibility is a growing concern in computer science. Since virtual information is mostly presented visually, it may seem that access for deaf people is not an issue. However, for prelingually deaf individuals, those who became deaf before acquiring and formally learning a language, written information is often less accessible than the same content presented in signing. Further, for this community, signing is the language of choice, and reading text in a spoken language is akin to using a foreign language. Sign language uses gestures and facial expressions and is widely used by deaf communities. To enable efficient production of signed content in virtual environments, it is necessary to make written records of signs. Transcription systems have been developed to describe sign languages in written form, but these systems have limitations: since they were not originally designed with computer animation in mind, recognizing and reproducing signs in these systems is, in general, easy only for those who know the system deeply. The aim of this work is to develop a transcription system for providing signed content in virtual environments. To animate a virtual avatar, a transcription system requires sufficiently explicit information, such as movement speed, sign concatenation, the sequence of each hold and movement, and facial expressions, in order to articulate close to reality. Although many important studies on sign languages have been published, the transcription problem remains a challenge. Thus, a notation to describe, store and play signed content in virtual environments offers a multidisciplinary study and research tool, which may help linguistic studies to understand the structure and grammar of sign languages.
We show that estimating the complexity (mean and distribution) of the instances of a fixed-size Constraint Satisfaction Problem (CSP) can be very hard. We deal with the two main aspects of the problem: defining a measure of complexity and generating random unbiased instances. For the first problem, we rely on a general framework and a measure of complexity we presented at CISSE08. For the generation problem, we restrict our analysis to the Sudoku example and provide a solution that also explains why the problem is so difficult.
Measuring and improving the productivity of software developers is one of the greatest challenges faced by software development companies. To help these companies identify possible causes that interfere with the productivity of their teams, we present in this paper a list of 32 factors, extracted from the literature, that influence developer productivity. To obtain a ranking of these factors, we administered a questionnaire to developers. In this work, we present the results: the factors that have the greatest positive and negative influence on productivity, the factors with no influence, and the most important factors and what influences them. Finally, we present a comparison with the results obtained from the literature.
In this paper, we design algorithms for a system that allows Semantic Web agents to reason within what has come to be known as the Web of Trust. We integrate reasoning about belief and trust, so agents can reason about information from different sources and deal with contradictions. Software agents interact to support users who publish, share and search for documents in a distributed repository. Each agent maintains an individualized topic taxonomy for the user it represents, updating it with information obtained from other agents. Additionally, an agent maintains and updates trust relationships with other agents. When new information leads to a contradiction, the agent performs a belief revision process informed by a degree of belief in a statement and the degree of trust an agent has for the information source. The system described has several key characteristics. First, we define a formal language with well-defined semantics within which an agent can express the relevant conditions of belief and trust, and a set of inference rules. The language uses symbolic labels for belief and trust intervals to facilitate expressing inexact statements about subjective epistemic states. Second, an agent’s belief set at a given point in time is modeled using a Dynamic Reasoning System (DRS). This allows the agent’s knowledge acquisition and belief revision processes to be expressed as activities that take place in time. Third, we explicitly describe reasoning processes, creating algorithms for acquiring new information and for belief revision.
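The toy sketch below illustrates only the contradiction-handling policy described in the abstract: symbolic trust labels order the sources, and the agent declines a revision backed by a less-trusted source. The labels and data structures are invented for illustration and ignore the paper's belief intervals, inference rules and DRS machinery.

```python
TRUST = ["distrusted", "low", "moderate", "high", "certain"]   # ascending order

class Agent:
    def __init__(self):
        self.beliefs = {}      # proposition -> (polarity, source)
        self.trust = {}        # source -> index into TRUST

    def tell(self, prop, polarity, source):
        held = self.beliefs.get(prop)
        if held and held[0] != polarity:                 # contradiction found
            if self.trust[source] <= self.trust[held[1]]:
                return                                   # keep current belief
        self.beliefs[prop] = (polarity, source)          # otherwise revise

a = Agent()
a.trust = {"alice": TRUST.index("high"), "bob": TRUST.index("low")}
a.tell("doc_is_about_CSP", True, "alice")
a.tell("doc_is_about_CSP", False, "bob")   # less-trusted source: rejected
print(a.beliefs["doc_is_about_CSP"])       # (True, 'alice')
```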
The concept of a dynamic reasoning system (DRS) provides a general framework for modeling the reasoning processes of a mechanical agent, to the extent that those processes follow the rules of some well-defined logic. It amounts to an adaptation of the classical notion of a formal logical system that explicitly portrays reasoning as an activity that takes place in time. Inference rule applications occur in discrete time steps, and, at any given point in time, the derivation path comprises the agent’s belief set as of that time. Such systems may harbor inconsistencies, but these do not become known to the agent until a contradictory assertion appears in the derivation path. When this occurs one invokes a Doyle-like reason maintenance process to remove the inconsistency, in effect, disbelieving some assertions that were formerly believed. The notion of a DRS also includes an extralogical control mechanism that guides the reasoning process. This reflects the agent’s goal or purpose and is context dependent. This paper lays out the formal definition of a DRS and illustrates it with the case of ordinary first-order predicate calculus, together with a control mechanism suitable for reasoning about taxonomic classifications for documents in a library. As such, this particular DRS comprises formal specifications for an agent that serves as a document management assistant.
This paper discusses an architecture for creating systems that need to express complex models of real-world entities, especially those that exist in hierarchical and composite structures. These models need to be persisted, typically in a database system. The models also have a strong orthogonal requirement: to support representation of, and reasoning over, time.
This paper examines some general aspects of partitioning software architecture and the structuring of complex computing systems. It relates these topics in terms of the continued development of a generalized processing model for spatial-temporal processing. Data partitioning across several copies of a generic processing stack is used to implement horizontal scaling by reducing search space and enabling parallel processing. Temporal partitioning is used to provide fast response to certain types of queries and in quickly establishing initial context when using the system.
Due to various directives, the growing demand for citizen orientation, improved service quality, effectiveness, efficiency, transparency, and the reduction of costs and administrative burden, public administrations increasingly apply management tools and IT for continual service development and sustainable citizen satisfaction. Public administrations therefore implement more and more standards-based management systems, such as ISO 9001 for quality, ISO 14001 for environmental management, or others. In this situation we used, in different case studies, a holistic administration management model adapted to the administration as a basis for e-government, in order to analyze stakeholder requirements and to integrate, harmonize and optimize services, processes, data, directives, concepts and forms. In these case studies, the developed and consistently implemented holistic administration management model has promoted, over several years, service effectiveness, citizen satisfaction, efficiency, cost reduction, shorter initial training periods for new collaborators, and employee involvement for sustainable citizen-oriented service improvement and organizational development.
With more and more peer-to-peer (P2P) technologies available for online collaboration and information sharing, people can launch more and more collaborative work in online social networks with friends, colleagues, and even strangers. Without face-to-face interactions, the question of who can be trusted and shared information with becomes a big concern for users in these online social networks. This paper introduces an adaptive control service using fuzzy logic in preference definition for P2P information sharing control, and designs a novel decision-making mechanism using formal fuzzy rules and reasoning mechanisms that adjusts the P2P information sharing status according to individual users' preferences. Applying this adaptive control service to different information sharing environments shows that it can provide convenient and accurate P2P information sharing control for individual users in P2P networks.
Keywords: adaptive resource control, fuzzy logic, P2P technology, information sharing, collaborative social network
The major challenge in information access is the richness of the data available for retrieval, which has driven the evolution of principled approaches and strategies for searching. Search has become the leading paradigm for finding information on the World Wide Web. In building a successful web retrieval search engine model, a number of challenges arise at different levels, where techniques such as Usenet analysis and support vector machines are employed to significant effect. The present investigation explores a number of identified problems, their levels, and their relation to finding information on the web. This paper examines these issues by applying different methods, such as web graph analysis, the retrieval and analysis of newsgroup postings, and statistical methods for inferring meaning in text. We also discuss how one can gain control over the vast amounts of data on the web by addressing these problems in innovative ways that can greatly improve on the standard approaches. The proposed model thus assists users in finding the existing formation of data they need. The developed information retrieval model provides access to information available in various modes and media formats, and facilitates users in retrieving relevant and comprehensive information efficiently and effectively, as per their requirements. The paper also discusses the parameters and factors that are responsible for efficient searching; these parameters can be distinguished as more or less important based on the inputs at hand, and the important ones can be attended to in future extensions and developments of search engines.
Copyright protection of digital content is very necessary in today's digital world, with efficient communication media such as the internet. Text is the dominant part of internet content, and very limited techniques are available for text protection. This paper presents a novel algorithm for the protection of plain text, which embeds the logo image of the copyright owner in the text; this logo can later be extracted from the text to prove ownership. The algorithm is robust against content-preserving modifications and, at the same time, is capable of detecting malicious tampering. Experimental results demonstrate the effectiveness of the algorithm against tampering attacks by calculating normalized Hamming distances. The results are also compared with a recent work in this domain.
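The abstract evaluates tampering robustness by the normalized Hamming distance between embedded and extracted watermark bits; that metric is a one-liner (the embedding scheme itself is not reproduced here, and the bit strings below are made up):

```python
def normalized_hamming(bits_a, bits_b):
    """Fraction of positions where two equal-length bit strings differ:
    0.0 = watermark perfectly recovered, 1.0 = fully inverted."""
    assert len(bits_a) == len(bits_b)
    return sum(a != b for a, b in zip(bits_a, bits_b)) / len(bits_a)

embedded  = [1, 0, 1, 1, 0, 0, 1, 0]     # logo bits hidden in the text
extracted = [1, 0, 1, 0, 0, 0, 1, 0]     # bits recovered after tampering
print(normalized_hamming(embedded, extracted))   # 0.125
```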
Many industries are becoming dependent on Radio Frequency Identification (RFID) technology for inventory management and asset tracking. The data collected about tagged objects through RFID is used in various high-level business operations. The RFID system should hence be highly available, reliable, dependable and secure. In addition, the system should be able to resist attacks and perform recovery in case of security incidents. Together these requirements give rise to the notion of a survivable RFID system. The main goal of this paper is to analyze and specify the requirements for an RFID system to become survivable. These requirements, if adopted, can assist the system in resisting devastating attacks and recovering quickly from damage. This paper proposes techniques and approaches for RFID survivability requirements analysis and specification. From the perspective of system acquisition and engineering, the survivability requirement is the important first step in survivability specification, compliance formulation, and proof verification.
In the globalised world, the Czech economy faces many challenges brought by the processes of integration. The crucial factors for companies that want to succeed in global competition are knowledge and the ability to use that knowledge in the best possible way. The purpose of this work is to present the results of a questionnaire survey on the topic "Research of the state of knowledge management in companies in the Czech Republic", carried out in spring 2009 in cooperation between the University of Hradec Králové and the consulting company Per Partes Consulting, Ltd., under the patronage of the European Union.
Over recent years, the UK Healthcare sector has been the prime focus of many reports and industrial surveys, particularly in the field of software development and management issues. This signals the importance of growing concerns regarding quality issues in the Healthcare domain. In response, a new tailored Healthcare Software Process Improvement (SPI) model is proposed, which takes into consideration both signals from the industry and insights from the literature. This paper discusses and outlines the development of a new software process assessment and improvement model based on the ISO/IEC 15504-5 model. The proposed model will provide the Healthcare sector with specific process practices that focus on addressing current development concerns, standards compliance and quality dimension requirements for this domain.
Evolutionary learning and tuning mechanisms for fuzzy systems are a main concern of researchers in the field. The final optimized performance of a fuzzy system depends on the ability of the system to find the best optimized rule set(s) as well as optimized fuzzy variable definitions. This paper proposes a mechanism for the selection and optimization of fuzzy variables, termed "Fuzzimetric Arcs", and then discusses how this mechanism can become a standard for selecting and optimizing fuzzy set shapes to tune the performance of a GFS. A genetic algorithm is the technique that can be utilized to alter/modify the initial shape of fuzzy sets using two main operators (crossover and mutation). Optimization of rule set(s) depends mainly on the measurement of a fitness factor and the level of deviation from it.
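A minimal sketch of the GA machinery the abstract names: crossover and mutation act on the vertices of triangular fuzzy sets, and a fitness function drives the shapes toward better-performing definitions. The encoding and the placeholder fitness are assumptions for illustration, not the chapter's Fuzzimetric Arcs definitions.

```python
import random

def crossover(p1, p2):
    """One-point crossover over a list of (a, b, c) triangle definitions."""
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def mutate(sets, sigma=2.0, rate=0.2):
    """Gaussian jitter on triangle vertices, re-sorted so a <= b <= c holds."""
    out = []
    for a, b, c in sets:
        if random.random() < rate:
            a, b, c = sorted(v + random.gauss(0, sigma) for v in (a, b, c))
        out.append((a, b, c))
    return out

def fitness(sets):
    """Placeholder fitness: peaks should sit near target positions."""
    return -sum(abs(b - t) for (a, b, c), t in zip(sets, [25.0, 50.0, 75.0]))

random.seed(1)
pop = [[tuple(sorted(random.uniform(0, 100) for _ in range(3)))
        for _ in range(3)] for _ in range(20)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                       # keep the fittest individuals
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(10)]
print([round(b, 1) for _, b, _ in max(pop, key=fitness)])  # peaks near 25/50/75
```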
A three-electrode human-computer interaction system, based on digital processing of the Electromyogram (EMG) signal, is presented. This system can effectively help disabled individuals paralyzed from the neck down to interact with computers or communicate with people through computers using point-and-click graphic interfaces. The three electrodes are placed on the right frontalis, the left temporalis and the right temporalis muscles in the head, respectively. The signal processing algorithm used translates the EMG signals during five kinds of facial movements (left jaw clenching, right jaw clenching, eyebrows up, eyebrows down, simultaneous left & right jaw clenching) into five corresponding types of cursor movements (left, right, up, down and left-click), to provide basic mouse control. The classification strategy is based on three principles: the EMG energy of one channel is typically larger than the others during one specific muscle contraction; the spectral characteristics of the EMG signals produced by the frontalis and temporalis muscles during different movements are different; the EMG signals from adjacent channels typically have correlated energy profiles. The algorithm is evaluated on 20 pre-recorded EMG signal sets, using Matlab simulations. The results show that this method provides improvements and is more robust than other previous approaches.
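The first classification principle stated in the abstract (one channel's energy dominates during a specific contraction) can be sketched directly. The thresholds, window handling and channel-to-command mapping below are illustrative, and the chapter's spectral and correlation steps are omitted.

```python
import numpy as np

COMMANDS = {0: "left", 1: "right", 2: "up"}     # hypothetical per-channel actions

def classify_window(window, rest_energy, gain=3.0):
    """window: samples x 3 EMG channels -> a cursor command or None."""
    energy = (window ** 2).sum(axis=0)          # per-channel signal energy
    active = energy > gain * rest_energy        # channels well above baseline
    if active.sum() == 1:                       # one dominant channel fires
        return COMMANDS[int(np.argmax(energy))]
    if active.sum() >= 2:                       # correlated multi-channel burst,
        return "click"                          # e.g. both jaws clenched
    return None                                 # rest: no action

rng = np.random.default_rng(0)
rest = rng.normal(0, 1, (200, 3))               # baseline activity
burst = rest.copy()
burst[:, 1] += rng.normal(0, 4, 200)            # channel 1 contraction
baseline = (rest ** 2).sum(axis=0)
print(classify_window(burst, baseline))         # right
```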
This study proposes to achieve the affective assessment of a computer user through the processing of the pupil diameter (PD) signal. An adaptive interference canceller (AIC) system using the H∞ time-varying (HITV) adaptive algorithm was developed to minimize the impact of the PLR (pupil size changes caused by light intensity variations) on the measured pupil diameter signal. The modified pupil diameter (MPD) signal, obtained from the AIC, was expected to reflect primarily the pupillary affective responses (PAR) of the subject. Additional manipulations of the AIC output resulted in a Processed MPD (PMPD) signal, from which a classification feature, “PMPDmean”, was extracted. This feature was used to train and test a support vector machine (SVM), for the identification of “stress” states in the subject, achieving an accuracy rate of 77.78%. The advantages of affective recognition through the PD signal were verified by comparatively investigating the classification of “stress” and “relaxation” states through features derived from the simultaneously recorded galvanic skin response (GSR) and blood volume pulse (BVP) signals, with and without the PD feature. Encouraging results in affective assessment based on pupil diameter monitoring were obtained in spite of intermittent illumination increases purposely introduced during the experiments. Therefore, these results confirmed the possibility of using PD monitoring to evaluate the evolving affective states of a computer user.
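The chapter's AIC uses an H-infinity time-varying adaptive algorithm; as a simplified stand-in, this sketch keeps the same canceller structure but uses a plain LMS update: the illumination reference is adaptively filtered and subtracted from the measured pupil diameter, leaving an MPD-like signal. All signals and parameters are synthetic.

```python
import numpy as np

def adaptive_canceller(primary, reference, taps=4, mu=0.01):
    """primary: PD = affect + light reflex; reference: light intensity."""
    w = np.zeros(taps)
    out = np.zeros_like(primary)
    for n in range(taps, len(primary)):
        x = reference[n - taps:n][::-1]     # recent reference samples
        interference = w @ x                # estimated light-driven PLR
        e = primary[n] - interference       # error = cleaned (MPD-like) signal
        w += 2 * mu * e * x                 # LMS weight update
        out[n] = e
    return out

t = np.arange(2000)
light = np.sin(2 * np.pi * t / 200)         # illumination variations
affect = (t > 1000) * 0.5                   # a step-like "stress" response
pd = affect + 0.8 * light                   # measured pupil diameter
clean = adaptive_canceller(pd, light)
print(round(float(clean[1500:].mean()), 2)) # ~0.5: the PLR component is removed
```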
Controlling Intellectual Property Rights (IPR) in the digital world is a very hard challenge. The ability to create multiple bit-by-bit identical copies of original IPR works creates opportunities for digital piracy. One of the industries most affected by this is the music industry, which has suffered huge losses over the last few years as a result. Moreover, this also affects the way music rights collecting and distributing societies operate to assure correct music IPR identification, collection and distribution. In this article, a system for automating IPR identification, collection and distribution is presented and described. The system makes use of an advanced automatic audio identification system based on audio fingerprinting technology. This paper presents the details of the system and a use-case scenario where it is being used.
The paper deals with the problem of adapting the software-implemented fault injection (SWIFI) technique to evaluate the dependability of reactive microcontroller systems. We present an original methodology for disturbing controller operation and analyzing fault effects, taking into account the reactions of the controlled object and the impact of the system environment. Faults can be injected randomly (in space and time) or targeted at the most sensitive elements of the controller to check it under high stress. This approach allows the identification of rarely encountered problems, usually missed in classical approaches. The developed methodology has been used successfully to verify the dependability of an ABS system. Experimental results are discussed in the paper.
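A minimal illustration of the SWIFI idea of injecting faults in space (state bits) and observing the effect on the controlled output. The toy "controller" and safety bound are invented stand-ins, not the chapter's ABS model or its fault-effect analysis.

```python
import random

def controller(wheel_speed, state):
    """Trivial stand-in: braking pressure proportional to wheel speed."""
    return state["gain"] * wheel_speed

def inject_bit_flip(value, bit):
    """Disturb an integer state variable by flipping one bit (a space fault)."""
    return value ^ (1 << bit)

random.seed(3)
state = {"gain": 2}                          # golden (fault-free) state
for trial in range(5):
    faulty = dict(state, gain=inject_bit_flip(state["gain"], random.randrange(8)))
    p = controller(30, faulty)
    ok = 0 <= p <= 100                       # the checked safety property
    print(trial, faulty["gain"], p, "SAFE" if ok else "UNSAFE")
```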
Optimal clustering of a call flow graph for reaching maximum concurrency in the execution of distributable components is an NP-complete problem. Learning automata (LAs) are search tools used for solving many NP-complete problems. In this paper, a learning-based algorithm is proposed for the optimal clustering of the call flow graph and the appropriate distribution of programs at the network level. The algorithm uses the learning feature of LAs to search the state space. It is shown that the speed of reaching a solution increases remarkably when LAs are used in the search process, and this also prevents the algorithm from being trapped in local minima. Experimental results show the superiority of the proposed algorithm over others.
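A sketch of the variable-structure learning automaton machinery such algorithms build on: a linear reward-penalty scheme shifts action probabilities toward choices the environment rewards. The clustering environment is replaced here by a fixed reward vector for illustration, and the parameters are arbitrary.

```python
import random

def lrp_step(p, action, rewarded, a=0.05, b=0.005):
    """Linear reward-penalty update of the action-probability vector p."""
    for j in range(len(p)):
        if rewarded:
            p[j] = p[j] + a * (1 - p[j]) if j == action else p[j] * (1 - a)
        else:
            p[j] = p[j] * (1 - b) if j == action else b / (len(p) - 1) + p[j] * (1 - b)

random.seed(0)
p = [1 / 3] * 3                       # three candidate clusterings (actions)
reward_prob = [0.2, 0.8, 0.4]         # environment, unknown to the automaton
for _ in range(5000):
    action = random.choices(range(3), weights=p)[0]
    lrp_step(p, action, rewarded=random.random() < reward_prob[action])
print([round(q, 2) for q in p])       # most mass settles on action 1 (the best)
```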
This paper gives an overview of the development of a new software solution for project management, intended mainly for use in industrial environments. The main concern of the proposed solution is application in everyday engineering practice in various, mainly distributed, industrial companies. With this in mind, special care has been devoted to developing appropriate tools for tracking, storing and analyzing information about the project, and for delivering it in time to the right team members or other responsible persons. The proposed solution is Internet-based and uses the LAMP/WAMP (Linux or Windows - Apache - MySQL - PHP) platform, because of its stability, versatility, open source technology and simple maintenance. The modular structure of the software makes it easy to customize according to client-specific needs, with a very short implementation period. Its main advantages are simple usage, quick implementation, easy system maintenance, short training, and the need for only basic computer skills on the part of operators.
Keywords: project management software, web-based software, resources planning, task accomplishment tracking, project team communication improvement
The automated warehouse considered here consists of a number of rack locations with three cranes, a narrow-aisle shuttle, and several buffer stations with rollers. Based on an analysis of the behaviors of the active resources in the system, a modular, computerized model is presented via a colored timed Petri net approach, in which places are multicolored to simplify the model and characterize the control flow of the resources, and token colors are defined as the routes of storage/retrieval operations. In addition, an approach to realizing the model in Visual C++ is briefly given. These facts allow us to render an emulation system to simulate a discrete control application for online monitoring, dynamic dispatching control and off-line revision of scheduler policies.
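A toy rendering of the colour idea in the abstract: token colours encode the remaining storage/retrieval route, and a transition fires only when it is the token's next step. Timing, multicoloured places and the actual crane/shuttle model are omitted, and all names are invented placeholders.

```python
from collections import defaultdict

class ColoredPetriNet:
    def __init__(self):
        self.marking = defaultdict(list)       # place -> list of token colours

    def add_token(self, place, route):
        self.marking[place].append(route)      # colour = remaining route steps

    def fire(self, transition, place_in, place_out):
        """Fire if some input token's next route step names this transition."""
        for route in self.marking[place_in]:
            if route and route[0] == transition:
                self.marking[place_in].remove(route)
                self.marking[place_out].append(route[1:])   # consume one step
                return True
        return False

net = ColoredPetriNet()
net.add_token("buffer", ("crane1", "rack", "shuttle"))      # a storage order
print(net.fire("crane1", "buffer", "aisle"))   # True: crane1 is the next step
print(net.fire("crane2", "aisle", "rack"))     # False: next step is "rack"
```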
An increasing number of people are in need of help at home (elderly, isolated and/or disabled persons; people with mild cognitive impairment). Several solutions can be considered for maintaining a social link while providing tele-care to these people. Many proposals suggest the use of a robot acting as a companion. In this paper we look at an environment-constrained solution, its drawbacks (such as latency) and its advantages (flexibility, integration…). A key design choice is to control the robot using a unified Voice over Internet Protocol (VoIP) solution, while addressing bandwidth limitations, providing good communication quality and reducing transmission latency.
This paper explores whether the semantic context alone is good enough to cope with the ever-increasing number of available resources in different repositories, including the web. The problem of identifying the authors of scientific papers is used as an example. A set of problems still arises if we apply the semantic context exclusively. Fortunately, contextual semantics can be used to derive the additional information required to separate ambiguous cases. Semantic tags, well-structured documents and available databases of articles provide a possibility to be more context-aware. As context we use co-author names, references and headers to extract keywords and identify the subject. The real complexity of the problem under consideration comes from the dynamic behaviour of authors, as they can change their research topic in the next paper. As the final judge, the paper proposes applying word-usage pattern analysis. Finally, the contextual intelligence engine is described.
Nowadays, the only way to improve the stability of the software development process in a rapidly evolving global world is to be innovative and to involve professionals in projects, motivating them with both material and non-material factors. In this paper, self-organized teams are discussed. Unfortunately, not all kinds of organizations can benefit directly from agile methods, including the use of self-organized teams. The paper proposes semi-self-organized teams, presenting them as a new and promising motivating factor that retains many of the positive aspects of being self-organized and partly agile while remaining compliant with less strict conditions for following this innovative process. Semi-self-organized teams are reliable, at least in the short-term perspective, and are simple to organize and support.
Finding the optimal path between two locations in Colombo city is not a straightforward task, because of the complex road system, the heavy traffic jams, and other factors. This paper presents a system for finding the optimal driving direction between two locations within Colombo city, considering road rules (one-way, two-way, or fully closed in both directions). The system contains three main modules: a core module, a web module and a mobile module. Additionally, there are two user interfaces, one for normal users and the other for administrative users. Both interfaces can be accessed using a web browser or a GPRS-enabled mobile phone. The system is developed based on Geographic Information System (GIS) technology. GIS is considered the best option to integrate hardware, software, and data for capturing, managing, analyzing, and displaying all forms of geographically referenced information. The core of the system is MapServer (MS4W), used along with other supporting technologies such as PostGIS, PostgreSQL, pgRouting, ASP.NET and C#.
In recent years, Grid Computing (GC) has made big steps in its development, contributing to the solution of practical problems that need large storage capacity and computing performance. This paper introduces an approach to integrating the security mechanisms of the Grid Security Infrastructure (GSI) of the open source Globus Toolkit 4.0.6 (GT4) into an application for storing, format-converting and playing online media files, based on GC.
In this paper a tool called ParaGraph, supporting C code parallelization, is presented. ParaGraph is a plug-in for the Eclipse IDE and enables manual and automatic parallelization. A parallelizing compiler automatically inserts OpenMP directives into the output source code; OpenMP directives can also be inserted manually by the programmer. ParaGraph shows the C code after parallelization. Visualization of the parallelized code can be used to understand the rules and constraints of parallelization and to tune the parallelized code.
This paper discusses some of the salient issues involved in implementing the illusion of a shared-memory programming model across a group of distributed memory processors, from a cluster through to an entire GRID. This illusion can be provided by a distributed shared memory (DSM) system implemented using autonomous agents. Mechanisms that have the potential to increase performance by omitting consistency-latency intra-site messages and data transfers are highlighted. In this paper we describe the overall design/architecture of a prototype system, AOMPG, which integrates the DSM and agent paradigms and may be the target of an OpenMP compiler. Our goal is to apply this to GRID applications.
Fleet monitoring of commercial vehicles has received major attention recently. A good monitoring solution increases fleet efficiency by reducing transportation durations, optimizing the planned routes, and providing determinism at the intermediate and final destinations. This paper presents a fleet monitoring system for commercial vehicles using the Internet as the data infrastructure. The mashup concept was implemented for creating the user interface.
An implementation and performance analysis of heat transfer modeling using the most popular component environments is the scope of this article. The computational problem is described, and the proposed decomposition for parallelization is shown. The implementation is prepared for MS .NET, Sun Java and Mono. Tests are done for various combinations of operating systems and hardware platforms. The performance of the calculations is experimentally measured and analyzed. The most interesting issue is communication tuning in distributed component software: the proposed method can speed up computation time, but the final time also depends on the performance of the network connections in the component environments. These results are presented and discussed.
A novel hybrid technique for detecting and predicting the motion of objects in a video stream is presented in this paper. The novelty consists in an extension of the Savitzky-Golay smoothing filter that applies a difference approach for tracing an object's mass center, with or without acceleration, in noisy images. The proposed adaptation of least squares methods for smoothing the fast-varying values of the motion predicting function makes it possible to avoid the oscillation of that function at the same degree of the polynomial used. Better results are obtained when the time of motion interpolation is divided into subintervals and the function is represented by a different polynomial over each subinterval. Therefore, in the proposed hybrid technique, the spatial clusters containing objects in motion are detected by an image difference operator, and the behavior of those clusters is analyzed using their mass centers in consecutive frames. The predicted location of an object is then computed using a modified weighted least squares algorithm. This provides a tracing of possible routes that is invariant to the oscillation of the predicting polynomials and to the noise present in the images. For the irregular motion that frequently occurs in dynamic scenes, a compensation and stabilization technique is also proposed. The efficiency of the proposed technique is analyzed and evaluated on the basis of several simulated kinematics experiments.
Index Terms: image processing, motion prediction, least squares model, interpolating polynomial oscillation and stabilization
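The subinterval idea is easy to demonstrate: instead of one global high-degree polynomial, which tends to oscillate, separate low-degree least-squares polynomials are fitted over subintervals of the trajectory. Plain numpy.polyfit stands in for the chapter's modified weighted scheme, and the test signal is synthetic.

```python
import numpy as np

def piecewise_fit(t, x, pieces=8, degree=3):
    """Separate least-squares polynomial fits over equal subintervals."""
    smooth = np.empty_like(x)
    for idx in np.array_split(np.arange(len(t)), pieces):
        coeffs = np.polyfit(t[idx], x[idx], degree)    # local LS fit
        smooth[idx] = np.polyval(coeffs, t[idx])
    return smooth

rng = np.random.default_rng(2)
t = np.linspace(0, 8, 160)
truth = 0.5 * t**2 + np.sin(3 * t)              # true mass-centre path
x = truth + rng.normal(0, 0.3, t.size)          # noisy observations
smooth = piecewise_fit(t, x)
print(round(float(np.abs(smooth - truth).mean()), 3))  # small: tracks the path
```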
Virtualization has gained great popularity in recent years, with application virtualization being the latest trend. Application virtualization offers several benefits for application management, especially in larger and dynamic deployment scenarios. In this paper, we first introduce common application virtualization principles before evaluating the security of the Microsoft App-V and VMware ThinApp application virtualization environments with respect to external security threats. We compare different user account privileges and levels of sandboxing for virtualized applications. Furthermore, we identify the major security risks as well as the trade-offs with ease of use that result from the virtualization of applications.
The noise performance of a lumped passive uniform cascade of identical element two-port networks is investigated. The N-block network is characterized in closed form based on the eigenvalues of the element two-port ABCD transmission matrix. The thermal noise performance is derived and demonstrated in several examples.
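A numeric sketch of the cascade construction the abstract refers to: the ABCD matrix of an N-block chain of identical two-ports is the N-th matrix power of the element matrix, and its eigenvalues characterise the closed-form behaviour. The element values below are arbitrary, and the noise derivation itself is not reproduced.

```python
import numpy as np

R, G = 1.0, 0.01                      # series resistance, shunt conductance
abcd = np.array([[1 + R * G, R],      # ABCD matrix of one L-section element
                 [G,         1.0]])   # (series R followed by shunt G)

N = 20
cascade = np.linalg.matrix_power(abcd, N)   # ABCD matrix of the N-block chain

lam, _ = np.linalg.eig(abcd)                # eigenvalues govern growth/decay
print(np.round(cascade, 3))
print("eigenvalues:", np.round(lam, 4))     # their product = det = 1 (reciprocal)
```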
Research on agent-oriented software has developed around different practical applications. The same cannot, however, be said about the development of measures to evaluate agent quality in terms of its key characteristics. In some cases, there have been proposals to use and adapt measures from other paradigms, but no agent-specific quality model has been investigated. As part of research into agent quality, this paper presents the evaluation of two key characteristics: social ability and autonomy. Additionally, we present some results from a case study on a multi-agent system.
Perception measuring and perception management is an emerging approach in the area of product management. Cognitive, psychological, behavioral and neurological theories, tools and methods are being employed for a better understanding of the mechanisms of a consumer's attitude and decision processes. Software is also defined as a product; however, this kind of product is significantly different from all others. Software products are intangible, and it is difficult to trace their characteristics, which depend strongly on a dynamic context of use. Understanding customers' cognitive processes gives an advantage to producers aiming to develop products that "win the market". Is it possible to adopt these theories, methods and tools for the purpose of software perception, especially software quality perception? The theoretical answer to this question seems to be easy; in practice, however, the list of differences between software products and software projects hinders the analysis of certain factors and their influence on the overall perception. In this article, the authors propose a method and describe a tool designed for research on the perception of software quality. The tool is designed to overcome the problem stated above, adopting a modern behavioral economics approach.
This paper presents an approach to processing Spanish linguistic categories automatically. The approach is based on a module of a prototype named WIH (Word Intelligent Handler), a project to develop a conversational bot. The module learns category usage sequences in sentences and extracts a weighting metric to discriminate the most common structures in real dialogs. Such a metric is important for defining the preferred organization the robot uses to build an answer.
In this paper, a content-based image retrieval system for the aggregation and combination of different image features is presented. Feature aggregation is an important technique in general content-based image retrieval systems that employ multiple visual features to characterize image content. We introduce and evaluate linear combination and support vector machines for fusing the different image features. The implemented system has several advantages over existing content-based image retrieval systems. The several features implemented in our system allow the user to adapt the system to the query image. The SVM-based approach for ranking retrieval results helps process specific queries for which users have no knowledge of suitable descriptors.
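The linear-combination side of such fusion can be sketched in a few lines: per-feature distances between the query and each database image are combined into one score by a weighted sum. The feature names, dimensions and weights below are assumptions for illustration.

```python
# Sketch of linear feature-distance fusion for CBIR ranking
# (feature extractors and weights are invented, not the paper's).
import numpy as np

def fused_distance(query_feats, image_feats, weights):
    """Combine per-feature distances (e.g. color histogram, texture,
    shape) into a single score by a weighted linear sum."""
    total = 0.0
    for name, w in weights.items():
        d = np.linalg.norm(query_feats[name] - image_feats[name])
        total += w * d
    return total

weights = {"color": 0.5, "texture": 0.3, "shape": 0.2}
q = {k: np.random.rand(32) for k in weights}
db = [{k: np.random.rand(32) for k in weights} for _ in range(100)]
ranking = sorted(range(len(db)),
                 key=lambda i: fused_distance(q, db[i], weights))
print(ranking[:5])   # indices of the five best-matching images
```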
The patient, in his multiple facets of citizen and user of health services, needs to acquire, throughout adult life, favorable health conditions that improve his quality of life, and it is the responsibility of health organizations to initiate the process of support for that patient during mature life. The provision of health services and the doctor-patient relationship are undergoing important changes all over the world, forced to a large extent by the unsustainability of the system itself. Nevertheless, decision making requires prior information; moreover, the very need to be informed requires a "culture" of health that generates proactivity and the capacity to search for instruments that facilitate awareness of the illness and its self-care. It is therefore necessary to put into effect an ICT model (hiPAPD) whose objective is to foster interaction, motivation and persuasion in the surroundings of the diabetic patient, facilitating his self-care. As a result, the patient himself individually manages his services through devices and Ambient Intelligence (AmI) systems. Keywords: ICT, emotional design, captology, diabetic patient, self-care
Jawi knowledge is becoming important not just for adults but also for growing children to learn at the initial stage of their lives. This project is basically to study and develop Embedded Jawi Generator Software that generates and creates Jawi script easily. The user can choose and enter Jawi scripts and learn each of them. The scripts run from alif to yaa, approximately 36 scripts in total, each with a colorful button. The system is also created as an interactive system that attracts users, especially kids. The Jawi Generator Software was developed using the Java language on a Linux operating system (Fedora) and runs on the UP-NETARM2410-S Linux board. The performance of the Jawi Generator System is then investigated for its accuracy in displaying the words, as well as for board performance.
This paper describes Linux, an open source platform used to develop and run a virtual architectural walkthrough application. It offers some qualitative reflections and observations on the nature of Linux in the context of Virtual Reality (VR) and on the most popular and important claims associated with the open source approach. The ultimate goal of this paper is to measure and evaluate the performance of Linux used to build the virtual architectural walkthrough and to develop a proof of concept based on the results obtained through this project. In addition, this study reveals the benefits of using Linux in the field of virtual reality and presents a basic comparison and evaluation between the Windows and Linux operating systems. The Windows platform is used as a baseline to evaluate the performance of Linux, which is measured against three main criteria: frame rate, image quality and mouse motion.
The development of mechanical models of the humerus-prosthesis assembly represents a solution for analyzing the behavior of prosthesis devices under different conditions, some of which are impossible to reproduce in vivo due to the irreversible phenomena that can occur. This paper presents a versatile model of the humerus-prosthesis assembly. The model is used for analyzing stress and displacement distributions under different configurations that correspond to possible later in vivo implementations. A 3D scanner was used to obtain the virtual model of the humerus bone. The endoprosthesis was designed using 3D modeling software, and the humerus-prosthesis assembly was analyzed using Finite Element Analysis software.
Bit planes of a discrete signal can be used not only for encoding or compression but also for encryption. This paper investigates the composition of the bit planes of an image and their utilization in the encryption process. The proposed encryption scheme is based on the chaotic maps of Peter de Jong and is designed primarily for image signals. The positions of all components of the bit planes are permuted according to the chaotic behaviour of Peter de Jong's system.
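A minimal sketch of the permutation idea, assuming (as the abstract suggests) that a de Jong orbit drives the reordering of bit-plane positions: the chaotic sequence is sorted and its argsort order is used as the permutation. The map parameters and initial state stand in for a secret key and are purely illustrative.

```python
# Sketch: permuting the positions of one bit plane with a Peter de Jong
# chaotic map (parameters are illustrative; real keys would differ).
import numpy as np

def de_jong_sequence(n, a=1.4, b=-2.3, c=2.4, d=-2.1, x=0.1, y=0.1):
    """Peter de Jong map: x' = sin(a*y) - cos(b*x); y' = sin(c*x) - cos(d*y)."""
    out = np.empty(n)
    for i in range(n):
        x, y = np.sin(a * y) - np.cos(b * x), np.sin(c * x) - np.cos(d * y)
        out[i] = x
    return out

def permute_bit_plane(plane, key_seq):
    """Sort the chaotic sequence; its argsort order is the permutation."""
    perm = np.argsort(key_seq)
    return plane.ravel()[perm].reshape(plane.shape), perm

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
plane0 = (img >> 7) & 1                      # most significant bit plane
seq = de_jong_sequence(plane0.size)
scrambled, perm = permute_bit_plane(plane0, seq)
# Decryption inverts the permutation:
restored = np.empty_like(scrambled.ravel())
restored[perm] = scrambled.ravel()
assert (restored.reshape(plane0.shape) == plane0).all()
```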
This paper is about the design and implementation of a web-based examination system called FLEX (Flight License Exam Software). We designed and implemented a flexible and modular software architecture. The implemented system provides basic functions such as adding questions, building exams from these questions, and letting students take the exams. There are three types of users with different authorizations: system administrator, operators and students. The system administrator operates and maintains the system and audits its integrity; he cannot change the results of exams and cannot take an exam. The operator role covers instructors, who have privileges such as preparing exams, entering questions and editing existing questions. Students can log on to the system and access exams via a certain URL. Another characteristic of our system is that, for security reasons, operators and the system administrator are not able to delete questions. Exam questions are stored in the database under their topics and lectures, so operators and the system administrator can easily choose questions. Taken together, the FLEX software allows many students to take exams at the same time under safe, reliable and user-friendly conditions, and it is also a reliable examination system for authorized aviation administration companies. The system was developed on the LAMP web development platform (Linux, the Apache web server, MySQL and the object-oriented scripting language PHP), and the page structures were built with a Content Management System (CMS).
The National Portal of India [1] integrates information from distributed web resources such as websites and portals of different Ministries, Departments and State Governments as well as district administrations. These websites were developed at different points of time, using different standards and technologies, so integrating information from these distributed, disparate web resources is a challenging task; it also affects information discovery by a citizen using a unified interface such as the National Portal. Existing text-based search engines would not yield the desired results either [7]. A couple of approaches were deliberated to address this challenge, and it was concluded that a metadata replication based approach would be the most feasible and sustainable. Accordingly, a solution was designed for the replication of metadata from distributed repositories using a service-oriented architecture. Uniform metadata specifications were devised based on the Dublin Core standard [9]. To begin with, the solution is being implemented across the National Portal and 35 State Portals spread over the length and breadth of India. Metadata from the distributed repositories is replicated to a central repository regardless of the platform and technology used by the distributed repositories. A simple search interface has also been developed for efficient and effective information discovery by citizens.
Performance engineering focuses on the quantitative investigation of the behavior of a system during the early phases of the system development life cycle. Bearing this in mind, we delineate a performance modeling framework for communication system applications that translates high-level UML notation into a Continuous Time Markov Chain (CTMC) model and solves the model for the relevant performance metrics. The framework uses UML collaborations, activity diagrams and deployment diagrams to generate the performance model of a communication system. The system dynamics are captured by UML collaboration and activity diagrams as reusable specification building blocks, while the deployment diagram highlights the components of the system. The collaboration and activity diagrams show how reusable building blocks, in the form of collaborations, compose the service components through input and output pins by highlighting the behavior of the components; a mapping is then delineated between the collaborations and the system components identified by the deployment diagram. Moreover, the UML models are annotated with the performance-related quality of service (QoS) information necessary for solving the performance model for the relevant performance metrics. The applicability of our proposed performance modeling framework is demonstrated in the context of modeling a communication system.
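The solving step of such a framework boils down to standard CTMC analysis. The sketch below, with an assumed toy generator matrix rather than one derived from real UML models, computes steady-state probabilities from which metrics like utilization or throughput follow.

```python
# Sketch: solving a small CTMC for steady-state probabilities, the kind
# of model such a framework would generate from annotated UML diagrams.
import numpy as np

# Assumed generator matrix Q for a 3-state toy model (rows sum to zero);
# states: 0 = idle, 1 = transmitting, 2 = recovering.
Q = np.array([[-0.5,  0.5,  0.0],
              [ 0.2, -0.6,  0.4],
              [ 0.8,  0.0, -0.8]])

# Steady state: pi @ Q = 0 with sum(pi) = 1. Replace one balance
# equation with the normalization condition and solve the linear system.
A = np.vstack([Q.T[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
print(pi)   # performance metrics (utilization etc.) follow from pi
```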
In this paper, we survey four clustering techniques and discuss their advantages and drawbacks. A review of eight different protein sequence clustering algorithms is provided, together with a comparison of the algorithms on the basis of several factors.
In this article the construction and potential of an OpenGL multi-user web-based application are presented. Common technologies such as ASP.NET, Java and Mono were used together with specific OpenGL libraries to visualize three-dimensional medical data. The most important conclusion of this work is that server-side applications can easily take advantage of a fast GPU and produce efficient results of advanced computation, just like visualization.
Testing task schedulers on the Linux operating system proves to be a challenging task. There are two main problems. The first is identifying which properties of the scheduler to test. The second is how to perform the tests, e.g., which API to use that is sufficiently precise and at the same time supported on most platforms. This paper discusses the problems in realizing a test framework for testing task schedulers and presents one potential solution. The observed behavior of the scheduler is that used for "normal" task scheduling (SCHED_OTHER), unlike the behavior used for real-time tasks (SCHED_FIFO, SCHED_RR).
The approach to the design of overhead electric lines has changed very significantly in recent years. Designers must especially keep in mind new reliability demands in the field, and these new requirements are the basis of new European and national standards. To simplify the design layout, automate the verification of all rules and limitations, and minimize mistakes, a computer application was developed to solve these tasks. This article describes the new approach to this task and the features and possibilities of this software tool.
Recently, Model Driven Engineering (MDE) has been proposed to face the complexity in the development, maintenance and evolution of large and distributed software systems; Model Driven Architecture (MDA) is an example of MDE. In this context, model transformations enable a large reuse of software systems through the transformation of a Platform Independent Model into a Platform Specific Model. Although source code can be generated from models, defects can be injected during the modeling or transformation process. In order to deliver software systems without defects that cause errors and failures, the source code must be tested. In this paper, we present an approach that addresses testing throughout the whole software life cycle, i.e. it starts at the modeling level and finishes with testing the source code of the software system. We provide an example to illustrate our approach.
This paper investigates the role of the scientific signs in the Holy Quran in improving the creative thinking skills of deaf children using multimedia. The paper investigates whether the performance of the experimental group's individuals is statistically significantly different from that of the control group's individuals on the Torrance Test for creative thinking (fluency, flexibility, originality and the total degree) in two cases: 1. without considering the gender of the population; 2. considering the gender of the population.
Research has gone into parallelization of the numerical aspects of computationally intense analysis and solutions. Recent advances in computer algebra systems have opened up new opportunities for research: generating closed-form, symbolic solutions more efficiently by parallelizing the symbolic manipulations.
Multilevel secure (MLS) database models provide a data protection mechanism different from traditional data access control. MLS databases have been used in various application domains including government, hospitals and the military. The MLS database model protects data by grouping them into different classifications and creating different views for users of different clearance levels. Previous models have focused on data-level classification, such as tuples and elements. In this study, we introduce a schema-level classification mechanism, i.e. attribute and relation classification. We first define the basic model and then give definitions of the integration properties and operations of the database. The schema classification scheme reduces semantic inference and thus prevents users from compromising the database.
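The essence of schema-level classification can be sketched in a few lines: each attribute carries a label, and a user's view retains only the attributes at or below the user's clearance. The labels and the example schema below are assumptions for illustration, not the paper's formal model.

```python
# Minimal sketch of schema-level MLS filtering: attributes carry a
# classification, and a user's view keeps only attributes dominated
# by the user's clearance (labels and schema are illustrative).
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

schema = {
    "personnel": {
        "name":       "unclassified",
        "salary":     "confidential",
        "assignment": "secret",
    }
}

def visible_attributes(relation, clearance):
    """Return the attributes of `relation` a user with `clearance` may see."""
    cl = LEVELS[clearance]
    return [attr for attr, label in schema[relation].items()
            if LEVELS[label] <= cl]

print(visible_attributes("personnel", "confidential"))
# ['name', 'salary'] -- 'assignment' is hidden at this clearance level
```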
A memory leak refers to the inability of a program to release the memory, or part of it, that it has acquired to perform certain tasks [1]. The unintended consequences of such behavior are manifested as diminishing performance at best; in worst-case scenarios, memory leaks can cause the computer system to freeze or applications to fail completely. Memory leaks are particularly disastrous in limited-memory embedded systems and in client-server environments where applications share memory across multi-user platforms. It is up to operating system designers to make sure that running applications release memory after program termination. This work assesses and quantifies the impact of memory leaks on system performance.
The maintenance of legacy software applications is a complex, expensive, quite challenging, time-consuming and daunting task due to program comprehension difficulties. The first step in software maintenance is to understand the existing software and to extract the high-level abstractions from the source code. A number of methods, techniques and tools are applied to understand legacy code. Each technique supports particular legacy applications with automated or semi-automated tool support, keeping in view the requirements of the maintainer. Most of the techniques support modern languages but lack support for older technologies. This paper presents a lightweight methodology for the extraction of different artifacts from COBOL and other legacy applications.
Realization of the Model-Driven Engineering (MDE) vision of software development requires comprehensive and user-friendly tool support. This paper presents a UML-based approach for building trustworthy C# applications. UML models are refined using profiles that assign class model elements to C# concepts and to elements of the implementation project. Stereotyped elements are verified on the fly and during model-to-code transformation in order to prevent the creation of incorrect code. The Transform OCL Fragments into C# system (T.O.F.I.C.) was created as a feature of the Eclipse environment and extends the IBM Rational Software Architect tool.
Starting from the articles of H. Yu Chen on the early detection of network attacks [1], the authors applied his approach to Early Abnormal Overload Detection (EAOD) on Content Delivery Networks (CDN) and suggest solutions to the problem of limiting abnormal overload on a large network, ensuring that users can always access the desired resources. The early overload detection mechanism operates at three levels: at each router, in each autonomous system domain (AS domain) and across inter-autonomous domains (inter-AS domains). At each router, when the abnormal load exceeds a threshold, the router notifies a server that maintains the Change Aggregation Tree (CAT) in the autonomous domain; each node of the tree is one of the overloaded routers. Across inter-AS domains, the CAT servers exchange information with each other to create a global CAT. Based on the height and shape (density) of the global CAT tree, it can be determined on which destination network the overload occurs and which user network caused it. The administrator then decides to move the content (as a service) that causes the overload to a user network. In this way, overload on intermediate and destination networks is prevented. This approach requires cooperation among network providers.
The proposed algorithm is a novel method for the feature extraction of ECG beats based on wavelet transforms. Combining two well-accepted methods, the Pan-Tompkins algorithm and wavelet decomposition, the system is implemented with the help of MATLAB. The focus of this work is to implement an algorithm that can extract the features of ECG beats with high accuracy. The performance of the system is evaluated in a pilot study using the MIT-BIH Arrhythmia database.
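The wavelet side of such a feature extractor can be sketched with PyWavelets (the paper uses MATLAB; this is an illustrative analogue): decompose a beat and keep per-band statistics as the feature vector. The wavelet, decomposition level and chosen statistics are assumptions, not the paper's exact settings.

```python
# Sketch of wavelet-based ECG beat feature extraction (illustrative).
import numpy as np
import pywt   # PyWavelets

def beat_features(beat, wavelet="db4", level=4):
    """Multi-level wavelet decomposition of one ECG beat; the energy and
    standard deviation of each sub-band form a compact feature vector."""
    coeffs = pywt.wavedec(beat, wavelet, level=level)
    feats = []
    for band in coeffs:                      # approximation + details
        feats.append(np.sum(band ** 2))      # band energy
        feats.append(np.std(band))           # band spread
    return np.array(feats)

# Example with a synthetic beat. A real pipeline would first locate the
# R peak (e.g. with the Pan-Tompkins algorithm) and window around it.
t = np.linspace(0, 1, 256)
beat = np.exp(-((t - 0.5) ** 2) / 0.001) + 0.05 * np.random.randn(t.size)
print(beat_features(beat).shape)   # (2 * (level + 1),) = (10,)
```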
In Component-based Software (CBS) development, it is desirable to choose software components that provide all necessary functionalities and at the same time optimize certain non-functional attributes of the system (for example, system cost). In this paper we investigate the problem of selecting software components to optimize one or more non-functional attributes of a CBS. We approach the problem from the lexicographic multi-objective optimization perspective and develop a scheme that produces Pareto-optimal solutions. Furthermore, we show that the Component Selection Problem (CSP) can be solved in polynomial time if the components are connected by serial interfaces and all the objectives are to be minimized, whereas the corresponding maximization problem is NP-hard.
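Why the serial-interface minimization case is polynomial can be made concrete: with an additive objective over a serial chain, selection reduces to a shortest path through a layered graph, solvable by dynamic programming. The candidate components, costs and compatibility pairs below are invented for illustration.

```python
# Sketch: minimum-cost component selection over serial interfaces as a
# layered-graph shortest path (all data below is invented).
slots = [                      # one list of (name, cost) per serial slot
    [("parserA", 3.0), ("parserB", 5.0)],
    [("coreA", 8.0), ("coreB", 6.0)],
    [("uiA", 2.0), ("uiB", 4.0)],
]
compatible = {("parserA", "coreA"), ("parserA", "coreB"),
              ("parserB", "coreB"), ("coreA", "uiA"),
              ("coreB", "uiA"), ("coreB", "uiB")}

def select_min_cost(slots, compatible):
    # best[name] = (cheapest total cost up to this slot, chosen path)
    best = {name: (cost, [name]) for name, cost in slots[0]}
    for layer in slots[1:]:
        nxt = {}
        for name, cost in layer:
            feasible = [(c + cost, path + [name])
                        for prev, (c, path) in best.items()
                        if (prev, name) in compatible]
            if feasible:
                nxt[name] = min(feasible)
        best = nxt
    return min(best.values())

print(select_min_cost(slots, compatible))
# (11.0, ['parserA', 'coreB', 'uiA'])
```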
In this paper we present a framework for selecting the proper instructional strategy for a given teaching material based on its attributes. The new approach is based on a flexible design by means of generic rules. The framework was applied in an Intelligent Tutoring System that teaches Modern Standard Arabic to adult English-speaking learners with no prior knowledge of Arabic.
In this paper, we present the design of a parallel Sobel edge detection algorithm using Foster's methodology. The parallel algorithm is implemented using the MPI message passing library and a master/slave algorithm: every processor performs the same sequential algorithm but on a different part of the image. Experimental results conducted on a Beowulf cluster are presented to demonstrate the performance of the parallel algorithm. Keywords: Beowulf cluster, edge detection, MPI, parallel programming
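A minimal sketch of the master/slave row decomposition, written with mpi4py and SciPy's Sobel operator (the paper's own implementation details are not given here; this Python analogue also ignores the halo rows needed at chunk borders for exact results):

```python
# Sketch of master/slave parallel Sobel edge detection with mpi4py.
from mpi4py import MPI
import numpy as np
from scipy import ndimage

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:                                 # master: load and split image
    image = np.random.rand(512, 512)
    chunks = np.array_split(image, size, axis=0)
else:
    chunks = None

chunk = comm.scatter(chunks, root=0)          # each process gets a row band

gx = ndimage.sobel(chunk, axis=0)             # same sequential kernel,
gy = ndimage.sobel(chunk, axis=1)             # different part of the image
edges = np.hypot(gx, gy)

result = comm.gather(edges, root=0)           # master reassembles
if rank == 0:
    full = np.vstack(result)
    print(full.shape)                         # (512, 512)

# Run with, e.g.: mpiexec -n 4 python sobel_mpi.py
```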
The voluntary disclosure of sustainability reports by companies attracts wider stakeholder groups, but diversity in these reports poses a challenge to the users of the information and to regulators. This study appraises corporate sustainability reports against the GRI (Global Reporting Initiative) guidelines, the most widely accepted and used, across all industrial sectors. Text mining is adopted to carry out the initial analysis with a large sample of 2650 reports, and statistical analyses were performed for further investigation. The results indicate that the disclosures made by companies differ across industrial sectors. Multivariate Discriminant Analysis (MDA) shows that the environmental variable is the most significant factor contributing to the explanation of the sustainability reports.
In this paper, we propose a novel approach of using interactive virtual environment technology in vision restoration therapy for impairments caused by Traumatic Brain Injury. We call the new system the Interactive Visuotactile Virtual Environment, and it holds the promise of expanding the scope of existing rehabilitation techniques. Traditional vision rehabilitation methods are based on passive psychophysical training procedures and can last up to six months before any modest improvement can be seen in patients. A highly immersive and interactive virtual environment allows the patient to practice everyday activities such as object identification and object manipulation through the use of 3D motion-sensing handheld devices such as a data glove or the Nintendo Wiimote. Employing both perceptual and action components in the training procedures holds the promise of more efficient sensorimotor rehabilitation. Increased stimulation of the visual and sensorimotor areas of the brain should facilitate a comprehensive recovery of visuomotor function by exploiting the plasticity of the central nervous system. Integrated with a motion tracking system and an eye tracking device, the interactive virtual environment allows for the creation and manipulation of a wide variety of stimuli, as well as real-time recording of hand, eye and body movements and their coordination. The goal of the project is to design a cost-effective and efficient vision restoration system.
An integration algorithm is presented that conjugates a Method of Lines (MOL) strategy based on finite-difference space discretizations with a collocation strategy based on increasing-level dyadic grids. It shows potential both as a grid generation procedure and as a Partial Differential Equation (PDE) integration scheme. It copes satisfactorily with an example characterized by a steep travelling wave and with an example in which a steep shock forms, which demonstrates its versatility in dealing with different types of steep moving front problems exhibiting features like advection-diffusion, widely common in standard chemical process simulation models. Keywords: Partial Differential Equations, numerical methods, adaptive methods, collocation methods, dyadic grids
The basic idea addressed in this research is the development of a generic, dynamic and domain-independent natural language interface to databases. The approach consists of two phases: a configuration phase and an operation phase. The former builds a data synonym tree based on the database being used; the idea behind this tree is to match natural language words with database elements. The tree hierarchy contains the database tables, attributes, attribute descriptions and all possible synonyms for each description. The operation phase uses a syntax state table to extract the SQL components from the user's natural language request. As a result, the corresponding SQL statement is generated without the intervention of human experts.
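The synonym-tree matching step can be sketched with plain dictionaries; the table, columns and synonyms below are invented, and the operation phase is drastically simplified (a real system would also walk the syntax state table to recover WHERE-clause structure):

```python
# Sketch of the configuration-phase synonym tree and a much simplified
# operation phase (all names and synonyms are illustrative).
synonym_tree = {
    "employee": {                       # table
        "salary": ["salary", "pay", "wage", "earnings"],
        "name":   ["name", "called", "employee name"],
        "dept":   ["department", "dept", "division"],
    }
}

def words_to_sql(question):
    """Match question words against the synonym tree and assemble
    a SELECT statement for the matched table and columns."""
    q = question.lower()
    for table, columns in synonym_tree.items():
        selected = [col for col, syns in columns.items()
                    if any(s in q for s in syns)]
        if selected:
            return f"SELECT {', '.join(selected)} FROM {table};"
    return None

print(words_to_sql("What is the pay of each division?"))
# SELECT salary, dept FROM employee;
```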
The present work develops the analysis of two strategic maps, one based on the principles of Compensatory Fuzzy Logic (CFL) and the other on Organizational Culture. The research takes a quali-quantitative approach and studies the case of a software development company, using a technical procedure and a documentary base together with interviews and questionnaires. It concludes that strategic maps based on CFL and on Organizational Culture are robust methodologies that identify and prioritize strategic variables, and that there is an interrelationship between them in their consideration of important behavioral aspects. With this it was possible to analyze strategic aspects of companies in a more complex and realistic way.
The Resource Description Framework (RDF) Generator (RDFG) is a platform that generates RDF documents from any web page, using predefined models for each internet domain and a special web page classification system. RDFG is one of the SWF units aimed at standardizing researchers' efforts in the Semantic Web by classifying internet sites into domains and preparing a special RDF model for each domain. RDFG uses intelligent web methods to prepare RDF documents, such as an ontology-based semantic matching system to detect the type of a web page and a knowledge-based machine learning system to create the RDF documents accurately and according to the standard models. RDFG reduces the complexity of RDF modeling and supports the creation, sharing and reuse of web entities.
This paper outlines how information technology (IT) can help to drive business innovation and growth. Today innovation is a key to properly managing business growth from all angles. IT governance is responsible for managing and aligning IT with the business objectives; managing strategic demand through the projects portfolio or managing operational demand through the services portfolio. IT portfolios offer the possibility of finding new opportunities to make changes and improve through innovation, enabling savings in capital expenditure and the company’s IT operations staff time. In the last century, IT was considered as a new source of infinite possibilities and business success through innovation.
Nowadays, the Operating System (OS) isn't only the software that runs your computer. In the typical information-driven organization, the operating system is part of a much larger platform for applications and data that extends across the LAN, WAN and Internet. An OS cannot be an island unto itself; it must work with the rest of the enterprise. Enterprise-wide applications require an Enterprise Operating System (EOS). Enterprise operating systems have brought about an inevitable tendency for organizations to organize their information activities in a comprehensive way. In this respect, Enterprise Architecture (EA) has proven to be the leading option for the development and maintenance of enterprise operating systems. EA provides a thorough outline of the whole information system comprising an enterprise; to establish such an outline, a logical framework needs to be laid upon the entire information system. The Zachman Framework (ZF) has been widely accepted as a standard scheme for identifying and organizing the descriptive representations that have prominent roles in enterprise-wide system development. In this paper, we propose a framework based on ZF for enterprise operating systems. The presented framework helps developers to design and justify completely integrated business, IT and operating systems, which results in improved project success rates.
In general, MLP training uses a training set containing only positive examples, which may turn the neural network into an overconfident network for solving the problem. A simple solution to this problem is the introduction of negative examples into the training set; through this procedure, the network is prepared for cases it has not been trained for. Unfortunately, the number of negative examples that should be used in the training process has not yet been established in the literature. Consequently, the present article aims at finding a general mathematical pattern for training an MLP with negative examples. To that end, we used a regression analysis technique to analyze the data resulting from training three neural networks on three datasets: one for letter recognition, one for sonar data and one for the results of medical tests for diagnosing diabetes. The pattern was tested on a new database to confirm its validity.
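The experimental idea can be sketched as follows: train an MLP with varying ratios of negative examples and observe how well it rejects unseen inputs from outside the positive class. The synthetic data, ratio grid and network size are assumptions for illustration, not the article's datasets or settings.

```python
# Sketch: effect of the negative-example ratio on an MLP's rejection
# of unseen negatives (all data and hyperparameters are illustrative).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
pos = rng.normal(loc=2.0, scale=1.0, size=(500, 8))        # "known" class
neg_pool = rng.normal(loc=-2.0, scale=1.0, size=(2000, 8)) # counter-examples

for ratio in (0.25, 0.5, 1.0):             # negatives per positive
    n_neg = int(ratio * len(pos))
    X = np.vstack([pos, neg_pool[:n_neg]])
    y = np.array([1] * len(pos) + [0] * n_neg)
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                        random_state=0).fit(X, y)
    # Test on fresh negatives the network never saw during training:
    rejected = 1 - clf.predict(neg_pool[1000:]).mean()
    print(f"ratio={ratio:.2f}: rejects {rejected:.1%} of unseen negatives")
```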
The main purpose of the presented GPU benchmark is to generate complex polygonal mesh structures based on strange attractors with fractal structure. The attractors are created as 4D objects using quaternion algebra. The polygonal meshes can have different numbers of polygons because of the iterative application of the system. The complexity of each mesh provides grounds for evaluating GPU performance using multiple methods, such as ray tracing, anti-aliasing and anisotropic filtering. Our main goal is to develop a new, faster algorithm for generating 3D structures and to apply its complexity to GPU benchmarking. Keywords: benchmark, fractal, strange attractor, quaternion, marching cubes algorithm
In multiple-speaker environments such as teleconferences we observe a loss of intelligibility, particularly if the sound is monaural in nature. In this study, we exploit the "Cocktail Party Effect", whereby a person can isolate one sound above all others using sound localization and gender cues. To improve the clarity of speech, each speaker is assigned a direction using Head Related Transfer Functions (HRTFs), which creates an auditory map of the multiple conversations. A mixture of male and female voices is used to improve comprehension. We see a 6% improvement in cognition when using a male voice in a female-dominated environment and a 16% improvement in the reverse case. An improvement of 41% is observed when using sound localization with varying elevations. Finally, the improvement in cognition jumps to 71% when both elevations and azimuths are varied. Compared to our previous study, where only azimuths were used, combining both azimuths and elevations gives better results (57% vs. 71%).
The basic approaches to decision making and to modeling sustainable tourism development are reviewed. The dynamics of sustainable development are considered within Forrester's system dynamics framework. The multidimensionality of sustainable tourism development and the multicriteria issues of sustainable development are analyzed. Decision Support Systems (DSS) and Spatial Decision Support Systems (SDSS) are discussed as effective techniques for examining and visualizing the impacts of policies and sustainable tourism development strategies within an integrated and dynamic framework. Main modules that may be utilized for the integrated modeling of sustainable tourism development are proposed.
In this paper we present a benchmark tool called PI-ping that can be used to compare the real-time performance of operating systems. It uses the two types of processes that are common in operating systems: interactive tasks demanding low latencies and processes demanding high CPU utilization. Most operating systems have to perform well under both conditions, and the goal is to achieve the highest throughput while keeping latencies within a reasonable interval. PI-ping measures the latencies of an interactive process when the system is under heavy computational load. Using the PI-ping benchmark tool we are able to compare different operating systems, and we demonstrate its functionality using two very common operating systems, Linux and FreeBSD.
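The measurement principle can be illustrated in Python (the actual PI-ping tool is not reproduced here): a periodic "interactive" task records how much later than requested it wakes up while CPU-hog processes load every core.

```python
# Sketch of the PI-ping idea: wakeup latency of a periodic task
# measured under full CPU load (illustrative, not the real tool).
import time
import multiprocessing as mp

def cpu_hog(stop):
    while not stop.is_set():
        pass                                 # burn CPU

def measure_latency(period_s=0.01, samples=500):
    lat = []
    for _ in range(samples):
        t0 = time.monotonic()
        time.sleep(period_s)                 # ask to wake after period_s
        lat.append(time.monotonic() - t0 - period_s)
    return lat

if __name__ == "__main__":
    stop = mp.Event()
    hogs = [mp.Process(target=cpu_hog, args=(stop,))
            for _ in range(mp.cpu_count())]
    for h in hogs:
        h.start()
    try:
        lat = measure_latency()
        print(f"max wakeup latency under load: {max(lat)*1e3:.2f} ms")
    finally:
        stop.set()
        for h in hogs:
            h.join()
```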
We present a framework for the archetypes based engineering of domains, requirements and software (Archetypes-Based Software Development, ABD). An archetype is defined as a primordial object that occurs consistently and universally in business domains and in business software systems. An archetype pattern is a collaboration of archetypes. Archetypes and archetype patterns are used to capture conceptual information into domain specific models that are utilized by ABD. The focus of ABD is on software factories - family-based development artefacts (domain specific languages, patterns, frameworks, tools, micro processes, and others) that can be used to build the family members. We demonstrate the usage of ABD for developing laboratory information management system (LIMS) software for the Clinical and Biomedical Proteomics Group, at the Leeds Institute of Molecular Medicine, University of Leeds.
In this paper we apply a SEC-DED code to the cache level of a memory hierarchy. From the category of SEC-DED (Single Error Correction, Double Error Detection) codes we select the Hamming code. For the correction of single-bit errors we use a syndrome decoder, a syndrome generator and a check-bit generator circuit.
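The syndrome logic of an extended Hamming SEC-DED code can be demonstrated in software; the hardware circuits the paper describes implement the same XOR equations. This sketch uses the small extended Hamming(8,4) code (the paper's cache-word width would differ): the syndrome locates a single-bit error, and the overall parity bit tells a single error apart from a double error.

```python
# Sketch of SEC-DED encode/decode with extended Hamming(8,4).
def encode(d):                     # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    cw = [p1, p2, d[0], p3, d[1], d[2], d[3]]   # positions 1..7
    return cw + [sum(cw) % 2]                    # overall parity bit

def decode(cw):
    c = cw[:7]
    syndrome = ((c[0] ^ c[2] ^ c[4] ^ c[6])        # check for parity bit 1
                | (c[1] ^ c[2] ^ c[5] ^ c[6]) << 1 # parity bit 2
                | (c[3] ^ c[4] ^ c[5] ^ c[6]) << 2)# parity bit 4
    overall = sum(cw) % 2           # 0 if total parity is still even
    if syndrome and overall:        # single error: syndrome = position
        cw[syndrome - 1] ^= 1
        return cw, "single error corrected"
    if syndrome:                    # parity even but syndrome nonzero
        return cw, "double error detected"
    if overall:                     # only the parity bit itself flipped
        cw[7] ^= 1
        return cw, "parity bit corrected"
    return cw, "no error"

word = encode([1, 0, 1, 1])
word[4] ^= 1                        # inject a single-bit error
print(decode(word)[1])              # single error corrected
```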
There are many approaches proposed by the scientific community for the implementation and development of Wireless Sensor Networks (WSN). These approaches correspond to different areas of science, such as electronics, communications, computing, ubiquity and quality of service, among others. However, all are subject to the same constraints arising from the nature of WSN devices. The most common constraints of a WSN are energy consumption, the organization of network nodes, the reprogramming of sensor network tasks, the reliability of data transmission and the optimization of resources (memory and processing). In the artificial intelligence area, a distributed system approach with mobile intelligent agents has been proposed: an integration model of mobile intelligent agents within a wireless sensor network that addresses some of the WSN topology constraints presented above. However, the model was only tested on square topologies. The aim of this paper is therefore to evaluate the performance of this model on irregular topologies.
This paper shows the values of the flow ratio (FR) for the control of a double-stage absorption heat transformer. The main parameters of the heat pump system are defined as COP, FR and GTL. The control of the entire system is based on a new definition of FR, and the heat balance of the Double Stage Heat Transformer (DSHT) is used for the control. The mass flow is calculated by an HP VEE program, and a second program controls the mass flow through gear pumps connected to LabView. The results show an increment in the fraction of recovered energy. An example of oil distillation is used for the calculation: waste heat is added to the system at 70 °C, a Water-Carrol mixture is used in the DSHT, and the recovered energy is obtained in a second absorber at 128 °C under two scenarios.
Our goal is to develop an enforcement architecture for privacy policies over multiple online social networks, used to solve the problem of privacy protection when several social networks build permanent or temporary collaborations. Theoretically, this idea is practical, especially since more and more social networks tend to support the open source framework OpenSocial. But, as we know, different social network websites may have the same privacy policy settings built on different enforcement mechanisms, which causes problems: code would have to be written manually for both sides to make the privacy policy settings enforceable, a huge workload given the number of current social networks. We therefore propose a middleware that automatically generates the privacy protection components for permanent integrations or temporary interactions of social networks. This middleware provides functions such as collecting the privacy policy of each participant in the new collaboration, generating a standard policy model for each participant, and mapping all those standard policies to the different enforcement mechanisms of the participants.
This paper presents an estimation of distribution algorithm (EDA) for the extraction of sequential patterns from a database, using a probabilistic model based on graphs that represent the relations among the items forming a sequence. The model assigns probabilities among the items, allowing the model to be adjusted during the execution of the algorithm through the EDA evolution process, optimizing the generation of candidate solutions and extracting an optimized set of sequential patterns.
The goal of this paper is to present an information visualization application capable of opening and synchronizing information between two or more datasets. We have chosen this approach to address some of the limitations of various existing applications. The application uses multiple coordinated views and multiple simultaneous datasets. We highlight the configuration of the application layout by the user, including the flexibility to specify the number of data views and to associate different datasets with each visualization technique.
A previous neural network based on proximity values was developed using rectangular pavement images. However, the proximity value derived from a rectangular image was biased towards transverse cracking. By sectioning the rectangular image into a set of square sub-images, the neural network based on the proximity value becomes more robust and consistent in determining the crack type. This paper presents an improved neural network that determines the crack type from a pavement surface image based on square sub-images, compared with the neural network trained on rectangular pavement images. The advantage of using square sub-images is demonstrated with sample images of transverse, longitudinal and alligator cracking.
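The sectioning step itself is straightforward to sketch: a wide rectangular scan image is cut into non-overlapping square tiles, each of which then yields an unbiased proximity feature. The tile size below is an assumed parameter.

```python
# Sketch of sectioning a rectangular pavement image into square
# sub-images so directional features are not biased toward
# transverse cracks (tile size is illustrative).
import numpy as np

def to_square_tiles(image, tile):
    """Split a (H, W) image into non-overlapping (tile, tile) blocks,
    discarding any partial border blocks."""
    h, w = image.shape
    rows, cols = h // tile, w // tile
    trimmed = image[:rows * tile, :cols * tile]
    return (trimmed.reshape(rows, tile, cols, tile)
                   .swapaxes(1, 2)
                   .reshape(-1, tile, tile))

pavement = np.random.rand(480, 1280)     # wide rectangular scan image
tiles = to_square_tiles(pavement, 160)
print(tiles.shape)                       # (24, 160, 160): 3 x 8 squares
```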
Building Information Modeling (BIM) has obvious implications for the process of architectural design and construction at the present stage of technological development. However, BIM has rarely been properly assessed, and its benefits are often described in generic terms. In this paper we describe an experiment in which some of these benefits are identified from a comparison between two design processes for the same airport building, one run in an older CAD system and the other in a BIM-based approach. The practical advantages of BIM for airport design are considerable.
With a trend toward becoming more and more information based, enterprises constantly attempt to surpass each other's accomplishments by improving their information activities. In this respect, Enterprise Architecture (EA) has proven to be a fundamental concept for accomplishing this goal. Enterprise architecture provides a thorough outline of all enterprise applications and systems and their relationships to enterprise business goals. To establish such an outline, a logical framework needs to be laid upon the entire information system, called an Enterprise Architecture Framework (EAF). Among the various proposed EAFs, the Zachman Framework (ZF) has been widely accepted as a standard scheme for identifying and organizing the descriptive representations that play critical roles in enterprise management and development. One of the problems faced in using ZF is the lack of formal and verifiable models for its cells. In this paper, we propose a formal language based on Petri nets in order to obtain verifiable models for all cells in ZF. The presented method helps developers to validate and verify completely integrated business and IT systems, which results in improved effectiveness and efficiency of the enterprise itself.
In order to support the early stages of conceptual design, architects have for years used mockups (scaled physical models) or perspective drawings intended to predict the architectural ambience before its effective construction. This paper studies real-time interactive visualization, focused on one of the most important aspects of building space: natural light. However, the majority of currently existing physically-based algorithms were designed for the synthesis of static images and may not take into account how to rebuild the scene, in real time, when the user experiments with changing certain design properties. In this paper we show a possible solution to this problem.
This paper shows how shape grammars can be applied in the analysis of new types of architecture through a case study of the City of Music of Rio de Janeiro project by Christian Portzamparc. It aims to indicate how shape grammars can still be constructed from designs that were created with the purpose of avoiding standardization.
Architecture models and possible data flows for local and group datawarehouses are presented, together with some data processing models. The architecture models consist of several layers and the data flows between them. The chosen architecture of a datawarehouse depends on the data types and volumes of the source data, and it influences the analysis, data mining and reporting performed on the data from the DWH.
106. Building Information Modeling as a Tool for the Design of Airports 611
Byoung Jik Lee and Hosin "David" Lee
110. Architecture Models and Data Flows in Local and Group Datawarehouses 627
Felix A. Silva Junior and Neander Furtado Silva