Book (PDF available)

Innovations in Computing Sciences and Software Engineering

Abstract

Innovations in Computing Sciences and Software Engineering includes a set of rigorously reviewed world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of Computer Science, Software Engineering, Computer Engineering, and Systems Engineering and Sciences. Topics Covered: •Image and Pattern Recognition: Compression, Image Processing, Signal Processing Architectures, Signal Processing for Communication, Signal Processing Implementation, Speech Compression, and Video Coding Architectures. •Languages and Systems: Algorithms, Databases, Embedded Systems and Applications, File Systems and I/O, Geographical Information Systems, Kernel and OS Structures, Knowledge Based Systems, Modeling and Simulation, Object Based Software Engineering, Programming Languages, and Programming Models and Tools. •Parallel Processing: Distributed Scheduling, Multiprocessing, Real-time Systems, Simulation Modeling and Development, and Web Applications. •Signal and Image Processing: Content Based Video Retrieval, Character Recognition, Incremental Learning for Speech Recognition, Signal Processing Theory and Methods, and Vision-based Monitoring Systems. •Software and Systems: Activity-Based Software Estimation, Algorithms, Genetic Algorithms, Information Systems Security, Programming Languages, Software Protection Techniques, and User Interfaces. •Distributed Processing: Asynchronous Message Passing Systems, Heterogeneous Software Environments, Mobile Ad Hoc Networks, Resource Allocation, and Sensor Networks. •New Trends in Computing: Computers for People with Special Needs, Fuzzy Inference, Human Computer Interaction, Incremental Learning, Internet-based Computing Models, Machine Intelligence, and Natural Language.
Tarek Sobh · Khaled Elleithy (Editors)

Innovations in Computing Sciences and Software Engineering

Springer
Contents

Reviewers List .... xix

1. Recursive Projection Profiling for Text-Image Separation .... 1
   Shivsubramani Krishnamoorthy et al.
2. Risk in the Clouds?: Security Issues Facing Government Use of Cloud Computing .... 7
   David C. Wyld
3. Open Source Software (OSS) Adoption Framework for Local Environment and its Comparison .... 13
   U. Laila and S. F. A. Bukhari
4. Ubiquitous Data Management in a Personal Information Environment .... 17
   Atif Farid Mohammad
5. Semantics for the Asynchronous Communication in LIPS, a Language for Implementing Parallel/distributed Systems .... 23
   Amala VijayaSelvi Rajan et al.
6. Separation of Concerns in Teaching Software Engineering .... 29
   Izzat M. Alsmadi and Mahmoud Dieri
7. Student Model Based On Flexible Fuzzy Inference .... 39
   Dawod Kseibat et al.
8. PlanGraph: An Agent-Based Computational Model for Handling Vagueness in Human-GIS Communication of Spatial Concepts .... 45
   Hongmei Wang
9. Risk-Based Neuro-Grid Architecture for Multimodal Biometrics .... 51
   Sitalakshmi Venkataraman and Siddhivinayak Kulkarni
10. A SQL-Database Based Meta-CASE System and its Query Subsystem .... 57
    Erki Eessaar and Rünno Sgirka
11. An Intelligent Control System Based on Non-Invasive Man Machine Interaction .... 63
    Darius Drungilas et al.
12. A UML Profile for Developing Databases that Conform to The Third Manifesto .... 69
    Erki Eessaar
13. Investigation and Implementation of T-DMB Protocol in NCTUns Simulator .... 75
    Tatiana Zuyeva et al.
14. Empirical Analysis of Case-Editing Approaches for Numeric Prediction .... 79
    Michael A. Redmond and Timothy Highley
15. Towards a Transcription System of Sign Language for 3D Virtual Agents .... 85
    Wanessa Machado do Amaral and Jose Mario De Martino
16. Unbiased Statistics of a Constraint Satisfaction Problem – a Controlled-Bias Generator .... 91
    Denis Berthier
17. Factors that Influence the Productivity of Software Developers in a Developer View .... 99
    Edgy Paiva et al.
18. Algorithms for Maintaining a Consistent Knowledge Base in Distributed Multiagent Environments .... 105
    Stanislav Ustymenko and Daniel G. Schwartz
19. Formal Specifications for a Document Management Assistant .... 111
    Daniel G. Schwartz
20. Towards a Spatial-Temporal Processing Model .... 117
    Jonathan B. Lori
21. Structure, Context and Replication in a Spatial-Temporal Architecture .... 123
    Jonathan Lori
22. Service Oriented E-Government .... 129
    Margareth Stoll and Dietmar Laner
23. Fuzzy-rule-based Adaptive Resource Control for Information Sharing in P2P Networks .... 135
    Zhengping Wu and Hao Wu
24. Challenges In Web Information Retrieval .... 141
    Monika Arora et al.
25. An Invisible Text Watermarking Algorithm using Image Watermark .... 147
    Zunera Jalil and Anwar M. Mirza
26. A Framework for RFID Survivability Requirement Analysis and Specification .... 153
    Yanjun Zuo et al.
27. The State of Knowledge Management in Czech Companies .... 161
    P. Maresova and M. Hedvicakova
28. A Suitable Software Process Improvement Model for the UK Healthcare Industry .... 167
    Tien D. Nguyen et al.
29. Exploring User Acceptance of FOSS: The Role of the Age of the Users .... 173
    M. Dolores Gallego and Salvador Bueno
30. GFS Tuning Algorithm Using Fuzzimetric Arcs .... 177
    Issam Kouatli
31. Multi-step EMG Classification Algorithm for Human-Computer Interaction .... 183
    Peng Ren et al.
32. Affective Assessment of a Computer User through the Processing of the Pupil Diameter Signal .... 189
    Ying Gao et al.
33. MAC, A System for Automatically IPR Identification, Collection and Distribution .... 195
    Carlos Serrao
34. Testing Distributed ABS System with Fault Injection .... 201
    Dawid Trawczynski et al.
35. Learning Based Approach for Optimal Clustering of Distributed Program's Call Flow Graph .... 207
    Yousef Abofathi and Bager Zarei
36. Fuzzy Adaptive Swarm Optimization Algorithm for Discrete Environments .... 213
    M. Hadi Zahedi and M. Mehdi S. Haghighi
37. Project Management Software for Distributed Industrial Companies .... 221
    M. Dobrojevic et al.
38. How to Construct an Automated Warehouse Based on Colored Timed Petri Nets .... 227
    Fei Cheng and Shanjun He
39. Telecare and Social Link Solution for Ambient Assisted Living Using a Robot Companion with Visiophony .... 235
    Thibaut Varene et al.
40. Contextual Semantic: A Context-aware Approach for Semantic Web Based Data Extraction from Scientific Articles .... 241
    Deniss Kumlander
41. Motivating Company Personnel by Applying the Semi-self-organized Teams Principle .... 245
    Deniss Kumlander
42. Route Advising in a Dynamic Environment – A High-Tech Approach .... 249
    M. F. M. Firdhous et al.
43. Building Security System Based on Grid Computing To Convert and Store Media Files .... 255
    Hieu Nguyen Trung et al.
44. A Tool Supporting C code Parallelization .... 259
    Ilona Bluemke and Joanna Fugas
45. Extending OpenMP for Agent Based DSM on GRID .... 265
    Mahdi S. Haghighi et al.
46. Mashup-Based End User Interface for Fleet Monitoring .... 273
    M. Popa et al.
47. The Performance of Geothermal Field Modeling in Distributed Component Environment .... 279
    A. Piorkowski et al.
48. An Extension of Least Squares Methods for Smoothing Oscillation of Motion Predicting Function .... 285
    O. Starostenko et al.
49. Security of Virtualized Applications: Microsoft App-V and VMware ThinApp .... 291
    Michael Hoppe and Patrick Seeling
50. Noise Performance of a Finite Uniform Cascade of Two-Port Networks .... 297
    Shmuel Y. Miller
51. Evaluating Software Agent Quality: Measuring Social Ability and Autonomy .... 301
    Fernando Alonso et al.
52. An Approach to Measuring Software Quality Perception .... 307
    Radoslaw Hofman
53. Automatically Modeling Linguistic Categories in Spanish .... 313
    M. D. López De Luise et al.
54. Efficient Content-based Image Retrieval using Support Vector Machines for Feature Aggregation .... 319
    Ivica Dimitrovski et al.
55. The Holistic, Interactive and Persuasive Model to Facilitate Self-care of Patients with Diabetes .... 325
    Miguel Vargas-Lombardo et al.
56. Jawi Generator Software Using ARM Board Under Linux .... 331
    O. N. Shalasiah et al.
57. Efficient Comparison between Windows and Linux Platform Applicable in a Virtual Architectural Walkthrough Application .... 337
    P. Thubaasini et al.
58. Simulation-Based Stress Analysis for a 3D Modeled Humerus-Prosthesis Assembly .... 343
    S. Herle et al.
59. Chaos-Based Bit Planes Image Encryption .... 349
    Jiri Giesl et al.
60. FLEX: A Modular Software Architecture for Flight License Exam .... 355
    Taner Arsan et al.
61. Enabling and Integrating Distributed Web Resources for Efficient and Effective Discovery of Information on the Web .... 361
    Neeta Verma et al.
62. Translation from UML to Markov Model: A Performance Modeling Framework .... 365
    Razib Hayat Khan and Poul E. Heegaard
63. A Comparative Study of Protein Sequence Clustering Algorithms .... 373
    A. Sharaf Eldin et al.
64. OpenGL in Multi-User Web-Based Applications .... 379
    K. Szostek and A. Piorkowski
65. Testing Task Schedulers on Linux System .... 385
    Leonardo Jelenkovic et al.
66. Automatic Computer Overhead Line Design .... 391
    Lucie Nohacova and Karel Nohac
67. Building Test Cases through Model Driven Engineering .... 395
    Helaine Soitsa et al.
68. The Effects of Educational Multimedia for Scientific Signs in the Holy Quran in Improving the Creative Thinking Skills for Deaf Children .... 403
    Sumaya Abitsaleh et al.
69. Parallelization of Shape Function Generation for Hierarchical Tetrahedral Elements .... 409
    Sara E. McCaslin
70. Analysis of Moment Invariants on Image Scaling and Rotation .... 415
    Dongguang Li
71. A Novel Binarization Algorithm for Ballistics Firearm Identification .... 421
    Dongguang Li
72. A Schema Classification Scheme for Multilevel Databases .... 427
    Tzong-An Su and Hong-Ju Lu
73. Memory Leak Sabotages System Performance .... 433
    Nagm Mohamed
74. Writer Identification Using Inexpensive Signal Processing Techniques .... 437
    Serguei A. Mokhov et al.
75. Software Artifacts Extraction for Program Comprehension .... 443
    Ghulam Rasool and Ilka Philippow
76. Model-Driven Engineering Support for Building C# Applications .... 449
    Anna Derezinska and Przemyslaw Oltarzewski
77. Early Abnormal Overload Detection and the Solution on Content Delivery Network .... 455
    Cam Nguyen Tan et al.
78. ECG Feature Extraction using Time Frequency Analysis .... 461
    Mahesh A Nair
79. Optimal Component Selection for Component-Based Systems .... 467
    Muhammad Ali Khan and Sajjad Mahmood
80. Domain-based Teaching Strategy for Intelligent Tutoring System Based on Generic Rules .... 473
    Dawod Kseibat et al.
81. Parallelization of Edge Detection Algorithm using MPI on Beowulf Cluster .... 477
    Nazleeni Haron et al.
82. Teaching Physical Based Animation via OpenGL Slides .... 483
    Miao Song et al.
83. Appraising the Corporate Sustainability Reports – Text Mining and Multi-Discriminatory Analysis .... 489
    J. R. Modapothala et al.
84. A Proposed Treatment for Visual Field Loss caused by Traumatic Brain Injury using Interactive Visuotactile Virtual Environment .... 495
    Attila J. Farkas et al.
85. Adaptive Collocation Methods for the Solution of Partial Differential Equations .... 499
    Paulo Brito and Antonio Portugal
86. Educational Virtual Reality through a Multiview Autostereoscopic 3D Display .... 505
    Emiliyan G. Petkov
87. An Approach for Developing Natural Language Interface to Databases Using Data Synonyms Tree and Syntax State Table .... 509
    Safwan Shatnawi and Rajeh Khamis
88. Analysis of Strategic Maps for a Company in the Software Development Sector .... 515
    Marisa de Camargo Silveira et al.
89. The RDF Generator (RDFG) – First Unit in the Semantic Web Framework (SWF) .... 523
    Ahmed Nada and Badie Sartawi
90. Information Technology to Help Drive Business Innovation and Growth .... 527
    Igor Aguilar Alonso et al.
91. A Framework for Enterprise Operating Systems Based on Zachman Framework .... 533
    S. Shervin Ostadzadeh and Amir Masoud Rahmani
92. A Model for Determining the Number of Negative Examples Used in Training a MLP .... 537
    Cosmin Cernazanu-Glavan and Stefan Holban
93. GPU Benchmarks Based On Strange Attractors .... 543
    Tomas Podoba et al.
94. Effect of Gender and Sound Spatialization on Speech Intelligibility in Multiple Speaker Environment .... 547
    M. Joshi et al.
95. Modeling Tourism Sustainable Development .... 551
    O. A. Shcherbina and E. A. Shembeleva
96. Pi-ping – Benchmark Tool for Testing Latencies and Throughput in Operating Systems .... 557
    J. Abaffy and T. Krajcovic
97. Towards Archetypes-Based Software Development .... 561
    Gunnar Piho et al.
98. Dependability Aspects Regarding the Cache Level of a Memory Hierarchy Using Hamming Codes .... 567
    O. Novac et al.
99. Performance Evaluation of an Intelligent Agents Based Model within Irregular WSN Topologies .... 571
    Alberto Piedrahita Ospina et al.
100. Double Stage Heat Transformer Controlled by Flow Ratio .... 577
     S. Silva-Sotelo et al.
101. Enforcement of Privacy Policies over Multiple Online Social Networks for Collaborative Activities .... 583
     Zhengping Wu and Lifeng Wang
102. An Estimation of Distribution Algorithms Applied to Sequence Pattern Mining .... 589
     Paulo Igor A. Godinho et al.
103. TLATOA COMMUNICATOR: A Framework to Create Task-Independent Conversational Systems .... 595
     D. Perez and I. Kirschning
104. Using Multiple Datasets in Information Visualization Tool .... 601
     Rodrigo Augusto de Moraes Lourenço et al.
105. Improved Crack Type Classification Neural Network based on Square Sub-images of Pavement Surface .... 607
     Byoung Jik Lee and Hosin "David" Lee
106. Building Information Modeling as a Tool for the Design of Airports .... 611
     Julio Tollendal Gomes Ribeiro et al.
107. A Petri-Nets Based Unified Modeling Approach for Zachman Framework Cells .... 615
     S. Shervin Ostadzadeh and Mohammad Ali Nekoui
108. From Perspectiva Artificialis to Cyberspace: Game-Engine and the Interactive Visualization of Natural Light in the Interior of the Building .... 619
     Evangelos Dimitrios Christakou et al.
109. Computational Shape Grammars and Non-Standardization: a Case Study on the City of Music of Rio de Janeiro .... 623
     Felix A. Silva Junior and Neander Furtado Silva
110. Architecture Models and Data Flows in Local and Group Datawarehouses .... 627
     R.M. Bogza et al.

Index .... 633

Chapter Abstracts

This paper presents an efficient and very simple method for separating text characters from graphical images in a given document image, based on Recursive Projection Profiling (RPP) of the document image. The algorithm pushes the projection profiling method [4] [6] close to its limits, extracting nearly everything the technique can offer. The projection profile reveals the empty space along the horizontal and vertical axes, exposing the gaps between characters and images. The algorithm proved efficient, accurate and simple. Some exceptional cases were encountered owing to the drawbacks of projection profiling, but they were handled well with simple heuristics, resulting in a very efficient method for text-image separation.
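The projection-profile step underlying this kind of text-image separation can be sketched in a few lines. This is a minimal illustration (not the authors' implementation): it profiles a binary image along both axes and reports the blank runs that a recursive splitting pass would then cut along.

```python
def projection_profiles(image):
    """Row and column ink counts of a binary image (1 = ink, 0 = blank)."""
    rows = [sum(row) for row in image]
    cols = [sum(col) for col in zip(*image)]
    return rows, cols

def blank_runs(profile):
    """Index ranges where the profile is zero, i.e. candidate cut positions."""
    runs, start = [], None
    for i, v in enumerate(profile):
        if v == 0 and start is None:
            start = i
        elif v != 0 and start is not None:
            runs.append((start, i - 1))
            start = None
    if start is not None:
        runs.append((start, len(profile) - 1))
    return runs

# Two "characters" separated by a blank column.
img = [
    [1, 0, 1],
    [1, 0, 1],
]
rows, cols = projection_profiles(img)
print(cols)              # [2, 0, 2]
print(blank_runs(cols))  # [(1, 1)]
```

Recursing on each segment between blank runs, alternating axes, yields the separation the chapter describes.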
Cloud computing is poised to become one of the most important and fundamental shifts in how computing is consumed and used. Forecasts show that government will play a lead role in adopting cloud computing for data storage, applications, and processing power, as IT executives seek to maximize their returns on limited procurement budgets in these challenging economic times. After an overview of the cloud computing concept, this article explores the security issues facing public sector use of cloud computing and weighs the risks and benefits of shifting to cloud-based models. It concludes with an analysis of the challenges that lie ahead for government use of cloud resources.
According to the Business Software Alliance (BSA), Pakistan ranks among the top 10 countries with the highest piracy rates [1]. To overcome the problem of piracy, local Information Technology (IT) companies are willing to migrate towards Open Source Software (OSS). For this reason, the need for a framework/model for OSS adoption has become more pronounced. Research on the adoption of IT innovations has commonly drawn on innovation adoption theory. However, over time some weaknesses have been identified in the theory, and it has been realized that the factors affecting the adoption of OSS vary from country to country. The objective of this research is to provide a framework for OSS adoption in the local environment and then compare it with the existing framework developed for OSS adoption in other, more advanced countries. This paper proposes a framework for understanding the relevant strategic issues and also highlights problems, restrictions and other factors that prevent organizations from adopting OSS. A factor-based comparison of the proposed framework with the existing framework is provided in this research.
This paper presents novel research on the Personal Information Environment (PIE), a relatively new field to be explored. A PIE is a self-managing pervasive environment. It contains an individual's personal pervasive information associated with the user's related or non-related contextual environments. Contexts are vitally important because they control, influence and affect everything within them by dominating their pervasive content(s). This paper shows in depth how a Personal Information Environment is achieved: it deals with a user's devices, which are to be spontaneous and readily self-manageable on an autonomic basis. The paper presents an actual implementation of pervasive data management for a PIE user, covering the appending and updating of PIE data from the last device the user used to other PIE devices for further processing and storage needs. Data recharging is utilized to transmit and receive data among PIE devices.
This paper presents the operational semantics of the message passing system for a distributed language called LIPS. The message passing system is based on a virtual machine called AMPS (Asynchronous Message Passing System), designed around a data structure that is portable and can work with any distributed language. The operational semantics specifying the behaviour of this system uses structured operational semantics to reveal the intermediate steps, which helps with the analysis of its behaviour. We are able to combine this with the big-step semantics that specifies the computational part of the language to produce a cohesive semantics for the language as a whole.
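The asynchronous style being formalized here, where a send returns immediately and delivery is decoupled from the sender, can be illustrated with a toy mailbox. All names and structure below are our own invention for illustration, far simpler than the actual AMPS design:

```python
import queue
import threading

class Mailbox:
    """Toy asynchronous channel: send() never blocks, and a separate
    receiver thread picks messages up later, decoupling the two sides."""
    def __init__(self):
        self._q = queue.Queue()

    def send(self, msg):
        # Asynchronous send: enqueue and return immediately.
        self._q.put(msg)

    def receive(self, timeout=5):
        # Blocks until a message arrives (or timeout expires).
        return self._q.get(timeout=timeout)

box = Mailbox()
results = []
receiver = threading.Thread(target=lambda: results.append(box.receive()))
receiver.start()
box.send("hello")   # sender proceeds without waiting for the receiver
receiver.join()
print(results)      # ['hello']
```

An operational semantics for such a system describes exactly these intermediate states: message enqueued, in transit, delivered.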
Software Engineering is one of the newer subjects in research and education. Instructors and textbooks in this field lack a common ground on which subjects should be covered in teaching introductory or advanced courses in the area. In this paper, a proposed ontology for software engineering education is formulated. This ontology divides software engineering projects and study into different perspectives: projects, products, people, process and tools. Deeper levels of abstraction of these fields can be described at levels that depend on the type or level of the course being taught. The goal of this separation of concerns is to organize a software engineering project into smaller, manageable parts that are easy to understand and identify. It should reduce complexity and improve clarity; this concept is at the core of software engineering. The 4Ps concerns both overlap and remain distinct, and this research tries to address both sides. Concepts such as ontology, abstraction, modeling and views or separation of concerns (which we apply here) always involve some sort of abstraction or focus, whose goal is to draw a better image or understanding of the problem. In abstraction or modeling, for example, when we model the students of a university in a class, we list only relevant properties; many student properties are ignored because they are irrelevant to the domain. The weight, height, and color of the student are examples of properties that would not be included in the class. In the same manner, the goal of the separation of concerns in software engineering projects is to improve understandability and consider only relevant properties. We also hope that the separation of concerns will help software engineering students better understand the large number of modeling and terminology concepts.
In this paper we present the design of a student model based on a generic fuzzy inference design. The membership functions and the rules of the fuzzy inference can be fine-tuned by the teacher during the learning process (at run time) to suit pedagogical needs, creating a more flexible environment. The design is used to represent the learner's performance. In order to test the human-computer interaction of the system, a prototype was developed with limited teaching materials. Interaction with this first prototype demonstrated the effectiveness of decision making using fuzzy inference.
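The kind of tunable fuzzy assessment described can be given a rough shape in code. The membership functions and labels below are invented for the example (not taken from the chapter); the point is that a teacher could reshape the breakpoints at run time without touching the inference logic:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def assess(score):
    """Map a 0-100 quiz score to fuzzy performance labels.
    The breakpoints are illustrative and could be tuned by a teacher."""
    return {
        "weak":    tri(score, -1, 0, 50),
        "average": tri(score, 25, 50, 75),
        "strong":  tri(score, 50, 100, 101),
    }

m = assess(60)
print(max(m, key=m.get))  # 'average'
```

A score of 60 is 0.6 "average" and 0.2 "strong" at once, which is precisely the graded judgment a crisp threshold cannot express.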
A fundamental challenge in developing a usable conversational interface for Geographic Information Systems (GIS) is the effective communication of spatial concepts in natural language, whose meanings are commonly vague. This paper presents the design of an agent-based computational model, PlanGraph. This model helps the GIS keep track of the dynamic human-GIS communication context and enables the GIS to understand the meaning of a vague spatial concept under the constraints of the dynamic context.
Meta-CASE systems simplify the creation of CASE (Computer Aided System Engineering) systems. In this paper, we present a meta-CASE system that provides a web-based user interface and uses an object-relational database management system (ORDBMS) as its basis. The use of an ORDBMS allows us to integrate the different parts of the system and simplifies the creation of meta-CASE and CASE systems. ORDBMSs provide a powerful query mechanism. The proposed system allows developers to use queries to evaluate and gradually improve artifacts and to calculate the values of software measures. We illustrate the use of the system with the SimpleM modeling language and discuss the use of SQL in the context of queries about artifacts. We have created a prototype of the meta-CASE system using the PostgreSQL™ ORDBMS and the PHP scripting language.
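The idea of computing a software measure as a query over stored artifacts can be pictured with a toy relational example. The schema and the measure here are hypothetical (the actual system uses PostgreSQL and its own, richer schema); SQLite merely stands in to show the pattern:

```python
import sqlite3

# Hypothetical artifact table: model elements belonging to diagrams.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE element (id INTEGER, diagram TEXT, kind TEXT)")
con.executemany("INSERT INTO element VALUES (?, ?, ?)", [
    (1, "d1", "state"), (2, "d1", "state"), (3, "d1", "transition"),
    (4, "d2", "state"),
])

# A simple "software measure" expressed as a query: elements per diagram.
rows = con.execute(
    "SELECT diagram, COUNT(*) FROM element GROUP BY diagram ORDER BY diagram"
).fetchall()
print(rows)  # [('d1', 3), ('d2', 1)]
```

Because the artifacts live in ordinary tables, any measure that can be phrased in SQL becomes available without extending the tool itself.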
The Third Manifesto (TTM) presents the principles of a relational database language that is free of the deficiencies and ambiguities of SQL. There are database management systems that are created according to TTM. Developers need tools that support the development of databases using these database management systems. UML is a widely used visual modeling language. It provides a built-in extension mechanism that makes it possible to extend UML by creating profiles. In this paper, we introduce a UML profile for designing databases that correspond to the rules of TTM. We created the first version of the profile by translating existing profiles for SQL database design. After that, we extended and improved the profile. We implemented the profile using the UML CASE system StarUML™. We present an example of using the new profile and, in addition, describe problems that occurred during the profile's development.
Investigation of the T-DMB protocol required us to create a simulation model. The NCTUns simulator, which is open source software and allows the addition of new protocols, was chosen for the implementation. This is one of the first steps of the research process. Here we give a brief overview of the T-DMB (DAB) system, describe the proposed simulation model, and discuss problems we met during the work. Keywords: T-DMB, Digital Radio, NCTUns
One important aspect of Case-Based Reasoning (CBR) is Case Selection or Editing – the selection of cases for inclusion in (or removal from) a case base. This can be motivated by space considerations or by quality considerations. One of the advantages of CBR is that it is equally useful for boolean, nominal, ordinal, and numeric prediction tasks. However, many case selection research efforts have focused on domains with nominal or boolean predictions, and most case selection methods have relied on such problem structure. In this paper, we present details of a systematic sequence of experiments with variations on CBR case selection. In this project, the emphasis has been on case quality – an attempt to filter out cases that may be noisy or idiosyncratic and are not good for future prediction. Our results indicate that case selection can significantly increase the percentage of correct predictions at the expense of an increased risk of poor predictions in less common cases.
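One simple noise-filtering flavor of case editing for numeric prediction (our own illustration, not the chapter's actual algorithms) drops a case when its target value disagrees too strongly with the median of its nearest neighbors:

```python
from statistics import median

def edit_cases(cases, k=3, tol=10.0):
    """cases: list of (x, y) pairs. Drop a case when its y value strays
    more than tol from the median y of its k nearest neighbors by x."""
    kept = []
    for i, (x, y) in enumerate(cases):
        nn = sorted((c for j, c in enumerate(cases) if j != i),
                    key=lambda c: abs(c[0] - x))[:k]
        if abs(y - median(c[1] for c in nn)) <= tol:
            kept.append((x, y))
    return kept

cases = [(1, 10), (2, 12), (3, 11), (4, 90), (5, 13)]  # (4, 90) looks noisy
print(edit_cases(cases))  # [(1, 10), (2, 12), (3, 11), (5, 13)]
```

The median (rather than the mean) keeps a single outlier from dragging its neighbors out of the edited case base along with it, which mirrors the quality-versus-coverage trade-off the abstract describes.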
Accessibility is a growing concern in computer science. Since virtual information is mostly presented visually, it may seem that access for deaf people is not an issue. However, for prelingually deaf individuals – those who have been deaf since before acquiring and formally learning a language – written information is often less accessible than if it were presented in signing. Further, for this community, signing is the language of choice, and reading text in a spoken language is akin to using a foreign language. Sign language uses gestures and facial expressions and is widely used by deaf communities. To enable the efficient production of signed content in virtual environments, it is necessary to make written records of signs. Transcription systems have been developed to describe sign languages in written form, but these systems have limitations. Since they were not originally designed with computer animation in mind, the recognition and reproduction of signs in these systems is in general an easy task only for those who know the system deeply. The aim of this work is to develop a transcription system for providing signed content in virtual environments. To animate a virtual avatar, a transcription system requires sufficiently explicit information, such as movement speed, sign concatenation, the sequence of each hold-and-movement, and facial expressions, in order to articulate close to reality. Although many important studies of sign languages have been published, the transcription problem remains a challenge. A notation to describe, store and play signed content in virtual environments thus offers a multidisciplinary study and research tool, which may help linguistic studies understand the structure and grammar of sign languages.
We show that estimating the complexity (mean and distribution) of the instances of a fixed-size Constraint Satisfaction Problem (CSP) can be very hard. We deal with the two main aspects of the problem: defining a measure of complexity and generating random unbiased instances. For the first problem, we rely on a general framework and a measure of complexity we presented at CISSE08. For the generation problem, we restrict our analysis to the Sudoku example and provide a solution that also explains why the problem is so difficult.
Measuring and improving the productivity of software developers is one of the greatest challenges faced by software development companies. Therefore, to help these companies identify possible causes that interfere with the productivity of their teams, we present in this paper a list of 32 factors, extracted from the literature, that influence the productivity of developers. To obtain a ranking of these factors, we administered a questionnaire to developers. In this work, we present the results: the factors that have the greatest positive and negative influence on productivity, the factors with no influence, and the most important factors and what influences them. Finally, we present a comparison with the results obtained from the literature.
In this paper, we design algorithms for a system that allows Semantic Web agents to reason within what has come to be known as the Web of Trust. We integrate reasoning about belief and trust, so agents can reason about information from different sources and deal with contradictions. Software agents interact to support users who publish, share and search for documents in a distributed repository. Each agent maintains an individualized topic taxonomy for the user it represents, updating it with information obtained from other agents. Additionally, an agent maintains and updates trust relationships with other agents. When new information leads to a contradiction, the agent performs a belief revision process informed by a degree of belief in a statement and the degree of trust an agent has for the information source. The system described has several key characteristics. First, we define a formal language with well-defined semantics within which an agent can express the relevant conditions of belief and trust, and a set of inference rules. The language uses symbolic labels for belief and trust intervals to facilitate expressing inexact statements about subjective epistemic states. Second, an agent’s belief set at a given point in time is modeled using a Dynamic Reasoning System (DRS). This allows the agent’s knowledge acquisition and belief revision processes to be expressed as activities that take place in time. Third, we explicitly describe reasoning processes, creating algorithms for acquiring new information and for belief revision.
The concept of a dynamic reasoning system (DRS) provides a general framework for modeling the reasoning processes of a mechanical agent, to the extent that those processes follow the rules of some well-defined logic. It amounts to an adaptation of the classical notion of a formal logical system that explicitly portrays reasoning as an activity that takes place in time. Inference rule applications occur in discrete time steps, and, at any given point in time, the derivation path comprises the agent’s belief set as of that time. Such systems may harbor inconsistencies, but these do not become known to the agent until a contradictory assertion appears in the derivation path. When this occurs one invokes a Doyle-like reason maintenance process to remove the inconsistency, in effect, disbelieving some assertions that were formerly believed. The notion of a DRS also includes an extralogical control mechanism that guides the reasoning process. This reflects the agent’s goal or purpose and is context dependent. This paper lays out the formal definition of a DRS and illustrates it with the case of ordinary first-order predicate calculus, together with a control mechanism suitable for reasoning about taxonomic classifications for documents in a library. As such, this particular DRS comprises formal specifications for an agent that serves as a document management assistant.
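A toy rendering of the DRS idea, with beliefs entering a derivation path in discrete time steps and a reason-maintenance step firing when a contradictory assertion appears, can look like this. All names are ours and the retraction policy (disbelieve the older assertion) is one crude stand-in for the Doyle-like process the chapter formalizes:

```python
class TinyDRS:
    """Beliefs are (time, statement) pairs; 'not:P' contradicts 'P'.
    On contradiction, disbelieve the older of the two assertions."""
    def __init__(self):
        self.path = []   # derivation path, in temporal order
        self.clock = 0

    def assert_(self, stmt):
        self.clock += 1  # inference steps occur in discrete time
        neg = stmt[4:] if stmt.startswith("not:") else "not:" + stmt
        clash = [b for b in self.path if b[1] == neg]
        if clash:        # reason maintenance: drop the older belief
            self.path.remove(min(clash))
        self.path.append((self.clock, stmt))

    def believes(self, stmt):
        return any(s == stmt for _, s in self.path)

drs = TinyDRS()
drs.assert_("bird(tweety)")
drs.assert_("flies(tweety)")
drs.assert_("not:flies(tweety)")          # contradiction enters the path
print(drs.believes("flies(tweety)"))      # False
print(drs.believes("not:flies(tweety)"))  # True
```

The inconsistency is harmless until the contradictory pair actually meets in the path, at which point exactly one of the two assertions survives, which is the behavior the formal definition captures with far more care.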
This paper discusses an architecture for creating systems that need to express complex models of real-world entities, especially those that exist in hierarchical and composite structures. These models need to be persisted, typically in a database system. The models also have a strong orthogonal requirement to support representation and reasoning over time.
This paper examines some general aspects of partitioning software architecture and the structuring of complex computing systems. It relates these topics in terms of the continued development of a generalized processing model for spatial-temporal processing. Data partitioning across several copies of a generic processing stack is used to implement horizontal scaling by reducing search space and enabling parallel processing. Temporal partitioning is used to provide fast response to certain types of queries and in quickly establishing initial context when using the system.
Due to various directives, the growing demand for citizen orientation, improved service quality, effectiveness, efficiency, transparency, and the reduction of costs and administrative burden, public administrations increasingly apply management tools and IT for continual service development and sustainable citizen satisfaction. Public administrations therefore implement more and more standards-based management systems, such as quality management (ISO 9001), environmental management (ISO 14001) or others. In this situation we used, in different case studies, a holistic administration management model adapted to the administration as the basis for e-government: to analyze stakeholder requirements and to integrate, harmonize and optimize services, processes, data, directives, concepts and forms. In these case studies the developed and consistently implemented holistic administration management model has promoted, over several years, service effectiveness, citizen satisfaction, efficiency, cost reduction, shorter initial training periods for new collaborators, employee involvement in sustainable citizen-oriented service improvement, and organizational development.
With more and more peer-to-peer (P2P) technologies available for online collaboration and information sharing, people can launch more and more collaborative work in online social networks with friends, colleagues, and even strangers. Without face-to-face interactions, the question of who can be trusted and shared information with becomes a big concern for users of these online social networks. This paper introduces an adaptive control service that uses fuzzy logic for preference definition in P2P information sharing control, and designs a novel decision-making mechanism that uses formal fuzzy rules and reasoning to adjust P2P information sharing status according to individual users' preferences. Applying this adaptive control service to different information sharing environments shows that it can provide convenient and accurate P2P information sharing control for individual users in P2P networks. Keywords: adaptive resource control, fuzzy logic, P2P technology, information sharing, collaborative social network.
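The abstract above describes fuzzy rules that map a user's preferences to sharing decisions. As a hedged illustration only (the rule base, the triangular membership shapes, and the `sharing_level` function below are hypothetical assumptions, not the paper's actual mechanism), one fuzzy inference step of this kind might look like:

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def sharing_level(trust):
    """Map a peer's trust score in [0, 1] to a sharing level in [0, 1]
    using three hypothetical rules (low/medium/high trust)."""
    # Rule firing strengths from the membership functions
    low = tri(trust, -0.5, 0.0, 0.5)
    med = tri(trust, 0.0, 0.5, 1.0)
    high = tri(trust, 0.5, 1.0, 1.5)
    # Rule consequents: deny = 0.0, partial = 0.5, full = 1.0;
    # weighted-average defuzzification
    num = low * 0.0 + med * 0.5 + high * 1.0
    den = low + med + high
    return num / den if den else 0.0
```

A higher trust score smoothly increases the sharing level, which is the kind of graded, preference-driven control the abstract attributes to the fuzzy reasoning mechanism.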
The major challenge in information access is the richness of the data available for retrieval, which has driven the evolution of principled approaches and strategies for searching. Search has become the leading paradigm for finding information on the World Wide Web. In building a successful web retrieval search engine model, a number of challenges arise at different levels, where techniques such as Usenet analysis and support vector machines are employed to significant effect. The present investigation explores a number of problems related to finding information on the web and identifies the level at which each occurs. This paper examines these issues by applying different methods such as web graph analysis, the retrieval and analysis of newsgroup postings, and statistical methods for inferring meaning in text. We also discuss how one can gain control over the vast amounts of data on the web by addressing these problems in innovative ways that can greatly improve on standard approaches. The proposed model thus assists users in finding the existing data they need. The developed information retrieval model provides access to information available in various modes and media formats, facilitating users in retrieving relevant and comprehensive information efficiently and effectively according to their requirements. This paper also discusses the parameters responsible for efficient searching, which can be ranked as more or less important based on the available inputs; the important parameters can then be addressed in future extensions or developments of search engines.
Copyright protection of digital content is essential in today's digital world, with efficient communication media such as the Internet. Text is the dominant part of Internet content, yet very few techniques are available for text protection. This paper presents a novel algorithm for the protection of plain text, which embeds the logo image of the copyright owner in the text; this logo can later be extracted from the text to prove ownership. The algorithm is robust against content-preserving modifications and, at the same time, is capable of detecting malicious tampering. Experimental results demonstrate the effectiveness of the algorithm against tampering attacks by calculating normalized Hamming distances. The results are also compared with a recent work in this domain.
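The normalized Hamming distance used above to quantify tampering is simply the fraction of watermark bits that differ between the embedded and the extracted logo bit strings; a minimal sketch:

```python
def normalized_hamming(a, b):
    """Fraction of differing bits between the embedded and extracted
    watermark bit strings; 0.0 means perfect extraction, 1.0 means
    every bit flipped."""
    if len(a) != len(b):
        raise ValueError("bit strings must have equal length")
    return sum(x != y for x, y in zip(a, b)) / len(a)
```

A low distance after a content-preserving edit, and a high distance after malicious tampering, is the behavior the abstract's experiments measure.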
Many industries are becoming dependent on Radio Frequency Identification (RFID) technology for inventory management and asset tracking. The data collected about tagged objects through RFID is used in various high-level business operations. The RFID system should hence be highly available, reliable, dependable, and secure. In addition, the system should be able to resist attacks and perform recovery in case of security incidents. Together these requirements give rise to the notion of a survivable RFID system. The main goal of this paper is to analyze and specify the requirements for an RFID system to become survivable. These requirements, if implemented, can help the system resist devastating attacks and recover quickly from damage. This paper proposes techniques and approaches for RFID survivability requirements analysis and specification. From the perspective of system acquisition and engineering, survivability requirements analysis is the important first step in survivability specification, compliance formulation, and proof verification.
In the globalised world, the Czech economy faces many challenges brought by the processes of integration. The crucial factors for companies that want to succeed in global competition are knowledge and the ability to use that knowledge in the best possible way. The purpose of this work is to present the results of a questionnaire survey on the topic "Research of the state of knowledge management in companies in the Czech Republic", carried out in spring 2009 in cooperation between the University of Hradec Králové and the consulting company Per Partes Consulting, Ltd., under the patronage of the European Union.
Over recent years, the UK healthcare sector has been the prime focus of many reports and industrial surveys, particularly in the field of software development and management issues. This signals the growing importance of concerns regarding quality issues in the healthcare domain. In response, a new tailored healthcare Software Process Improvement (SPI) model is proposed, which takes into consideration both signals from the industry and insights from the literature. This paper discusses and outlines the development of a new software process assessment and improvement model based on the ISO/IEC 15504-5 model. The proposed model will provide the healthcare sector with domain-specific process practices that address current development concerns, standards compliance, and the quality dimension requirements of this domain.
Evolutionary learning and tuning mechanisms for fuzzy systems are a main concern of researchers in the field. The final optimized performance of a fuzzy system depends on its ability to find the best optimized rule set(s) as well as optimized fuzzy variable definitions. This paper proposes a mechanism for the selection and optimization of fuzzy variables, termed "Fuzzimetric Arcs", and then discusses how this mechanism can become a standard for selecting and optimizing fuzzy set shapes to tune the performance of genetic fuzzy systems (GFS). A genetic algorithm is the technique that can be utilized to alter or modify the initial shape of fuzzy sets using its two main operators, crossover and mutation. Optimization of the rule set(s) depends mainly on the measurement of a fitness factor and the level of deviation from it.
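As a hedged sketch of the kind of genetic tuning the abstract describes (the representation of a fuzzy set as triangle vertices `(a, b, c)`, the fitness function, and all parameter values below are illustrative assumptions, not the paper's "Fuzzimetric Arcs" mechanism), crossover and mutation over fuzzy set shapes might be implemented as:

```python
import random

def crossover(p1, p2):
    """Blend crossover: average the (a, b, c) vertices of two triangular sets."""
    return tuple((x + y) / 2 for x, y in zip(p1, p2))

def mutate(p, sigma=0.05, rng=random):
    """Perturb one vertex, then re-sort so a <= b <= c stays valid."""
    q = list(p)
    q[rng.randrange(3)] += rng.gauss(0, sigma)
    return tuple(sorted(q))

def fitness(p, target):
    """Negative squared distance to a hypothetical target shape."""
    return -sum((x - y) ** 2 for x, y in zip(p, target))

def evolve(pop, target, generations=50, rng=random):
    """Elitist GA loop: keep the best half, breed the rest."""
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, target), reverse=True)
        elite = pop[: len(pop) // 2]
        children = [mutate(crossover(rng.choice(elite), rng.choice(elite)), rng=rng)
                    for _ in range(len(pop) - len(elite))]
        pop = elite + children
    return max(pop, key=lambda p: fitness(p, target))
```

Because the elite half is preserved each generation, the best fitness found never decreases, which is the basic guarantee such a tuning loop relies on.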
A three-electrode human-computer interaction system, based on digital processing of the electromyogram (EMG) signal, is presented. This system can effectively help disabled individuals paralyzed from the neck down to interact with computers or communicate with people through computers using point-and-click graphic interfaces. The three electrodes are placed on the right frontalis, the left temporalis, and the right temporalis muscles of the head. The signal processing algorithm translates the EMG signals of five kinds of facial movements (left jaw clenching, right jaw clenching, eyebrows up, eyebrows down, and simultaneous left and right jaw clenching) into five corresponding types of cursor movements (left, right, up, down, and left-click) to provide basic mouse control. The classification strategy is based on three principles: the EMG energy of one channel is typically larger than the others during one specific muscle contraction; the spectral characteristics of the EMG signals produced by the frontalis and temporalis muscles during different movements are different; and the EMG signals from adjacent channels typically have correlated energy profiles. The algorithm was evaluated on 20 pre-recorded EMG signal sets, using Matlab simulations. The results show that this method provides improvements and is more robust than previous approaches.
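The first and third classification principles (channel energy dominance, and comparable energy on adjacent channels during a simultaneous clench) can be illustrated with a toy decision rule; everything here (the window format, the `ratio` threshold, and the label names) is a hypothetical reconstruction, not the paper's algorithm:

```python
def energy(x):
    """Signal energy of one EMG window."""
    return sum(v * v for v in x)

def classify(frontalis, left_temp, right_temp, ratio=1.5):
    """Toy energy-comparison rule in the spirit of the abstract: the
    channel over the contracting muscle carries most energy, while
    comparable left/right temporalis energy signals a simultaneous
    clench (the left-click gesture)."""
    e = {"up_down": energy(frontalis),   # frontalis: eyebrow movements
         "left": energy(left_temp),      # left temporalis: left clench
         "right": energy(right_temp)}    # right temporalis: right clench
    el, er = e["left"], e["right"]
    # Comparable left/right energies that both dominate the frontalis
    # channel suggest both jaws are clenching at once.
    if el > 0 and er > 0 and max(el, er) / min(el, er) < ratio \
            and min(el, er) > e["up_down"]:
        return "click"
    return max(e, key=e.get)
```

A real system would add the second principle (spectral discrimination) to split eyebrow-up from eyebrow-down, which this sketch collapses into one label.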
This study proposes to achieve the affective assessment of a computer user through the processing of the pupil diameter (PD) signal. An adaptive interference canceller (AIC) system using the H∞ time-varying (HITV) adaptive algorithm was developed to minimize the impact of the PLR (pupil size changes caused by light intensity variations) on the measured pupil diameter signal. The modified pupil diameter (MPD) signal, obtained from the AIC, was expected to reflect primarily the pupillary affective responses (PAR) of the subject. Additional manipulations of the AIC output resulted in a Processed MPD (PMPD) signal, from which a classification feature, “PMPDmean”, was extracted. This feature was used to train and test a support vector machine (SVM), for the identification of “stress” states in the subject, achieving an accuracy rate of 77.78%. The advantages of affective recognition through the PD signal were verified by comparatively investigating the classification of “stress” and “relaxation” states through features derived from the simultaneously recorded galvanic skin response (GSR) and blood volume pulse (BVP) signals, with and without the PD feature. Encouraging results in affective assessment based on pupil diameter monitoring were obtained in spite of intermittent illumination increases purposely introduced during the experiments. Therefore, these results confirmed the possibility of using PD monitoring to evaluate the evolving affective states of a computer user.
Controlling Intellectual Property Rights (IPR) in the digital world is a very hard challenge. The ease of creating multiple bit-by-bit identical copies of original IPR works creates opportunities for digital piracy. One of the industries most affected by this is the music industry, which has suffered huge losses in recent years as a result. Moreover, this situation also affects the way that music rights collecting and distributing societies operate to assure correct identification, collection, and distribution of music IPR. In this article a system for automating this IPR identification, collection, and distribution is presented and described. The system makes use of an advanced automatic audio identification system based on audio fingerprinting technology. This paper presents the details of the system and a use-case scenario in which it is being used.
The paper deals with the problem of adapting the software-implemented fault injection (SWIFI) technique to evaluate the dependability of reactive microcontroller systems. We present an original methodology for disturbing controller operation and analyzing fault effects, taking into account the reactions of the controlled object and the impact of the system environment. Faults can be injected randomly (in space and time) or targeted at the most sensitive elements of the controller to check it under high stress. This approach allows the identification of rarely encountered problems, usually missed in classical approaches. The developed methodology has been used successfully to verify the dependability of an ABS system. Experimental results are discussed in the paper.
Optimal clustering of a call flow graph for reaching maximum concurrency in the execution of distributable components is an NP-complete problem. Learning automata (LAs) are search tools that are used for solving many NP-complete problems. In this paper a learning-based algorithm is proposed for optimal clustering of the call flow graph and appropriate distribution of programs at the network level. The algorithm uses the learning feature of LAs to search the state space. It is shown that the speed of reaching a solution increases remarkably when LAs are used in the search process, and that they also prevent the algorithm from being trapped in local minima. Experimental results show the superiority of the proposed algorithm over others.
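As background to the abstract above, a classical linear reward-inaction (L_RI) automaton, one common LA update scheme, can be sketched as follows; the abstract does not say which scheme or parameters the paper uses, so this is illustrative only:

```python
import random

class LinearRewardInaction:
    """L_RI learning automaton: on reward, shift probability mass toward
    the chosen action; on penalty, leave the probabilities unchanged."""
    def __init__(self, n_actions, a=0.1):
        self.p = [1.0 / n_actions] * n_actions  # uniform initial policy
        self.a = a                              # reward step size

    def choose(self, rng=random):
        """Sample an action index from the current probability vector."""
        r, acc = rng.random(), 0.0
        for i, pi in enumerate(self.p):
            acc += pi
            if r <= acc:
                return i
        return len(self.p) - 1

    def update(self, action, reward):
        """Reinforce the chosen action only when the environment rewards it."""
        if reward:
            for i in range(len(self.p)):
                if i == action:
                    self.p[i] += self.a * (1.0 - self.p[i])
                else:
                    self.p[i] *= (1.0 - self.a)
```

In a clustering search, each automaton would pick a cluster assignment as its action and be rewarded when the resulting concurrency improves, gradually concentrating probability on good assignments.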
This paper gives an overview of the development of a new software solution for project management, intended mainly for use in industrial environments. The main concern of the proposed solution is application in everyday engineering practice in various, mainly distributed, industrial companies. With this in mind, special care has been devoted to developing appropriate tools for tracking, storing, and analyzing project information and delivering it on time to the right team members or other responsible persons. The proposed solution is Internet-based and uses the LAMP/WAMP (Linux or Windows - Apache - MySQL - PHP) platform, because of its stability, versatility, open source technology, and simple maintenance. The modular structure of the software makes it easy to customize according to client-specific needs, with a very short implementation period. Its main advantages are simple usage, quick implementation, easy system maintenance, short training, and the need for only basic computer skills among operators. Keywords: project management software, web-based software, resources planning, task accomplishment tracking, project team communication improvement.
The automated warehouse considered here consists of a number of rack locations with three cranes, a narrow-aisle shuttle, and several buffer stations with rollers. Based on an analysis of the behaviors of the active resources in the system, a modular and computerized model is presented via a colored timed Petri net approach, in which places are multicolored to simplify the model and characterize the control flow of the resources, and token colors are defined as the routes of storage/retrieval operations. In addition, an approach for realizing the model in Visual C++ is briefly given. These facts allow us to build an emulation system to simulate a discrete control application for online monitoring, dynamic dispatching control, and off-line revision of scheduling policies.
An increasing number of people are in need of help at home (elderly, isolated, and/or disabled persons; people with mild cognitive impairment). Several solutions can be considered to maintain a social link while providing tele-care to these people. Many proposals suggest the use of a robot acting as a companion. In this paper we look at an environment-constrained solution, its drawbacks (such as latency) and its advantages (flexibility, integration…). A key design choice is to control the robot using a unified Voice over Internet Protocol (VoIP) solution, while addressing bandwidth limitations, providing good communication quality, and reducing transmission latency.
The paper explores whether semantic context alone is good enough to cope with the ever increasing number of available resources in different repositories, including the web. Here the problem of identifying the authors of scientific papers is used as an example. A set of problems still arises if we apply the semantic context exclusively. Fortunately, contextual semantics can be used to derive the additional information required to separate ambiguous cases. Semantic tags, well-structured documents, and available databases of articles provide a possibility to be more context-aware. As context we use co-author names, references, and headers to extract keywords and identify the subject. The real complexity of the problem considered comes from the dynamic behaviour of authors, as they can change their research topic in the next paper. As a final judge, the paper proposes applying word usage pattern analysis. Finally, the contextual intelligence engine is described.
The only way nowadays to improve the stability of the software development process in a rapidly evolving global world is to be innovative and to involve professionals in projects, motivating them using both material and non-material factors. In this paper self-organized teams are discussed. Unfortunately, not all kinds of organizations can benefit directly from agile methods, including the use of self-organized teams. The paper proposes semi-self-organized teams, presenting them as a new and promising motivating factor that retains many of the positive aspects of being self-organized and partly agile while complying with less strict conditions for following this innovative process. Semi-self-organized teams are reliable, at least in the short-term perspective, and are simple to organize and support.
Finding the optimal path between two locations in the city of Colombo is not a straightforward task, because of the complex road system, heavy traffic jams, etc. This paper presents a system to find the optimal driving direction between two locations within the Colombo city area, considering road rules (one-way, two-way, or fully closed in both directions). The system contains three main modules (core, web, and mobile); additionally, there are two user interfaces, one for normal users and the other for administrative users. Both interfaces can be accessed using a web browser or a GPRS-enabled mobile phone. The system is developed based on Geographic Information System (GIS) technology. GIS is considered the best option to integrate hardware, software, and data for capturing, managing, analyzing, and displaying all forms of geographically referenced information. The core of the system is MapServer (MS4W), used along with other supporting technologies such as PostGIS, PostgreSQL, pgRouting, ASP.NET, and C#.
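Routing engines such as pgRouting typically compute shortest paths with Dijkstra-style algorithms over a directed graph, which is how the road rules mentioned above are naturally encoded: a two-way road contributes an edge in both directions, a one-way road a single edge, and a closed road no edge at all. A minimal sketch (not the paper's implementation):

```python
import heapq

def shortest_route(edges, start, goal):
    """Dijkstra over a directed graph given as (from, to, cost) triples.
    Returns (path, cost), or (None, inf) if the goal is unreachable."""
    graph = {}
    for u, v, w in edges:
        graph.setdefault(u, []).append((v, w))
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if goal not in dist:
        return None, float("inf")
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]
```

A one-way street the wrong way simply never appears among a node's outgoing edges, so the router cannot propose driving against it.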
In recent years, Grid Computing (GC) has made great strides in development, contributing to the solution of practical problems that need large storage capacity and computing performance. This paper introduces an approach to integrating the security mechanisms of the Grid Security Infrastructure (GSI) of the open source Globus Toolkit 4.0.6 (GT4) into an application for storing, format-converting, and playing online media files, based on GC.
In this paper a tool called ParaGraph, supporting C code parallelization, is presented. ParaGraph is a plug-in for the Eclipse IDE and enables manual and automatic parallelization. A parallelizing compiler automatically inserts OpenMP directives into the output source code. OpenMP directives can also be inserted manually by a programmer. ParaGraph shows the C code after parallelization. Visualization of the parallelized code can be used to understand the rules and constraints of parallelization and to tune the parallelized code as well.
This paper discusses some of the salient issues involved in implementing the illusion of a shared-memory programming model across a group of distributed memory processors, from a cluster through to an entire Grid. This illusion can be provided by a distributed shared memory (DSM) system implemented using autonomous agents. Mechanisms that have the potential to increase performance by omitting intra-site consistency messages and data transfers, and the latency they incur, are highlighted. We describe the overall design and architecture of a prototype system, AOMPG, which integrates the DSM and agent paradigms and may be the target of an OpenMP compiler. Our goal is to apply this to Grid applications.
Fleet monitoring of commercial vehicles has received major attention recently. A good monitoring solution increases fleet efficiency by reducing transportation durations, optimizing the planned routes, and providing determinism at intermediate and final destinations. This paper presents a fleet monitoring system for commercial vehicles that uses the Internet as its data infrastructure. The mashup concept was implemented for creating the user interface.
An implementation and performance analysis of heat transfer modeling using the most popular component environments is the scope of this article. The computational problem is described, and the proposed decomposition for parallelization is shown. The implementation is prepared for MS .NET, Sun Java, and Mono. Tests are done for various combinations of operating systems and hardware platforms. The performance of the calculations is experimentally measured and analyzed. The most interesting issue is communication tuning in distributed component software: the proposed method can speed up computation time, but the final time also depends on the performance of the network connections in the component environments. These results are presented and discussed.
A novel hybrid technique for detecting and predicting the motion of objects in a video stream is presented in this paper. The novelty consists in an extension of the Savitzky-Golay smoothing filter that applies a difference approach to tracing the object mass center, with or without acceleration, in noisy images. The proposed adaptation of least squares methods for smoothing the fast-varying values of the motion-predicting function avoids the oscillation of that function for the same degree of polynomial. Better results are obtained when the time of motion interpolation is divided into subintervals and the function is represented by a different polynomial over each subinterval. Therefore, in the proposed hybrid technique the spatial clusters containing objects in motion are detected by the image difference operator, and the behavior of those clusters is analyzed using their mass centers in consecutive frames. The predicted location of an object is then computed using a modified weighted least squares algorithm. This allows tracing of possible routes that is invariant to the oscillation of the predicting polynomials and to the noise present in the images. For the irregular motion that frequently occurs in dynamic scenes, a compensation and stabilization technique is also proposed. The efficiency of the proposed technique is analyzed and evaluated on the basis of several simulated kinematics experiments. Index Terms: image processing, motion prediction, least squares model, interpolating polynomial oscillation and stabilization.
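The core idea of predicting the next mass-center position from a least squares polynomial fit over recent frames can be sketched in a few lines; the degree, window, and uniform (unweighted) fit below are illustrative assumptions, not the paper's modified weighted scheme:

```python
def fit_poly(ts, ys, degree):
    """Least squares polynomial fit via the normal equations (pure Python).
    Returns coefficients [c0, c1, ...] for c0 + c1*t + c2*t^2 + ..."""
    n = degree + 1
    # Normal equations A^T A x = A^T y for the Vandermonde matrix
    ata = [[sum(t ** (i + j) for t in ts) for j in range(n)] for i in range(n)]
    aty = [sum(y * t ** i for t, y in zip(ts, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, n):
            f = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = aty[r] - sum(ata[r][c] * coeffs[c] for c in range(r + 1, n))
        coeffs[r] = s / ata[r][r]
    return coeffs

def predict_next(positions, degree=2):
    """Extrapolate the next mass-center coordinate from recent frames,
    assuming one sample per frame at t = 0, 1, 2, ..."""
    ts = list(range(len(positions)))
    c = fit_poly(ts, positions, degree)
    t = len(positions)
    return sum(ci * t ** i for i, ci in enumerate(c))
```

A degree-2 fit captures constant-acceleration motion exactly; fitting separate low-degree polynomials over subintervals, as the abstract suggests, is what keeps the predictor from oscillating on longer tracks.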
Virtualization has gained great popularity in recent years, with application virtualization being the latest trend. Application virtualization offers several benefits for application management, especially in larger and more dynamic deployment scenarios. In this paper, we first introduce common application virtualization principles before evaluating the security of the Microsoft App-V and VMware ThinApp application virtualization environments with respect to external security threats. We compare different user account privileges and levels of sandboxing for virtualized applications. Furthermore, we identify the major security risks, as well as the trade-offs with ease of use, that result from the virtualization of applications.
The noise performance of a lumped passive uniform cascade of identical element two port networks is investigated. The N-block network is characterized in close form based on the eigenvalues of the element two-port ABCD transmission matrix. The thermal noise performance is derived and demonstrated in several examples.
Research on agent-oriented software has been developed around different practical applications. The same cannot, however, be said about the development of measures to evaluate agent quality by its key characteristics. In some cases, there have been proposals to use and adapt measures from other paradigms, but no agent-related quality model has been investigated. As part of research into agent quality, this paper presents the evaluation of two key characteristics: social ability and autonomy. Additionally, we present some results for a case study on a multi-agent system.
Perception measuring and perception management is an emerging approach in the area of product management. Cognitive, psychological, behavioral, and neurological theories, tools, and methods are being employed for a better understanding of the mechanisms of a consumer's attitude and decision processes. Software is also defined as a product; however, this kind of product is significantly different from all others. Software products are intangible, and it is difficult to trace their characteristics, which are strongly dependent on a dynamic context of use. Understanding customers' cognitive processes gives an advantage to producers aiming to develop products that "win the market". Is it possible to adopt these theories, methods, and tools for the purpose of software perception, especially software quality perception? The theoretical answer to this question seems easy; in practice, however, the list of differences between software products and software projects hinders the analysis of certain factors and their influence on overall perception. In this article the authors propose a method and describe a tool designed for research on the perception of software quality. The tool is designed to overcome the problem stated above, adopting a modern behavioral economics approach.
This paper presents an approach to processing Spanish linguistic categories automatically. The approach is based on a module of a prototype named WIH (Word Intelligent Handler), a project to develop a conversational bot. It basically learns the category usage sequence in a sentence and extracts a weighting metric to discriminate the most common structures in real dialogs. Such a metric is important for defining the preferred organization to be used by the robot to build an answer.
In this paper, a content-based image retrieval system for the aggregation and combination of different image features is presented. Feature aggregation is an important technique in general content-based image retrieval systems that employ multiple visual features to characterize image content. We introduce and evaluate linear combination and support vector machines for fusing the different image features. The implemented system has several advantages over existing content-based image retrieval systems. The several features implemented in our system allow the user to adapt the system to the query image. The SVM-based approach to ranking retrieval results helps process specific queries for which users do not have knowledge of any suitable descriptors.
The patient, in his multiple facets as citizen and user of health services, needs to acquire, during and beyond his majority, favorable conditions of health to enhance his quality of life, and it is the responsibility of health organizations to initiate the process of support for that patient throughout mature life. The provision of health services and the doctor-patient relationship are undergoing important changes throughout the world, forced to a large extent by the unsustainability of the system itself. Nevertheless, decision making requires prior information and, what is more, the very necessity of being informed requires a "culture" of health that generates proactivity and the capacity to search for instruments that facilitate awareness of the illness and its self-care. It is therefore necessary to put into effect an ICT model (hiPAPD) whose objective is to produce interaction, motivation, and persuasion in the surroundings of the diabetic patient, facilitating his self-care. As a result, the patient himself individually manages his services through devices and Ambient Intelligence (AmI) systems. Keywords: ICT, emotional design, captology, diabetic patient, self-care.
Jawi knowledge is becoming important not just for adults but also for growing children to learn at the initial stage of their lives. This project is basically to study and develop Embedded Jawi Generator Software that will generate and create Jawi script easily. The user can choose and enter Jawi scripts and learn each of them. The scripts run from alif to yaa, approximately 36 in total, each with a colorful button. The system should also be created as an interactive system that will attract users, especially kids. This Jawi Generator Software was developed using the Java language on a Linux operating system (Fedora) and will run on UP-NETARM2410-S Linux. The performance of the Jawi Generator System is later investigated for its accuracy in displaying the words, as well as for board performance.
This paper describes Linux, an open source platform used to develop and run a virtual architectural walkthrough application. It proposes some qualitative reflections and observations on the nature of Linux in the context of Virtual Reality (VR) and on the most popular and important claims associated with the open source approach. The ultimate goal of this paper is to measure and evaluate the performance of Linux as used to build the virtual architectural walkthrough, and to develop a proof of concept based on the results obtained through this project. Besides that, this study reveals the benefits of using Linux in the field of virtual reality and presents a basic comparison and evaluation of the Windows and Linux operating systems. The Windows platform is used as a baseline to evaluate the performance of Linux, which is measured against three main criteria: frame rate, image quality, and mouse motion.
The development of mechanical models of the humerus-prosthesis assembly represents a solution for analyzing the behavior of prosthesis devices under different conditions; some of these behaviors are impossible to reproduce in vivo due to the irreversible phenomena that can occur. This paper presents a versatile model of the humerus-prosthesis assembly. The model is used for analyzing stress and displacement distributions under different configurations that correspond to possible later in vivo implementations. A 3D scanner was used to obtain the virtual model of the humerus bone. The endoprosthesis was designed using 3D modeling software, and the humerus-prosthesis assembly was analyzed using Finite Element Analysis software.
Bit planes of a discrete signal can be used not only for encoding or compression, but also for encryption purposes. This paper investigates the composition of the bit planes of an image and their utilization in the encryption process. The proposed encryption scheme is based on the chaotic maps of Peter de Jong and is designed primarily for image signals. The positions of all components of the bit planes are permuted according to the chaotic behaviour of Peter de Jong's system.
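The Peter de Jong map is the iterated system x' = sin(a*y) - cos(b*x), y' = sin(c*x) - cos(d*y); ranking the resulting chaotic sequence yields a key-dependent permutation of bit positions, which is one common way such schemes are built. A minimal sketch (the parameter values are arbitrary, and the paper's exact permutation construction may differ):

```python
import math

def de_jong_sequence(n, a=1.4, b=-2.3, c=2.4, d=-2.1, x=0.1, y=0.1):
    """Iterate the Peter de Jong map and collect the x coordinates.
    The parameters (a, b, c, d) and the seed act as the key."""
    xs = []
    for _ in range(n):
        x, y = math.sin(a * y) - math.cos(b * x), math.sin(c * x) - math.cos(d * y)
        xs.append(x)
    return xs

def chaotic_permutation(n, **params):
    """Rank the chaotic sequence to obtain a permutation of n positions."""
    xs = de_jong_sequence(n, **params)
    return sorted(range(n), key=lambda i: xs[i])

def permute_bits(bits, perm):
    """Scatter one bit plane according to the chaotic permutation."""
    return [bits[p] for p in perm]

def unpermute_bits(bits, perm):
    """Invert the permutation; requires the same key to reproduce perm."""
    out = [0] * len(bits)
    for i, p in enumerate(perm):
        out[p] = bits[i]
    return out
```

Decryption simply regenerates the same permutation from the key and applies its inverse, so only a holder of the map parameters can restore the bit planes.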
This paper is about the design and implementation of a web-based examination system called FLEX (Flight License Exam) Software. We designed and implemented a flexible and modular software architecture. The implemented system has basic features such as adding questions to the system, building exams from these questions, and allowing students to take the exams. There are three different types of users with different authorizations: system administrator, operators, and students. The system administrator operates and maintains the system, and also audits system integrity; the administrator cannot change the results of exams and cannot take an exam. The operator module comprises instructors; operators have privileges such as preparing exams, entering questions, and changing existing questions. Students can log on to the system and access exams via a certain URL. Another characteristic of our system is that operators and the system administrator are not able to delete questions, for security reasons. Exam questions are stored in the database by topic and lecture, so operators and the system administrator can easily choose questions. Taken together, the FLEX software gives many students the opportunity to take exams at the same time under safe, reliable, and user-friendly conditions. It is also a reliable examination system for the authorized aviation administration companies. The web development platform LAMP (Linux, the Apache web server, MySQL, and the object-oriented scripting language PHP) is used for developing the system, and page structures are built with a Content Management System (CMS).
The National Portal of India [1] integrates information from distributed web resources such as the websites and portals of different Ministries, Departments and State Governments, as well as district administrations. These websites were developed at different points in time, using different standards and technologies, so integrating information from such distributed, disparate web resources is a challenging task; it also affects information discovery by citizens using a unified interface such as the National Portal. Existing text-based search engines would not yield the desired results [7]. A couple of approaches were deliberated to address this challenge, and it was concluded that a metadata-replication-based approach would be the most feasible and sustainable. Accordingly, a solution was designed for replicating metadata from distributed repositories using a service-oriented architecture. Uniform metadata specifications were devised based on the Dublin Core standard [9]. To begin with, the solution is being implemented across the National Portal and 35 State Portals spread over the length and breadth of India. Metadata from distributed repositories is replicated to a central repository regardless of the platform and technology used by those repositories. A simple search interface has also been developed for efficient and effective information discovery by citizens.
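For concreteness, the 15 elements of the simple Dublin Core standard suggest what such a uniform record might look like; the replication sketch below (with an invented record and repository API) is ours, not the paper's design.

```python
# The 15 elements of the simple Dublin Core metadata element set.
DC_ELEMENTS = [
    "title", "creator", "subject", "description", "publisher",
    "contributor", "date", "type", "format", "identifier",
    "source", "language", "relation", "coverage", "rights",
]

def normalize_record(raw):
    """Project a raw record onto the Dublin Core elements,
    dropping anything outside the uniform specification."""
    return {k: raw[k] for k in DC_ELEMENTS if k in raw}

def replicate(central, distributed_records):
    """Merge records harvested from distributed repositories into
    the central repository, keyed by their DC identifier."""
    for raw in distributed_records:
        rec = normalize_record(raw)
        central[rec["identifier"]] = rec
    return central
```

Normalizing at harvest time is what makes the central index independent of the platform and technology of each source repository.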
Performance engineering focuses on the quantitative investigation of the behavior of a system during the early phases of the system development life cycle. Bearing this in mind, we delineate a performance modeling framework for communication-system applications that translates high-level UML notation into a Continuous-Time Markov Chain (CTMC) model and solves the model for the relevant performance metrics. The framework uses UML collaborations, activity diagrams and deployment diagrams to generate a performance model of a communication system. The system dynamics are captured by UML collaborations and activity diagrams as reusable specification building blocks, while the deployment diagram highlights the components of the system. The collaborations and activities show how reusable building blocks, in the form of collaborations, compose the service components through input and output pins, highlighting the behavior of the components; a mapping between the collaborations and the system components identified by the deployment diagram is then delineated. Moreover, the UML models are annotated with performance-related quality of service (QoS) information, which is necessary for solving the performance model for the relevant performance metrics. The applicability of the proposed framework to performance evaluation is demonstrated in the context of modeling a communication system.
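The end product of such a translation is a CTMC generator matrix whose steady-state distribution yields the performance metrics. A minimal pure-Python solver for pi*Q = 0 with sum(pi) = 1 (our sketch, not the framework's actual solver) might look like:

```python
def ctmc_steady_state(Q):
    """Solve pi * Q = 0 with sum(pi) = 1 by Gauss-Jordan elimination.
    Q is the CTMC generator matrix: Q[i][j] is the transition rate
    from state i to state j, and each row sums to zero."""
    n = len(Q)
    # Build the transposed system Q^T pi = 0, replacing the last
    # (redundant) equation with the normalization sum(pi) = 1.
    A = [[Q[j][i] for j in range(n)] for i in range(n)]
    b = [0.0] * n
    A[n - 1] = [1.0] * n
    b[n - 1] = 1.0
    # Gauss-Jordan elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(n):
            if r != col and A[r][col]:
                f = A[r][col] / A[col][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return [b[i] / A[i][i] for i in range(n)]
```

For a two-state on/off model with rate 1 from state 0 to 1 and rate 2 back, the solver returns the expected utilization split of 2/3 and 1/3.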
In this paper, we survey four clustering techniques and discuss their advantages and drawbacks. We review eight different protein sequence clustering algorithms and present a comparison between the algorithms on the basis of several factors.
This article presents the construction and potential of multi-user, web-based OpenGL applications. Common technologies such as ASP.NET, Java and Mono were used with specific OpenGL libraries to visualize three-dimensional medical data. The most important conclusion of this work is that server-side applications can easily take advantage of a fast GPU and efficiently deliver the results of advanced computations such as visualization.
Testing task schedulers on the Linux operating system proves to be a challenging task. There are two main problems. The first is identifying which properties of the scheduler to test. The second is how to perform the tests, e.g., which API to use that is sufficiently precise and at the same time supported on most platforms. This paper discusses the problems in realizing a framework for testing task schedulers and presents one potential solution. The observed behavior is that of the scheduling policy used for "normal" tasks (SCHED_OTHER), as opposed to the policies used for real-time tasks (SCHED_FIFO, SCHED_RR).
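One concrete property to test under SCHED_OTHER is proportional fairness: CPU-bound tasks should receive CPU time roughly in proportion to their nice-derived load weights. The sketch below assumes the CFS convention that nice 0 maps to weight 1024 with roughly a 1.25x change per nice level (the kernel's sched_prio_to_weight table); it is an illustration of one testable property, not the paper's framework.

```python
def cfs_weight(nice):
    """Approximate CFS load weight: nice 0 maps to 1024, and each
    nice step changes the weight by a factor of about 1.25."""
    return 1024.0 / (1.25 ** nice)

def expected_shares(nices):
    """Expected long-run CPU share of each CPU-bound task under
    SCHED_OTHER, given its nice value."""
    weights = [cfs_weight(n) for n in nices]
    total = sum(weights)
    return [w / total for w in weights]

def fairness_error(observed, nices):
    """Largest absolute deviation between measured CPU shares
    (e.g., sampled from /proc/<pid>/stat) and the CFS expectation;
    a test would assert this stays below some tolerance."""
    return max(abs(o - e) for o, e in zip(observed, expected_shares(nices)))
```

A test harness would fork CPU-bound children at the chosen nice values, sample their accumulated CPU time, and assert that `fairness_error` stays within a tolerance chosen to absorb scheduling noise.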
The approach to the design of overhead electric lines has changed very significantly in recent years. In particular, designers must keep in mind new reliability requirements in this branch; these new requirements are the basis of new European and national standards. To simplify the design layout, automate the verification of all rules and limits, and minimize mistakes, a computer application was developed to solve these tasks. This article describes the new approach to this task and the features and capabilities of this software tool.
Recently, Model-Driven Engineering (MDE) has been proposed to cope with the complexity of the development, maintenance and evolution of large, distributed software systems. Model-Driven Architecture (MDA) is an example of MDE. In this context, model transformations enable large-scale reuse of software systems through the transformation of a Platform-Independent Model (PIM) into a Platform-Specific Model (PSM). Although source code can be generated from models, defects can be injected during the modeling or transformation process. In order to deliver software systems without defects that cause errors and failures, the source code must be tested. In this paper, we present an approach that addresses testing throughout the whole software life cycle, i.e., it starts at the modeling level and finishes with the testing of the source code of the software system. We provide an example to illustrate our approach.
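As a toy illustration of a PIM-to-PSM-style transformation (the model format and the Java target are our invention, not the paper's), a platform-independent class description can be mapped onto platform-specific code:

```python
def pim_to_java(pim):
    """Generate a platform-specific Java class skeleton from a toy
    platform-independent class description (a dict of attribute
    names to types). Real MDA transformations work on richer
    metamodels, but the shape of the mapping is the same."""
    fields = "\n".join(
        f"    private {jtype} {name};" for name, jtype in pim["attributes"].items()
    )
    return f"public class {pim['name']} {{\n{fields}\n}}\n"
```

A defect injected at this stage (say, a wrong type mapping) would propagate into every generated class, which is why the paper argues testing must start at the modeling level rather than only at the source-code level.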
This paper investigates the role of the scientific signs in the holy Quran in improving the creative thinking skills of deaf children using multimedia. The paper examines whether the performance of the individuals in the experimental group differs statistically significantly from that of the individuals in the control group on the Torrance Test of Creative Thinking (fluency, flexibility, originality and the total score) in two cases: 1. without considering the gender of the population; 2. considering the gender of the population.
Research has gone into parallelizing the numerical aspects of computationally intensive analyses and solutions. Recent advances in computer algebra systems have opened up new opportunities for research: generating closed-form, symbolic solutions more efficiently by parallelizing the symbolic manipulations.
Multilevel secure (MLS) database models provide a data protection mechanism different from traditional data access control. MLS databases have been used in various application domains, including government, hospitals and the military. An MLS database model protects data by grouping them into different classification levels and creating different views for users at different clearance levels. Previous models have focused on data-level classification, such as tuples and elements. In this study, we introduce a schema-level classification mechanism, i.e., attribute and relation classification. We first define the basic model and then give definitions of the integrity properties and database operations. The schema classification scheme reduces semantic inference and thus prevents users from compromising the database.
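A sketch of the core idea (the level names and API are invented for illustration, not the paper's model): access is granted when the user's clearance dominates the classification, and schema-level labels can hide a whole relation or individual attributes.

```python
# Linearly ordered classification levels:
# Unclassified < Confidential < Secret < Top Secret.
LEVELS = {"U": 0, "C": 1, "S": 2, "TS": 3}

def dominates(clearance, classification):
    """A subject may see an object only if its clearance
    dominates (is at least) the object's classification."""
    return LEVELS[clearance] >= LEVELS[classification]

def user_view(relation, attr_levels, clearance, relation_level="U"):
    """Build the view of a relation for a given clearance:
    the whole relation is hidden if its schema-level label is not
    dominated, and classified attributes are filtered out."""
    if not dominates(clearance, relation_level):
        return []
    visible = [a for a, lvl in attr_levels.items() if dominates(clearance, lvl)]
    return [{a: row[a] for a in visible} for row in relation]
```

Labeling the schema rather than individual tuples means a low-clearance user never learns that a classified attribute or relation even exists, which is what limits semantic inference.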
A memory leak refers to the inability of a program to release memory, or part of it, that it has acquired to perform certain tasks [1]. The unintended consequences of such behavior are manifested, at best, as diminishing performance; in worst-case scenarios, memory leaks can cause the computer system to freeze and/or the application to fail completely. Memory leaks are particularly disastrous in memory-limited embedded systems and in client-server environments where applications share memory across multi-user platforms. It is up to operating system designers to make sure that running applications release memory after program termination. This work assesses and quantifies the impact of memory leaks on system performance.
The maintenance of legacy software applications is a complex, expensive, quite challenging, time-consuming and daunting task due to the difficulty of program comprehension. The first step in software maintenance is to understand the existing software and to extract high-level abstractions from the source code. A number of methods, techniques and tools are applied to understand legacy code. Each technique supports particular legacy applications with automated or semi-automated tool support, keeping in view the requirements of the maintainer. Most techniques support modern languages but lack support for older technologies. This paper presents a lightweight methodology for extracting different artifacts from legacy COBOL and other applications.
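A lightweight extraction pass of this kind can be as simple as a few regular expressions over free-format COBOL source; the patterns below are a rough sketch of the idea, not the paper's methodology (fixed-format COBOL would additionally need column-area handling).

```python
import re

# A division header, e.g. "PROCEDURE DIVISION."
DIVISION_RE = re.compile(r"^\s*([A-Z-]+)\s+DIVISION\s*\.", re.MULTILINE)
# A paragraph name: a bare identifier ending with a period on its own line.
PARAGRAPH_RE = re.compile(r"^([0-9A-Z-]+)\s*\.\s*$", re.MULTILINE)

def extract_artifacts(source):
    """Pull high-level artifacts (division and paragraph names)
    out of COBOL source with plain regular expressions."""
    return {
        "divisions": DIVISION_RE.findall(source),
        "paragraphs": PARAGRAPH_RE.findall(source),
    }
```

Even this crude pass yields a program skeleton (its divisions and paragraphs) that a maintainer can use as a map before reading any statements.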