Citations
... There are three main components to the MVC design pattern in client-server-like web applications that serve database content, as shown in Figure 1 below. The pattern's objective is the separation of concerns [1]. Thus, developers implementing the view can focus solely on the user interface rather than on the application's business logic and data management. ...
... In general, and within the context of web applications, the model layer is implemented in an object-oriented programming language, such as Java [2]. The objects in the model layer encapsulate the application's business logic and usually interface with some sort of relational database [1]. Data in objects in the model layer and data in a relational database are handled in conceptually different ways. ...
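To make that separation concrete, here is a minimal, hypothetical sketch of a model-layer object in Java that encapsulates a piece of business logic and maps itself to a relational row over JDBC. The class, table, and column names are illustrative only and are not taken from the cited papers.

// Minimal sketch of a model-layer object, assuming a hypothetical
// "accounts" table reachable through JDBC; names are illustrative.
import java.sql.*;

public class Account {
    private final long id;
    private double balance;

    public Account(long id, double balance) {
        this.id = id;
        this.balance = balance;
    }

    // Business logic lives in the model, not in the view or controller.
    public void withdraw(double amount) {
        if (amount <= 0 || amount > balance) {
            throw new IllegalArgumentException("invalid withdrawal");
        }
        balance -= amount;
    }

    // Bridges the object world and the relational world (the conceptual
    // mismatch mentioned above).
    public static Account load(Connection conn, long id) throws SQLException {
        try (PreparedStatement ps =
                 conn.prepareStatement("SELECT balance FROM accounts WHERE id = ?")) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) throw new SQLException("no such account: " + id);
                return new Account(id, rs.getDouble("balance"));
            }
        }
    }
}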
... And when the Studio compiles menu data for a decision tree at a scheduled interval, its display data is static. ... (Footnote 1: This prediction was made in a phone conversation between the author and Mark Burgin.)
... To improve this result, we present a graph-theoretic approach based on Dijkstra's shortest path algorithm that minimizes the transients. 19 Effectively, this shows that the problem of targeting UPOs can be replaced by that of finding optimal paths within a graph. One advantage of our method is that path-searching is already a mature topic in computer science and can be performed very efficiently. ...
... Dijkstra's shortest path algorithm is a graph-theoretic method for determining the path with the smallest cumulative weight between any two vertices that belong to the same connected component of a weighted digraph. 19 This may involve passing through several intermediate vertices in order to join the two given vertices. In terms of cupolet transitions, this corresponds to a sequence of switchable cupolets that connects a pair of control bins while collectively passing through the least number of intermediary control bins. ...
We present an efficient control scheme that stabilizes the unstable periodic orbits of a chaotic system. The resulting orbits are known as cupolets and collectively provide an important skeleton for the dynamical system. Cupolets exhibit the interesting property that a given sequence of controls will uniquely identify a cupolet, regardless of the system's initial state. This makes it possible to transition between cupolets, and thus unstable periodic orbits, simply by switching control sequences. We demonstrate that although these transitions require minimal controls, they may also involve significant chaotic transients unless carefully controlled. As a result, we present an effective technique that relies on Dijkstra's shortest path algorithm from algebraic graph theory to minimize the transients and also to induce certainty into the control of nonlinear systems, effectively providing an efficient algorithm for the steering and targeting of chaotic systems.
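As a rough illustration of the graph-theoretic step described above, the following is a minimal sketch of Dijkstra's algorithm on a weighted digraph with integer vertex IDs. It is not the authors' implementation: the adjacency-list representation, the names, and the idea of modelling control bins as vertices with transition costs as edge weights are assumptions made for illustration only.

import java.util.*;

// Minimal sketch of Dijkstra's shortest-path search on a weighted digraph.
// Vertices are integer IDs (e.g., control bins in the cupolet setting);
// the representation and names here are illustrative.
public class Dijkstra {
    record Edge(int to, double weight) {}

    // Returns the minimum cumulative weight from source to every vertex.
    static double[] shortestPaths(List<List<Edge>> graph, int source) {
        double[] dist = new double[graph.size()];
        Arrays.fill(dist, Double.POSITIVE_INFINITY);
        dist[source] = 0.0;
        // Priority queue of (vertex, tentative distance) pairs.
        PriorityQueue<double[]> pq =
            new PriorityQueue<>(Comparator.comparingDouble((double[] a) -> a[1]));
        pq.add(new double[]{source, 0.0});
        while (!pq.isEmpty()) {
            double[] cur = pq.poll();
            int u = (int) cur[0];
            if (cur[1] > dist[u]) continue;          // stale queue entry
            for (Edge e : graph.get(u)) {
                double nd = dist[u] + e.weight();
                if (nd < dist[e.to()]) {             // relax edge u -> e.to
                    dist[e.to()] = nd;
                    pq.add(new double[]{e.to(), nd});
                }
            }
        }
        return dist;
    }
}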
... On the other hand, recursion is allowed, i.e. z_i ∈ Z_k can activate z_i ∈ Z_k once again. Obviously, the general rules that apply to recursive procedures, known in software [15] and hardware [16] engineering, have to be satisfied, i.e. after a certain number of iterations there must be a non-recursive exit that allows the recursive macro-operation to end; • In accordance with [14], the execution of an HGS is synchronous and the execution of PHGSs is synchronous as well. Figs. 4 and 5 present an example of PHGSs describing the functionality of the priority buffer shown in Fig. 2, with Z = {z_0, z_1, z_2, …, z_6} (z_0 corresponds to the top-level algorithm). ...
Many practical algorithms require support for hierarchy and parallelism. Hierarchy assumes an opportunity to activate one sub-algorithm from another, and parallelism enables different sub-algorithms to be executed at the same time. The paper presents a graphical specification of parallel hierarchical algorithms, suggests an architecture for a parallel reconfigurable controller, indicates its limitations, and describes a formal method of synthesis that allows the given algorithms to be implemented in hardware on the basis of the proposed architecture.
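To illustrate the recursion rule quoted in the excerpt above (a recursive macro-operation must reach a non-recursive exit after a finite number of iterations), here is a minimal, hypothetical sketch in Java. The sub-algorithm name z1 and its termination condition are illustrative and are not taken from the cited paper.

// Illustrative sketch of the recursion rule: a sub-algorithm may re-activate
// itself, but must reach a non-recursive exit after finitely many iterations.
public class RecursiveSubAlgorithm {
    // z1 processes `depth` levels and stops when none remain.
    static void z1(int depth) {
        if (depth == 0) {
            return;              // non-recursive exit
        }
        // ... perform one level of work here ...
        z1(depth - 1);           // recursive re-activation of z1
    }

    public static void main(String[] args) {
        z1(3);                   // terminates after three nested activations
    }
}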
... Here, was conveniently taken to be . For the special case when placing (that is, scale parameters are infinite), in (18) for noninformative (flat) priors, becomes ...
... This happens when the observed number of samples in (19) such that and . Then, (19) will reduce to (18) ...
... By a similar process, we can reparametrize for the RV as in (18). This reparameterization is achieved since, if SL , then its complement is SL , a characteristic that is similar to the one employed for the standard Beta as in (4). ...
With the advances in pervasive computing and wireless networks, the quantitative measurements of component and network availability have become a challenging task, especially in the event of often encountered insufficient failure and repair data. It is well recognized that the Forced Outage Ratio (FOR) of an embedded hardware component is defined as the failure rate divided by the sum of the failure and the repair rates; or FOR is the operating time divided by the total exposure time. However, it is also well documented that FOR is not a constant but is a random variable. The probability density function (pdf) of the FOR is the Sahinoglu-Libby (SL) probability model, named after the originators if certain underlying assumptions hold. The SL pdf is the generalized three-parameter Beta distribution (G3B). The failure and repair rates are taken to be the generalized Gamma variables where the corresponding shape and scale parameters, respectively, are not identical. The SL model is shown to default to that of a standard two-parameter Beta pdf when the shape parameters are identical. Decision Theoretic (Bayesian) solutions are employed to compute small-sample Bayesian estimators by using informative and noninformative priors for the component failure and repair rates with respect to three definitions of loss functions. These estimators for component availability are then propagated to calculate the network expected input-output or source-target (s-t) availability for four different fundamental networks given as examples. The proposed method is superior to using a deterministic way of estimating availability simply by dividing total up-time by exposure time. Various examples will illustrate the validity of this technique to avoid over- or underestimation of availability when only small samples or insufficient data exist for the historical lifecycles of components and networks.
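The ratio described in this abstract can be sketched in symbols as follows (the notation is ours, not the paper's): if the failure rate λ and the repair rate μ are modelled as independent Gamma variables with shape parameters α, β and rate (inverse-scale) parameters a, b, then the forced outage ratio is Q = λ/(λ+μ), and a standard change of variables gives the density below.

% Sketch of the ratio-of-Gammas density; symbols are ours, not the paper's.
\[
  Q \;=\; \frac{\lambda}{\lambda+\mu}, \qquad
  f_Q(q) \;=\; \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\,\Gamma(\beta)}
  \cdot \frac{a^{\alpha}\, b^{\beta}\, q^{\alpha-1} (1-q)^{\beta-1}}
             {\bigl[a\,q + b\,(1-q)\bigr]^{\alpha+\beta}},
  \qquad 0 < q < 1 .
\]

When a = b, the bracketed term cancels against a^α b^β and the standard Beta(α, β) density remains, which is the two-parameter limiting case mentioned in the abstract.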
... It is important to note that the use of the term data structure is somewhat relaxed, as it is used to encompass traditional data structures such as arrays and linked lists, as well as abstract data types (ADTs)¹, which may include the Table ADT, the List ADT and the Binary Tree ADT, as well as more complex ADTs [3]. An example of an ADT that may require a collection of measurable-quantity data is the Table ADT that may be used by the small telephone directory application. ...
Operational profiles are a quantification of usage patterns for a software application. These profiles are used to measure software reliability by testing the software in a manner that represents actual use. The current definition of an operational profile states that it is the set of operations available in the application, together with the operations' probabilities of occurrence in customer usage scenarios. This definition is too limited. In most industrial applications, focusing on operations alone does not offer an adequate representation of the use of the software. The limited definition of operational profiles can restrict their applicability, and hence software reliability analysis, for many software development organizations. This paper describes a formal and practical extension of the current definition of operational profiles to increase their applicability.
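As a small illustration of the Table ADT mentioned in the excerpt above (for instance, as the backing structure of the small telephone directory application), here is a hypothetical Java interface. The operation names and the generic signature are our own simplification, not a definition from the cited work.

import java.util.Optional;

// Hypothetical Table ADT for the telephone-directory example above;
// the operation names are illustrative only.
public interface Table<K, V> {
    void insert(K key, V value);        // add or replace an entry
    Optional<V> find(K key);            // look up an entry by key
    void remove(K key);                 // delete an entry, if present
    int size();                         // number of stored entries
}

// Example use: a directory mapping names to phone numbers.
// Table<String, String> directory = ...;
// directory.insert("Ada Lovelace", "555-0100");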
... To ensure O(log n) worst-case performance, we balance the BinSeT tree as an AVL tree (cf. [1,8,24]); hence we also need to talk about the height of the BinSeT tree. This gives the following invariant for every node of our data structure: ...
... While the details of when and how to perform the rotations are explained in most textbooks (cf. [8,24]), we concentrate only on the updates of the values µ and δ. Observe that the value τ does not change during rotations. ...
We discuss the problem of handling resource reservations. The resource can be reserved for some time, a reservation can be freed, and one can query the largest amount of the resource reserved during a given time interval. We show that the problem has a lower bound of Ω(log n) per operation on average, and we give a matching upper-bound algorithm. Our solution also solves dynamic versions of the related prefix-sum and partial-sum problems.
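Since the excerpt above focuses on how the augmented per-node values are updated during rotations, the following is a generic Java sketch of a right rotation in an augmented binary search tree, where each node's aggregate is recomputed from its children after relinking. The aggregate used here (a subtree maximum) is purely illustrative and does not reproduce the BinSeT-specific fields µ, δ and τ.

// Generic sketch of a right rotation in an augmented BST: after relinking,
// each affected node's aggregate is recomputed bottom-up from its children.
class Node {
    double value;        // value stored at this node
    double subtreeMax;   // aggregate maintained over the whole subtree
    Node left, right;

    Node(double value) { this.value = value; this.subtreeMax = value; }

    static double maxOf(Node n) {
        return n == null ? Double.NEGATIVE_INFINITY : n.subtreeMax;
    }

    // Recompute this node's aggregate from its children.
    void pull() {
        subtreeMax = Math.max(value, Math.max(maxOf(left), maxOf(right)));
    }

    // Right-rotate around `y` and return the new subtree root.
    static Node rotateRight(Node y) {
        Node x = y.left;
        y.left = x.right;
        x.right = y;
        y.pull();        // y's subtree changed first
        x.pull();        // then the new root
        return x;
    }
}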
... For step 3, when we sort characters according to the length of the longest matching string starting from that particular character, we can use the Radix sort approach [CP01]. The time complexity of Radix sort is O(N). ...
In many applications, it is necessary to determine string similarity. The edit distance [WF74] approach is a classic method for determining Field Similarity. A well-known dynamic programming algorithm [GUS97] is used to calculate edit distance with time complexity O(nm) (for the worst, average and even best case). Instead of continuing to improve the edit distance approach, [LL+99] adopted a brand new, token-based approach. Its new token-based concept, which retains the original semantic information, its good time complexity of O(nm) (for the worst, average and best case) and its good experimental performance make it a milestone paper in this area. Further study indicates that there is still room for improvement in its Field Similarity algorithm. Our paper introduces a package of new substring-based algorithms to determine Field Similarity. Combined, our new algorithms not only achieve higher accuracy but also attain time complexity O(knm) (k < 0.75) for the worst case, O(c·n) with c < 6 for the average case, and O(1) for the best case. Throughout the paper, we use comparative examples to show the higher accuracy of our algorithms compared to the one proposed in [LL+99]. Theoretical analysis, concrete examples and experimental results show that our algorithms can significantly improve the accuracy and time complexity of the calculation of Field Similarity. [GUS97] D. Gusfield. Algorithms on Strings, Trees and Sequences: Computer Science and Computational Biology. [LL+99] Mong Li Lee et al. Cleansing data for mining and warehousing. In Proceedings of the 10th International Conference on Database and Expert Systems Applications (DEXA'99), pages 751-760, August 1999. [WF74] R. Wagner and M. Fischer. The String-to-String Correction Problem. JACM 21, pages 168-173, 1974.
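For the Radix sort step mentioned in the excerpt above, here is a minimal sketch of an LSD (least-significant-digit) radix sort over non-negative integer keys, such as match lengths. The base-256 digit size, the helper name and the driver are our own choices; for fixed-width keys the running time is O(N).

import java.util.Arrays;

// Sketch of LSD radix sort (base 256) for non-negative integer keys.
public class RadixSort {
    static void sort(int[] keys) {
        int[] buf = new int[keys.length];
        for (int shift = 0; shift < 32; shift += 8) {
            int[] count = new int[257];
            for (int k : keys) count[((k >>> shift) & 0xFF) + 1]++;        // histogram
            for (int i = 0; i < 256; i++) count[i + 1] += count[i];        // prefix sums
            for (int k : keys) buf[count[(k >>> shift) & 0xFF]++] = k;     // stable scatter
            System.arraycopy(buf, 0, keys, 0, keys.length);
        }
    }

    public static void main(String[] args) {
        int[] lengths = {5, 1, 4, 1, 3};
        sort(lengths);
        System.out.println(Arrays.toString(lengths));   // [1, 1, 3, 4, 5]
    }
}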
... The outcome of the proposed project will be a powerful research tool, that is, a domain-focused crawler that will provide the best possible user experience. During this period, we will also consider relevant theoretical approaches to analyze and minimize possible risks, as well as the best methodologies to identify and manage problems, in order to arrive at a correct algorithm for implementing the crawler [9]. During July, we will implement the crawler by transforming the algorithm into a particular programming language. ...
PURPOSE OF THE PROJECT: The purpose of this project is to design and implement a crawler that will process relevant information on the web. After its integration into the search engine, the crawler will provide web users with a powerful educational search tool. It will also enable users to access resourceful educational intranets such as universities and libraries. On the other hand, it will empower these intranets to establish communications and open opportunities for future collaborations among them.

PROJECT DETAILS: Presently, the biggest advantage for users of the Internet is the immense amount of information available on the World Wide Web (WWW). However, the biggest drawback is the difficulty of finding relevant information. Search engines provide only a partial solution to this problem. The enormous amount of information available on the Web perpetually limits the efficiency, speed and accuracy of search engines. This limitation is becoming problematic since the information available on the Web is growing constantly and at an accelerated rate[1]. There are billions of web pages, with their number increasing at a rate of a million pages per day[2] and 40% of all web pages changing weekly[3, 4]. Google, for instance, provides a search of over 3 billion web pages[5], which represents only a part of the whole web. Therefore, it is important to address the problems of limited efficiency, speed and accuracy. This project is an attempt to improve information retrieval on the web.

Crawlers are intelligent agents, also known as spiders or bots. They are specialized programs that automatically visit sites and index the web pages by creating entries in the databases of search engines. They do so by "crawling" through a site a page at a time, following the links to other pages on the site until all pages have been read[4]. However, maintaining the currency of indices by constant crawling is rapidly becoming impossible due to the increasing size and dynamic content of the web[6]. The main reason is that current crawlers can easily wander off the targeted web sites when they follow hyperlinks.

The challenge of this project is to design and implement a crawler that will stay focused when crawling. A domain-focused crawler that is integrated into a search engine selectively seeks out pages that are relevant to pre-defined topics[2,3]. Designed as such, the crawler will no longer target the entire web. It will only crawl sets of hosts that are in a given domain (e.g. *.edu). It will crawl particular topical sections of the web without exploring the irrelevant ones[7]. This behavior is called "intelligent crawling"[8].

A crawler that is designed to produce efficient, speedy and accurate information is a major educational tool for researchers, academicians and students. Such users will have access to the potentially rich intranets and will be able to locate a wide range of information. In short, the proposed crawler will enable communication and collaboration between these intranets.
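The domain-restriction idea described above (crawling only hosts inside a given domain such as *.edu) can be sketched as follows. This is not the project's implementation: the class name, the regex-based link extraction and the seed URL are illustrative, and a production crawler would additionally honour robots.txt, politeness delays and proper HTML parsing.

import java.net.URI;
import java.net.http.*;
import java.util.*;
import java.util.regex.*;

// Minimal sketch of a domain-focused crawler: a breadth-first frontier that
// only enqueues links whose host ends with a given suffix (e.g. ".edu").
public class FocusedCrawler {
    private static final Pattern LINK = Pattern.compile("href=\"(https?://[^\"]+)\"");

    public static void crawl(String seed, String domainSuffix, int maxPages) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        Deque<String> frontier = new ArrayDeque<>(List.of(seed));
        Set<String> visited = new HashSet<>();

        while (!frontier.isEmpty() && visited.size() < maxPages) {
            String url = frontier.poll();
            if (!visited.add(url)) continue;                       // skip already-seen pages

            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
            String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
            System.out.println("indexed: " + url);

            Matcher m = LINK.matcher(body);
            while (m.find()) {
                String link = m.group(1);
                String host = URI.create(link).getHost();
                if (host != null && host.endsWith(domainSuffix)) { // stay inside the domain
                    frontier.add(link);
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        crawl("https://www.example.edu/", ".edu", 10);             // hypothetical seed URL
    }
}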
Data structures have been a core discipline in computer engineering studies. Several difficulties related to the teaching and learning of these contents have been detected by the academic community. With the aim of gaining a better understanding of these situations, we present the conclusions obtained after evaluating the results achieved by the students of the subject Programming II, in the first year of the computer engineering degree at the University of A Coruña (Spain), and by the students of the subject Data Structures, where similar contents are taught, in the Informatics degree at Portucalense University (Portugal).