Conference Paper

Agent Based Web Browser


Abstract

This system, the "Agent Based Web Browser", resides on the user's computer and provides effective advice to help users locate the relevant information they need while browsing, rendering WWW documents from the Internet and taking advantage of text formatting, hypertext links, images, sound, motion, and other features. The implementation, "Angel SOFT", is compatible with modern Web pages and aims at effective and efficient processing, active security features, fast response times, extensive free personalization of the user's online experience, and superior speed and performance. It operates autonomously, running in the background of the computer, while the user retains absolute control over the browsing path. This new generation of browser is intended to be smarter by working both online and offline, employing AI agents that are communicative, capable, and autonomous: able to understand the user's goals, preferences, and constraints, and to act without the user being in control the whole time. Angel SOFT is a client-side, intelligent-agent user interface that communicates with the World Wide Web while tracking user behavior and intent.
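The abstract describes an agent that observes the user's browsing and advises on relevant pages. A minimal sketch of that idea, assuming a hypothetical interest profile built from terms on visited pages (this is illustrative only, not the actual Angel SOFT design):

```python
from collections import Counter

class BrowsingAssistant:
    """Illustrative background browsing agent: it observes pages the user
    visits, accumulates an interest profile, and ranks candidate links
    against that profile. (Hypothetical sketch, not Angel SOFT itself.)"""

    def __init__(self):
        self.profile = Counter()  # term -> weight learned from visited pages

    def observe(self, page_terms):
        # Each visited page reinforces the user's interest profile.
        self.profile.update(page_terms)

    def recommend(self, candidates):
        # candidates: {url: [terms]}; rank by overlap with the profile.
        score = lambda terms: sum(self.profile[t] for t in terms)
        return sorted(candidates, key=lambda u: score(candidates[u]), reverse=True)

agent = BrowsingAssistant()
agent.observe(["python", "agents", "browser"])
agent.observe(["agents", "ai"])
ranked = agent.recommend({
    "a.example": ["agents", "python"],
    "b.example": ["cooking"],
})
print(ranked[0])  # the agent-related page ranks first
```

The agent runs alongside normal browsing, which matches the abstract's requirement that the user keep absolute control over the browsing path while the agent only advises.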


Conference Paper
Analysis of contemporary web browser sessions for forensic purposes faces one major challenge: distinguishing Internet events and sessions across multiple tabs. While some information is contained in the log files, identifying coherency and concurrency is necessary to generate "specificity of attribution". In this work, we focus on isolating multiple simultaneous browser sessions using network and browser-related metadata on browsers that deploy a separate process for each browser session, which we term the process-separated browser implementation. In a previous work, it was shown that network artifacts can be associated with browser artifacts to relate network streams with browser sessions on multi-threaded browser implementations. However, in process-separated browser implementations, where each session has its own network stream, the ability to track all network streams via a single process is not available. Therefore, there is a need to associate each network stream with its corresponding process for reconstruction. In this paper, we propose an algorithm to reconstruct multiple simultaneous browser sessions on browser applications that use a separate process for each browser session. We achieve this by developing a representation for the information associated with a browser session. Further, we define two relationships, viz., 'stream coherency' and 'session concurrency', based on the associations discovered among the network and browser artifacts. Finally, we develop an algorithm called "Samhita" to identify the number of simultaneous browser sessions deployed and to associate them with their respective processes and network streams. We take the reader through specially designed experiments that elicit browser-session intelligence and the process of separating tabbed sessions using the timing information present in the browser and session contexts.
Conference Paper
Internet browsers support multiple browser tabs, each browser tab capable of initiating and maintaining a separate web session, accessing multiple uniform resource identifiers (URIs) simultaneously. As a consequence, network traffic generated as part of a web request becomes indistinguishable across tabbed sessions. However, it is possible to find the specificity of attribution in the session-related context information recorded as metadata in log files (in servers and clients) and as network traffic related logs in routers and firewalls, along with their metadata. The forensic questions of “who,” “what” and “how” are easily answered using the metadata-based approach presented in this chapter. The same questions can help systems administrators decide on monitoring and prevention strategies. Metadata, by definition, records context information related to a session; such metadata recordings transcend sources. This chapter presents an algorithm for reconstructing multiple simultaneous browser sessions on browser applications with multi-threaded implementations. Two relationships, coherency and concurrency, are identified based on metadata associations across artifacts from browser history logs and network packets recorded during active browser sessions. These relationships are used to develop the algorithm that identifies the number of simultaneous browser sessions that are deployed and then reconstructs the sessions. Specially-designed experiments that leverage timing information alongside the browser and session contexts are used to demonstrate the processes for eliciting intelligence and separating and reconstructing tabbed browser sessions.
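The two abstracts above attribute network streams to tabbed sessions using timing relationships in the metadata. A toy sketch of that attribution step, assuming hypothetical history and stream records with timestamps (this is not the actual Samhita algorithm, only an illustration of timing-based association):

```python
def associate_streams(history, streams, window=2.0):
    """Attribute each network stream to the tab whose history entry is
    closest in time, within `window` seconds. Illustrative sketch of the
    timing-based association the abstracts describe; not Samhita itself."""
    sessions = {}
    for s in streams:
        # Find the browser-history entry closest in time to the stream start.
        best = min(history, key=lambda h: abs(h["ts"] - s["ts"]))
        if abs(best["ts"] - s["ts"]) <= window:
            sessions.setdefault(best["tab"], []).append(s["dst"])
    return sessions

history = [{"tab": 1, "ts": 10.0}, {"tab": 2, "ts": 10.5}]
streams = [{"ts": 10.1, "dst": "news.example"},
           {"ts": 10.6, "dst": "mail.example"}]
print(associate_streams(history, streams))
# {1: ['news.example'], 2: ['mail.example']}
```

A real reconstruction would also need the coherency and concurrency relations over browser and network artifacts; this sketch shows only why per-stream timestamps make per-process attribution feasible.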
Article
Full-text available
In this paper, we propose intelligent notification of new e-mail and describe an implementation of the notification system. There are two ways to read e-mail on a computer: a mail client on the user's personal computer, or a web mail system on a web server. Web mail is convenient because users can read e-mail from other people's computers, but it has low immediacy: users must look at the web mail page again and again to check for new messages. To solve this problem, we implemented the web mail system WisdomMail, a web application based on a web agent. WisdomMail can display a notification of new e-mail in the browser without special plug-ins. The system uses the web agent framework MiSpider, which enables developers to implement a web agent with persistence, message passing, and a graphical user interface. Using MiSpider, a new-mail notification appears on whatever web page the user is currently browsing, and the system automatically adjusts the timing and on-page position of the notification. Users can thus read new e-mail without revisiting the web mail page. Finally, we evaluate the scalability of the notification system and present experimental results.
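The core of the notification idea above is polling the mailbox and surfacing only previously unseen messages. A minimal sketch, with hypothetical message records and a notification callback standing in for the MiSpider-injected on-page widget:

```python
def check_new_mail(inbox, seen_ids, notify):
    """Poll the mailbox and fire a notification for each message not seen
    before. Sketch of the WisdomMail idea; in the real system, MiSpider
    injects the notification into whatever page the user is viewing."""
    new = [m for m in inbox if m["id"] not in seen_ids]
    for m in new:
        notify(f"New mail: {m['subject']}")
        seen_ids.add(m["id"])
    return len(new)

seen = set()
alerts = []
inbox = [{"id": 1, "subject": "Hello"}, {"id": 2, "subject": "Meeting"}]
check_new_mail(inbox, seen, alerts.append)  # first poll: both are new
check_new_mail(inbox, seen, alerts.append)  # second poll: nothing new
print(alerts)  # ['New mail: Hello', 'New mail: Meeting']
```

Tracking seen message IDs is what keeps repeated polls from re-notifying, which is the property that lets the agent poll frequently for immediacy.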
Article
Full-text available
This article reports on a new agent technology that helps Internet surfers scout out the online terrain and recommends the best paths for the user to follow. These agents are called reconnaissance agents: programs that look ahead in the user's browsing activities and act as an advance scout, saving the user needless searching and recommending the best paths to follow. Reconnaissance agents are also among the first representatives of a new class of computer applications, learning agents, which infer user preferences and interests by tracking interactions between the user and the machine over the long term. Two examples of reconnaissance agents, Letizia and PowerScout, are described in the article. The main difference is that Letizia uses local reconnaissance, searching the neighborhood of the current page, while PowerScout uses global reconnaissance, making use of a traditional search engine to search the Web in general. Both the Letizia and the PowerScout styles of agent have their advantages and disadvantages. This new technology permits users to maintain focus on their quest for information while decreasing the time and frustration of finding material of interest.
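Letizia-style local reconnaissance amounts to a bounded lookahead through the link neighborhood of the current page, flagging pages that match the user's inferred interests. A sketch under assumed toy structures (a dict-based link graph and a keyword interest set; the real Letizia works on live pages and a learned profile):

```python
def local_reconnaissance(graph, start, interests, depth=2):
    """Breadth-first lookahead from the current page, returning linked
    pages whose terms overlap the user's interests. Illustrative sketch
    of Letizia-style local reconnaissance only."""
    frontier, found, visited = [start], [], {start}
    for _ in range(depth):
        next_frontier = []
        for page in frontier:
            for link in graph[page]["links"]:
                if link in visited:
                    continue
                visited.add(link)
                next_frontier.append(link)
                if interests & set(graph[link]["terms"]):
                    found.append(link)
        frontier = next_frontier
    return found

graph = {
    "home": {"links": ["a", "b"], "terms": []},
    "a":    {"links": ["c"], "terms": ["sports"]},
    "b":    {"links": [], "terms": ["agents"]},
    "c":    {"links": [], "terms": ["agents", "ai"]},
}
print(local_reconnaissance(graph, "home", {"agents"}))
# ['b', 'c']
```

A PowerScout-style global agent would instead submit the interest terms to a search engine rather than crawl outward from the current page; the trade-off is neighborhood relevance versus Web-wide coverage.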
Article
Full-text available
Agents can personalize otherwise impersonal computational systems. The World Wide Web presents the same appearance to every user regardless of that user's past activity. Web Browser Intelligence (WBI, pronounced "WEB-ee") is an implemented system that organizes agents on a user's workstation to observe user actions, proactively offer assistance, modify web documents, and perform new functions. WBI can annotate hyperlinks with network speed information, record pages viewed for later access, and provide shortcut links for common paths. In this way, WBI personalizes a user's web experience by joining personal information with global information to effectively tailor what the user sees. Keywords: agents, World Wide Web, user models. WBI may be downloaded from http://www.raleigh.ibm.com/wbi/wbisoft.htm. One goal of networked computing is to provide all users access to the world's vast information resources. This trend toward a single, global database clearly shows i...
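WBI's document modification boils down to rewriting page markup on the user's workstation before display, for example appending an annotation to each hyperlink. A simplified sketch, assuming a hypothetical `speed_for` lookup in place of WBI's measured network-speed data (real WBI operates as an intermediary and handles full HTML, not this toy regex):

```python
import re

def annotate_links(html, speed_for):
    """Append an annotation (e.g. a network-speed hint for the target)
    after each hyperlink. Illustrative sketch of WBI-style page rewriting
    on the workstation; not the actual WBI implementation."""
    def add_note(match):
        url = match.group(1)
        return f"{match.group(0)} [{speed_for(url)}]"
    return re.sub(r'<a href="([^"]+)">[^<]*</a>', add_note, html)

page = '<a href="http://fast.example">Fast</a> and <a href="http://slow.example">Slow</a>'
speeds = {"http://fast.example": "fast", "http://slow.example": "slow"}
print(annotate_links(page, speeds.get))
# <a href="http://fast.example">Fast</a> [fast] and <a href="http://slow.example">Slow</a> [slow]
```

Because the rewrite happens between fetch and display, the page author needs no involvement: this is the "joining personal information with global information" the abstract describes.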
Article
Full-text available
The World Wide Web (WWW) is becoming increasingly important for business, education, and entertainment. Popular web browsers make access to Internet information resources relatively easy for novice users. Simply by clicking on a link, a new page of information replaces the current one on the screen. Unfortunately however, after following a number of links, people can have difficulty remembering where they've been and navigating links they have followed. As one's collection of web pages grows and as more information of interest populates the web, effective navigation becomes an issue of fundamental importance. We are developing a prototype zooming browser to explore alternative mechanisms for navigating the WWW. Instead of having a single page visible at a time, multiple pages and the links between them are depicted on a large zoomable information surface. Pages are scaled so that the page in focus is clearly readable with connected pages shown at smaller scales to provide context. As a...
Article
Full-text available
In this paper, we sketch a model of what people do when they search for information on the web. From a theoretical perspective, our interest lies in the cognitive processes and internal representations that are both used in and affected by the search for information. From a practical perspective, our aim is to provide personal support for information searching and to effectively transfer knowledge gained by one person to another. Toward these ends, we first collected behavioral data from people searching for information on the web; we next analyzed these data to learn what the searchers were doing and thinking; and we then constructed specific web agents to support the searching behaviors we identified. The World Wide Web connects tens of millions of people with hundreds of millions of pages of information. The web's explosive growth, its simple means for authoring, and its simple means of access have combined to make it a place many people now rely on to find information ...
Article
A reference architecture for a domain captures the fundamental subsystems common to systems of that domain, as well as the relationships between these subsystems. A reference architecture can be useful both at design time and during maintenance: it can improve understanding of a given system, aid in analyzing trade-offs between different design options, or serve as a template for designing new systems and reengineering existing ones. We examine the history of the web browser domain and identify several underlying forces that have contributed to its evolution. We develop a reference architecture for web browsers based on two well-known open source implementations, and we validate it against five additional implementations. We discuss the maintenance implications of different strategies for code reuse and identify several underlying evolutionary phenomena in the web browser domain; namely, emergent domain boundaries, convergent evolution, and tension between open and closed source development approaches.
Article
The above levels of linguistic processing reflect an increasing size of unit of analysis as well as increasing complexity and difficulty as we move from top to bottom. The larger the unit of analysis becomes (i.e., from morpheme to word to sentence to paragraph to full document), the less precise the language phenomena and the greater the free choice and variability. This decrease in precision results in fewer discernible rules and more reliance on less predictable regularities as one moves from the lowest to the highest levels. Additionally, higher levels presume reliance on the lower levels of language understanding, and the theories used to explain the data move more into the areas of cognitive psychology and artificial intelligence. As a result, the lower levels of language processing have been more thoroughly investigated and incorporated into IR systems. I am aware of only one system that includes all levels of language analysis.
Conference Paper
A reference architecture for a domain captures the fundamental subsystems common to systems of that domain as well as the relationships between these subsystems. Having a reference architecture available can aid both during maintenance and at design time: it can improve understanding of a given system, it can aid in analyzing tradeoffs between different design options, and it can serve as a template for designing new systems and re-engineering existing ones. In this paper, we examine the history of the Web browser domain and identify several underlying phenomena that have contributed to its evolution. We develop a reference architecture for Web browsers based on two well known open source implementations, and we validate it against two additional implementations. Finally, we discuss our observations about this domain and its evolutionary history; in particular, we note that the significant reuse of open source components among different browsers and the emergence of extensive Web standards have caused the browsers to exhibit "convergent evolution".
Conference Paper
The use and number of Web browsers have mushroomed. A wide variety of special purpose Web browser based applications are being developed for the Internet and Intranets. Software engineers developing these new applications need generic models that describe Web browser architectures, components, features and functionalities to better structure the software architecture of their applications. The paper presents an object oriented domain analysis of commonly used Web browsers
Conference Paper
The concept of an agent has recently become important in Artificial Intelligence (AI), and its relatively youthful subfield, Distributed AI (DAI). Our aim in this paper is to point the reader at what we perceive to be the most important theoretical and practical issues associated with the design and construction of intelligent agents. For convenience, we divide the area into three themes (though as the reader will see, these divisions are at times somewhat arbitrary). Agent theory is concerned with the question of what an agent is, and the use of mathematical formalisms for representing and reasoning about the properties of agents. Agent architectures can be thought of as software engineering models of agents; researchers in this area are primarily concerned with the problem of constructing software or hardware systems that will satisfy the properties specified by agent theorists. Finally, agent languages are software systems for programming and experimenting with agents; the...
Article
As large systems evolve, their architectural integrity tends to decay. Reverse engineering tools, such as PBS [7, 19], Rigi [15], and Acacia [5], can be used to acquire an understanding of a system's "as-built" architecture and in so doing regain control over the system. A problem that has impeded the widespread adoption of reverse engineering tools is the tight coupling of their subtools, including source code "fact" extractors, visualization engines, and querying mechanisms; this coupling has made it difficult, for example, for users to employ alternative extractors that might have different strengths or understand different source languages. The TAXFORM project has sought to investigate how different reverse engineering tools can be integrated into a single framework by providing mappings to and from common data schemas for program "facts" [2]. In this paper, we describe how we successfully integrated the Acacia C and C++ fact extractors into the PBS system, and how we were then a...
Conceptual Architecture of Mozilla Firefox (version 2
  • J Haines
  • I Lai
  • John Chun-Hung
  • Josh Chiu
  • Fairhead