Péter Pál Boda

Nokia Research Center, Palo Alto, California, United States

Publications (16) · 1.55 total impact

  • Jun Yang · Hong Lu · Zhigang Liu · Péter Pál Boda
    ABSTRACT: In this book chapter, we present a novel system that recognizes and records a person's physical activity using a mobile phone. The sensor data is collected by the built-in accelerometer, which measures the motion intensity of the device. The system recognizes five everyday activities in real time: stationary, walking, running, bicycling, and in vehicle. We first introduce the sensor's data format, sensor calibration, signal projection, feature extraction, and feature selection methods. We then discuss and compare different choices of feature sets and classifiers in detail. The design and implementation of a prototype system is presented, along with resource and performance benchmarks on the Nokia N95 platform. Results show high recognition accuracy in distinguishing the five activities. The last part of the chapter introduces a demo application built on top of our system, a physical activity diary, and a selection of potential applications in the mobile wellness, mobile social sharing, and contextual user interface domains.
    No preview · Chapter · Sep 2010
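The pipeline this abstract describes (motion-intensity measurement, per-window feature extraction, activity classification) can be sketched roughly as follows. The feature set and thresholds here are hypothetical stand-ins for illustration, not the chapter's actual design:

```python
import math

# A minimal, hypothetical sketch of the described pipeline. The feature
# set (mean/std of acceleration magnitude) and the thresholds below are
# illustrative assumptions, not the chapter's actual values.

def magnitude(sample):
    """Collapse a 3-axis accelerometer reading into a single
    orientation-independent motion-intensity value."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def extract_features(window):
    """Per-window features: mean and standard deviation of magnitude."""
    mags = [magnitude(s) for s in window]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return mean, math.sqrt(var)

def classify(features):
    """Toy threshold classifier over motion intensity (the chapter
    instead compares trained classifiers over richer feature sets)."""
    _, std = features
    if std < 0.5:
        return "stationary"
    if std < 2.0:
        return "walking"
    return "running"

# A jittery, high-variance window of readings (m/s^2) looks like running.
window = [(0.0, 0.0, 9.8), (5.0, 3.0, 12.0), (-4.0, -2.0, 6.0)] * 10
print(classify(extract_features(window)))  # running
```

In a real system the window would slide over a continuous sensor stream, and a trained classifier would replace the thresholds.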
  • Zhigang Liu · Hawk-Yin Pang · Jun Yang · Guang Yang · Péter Boda
    ABSTRACT: The ever-increasing capability of mobile devices enables many mobile services far beyond a traditional voice call. In this paper, we present WebCall, a research framework for sharing and utilizing rich contextual information about users, such as phonebook, indoor and outdoor location, and calendar. WebCall also demonstrates a few services (e.g., human-powered questions and answers) that can be built on top of user context. Third-party services can be integrated with WebCall through a simple API and potentially benefit from context filtering. An invitation mechanism is introduced to bootstrap the user base. Privacy concerns are addressed by giving users full control over how their information is shared.
    No preview · Conference Paper · Oct 2009
  •
    ABSTRACT: In this work, the authors design and prototype PassItOn, a fully distributed opportunistic messaging system. Our goal is to build a proof-of-concept platform on real mobile devices and thus show the feasibility and potential of utilizing human movement for dissemination applications. Meanwhile, we seek to shed light on the design, implementation, and deployment issues in building such systems, and thus stimulate new ideas and perspectives on addressing them. Moreover, we aim to offer a real testbed on which new mechanisms, protocols, and use cases can be tested and evaluated.
    Preview · Conference Paper · Feb 2009
  •
    ABSTRACT: PEIR, the Personal Environmental Impact Report, is a participatory sensing application that uses location data sampled from everyday mobile phones to calculate personalized estimates of environmental impact and exposure. It is an example of an important class of emerging mobile systems that combine the distributed processing capacity of the web with the personal reach of mobile technology. This paper documents and evaluates the running PEIR system, which includes mobile handset based GPS location data collection, and server-side processing stages such as HMM-based activity classification (to determine transportation mode); automatic location data segmentation into "trips"; lookup of traffic, weather, and other context data needed by the models; and environmental impact and exposure calculation using efficient implementations of established models. Additionally, we describe the user interface components of PEIR and present usage statistics from a two month snapshot of system use. The paper also outlines new algorithmic components developed based on experience with the system and undergoing testing for inclusion in PEIR, including: new map-matching and GSM-augmented activity classification techniques, and a selective hiding mechanism that generates believable proxy traces for times a user does not want their real location revealed.
    Full-text · Conference Paper · Jan 2009
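The "segmentation into trips" stage mentioned above can be illustrated with a minimal gap-based rule. The rule and the 300-second threshold are assumptions for illustration, not PEIR's published algorithm:

```python
# A minimal illustration of trip segmentation: start a new "trip"
# whenever consecutive GPS fixes are separated by a long time gap.
# The gap rule and 300 s threshold are illustrative assumptions only.

def segment_trips(fixes, gap_threshold_s=300):
    """fixes: time-ordered list of (timestamp_s, lat, lon) tuples.
    Returns a list of trips, each a list of consecutive fixes."""
    trips, current = [], []
    for fix in fixes:
        if current and fix[0] - current[-1][0] > gap_threshold_s:
            trips.append(current)  # long dwell: close the current trip
            current = []
        current.append(fix)
    if current:
        trips.append(current)
    return trips

fixes = [(0, 34.05, -118.24), (60, 34.06, -118.24),       # morning trip
         (4000, 34.10, -118.30), (4060, 34.11, -118.31)]  # later trip
print(len(segment_trips(fixes)))  # 2
```

The real pipeline would combine such segmentation with the HMM-based transportation-mode classification the abstract mentions.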
  • Alan L. Liu · Jun Yang · Péter Pál Boda
    ABSTRACT: This work presents a gesture recognition system via continuous maximum entropy (MaxEnt) training on accelerometer data. MaxEnt models are commonly learned using generalized iterative scaling (GIS), an iterative algorithm for fitting maximum entropy models.
    No preview · Conference Paper · Jan 2009
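The GIS training loop mentioned in the abstract can be sketched compactly for a toy conditional MaxEnt classifier. The gesture data and features below are invented; the paper applies MaxEnt to accelerometer-derived features:

```python
import math

# A compact GIS sketch for a toy conditional MaxEnt classifier over
# binary (feature, class) joint features. Invented data for illustration.
# Note: classic GIS assumes every instance activates the same number of
# features (the constant C); here every instance activates exactly one.

def train_gis(data, labels, n_iters=100):
    classes = sorted(set(labels))
    C = max(len(x) for x in data)  # GIS slowness constant
    lam = {}  # weights for (feature, class) pairs, default 0.0

    # Empirical counts of each observed joint feature.
    emp = {}
    for x, y in zip(data, labels):
        for f in x:
            emp[(f, y)] = emp.get((f, y), 0.0) + 1.0

    for _ in range(n_iters):
        # Model expectations under the current weights.
        model = {}
        for x in data:
            scores = [sum(lam.get((f, c), 0.0) for f in x) for c in classes]
            z = sum(math.exp(s) for s in scores)
            for c, s in zip(classes, scores):
                p = math.exp(s) / z
                for f in x:
                    model[(f, c)] = model.get((f, c), 0.0) + p
        # Multiplicative GIS update, written in log space.
        for key, e in emp.items():
            lam[key] = lam.get(key, 0.0) + math.log(e / model[key]) / C
    return lam, classes

def predict(lam, classes, x):
    return max(classes, key=lambda c: sum(lam.get((f, c), 0.0) for f in x))

# Toy data: feature 0 marks "circle" gestures, feature 1 marks "swipe".
data, labels = [[0], [0], [1], [1]], ["circle", "circle", "swipe", "swipe"]
lam, classes = train_gis(data, labels)
print(predict(lam, classes, [0]))  # circle
```

Each iteration moves the model expectation of every feature toward its empirical expectation, which is exactly the fixed point GIS converges to.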
  •
    ABSTRACT: Widgets are embeddable objects that provide easy and ubiquitous access to dynamic information sources, for example weather, news, or TV program information. Widgets are typically rather static: they present the same information regardless of whether it is relevant to the user's current information needs. In this paper we introduce Capricorn, an intelligent interface for mobile widgets. The interface uses various adaptive web techniques to facilitate navigation. For example, we use collaborative filtering to recommend suitable widgets, and we dim infrequently used widgets. The demonstration presents the Capricorn interface, focusing on its adaptive parts. The user interface is web-based, and as such platform independent. However, our target environment is mobile phones, and thus the interface has been optimized for them.
    Preview · Conference Paper · Jan 2008
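The collaborative-filtering recommendation mentioned above can be sketched with a simple user-based scheme. The Jaccard similarity, the scoring rule, and the usage data are illustrative assumptions; the paper does not specify this exact algorithm:

```python
# A minimal user-based collaborative-filtering sketch in the spirit of
# Capricorn's widget recommendation. Similarity measure and data are
# hypothetical stand-ins, not the paper's actual method.

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(usage, target):
    """usage: user -> list of widgets used. Score each widget the target
    has not used by the summed similarity of the users who do use it."""
    scores = {}
    for user, widgets in usage.items():
        if user == target:
            continue
        sim = jaccard(usage[target], widgets)
        for w in widgets:
            if w not in usage[target]:
                scores[w] = scores.get(w, 0.0) + sim
    # Widgets favoured by the most similar users come first.
    return sorted(scores, key=scores.get, reverse=True)

usage = {
    "alice": ["weather", "news", "tv"],
    "bob":   ["weather", "news", "stocks"],
    "carol": ["tv", "games"],
}
print(recommend(usage, "alice")[0])  # stocks
```

Here "stocks" ranks above "games" because bob's usage overlaps alice's more than carol's does.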
  •
    ABSTRACT: Widgets are embeddable objects that provide easy and ubiquitous access to dynamic information sources, e.g., weather, news or TV program information. Interactions with widgets take place through a so-called widget engine, which is a specialized client-side runtime component that also provides functionalities for managing widgets. As the number of supported widgets increases, managing widgets becomes increasingly complex. For example, finding relevant or interesting widgets becomes difficult and the user interface easily gets cluttered with irrelevant widgets. In addition, interacting with information sources can be cumbersome, especially on mobile platforms. In order to facilitate widget management and interactions, we have developed Capricorn, an intelligent user interface that integrates adaptive navigation techniques into a widget engine. This paper describes the main functionalities of Capricorn and presents the results of a usability evaluation that measured user satisfaction and compared how user satisfaction varies between desktop and mobile platforms.
    Full-text · Conference Paper · Jan 2008
  •
    ABSTRACT: Social interaction is an essential element of our daily lives. One could argue that such interaction is even more important while on vacation. To showcase certain technology enablers, we implement a cruise ship scenario with a few advanced applications. The cruise ship scenario serves not only as a tangible goal but also as a metaphor: these applications can be adapted easily to other environments such as office, classroom, conference, exhibition, museum, malls, etc. Cruise ships represent a unique environment for social life; passengers live in the same time and space and attend the same social activities together, with frequent physical encounters. They also produce/consume large amounts of media content and heavily interact with each other. To meet people's social networking needs we propose a new paradigm called Social Proximity Network (SPN). SPN applications are built on our connectivity and indoor positioning infrastructure, as well as on advanced device-based utilities. By relying on the sensing power of today's mobile devices and mashing up digital content with physical context, SPN services are able to provide rich and unique experiences to cruise passengers, both during and after the trip.
    Full-text · Conference Paper · Jan 2008
  •
    ABSTRACT: Personal mobile devices are ubiquitous, in forms such as mobile handsets and PDAs. Their features and computational power make them a very capable platform for wireless sensing and mobile agents. We present Magrathea, a mobile agent and sensing platform built on top of mobile smart phones. We discuss the motivation behind the platform, present its design choices and architecture, and discuss possible use cases. We also show empirically how the platform can be used to investigate viral phenomena among human test subjects by modeling viruses as mobile agents, to detect the social networks among the test subjects, and how the results can be further refined by simulating other possible agent configurations using the acquired data.
    Full-text · Conference Paper · Jan 2008
  •
    ABSTRACT: The number of wireless Internet users is expected to increase rapidly within the next few years, resulting in a huge increase in the number of bits transferred between wireless devices and networks. Current mobile networks have mainly been designed for optimized delivery of voice calls, not for broadband data. Although mobile networks are evolving, we expect a need for complementary solutions for providing wireless Internet access. We therefore introduce an ad-hoc networking solution using WiFi short-range radio technology to extend cellular wireless broadband coverage and capacity into the places where it is most urgently needed: densely populated areas and indoors. We further propose connecting the ad-hoc networking solution to the operator's total access offering. As a result, wireless users may enjoy an easy-to-use, good-quality, secure, and robust service offering when entering the wireless Internet era, and access providers are equipped with tools to utilize complementary access technologies in the places where they suit best. Operators are able to make efficient use of their existing investments and open new service concepts and business models as mobile Internet use cases evolve. End users also become an essential part of the business ecosystem by providing content, services, and access to other users.
    No preview · Conference Paper · Jul 2007
  •
    ABSTRACT: A mobiscope is a federation of distributed mobile sensors that achieves high-density sampling coverage over a wide area through mobility. Mobiscope applications include public-health epidemiological studies of human exposure using mobile phones and real-time, fine-grained automobile traffic characterization using sensors on fleet vehicles. Mobiscopes fall into two categories: vehicular applications for traffic and automotive monitoring, where a subset of equipped vehicles senses surrounding conditions such as traffic, road conditions, or weather; and applications that use handheld devices. Mobility raises several challenges for mobiscopes, and researchers are working to overcome them in order to offer efficient, robust, private, and secure networking and sensory data collection.
    Preview · Article · May 2007 · IEEE Pervasive Computing
  • Péter Pál Boda
    ABSTRACT: Multimodal integration addresses the problem of combining various user inputs into a single semantic representation that can be used to decide the system's next action(s). The method presented in this paper uses a statistical framework to implement the integration mechanism and includes contextual information in addition to the actual user input. The underlying assumption is that the more information sources are taken into account, the clearer the picture of the user's actual intention in the given context of the interaction. The paper presents the latest results with a Maximum Entropy classifier, with special emphasis on the use of contextual information (type of gesture movements and type of objects selected). Instead of explaining the design and implementation process in detail (a longer paper to be published later will do that), only a short description is provided here of the demonstration implementation, which produces above 91% accuracy for the 1-best and higher than 96% for the accumulated five N-best results.
    Preview · Conference Paper · Jan 2006
  •
    ABSTRACT: This paper introduces a generic architecture that enables the development and execution of mobile multimodal applications proposed within the EU IST-511607 project MobiLife. MobiLife aims at exploiting the synergetic use of multimodal user interface technology and contextual information processing, with the ultimate goal that the two together can provide a beyond-the-state-of-the-art user experience. This led to an integrated concept; components of the underlying architecture are described in detail, and the interfaces towards the application back-end as well as towards context-aware resources are discussed. The paper also positions the current work against existing standardisation efforts and pinpoints the technologies required to support the implementation of a device and modality function within the MobiLife architecture.
    No preview · Conference Paper · Oct 2005
  • Péter Pál Boda · Edward Filisko
    ABSTRACT: This paper introduces a method that generates simulated multimodal input to be used in testing multimodal system implementations, as well as to build statistically motivated multimodal integration modules. The generation of such data is inspired by the fact that true multimodal data, recorded from real usage scenarios, is difficult and costly to obtain in large amounts. On the other hand, thanks to operational speech-only dialogue system applications, a wide selection of speech/text data (in the form of transcriptions, recognizer outputs, parse results, etc.) is available. Taking the textual transcriptions and converting them into multimodal inputs in order to assist multimodal system development is the underlying idea of the paper. A conceptual framework is established which utilizes two input channels: the original speech channel and an additional channel called Virtual Modality. This additional channel provides a certain level of abstraction to represent non-speech user inputs (e.g., gestures or sketches). From the transcriptions of the speech modality, pre-defined semantic items (e.g., nominal location references) are identified, removed, and replaced with deictic references (e.g., here, there). The deleted semantic items are then placed into the Virtual Modality channel and, according to external parameters (such as a pre-defined user population with various deviations), temporal shifts relative to the instant of each corresponding deictic reference are issued. The paper explains the procedure followed to create Virtual Modality data, the details of the speech-only database, and results based on a multimodal city information and navigation application.
    Preview · Article · Jan 2004
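The generation step the abstract describes (replace a location reference with a deictic word, emit the removed item on the Virtual Modality channel with a temporal shift) can be sketched as follows. The toy lexicon and the uniform offset distribution are assumptions; the paper draws offsets from a modeled user population:

```python
import random

# A sketch of Virtual Modality data generation: known location
# references in a transcription are swapped for a deictic word, and
# each removed item is emitted on a second channel with a temporal
# offset. Lexicon and offset distribution are hypothetical.

LOCATIONS = {"harvard square", "central square"}  # toy lexicon

def to_virtual_modality(transcript, rng):
    speech = transcript.lower()
    events = []
    for loc in LOCATIONS:
        if loc in speech:
            speech = speech.replace(loc, "there")  # deictic substitution
            # The deleted item goes to the Virtual Modality channel,
            # time-shifted relative to the deictic reference.
            events.append({"item": loc, "offset_s": rng.uniform(-1.0, 1.0)})
    return speech, events

speech, vm = to_virtual_modality("How do I get to Harvard Square",
                                 random.Random(0))
print(speech)         # how do i get to there
print(vm[0]["item"])  # harvard square
```

A real generator would work from parsed semantic items rather than raw string matching, but the two-channel output is the same shape.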
  • Péter Pál Boda
    ABSTRACT: Integration of various user input channels for a multimodal interface is not just an engineering problem. To fully understand users in the context of an application and the current session, solutions are sought that process information from different intentional, i.e. user-originated, as well as from passively available sources in a uniform manner. As a first step towards this goal, the work demonstrated here investigates how intentional user input (e.g. speech, gesture) can be seamlessly combined to provide a single semantic interpretation of the user input. For this classical Multimodal Integration problem the Maximum Entropy approach is demonstrated with 76.52% integration accuracy for the 1st and 86.77% accuracy for the top 3-best candidates. The paper also exhibits the process that generates multimodal data for training the statistical integrator, using transcribed speech from MIT's Voyager application. The quality of the generated data is assessed by comparing to real inputs to the multimodal version of Voyager.
    No preview · Conference Paper · Jan 2004
  •
    ABSTRACT: The paper introduces three multimodal context-aware mobile demonstrator applications designed and developed within the scope of the EU IST-511607 project MobiLife. The three family-oriented applications, Mobile Multimedia Infotainer, Wellness-Aware Multimodal Gaming System, and FamilyMap, provide advanced multimodal user interactions supported by context-aware functionalities such as personalisation and profiling. The paper briefly explains the underlying architectural solutions and how the development work fits into the User-Centered Design process. The ultimate intention is to enhance the acceptance and usability of current mobile applications with beyond-state-of-the-art user interaction capabilities, by researching how contextual information can affect the functionality of the multimodal user interface and how to provide users with a seamless, habitable, and non-intrusive user interaction experience.
    Full-text · Article