IBM Systems Journal (IBM SYST J)

Publisher: International Business Machines Corporation

Description

The IBM Systems Journal is a quarterly, refereed technical publication featuring the work of authors from the systems and software fields in information science and the computer industry. The papers are written for a technically aware readership in the worldwide software and systems professional community: technical professionals, researchers, and users. Each paper is peer-reviewed for content, currency, and value by recognized experts in the field. The Web version of the journal is free, but the printed version requires a subscription.

  • Impact factor
    1.29
  • 5-year impact
    1.98
  • Cited half-life
    8.60
  • Immediacy index
    0.00
  • Eigenfactor
    0.00
  • Article influence
    0.38
  • Website
    IBM Systems Journal website
  • Other titles
    IBM systems journal, International business machines systems journal
  • ISSN
    0018-8670
  • OCLC
    1445487
  • Material type
    Periodical, Internet resource
  • Document type
    Journal / Magazine / Newspaper, Internet Resource

Publications in this journal

  • ABSTRACT: This paper discusses high availability and disaster recovery solutions and their differences and presents the concepts and technical details of various solutions that combine them for highly critical environments. It discusses the business and regulatory issues that are driving the requirements for these solutions and presents various data center topologies that customers are choosing when implementing 3-site solutions.
    IBM Systems Journal 02/2008;
  • ABSTRACT: IBM Parallel Sysplex® is a clustering technology that was designed to address specific client business objectives for IBM mainframe System z® servers. The nondisruptive addition of scalable processing capacity and improved application availability with respect to unplanned and planned outages were two key design objectives. This paper focuses on the evolving technology options that support the business objective of continuous availability. Key technology options are discussed relative to their importance to achieving continuous availability. Best practices for effectively using these technology options to improve availability, based on extensive client experiences, are recommended.
    IBM Systems Journal 02/2008;
  • ABSTRACT: All too often, software designers ignore the fact that a running computer system is a combination of software and hardware. In this combination, hardware may play a crucial role, particularly in time-sensitive systems. In this paper, we first explore the nature and impact that platforms may have on application software and its design. Based on this analysis, a canonical model of software platforms is proposed to assist in more accurately factoring in the effects of platforms on the design of real-time and embedded software applications. Finally, we show how this model can be realized using modern model-driven design standards and methods.
    IBM Systems Journal 02/2008;
  • ABSTRACT: In this paper, we describe continuously available services and application hosting on the Events/IBM.com Infrastructure (EI)—a continuously available virtualized environment based on three active data centers that has demonstrated 100-percent availability for many premier Web sites, including www.ibm.com. The environment consists of simultaneously active paths spanning three geographically diverse data centers. We describe techniques for automated rapid scaling and continuous availability using IBM WebSphere® clustering, IBM DB2™ replication, load balancing, virtualized network infrastructure components, and IBM System p® virtualization capabilities. We explore best practices for deploying application releases and updates, applying fix packs, and updating hardware and operating systems, all without interrupting service. In addition, we discuss how to troubleshoot and synchronize the vast flows of a multipath redundant solution. Finally, we describe the requirements that dictate continuously available hosting solutions and modifications to our multisite approach for use in fewer data centers. Application of the topics in this paper has resulted in continuous service with no planned or unplanned interruptions over many years of application hosting within the EI.
    IBM Systems Journal 02/2008;
  • ABSTRACT: The service-oriented modeling and architecture modeling environment (SOMA-ME) is first a framework for the model-driven design of service-oriented architecture (SOA) solutions using the service-oriented modeling and architecture (SOMA) method. In SOMA-ME, Unified Modeling Language (UML™) profiles extend the UML 2.0 metamodel to domain-specific concepts. SOMA-ME is also a tool that extends the IBM Rational® Software Architect product to provide a development environment and automation features for designing SOA solutions in a systematic and model-driven fashion. Extensibility, traceability, variation-oriented design, and automatic generation of technical documentation and code artifacts are shown to be some of the properties of the SOMA-ME tool.
    IBM Systems Journal 02/2008;
  • ABSTRACT: This paper introduces Distributed Responsive Infrastructure-Virtualization Environment (DRIVE), a tool that provides both an integrated development environment (IDE) and an execution environment and thus supports the entire life cycle of sensor/actuator applications. Developers are only responsible for implementing the core event-handling logic, whereas DRIVE generates the necessary code for message passing and invocation, thus reducing the development skills required. The development methodology, which is component based and model driven, separates the solution model, which captures the business logic, from the deployment model, which reflects the physical computing infrastructure. This allows the administrators to configure and deploy applications on various infrastructure topologies. To illustrate the benefits of DRIVE, we present an example application, dock-door receiving, and show the ways in which DRIVE supports the modeling and development of the application logic and the multiphase deployment of the resulting application in a production environment.
    IBM Systems Journal 02/2008;
  • ABSTRACT: In this paper we examine managed service in the information and communication technology (ICT) sector, a sector characterized by the polarization between infrastructure service, which is growing in scale and increasingly becoming a commodity, and customized or even one-of-a-kind projects. We refer to the approaches taken by three highly innovative advanced service companies, IBM, Ericsson, and Cable & Wireless, to package and deliver ICT service on a more industrialized basis. We identify the six-stage process that describes their journeys to date. We also describe some of the challenges they faced on that journey as well as those currently facing them as they move to a higher degree of industrialization. To address these challenges, we propose a model with three axes: offering development, service delivery, and go-to-market. The model demonstrates how the increasing industrialization of managed service requires an approach integrating all three of these dimensions. We also show that strong governance is required to address the impacts of technological evolution, marketplace dynamics, and corporate culture.
    IBM Systems Journal 02/2008;
  • ABSTRACT: Leveraging redundant resources is a common means of addressing availability requirements, but it often implies redundant costs as well. At the same time, virtualization technologies promise cost reduction through resource consolidation. Virtualization and high-availability (HA) technologies can be combined to optimize availability while minimizing costs, but merging them properly introduces new challenges. This paper looks at how virtualization technologies and techniques can augment and amplify traditional HA approaches while avoiding potential pitfalls. Special attention is paid to applying HA configurations (such as active/active and active/passive) to virtualized environments, stretching virtual environments across physical machine boundaries, resource-sharing approaches, field experiences, and avoiding potential hazards.
    (A minimal, illustrative Java sketch of the active/passive failover pattern appears after this publication list.)
    IBM Systems Journal 02/2008;
  • ABSTRACT: Providers of highly reliable information technology (IT) services have historically adopted multiple service delivery quality standards and have obtained certificates of registration or certification associated with these standards. In this paper, we present a case study involving a provider of IT infrastructure services and solutions. We describe the business context of the service provider, its approach to the analysis of the requirements of multiple standards, process integration efforts (both local and global), and the reuse of documentation and other evidentiary data in the context of obtaining certificates of registration or certifications. We compare the evidentiary data (e.g., documentation, observations, and interviews) used in the diagnostics of the International Organization for Standardization (ISO) 9001:2000 standard and the eSourcing Capability Model for Service Providers standard to evaluate the unique value that each standard contributes to IT service delivery. The case study also provides initial examples of measures resulting from the adoption of these two quality standards that may be used to improve service delivery.
    IBM Systems Journal 02/2008;
  • ABSTRACT: Designing and implementing a business resilience (or disaster recovery) plan is a complex procedure for customers, and the impact of implementing an incorrect or incomplete plan can be significant. For some customers, being able to recover their data center functionality in a short period of time may be of the utmost importance; for others, recovering in a short period of time may be worthless if the data with which their database is restored is hours or days old. Also of importance is the impact on business-critical applications when copies of data are being made. This paper presents the IBM TotalStorage™ Productivity Center for Replication (TPC-R), a tool designed to help customers implement cost-effective data replication solutions for continuous availability and disaster recovery. We give an overview of TPC-R, describe recent enhancements to TPC-R that are available on all supported platforms (as well as those that are unique to the z/OS™ platform), and discuss the ways in which customers can exploit TPC-R to implement business resilience solutions, with a focus on the various trade-offs customers must consider when choosing between different storage replication technologies.
    IBM Systems Journal 02/2008;
  • ABSTRACT: Telecommunications service providers (TSPs) are currently faced with a significant number of threats to their core business models. In addition to competition from traditional TSPs, they must also face increasing competition from Internet service providers such as Google, Yahoo!, and eBay, which have succeeded in implementing a variety of very useful communications services, including voice services, for a fraction of the traditional cost. This new set of threats is causing TSPs to reexamine their business models, explore ways of reducing their operational expenses, and devise a means of reducing the typical service life cycle (from concept to delivery, typically more than a year) to a few weeks. To help address these issues, IBM has created an SOA-centric (service-oriented-architecture-centric) reference architecture called the telecommunications service delivery platform (SDP). In this paper, we present three case studies involving field deployments to the networks of three major wireless TSPs and describe the role of the IBM SDP and its key benefits. We highlight the architecture and key use cases involved in these carrier-grade deployments, and articulate the best practices and valuable lessons gleaned from them.
    IBM Systems Journal 02/2008;
  • ABSTRACT: Conventional messaging technologies have been designed for large transactional systems, making the prediction and calibration of their delay impractical. In this paper, we present a minimal messaging system, implemented in Java™, that is designed to enable the analysis, modeling, and calibration of the expected performance of these technologies. We describe the algorithms and protocols that underlie this messaging system, show how an analysis can be performed, and give the actual measured performance figures. We show that, in the test environment, the system achieves a throughput of more than 100,000 messages per second with a maximum latency of less than 120 milliseconds. At 10,000 messages per second, a maximum latency of 5 milliseconds is measured. The algorithms make use of lock-free data structures, which allow the throughput to scale on multi-core systems.
    (A minimal, illustrative Java sketch of message passing through a lock-free queue appears after this publication list.)
    IBM Systems Journal 02/2008;
  • ABSTRACT: For a service delivery system to produce optimal solutions to service-related business problems, it must be based on an approach that involves many of the traditional functional areas in an organization. Unfortunately, most business school curricula mirror the older traditional organizational structure that dominated businesses throughout most of the twentieth century. This structure typically consisted of vertically organized functions (or silos), such as production, marketing, and finance, with each silo operating largely independently of the others. Similarly, business schools today are usually organized by functional departments—such as marketing, finance, accounting, and operations management—with little interaction among them. Within this traditional silo-structured environment, it is very difficult to properly develop a curriculum, or even a course, in service management. Consequently, a significant gap exists between the education received by business school graduates and the skills that they need to succeed in today's service-intense environment. This paper explores the underlying causes of this gap and suggests ways in which the emerging field of service science can facilitate the changes in business school curricula that will make them more relevant in meeting the needs of today's businesses and organizations.
    IBM Systems Journal 02/2008;
  • ABSTRACT: This paper relates our experiences at the University of California, Berkeley (UC Berkeley), designing a service science discipline. We wanted to design a discipline of service science in a principled and theoretically motivated way. We began our work by asking, “What questions would a service science have to answer?” and from that we developed a new framework for understanding service science. This framework can be visualized as a matrix whose rows are stages in a service life cycle and whose columns are disciplines that can provide answers to the questions that span the life cycle. This matrix systematically organizes the issues and challenges of service science and enables us to compare our model of a service science discipline with other definitions and curricula. This analysis identified gaps, overlaps, and opportunities that shaped the design of our curriculum and in particular a new survey course that serves as the cornerstone of service science education at UC Berkeley.
    IBM Systems Journal 02/2008;
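The active/passive configuration named in the abstract on combining virtualization with high availability can be pictured with a minimal, hypothetical Java sketch; it is not drawn from the paper, and the class name, heartbeat mechanism, and timeout values are illustrative assumptions. A standby node watches a heartbeat renewed by the active node and promotes itself once the heartbeat goes stale.

```java
// Hypothetical sketch of an active/passive pair (not from the paper).
// In a real cluster the heartbeat would live in shared storage, a coordination
// service, or be exchanged over the network; here it is a shared timestamp.
import java.util.concurrent.atomic.AtomicLong;

public class ActivePassiveSketch {
    private static final AtomicLong lastHeartbeatMillis =
            new AtomicLong(System.currentTimeMillis());
    private static final long TIMEOUT_MILLIS = 3_000; // illustrative timeout

    public static void main(String[] args) throws InterruptedException {
        // Active node: renew the heartbeat every second, then simulate a crash.
        Thread active = new Thread(() -> {
            for (int i = 0; i < 5; i++) {
                lastHeartbeatMillis.set(System.currentTimeMillis());
                sleep(1_000);
            }
            // Simulated failure: the active node stops renewing its heartbeat.
        });

        // Passive node: poll the heartbeat and promote itself when it goes stale.
        Thread passive = new Thread(() -> {
            while (System.currentTimeMillis() - lastHeartbeatMillis.get() < TIMEOUT_MILLIS) {
                sleep(500);
            }
            System.out.println("Heartbeat stale; standby node takes over the active role.");
        });

        active.start();
        passive.start();
        active.join();
        passive.join();
    }

    private static void sleep(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```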
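The lock-free design highlighted in the messaging-system abstract can likewise be illustrated with a minimal, hypothetical Java sketch; it is not the authors' implementation, and the message count and labels are illustrative assumptions. A producer and a consumer exchange messages through java.util.concurrent.ConcurrentLinkedQueue, a non-blocking queue, so neither thread ever waits on a lock, which is the property the abstract credits for throughput scaling on multi-core systems.

```java
// Hypothetical sketch of lock-free message passing (not the paper's system):
// a producer enqueues and a consumer dequeues through a non-blocking queue.
import java.util.concurrent.ConcurrentLinkedQueue;

public class LockFreeQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        final ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<>();
        final int messages = 100_000; // illustrative message count

        // Producer thread: enqueue messages without taking a lock.
        Thread producer = new Thread(() -> {
            for (int i = 0; i < messages; i++) {
                queue.offer("msg-" + i);
            }
        });

        // Consumer thread: poll until every message has been drained.
        Thread consumer = new Thread(() -> {
            int received = 0;
            while (received < messages) {
                if (queue.poll() != null) {
                    received++;
                }
            }
            System.out.println("Received " + received + " messages");
        });

        long start = System.nanoTime();
        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("Elapsed: " + elapsedMs + " ms");
    }
}
```

On a multi-core machine the producer and consumer run concurrently and make progress independently, since enqueue and dequeue never block each other.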