SSRN Electronic Journal

Published by Elsevier BV

Online ISSN: 1556-5068

Articles


The Unique Needs of Medicare Beneficiaries
Article

November 2004

·

1,968 Reads

Reginald D Williams
Medicare beneficiaries have a distinctive combination of needs and circumstances that makes providing appropriate health care services for them a greater challenge than for the working-age population. Medicare beneficiaries generally have poorer health status and lower literacy levels, and suffer from more chronic conditions, than the general population. More than half have arthritis and high blood pressure. Many Medicare beneficiaries are not totally independent: forty-five percent need help with at least one of the key activities of daily living, such as eating, bathing, shopping, using the telephone, or balancing a checkbook. Medicare beneficiaries also have an increased likelihood of financial insecurity; nearly sixty percent have annual incomes below $20,000, compared to only fourteen percent of the working-age population. Given the complex needs and circumstances of many Medicare beneficiaries, managing their health and navigating the health care system can be challenging. Choosing physicians, understanding doctors' orders, deciding among complex treatment options, and choosing a drug discount card are all more difficult for them than for the general population. Decreased mobility hampers some beneficiaries' ability to travel to health care facilities. Others do not use the telephone easily because they cannot hear or see well. Many do not have computers or know how to use the internet to find valuable information. For all these reasons, many beneficiaries need appropriate services and supports to ensure that Medicare is meeting their needs.

Creation of Safety-Net-Based Provider Networks Under the California Health Care Coverage Initiative: Interim Findings

December 2009

·

529 Reads

·

Cori Reifman

·

Anna Davis

·

[...]


Organized provider networks have been developed as a method of achieving efficiencies in the delivery of health care, and to reduce problems such as limited access to specialty and tertiary care, fragmentation and duplication of services, low-quality care and poor patient outcomes. This policy brief examines the experience of ten California counties participating in the Health Care Coverage Initiative (HCCI), a demonstration project to expand coverage to low-income and indigent residents, in overcoming these barriers and creating provider networks based on existing safety-net systems. These interim findings should provide valuable information for future efforts to develop effective networks based on safety-net providers. Specifically, the brief examines the structure of the networks built, how they were implemented, the types of services and reimbursements offered, the health information technologies employed in the effort, as well as plans to further enhance the networks in the future. The counties involved in HCCI are: Alameda, Contra Costa, Kern, Los Angeles, Orange, San Diego, San Francisco, San Mateo, Santa Clara and Ventura.

Choosing Not to Choose

October 2014

·

268 Reads

Choice can be an extraordinary benefit or an immense burden. In some contexts, people choose not to choose, or would do so if they were asked. In part because of limitations of "bandwidth," and in part because of awareness of their own lack of information and potential biases, people sometimes want other people to choose for them. For example, many people prefer not to make choices about their health or retirement plans; they want to delegate those choices to a private or public institution that they trust (and may well be willing to pay a considerable amount to those who are willing to accept such delegations). This point suggests that however well accepted, the line between active choosing and paternalism is often illusory. When private or public institutions override people's desire not to choose and insist on active choosing, they may well be behaving paternalistically, through a form of choice-requiring paternalism. Active choosing can be seen as a form of libertarian paternalism, and a frequently attractive one, if people are permitted to opt out of choosing in favor of a default (and in that sense permitted not to choose); it is a form of nonlibertarian paternalism insofar as people are required to choose. For both ordinary people and private or public institutions, the ultimate judgment in favor of active choosing, or in favor of choosing not to choose, depends largely on the costs of decisions and the costs of errors.

Medicare and Communities of Color

December 2004

·

294 Reads

Medicare and Communities of Color originates from topics in The Role of Private Health Plans in Medicare: Lessons from the Past, Looking to the Future, the final report of the Study Panel on Medicare and Markets convened by the National Academy of Social Insurance. The brief also updates information from the Kaiser Family Foundation's 1999 brief, Faces of Medicare: Medicare and Minority Americans. Medicare and Communities of Color is a factual presentation highlighting principal issues in Medicare's interaction with people of color. A National Academy of Social Insurance study panel is examining how Medicare can be a leader in reducing racial and ethnic health disparities among its beneficiaries and the rest of the health system.

Restructuring Medicare: A Synthesis of the NASI Medicare Projects

May 2003

·

282 Reads

This Medicare brief highlights the findings and recommendations of the National Academy of Social Insurance (NASI) Medicare study panels. These reports grapple with the most important policy challenges facing Medicare and its future, including financing, delivery of health services, and the administration of Medicare. Although each study panel had a specific charge, unifying themes emerge. This brief summarizes the findings of each study panel and identifies the common themes found among them.

Figure 1: DoDAF Views and their inter-relationships 
Figure 2: Layered Information Stack in Knowledge Meta-model 
Figure 3: Broad classification of Domain-meaning 
Figure 4: DEVS/DoDAF as the basis for development of Enterprise Architectures incorporating formal M&S 
Figure 5: Meta-model of Domain 


Strengthening OV-6a Semantics with Rule-Based Meta-models in DEVS/DoDAF based Life-cycle Architectures Development
Conference Paper · Full-text available

October 2006

·

664 Reads

The development of a distributed testing environment would have to comply with recent Department of Defense (DoD) mandates requiring that the DoD architectural framework (DoDAF) be adopted to express high-level system and operational requirements and architectures. Unfortunately, DoDAF and DoD net-centric mandates pose significant challenges to testing and evaluation, since DoDAF specifications must be evaluated to see if they meet requirements and objectives, yet they are not expressed in a form that is amenable to such evaluation. DoDAF is the basis for integrated architectures and provides broad levels of specification related to operational, system, and technical views. In our earlier work, we described an approach to support specification of DoDAF architectures within a development environment based on DEVS (discrete event system specification) for semi-automated construction of the needed simulation models. The result is an enhanced system lifecycle development process that includes both development and testing in an integral manner. We also developed automated model generation using XML, which paves the way for OVs to become service-providing components in the Web services architecture. In this paper we present the semantic structure for one of the operational view documents, OV-6a, that would aid the development of these semi-automated models. We describe how OV-6a can be structured in a more generalized meta-model framework such that every rule is reducible to meaningful code, which is automatically constructed through natural language processing (NLP) methods and can be further reduced to DEVS-based models. The paper also presents an overview of the life-cycle development methodology for these enterprise architectures and shows how a common enterprise domain-model can be used in customized business/domain-specific rules and policy structures.

Figure 4. Cost savings for the roll-out of a UMTS900 instead of a UMTS2100 in the brownfield scenario  
Assessment of the benefits of introducing a HSDPA carrier at 900MHz
The roll-out of UMTS/HSPA networks in the 900MHz band offers the possibility of bringing 3G and mobile broadband services to new areas with fewer base station sites and of improving indoor coverage in existing UMTS coverage areas. This paper presents the benefits, in terms of savings on site numbers and on network-related costs, provided by the roll-out of a mixed-frequency UMTS network (with carriers operating at 900 and 2100MHz) instead of a single UMTS2100 network. These results are presented for dense urban, urban, suburban and rural areas and for three demand scenarios. Although the largest reductions in site numbers are achieved in the lower demand scenarios, savings above 30% are found even in the higher demand scenarios. Regarding cost savings, higher savings are obtained in the greenfield roll-out scenario. Lower cost savings are achieved in the brownfield scenario, but the introduction of a UMTS900 layer still results in a more cost-efficient solution than maintaining the single-frequency UMTS2100 network.
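The headline site savings follow from simple coverage geometry: a lower carrier frequency propagates farther, so each 900MHz site covers a larger area and fewer sites are needed. A minimal sketch, where the cell radii and coverage area are hypothetical values of mine, not figures from the paper:

```python
import math

def sites_needed(area_km2: float, cell_radius_km: float) -> int:
    # Hexagonal cells: area served per site is (3*sqrt(3)/2) * r**2.
    cell_area = (3 * math.sqrt(3) / 2) * cell_radius_km ** 2
    return math.ceil(area_km2 / cell_area)

# Hypothetical cell radii: 900 MHz propagates farther than 2100 MHz,
# so each 900 MHz site covers roughly four times the area here.
r_900, r_2100 = 2.4, 1.2   # km (illustrative rural values)
area = 500.0               # km^2 to be covered

n_900 = sites_needed(area, r_900)
n_2100 = sites_needed(area, r_2100)
print(n_900, n_2100, f"site saving: {1 - n_900 / n_2100:.0%}")
```

With a doubled radius the per-site area quadruples, which is why site savings well above 30% are plausible even before network-cost differences are considered.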

A novel soft computing model to increase the accuracy of software development cost estimation

March 2010

·

171 Reads

Software cost and time estimation is the process of estimating the cost and time required to develop a software system. It supports the planning and tracking of software projects. Effectively controlling the expensive investment of software development is one of the important issues in software project management. Estimating software development cost with high precision is still a great challenge for project managers, because it allows for considerable financial and strategic planning. Software cost estimation refers to predictions of the likely amount of effort, time, and staffing levels required to build a software system. A very helpful form of cost estimation is one made at an early stage of a project, when the costing of the project is proposed for approval; however, estimates at the early stages of development are the most difficult to obtain. In this paper, a novel soft-computing approach based on the Constructive Cost Model (COCOMO) is proposed for software cost estimation. This model carries some of the desirable features of the neural networks approach, such as learning ability and good interpretability, while maintaining the merits of the COCOMO model. Unlike the standard neural networks approach, the proposed model can be interpreted and validated by experts, and has good generalisation capability. The model deals effectively with imprecise and uncertain input and enhances the reliability of software cost estimates. The experimental results indicate that the proposed neural network model improves the accuracy of cost estimation, bringing estimated costs very close to actual costs.
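For context, the underlying Basic COCOMO relationship estimates effort as a power law of code size, effort = a · KLOC^b, with published (a, b) coefficients per project mode. A minimal sketch; the `learned_multiplier` parameter is my stand-in for the kind of correction a trained soft-computing model might apply, not the paper's actual model:

```python
# Basic COCOMO effort model: effort (person-months) = a * KLOC**b.
# The (a, b) pairs are the published Basic COCOMO coefficients per mode.
COCOMO_PARAMS = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def cocomo_effort(kloc: float, mode: str = "organic",
                  learned_multiplier: float = 1.0) -> float:
    """Estimated effort in person-months. `learned_multiplier` is a
    hypothetical stand-in for the correction a trained soft-computing
    model could apply on top of the analytic formula."""
    a, b = COCOMO_PARAMS[mode]
    return a * kloc ** b * learned_multiplier

# A 32 KLOC organic-mode project comes out at roughly 91 person-months.
print(round(cocomo_effort(32), 1))
```

The appeal of hybrid approaches is visible in the structure: the analytic formula stays interpretable, while the learned component absorbs the imprecision the fixed coefficients cannot.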

Figure 1. A snapshot of the Collaboratorium.
Acknowledgment: I'd like to thank Thomas Malone and Marco Cioffi for many useful discussions about the ideas underlying this paper.
Achieving Collective Intelligence via Large-Scale On-line Argumentation
While argumentation tools represent a promising avenue for exploiting the Internet's vast potential for enabling collective intelligence, they have to date been applied almost exclusively to small-scale, usually centrally facilitated, groups. This paper explores some of the issues and design options involved in supporting large-scale on-line argumentation.

Human Capital as a Basis of Comparative Advantage Equations in Services Outsourcing: A Cross Country Comparative study

June 2006

·

259 Reads

International outsourcing of services has come to occupy an increasingly important part of international trade. Analytically, services outsourcing is the export of services by a country to the outsourcing nation. Given this, the pattern of trade in these services would be decided in line with country-specific comparative advantage equations. Since services are typically intensive in skilled labour and educated manpower, what matters for a country's comparative advantage in services is its resource base of skilled and educated manpower. This cross-country study uses the intuitive logic of the Heckscher-Ohlin model to determine comparative advantage for a sample of developed countries, such as the US, that essentially constitute the outsourcers, and developing nations, such as India and China, that carry out the outsourced work. Different definitions of human capital are compared to identify comparative advantage. A key conclusion is that for services outsourcing, the definition of human capital needs to be restricted to secondary and particularly tertiary students rather than literacy. This is further validated by the significance of the number of tertiary students in cross-country equations estimating business service exports. Secondly, what matters for comparative advantage is the absolute size of the human capital base rather than its percentage share.

Study on Effect of Agent's Overconfidence on Principal-Agent Relationship

July 2007

·

154 Reads

A great deal of the psychological literature shows that individuals often exhibit overconfidence in their economic lives. Studying the influence of an agent's overconfidence on the principal-agent relationship therefore has profound theoretical and practical significance. Drawing on the representations of overconfidence provided by Daniel, Hirshleifer, and Subrahmanyam and by Gervais and Odean, this paper studies the principal-agent relationship when the agent is overconfident, and the mechanism through which the agent's overconfidence affects that relationship, by setting up an appropriate mathematical model. The results show that when the agent's active wage and the principal's monitoring cost are both non-zero, the optimal level of effort exerted by the agent in equilibrium always increases in the agent's overconfidence level, while the principal's optimal monitoring intensity in equilibrium always decreases in it; when the agent's active wage is zero, the agent's overconfidence does not affect the principal-agent relationship. In addition, both the agent's optimal effort and the principal's optimal monitoring intensity in equilibrium decrease in the principal's monitoring cost; the former increases in the agent's fixed wage and active wage, while the latter increases in the agent's fixed wage but does not always increase in the agent's active wage.
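The comparative statics described can be illustrated with a deliberately stylized toy model (my own simplification, not the paper's model): give the agent a quadratic effort cost and let overconfidence inflate the perceived marginal return to effort.

```python
def optimal_effort(active_wage: float, overconfidence: float,
                   effort_cost: float = 1.0) -> float:
    """Stylized agent problem (an illustration, not the paper's model):
    the agent maximizes perceived pay minus a quadratic effort cost,
        active_wage * (1 + k) * e - effort_cost * e**2 / 2,
    where k >= 0 inflates the perceived marginal product of effort.
    The first-order condition gives e* = active_wage * (1 + k) / effort_cost.
    """
    return active_wage * (1 + overconfidence) / effort_cost

# Effort rises with overconfidence only when the active wage is positive;
# with a zero active wage, overconfidence has no effect -- mirroring the
# comparative statics stated in the abstract.
print(optimal_effort(2.0, 0.0), optimal_effort(2.0, 0.5), optimal_effort(0.0, 0.5))
```

Even this toy version reproduces the key qualitative result: overconfidence only matters when pay is tied to output through a positive active wage.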

ICTE and new professional degree programs in higher education: anthropological approaches
This paper is related to two empirical research projects: one was an investigation of users of a University Service of Distance Learning; the second was an ethnographic study aimed at identifying the pedagogical models underpinning new professional degree courses (2004-2006). In Education and Training, the relationship which users of ICTE have with such technologies can be studied in part as a relationship to specific knowledge which, when put into other contexts, opens up other, much wider, theoretical questions at three differing levels: a generalized view of ICTE; the cultural experience of ICTE; and the systematization of ICTE. To each level, we have attributed a core concept: 'gathering' (butinage), 'poaching' (braconnage), and 'odd-jobbing' (bricolage). In the generalized view of ICTE, 'gathering' relates dialectically to notions of navigation. Within the cultural experience of ICTE, 'poaching' relates dialectically to notions of programming, and in the systematizing of ICTE, 'odd-jobbing' relates dialectically to notions of systems development.

Fig. 2(c) zooms in to show that, basically, spikes and antispikes seem to follow a similar dynamics, but in the reverse direction. Whereas spikes appear only during demand crests, antispikes appear only during demand troughs, and only occasionally. The overall dynamics seems to switch occasionally from a normal regime to two other (opposite) spiky regimes.
TARX models for spikes and antispikes in electricity markets
Two electricity price models are presented within the framework of switching and threshold autoregressive exogenous (TARX) models, both in continuous time and in discrete time. The first model is based on the biologically inspired McKean model for spiking neurons; the second is an extension of it, crafted to readily display a spike-and-antispike phenomenology, showing random spikes only in price crest phases (daytime) and random antispikes only during price trough phases. The spiking behavior can be explained by the presence of a mathematical threshold mechanism that can be related to power grid congestion.
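A toy discrete-time illustration of the regime-switching idea (parameters and thresholds are my own illustrative choices, not the paper's calibration): a mean-reverting base price with occasional jumps allowed only when an exogenous demand signal crosses an upper threshold (spikes) or a lower one (antispikes).

```python
import math
import random

def simulate_tarx(n: int = 200, seed: int = 1) -> list:
    """Toy threshold-AR price path. A mean-reverting AR(1) base regime is
    overlaid with random upward jumps allowed only while an exogenous
    daily demand cycle is near its crest (spikes) and random downward
    jumps only near its trough (antispikes)."""
    random.seed(seed)
    prices = [50.0]
    for t in range(1, n):
        demand = math.sin(2 * math.pi * t / 24)        # exogenous daily cycle
        p = 50 + 0.8 * (prices[-1] - 50) + random.gauss(0, 1)
        if demand > 0.9 and random.random() < 0.5:     # crest: possible spike
            p += 40
        elif demand < -0.9 and random.random() < 0.2:  # trough: rarer antispike
            p -= 30
        prices.append(p)
    return prices

path = simulate_tarx()
```

The asymmetry (more frequent spikes, only occasional antispikes) and the threshold trigger echo the observation that spikes appear only during demand crests and antispikes only, occasionally, during troughs.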

Figure 2 
Luxury e-services at the pre- and after-sales stages of the decision making process: Watch, car, art and travel blogs analysis

November 2008

·

2,171 Reads

Needs and expectations of customers in the pre- and after-sales stages of the e-commerce purchase process in the luxury product field are not well known or identified. We were interested in discovering the opinions of customers concerning the role of the internet in the pre- and after-sales stages of the purchasing process in this industry. As we are just at the early stages of the e-commerce era in the luxury sector, we chose to focus on blog content. After-sales service on the Internet in the luxury sector is regarded with circumspection: practitioners feel that consumers are still very much attached to the 'physical' experience. On the contrary, we assumed that Netsurfers belonging to net communities are also consumers of luxury goods and services. Following the netnography methodology [1,4], we analyzed Netsurfers' comments about pre- and after-sales services. We identified and selected blogs specific to the Web 2.0 generation; these blogs discuss luxury watches, cars, travel and art objects. The results allowed us to identify three categories of service needs: 1) the need for service in broad terms, especially service that should be linked to a product; 2) the need for a specific or very specific service; and 3) specific complaints about service experiences. We discuss how these insights can be fully integrated into an e-commerce strategy in general.

Figure 1. A generated taxonomy for "Blood". The fat links are correct with respect to the MeSH benchmark; semi-fat links are in a grandchild or great-grandchild relation in MeSH.
Figure 4. The figure shows the F0.5-measure (solid) and the precision (dashed lines) for three MeSH benchmarks. Higher entropy of similarities expresses lower confidence in a taxonomy link. Not filtering by entropy at all yields precision and F-measure equal to the rightmost data point of each curve, indicated by horizontal dotted lines. The figure shows that high-entropy links are indeed often wrong, and precision decreases for all benchmark sets. Therefore, by filtering out these low-confidence links, the algorithm improves in terms of precision while maintaining or slightly improving the F-measure. Any threshold above 0.7 increases precision without worsening the F-measure.
Comparison of Generality Based Algorithm Variants for Automatic Taxonomy Generation
We compare a family of algorithms for the automatic generation of taxonomies, created by adapting the Heymann algorithm in various ways. The core algorithm determines the generality of terms and iteratively inserts them into a growing taxonomy. Variants of the algorithm are created by altering how, and how often, the generality of terms is calculated. We analyse the performance and complexity of the variants, combined with a systematic threshold evaluation, on seven manually created benchmark sets. As a result, betweenness centrality calculated on unweighted similarity graphs often performs best, but requires threshold fine-tuning and is computationally more expensive than closeness centrality. Finally, we show how an entropy-based filter can lead to more precise taxonomies.
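The core insertion loop can be sketched as follows, using closeness centrality computed by BFS on an unweighted similarity graph as the generality measure. The tiny graph and the attachment rule here are simplified illustrations of mine, not the exact variants evaluated in the paper:

```python
from collections import deque

def closeness(graph: dict, node: str) -> float:
    """Closeness centrality of `node` via BFS on an unweighted graph."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0

def heymann_taxonomy(graph: dict):
    """Heymann-style sketch: insert terms in order of decreasing
    generality (closeness here), attaching each term to an
    already-inserted neighbour if one exists, otherwise to the root."""
    order = sorted(graph, key=lambda n: closeness(graph, n), reverse=True)
    root, *rest = order
    parent = {}
    for term in rest:
        inserted = [root, *parent]
        neighbours = [t for t in inserted if t in graph[term]]
        parent[term] = neighbours[0] if neighbours else root
    return root, parent

# Tiny similarity graph around "blood" (hypothetical links).
graph = {
    "blood":       {"plasma", "serum", "erythrocyte"},
    "plasma":      {"blood", "serum"},
    "serum":       {"blood", "plasma"},
    "erythrocyte": {"blood"},
}
root, parent = heymann_taxonomy(graph)
print(root, parent)
```

Swapping `closeness` for betweenness centrality, or recomputing centrality as the taxonomy grows, yields the kind of variants the paper compares.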

New direction in project management success: Base on smart methodology selection

September 2008

·

172 Reads

Modern project management is a well-understood discipline that can produce predictable, repeatable results. The methodologies of modern project management are highly analytic, usually requiring automated tools to support them on large projects. Like most other disciplines, it is learned through both practice and study. Project management encompasses many different skills, such as understanding the interdependencies among people, technologies, budgets, and expectations; planning the project to maximize productivity; motivating others to execute the plan; analyzing the actual results; and reworking and tuning the plan to deal with what actually happens as the project is executed. In order to manage a project and bring it to a successful completion, its project manager must have a complete understanding of the methodologies being used for the management of different parts of the project. Managers tend to prefer a specific project methodology, and resist, or face difficulties with, the opportunity to manage another project under a different methodology, because they do not know how much commonality exists between the preferred methodology and the newly required one.

Information Transmitting Efficiency in China Gold Spot Market: A Random Walk Test with Wild and Bootstrap Variance Ratio Method

August 2011

·

114 Reads

China provides an excellent empirical case study of information transmitting efficiency in the capital market. Our paper extends the existing investigation with substantially more recent data and tests the efficiency of information transmission using individual, subsampling, and wild bootstrap variance ratio methods. The empirical results show that gold prices reflect all past information efficiently and that no one can earn abnormal returns continuously. The China gold spot price follows a random walk, and the gold spot market is weak-form efficient.
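A minimal version of the variance ratio statistic at the heart of such tests (a generic Lo-MacKinlay-style sketch on a synthetic random walk, not the paper's individual/subsampling/wild-bootstrap procedures): under a random walk, the variance of q-period log returns is q times the variance of one-period returns, so VR(q) should be close to 1.

```python
import math
import random

def variance_ratio(prices, q: int) -> float:
    """Variance of q-period log returns divided by q times the variance
    of one-period log returns. Values near 1 are consistent with a
    random walk."""
    logp = [math.log(p) for p in prices]
    r1 = [logp[i + 1] - logp[i] for i in range(len(logp) - 1)]
    rq = [logp[i + q] - logp[i] for i in range(len(logp) - q)]
    mu = sum(r1) / len(r1)
    var1 = sum((r - mu) ** 2 for r in r1) / len(r1)
    varq = sum((r - q * mu) ** 2 for r in rq) / len(rq)
    return varq / (q * var1)

# Synthetic geometric random walk standing in for a price series.
random.seed(0)
prices = [100.0]
for _ in range(2000):
    prices.append(prices[-1] * math.exp(random.gauss(0, 0.01)))

print(round(variance_ratio(prices, 5), 3))
```

A genuine random-walk series keeps VR(q) near 1 for all q; persistent deviations above or below 1 would signal momentum or mean reversion and hence exploitable inefficiency.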

Table 1. Difference in Means Tests
Table 4. Startup Size and Liquidation Event Regressions
Bringing Entrepreneurial Ideas to Life
Prior work on the commercialization of innovation is motivated by competitive dynamics between startup and incumbent firms and has looked at the determinants of innovator commercialization mode. We contribute to the literature by examining variation in performance resulting from the choice of whether to innovate technologically in new ventures. The institutional and business environment determines which strategies lead to higher performance for start-up innovators. In contrast to the factors shown to determine commercialization mode, we show that team characteristics and the economic environment play a stronger role in predicting an innovator's entrepreneurial firm performance. Using unique data from a novel survey of entrepreneurial firms founded over five decades and across diverse industries, we show the conditions under which technology-focused versus functionally diverse founding teams outperform. In addition, we show that in certain environments, firms with more highly original innovations, and innovating firms founded during a recession, have higher performance.

Classification of Business Scenarios for Spectrum Sensing

October 2010

·

74 Reads

Because of the ever-growing use of wireless applications and inflexibilities in the current way spectrum is allocated, spectrum is becoming more and more scarce. One method to overcome this is spectrum sensing. In spectrum sensing research, use case analysis is often used to determine challenges and opportunities for this technology. However, such use cases vary widely in terms of business context, so an assessment of viability for one cannot be a basis for inferring the viability of another. Therefore, this paper proposes a number of basic parameters that set apart fundamentally different business scenarios for sensing technology: ownership, exclusivity, tradability and neutrality. Based on these parameters, a classification is outlined which sheds light on the particular business and regulatory opportunities and constraints related to spectrum sensing, and could therefore help business actors as well as regulators fine-tune their decision-making vis-à-vis this technology.
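The four proposed parameters can be captured as a simple record type (the field names follow the abstract; the types and example values are my assumptions):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SensingScenario:
    """The four scenario-defining parameters proposed in the paper.
    Field names follow the abstract; value types are assumptions."""
    ownership: str      # who holds the spectrum rights, e.g. "licensed"
    exclusivity: bool   # is usage exclusive to one actor?
    tradability: bool   # can the usage rights be traded?
    neutrality: bool    # is the band service/technology neutral?

# Two hypothetical scenarios such a classification would set apart.
primary = SensingScenario("licensed", True, False, False)
commons = SensingScenario("unlicensed", False, False, True)
```

Enumerating scenarios this way makes explicit that a viability finding for, say, `primary` says nothing about `commons`, since they differ on multiple classification axes.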


Figure 1. A generic dairy value chain adapted from [11, 18-20]
Table 1. Sources of papers and the number of publications per source
Table 2.
RFID deployment and use in the dairy value chain: Applications, current issues and future research directions
RFID technology is currently considered as a key enabler of supply chain transformation. However, very little has been written about the deployment and use of RFID in the dairy industry. Drawing on an extensive literature review and a case example, this exploratory study seeks to present current applications and issues related to RFID's adoption in the dairy industry and discuss future research directions.

The Service Paradox: Supporting Service Supply Chains with Product-oriented ICT

September 2007

·

84 Reads

The dominant models of supply chain management have a strong product orientation, with service supply chain models only starting to appear. The technologies used in existing supply chains generally aim at improving production and logistical operations by increasing their efficiencies and by linking and integrating collaboration between individual supply chain partners. This paper focuses on service supply chains and argues that despite their increasing importance, current information and communication technologies (ICTs) fail to fully support the specific needs of service co-production. The paper reviews current technology use from a service perspective and concludes that even the most advanced technologies with huge potential to support services (such as RFID) remain product-oriented. It argues that such technologies should be deployed in ways that support the service logic, and identifies the danger that unreflective ICT use may pose for the pragmatic utility of ICT in service supply chains.

The City as a Platform: Exploring the Potential Role(s) of the City in Mobile Service Provision through a Mobile Service Platform Typology

July 2011

·

80 Reads

This exploratory paper applies an existing framework for the analysis of platform types in mobile services to the urban context and explores the role(s) cities may take up in the development and deployment of mobile services. It first presents the platform typology to be used, provides some contextual information on the concept of Smart Cities, and then applies several diverging cases to the typology by way of validation. The paper concludes that having control over value-adding assets, such as datasets pertaining to the city, is a more important factor for governments to take into account than solely developing a customer relationship with their inhabitants.

Figure 7. Evolution of ConSors' share price
Table 7. Copeland's forecasts and valuation for Amazon (billion dollars)
Valuation and Value Creation of Internet Companies

November 2009

·

1,689 Reads

The author describes selected problems related to the valuation and value creation of Internet-related companies. Relationships between brand, loyalty, trust and valuation were researched. A method of valuing companies based on Customer Lifetime Value is described. Two kinds of Internet clients are considered: one-time clients and those oriented toward creating a relationship with the supplier. Based on this partition, a valuation model is proposed.
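A minimal retention-based Customer Lifetime Value computation of the kind such valuation methods build on (a generic textbook formula, not necessarily the author's exact model), contrasting a relationship-oriented client with a one-time client:

```python
def customer_lifetime_value(margin: float, retention: float,
                            discount: float, periods: int = 20) -> float:
    """Standard retention-model CLV: the per-period margin, survived with
    probability `retention` each period and discounted at `discount`:
        CLV = sum over t of margin * retention**t / (1 + discount)**t
    """
    return sum(margin * retention ** t / (1 + discount) ** t
               for t in range(periods))

# A relationship-oriented client retained with 90% probability per period
# is worth several times a one-time client with the same per-period margin.
loyal = customer_lifetime_value(100.0, retention=0.9, discount=0.1)
one_time = customer_lifetime_value(100.0, retention=0.0, discount=0.1)
print(round(loyal, 2), one_time)
```

The gap between the two figures is exactly the kind of brand-and-loyalty effect the partition into one-time and relationship-oriented clients is meant to capture in valuation.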

Competing Platform Models for Mobile Service Delivery: The Importance of Gatekeeper Roles

August 2008

·

114 Reads

This paper introduces and analyses four potential business models for emerging mobile service platforms. For each model, a real-life case is described and briefly analyzed. It is found that 'platform leadership' strategies, focused among other things on control of core assets on the one hand and open interfaces on the other, permeate the strategies and models introduced by the stakeholders vying for dominance in this market. In particular, these strategies and models differ strongly according to the specific set-up and control of a number of crucial 'gatekeeper roles'.
