Hitotsubashi University
Recent publications
  • R. Yamato
  • S. Kobayashi
  • T. Fujita
  • [...]
  • Y. Mototake
A synthetic diagnostic of x-ray bremsstrahlung radiation and a Monte Carlo radiation transport simulation were carried out to obtain the electron energy distribution in stochastic acceleration experiments in Heliotron J, a mid-sized heliotron-type magnetic confinement device. Three sets of LaBr3(Ce) scintillators and photomultiplier tubes were installed in Heliotron J in three directions relative to the magnetic field line (co, counter, and perpendicular) to determine the velocity distribution of the high-energy electrons. These systems are positioned about 5 m away from the vacuum chamber and shielded by lead blocks and magnetic shields to reduce the influence of stray radiation and magnetic fields. The vacuum chamber of Heliotron J is made of stainless steel with a 3D helical shape. Since its x-ray shielding effect is not negligible when reconstructing the x-ray energy distribution inside the vacuum chamber, the Monte Carlo radiation transport code Particle and Heavy Ion Transport code System (PHITS) was applied to Heliotron J to quantify the shielding effect. Given the vacuum chamber and coil geometry and an assumed x-ray energy distribution inside the vacuum chamber, this code calculates the x-ray energy distribution outside it, accounting for the shielding effect of Heliotron J. The simulation model was based on computer-aided design data of the Heliotron J device. The x-ray energy distribution in the vacuum vessel was adjusted until the simulated and measured x-ray spectra outside the vessel matched. The resulting distribution has two components: a power-law distribution above 450 keV and a Maxwellian distribution below 450 keV. In the higher energy range, the x-ray bremsstrahlung energy distribution is consistent with the characteristics of stochastic acceleration.
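The fitting procedure described here (assume a source spectrum, transport it through the vessel geometry, compare with the measured spectrum, and adjust) can be sketched generically. In the sketch below, the spectral model follows the two-component shape reported in the abstract; `transport_simulation` is a hypothetical stand-in for a PHITS run (PHITS has its own input format and is not actually called from Python like this), and all parameter names and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def model_spectrum(E, T, A, gamma, B, E_break=450.0):
    """Assumed in-vessel spectrum (keV): Maxwellian below the break,
    power law above, matching the two-component shape in the abstract."""
    maxwellian = A * np.sqrt(E) * np.exp(-E / T)
    power_law = B * E ** (-gamma)
    return np.where(E < E_break, maxwellian, power_law)

def misfit(params, E, measured, transport_simulation):
    """Squared mismatch between the simulated external spectrum and the
    measured one; `transport_simulation` stands in for the PHITS run."""
    T, A, gamma, B = params
    simulated = transport_simulation(model_spectrum(E, T, A, gamma, B))
    return np.sum((simulated - measured) ** 2)

# Hypothetical usage, once a transport wrapper and measured data exist:
# result = minimize(misfit, x0=[100.0, 1.0, 2.0, 1.0],
#                   args=(E_grid, measured_spectrum, transport_simulation))
```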
Developmental studies have adopted preferential-looking paradigms to investigate infant interest in emotional face stimuli. However, because of the attention-grabbing nature of threatening stimuli, research has reported inconsistent results regarding infants’ fixation on happy and angry faces. A recent value-based framework of social looking behavior suggested that infants’ looking behavior depends on the value of looking (i.e., the expected reward value of a specific looking behavior). Using anticipatory-looking tests alongside preferential-looking tests, we aimed to investigate whether infants’ looking behavior toward faces is value-driven. A total of thirty-two 8-month-old infants completed an eye-tracking study. In each block, two faces displaying a combination of happy, neutral, or angry expressions were repeatedly presented side by side on the screen. A block consisted of a preferential-looking test and four trials of an anticipatory-looking test. In the preferential-looking test, total fixation durations were longer for the happy and angry faces than for the neutral face. In the anticipatory-looking test, infants predictively looked at the position of the happy face more than at the positions of the neutral and angry faces. Furthermore, infants predictively looked at the position of the neutral face more than at that of the angry face. A control study using inverted faces indicated that these emotion effects were not due to low-level stimulus differences. Our findings suggest that infants focus on facial stimuli that are affectively arousing regardless of their valence, while anticipatory-looking behavior depends on the value of looking.
We conduct a theoretical analysis of the performance of β-encoders. β-encoders are A/D (analogue-to-digital) encoders whose design is based on the expansion of real numbers in a non-integer radix. For the practical use of such encoders, it is important to have theoretical upper bounds on their errors. We investigate the generating function of the Perron–Frobenius operator of the corresponding one-dimensional map and deduce its invariant measure. Using this, we derive an approximate value of the upper bound of the mean squared error of the quantization process of such encoders. We also discuss the results from a numerical viewpoint.
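To make the quantization process concrete, here is a minimal sketch of a β-encoder using the standard greedy β-expansion; the radix β = 1.8 and the bit budget are illustrative choices, and this shows only the encoder itself, not the paper's Perron–Frobenius analysis.

```python
def beta_encode(x, beta=1.8, n_bits=16):
    """Greedy beta-expansion of x in [0, 1): multiply the residual by beta
    and emit a 1 whenever it reaches 1."""
    bits, u = [], x
    for _ in range(n_bits):
        u *= beta
        b = 1 if u >= 1.0 else 0
        u -= b
        bits.append(b)
    return bits

def beta_decode(bits, beta=1.8):
    """Reconstruct x_hat = sum_i bits[i] * beta**-(i+1)."""
    return sum(b * beta ** -(i + 1) for i, b in enumerate(bits))

x = 0.6180339887
print(abs(x - beta_decode(beta_encode(x))))  # bounded by beta**(-n_bits)
```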
One of the major outstanding questions in computational semantics is how humans integrate the meaning of individual words into a sentence in a way that enables understanding of complex and novel combinations of words, a phenomenon known as compositionality. Many approaches to modeling the process of compositionality can be classified as either “vector-based” models, in which the meaning of a sentence is represented as a vector of numbers, or “syntax-based” models, in which the meaning of a sentence is represented as a structured tree of labeled components. A major barrier in assessing and comparing these contrasting approaches is the lack of large, relevant datasets for model comparison. This article aims to address this gap by introducing a new dataset, STS3k, which consists of 2,800 pairs of sentences rated for semantic similarity by human participants. The sentence pairs have been selected to systematically vary different combinations of words, providing a rigorous test and enabling a clearer picture of the comparative strengths and weaknesses of vector-based and syntax-based methods. Our results show that when tested on the new STS3k dataset, state-of-the-art transformers poorly capture the pattern of human semantic similarity judgments, while even simple methods for combining syntax- and vector-based components into a novel hybrid model yield substantial improvements. We further show that this improvement is due to the ability of the hybrid model to replicate human sensitivity to specific changes in sentence structure. Our findings provide evidence for the value of integrating multiple methods to better reflect the way in which humans mentally represent compositional meaning.
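As a concrete illustration of the vector-based family (not the article's models or the STS3k evaluation code), the sketch below scores a sentence pair by the cosine of averaged word vectors; `word_vec` is a hypothetical embedding lookup and the toy vectors are random.

```python
import numpy as np

def sentence_vector(sentence, word_vec):
    """Vector-based composition at its simplest: average the word vectors."""
    vecs = [word_vec[w] for w in sentence.lower().split() if w in word_vec]
    return np.mean(vecs, axis=0)

def cosine_similarity(s1, s2, word_vec):
    v1, v2 = sentence_vector(s1, word_vec), sentence_vector(s2, word_vec)
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))

rng = np.random.default_rng(0)
word_vec = {w: rng.normal(size=8) for w in "the dog bites man".split()}
print(cosine_similarity("the dog bites man", "the man bites dog", word_vec))
```

The printed similarity is exactly 1.0 because averaging discards word order; this structural insensitivity is precisely what syntax-based components are meant to repair.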
This chapter discusses superlative indices, exploring their relationship with the cost-of-living index (COLI) and how they can provide a good approximation to it without precise knowledge of the utility and expenditure functions. It explains the concept of “exact” indices, such as the Sato–Vartia index, which aligns perfectly with the COLI under specific utility functions like the Constant Elasticity of Substitution (CES). The chapter also introduces the theory of superlative indices, which simplifies the computation of exact indices by using standard indices like Fisher, Walsh, and Törnqvist without identifying the underlying expenditure function. Key examples are given to illustrate how these indices correspond to the COLI under various assumptions, showing how superlative indices can be computed easily and how they approximate the true COLI effectively. The discussion highlights their practical significance in simplifying complex economic measurements and their impact on theoretical and practical economics, making them pivotal in modern index number research.
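For reference, here is a minimal sketch of how two of the superlative formulas named here are computed from base-period (0) and comparison-period (1) price and quantity vectors; the data layout is assumed for illustration.

```python
import numpy as np

def fisher(p0, q0, p1, q1):
    """Geometric mean of the Laspeyres and Paasche indices."""
    laspeyres = (p1 @ q0) / (p0 @ q0)
    paasche = (p1 @ q1) / (p0 @ q1)
    return np.sqrt(laspeyres * paasche)

def tornqvist(p0, q0, p1, q1):
    """Share-weighted geometric mean of price relatives."""
    s0 = p0 * q0 / (p0 @ q0)   # base-period expenditure shares
    s1 = p1 * q1 / (p1 @ q1)   # comparison-period expenditure shares
    return np.exp(np.sum((s0 + s1) / 2 * np.log(p1 / p0)))
```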
This chapter explores the difficulties and methodologies associated with constructing price indices when the set of goods changes over time. It acknowledges that no two products are “exactly” identical, even within the same category, and the emergence and disappearance of goods complicate price comparisons. Seasonal goods, technological innovations, and disruptions like the COVID-19 pandemic further challenge traditional price measurement methods. The chapter discusses the hedonic method, which adjusts for quality differences by considering various attributes of goods, and highlights its limitations, including the difficulty of measuring new attributes and dealing with multicollinearity. The matching method, which compares only concurrently available goods, is straightforward but prone to bias due to non-random product exits and introductions. The chapter also introduces the concept of variety effects, where changes in the variety of goods impact consumer welfare and the cost-of-living index (COLI). Feenstra’s COLI, which incorporates variety effects, is widely used but can exhibit significant chain drift over time, complicating long-term price comparisons. The chapter concludes by highlighting the ongoing research and practical issues in addressing these challenges, emphasizing the need for innovative approaches and new methodologies.
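A minimal sketch of the hedonic method under illustrative assumptions: regress log price on observed attributes, so the coefficients act as implicit attribute prices for quality adjustment. The products and attributes below are invented.

```python
import numpy as np

# columns: intercept, attribute 1 (e.g. storage), attribute 2 (e.g. screen size)
X = np.array([[1.0, 64, 5.5],
              [1.0, 128, 6.1],
              [1.0, 256, 6.5],
              [1.0, 128, 5.8]])
log_price = np.log(np.array([299.0, 449.0, 649.0, 399.0]))

coef, *_ = np.linalg.lstsq(X, log_price, rcond=None)
# coef[1] and coef[2] estimate the implicit (log-)price of each attribute.
# Strongly correlated attributes make these estimates unstable, which is
# the multicollinearity problem noted above.
print(coef)
```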
This chapter discusses the theory and practice of regional price index numbers, particularly focusing on exchange rates and Purchasing Power Parity (PPP). The chapter begins by highlighting the limitations of using exchange rates for price comparisons due to their volatility and the Balassa–Samuelson effect, which causes market exchange rates to undervalue currencies of developing countries. It suggests using price indices to measure prices across countries, adapting the theoretical framework used for temporal comparisons to spatial ones. The concept of Purchasing Power Parity (PPP), introduced by Gustav Cassel in 1918, is discussed in depth. It emphasizes that while comparing prices between two countries using a basket of goods is straightforward, involving more countries complicates calculations due to non-transitivity issues. To address these complexities, the chapter introduces three main methods used for constructing PPP: Gini–Eltető–Kőves–Szulc (GEKS), Geary–Khamis (GK), and the Country Product Dummy (CPD). Each method has its strengths and weaknesses, with GEKS ensuring transitivity and GK satisfying aggregation consistency. The CPD method employs regression analysis to account for differences in goods across countries, making it suitable for international comparisons where homogeneous goods are difficult to find. The chapter also introduces the Minimum Spanning Tree (MST) method, which aims to reflect the “distance” between countries in price index calculations. Despite advancements, the chapter acknowledges the challenges in constructing accurate regional price indices, such as differences in consumption baskets and the quality of goods and services between countries. It concludes by highlighting the ongoing research and the importance of PPP for international economic comparisons.
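The GEKS construction itself is compact: bridge every pair of countries through all M countries and take a geometric mean, which restores transitivity. A minimal sketch, assuming F is a matrix of bilateral indices (F[j, k] being the price level of country k relative to country j, e.g. bilateral Fisher indices):

```python
import numpy as np

def geks(F):
    """GEKS multilateral indices: G[j, k] is the geometric mean over all
    bridge countries l of the chained comparison j -> l -> k."""
    M = F.shape[0]
    G = np.empty_like(F, dtype=float)
    for j in range(M):
        for k in range(M):
            G[j, k] = np.prod([F[j, l] * F[l, k] for l in range(M)]) ** (1 / M)
    return G  # transitive: G[j, k] * G[k, m] == G[j, m], provided the
              # bilateral indices satisfy country reversal, as Fisher does
```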
This chapter investigates the axiomatic approach in index number theory. This approach, also known as the test approach, aims to determine the most appropriate index number formula by listing and satisfying various axioms or tests. The chapter begins by addressing the multitude of existing index number formulas and emphasizes the importance of selecting one based on specific objectives. Edgeworth’s view, which advocated for choosing an index number formula according to its purpose, contrasts with the opinions of contemporaries like Walsh and Fisher. The axiomatic approach is central to modern index number theory, narrowing down desirable indices through axioms such as transitivity, monotonicity, linear homogeneity, identity, and independence from the measurement unit. The chapter introduces some basic axioms, elaborating on their significance and the only index number formula (the Cobb–Douglas-type index) that satisfies all these axioms. It also discusses the implications of abandoning certain axioms, such as transitivity, leading to a variety of index formulas that meet other criteria. Further, the chapter explores indices like the Walsh, Fisher, Törnqvist, and Sato–Vartia indices, highlighting their axiomatic properties and practical applications. It explains the concept of characterization, where an index number is defined through its foundational axioms, identifying specific index number formulas that uniquely fulfill these conditions. The chapter concludes by presenting the characterization of well-known index number formulas within the axiomatic framework, discussing various debates related to them, and emphasizing the ongoing importance and centrality of the axiomatic approach in current index number theory.
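The test approach lends itself to direct numerical checking. The sketch below, with illustrative data, checks one classic test from this literature, the time-reversal test P(0 to 1) * P(1 to 0) = 1, which the Fisher index satisfies exactly and the Laspeyres index generally fails:

```python
import numpy as np

p0, q0 = np.array([1.0, 2.0, 3.0]), np.array([10.0, 5.0, 2.0])
p1, q1 = np.array([1.5, 1.8, 4.0]), np.array([8.0, 6.0, 3.0])

def laspeyres(pa, qa, pb):
    return (pb @ qa) / (pa @ qa)

def fisher(pa, qa, pb, qb):
    return np.sqrt(laspeyres(pa, qa, pb) * (pb @ qb) / (pa @ qb))

print(laspeyres(p0, q0, p1) * laspeyres(p1, q1, p0))   # not 1 in general
print(fisher(p0, q0, p1, q1) * fisher(p1, q1, p0, q0))  # exactly 1
```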
This chapter traces the history of price index numbers, beginning with early attempts in the sixteenth century and covering key figures such as Bodin, Dutot, Fleetwood, Jevons, and Laspeyres. It highlights how the concept of measuring price changes evolved over time, from the initial empirical efforts by Jean Bodin, who analyzed price fluctuations in sixteenth-century France, to more systematic approaches by later economists. Laspeyres and Paasche are so well-known that their names have become synonymous with price indices. However, this chapter points out that many others have also made significant contributions to this field. It also discusses the theoretical debates and controversies that shaped the development of price index number theory, such as the arguments between Jevons and Laspeyres. The chapter concludes by noting the shift toward more sophisticated methods in the late nineteenth and early twentieth centuries, including the axiomatic and economic approaches that form the foundation of modern index number theory.
This chapter introduces the concept of index numbers, particularly in the context of economics. Index numbers are used to measure changes over time or differences between places or individuals, often for quantities that are difficult to measure directly. The chapter begins with a definition from John Maynard Keynes, highlighting that an index number quantifies changes in magnitudes that are otherwise hard to measure accurately. The term “index number” has its origins in the word “index,” which initially meant a pointing finger, evolving to signify something that indicates or points out. Its economic application began in the mid-nineteenth century, with significant contributions from economists like William Newmarch and William Stanley Jevons. This chapter also explains the structure of the book, which includes chapters on the history of index numbers, representative index formulas, advanced index number theory, and practical applications.
This final chapter discusses the future prospects and expectations for index and aggregation theory analysis, highlighting the gap between academic researchers and official statistics producers. It emphasizes the need for indices grounded in economic theory and addresses the challenges in their practical implementation. Traditional cost-of-living indices assume homothetic preferences, but recent studies stress the importance of non-homothetic preferences, which consider varying income elasticities and are crucial for analyzing income inequalities. The chapter also explores significant quality improvements in durable goods in the long run and the difficulties in measuring these improvements using traditional methods, suggesting the potential role of Artificial Intelligence and machine learning. Furthermore, it highlights the unrealistic assumption of fixed consumer preferences in standard cost-of-living indices, advocating for indices that account for changing preferences. It proposes creating different price indices for various households or regions to accurately measure income and consumption disparities, noting the significance of detailed household consumption data. The chapter concludes by emphasizing the importance of index number theory in the practical world and the need for more researchers specializing in this field. It calls for integrating economic theory with data aggregation and encourages researchers to become not only users but also makers of indices. This ongoing research and collaboration between academia and statistical offices are essential for developing accurate and theoretically sound indices.
This chapter introduces standard indices used by statistical agencies and in academic research. It covers widely used price indices like the Laspeyres and Paasche indices, which rely on price and quantity data of various commodities over time. The construction of price indices typically involves a two-stage aggregation process: lower level (elementary) and upper level aggregates. Widely used index number formulas for elementary price indices include the Carli, Dutot, and Jevons indices. The chapter discusses their mathematical properties and practical applications, noting that the choice of formula significantly impacts the final index value. The Laspeyres and Paasche indices are introduced for upper level aggregation, where the Laspeyres index uses base-period weights and the Paasche index uses comparison-period weights. The chapter also covers quantity indices and their calculation methods, emphasizing the importance of the factor reversal test. The chapter concludes with a discussion on alternative index formulas like the Fisher, Walsh, Marshall–Edgeworth, Törnqvist, Sato–Vartia, Theil, and Young indices, highlighting their theoretical properties and practical implications. The Fisher index, in particular, is noted for its theoretical advantages but is less commonly used due to data collection difficulties.
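A minimal sketch of the two-stage structure described above, with the data layout assumed for illustration: the elementary formulas use prices only, while the upper-level Laspeyres and Paasche formulas weight price changes by quantities.

```python
import numpy as np

def carli(p0, p1):       # arithmetic mean of price relatives
    return np.mean(p1 / p0)

def dutot(p0, p1):       # ratio of average prices
    return np.mean(p1) / np.mean(p0)

def jevons(p0, p1):      # geometric mean of price relatives
    return np.exp(np.mean(np.log(p1 / p0)))

def laspeyres(p0, q0, p1):   # upper level, base-period quantity weights
    return (p1 @ q0) / (p0 @ q0)

def paasche(p0, p1, q1):     # upper level, comparison-period quantity weights
    return (p1 @ q1) / (p0 @ q1)

p0, p1 = np.array([1.0, 2.0, 4.0]), np.array([1.2, 1.9, 4.8])
q0, q1 = np.array([10.0, 4.0, 1.0]), np.array([9.0, 5.0, 1.0])
print(carli(p0, p1), dutot(p0, p1), jevons(p0, p1))
print(laspeyres(p0, q0, p1), paasche(p0, p1, q1))
```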
This chapter explores the measurement of price indices using the Engel curve, which reflects the relationship between food expenditure and total expenditure (Engel coefficient). Ernst Engel’s 1857 study found that as income increases, the proportion of income spent on food decreases. Recent studies leverage this relationship to measure price index biases, suggesting significant biases in existing indices. The chapter details the Almost Ideal Demand System (AIDS), introduced by Deaton and Muellbauer (1980), as a central model for such measurements. The AIDS model provides a second-order approximation to any expenditure function, making it a robust tool for analyzing consumption patterns and estimating price indices. Nakamura (1996), Hamilton (2001), and Costa (2001) found significant biases in historical consumer price statistics in the US, using the Engel curve to analyze changes in food expenditure relative to income and prices. Almås (2012) extended this approach internationally, revealing substantial biases in purchasing power parity, particularly for developing countries. Despite the method’s simplicity and the substantial body of literature supporting it, official price indices have not adopted the Engel curve approach. The chapter highlights criticisms, including the model’s reliance on invariant preference parameters across time and regions, and the omission of factors like savings and home production. These issues cast doubt on the method’s reliability for official statistics, even as it gains traction in academic research.
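For reference, the AIDS budget-share system at the heart of these studies takes the standard Deaton–Muellbauer form below (rather than any one paper's estimating equation), where w_i is the expenditure share of good i, p_j are prices, x is total expenditure, and P is the AIDS price index:

```latex
w_i = \alpha_i + \sum_j \gamma_{ij} \ln p_j + \beta_i \ln\left(\frac{x}{P}\right),
\qquad
\ln P = \alpha_0 + \sum_k \alpha_k \ln p_k
      + \frac{1}{2} \sum_k \sum_j \gamma_{kj} \ln p_k \ln p_j .
```

Roughly speaking, the Engel-curve studies cited above exploit the β_i term: if measured real expenditure x/P drifts because the deflator P is biased, the bias shows up as an otherwise unexplained shift in the food share.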
This chapter discusses the stochastic approach to price indices, exploring its historical development, methodologies, and applications. Initially proposed by Jevons and Edgeworth in the nineteenth and early twentieth centuries, this approach treats price movements as stochastic variables, reflecting uncertain fluctuations. Despite early support, the approach fell out of favor until its revival in the 1980s, driven by renewed interest in core inflation estimation and international price comparisons. The classical stochastic approach focuses on identifying common price change factors across goods, while the new stochastic approach, advanced by Selvanathan and Rao (1994), employs techniques like Generalized Least Squares to estimate price indices together with standard errors. The chapter also covers criticisms of the new stochastic approach, particularly regarding the variance of the residual terms and changes in estimated indices with varying sample periods. Despite these difficulties, the approach has proven valuable for core inflation measurement and spatial price indices, utilizing methods such as Structural Vector Autoregression (SVAR) and Dynamic Stochastic General Equilibrium (DSGE) models. The chapter concludes by emphasizing the growing importance of the stochastic approach in econometric analysis and its potential to become a standard tool in index number theory.
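In its simplest form, the stochastic approach treats each good's log price change as a common component plus noise, so the index and its standard error come from a regression; a schematic version (the new stochastic approach adds GLS weighting and richer error structures) is:

```latex
\ln\frac{p_{it}}{p_{i,t-1}} = \pi_t + \varepsilon_{it},
\qquad
\mathbb{E}[\varepsilon_{it}] = 0,
```

where the estimate of π_t across goods i gives the period-t inflation rate and its sampling variance supplies the standard error.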
The Divisia index, introduced by François Divisia in 1925, is a unique economic index that treats time as a continuous variable, in contrast with standard indices, which rely on discrete time points. It defines price and quantity indices through logarithmic differentiation and integration over time, which makes various economic analyses straightforward. Key properties include transitivity, factor reversal, aggregation consistency, and homogeneity of degree one, making it particularly relevant for productivity analysis under continuous-time models. Despite its theoretical strengths, practical application of the Divisia index faces difficulties due to the discrete nature of real-world data, necessitating approximations that often lose these desirable properties. Chain indices, such as the chain Törnqvist index, serve as discrete approximations but are less accurate with high-frequency data, often displaying unrealistic trends. The Divisia index also encounters issues with path dependency and fails to satisfy identity and monotonicity without assumptions of homothetic preferences or linearly homogeneous production technology.
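For reference, the Divisia price index and the chain Törnqvist index that approximates it with discrete data are:

```latex
\ln P(T) = \int_0^T \sum_i s_i(t)\,\frac{d\ln p_i(t)}{dt}\,dt ,
\qquad
s_i(t) = \frac{p_i(t)\,q_i(t)}{\sum_j p_j(t)\,q_j(t)} ,
% discrete (chain Törnqvist) approximation:
\ln P_T \approx \sum_{t=1}^{T} \sum_i
\frac{s_{i,t-1} + s_{i,t}}{2}\,\ln\frac{p_{i,t}}{p_{i,t-1}} .
```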
This chapter explores the economic approach to index numbers, specifically focusing on the cost-of-living index (COLI). This approach conceptualizes price indices in terms of utility levels and prices, differentiating it from the axiomatic and stochastic approaches, which rely purely on price and quantity data. The chapter discusses the theoretical basis of the COLI, which measures the expenditure required to maintain a constant level of utility over time. It investigates the assumptions underlying the COLI, such as homothetic preferences, and the mathematical simplicity of computing the index through expenditure functions. However, it also addresses the practical challenges and limitations of applying the COLI, especially the difficulty of estimating the necessary economic parameters and of adapting the model to real-world data where consumer preferences and conditions vary. The discussion highlights the need for robust economic models to accurately reflect consumer behavior, emphasizing the gap between the theoretical ideal and practical application.
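The COLI definition underlying this chapter can be written with the expenditure function e(p, u), the minimum cost of reaching utility u at prices p:

```latex
P_{\mathrm{COLI}}(p^1, p^0; u) = \frac{e(p^1, u)}{e(p^0, u)} .
```

Under homothetic preferences this ratio is independent of the reference utility level u, which is exactly the simplifying assumption whose realism the chapter examines.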
This study investigates a household’s commitment to a resource allocation by utilizing a 2007 Japanese pension reform allowing divorced women to claim a portion of their husband’s pension benefits while keeping the household’s total benefits unchanged. Although the reform would have had no effect on a couple’s decision making under full commitment, we find that it increased wives’ leisure activities and decreased their market and domestic work. This suggests that wives were able to increase their welfare by exploiting an improved outside option, and thus their commitment to the resource allocation was less than complete.
1,617 members
Iichiro Uesugi
  • Institute of Economic Research
Takashi Nagashima
  • Faculty of Economics / Graduate School of Economics
Toshio Yamagishi
  • Graduate School of International Corporate Strategy
Chihiro Shimizu
  • School of Social Data Science Education and Research
Sadao Nagaoka
  • Institute of Innovation Research
Information
Address
Tokyo, Japan