Pakistan Journal of Statistics

Articles


A Comparison of Marginal Likelihood Based and Approximate Point Optimal Tests for Random Regression Coefficient in the Presence of Autocorrelation.
February 1994


S. Rahman

Two methods of test construction have recently been found to work well for testing linear regression disturbances: traditional asymptotic tests based on the marginal likelihood, or equivalently the likelihood of the maximal invariant, and point optimal or approximate point optimal (APO) tests. The former approach has been found to work well for testing for random regression coefficients in the presence of autocorrelated errors. This paper constructs APO invariant (APOI) tests for this testing problem and extends a previous Monte Carlo study to include APOI tests. We conclude that for this testing problem, the extra work required to apply APOI tests hardly seems worthwhile, particularly for larger sample sizes.

BURR Distribution Tables for Approximating p-Values and Critical Values by Matching Skewness and Kurtosis.

February 1994


Tables are presented for parameters c, k of the Burr III and XII distribution, which cover a wide grid of skewness and kurtosis values. These enable distributions, whose first four moments are known, to be approximated. The form of the Burr III and XII distribution functions allows p-values to be readily calculated. Applications include hypothesis testing, modelling and simulation studies in a range of disciplines.
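The ready calculation of p-values mentioned above follows from the closed-form distribution functions of the two Burr families. As a minimal sketch (the parameter values here are illustrative stand-ins for entries one would read from the paper's tables after matching skewness and kurtosis):

```python
import math  # imported for symmetry; only arithmetic is needed here

def burr12_cdf(x, c, k):
    """Burr XII CDF: F(x) = 1 - (1 + x**c)**(-k), for x > 0."""
    if x <= 0:
        return 0.0
    return 1.0 - (1.0 + x ** c) ** (-k)

def burr3_cdf(x, c, k):
    """Burr III CDF: F(x) = (1 + x**(-c))**(-k), for x > 0."""
    if x <= 0:
        return 0.0
    return (1.0 + x ** (-c)) ** (-k)

def upper_p_value(x, c, k):
    """Upper-tail p-value of an observed statistic x under a fitted
    Burr XII approximation (c, k taken from the matching tables)."""
    return 1.0 - burr12_cdf(x, c, k)
```

Because both CDFs invert in closed form as well, approximate critical values are obtained the same way, by solving F(x) = 1 - alpha for x.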

Microbased Time Series Analysis: Estimating the autocorrelation function using survey samples

January 1994


Analysts using data from official statistical authorities often neglect the fact that such data are frequently collected through sample surveys. We study the impact of sampling error on the estimation of the autocorrelation function for a population total under a microbased superpopulation time series model. We show that uncritical use of data published by statistical agencies may result in biased estimators. The bias is caused by the sampling error and is distinct from aggregation bias (Theil, 1954). A simulation study shows that the bias can be considerable.

Figure 1: The American Customer Satisfaction Framework
Customer satisfaction measurement models: Generalised Maximum Entropy approach

April 2005


This paper presents the methodology of the Generalised Maximum Entropy (GME) approach for estimating linear models that contain latent variables, such as customer satisfaction measurement models. The GME approach is a distribution-free method and provides a better alternative to the conventional method, namely Partial Least Squares (PLS), which is used in the context of customer satisfaction measurement. A simplified version of the model used for the Swedish customer satisfaction index (CSI) was used to generate simulated data in order to study the performance of GME and PLS. The results show that GME outperforms PLS in terms of mean square error (MSE). Simulated data were also used to compute the CSI with the GME approach.

Record Values From Univariate Distributions

August 1996


Conference on Extreme Value Theory and Its Applications

RECORD VALUES FROM UNIVARIATE DISTRIBUTIONS
Mohammad Ahsanullah, Rider College, Lawrenceville, NJ
The paper aims to present recent developments in the inference of univariate distributions based on record values. Estimation of parameters, tests of hypotheses, characterization problems and recurrence relations for moments and entropy of univariate distributions are discussed.

THE LAW OF ITERATED LOGARITHM FOR THE SQUARE INTEGRAL OF BROWNIAN MOTION
J. M. P. Albin, Chalmers University of Technology, Gothenburg, Sweden
Let X(t) = ∫₀ᵗ W(s)² ds, where W is Brownian motion. Cameron and Martin (1944) found the distribution of X(t). Strassen (1964) proved that lim sup_{t→∞} X(t)/(t² ln ln t) = 8π⁻² (w.p. 1); Li (1992) gave a probabilistic proof. Csáki (1979) showed that lim sup_{t→∞} [X(t) − 8π⁻² t² ln ln t]/[t² ln ln ln t] = 12π⁻². We shall prove that lim sup_{t→∞} [X(t) − 8π⁻² ...

Fig. 1: Number of Journals by Country of Origin (JCR 2010-Statistics & Probability)  
Fig. 2: Mean IF of Journals by Country of Origin (JCR 2010-Statistics & Probability)  
Table 2 Publishing Group and Country of Origin of the Journals (JCR 2010-Statistics & Probability)
Contribution of Pakistan Journal of Statistics in ‘Statistics and Probability’ Literature: An Analysis Based on the Journal Citation Report (2010)

March 2012


Only twenty countries have impact-factor journals in ‘Statistics and Probability’. In this paper, we trace the contribution of the Pakistan Journal of Statistics in publishing research papers in comparison to other countries. The analysis is based on the Journal Citation Reports 2010 edition issued by the Thomson Institute for Scientific Information. The paper provides country- and publishing-group-level comparisons of the impact factors of journals in this subject. Of the 20 countries, only two Islamic countries, i.e. Pakistan and Turkey, have one journal each in the ‘Statistics and Probability’ literature.

Figure 2: CV(λ) Plots for a Lognormal Sample, Sample Size = 100
Figure 4: Box Plot for λ_n for 100 Exponential Samples; 1: Chaubey-Sen Choice, 2: KL Cross-Validation, 3: ISE Cross-Validation, 4: Optimum Hellinger Distance
Figure 9: Box Plot for λ_n for 100 Exponential Mixture Samples, θ_1 = 10, θ_2 = 1, Π = 0.2; 1: Chaubey-Sen Choice, 2: KL Cross-Validation, 3: ISE Cross-Validation, 4: Optimum Hellinger Distance
On the Selection of the Smoothing Parameter in Poisson Smoothing of Histogram Estimator: Computational Aspects

October 2009


In this paper the problem of selecting the smoothing parameter for the density estimator using the Poisson distribution (see Gawronski and Stadtmüller (1980) and Chaubey and Sen (1996)) is considered. Two cross-validation methods, namely likelihood-based cross-validation and integrated squared error cross-validation, are compared through a numerical study. It is found that the choice proposed in Chaubey and Sen (1996) may not be appropriate for large samples; instead, a data-adaptive choice works well for large as well as small samples. Based on this study we also claim that the smoothing parameters selected using either of the two cross-validation methods are asymptotically equivalent and seem to provide the smallest Hellinger distance between the estimator and the true density.

Measuring HR-Line Relationship Quality: A Construct and Its Validation

November 2012


Despite wide recognition of the importance of the relationship between HR professionals and line managers (the HR-line relationship) in the strategic HRM literature, no measurement construct is available that captures this complicated concept. We introduce a variable, HR-line relationship quality, and propose a measurement construct for it. The first dimension of the variable covers the traditional elements often measured in psychology and marketing. The second and third dimensions cover HR-specific elements based on the attitudes of HR professionals towards line managers and vice versa. The proposed construct is validated through an empirical survey of a sample of HRM specialists and line managers. The paper concludes that HR-line relationship quality can be measured with a 34-item construct. This construct can be used to quantify HR-line relationship quality, which would be helpful in understanding and managing the relationship for the success of strategic HRM.

On the efficiency of linear method

January 2004


In this paper an attempt is made to compare the efficiency of the linear method relative to the maximum likelihood (ML) method, with the help of computer-simulated data, in estimating the parameters of the two-parameter exponential distribution, a member of the location-scale family of distributions. Nine parameter combinations are investigated, each with five different sample sizes. The generalized variance (GV) due to S. S. Wilks [Biometrika 24, 471–494 (1932; Zbl 0006.02301)] is used to compare relative efficiency. It is observed that the linear method is on average as efficient as the ML method in terms of GV. The performance of the ML method is comparatively better for large samples.
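The GV criterion used in such comparisons is just the determinant of the covariance matrix of the joint estimates. A minimal simulation sketch of one side of the comparison (the ML estimators of the two-parameter exponential: location = smallest observation, scale = mean minus smallest; the linear/BLUE coefficients, which come from published tables, are not reproduced here):

```python
import random

def ml_two_param_exp(sample):
    """ML estimates for the two-parameter exponential:
    location = smallest observation, scale = mean minus smallest."""
    lo = min(sample)
    return lo, sum(sample) / len(sample) - lo

def generalized_variance(pairs):
    """Wilks' generalized variance: determinant of the 2x2 sample
    covariance matrix of the (location, scale) estimate pairs."""
    n = len(pairs)
    mx = sum(p[0] for p in pairs) / n
    my = sum(p[1] for p in pairs) / n
    sxx = sum((p[0] - mx) ** 2 for p in pairs) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in pairs) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pairs) / (n - 1)
    return sxx * syy - sxy ** 2

random.seed(1)
mu, theta, n, reps = 2.0, 1.0, 30, 2000  # illustrative parameter combination
est = [ml_two_param_exp([mu + random.expovariate(1 / theta) for _ in range(n)])
       for _ in range(reps)]
gv = generalized_variance(est)
```

Running the same loop with the linear (order-statistics-based) estimators and comparing the two GV values reproduces the kind of efficiency comparison the paper reports.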

The m-grouped cylindrically rotatable designs of types (1,0,m-1), (0,1,m- 1), (1,1,m-2) and (0,0,m)

January 1989


Cylindrically rotatable designs introduced by A. M. Herzberg [Ann. Math. Stat. 38, 167-176 (1967; Zbl 0171.170)] are generalized to m-grouped cylindrically rotatable designs in order to consider the factors partitioned into m sets (m≥2). By restricting the property of rotatability in these sets in different ways, these designs are classified into the m-grouped cylindrically rotatable designs of types (1,0,m-1), (0,1,m-1), (1,1,m-2) and (0,0,m). The conditions which these designs have to satisfy are derived through the moment generating function of the designs.

Factors that influence consumers' willingness to buy traceable agricultural foods in China: Based on the empirical analysis from 1573 consumers

January 2014


In the context of the agricultural food safety crisis, 1573 consumers from eight representative cities located in different parts of China were surveyed. This paper focuses on the factors that influence consumers' willingness to purchase traceable foods. Several meaningful results are obtained: (1) the majority of consumers lack basic knowledge and cognition of traceable foods, and some consumers still doubt the safety of traceable food; (2) knowledge of traceable food, attitudes toward food traceability, confidence in food safety, education, age, income, being the primary buyer of household food, experience of food safety incidents, and location are the determinant factors for consumers' intention to purchase traceable food; (3) consumers from Beijing, Shanghai and Xi'an may be more willing to buy traceable food because of the "3H" profile (high income, highly educated and high cognition of traceable food); (4) however, sex, age and health factors may affect purchase intention only to a limited extent.

Meta-analytical themes in the history of statistics: 1700 to 1938

January 2002


This note provides an overview of some aspects of the early history of statistics as a means to better understand current practice and theory of statistics, and in particular meta-analysis. Meta-analysis entails the quantitative investigation of replication of studies or observations made by different investigators under different conditions, and the subsequent quantification of the uncertainty that remains, given the degree of replication assessed. These were important problems in the late 1700s, and attracted the attention of very talented and thoughtful "meta-analysts" such as Laplace and Gauss. In the early 1900s, similar problems arose in medicine and agriculture, and were addressed by Pearson, Fisher and Cochran, perhaps all three drawing heavily on the earlier work of Laplace and Gauss. It is argued that likelihood (perhaps multiplied by prior probabilities) was, and is, the essential ingredient by which replication is investigated and the subsequent uncertainty best quantified. It is also argued that the underlying motivations for random effects models need to be critically and carefully considered. Lack of replication between studies (both apparent and real) can be quite common, and is often a hard thing to puzzle out.

An empirical evaluation of the accuracy of the Hartley and Rao (1962) variance approximation following PPS systematic sampling

October 2011


Hartley and Rao (1962) formalized a strategy of unequal probability systematic sampling combined with the Horvitz-Thompson estimator of the population total. They also provided an expression for the variance of the estimator, based on a large-population, large-sample approximation to the joint inclusion probabilities, and likewise offered an estimator of this variance. To our knowledge there is limited evidence regarding the effectiveness of these asymptotic results when sampling from small populations. Based upon simulated sampling from two populations, neither of which is regarded as very large, the performance of both the variance approximation and its estimator was examined. Both were found to be quite accurate.


REPRODUCTIVITY AND AGE-SPECIFIC FERTILITY RATES IN PAKISTAN AFTER 1981

July 2009


The purpose of this study is to observe the trends and patterns of reproductivity in Pakistan using secondary data on age-specific fertility rates after 1981; the fertility pattern in Pakistan since the 1960s is also described. To capture the modest change in reproductivity, the estimated total fertility rate, gross reproduction rate and mean age of childbearing are computed for the above-mentioned period, corresponding to the data available for the different years. Further, models have been fitted to the data on age-specific fertility rates and their forward and backward cumulative distributions. Finally, the cross-validity prediction power technique has been used to check the validity of these models.

On the possibility of extraneous maximum likelihood estimate of the parameter involved in 1st order autoregressive process

October 2010


Existence of a real solution to the maximum likelihood equations for the parameter of a first-order autoregressive process is ensured by rigorous application of the separation of roots. Situations leading to acceptable or extraneous solutions are demarcated, along with a numerical illustration.

Fixed Effects for Science and Math in Random Coefficient Model
Random Effects for Science and Math in Random Coefficient Model
Fixed and Random Effects for Science and Math in Means as Outcomes Model
The examination of teacher and student effectiveness at TIMSS 2011 science and math scores using multi-level models

December 2014


This study examines the effects of teachers' and students' features on students' science and mathematics performance, employing two-level hierarchical linear models using data from TIMSS 2011 conducted with Turkish 8th graders and their science and math teachers. The results showed a significant impact on both science and math scores of students' confidence in the lessons and teachers' career satisfaction. The study also revealed a relationship between students' performance, students' perception of science, and science teachers' confidence in their expertise. The impacts of these variables are discussed and implications for further research are provided.

A study on the critical success factors of ISO 22000 implementation in the hotel industry

December 2011


The hotel industry is taking various initiatives for the sake of food safety. In order to increase the food safety of food and beverage service, some hotels have gone one step further and adopted the internationally recognized ISO 22000 food safety management standards (FSMSs) or implemented a total quality management (TQM) approach. Few studies have been published on the evaluation of critical success factors (CSFs) for ISO 22000 implementation in the hotel industry. To address this gap in the literature, this paper provides assessment criteria for ISO 22000 implementation in the hotel industry using a modified Delphi method and the analytic hierarchy process (AHP). Based on these factors, 217 formal questionnaires were designed according to the AHP method and used in a survey of three types of stakeholders in 29 hotels across Taiwan: officials and team leaders of the hotel inspection and supervision center of the government tourism bureau; middle and top managers of the hotel front and back of the house; and first-line managers and employees. The 157 valid returned questionnaires were analyzed by AHP. The implications of the results for hotel managerial practice are described and future research opportunities are outlined.

Robust 2^k factorial designs: Non-normal symmetric distributions

January 2012


Şenoǧlu (2005, 2007a) considered estimation and hypothesis testing procedures for 2^k factorial designs when the error distributions are skew, obtained the modified maximum likelihood (MML) estimators of the model parameters, and proposed new test statistics based on them, showing via Monte Carlo simulation that these solutions are more efficient and robust than the classical solutions based on least squares (LS) estimators. In this study, we extend that work to non-normal symmetric error distributions and develop new test statistics for testing the main effects and the interactions. Moreover, a real-life example is given to illustrate the theoretical results.



Figure 1: The Processing Unit (Neuron)
Table 1: Comparison of Different Measurement Criteria
Figure 4: Mean Absolute Error vs. Root Mean Squared Error
Figure 5: Relative Absolute Error vs. Root Relative Squared Error
Figure 6: Performance Accuracies of the Five Compared Algorithms
New approach to improve anomaly detection using a neural network optimized by hybrid ABC and PSO algorithms

January 2018


Intrusion detection is one of the most significant concerns in network safety, and improvements have been proposed from various perspectives, one of which is improving the classification of packets in a network to determine whether they are harmful. In this study, we present a new approach for intrusion detection using a hybrid of the artificial bee colony (ABC) algorithm and particle swarm optimization (PSO) to train an artificial neural network with increased precision in separating malicious from harmless traffic in a network. Our proposed algorithm is compared with four other algorithms available in the WEKA tool, namely radial basis function, voted perceptron, logistic regression, and multilayer perceptron. The system is first prepared, and then suitable biases and weights for the feed-forward neural network are selected using the hybrid ABC-PSO algorithm. The network is then retrained using the prepared information, generated from the ideal weights and biases, to establish the intrusion-detection model. The KDD Cup 1999 data set is used in comparing our proposed algorithm with the other algorithms. The experiment shows that the proposed method outperforms the four classification algorithms and is suitable for network intrusion detection.

A recurrence relation of the Gini coefficient for the Abdalla and Hassan Lorenz curve using a property of the confluent hypergeometric series function

January 2004


I. M. Abdalla and M. Y. Hassan [Pak. J. Statist. 20(2), 277–286 (2004)] proposed a new parametric Lorenz curve and found the Gini coefficient associated with it. In this note we represent the Gini coefficient for the Abdalla and Hassan Lorenz curve in terms of the confluent hypergeometric series function. Using a property of this function, a recurrence relation for the Gini coefficient of the Abdalla and Hassan Lorenz curve is derived.
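For reference, the two standard definitions underlying such a derivation (reproduced here as a reader's aid; the paper's specific Lorenz curve and its recurrence are not restated):

```latex
G \;=\; 1 - 2\int_0^1 L(p)\,dp ,
\qquad
{}_1F_1(a;b;z) \;=\; \sum_{n=0}^{\infty} \frac{(a)_n}{(b)_n}\,\frac{z^n}{n!},
\quad (a)_n = a(a+1)\cdots(a+n-1).
```

Substituting a parametric Lorenz curve L(p) into the first relation and integrating term by term is what yields Gini coefficients expressed through the Kummer function.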

A FAMILY OF ABEL SERIES DISTRIBUTIONS OF ORDER k

July 2008


In this paper we consider a class of univariate discrete distributions of order k, the Abel Series Distributions of order k (ASD(k)), generated by suitable functions of real-valued parameters in the Abel polynomials. A new distribution called the Quasi Logarithmic Series Distribution of order k (QLSD(k)) is derived from ASD(k), and many other distributions, viz. the Quasi Binomial distribution of order k (QBD(k)), the Generalized Poisson distribution of order k (GPD(k)) and the Quasi Negative Binomial distribution of order k (QNBD(k)), are derived as particular cases of ASDs of order k. Some properties are also discussed.

A nearly pseudo-optimal method for keeping calibration weights from falling below unity in the absence of nonresponse or frame errors

October 2011


Calibration weighting is a technique for adjusting randomization-based weights so that the estimator of a population total becomes unbiased under a linear prediction model. In the absence of nonresponse or frame errors, one set of calibration weights has been shown to be asymptotically optimal in some sense under Poisson sampling. Unfortunately, although it is desirable that each weight be at least one (so that the corresponding element "represents itself"), there is no guarantee that will be the case. We will see how to construct an asymptotically equivalent set of weights so that no weight is less than unity. One consequence is that it often will be a simple matter to construct a variance measure that simultaneously estimates the prediction variance and the randomization mean squared error of the estimator.


Statistical models of accelerated life testing

January 2001


Accelerated life testing is an active research topic in the statistical literature. It is of significant interest to both users and producers of mechanical equipment and devices that are subject to failure due to slow- or medium-rate damage processes such as wear, fatigue and creep. In-service data on time to failure can be generated only after prolonged operation or testing of the equipment under normal operating conditions; this testing time can be substantially reduced by adopting accelerated life testing methods. Data generated at low cost and in little time can thus be used to evaluate the reliability of equipment or machine elements in their normal working environment. In this survey paper an extensive review of the statistical models used in accelerated life testing is presented, along with a number of references on the topic, particularly related to engineering and the applied sciences.

Step-stress partially accelerated life testing plan for competing risks using adaptive Type-I progressive hybrid censoring

July 2017


This study deals with estimating information about the failure times of items under step-stress partially accelerated life tests for competing risks, based on an adaptive Type-I progressive hybrid censoring criterion. The lifetimes of the units under test are assumed to follow the Weibull distribution. Point and interval maximum likelihood estimates are obtained for the distribution parameters and the tampering coefficient. The performance of the resulting estimators of the developed model parameters is evaluated and investigated using a simulation algorithm.

Chinese users’ acceptance of medical health websites based on TAM

November 2014


Compared with the traditional model of health service, internet health service has distinct advantages and a promising future in the 21st century. But can medical websites satisfy users’ requirements, and is the service they provide useful and reliable? We need to objectively analyze the current situation in China. This paper employs the Technology Acceptance Model (TAM) to analyze users’ perception of medical websites and their attitudes toward using them. After systematically summarizing and analyzing the theories and empirical studies of previous researchers, the author adjusted TAM to the practical situation of Chinese internet health service, introduced factors influencing users’ perceived usefulness and perceived ease of use as external variables, and thereby set up the theoretical framework of this paper. Unlike former research conducted from the perspective of the technical development of internet health service, this paper carries out a relatively systematic analysis of user requirements and the service level of internet health service, and reaches conclusions that accord with expectations.

Acceptance sampling plans based on truncated life tests for the Maxwell distribution

April 2011


In this paper, acceptance sampling plans for the Maxwell distribution are developed for life tests truncated at a pre-fixed time. The minimum sample sizes necessary to ensure the specified median lifetime, the operating characteristic function values of the sampling plans, and the producer's risk are presented. An algorithm is provided to establish the sampling plans for other quality parameters. The results are explained with tables and examples.
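A minimal sketch of how such minimum sample sizes are typically found in truncated life-test plans: accept the lot if at most c items fail by the truncation time t, and search for the smallest n whose acceptance probability, evaluated at the specified median, falls below 1 − P*. This generic binomial construction is an assumption standing in for the paper's exact tables, which may use different acceptance-number or risk conventions.

```python
import math
from math import comb, erf, exp, sqrt

def maxwell_cdf(x, a):
    """Maxwell distribution CDF with scale parameter a."""
    if x <= 0:
        return 0.0
    z = x / a
    return erf(z / sqrt(2)) - sqrt(2 / math.pi) * z * exp(-z * z / 2)

def scale_for_median(m):
    """Scale a whose Maxwell median equals m, by bisection
    (the CDF at fixed x decreases as a increases)."""
    lo, hi = 1e-9, 10.0 * m
    for _ in range(200):
        mid = (lo + hi) / 2
        if maxwell_cdf(m, mid) > 0.5:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def min_sample_size(t, median0, c, pstar):
    """Smallest n such that, with acceptance number c and test truncated
    at t, the acceptance probability is <= 1 - pstar when the true
    median equals the specified median0."""
    p = maxwell_cdf(t, scale_for_median(median0))  # P(fail by t)
    for n in range(c + 1, 1000):
        accept = sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
                     for i in range(c + 1))
        if accept <= 1 - pstar:
            return n
    return None
```

With t equal to the specified median and c = 0, p = 0.5 and the search reduces to the smallest n with 0.5**n below the allowed risk.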

Hierarchical Poisson regression modeling of road traffic accidents

January 2002


In this study, we consider the problem of modeling and analyzing the number of traffic accidents and the number of accident injuries in two Gulf countries. A hierarchical Poisson regression model is fitted to real-life data. The number of registered vehicles and the number of vehicle-kilometers are used as fixed covariates. Estimates of the individual parameters and the structural regression coefficients are obtained via Gibbs sampling implemented using the BUGS software.

Molecular nitrogen genesis in natural gases and relationship with gas accumulation history in Tarim basin

December 2014


The molecular nitrogen (N2) content of marine sapropelic-type gas from the northern and central parts of the Tarim basin is high, especially for wet gas, whose N2 content ranges from 10.1% to 36.2%, while that of dry gas is less than 10%; that is, the N2 content of wet gas is higher than that of dry gas. Why is there such a large difference in N2 content between wet gas and dry gas derived from the same type of source rock, and what relationship does it have with gas migration and accumulation? Based on the composition and isotope geochemical characteristics of associated gas and non-hydrocarbon gas, as well as rare gases in natural gas, the genesis and origin of the N2 are ascertained in this paper. The study shows that natural gas of middle-to-high N2 content is of organic genesis, originating from lower Paleozoic marine hydrocarbon source rock. It is pointed out that the N2 content difference between wet and dry gas is closely related to the source rock, its maturity and the natural gas accumulation history: for the same type of source rock, the N2 content in gas is controlled by source rock maturity and entrapment conditions.


On the accuracy of some estimators in the presence of measurement error

January 2001


When the observed values for the sample elements are contaminated with measurement errors, the desirable properties of the estimators are adversely affected. In this paper, we study the effect of such errors on the accuracy (in terms of bias and precision) of some standard survey estimates under a superpopulation model.

On the asymptotic accuracy of the bootstrap with random sample size

January 1998


The purpose of this paper is to study the rate of convergence of bootstrap estimates with a random sample size.

Testing for differences in predictive accuracy

October 2009


In this paper we provide new tests for the difference in predictive accuracy of two prognostic factors X1 and X2 on a common output Y. Given a set of independent replicates of (X1, X2, Y), we split this sample into a learning part for estimating the unknown regression functions and a validation part on which the residuals are computed. We show that the null distributions of our test statistics may be approximated by a normal distribution. In simulations, the power is promising already for small to moderate sample sizes. Extensions to the time series context are briefly outlined.
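The sample-splitting idea can be sketched as follows: fit one regression per prognostic factor on the learning half, then apply a normal approximation to the mean difference of squared residuals on the validation half. The simple linear fits and the plain z-statistic below are illustrative assumptions; the paper's estimators and test statistics may differ.

```python
import math
import random
from statistics import mean, stdev

def fit_simple_ols(xs, ys):
    """Least-squares intercept and slope for a simple linear regression."""
    mx, my = mean(xs), mean(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def accuracy_z_test(x1, x2, y, split=0.5):
    """Compare predictive accuracy of X1 vs X2 for Y via sample splitting:
    fit on the learning half, z-test the mean difference of squared
    residuals on the validation half (normal approximation)."""
    m = int(len(y) * split)
    a1, b1 = fit_simple_ols(x1[:m], y[:m])
    a2, b2 = fit_simple_ols(x2[:m], y[:m])
    d = [(yv - a1 - b1 * u) ** 2 - (yv - a2 - b2 * v) ** 2
         for u, v, yv in zip(x1[m:], x2[m:], y[m:])]
    return mean(d) / (stdev(d) / math.sqrt(len(d)))

random.seed(3)
n = 400
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]
y = [a + random.gauss(0, 0.3) for a in x1]  # X1 is the truly informative factor
z = accuracy_z_test(x1, x2, y)
```

A large negative z here favors X1 (its squared residuals are systematically smaller); |z| > 1.96 indicates a difference at the 5% level under the normal approximation.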

Outliers in educational achievement data: Their potential for the improvement of performance

January 2014


Statistical outliers (Barnett and Lewis, 1994) have customarily been identified either as potential threats to data integrity (Cook, 1977) or as potential distortions of estimates of central tendency (Strutz, 2010); in both circumstances outliers are treated as having nuisance value. Gladwell (2008), on the other hand, views outliers as success stories. This study adopts the latter approach to identify classrooms and teachers that add value to students' academic capabilities. Scores of 86,207 students from 423 government schools in the 2010 middle school (Grade VI to VIII) promotion examinations are analysed to explore the possibility of identifying gifted teachers. The study reports the successful use of the non-parametric box plot and quartile rule to identify outstanding student performance. This performance is independent of overall school achievement, school size and student gender, and is attributable to either individual or combined teaching effort.
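The box-plot/quartile rule used to single out exceptional performance can be sketched in a few lines: flag any score above Q3 + 1.5·IQR. The toy marks below are illustrative, not the study's data.

```python
from statistics import quantiles

def upper_outliers(scores):
    """Flag scores above the upper box-plot fence Q3 + 1.5*IQR --
    i.e. exceptionally high performance rather than nuisance values."""
    q1, _, q3 = quantiles(scores, n=4)  # default 'exclusive' quartiles
    fence = q3 + 1.5 * (q3 - q1)
    return [s for s in scores if s > fence]

marks = [52, 55, 57, 58, 60, 61, 61, 63, 64, 66, 95]
flagged = upper_outliers(marks)
```

Applied per subject and per classroom, the scores flagged this way point to the classrooms (and hence teachers) whose results stand clear of the rest of the distribution.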

Effect of an ontology-based reasoning learning approach on cognitive load and learning achievement of secondary school students

December 2013


Previous studies show that the concept-mapping approach can organize students' cognitive schemas, decrease students' cognitive load and help promote meaningful learning in educational settings. However, concept mapping has limitations in representing complex concepts and domain knowledge. Thus, in this study we proposed and implemented an ontology-based reasoning learning system. To examine its effectiveness, a quasi-experimental design with three groups was conducted: a total of ninety-five seventh graders were randomly distributed into one experimental group and two control groups. The experimental group of 31 students used the ontology-based reasoning approach, the first control group of 32 students used an ontology learning approach, while the second control group of 32 students was guided by a concept-map learning approach. The experimental findings show that the proposed approach helps learners correct misconceptions and improve their learning performance; moreover, it reduces students' cognitive load in the learning process. Relevant suggestions are offered based on the results of the study.

The effects of science writing heuristic on pre-service teachers’ achievement and science process skills in general physics laboratory course

December 2014


This study was conducted to identify the effects of using a traditional laboratory report format versus a Science Writing Heuristic (SWH) student template on pre-service teachers' achievement and science process skills in the General Physics Laboratory-I course. A pretest-posttest non-equivalent control group model was used with two experimental groups and one control group, for a total of 90 participating pre-service teachers. The SWH template was used in one experimental group, the SWH template with a peer assessment form in the other experimental group, and a traditional laboratory report format in the control group. Achievement and science process skills were examined in the experimental and control groups during the pretest and posttest. According to the findings, a significant difference in average achievement and science process skill scores was found between the groups to which the SWH was applied and the group to which it was not, in favor of the experimental groups. Although there were no significant differences between the group using peer assessment and the group without it, the group using peer assessment reported higher evaluation scores on the SWH student template.

A statistical model for annotating videos with human actions

March 2016

·

85 Reads

This contribution addresses the recognition of single and multiple human actions in video streams, introducing a novel action recognition algorithm with normalization enhancements. Feature vectors are first extracted using 2D SIFT features. The Bag-of-Words model is then extended with a new normalization technique on the visual vocabulary, equalizing the dimensions so that actions are easier to represent and extract. This normalization substantially improves on state-of-the-art methods. An HMM-based model is developed for training and testing on the six basic actions in the KTH human action dataset. Comparison with previously applied models shows that our approach improves the accuracy of existing action recognition methods.
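As an illustrative sketch of the normalization idea (the paper's exact scheme is not reproduced here, and the function name and unit-mass normalization are hypothetical choices), the following Python snippet builds fixed-dimension Bag-of-Words histograms from the visual-word labels assigned to each video's local descriptors:

```python
import numpy as np

def bow_histograms(descriptor_labels, vocab_size):
    """Build one Bag-of-Words histogram per video from the visual-word
    labels assigned to its local (e.g. SIFT) descriptors, then normalize
    to unit mass so every video has the same fixed dimension regardless
    of how many descriptors it produced."""
    hists = []
    for labels in descriptor_labels:
        h = np.bincount(labels, minlength=vocab_size).astype(float)
        if h.sum() > 0:
            h /= h.sum()          # unit-mass normalization
        hists.append(h)
    return np.vstack(hists)

# two videos with different numbers of descriptors, vocabulary of 4 words
videos = [np.array([0, 1, 1, 3]), np.array([2, 2, 2, 0, 1, 3])]
H = bow_histograms(videos, vocab_size=4)
print(H.shape)   # (2, 4): equal dimensions for downstream HMM training
```

The key point is that videos of different lengths yield vectors of identical dimension, which is what makes them usable as observation inputs for a common model.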

Generalized likelihood summary in actuarial prediction problems

January 2001

·

3 Reads

A generalized likelihood summary function is presented for prediction in actuarial problems. Once the summary function is available, the actuary has all the information needed to decide insurance premiums. For instance, the summary function can be combined with prior information on the underlying parameters to obtain estimates of the pure premium. In this paper we show how the idea of the generalized likelihood summary function can be applied to actuarial rate-making problems.
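As a hedged illustration of combining likelihood information with a prior to estimate a pure premium, the sketch below uses a standard Gamma-Poisson credibility model; this is a textbook stand-in, not the paper's generalized summary function, and the function name and parameter values are hypothetical:

```python
def pure_premium_gamma_poisson(claim_counts, severity, a, b):
    """Posterior-mean pure premium assuming Poisson claim counts with a
    Gamma(a, b) prior on the claim rate. By conjugacy the posterior rate
    is Gamma(a + total claims, b + exposure years); its mean times the
    expected severity gives the pure premium."""
    n = len(claim_counts)          # exposure years observed
    total = sum(claim_counts)      # total claims observed
    post_mean_rate = (a + total) / (b + n)
    return post_mean_rate * severity

# five policy-years of claim counts, average claim cost 1000
premium = pure_premium_gamma_poisson([0, 1, 0, 2, 1], severity=1000.0, a=2.0, b=4.0)
print(round(premium, 2))   # (2 + 4) / (4 + 5) * 1000 = 666.67
```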

Detecting groups of influential observations in linear regression using survey data-adapting the forward search method

October 2011

·

359 Reads

The forward search is an effective and efficient approach, when analyzing non-survey data, for detecting a group of influential observations that would greatly affect the regression estimates if removed from the model fitting. It avoids masking effects among the outliers and automatically identifies influential points. Compared to multiple-case deletion diagnostics, it also reduces the computational burden, especially when the dataset is very large. In this research we adapt the forward search to linear regression diagnostics for some types of complex survey data. While keeping the existing advantages of the method, we incorporate sample weights and the effects of stratification. A case study illustrates the advantages of the adapted method.
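A minimal sketch of a forward search that carries sample weights, assuming simple WLS refits and an initial subset chosen by ordinary least-squares residuals (the function and entry rule are illustrative simplifications, not the paper's exact algorithm, which also handles stratification):

```python
import numpy as np

def forward_search_wls(X, y, w, m0):
    """Start from the m0 observations with the smallest OLS squared
    residuals, then repeatedly refit weighted least squares on the
    current subset and add the excluded unit with the smallest weighted
    squared residual. Returns the order in which observations enter:
    gross outliers enter last."""
    n = X.shape[0]
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    r2 = (y - X @ beta) ** 2
    subset = list(np.argsort(r2)[:m0])
    order = list(subset)
    while len(subset) < n:
        Xs, ys, ws = X[subset], y[subset], w[subset]
        sw = np.sqrt(ws)
        beta = np.linalg.lstsq(Xs * sw[:, None], ys * sw, rcond=None)[0]
        resid2 = w * (y - X @ beta) ** 2      # weighted squared residuals
        out = [i for i in range(n) if i not in subset]
        nxt = min(out, key=lambda i: resid2[i])
        subset.append(nxt)
        order.append(nxt)
    return order
```

On a toy dataset with one gross outlier, the outlier is the last observation to enter, and monitoring the fitted coefficients along the search reveals the sudden jump that flags the influential group.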

Resampling method for the data adaptive choice of tuning constant in robust regression

January 2015

·

67 Reads

Robust regression estimators are designed to fit data sets contaminated with outliers. A common problem in fully specifying a robust regression estimator is the proper choice of the cut-off point, called the tuning constant. The choice of the tuning constant is somewhat arbitrary and largely a matter of personal preference. Several authors have suggested different values of the tuning constant 'c' for various M-estimators. Yohai (1974) considered a class of error distributions in the linear regression model and showed how to choose c for Huber's robust regression estimator so that the resulting estimator was minimax over that class. Kelly (1992), with a different approach, showed that the choice of tuning constant is critical in the trade-off between bias and variance, and suggested choosing the tuning constant by minimizing the jackknife asymptotic mean-square error of the estimator. Salahuddin (1990) used leave-one-out cross-validation to choose an optimal value of the tuning constant for Andrews' wave estimator. We propose a K-fold cross-validation procedure that chooses the tuning constant by minimizing the cross-validated median absolute predicted residual. Furthermore, we investigate a suitable preliminary resistant estimator to arrive at a good robust fit. The Andrews, Tukey, and Qadir and Asad robust regression estimators are explored and compared. The study found that the proposed technique works well.
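A minimal sketch of the proposed idea, assuming a Huber M-estimator fitted by iteratively reweighted least squares and a small illustrative grid of tuning constants (the estimator, grid, and fold scheme are hypothetical choices, not the paper's exact procedure):

```python
import numpy as np

def huber_fit(X, y, c, iters=50):
    """Huber M-estimator fitted by iteratively reweighted least squares,
    with the scale re-estimated at each iteration by the MAD."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # MAD scale
        u = np.abs(r) / s
        w = np.minimum(1.0, c / np.maximum(u, 1e-12))             # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

def cv_tuning_constant(X, y, grid, K=5, seed=0):
    """Choose c from `grid` by K-fold cross-validation, minimizing the
    cross-validated median absolute predicted residual."""
    n = len(y)
    folds = np.random.default_rng(seed).permutation(n) % K
    scores = []
    for c in grid:
        resid = np.empty(n)
        for k in range(K):
            tr, te = folds != k, folds == k
            beta = huber_fit(X[tr], y[tr], c)
            resid[te] = y[te] - X[te] @ beta
        scores.append(np.median(np.abs(resid)))
    return grid[int(np.argmin(scores))]
```

Using the median of the absolute predicted residuals, rather than their mean square, keeps the selection criterion itself resistant to the same outliers the estimator is meant to guard against.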

Exploring the correlation between serious leisure and recreation specialization during role transition of food critics using addictive tendency as a moderator

December 2011

·

20 Reads

With the development of modern food culture, the number of food critics and food review commentators actively participating in various kinds of food review programs is increasing rapidly. Food critics are keen to discover new tastes and new dining experiences, and they love to make comparisons among gourmet foods. Their involvement in food review programs ranges from a low-involvement pursuit of good-tasting food to high-level involvement in which the critics conduct their own research. Viewed another way, through a series of transitions toward high-level, specialized involvement in food review programs, their behaviour moves beyond simple participation in a leisure activity to become a new focus of life. Our study was based on the theories of Stebbins (1982) and Bryan (1977), who proposed that any recreation will eventually become a specialization. We analyzed the involvement of food critics in food review programs and explored the causal relationships among "serious leisure", "addictive tendency" and "recreation specialization". We also examined the moderating effect of "addictive tendency" on the role transition of program participants. We conducted a questionnaire survey of food columnists, bloggers, and amateur commentators who had participated in food tasting and dining programs. Our analysis shows that the more notable the characteristics of "serious leisure" reflected by program participants, the higher the level of recreation specialization demonstrated in their behaviour. Also, guest commentators in food review programs can regulate their addictive tendency during the transition process in order to attain a higher level of recreation specialization.

On optimum invariant test of independence of two sets of variates with additional information on covariance matrix

January 1992

·

3 Reads

A test of independence of two sets of variates has been considered under the assumption that only a part of the covariance matrix is known. This has been interpreted as a testing problem with incomplete data. The likelihood ratio test (LRT) for the problem was obtained by I. Olkin and M. Sylvan [Multivariate Anal. IV, Proc. 4th int. Symp., Dayton 1975, 175-191 (1977; Zbl 0364.62053)]. We derive an optimum invariant test which is LMPI and locally minimax but which is not the LRT. However, in a special situation the LRT is shown to be UMPI.

Coding theorems on new additive information measure of order α

March 2018

·

11 Reads

In this article we develop a new additive information measure of order α and a new average code-word length, and establish noiseless coding theorems for a discrete channel. We also show that the measures defined in this communication generalize some well-known measures in coding and information theory. The results obtained in this article are verified for Huffman and Shannon-Fano coding schemes using empirical data. The important properties of the new information measure are also studied.
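As a concrete check of the classical case that the order-α measures generalize, the following sketch builds a binary Huffman code and verifies that its average code-word length lies between the Shannon entropy H and H + 1, the classical noiseless coding bound (the helper name is illustrative; the paper's order-α quantities would replace the Shannon ones):

```python
import heapq
import math

def huffman_lengths(probs):
    """Codeword lengths of a binary Huffman code for a probability
    distribution, via the standard merge-two-least-probable construction."""
    lengths = [0] * len(probs)
    heap = [(p, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)      # two least probable groups
        p2, s2 = heapq.heappop(heap)
        for i in s1 + s2:
            lengths[i] += 1               # one more bit for every member
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    return lengths

probs = [0.4, 0.3, 0.2, 0.1]
L = huffman_lengths(probs)
avg = sum(p * l for p, l in zip(probs, L))    # average code-word length
H = -sum(p * math.log2(p) for p in probs)     # Shannon entropy (bits)
print(L, H <= avg < H + 1)
```

For this distribution the Huffman lengths are [1, 2, 3, 3], giving an average length of 1.9 bits against an entropy of about 1.85 bits, so the bound H ≤ L̄ < H + 1 holds.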

A quantitative study of school administrators' work-life balance and job satisfaction in public schools

December 2014

·

3,132 Reads

The main purpose of this paper is to examine the level of school administrators' work-life balance and job satisfaction. In this context, the connection between school administrators' work-life balance and job satisfaction was investigated. In the present study, 139 school administrators were contacted using a random sampling method. Descriptive statistics were used to identify participants' opinions. Additionally, regression analysis was used to establish whether work-life balance significantly predicts job satisfaction. According to the results, school administrators' levels of job satisfaction were generally high, while the balance between their work and family life was at a medium level. Finally, the regression analysis established a meaningful but low-level correlation between school administrators' work-life balance and job satisfaction.

On the construction of admissible tests for the exponential distribution with numerical applications

January 1995

·

4 Reads

We use the method presented by the first author and J. I. Marden [Statistical decision theory and related topics IV, Pap. 4th Purdue Symp., West Lafayette/Indiana 1986, Vol. 2, 225-237 (1988; Zbl 0688.62009)] to construct an admissible test which dominates a given inadmissible one for testing two-sided hypotheses on the geometric distribution. We give some numerical computations to verify that the constructed test actually dominates the given one. We also include an algorithm which can be used to construct such a test numerically.

An empirical study on online group buying (OGB) adoption behavior in China

November 2014

·

215 Reads

The purpose of this study is to propose a theoretical model to examine the crucial factors affecting OGB adoption in China by integrating the technology acceptance model (TAM), the theory of planned behavior (TPB) and other influence factors. The influence factors are classified into three types in the theoretical model: technological acceptance factors, social factors and psychological factors. A questionnaire survey and statistical analysis were then used, and 232 valid samples were collected. The research model and hypotheses were tested with SPSS 17.0 software. The results show that perceived usefulness, perceived ease of use, subjective norm, word-of-mouth, perceived low price, and perceived playfulness have positive influences on consumers' adoption intention towards OGB in China. However, the results demonstrate that perceived low price has no significant influence on consumers' perceived risk. Similarly, the relationship between perceived risk and customers' adoption intention towards OGB is not supported either. Overall, this study extends the application of TAM and TPB to OGB adoption behavior in developing countries. The findings also contribute significantly to the understanding of the factors influencing consumers' adoption intention towards OGB in China and provide specific directions for online group buying websites to improve their service and enhance their competitiveness.

Strategy of board structure for financial performance - the empirical analysis of Chinese companies with issued ADRs

December 2012

·

10 Reads

The discussion of Boards of Directors' responsibilities is not novel. Nevertheless, addressing the financial performance of Chinese companies' multinational business is worth exploring. In this study, financial performance was measured from financial statements, using ROA (return on assets) and ROE (return on equity). The findings indicated that some directors' characteristics were significant explanatory variables of financial performance. An improved board structure could therefore lead to improved financial performance.

Top-cited authors