José A. Pino’s research while affiliated with University of Chile and other places
We present a methodology to handle the problem of planning sales goals. The methodology supports the retail manager in carrying out simulations to find the most plausible goals for the future. One of the novel aspects of this methodology is that the analysis is based not on current sales levels, as in most previous works, but on future ones, making the analysis of the situation more precise and accurate. The work presents the solution for a scenario using three sales performance indicators: foot traffic, conversion rate, and mean ticket value, but it explains how the approach can be generalized to more indicators. The contribution of this work is, in the first place, a framework consisting of a methodology for performing sales planning; then an algorithm that finds the best prediction model for a particular store; and finally a tool that helps sales planners set realistic sales goals based on the predicted sales. First we present the method to choose the best indicator prediction model for each retail store, and then we present a tool which allows the retail manager to estimate the improvements in the indicators needed to attain a desired sales goal; managers may then perform several simulations for various scenarios in a fast and efficient way. The tool implementing this methodology was validated by experts in retail store administration, yielding good results.
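As a rough illustration of the kind of what-if simulation such a tool supports, the sketch below assumes the common retail decomposition sales = foot traffic × conversion rate × mean ticket value. The function and variable names are illustrative only and are not taken from the authors' tool.

```python
from dataclasses import dataclass

@dataclass
class Indicators:
    foot_traffic: float      # expected visitors in the planning period
    conversion_rate: float   # fraction of visitors who buy
    mean_ticket: float       # average value of a purchase

    def sales(self) -> float:
        # Common retail decomposition: sales = traffic * conversion * ticket
        return self.foot_traffic * self.conversion_rate * self.mean_ticket

def uniform_improvement(predicted: Indicators, sales_goal: float) -> float:
    """Relative improvement r to apply to every indicator so that
    (1 + r)^3 * predicted_sales reaches the goal."""
    return (sales_goal / predicted.sales()) ** (1 / 3) - 1

def simulate(predicted: Indicators, improvements: dict) -> float:
    """What-if simulation: apply per-indicator relative improvements."""
    adjusted = Indicators(
        predicted.foot_traffic * (1 + improvements.get("foot_traffic", 0.0)),
        predicted.conversion_rate * (1 + improvements.get("conversion_rate", 0.0)),
        predicted.mean_ticket * (1 + improvements.get("mean_ticket", 0.0)),
    )
    return adjusted.sales()

if __name__ == "__main__":
    predicted = Indicators(foot_traffic=12_000, conversion_rate=0.22, mean_ticket=35.0)
    goal = 110_000.0
    r = uniform_improvement(predicted, goal)
    scenario = simulate(predicted, {"conversion_rate": 0.15})
    print(f"Baseline predicted sales: {predicted.sales():,.0f}")
    print(f"Uniform improvement needed on each indicator: {r:.2%}")
    print(f"Scenario (boost conversion rate only by 15%): {scenario:,.0f}")
```

A planner can run `simulate` repeatedly with different improvement mixes to compare scenarios against the desired goal, which is the kind of fast iteration the abstract describes.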
Purpose
Business process modeling faces a difficult balance: on the one hand, organizations seek to enact, control and automate business processes through formal structures (procedures and rules). On the other hand, organizations also seek to embrace flexibility, change, innovation, value orientation, and dynamic capabilities, which require informal structures (unique user experiences). Addressing this difficulty, the authors propose the composite approach, which integrates formal and informal process structures. The composite approach adopts a socio-material conceptual lens, where both material and human agencies are supported.
Design/methodology/approach
The study follows a design science research methodology. An innovative artifact – the composite approach – is introduced. The composite approach is evaluated in an empirical experiment.
Findings
The experimental results show that the composite approach improves model understandability and situation understandability.
Research limitations/implications
This research explores the challenges and opportunities brought by adopting a socio-material conceptual lens to represent business processes.
Originality/value
The study contributes an innovative hybrid approach for modeling business processes, articulating coordination and contextual knowledge. The proposed approach can be used to improve model understandability and situation understandability. The study also extends the socio-material conceptual lens over process modeling with a theoretical framework integrating coordination and contextual knowledge.
Virtual Reality has been successfully used to implement highly motivating learning environments where learners interact with the represented environment using hand gestures. However, little has been done to study which gestures are most convenient for implementing this interaction and thus to discover good HCI practices. This investigation proposes a guideline for gesture evaluation based on user experience, in the scenario of small children learning words of a foreign language. In particular, it provides a starting point by contrasting two gestures per action for selecting, moving, and inspecting objects in a virtual environment. The study found that these gestures were not interchangeable: children showed a clear preference for each action according to several indicators, which enabled us to establish a ranking of gesture preference for each action. Further research may expand the experiment by including additional gestures and actions.
To create a geo-collaborative hyperstory, physical areas associated with data and multimedia content are geolocalized over a map, from which links to other areas can be generated; these links define paths of exploration of the hypernarrative. In this work in progress, we aim to facilitate the creation of geo-collaborative hyperstories by redesigning the HCI of an existing application using implicit HCI principles. Implicit HCI (iHCI) advocates using the user's context information to anticipate the actions they want to perform, facilitating interaction and alleviating their cognitive load. iHCI has usually been applied to single-user interaction; therefore, we explore ways to extend its reach by taking contextual information from a group that works collaboratively. The result is a redesign proposal for six frequent tasks in creating and reading a hypernarrative, described according to the existing literature on iHCI. Keywords: Geocollaboration, Hyperstories, Implicit human-computer interface
This paper presents a survey of innovative concepts and technologies involved in virtual museums (ViM) that shows their advantages and disadvantages in comparison with physical museums. We describe important lessons learned during the creation of three major virtual museums between 2010 and 2020 with partners at universities from Armenia, Germany, and Chile. Based on their categories and features, we distinguish between content-, communication- and collaboration-centric museums with a special focus on learning and co-curation. We give an overview of a generative approach to ViMs using the ViMCOX metadata format, the curator software suite ViMEDEAS, and a comprehensive validation and verification management. Theoretical considerations include exhibition design and new room concepts, positioning objects in their context, artwork authenticity, digital instances and rights management, distributed items, private museum and universal access, immersion, and tour and interaction design for people of all ages. As a result, this survey identifies different approaches and advocates for stakeholders’ collaboration throughout the life cycle in determining the ViM's direction and evolution, its concepts, collection type, and the technologies used with their requirements and evaluation methods. The paper ends with a brief perspective on the use of artificial intelligence in ViMs.
Modern technologies and various domains of human activity increasingly rely on data science to develop smarter and autonomous systems. This trend has already changed the whole landscape of the global economy, which is becoming more AI-driven. Massive production of data by humans and machines, its availability for feasible processing with the advent of deep learning infrastructures, combined with advancements in reliable information transfer capacities, open unbounded horizons for societal progress in the near future. Quite naturally, this also brings new challenges for science and industry.
In that context, the Internet of Things (IoT) is an enormous factory of monitoring and data generation. It enables countless devices to act as sensors which record and manipulate data, while requiring efficient algorithms to derive actionable knowledge. Billions of end users equipped with smart mobile phones are also producing immensely large volumes of data, be it about user interaction or indirect telemetry such as location coordinates. Social networks represent another kind of data-intensive source, with both structured and unstructured components, containing valuable information about the world's connectivity, dynamism, and more. Last but not least, to help businesses run smoothly, today's cloud computing infrastructures and applications are also serviced and managed by measuring huge amounts of data, which is leveraged in various predictive and automation tasks for healthy performance and permanent availability. Therefore, experts and practitioners in all these technology areas face innovation challenges in building novel methodologies, accurate models, and systems for data-driven solutions that are effective and efficient. In view of the complexity of contemporary neural network architectures and the models with millions of parameters they derive, one such challenge is related to the explainability of machine learning models: the ability of a model to give information, interpretable by humans, about the reasons for a decision made or a recommendation released. These challenges can only be met with a mix of basic research, process modeling and simulation under uncertainty using qualitative and quantitative methods from the involved sciences, taking into account international standards and adequate evaluation methods.
Based on a successful funded collaboration between the American University of Armenia, the University of Duisburg-Essen and the University of Chile, a network was built in previous years, and in September 2020 a group of researchers gathered (although virtually) for the 2nd CODASSCA workshop on “Collaborative Technologies and Data Science in Smart City Applications”. The event attracted 25 paper submissions dealing with the problems and challenges mentioned above. The studies address specialized areas and disclose novel solutions and approaches based on existing theories suitably applied.
The authors of the best papers published in the conference proceedings on Collaborative Technologies and Data Science in Artificial Intelligence Applications (Logos edition, Berlin) were invited to submit significantly extended and improved versions of their contributions to be considered for a journal special issue of J.UCS. There was also an open J.UCS call so that any author could submit papers on the highlighted subject. For this volume, we selected those dealing with more theoretical issues; they were rigorously reviewed in three rounds, and six papers were selected for publication.
The editors would like to express their gratitude to the J.UCS foundation for accepting the special issue in their journal, and to the German Research Foundation (DFG), the German Academic Exchange Service (DAAD), and the universities and sponsors involved for funding the common activities. They also thank the editors of the CODASSCA 2020 proceedings for their ongoing encouragement and support, the authors for their contributions, and the anonymous reviewers for their invaluable support.
The paper “Incident Management for Explainable and Automated Root Cause Analysis in Cloud Data Centers” by Arnak Poghosyan, Ashot Harutyunyan, Naira Grigoryan, and Nicholas Kushmerick addresses a problem of increasing importance on the way towards autonomous or self-X systems: the intelligent management of modern cloud environments, with an emphasis on explainable AI. It demonstrates techniques and methods that greatly help in the automated discovery of explicit conditions leading to data center incidents.
The paper “Temporal Accelerators: Unleashing the Potential of Embedded FPGAs” by Christopher Cichiwskyj and Gregor Schiele presents an approach for executing computational tasks that can be split into sequential sub-tasks. It divides accelerators into multiple, smaller parts and uses the reconfiguration capabilities of the FPGA to execute the parts according to a task graph. That improves the energy consumption and the cost of using FPGAs in IoT devices.
The paper “On Recurrent Neural Network based Theorem Prover for First Order Minimal Logic” by Ashot Baghdasaryan and Hovhannes Bolibekyan investigates using recurrent neural networks to determine the order of proof search in a sequent calculus for first-order minimal logic with a history mechanism. It demonstrates reduced durations in automated theorem proving systems.
The paper “Incremental Autoencoders for Text Streams Clustering in Social Networks” by Amal Rekik and Salma Jamoussi proposes a deep learning method to identify trending topics in a social network. It is built on detecting changes in streams of tweets. The method is experimentally validated to outperform relevant data stream algorithms in identifying “hot” topics.
The paper “E-Capacity–Equivocation Region of Wiretap Channel” by Mariam Haroutunian studies a secure communication problem over the wiretap channel, where information transfer from the source to a legitimate receiver needs to be realized while remaining maximally secret from an eavesdropper. This information-theoretic research generalizes the capacity-equivocation region and secrecy-capacity function of the wiretap channel subject to an error exponent criterion, thus deriving new and extended fundamental limits of reliable and secure communication in the presence of a wiretapper.
The paper “Leveraging Multifaceted Proximity Measures among Developers in Predicting Future Collaborations to Improve the Social Capital of Software Projects” by Amit Kumar and Sonali Agarwal targets improving the social capital of individual software developers and projects using machine learning. The authors’ approach applies network proximity and developer activity features to build a classifier for predicting future collaborations among developers and generating relevant recommendations.
Foot traffic, conversion rate, and total sales during a period of time may be considered important indicators of store performance. Forecasting them may allow business managers to plan store operations in the near future efficiently. This work presents a regression method able to predict these three indicators based on previous data. The previous data includes values of the indicators in the recent past; therefore, it is a requirement to have gathered them in a suitable manner. It also considers other values that are easily obtained, such as the day of the week and the hour of the day associated with the indicators. The novelty of the approach presented here is that it provides a confidence interval for the predicted information and the importance of each parameter for the predicted output values, without additional processing or analysis. Real data gathered by Follow Up, a customer experience company, was used to test the proposed method. The method was tried for predictions up to one month into the future. The results of the experiments show that the proposed method performs comparably to the best previously proposed methods, which do not provide confidence intervals or parameter rankings. The method obtains an RMSE of 0.0713 for foot traffic prediction, 0.0795 for conversion rate forecasting, and 0.0757 for sales prediction.
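One common way to obtain point forecasts, prediction intervals, and parameter importances from a single model family is quantile gradient boosting. The sketch below applies that technique to synthetic data purely as an illustration of the idea; it is an assumption of convenience, not the method described in the paper, and all feature names and numbers are invented.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Illustrative inputs: day of week, hour of day, and two lagged indicator values.
n = 2_000
X = np.column_stack([
    rng.integers(0, 7, n),    # day of week
    rng.integers(10, 22, n),  # hour of day
    rng.normal(100, 15, n),   # indicator value one week ago
    rng.normal(100, 15, n),   # indicator value two weeks ago
])
y = 0.6 * X[:, 2] + 0.3 * X[:, 3] + 5 * (X[:, 0] >= 5) + rng.normal(0, 5, n)
X_train, X_test, y_train, y_test = X[:1500], X[1500:], y[:1500], y[1500:]

# Three quantile models yield a point forecast plus a 90% prediction interval.
models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q, n_estimators=200).fit(X_train, y_train)
    for q in (0.05, 0.5, 0.95)
}
lower, point, upper = (models[q].predict(X_test) for q in (0.05, 0.5, 0.95))

rmse = np.sqrt(np.mean((point - y_test) ** 2))
coverage = np.mean((y_test >= lower) & (y_test <= upper))
print(f"RMSE of the median forecast: {rmse:.3f}")
print(f"Empirical coverage of the 90% interval: {coverage:.2%}")

# Impurity-based importances rank which inputs drive the forecast.
for name, imp in zip(["day_of_week", "hour", "lag_1w", "lag_2w"],
                     models[0.5].feature_importances_):
    print(f"{name:12s} importance = {imp:.2f}")
```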
The Journal of Universal Computer Science is a monthly peer-reviewed open-access scientific journal covering all aspects of computer science. It was launched in 1994 and thus turned twenty-five years old in 2019. To celebrate this anniversary, this study presents a bibliometric overview of the leading publication and citation trends occurring in the journal. The aim of the work is to identify the most relevant authors, institutions, and countries, and to analyze their evolution through time. The article uses the Web of Science Core Collection citations and the ACM Computing Classification System to search for the bibliographic information. Our study also develops a graphical mapping of the bibliometric material by using the visualization of similarities (VOS) viewer. With this software, the work analyzes bibliographic coupling, citation and co-citation analysis, co-authorship, and co-occurrence of keywords. The results underline the significant growth of the journal through time and its international diversity, with publications from countries all over the world covering a wide range of categories, which confirms the “universal” character of the journal.
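To make the keyword co-occurrence analysis concrete, the snippet below counts how often pairs of author keywords appear together across a toy set of articles. This is only a minimal illustration of the statistic that mapping tools such as VOSviewer visualize; the sample keywords are invented and this is not the study's actual pipeline.

```python
from collections import Counter
from itertools import combinations

# Toy keyword lists, one per article (illustrative only).
articles = [
    ["collaboration", "groupware", "hci"],
    ["groupware", "awareness", "hci"],
    ["machine learning", "hci"],
]

# Count each unordered keyword pair once per article in which both appear.
cooc = Counter()
for keywords in articles:
    for a, b in combinations(sorted(set(keywords)), 2):
        cooc[(a, b)] += 1

for (a, b), n in cooc.most_common(5):
    print(f"{a} -- {b}: {n}")
```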
Predicting an individual’s risk of suffering a stroke has been a research subject for many authors worldwide, since it is a frequent illness and there is strong evidence that early awareness of that risk can be beneficial for prevention and treatment. Many governments have been collecting medical data about their own populations with the purpose of using artificial intelligence methods to make such predictions. The most accurate methods are so-called black-box methods, which give little or no information about why they make a certain prediction. However, in the medical field the explanations are sometimes more important than the accuracy, since they allow specialists to gain insight into the factors that influence the risk level. It is also frequent to find medical records with some missing data. In this work, we present a prediction method which not only outperforms some existing ones but also gives information about the most probable causes of a high stroke risk and can deal with incomplete data records. It is based on the Dempster-Shafer theory of plausibility. For testing, we used data provided by the regional hospital in Okayama, Japan, a country in which people are required by law to undergo annual health checkups. The paper presents experiments comparing the results of the Dempster-Shafer method with those obtained using other well-known machine learning methods such as the multilayer perceptron, support vector machines, and naive Bayes. Our approach performed best in the experiments when some data were missing. The paper also presents an analysis of the interpretation of the rules produced by the method for classification. The rules were validated by both the medical literature and human specialists.
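As background on the kind of reasoning Dempster-Shafer theory supports, the sketch below combines two pieces of evidence over a binary frame {high risk, low risk} with Dempster's rule, treating a missing feature as total ignorance (all mass assigned to the whole frame). The evidence rules and numbers are hypothetical and are not the rules learned in the paper.

```python
from itertools import product

# Frame of discernment for stroke risk: 'H' (high risk), 'L' (low risk).
FRAME = frozenset({"H", "L"})

def combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule of combination for two mass functions over subsets of FRAME."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    # Normalize by the non-conflicting mass.
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

def mass_from_feature(value, rule):
    """Turn one feature into a mass function; a missing value yields total ignorance."""
    if value is None:
        return {FRAME: 1.0}
    return rule(value)

# Hypothetical evidence rules with made-up mass assignments.
bp_rule  = lambda v: ({frozenset({"H"}): 0.7, FRAME: 0.3} if v >= 140
                      else {frozenset({"L"}): 0.6, FRAME: 0.4})
age_rule = lambda v: ({frozenset({"H"}): 0.5, FRAME: 0.5} if v >= 65
                      else {frozenset({"L"}): 0.5, FRAME: 0.5})

patient = {"systolic_bp": 150, "age": None}  # the age record is missing
m = combine(mass_from_feature(patient["systolic_bp"], bp_rule),
            mass_from_feature(patient["age"], age_rule))

belief_high = sum(w for s, w in m.items() if s <= frozenset({"H"}))
plausibility_high = sum(w for s, w in m.items() if s & frozenset({"H"}))
print(f"Bel(high risk) = {belief_high:.2f}, Pl(high risk) = {plausibility_high:.2f}")
```

The gap between belief and plausibility quantifies the uncertainty that remains because of the missing record, which is how this family of methods can still classify incomplete data while exposing which evidence drove the result.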
There are often gaps between the lived experiences of end users and the official version of processes as espoused by the organization. To understand and address these gaps, we propose and evaluate process stories, a method to capture knowledge from end users based on organizational storytelling and visual narrative theories. The method addresses two dimensions related to business processes: 1) coordination knowledge, explaining how activities unfold over time; and 2) contextual knowledge, explaining how coordination depends on other contingency factors. The method is evaluated by comparing process stories against process models officially supported by the participating organizations. The results suggest that process stories identify more activities, events, and actors than official processes, which are supported by a diversity of contextual elements. We then qualitatively analyse these elements to identify the contributions of process stories to process knowledge. Based on the quantitative and qualitative analysis, we draw several implications for business process management.
... This has led to the definition of different approaches for virtual museums, based on the digitization procedures used (Table 1). The first one is based on the digitization and online dissemination of the heritage preserved in the museums; another one is based on the digitization and online dissemination of the museums themselves; a third one is based on the 'born digital' concept and provides an unrealistic virtual space, with no connection to reality, used to define a virtual 'container' for digital copies of the artefacts to be shown (Baloian et al., 2021). There is a subtle difference between these three approaches: the first one is focused on the heritage, the second one on the museums, and the third one on the virtual container and on the heritage. ...
... We chose two factors as key indicators of a company's success, namely 1) sales (per month) and 2) number of customers (per day) (Panay et al., 2021). ...
... Co-authorship defines the number of publications that are co-authored by multiple authors, institutions or countries. It can be used to explore the authors' position in scientific communities (Baloian et al. 2021). Co-occurrence or co-word analysis identifies the frequencies of words in titles, abstracts, or in text. ...
... The authors propose an automatic hyperparameter optimization (AutoHPO) based on a deep neural network (DNN) as a two-step technique that uses random forest regression to impute missing values. The study by Peñafiel et al. [8] presents a predictive model for stroke risk using an Electronic Health Record, focusing on interpretability and handling missing data. The model utilizes a Dempster-Shafer theory-based approach and outperforms other machine learning methods, especially with incomplete data. ...
... Cui et al. [20] found that multi-output regression prediction improved the management of healthcare resources and payments. Panay et al. [21] showed that a regression model is necessary to predict the treatment cost for a patient. Thus, regression analysis techniques can predict patient risk, mortality rates, resource requirements, and price requirements. ...
... Gaining insights into the widely investigated themes in translation studies proves essential in comprehending the shift and predicting future research trends (Huang & Liu, 2019). Co-occurrence is determined by the frequency with which two keywords appear together in publications (Zurita et al., 2020). Through the analysis of keywords, which reflect the significant research issues discussed in the retrieved articles, this study can effectively scrutinize and identify the subjects that have received considerable attention in translation studies. ...
... Li et al. [35] have shown that DT techniques can be extended to uncertain environment by employing Dempster-Shafer evidence theory. For the latest developments in areas related to the use of this theory, see the work by Peñafiel et al. [36]. ...
... Most studies have improved performance by including cost-related inputs such as prior total expenses and prescription prices. In contrast, some studies depend only on demographic and clinical information, such as diagnostic groups and medical tests, to create predictions [11], [12], [13]. Among all the machine learning algorithms used in healthcare price prediction, gradient boosting, an ensemble learning approach that sequentially integrates weak regression tree models and uses iterative optimization to minimize a loss such as the least absolute deviation, has consistently emerged as a top performer in accurately predicting healthcare costs. ...
... (2) Improve the emergency coordination command system. Comprehensively improving the emergency response capability and giving full play to the linkage effect requires the establishment of an efficient, synergistic, and interactive emergency response collaborative command system [37]. In this paper, in conjunction with the establishment of the second-level emergency rapid response unit, an emergency collaborative command system with closely linked subjects and a straightforward response process is formed (as shown in Fig. 3). ...
... Process flexibility enables organizations to cope with uncertainty, emergence, and change. Many organizations nowadays face uncertain business environments (Cognini, Corradini, Gnesi, Polini, & Re, 2018;Mejri, Ghannouchi, & Martinho, 2016), including a constant state of emergence (de Albuquerque & Christ, 2015), and operational vagaries (Antunes, Tate, & Pino, 2019;Haseeb, Ahmad, Malik, & Anjum, 2019). These 'push' towards the realization of more flexible processes. ...