Equilibrium logic is a logical characterization of Answer Set Programming (ASP). We introduce Deontic Equilibrium Logic with eXplicit negation (DELX), an extension of equilibrium logic for normative reasoning. In contrast to modal approaches, DELX utilizes a normal form that restricts deontic operators solely to atoms. We establish that any theory in DELX can be reduced to ASP, and demonstrate the efficacy of this minimalist approach in addressing key challenges from the defeasible deontic logic literature.
Programmable Logic Controllers (PLCs) are important subsystems in Industrial Internet of Things (IIoT) systems. Recently, sophisticated and targeted cyber-attacks against owners and operators of industrial control systems in IIoT environments have become more frequent. In this work we present a Markov model for assessing the dependability of PLCs in IIoT systems, together with its main availability indicator, the stationary availability coefficient (AC). We derive the AC under cyber-attacks, with a focus on denial-of-service (DoS) attacks. The model provides a basis for assessing how the dependability of a PLC is affected by cyber-attacks. We describe the states and potential state transitions of a PLC in an IIoT environment with a Markov model. We simulated different use cases based on the model and show, with exemplary parameter settings, how the probabilities of the states of the PLC's subsystems can be calculated and how the AC can be derived from the model. Results of the simulation are used to analyze the influence of cyber-attack rates on PLC availability, showing how DoS attacks impact the PLC's dependability.
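As a minimal illustration of the availability computation (with hypothetical rates, not the paper's parameter settings): for a two-state up/down Markov model whose combined failure/attack rate is λ and whose restoration rate is μ, the stationary availability coefficient is AC = μ/(λ + μ).

```python
# Stationary availability of a minimal two-state (up/down) Markov model.
# lambda_total combines hardware-failure and DoS-attack rates; mu is the
# restoration rate. All rate values below are hypothetical illustrations.
def availability_coefficient(lambda_total: float, mu: float) -> float:
    """AC = mu / (lambda + mu) for a two-state birth-death chain."""
    return mu / (lambda_total + mu)

# Example: combined failure/attack rate 0.02 per hour, repair rate 0.5 per hour.
ac = availability_coefficient(0.02, 0.5)
print(round(ac, 4))  # -> 0.9615, the stationary probability of the "up" state
```

Raising the attack rate lowers the AC, which is the qualitative effect the simulation study quantifies for the multi-state PLC model.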
In this paper, we introduce game-theoretic semantics (GTS) for Qualitative Choice Logic (QCL), which, in order to express preferences, extends classical propositional logic with an additional connective called ordered disjunction. In particular, we present a new semantics that makes use of GTS negation and, by doing so, avoids contentious behavior of negation in existing QCL-semantics.
The expressiveness of any given formalism lays the theoretical foundation for more specialized topics such as investigating dynamic reasoning environments. The modeling capabilities of the formalism under investigation yield immediate (im)possibility results in such contexts. In this paper we investigate the expressiveness of assumption-based argumentation (ABA), one of the major structured argumentation formalisms. In particular, we examine so-called signatures, i.e., sets of extensions that can be realized under a given semantics. We characterize the signatures of common ABA semantics for flat, finite frameworks with and without preferences. We also give several results regarding conclusion-based semantics for ABA.
Answer-Set Programming (ASP) is a popular declarative reasoning and problem solving formalism. Due to the increasing interest in explainability, several explanation approaches have been developed for ASP. However, while these approaches are correct and interesting in their own right, most are rather technical and less oriented towards philosophical or social concepts of explanation. In this work, we study the notion of contrastive explanation, i.e., answering questions of the form “Why P instead of Q?”, in the context of ASP. In particular, we are interested in answering why certain atoms are included in an answer set while others are not. Contrastive explainability has recently become popular due to its strong support from the philosophical, cognitive, and social sciences and its apparent ability to provide explanations that are concise and intuitive for humans. We formally define contrastive explanations for ASP based on counterfactual reasoning about programs. Furthermore, we demonstrate the usefulness of the concept on example applications and give some complexity results. The latter also provide a guideline as to how the explanations can be computed in practice.
The present article describes a study on the wear resistance of high-manganese Hadfield steel produced through a clean metallurgical process under dry sliding wear conditions. Solidified ingots are subjected to heat treatments, namely annealing, ice-water quenching, and age hardening for 2 h at 550 °C, 600 °C, 650 °C, and 700 °C. Samples are prepared as per the standards for mechanical and tribological tests. The peak hardness of 292.87 HV is observed for alloy 4 aged at 600 °C. The ultimate tensile strength of 434.18 MPa is observed for alloy 3 aged at 600 °C. The minimum specific wear rate of 1.5649 × 10⁻⁵ mm³/N·m is observed for the as-cast sample of alloy 4 at 50 N load and 1.885 m/s sliding speed. Wear tracks are analyzed through SEM, and microstructures are examined by OM, SEM, and TEM. The XRD patterns reveal that the developed steel is austenitic in nature. Furthermore, to validate the wear rate, a total of 8 machine learning models/ensembles are developed and trained, since tribological and material features lend themselves naturally to ML modeling. With a score of 94%, using R² as the performance assessment criterion, the Decision Tree Regressor outperformed all other constructed models.
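The R² criterion used to rank the regression models has a simple closed form; a minimal pure-Python sketch with illustrative values (not the study's wear-rate data):

```python
# Coefficient of determination R^2 = 1 - SS_res / SS_tot, the criterion
# used to compare the wear-rate regression models.
def r2_score(y_true, y_pred):
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Illustrative wear-rate values, not the study's measurements.
y_true = [1.2, 1.5, 1.1, 1.8, 1.4]
y_pred = [1.25, 1.45, 1.15, 1.75, 1.40]
print(round(r2_score(y_true, y_pred), 3))  # -> 0.967
```

An R² of 0.94 thus means the model explains 94% of the variance in the measured wear rates.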
Argumentative Zoning (AZ) is a tool to extract salient information from scientific texts for further Natural Language Processing (NLP) tasks, e.g. scientific article summarisation. AZ defines the main rhetorical structure in scientific articles. The lack of large AZ-annotated benchmark datasets, along with the manual annotation complexity of scientific texts, forms a bottleneck in utilizing AZ for scientific NLP tasks. Aiming to solve this problem, in previous work, we presented an AZ-annotation platform that defines and uses four categories, or zones (Claim, Method, Result, Conclusion), to label sentences in scientific articles. The platform helps to create benchmark datasets to be used with the AZ tool. In this work we examine the usability of the said platform for creating and expanding datasets for AZ. We present an annotation experiment composed of two annotation rounds, in which selected scientific articles from the ACL Anthology corpus are annotated using the platform. We compare the user annotations with a ground-truth annotation and compute the inter-annotator agreement. The annotations obtained in this way are used as training data for various BERT-based models to predict the zone of a given sentence from a scientific article. We compare the trained models with a model trained on a baseline AZ corpus.
Search engines have become essential tools for learning, providing access to vast amounts of educational resources. However, selecting the most suitable resources from numerous options can be challenging for learners. While search engines primarily rank resources based on topical relevance, factors like understandability and engagement are crucial for effective learning as well. Understandability, a key aspect of text, is often associated with readability. This study evaluates eight commonly used readability measures to determine their effectiveness in predicting understandability, engagement, topical relevance, and user-assigned ranks. The empirical evaluation employs a survey-based methodology, collecting explicit relevance feedback from participants regarding their preferences for learning from web pages. The relevance data was then analyzed in relation to the readability measures. The findings highlight that readability measures are reliable predictors not only of understandability but also of engagement. Specifically, the FKGL and GFI measures demonstrate the highest and most consistent correlation with perceived understandability and engagement. This research provides valuable insights for selecting effective readability measures to tailor search results to the users’ learning needs.
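The two best-performing measures have simple closed forms; a minimal sketch with hypothetical page statistics (the study presumably relies on standard readability tooling for the full measures):

```python
# Flesch-Kincaid Grade Level (FKGL) and Gunning Fog Index (GFI), the two
# measures found to correlate most consistently with understandability
# and engagement.
def fkgl(words: int, sentences: int, syllables: int) -> float:
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def gfi(words: int, sentences: int, complex_words: int) -> float:
    # "Complex" words: words with three or more syllables.
    return 0.4 * ((words / sentences) + 100 * complex_words / words)

# Hypothetical counts for a single web page, for illustration only.
print(round(fkgl(words=120, sentences=8, syllables=180), 2))  # -> 7.96
print(round(gfi(words=120, sentences=8, complex_words=15), 2))  # -> 11.0
```

Both measures grow with sentence length and word complexity, so lower scores indicate more readable, and by the study's findings more understandable, pages.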
General-purpose search engines are frequently used to retrieve content for learning. However, their ranking strategies are typically optimised for relevance, which means that they do not take into account other criteria important in the learning context, such as the understandability and the degree of engagement of the retrieved resources. We have conducted a user study to assess the extent to which ranking algorithms used by a popular search engine satisfy the expectations of users who are learning by searching. We study the relationships between users’ perceptions of topical relevance, engagement, and understandability for retrieved documents with respect to their ranks. While we observe that the perceived user-assigned rank is strongly associated with all dimensions of relevance under study, specifically engagement (\(\rho =0.89\)), understandability (\(\rho =0.58\)) and topical relevance (\(\rho =0.88\)), the relationship between SERP ranks and user-assigned ranks appears unstable, indicating that learners are not necessarily always served well by general-purpose search engines.
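The reported associations are rank correlations; a minimal pure-Python Spearman ρ (tie-free case only, illustrative data rather than the study's judgements):

```python
# Spearman rank correlation for tie-free samples:
# rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
# where d_i is the difference between the two rank assignments of item i.
def spearman_rho(xs, ys):
    n = len(xs)
    def ranks(values):
        order = sorted(range(n), key=lambda i: values[i])
        r = [0] * n
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Illustrative user-assigned ranks vs. engagement judgements (hypothetical).
print(spearman_rho([1, 2, 3, 4, 5], [2, 1, 3, 5, 4]))
```

A ρ of 0.89, as reported for engagement, indicates a near-monotone relationship between user-assigned ranks and engagement judgements.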
Tracking an object’s 6D pose, while either the object itself or the observing camera is moving, is important for many robotics and augmented reality applications. While exploiting temporal priors eases this problem, object-specific knowledge is required to recover when tracking is lost. Under the tight time constraints of the tracking task, RGB(D)-based methods are often conceptually complex or rely on heuristic motion models. In comparison, we propose to simplify object tracking to a reinforced point cloud (depth only) alignment task. This allows us to train a streamlined approach from scratch with limited amounts of sparse 3D point clouds, compared to the large datasets of diverse RGBD sequences required in previous works. We incorporate temporal frame-to-frame registration with object-based recovery by frame-to-model refinement using a reinforcement learning (RL) agent that jointly solves for both objectives. We also show that the RL agent’s uncertainty and a rendering-based mask propagation are effective reinitialization triggers.
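As a minimal illustration of frame-to-frame point-cloud registration (a classical least-squares step standing in for the paper's RL agent, which also estimates rotation): for known point correspondences, the optimal translation aligning two clouds is the difference of their centroids.

```python
# Translation-only alignment of two corresponding 3D point clouds: the
# least-squares optimal translation is the difference of centroids.
# (A hedged stand-in sketch; the paper's method uses an RL agent and
# solves for the full 6D pose.)
def centroid(points):
    n = len(points)
    return tuple(sum(p[k] for p in points) / n for k in range(3))

def align_translation(src, dst):
    cs, cd = centroid(src), centroid(dst)
    t = tuple(cd[k] - cs[k] for k in range(3))
    moved = [tuple(p[k] + t[k] for k in range(3)) for p in src]
    return t, moved

src = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
dst = [(2.0, 3.0, 1.0), (3.0, 3.0, 1.0), (2.0, 4.0, 1.0)]
t, moved = align_translation(src, dst)
print(tuple(round(v, 6) for v in t))  # translation recovered between frames
```

In practice this closed-form step is wrapped in iterative refinement (e.g. ICP-style correspondence updates), which is where the learned agent comes in.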
The automation of visual quality inspection is becoming increasingly important in manufacturing industries. The objective is to ensure that manufactured products meet specific quality characteristics. Manual inspection by trained personnel is the preferred method in most industries due to the difficulty of identifying defects of various types and sizes. Sensor placement for 3D automatic visual inspection is a growing area in computer vision and robotics. Although some methods have been proposed, they struggle to provide high-speed inspection and complete coverage. A fundamental requirement is to inspect the product at a certain specific resolution to detect all defects of a particular size, which is still an open problem. Therefore, we propose a novel model-based approach to automatically generate optimal viewpoints guaranteeing maximal coverage of the object’s surface at a specific spatial resolution that depends on the requirements of the problem. This is done by ray tracing information from the sensor to the object to be inspected once the sensor model and the 3D mesh of the object are known. In contrast to existing algorithms for optimal viewpoint generation, our approach includes the spatial resolution within the viewpoint planning process. We demonstrate that our approach yields optimal viewpoints that achieve complete coverage and a desired spatial resolution at the same time, while the number of optimal viewpoints is kept small, limiting the time required for inspection.
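Once per-viewpoint visibility at the required resolution is known (e.g. via ray tracing), selecting a small covering set of viewpoints can be sketched as greedy set cover. This is a hedged illustration with toy data, a greedy approximation rather than the paper's exact optimization:

```python
# Greedy set cover over candidate viewpoints: each viewpoint covers the
# set of surface patches it sees at the required spatial resolution
# (here: toy patch IDs, not real ray-tracing output).
def greedy_viewpoints(coverage, surface):
    chosen, uncovered = [], set(surface)
    while uncovered:
        best = max(coverage, key=lambda v: len(coverage[v] & uncovered))
        gain = coverage[best] & uncovered
        if not gain:
            break  # remaining patches cannot be covered by any viewpoint
        chosen.append(best)
        uncovered -= gain
    return chosen, uncovered

coverage = {"v1": {1, 2, 3}, "v2": {3, 4}, "v3": {4, 5, 6}, "v4": {2, 5}}
chosen, missed = greedy_viewpoints(coverage, {1, 2, 3, 4, 5, 6})
print(chosen, missed)  # a small set of viewpoints achieving full coverage
```

The greedy choice keeps the number of viewpoints small, which directly limits inspection time, as the abstract emphasizes.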
High-dimensional compositional data are commonplace in the modern omics sciences, among others. Analysis of compositional data requires the proper choice of a log-ratio coordinate representation, since their relative nature is not compatible with the direct use of standard statistical methods. Principal balances, a particular class of orthonormal log-ratio coordinates, are well suited to this context as they are constructed so that the first few coordinates capture most of the compositional variability of the data set. Focusing on regression and classification problems in high dimensions, we propose a novel partial least squares (PLS) procedure to construct principal balances that maximize the explained variability of the response variable and notably ease interpretability when compared to the ordinary PLS formulation. The proposed PLS principal balance approach can be understood as a generalized version of common log-contrast models since, instead of just one, multiple orthonormal log-contrasts are estimated simultaneously. We demonstrate the performance of the proposed method using both simulated and empirical data sets.
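The starting point for any log-ratio representation is a transform such as the centered log-ratio (clr); a minimal sketch (balances and the PLS step build on such coordinates, but are not implemented here):

```python
import math

# Centered log-ratio (clr) transform: the log of each part relative to
# the geometric mean of the composition. Shown only to illustrate the
# relative nature of compositional data; principal balances are
# orthonormal log-ratio coordinates derived from such representations.
def clr(composition):
    g = math.exp(sum(math.log(x) for x in composition) / len(composition))
    return [math.log(x / g) for x in composition]

coords = clr([0.2, 0.3, 0.5])  # a 3-part composition (illustrative)
print([round(c, 4) for c in coords])
print(round(sum(coords), 10))  # clr coordinates always sum to zero
```

The zero-sum constraint of clr coordinates is why orthonormal coordinates such as principal balances are preferred for regression: they remove the singularity while preserving the relative information.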
In this paper, we consider the topology optimization of a bipolar plate of a hydrogen electrolysis cell. We present a model for the bipolar plate using the Stokes equation with an additional drag term, which models the influence of fluid and solid regions. Furthermore, we derive a criterion for a uniform flow distribution in the bipolar plate. To obtain shapes that are readily manufacturable, we introduce a novel smoothing technique for the fluid velocity. Finally, we present some numerical results and investigate the influence of the smoothing on the obtained shapes.
Accurate streamflow simulations rely on good estimates of the catchment-scale soil moisture distribution. Here, we evaluated the potential of Sentinel-1 backscatter data assimilation (DA) to improve soil moisture and streamflow estimates. Our DA system consisted of the Noah-MP land surface model coupled to the HyMAP river routing model and the water cloud model as backscatter observation operator. The DA system was set up at 0.01° resolution for two contrasting catchments in Belgium: i) the Demer catchment dominated by agriculture, and ii) the Ourthe catchment dominated by mixed forests. We present results of two experiments with an ensemble Kalman filter updating either soil moisture only or soil moisture and Leaf Area Index (LAI). The DA experiments covered the period January 2015 through August 2021 and were evaluated with independent rainfall error estimates based on station data, LAI from optical remote sensing, soil moisture retrievals from passive microwave observations, and streamflow measurements. Our results indicate that the assimilation of Sentinel-1 backscatter observations can partly correct errors in surface soil moisture due to rainfall errors and overall improve surface soil moisture estimates. However, updating soil moisture and LAI simultaneously did not bring any benefit over updating soil moisture only. Our results further indicate that streamflow estimates can be improved through Sentinel-1 DA in a catchment with strong soil moisture-runoff coupling, as observed for the Ourthe catchment, suggesting that there is potential for Sentinel-1 DA even for forested catchments.
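A minimal sketch of the ensemble Kalman filter update used in such DA systems, for a scalar soil-moisture state with a linear stand-in for the backscatter observation operator (illustrative numbers, not the study's Noah-MP/water-cloud-model configuration):

```python
# Ensemble Kalman filter update for a scalar state (surface soil moisture).
# H is a linear stand-in for the water cloud backscatter operator; the
# real operator is nonlinear, and observation perturbation is omitted
# for brevity. All numbers are illustrative.
def enkf_update(ensemble, obs, obs_var, H=lambda x: x):
    n = len(ensemble)
    predicted = [H(x) for x in ensemble]
    xbar = sum(ensemble) / n
    ybar = sum(predicted) / n
    cov_xy = sum((x - xbar) * (y - ybar)
                 for x, y in zip(ensemble, predicted)) / (n - 1)
    var_y = sum((y - ybar) ** 2 for y in predicted) / (n - 1)
    gain = cov_xy / (var_y + obs_var)  # Kalman gain
    return [x + gain * (obs - H(x)) for x in ensemble]

prior = [0.20, 0.24, 0.22, 0.26, 0.18]  # soil moisture ensemble [m3/m3]
posterior = enkf_update(prior, obs=0.30, obs_var=0.0004)
print(round(sum(posterior) / len(posterior), 4))  # analysis mean
```

The analysis mean lands between the prior mean and the observation, weighted by the ensemble spread versus the observation error, which is how backscatter DA corrects rainfall-driven soil moisture errors.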
Identification of the available friction potential is crucial for road safety but difficult, particularly during normal driving. This paper aims to contribute by presenting an effect-based method for detecting slip slope changes related to friction potential changes in all-wheel-drive vehicles applying active drive force excitation. The proposed estimation approach relies primarily on the wheel speeds and the axle/wheel drive forces of the front and rear axle. Different types of periodic active drive force excitation, superimposed on the drive force requested by the driver while maintaining the desired level of speed or acceleration, are investigated with respect to the availability of the estimates and the overall effectiveness of the estimator. Vehicle tests are performed to evaluate theoretical results and the (co-)driver’s perception of the active drive force excitation. Results from both the simulation study and the vehicle tests show that the proposed method reliably estimates slip slope changes in all-wheel-drive vehicles under driving conditions with low levels of drive force excitation.
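The slip slope itself can be estimated as the slope of a least-squares line through (slip, drive force) samples gathered during the active excitation; a minimal sketch with hypothetical values (the paper's detection scheme then tracks changes of this slope):

```python
# Slip slope = d(drive force)/d(slip) in the low-slip region, estimated
# by ordinary least squares over (slip, force) samples collected while
# the periodic drive force excitation is active. Values are hypothetical.
def slip_slope(slips, forces):
    n = len(slips)
    sbar = sum(slips) / n
    fbar = sum(forces) / n
    num = sum((s - sbar) * (f - fbar) for s, f in zip(slips, forces))
    den = sum((s - sbar) ** 2 for s in slips)
    return num / den

slips = [0.005, 0.010, 0.015, 0.020]      # longitudinal slip [-]
forces = [500.0, 1000.0, 1500.0, 2000.0]  # drive force [N]
print(slip_slope(slips, forces))  # estimated slip slope [N per unit slip]
```

A drop in this estimated slope between excitation cycles is the kind of change the effect-based detector associates with a reduced friction potential.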