Ronin Institute
  • Montclair, New Jersey, United States
Recent publications
The retention time of pesticides has recently been considered an informative indicator of their ecological quality. Two new possibilities are proposed for building pesticide retention time models with the CORAL program (http://www.insilico.eu/coral). The first is to involve the correlation weights of local symmetry fragments in SMILES in the modelling. The second is to use two criteria of predictive potential (the correlation ideality index and the correlation intensity index) as a vector in the Monte Carlo optimization used for model building. Models of pesticide retention time built with the CORAL software confirm the effectiveness of these innovations.
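As an illustration only, the following is a minimal sketch of the general idea behind CORAL-style modelling: each SMILES fragment receives a correlation weight, a descriptor is the sum of fragment weights, and Monte Carlo moves on the weights are accepted when the correlation with the target property improves. The fragment definition, data, and single-criterion target function are simplifying assumptions, not the CORAL implementation.

```python
import random
from statistics import mean

# Toy training data: SMILES strings with hypothetical retention times.
train = [("CCO", 1.2), ("CCCO", 1.9), ("c1ccccc1", 3.4), ("CCN", 1.1)]

def fragments(smiles):
    # Simplified descriptor: single SMILES symbols plus adjacent pairs.
    return list(smiles) + [smiles[i:i + 2] for i in range(len(smiles) - 1)]

def dcw(smiles, weights):
    # Descriptor of correlation weights: sum of weights of all fragments.
    return sum(weights.get(f, 0.0) for f in fragments(smiles))

def correlation(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

# Monte Carlo optimization: perturb one fragment weight at a time and keep
# the move if the correlation with retention time improves.
all_frags = sorted({f for s, _ in train for f in fragments(s)})
weights = {f: 1.0 for f in all_frags}
rng = random.Random(1)
best = correlation([dcw(s, weights) for s, _ in train], [t for _, t in train])
for _ in range(5000):
    f = rng.choice(all_frags)
    old = weights[f]
    weights[f] = old + rng.uniform(-0.5, 0.5)
    score = correlation([dcw(s, weights) for s, _ in train], [t for _, t in train])
    if score > best:
        best = score
    else:
        weights[f] = old  # revert the rejected move
print(f"best training correlation: {best:.3f}")
```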
We present the deep connections among (Anti) de Sitter geometry and complex conformal gravity-Maxwell theory, stemming directly from a gauge theory of gravity based on the complex Clifford algebra Cl(4, C). This is attained by simply promoting the de (Anti) Sitter algebras so(4, 1), so(3, 2) to the real Clifford algebras Cl(4, 1, R), Cl(3, 2, R), respectively. This interplay between gauge theories of gravity based on Cl(4, 1, R), Cl(3, 2, R), whose bivector generators encode the de (Anti) Sitter algebras so(4, 1), so(3, 2), respectively, and 4D conformal gravity based on Cl(3, 1, R) is reminiscent of the $AdS_{D+1}/CFT_D$ correspondence between $(D+1)$-dim gravity in the bulk and conformal field theory on the D-dim boundary. Although a plausible cancellation mechanism of the cosmological constant terms appearing in the real-valued curvature components associated with complex conformal gravity is possible, it does not occur simultaneously in the imaginary curvature components. Nevertheless, by including a Lagrange multiplier term in the action, it is still plausible that one might be able to find a restricted set of on-shell field configurations leading to a cancellation of the cosmological constant in curvature-squared actions due to the coupling among the real and imaginary components of the vierbein. We finalize with a brief discussion related to $U(4) \times U(4)$ grand-unification models with gravity based on $Cl(5, C) = Cl(4, C) \oplus Cl(4, C)$. It is plausible that these grand-unification models could also be traded for models based on $GL(4, C) \times GL(4, C)$.
Introduction Diabetic retinopathy (DR) is a condition in which the retina of the human eye is impaired. If not properly treated, the condition may spread throughout the eye and result in irreversible blindness. There is a shortage of qualified ophthalmologists in developing nations, and awareness of this condition is very low. However, it is feasible to provide initial treatment to patients by integrating automated instruments and treating the illness before it reaches an incurable state. Objectives To identify the various processes involved in detecting diabetic retinopathy, including preprocessing, feature extraction, segmentation, and classification. The work discussed here focuses on different detection approaches for diabetic retinopathy, specifically on computer-aided diagnosis (CAD) models based on various artificial intelligence (AI) and image processing technologies. CAD systems are designed to assist medical professionals in the diagnosis of various diseases, including diabetic retinopathy. They use various algorithms and image processing techniques to analyze medical images and extract relevant features for classification. By using AI and image processing technologies, CAD systems can help to automate and standardize the diagnostic process, improving accuracy and efficiency. Methodology To analyze the different strategies for detecting diabetic retinopathy, reputable publications and conferences were mined for research materials. The papers are from the most recent years and are categorized according to the methods employed, such as machine learning and deep learning. Each study is evaluated based on its technique, findings, dataset, and pros and cons. Results The data show that journals account for most of the work in this study (51%), with conferences accounting for 40% and book chapters for 9%. In addition, the data depict a year-by-year breakdown of work relevant to diabetic retinopathy detection. Most of the material is available on Google Scholar compared with Elsevier and Science Direct: Google Scholar holds 60% of the data, Elsevier approximately 10%, and Science Direct approximately 30% of the data on diabetic retinopathy detection. Conclusion To achieve high accuracy in detecting diabetic retinopathy, hybrid techniques should be developed in the future. This may involve combining multiple approaches, such as machine learning algorithms, image processing techniques, and human expertise, to create a more comprehensive and accurate diagnostic tool. A hybrid approach may make it possible to overcome the limitations of individual techniques and improve the accuracy of diabetic retinopathy detection. Further research and development in this area may lead to more effective screening and treatment for diabetic retinopathy, ultimately improving patient outcomes.
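The generic CAD pipeline the review describes (preprocess, extract features, classify) can be sketched as follows. This is a hypothetical toy illustration with synthetic arrays standing in for fundus images and crude intensity statistics standing in for lesion or vessel descriptors; it is not any surveyed method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for retinal images: 64x64 arrays with labels
# (0 = no DR, 1 = DR). A real system would load graded fundus images.
images = rng.random((200, 64, 64))
labels = rng.integers(0, 2, size=200)

def preprocess(img):
    # Simple contrast normalization as a stand-in for preprocessing.
    return (img - img.mean()) / (img.std() + 1e-8)

def extract_features(img):
    # Toy features: intensity statistics plus a crude bright-pixel count,
    # standing in for exudate/vessel descriptors or segmentation outputs.
    return [img.mean(), img.std(), img.max(), (img > 1.5).sum()]

X = np.array([extract_features(preprocess(im)) for im in images])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("toy accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```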
What used to be the practice of science and research – the understanding of natural phenomena – has undergone substantial alterations in the past century. As an aspect of society, science and its practitioners are not beyond the realm of contemporary transformations taking place in society, driven by numerous causes, mainly of an economic nature. The purpose of this article is to present to young and enthusiastic students, who are consumed by their curiosity to investigate natural phenomena and therefore plan to join the academic ranks to perform research, what they will find in today’s academic environment. In addition, practical advice is presented to minimise the variety of obstacles that we find in this day and age, so that the true purpose of the scientist – to think, to reflect, and to perform experiments and analyses of natural phenomena – can be achieved, at least to some extent. This article may also be informative to some professional academics, although they might already know and have experienced everything presented in this narrative.
Objectives: The body mass index (BMI) largely underestimates excess body fat, suggesting that the prevalence of obesity could be underestimated. Biologically, women are known to have higher body fat than men. This study aimed to compare the temporal trends in general obesity by sex, ethnicity and age among adults in the USA using the relative fat mass (RFM), a validated surrogate for whole-body fat percentage, and the BMI. Design: Population-based study. Setting: US National Health and Nutrition Examination Survey, from 1999-2000 to 2017-March 2020. Participants: A representative sample of adults aged 20-79 years in the USA. Main outcome measures: Age-adjusted prevalence of general obesity. RFM-defined obesity was diagnosed using validated cut-offs to predict all-cause mortality: RFM≥40% for women and ≥30% for men. BMI-defined obesity was diagnosed using a cut-off of 30 kg/m². Results: The analysis included data from 47 667 adults. Among women, RFM-defined obesity prevalence was 64.7% (95% CI 62.1% to 67.3%) in 2017-2020, a linear increase of 13.9 percentage points (95% CI 9.0% to 18.9%; p<0.001) relative to 1999-2000. In contrast, the prevalence of BMI-defined obesity was 42.2% (95% CI 39.4% to 45.0%) in 2017-2020. Among men, the corresponding RFM-defined obesity prevalence was 45.8% (95% CI 42.0% to 49.7%), a linear increase of 12.0 percentage points (95% CI 6.6% to 17.3%; p<0.001). In contrast, the prevalence of BMI-defined obesity was 42.0% (95% CI 37.8% to 46.3%). The highest prevalence of RFM-defined obesity across years was observed in older adults (60-79 years) and Mexican Americans, among both women and men. Conversely, the highest prevalence of BMI-defined obesity across years was observed in middle-aged (40-59 years) and older adults, and in African American women. Conclusions: The use of a surrogate for whole-body fat percentage revealed a much higher prevalence of general obesity in the USA from 1999 to 2020, particularly among women, than that estimated using BMI, and detected a disproportionately higher prevalence of general obesity in older adults and Mexican Americans.
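For illustration, the following is a minimal sketch of how RFM- and BMI-defined obesity could be computed and compared for one person, using the cut-offs quoted in the abstract. The RFM equation used here is the commonly published one (64 − 20·height/waist + 12·sex); it and the example values are assumptions for illustration, not the study's analysis code.

```python
def relative_fat_mass(height_m: float, waist_m: float, is_female: bool) -> float:
    """RFM, commonly published as 64 - 20 * (height / waist) + 12 * sex,
    with sex = 1 for women and 0 for men (value approximates body fat %)."""
    return 64.0 - 20.0 * (height_m / waist_m) + (12.0 if is_female else 0.0)

def rfm_obesity(height_m: float, waist_m: float, is_female: bool) -> bool:
    # Cut-offs quoted in the abstract: RFM >= 40% (women), >= 30% (men).
    return relative_fat_mass(height_m, waist_m, is_female) >= (40.0 if is_female else 30.0)

def bmi_obesity(weight_kg: float, height_m: float) -> bool:
    # Conventional BMI cut-off of 30 kg/m^2.
    return weight_kg / height_m ** 2 >= 30.0

# Hypothetical participant: classified as obese by RFM but not by BMI.
print(rfm_obesity(height_m=1.62, waist_m=0.95, is_female=True))  # True  (RFM ~41.9%)
print(bmi_obesity(weight_kg=75.0, height_m=1.62))                # False (BMI ~28.6)
```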
Coming soon: my new article with relevant data never published before (RSA Parliament), accepted by the JLS editor-in-chief, Professor The Lord Norton of Louth (Philip Norton), the UK's greatest living expert on Parliament. It analyses the parliamentary behaviour of the Economic Freedom Fighters (EFF) party from 2014 to 2022 and compares it with that of the leader of the opposition, the Democratic Alliance party, in the South African Parliament.
Background Climate change threatens Earth’s ice-based ecosystems which currently offer archives and eco-evolutionary experiments in the extreme. Arctic cryopeg brine (marine-derived, within permafrost) and sea ice brine, similar in subzero temperature and high salinity but different in temporal stability, are inhabited by microbes adapted to these extreme conditions. However, little is known about their viruses (community composition, diversity, interaction with hosts, or evolution) or how they might respond to geologically stable cryopeg versus fluctuating sea ice conditions. Results We used long- and short-read viromics and metatranscriptomics to study viruses in Arctic cryopeg brine, sea ice brine, and underlying seawater, recovering 11,088 vOTUs (~species-level taxonomic unit), a 4.4-fold increase of known viruses in these brines. More specifically, the long-read-powered viromes doubled the number of longer (≥25 kb) vOTUs generated and recovered more hypervariable regions by >5-fold compared to short-read viromes. Distribution assessment, by comparing to known viruses in public databases, supported that cryopeg brine viruses were of marine origin yet distinct from either sea ice brine or seawater viruses, while 94% of sea ice brine viruses were also present in seawater. A virus-encoded, ecologically important exopolysaccharide biosynthesis gene was identified, and many viruses (~half of metatranscriptome-inferred “active” vOTUs) were predicted as actively infecting the dominant microbial genera Marinobacter and Polaribacter in cryopeg and sea ice brines, respectively. Evolutionarily, microdiversity (intra-species genetic variations) analyses suggested that viruses within the stable cryopeg brine were under significantly lower evolutionary pressures than those in the fluctuating sea ice environment, while many sea ice brine virus-tail genes were under positive selection, indicating virus-host co-evolutionary arms races. Conclusions Our results confirmed the benefits of long-read-powered viromics in understanding the environmental virosphere through significantly improved genomic recovery, expanding viral discovery and the potential for biological inference. Evidence of viruses actively infecting the dominant microbes in subzero brines and modulating host metabolism underscored the potential impact of viruses on these remote and underexplored extreme ecosystems. Microdiversity results shed light on different strategies viruses use to evolve and adapt when extreme conditions are stable versus fluctuating. Together, these findings verify the value of long-read-powered viromics and provide foundational data on viral evolution and virus-microbe interactions in Earth’s destabilized and rapidly disappearing cryosphere.
To provide solutions to the world’s global challenges, there is an urgent demand for wise organisations, wise leadership, wise workers, and most importantly, for wise actions. Since the mid-1990s, Knowledge Management (KM) as a discipline and practice has emerged internationally, and it has passed through several phases of development. Simultaneously, in the last four decades we have experienced a growth of wisdom research and intensified discourses about Wisdom Management (WM) as a possible avenue for dealing with wicked problems. The dilemma is whether the current sixth phase of KM is able to address the global problems of the world. Therefore, this conceptual paper seeks to answer the question of whether WM will complement or replace KM. The paper explores the evolutionary phases of KM, presents the discourses about WM, and uncovers the presence of wisdom in the KM literature (1995–2022). The research approach is explorative and inductive. Methods of Computational Social Sciences, with RStudio as an Integrated Development Environment (IDE), are utilised for analysing, visualising, and interpreting the data. The paper analyses 255 wisdom-related keywords that emerged from 39 sources written by 50 authors. The three data analysis methods are word cloud analysis, dictionary-based analysis, and bipartite network analysis. The findings are presented as word clouds, keyword frequencies, a bipartite network, and a synthesised framework that shows the similar and different concepts of KM and WM. The novelty of this paper lies in the bipartite network analysis, which makes it possible to identify how authors and wisdom-related keywords are connected through the same word nodes. The paper contributes to KM because it is the first study to explore and illustrate the presence of wisdom attributes in the KM literature.
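A minimal sketch of the kind of author-keyword bipartite network described above, written in Python with networkx rather than the authors' R/RStudio workflow; the author names and keywords are invented placeholders (the real study links 50 authors to 255 wisdom-related keywords).

```python
import networkx as nx
from networkx.algorithms import bipartite

# Hypothetical author-keyword pairs.
edges = [
    ("Author A", "wisdom"), ("Author A", "prudence"),
    ("Author B", "wisdom"), ("Author B", "phronesis"),
    ("Author C", "phronesis"), ("Author C", "ethics"),
]

G = nx.Graph()
authors = {a for a, _ in edges}
keywords = {k for _, k in edges}
G.add_nodes_from(authors, bipartite=0)   # one node set: authors
G.add_nodes_from(keywords, bipartite=1)  # other node set: keywords
G.add_edges_from(edges)

# Which keywords connect the most authors?
print("most shared keywords:", sorted(((G.degree(k), k) for k in keywords), reverse=True))

# Project onto authors: two authors are linked if they share a keyword node.
author_net = bipartite.weighted_projected_graph(G, authors)
print("author-author links via shared keywords:", list(author_net.edges(data=True)))
```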
We develop a general framework for abstracting the behavior of an agent that operates in a nondeterministic domain, i.e., where the agent does not control the outcome of the nondeterministic actions, based on the nondeterministic situation calculus and the ConGolog programming language. We assume that we have both an abstract and a concrete nondeterministic basic action theory, and a refinement mapping which specifies how abstract actions, decomposed into agent actions and environment reactions, are implemented by concrete ConGolog programs. This new setting supports strategic reasoning and strategy synthesis by allowing us to quantify separately over agent actions and environment reactions. We show that if the agent has a (strong FOND) plan/strategy to achieve a goal/complete a task at the abstract level, and it can always execute the nondeterministic abstract actions to completion at the concrete level, then there exists a refinement of it that is a (strong FOND) plan/strategy to achieve the refinement of the goal/task at the concrete level.
Importance: Preprints have been increasingly used in biomedical science, and a key feature of many platforms is public commenting. The content of these comments, however, has not been well studied, and it is unclear whether they resemble those found in journal peer review. Objective: To describe the content of comments on the bioRxiv and medRxiv preprint platforms. Design, setting, and participants: In this cross-sectional study, preprints posted on the bioRxiv and medRxiv platforms in 2020 were accessed through each platform's application programming interface on March 29, 2021, and a random sample of preprints containing between 1 and 20 comments was evaluated independently by 3 evaluators using an instrument to assess their features and general content. Main outcome and measures: The numbers and percentages of comments from authors or nonauthors were assessed, and the comments from nonauthors were assessed for content. These nonauthor comments were assessed to determine whether they included compliments, criticisms, corrections, suggestions, or questions, as well as their topics (eg, relevance, interpretation, and methods). Nonauthor comments were also analyzed to determine whether they included references, provided a summary of the findings, or questioned the preprint's conclusions. Results: Of 52 736 preprints, 3850 (7.3%) received at least 1 comment (mean [SD] follow-up, 7.5 [3.6] months), and the 1921 assessed comments (from 1037 preprints) had a median length of 43 words (range, 1-3172 words). The criticisms, corrections, or suggestions present in 694 of 1125 comments (61.7%) were the most prevalent content, followed by compliments (n = 428 [38.0%]) and questions (n = 393 [35.0%]). Criticisms usually regarded interpretation (n = 286), methodological design (n = 267), and data collection (n = 238), while compliments were mainly about relevance (n = 111) and implications (n = 72). Conclusions and relevance: In this cross-sectional study of preprint comments, topics commonly associated with journal peer review were frequent. However, only a small percentage of preprints posted on the bioRxiv and medRxiv platforms in 2020 received comments on these platforms. A clearer taxonomy of peer review roles would help to describe whether postpublication peer review fulfills them.
A method is proposed for choosing unit cells for a group of crystals so that they all appear as nearly similar as possible to a selected cell. Related unit cells with varying cell parameters or indexed with different lattice centering can be accommodated.
The present paper describes a quantum mechanical study of Cooper pairs from the point of view of correct Fermion-pair spin. The basic idea behind the theory is to describe each particle with an appropriate Gaussian function, transforming it into a soft quantum mechanical object. From here, a new orthonormalized basis set with adequate symmetry is constructed, and then one can easily build the two-particle spin functions and the two-particle energies. As the particle energies become repulsive, one can add a harmonic oscillator term to stabilize the initial Hamiltonian expectation values. The results indicate that a way to extend the description of Cooper pairs to N Fermion particles follows as a straightforward consequence of the theory developed here.
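For reference, the standard two-particle spin eigenfunctions that such a construction yields for a Fermion pair are the antisymmetric singlet (the case relevant to conventional Cooper pairing) and the symmetric triplet; in conventional notation they read as below. This is textbook material included for orientation, not a reproduction of the paper's derivation.

```latex
% Two-particle spin eigenfunctions for a Fermion pair:
% singlet (S = 0) and triplet (S = 1) components.
\begin{align}
  \chi_{0,0}  &= \tfrac{1}{\sqrt{2}}\bigl(\alpha(1)\beta(2) - \beta(1)\alpha(2)\bigr), \\
  \chi_{1,1}  &= \alpha(1)\alpha(2), \\
  \chi_{1,0}  &= \tfrac{1}{\sqrt{2}}\bigl(\alpha(1)\beta(2) + \beta(1)\alpha(2)\bigr), \\
  \chi_{1,-1} &= \beta(1)\beta(2).
\end{align}
```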
Commonly used data citation practices rely on unverifiable retrieval methods which are susceptible to content drift, which occurs when the data associated with an identifier have been allowed to change. Based on our earlier work on reliable dataset identifiers, we propose signed citations, i.e., customary data citations extended to also include a standards-based, verifiable, unique, and fixed-length digital content signature. We show that content signatures enable independent verification of the cited content and can improve the persistence of the citation. Because content signatures are location- and storage-medium-agnostic, cited data can be copied to new locations to ensure their persistence across current and future storage media and data networks. As a result, content signatures can be leveraged to help scalably store, locate, access, and independently verify content across new and existing data infrastructures. Content signatures can also be embedded inside content to create robust, distributed knowledge graphs that can be cited using a single signed citation. We describe applications of signed citations to solve real-world data collection, identification, and citation challenges.
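A minimal sketch of the general idea: a customary citation is extended with a fixed-length, standards-based content signature so that anyone holding a copy of the data, wherever it is stored, can verify it and detect content drift. The hash-URI prefix, citation format, and dataset here are illustrative assumptions, not the authors' exact specification.

```python
import hashlib

def content_signature(data: bytes) -> str:
    # Fixed-length, location- and storage-medium-agnostic signature of the cited bytes.
    # SHA-256 is used for illustration; any standard digest could serve.
    return "hash://sha256/" + hashlib.sha256(data).hexdigest()

def signed_citation(customary_citation: str, data: bytes) -> str:
    # A customary data citation extended with a verifiable content signature.
    return f"{customary_citation} {content_signature(data)}"

def verify(citation: str, data: bytes) -> bool:
    # Recompute the signature from a local copy and compare it with the citation.
    return citation.rsplit(" ", 1)[-1] == content_signature(data)

dataset = b"species,count\npolar bear,3\n"
cite = signed_citation("Doe J. (2020) Example survey dataset.", dataset)
print(cite)
print(verify(cite, dataset))                # True: content matches the citation
print(verify(cite, dataset + b"tampered"))  # False: content drift detected
```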
This paper reports on the Hackathon Sessions organised at the Polar Data Forum IV (PDF IV) (20–24 September 2021), during which 351 participants from 50 different countries collaboratively discussed the latest developments in polar data management. The 4th edition of the PDF hosted lively discussions on (i) best practices for polar data management, (ii) data policy, (iii) documenting data flows into aggregators, (iv) data interoperability, (v) polar federated search, (vi) semantics and vocabularies, (vii) Virtual Research Environments (VREs), and (viii) new polar technologies. This paper provides an overview of the organisational aspects of PDF IV and summarises the polar data objectives and outcomes by describing the conclusions drawn from the Hackathon Sessions.
Background Cancer patients often experience treatment-related symptoms which, if uncontrolled, may require emergency department admission. We developed models identifying breast or genitourinary cancer patients at risk of attending the emergency department (ED) within 30 days, and we demonstrate the development, validation, and proactive approach to in-production monitoring of an artificial-intelligence-based predictive model during a 3-month simulated deployment at a cancer hospital in the United States. Methods We used routinely collected electronic health record data to develop our predictive models. We evaluated models including a variational autoencoder k-nearest neighbors algorithm (VAE-kNN) and examined model behavior on a sample containing 84,138 observations from 28,369 patients. We assessed the model during a 77-day production period of exposure to live data, using a proactive monitoring process with predefined metrics. Results Performance of the VAE-kNN algorithm is exceptional (area under the receiver operating characteristic curve, AUC = 0.80) and remains stable across demographic and disease groups over the production period (AUC 0.74–0.82). Our monitoring process can detect issues in data feeds and provide immediate insights into future model performance. Conclusions Our algorithm demonstrates exceptional performance at predicting the risk of 30-day ED visits. We confirm that model outputs are equitable and stable over time using a proactive monitoring approach.
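A minimal sketch of the flavour of such a pipeline: observations are compressed to a low-dimensional latent representation, a k-nearest-neighbours classifier scores 30-day ED-visit risk, and AUC is tracked over successive production batches as part of proactive monitoring. PCA stands in for the variational autoencoder, and the data are synthetic; this is not the study's model or monitoring system.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for EHR-derived features and 30-day ED-visit labels.
X = rng.normal(size=(4000, 50))
y = (X[:, :5].sum(axis=1) + rng.normal(scale=2.0, size=4000) > 0).astype(int)
X_train, y_train = X[:3000], y[:3000]
X_prod, y_prod = X[3000:], y[3000:]

# Encoder stand-in (PCA instead of a variational autoencoder) plus kNN scorer.
encoder = PCA(n_components=10).fit(X_train)
knn = KNeighborsClassifier(n_neighbors=25).fit(encoder.transform(X_train), y_train)

# Proactive monitoring: track AUC over successive "production" batches.
for batch, start in enumerate(range(0, len(X_prod), 250), start=1):
    Xb, yb = X_prod[start:start + 250], y_prod[start:start + 250]
    scores = knn.predict_proba(encoder.transform(Xb))[:, 1]
    print(f"batch {batch}: AUC = {roc_auc_score(yb, scores):.2f}")
```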
Information
Address
07043, Montclair, New Jersey, United States
Head of institution
Jon F. Wilkins
Website
http://ronininstitute.org/