Background: Dual labour market theory raises questions about the relationship between non-standard employment and job quality. While scattered empirical evidence exists, systematic evidence on the relationship between workers' employment status and job quality remains scarce. Objective: The authors investigated the relation between workers' employment status (e.g., open-ended contracts, long- and short-term fixed contracts, economically dependent and independent solo self-employment, and self-employment with employees) and important dimensions of job quality (JQ) (e.g., employment prospects, physical work environment, skills and discretion, and working time quality). Cross-national variation in that relation and causes of that variation (e.g., country-level unemployment rate and labour market efficiency) were also investigated. Methods: Hierarchical regression modelling was applied to a sample of 34,094 workers from the European Working Conditions Survey 2015. Results: The study highlighted a negative association between fixed-term contracts and JQ. For self-employed workers (except economically dependent self-employed workers), a generally positive association was observed. Positive associations were also found between country-level labour market efficiency and some JQ indicators. National unemployment rates were negatively associated with most JQ indicators. Conclusion: Non-standard employment contracts exhibited poorer job quality than open-ended contracts. Stronger labour market organization, centred around indicators of both flexibility and equity, was related to more beneficial job quality for all employment statuses, thereby promoting greater labour market inclusivity.
Accumulating evidence shows that the posterior cerebellum is involved in mentalizing inferences of social events by detecting sequence information in these events, and by building and updating internal models of these sequences. By applying anodal and sham cerebellar transcranial direct current stimulation (tDCS) on the posteromedial cerebellum of healthy participants, and using a serial reaction time (SRT) task paradigm, the current study examined the causal involvement of the cerebellum in implicitly learning sequences of social beliefs of others (Belief SRT) and of non-social colored shapes (Cognitive SRT). Apart from the social versus cognitive domain difference, both tasks were structurally identical. Anodal stimulation (i.e., 2 mA for 20 min) during the social Belief SRT task did not significantly improve reaction times; however, it did reveal generally faster responses for the Cognitive SRT task. This improved performance persisted after the cessation of stimulation, both 30 min later and up to one week later. Our findings suggest a general positive effect of anodal cerebellar tDCS on implicit non-social Cognitive sequence learning, supporting a causal role of the cerebellum in this learning process. We speculate that the lack of tDCS modulation of the social Belief SRT task is due to the familiar and overlearned nature of attributing social beliefs, suggesting that easy and automatized tasks leave little room for improvement through tDCS.
Precision and effectiveness of Artificial Intelligence (AI) models are highly dependent on the availability of genuine, relevant, and representative training data. AI systems tested and validated on poor-quality datasets can produce inaccurate, erroneous, skewed, or harmful outcomes (actions, behaviors, or decisions), with far-reaching effects on individuals' rights and freedoms. Appropriate data governance for AI development poses manifold regulatory challenges, especially regarding personal data protection. An area of concern is compliance with rules for lawful collection and processing of personal data, which implies, inter alia, that using databases for AI design and development should be based on a clear and precise legal ground: the prior consent of the data subject or another specific valid legal basis. Faced with this challenge, the European Union's personal data protection legal framework does not provide a preferred, one-size-fits-all answer, and the best option will depend on the circumstances of each case. Although there is no hierarchy among the different legal bases for data processing, in doubtful cases, consent is generally understood by data controllers as a preferred or default choice for lawful data processing. Notwithstanding this perception, obtaining data subjects' consent is not without drawbacks for AI developers or AI-data controllers, as they must meet (and demonstrate) various requirements for the validity of consent. As a result, data subjects' consent may not be a suitable and realistic option to serve AI development purposes. In view of this, it is necessary to explore the possibility of basing this type of personal data processing on lawful grounds other than the data subject's consent, specifically, the legitimate interest of the data controller or third parties. Given its features, legitimate interest could help to meet the challenge of quality, quantity, and relevance of data curation for AI training.
The aim of this article is to provide an initial conceptual approach to support the debate about data governance for AI development in the European Union (EU), as well as in non-EU jurisdictions with European-like data protection laws. Based on the rules set by the EU General Data Protection Regulation (GDPR), this paper starts by referring to the relevance of adequate data curation and processing for designing trustworthy AI systems, followed by a legal analysis and conceptualization of some difficulties data controllers face for lawful processing of personal data. After reflecting on the legal standards for obtaining data subject's valid consent, the paper argues that legitimate interests (if certain criteria are met) may better match the purpose of building AI training datasets.
Throughout the COVID-19 pandemic, public transport has been one of the hardest hit transport modes, losing ridership due to fear of contagion. This can partially be explained by the sector's lack of preparedness for a pandemic scenario, as only a few cities had epidemic contingency plans for the transport sector. To anticipate disruptions caused by future crises, we look at the preparedness for, and the response to, COVID-19 by the public transport sector in Belgium. We interview all public transport operators in Belgium and analyze the interviews through the disaster management framework. We also aim to distill the lessons that can be learned from the pandemic to increase resilience in future public transport planning. We find that no operator in Belgium had contingency plans ready for a pandemic scenario, but that other plans were deployed to adapt their offer to COVID-19 conditions. Although all operators lost a significant part of their ridership, their offer was maintained throughout the crisis, albeit at a decreased level for some operators. The availability of reliable and real-time data was identified as an important lesson by the operators, as was the ability to identify a core response team in case of a crisis. COVID-19 was seen by the operators as a learning platform for facing future crises, and it highlighted the need to increase reactivity through better preparedness and data availability. We recommend the structural use of foresight methods, for example scenario planning, to increase the preparedness of operators in the case of future disruptions.
The air transport industry is a competitive and volatile market, creating a challenging operating environment for both airports and airlines. While airline market structures are rapidly changing, airports are in continuous need of improving their technical efficiency. Therefore, in this paper, we examine the effect of airline dominance on airport technical efficiency. We contribute to previous research by examining this effect on a balanced panel dataset of medium-sized European airports while considering the effect of the macro-environment, ownership, belonging to an airport group, and the network structure of the dominant carrier (LCC or FSC). Results demonstrate a significant positive relationship between airline dominance and an airport's technical efficiency. The paper has an important policy implication, as it highlights the importance of taking into account efficiency effects at airports when evaluating airline consolidation cases.
A cutting-edge lignocellulose fractionation technology for the co-production of glucose, native-like lignin, and furfural was introduced using mannitol (MT)-assisted p-toluenesulfonic acid/pentanol pretreatment as an eco-friendly process. The addition of an optimized 5% MT in pretreatment enhanced the delignification rate by 29% and enlarged the surface area and biomass porosity by 1.07–1.80-fold. This increased the glucose yield by 45% (from 65.34% to 94.54%) after enzymatic hydrolysis relative to pretreatment without MT. The lignin extracted in the organic phase of pretreatment exhibited a β-O-4 bond content (61.54/100 Ar) with properties of native cellulolytic enzyme lignin. Lignin characterization and molecular docking analyses revealed that the hydroxyl tails of MT were incorporated into lignin and formed etherified lignin, which preserved high lignin integrity. The solubilized hemicellulose (96%) in the liquid phase of pretreatment was converted into furfural with a yield of 83.99%. The MT-assisted pretreatment could contribute to a waste-free biorefinery pathway toward a circular bioeconomy.
Objective: The role of the midwife is well defined, and midwifery education is precisely prescribed so that students gain all competencies that derive from the definition of the midwifery profession. In Slovenia, however, midwives do not practice the full scope of midwifery; therefore, the aim of the study was to explore whether women are aware of the role that midwives have. Design: To study lay awareness of midwives' role and competencies, a quantitative survey was performed using the validated Midwifery Profiling Questionnaire (MidProQ), designed by a Belgian research team of midwives and adapted to Slovenian circumstances. Setting: An online survey was performed using the software 1KA. The link to the survey was distributed amongst groups of women via social media. Participants: Snowball sampling was used, recruiting women via gynaecology and obstetric forums. Measurements: The MidProQ measures women's agreement with statements that describe the competencies of midwives for the prenatal, intrapartum and postnatal periods. 228 fully completed questionnaires were analysed with the SPSS programme. Findings: Only 43% of participants felt that midwives were capable of managing an uncomplicated pregnancy independently; however, they clearly recognised their role in uncomplicated labour (93%). The most clearly recognised role of midwives in the postnatal period was breastfeeding counselling (89%). The role of the midwife is intertwined with the competencies of the obstetrician, whom the majority of participants still consider more competent to manage an uncomplicated pregnancy. Key conclusions: Participants were not aware of all the fields in which midwives could practice. Implications for practice: More must be done so that the lay public recognises the potential of full-scope midwifery practice, for example by promoting the profession via social media.
Mangrove distribution maps are used for a variety of applications, ranging from estimating mangrove extent and deforestation rates to quantifying carbon stocks and modelling responses to climate change. Multiple mangrove distribution datasets exist, derived from different remote sensing data and classification methods, and so there are discrepancies among them, especially with respect to the locations of their range limits. We investigate the latitudinal discrepancies in poleward mangrove range limits represented by these datasets and how these differences translate climatologically, considering factors known to control mangrove distributions. We compare four widely used global mangrove distribution maps: the World Atlas of Mangroves, the World Atlas of Mangroves 2, the Global Distribution of Mangroves, and the Global Mangrove Watch. We examine differences in climate among 21 range limit positions by analysing a set of bioclimatic variables that have commonly been related to the distribution of mangroves. Global mangrove maps show important discrepancies in the position of poleward range limits. Latitudinal differences between mangrove range limits in the datasets exceed 5°, 7° and 10° in western North America, western Australia and northern West Africa, respectively. In some range limit areas, such as Japan, discrepancies in the position of mangrove range limits in different datasets correspond to differences exceeding 600 mm in annual precipitation and > 10 °C in the minimum temperature of the coldest month. We conclude that dissimilarities in mapping mangrove range limits in different parts of the world can jeopardise inferences of climatic thresholds. We suggest that global mapping efforts should prioritise mapping the position of range limits with greater accuracy, ideally combining data from field-based surveys and very high-resolution remote sensing data.
An accurate representation of range limits will contribute to better predicting mangrove range dynamics and shifts in response to climate change.
The factory of the future is steering away from conventional assembly line production with sequential conveyor technology towards flexible assembly lines, where products dynamically move between work-cells. Flexible assembly lines are significantly more complex to plan than sequential lines. Therefore, there is an increased need for autonomously generated, flexible, robot-centered assembly plans. The novel Autonomous Constraint Generation (ACG) method presented here generates a dynamic assembly plan starting from an initial assembly sequence, which is easier to program. Using a physics simulator, variations of the work-cell configurations from the initial sequence are evaluated and assembly constraints are autonomously deduced. Based on these constraints, the method can generate a complete assembly graph that is specific to the robot and work-cell in which it was initially programmed, taking into account both part and robot collisions. A major advantage is that it scales only linearly with the number of parts in the assembly. The method is compared to previous research by applying it to the Cranfield Benchmark problem. Results show a 93% reduction in planning time compared to Reinforcement Learning Search. Furthermore, it is more accurate than generating the assembly graph from human interaction. Finally, applying the method to a real-life industrial use case shows that a valid assembly graph is generated within a time frame reasonable for industry.
The development of a reliable and automated condition monitoring methodology for the detection of mechanical failures in rotating machinery has garnered much interest in recent years. Thanks to the rise in popularity of machine learning techniques, the number of purely data-driven approaches to vibration-based condition monitoring has also drastically increased. Instead of directly using the vibration measurement data as input to a machine learning model, this work first exploits the cyclostationary characteristics inherent to vibration waveforms originating from rotating machinery. The proposed methodology first estimates the two-dimensional cyclic spectral coherence map of a vibration signal in order to decompose the cyclic modulations on the cyclic and carrier frequency plane. While this provides an effective tool to visualize any potential modulation signatures of faulty gears or bearings, its dimensions do not allow for easy inspection over time. To tackle this issue, this paper proposes an unsupervised deep learning approach that wields this vast amount of data as a tool for detecting persistent changes in the modulation characteristics of the vibration signals. In the first phase, a deep autoencoder learns to reconstruct predictions of the cyclic coherence maps based on unlabeled healthy vibration data and the machine operating conditions. Two post-processing steps improve the predictions by mitigating frequency shifts and outlier or noisy measurements. Lastly, the residual error between the predicted and the actual coherence map is aggregated and used for threshold-based alarming. The autoencoder model is trained using five years of gearbox vibration data from five different wind turbines. The methodology is then validated on two faulty and eight healthy turbines.
The results confirm that the proposed approach can deliver clear indications of failure for the faulty turbine while being completely devoid of any significant alarm trends for the healthy turbines. Thanks to the combination of highly effective cyclostationary signal processing with deep learning while using the operating conditions, the proposed methodology can detect and track incipient mechanical faults from non-stationary vibration data of rotating machinery. Lastly, it is important to emphasize that the proposed method is capable of learning the healthy behavior on one turbine and predicting the expected behavior on another turbine.
Ag-based semiconductors have attracted significant attention as promising visible-light photocatalysts for environmental purification. In this study, the electronic and photocatalytic properties of the NASICON-type phosphate AgTi2(PO4)3 have been addressed in detail, combining experimental results and theoretical calculations. The as-prepared sample was characterized for its morphological, structural and optical properties by various techniques. The generalized gradient approximation of Perdew-Burke-Ernzerhof (GGA-PBE) within density functional theory (DFT) was used to investigate the electronic structure. We applied corrective Hubbard U terms to the Ti 3d orbitals in order to better reproduce the experimental band gap of 2.6 eV. The photocatalytic activity was then evaluated for rhodamine B dye degradation under visible light illumination. Efficient dye degradation, up to 97.2%, was achieved in 120 min. In addition, the catalyst exhibited good stability over four consecutive cycles. Finally, by combining experimental and theoretical findings, the origin of the photocatalytic activity was identified and a photodegradation mechanism was proposed.
In the present study, we tested the common assumption that teachers with more experience consider themselves better prepared for online teaching and learning (OTL). Utilizing data from a survey of 366 higher-education teachers from Portugal at the beginning of the COVID-19 pandemic in 2020, we performed structural equation modeling to quantify the experience-readiness relationship. The survey assessed teachers' OTL readiness, measured by their perceptions of institutional support, online teaching presence, and TPACK self-efficacy. In contrast to the linearity assumption "the more experienced, the better prepared", we found robust evidence for a curvilinear relationship. Teachers' readiness for OTL first increased and then decreased with more experience; this applied especially to the self-efficacy dimension of readiness. Further analyses suggested that the experience-readiness relationship exists not only at the level of aggregated constructs but also at the level of indicators, that is, specific areas of knowledge, teaching, and support. We argue that both novice and experienced teachers in higher education could benefit from experience-appropriate pedagogical and content-related support programs for OTL.
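A curvilinear ("inverted-U") relationship of the kind reported here is commonly probed by adding a quadratic term to a regression: a negative coefficient on the squared predictor indicates that the outcome first rises and then falls. The sketch below uses entirely synthetic data (the scale, peak location, and noise level are hypothetical) and ordinary polynomial fitting rather than the study's structural equation models.

```python
# Illustrative only: detecting an inverted-U experience-readiness pattern with
# a quadratic fit on synthetic data. Not the authors' SEM analysis.
import numpy as np

rng = np.random.default_rng(1)
experience = rng.uniform(0, 30, 300)      # years of experience (hypothetical scale)
# Readiness peaks mid-career in this synthetic example, with small noise.
readiness = -0.01 * (experience - 15) ** 2 + 4.0 + 0.1 * rng.standard_normal(300)

# Fit readiness = a*experience^2 + b*experience + c
a, b, c = np.polyfit(experience, readiness, deg=2)
peak = -b / (2 * a)                        # turning point of the fitted curve

print(f"quadratic term a = {a:.4f}, estimated peak at ~{peak:.1f} years")
```

In the fitted model, `a < 0` is the signature of the "increase then decrease" pattern, and `-b / (2a)` locates the experience level at which self-rated readiness peaks.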
Inversion of in situ borehole gamma spectrometry data is a faster and less laborious method for calculating the vertical distribution of radioactivity in soil than conventional soil sampling. However, calculating the efficiency of a detector for such measurements is challenging due to spatial and temporal variation in soil properties and other measurement parameters. In this study, the sensitivity of simulated efficiencies for the 662 keV photon peak to different soil characteristics and measurement parameters was investigated. In addition, Bayesian data inversion with a Gaussian process model was used to calculate the activity concentration of 137Cs and its uncertainty, considering the sources of uncertainty identified during the sensitivity analysis, including soil density, borehole radius, and the uncertainty in detector position in the borehole. Several soil samples were also collected from the borehole and the surrounding area, and their 137Cs activity concentration was measured for comparison with the inversion results. The calculated 137Cs activity concentrations agree well with those obtained from soil samples. Therefore, it can be concluded that the vertical radioactivity distribution can be calculated with this probabilistic method using in situ gamma spectrometric measurements.
Carbon reduction requirements, alongside the need to secure energy demand, create a huge development opportunity for district energy systems (DESs) supported by renewable energy. In this study, a DES was fueled with kitchen waste (KW) feedstock for bioenergy production optimization and sector decarbonization. Inspired by the cascading principle, gasification and anaerobic digestion were used to treat mixed KW to maximize bioenergy recovery. Furthermore, a multi-objective mixed-integer model with time series was built to express complex processes such as technology deployment and mass and renewable electricity flows in mathematical language, achieving a trade-off between economic and environmental benefits. The KW treatment industry in Chongqing city was chosen as the location for a case study, in which two electricity trade scenarios were simulated by adjusting the associated parameters. The results show that a DES with a surplus electricity feed-in mode helps avoid adverse events on the main grid, as it ensures that at least 9,457 kWh per day, and up to a maximum of 16,793 kWh, will not be lost to long-distance transmission. The mode therefore also contributes more to carbon reduction benefits, but the levelized carbon reduction cost is between 353 and 470 CNY/tonne CO2, which is higher than the average carbon tax level published by the trading market.
The data inversion algorithm of the Dekati electrical low pressure impactor (ELPI+) is the procedure that converts the electrical currents measured in each impactor stage into a particle size distribution. If a particle is collected in the incorrect stage, either due to particle bounce or premature collection, the erroneously induced current will cause an error to propagate through the data inversion. In this work, we examine how this error propagation modifies the particle number size distribution. It is shown that particle bounce can contribute considerably to an erroneous particle distribution, as one large bouncing particle can be misinterpreted as more than 10,000 small particles. This indicates that efforts should be made to avoid particle bounce in the ELPI+, as particle bounce thoroughly modifies the obtained particle number size distribution. On the other hand, the ELPI+ appears to be quite robust against the effects of premature particle collection.
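The order of magnitude of the bounce effect follows from a back-of-the-envelope argument: the inversion infers particle number from measured current, and the charge carried per particle grows steeply with diameter, so one bounced large particle deposits as much charge as thousands of genuinely small ones. The diameters and the charging-efficiency exponent below are illustrative assumptions for the sketch, not instrument specifications.

```python
# Back-of-the-envelope sketch of the bounce artifact. The power-law exponent
# (~1.5) and the two diameters are hypothetical illustration values; real
# ELPI+ charger efficiency curves are more complex.
import math

def charge_per_particle(d_nm, exponent=1.5):
    """Relative charge acquired by a particle of diameter d_nm (assumed power law)."""
    return d_nm ** exponent

d_large = 10_000   # a 10 um particle that bounces into a lower stage (nm)
d_small = 20       # characteristic diameter of the stage it lands in (nm)

# Number of small particles whose combined charge matches one large particle:
apparent_count = charge_per_particle(d_large) / charge_per_particle(d_small)
print(f"one bounced particle counted as ~{apparent_count:,.0f} small particles")
```

Under these assumed values the ratio exceeds 10,000, consistent in magnitude with the artifact described above; with different exponents or diameters the exact figure shifts, but the qualitative conclusion (a severe distortion of the inverted number distribution) does not.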
We analyzed δ13C and δ15N values in different tooth portions (Growth Layer Groups, GLGs) of franciscanas, Pontoporia blainvillei, to investigate their effect on whole tooth (WT) isotopic values and the implications for dietary estimates. Tooth portions included the dentin deposited during the prenatal development (PND), the first year of life (GLG1) deposited during the nursing period and the central part of the tooth with no distinction amongst subsequent GLGs (Center). Isotopic mixing models estimating the contribution of PND, GLG1 and Center to WT showed that GLG1 has a strong effect on WT isotope values in juveniles, while Center only starts to affect WT isotopic values from age four. Isotopic mixing models estimating prey contribution to the diet of juveniles using WT vs Center tooth portions significantly differed in dietary outputs, demonstrating that GLG1 influence on WT isotope values affects dietary estimates in young franciscanas. As the small tooth size and narrowness of the last GLGs hinder the analysis of individual layers, we recommend excluding GLG1 in studies based on teeth isotope composition in franciscanas and caution when interpreting isotopic values from the WT of other small cetaceans.
Prosumers can actively participate in electricity markets through new market models. Peer-to-peer, community self-consumption, and transactive energy are the three market models said to complement traditional electricity markets, enabling prosumers to create and capture value. To date, however, the characteristics of these models and the incentivisation opportunities they offer prosumers cannot be easily distilled. Here, we propose a framework to distinguish between these market models based on the involved parties (peers, communities, and grid operators) and the traded commodities (electricity and flexibility). Furthermore, we compare the capacity of the different models for value generation for and by prosumers, which extends beyond financial benefits. In doing so, we systematically draw out the value generation potential in the dynamic between market models' capacities and prosumers' business models. As a result, a larger number of prosumers can be engaged and empowered to become active market actors, stimulating the ongoing energy transition towards achieving sustainability goals.
We studied the corrosion of Roman copper alloy coins that experienced alternations or progressive changes in their burial environment. We used coins still embedded in soil or in a concretion, selected from three professionally excavated sites: Berlicum and Krommenie in the Netherlands and Kempraten in Switzerland. µCT scanning and neutron scanning were used to record the 3-D properties of these coins prior to (destructive) analyses. It proved possible to tentatively identify the coins. Microscope observations and SEM-EDX analyses revealed complex corrosion processes related to changing burial environments. In a soil horizon with fluctuating groundwater levels in a region with upwelling reducing, iron-rich groundwater, the copper in a gunmetal coin is essentially replaced by iron oxides while tin remains and forms tin-oxide bands. Fluctuating redox conditions in marine-influenced environments were shown to transform a copper-alloy coin into strongly laminated copper sulphides with embedded gypsum crystals, with an outer surface of copper and copper-iron sulphides. Burial of bronze in a charcoal-rich layer probably caused temporarily highly alkaline soil conditions. This caused most of the copper to leach from this coin, leaving behind a laminated tin-dominated mass, with only a limited amount of (malachite) corrosion products remaining in the surrounding groundmass. In all three cases, corrosion processes tend to be anisotropic, probably because of cold-hammering of the coins during their manufacture. Such corrosion processes on massive copper alloy coins may produce features that lead to their incorrect classification as subferrati, i.e. copper alloy coins with an iron core. Our results may help in future to distinguish strongly corroded massive coins from subferrati.
Owing to its high porosity and tunable surface chemistry, activated carbon (AC) is considered a promising material for CO2 adsorption. Functionalising porous materials by plasma is challenging, but if successful, it could enhance the CO2 uptake capacity of AC via chemisorption. This work presents an in-depth analysis of the interactions between ammonia plasmas and the porous surface of AC monolithic samples. The treatment involved an ammonia-based atmospheric-pressure dielectric barrier discharge and a low-pressure radio frequency plasma. Unique plasma reactor designs for treating 3-dimensional, electrically conducting and non-conducting monolithic structures at atmospheric pressure, with versatile applications, are presented. The plasma-surface interactions were analysed using emission spectroscopy and X-ray photoelectron spectroscopy. AC samples with high surface N content were then used to assess the treatment effect on the subsurface. A much lower, though still significant, amount of N was found at depths of ∼30 µm. A simple fit of the results showed that the ratio of plasma species reaching the surface with higher to lower sticking probability was 4:1. A slight decrease in the microporosity of the plasma-treated samples was found and attributed to pore blocking by the grafted N species. Plasma-treated AC with high N content showed an improved CO2 adsorption capacity of up to 14%, and selectivity measurements against CH4 and N2 adsorption showed that the treatment was selective primarily towards CO2.
Background: The reemergence of the monkeypox epidemic has aroused great concern internationally. Concurrently, the COVID-19 epidemic is still ongoing. It is essential to understand the temporal dynamics of the monkeypox epidemic in 2022 and their relationship with the dynamics of the COVID-19 epidemic. In this study, we aimed to explore the temporal dynamic characteristics of the human monkeypox epidemic in 2022 and their relationship with those of the COVID-19 epidemic. Methods: We used publicly available data on cumulative monkeypox and COVID-19 cases in 2022, together with COVID-19 data from the beginning of 2020, for model validation and further analyses. The time series data were fitted with a descriptive model using the sigmoid function. Two important indices (logistic growth rate and semi-saturation period) could be obtained from the model to evaluate the temporal characteristics of the epidemic. Results: For the monkeypox epidemic, the growth rate of infection and the semi-saturation period showed a negative correlation (r = –0.47, p = 0.034). The growth rate also showed a significant relationship with the latitude of the country in which the outbreak occurred (r = –0.45, p = 0.038). The development of the monkeypox epidemic did not correlate significantly with that of COVID-19 in 2020 or 2022. When comparing the COVID-19 epidemic with that of monkeypox, a significantly longer semi-saturation period was observed for monkeypox, while a significantly larger growth rate was found for COVID-19 in 2020. Conclusions: This study investigates the temporal dynamics of the human monkeypox epidemic and its relationship with the ongoing COVID-19 epidemic, which could provide guidance for local governments to plan and implement fit-for-purpose epidemic prevention policies.
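The two indices used in this study come directly from the logistic (sigmoid) model of cumulative cases, C(t) = K / (1 + exp(-r(t - t0))): r is the logistic growth rate and t0 the semi-saturation period, the time at which half of the final case count K is reached. The sketch below fits this model to synthetic data with a coarse grid search standing in for a proper nonlinear least-squares fit; the epidemic curve and parameter values are hypothetical.

```python
# Sketch with synthetic data: recovering the logistic growth rate r and the
# semi-saturation period t0 from a cumulative-case curve. A grid search is
# used here for simplicity in place of a nonlinear least-squares fit.
import math

def logistic(t, K, r, t0):
    """Cumulative cases: saturates at K, with half of K reached at t = t0."""
    return K / (1.0 + math.exp(-r * (t - t0)))

# Synthetic epidemic curve with known parameters (K=1000, r=0.30, t0=40 days).
days = range(0, 100)
cases = [logistic(t, 1000, 0.30, 40) for t in days]

K = cases[-1]                     # curve is nearly saturated by the last day
best, best_sse = None, float("inf")
for r in [i / 100 for i in range(5, 61, 5)]:     # candidate growth rates
    for t0 in range(20, 61, 2):                  # candidate semi-saturation days
        sse = sum((c - logistic(t, K, r, t0)) ** 2 for t, c in zip(days, cases))
        if sse < best_sse:
            best, best_sse = (r, t0), sse

r_hat, t0_hat = best
print(f"recovered growth rate r = {r_hat}, semi-saturation period t0 = {t0_hat}")
```

A faster-growing epidemic has a larger r and typically a shorter t0, which is the kind of negative r-t0 association reported in the Results above.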