Recent publications
Despite the impressive performance obtained by recent single-image hand modeling techniques, they lack the capability to capture sufficient details of the 3D hand mesh. This deficiency greatly limits their applications when high-fidelity hand modeling is required, e.g., personalized hand modeling. To address this problem, we design a frequency split network to generate 3D hand meshes using different frequency bands in a coarse-to-fine manner. To capture high-frequency personalized details, we transform the 3D mesh into the frequency domain and propose a novel frequency decomposition loss to supervise each frequency component. By leveraging such a coarse-to-fine scheme, hand details that correspond to the higher frequency domain can be preserved. In addition, the proposed network is scalable and can stop inference at any resolution level to accommodate different hardware with varying computational power. To feed the scalable frequency network with frequency-split image features, we propose an image-graph ring feature mapping strategy. To train our network with per-vertex supervision, we use a bidirectional registration strategy to generate topology-fixed ground truth. To quantitatively evaluate the performance of our method in terms of recovering personalized shape details, we introduce a new evaluation metric named Mean-frequency Signal-to-Noise Ratio (MSNR), which measures the mean signal-to-noise ratio of the mesh signal over its frequency components. Extensive experiments demonstrate that our approach generates fine-grained details for high-fidelity 3D hand reconstruction, and that our evaluation metric is more effective than traditional metrics for measuring mesh details.
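The abstract does not spell out the MSNR formula. As a rough illustration of the kind of metric described, the sketch below (all names such as mean_frequency_snr, pred_spectrum, gt_spectrum, and num_bands are assumptions, not the paper's definition) splits the frequency coefficients of a mesh into contiguous bands, computes an SNR per band in dB, and averages the per-band values.

```python
import numpy as np

def mean_frequency_snr(pred_spectrum, gt_spectrum, num_bands=8, eps=1e-12):
    """Hypothetical MSNR-style metric (illustrative only).

    pred_spectrum, gt_spectrum: (K, 3) arrays of per-frequency mesh
    coefficients (e.g. from a graph Fourier transform of vertex positions).
    The spectrum is split into `num_bands` contiguous bands; an SNR is
    computed per band and the per-band values are averaged.
    """
    K = gt_spectrum.shape[0]
    edges = np.linspace(0, K, num_bands + 1, dtype=int)
    snrs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        signal = np.sum(gt_spectrum[lo:hi] ** 2)
        noise = np.sum((pred_spectrum[lo:hi] - gt_spectrum[lo:hi]) ** 2)
        snrs.append(10.0 * np.log10((signal + eps) / (noise + eps)))
    return float(np.mean(snrs))
```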
Consumer-grade mobile devices are used by billions worldwide. Their ubiquity provides opportunities to robustly capture everyday cognition. ‘Intuition’ was a remote observational study that enrolled 23,004 US adults, collecting 24 months of longitudinal multimodal data via their iPhones and Apple Watches using a custom research application that captured routine device use, self-reported health information and cognitive assessments. The study objectives were to classify mild cognitive impairment (MCI), characterize cognitive trajectories and develop tools to detect and track cognitive health at scale. The study addresses sources of bias in current cognitive health research, including limited representativeness (for example, racial/ethnic, geographic) and accuracy of cognitive measurement tools. We describe study design and provide baseline cohort characteristics. Next, we present foundational proof-of-concept MCI classification modeling results using interactive cognitive assessment data. Initial findings support the reliability and validity of remote MCI detection and the usefulness of such data in describing at-risk cognitive health trajectories in demographically diverse aging populations. ClinicalTrials.gov identifier: NCT05058950.
In conversation with Barocas, Hardt, and Narayanan’s Fairness and Machine Learning (FaML), we seek to broaden the scope of normative argument about machine learning and algorithmic decision making. Beginning from an understanding of fair cooperation among free and equal persons as a fundamental political value, we argue that concerns about fairness and machine learning need to be expanded in three ways. First, unfairness and discrimination are not only a matter of systematic group subordination. We consider other forms of unfairness that are not about disadvantaged groups but about removing barriers to opportunity, and suggest practical implications for algorithmic decisions. Secondly, while we broadly agree with FaML’s approach to fair organizational decisions, we underscore the limits of a focus on fair organizational decisions in advancing equality of opportunity in society. Finally, drawing on Rawls, we present aspects of a fair society that are not simply matters of equal opportunity, and consider some broader, under-explored ramifications of algorithms and AI on societal fairness. Specifically, we suggest the implications that AI deployment at scale has for fair distribution of income and wealth, political liberties, and public deliberation.
Denoising diffusion models have emerged as a powerful tool for various image generation and editing tasks, facilitating the synthesis of visual content in an unconditional or input-conditional manner. The core idea behind them is learning to reverse the process of gradually adding noise to images, allowing them to generate high-quality samples from a complex distribution. In this survey, we provide an exhaustive overview of existing methods using diffusion models for image editing, covering both theoretical and practical aspects in the field. We delve into a thorough analysis and categorization of these works from multiple perspectives, including learning strategies, user-input conditions, and the array of specific editing tasks that can be accomplished. In addition, we pay special attention to image inpainting and outpainting, and explore both earlier traditional context-driven and current multimodal conditional methods, offering a comprehensive analysis of their methodologies. To further evaluate the performance of text-guided image editing algorithms, we propose a systematic benchmark, EditEval, featuring an innovative metric, LMM Score. Finally, we address current limitations and envision some potential directions for future research. The accompanying repository is released at https://github.com/SiatMMLab/Awesome-Diffusion-Model-Based-Image-Editing-Methods.
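For readers unfamiliar with the denoising-diffusion idea the survey builds on, the following is a minimal DDPM-style sketch, not taken from any surveyed method: a linear noise schedule, the closed-form forward corruption q(x_t | x_0), and one ancestral reverse step. Here eps_model stands in for an arbitrary noise-prediction network and is an assumption of this sketch.

```python
import torch

# Minimal DDPM-style sketch: linear beta schedule, one-shot forward noising,
# and a single ancestral reverse (denoising) step with variance beta_t.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def q_sample(x0, t, noise):
    """Forward process: corrupt x0 directly to x_t using the closed form."""
    ab = alpha_bars[t].view(-1, 1, 1, 1)
    return ab.sqrt() * x0 + (1 - ab).sqrt() * noise

@torch.no_grad()
def p_sample_step(eps_model, x_t, t):
    """One reverse step given a noise-prediction network eps_model(x, t)."""
    beta, alpha, ab = betas[t], alphas[t], alpha_bars[t]
    eps = eps_model(x_t, torch.full((x_t.shape[0],), t))
    mean = (x_t - beta / (1 - ab).sqrt() * eps) / alpha.sqrt()
    if t == 0:
        return mean
    return mean + beta.sqrt() * torch.randn_like(x_t)
```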
We analyze call center data on factors such as agent heterogeneity, customer patience and agent breaks. Based on this, we construct different simulation models and compare their performance with the actual realized performance. We classify them according to the extent to which they accurately approximate the service level and average waiting times. We also obtain a theoretical understanding of how to distinguish between model error and other aspects such as random noise. We conclude that explicitly modeling breaks and agent heterogeneity is crucial for obtaining a precise model.
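As a toy illustration of the modeling choices discussed (not the paper's simulation models), the sketch below simulates Poisson arrivals served by agents with heterogeneous service rates and periodic breaks, and reports the average wait and a service-level fraction. All parameter names and values are assumptions.

```python
import random

def simulate(lam=2.0, agent_rates=(0.5, 0.7, 1.0), break_every=120.0,
             break_len=15.0, horizon=480.0, sl_target=1/3, seed=0):
    """Toy call center: Poisson arrivals (rate lam per minute), exponential
    service with per-agent rates (heterogeneity), and a break_len-minute
    break after every break_every minutes of each agent's schedule.
    Returns average wait (minutes) and fraction answered within sl_target."""
    rng = random.Random(seed)
    period = break_every + break_len
    free_at = [0.0] * len(agent_rates)   # time each agent next becomes free
    t, waits, answered_in_time = 0.0, [], 0
    while True:
        t += rng.expovariate(lam)        # next customer arrival
        if t > horizon:
            break
        # earliest possible service start per agent, pushed past break windows
        starts = []
        for free in free_at:
            s = max(free, t)
            if s % period >= break_every:            # agent is on a break
                s = (s // period + 1) * period       # wait for next work period
            starts.append(s)
        i = min(range(len(starts)), key=starts.__getitem__)
        wait = starts[i] - t
        waits.append(wait)
        answered_in_time += wait <= sl_target
        free_at[i] = starts[i] + rng.expovariate(agent_rates[i])
    return sum(waits) / len(waits), answered_in_time / len(waits)

avg_wait, service_level = simulate()
print(f"average wait: {avg_wait:.2f} min, service level: {service_level:.0%}")
```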
Thanks to the vast amount of available resources and unique propagation properties, terahertz (THz) frequency bands are viewed as a key enabler for achieving ultrahigh communication performance and precise sensing capabilities in future wireless systems. Recently, the European Telecommunications Standards Institute (ETSI) initiated an Industry Specification Group (ISG) on THz, which aims to establish the technical foundation for subsequent standardization of this technology, a step that is pivotal for its successful integration into future networks. Starting from the work recently finalized within this group, this article provides an industrial perspective on potential use cases and frequency bands of interest for THz communication systems. We first identify promising frequency bands in the 100 GHz-1 THz range, offering over 500 GHz of available spectrum that can be exploited to unlock the full potential of THz communications. Then, we present key use cases and application areas for THz communications, emphasizing the role of this technology and its advantages over other frequency bands. We discuss their target requirements and show that some applications demand multi-Tb/s data rates, latency below 0.5 ms, and sensing accuracy down to 0.5 cm. Additionally, we identify the main deployment scenarios and outline other enabling technologies crucial for overcoming the challenges faced by THz systems. Finally, we summarize past and ongoing standardization efforts focusing on THz communications, while also providing an outlook toward the inclusion of this technology as an integral part of the future sixth generation (6G) and beyond communication networks.
Millimeter wave (mmWave) communication has been proposed as an enabling technology for the evolution from 4G to 5G. Using the enormous bandwidth between 24.25 GHz and 71.0 GHz, this technology allows for a huge improvement in channel capacity compared to sub-6 GHz bands [1]. In addition, the smaller size of individual antenna elements supports the integration of larger antenna arrays for beamforming and spatial multiplexing. 3GPP has therefore supported this technology and defined corresponding technical specifications for frequency range 2 (FR2) bands in 5G NR. Although mmWave communication has shown significant potential in research, its commercial deployment has not thrived as expected. Major deployments have occurred only in the US and Japan, while in other regions (China, Europe, South Korea, etc.) deployment is generally limited. Moreover, coverage can be almost non-existent in many urban areas, even in countries that have deployed mmWave cells. The difficulty in commercializing 5G mmWave stems from inherent challenges that must be addressed to fully unleash its benefits. In the following sections, we discuss five major challenges and elaborate on three enabling technologies for coverage and reliability enhancement that can be the current focus. Other technologies, such as reconfigurable intelligent surfaces, will be considered in future research.
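The bandwidth argument can be made concrete with the Shannon capacity formula C = B log2(1 + SNR). The numbers below are purely illustrative assumptions (a 100 MHz sub-6 GHz carrier at 20 dB SNR versus an 800 MHz FR2 carrier at 10 dB SNR, reflecting the lower SNR typical of mmWave links due to higher path loss), not figures from the article.

```python
import math

def shannon_capacity_gbps(bandwidth_hz, snr_db):
    """Shannon capacity C = B * log2(1 + SNR), returned in Gb/s."""
    snr = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr) / 1e9

# Illustrative comparison only: wider bandwidth outweighs the SNR penalty.
print(shannon_capacity_gbps(100e6, 20))   # sub-6 GHz carrier: ~0.67 Gb/s
print(shannon_capacity_gbps(800e6, 10))   # FR2 (mmWave) carrier: ~2.77 Gb/s
```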
STUDY QUESTION
Can algorithms using wrist temperature, available on compatible models of iPhone and Apple Watch, retrospectively estimate the day of ovulation and predict the next menses start day?
SUMMARY ANSWER
Algorithms using wrist temperature can provide retrospective ovulation estimates and next menses start day predictions for individuals with typical or atypical cycle lengths.
WHAT IS KNOWN ALREADY
Wrist skin temperature is affected by hormonal changes associated with the menstrual cycle and can be used to estimate the timing of cycle events.
STUDY DESIGN, SIZE, DURATION
We conducted a prospective cohort study of 262 menstruating females (899 menstrual cycles) aged 14 and older who logged their menses, performed urine LH testing to define day of ovulation, recorded daily basal body temperature (BBT), and collected overnight wrist temperature. Participants contributed between 2 and 13 menstrual cycles.
PARTICIPANTS/MATERIALS, SETTING, METHODS
Algorithm performance was evaluated for three algorithms: one for retrospective ovulation day estimate in ongoing cycles (Algorithm 1), one for retrospective ovulation day estimate in completed cycles (Algorithm 2), and one for prediction of next menses start day (Algorithm 3). Each algorithm’s performance was evaluated under multiple scenarios, including for participants with all typical cycle lengths (23–35 days) and those with some atypical cycle lengths (<23, >35 days), in cycles with the temperature change of ≥0.2°C typically associated with ovulation, and with any temperature change included.
MAIN RESULTS AND ROLE OF CHANCE
Two hundred and sixty participants provided 889 cycles. Algorithm 1 provided a retrospective ovulation day estimate in 80.5% of ongoing menstrual cycles of all cycle lengths with ≥0.2°C wrist temperature signal with a mean absolute error (MAE) of 1.59 days (95% CI 1.45, 1.74), with 80.0% of estimates being within ±2 days of ovulation. Retrospective ovulation day in an ongoing cycle (Algorithm 1) was estimated in 81.9% (MAE 1.53 days, 95% CI 1.35, 1.70) of cycles for participants with all typical cycle lengths and 77.7% (MAE 1.71 days, 95% CI 1.42, 2.01) of cycles for participants with atypical cycle lengths. Algorithm 2 provided a retrospective ovulation day estimate in 80.8% of completed menstrual cycles with ≥0.2°C wrist temperature signal with an MAE of 1.22 days (95% CI 1.11, 1.33), with 89.0% of estimates being within ±2 days of ovulation. Wrist temperature provided the next menses start day prediction (Algorithm 3) at the time of ovulation estimate (89.4% within ±3 days of menses start) with an MAE of 1.65 (95% CI 1.52, 1.79) days in cycles with ≥0.2°C wrist temperature signal.
LIMITATIONS, REASONS FOR CAUTION
There are several limitations, including reliance on LH testing to identify ovulation, which may mislabel some cycles. Additionally, the potential for false retrospective ovulation estimates when no ovulation occurred reinforces the idea that this estimate should not be used in isolation.
WIDER IMPLICATIONS OF THE FINDINGS
Algorithms using wrist temperature can provide retrospective ovulation estimates and next menses start day predictions for individuals with typical or atypical cycle lengths.
STUDY FUNDING/COMPETING INTEREST(S)
Apple is the funding source for this manuscript. Y.W., C.Y.Z., J.P., S.Z., and C.L.C. own Apple stock and are employed by Apple. S.M. has research funding from Apple for a separate study, the Apple Women’s Health Study, including meeting and travel support to present research findings related to that separate study. A.M.Z.J., D.D.B., B.A.C., and J.P. had no conflicts of interest.
TRIAL REGISTRATION NUMBER
NCT05852951.
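The headline figures in the study above are mean absolute errors with 95% confidence intervals and within-±2-day (or ±3-day) rates. As a rough sketch of how such summaries can be computed from per-cycle errors, illustrative only and not the study's analysis code, one might use a simple bootstrap as below; the function name and parameters are assumptions.

```python
import numpy as np

def mae_with_ci(estimated_day, true_day, within=2, n_boot=2000, seed=0):
    """Illustrative only: MAE of ovulation-day estimates, a bootstrap 95% CI,
    and the share of estimates within +/- `within` days of the reference day."""
    err = np.abs(np.asarray(estimated_day) - np.asarray(true_day))
    rng = np.random.default_rng(seed)
    boot = [rng.choice(err, size=err.size, replace=True).mean()
            for _ in range(n_boot)]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    return err.mean(), (lo, hi), float(np.mean(err <= within))
```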
We addressed genomic prediction accounting for partial correlation of marker effects, which entails the estimation of the partial correlation network/graph (PCN) and the precision matrix of an unobservable m‐dimensional random variable. To this end, we developed a set of statistical models and methods by extending the canonical model selection problem in Gaussian concentration graph and directed acyclic graph models. Our frequentist formulations combined existing methods with the EM algorithm and were termed Glasso‐EM, Concord‐EM and CSCS‐EM, whereas our Bayesian formulations corresponded to hierarchical models termed Bayes G‐Sel and Bayes DAG‐Sel. We applied our methods to a real bull fertility dataset and then carried out gene annotation of the seven markers having the highest degrees in the estimated PCN. Our findings provided biological evidence supporting the usefulness of identifying genomic regions that are highly connected in the inferred PCN. Moreover, a simulation study showed that some of our methods can accurately recover the PCN (accuracy up to 0.98 using Concord‐EM), estimate the precision matrix (Concord‐EM yielded the best results) and predict breeding values (the best reliability was 0.85 for a trait with heritability of 0.5 using Glasso‐EM).
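For orientation, the snippet below shows a plain graphical-lasso fit with scikit-learn, recovering a sparse precision matrix and the implied partial correlation network from fully observed data. This is only a reference point, not the Glasso‐EM, Concord‐EM, CSCS‐EM or Bayesian formulations developed in the paper; the data and regularization strength are arbitrary assumptions.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Plain graphical lasso on fully observed data, as a baseline illustration of
# sparse precision-matrix (and hence PCN) estimation.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))              # 500 samples, 20 "markers"

model = GraphicalLasso(alpha=0.05).fit(X)
Theta = model.precision_                        # estimated precision matrix

# Convert to partial correlations; nonzero off-diagonals define the PCN edges.
d = np.sqrt(np.diag(Theta))
partial_corr = -Theta / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)
edges = np.argwhere(np.triu(np.abs(partial_corr) > 1e-8, k=1))
print(f"{len(edges)} edges in the estimated partial correlation network")
```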
Full-duplex (FD) wireless can significantly enhance spectrum efficiency but requires effective self-interference (SI) cancellers. RF SI cancellation (SIC) via frequency-domain equalization (FDE), where bandpass filters channelize the SI, is suited for integrated circuits (ICs). In this paper, we explore the limits and higher layer challenges associated with using such cancellers. We evaluate the performance of a custom FDE-based canceller using two testbeds; one with mobile FD radios and the other with upgraded, static FD radios in the PAWR COSMOS testbed. The latter is a lasting artifact for the research community, alongside a dataset containing baseband waveforms captured on the COSMOS FD radios, facilitating FD-related experimentation at the higher networking layers. We evaluate the performance of the FDE-based FD radios in both testbeds, with experiments showing 95 dB overall achieved SIC (52 dB from RF SIC) across 20 MHz bandwidth. We conduct network-level experiments for (i) uplink-downlink networks with inter-user interference, and (ii) heterogeneous networks with half-duplex and FD users, showing FD gains of 1.14×–1.25× and 1.25×–1.73×, respectively, confirming analytical results. We also evaluate the performance of an FD jammer-receiver, demonstrating a strong dependence on relative transmit power levels and modulation schemes.
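As a toy model of the FDE idea (not the custom RF canceller evaluated in the paper), the sketch below channelizes a simulated self-interference signal into FFT sub-bands, fits one complex tap per sub-band from the known transmit block, subtracts the reconstruction, and reports the resulting cancellation in dB. The block size, number of sub-bands, and channel are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_bands = 1024, 16
tx = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
h = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / 2   # assumed SI channel
rx = np.convolve(tx, h, mode="full")[:N]                         # received self-interference

TX, RX = np.fft.fft(tx), np.fft.fft(rx)
edges = np.linspace(0, N, n_bands + 1, dtype=int)
si_hat = np.zeros(N, dtype=complex)
for lo, hi in zip(edges[:-1], edges[1:]):
    # one least-squares complex tap per sub-band
    tap = np.vdot(TX[lo:hi], RX[lo:hi]) / np.vdot(TX[lo:hi], TX[lo:hi])
    si_hat[lo:hi] = tap * TX[lo:hi]

residual = RX - si_hat
sic_db = 10 * np.log10(np.sum(np.abs(RX) ** 2) / np.sum(np.abs(residual) ** 2))
print(f"cancellation achieved by this toy FDE stage: {sic_db:.1f} dB")
```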
Reinforcement learning (RL) is an effective method for finding reasoning pathways in incomplete knowledge graphs (KGs). To overcome the challenges of a large action space, a self-supervised pre-training method is proposed to warm up the policy network before the RL training stage. To alleviate the distributional mismatch issue in general self-supervised RL (SSRL), in our supervised learning (SL) stage the agent selects actions based on the policy network and learns from the labels it generates; this self-generation of labels is the intuition behind the name self-supervised. With this training framework, the information density of our SL objective is increased and the agent is prevented from getting stuck on early rewarded paths. Our SSRL method improves the performance of RL by pairing it with the wide coverage achieved by SL during pre-training, since the breadth of the SL objective makes it infeasible to train an agent with SL alone. We show that our SSRL model meets or exceeds current state-of-the-art results on all Hits@k and mean reciprocal rank (MRR) metrics on four large benchmark KG datasets. The SSRL method can be used as a plug-in for any RL architecture for a knowledge graph reasoning (KGR) task. We adopt two RL architectures, MINERVA and MultiHopKG, as our baseline RL models and experimentally show that our SSRL model consistently outperforms both baselines on all four KG reasoning tasks. Code for our method and scripts to run all experiments can be found at https://anonymous.4open.science/r/KnowledgeGraph-Reasoning-with-Self-supervised-Reinforcement-Learning00E7/README.md.
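The abstract's description of the SL warm-up can be pictured with the sketch below, which is not the released code: the current policy rolls out a short path and the actions it actually took are reused as cross-entropy labels, densifying supervision before RL fine-tuning. Here policy, env, and their interfaces are assumptions, and how rolled-out paths or labels are filtered in the actual method is defined in the paper, not here.

```python
import torch
import torch.nn.functional as F

def sl_warmup_step(policy, env, optimizer, horizon=3):
    """Illustrative self-supervised warm-up step (hypothetical interfaces):
    roll out a path with the current policy, then treat the sampled actions
    as labels for a cross-entropy update of the same policy."""
    state = env.reset()
    logits_per_step, labels = [], []
    for _ in range(horizon):
        logits = policy(state)                          # scores over candidate edges
        action = torch.distributions.Categorical(logits=logits).sample()
        logits_per_step.append(logits)
        labels.append(action)
        state = env.step(action)
    loss = sum(F.cross_entropy(l.unsqueeze(0), a.reshape(1))
               for l, a in zip(logits_per_step, labels))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```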