Recent publications
To determine whether convolutional neural networks (CNNs) can classify the severity of central vision loss from fundus autofluorescence (FAF) images and color fundus images of eyes with retinitis pigmentosa (RP), and to evaluate the utility of these images for severity classification.
Retrospective observational study.
Medical charts of patients with RP who visited Nagoya University Hospital were reviewed. Eyes with atypical RP or previous surgery were excluded. The mild group comprised patients with a mean deviation value of > −10 decibels, and the severe group those with < −20 decibels, on the Humphrey Field Analyzer 10-2 program. CNN models were created by transfer learning of VGG16 pretrained on ImageNet to classify patients as either mild or severe, using FAF images or color fundus images.
Overall, 165 patients were included in this study; 80 patients were classified into the severe and 85 into the mild group. The test data comprised 40 patients in each group, and the images of the remaining patients were used as training data, with data augmentation by rotation and flipping. The highest accuracies of the CNN models when using color fundus and FAF images were 63.75% and 87.50%, respectively.
Using FAF images may enable accurate assessment of central visual function in RP. FAF images may contain more information relevant to central visual function than color fundus images.
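For readers who want a concrete picture of the transfer-learning setup described above, here is a minimal Keras sketch of adapting ImageNet-pretrained VGG16 to a binary mild/severe classifier. The input size, classifier head, optimizer, and augmentation parameters are illustrative assumptions, not the study's actual configuration.

```python
# Minimal sketch of VGG16 transfer learning for binary severity
# classification (mild vs. severe). Hyperparameters are illustrative
# assumptions, not the values used in the study.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained convolutional backbone

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # mild (0) vs. severe (1)
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])

# Data augmentation by rotation and flipping, as in the abstract.
augment = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=90, horizontal_flip=True, vertical_flip=True,
    rescale=1.0 / 255)
# "train_dir" is a hypothetical directory of FAF or color fundus images:
# model.fit(augment.flow_from_directory("train_dir", target_size=(224, 224),
#                                       class_mode="binary"), epochs=30)
```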
Purpose
Deep-learning-based supervised CT segmentation relies on fully and densely labeled data, whose annotation is time-consuming. In this study, our proposed method aims to improve segmentation performance on CT volumes with limited annotated data by considering category-wise difficulties and distribution.
Methods
We propose a novel confidence-difficulty weight (CDifW) allocation method that considers confidence levels to balance training across categories, influencing both the loss function and the volume-mixing process for pseudo-label generation. Additionally, we introduce a novel Double-Mix Pseudo-label Framework (DMPF), which strategically selects categories for image blending based on the distribution of voxel counts per category and the segmentation-difficulty weight. DMPF is designed to enhance the segmentation performance of categories that are challenging to segment.
Results
Our approach was tested on two commonly used datasets: a Congenital Heart Disease (CHD) dataset and the Beyond-the-Cranial-Vault (BTCV) abdomen dataset. Compared with state-of-the-art (SOTA) methods, our approach achieved improvements of 5.1% and 7.0% in Dice score for the segmentation of difficult-to-segment categories, using 5% of the labeled data in CHD and 40% of the labeled data in BTCV, respectively.
Conclusion
Our method improves segmentation performance for difficult categories within CT volumes through category-wise weighting and weight-based mixture augmentation. It was validated across multiple datasets and is significant for advancing semi-supervised segmentation tasks in health care. The code is available at https://github.com/MoriLabNU/Double-Mix.
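As one plausible reading of the weighting idea sketched in the Methods, the following toy code assigns larger weights to categories on which the model is less confident, and prefers rare, difficult categories when choosing what to blend between volumes. This is a simplified illustration of the abstract's description, not the released Double-Mix implementation (see the repository above for the actual code).

```python
# Illustrative sketch of confidence-based category weighting and
# weight-guided category selection for mixing. Simplified reading of
# the abstract, not the authors' implementation.
import numpy as np

def category_weights(probs, labels, num_classes, eps=1e-6):
    """Assign larger weights to categories the model is less confident on.

    probs:  (N, C) softmax outputs for N labeled voxels
    labels: (N,)   ground-truth category per voxel
    """
    w = np.zeros(num_classes)
    for c in range(num_classes):
        mask = labels == c
        conf = probs[mask, c].mean() if mask.any() else 1.0
        w[c] = 1.0 / (conf + eps)  # low confidence -> high difficulty weight
    return w / w.sum()

def pick_mix_categories(weights, voxel_counts, k=2):
    """Prefer rare, difficult categories when choosing which categories
    to blend between volumes for pseudo-label generation."""
    score = weights / (voxel_counts / voxel_counts.sum() + 1e-6)
    return np.argsort(score)[-k:]  # indices of the k hardest/rarest categories
```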
A new method using the muon lateral distribution measured by an underground muon detector to achieve high discrimination power against hadrons is presented. The method is designed to be applied in the Andes Large-area PArticle detector for Cosmic-ray physics and Astronomy (ALPACA) experiment in Bolivia. This new observatory in the Southern Hemisphere has the goal of detecting >100 TeV gamma rays in search of the origins of Galactic cosmic rays. The method uses a weighted sum of the lateral distribution of the muons detected by the underground detectors to separate air showers initiated by cosmic rays from those initiated by gamma rays. We evaluate the performance of the method through Monte Carlo simulations with CORSIKA and Geant4 and apply the analysis to the prototype of ALPACA, ALPAQUITA. Applying this method to ALPAQUITA, we achieve an improvement in sensitivity of about 18% in the energy range from 60 to 100 TeV over the estimated sensitivity using only the total number of muons.
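The discriminant itself reduces to a simple weighted sum over detected muons; the toy sketch below illustrates the idea, with the weight function w(r) and the cut value as placeholder assumptions rather than the tuned ALPAQUITA values.

```python
# Toy sketch of a weighted-sum discriminant over the muon lateral
# distribution. The weight function and the cut are placeholders,
# not the values tuned for ALPAQUITA.
import numpy as np

def weighted_muon_sum(r, w):
    """r: lateral distances (m) of muons detected underground;
    w: weight as a function of lateral distance."""
    return np.sum(w(np.asarray(r)))

# Example: weight muons more heavily far from the shower core, where
# hadron-induced showers tend to deposit relatively more muons.
w = lambda r: np.log1p(r)
shower_score = weighted_muon_sum([12.0, 45.0, 130.0], w)
is_gamma_like = shower_score < 10.0  # placeholder cut value
```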
For a well-studied family of domination-type problems in bounded-treewidth graphs, we investigate whether it is possible to find faster algorithms. For sets σ, ρ of non-negative integers, a (σ, ρ)-set of a graph G is a set S of vertices such that |N(u) ∩ S| ∈ σ for every u ∈ S, and |N(v) ∩ S| ∈ ρ for every v ∈ V(G) \ S. The problem of finding a (σ, ρ)-set (of a certain size) unifies common problems like Independent Set, Dominating Set, Independent Dominating Set, and many others.
In an accompanying paper, it is proven that, for all pairs of finite or cofinite sets (σ, ρ), there is an algorithm that counts (σ, ρ)-sets in time c_{σ,ρ}^tw · n^{O(1)} (if a tree decomposition of width tw is given in the input). Here, c_{σ,ρ} is a constant with an intricate dependency on σ and ρ. Despite this intricacy, we show that the algorithms in the accompanying paper are most likely optimal, i.e., for any pair (σ, ρ) of finite or cofinite sets where the problem is non-trivial, and any ε > 0, a (c_{σ,ρ} − ε)^tw · n^{O(1)}-time algorithm counting the number of (σ, ρ)-sets would violate the Counting Strong Exponential-Time Hypothesis (#SETH). For finite sets σ and ρ, our lower bounds also extend to the decision version, showing that those algorithms are optimal in this setting as well.
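To make the (σ, ρ)-set definition concrete, the following brute-force checker (ours, purely illustrative; the paper's algorithms are parameterized by treewidth, not brute force) verifies candidate sets and recovers the classic specializations mentioned above.

```python
# Brute-force checker illustrating the (σ, ρ)-set definition; fine for
# tiny graphs, unlike the treewidth-based algorithms the abstract
# refers to.
from itertools import combinations

def is_sigma_rho_set(adj, S, sigma, rho):
    """adj: dict vertex -> set of neighbours; S: candidate vertex set;
    sigma, rho: membership predicates on |N(v) ∩ S|."""
    S = set(S)
    for v, nbrs in adj.items():
        deg_in_S = len(nbrs & S)
        if v in S and not sigma(deg_in_S):
            return False
        if v not in S and not rho(deg_in_S):
            return False
    return True

# Classic specializations:
independent_set = (lambda d: d == 0, lambda d: True)
dominating_set  = (lambda d: True,   lambda d: d >= 1)
perfect_code    = (lambda d: d == 0, lambda d: d == 1)

# Count dominating sets of a triangle by brute force.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
count = sum(is_sigma_rho_set(adj, S, *dominating_set)
            for k in range(len(adj) + 1)
            for S in combinations(adj, k))
print(count)  # 7: every non-empty subset dominates a triangle
```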
As quantum computers advance, so does the complexity of the software they can execute. To ensure this software is efficient, maintainable, reusable, and cost-effective (key qualities of any industry-grade software), mature software engineering practices must be applied throughout its design, development, and operation. However, the significant differences between classical and quantum software make it challenging to directly apply classical software engineering methods to quantum systems. This challenge has led to the emergence of Quantum Software Engineering as a distinct field within the broader software engineering landscape. In this work, a group of active researchers analyse in depth the current state of quantum software engineering research. From this analysis, the key areas of quantum software engineering are identified and explored in order to determine the most relevant open challenges to be addressed in the coming years. These challenges help identify necessary breakthroughs and future research directions for advancing Quantum Software Engineering.
Connecting multiple processors via quantum interconnect technologies could help overcome scalability issues in single-processor quantum computers. Transmission via these interconnects can be performed more efficiently using quantum multiplexing, where information is encoded in high-dimensional photonic degrees of freedom. We explore the effects of multiplexing on logical error rates in surface codes and hypergraph product codes. We show that, although multiplexing makes loss errors more damaging, assigning qubits to photons in an intelligent manner can minimize these effects, and the ability to encode higher-distance codes in a smaller number of photons can result in overall lower logical error rates. This multiplexing technique can also be adapted to quantum communication and multimode quantum memory with high-dimensional qudit systems.
Electromagnetic-wave (EMW) sensing at microwave (MW) frequencies can penetrate deep into various non-metallic materials that are indispensable for consumer products, and thus potentially facilitates non-destructive inspection. However, conventional MW sensor designs generally face difficulties in miniaturization because of the long wavelengths involved and the resulting diffraction limit. Although EMW sensors essentially require pixel miniaturization for imaging, typical implementations with external antennas that concentrate MW irradiation into areas smaller than the diffraction limit severely complicate overall fabrication and operation. Herein, this work demonstrates that carbon nanotube (CNT) film photo-thermoelectric (PTE) sensors can, by themselves, handle MW irradiation in compact configurations beyond the diffraction limit, while maintaining their inherent operation in the shorter-wavelength millimeter-wave to infrared bands. The CNT film PTE sensors enhance MW detection responses for particular channel dimensions (shorter length and narrower width), demonstrating a signal-to-noise ratio of 1497 with a 1-mm-square planar structure under 5 GHz irradiation (one-sixtieth of the wavelength). This work experimentally clarifies that the electrically conductive wiring of the CNT film PTE sensor (inherently included in the pristine device structure as the response-signal readout electrodes) plays a key antenna-like role in this advantageous behavior. The presented devices then demonstrate composition-identifying non-destructive testing of complex targets via multi-wavelength imaging across ultrabroad MW to near-infrared bands, with complementary characteristics in the respective irradiation regions.
We present a general form of temporal effects for recursive types. Temporal effects have been adopted by effect systems to verify both linear-time temporal safety and liveness properties of higher-order programs with recursive functions. A challenge in generalizing to recursive types is that recursive types can easily cause unstructured loops, which obscure the regularity of the infinite behavior of computation and make it harder to statically verify liveness properties. To solve this problem, we introduce temporal effects with a later modality, which enable us to capture the behavior of non-terminating programs by stratifying the obscure loops caused by recursive types. While temporal effects in prior work are based on specific concrete forms, such as logical formulas and automata-based lattices, our temporal effects, which we call algebraic temporal effects, are more abstract: they axiomatize temporal effects in an algebraic manner and clarify the requirements a temporal effect must satisfy to reason about programs soundly. We formulate algebraic temporal effects, formalize an effect system built on top of them, and prove two kinds of soundness of the effect system: safety and liveness soundness. We also introduce two instances of algebraic temporal effects: temporal regular effects, which are based on ω-regular expressions, and temporal fixpoint effects, which are based on a first-order fixpoint logic. Their usefulness is demonstrated via examples including concurrent and object-oriented programs.
From the perspective of telecommunications, next-generation (beyond-5G) networks will inevitably face the challenge of a growing number of users and devices. Such growth results in high traffic generation with limited network resources. Thus, analysis of the traffic and precise forecasting of user demand are essential for developing an intelligent network. Along this line, Machine Learning (ML) and especially Deep Learning (DL) models can benefit greatly from the huge amount of network data. They can act in the background to analyze and predict traffic conditions more accurately than ever, and help to optimize the design and management of network services. Recently, a significant amount of research effort has been devoted to this area, greatly advancing network traffic prediction (NTP) abilities. In this paper, we bring together NTP and DL-based models and present recent advances in DL for NTP. We provide a detailed explanation of popular approaches and categorize the literature based on these approaches. Moreover, as a technical study, we conduct several data analyses and experiments with DL-based models for traffic prediction. Finally, we discuss the challenges and future directions.
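As a flavor of the DL-based predictors surveyed here, the sketch below trains a sliding-window LSTM forecaster on a synthetic univariate series; the window size, layer sizes, and data are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal sliding-window LSTM traffic forecaster. Window size, layer
# sizes, and the synthetic series are illustrative assumptions.
import torch
import torch.nn as nn

class TrafficLSTM(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # predict the next traffic value

# Turn a univariate traffic series into (window -> next value) pairs.
series = torch.sin(torch.linspace(0, 50, 1000))  # stand-in for real traffic
window = 24
X = torch.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:].unsqueeze(1)

model = TrafficLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(5):  # a few epochs, just to show the training loop
    opt.zero_grad()
    loss = loss_fn(model(X.unsqueeze(-1)), y)
    loss.backward()
    opt.step()
```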
Previous incomplete multi-modal brain tumor segmentation methods, while effective at integrating diverse modalities, often deliver lower-than-expected performance gains. The reason is that a newly added modality may cause confused predictions at positions with uncertain or inconsistent patterns and quality, where direct fusion consequently yields a negative gain for the final decision. In this paper, considering the potentially negative impacts within a modality, we propose a multi-modal Positive-Negative impact region Double Calibration pipeline, called PNDC, to mitigate misinformation transfer during modality fusion. Concretely, PNDC involves two elaborate procedures, Reverse Audit and Forward Checksum. The former identifies the negative-impact regions of each modality. The latter calibrates whether the fusion prediction is reliable in these regions by integrating the positive-impact regions of each modality. Finally, the negative-impact regions of each modality and the mismatched, unreliable fusion predictions are used to enhance the learning of the individual modalities and the fusion process. Notably, PNDC adopts the standard training strategy without specific architectural choices and introduces no learnable parameters, so it can easily be plugged into existing network training for incomplete multi-modal brain tumor segmentation. Extensive experiments confirm that PNDC greatly alleviates the performance degradation of current state-of-the-art incomplete multi-modal medical methods, which arises from overlooking the positive/negative-impact regions of each modality. The code is released at PNDC.
This paper introduces SpoofCeleb, a dataset designed for Speech Deepfake Detection (SDD) and Spoofing-robust Automatic Speaker Verification (SASV), utilizing source data from real-world conditions and spoofing attacks generated by Text-To-Speech (TTS) systems also trained on the same real-world data. Training robust recognition systems requires speech data recorded in varied acoustic environments with different levels of noise. However, current datasets typically include clean, high-quality recordings (bona fide data) because of the requirements of TTS training: studio-quality or well-recorded read speech is typically necessary to train TTS models. Current SDD datasets also have limited usefulness for training SASV models due to insufficient speaker diversity. SpoofCeleb leverages a fully automated pipeline we developed that processes the VoxCeleb1 dataset, transforming it into a form suitable for TTS training. We subsequently train 23 contemporary TTS systems. SpoofCeleb comprises over 2.5 million utterances from 1,251 unique speakers, collected under natural, real-world conditions. The dataset includes carefully partitioned training, validation, and evaluation sets with well-controlled experimental protocols. We present baseline results for both SDD and SASV tasks. All data, protocols, and baselines are publicly available at https://jungjee.github.io/spoofceleb.
The problem of subliminal communication has been addressed in various forms of steganography, primarily relying on visual, auditory and linguistic media. However, the field faces a fundamental paradox: as the art of concealment advances, so too does the science of revelation, leading to an ongoing evolutionary interplay. This study seeks to extend the boundaries of what is considered a viable steganographic medium. We explore a steganographic paradigm, where hidden information is communicated through the episodes of multiple agents interacting with an environment. Each agent, acting as an encoder, learns a policy to disguise the very existence of hidden messages within actions seemingly directed toward innocent objectives. Meanwhile, an observer, serving as a decoder, learns to associate behavioural patterns with their respective agents despite their dynamic nature, thereby unveiling the hidden messages. The interactions of agents are governed by the framework of multi-agent reinforcement learning and shaped by feedback from the observer. This framework encapsulates a game-theoretic dilemma, wherein agents face decisions between cooperating to create distinguishable behavioural patterns or defecting to pursue individually optimal yet potentially overlapping episodic actions. As a proof of concept, we exemplify action steganography through the game of labyrinth, a navigation task where subliminal communication is concealed within the act of steering toward a destination. The stego-system has been systematically validated through experimental evaluations, assessing its distortion and capacity alongside its secrecy and robustness when subjected to simulated passive and active adversaries.
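To make the idea tangible, here is a hand-coded toy (ours, not the paper's learned multi-agent system) in which a grid navigator encodes one hidden bit whenever two moves toward the goal are equally optimal, so the hidden message hides inside apparently goal-directed behavior.

```python
# Toy illustration of action steganography in a grid "labyrinth":
# when moving right and moving down are both optimal, the choice
# between them carries one hidden bit. A minimal hand-coded analogue
# of the paper's learned encoder/decoder, not its actual method.

def encode(bits, start=(0, 0), goal=(3, 3)):
    """Walk from start to goal, spending one bit whenever both
    'R' (right) and 'D' (down) are optimal moves."""
    x, y, path, bits = *start, [], list(bits)
    while (x, y) != goal:
        if x < goal[0] and y < goal[1]:          # both moves optimal
            step = "R" if bits and bits.pop(0) else "D"
        else:                                    # forced move, no bit
            step = "R" if x < goal[0] else "D"
        x, y = (x + 1, y) if step == "R" else (x, y + 1)
        path.append(step)
    return path

def decode(path, start=(0, 0), goal=(3, 3)):
    x, y, bits = *start, []
    for step in path:
        if x < goal[0] and y < goal[1]:          # a free choice: 1 bit
            bits.append(1 if step == "R" else 0)
        x, y = (x + 1, y) if step == "R" else (x, y + 1)
    return bits

msg = [1, 0, 1]
# Trailing decoded bits are padding from forced-default choices.
assert decode(encode(msg))[:len(msg)] == msg
```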