Recent publications
The digital age, driven by the AI revolution, brings significant opportunities but also conceals security threats, which we refer to as cyber shadows. These threats pose risks at individual, organizational, and societal levels. This paper examines the systemic impact of these cyber threats and proposes a comprehensive cybersecurity strategy that integrates AI-driven solutions, such as Intrusion Detection Systems (IDS), with targeted policy interventions. By combining technological and regulatory measures, we create a multilevel defense capable of addressing both direct threats and indirect negative externalities. We emphasize that the synergy between AI-driven solutions and policy interventions is essential for neutralizing cyber threats and mitigating their negative impact on the digital economy. Finally, we underscore the need for continuous adaptation of these strategies, especially in response to the rapid advancement of autonomous AI-driven attacks, to ensure the creation of secure and resilient digital ecosystems.
When antispasmodics are unavailable, periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER; called BLADE by Siemens Healthineers) or half-Fourier single-shot turbo spin echo (HASTE) is used clinically in gynecologic MRI. However, their image quality is limited compared with that of turbo spin echo (TSE) with antispasmodics. Even with antispasmodics, TSE can be affected by artifacts, necessitating a rapid backup sequence.
This study aimed to investigate the utility of HASTE with deep learning reconstruction and variable flip angle evolution (iHASTE) compared to conventional sequences with and without antispasmodics.
This retrospective study included MRI scans without antispasmodics for 79 patients who underwent iHASTE, HASTE, and BLADE, and MRI scans with antispasmodics for 79 case–control matched patients who underwent TSE. Three radiologists qualitatively evaluated image quality, robustness to artifacts, tissue contrast, and uterine lesion margins. Tissue contrast was also quantitatively evaluated.
Quantitative evaluations revealed that iHASTE exhibited significantly superior tissue contrast to HASTE and BLADE. Qualitative evaluations indicated that iHASTE outperformed HASTE in overall quality. Two of three radiologists judged iHASTE to be significantly superior to BLADE, while two of three judged TSE to be significantly superior to iHASTE. iHASTE demonstrated greater robustness to artifacts than both BLADE and TSE. Lesion margins received lower scores with iHASTE than with BLADE and TSE.
iHASTE is a viable clinical option for patients undergoing gynecologic MRI without antispasmodics. iHASTE may also be considered a useful add-on sequence in patients undergoing MRI with antispasmodics.
Traditional flexible job shop scheduling ignores transportation or considers production and transportation separately. With factory digitalization and the widespread use of automated guided vehicles (AGVs), the isolated scheduling of production and transportation is no longer sufficient to meet increased productivity demands. Thus, integrated scheduling has become an inevitable option. Previous research has not sufficiently explored the domain knowledge of the flexible job shop scheduling problem with finite transportation resources (FJSP-T), and thus the optimal solution cannot be found in an acceptable time using meta-heuristic algorithms. This paper explores FJSP-T to enhance the efficiency of the entire production system, with the objective of minimizing the makespan. The transportation situations of FJSP-T are analyzed, and the continuous transportation of AGVs is identified as the key to solving the problem. Further, an active decoding method and an initialization method that consider continuous transportation are designed. A hybrid algorithm (HA) is proposed, which incorporates local search into the genetic algorithm, and various neighborhood structures for the local search are designed. Finally, the superiority of the active decoding method is demonstrated experimentally, and the algorithm's performance is tested on two sets of well-known benchmark instances (67 instances in total). Compared with other state-of-the-art algorithms, the proposed method obtains new best solutions for 6 instances, and none of its solutions is worse than previously reported results, while the computational time is only a few seconds. As a result, the proposed HA significantly improves the solving of FJSP-T in terms of both solution accuracy and computational time.
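As an illustration of the hybrid structure described above, the following Python sketch embeds a first-improvement local search into a genetic algorithm over job permutations. It is a toy example under stated assumptions: the objective function is a stand-in, not the paper's active decoding of FJSP-T with AGV transportation, and all names are illustrative.

```python
import random

# Toy stand-in objective: "makespan" of a job permutation with
# processing times plus a crude position-dependent penalty. The
# paper's active decoding (machines + AGV trips) is far richer;
# this only illustrates the hybrid GA + local search loop.
PROC = [5, 3, 8, 6, 2, 7, 4, 9]

def makespan(perm):
    t = 0
    for pos, job in enumerate(perm):
        t += PROC[job] + abs(job - pos)  # processing + toy "transport" penalty
    return t

def order_crossover(p1, p2):
    # Classic OX: copy a slice from parent 1, fill the rest in parent-2 order.
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [j for j in p2 if j not in child]
    for i in range(len(child)):
        if child[i] is None:
            child[i] = rest.pop(0)
    return child

def local_search(perm):
    # First-improvement swap neighborhood (one of several neighborhood
    # structures one could plug in here).
    best = makespan(perm)
    for i in range(len(perm) - 1):
        for j in range(i + 1, len(perm)):
            perm[i], perm[j] = perm[j], perm[i]
            m = makespan(perm)
            if m < best:
                best = m
            else:
                perm[i], perm[j] = perm[j], perm[i]  # revert non-improving swap
    return perm

def hybrid_ga(pop_size=20, generations=50):
    pop = [random.sample(range(len(PROC)), len(PROC)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)
        elite = pop[: pop_size // 2]
        children = [order_crossover(random.choice(elite), random.choice(elite))
                    for _ in range(pop_size - len(elite))]
        # Hybridization: refine offspring with local search.
        pop = elite + [local_search(c) for c in children]
    return min(pop, key=makespan)

best = hybrid_ga()
print(best, makespan(best))
```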
Purpose
To prospectively accelerate whole‐brain CEST acquisition by joint k‐space and image‐space parallel imaging (KIPI) with a proposed golden‐angle view ordering technique (GAVOT) in Cartesian coordinates.
Theory and Methods
In prospective acquisitions using sequences with long echo trains, the T2-decay effect varies across frames when variable acceleration factors (AFs) are used. The GAVOT method uses a subset strategy to eliminate this T2-decay inconsistency: all frames use a subset of shots from the calibration frame to form their k-space view ordering. The golden-angle rule is adapted to ensure uniform k-space coverage for arbitrary AFs in Cartesian coordinates. Phantom and in vivo studies were conducted on a 3 T scanner.
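As a rough illustration of the subset idea, the sketch below generates a golden-ratio-based Cartesian view ordering and derives accelerated frames as prefixes of the calibration ordering. This is our reading of the abstract, not the authors' implementation, and it simplifies by treating each shot as acquiring a single phase-encode line.

```python
import numpy as np

def golden_ordering(n_pe):
    # Order phase-encode lines by stepping through [0, 1) in increments
    # of the golden ratio conjugate, so that any prefix (subset) of the
    # ordering covers k-space approximately uniformly.
    g = (np.sqrt(5) - 1) / 2          # ~0.618
    pos = (np.arange(n_pe) * g) % 1.0
    lines = np.floor(pos * n_pe).astype(int)
    # Resolve collisions so every line appears exactly once.
    order, seen = [], set()
    for k in lines:
        while k in seen:
            k = (k + 1) % n_pe
        seen.add(k)
        order.append(k)
    return np.array(order)

n_pe = 96                       # phase-encode lines in the calibration frame
calib = golden_ordering(n_pe)   # full view ordering (AF = 1)

# Subset strategy: an accelerated frame with acceleration factor AF keeps
# only the first n_pe // AF entries of the calibration ordering, so the
# echo-train timing (and hence T2 decay) stays consistent across frames.
for af in (2, 3, 4):
    frame = calib[: n_pe // af]
    print(f"AF={af}: {len(frame)} lines, e.g. {frame[:8]}")
```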
Results
The GAVOT view ordering yielded a higher g‐factor than conventional uniformly centric ordering, whereas the noise propagation in amide proton transfer (APT) weighted images was similar between the different view orderings. Compared to centric ordering, GAVOT successfully eliminated the T2‐decay inconsistency across all frames, resulting in fewer image artifacts for both KIPI and conventional parallel imaging methods. The synergy of GAVOT and KIPI mitigated strong aliasing artifacts and achieved high‐quality reconstruction of prospective variable‐AF datasets. GAVOT‐KIPI reduced the scan time to 2.1 min for whole‐brain APT weighted imaging and 4.7 min for quantitative APT signal (APT#) mapping.
Conclusion
GAVOT makes the prospective variable AF strategy flexible and practical, and, in conjunction with KIPI, ensures high‐quality reconstruction from highly undersampled data, facilitating the clinical translation of whole‐brain CEST imaging.
Regression estimates functional dependencies between features. Linear regression models can be efficiently computed from covariances but are restricted to linear dependencies. Substitution makes it possible to identify specific types of nonlinear dependencies by linear regression. Robust regression finds models that are robust against outliers. A popular class of nonlinear regression methods is universal approximators. We present two well-known examples of universal approximators from the field of artificial neural networks: the multilayer perceptron and radial basis function networks. Universal approximators can realize arbitrarily small training errors, but cross-validation is required to find models with low validation errors that generalize well to other data sets. Feature selection allows us to include only relevant features in regression models, leading to more accurate models.
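A minimal sketch of computing a linear regression model from covariances, together with the substitution trick for an exponential dependency; it assumes the standard formulas b = cov(x, y)/var(x) and a = mean(y) - b mean(x), and the data are invented:

```python
import numpy as np

# Fit y ~ a + b*x from (co)variances alone.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

b = np.cov(x, y, bias=True)[0, 1] / np.var(x)
a = y.mean() - b * x.mean()
print(a, b)   # close to 0 and 2 for this nearly linear data

# Substitution: a nonlinear dependency y = c * exp(d*x) becomes linear
# in log-space, log(y) = log(c) + d*x, and can be fitted the same way.
d = np.cov(x, np.log(y), bias=True)[0, 1] / np.var(x)
print(d)
```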
The popular Iris benchmark set is used to introduce the basic concepts of data analysis. Data scales (nominal, ordinal, interval, ratio) must be considered because certain mathematical operations are only appropriate for specific scales. Numerical data can be represented by sets, vectors, or matrices. Data analysis is often based on dissimilarity measures (such as matrix norms or Lebesgue/Minkowski norms) or on similarity measures (such as cosine, overlap, Dice, Jaccard, and Tanimoto). Sequences can be analyzed using sequence relations (such as the Hamming or edit distance). Data can be extracted from continuous signals by sampling and quantization. The Nyquist condition specifies when sampling is possible without loss of information.
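For illustration, a small sketch of one Minkowski dissimilarity and one set-based similarity (Jaccard); the example data are invented:

```python
import numpy as np

def minkowski(x, y, p):
    # Lebesgue/Minkowski norm of the difference vector; p=1 is Manhattan,
    # p=2 is Euclidean, and large p approaches the maximum norm.
    return np.sum(np.abs(x - y) ** p) ** (1.0 / p)

def jaccard(a, b):
    # Set-based similarity: |intersection| / |union|.
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

x, y = np.array([1.0, 2.0, 3.0]), np.array([2.0, 0.0, 3.0])
print(minkowski(x, y, 1), minkowski(x, y, 2))      # 3.0, sqrt(5)
print(jaccard({"a", "b", "c"}, {"b", "c", "d"}))   # 0.5
```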
For forecasting future values of a time series, we imagine that the time series is generated by a (possibly noisy) deterministic process such as a Mealy or a Moore machine. This leads to recurrent or auto-regressive models. Building forecasting models is essentially a regression task. The training data sets for forecasting models are generated by finite unfolding in time. Examples of linear forecasting models are auto-regressive (AR) models and generalized AR models with moving average (ARMA), with integrated terms (ARIMA), or with exogenous inputs (ARMAX). Examples of nonlinear forecasting models are recurrent neural networks. ChatGPT and other generative AI models based on transformer architectures allow forecasting of text data.
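A minimal sketch of finite unfolding in time followed by a least-squares AR fit, on invented toy data:

```python
import numpy as np

def unfold(series, order):
    # Finite unfolding in time: each row holds `order` past values,
    # the regression target is the next value.
    X = np.array([series[i : i + order] for i in range(len(series) - order)])
    y = series[order:]
    return X, y

t = np.arange(200)
series = np.sin(0.2 * t) + 0.05 * np.random.randn(200)

X, y = unfold(series, order=4)
# Fit AR coefficients (plus a bias term) by least squares.
coeffs, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)

# One-step forecast from the last `order` observed values.
forecast = np.r_[series[-4:], 1.0] @ coeffs
print(forecast)
```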
Classification is supervised learning that uses labeled data to assign objects to classes. We distinguish false positive and false negative errors and review numerous indicators to quantify classifier performance. Pairs of indicators are also often considered to assess classification performance; we illustrate this with the receiver operating characteristic and the precision-recall diagram. Several classifiers with specific capabilities and limitations are presented in detail: the naive Bayes classifier, linear discriminant analysis, the support vector machine (SVM) using the kernel trick, nearest neighbor classifiers, learning vector quantization, and hierarchical classification using decision trees.
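A small sketch of common indicators computed from the counts of true/false positives and negatives; sweeping a decision threshold and plotting pairs of such indicators yields ROC and precision-recall curves:

```python
# Indicators derived from the confusion matrix of a binary classifier.
def indicators(tp, fp, fn, tn):
    tpr = tp / (tp + fn)              # recall / sensitivity, y-axis of ROC
    fpr = fp / (fp + tn)              # x-axis of the ROC curve
    precision = tp / (tp + fp)
    f1 = 2 * precision * tpr / (precision + tpr)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"TPR": tpr, "FPR": fpr, "precision": precision,
            "F1": f1, "accuracy": accuracy}

print(indicators(tp=40, fp=10, fn=5, tn=45))
```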
In real-world applications, data usually contain errors and noise, need to be scaled and transformed, or need to be collected from different and possibly heterogeneous information sources. We distinguish deterministic and stochastic errors. Deterministic errors can sometimes be easily corrected. Inliers and outliers may be identified and removed or corrected. Inliers, outliers, and noise can be reduced by filtering. We distinguish several filtering methods with different effectiveness and computational complexities: moving statistical measures, discrete linear filters, and finite and infinite impulse response filters. Data features with different ranges often need to be standardized or transformed.
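A minimal sketch of one FIR filter (a moving average) and a z-score standardization, on invented data:

```python
import numpy as np

def moving_average(x, width):
    # A simple FIR filter: each output is the mean of a sliding window.
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="valid")

def z_standardize(x):
    # Transform a feature to zero mean and unit standard deviation so
    # that features with different ranges become comparable.
    return (x - x.mean()) / x.std()

x = np.array([1.0, 1.2, 9.0, 1.1, 0.9, 1.0])   # 9.0 acts as an outlier
print(moving_average(x, 3))
print(z_standardize(x))
```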
Clustering is unsupervised learning that assigns labels to objects in unlabeled data. When clustering is performed on data that possess class labels, the clusters may or may not correspond with these classes. Cluster partitions may be mathematically represented by sets, partition matrices, and/or cluster prototypes. Sequential clustering (single linkage, complete linkage, average linkage, Ward’s method, etc.) yields hierarchical cluster structures but is computationally expensive. Partitional clustering can be based on hard, fuzzy, possibilistic, or noise clustering models. Cluster prototypes can have different shapes such as hyperspheres, ellipsoids, lines, circles, or more complex shapes. Relational clustering finds clusters in relational data, often enhanced by kernelization. Cluster tendency assessment finds out if the data possess a cluster structure at all, and cluster validity measures help identify the number of clusters or other algorithmic parameters. Clustering can also be done by heuristic methods such as self-organizing maps.
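A minimal sketch of hard partitional clustering with hyperspherical prototypes (the familiar k-means alternating optimization); the data are invented:

```python
import numpy as np

def kmeans(X, c, steps=100, seed=0):
    # Alternate between assigning points to the nearest prototype and
    # moving each prototype to the mean of its assigned points.
    rng = np.random.default_rng(seed)
    prototypes = X[rng.choice(len(X), c, replace=False)]
    for _ in range(steps):
        d = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == k].mean(axis=0) if (labels == k).any()
                        else prototypes[k] for k in range(c)])
        if np.allclose(new, prototypes):
            break
        prototypes = new
    return labels, prototypes

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
labels, prototypes = kmeans(X, c=2)
print(prototypes)
```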
Data can often be very effectively analyzed using visualization techniques. Standard visualization methods for object data are plots and scatter plots. To visualize high-dimensional data, projection methods are necessary. We present linear projection (principal component analysis, Karhunen-Loève transform, singular value decomposition, eigenvector projection, Hotelling transform, proper orthogonal decomposition, multidimensional scaling) and nonlinear projection methods (Sammon mapping, auto-encoder). Data distributions can be estimated and visualized using histogram techniques. Periodic data (such as time series) can be analyzed and visualized using spectral analysis (cosine and sine transforms, amplitude and phase spectra).
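A minimal sketch of linear projection by principal component analysis, projecting centered data onto the leading eigenvectors of the covariance matrix:

```python
import numpy as np

def pca_project(X, dims):
    # Center the data, then keep the eigenvectors of the covariance
    # matrix with the largest eigenvalues.
    Xc = X - X.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(Xc, rowvar=False))
    top = eigvec[:, np.argsort(eigval)[::-1][:dims]]
    return Xc @ top

X = np.random.randn(100, 5) @ np.random.randn(5, 5)   # correlated 5-D data
Y = pca_project(X, dims=2)                            # 2-D view for a scatter plot
print(Y.shape)
```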
Correlation quantifies the relation between features. Linear correlation methods are robust and computationally efficient but detect only linear dependencies. Nonlinear correlation methods are able to detect nonlinear dependencies but need to be carefully parametrized. As an example of nonlinear correlation, we present the chi-square test for independence, which can be applied to continuous features using histogram counts. Nonlinear correlation can also be quantified by the cross-validation error of regression models. Correlation does not imply causality. Spurious correlations may lead to wrong conclusions. If the underlying features are known, then spurious correlations may be compensated for by partial correlation methods.
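A minimal sketch of the chi-square test statistic for independence, computed from 2-D histogram counts of two continuous features; the data are invented:

```python
import numpy as np

def chi_square(x, y, bins=5):
    # Bin both features jointly, then compare observed counts against
    # the counts expected under independence (outer product of margins).
    observed, _, _ = np.histogram2d(x, y, bins=bins)
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / observed.sum()
    mask = expected > 0
    return ((observed - expected) ** 2 / expected)[mask].sum()

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
print(chi_square(x, 2 * x + rng.normal(size=1000)))  # large: dependent
print(chi_square(x, rng.normal(size=1000)))          # small: ~independent
```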
Despite the benefits of minimally invasive surgery, interventions such as laparoscopic liver surgery present unique challenges, like the significant anatomical differences between preoperative images and intraoperative scenes due to pneumoperitoneum, patient pose, and organ manipulation by surgical instruments. To address these challenges, we propose a method for intraoperative 3D reconstruction of the surgical scene, including vessels and tumors, without altering the surgical workflow. The technique combines Neural Radiance Field (NeRF) reconstructions from tracked laparoscopic videos with ultrasound 3D compounding. We evaluate the accuracy of our reconstructions on a clinical laparoscopic liver ablation dataset, consisting of laparoscope and patient reference poses from optical tracking, laparoscopic and ultrasound videos, as well as preoperative and intraoperative CTs. We propose a solution to compensate for liver deformations due to pressure applied during ultrasound acquisitions, improving the overall accuracy of the 3D reconstructions compared to the ground truth intraoperative CT with pneumoperitoneum. We train a unified NeRF from the ultrasound and laparoscope data, which allows real-time view synthesis providing surgeons with comprehensive intraoperative visual information for laparoscopic liver surgery.