Silin Chen’s research while affiliated with Beijing Jiaotong University and other places


Publications (23)


Ethics and Social Implications of Large Models
(Preview figure 4.1: Explainability Techniques in Large Language Models)
  • Preprint
  • File available

April 2025 · 12 Reads · Silin Chen · Tianyang Wang · [...] · Yan Zhong

Large Language Models (LLMs) have become a cornerstone of modern artificial intelligence (AI), finding applications across various domains such as healthcare, finance, entertainment, and customer service. To understand their ethical and social implications, it is essential to first grasp what these models are, how they function, and why they carry significant impact. This introduction aims to provide a comprehensive and beginner-friendly overview of LLMs, introducing their basic structure, training process, and the types of tasks they are commonly employed for. We will also include simple analogies and examples to ease understanding.


Generative Adversarial Networks Bridging Art and Machine Intelligence

February 2025 · 43 Reads

Generative Adversarial Networks (GANs) have greatly influenced the development of computer vision and artificial intelligence over the past decade, connecting art and machine intelligence. This book begins with a detailed introduction to the fundamental principles and historical development of GANs, contrasting them with traditional generative models and elucidating the core adversarial mechanisms through illustrative Python examples. The text systematically addresses the mathematical and theoretical underpinnings, including probability theory, statistics, and game theory, providing a solid framework for understanding the objectives, loss functions, and optimisation challenges inherent to GAN training. Subsequent chapters review classic variants such as Conditional GANs, DCGANs, InfoGAN, and LAPGAN before progressing to advanced training methodologies such as Wasserstein GANs, GANs with gradient penalty, least squares GANs, and spectral normalisation techniques. The book further examines architectural enhancements and task-specific adaptations in generators and discriminators, showcasing practical implementations in high-resolution image generation, artistic style transfer, video synthesis, text-to-image generation, and other multimedia applications. The concluding sections offer insights into emerging research trends, including self-attention mechanisms, transformer-based generative models, and a comparative analysis with diffusion models, charting promising directions for future developments in both academic and applied settings.
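As a rough illustration of the adversarial mechanism described above, the sketch below shows one GAN training step in PyTorch. The network sizes, data, and hyperparameters are illustrative assumptions, not code from the book.

    # Minimal GAN training step (illustrative sketch).
    # Generator G maps noise to fake samples; discriminator D scores real vs. fake.
    import torch
    import torch.nn as nn

    latent_dim, data_dim = 16, 2  # illustrative sizes

    G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
    D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    real = torch.randn(64, data_dim)  # stand-in for a batch of real data

    # Discriminator step: push D(real) toward 1 and D(G(z)) toward 0.
    z = torch.randn(64, latent_dim)
    fake = G(z).detach()  # detach so this step does not update G
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: push D(G(z)) toward 1 (the non-saturating objective).
    z = torch.randn(64, latent_dim)
    loss_g = bce(D(G(z)), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

Alternating these two steps is the two-player game the book formalises with game theory; Wasserstein and least squares GANs keep the same loop but change the loss.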


From In Silico to In Vitro: A Comprehensive Guide to Validating Bioinformatics Findings

January 2025 · 20 Reads

The integration of bioinformatics predictions and experimental validation plays a pivotal role in advancing biological research, from understanding molecular mechanisms to developing therapeutic strategies. Bioinformatics tools and methods offer powerful means for predicting gene functions, protein interactions, and regulatory networks, but these predictions must be validated through experimental approaches to ensure their biological relevance. This review explores the various methods and technologies used for experimental validation, including gene expression analysis, protein-protein interaction verification, and pathway validation. We also discuss the challenges involved in translating computational predictions to experimental settings and highlight the importance of collaboration between bioinformatics and experimental research. Finally, emerging technologies, such as CRISPR gene editing, next-generation sequencing, and artificial intelligence, are shaping the future of bioinformatics validation and driving more accurate and efficient biological discoveries.


From Aleatoric to Epistemic: Exploring Uncertainty Quantification Techniques in Artificial Intelligence

January 2025 · 30 Reads

Uncertainty quantification (UQ) is a critical aspect of artificial intelligence (AI) systems, particularly in high-risk domains such as healthcare, autonomous systems, and financial technology, where decision-making processes must account for uncertainty. This review explores the evolution of uncertainty quantification techniques in AI, distinguishing between aleatoric and epistemic uncertainties, and discusses the mathematical foundations and methods used to quantify these uncertainties. We provide an overview of advanced techniques, including probabilistic methods, ensemble learning, sampling-based approaches, and generative models, while also highlighting hybrid approaches that integrate domain-specific knowledge. Furthermore, we examine the diverse applications of UQ across various fields, emphasizing its impact on decision-making, predictive accuracy, and system robustness. The review also addresses key challenges such as scalability, efficiency, and integration with explainable AI, and outlines future directions for research in this rapidly developing area. Through this comprehensive survey, we aim to provide a deeper understanding of UQ's role in enhancing the reliability, safety, and trustworthiness of AI systems.
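As a concrete instance of the ensemble-based techniques surveyed here, the sketch below trains a small deep ensemble and uses disagreement among independently initialised models as a proxy for epistemic uncertainty; the dataset, sizes, and ranges are invented for illustration.

    # Deep-ensemble sketch: member disagreement approximates epistemic uncertainty.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + rng.normal(0, 0.1, size=200)  # label noise is aleatoric

    # Train several models that differ only in their random initialisation.
    ensemble = [MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                             random_state=s).fit(X, y) for s in range(5)]

    X_test = np.linspace(-6, 6, 100).reshape(-1, 1)  # includes out-of-distribution inputs
    preds = np.stack([m.predict(X_test) for m in ensemble])

    mean = preds.mean(axis=0)
    epistemic = preds.std(axis=0)  # disagreement grows away from the training range
    outside = (X_test[:, 0] < -3) | (X_test[:, 0] > 3)
    print("max disagreement outside [-3, 3]:", epistemic[outside].max())

Aleatoric noise (the 0.1 label noise here) cannot be reduced by adding models, whereas the epistemic spread shrinks as training data covers more of the input space.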


Ethics and Social Implications of Large Language Models
(Preview figure 4.1: Explainability Techniques in Large Language Models)

January 2025 · 13 Reads

Large Language Models (LLMs) have become a cornerstone of modern artificial intelligence (AI), finding applications across various domains such as healthcare, finance, entertainment, and customer service. To understand their ethical and social implications, it is essential to first grasp what these models are, how they function, and why they carry significant impact. This introduction aims to provide a comprehensive and beginner-friendly overview of LLMs, introducing their basic structure, training process, and the types of tasks they are commonly employed for. We will also include simple analogies and examples to ease understanding.


Deep Learning and Machine Learning - Generative Models: Foundations, Techniques, and Applications

January 2025 · 4 Reads

In recent years, the field of artificial intelligence (AI) and machine learning (ML) has undergone a transformative shift, with generative models emerging as one of the most significant and impactful areas of research. Generative models, in essence, are models that can generate new data instances that resemble a given set of training data. Unlike discriminative models, which focus on classification tasks, generative models aim to understand and replicate the underlying structure of data, making them capable of generating images, text, audio, and even 3D objects. This introduction serves as a foundation for understanding the core principles behind the most important generative models: Autoencoders (AE), Variational Autoencoders (VAE), Masked Autoencoders (MAE), Generative Adversarial Networks (GANs), Diffusion Models, and models like GPT (Generative Pre-trained Transformers).
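To make the starting point concrete, the sketch below implements a plain autoencoder (AE), the simplest family in the list above, in PyTorch; the layer sizes and random stand-in data are assumptions for illustration only.

    # Autoencoder sketch: compress inputs to a low-dimensional code, then reconstruct.
    import torch
    import torch.nn as nn

    encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 8))
    decoder = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 784))
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

    x = torch.rand(32, 784)  # stand-in for a batch of flattened 28x28 images

    for _ in range(100):
        z = encoder(x)                    # 8-dimensional code
        x_hat = decoder(z)                # reconstruction from the code
        loss = ((x - x_hat) ** 2).mean()  # reconstruction objective
        opt.zero_grad(); loss.backward(); opt.step()

A VAE makes the code z probabilistic and adds a KL regulariser so the decoder can generate from samples; a GAN replaces the reconstruction loss with a learned discriminator.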


Deep Learning Model Security: Threats and Defenses

December 2024 · 22 Reads

Deep learning has transformed AI applications but faces critical security challenges, including adversarial attacks, data poisoning, model theft, and privacy leakage. This survey examines these vulnerabilities, detailing their mechanisms and impact on model integrity and confidentiality. Practical implementations, including adversarial examples, label flipping, and backdoor attacks, are explored alongside defenses such as adversarial training, differential privacy, and federated learning, highlighting their strengths and limitations. Advanced methods like contrastive and self-supervised learning are presented for enhancing robustness. The survey concludes with future directions, emphasizing automated defenses, zero-trust architectures, and the security challenges of large AI models. A balanced approach to performance and security is essential for developing reliable deep learning systems.
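For concreteness, the sketch below implements the fast gradient sign method (FGSM), one canonical way of crafting the adversarial examples discussed above; the toy model, input, and budget are stand-ins.

    # FGSM sketch: perturb the input one step in the direction that increases the loss.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randn(1, 10, requires_grad=True)  # clean input
    y = torch.tensor([0])                       # its true label

    loss_fn(model(x), y).backward()             # gradient of the loss w.r.t. the input

    epsilon = 0.1                               # perturbation budget
    x_adv = (x + epsilon * x.grad.sign()).detach()
    print(model(x).argmax().item(), model(x_adv).argmax().item())  # labels may now disagree

Adversarial training, one of the defenses the survey covers, folds such perturbed inputs back into the training set.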


From Bench to Bedside: A Review of Clinical Trials in Drug Discovery and Development

December 2024 · 24 Reads

Clinical trials are an indispensable part of the drug development process, bridging the gap between basic research and clinical application. During the development of new drugs, clinical trials are used not only to evaluate the safety and efficacy of the drug but also to explore its dosage, treatment regimens, and potential side effects. This review discusses the various stages of clinical trials, including Phase I (safety assessment), Phase II (preliminary efficacy evaluation), Phase III (large-scale validation), and Phase IV (post-marketing surveillance), highlighting the characteristics of each phase and their interrelationships. Additionally, the paper addresses the major challenges encountered in clinical trials, such as ethical issues, subject recruitment difficulties, and concerns about diversity and representativeness, and proposes strategies for overcoming them. With the advancement of technology, innovative technologies such as artificial intelligence, big data, and digitalization are gradually transforming clinical trial design and implementation, improving trial efficiency and data quality. The article also looks ahead to the future of clinical trials, particularly the impact of emerging therapies such as gene therapy and immunotherapy on trial design, as well as the importance of regulatory reforms and global collaboration. In conclusion, clinical trials will remain central to drug development, continuing to drive progress in innovative drug development and clinical treatment.


Deep Learning and Machine Learning, Advancing Big Data Analytics and Management: Generative Models

December 2024 · 21 Reads

In recent years, the field of artificial intelligence (AI) and machine learning (ML) has undergone a transformative shift, with generative models emerging as one of the most significant and impactful areas of research. Generative models, in essence, are models that can generate new data instances that resemble a given set of training data. Unlike discriminative models, which focus on classification tasks, generative models aim to understand and replicate the underlying structure of data, making them capable of generating images, text, audio, and even 3D objects. This introduction serves as a foundation for understanding the core principles behind the most important generative models: Autoencoders (AE), Variational Autoencoders (VAE), Masked Autoencoders (MAE), Generative Adversarial Networks (GANs), Diffusion Models, and models like GPT (Generative Pre-trained Transformers).


Explainable AI Across Domains: Techniques, Domain-Specific Applications, and Future Directions

December 2024 · 80 Reads

Explainability in artificial intelligence (AI) has become crucial for ensuring transparency, trust, and usability across diverse application domains, such as healthcare, finance, and autonomous systems. This comprehensive review analyzes the state of research on explainability techniques, categorizing approaches into model-agnostic, model-specific, and hybrid methods. Key techniques, such as SHAP, LIME, and rule-based explanations, are discussed alongside their respective strengths and limitations. The review also delves into domain-specific applications, highlighting unique interpretability requirements in sectors like medical diagnostics, credit scoring, and autonomous decision-making. We further explore the evaluation metrics and benchmarks essential for assessing the quality and effectiveness of explainable AI, addressing challenges such as computational complexity, user-centered design, and ethical considerations. By identifying gaps in current methodologies, this review proposes future research directions aimed at developing adaptable, cross-domain explainability frameworks, enhancing robustness against adversarial manipulations, and promoting ethically aligned AI.
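As one concrete model-agnostic example in the same family as SHAP and LIME, the sketch below computes permutation importance with scikit-learn; the synthetic dataset and model choice are assumptions for illustration.

    # Permutation importance: shuffle one feature at a time, measure the accuracy drop.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # only features 0 and 1 matter

    model = RandomForestClassifier(random_state=0).fit(X, y)

    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i, imp in enumerate(result.importances_mean):
        print(f"feature {i}: importance {imp:.3f}")  # features 0 and 1 should dominate

Unlike SHAP's per-prediction attributions, this yields a single global ranking, which is often the first diagnostic applied before heavier explanation methods.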


Citations (2)


... Medical image generation and reconstruction refer to the process of creating realistic medical images or enhancing existing ones to improve their quality [193]. These techniques are especially useful in situations where high-resolution images are difficult to obtain due to technical or economic constraints. ...

Reference: Generative Adversarial Networks Bridging Art and Machine Intelligence
Citing paper: Pseudo Training Data Generation for Unsupervised Cell Membrane Segmentation in Immunohistochemistry Images
  • Conference Paper
  • December 2024

... Emerging architectures like Vision Transformers (ViT) also show promise, as demonstrated by Mzoughi et al. [105], where ViT outperformed CNNs (91.61% vs. 83.37% accuracy) with improved interpretability using Grad-CAM, LIME, and SHAP. ...

Reference: A Comprehensive Guide to Explainable AI: From Classical Models to LLMs