Sébastien Adam’s research while affiliated with Université de Rouen Normandie and other places

Publications (110)


3-WL GNNs for Metric Learning on Graphs
  • Conference Paper

January 2025 · Pierre Héroux · [...] · Sebastien Adam

Figure 7: Comparison of the computation time of $P_l$ and $P_l^*$ as a function of the number of nodes, for $l \in \{3, 4, 5, 6\}$. In each panel, the yellow curve shows the computation time of $P_l$ divided by the theoretical gain.
Finding path and cycle counting formulae in graphs with Deep Reinforcement Learning
  • Preprint
  • File available

October 2024 · 30 Reads

This paper presents Grammar Reinforcement Learning (GRL), a reinforcement learning algorithm that uses Monte Carlo Tree Search (MCTS) and a transformer architecture modeling a Pushdown Automaton (PDA) within a context-free grammar (CFG) framework. Taking as a use case the problem of efficiently counting paths and cycles in graphs, a key challenge in network analysis, computer science, biology, and the social sciences, GRL discovers new matrix-based formulas for path/cycle counting that improve computational efficiency by factors of two to six w.r.t. state-of-the-art approaches. Our contributions include: (i) a framework for generating gramformers that operate within a CFG, (ii) the development of GRL for optimizing formulas within grammatical structures, and (iii) the discovery of novel formulas for graph substructure counting, leading to significant computational improvements.
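
For intuition about the kind of matrix-based counting formulas GRL searches over, here is a minimal numpy sketch of two classical identities (these are textbook formulas, not the new ones discovered by GRL; all function names are ours):

```python
import numpy as np

def count_substructures(A: np.ndarray):
    """Classical matrix identities for substructure counting on a
    simple undirected graph with 0/1 adjacency matrix A:
    - paths of length 2: sum over nodes of deg * (deg - 1) / 2,
    - triangles: trace(A^3) / 6 (each triangle is traversed once
      per starting node and per direction)."""
    deg = A.sum(axis=1)
    p2 = int((deg * (deg - 1)).sum()) // 2
    triangles = int(np.trace(A @ A @ A)) // 6
    return p2, triangles

# Toy usage: a 4-cycle with one chord (two triangles, eight 2-paths).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [1, 1, 1, 0]])
print(count_substructures(A))  # (8, 2)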

Fig. 1: (a) Dynamic link prediction on a Discrete Time Dynamic Graph: the model takes snapshots from $t_2$ to $t_4$ as input and predicts the existence of edges at time $t_5$. The number of time steps from which the model can capture information is called the size of the receptive field, denoted $\tau$. (b) When predicting, for example at $t_5$, the encoder of the model computes node representation vectors based on the input. (c) For each edge to be predicted, the decoder of the model combines the relevant node representations to obtain the probability of the existence of the edge. The figure shows the edge between the green and yellow nodes at time $t_5$ as an example.
Fig. 2: Two categories of Discrete Time Dynamic Graph Neural Networks (DTDGNNs). Left: sequentially encoding the hidden states $H$ of each snapshot across time with a temporal encoder $f_T(\cdot)$. Right: sequentially encoding the parameters $\Theta$ of the graph encoder $f_G(\cdot)$ across time with a temporal encoder $f_T(\cdot)$.
Fig. 3: Model performance in average precision, with an optimal temporal receptive field $\tau^*$ vs. all temporal information $\tau_\infty$.
Fig. 4: Average precision (AP) scores of various DTDG models across multiple datasets, shown as a function of the temporal receptive field $\tau$. A value of $\tau_\infty$ represents the use of all temporal information.
Table: Statistics of the datasets used in our experiments.
Temporal receptive field in dynamic graph learning: A comprehensive analysis

July 2024 · 14 Reads

Dynamic link prediction is a critical task in the analysis of evolving networks, with applications ranging from recommender systems to economic exchanges. However, the concept of the temporal receptive field, which refers to the temporal context that models use for making predictions, has been largely overlooked and insufficiently analyzed in existing research. In this study, we present a comprehensive analysis of the temporal receptive field in dynamic graph learning. By examining multiple datasets and models, we formalize the role of the temporal receptive field and highlight its crucial influence on predictive accuracy. Our results demonstrate that an appropriately chosen temporal receptive field can significantly enhance model performance, while for some models, overly large windows may introduce noise and reduce accuracy. We conduct extensive benchmarking to validate our findings, ensuring that all experiments are fully reproducible.
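
Operationally, a temporal receptive field of size $\tau$ means the model only sees the last $\tau$ snapshots when predicting the next one. A minimal sketch (names and data layout are our assumptions, not the paper's code):

```python
def windows(snapshots, tau):
    """Yield (inputs, target) pairs in which the model sees only the
    last `tau` snapshots before the snapshot it must predict."""
    for t in range(tau, len(snapshots)):
        yield snapshots[t - tau:t], snapshots[t]

# Toy usage: with tau = 3, the prediction at time t uses t-3 .. t-1.
snaps = ["G1", "G2", "G3", "G4", "G5"]
for past, target in windows(snaps, tau=3):
    print(past, "->", target)
# ['G1', 'G2', 'G3'] -> G4
# ['G2', 'G3', 'G4'] -> G5
```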


Dynamic Graph Representation Learning With Neural Networks: A Survey

January 2024 · 157 Reads · 26 Citations · IEEE Access

In recent years, Dynamic Graph (DG) representations have been increasingly used for modeling dynamic systems due to their ability to integrate both topological and temporal information in a compact representation. Dynamic graphs efficiently handle applications such as social network prediction, recommender systems, traffic forecasting, or electroencephalography analysis, which cannot be addressed using standard numerical representations. As a direct consequence, dynamic graph learning has emerged as a new machine learning problem, combining challenges from both sequential/temporal data processing and static graph learning. In this research area, the Dynamic Graph Neural Network (DGNN) has become the state-of-the-art approach, and a plethora of models have been proposed over the last few years. This paper aims to provide a review of the problems and models related to dynamic graph learning. The various dynamic graph supervised learning settings are analyzed and discussed. We identify the similarities and differences between existing models concerning the way time information is modeled. Finally, we provide guidelines for DGNN design and optimization, and review public datasets for evaluating model performance on various tasks, along with the corresponding publications.
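
A schematic sketch of one common DGNN design discussed in the survey: encode each snapshot with a graph encoder $f_G$, then summarize the sequence of hidden states with a temporal encoder $f_T$. The mean aggregation and exponential moving average below are illustrative stand-ins for real graph and temporal encoders, not any surveyed model:

```python
import numpy as np

def graph_encoder(A, X, W):
    """f_G: one mean-aggregation propagation step, a stand-in for
    any GNN layer; W is a learnable weight matrix."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1)
    return np.tanh((A @ X) / deg @ W)

def temporal_encoder(H_seq, alpha=0.5):
    """f_T: an exponential moving average over time, a stand-in for
    the RNN/attention temporal module of a real DGNN."""
    state = np.zeros_like(H_seq[0])
    for H in H_seq:
        state = alpha * H + (1 - alpha) * state
    return state

# Stacked design: encode each snapshot with f_G, then summarize the
# sequence of hidden states with f_T.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
snapshots = [(rng.integers(0, 2, (5, 5)), rng.normal(size=(5, 4)))
             for _ in range(3)]
H_seq = [graph_encoder(A, X, W) for A, X in snapshots]
Z = temporal_encoder(H_seq)  # node embeddings passed to a decoder
```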


Dynamic Graph Representation Learning with Neural Networks: A Survey

April 2023 · 84 Reads · 1 Citation

In recent years, Dynamic Graph (DG) representations have been increasingly used for modeling dynamic systems due to their ability to integrate both topological and temporal information in a compact representation. Dynamic graphs make it possible to efficiently handle applications such as social network prediction, recommender systems, traffic forecasting, or electroencephalography analysis, which cannot be addressed using standard numeric representations. As a direct consequence of the emergence of dynamic graph representations, dynamic graph learning has emerged as a new machine learning problem, combining challenges from both sequential/temporal data processing and static graph learning. In this research area, the Dynamic Graph Neural Network (DGNN) has become the state-of-the-art approach, and a plethora of models have been proposed over the last few years. This paper aims to provide a review of the problems and models related to dynamic graph learning. The various dynamic graph supervised learning settings are analysed and discussed. We identify the similarities and differences between existing models with respect to the way time information is modeled. Finally, general guidelines are provided for a DGNN designer faced with a dynamic graph learning problem.


Figure 1: Production of the sentence $\mathbf{1}^T A \,\mathrm{diag}(A\mathbf{1})\, \mathbf{1}$ with $G_{L_1}$.
Figure 3: Model of the $G^2N^2$ architecture, from the graph to the output. One can see that each layer updates the node and edge embeddings, and that the readout function acts separately on the diagonal and off-diagonal entries of $C^{(k)}$ and $H^{(k)}$.
Figure 6: The Hadamard product allows computing 4-cycle counts at edge level from the adjacency matrix and $X_3$.
$R^2$ scores on spectral filtering node regression problems. Results are medians over 10 different runs.
Results on the QM9 dataset, focusing on the best methods. The metric is MAE; the lower, the better. Complete results can be found in Table 5.
Technical report: Graph Neural Networks go Grammatical

March 2023 · 117 Reads

This paper proposes a new GNN design strategy. This strategy relies on Context-Free Grammars (CFGs) generating the matrix language MATLANG. It enables us to ensure WL-expressive power, substructure counting abilities, and spectral properties. Applying our strategy, we design the Grammatical Graph Neural Network $G^2N^2$, a provably 3-WL GNN able to count cycles of length up to 6 at edge level and to reach band-pass filters. A large number of experiments covering these properties corroborate the presented theoretical results.
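
The edge-level substructure counting that $G^2N^2$ targets can be illustrated with the simplest case: the Hadamard product $A \odot A^2$ gives, for each edge $(i, j)$, the number of triangles through that edge (Figure 6 of the paper uses the analogous construction with $X_3$ for 4-cycles). A minimal sketch, with names of our choosing:

```python
import numpy as np

def triangles_per_edge(A: np.ndarray) -> np.ndarray:
    """(A * A^2)[i, j] = number of common neighbors of i and j,
    i.e. the number of triangles through edge (i, j)."""
    return A * (A @ A)

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [1, 1, 1, 0]])
print(triangles_per_edge(A))
# Entry (1, 3) is 2: edge (1, 3) lies on both triangles of this graph.
```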


Data extraction and matching The EurHisFirm experience

December 2021 · 8 Reads

This paper reports results from the design phase of EurHisFirm. Its goal is to integrate isolated and poorly accessible financial datasets on 19th- and 20th-century European companies so that users can query the data as if they resided in one large database. In addition, it aims to stimulate database construction by providing not only a methodology and tools to connect to and collaborate with existing databases, but also a collaborative platform, based on machine learning and artificial intelligence, that allows harvesting data in a semi-automatic way. We present the proof-of-concept results of this platform, together with the performance of the matching algorithms that are necessary to connect and collate the different constituent databases, as well as to link them to contemporary commercial databases.
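
As a hedged illustration of the kind of name-matching step such a platform requires (difflib, the normalization, and the stop-word list are our illustrative choices, not EurHisFirm's actual algorithms):

```python
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Crude normalization: lowercase, strip punctuation, and drop
    common legal-form tokens found in historical company names.
    (The stop-word list is a hypothetical example.)"""
    tokens = name.lower().replace(".", " ").replace(",", " ").split()
    stop = {"cie", "co", "ltd", "sa", "the"}
    return " ".join(t for t in tokens if t not in stop)

def match_score(a: str, b: str) -> float:
    """Similarity in [0, 1] between two normalized names."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

# Toy usage: linking a historical record to a modern database entry;
# prints a high similarity score for these two spellings.
print(match_score("Compagnie des Chemins de Fer du Nord",
                  "Chemins de fer du Nord, Cie."))
```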


Symbols Detection and Classification using Graph Neural Networks

October 2021 · 60 Reads · 10 Citations · Pattern Recognition Letters

In this paper, we propose a method to both extract and classify symbols in floorplan images. This method relies on very recent developments in Graph Neural Networks (GNNs). In the proposed approach, floorplan images are first converted into Region Adjacency Graphs (RAGs). In order to achieve both classification and extraction, two different GNNs are used. The first aims at classifying each node of the graph, while the second targets the extraction of clusters corresponding to symbols. In both cases, the model is able to take edge features into account. Each model is first evaluated independently, before both tasks are combined, which speeds up obtaining the results.
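
A hedged sketch of a message-passing step that takes edge features into account, the property both GNNs in the pipeline share; all shapes, names, and the aggregation scheme are our assumptions, not the paper's architecture:

```python
import numpy as np

def mp_step(A, X, E, W_n, W_e):
    """One message-passing step in which the aggregated neighbor
    messages are augmented by the features of the edges traversed.
    A: (n, n) adjacency, X: (n, d) node features,
    E: (n, n, d) edge features (zero where there is no edge)."""
    msgs = A @ (X @ W_n)                     # neighbor contribution
    msgs += np.einsum("ijd,de->ie", E, W_e)  # edge contribution
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1)
    return np.tanh(msgs / deg)

# Toy usage with a random symmetric graph and random weights.
rng = np.random.default_rng(1)
n, d = 6, 4
A = np.triu(rng.integers(0, 2, (n, n)), 1)
A = A + A.T
X = rng.normal(size=(n, d))
E = A[..., None] * rng.normal(size=(n, n, d))
H = mp_step(A, X, E, rng.normal(size=(d, d)), rng.normal(size=(d, d)))
```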


Breaking the Limits of Message Passing Graph Neural Networks

June 2021 · 700 Reads · 1 Citation

Since Message Passing (Graph) Neural Networks (MPNNs) have linear complexity with respect to the number of nodes when applied to sparse graphs, they have been widely implemented and still attract a lot of interest, even though their theoretical expressive power is limited to the first-order Weisfeiler-Lehman test (1-WL). In this paper, we show that if the graph convolution supports are designed in the spectral domain by a non-linear custom function of eigenvalues and masked with an arbitrarily large receptive field, the MPNN is theoretically more powerful than the 1-WL test and experimentally as powerful as existing 3-WL models, while remaining spatially localized. Moreover, by designing custom filter functions, outputs can have various frequency components that allow the convolution process to learn different relationships between a given input graph signal and its associated properties. So far, the best 3-WL-equivalent graph neural networks have a computational complexity in $\mathcal{O}(n^3)$ with memory usage in $\mathcal{O}(n^2)$, rely on non-local update mechanisms, and do not provide a rich spectral profile of outputs. The proposed method overcomes all these problems and reaches state-of-the-art results in many downstream tasks.
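
A hedged numpy sketch of the construction described above: design a convolution support in the spectral domain via a non-linear function of the Laplacian eigenvalues, then mask it so it stays spatially localized. The band-pass filter and the mask radius are illustrative choices, not the paper's exact design:

```python
import numpy as np

def spectral_support(A, filt, radius=2):
    """Build a convolution support C = U f(L) U^T from the normalized
    Laplacian L = I - D^{-1/2} A D^{-1/2}, then mask it with the
    `radius`-hop receptive field so it stays spatially localized."""
    deg = A.sum(axis=1).astype(float)
    d_inv_sqrt = np.zeros_like(deg)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    lam, U = np.linalg.eigh(L)
    C = U @ np.diag(filt(lam)) @ U.T
    # Receptive-field mask: nonzeros of (I + A)^radius.
    mask = np.linalg.matrix_power(np.eye(len(A)) + A, radius) > 0
    return C * mask

# Example: a band-pass filter centered on eigenvalue 1.
band_pass = lambda lam: np.exp(-10.0 * (lam - 1.0) ** 2)
```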


Citations (72)


... The Graph Edit Distance (GED) [4] is a state-of-the-art method for this purpose; however, it suffers from NP-hard complexity. Recently, several architectures have been proposed to address this limitation [9,7,8,11,12] in a learning framework. These architectures generally consist of two main components. ...

Reference:

3-WL GNNs for Metric Learning on Graphs
Graph node matching for edit distance
  • Citing Article
  • August 2024

Pattern Recognition Letters
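
For context on the NP-hard complexity mentioned in the snippet above, exact GED is implemented in networkx and is tractable only on toy graphs; a minimal example:

```python
import networkx as nx

# Exact GED has exponential worst-case cost: fine on toy graphs only.
G1 = nx.cycle_graph(4)  # 4 nodes, 4 edges
G2 = nx.path_graph(4)   # 4 nodes, 3 edges
print(nx.graph_edit_distance(G1, G2))  # 1.0 (delete one edge)
```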

... Consequently, researchers have proposed the construction of dynamic graphs to model the dynamic relationships contained in historical time series data. Dynamic graphs, whose topological structure can be adjusted over time, offer significant advantages in capturing the evolutionary features and dynamic dependencies of spatio-temporal data [27][28][29]. For example, Li et al. [30] developed a DSTGN model that used dynamic graph structures (SDs) to effectively capture dynamic associations within the data, enabling predictions based on the extraction of dynamic features from multivariate time series. ...

Dynamic Graph Representation Learning With Neural Networks: A Survey

IEEE Access

... Most research focused on one type of object detector, such as You Only Look Once (YOLO) [28] based approaches [11,15,24,[29][30][31][32] or Faster Region-based Convolutional Neural Network (Faster R-CNN) [33] based approaches [2,6,34,35,55]. Other approaches were based on Fully Convolutional Network (FCN) [37] segmentation models [38,39] or graph-based methods [40][41][42]. ...

Symbols Detection and Classification using Graph Neural Networks
  • Citing Article
  • October 2021

Pattern Recognition Letters

... The difference is that GCN uses the Laplacian matrix, while GAT uses attention coefficients. Different from the above two methods, we adopt a simpler method [39] to aggregate neighbor information, which depends on the node update mode given in the following equation: ...

Breaking the Limits of Message Passing Graph Neural Networks

... Spectral decomposition. In spectral decomposition/analysis (Balcilar et al. 2021; Gao et al. 2021), the normalized Laplacian matrix $L$ of graph $G$ is defined as $L = I - D^{-1/2} A D^{-1/2}$, where $A$ and $D$ are the adjacency matrix and the degree matrix, respectively. We decompose $L$ as $L = U \Lambda U^\top$ via a spectral decomposition, where $\Lambda$ is the diagonal matrix $\Lambda = \mathrm{diag}(\lambda_1, \ldots$ ...

Analyzing the Expressive Power of Graph Neural Networks in a Spectral Perspective

... We subsequently used the depth labels provided by ChebNet in the link prediction module. The reason for ChebNet performing better than the remaining two methods could be that ChebNet covers the full spectrum profile, unlike GCN, as indicated in [53]. HGNN builds on top of the linear approximation used in GCN and shows similar performance. ...

When Spectral Domain Meets Spatial Domain in Graph Neural Networks
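
A minimal sketch of the point made in the snippet above: ChebNet's supports are Chebyshev polynomials of increasing degree in the rescaled Laplacian, so a learned combination can realize band-pass and high-pass responses, whereas GCN's single low-pass support cannot (function names are ours):

```python
import numpy as np

def cheb_supports(L, K):
    """Chebyshev supports T_0(Lt), ..., T_{K-1}(Lt) on the rescaled
    Laplacian Lt = 2L/lambda_max - I. Polynomials of increasing
    degree let a learned combination cover the whole spectrum,
    unlike GCN's single low-pass support."""
    n = len(L)
    lam_max = np.linalg.eigvalsh(L)[-1]
    Lt = 2.0 * L / lam_max - np.eye(n)
    T = [np.eye(n), Lt]
    for _ in range(2, K):
        T.append(2.0 * Lt @ T[-1] - T[-2])  # T_k = 2x T_{k-1} - T_{k-2}
    return T[:K]
```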

... [91] use modern char-level LM embeddings (Flair); [184] use modern word-level LM embeddings (BERT, ELMo); [70,152,203] use stacks of modern embeddings; [105,137,162] study transfer learning: how well can modern embeddings transfer to historical texts, and what is the impact of in-domain embeddings? ...

A Named Entity Extraction System for Historical Financial Data
  • Citing Chapter
  • August 2020

Lecture Notes in Computer Science

... [12][13][14] Moreover, several machine learning (ML) approaches have recently emerged based on Mayr's database, which currently holds experimental reactivity parameters for 355 electrophiles and 1300 nucleophiles. 6,[15][16][17][18][19][20][21] Our recent work introduces ESNUEL, 22 a fully automated quantum chemistry (QM)-based workflow for EStimating NUcleophilicity and ELectrophilicity, a workflow that builds upon studies by van Vranken and Baldi showing that calculated methyl cation affinities (MCAs) and methyl anion affinities (MAAs) of structurally different molecules correlate with Mayr's $N \times s_N$ and $E$, respectively, when accounting for solvent effects. ...

Predicting experimental electrophilicities from quantum and topological descriptors: A machine learning approach
  • Citing Article
  • July 2020

Journal of Computational Chemistry

... The GraphSAGE network is a spatial graph neural network introduced in Reference [42]; it is one type of graph neural network. Graph neural networks are currently widely applied in various fields, including link prediction [43], node classification [44], community detection [45], graph classification [46], and graph embedding [47]. ...

Bridging the Gap Between Spectral and Spatial Domains in Graph Neural Networks

... Ferrando (2018) adopts a heuristic approach that identifies room connections as edges and room regions as nodes, under the assumption that all doors are axis-aligned, which makes it difficult to generalize to complex floor plans. Renton et al. (2019) apply a graph neural network to detect and classify indoor elements from a floor plan image and transform them into adjacency graphs, excluding other important architectural elements such as walls and rooms. Song and Yu (2021) convert floor plan images into polygon graph data using graph neural networks. ...

Graph Neural Network for Symbol Detection on Document Images
  • Citing Conference Paper
  • September 2019