index.json
[{"authors":["vijay-dwivedi"],"categories":null,"content":"Vijay Dwivedi is a first year PhD student in Machine Learning at NTU, Singapore supervised by Dr. Xavier Bresson. His primary interest is developing Deep Learning algorithms on graph-structured data and their applications to domains such as quantum chemistry, social networks, etc.\nBefore starting his PhD, Vijay worked with Dr. Bresson as a Research Assistant in the same lab. He has a background in Computer Science (BTech) from MNNIT Allahabad, where he explored the fields of Natural Language Processing and Multi-Modal Systems.\n","date":1592161427,"expirydate":-62135596800,"kind":"taxonomy","lang":"en","lastmod":1592161427,"objectID":"d15c7e26e75929b01d15b6a505059a72","permalink":"https://graphdeeplearning.github.io/authors/vijay-dwivedi/","publishdate":"0001-01-01T00:00:00Z","relpermalink":"/authors/vijay-dwivedi/","section":"authors","summary":"Vijay Dwivedi is a first year PhD student in Machine Learning at NTU, Singapore supervised by Dr. Xavier Bresson. His primary interest is developing Deep Learning algorithms on graph-structured data and their applications to domains such as quantum chemistry, social networks, etc.\nBefore starting his PhD, Vijay worked with Dr. Bresson as a Research Assistant in the same lab. He has a background in Computer Science (BTech) from MNNIT Allahabad, where he explored the fields of Natural Language Processing and Multi-Modal Systems.","tags":null,"title":"Vijay Prakash Dwivedi","type":"authors"},{"authors":["chaitanya-joshi"],"categories":null,"content":"Chaitanya Joshi is a Research Assistant under Dr. Xavier Bresson at NTU, Singapore. His current research focuses on the emerging field of Graph Deep Learning and its applications for Operations Research and Combinatorial Optimization.\nHe graduated from NTU in 2019 as the Valedictorian of his cohort with a BEng in Computer Science and a specialization in Artificial Intelligence. He is passionate about building data-driven solutions for real-world problems, and has 3+ years of experience doing the same at companies and research labs in Singapore and Switzerland. He has co-authored patent applications and research papers at top Machine Learning conferences such as NeurIPS and ICLR.\n","date":1591920000,"expirydate":-62135596800,"kind":"taxonomy","lang":"en","lastmod":1591920000,"objectID":"7570c80afe3b75e2650605764905ca7e","permalink":"https://graphdeeplearning.github.io/authors/chaitanya-joshi/","publishdate":"0001-01-01T00:00:00Z","relpermalink":"/authors/chaitanya-joshi/","section":"authors","summary":"Chaitanya Joshi is a Research Assistant under Dr. Xavier Bresson at NTU, Singapore. His current research focuses on the emerging field of Graph Deep Learning and its applications for Operations Research and Combinatorial Optimization.\nHe graduated from NTU in 2019 as the Valedictorian of his cohort with a BEng in Computer Science and a specialization in Artificial Intelligence. He is passionate about building data-driven solutions for real-world problems, and has 3+ years of experience doing the same at companies and research labs in Singapore and Switzerland.","tags":null,"title":"Chaitanya Joshi","type":"authors"},{"authors":["xavier-bresson"],"categories":null,"content":"Xavier Bresson (PhD 2005, EPFL, Switzerland) is Associate Professor in Computer Science at NTU, Singapore. 
He is a leading researcher in the field of Graph Deep Learning, a new framework that combines graph theory and deep learning techniques to tackle complex data domains in natural language processing, computer vision, combinatorial optimization, quantum chemistry, physics, neuroscience, genetics and social networks. In 2016, he received the highly competitive Singaporean NRF Fellowship of $2.5M to develop these deep learning techniques. He was also awarded several research grants in the U.S. and Hong Kong. As a leading researcher in the field, he has published more than 60 peer-reviewed papers in the leading journals and conference proceedings in machine learning, including articles in NeurIPS, ICML, ICLR, CVPR, JMLR. He has organized several international workshops and tutorials on AI and deep learning in collaboration with Facebook, NYU and Imperial such as the 2019 and 2018 UCLA workshops, the 2017 CVPR tutorial and the 2017 NeurIPS tutorial. He has been teaching undergraduate, graduate and industrial courses in AI and deep learning since 2014 at EPFL (Switzerland), NTU (Singapore) and UCLA (U.S.).\n","date":1591920000,"expirydate":-62135596800,"kind":"taxonomy","lang":"en","lastmod":1591920000,"objectID":"63dcb8a76817f689f77e4d0a567d1ad3","permalink":"https://graphdeeplearning.github.io/authors/xavier-bresson/","publishdate":"0001-01-01T00:00:00Z","relpermalink":"/authors/xavier-bresson/","section":"authors","summary":"Xavier Bresson (PhD 2005, EPFL, Switzerland) is Associate Professor in Computer Science at NTU, Singapore. He is a leading researcher in the field of Graph Deep Learning, a new framework that combines graph theory and deep learning techniques to tackle complex data domains in natural language processing, computer vision, combinatorial optimization, quantum chemistry, physics, neuroscience, genetics and social networks. In 2016, he received the highly competitive Singaporean NRF Fellowship of $2.","tags":null,"title":"Xavier Bresson","type":"authors"},{"authors":["peng-xu"],"categories":null,"content":"Peng Xu is a Postdoctoral Scholar working with Dr. Xavier Bresson at NTU, Singapore. His current research focuses on sketch-based human-computer interaction and representation learning for sketches, as well as computer vision. He received his PhD degree from Pattern Recognition and Intelligent System Laboratory (PRIS) in Beijing University of Posts and Telecommunications (BUPT), supervised by Prof. Jun Guo. During his PhD, he was a visiting student in sketchX Lab, Queen Mary University of London (QMUL) and the National Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences (CASIA).\n","date":1578989967,"expirydate":-62135596800,"kind":"taxonomy","lang":"en","lastmod":1578989967,"objectID":"6d262e4afb0164909446efb9a5792c7b","permalink":"https://graphdeeplearning.github.io/authors/peng-xu/","publishdate":"0001-01-01T00:00:00Z","relpermalink":"/authors/peng-xu/","section":"authors","summary":"Peng Xu is a Postdoctoral Scholar working with Dr. Xavier Bresson at NTU, Singapore. His current research focuses on sketch-based human-computer interaction and representation learning for sketches, as well as computer vision. He received his PhD degree from Pattern Recognition and Intelligent System Laboratory (PRIS) in Beijing University of Posts and Telecommunications (BUPT), supervised by Prof. Jun Guo. 
During his PhD, he was a visiting student in sketchX Lab, Queen Mary University of London (QMUL) and the National Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences (CASIA).","tags":null,"title":"Peng Xu","type":"authors"},{"authors":["victor-getty"],"categories":null,"content":"Victor Getty is a Research Assistant under Dr. Xavier Bresson at NTU, Singapore, working on applications of Graph Neural Networks to Quantum Chemistry. He graduated from NTU in 2018 with a BSc in Mathematics.\n","date":1568730035,"expirydate":-62135596800,"kind":"taxonomy","lang":"en","lastmod":1568730035,"objectID":"950df23f7dff572b3d8601730716b544","permalink":"https://graphdeeplearning.github.io/authors/victor-getty/","publishdate":"0001-01-01T00:00:00Z","relpermalink":"/authors/victor-getty/","section":"authors","summary":"Victor Getty is a Research Assistant under Dr. Xavier Bresson at NTU, Singapore, working on applications of Graph Neural Networks to Quantum Chemistry. He graduated from NTU in 2018 with a BSc in Mathematics.","tags":null,"title":"Victor Getty","type":"authors"},{"authors":["axel-nilsson"],"categories":null,"content":"Axel Nilsson is an exchange research student under Dr. Xavier Bresson at NTU, Singapore. His research project focuses on Spectral Graph Neural Networks and their transferability.\nAxel is a Master’s student at EPFL and obtained a BSc from the same school in 2017.\n","date":-62135596800,"expirydate":-62135596800,"kind":"taxonomy","lang":"en","lastmod":-62135596800,"objectID":"7abf0cbbec3519fe8730f9524e7f7f30","permalink":"https://graphdeeplearning.github.io/authors/axel-nilsson/","publishdate":"0001-01-01T00:00:00Z","relpermalink":"/authors/axel-nilsson/","section":"authors","summary":"Axel Nilsson is an exchange research student under Dr. Xavier Bresson at NTU, Singapore. His research project focuses on Spectral Graph Neural Networks and their transferability.\nAxel is a Master’s student at EPFL and obtained a BSc from the same school in 2017.","tags":null,"title":"Axel Nilsson","type":"authors"},{"authors":["david-low"],"categories":null,"content":"David Low is currently a PhD student at the School of Computer Science \u0026amp; Engineering, NTU, supervised by Associate Professor Xavier Bresson. His current research focuses on Deep Learning and its applications for Natural Language Processing.\nBefore starting his PhD, he cofounded two startups and worked as a data scientist at Infocomm Development Authority, Singapore. In 2016, he represented Singapore and the National University of Singapore (NUS) in the Data Science Game in France and clinched the top spot among teams from Asia and America.\nThroughout his career, David has engaged in data science projects ranging from banking, telco and e-commerce to insurance. Some of his work, including sales forecast modeling, mineral deposit prediction and process optimization, has won him awards in several machine learning competitions. 
Earlier in his career, David was involved in research collaborations with Carnegie Mellon University (CMU) and Massachusetts Institute of Technology (MIT) on separate projects funded by the National Research Foundation and SMART.\n","date":-62135596800,"expirydate":-62135596800,"kind":"taxonomy","lang":"en","lastmod":-62135596800,"objectID":"6c9a46e1612dac4eae89a9765ad008a9","permalink":"https://graphdeeplearning.github.io/authors/david-low/","publishdate":"0001-01-01T00:00:00Z","relpermalink":"/authors/david-low/","section":"authors","summary":"David Low is currently a PhD student at the School of Computer Science \u0026amp; Engineering, NTU, supervised by Associate Professor Xavier Bresson. His current research focuses on Deep Learning and its applications for Natural Language Processing.\nBefore starting his PhD, he cofounded two startups and worked as a data scientist at Infocomm Development Authority, Singapore. In 2016, he represented Singapore and the National University of Singapore (NUS) in the Data Science Game in France and clinched the top spot among teams from Asia and America.","tags":null,"title":"David Low Jia Wei","type":"authors"},{"authors":["Vijay Prakash Dwivedi"],"categories":[],"content":"This blog is based on the paper Benchmarking Graph Neural Networks, which is joint work with Chaitanya K. Joshi, Thomas Laurent, Yoshua Bengio and Xavier Bresson.\n Graph Neural Networks (GNNs) are widely used today in diverse applications of social sciences, knowledge graphs, chemistry, physics, neuroscience, etc., and accordingly there has been a great surge of interest and growth in the number of papers in the literature.\nHowever, it has been increasingly difficult to gauge the effectiveness of new models and validate new ideas that generalize universally to larger and more complex datasets in the absence of a standard and widely-adopted benchmark.\nTo address this paramount concern in graph learning research, we develop an open-source, easy-to-use and reproducible benchmarking framework with a rigorous experimental protocol that is representative of the categorical advances in GNNs.\n This post outlines the issues in the GNN literature that suggest the need for a benchmark, the framework proposed in the paper, the broad classes of widely used and powerful GNNs benchmarked, and the insights learnt from the extensive experiments. Why benchmark? In any core research or application area in deep learning, a benchmark helps to identify and quantify what types of architectures, principles, or mechanisms are universal and generalizable to real-world tasks and large datasets. Particularly, the recent revolution in this AI field is often credited, to a possibly large extent, to the large-scale benchmark image dataset, ImageNet. (Obviously, other driving factors include an increase in the volume of research, more datasets, compute, wide adoption, etc.)\n Fig 1: ImageNet Classification Leaderboard from paperswithcode.com Benchmarking has proven beneficial for driving progress, identifying essential ideas, and solving domain-related problems in many sub-fields of science. This project was conceived with this fundamental motivation.\n Need for a benchmarking framework for GNNs a. Datasets: Many of the widely cited papers in the GNN literature contain experiments that are evaluated on small graph datasets which have only a few hundred (or a few thousand) graphs.\n Fig 2: Statistics of the widely used TU datasets. 
Source: Errica et al., 2020 Take, for example, the ENZYMES dataset, which appears in almost every work on GNNs for graph classification. If one uses random $10$-fold cross validation (as in most papers), the test set would have $60$ graphs (i.e. $10$% of $600$ total graphs). That would mean a correct classification (or, alternatively, a misclassification) would change the test accuracy score by $1.67$%. A couple of samples could determine a $3.33$% difference in the performance measure, which is usually a significant gain claimed when one validates a new idea in the literature. You can see that such a small number of samples makes it unreliable to concretely acknowledge the advances. 1\nOur experiments, too, show that the standard deviation of performance on such datasets is large, making it difficult to draw substantial conclusions about a research idea. Moreover, most GNNs perform statistically the same on these datasets. The quality of these datasets also leads one to question whether they should be used while validating ideas on GNNs. On several of these datasets, simpler models sometimes perform as well as, or even beat, GNNs.\nConsequently, it has become difficult to differentiate complex, simple and graph-agnostic architectures for graph machine learning.\nb. Consistent experimental protocol: Several papers in the GNN literature do not have consensus on a unifying and robust experimental setting, which leads to discussions of the inconsistencies and re-evaluations of several papers\u0026rsquo; experiments.\nFor a couple of examples to highlight here, Ying et al., 2018 performed training on $10$-fold split data for a fixed number of epochs and reported the performance of the epoch which has the \u0026ldquo;highest average validation accuracy across the splits at any epoch\u0026rdquo;, whereas Lee et al., 2019 used an \u0026ldquo;early stopping criterion\u0026rdquo; by monitoring the epoch-wise validation loss and reported the \u0026ldquo;average test accuracy at last epoch\u0026rdquo; over the $10$-fold split.\nNow, if we extract results from both these papers to put them together in the same table and claim that the model with the highest performance score is the most promising of all, can we be convinced that the comparison is fair?\n There are other issues related to hyperparameter selection, comparisons with unfair budgets of trainable parameters, use of different train-validation-test splits, etc.\n The existence of such problems pushed us to develop a GNN benchmarking framework which standardizes GNN research and helps researchers make more meaningful advances.\n Challenges of building a GNN benchmark The lack of benchmarks has been a major issue in the GNN literature as the aforementioned requirements have not been strictly enforced.\nDesigning benchmarks is highly challenging as we must make robust decisions about the coding framework, experimental settings and appropriate datasets. The benchmark should also be comprehensive enough to cover most of the fundamental tasks, which indicates the application areas the research can be applied to. For instance, graph learning problems include predicting properties at the node-level, edge-level and graph-level. A benchmark should attempt to cover many, if not all, of these.\nSimilarly, it is challenging to collect real and representative large-scale datasets. The lack of theoretical tools that can define the quality of a dataset or validate its statistical representativeness for a given task makes it difficult to decide on datasets. 
Furthermore, there are arbitrary choices required for the node and edge features of graphs and the scale of graph sizes, as most of the popular graph learning frameworks do not cater ‘very efficiently’ to large graphs.\n There has been a promising effort recently, The Open Graph Benchmark (OGB), to collect meaningful medium-to-large scale datasets in order to steer graph learning research. The initiative is complementary to the goals of this project.\n Proposed benchmarking framework: We propose a benchmarking framework for graph neural networks with the following key characteristics:\n We develop a modular coding infrastructure which can be used to speed up the development of new ideas. Our framework adopts a rigorous and fair experimental protocol. We propose appropriate medium-scale datasets that can be used as plug-ins for later research. 2 Four fundamental tasks in graph machine learning are covered, i.e. graph classification, graph regression, node classification, and edge classification. a. Coding infrastructure: Our benchmarking code infrastructure is based on PyTorch/DGL.\nFrom a high-level view, our framework unifies independent components for i) Data pipelines, ii) GNN layers and models, iii) Training and evaluation functions, iv) Network and hyperparameter configurations, and v) Single execution scripts for reproducibility.\n Fig 3: Snapshot of our modular coding framework open-sourced on GitHub The detailed user instructions on the use of each of these components are described in the GitHub README. 3\n b. Datasets: We include 8 datasets from the diverse domains of chemistry, mathematical modeling, computer vision, combinatorial optimization and social networks.\n Fig 4: Summary statistics of the datasets included in the proposed benchmark The steps for dataset preparation and their relevance to benchmarking graph neural networks are described in the paper.\n It is worth mentioning that we include OGBL-COLLAB from OGB, which demonstrates that we can flexibly incorporate any of the current and future datasets from the OGB initiative.\n c. Experimental Protocol: We define a rigorous and fair experimental protocol for benchmarking graph neural network models.\nDataset splits: Given that the literature has issues with using different train-val-test splits for different models, we make sure our data pipelines provide the same training, validation and test splits for every GNN model compared. We follow standard splits for the datasets where available. For synthetic datasets with no standard splits, we ensure the class distribution or the synthetic properties are the same across the splits. Please refer to the paper for more details.\nTraining: We use the same training setup and reporting protocol for all experiments. We use the Adam optimizer to train the GNNs with a learning rate decay strategy based on the validation loss. 
We train each experiment for an unspecified number of epochs; training stops once the learning rate has decayed to a minimum value at which there is no significant learning.\n Importantly, this strategy spares users from having to decide how many epochs to train their model for.\n Each experiment is run on $4$ different seeds for a maximum of $12$ hours of training time and the summary statistics of the last-epoch scores of the $4$ experiments are reported.\nParameter budget: We decide on using two trainable parameter budgets: (i) $100k$ parameters for each GNN for all the tasks, and (ii) $500k$ parameters for GNNs for which we investigate scaling to more parameters and deeper layers. The number of hidden layers and hidden dimensions are selected accordingly to match these budgets.\nWe make this choice of having a similar parameter budget for fair comparison because it otherwise becomes difficult to rigorously evaluate different models. In the GNN literature, it is often seen that a new model is compared to the existing literature without any detail on the number of parameters, or any attempt to match model sizes. Having said that, our goal is not to find the optimal set of hyperparameters for each of the models, which is a compute-intensive task.\n d. Graph Neural Networks: We benchmark two broad classes of GNNs that represent the categorical advances in graph neural network architectures witnessed in the most recent literature. For nomenclature, we call the two classes GCNs (Graph Convolutional Networks) and WL-GNNs (Weisfeiler-Lehman GNNs).\n GCNs refer to the popular message-passing based GNNs which leverage sparse tensor computation, whereas WL-GNNs are the theoretically expressive GNNs based on the WL test for distinguishing non-isomorphic graphs, which require dense tensor computation at each layer.\n Accordingly, our experimental pipeline is shown in Fig 5 for GCNs and Fig 6 for WL-GNNs.\n Fig 5: Our standard experimental pipeline for GCNs which operate on sparse rank-$2$ tensors. Fig 6: Our standard experimental pipeline for WL-GNNs which operate on dense rank-$2$ tensors. We direct the readers to our paper and the corresponding works for more details on the mathematical formulations of the GNNs. For interested readers, we also include in the paper the block diagrams of the layer updates of each GNN benchmarked.\n For a quick recap at this stage, we have discussed the need for a benchmark, the challenges in building such a framework and the details of our proposed benchmarking framework. We now delve into the experiments. We perform a principled investigation into the message-passing based GCNs and the WL-GNNs to reveal important insights and highlight critical underlying challenges in building a powerful GNN model.\n Benchmarking GNNs on the proposed datasets. We perform exhaustive experiments on all datasets using every GNN model currently included in our benchmarking framework. The experiments help us draw many insights, a few of which are discussed here. We recommend reading the paper for details on the experimental results.\n The GNNs that we benchmark are: Vanilla Graph Convolutional Network (GCN), GraphSage, Graph Attention Network (GAT), Gaussian Mixture Model (MoNet), GatedGCN, Graph Isomorphism Network (GIN), RingGNN and 3WL-GNN.\n 1. Graph-agnostic NNs perform poorly on the proposed datasets: We compare all GNNs to a simple MLP which updates each node’s features independently of one another, i.e. 
ignoring the graph structure.\n The MLP node update equation at layer $\\ell$ is: $$ h_{i}^{\\ell+1} = \\sigma \\left( W^{\\ell} \\ h_{i}^{\\ell} \\right) $$\n The MLP evaluates to consistently low scores on each of the datasets, which shows the necessity of considering graph structure for these tasks. This result is also indicative of how appropriate these datasets are for GNN research, as they statistically separate models’ performance.\n2. GCNs outperform WL-GNNs on the proposed datasets: Although WL-GNNs are provably powerful in terms of graph isomorphism and expressiveness, the WL-GNN models that we consider were not able to outperform GCNs. These models are limited in scaling to larger datasets as their space/time complexity is inefficient compared to the GCNs, which leverage sparse tensors.\n GCNs are seen to conveniently scale to $16$ layers and provide the best results on all datasets, whereas the WL-GNNs face loss divergence and/or out-of-memory errors when trying to build deeper networks.\n 3. Anisotropic mechanisms improve message-passing GCN architectures: We can classify the message-passing GCN models into isotropic and anisotropic ones.\nA GCN model whose node update equation treats every edge direction equally is considered isotropic, and a GCN model whose node update equation treats every edge direction differently is considered anisotropic.\n Isotropic layer update equation: $$ h_{i}^{\\ell+1} = \\sigma \\Big( W_1^{\\ell} \\ h_{i}^{\\ell} + \\sum_{j \\in \\mathcal{N}_i} W_2^{\\ell} \\ h_{j}^{\\ell} \\Big) $$\n Anisotropic layer update equation: $$ h_{i}^{\\ell+1} = \\sigma \\Big( W_1^{\\ell} \\ h_{i}^{\\ell} + \\sum_{j \\in \\mathcal{N}_i} \\eta_{ij} \\ W_2^{\\ell} \\ h_{j}^{\\ell} \\Big) $$\n As per the above equations, GCN, GraphSage and GIN are isotropic GCNs, whereas GAT, MoNet and GatedGCN are anisotropic GCNs.\nOur benchmark experiments reveal that the anisotropic mechanism is an architectural improvement in GCNs which gives consistently impressive results. Note that the sparse and dense attention mechanisms (in GAT and GatedGCN respectively) are examples of anisotropic components in a GNN.\n4. There are underlying challenges for training the theoretically powerful WL-GNNs: We observe a high standard deviation of performance scores on the WL-GNNs. (Recall that we report the performance of each model over $4$ runs with different seeds.) This reveals a problem in training these models.\nUniversal training procedures like batched training and batch normalization are not used in WL-GNNs since they operate on dense rank-2 tensors.\nTo describe this clearly, the batching approach for GCNs in leading graph machine learning libraries, which operate on sparse rank-2 tensors, involves preparing a sparse block-diagonal adjacency matrix for a batch of graphs.\n Fig 7: Mini-batch graph represented with one sparse block-diagonal matrix. Source The WL-GNNs that operate on dense rank-2 tensors have components which compute information at/from every position in the dense tensor. Therefore, the same approach (Fig 7) is not applicable as it would make the entire block-diagonal matrix dense and would break sparsity.\nGCNs leverage batched training and hence batch normalization for stable and fast training. Besides, WL-GNNs, with the current design, are not suitable for single large graphs, e.g., OGBL-COLLAB. 
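To make this sparse-versus-dense contrast concrete, here is a small, self-contained SciPy sketch; the toy graph sizes are illustrative assumptions and this is not code from the benchmark repository. It builds the block-diagonal adjacency matrix of Fig 7 for a mini-batch of graphs and shows how much larger the equivalent dense rank-2 tensor is:

```python
import numpy as np
import scipy.sparse as sp

# Three toy graphs, given as dense adjacency matrices (complete graphs without self-loops).
adjacencies = [np.ones((n, n)) - np.eye(n) for n in (3, 4, 2)]

# Sparse batching used for message-passing GCNs (Fig 7): one block-diagonal
# adjacency matrix whose storage grows only with the number of edges.
A_sparse = sp.block_diag([sp.csr_matrix(a) for a in adjacencies], format="csr")
print(A_sparse.shape, A_sparse.nnz)    # (9, 9) with 20 stored (non-zero) entries

# The dense rank-2 tensor a WL-GNN layer would operate on: every entry, including
# the all-zero off-diagonal blocks, is materialised, so memory grows
# quadratically with the total number of nodes in the batch.
A_dense = A_sparse.toarray()
print(A_dense.size)                    # 81 entries for only 9 nodes
```

For a single large graph such as OGBL-COLLAB, the dense version quickly stops fitting in memory, which is exactly the failure mode described next.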
We failed to fit a dense tensor of this size in either GPU or CPU memory.\nHence, our benchmark suggests the need to rethink the design of WL-GNNs so that they can leverage sparsity, batching, normalization schemes, etc., which have become universal ingredients in deep learning.\n More reading With this introduction to the usefulness of a GNN benchmarking framework, we conclude this blog post, but there is more reading left if you\u0026rsquo;re interested in this work.\nParticularly, we investigate anisotropy and edge representations for link prediction in more detail in the paper and propose a new approach for improving GCNs with low structural expressivity. We shall discuss these separately in future blog posts for clarity.\nIf this benchmarking framework is useful in your research, please use the following bibtex in your work. For discussions, hit us with a query on GitHub Issues. We would love to discuss and improve the benchmark for steering more meaningful research in graph neural networks.\n@article{dwivedi2020benchmarkgnns, title={Benchmarking Graph Neural Networks}, author={Dwivedi, Vijay Prakash and Joshi, Chaitanya K and Laurent, Thomas and Bengio, Yoshua and Bresson, Xavier}, journal={arXiv preprint arXiv:2003.00982}, year={2020} } By this, we do not mean that the ideas are not useful and/or the work put in by the authors is not meaningful. Every effort equally contributes to the advance of this field. \u0026#x21a9;\u0026#xfe0e;\n As examples, you may refer to these works that leverage our framework to conveniently work on their research ideas. This indicates the effectiveness of having such a framework. \u0026#x21a9;\u0026#xfe0e;\n Note that we do not aim to develop a software library, but to come up with a coding framework where each component is simple and transparent to as many users as possible. \u0026#x21a9;\u0026#xfe0e;\n ","date":1592161427,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1592161427,"objectID":"35051d565f575214cd6a46739c9ba178","permalink":"https://graphdeeplearning.github.io/post/benchmarking-gnns/","publishdate":"2020-06-15T03:03:47+08:00","relpermalink":"/post/benchmarking-gnns/","section":"post","summary":"This blog is based on the paper Benchmarking Graph Neural Networks, which is joint work with Chaitanya K. 
Joshi, Thomas Laurent, Yoshua Bengio and Xavier Bresson.\n Graph Neural Networks (GNNs) are widely used today in diverse applications of social sciences, knowledge graphs, chemistry, physics, neuroscience, etc., and accordingly there has been a great surge of interest and growth in the number of papers in the literature.\nHowever, it has been increasingly difficult to gauge the effectiveness of new models and validate new ideas that generalize universally to larger and complex datasets in the absence of a standard and widely-adopted benchmark.","tags":["Deep Learning","Graph Neural Networks","Benchmark"],"title":"Benchmarking Graph Neural Networks","type":"post"},{"authors":["Chaitanya Joshi","Quentin Cappart","Louis-Martin Rousseau","Thomas Laurent","Xavier Bresson"],"categories":null,"content":"","date":1591920000,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1591920000,"objectID":"486d754122055da393f10deedced6df1","permalink":"https://graphdeeplearning.github.io/publication/joshi-2020-learning/","publishdate":"2020-06-17T00:46:25.621256Z","relpermalink":"/publication/joshi-2020-learning/","section":"publication","summary":"End-to-end training of neural network solvers for combinatorial problems such as the Travelling Salesman Problem is intractable and inefficient beyond a few hundreds of nodes. While state-of-the-art Machine Learning approaches perform closely to classical solvers for trivially small sizes, they are unable to generalize the learnt policy to larger instances of practical scales. Towards leveraging transfer learning to solve large-scale TSPs, this paper identifies inductive biases, model architectures and learning algorithms that promote generalization to instances larger than those seen in training. Our controlled experiments provide the first principled investigation into such zero-shot generalization, revealing that extrapolating beyond training data requires rethinking the entire neural combinatorial optimization pipeline, from network layers and learning paradigms to evaluation protocols.","tags":["Deep Learning","Graph Neural Networks","Operations Research","Combinatorial Optimization","Travelling Salesman Problem"],"title":"Learning TSP Requires Rethinking Generalization","type":"publication"},{"authors":["Vijay Prakash Dwivedi","Chaitanya Joshi","Xavier Bresson"],"categories":["Models"],"content":"","date":1583245235,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1583245235,"objectID":"715ce538d3425653656eb49888b96614","permalink":"https://graphdeeplearning.github.io/project/benchmark/","publishdate":"2020-03-03T22:20:35+08:00","relpermalink":"/project/benchmark/","section":"project","summary":"Identify universal building blocks for robust and scalable GNNs.","tags":["Deep Learning","Graph Neural Networks","Benchmark","Models","Spatial Graph ConvNets"],"title":"Benchmarking Graph Neural Networks","type":"project"},{"authors":["Vijay Prakash Dwivedi","Chaitanya Joshi","Thomas Laurent","Yoshua Bengio","Xavier Bresson"],"categories":null,"content":"","date":1583107200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1583107200,"objectID":"76585bebed66049885797c81005a1e65","permalink":"https://graphdeeplearning.github.io/publication/dwivedi-2020-benchmark/","publishdate":"2020-03-02T16:08:24+08:00","relpermalink":"/publication/dwivedi-2020-benchmark/","section":"publication","summary":"Graph neural networks (GNNs) have become the standard toolkit for analyzing and learning from data on graphs. 
As the field grows, it becomes critical to identify key architectures and validate new ideas that generalize to larger, more complex datasets. Unfortunately, it has been increasingly difficult to gauge the effectiveness of new models in the absence of a standardized benchmark with consistent experimental settings. In this paper, we introduce a reproducible GNN benchmarking framework, with the facility for researchers to add new models conveniently for arbitrary datasets. We demonstrate the usefulness of our framework by presenting a principled investigation into the recent Weisfeiler-Lehman GNNs (WL-GNNs) compared to message passing-based graph convolutional networks (GCNs) for a variety of graph tasks, i.e. graph regression/classification and node/link prediction, with medium-scale datasets.","tags":["Deep Learning","Graph Neural Networks","Benchmark","Spatial Graph ConvNets","Computer Vision","Operations Research","Combinatorial Optimization","Chemistry"],"title":"Benchmarking Graph Neural Networks","type":"publication"},{"authors":["Chaitanya Joshi"],"categories":[],"content":"Engineer friends often ask me: Graph Deep Learning sounds great, but are there any big commercial success stories? Is it being deployed in practical applications?\nBesides the obvious ones\u0026ndash;recommendation systems at Pinterest, Alibaba and Twitter\u0026ndash;a slightly nuanced success story is the Transformer architecture, which has taken the NLP industry by storm.\nThrough this post, I want to establish links between Graph Neural Networks (GNNs) and Transformers. I\u0026rsquo;ll talk about the intuitions behind model architectures in the NLP and GNN communities, make connections using equations and figures, and discuss how we could work together to drive progress.\nLet\u0026rsquo;s start by talking about the purpose of model architectures\u0026ndash;representation learning.\n Representation Learning for NLP At a high level, all neural network architectures build representations of input data as vectors/embeddings, which encode useful statistical and semantic information about the data. These latent or hidden representations can then be used for performing something useful, such as classifying an image or translating a sentence. The neural network learns to build better-and-better representations by receiving feedback, usually via error/loss functions.\nFor Natural Language Processing (NLP), conventionally, Recurrent Neural Networks (RNNs) build representations of each word in a sentence in a sequential manner, i.e., one word at a time. Intuitively, we can imagine an RNN layer as a conveyor belt, with the words being processed on it autoregressively from left to right. At the end, we get a hidden feature for each word in the sentence, which we pass to the next RNN layer or use for our NLP tasks of choice.\n I highly recommend Chris Olah\u0026rsquo;s legendary blog for recaps on RNNs and representation learning for NLP.\n Initially introduced for machine translation, Transformers have gradually replaced RNNs in mainstream NLP. The architecture takes a fresh approach to representation learning: Doing away with recurrence entirely, Transformers build features of each word using an attention mechanism to figure out how important all the other words in the sentence are w.r.t. to the aforementioned word. 
Knowing this, the word\u0026rsquo;s updated features are simply the sum of linear transformations of the features of all the words, weighted by their importance.\n Back in 2017, this idea sounded very radical, because the NLP community was so used to the sequential\u0026ndash;one-word-at-a-time\u0026ndash;style of processing text with RNNs. The title of the paper probably added fuel to the fire!\nFor a recap, Yannic Kilcher made an excellent video overview.\n Breaking down the Transformer Let\u0026rsquo;s develop intuitions about the architecture by translating the previous paragraph into the language of mathematical symbols and vectors. We update the hidden feature $h$ of the $i$'th word in a sentence $\\mathcal{S}$ from layer $\\ell$ to layer $\\ell+1$ as follows:\n$$ h_{i}^{\\ell+1} = \\text{Attention} \\left( Q^{\\ell} h_{i}^{\\ell} \\ , K^{\\ell} h_{j}^{\\ell} \\ , V^{\\ell} h_{j}^{\\ell} \\right), $$\n$$ i.e.,\\ h_{i}^{\\ell+1} = \\sum_{j \\in \\mathcal{S}} w_{ij} \\left( V^{\\ell} h_{j}^{\\ell} \\right), $$\n$$ \\text{where} \\ w_{ij} = \\text{softmax}_j \\left( Q^{\\ell} h_{i}^{\\ell} \\cdot K^{\\ell} h_{j}^{\\ell} \\right), $$\nwhere $j \\in \\mathcal{S}$ denotes the set of words in the sentence and $Q^{\\ell}, K^{\\ell}, V^{\\ell}$ are learnable linear weights (denoting the Query, Key and Value for the attention computation, respectively). The attention mechanism is performed parallelly for each word in the sentence to obtain their updated features in one shot\u0026ndash;another plus point for Transformers over RNNs, which update features word-by-word.\nWe can understand the attention mechanism better through the following pipeline:\n Taking in the features of the word $h_{i}^{\\ell}$ and the set of other words in the sentence ${ h_{j}^{\\ell} ;\\ \\forall j \\in \\mathcal{S} }$, we compute the attention weights $w_{ij}$ for each pair $(i,j)$ through the dot-product, followed by a softmax across all $j$'s. Finally, we produce the updated word feature $h_{i}^{\\ell+1}$ for word $i$ by summing over all ${ h_{j}^{\\ell} }$'s weighted by their corresponding $w_{ij}$. Each word in the sentence parallelly undergoes the same pipeline to update its features.\n Multi-head Attention mechanism Getting this dot-product attention mechanism to work proves to be tricky\u0026ndash;bad random initializations can de-stabilize the learning process. We can overcome this by parallelly performing multiple \u0026lsquo;heads\u0026rsquo; of attention and concatenating the result (with each head now having separate learnable weights):\n$$ h_{i}^{\\ell+1} = \\text{Concat} \\left( \\text{head}_1, \\ldots, \\text{head}_K \\right) O^{\\ell}, $$ $$ \\text{head}_k = \\text{Attention} \\left( Q^{k,\\ell} h_{i}^{\\ell} \\ , K^{k, \\ell} h_{j}^{\\ell} \\ , V^{k, \\ell} h_{j}^{\\ell} \\right), $$\nwhere $Q^{k,\\ell}, K^{k,\\ell}, V^{k,\\ell}$ are the learnable weights of the $k$'th attention head and $O^{\\ell}$ is a down-projection to match the dimensions of $h_i^{\\ell+1}$ and $h_i^{\\ell}$ across layers.\nMultiple heads allow the attention mechanism to essentially \u0026lsquo;hedge its bets\u0026rsquo;, looking at different transformations or aspects of the hidden features from the previous layer. 
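For readers who prefer code to equations, here is a minimal PyTorch-style sketch of the multi-head attention update described above. It is purely illustrative: the function name, tensor shapes and the toy usage at the end are assumptions rather than the authors' implementation, and it already includes the square-root scaling of the dot-product discussed later in the post:

```python
import torch
import torch.nn.functional as F

def multi_head_attention(h, Q, K, V, O):
    """Multi-head attention update for one layer, as sketched above.

    h: (n, d) hidden features of the n words in the sentence.
    Q, K, V: (H, d, d_head) learnable projections, one per head.
    O: (H * d_head, d) output projection that merges the heads.
    """
    heads = []
    for Qk, Kk, Vk in zip(Q, K, V):
        q, k, v = h @ Qk, h @ Kk, h @ Vk                      # (n, d_head) queries, keys, values
        w = F.softmax(q @ k.T / q.shape[-1] ** 0.5, dim=-1)   # attention weights w_ij
        heads.append(w @ v)                                   # weighted sum of values for each word i
    return torch.cat(heads, dim=-1) @ O                       # concatenate heads, project back to d

# Toy usage with hypothetical sizes: 5 words, d = 16, 4 heads of size 4.
n, d, H, d_head = 5, 16, 4, 4
h = torch.randn(n, d)
Q, K, V = (torch.randn(H, d, d_head) for _ in range(3))
O = torch.randn(H * d_head, d)
h_next = multi_head_attention(h, Q, K, V, O)                  # (5, 16), same shape as h
```

In a real Transformer these projections would be learnable modules, and the update would be followed by the residual connections, normalization and feed-forward sub-layer discussed next.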
We\u0026rsquo;ll talk more about this later.\n Scale issues and the Feed-forward sub-layer A key issue motivating the final Transformer architecture is that the features for words after the attention mechanism might be at different scales or magnitudes: (1) This can be due to some words having very sharp or very distributed attention weights $w_{ij}$ when summing over the features of the other words. (2) At the individual feature/vector entries level, concatenating across multiple attention heads\u0026ndash;each of which might output values at different scales\u0026ndash;can lead to the entries of the final vector $h_{i}^{\\ell+1}$ having a wide range of values. Following conventional ML wisdom, it seems reasonable to add a normalization layer into the pipeline.\nTransformers overcome issue (2) with LayerNorm, which normalizes and learns an affine transformation at the feature level. Additionally, scaling the dot-product attention by the square-root of the feature dimension helps counteract issue (1).\nFinally, the authors propose another \u0026lsquo;trick\u0026rsquo; to control the scale issue: a position-wise 2-layer MLP with a special structure. After the multi-head attention, they project $h_i^{\\ell+1}$ to a (absurdly) higher dimension by a learnable weight, where it undergoes the ReLU non-linearity, and is then projected back to its original dimension followed by another normalization:\n$$ h_i^{\\ell+1} = \\text{LN} \\left( \\text{MLP} \\left( \\text{LN} \\left( h_i^{\\ell+1} \\right) \\right) \\right) $$\n To be honest, I\u0026rsquo;m not sure what the exact intuition behind the over-parameterized feed-forward sub-layer was and nobody seems to be asking questions about it, too! I suppose LayerNorm and scaled dot-products didn\u0026rsquo;t completely solve the issues highlighted, so the big MLP is a sort of hack to re-scale the feature vectors independently of each other.\nEmail me if you know more!\n The final picture of a Transformer layer looks like this:\n The Transformer architecture is also extremely amenable to very deep networks, enabling the NLP community to scale up in terms of both model parameters and, by extension, data. Residual connections between the inputs and outputs of each multi-head attention sub-layer and the feed-forward sub-layer are key for stacking Transformer layers (but omitted from the diagram for clarity).\n GNNs build representations of graphs Let\u0026rsquo;s take a step away from NLP for a moment.\nGraph Neural Networks (GNNs) or Graph Convolutional Networks (GCNs) build representations of nodes and edges in graph data. They do so through neighbourhood aggregation (or message passing), where each node gathers features from its neighbours to update its representation of the local graph structure around it. 
Stacking several GNN layers enables the model to propagate each node\u0026rsquo;s features over the entire graph\u0026ndash;from its neighbours to the neighbours\u0026rsquo; neighbours, and so on.\n Take the example of this emoji social network: The node features produced by the GNN can be used for predictive tasks such as identifying the most influential members or proposing potential connections.\n In their most basic form, GNNs update the hidden features $h$ of node $i$ (for example, 😆) at layer $\\ell$ via a non-linear transformation of the node\u0026rsquo;s own features $h_i^{\\ell}$ added to the aggregation of features $h_j^{\\ell}$ from each neighbouring node $j \\in \\mathcal{N}(i)$:\n$$ h_{i}^{\\ell+1} = \\sigma \\Big( U^{\\ell} h_{i}^{\\ell} + \\sum_{j \\in \\mathcal{N}(i)} \\left( V^{\\ell} h_{j}^{\\ell} \\right) \\Big), $$\nwhere $U^{\\ell}, V^{\\ell}$ are learnable weight matrices of the GNN layer and $\\sigma$ is a non-linearity such as ReLU. In the example, $\\mathcal{N}$(😆) $=$ { 😘, 😎, 😜, 🤩 }.\nThe summation over the neighbourhood nodes $j \\in \\mathcal{N}(i)$ can be replaced by other input size-invariant aggregation functions such as simple mean/max or something more powerful, such as a weighted sum via an attention mechanism.\nDoes that sound familiar?\nMaybe a pipeline will help make the connection:\n If we were to do multiple parallel heads of neighbourhood aggregation and replace summation over the neighbours $j$ with the attention mechanism, i.e., a weighted sum, we\u0026rsquo;d get the Graph Attention Network (GAT). Add normalization and the feed-forward MLP, and voila, we have a Graph Transformer! Sentences are fully-connected word graphs To make the connection more explicit, consider a sentence as a fully-connected graph, where each word is connected to every other word. Now, we can use a GNN to build features for each node (word) in the graph (sentence), which we can then perform NLP tasks with.\n Broadly, this is what Transformers are doing: they are GNNs with multi-head attention as the neighbourhood aggregation function. Whereas standard GNNs aggregate features from their local neighbourhood nodes $j \\in \\mathcal{N}(i)$, Transformers for NLP treat the entire sentence $\\mathcal{S}$ as the local neighbourhood, aggregating features from each word $j \\in \\mathcal{S}$ at each layer.\nImportantly, various problem-specific tricks\u0026ndash;such as position encodings, causal/masked aggregation, learning rate schedules and extensive pre-training\u0026ndash;are essential for the success of Transformers but seldom seen in the GNN community. At the same time, looking at Transformers from a GNN perspective could inspire us to get rid of a lot of the bells and whistles in the architecture.\n What can we learn from each other? Now that we\u0026rsquo;ve established a connection between Transformers and GNNs, let me throw some ideas around\u0026hellip;\nAre fully-connected graphs the best input format for NLP? Before statistical NLP and ML, linguists like Noam Chomsky focused on developing formal theories of linguistic structure, such as syntax trees/graphs. Tree LSTMs already tried this, but maybe Transformers/GNNs are better architectures for bringing the world of linguistic theory and statistical NLP closer?\n How to learn long-term dependencies? Another issue with fully-connected graphs is that they make learning very long-term dependencies between words difficult. 
This is simply due to how the number of edges in the graph scales quadratically with the number of nodes, i.e., in an $n$ word sentence, a Transformer/GNN would be doing computations over $n^2$ pairs of words. Things get out of hand for very large $n$.\nThe NLP community\u0026rsquo;s perspective on the long sequences and dependencies problem is interesting: Making the attention mechanism sparse or adaptive in terms of input size, adding recurrence or compression into each layer, and using Locality Sensitive Hashing for efficient attention are all promising new ideas for better Transformers.\nIt would be interesting to see ideas from the GNN community thrown into the mix, e.g., Binary Partitioning for sentence graph sparsification seems like another exciting approach.\n Are Transformers learning \u0026lsquo;neural syntax\u0026rsquo;? There have been several interesting papers from the NLP community on what Transformers might be learning. The basic premise is that performing attention on all word pairs in a sentence\u0026ndash;with the purpose of identifying which pairs are the most interesting\u0026ndash;enables Transformers to learn something like a task-specific syntax. Different heads in the multi-head attention might also be \u0026lsquo;looking\u0026rsquo; at different syntactic properties.\nIn graph terms, by using GNNs on full graphs, can we recover the most important edges\u0026ndash;and what they might entail\u0026ndash;from how the GNN performs neighbourhood aggregation at each layer? I\u0026rsquo;m not so convinced by this view yet.\n Why multiple heads of attention? Why attention? I\u0026rsquo;m more sympathetic to the optimization view of the multi-head mechanism\u0026ndash;having multiple attention heads improves learning and overcomes bad random initializations. For instance, these papers showed that Transformer heads can be \u0026lsquo;pruned\u0026rsquo; or removed after training without significant performance impact.\nMulti-head neighbourhood aggregation mechanisms have also proven effective in GNNs, e.g., GAT uses the same multi-head attention and MoNet uses multiple Gaussian kernels for aggregating features. Although invented to stabilize attention mechanisms, could the multi-head trick become standard for squeezing out extra model performance?\nConversely, GNNs with simpler aggregation functions such as sum or max do not require multiple aggregation heads for stable training. Wouldn\u0026rsquo;t it be nice for Transformers if we didn\u0026rsquo;t have to compute pair-wise compatibilities between each word pair in the sentence?\nCould Transformers benefit from ditching attention, altogether? Yann Dauphin and collaborators\u0026rsquo; recent work suggests an alternative ConvNet architecture. Transformers, too, might ultimately be doing something similar to ConvNets!\n Why is training Transformers so hard? Reading new Transformer papers makes me feel that training these models requires something akin to black magic when determining the best learning rate schedule, warmup strategy and decay settings. This could simply be because the models are so huge and the NLP tasks studied are so challenging.\nBut recent results suggest that it could also be due to the specific permutation of normalization and residual connections within the architecture.\nI enjoyed reading the new @DeepMind Transformer paper, but why is training these models such dark magic? 
\u0026quot;For word-based LM we used 16, 000 warmup steps with 500, 000 decay steps and sacrifice 9,000 goats.\u0026quot;https://t.co/dP49GTa4ze pic.twitter.com/1K3Fx4s3M8\n\u0026mdash; Chaitanya Joshi (@chaitjo) February 17, 2020 \tAt this point I\u0026rsquo;m ranting, but this makes me sceptical: Do we really need multiple heads of expensive pair-wise attention, overparameterized MLP sub-layers, and complicated learning schedules?\nDo we really need massive models with massive carbon footprints?\nShouldn\u0026rsquo;t architectures with good inductive biases for the task at hand be easier to train?\n Further Reading To dive deep into the Transformer architecture from an NLP perspective, check out these amazing blog posts: The Illustrated Transformer and The Annotated Transformer.\nAlso, this blog isn\u0026rsquo;t the first to link GNNs and Transformers: Here\u0026rsquo;s an excellent talk by Arthur Szlam on the history and connection between Attention/Memory Networks, GNNs and Transformers. Similarly, DeepMind\u0026rsquo;s star-studded position paper introduces the Graph Networks framework, unifying all these ideas. For a code walkthrough, the DGL team has a nice tutorial on seq2seq as a graph problem and building Transformers as GNNs.\nIn our next post, we\u0026rsquo;ll be doing the reverse: using GNN architectures as Transformers for NLP (based on the Transformers library by 🤗 HuggingFace).\nFinally, we wrote a recent paper applying Transformers to sketch graphs. Do check it out!\n Updates The post is also available on Medium, and has been translated to Chinese and Russian. Do join the discussion on Twitter, Reddit or HackerNews!\nTransformers are a special case of Graph Neural Networks. This may be obvious to some, but the following blog post does a good job at explaining these important concepts. https://t.co/H8LT2F7LqC\n\u0026mdash; Oriol Vinyals (@OriolVinyalsML) February 29, 2020 ","date":1581494919,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1581494919,"objectID":"94179cdab29f1ed0c593f37fbdead0da","permalink":"https://graphdeeplearning.github.io/post/transformers-are-gnns/","publishdate":"2020-02-12T16:08:39+08:00","relpermalink":"/post/transformers-are-gnns/","section":"post","summary":"Engineer friends often ask me: Graph Deep Learning sounds great, but are there any big commercial success stories? Is it being deployed in practical applications?\nBesides the obvious ones\u0026ndash;recommendation systems at Pinterest, Alibaba and Twitter\u0026ndash;a slightly nuanced success story is the Transformer architecture, which has taken the NLP industry by storm.\nThrough this post, I want to establish links between Graph Neural Networks (GNNs) and Transformers. I\u0026rsquo;ll talk about the intuitions behind model architectures in the NLP and GNN communities, make connections using equations and figures, and discuss how we could work together to drive progress.","tags":["Deep Learning","Graph Neural Networks","Transformer","Natural Language Processing"],"title":"Transformers are Graph Neural Networks","type":"post"},{"authors":["Chaitanya Joshi","Peng Xu","Xavier Bresson"],"categories":["Applications"],"content":"Representation Learning for Sketches Human beings have been creating free-hand sketches, i.e., drawings without precise instruments, since time immemorial. 
Due to the popularity of touchscreen interfaces, machine learning using sketches has emerged as an interesting problem with a myriad of applications: If we consider sketches as 2D images, we can throw them into off-the-shelf Convolutional Neural Networks (CNNs). While CNNs are designed for static collections of pixels with dense colors and textures, sketches are usually an extremely sparse sequences of strokes which capture high-level abstractions and ideas. Recurrent Neural Networks (RNNs) stick out as a natural architecture for capturing this temporal nature of sketches.\n Structure vs. temporal order: can we have the best of both worlds?\n Sketches as Graphs We are working on a novel representation of free-hand sketches as sparsely-connected graphs. We assume that sketches are sets of curves and strokes, which are discretized by a set of points representing the graph nodes. Each node encodes spatial, temporal and semantic information. Thus, representing sketches with graphs offers a universal representation that can make use of both the sketch structure (like images) as well as temporal information (like stroke sequences). To exploit these graph structures, we are developing Graph Neural Networks (GNNs) based on the Transformer model [Vaswani et al., 2017].\n","date":1578989967,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1578989967,"objectID":"7d87904d26648130a430d73db48d16e4","permalink":"https://graphdeeplearning.github.io/project/sketches/","publishdate":"2020-01-14T16:19:27+08:00","relpermalink":"/project/sketches/","section":"project","summary":"Representation learning for drawings via graphs with geometric and temporal information.","tags":["Deep Learning","Computer Vision","Graph Neural Networks","Applications","Sketches"],"title":"Free-hand Sketches","type":"project"},{"authors":["Peng Xu","Chaitanya Joshi","Xavier Bresson"],"categories":null,"content":"","date":1577145600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1577145600,"objectID":"df5c75cf16edaf9f688f7becec58bf91","permalink":"https://graphdeeplearning.github.io/publication/xu-2019-multi/","publishdate":"2020-01-14T16:08:24+08:00","relpermalink":"/publication/xu-2019-multi/","section":"publication","summary":"Learning meaningful representations of free-hand sketches remains a challenging task given the signal sparsity and the high-level abstraction of sketches. Existing techniques have focused on exploiting either the static nature of sketches with Convolutional Neural Networks (CNNs) or the temporal sequential property with Recurrent Neural Networks (RNNs). In this work, we propose a new representation of sketches as multiple sparsely connected graphs. We design a novel Graph Neural Network (GNN), the Multi-Graph Transformer (MGT), for learning representations of sketches from multiple graphs which simultaneously capture global and local geometric stroke structures, as well as temporal information. We report extensive numerical experiments on a sketch recognition task to demonstrate the performance of the proposed approach. Particularly, MGT applied on 414k sketches from Google QuickDraw: (i) achieves small recognition gap to the CNN-based performance upper bound (72.80% vs. 74.22%), and (ii) outperforms all RNN-based models by a significant margin. 
To the best of our knowledge, this is the first work proposing to represent sketches as graphs and apply GNNs for sketch recognition.","tags":["Deep Learning","Computer Vision","Graph Neural Networks","Sketches","Transformer"],"title":"Multi-Graph Transformer for Free-Hand Sketch Recognition","type":"publication"},{"authors":["Xavier Bresson","Thomas Laurent"],"categories":null,"content":"","date":1575158400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1575158400,"objectID":"683f5c06eabce515341c98c57c59c2fc","permalink":"https://graphdeeplearning.github.io/publication/bresson-2019-two/","publishdate":"2019-09-17T00:46:25.621256Z","relpermalink":"/publication/bresson-2019-two/","section":"publication","summary":"We propose a simple auto-encoder framework for molecule generation. The molecular graph is first encoded into a continuous latent representation, which is then decoded back to a molecule. The encoding process is easy, but the decoding process remains challenging. In this work, we introduce a simple two-step decoding process. In a first step, a fully connected neural network uses the latent vector to produce a molecular formula, for example CO2 (one carbon and two oxygen atoms). In a second step, a graph convolutional neural network uses the same latent vector to place bonds between the atoms that were produced in the first step (for example a double bond will be placed between the carbon and each of the oxygens). This two-step process, in which a bag of atoms is first generated, and then assembled, provides a simple framework that allows us to develop an efficient molecule auto-encoder. Numerical experiments on basic tasks such as novelty, uniqueness, validity and optimized chemical property for the 250k ZINC molecules demonstrate the performances of the proposed system. Particularly, we achieve the highest reconstruction rate of 90.5%, improving the previous rate of 76.7%. We also report the best property improvement results when optimization is constrained by the molecular distance between the original and generated molecules.","tags":["Deep Learning","Graph Neural Networks","Chemistry","Molecule Generation"],"title":"A Two-Step Graph Convolutional Decoder for Molecule Generation","type":"publication"},{"authors":["Chaitanya Joshi","Thomas Laurent","Xavier Bresson"],"categories":null,"content":"","date":1575158400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1575158400,"objectID":"03a8196783d781b067826437fd6b3933","permalink":"https://graphdeeplearning.github.io/publication/joshi-2019-learning/","publishdate":"2019-09-17T00:46:25.621256Z","relpermalink":"/publication/joshi-2019-learning/","section":"publication","summary":"We explore the impact of learning paradigms on training deep neural networks for the Travelling Salesman Problem. We design controlled experiments to train supervised learning (SL) and reinforcement learning (RL) models on fixed graph sizes up to 100 nodes, and evaluate them on variable sized graphs up to 500 nodes. 
Beyond not needing labelled data, our results reveal favorable properties of RL over SL: RL training leads to better emergent generalization to variable graph sizes and is a key component for learning scale-invariant solvers for novel combinatorial problems.","tags":["Deep Learning","Graph Neural Networks","Operations Research","Combinatorial Optimization","Travelling Salesman Problem"],"title":"On Learning Paradigms for the Travelling Salesman Problem","type":"publication"},{"authors":["Chaitanya Joshi"],"categories":null,"content":"","date":1571702400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1571702400,"objectID":"b55be0b37ff4d3d6c1af1e03fc805809","permalink":"https://graphdeeplearning.github.io/talk/informs-oct2019/","publishdate":"2019-09-18T15:41:13+08:00","relpermalink":"/talk/informs-oct2019/","section":"talk","summary":"The most famous NP-hard combinatorial problem today, the Travelling Salesman Problem, is intractable to solve optimally at large scale. In practice, existing techniques such as Concorde can efficiently solve TSP up to thousands of nodes. This talk introduces a recent line of work from the deep learning community to directly ‘learn’ good heuristics for TSP using neural networks. Our approach uses Graph ConvNets to operate on the graph structure of problem instances and is highly parallelizable, making it a promising direction for learning combinatorial optimization at large scale.","tags":["Deep Learning","Graph Neural Networks","Talks","Operations Research","Combinatorial Optimization"],"title":"Graph Neural Networks for the Travelling Salesman Problem","type":"talk"},{"authors":["Xavier Bresson"],"categories":null,"content":"","date":1569196800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1569196800,"objectID":"c1ea4c6aeb1e382b0b7d95f63756c2ae","permalink":"https://graphdeeplearning.github.io/talk/ipam-sept2019/","publishdate":"2019-09-18T15:41:13+08:00","relpermalink":"/talk/ipam-sept2019/","section":"talk","summary":"In this talk, I will discuss a graph convolutional neural network architecture for the molecule generation task. The proposed approach consists of two steps. First, a graph ConvNet is used to auto-encode molecules in one-shot. Second, beam search is applied to the output of neural networks to produce a valid chemical solution. Numerical experiments demonstrate the performances of this learning system.","tags":["Deep Learning","Graph Neural Networks","Talks","Chemistry"],"title":"Graph Convolutional Neural Networks for Molecule Generation","type":"talk"},{"authors":["Chaitanya Joshi","Xavier Bresson"],"categories":["Applications"],"content":"Operations Research and Combinatorial Problems\nOperations Research (OR) started in the First World War as an initiative to use mathematics and computer science to assist military planners in their decisions. Today, combinatorial optimization algorithms developed in the OR community form the backbone of the most important modern industries including transportation, logistics, scheduling, finance and supply chains.\nOR problems are formulated as integer constrained optimization, i.e., with integral or binary variables (called decision variables). While not all such problems are hard to solve (e.g., finding the shortest path between two locations), we concentrate on Combinatorial (NP-Hard) problems. NP-Hard problems are impossible to solve optimally at large scales as exhaustively searching for their solutions is beyond the limits of modern computers. 
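To make this blow-up concrete, here is a toy Python sketch (a hypothetical illustration, not code from our lab) that brute-forces the shortest closed tour over a handful of random cities; this is the Travelling Salesman Problem introduced formally just below, and the factorial number of candidate tours is exactly what puts exhaustive search out of reach at scale.

```python
import itertools
import math
import random

random.seed(0)
n = 8  # already 2520 distinct tours; n = 20 would give roughly 6 * 10^16
cities = [(random.random(), random.random()) for _ in range(n)]

def tour_length(tour):
    """Total Euclidean length of a closed tour (returning to the start city)."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % n]]) for i in range(n))

# Fix city 0 as the start so that rotations of the same tour are not re-counted.
best = min(((0,) + p for p in itertools.permutations(range(1, n))), key=tour_length)
print("best tour:", best, "length:", round(tour_length(best), 3))
print("tours examined:", math.factorial(n - 1))  # 5040 for n = 8
```

Even at 20 cities the same loop would have to enumerate 19! (roughly 1.2 x 10^17) permutations, which is why practical solvers rely on the techniques discussed next.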
The Travelling Salesman Problem (TSP) and the Minimum Spanning Tree Problem (MST) are two of the most popular examples of such problems defined on graphs.\nTSP asks the following question: Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city and returns to the origin city? Formally, given a graph, one needs to search the space of permutations to find an optimal sequence of nodes, called a tour, with minimal total edge weights (tour length).\n Neural Combinatorial Optimization\nSolvers and heuristic algorithms developed in the OR community are able to solve classical problems such as TSP with up to millions of variables. However, designing powerful and robust optimization algorithms requires significant specialized knowledge and years of trial-and-error, especially for understudied but high-impact problems arising in scientific discovery or computer architecture. The state-of-the-art TSP solver, Concorde, leverages over 50 years of research on linear programming, cutting plane algorithms and branch-and-bound.\n At our lab, we\u0026rsquo;re working on automating and augmenting such expert intuition through Machine Learning [Bengio et al., 2018]. Since most problems are highly structured, heuristics take the form of rules or policies to make sequential decisions, e.g., determine the TSP tour one city at a time. Our research uses deep neural networks to parameterize these policies and train them directly from problem instances. In particular, Graph Neural Networks are the perfect fit for the task because they naturally operate on the graph structure of these problems.\nA generic five-stage pipeline for end-to-end learning of combinatorial problems on graphs\n Why study TSP in particular? (1) The problem has an amazing history of serving as an engine of discovery for applied mathematics, with several legendary computer scientists and mathematicians having a crack at it. Here\u0026rsquo;s an excellent talk by William Cook, the co-inventor of the current state-of-the-art Concorde TSP solver.\n(2) TSP has been the focus of intense research in the combinatorial optimization community. If you come up with a new solver, e.g., a learning-driven solver, you need to benchmark it on TSP. TSP’s multi-scale nature makes it a challenging graph task which requires reasoning about both local node neighborhoods as well as global graph structure.\n(3) Learning-based approaches for heuristic algorithms have the potential to be a breakthrough for OR if they are able to learn efficiently on small-scale problems and then generalize robustly to larger instances. However, such scale-invariant generalization is an exciting and unsolved challenge, not just for TSP, but for machine learning as a whole. Update: We explore this in our latest paper!\n At the same time, the more profound motivation for using deep learning for combinatorial optimization is not to outperform classical approaches on well-studied problems. Neural networks can be used as a general tool for tackling previously un-encountered NP-hard problems, especially those that are non-trivial to design heuristics for [Bello et al., 2016]. We are excited about recent applications of neural combinatorial optimization for accelerating drug discovery, optimizing operating systems and designing computer chips.\n P.S. 
XB is organizing an exciting workshop at IPAM titled \u0026ldquo;Deep Learning and Combinatorial Optimization\u0026rdquo;.\n","date":1568730035,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1568730035,"objectID":"24dbe575959522cae5b8371ef64eccb7","permalink":"https://graphdeeplearning.github.io/project/combinatorial-optimization/","publishdate":"2019-09-17T22:20:35+08:00","relpermalink":"/project/combinatorial-optimization/","section":"project","summary":"Scalable deep learning systems for practical NP-Hard combinatorial problems such as the TSP.","tags":["Deep Learning","Graph Neural Networks","Applications","Operations Research","Combinatorial Optimization"],"title":"Combinatorial Optimization","type":"project"},{"authors":["Victor Getty","Xavier Bresson"],"categories":["Applications"],"content":"","date":1568730035,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1568730035,"objectID":"a96c5f414cb8f29ed26411f228728935","permalink":"https://graphdeeplearning.github.io/project/chemistry/","publishdate":"2019-09-17T22:20:35+08:00","relpermalink":"/project/chemistry/","section":"project","summary":"Chemical synthesis, structure and property prediction using deep neural networks.","tags":["Deep Learning","Graph Neural Networks","Applications","Chemistry"],"title":"Quantum Chemistry","type":"project"},{"authors":["Chaitanya Joshi","Vijay Prakash Dwivedi","Xavier Bresson"],"categories":["Models"],"content":"Non-Euclidean and Graph-structured Data\nClassic deep learning architectures such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) require the input data domain to be regular, such as 2D or 3D Euclidean grids for Computer Vision and 1D lines for Natural Language Processing.\nHowever, real-world data beyond images and language tends to have an underlying structure that is non-Euclidean. Such complex data commonly occurs in science and engineering, and can be modelled intuitively by heterogeneous graphs. Prominent examples include graphs of molecules, 3D meshes in computer graphics, social networks and biological networks.\n Graph Neural Networks\nObtaining insights from large and complex graph-structured datasets leads to an interesting challenge for machine learning architectures: The popular CNN and RNN models need to be redesigned for handling non-Euclidean data, as they cannot leverage familiar regularities such as coordinate systems, vector space structure, or shift invariance.\nGraph/Geometric Deep Learning is an umbrella term for emerging techniques attempting to generalize deep neural networks to non-Euclidean domains such as graphs and manifolds [Bronstein et al., 2017].\n We are interested in designing neural networks for arbitrary graphs in order to solve generic graph problems, such as vertex classification, graph classification and graph generation. These Graph Neural Network (GNN) architectures are used as backbones for challenging domain-specific applications in a myriad of domains, including chemistry, social networks, recommendations and computer graphics.\n Basic Formalism\nEach GNN layer computes $d$-dimensional representations for the nodes/edges of the graph through recursive neighborhood diffusion (a.k.a. message passing), where each graph node gathers features from its neighbors to represent local graph structure. Stacking $L$ GNN layers allows the network to build node representations from the $L$-hop neighborhood of each node.\nLet $h_i^{\\ell}$ denote the feature vector at layer $\\ell$ associated with node $i$. 
The updated features $h_i^{\\ell+1}$ at the next layer $\\ell+1$ are obtained by applying non-linear transformations to the central feature vector $h_i^{\\ell}$ and the feature vectors $h_{j}^{\\ell}$ for all nodes $j$ in the neighborhood of node $i$ (defined by the graph structure). This ensures that the transformation builds local receptive fields, as in standard ConvNets for computer vision, and is invariant to both graph size and vertex re-indexing.\nThus, the most generic version of a feature vector $h_i^{\\ell+1}$ at vertex $i$ at the next layer in the GNN is: \\begin{equation} h_{i}^{\\ell+1} = f \\left( \\ h_i^{\\ell} \\ , \\ \\{ h_{j}^{\\ell}: j \\rightarrow i \\} \\ \\right) , \\end{equation} where $\\{ j \\rightarrow i \\}$ denotes the set of neighboring nodes $j$ pointing to node $i$, which can be replaced by $\\{ j \\in \\mathcal{N}_i \\}$, the set of neighbors of node $i$, if the graph is undirected.\n Classes of GNN Architectures\nIn other words, a GNN is defined by a mapping $f$ taking as input a vector $h_i^{\\ell}$\u0026ndash;the feature vector of the center vertex\u0026ndash;as well as an unordered set of vectors $\\{ h_{j}^{\\ell} \\}$\u0026ndash;the feature vectors of all neighboring vertices. The arbitrary choice of the mapping $f$ defines an instantiation of a class of GNNs, e.g., GCN, GraphSAGE, GIN.\nAs an illustration, here\u0026rsquo;s a simple-yet-effective Graph ConvNet from Sukhbaatar et al., 2016: \\begin{equation} h_{i}^{\\ell+1} = \\text{ReLU} \\Big( U^{\\ell} h_{i}^{\\ell} + \\sum_{j \\in \\mathcal{N}_i} V^{\\ell} h_{j}^{\\ell} \\Big), \\end{equation} where $U^{\\ell}, V^{\\ell} \\in \\mathbb{R}^{d \\times d}$ are the learnable parameters.\nIn a recent paper on benchmarking GNN architectures, we introduced block diagrams to intuitively describe feature update equations such as the one above.\n Anisotropic GNNs\nAs graphs have no specific orientations (like up, down, left, right directions in images), message-passing layers such as Sukhbaatar\u0026rsquo;s Graph ConvNet are isotropic, treating all neighbors as equally important. However, this may not be true in general, e.g., in social network graphs, neighbors in the same community share different relationships and information compared to neighbors from separate communities.\nIsotropic GNNs can be upgraded to make the diffusion process anisotropic through mechanisms which learn to weigh neighbors based on their relative importance. For example, Marcheggiani and Titov, 2017 upgrade Graph ConvNets by introducing edge gating for learning information flow on the graph structure for the task at hand: \\begin{equation} h_{i}^{\\ell+1} = \\text{ReLU} \\Big( U^{\\ell} h_{i}^{\\ell} + \\sum_{j \\in \\mathcal{N}_i} \\eta_{ij} \\odot V^{\\ell} h_{j}^{\\ell} \\Big), \\quad \\text{where } \\eta_{ij} = \\sigma \\big( A^{\\ell} h_i^{\\ell} + B^{\\ell} h_j^{\\ell} \\big), \\end{equation} Here, $U^{\\ell}, V^{\\ell}, A^{\\ell}, B^{\\ell} \\in \\mathbb{R}^{d \\times d}$ are the learnable parameters, $\\sigma$ is the sigmoid function, $\\odot$ is the element-wise product, and $\\eta_{ij}$ act as edge gates.\n Other prominent approaches to introduce anisotropy into GNNs include GAT, which uses the attention mechanism from NLP, as well as MoNet, which relies on Gaussian mixture models of graph connectivity. 
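For readers who prefer code, here is a minimal PyTorch sketch of the two updates written above: the isotropic Sukhbaatar-style layer and its edge-gated (anisotropic) variant. It is a simplified illustration that assumes a dense adjacency matrix, no biases and no batching; the class name and interface are hypothetical, not our benchmark implementation.

```python
import torch
import torch.nn as nn

class GatedGraphConvLayer(nn.Module):
    """Sketch of the isotropic Graph ConvNet update and its edge-gated variant.

    Assumes a dense adjacency matrix `adj` of shape (n, n) and node features
    `h` of shape (n, d); real implementations use sparse message passing.
    """

    def __init__(self, d, anisotropic=True):
        super().__init__()
        self.U = nn.Linear(d, d, bias=False)  # transforms the central node h_i
        self.V = nn.Linear(d, d, bias=False)  # transforms the neighbours h_j
        self.A = nn.Linear(d, d, bias=False)  # gate term on h_i
        self.B = nn.Linear(d, d, bias=False)  # gate term on h_j
        self.anisotropic = anisotropic

    def forward(self, h, adj):
        Vh = self.V(h)                                                           # (n, d)
        if self.anisotropic:
            # eta_ij = sigmoid(A h_i + B h_j): one d-dimensional gate per edge (i, j)
            eta = torch.sigmoid(self.A(h)[:, None, :] + self.B(h)[None, :, :])   # (n, n, d)
            msg = (adj[:, :, None] * eta * Vh[None, :, :]).sum(dim=1)            # (n, d)
        else:
            # isotropic update: plain (unweighted) sum over neighbours
            msg = adj @ Vh                                                       # (n, d)
        return torch.relu(self.U(h) + msg)

# Toy usage: 5 nodes, 8-dimensional features, a random undirected graph without self-loops.
h = torch.randn(5, 8)
adj = (torch.rand(5, 5) > 0.5).float()
adj = ((adj + adj.t()) > 0).float().fill_diagonal_(0)
layer = GatedGraphConvLayer(8)
print(layer(h, adj).shape)  # torch.Size([5, 8])
```

In practice one would use sparse message passing (e.g., via DGL or PyTorch Geometric) and add residual connections and normalization, but the gating logic stays the same.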
Through our benchmark, we found anisotropic aggregation to be a key property of powerful GNNs.\n","date":1568730035,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1568730035,"objectID":"60ff349a5ab17be5dbb8131562dd38b3","permalink":"https://graphdeeplearning.github.io/project/spatial-convnets/","publishdate":"2019-09-17T22:20:35+08:00","relpermalink":"/project/spatial-convnets/","section":"project","summary":"Graph Neural Network architectures for inductive representation learning on arbitrary graphs.","tags":["Deep Learning","Graph Neural Networks","Models","Spatial Graph ConvNets"],"title":"Spatial Graph ConvNets","type":"project"},{"authors":["Xavier Bresson"],"categories":null,"content":"","date":1558396800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1558396800,"objectID":"bfaf594a2cddd16a686088705a7d702b","permalink":"https://graphdeeplearning.github.io/talk/ipam-may2019/","publishdate":"2019-09-18T15:41:13+08:00","relpermalink":"/talk/ipam-may2019/","section":"talk","summary":"In this talk, I will discuss how to apply graph convolutional neural networks to quantum chemistry and operational research. The same high-level paradigm can be applied to generate new molecules with optimized chemical properties and to solve the Travelling Salesman Problem. The proposed approach consists of two steps. First, a graph ConvNet is used to auto-encode molecules and estimate TSP solutions in one-shot. Second, beam search is applied to the output of neural networks to produce a valid chemical or combinatorial solution. Numerical experiments demonstrate the performances of this learning system.","tags":["Deep Learning","Graph Neural Networks","Talks","Operations Research","Combinatorial Optimization","Chemistry"],"title":"Graph Convolutional Neural Networks for Molecule Generation and Travelling Salesman Problem","type":"talk"},{"authors":["Chaitanya Joshi","Thomas Laurent","Xavier Bresson"],"categories":null,"content":"","date":1546300800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1546300800,"objectID":"b87bd8cbb6b6d49da1ad5087b786f634","permalink":"https://graphdeeplearning.github.io/publication/joshi-2019-efficient/","publishdate":"2019-09-17T00:46:25.621256Z","relpermalink":"/publication/joshi-2019-efficient/","section":"publication","summary":"This paper introduces a new learning-based approach for approximately solving the Travelling Salesman Problem on 2D Euclidean graphs. We use deep Graph Convolutional Networks to build efficient TSP graph representations and output tours in a non-autoregressive manner via highly parallelized beam search. Our approach outperforms all recently proposed autoregressive deep learning techniques in terms of solution quality, inference speed and sample efficiency for problem instances of fixed graph sizes. In particular, we reduce the average optimality gap from 0.52% to 0.01% for 50 nodes, and from 2.26% to 1.39% for 100 nodes. 
Finally, despite improving upon other learning-based approaches for TSP, our approach falls short of standard Operations Research solvers.","tags":["Deep Learning","Graph Neural Networks","Operations Research","Combinatorial Optimization","Travelling Salesman Problem"],"title":"An Efficient Graph Convolutional Network Technique for the Travelling Salesman Problem","type":"publication"},{"authors":["Yao Yang Leow","Thomas Laurent","Xavier Bresson"],"categories":null,"content":"","date":1546300800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1546300800,"objectID":"7f44a49a4449c77f77725f074b04a600","permalink":"https://graphdeeplearning.github.io/publication/leow-2019-graphtsne/","publishdate":"2019-09-17T00:46:25.621256Z","relpermalink":"/publication/leow-2019-graphtsne/","section":"publication","summary":"We present GraphTSNE, a novel visualization technique for graph-structured data based on t-SNE. The growing interest in graph-structured data increases the importance of gaining human insight into such datasets by means of visualization. However, among the most popular visualization techniques, classical t-SNE is not suitable on such datasets because it has no mechanism to make use of information from graph connectivity. On the other hand, standard graph visualization techniques, such as Laplacian Eigenmaps, have no mechanism to make use of information from node features. Our proposed method GraphTSNE is able to produce visualizations which account for both graph connectivity and node features. It is based on scalable and unsupervised training of a graph convolutional network on a modified t-SNE loss. By assembling a suite of evaluation metrics, we demonstrate that our method produces desirable visualizations on three benchmark datasets.","tags":["Deep Learning","Graph Neural Networks","Graph Visualization"],"title":"GraphTSNE: A Visualization Technique for Graph-Structured Data","type":"publication"},{"authors":null,"categories":null,"content":" \u0026ldquo;Being able to articulate and explain ideas is the true test of having learned something.\u0026rdquo;\n Starting in 2020, the NTU Graph Deep Learning Lab will host a weekly/bi-weekly paper discussion and reading group. We\u0026rsquo;ll cover the latest and greatest papers from the Graph Neural Networks community as well as general machine learning. We plan to keep things simple: talk about the key contributions and results, followed by discussions; no need for beautiful slides or long talks.\nThe primary aim of this reading group is to share ideas, blabber about your favorite papers, and get to know others with common interests. It\u0026rsquo;s also an opportunity for lab members to keep each other up-to-date and get early feedback on their projects.\nWe may have snacks/coffee! 
;)\n Details When: Weekly/bi-weekly, join the mailing list\nWhere: MICL Lab Meeting Room, Block N4, N4-B1C-17, SCSE, NTU\nDuration: Usually 30-45 minutes\nContact: Chaitanya Joshi (This page will be regularly updated with content from past sessions.)\n","date":1530144000,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1530144000,"objectID":"1a82ea200d2448141f6137ff1b9ca004","permalink":"https://graphdeeplearning.github.io/reading-group/","publishdate":"2018-06-28T00:00:00Z","relpermalink":"/reading-group/","section":"","summary":"Reading Group for the latest papers in (Graph) Deep Learning.","tags":null,"title":"Reading Group","type":"page"},{"authors":["Xavier Bresson"],"categories":null,"content":"","date":1517961600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1517961600,"objectID":"4af49a26a13058d786bcf0d7f41536d1","permalink":"https://graphdeeplearning.github.io/talk/ipam-feb2018/","publishdate":"2019-09-18T15:41:13+08:00","relpermalink":"/talk/ipam-feb2018/","section":"talk","summary":"Convolutional neural networks have greatly improved state-of-the-art performances in computer vision and speech analysis tasks, due to its high ability to extract multiple levels of representations of data. In this talk, we are interested in generalizing convolutional neural networks from low-dimensional regular grids, where image, video and speech are represented, to high-dimensional irregular domains, such as social networks, telecommunication networks, or words' embedding. We present a formulation of convolutional neural networks on graphs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs. Numerical experiments demonstrate the ability of the system to learn local stationary features on graphs.","tags":["Deep Learning","Graph Neural Networks","Talks","Spatial Graph ConvNets","Spectral Graph ConvNets"],"title":"Convolutional Neural Networks on Graphs","type":"talk"},{"authors":["Suyash Lakhotia","Xavier Bresson"],"categories":null,"content":"","date":1514764800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1514764800,"objectID":"d2066092208d9e8d9f6b1dd67e569dba","permalink":"https://graphdeeplearning.github.io/publication/lakhotia-2018-experimental/","publishdate":"2019-09-17T00:46:25.621256Z","relpermalink":"/publication/lakhotia-2018-experimental/","section":"publication","summary":"Text classification is the task of labeling text data from a predetermined set of thematic labels. It has become of increasing importance in recent years as we generate large volumes of data and require the ability to search through these vast datasets with flexible queries. However, manually labeling text data is an extremely tedious task that is prone to human error. Thus, text classification has become a key focus of machine learning research, with the goal of producing models that are more efficient and accurate than traditional methods. The objective of this work is to rigorously compare the performance of current text classification techniques, from standard SVM-based, statistical and multilayer perceptron (MLP) models to recently enhanced deep learning models such as convolutional neural networks and their fusion with graph theory. Extensive numerical experiments on three major text classification datasets (Rotten Tomatoes Sentence Polarity, 20 Newsgroups and Reuters Corpus Volume 1) revealed two results. 
First, graph convolutional neural networks perform with greater or similar test accuracy when compared to standard convolutional neural networks, SVM-based models and statistical baseline models. Second, and more surprisingly, simpler MLP models still outperform recent deep learning techniques despite having fewer parameters. This implies that either benchmark datasets like RCV1 containing more than 420,000 documents from 52 classes are not large enough or the representation of text data as tf-idf document vectors is not expressive enough.","tags":["Deep Learning","Graph Neural Networks","Natural Language Processing","Text Classification"],"title":"An Experimental Comparison of Text Classification Techniques","type":"publication"},{"authors":["Xavier Bresson","Thomas Laurent"],"categories":null,"content":"","date":1514764800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1514764800,"objectID":"41ebcf5e731d32acdfc83406ac3056f2","permalink":"https://graphdeeplearning.github.io/publication/bresson-2018-experimental/","publishdate":"2019-09-17T00:46:25.619962Z","relpermalink":"/publication/bresson-2018-experimental/","section":"publication","summary":"Graph-structured data such as social networks, functional brain networks and chemical molecules have brought the interest in generalizing deep learning techniques to graph domains. In this work, we propose an empirical study of neural networks for graphs with variable size and connectivity. We rigorously compare several graph recurrent neural networks (RNNs) and graph convolutional neural networks (ConvNets) to solve two fundamental and representative graph problems, subgraph matching and graph clustering. Numerical results show that graph ConvNets are 3-17% more accurate and 1.5-4x faster than graph RNNs. Interestingly, graph ConvNets are also 36% more accurate than non-learning (variational) techniques. The benefit of such a study is to show that complex architectures like LSTMs are not useful in the context of graph neural networks, but one should favour architectures with minimal inner structures, such as locality, weight sharing, index invariance, multi-scale, gates and residuality, to design efficient novel neural network models for applications like drug design, gene analysis and particle physics.","tags":["Deep Learning","Graph Neural Networks","Spatial Graph ConvNets"],"title":"An Experimental Study of Neural Networks for Variable Graphs","type":"publication"},{"authors":["Xavier Bresson","Thomas Laurent"],"categories":null,"content":"","date":1483228800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1483228800,"objectID":"49bbf3fe527899f70dfc3309fc0edbb0","permalink":"https://graphdeeplearning.github.io/publication/bresson-2017-residual/","publishdate":"2019-09-17T00:46:25.620574Z","relpermalink":"/publication/bresson-2017-residual/","section":"publication","summary":"Graph-structured data such as social networks, functional brain networks, gene regulatory networks and communications networks have brought the interest in generalizing deep learning techniques to graph domains. In this paper, we are interested in designing neural networks for graphs with variable length in order to solve learning problems such as vertex classification, graph classification, graph regression, and graph generative tasks. Most existing works have focused on recurrent neural networks (RNNs) to learn meaningful representations of graphs, and more recently new convolutional neural networks (ConvNets) have been introduced. 
In this work, we want to rigorously compare these two fundamental families of architectures to solve graph learning tasks. We review existing graph RNN and ConvNet architectures, and propose natural extensions of LSTM and ConvNet to graphs of arbitrary size. Then, we design a set of analytically controlled experiments on two basic graph problems, i.e., subgraph matching and graph clustering, to test the different architectures. Numerical results show that the proposed graph ConvNets are 3-17% more accurate and 1.5-4x faster than graph RNNs. Graph ConvNets are also 36% more accurate than variational (non-learning) techniques. Finally, the most effective graph ConvNet architecture uses gated edges and residuality. Residuality plays an essential role in learning multi-layer architectures, as it provides a 10% gain in performance.","tags":["Deep Learning","Graph Neural Networks","Spatial Graph ConvNets"],"title":"Residual Gated Graph ConvNets","type":"publication"}]