doc/modules/ROOT/pages/machine-learning/node-embeddings/hashgnn.adoc
The neural networks of GNNs are replaced by random hash functions, in the flavor of the min-hash algorithm.
Thus, HashGNN combines ideas of GNNs and fast randomized algorithms.
The GDS implementation of HashGNN is based on the paper "Hashing-Accelerated Graph Neural Networks for Link Prediction", and further introduces a few improvements and generalizations.
The generalizations include support for embedding heterogeneous graphs; relationships of different types are associated with different hash functions, which allows for preserving relationship-typed graph topology.
Moreover, how much embeddings are updated using features from neighboring nodes versus features from the node itself can be configured via `neighborInfluence`.
The runtime of this algorithm is significantly lower than that of GNNs in general, but can still give comparable embedding quality for certain graphs as shown in the original paper.
Moreover, the heterogeneous generalization gives results comparable to those of the paper "Graph Transformer Networks" when benchmarked on the same datasets.
=== The algorithm
To clarify how HashGNN works, we walk through a virtual example <<algorithms-embeddings-hashgnn-virtual-example, below>> of a three-node graph, for readers who are curious about the details of the feature selection and prefer to learn from examples.
The HashGNN algorithm can only run on binary features.
Therefore, there is an optional first step to transform (possibly non-binary) input features into binary features as part of the algorithm.
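The rest of this section does not depend on how that transformation works, but as an illustration, the following Python sketch shows one common way to binarize dense vectors: rounding random projections (hyperplane rounding). The function name, dimensions and the choice of hyperplane rounding are assumptions made for illustration, not necessarily the transformation GDS applies.

[source, python]
----
# Illustrative sketch only: one common way to turn dense feature vectors into
# binary ones is to round random projections (hyperplane rounding). Whether GDS
# uses exactly this transformation is not specified in this section.
import numpy as np

def binarize_features(features: np.ndarray, dimension: int, threshold: float = 0.0, seed: int = 42) -> np.ndarray:
    rng = np.random.default_rng(seed)
    # One random hyperplane per output binary feature.
    hyperplanes = rng.normal(size=(features.shape[1], dimension))
    # A binary feature is "on" when the projection exceeds the threshold.
    return (features @ hyperplanes > threshold).astype(np.uint8)

# Three nodes with 4-dimensional dense features mapped to 8 binary features each.
dense = np.array([[0.1, 2.0, -0.3, 1.2],
                  [1.5, -0.7, 0.9, 0.0],
                  [-1.0, 0.4, 0.8, 2.1]])
print(binarize_features(dense, dimension=8))
----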
For a number of iterations, a new binary embedding is computed for each node using the embeddings of the previous iteration.
In the first iteration, the previous embeddings are the binary feature vectors.
During one iteration, each node embedding vector is constructed by taking `K` random samples.
The random sampling is carried out by successively selecting features with lowest min-hash values.
Features of the node itself and of its neighbors are both considered.
There are three types of hash functions involved: 1) a function applied to a node's own features, 2) a function applied to a selected subset of a neighbor's features, and 3) a function applied to all of a neighbor's features to select the subset for hash function 2).
For each iteration and sampling round `k<K`, new hash functions are used, and the third function additionally varies with the type of the relationship connecting to the neighbor it is applied to.
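One way to picture this family of hash functions is to derive each member deterministically from a key that encodes its role, the iteration, the sampling round, and, for the third type, the relationship type. The Python sketch below does this with a `blake2b`-based construction purely as an illustration; the actual hash family used by GDS is not described in this section, and the `FRIENDS` relationship type is hypothetical.

[source, python]
----
import hashlib

def hash_value(feature, *key) -> int:
    # Deterministic stand-in for one member of the hash-function family;
    # `key` identifies which function is meant. Illustration only.
    data = repr((feature, key)).encode()
    return int.from_bytes(hashlib.blake2b(data, digest_size=8).digest(), "big")

iteration, k = 0, 1
h_one = hash_value("f1", "own", iteration, k)                  # type 1: node's own features
h_two = hash_value("f1", "neighbor", iteration, k)             # type 2: sampled neighbor feature
h_three = hash_value("f1", "select", iteration, k, "FRIENDS")  # type 3: varies by relationship type
----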
The sampling is consistent in the sense that if nodes `a` and `b` have identical or similar local graphs, the samples for `a` and `b` are also identical or similar.
By local graph, we mean the subgraph, including node features and relationship types, that contains all nodes at most `iterations` hops away.
The number `K` is called `embeddingDensity` in the configuration of the algorithm.
The algorithm ends with another optional step that maps the binary embeddings to dense vectors.
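Putting the pieces together, the following Python sketch mirrors the description above for a single node and a single iteration: in each of the `K` sampling rounds it compares the node's own features (hashed with a function of type 1) against one candidate feature per neighbor (selected with a function of type 3, then hashed with a function of type 2), and keeps the feature with the smallest hash value. It is a sketch of the description, not of the GDS implementation; the hash construction and the `REL` relationship type are assumptions.

[source, python]
----
import hashlib

def h(feature, *key) -> int:
    # Deterministic stand-in for one member of the hash-function family (illustration only).
    return int.from_bytes(hashlib.blake2b(repr((feature, key)).encode(), digest_size=8).digest(), "big")

def hashgnn_iteration(node, embeddings, neighbors, iteration, K):
    # Compute one node's new binary embedding (a set of feature ids) for one iteration,
    # from the previous iteration's embeddings of the node and its neighbors.
    new_embedding = set()
    for k in range(K):  # K is `embeddingDensity`
        best, best_value = None, float("inf")
        for f in embeddings[node]:                      # hash function type 1 ("own")
            v = h(f, "own", iteration, k)
            if v < best_value:
                best, best_value = f, v
        for neighbor, rel_type in neighbors[node]:
            # Type 3 selects which of the neighbor's features to consider;
            # it varies with the relationship type.
            candidate = min(embeddings[neighbor], key=lambda f: h(f, "select", iteration, k, rel_type))
            v = h(candidate, "neighbor", iteration, k)  # hash function type 2
            if v < best_value:
                best, best_value = candidate, v
        new_embedding.add(best)
    return new_embedding

# The three-node graph of the virtual example below, with a hypothetical `REL` relationship type.
embeddings = {"a": {"f1"}, "b": {"f2"}, "c": {"f1", "f3"}}
neighbors = {"a": [("b", "REL")], "b": [("a", "REL"), ("c", "REL")], "c": [("b", "REL")]}
print({n: hashgnn_iteration(n, embeddings, neighbors, iteration=0, K=2) for n in embeddings})
----
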
=== Features
The graph 'persons' now has a node property `hashgnn-embedding` which stores the node embedding for each node.
To find out how to inspect the new schema of the in-memory graph, see xref:graph-list.adoc[Listing graphs].
[[algorithms-embeddings-hashgnn-virtual-example]]
=== Virtual example
This example is perhaps best enjoyed with a pen and paper.
Let's say we have a node `a` with feature `f1`, a node `b` with feature `f2`, and a node `c` with features `f1` and `f3`.
The graph structure is `a--b--c`.
We imagine running HashGNN for one iteration with `embeddingDensity=2`.
During the first iteration, for `k=0`, we compute an embedding for `(a)`.
A hash value for `f1` turns out to be `7`.
Since `(b)` is a neighbor of `(a)`, we generate a value for its feature `f2` which turns out to be `11`.
The value `7` is sampled from a hash function which we call "one" and `11` from a hash function "two".
Thus `f1` is added to the new features for `(a)` since it has a smaller hash value.
We repeat for `k=1` and this time the hash values are `4` and `2`, so now `f2` is added as a feature to `(a)`.
We now consider `(b)`.
The feature `f2` gets hash value `8` using hash function "one".
Looking at the neighbor `(a)`, we sample a hash value for `f1` which becomes `5` using hash function "two".
Since `(c)` has more than one feature, we first have to select one of its two features `f1` and `f3`; the "winning" feature is then used, as before, as input to hash function "two".
We use a third hash function "three" for this purpose and `f3` gets the smaller value of `1`.
We now compute a hash of `f3` using "two" and it becomes `6`.
Since `5` is smaller than `6`, `f1` is the "winning" neighbor feature for `(b)`, and since `5` is also smaller than `8`, it is the overall "winning" feature.
Therefore, we add `f1` to the embedding of `(b)`.
We proceed similarly for `k=1`, and `f1` is selected again.
Since the embeddings consist of binary features, this second addition has no effect.
We omit the details of computing the embedding of `(c)`.
After the two sampling rounds, the iteration is complete, and since there is only one iteration, we are done.
Each node has a binary embedding that contains some subset of the original binary features.
In particular, `(a)` has features `f1` and `f2`, while `(b)` has only the feature `f1`.
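The selections in this walkthrough can be retraced by hard-coding the hash values given above; the small Python snippet below does exactly that and nothing more, with the hash function names and values taken from the example.

[source, python]
----
# Retrace the walkthrough using the hash values stated above.
def winner(candidates):
    # The feature with the smallest hash value wins.
    return min(candidates, key=candidates.get)

# Node (a): own feature f1 via "one", neighbor (b)'s feature f2 via "two".
assert winner({"f1": 7, "f2": 11}) == "f1"  # k=0: f1 is added to (a)
assert winner({"f1": 4, "f2": 2}) == "f2"   # k=1: f2 is added to (a)

# Node (b), k=0: hash function "three" picks f3 (value 1) as (c)'s candidate,
# which then gets value 6 under "two"; (a)'s feature f1 gets 5 under "two",
# and (b)'s own feature f2 gets 8 under "one".
assert winner({"f2": 8, "f1": 5, "f3": 6}) == "f1"  # k=0: f1 is added to (b)
# k=1 selects f1 again; adding it a second time has no effect on a binary embedding.

print({"a": {"f1", "f2"}, "b": {"f1"}})
----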
0 commit comments